green_amber wrote, 2005-10-24 11:49 am
Entry tags: ai, cats, consciousness, poems
Quote found on toilet floor
.. by bondage_and_tea - anyone recognise it?
"It's amusing that our model of ourselves is that of an impenetrable machine we somehow need to decode and predict - then and only then can we make the right decisions in order to be happy. We set up miniature experiments and carefully monitor our responses and how others react to us to see if we should repeat or continue the experience. Frantically moving from one friend, lover, job, university, project, political cause, to the next, each briefly improving the situation and giving us the status and self-importance we need to get out of bed in the morning. Worrying about the global issues, reading the news religiously every day so we're informed individuals and can ramble on for hours about the pains of people in the world we'll never meet. Ignoring people we could share happiness with or - worse - learning methods of manipulation so we can influence those closest (proximal) to us, the satisfaction of a person molded feeding back into our personal status machine. Eye contact, use first name, soft tone, develop a rapport but not for too long lest honesty and humility creeps in. Helping and diplomacy rather than sharing and empathy."
I've always been intrigued by the fact that we compare how our brains work to whatever the current hip technology is. We think of ourselves as computers - how often have you said "my brain's crashed"? - while the Victorians compared intelligence to the telegraph. What did stone age people think of themselves as? What will we think of intelligence when nanotechnology is really here?
no subject
The same is true of, for example, amoebae. The point of the example was, I think, to push a point rather than to be obvious. In fact, being unobvious was probably its virtue.
His point, I think, is that asking whether something is conscious isn't just asking if it has the richness of perception and inner life that we do.
Has anyone written in SF about a supercomputer consciousness that is fundamentally different from ours?
Hmm. Dunno. Wintermute and Neuromancer seem very strange, but they're left very vaguely sketched. Banks's Minds seem fairly straightforward by comparison. I can't offhand think of anyone who's tried to depict the mindset and experience of a radically-different conscious AI, no.
no subject
His point, I think, is that asking whether something is conscious isn't just asking if it has the richness of perception and inner life that we do.
Yes, I get that - it's a good quote and I thank you for it. But what it pushes me into thinking/feeling is that there IS a crucial, patent difference between the capacity to gather inputs and "consciousness", even of a diminished-from-human kind; just as there is between the capacity to perform useful functions on inputs and "intelligence". Andy's example of the fly to me goes into the "thermostat" class, as does your amoeba - feeling/acting but no consciousness, mere instinct - so I'm not just making a plain organic/inorganic distinction. (I recognise this gets us no further.)