SciFi and Fantasy Book Club discussion > TV and Movie Chat > The Machine (movie)

message 1: by Edwin Miller, Apr 23, 2014 10:09AM




What if that same person is copied/transferred several times? Is each one a separate sentient being? What about the element of growth and development? What if the "copied" personality never changes? Is it still sentient then?

Those are some good points. To me this relates back to a point that comes up in the Animatrix: if we create robots with a level of intelligence and complexity of thought similar to ours, is it right to basically use them as slaves?

AI is a different kettle of fish. Everyone tends to assume that if a machine becomes self-aware, it will think like a human. But we are the result of millions of years of evolution, hormones, our environment and our own cultural meddling. Machines will have their own priorities and values, plus the creators (us) will still have a strong say in how their "minds" develop. It is not an all or nothing proposition.
A self-aware car will most likely be intensely focused upon being the best transporter it can be, with an OCD focus on maintenance and safety. It is very unlikely that it will suddenly decide it is a "slave" and strike out for freedom. It would have no reproductive drive or concept of leisure/laziness to encourage such a move.



I'm not trying to equate the two forms of intelligence. However, I am wondering if there comes a point when a form of intelligence (regardless of how similar it is to our own) is sufficiently advanced that ordering it about is no longer considered ethical.

I suspect the way we treat intelligent machines will reflect more on our own nature than on any objective evaluation of sentience. Look at how we treat the great apes (driving them to extinction, eating them, putting them in zoos) and they're just a whisker away from full human intelligence. Then there's our own poor, our own children, and our own prisoners, inarguably fully self-aware and fully human, yet frequently cruelly abused.

I think you've pointed out a very important distinction. In most places, human beings are, at least in theory, guaranteed certain basic rights. Of course, these aren't always upheld, but the point is more that people should have them. In most fictional worlds, AIs don't seem to have these basic rights. In this case, telling them what to do is, I believe, very different from telling a normal person what to do. The AI doesn't really have the option to refuse.
I also think there are issues with programming sentient AIs to like something. If their programming makes them like it, as opposed to them naturally learning to like it, it could be compared to brainwashing. For instance, I believe there is a difference between socialising a highly intelligent robot to want to look after human beings and inserting code into its programming that makes it want to look after human beings.
With regards to the movie itself (i.e., The Machine), I have to admit it raises some interesting questions, but it was a bit of a mess from a film-making perspective. Even taking into account what looks to be a fairly low budget, the script, acting, and so forth weren't exactly stellar.
Like with many things, if there are problems handling AI, I believe those problems will have their source in the human condition. We can, as you've pointed out, be incredibly cruel to our own kind. How much more cruel could we be to beings as different to us as AIs?

Or to instinct. We've been "programmed" over millions of years by evolution to have various in-built likes and dislikes. I've always seen programming such dispositions into AIs as the exact equivalent of this. Of course, evolution's crucible has been a harsh one - get the inbuilt behaviours wrong and the organism fails to reproduce. Only creatures with useful instincts now exist. In taking "short cuts" and programming instincts (like, say, caring about humans) into machines, we risk unforeseen consequences (as when we design incentive schemes at work) and it would probably take lots of trial and error to find the right set of behavioural preferences to make a socially acceptable and useful AI.
I haven't actually seen the film in question. I live in a tiny rural community where cinemas and Netflix - like intelligent machines - are only dreamed of :-)