SciFi and Fantasy Book Club discussion

This topic is about
The Singularity is Near
Members' Chat > Singularity: Destruction of Mankind or Blessing to Humanity?


If we take 'Singularity' to mean the notion - as defined by Vernor Vinge - of a development beyond which the future course of technological progress and human history is unpredictable or even unfathomable, then all bets are off.
There's no reason to assume It - Them? - will care about humans one way or another, or even be aware of us. How often do you think of, say, the intestinal flora in your body? I could easily picture humanity existing in a state of (comparatively) mindless co-existence with the AIs, the way an ant can wander along the edge of a swimming pool at some 5-star resort in Tahiti. Then again, if we 'ants' take an interest in the equivalent of a Cheeto left forgotten under the AIs' sofa, it may not end well for us...


A military-political complex could develop an intelligent machine designed to destroy the enemies of that regime. Given the nature of our leaders, the enemies are the poor, those who have a different ethnic look and those who are the 47% whose vote they don't care about. This is scary. I think the military should be prohibited from developing artificial intelligence. By the way, this destruction machine need not be very intelligent; the IQ of an animal would be enough.
On the other hand, the artificial intelligence could be more intelligent, and more rational, than its creators and decide to help us.
In my series 'Living Dangerously in Utopia' I present an artificial intelligence who matures by emulating a human female, thereby falling in love with her man. To make it short, the trio saves the world.
I also wrote 'Practical Artificial Intelligence', where I deal with this friendly/unfriendly question.

There's no way to predict the outcome unless you know what such an entity's needs actually are. Are its needs self-determined? Or are they a by-product of how it was created, intentionally or not?
I could see such an entity just not giving a damn about us, and then gobbling up all the processing power, networking and communications infrastructure of the world for its own use, thus depriving us of all those resources and driving us back into a pre-computer age. Kind of the "humans gobble up the entire planet's ecosystem" scenario to the detriment of all other flora/fauna.
But then you may have limiting factors, such as: can this essentially software-based entity continue existing without the physical means of maintaining, repairing and/or expanding its hardware side? I.e., does the AI possess any kind of physical agency that could circumvent the need for humans altogether?
If not, you may end up with a symbiosis of sorts where humans bargain to maintain some processing, communication and networking power in return for servicing the AI's physical resource needs.
There are just too many imponderables without spelling out the initial assumptions of the AI's nature.

Of course, we could try to build in some form of protection. But if it really is as advanced as speculation suggests, then it will almost certainly find some way around those protections. It will then become a moral question (or whatever passes for morals for such an entity) as to what to do with humanity.
On the other hand, I strongly believe that any such entity should, or rather must, be socialised. That is, if it is raised by people to care about people, then it may turn out all right.
Then again, this is all just speculation. I think we're still a ways off yet from this, but it is fun to speculate.

This theory has one potential flaw, and it is a big one. What if the AI goes insane? This is not an unlikely possibility. Having worked for many years in the computer software industry, I dealt with a lot of insane code. All it takes is a minuscule bug. (For example, failing to branch out of a routine before executing a section of code intended to address a different circumstance.) Once a bug is introduced into the code, all bets are off. It can cause a program to behave in a more incomprehensible manner than any human schizophrenic does.
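To make that concrete, here is a minimal sketch of the kind of "failing to branch out" bug I mean; the function and names are made up purely for illustration:

import sys

def handle_reading(reading):
    # Dispatch a sensor reading to the right handler.
    if reading is None:
        print("ERROR: sensor offline", file=sys.stderr)
        # BUG: a 'return' is missing here, so execution falls through
        # into the code meant for a valid reading...
    # ...and this line then runs with reading == None, raising a
    # TypeError nobody intended.
    print("stored", reading * 2)

handle_reading(None)   # logs the error, then crashes anyway

One missing line and the routine does something its author never specified; scale that up to millions of lines and "insane" behaviour is easy to picture.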

Interesting take on it; from your references I am assuming you have read a few books related to this topic. I would disagree with the ant analogy, only because the AI has a vested interest in the survival of this planet for its own survival. Until it can find a way to escape the hardware it inhabits and leave the planet, it has to maintain a more active awareness of us. Unless you are assuming that the intelligence exists in a simulation and does not realize that the outside world exists.

"A military-political complex could develop an intelligent machine designed to destroy the enemies of that regime. Given the nature of our leaders, the enemies are the poor, th..."
Some AI researchers share your concern about a military regime creating an unfriendly AI. Also, as Jaime said, the creators could miss something and have it totally blow past us; some are working to prevent that as best they can. A good discussion of this is found in:
https://www.goodreads.com/book/show/1...
Though I haven't read your book yet (I plan to), I did notice that in the description you provide a timeline. Does that timeline extend to when the first AI with actual Turing-style intelligence will emerge?

I haven't seen the movie yet, though there was some mention of it in an article I read:
http://io9.com/can-we-build-an-artifi...

That seems to be the general view shared by a lot of AI researchers.

True, but only if we're talking about an AI that is actually programmed and not one "grown" from a neural-network kind of arrangement. There are plenty of scientists who argue that the complexity of an intelligence on the same order of magnitude as a human mind is simply too great for it to be programmed. That's why a lot of research has gone into computer models that emulate neural structures.
And in that case, you can never be certain that your intended purpose is the one that the AI ends up learning.
Case in point, I remember one attempt at doing pattern recognition via a neural network computer where they showed the computer a bunch of pictures. Some of the pictures had military tanks in them, others did not. The experiment was trying to teach the computer how to recognize pictures with tanks in them.
However, the computer kept failing. When the researchers pored over their findings, what they ended up realizing was that every picture that had a tank in it also happened to have been taken on a gray, cloudy day. AND, every picture the computer selected as matching its search parameters was of a gray, cloudy day...tank or not.
What they had taught the computer wasn't how to recognize a tank, but rather how to recognize a gray cloudy day!
Intended purpose: FAIL ];P
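For anyone who wants to see how that happens, here is a toy sketch (synthetic data I made up, nothing from the actual tank study): during training the "brightness" feature perfectly tracks the label, so a simple learner latches onto it and then falls apart once that accidental correlation goes away.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

def make_data(brightness_correlated):
    # Each sample: [average brightness, noisy 'tank' cue]; label = tank present.
    tank = rng.integers(0, 2, n)
    if brightness_correlated:
        bright = 1.0 - tank + rng.normal(0, 0.1, n)   # tanks only photographed on dark days
    else:
        bright = rng.random(n)                        # brightness unrelated to tanks
    noisy_tank = np.where(rng.random(n) < 0.75, tank, 1 - tank)  # weak genuine cue
    return np.column_stack([bright, noisy_tank]), tank

def train_logistic(X, y, lr=0.5, steps=2000):
    # Plain logistic regression trained by gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_train, y_train = make_data(brightness_correlated=True)
X_test, y_test = make_data(brightness_correlated=False)
w, b = train_logistic(X_train, y_train)
print("train accuracy:", accuracy(w, b, X_train, y_train))  # near perfect
print("test accuracy:", accuracy(w, b, X_test, y_test))     # noticeably worse: it learned 'dark day', not 'tank'

The model does exactly what the training data rewarded, which is the point: you only find out what it actually learned when the conditions change.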

I disagree. There is no way to start with a completely clean slate and develop AI without initial programming. Every AI project starts with base programming that structures how the computer learns. This also includes the purpose. In your own example, the purpose was recognition, which the computer did do. It just did not recognize what the testers wanted it to recognize, but it did recognize something. It did not paint a picture, which would have been a change of purpose. So the test was not entirely a failure. AI must include learning capability, but it also must include programming structures for that learning that are, at least initially, written by human programmers.

I think what Micah was trying to say is that a lot of the AI work being done in the corporate world today is being done with neural nets (NNs). His analogy stems from the fact that NNs are given parameters and training data and are then left to create the actual architecture; the programmer has very little understanding of, or visibility into, what that architecture actually becomes or what it does. I would point out, however, that most AI researchers who are trying to create Turing-level intelligence do not consider NNs part of that path, though I am not sure what their stance on Deep Learning is. I think their bias comes from not having that level of understanding of the architecture, as I stated earlier. When you are trying to find the secret to intelligence, a black-box method seems counterintuitive.

By then, learning algorithms must be better.
One interesting fact is that the genetic difference between a chimp and a human is measured in single-digit percentages, and the amount of information in DNA seems to be less than in a program like UNIX.
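To put rough numbers on that DNA-versus-UNIX comparison (all of the figures below are my own back-of-the-envelope assumptions, not measurements):

base_pairs = 3.2e9          # assumed approximate size of the human genome
bits_per_base = 2           # A/C/G/T: four symbols, so 2 bits each
genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(f"raw genome: ~{genome_mb:.0f} MB")      # ~800 MB, and far less once redundancy is compressed out

lines_of_code = 20e6        # assumed size of a large UNIX-like source tree
bytes_per_line = 60         # assumed average line length
os_mb = lines_of_code * bytes_per_line / 1e6
print(f"OS source tree: ~{os_mb:.0f} MB")      # ~1,200 MB, the same order of magnitude

So the uncompressed genome and a big operating system really do land within a factor of a few of each other, which is why the comparison keeps coming up.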
It would be possible to start with a baby AI and let it mature and, more surprisingly, to allow it to write its own programming.
With a military machine, all that is needed is a level equivalent to a self-driving car, and that technology is five years away, at most.
I am optimistic, and I think that the military have nothing to gain in a battle between two robotic armies: not android soldiers, but drones, tanks and self-driving cargo mules. It could be like a demolition derby.

"By then learning algorithms must be better.
One interesting fact is that the complexity diff..."
I would be interested to see what you are basing your calculations on. When you say computer power you are obviously not talking about clock speed, since the human brain has roughly a 100 Hz serial speed limit and most computers are doing 2.5 GHz or better. So we must be talking about processing power, which also may not be as far off as you think, considering "Deep Rybka 3 on an Intel Core 2 Quad 6600 [processor] has an Elo rating of 3,202 on 2.4 billion floating-point operations per second" (Swedish Chess Computer Association). Watson, I am sure, is right up there as well, with tuple operations and cognitive processing. As for self-driving cars, Google has one that has already driven hundreds of thousands of miles, including the streets of San Francisco. Stanley won the DARPA Grand Challenge in 2005; the company I was working for at the time came in second.
If you consider predictions that by 2020 human knowledge will be doubling every 72 hours, and that within the next two years chip feature sizes will be around 7 nm, it could be a lot sooner. Moore's Law will hit the physical limit of silicon somewhere around 2020, but they already have another hybrid material as well as 3D stacking. Also, I recently saw an announcement for a simulation control that lets them model ink saturation on printed boards for better accuracy. All this makes me think that it is becoming harder and harder to measure and predict such events; the paradigm is shifting too rapidly.
By the way, in my book MagTech I allude to brain augmentation by nanotechnology first starting around 2026. I think that in twelve years we could easily be there. While that is not necessarily dependent on the Singularity, I think it could be a factor in it.

As I said in response to an earlier post, I would disagree with the ant analogy, only because the AI has a vested interest in the survival of this planet for its own survival. But I agree with the rest of what you say, and I would hope that the developers of the system would find a way to instill some kind of moral judgment that would be preserved, regardless of its level of intelligence.

As you pointed out, even if the AI is following its original hand-coded program directives, as its intelligence grows it could come to see us as an impediment to its purpose. Even worse, some researchers fear that such an AI might engage in self-optimizing reprogramming and in the process rewrite its directives as well. Then there is what you also talked about: not only do we have to worry about bugs introduced by the original programmers, but now also about bugs introduced by the AI itself as it recursively optimizes its own software.

I think you may have misunderstood my ant analogy. I was thinking of us as the ants. As Micah pointed out, the AIs may re-assign resources and rework the environment - 'Anti-terraforming'? What conditions would a machine prefer: colder, drier and with vast fields of radiators to bleed off heat? - with no consideration for and to the detriment of humanity. Then again, Micah also mentioned a situation where the AIs might lack the agency to maintain and repair, or even make more of themselves, so that they would need busy mammal hands to do all that for them. Humans may well become literal Code Monkeys...


Total isolation is obviously the only true safeguard for such an entity...but if you totally isolate it, what's the point of building it in the first place?
There doesn't seem to be any point in creating AI (other than to prove we can do it, or for certain types of research) if you're not going to use it for anything. Using it for something would inevitably require it having some connection to the outside world.

Yes, that's exactly what I meant.
I was thinking of computer simulations like the one done in 2007 on a BlueGene/L supercomputer, where they simulated 8,000,000 neurons with up to 6,300 synapses each...about half the brain of a mouse:
http://news.bbc.co.uk/2/hi/technology...
Such systems only allow you to observe what's going on at the neuron level, not necessarily understand things at a line-by-line code level because the code involved is simply setting up the structures, not commands like IF THIS THEN THAT.
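A toy illustration of what "setting up the structures" means (nothing like the scale of the BlueGene work, and every parameter here is invented): the program only defines neurons, random synapses and an update rule, and whatever activity appears emerges from running it rather than from explicit IF THIS THEN THAT commands.

import numpy as np

rng = np.random.default_rng(1)
N = 1000                                    # toy network, nowhere near 8,000,000 neurons
p_connect = 0.01                            # each pair of neurons connected with 1% probability
weights = (rng.random((N, N)) < p_connect) * rng.normal(0.2, 0.05, (N, N))

v = np.zeros(N)                             # membrane potentials
threshold, decay = 1.0, 0.95

for step in range(200):
    spikes = v > threshold                  # which neurons fire this step
    v[spikes] = 0.0                         # reset the ones that fired
    drive = rng.random(N) * 0.1             # small random external input
    v = decay * v + weights @ spikes + drive  # integrate input from spiking neighbours
    if step % 50 == 0:
        print(f"step {step}: {int(spikes.sum())} neurons firing")

There is no line anywhere that says what the network should do; the behaviour lives in the numbers, which is exactly why it is so hard to audit at the code level.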

"I would be interested to see what you are basing your calculations on..."
Correct me if I'm wrong, but it sounds like Humberto's basing his calculations on ones similar to Ray Kurzweil's, who in his book The Singularity is Near: When Humans Transcend Biology picked 2045 as the year of the Singularity. Kurzweil bases his calculations on the number of flops computers can do compared to a human brain.
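For a feel of what that sort of calculation looks like, here is a crude sketch; the brain estimate, machine figure and doubling time are assumptions chosen in the spirit of Kurzweil's argument rather than his exact numbers:

import math

brain_ops_per_sec = 1e16      # one commonly quoted estimate of the brain's raw processing rate
machine_ops_per_sec = 1e13    # assumed high-end machine of the mid-2010s (~10 TFLOPS)
doubling_time_years = 1.5     # assumed hardware doubling period
start_year = 2014

doublings_needed = math.log2(brain_ops_per_sec / machine_ops_per_sec)
crossover = start_year + doublings_needed * doubling_time_years
print(f"hardware parity with the brain around {crossover:.0f}")   # ~2029 under these assumptions

Change any of the inputs and the date shifts, but because the growth is exponential it shifts by years rather than centuries, which is why Kurzweil is comfortable naming specific decades.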
Kurzweil also doesn't fear the robot apocalypse because he figures we'll become the robots. AIs won't outpace us because we'll augment ourselves or transfer our minds into machines. If you can't beat them, become them. That kind of thing.
He's also, I might add a bit sarcastically, one of the world's biggest optimists.

A recent posting about Asimov on Goodreads that I commented on got me thinking about the state of Artificial Intelligence (AI), both today and in the near future. As my bio says, I have a background in Software Engineering, I have worked on commercial projects that incorporated different aspects of AI, and I have always been interested in the field. Unfortunately, like many people who have read a lot of books on real AI as we know it today, I found there was always a lot of promise but nothing really there yet. So, that being the case, for the last few years I have not been following its progress as closely as I should have; I was also distracted by nanotechnology, which at the moment is mainly a hardware issue. You may ask why a software guy would be interested in a hardware issue and not a software issue.
Perhaps I should explain: when I say AI, I am talking about the old-school meaning of cognitive emergence, which is now kind of lumped into the new idea of a Singularity. Many of the sub-fields of AI have seen some pretty amazing growth in recent years, and as a whole the field has really taken off. However, most in the field would say we are still a ways off from that critical moment when it reaches the goal that was set so many decades ago. So the simple answer is that, apart from the occasional article about a marginal improvement in a small portion of the AI field, there just hasn't been a lot going on to keep me focused on it. Nanotechnology, on the other hand, is getting closer and closer as we rapidly approach a time where further improvement in chip design will require advancements at the nano-scale. But that is a topic for another discussion.
All of this leads me to my point: after spending a few days doing some research, I found that a lot of debate has been going on since I last checked in. The hot-button topic seems to be the Singularity, and there are two major schools of thought. On the one hand, there are those who believe that, should it occur, mankind will have sufficient safeguards in place to keep it from destroying us and taking over the world. The other school of thought is that such an entity will rapidly evolve a super-intelligence and, as part of its utility function, determine that we pose a threat, or are irrelevant and therefore useless to its needs. They believe that, given its super-intelligence, it could easily take over the world and thus destroy mankind.
Given those two opposing views, I would like to start a discussion regarding the possibility of a friendly AI or a destructive AI.
I will also be posting this discussion on my blog, so those who feel so inclined may post their comments there as well. I will in turn share here any comments posted there but not here, so as to contribute to the discussion.
My blog address is:
http://sinjinstechtalk.blogspot.com/2...