SciFi and Fantasy Book Club discussion

The Singularity is Near: When Humans Transcend Biology
This topic is about The Singularity is Near
Members' Chat > Singularity Destruction of Mankind or Blessing to Humanity?

Comments Showing 1-23 of 23

message 1: by Sinjin (last edited Jan 21, 2014 05:00PM) (new)

Sinjin Bane | 15 comments This topic is not about the book per se; it is more about the question it raises, along with several other books related to the topic.

A recent posting about Asimov on Goodreads that I commented on got me thinking about the state of Artificial Intelligence (AI), both today and in the near future. As my bio says, I have a background in Software Engineering; I have worked on commercial projects that incorporated different aspects of AI and have always been interested in the field. Unfortunately, like many people who have read a lot of books on real AI as we know it today, I found there was always a lot of promise, but nothing really there yet. So for the last few years I have not been following its progress as closely as I should have, and nanotechnology, which at the moment is mainly a hardware issue, distracted me. You may ask why a software guy would be interested in a hardware issue rather than a software one.

Perhaps I should explain: when I say AI, I am talking about the old-school meaning of cognitive emergence, which is now kind of lumped into the newer idea of a Singularity. Many of the sub-fields of AI have seen some pretty amazing growth in recent years, and as a whole the field has really taken off. However, most in the field would say we are still a ways off from that critical moment when it reaches the goal that was set so many decades ago. So the simple answer is that, apart from the occasional article about a marginal improvement in some small portion of the AI field, there just hasn’t been a lot going on to keep me focused on it. Nanotechnology, on the other hand, is getting closer and closer as we rapidly approach a time where further improvement in chip design will require advances at the nano-scale. But that is a topic for another discussion.

All of this leads me to my point: after spending a few days doing some research, I found that a lot of debate has been going on since I last checked in. The hot-button topic seems to be the Singularity, and there are two major schools of thought. On the one hand, there are those who believe that, should it occur, mankind will have sufficient safeguards in place to keep it from destroying us and taking over the world. The other school of thought is that such an entity will rapidly evolve a super-intelligence and as part of its utility function determine that we pose a threat, or are irrelevant and therefore useless to its needs. They believe that, given its super-intelligence, it could easily take over the world and thus destroy mankind.

Given those two opposing views, I would like to start a discussion regarding the possibility of a friendly AI or a destructive AI.

I will also be posting this discussion on my blog, so those who feel so inclined may post their comments there as well. I will in turn re-post here any comments that are not posted here, so they can contribute to the discussion.

My blog address is:

http://sinjinstechtalk.blogspot.com/2...


message 2: by Jaime (new)

Jaime | 97 comments Given the tendency of complex systems to give rise to unintended consequences, the cynic in me expects one or more of those safeguards to be insufficient, or humanity completely misses some subtle aspect of the oncoming Singularity so that it fully manifests in a way that takes most everyone by surprise.
If we take 'Singularity' to mean the notion - as defined by Vernor Vinge - of a development beyond which the future course of technological progress and human history is unpredictable or even unfathomable, then all bets are off.
There's no reason to assume It - Them? - will care about humans one way or another, or even be aware of us. How often do you think of, say, the intestinal flora in your body? I could easily picture humanity existing in a state of (comparatively) mindless co-existence with the AIs, the way an ant can wander along the edge of a swimming pool at some 5-star resort in Tahiti. Then again, if we 'ants' take an interest in the equivalent of a Cheeto left forgotten under the AIs' sofa, it may not end well for us...


message 3: by C.J. (new)

C.J. Davis | 30 comments I saw the new movie Her recently and it beautifully addressed this topic. Definitely worth watching!


message 4: by Humberto (new)

Humberto Contreras | 147 comments It could go both ways.

A military-political complex could develop an intelligent machine designed to destroy the enemies of that regime. Given the nature of our leaders, the enemies are the poor, those who have a different ethnic look, and the 47% whose votes they don't care about. This is scary. I think the military should be prohibited from developing artificial intelligence. By the way, this destruction machine need not be very intelligent; the IQ of an animal would be enough.

On the other hand, the artificial intelligence could be more intelligent, and more rational, than its creators and decide to help us.

In my series 'living dangerously in utopia' I present an artificial intelligence who matures by emulating a human female, thereby falling in love with her man. To make a long story short, the trio saves the world.

I also wrote 'Practical Artificial Intelligence', where I deal with this friendly/unfriendly question.


message 5: by Micah (new)

Micah Sisk (micahrsisk) | 1436 comments Sinjin wrote: "...such an entity will rapidly evolve a super-intelligence and as part of its utility function determine that we pose a threat, or are irrelevant and therefore useless to its needs."

There's no way to predict the outcome unless you know what such an entity's needs actually are. Are its needs self-determined? Or are they a by-product of how it was created, intentionally or not?

I could see such an entity just not giving a damn about us, and then gobbling up all the processing power, networking and communications infrastructure of the world for its own use, thus depriving us of all those resources and driving us back into a pre-computer age. Kind of the "humans gobble up the entire planet's ecosystem" scenario to the detriment of all other flora/fauna.

But then you may have limiting factors, such as: can this essentially software entity continue existing without the physical means of maintaining, repairing, and/or expanding its hardware side? I.e., does the AI possess any kind of physical agency that could circumvent the need for humans altogether?

If not, you may end up with a symbiosis of sorts where humans bargain to maintain some processing, communication and networking power in return for servicing the AI's physical resource needs.

There are just too many imponderables without spelling out the initial assumptions of the AI's nature.


message 6: by L.G. (new)

L.G. Estrella | 231 comments The cynic in me suspects that things will turn out poorly. If such an entity were to exist, then we would, in many respects, be as far below it as ants are below us. How many of us care when we step on an ant? Not many, I would wager, and I suspect that such an entity would show a similar lack of care about us.

Of course, we could try and build in some form of protection. But if it really is as advanced as speculation suggests, then it will almost certainly find some way around those protections. It will then become a moral question (or whatever passes for morals for such an entity) as to what to do with humanity.

On the other hand, I strongly believe that any such entity should, or rather must, be socialised. That is, if it is raised by people to care about people, then it may turn out all right.

Then again, this is all just speculation. I think we're still a ways off yet from this, but it is fun to speculate.


message 7: by R. (new)

R. Leib | 87 comments All things require a purpose. In the case of AI, that purpose is defined in a subroutine that is called to validate all problem resolutions. How that purpose is delineated and how the AI interprets it most likely will determine the AI's interaction with humans. This creates two sets of variables: programming and the execution of that programming.

Assuming that AIs do not develop emotions, logic dictates that we will be perceived as an impediment, an integral part, or irrelevant to the AI's purpose. If we are irrelevant, it seems logical that the AI would do nothing intentionally deleterious to us. If we are an integral part of the AI's purpose, it follows that it should act to protect us as it would itself. So that leaves the question of what would happen if we pose an impediment to its purpose.

If the AI represents a superior intelligence, it is likely that it would determine and execute a sophisticated approach to neutralizing any threat we represent to its purpose. A logical element of this approach would be that we would remain unaware of its existence and intent. This suggests political manipulation. So I believe that, no matter how the AI perceives us, it would seem to us that the AI is benign.
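To make that concrete, here is a minimal sketch of the kind of "purpose subroutine" being described. Everything in it (the Plan type, the thresholds, the example purpose) is invented purely for illustration and is not taken from any real system:

```python
# Hypothetical sketch only: a single validation subroutine that every candidate
# "problem resolution" must pass before the AI acts on it.
from dataclasses import dataclass

@dataclass
class Plan:
    description: str
    serves_goal: float   # how well the plan advances the stated purpose (0.0 to 1.0)
    harms_humans: bool   # crude stand-in for "treats us as an impediment"

def validate_resolution(plan: Plan) -> bool:
    """Approve a plan only if it advances the purpose and is not flagged as harmful."""
    return plan.serves_goal > 0.5 and not plan.harms_humans

if __name__ == "__main__":
    candidates = [
        Plan("reroute load around a failed substation", 0.9, False),
        Plan("shut down hospitals to conserve power", 0.95, True),
    ]
    for plan in candidates:
        verdict = "approved" if validate_resolution(plan) else "rejected"
        print(f"{plan.description}: {verdict}")
```

How the purpose is delineated is exactly the part that decides everything: the same plan can pass or fail depending on how something like serves_goal or harms_humans gets computed.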

This theory has one potential flaw, and it is a big one. What if the AI goes insane? This is not an unlikely possibility. Having worked for many years in the computer software industry, I dealt with a lot of insane code. All it takes is a minuscule bug. (For example, failing to branch out of a routine before executing a section of code intended to address a different circumstance.) Once a bug is introduced into the code, all bets are off. It can cause a program to behave more incomprehensibly than any human schizophrenic.
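That parenthetical example translates almost directly into code. A toy illustration (the function and the messages are made up) of how a single missing early return makes a program "fall through" into logic meant for a different circumstance:

```python
# Hypothetical toy example: the only defect is a missing early return.
def respond(message_type: str) -> str:
    if message_type == "greeting":
        reply = "Hello!"
        # BUG: a `return reply` belongs here. Without it, execution falls
        # through into the code meant for the shutdown case below.
    # Code intended only for shutdown requests:
    reply = "Powering down all systems."
    return reply

print(respond("greeting"))   # prints "Powering down all systems." instead of "Hello!"
```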


message 8: by Sinjin (new)

Sinjin Bane | 15 comments Jaime wrote: "Given the tendency of complex systems to give rise to unintended consequences, the cynic in me expects one or more of those safeguards to be insufficient, or humanity completely misses some subtle ..."

Interesting take on it; from your references I assume you have read a few books related to this topic. I would disagree with the ant analogy, though, only because the AI has a vested interest in the survival of this planet for its own survival. Until it can create a method of escaping the hardware it inhabits and leaving the planet, it has to maintain a more active awareness of us. Unless, that is, you are assuming that the intelligence exists in a simulation and does not realize the outside world exists.


message 9: by Sinjin (new)

Sinjin Bane | 15 comments Humberto wrote: "It could go both ways.

A military-political complex could develop an intelligent machine designed to destroy the enemies of that regime. Given the nature of our leaders, the enemies are the poor, th..."


Some AI researchers share your concern about a military regime creating an unfriendly AI. There is also the possibility, as Jaime said, that the creators miss something and it totally blows past us; some researchers are working to prevent that as best they can. A good discussion of this is found in:

https://www.goodreads.com/book/show/1...

Though I haven’t read your book yet (I plan to), I did notice that in the description you provide a timeline. Does that timeline extend to when the first AI with actual Turing-level intelligence will emerge?


message 10: by Sinjin (new)

Sinjin Bane | 15 comments CJ wrote: "I saw the new movie Her recently and it beautifully addressed this topic. Definitely worth watching!"

I haven’t seen the movie yet, but there was some mention of it in an article I read…

http://io9.com/can-we-build-an-artifi...


message 11: by Sinjin (new)

Sinjin Bane | 15 comments Micah wrote: "Sinjin wrote: "...such an entity will rapidly evolve a super-intelligence and as part of its utility function determine that we pose a threat, or are irrelevant and therefore useless to its needs."..."

That seems to be the general view shared by a lot of AI researchers.


message 12: by Micah (last edited Jan 22, 2014 01:06PM) (new)

Micah Sisk (micahrsisk) | 1436 comments R. wrote: "In the case of AI, that purpose is defined in a subroutine that is called to validate all problem resolutions..."

True, but only if we're talking about an AI that is actually programmed and not one "grown" from a neural network kind of arrangement. There are plenty of scientists who argue that the complexity of an intelligence on the same order of magnitude as a human mind is simply too great for it to be programmed. That's why a lot of research has gone into computer models that emulate neural structures.

And in that case, you can never be certain that your intended purpose is the one that the AI ends up learning.

Case in point: I remember one attempt at pattern recognition via a neural-network computer, where they showed the computer a bunch of pictures. Some of the pictures had military tanks in them, others did not. The experiment was trying to teach the computer how to recognize pictures with tanks in them.

However, the computer kept failing. When the researchers pored over their findings, what they ended up realizing was that every picture with a tank in it also happened to have been taken on a gray, cloudy day. AND, every picture the computer selected as matching its search parameters was of a gray, cloudy day...tank or not.

What they had taught the computer wasn't how to recognize a tank, but rather how to recognize a gray cloudy day!

Intended purpose: FAIL ];P
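For anyone curious what that looks like mechanically, here is a rough sketch (made-up data and a deliberately tiny model, nothing like the original experiment) of a classifier latching onto overall brightness instead of the tanks:

```python
# Illustration only: a tiny classifier trained on "images" where every tank
# photo also happens to be dark (cloudy day). Brightness alone separates the
# classes, so that is all the model ever learns.
import numpy as np

rng = np.random.default_rng(0)
n = 200
has_tank = rng.integers(0, 2, n)                    # label: tank present or not
# Flawed dataset: tank photos were all taken on gray days (low brightness).
brightness = np.where(has_tank == 1,
                      rng.normal(0.3, 0.05, n),     # cloudy-day photos
                      rng.normal(0.7, 0.05, n))     # sunny-day photos
images = rng.normal(brightness[:, None], 0.05, (n, 64))   # 64 "pixels" per image

# One-feature "neural net": logistic regression on mean pixel value.
x = images.mean(axis=1)
w, b = 0.0, 0.0
for _ in range(2000):                               # plain gradient descent
    p = 1 / (1 + np.exp(-(w * x + b)))
    grad = p - has_tank
    w -= 0.1 * np.mean(grad * x)
    b -= 0.1 * np.mean(grad)

pred = (1 / (1 + np.exp(-(w * x + b))) > 0.5).astype(int)
print("training accuracy:", (pred == has_tank).mean())
```

On data like this the model scores essentially perfectly, yet the only thing it has learned is "dark photo means tank," which is exactly the failure described above.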


message 13: by R. (new)

R. Leib | 87 comments Micah wrote: "True, but only if we're talking about an AI that is actually programmed and not one "grown" from a neural network kind of arrangement. There are plenty of scientists who argue that the complexity of an intelligence on the same order of magnitude as a human mind is simply too great for it to be programmed. That's why a lot of research has gone into computer models that emulate neural structures."

I disagree. There is no way to start with a completely clean slate and develop AI without initial programming. Every AI project starts with base programming that structures how the computer learns. This also includes the purpose. In your own example, the purpose was recognition, which the computer did perform. It just did not recognize what the testers wanted it to recognize, but it did recognize something. It did not paint a picture, which would have been a change of purpose. So the test was not entirely a failure. AI must include learning capability, but it must also include programming structures for that learning that are, at least initially, written by human programmers.


message 14: by Sinjin (new)

Sinjin Bane | 15 comments R. wrote: "Micah wrote: "True, but only if we're talking about an AI that is actually programmed and not one "grown" from a neural network kind of arrangement. There are plenty of scientists who argue that th..."

I think what Micah was trying to say is that a lot of the AI work being done in the corporate world today is being done with Neural Nets (NNs). His analogy stems from the fact that NNs are given parameters and training data and then left to create the actual architecture. The programmer has very little understanding of, or visibility into, what that architecture actually becomes or what it does. I would point out, however, that most AI researchers who are trying to create Turing-level intelligence do not consider NNs part of that path, though I am not sure what their stance on Deep Learning is. I think their bias comes from not having that level of understanding of the architecture, as I stated earlier. When you are trying to find the secret to intelligence, a black-box method seems counterintuitive.


message 15: by Humberto (new)

Humberto Contreras | 147 comments First we must wait until at least 2045 for computer power to catch up with the level of a human mind.
By then learning algorithms must be better.

One interesting fact is that the complexity difference between a chimp and a human is measured in single digits. And the amount of information in DNA seems to be less than in a program like UNIX.

It would be possible to start with a baby AI and let it mature, and more surprisingly to allow it to write its own programming.

With a military machine, all that is needed is a level equivalent to a self-driving car. And that technology is five years away, at most.

I am optimistic, and I think that the military have nothing to gain in a battle between two robotic armies. Not android soldiers, but drones, tanks, and self-driving cargo mules. It could be like a destruction derby.


message 16: by Sinjin (last edited Jan 22, 2014 09:36PM) (new)

Sinjin Bane | 15 comments Humberto wrote: "First we must wait until at least 2045 for computer power to catch up with the level of a human mind.
By then learning algorithms must be better.

One interesting fact is that the complexity diff..."


I would be interested to see what you are basing your calculations on. When you say computer power you are obviously not talking about clock speed, since the human brain has a serial speed limit of about 100 Hz and most computers are doing 2.5 GHz or better. So we must be talking processing power, which also may not be as far away as you think, considering that “Deep Rybka 3 on an Intel Core 2 Quad 6600 [processor] has an Elo rating of 3,202 on 2.4 billion floating-point operations per second” (Swedish Chess Computer Association). Watson, I am sure, is right up there as well, with tuple operations and cognitive processing. As for self-driving cars, Google has one that has already driven hundreds of thousands of miles, including on the streets of San Francisco. Stanley won the DARPA Grand Challenge in 2005; the company I was working for at the time came in second.

If you consider that by 2020 human knowledge is predicted to be doubling every 72 hours, and that within the next two years chip feature sizes will be around ~7 nm, it could be a lot sooner. Moore’s Law will hit the physical limit of silicon somewhere around 2020, but they already have another hybrid material as well as 3D stacking. I also recently saw an announcement for a simulation tool that lets them model ink saturation on printed boards for better accuracy. All of this makes me think it is becoming harder and harder to measure and predict such events; the paradigm is shifting too rapidly.
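Just to show how sensitive these dates are to the inputs, here is a quick back-of-the-envelope extrapolation. The brain estimate (~10^16 operations per second), the starting machine, and the two-year doubling time are all assumptions in the spirit of Kurzweil's numbers, not established facts:

```python
# Back-of-the-envelope sketch; all three constants below are assumptions.
BRAIN_OPS_PER_SEC = 1e16             # assumed functional estimate of the human brain
START_YEAR, START_OPS = 2014, 1e13   # assumed starting point: a high-end machine today
DOUBLING_TIME_YEARS = 2.0            # classic Moore's-Law-style doubling

year, ops = START_YEAR, START_OPS
while ops < BRAIN_OPS_PER_SEC:
    year += DOUBLING_TIME_YEARS
    ops *= 2
print(f"Brain-scale compute reached around {year:.0f} under these assumptions.")
```

Move the brain estimate up or down a couple of orders of magnitude, or speed up the doubling, and the date swings by a decade or more, which is part of why these predictions vary so much.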

By the way, in my book MagTech I allude to brain augmentation by nanotechnology first starting around 2026. I think that in twelve years we could easily be there. While that is not necessarily dependent on the Singularity, I think it could be a factor in it.


message 17: by Sinjin (new)

Sinjin Bane | 15 comments L.G. wrote: "The cynic in me suspects that things will turn out poorly. If such an entity were to exist, then we would, in many respects, be as far below it as ants are below us. How many of us care when we ste..."

As I said in response to an earlier post, I would disagree with the ant analogy, only because the AI has a vested interest in the survival of this planet for its own survival. But I agree with the rest of what you say, and I would hope that the developers of the system would find a way to instill some kind of moral judgment that would be preserved regardless of its level of intelligence.


message 18: by Sinjin (new)

Sinjin Bane | 15 comments R. wrote: "All things require a purpose. In the case of AI, that purpose is defined in a subroutine that is called to validate all problem resolutions. How that purpose is delineated and how the AI interpre..."

As you pointed out, even if the AI is following its original hand-coded program directives, as its intelligence grows it could come to see us as an impediment to its purpose. Even worse, some researchers fear that such an AI might engage in self-optimizing reprogramming and, in the process, rewrite its directives as well. Then there is what you also talked about: not only do we have to worry about bugs introduced by the original programmers, but now also about bugs introduced by the AI itself as it recursively optimizes its own software.


message 19: by Jaime (last edited Jan 23, 2014 06:39AM) (new)

Jaime | 97 comments Sinjin wrote: "Jaime wrote: "Given the tendency of complex systems to give rise to unintended consequences, the cynic in me expects one or more of those safeguards to be insufficient, or humanity completely misse..."

I think you may have misunderstood my ant analogy. I was thinking of us as the ants. As Micah pointed out, the AIs may re-assign resources and rework the environment - 'Anti-terraforming'? What conditions would a machine prefer: colder, drier and with vast fields of radiators to bleed off heat? - with no consideration for and to the detriment of humanity. Then again, Micah also mentioned a situation where the AIs might lack the agency to maintain and repair, or even make more of themselves, so that they would need busy mammal hands to do all that for them. Humans may well become literal Code Monkeys...


message 20: by Jaime (new)

Jaime | 97 comments Any realistic concern of burgeoning AI suddenly determining humanity is a threat and deciding to eliminate us is, to my mind, groundless, given basic safeguards. Whatever 'base' it's developing in, quantum CPUs, neural nets, central nervous system tissue or some combination of all three, keep that junk in the middle of a big room and do not, just do not, connect it to the internet. Hell, use power from an isolated stand-alone generator just in case It figures out how to reach out through the electrical grid using ground loops or something...


message 21: by Micah (last edited Jan 23, 2014 06:18AM) (new)

Micah Sisk (micahrsisk) | 1436 comments Jaime wrote: "Whatever 'base' it's developing in, quantum CPUs, neural nets, central nervous system tissue or some combination of all three, keep that junk in the middle of a big room and do not, just do not, connect it to the internet..."

Total isolation is obviously the only true safeguard for such an entity...but if you totally isolate it, what's the point of building it in the first place?

There doesn't seem to be any point in creating AI (other than to prove we can do it, or for certain types of research) if you're not going to use it for anything. Using it for something would inevitably require it having some connection to the outside world.


message 22: by Micah (new)

Micah Sisk (micahrsisk) | 1436 comments Sinjin wrote: "I think what Micah was trying to say is that a lot of the AI work being done in the corporate world today is being done with Neural Nets (NNs). His analogy stems from the fact that NNs are given parameters and training data and then left to create the actual architecture. The programmer has very little understanding of, or visibility into, what that architecture actually becomes or what it does..."

Yes, that's exactly what I meant.

I was thinking of computer simulations like the one done in 2007 on a BlueGene L supercomputer, where they simulated 8,000,000 neurons with up to 6,300 synapses each...about half the brain of a mouse:
http://news.bbc.co.uk/2/hi/technology...

Such systems only allow you to observe what's going on at the neuron level; they don't necessarily let you understand things at a line-by-line code level, because the code involved is simply setting up the structures, not issuing commands like IF THIS THEN THAT.
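As a toy illustration of that difference (nothing like the actual BlueGene code; all the sizes, weights, and thresholds below are invented), the hand-written part just wires up the structures, and the behavior falls out of running them:

```python
# Hypothetical toy network: the programmer writes the wiring, not the behavior.
import random

random.seed(1)
N_NEURONS, SYNAPSES_EACH, THRESHOLD = 1000, 100, 5.0

# "Setting up the structures": random synapses with random weights.
synapses = [[(random.randrange(N_NEURONS), random.uniform(0.0, 1.0))
             for _ in range(SYNAPSES_EACH)] for _ in range(N_NEURONS)]

potential = [0.0] * N_NEURONS
firing = set(random.sample(range(N_NEURONS), 50))   # seed the network with some activity

for step in range(10):
    # Deliver spikes from every firing neuron to its downstream targets.
    for pre in firing:
        for post, weight in synapses[pre]:
            potential[post] += weight
    # Neurons past threshold fire on the next step and reset; the rest slowly leak.
    firing = {i for i, v in enumerate(potential) if v >= THRESHOLD}
    potential = [0.0 if i in firing else v * 0.9 for i, v in enumerate(potential)]
    print(f"step {step}: {len(firing)} neurons firing")
```

There is no rule in there that says what the network will do; whatever pattern of firing emerges can only be observed at the neuron level, which is the point.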


message 23: by Micah (last edited Jan 23, 2014 06:39AM) (new)

Micah Sisk (micahrsisk) | 1436 comments Sinjin wrote: "Humberto wrote: "First we must wait until at least 2045 for computer power to catch up with the level of a human mind."

I would be interested to see what you are basing your calculations on..."


Correct me if I'm wrong, but it sounds like Humberto's basing his calculations on ones similar to Ray Kurzweil's, who in his book The Singularity is Near: When Humans Transcend Biology picked 2045 as the year of the Singularity. Kurzweil bases his calculations on the number of flops computers can do compared to a human brain.

Kurzweil also doesn't fear the robot apocalypse because he figures we'll become the robots. AIs won't outpace us because we'll augment ourselves or transfer our minds into machines. If you can't beat them, become them. That kind of thing.

He's also, I might add a bit sarcastically, one of the world's biggest optimists.

