
Solomon's Code: Humanity in a World of Thinking Machines

A thought-provoking examination of artificial intelligence and how it reshapes human values, trust, and power around the world.

Whether in medicine, money, or love, technologies powered by forms of artificial intelligence are playing an increasingly prominent role in our lives. As we cede more decisions to thinking machines, we face new questions about staying safe, keeping a job and having a say over the direction of our lives. The answers to those questions might depend on your race, gender, age, behavior, or nationality.

New AI technologies can drive cars, treat damaged brains and nudge workers to be more productive, but they also can threaten, manipulate, and alienate us from others. They can pit nation against nation, but they also can help the global community tackle some of its greatest challenges—from food crises to global climate change.

In clear and accessible prose, global trends and strategy adviser Olaf Groth, AI scientist and social entrepreneur Mark Nitzberg, along with seasoned economics reporter Dan Zehr, provide a unique human-focused, global view of humanity in a world of thinking machines.

288 pages, Hardcover

Published November 6, 2018

58 people are currently reading
217 people want to read

About the author

Olaf Groth

5 books, 5 followers

Ratings & Reviews



Community Reviews

5 stars: 16 (23%)
4 stars: 25 (36%)
3 stars: 20 (29%)
2 stars: 6 (8%)
1 star: 1 (1%)
Displaying 1 - 10 of 10 reviews
David Wineberg
Author of 2 books, 858 followers
December 15, 2018
The clearest way to describe the differences between artificial intelligence (AI) and regular computer development is that computers help us understand the world, and AI helps us change the world. That’s my definition (though thousands probably came up with the same idea), and Solomon’s Code reinforces it in every domain it examines.

AI digests unfathomable amounts of data to come up with methods and solutions that would take humans years, even centuries, to replicate. So driverless cars, arrangements of warehouse goods, and defeating humans at chess and Go are all examples of AI’s power to process data. If you want a computer to run a robotic arm, endlessly bolting car parts together, that’s old-hat, garden-variety computer smarts at work. Saving your life by comparing your x-ray to a hundred million others around the world and coming up with an outside-the-box diagnosis in four minutes: that’s AI.

Olaf Groth and Mark Nitzberg have combed the planet to determine the state of the art in this nearly comprehensive look at developments, and in particular the thinking behind them, because the considerations are huge and many. Things like language, religion, history, human rights, and culture need to be considered, and so AI is taking different approaches around the world. Government philosophies toward AI startups also twist the landscape accordingly. The authors interview experts everywhere to tease out the differences in each nation’s approach, and the resulting developments.

They are careful not to be (too) swept up in the excitement, citing the very real risks AI poses for civilization. For every great solution, they raise controversies: jobs lost, never having to think or remember again, privacy, security, and the deepening of the already considerable gulf with the have-nots (the “digital divide”).

For example, there is one inevitable dystopian AI scenario that is already well entrenched. In China, the authors say, the government has installed 170 million surveillance cameras. In addition to recording crime, they use AI facial and gait recognition to nail people by the millions. Things like crossing the street outside the crosswalk affect a person’s “social credit score” in China. Failing to pay for parking, failing to stop at stop signs, being drunk, paying taxes late - pretty much anything the government wants can affect the score, at the government’s whim. And the government demonstrates its whims all the time. A bad score keeps people from flying at all, or traveling first class in trains, or whatever the government wants to deny them. The worse the score, the less people can do in China.

This has gone beyond the famous episode of Black Mirror, in which a young woman, ever conscious of her social media and reputation score and always eager to please, trips up. This leads to a cascade of bad reviews, influenced and exacerbated by her deteriorating score, until her life is a total ruin. No one wants to be caught associating with a low score person. Party invitations cease. Store clerks shun her. Marriage to someone decent is out of the question. So is a career job or even an interview. Forget loans, mortgages or university acceptance. Low scorers are poison. That this is already happening in the world is horrifying.

Not participating is no answer either, as the lack of a score means zero access to anything. Ask any immigrant.

And then, do you really want to have to converse with your phone so it can send updates on your mental state and mood – to someone else? Because that will be part of the mandatory health monitoring system already required as a condition of employment in many large American companies. Fitbits are just the beginning.

As bad as that might be, worse is the fact that many AI systems are black boxes. We actually don’t know how they work. They learn by themselves, “improve” by themselves and decide by themselves, with no human oversight, input or control. No one can explain how they come to their decisions. (This is one rather large aspect of AI the book does not cover.)

The bottom line on AI being smarter than people is captured in the fact that computers can’t think. People think. They take numerous inputs into consideration, and weigh them unfairly, with biases and prejudices and gaps in knowledge. Worse, humans do not even know what consciousness is. They have no viable theory of what makes a person the person s/he is, and not someone else, or a toaster. Until and unless humans can define what consciousness is and how the brain creates, manages and tolerates it, there can be no threat of AI also having consciousness.

Solomon’s Code finishes weakly. The Conclusion is a hope that some sort of global oversight body will emerge to regulate AI developments worldwide, somehow taking everyone’s values and fears into account. It then ends with a bizarre Afterword that is really a Preface, explaining the value of what you are about to read. But for insight into the state of the world of AI, the book is very useful.

I liked Solomon’s Code because it is fair and balanced, and all but forces the reader to think, a property AI seeks to dispense with.

David Wineberg
Peter Tillman
4,014 reviews, 465 followers
December 8, 2019
I tried this one, a chance pickup from the new bookshelf. I kept reading (skimming) for quite a ways before giving up. About the only parts I enjoyed were the two SF stories that start and end the book, which weren't bad. And this might not be a bad primer for someone new to the topic. As for me, I wasn't really learning anything. Back it goes!
Mark Steed
64 reviews, 7 followers
April 6, 2019

This is a book about Artificial Intelligence that deliberately poses more questions than it answers. Groth and Nitzberg’s aim is to outline some of the most important multi-disciplinary debates that need to take place if AI is ultimately going to be beneficial to humanity.

The authors take a fundamentally optimistic (but not utopian) view of AI and how it can benefit society, but this is grounded in the realpolitik of twenty-first-century multi- and international relations. This optimism is seen in the espousal of a model of human-AI symbiosis (Chapter 3) which enhances humanity:

a “symbiotic relationship between artificial, human and other types of natural intelligence can unlock incredible ways to enhance the capacity of humanity and environment around us.” (p.69)

This is worked out through discussion of a number of important social debates: justice and fairness, privacy, security, surveillance and changing patterns of work.

In the first half of the book, Groth and Nitzberg discuss a range of important philosophical questions thrown up by AI about the nature of what it is to be human: self-consciousness (pp.96ff), human personhood, autonomy and free will; reshaping the sense of the self (p.76); and the ability of humans to change their values and beliefs over time (p.88).

The second half of the book is a call to arms to put in place a regulatory framework (“guardrails”) for the use of AI which maximises the benefits of AI whilst mitigating potential harm. They argue that this will include drafting a “Digital Magna Carta” which defines human freedoms in the age of AI (p.232). In so doing, the authors recognise just how difficult this is likely to be. Indeed, about a third of the book is devoted to outlining the complexities of the emerging geopolitical context for these discussions.

There is an excellent discussion of “the forces that shape the world’s divergent AI journeys” (Chapter 4), which outlines the different attitudes to AI and technology around the world: “the Digital Barons” (Google, Facebook, Amazon, Alibaba and Baidu); “the Cambrian Countries” (US and China); “the Castle Countries” (Russia and Western Europe); “the Knights of the Cognitive Era” (military/defence-based AI: US, China and Israel); “the Improv Artists” (other countries developing aspects of AI: Nigeria, Indonesia, India and Barbados); “Astro Boy” (Japan); and “the CERN of AI” (Canada, with its open-source concept of an international network of data generators).

What comes through this discussion is the range of ways that power, trust and values are being played out across societies, often driven by different regional philosophical traditions. For example, the influence of Taoist, Confucian and Communist thought in China, and the social challenges of an ageing population in Japan, mean that these countries have fundamentally different attitudes from the West on issues such as privacy and the relationship between humans and machines. The authors rightly point out that this philosophical diversity poses significant challenges for anyone seeking to formulate a universal approach to regulating the use of AI.

In their analysis of “the race for global AI influence” (Chapter 5), the authors argue that the battle for control of data and AI is tantamount to a new arms race with the potential to reshape the political world order (Putin: the country that leads on AI “will become the ruler of the world”, p.151), and they discuss each of the main protagonists in turn: the US, China, Russia and the EU.

“Philosophies of regulation, influence and social and economic participation will conflict – as they should. Those clashes and their outcomes will coalesce around issues of values, trust and power” (pp.163-4)

The authors close (Chapter 8) by discussing possible ways in which the community of nations might establish “a global governance institution with a mutually accepted verification and enforcement capacity” (p.233). In so doing they discuss the lessons learned from other recent multinational treaties and governance models, such as the Montreal Protocol to reduce chlorofluorocarbons, the Paris Agreement on climate change, the Organisation for the Prohibition of Chemical Weapons (OPCW), and the UN Global Compact. In light of these, they argue instead for a “new governance” model which draws its legitimacy from “its inclusion and the robustness of the norms and standards it disseminates”, but which is aligned “with existing pillars of global governance such as the United Nations or the World Trade Organization” (p.249).

The authors conclude that “the Machine can make us better humans” (p.253):

“Combining the unique contributions of these sensing, feeling and thinking beings we call human with the sheer cognitive power of the artificially intelligent machine will create a symbio-intelligent partnership with the potential to lift us and the world to new heights.” (p.257)

Surprisingly for a work of this quality and nature, the book has no index and only limited referencing.

John
122 reviews, 1 follower
March 26, 2019
Reading it was like crawling through the sand in the Sahara - dry, repetitive and monotonous.
Other than the three C's, I personally did not learn anything from the multitude of repetitive examples throughout the entire book.
Alan Newton
186 reviews, 6 followers
February 22, 2020
This was a very good book which I started over a year ago but ended up putting aside and never getting back to, as I tend to read multiple books at the same time. In any case, I finally finished it, and my delay in doing so is certainly no reflection on the content. Indeed, as one of my business school professors co-wrote the book and I may see him for dinner in San Francisco in a fortnight, I felt it necessary to complete my reading before then! 😂😂

Throughout, my mind was racing with so many different thoughts, which, I believe, is testament to the writing style and approach of the authors from the beginning. The future picture of AI use and the fictional examples provided really help paint the picture and lead the reader to consider deeply philosophical and ethical questions throughout the book. The fictional elements are interspersed with real-world examples, and powerful personal examples from the authors, that will resonate with the reader and stir emotion. The pros and cons of AI usage are weighed in a balanced way that will leave you pondering much about the future.

We are living through a fifth seismic shift, moving from purely linear computing systems to more cognitive, human-like technologies. This shift is evolving at such a fast pace that we can’t quite keep up, which means the regulation, legislation and protective controls that need to be in place can’t keep up either.

It’s interesting ... I’ve read different authors and commentators (in this case Admiral James G. Stavridis) suggest that the commercial and political purveyors (or actors, if you will) of AI and ML tech have largely honourable intentions and want to make the world a better place. “To a great extent, we do get a better world. But even so, far-reaching ripple effects and unintended consequences make these advanced cognitive technologies both a panacea and a Pandora’s box.” (the Admiral)

The authors state that AI will force us to consider what it means to be intelligent, human, and autonomous (of course, these are assumptions grounded in philosophy, and there’s an interesting sidebar exploration here around the philosophy of mind and just such assumptions). After all, what is intelligence? Do we cease to be intelligent if we create something that dwarfs our own intelligence? What, indeed, is it to be human? Is this a sentient question? And are we really autonomous? This question drives at concepts like free will and determinism, not to mention consciousness. What is it to be conscious, and does that mean our creations (AI) have consciousness if they have the ability to think for themselves? More questions than answers, maybe, but we can try exploring them in greater depth using what we’ve learned.

“These issues will test our local and global conception of values, trust and power.”

The title refers to the biblical King Solomon, “an archetype of wealth and ethics-based wisdom but also a flawed leader.” This very aptly and cleverly encapsulates the challenge we (as a species) face with AI. At the 2018 Future Decoded conference, Microsoft U.K.’s CEO was overt about the dangers posed by AI and the need to navigate the future carefully.

Medical screening for cancer example: the authors paint a future picture of AI use that aptly captures the paradox between the life-saving and life-prolonging actions enabled by detailed information, and the drawbacks of that information in respect of how it is used (helping with health, love, career and financial decisions), who has access, and what restrictions or implications it may have for one’s lifestyle choices. Again, the notion of free will is apt here, as the example suggests that making lifestyle choices that increase the possibility or probability of disease would raise the cost of insurance premiums, or could even invalidate cover completely. It’s a clever story that paints a dark and light (yin and yang) future vision of the use of AI. It almost feels like reading the script for an episode of Black Mirror.
Nick
Author of 5 books, 11 followers
February 1, 2023
The first chapter starts out with a short science fiction story, and then the book transitions completely into a loosely stitched tangent that seems to weave between technology, culture and society without doing any of them well. It raises a number of topics and then dismisses them without coming to any tangible resolution. It is also strangely and vocally supportive of Chinese surveillance practices, which it didn't need to discuss at all.

In general the book just feels bad to read and I didn't come away having learned anything.
Ken Hamner
371 reviews, 8 followers
March 6, 2019
Well written, thought provoking, and timely. Highly recommended.
Eddie Choo
93 reviews, 6 followers
August 5, 2019
A good guide on AI policy

This book is a good guide to what AI policy might look like, at least in the preliminaries. A book to reread.
Lime Street Labrador
200 reviews, 6 followers
March 16, 2023
A bunch of loosely-connected talks about AI, wandering from potentials and risks to policy and implications. Bored me quickly.
Nasir Ali
122 reviews, 3 followers
December 22, 2019
From machine learning to thinking and teaching machines, and how humans and machines will complement each other, the book provides a good synopsis of many scenarios covering the social, legal and economic consequences.
