Weaving a dramatic narrative that explains how breakdowns in these systems result in disasters ranging from the chain-reaction crash of the Air France Concorde to the meltdown at the Chernobyl Nuclear Power Station, Chiles vividly demonstrates how the battle between man and machine may be escalating beyond manageable limits -- and why we all have a stake in its outcome. Included in this edition is a special introduction providing a behind-the-scenes look at the World Trade Center catastrophe. Combining firsthand accounts of employees' escapes with an in-depth look at the structural reasons behind the towers' collapse, Chiles addresses the question: Were the towers "two tall heroes" or structures with a fatal flaw?
“Vast machines running out of control might sound like something beyond human ken, but one way to keep matters in perspective is to think of a frontier. Americans know all about frontiers, having lost our western one more than a hundred years ago. No new geographic frontiers have opened for us since then, but a different kind of frontier is well under way and was opening even as the West was being won. It is the ‘machine frontier,’ and it is still unconquered. A machine frontier has, in a virtual sort of way, the same characteristics as a geographic frontier: dangers and rewards, bounded at the edges by unknown territory. So if we wanted to plot a movie, prototype testers would be our explorers; facility operators would be the cowboys; and entrepreneurs eager to dominate the new markets would serve as our cattle barons. While some technological dangers can be charted, other risks of a new technology cannot be known with certainty or in some cases even imagined until much later…” - James R. Chiles, Inviting Disaster: Lessons from the Edge of Technology
No disaster I've ever heard about bothers me as much as Air France Flight 447. On June 1, 2009, it was cruising high above the Atlantic Ocean, traveling from Rio de Janeiro to Paris. As the plane entered the clouds of a large thunderstorm system, its pitot tubes, which measure airspeed, appear to have become blocked by ice crystals. The blockage caused the Airbus A330’s autopilot to disengage.
Reacting instinctively, the co-pilot grabbed the controls and did exactly what I would have done in his place: Yank back on the stick. When you are in a plane, after all, and you sense danger, the natural reaction is to get as far from the ground as possible. To gain altitude, and fast.
Of course, I am not a pilot. And you really shouldn’t yank back the stick of an airplane.
The Airbus A330 began to climb dramatically. Losing speed and with the nose up, the plane – which aside from the airspeed indicator was in perfectly normal operating condition – began to stall. Chimes and a warning voice filled the cockpit, captured on a cockpit voice recorder later pulled miraculously from the sea. “Stall!” the warning voice said. “Stall!” This was repeated seventy-five times.
To avoid a stall, you need to push the nose down and gain speed. Even I, who am not a pilot and have only a sub-rudimentary understanding of physics, know that much. The copilot, however, in the throes of some silent panic, did not put the nose down. He wanted the plane to climb away from a danger that did not exist.
The plane fell like a rock, every other system still functioning. The captain – who had been on break – figured out what his copilot was doing only in the final seconds, with the plane nearing 2,000 feet and no time left to regain speed. The copilot likely never understood that he had killed them all. His last words were a question: “But what is happening?”
Air France Flight 447 crashed some eight years after the publication of James R. Chiles’ Inviting Disaster. Had it occurred earlier, or had Chiles published his book later, it surely would have been studied in these pages. It is a perfect example of so many of the topics that he discusses: the difficulty of understanding unseen systems; the erosion of skill that comes with overreliance on technology; the complacency that attends our belief in redundancies and fail-safes; and finally, the horrendous consequences that occur when the center no longer holds.
Inviting Disaster is a survey of breakdowns that occur at the borderlands of the technological frontier, where the complexities of our machines sometimes surpass our ability to control them. Chiles undertakes this survey using the case-study method, analyzing dozens of disasters and near-disasters to tease out their lessons. Some, such as the sinking of the Thresher and the explosion of the Challenger, are infamous. Others are familiar, if a bit hazy in the particulars (the discussion of Three Mile Island, for instance, was quite enlightening). And many have been mostly lost to history (such as the Ocean Ranger drilling rig and the British submarine Thetis).
Chiles organizes these mishaps into thematic chapters, each illustrating a particular point. For example, his account of Three Mile Island comes in the chapter on hidden systems, demonstrating what can happen when a piece of technology becomes too big to comprehend with a single pair of eyes (or even many pairs of eyes). Here, Chiles compares a 19th-century engineer watching the pressure of his boiler with the reactor operators at TMI-2, who were unable to eyeball the maze of pipework that cooled the reactor, and who therefore believed that the water level was getting too high when, in fact, just the opposite was true. Other chapters are devoted to the dangers of rushing technology to meet a deadline (e.g., the Challenger); the trouble that comes from failing to perform proper tests; and the ever-present human element (such as Eastern Air Lines Flight 401, which gradually lost altitude and crashed into the Everglades while its tunnel-focused crew attempted to confirm whether the landing gear was down).
Thankfully, it’s not all mayhem, death, and the rending of garments. Chiles also covers near-misses, such as Apollo 13, and looks at the precautions and protocols followed by high-risk industries (such as explosive manufacturers and helicopter linemen) that manage exceptional safety records.
Overall, Chiles does a decent job in narrating the myriad crashes, explosions, and breakdowns. He is focused on forensic examinations, meaning that you will (mostly) have to look elsewhere for the human side of the story. While the overall outline provided by the themed chapters is helpful, Chiles tends to needlessly confuse things by jumping manically from one topic to the next. Often, he will start to make a point and – just as he’s almost there – start talking about something else. I’m still waiting for him to finish his story about the Thetis. At times, Chiles is like an overexcited kid, so eager to tell you everything he’s learned that he ends up creating a word salad.
I am not going to go so far as to say that Inviting Disaster is an important book, though it clearly wants to be thought of that way. It also did not entirely convince me that we are drifting into a machine-fueled apocalypse. Indeed, I tend to think that future technologies such as airplane autopilots and self-driving cars are likely a good thing. I’m more worried about how late-stage capitalism reacts to a significant portion of its workforce being made obsolete by robots. (Robotics is not mentioned at all.)
That said, Inviting Disaster did get me thinking a bit. While reading this on a commuter train, I couldn’t help but look around and attempt to spot all the stress points, the danger zones, the things that might snap or crack or fray or detach, leading to the dreaded cascading failure.
No one ever went broke warning of an impending disaster. Eventually, it happens, which is just in the nature of probabilities. Most of the time, however, as Chiles acknowledges, things do not go wrong.
With that said, Chiles is not wrong to sound a note of caution. I'd go so far as to say that parts of Inviting Disaster read like prophecy. His discussion of Aero Peru 603’s “static port sensor tube” problems, which sent it into the Pacific in 1996, sounds a lot like Air France 447’s pitot tube malfunction. His offhand mention of the space shuttle Columbia’s heat tile issue in 1981 (“the critical tiles on the bottom of the shuttle stayed on, so the craft was spared having a hole burned through its aluminum skin upon reentry”) chillingly prefigures the craft’s destruction in 2003, along with all seven astronauts onboard.
It is easy to mock Cassandra, while forgetting that Cassandra was right. It is easy to forget that cutting-edge technology can really cut. Sometimes, we forget right up to the point where a Boeing 737 MAX falls twice from the clouds, or a GPS- and radar-equipped container ship like the El Faro sails right into a Category 3 hurricane. At that point, it is too late to start paying attention.
Accidents and disasters are often caused by simple, random events or by a change in a normal sequence of actions, any one of which could affect the outcome. Had the path of the Air France Concorde been slightly different, or had the piece of titanium not fallen off a DC-10, or had the plane left a tad earlier or later, or had a sealant been used in the fuel tanks, or had any one of a number of other seemingly unimportant events gone differently, the plane's tire would not have struck the titanium, a piece of tire would not have opened a substantial leak in the plane's fuel tank, and the passengers and crew would still be alive today.
Another related book worth reading is Normal Accidents by Charles Perrow. Perrow had studied several major accidents and concluded that some forms of technology are more open to chains of failure and that adding more safety systems can actually lead to an increased likelihood of an accident because of the increase in complexity. The systems become so "tightly coupled" that a failure in any part of the system almost inevitably leads to a chain of unmanageable and uncontrollable events.
Chiles goes one step further than Perrow and makes recommendations as to how training and people can prevent accidents by breaking one of the links in the chain. It requires that individuals throughout the organization be empowered to call decisions into question or to halt actions they believe to be of concern. He observed several operations, such as air traffic control centers and aircraft carriers (not to mention helicopter repair of high-tension lines!), which have impressive safety records despite a high level of coupling and danger.
It's a fascinating book that examines why disasters happened and what lessons can be gleaned from those tragedies. For example, the explosion of the steamboat Sultana killed hundreds at a time (1865) when Americans were seemingly inured to disasters of all kinds ("between 1816 and 1848, 233 explosions on American steamboats had killed more than two thousand people"). Steamboats were constantly being destroyed by boiler explosions, and, despite industry objections, the federal government had issued all sorts of controls and inspections. In the case of the Sultana, the captain was in a hurry: he wanted to pack as many prisoners (released from Andersonville prison) on board as possible, being paid [$:] per soldier and [$:] per officer. The ship was badly overloaded, which contributed to the boiler explosion because, when the ship turned, its top-heaviness caused the water level in the boiler to shift beyond safe levels. In addition, rather than have a crack in one boiler properly fixed, the captain had insisted on a patch that normally would have been fine, except that it was slightly thinner than the boilerplate on the rest of the boiler. Even that would have been OK, except that no one thought to change the setting on the emergency blowout valve to reflect the thinner metal of the repair. So a series of decisions that individually would have been unimportant combined into a sequence that killed far more, on a percentage basis, than the 9/11 attacks.
It is possible to conduct accident-free operations, but Chiles says that it means changing the normal operational culture and mindset. For example, challenging authority becomes crucial in preventing aircraft crashes and in other jobs where people have to work as a team. The airlines have recognized this: the old notion of an unquestionable pilot in command has given way to the "pilot flying," with each pilot required to question the judgment of the other if he or she thinks the pilot flying has made an unsafe move or decision.
I learned about the extraordinary safety record of companies that use helicopters to make repairs on high-tension electrical lines while the current is still on. That would certainly loosen my sphincter. The pilot hovers the craft within feet of the conductive lines while the electrician leans out on a platform, hooks a device to the line that brings the craft and everyone on it up to as much as 200,000 volts (they even have to wear conductive clothing), and makes repairs to the line. They have never had an accident in twenty-five years of doing this. Safety is paramount, they anticipate the unexpected, and everyone is an equal partner in the team, expected to point out conditions that might be unsafe. "A good system, and operators with good 'crew resource management' skills, can tolerate mistakes and malfunctions amazingly well. Some call it luck, but it's really a matter of resilience and redundancy." Failing to have this resiliency can have tragic consequences. On December 29, 1972, an L-1011 crashed on approach to Miami because a light bulb indicating whether the landing gear was down had burned out and the entire four-man crew became involved in changing the bulb. They did not notice that someone had bumped the controls, releasing the autopilot's altitude hold that was supposed to keep them at two thousand feet, and the air traffic controller who noticed the deviation in altitude did not yell at them to pull up, not wanting to annoy the crew, but simply asked if everything was coming along. The plane crashed into the Everglades, killing 101 of the 176 people aboard.
Another key element is that people must be clear in speaking and writing, "even if doing so necessitates asking people to repeat what you told them... We know that people will try to avoid making trouble, particularly any trouble visible to outsiders, even though they are convinced that catastrophe is near." Chiles cites numerous instances where committed individuals went outside normal channels to get additional perspectives or assistance and prevented catastrophe. Those individuals always knew the leadership would back up their independent decisions even if they were wrong.
I have just scratched the surface. This book should be recommended reading for everyone.
Chiles offers a history of many disasters, accidents, and misfortunes, and contends that the increasing complexity of machines in the modern age has raised the likelihood of disasters happening. He provides blow-by-blow descriptions of how the many disasters happened and exactly what went wrong. He notes that much misery might have been avoided by a true focus on safety über alles, but in most instances other factors were at play: pushing to meet deadlines results in cut corners, overtired workers, and simple human screw-ups. What might have been unpleasant a hundred years ago now has the potential for mass misery. When the masters of machines were housed in the same space as their charges, the engineers developed a feel for what particular sounds might indicate, revealing problems that the gauges were not showing. Today, remote controllers who are physically removed from their charges do not have the opportunity to exercise that hands-on touch-and-feel. It bears mentioning that Chiles does not paint a completely black picture. He cites instances in which corporations actually did the right thing, at great cost. He also notes that some have established mechanisms for rewarding employees who speak up.
I found that while the details of the events noted here were interesting, I had a hard time focusing. It became a bit too dry a recitation of facts. I suppose the book has value as a warning to be ever-vigilant, but it did not move me much.
The author does a good job mixing older disasters with newer ones. He covers a lot of ground and a lot of disasters. The insights are powerful, as are the stories told around each disaster. The focus is often the integration of man and machine, and the flaws inherent in it.
I found this book after watching Seconds From Disaster on the National Geographic channel due to my own interest in preventing catastrophe. When I was in the Special Forces (the Green Berets) a key component of our planning was anticipating all the possible ways things could go wrong and planning as best we could to avoid that catastrophe, and preparing as well as possible to survive it if it did occur.
This led me to start a new series of books: It Doesn't Just Happen: The Gift of Failure. While the author of this book goes into great detail on a wide spectrum of catastrophes, I focus on only seven in each book, applying the Rule of Seven and looking at the six cascade events leading up to the final catastrophe. As this author notes, there is always an element of human error in every catastrophe. Thus, every catastrophe can be avoided if we find the cascade event (often more than one) involving human error and eliminate it. I go beyond engineering disasters, though, to such events as the Donner Party and social disintegration leading to disaster. It's an area everyone should learn about, whether through this book or a show such as Seconds From Disaster, because we are all a lot closer to such an event than we think.
On April 27, 1865, as the Civil War was winding down, the steamboat Sultana was carrying released Union POWs up the Mississippi River. Just north of Memphis, its boilers exploded, killing some people with shrapnel or boiling water, starting fires that burned or suffocated others, and causing still others to jump into cold water, where they drowned. According to different estimates, 1,100 to 1,700 people died, making it the worst maritime disaster in US history. Why did it explode? The boat was grossly overcrowded: even though its legal passenger capacity was 376, it was carrying approximately 2,300 POWs, making the boat top-heavy and causing it to list during turns beyond the angle it was designed for. A few days before, one of the boilers had developed a crack; the captain decided that repairing it properly would take too long, so he had a patch riveted to the boiler. The patch was thinner than the regular boiler skin, but the emergency steam release valve wasn't adjusted accordingly. When the first boiler exploded, the shrapnel hit the others, and two of the other three boilers exploded too; there were no bulkheads between them to prevent this. The disaster did not have just one cause; it had a cascade of multiple causes, and no measures had been taken to prevent such a cascade.
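To see why the unadjusted relief valve mattered so much, here is a rough back-of-the-envelope sketch of my own (not from the book), using the standard thin-walled hoop-stress relation; the function name and every number in it are invented purely for illustration:

```python
# Illustrative sketch (not from the book): why a thinner boiler patch calls for a
# lower safety-valve setting. Thin-walled cylinder hoop stress gives a pressure
# limit of roughly 2 * allowable_stress * wall_thickness / diameter.
# All values below are hypothetical.

def max_safe_pressure_psi(allowable_stress_psi, wall_thickness_in, diameter_in):
    """Thin-wall estimate of the pressure a cylindrical shell can safely hold."""
    return 2.0 * allowable_stress_psi * wall_thickness_in / diameter_in

ALLOWABLE_STRESS = 10_000    # psi, hypothetical allowable stress for 1860s boilerplate
DIAMETER = 46                # inches, hypothetical boiler shell diameter
ORIGINAL_THICKNESS = 0.285   # inches, hypothetical original plate
PATCH_THICKNESS = 0.25       # inches, hypothetical thinner repair patch

original_limit = max_safe_pressure_psi(ALLOWABLE_STRESS, ORIGINAL_THICKNESS, DIAMETER)
patched_limit = max_safe_pressure_psi(ALLOWABLE_STRESS, PATCH_THICKNESS, DIAMETER)

# The relief valve is still set against the original limit, leaving a band of
# pressures the valve will tolerate but the thinner patch cannot.
print(f"Safe limit with original plate: {original_limit:.0f} psi")
print(f"Safe limit with thinner patch:  {patched_limit:.0f} psi")
print(f"Margin left unprotected if the valve is not reset: {original_limit - patched_limit:.0f} psi")
```

Under these made-up numbers the patched section gives out roughly 15 psi before the valve would ever lift: exactly the kind of quiet gap between "repaired" and "safe" that the Sultana's sequence of small decisions opened up.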
This book is a catalog of technological disasters: the explosion of the space shuttle Challenger, the crash of the Air France Concorde, the Bhopal disaster, and more. Systems that are vastly more complicated than a 19th-century steamboat fail just as spectacularly. Chiles's solution for the future is: keep in mind that a disaster can happen; put in features that prevent it from cascading into a greater disaster, so that, for example, an explosion at a chemical plant does not send debris flying around and cause more explosions; and give the people who might see the telltale signs of the coming disaster the authority to prevent it.
Disaster porn at its finest. This book is certainly not an in-depth treatment of the subject, but it does provide an interesting perspective on the commonalities inherent in the failure of complex machinery.
This excellent book was recommended in the appendix of a book on project management that I am reading, Making Things Happen.
This book recounts disasters and near misses from what the author calls "the machine frontier," that scary space where machines deviate from their intended function, either through malfunction or bad design. The Kindle version starts with a preface that describes step by step how the World Trade Center towers collapsed on 9/11. Then the book opens with a detailed description of the sinking of an offshore drilling platform, the Challenger disaster, and the near meltdown at Three Mile Island reactor 2. There is a handy list of the disasters covered in an appendix, which also includes Apollo 1, Apollo 13, dams bursting, bridges collapsing, cargo doors blowing off of DC-10s, the Concorde crashing, and the R-101 airship falling out of the sky.
Each disaster is viewed in minute detail to show exactly how a (usually minor) malfunction or design flaw cascades through a complex system aided by further mechanical malfunctions or human misjudgment until terrible forces are unleashed, usually resulting in a lot of dead people.
It is very interesting the way some disasters are juxtaposed against each other to show that there is nothing unique about them. People make the same mistakes over and over. For example, the stories of the fatal launches of the British airship R-101 (1930) and the Challenger space shuttle (1986) are told simultaneously in a way that makes it clear nothing was learned over the intervening 56 years. The same issues -- pressure to launch, ignoring technical advice, strongly worded but ultimately useless warnings, a culture of isolating and ignoring risks -- are present in both.
This book can be read on several levels. At the most basic level, it is an interesting read about things that go boom. It's a little like slowing down to gawk at an accident, and there is enough detail provided to get a good, long look. On another level, this book is full of lessons about leadership and project management. These lessons aren't explicitly stated until the end of the book (which tends to drag on just a little compared to the first part), but it's clear that each of these disasters was completely avoidable in hindsight.
There are even lessons in this book for anyone involved in software development, even though when software fails the results aren't quite as spectacular. Highly recommended.
Sometimes I start a book and enjoy it so much I can tell on instinct it's about to become a part of my personality forever. This is probably the strangest book to ever enter that exalted category of mine, but boy is it a stunning example. I read this over the course of several days and, like a little kid, couldn't help but occasionally raise my head and describe with awe to whoever was nearest something I'd just finished absorbing from its pages — every aspect was just beyond fascinating to me. I loved the intersections of human psychology with complex mechanical systems, I loved the incorporation of near-misses and examples of when emergencies went right, I loved the connections Chiles built between separate incidents and the accessible way he explained it all. He was fair and compassionate in his recountings of tragedies that can inspire blind rage or grief, and utterly respectful of often-neglected members of these systems like line workers. Gosh, I could list things I like about it for days — the 9/11 analysis and recounting of flight AA96 come to mind as immediate standout chapters, but I found something fascinating about every one. Even simple things, like his use of 'she' pronouns when describing theoretical architects or engineers (in 2001!), made a positive impression on me.
I'm sure not everyone would love this as much as I did — some parts got repetitive, and the chaining of one incident anecdote to the next with little transition occasionally required me to step back a page and realign myself — but I'll certainly be recommending this to all the nonfiction readers in my life with an interest in disaster investigation or engineering in general. Now on to read every article this guy's ever written...
You might not want to do what I did and read this on an airline flight. In this collection of technological disaster scenarios, Chiles digs down to explore how bad design, unintended consequences, and technology and people set at cross-purposes resulted in disaster. I've read a few technical journal articles on incidents mentioned in the book, and while it's clear it is adapted from a TV series, I wish there were a bit more depth - some of the incidents are described very summarily. That being said, it's well worth it. As long as you're not in mid-air.
An up close and human look at some infamous foul ups
If you want to know why the Concorde crashed or how things got so fouled up at Chernobyl or what went wrong at Three Mile Island, this very readable book is a good place to start. Chiles gives us diagrams, step-by-step chronologies, and a very human narrative to illuminate these and scores of other technological disasters in a way that makes it excruciatingly clear that most of them could have been prevented.
What these disasters have in common is human error, of course, but Chiles reveals that there were also foreshadowings and warnings of the horrors to come in the form of cracks, sagging roofs, parts that didn't quite fit, maintenance shortcuts taken, capacity limits reached, etc., that should have tipped off those in the know that something terrible was about to happen. Additionally, virtually all of the disasters happened because more than one thing went wrong.
Among the horror stories told in detail are:
The harrowing tale of the sinking of the drill rig Ocean Ranger in a North Atlantic gale in 1982, a disaster caused in part because somebody forgot to close the shutters on portlight windows; The Challenger space shuttle blow-up, which Chiles compares with the crash of the British hydrogen-filled dirigible R.101 in 1930. Both were "megaprojects born out of great national aspirations" and both went forward "despite specific, written warnings of danger." (p. 67);
The Hubble Space Telescope fiasco, in which the primary mirror was incorrectly ground, thereby partially "blinding" the telescope, a multi-billion-dollar error that could have been prevented with just a little testing. In this chapter (subtitled: "Testing is Such a Bother") Chiles shows how disasters happen because proper tests are simply not performed;
An out-of-control police van that killed parade watchers in Minneapolis in 1998, when an off-duty police officer not completely in the driver's seat inexplicably gunned the engine instead of hitting the brakes. This accident was in part caused by an alteration to "Circuit 511," which controls both the brake lights and (unbeknownst to the mechanics) an electric shift lock on the vehicle. Chiles notes that "The odds of pedal error go up when drivers are elderly, and also when drivers turn around in the seat to back their cars up." (p. 242);
The toxic gas leak at the Union Carbide plant in Bhopal, India, in 1984--"the worst chemical disaster of all time"--that killed thousands of people. Chiles calls this a case of "Robbing The Pillar," a reference to the practice in coal mines of mining out the coal pillars that hold up the roof of the mine.
This is a book for the engineer in your soul, a treatise for the worry-wart on your shoulder, a recounting of responsibility for the accountant in your heart, and cautionary tales for the fear monger in the pit of your stomach. Chiles is gentle in assigning blame, but he does indeed name names and point fingers. He also gives us a prescription for preventing future disasters. In addition to the need to perform regular maintenance, follow safety procedures to the letter, and so on, he suggests how we might prevent "cognitive lock," that blinding sense we've all experienced, which insists that THIS is the problem and not something else, or that such and such is what needs to be done, when in reality something else will work. He also advises that near misses ought to be reported and not swept under the rug (p. 202), that "redline running" is dangerous, and that, because under pressure we are sometimes apt to do the wrong thing, the procedures to follow during a crisis should be spelled out in advance.
--Dennis Littrell, author of “The World Is Not as We Think It Is”
In an age of advanced technology, bugs and glitches have become a part of modern existence. We entrust complex machines like airplanes with our lives without so much as batting an eye. Nonetheless, especially when a technology is being pioneered or developed, there's no way to escape mistakes... and those mistakes are sometimes fatal and tragic. Technology writer James Chiles describes himself as obsessed with tracing the history of technological mistakes. In this book, he shares disaster stories he's collected over the years, grouped by what human act triggered the malady.
This book carves out its own fairly original genre by maintaining a perspective that most technologists seek to avoid. Most of us like to get things working. Thinking about how things might blow up can come unnaturally when all we're thinking about is how to succeed. This book can retrain us to take a defensive posture at times by showing how humans are directly responsible for disastrous machine malfunctions.
Though highly schooled in the sciences, I'll admit this book caused me to slow down my reading pace. I often had to reread a paragraph because its scientific contents were so carefully described. While that's a credit to Chiles' intellectual mastery, it also limits the potential audience. If you are not interested in analyzing a malfunction from every angle, this book is probably not for you.
Fortunately, it is well-suited for engineers seeking to refine the quality of their work. For most technologists I know, avoiding a fatal disaster due to their work is high on the list of possible horror stories to eschew. This book's examples are grounded firmly in the machine world, and stories from information technology, now gaining in popularity, are almost completely avoided. Some of the stories harken back to the 1800s yet still convey the limits of human nature.
This book is almost 25 years old, and I'd be interested in a second edition with more recent stories. I'm sure computers and code would play an increasing role, and I'm curious what lessons might be learned. The world's complexity is only growing as the years proceed, and managing complex systems is a valuable professional skill. This book can refine readers' ability to identify escape hatches and foresee problems. That's why I chose to read it. Although it can also convey a sense of anxious worry, a measured dose of paranoia can drive increased performance in a work environment where lives are critically at stake.
To keep our expectations in proportion to reality, Chiles says, we need books about system failures causing disaster just as much as heroes preventing disaster. He gives us such a book. The common threads in the chapters are psychological causes which blinded people to their complex systems (such as an airplane, oil rig, or nuclear power plant) breaking down - or held them back from taking necessary action - until it was too late.
The stories are riveting if short. The takeaway lessons are mostly ones I'd already learned from my reading of air crash investigations, but they're good lessons: things such as reporting problems rather than keeping them quiet, stepping back to check whether your model holds up to reality, and not disabling safety equipment. But the overall theme of the book is to enlarge your imagination to keep in mind the possibility of disaster.
If you think you need this lesson, or if my description looks interesting, then go ahead and read this. Otherwise, there isn't any deeper lesson here.
I didn't read every single page of this one. I sort of skimmed through, looking for stories of technological disasters that were particularly interesting to me for whatever reason. I probably read two thirds of it, but I really enjoyed those two thirds.
We live in a world cluttered with engineering marvels that most of us don't understand, and even those who do understand them may not understand them completely. Jet airplanes and nuclear reactors are simply too complicated. In this book James R. Chiles explores how our ignorance, carelessness, or lack of training can, and does, lead to disaster. The book is packed with both technological detail, related in clear language intended for the average reader, and human drama. Death and destruction all over the place.
Sobering stuff, would love an update for 21st century
The dangers of life on the "machine frontier" are growing as the technological complexity of crucial systems increases. I would love to see an update applying the author's lessons to things like AI, the disastrous COVID-19 response, and the dangerous rush to green energy before it is ready. Because modern society as a whole is the ultimate complex system, and we are busy undermining it, thinking it's too big to fail! And on a sentimental note, I wish Mr. Chiles had included the Wreck of the Edmund Fitzgerald!
Look. This book is about technology and it's 20 years old. But it's really about humans and our responsibilities when it comes to complex technology. So in that way it is still very relevant. And also, it's not like plane crashes and sudden unintentional acceleration aren't still very much in the news. But I imagine if this book was written today it would also include data breaches, AI, and the like. The technology has evolved, and will continue to do so, but the lessons are relevant.
Thoroughly enjoyed the various stories of disaster or avoided disaster. Particularly appreciated the way the author groups the stories around themes like "Blind Spots/Human Factor" so that the reader can extract important lessons applicable to their own domain.
While I do not work with actually dangerous machines, I do develop cloud software systems, where lessons on how to detect and avoid potential disaster are extremely relevant, even if not necessarily life-threatening.
This type of book is my catnip, so I wasn't surprised that I really enjoyed it. I appreciated that it covered some lesser-known incidents and near-misses that I hadn't heard of before. One lesson I took away from this book is to never ride a new submarine on its sea trials, especially if the project is under pressure for being late. You'll end up at the bottom of the sea, basically guaranteed.
Well-written with good anecdotes, details, and insights about lots of disasters and near misses. Has its share of horrifying episodes, of course. Don't read it on a flight. Or a train. Or a boat. Good takeaways for system design, though.
Very interesting book. It shows how a combination of small things can lead to a catastrophe and the importance of a safety culture within an organization. However, the book was published in 2001; the introduction talks about 9/11 but, obviously, recent catastrophes are not covered.
This is another really interesting book on things we don't like to think about. In this case, it's engineering disasters instead of big weather or earth science - and trust me, if you like Modern Marvels or Seconds From Disaster, you will enjoy this book.
Unlike Megadisasters, Chiles takes the engineering instead of the statistical view of horrible things that happen to modern humans. He argues that there is a system to every disaster, and that certain things happen in common more often than not. The goal is to recognize these patterns and use them to prevent future issues.
Some of these disasters were completely unknown to me - I'd heard of Piper Alpha, for instance, but Chiles instead talks about the Ocean Ranger. I had seen this explosion at a Nevada rocket fuel plant (which, holy freaking cow! Watch that!), but I had never heard of the explosions of fertilizer on board ships which completely obliterated part of a port, and why they happened. Of course I had heard of Three Mile Island, but no one was talking about it by the time I got old enough to ask about what it was. Chernobyl was the power plant disaster of choice. Chiles discusses TMI and makes the point that we'll likely never know exactly what happened because the case got so tied up in the court system.
Boiler explosions? They happen. Nitroglycerine? It's here. Air crashes? Covered. And Chiles wants to make sure that we understand the way that our own brains contribute to the problems, as they all too frequently do. The fact is that any human-designed system is going to have human-introduced weaknesses, and sometimes we learn the hard way what those are. We can, and do, learn - the chapter on nitroglycerine is incredibly illuminating - but the cost can be high, and is often made higher by our own limitations.
When my fiance was working in air traffic, he talked about the "Swiss cheese" model of disaster avoidance - the idea that there will always be holes in a system, but you want to keep them from lining up. This is a book about instances where the hole went straight through the block of cheese, and how we can keep it from happening again.
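As a toy illustration of that lining-up idea (my own sketch, not from the book or from actual air-traffic practice, with layer names and probabilities invented for the example), treating each layer of defense as an independent slice shows both why layered systems are so safe and why losing a single slice matters:

```python
# Toy sketch of the "Swiss cheese" idea (my own illustration, not from the book):
# each layer of defense has some small chance of a "hole" (failure) on a given
# operation; an accident requires the holes in every layer to line up at once.
# Layer names and probabilities are invented purely for illustration.

from math import prod

layer_hole_probability = {
    "design review":     0.05,
    "maintenance check": 0.02,
    "crew procedures":   0.03,
    "warning systems":   0.01,
}

# Assuming independent layers, the chance that every hole lines up is the
# product of the individual probabilities -- tiny, but never zero.
p_all_line_up = prod(layer_hole_probability.values())
print(f"Chance all holes line up: {p_all_line_up:.8f}")

# Remove one layer (say, the crew stops cross-checking) and the risk jumps.
without_crew = {k: v for k, v in layer_hole_probability.items() if k != "crew procedures"}
p_without_crew = prod(without_crew.values())
print(f"Without crew cross-checks: {p_without_crew:.8f} "
      f"({p_without_crew / p_all_line_up:.0f}x higher)")
```

The independence assumption is the optimistic part; Perrow's "tight coupling" argument, mentioned in an earlier review, is precisely that in complex systems the holes stop being independent and start dragging one another into line.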
This book is part of a sequence for me that started with the book "Build your Own Spaceship". The link - not tenuous, but not the major theme of this book - is the Apollo 1 spacecraft fire in 1967, which killed three astronauts: "Gus" Grissom, Ed White, and Roger Chaffee. These men died on the ground in the Apollo capsule that was still being developed. Had this not happened (and had no further similar accidents occurred), it is possible that man would have trod on the Moon before 1969 (or earlier that year).
There is also a section on Apollo 13 (almost obligatory in any book dealing with NASA or technological machine disasters.)
The bulk of the book is a series (often intertwined) of disasters and near-misses (and interventions) that have happened throughout the industrial era. Mostly, of course, in the near past (last 100 years), but with some examples stretching back into the 19th century.
The author uses these examples to highlight, chapter by chapter, how the effects of complexity, "invisible" states and effects, and the limits of human ability contribute to or cause major disasters. There are also examples of how these tendencies and shortcomings can be (or at least seem to have been) overcome.
It's a very interesting take on things. Some of the examples are very well known, others less so, but all are interesting. It may not actually teach a lesson to anyone who will go out and design something, but it makes you wish the book itself were longer (or newer - several recent disasters are not included, as it was published in 2001).
It's a good, well conceived and executed book. Little tidbits out of the author's life help bring the book closer to home, as well. You should get a copy and read it.
Good read for anyone involved with anything more complex than a basic calculator. Sometimes the author does seem to go off on unnecessary tangents, but overall this raised my awareness of the dangers of complacency, not just in chemical plants where I work, but on the road and in planes. I drove more safely the week I read this and was more aware of airplane safety. I loved his suggestion of counting the steps between your seat and the nearest exit whenever you get on a plane, since in an emergency everyone gets disorientated.
It's good to have reminders like this book showing how quickly things can go wrong and how much can be done to avoid disasters by speaking up. There are always warnings and most disasters start from small deviations which are typically ignored.
The airplane stories were the most chilling as he gives multiple examples of situations in which if someone had spoken up, disaster could have been avoided. We're conditioned as passengers to keep quiet and pass the buck assuming that the person in authority must be aware of whatever strange sound or observation we've made. In every plane crash in this book, that assumption was terribly wrong! Not speaking up in an attempt to not look silly or whiny is a guarantee for disaster.
A hair away from 5 stars - the material is well researched and very interesting, but the writing suffers from an occasional lack of organization. Industrial disasters are often namedropped with little introduction, to add support to case studies that didn't need the assistance, then never mentioned again in that chapter. One otherwise solid chapter compares the similar histories and fates of the doomed Challenger launch and the R.101 dirigible 50 years before, but switches between the two with no meaningful transition, sometimes even from paragraph to paragraph. I read this on my Kindle so it is entirely possible that the print edition has some manner of visually distinguishing the two, but I found this chapter to be a jarring example of an ongoing issue throughout the book.
Inviting Disaster was a good book if you like to read stories of crazy things that have happened in the past and you want to know how they happened. I really enjoyed the chapter with the new view on September 11th, where he explained how the collapse of the twin towers happened. This book was very interesting in looking at the facts behind some of the more interesting cases in history where things went wrong.
I recommend this book to anyone who wouldn't mind going back and rereading lines a couple of times, as the author does describe things with a higher vocabulary, so you must reread and look at the diagrams that are provided to understand the reading. I would also recommend this book to anyone who is interested in uncovering mysteries that were not apparent at the time of the accident.