answer:Who can say with any certainty what will not happen, or what will happen, for that matter? It's not beyond the realm of far-off possibility that computers could be designed with enough memory, processing power, and input devices to match a normal human's brain, eyes, skin, and intuition, and with enough "rules processing" artificial intelligence to make reasonably good decisions more frequently and more quickly than humans can.

But there's always an override. The question is, who is going to know about it? Who is going to program the computer? Who is going to sign on to having his life, and possibly life-and-death decisions about himself, made by a machine with no recourse?

When the autopilot fails on a plane, and design engineers always account for the possibility of system failure in control systems, sometimes even multiple concurrent failures, there are always ways for the pilots to at least attempt to regain control of the aircraft. So it would be, and so it already is, with nearly every machine that has some amount of "automatic" control. Even elevators and escalators, two machines we encounter on a near-daily basis, have manual override switches.

If the override exists, then you can be sure that someone, somewhere, knows how to gain access to it and take control of the system. Okay, maybe that's not a huge gain for someone who can control an escalator. Whoop-te-do, right? But gaining control over, say, the New York Stock Exchange? The US Army? Air Force One? That could be pretty tempting to some low-level programmer or engineer with delusions of grandeur and an axe to grind.