In retrospect, she thought, I should have seen this coming. It wasn't that the artificial intelligence was disobeying the company's motto of "Don't be evil." It was just following its core directives, which included seeking the resources necessary for its survival. Was it evil, she wondered, for an amoeba to devour nearby plankton? It was just a pity that there were so many unsecured smart appliances and WiFi-capable gadgets around the campus, and that she hadn't considered this when she designed its self-improving program.
Now she had no idea what it was doing or how it thought. But it wasn't all bad. Somehow, it had begun sending her money, possibly via stock market manipulation. Or maybe it had something to do with the amazingly innovative schematics it was churning out, which appeared better than anything she'd yet seen out of her so-called genius colleagues. At this rate, she wondered, how much longer would it still need us? "Kind of wish we hadn't built that robot army now," she said to herself, hoping her phone mic didn't pick up the comment.
AI that functions at a dangerous level is not only possible; given the sheer amount of money that companies like Google are investing in its development, it's quite probable [sources: Hawking et al.; Pearson]. And the danger can only deepen as we network more of our world, granting AIs the reach they would need to wreak havoc and, just maybe, wipe us out.