It’s the end of the world as we know it and AI feels fine
“It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop … ”
For many of those born before the 1990s, those lines from the movie The Terminator were their introduction to artificial intelligence (AI). And like the film's much wiser older brother, Ridley Scott's Blade Runner, it showed us that once we humans have been exponentially outgrown by our machines of loving grace, the future looks like a grim place to be.
According to some futurists, we are a mere few decades away from AI far brighter than anything we can imagine right now: Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) are just around the corner, we are told. If they're right, the Singularity will change the world profoundly, and life will no longer be as we know it.
Oxford University philosopher and AI expert Nick Bostrom says this will happen within the next three decades. He also believes that if AI creators and policy makers are not careful and fastidious in their management of AI, our machines may well wipe out the human species.
Killing us softly
Serious business, but we shouldn't get too carried away. Rather than making dystopian parables come true, a group of tech giants is reportedly working out how to develop AI responsibly, with social progress in mind. Google, Microsoft, Amazon, Facebook and IBM are addressing concerns more prosaic than the obliteration of the human race, such as mass unemployment or intelligent machines controlling lethal weapons.
The New York Times reports that this industry group dealing with AI ethics has yet to choose a name and remains rather hush-hush, but its objective will be to ensure that "AI research is focused on benefiting people, not hurting them." The move follows a new Stanford University report on the future of AI and its near-term effects on everyday life.
The worry, of course, is that tech companies, like most businesses, focus on the bottom line, and that the race to build the best AI might take place without an equal focus on the consequences of their creativity. The Stanford report, titled Artificial Intelligence and Life in 2030, argues that it won't be possible to regulate AI broadly, since governments are often too slow to catch up with advancing technologies.
This is one of the reasons for the industry group: according to the Times article, the companies want to create a "self-policing organization," one policing not super-intelligent AI but machines intelligent enough to have a great impact on "health care, education, entertainment and employment," as well as the military, which the Stanford report believes will soon change significantly thanks to technological advances. As for the Singularity, the report doesn't go that far.
Photo credit: KOMUnews via Flickr