On edge about AI? No.
A few years ago, Heather, Anatoli, and I started Neurala with one goal: bring the results and insights of our Ph.D. work on brain-inspired computing into everyday technology. We wanted this technology to change the way society uses and benefits from machines: rather than each device needing one human brain to operate it, we wanted each device to have "its own brain," at the service of its human owner. A simple goal, not-so-simple technology, but designed from the ground up to be helpful to humanity. Sometimes, though, people like to think differently.
Years have passed, spent working with NASA, the Air Force, and many companies to build unique AI technology and transition it into products. Just a few days ago, we announced the release of our second consumer product, an app called Neurala Selfie Dronie for the Parrot Bebop Drone. It follows another app, called Roboscope, which does roughly the same for a ground robot, the Parrot Jumping Sumo. In essence, these apps enable hands-free use of ground and air robots by leveraging a patented, lightweight set of artificial neural network algorithms (also called Deep Learning; see here to learn more).
I was waiting for the moment when somebody would declare that this app is "the beginning of the end," and that now "AI would take over the world." Why? Because of my personal experience: one out of five questions I receive in my public speaking engagements is about Terminator, and I speak often. This question clearly echoes a real, widespread, persistent worry that people have about AI.
For example, this morning I came across a blog post from Kurzweil AI, where the author ends the article with: "Of course, it’s a small step from this technology to surveillance drones with facial recognition and autonomous weaponized unmanned aerial vehicles (see 'The proposed ban on offensive autonomous weapons is unrealistic and dangerous' and 'Why we really should ban autonomous weapons: a response'), especially given the recent news in Paris and Brussels and current terrorist threats directed to the U.S. and other countries."
I have expressed my position on "being on edge" about AI in the past. In particular, in 2013, this article in Geek Magazine was devoted entirely to the issue of "bad" AI taking over the world. I restate that position here.
I do not, for a minute, believe that AI is the real threat humankind should be worried about. A cursory look at a history book, or Wikipedia, will do: browse for human tragedies, massacres, atrocities, genocides, homicides, violence, and the like, and you will find the answer you are looking for. Humans beat machines infinity to zero. The real enemy of humankind is humankind itself. Ironically, our worst enemy is something that shares our very DNA, not something as alien to us as a machine.
Therefore, I urge people who are afraid of AI to act, and act now, but to direct that action toward the most crucial issue humankind faces: itself.
On our end, Neurala will keep building technology to help magnify people’s productivity. We cannot change people’s intentions in using this technology. Only you can do that.