The Future of AI Ethics – AI Apocalypse or Robot Rights?

The answer is anybody’s guess, as I argue in this last article about AI ethics. But it is our responsibility to try to shape the future we want.

Some AI experts are convinced the world is hurtling towards the Singularity, the point at which the long-trusted division between man and machine is erased as computers become cleverer than – and come to dominate – us. The prospect of AI-driven devices morphing from human-made tools into human-like agents makes even a proven technophile like Elon Musk nervous. He has variously warned about robots one day being “able to do everything better than us” and producing “scary outcomes” akin to the “Terminator” movies.

One response to these doomsayers is to ditch AI – if the technology leads us to robot apocalypse, we should leave well enough alone. But it’s more realistic to recognize that we have opened Pandora’s Box and will have to grapple with some thorny questions: At what point might an AI-driven device be seen as an independent agent that can be held accountable for its actions? What would constitute full consciousness, and what would responsibility mean for such an entity? But we should also recognize that trying to answer these big questions now would be a huge distraction – the future of AI ethics is in the future.

Most importantly, we do not know whether the Singularity is inevitable. The future is an unpredictable thing – ask Elon Musk. In 2016, he promised self-driving cars by around 2018. Four years past that deadline, it’s still hard to say when truly self-driving vehicles will be part of our lives. A little before Musk’s prediction, the anthropologist David Graeber wrote about the “secret shame” of those who had grown up in the mid-to-late 20th century. The future they had been encouraged to imagine had not come to pass, as evidenced by “the conspicuous absence, in 2015, of flying cars” (and of teleporters and tractor beams).

The conspicuous absence continues to this day, though perhaps now for different reasons. Human creativity has given us drones and nearly self-driving vehicles, both of which cast flying cars in a new, perhaps less practical light. This shows that our visions of the future might not come to pass because they change as we advance towards them. In other words, the future can turn out to be different from our vision of it, not necessarily because we are naïve, but because our expectations change as human creativity reshapes the path ahead.

We are still living in a world whose future will be formed by human agency – one in which humans come up with ideas and AI is a tool that helps us do so. Sophia, the strikingly human-looking robot, came to life in 2016 and was soon made a Saudi Arabian citizen and an Innovation Ambassador for the United Nations Development Programme. But her conversation is still no better than that of a chatbot relying on programmed responses to stock questions. Despite all her credentials, Sophia remains a machine and her algorithms a mathematical aid to human action. And we should make sure things stay that way.

The future of AI ethics will be the future we want it to be. For all his doom-mongering and fraught prediction-making, Elon Musk recognizes that the potential problems of technology demand responsible engagement with it. Despite his worries about scary robots, he announced in 2021 that Tesla would develop a “Tesla Bot” to perform “dangerous, repetitive and boring tasks” – one that humans could overpower if need be.

The Musk-founded ventures OpenAI and Neuralink are, in their own ways, also intent on preventing machines from taking over. To ensure that outcome, humans must keep engaging with the technology.

The future we want is based on the decisions we make today, including ethical ones about AI (which brings us back to the questions I’ve dealt with in the previous articles). The big ethical point to remember is that the future can – and must – be fought for. Nothing is determined; everything comes down to the decisions we make along the way. The nuclear-arms race taking place during David Graeber’s youth did not incinerate the planet because we learned to deal with its dangers. The arms-reduction and non-proliferation treaties that eventually followed might have been fraught, but they surely did more good than harm.

There is no reason why human agency in developing AI cannot take us on a similar path from supposedly assured destruction through technology to a watchful life alongside it. Interestingly, fiction has over a generation shifted from human-slaying machines to sympathetic robots like Klara in Kazuo Ishiguro’s “Klara and the Sun” and Adam in Ian McEwan’s “Machines Like Me.” Both these recent AI-driven heroes suffer at the hands of their human masters. Rather than a robot apocalypse, these futures raise the similarly thorny issue of robot rights. But, as I said, that is a question for tomorrow, not today.
