How would you feel if someone said you could quit your job and never have to work again but still get paid? Initially, quite good, probably. But look a little closer, and the answer might become more complicated.
That scenario of not having to work is something we’re all eventually going to face because of rapid developments in artificial intelligence, according to tech billionaire Elon Musk.
Speaking via webcam at a conference in Paris last week, Musk said: “If you want to do a job that’s kind of like a hobby, you can do a job. But otherwise, AI and the robots will provide any goods and services that you want.”
It’s not entirely clear how this scenario would work in practice. Musk reckons there would need to be what he called “universal high income”, a lifelong wage people would receive from their government for just being citizens.
Many others in the tech community have made similar predictions, and understandably they’ve attracted concern, because our work is intrinsically linked to our identity (especially in capitalist economies). Even Musk has concerns about the existential questions AI raises: “If the computer and robots can do everything better than you, does your life have meaning?”
But while Musk warns us about the risk AI poses to our jobs, his company xAI closed a $6 billion (£4.7bn) investment round this week that will make it among the best-funded challengers to OpenAI. Musk says xAI will use the funds to build infrastructure that will improve its AI’s ability to learn.
The race to build bigger and better AI is well and truly on. And based on the amount of money investors are pouring into it, the perceived impact of this technology is vast.
It may seem like a reckless contradiction to warn about a technology while simultaneously racing to develop it. But this approach is not new: look back at major technological breakthroughs, from cars to nuclear fission, and you’ll find their creators raising safety concerns from the outset.
So how do we avoid the potential problems associated with this new technology while still extracting the benefits?
One approach is to get governments involved. Musk’s universal high-income solution suggests that he favours this.
Another approach is to look at mitigating some of the risks using technology.
I was recently at a gathering of the great and the good from the decentralised data storage and cryptography world in Lugano, Switzerland.
It may not sound riveting, but decentralised tech is one of the most interesting things happening right now, as it could help us build a better, safer world with AI in it.
Take, for example, a project called “Solid”, run by Sir Tim Berners-Lee (the British inventor of the world wide web) at the Massachusetts Institute of Technology. Solid aims to create a system for “true data ownership as well as improved privacy” using “decentralised social applications”.
The idea is to allow you to have a secure, unique online identity that you can move wherever you want. That’s handy if you’re trying to live in a world where AI might otherwise be stripping you of your identity.
So, like the invention of the seatbelt for car safety, AI safety will likely rely on a combination of technological and regulatory innovation.
It is reassuring to see governments getting involved. The UK is leading the way with its new AI Safety Institute, whose mission is to “minimise surprise to humanity from rapid and unexpected advances in AI”.
Collaboration is key. If AI investors and technologists work with the rest of society to make this technology safe and secure, AI could unleash incredible potential that we can all benefit from.
Conversely, if the big AI firms get caught up in competing with one another to drive profit and grow their wealth (which isn’t entirely inconceivable), AI might eventually do more than steal your job. It could create a two-tier system dividing the owners of AI from the rest of us.
And ultimately no one, including Elon Musk, benefits from that.