Beyond the contemporary issues lurk deeper long-term questions. What should and shouldn’t we automate? Should we create lethal autonomous weapons, or should a human always take charge of life-and-death decisions? How will we earn a living and even find meaning if AIs eliminate jobs? And if we ever create AIs that are smarter than us, will they value the same things we value – not least human life itself?

I pay close attention to whatever Cennydd says at the intersection of technology and ethics, and this article reinforces why. He covers many of the challenges around the exploding world of AI. Source: Artificial intelligence: who owns the future?