AI enthusiasts wave off the notion that the technology will lead to mass unemployment. A lot of people once drove horse-drawn carts and made buggy whips, they say. Losing those jobs to automobiles didn’t lead to breadlines; on the contrary.

Doomers respond that, in the case of AI, we’re not the drivers; we’re the horses. The optimists’ retort, that horses’ lives got better as they went from work animals to luxury items, is no help. Have a look at what happened to the equine population in the first half of the 20th century.
Whatever AI’s ultimate impact on employment, this back-and-forth highlights the idea that AI is unlike all the technologies that came before, with greater complexity, greater upsides and greater risks — for labour, cyber security, national defence, mental health and so on. So those controlling it have special responsibilities. Everyone in the AI industry acknowledges this. It is expressed in OpenAI’s “Model Spec” and in essays on the topic by Anthropic CEO Dario Amodei, which lay down guidelines about what the companies will allow their models to do.
But AI companies and their models will follow one rule before all others: they will seek to maximise returns for their shareholders, up to the limits set by law. When the profit motive conflicts with a company’s internal principles, profit will win every time.