Every spring, I take red-eyes from Austin, Texas, to Oxford, England, to teach a graduate seminar on AI and philosophy. When I land, I change into a suit and tie, something I never did as a start-up founder. Crossing the lawns of St Catherine’s College, I feel like an actor stepping into costume.
In the seminar room, the students are already seated. Rhodes scholars, philosophers, MIT-trained computer scientists. The kind of people who will build the next generation of AI. Each week, we pair old questions with new technologies: we read Plato or Mill, then hear from someone at Google DeepMind or Midjourney about what’s newly possible.
When my students have done the reading, they can summarise any argument with precision. Ask one to explain Mill’s harm principle and you’ll get a clear, accurate account. But ask whether autonomy really matters the way Mill says it does — both for what it produces and for what it means to be human — and the room falls silent.