The Future Computed: AI and Its Role in Society
Microsoft published The Future Computed: Artificial Intelligence and its role in society this week, and I have spent most of today and this evening reading through it. It is a thoughtful and at times surprisingly candid piece of writing from a company that is both a major builder of AI systems and, in publishing this, clearly trying to participate in a broader public conversation about what those systems mean.
I would recommend reading it. It is freely available and not a long read, but it is dense in the best sense, and it rewards taking your time rather than skimming.
What the Book Gets Right
One of the things I found most refreshing is that the book does not pretend AI is simply a neutral tool that will automatically produce good outcomes. There is an explicit acknowledgement that the technologies being developed have real potential for misuse, that they can encode and amplify biases if not built carefully, and that the companies developing them have a responsibility that extends beyond commercial considerations.
This framing matters. It is easy for technology companies to publish material about AI that is effectively an extended product announcement dressed up in the language of social responsibility. The Future Computed feels more genuinely engaged than that. Brad Smith and Harry Shum are asking real questions about governance, accountability, and the social contract between technology companies and the societies they operate in.
The Principles That Stand Out
The book articulates six principles that Microsoft is applying to its AI development: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. What I found interesting — and useful — is reading these not just as aspirations but as engineering and design commitments. How do you actually build fairness into a machine learning model? How do you operationalise transparency in a way that is meaningful to the people affected by an automated decision?
These are not easy questions and the book does not pretend to have answered them definitively. But the act of naming them clearly and treating them as engineering problems rather than purely philosophical ones feels like the right framing.
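To make that concrete, here is a minimal sketch of what treating fairness as an engineering problem can look like in practice. This example is mine, not the book's: it measures demographic parity, one of several competing operational definitions of fairness, by comparing a model's positive-decision rate across groups.

```python
# Hypothetical sketch: fairness as a measurable property of a system's
# decisions. Demographic parity asks whether positive outcomes are
# distributed at similar rates across groups. It is one definition among
# many, each with trade-offs; the book does not prescribe any of them.

def selection_rates(decisions, groups):
    """Per-group rate of positive decisions (1 = approved, 0 = denied)."""
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    return {g: positives / total for g, (total, positives) in counts.items()}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate.
    A common rule of thumb flags ratios below 0.8 for human review."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy audit: group "a" is approved 3 times in 4, group "b" once in 4.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact_ratio(decisions, groups))  # 0.25 / 0.75 → 0.333...
```

A check like this does not settle the philosophical question of what fairness means, but it turns one reading of it into a number a team can monitor, test against, and be held accountable for.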
What I Am Still Thinking About
The part of the book I keep returning to is the discussion of the future of work. This is, frankly, where the most public anxiety about AI sits, and understandably so. The honest answer about what happens to employment as automation becomes more capable is that nobody knows with certainty; the uncertainty is not just about how much will change but about which direction it will take.
What the book argues, and what I find broadly persuasive, is that the right response is not to try to slow the development of the technology but to invest seriously in human adaptability — in education systems that teach people how to learn continuously rather than front-loading all learning into the first two decades of life, in social safety nets that can absorb transitions, in helping people move alongside the technology rather than being displaced by it.
That is easy to write and hard to do. The practical policy challenges are formidable. But I think the framing is correct.
Why This Conversation Matters Now
We are, I think, at a point where the decisions being made about how AI is built, deployed, and governed will have long-term consequences that are very difficult to reverse. The norms being established now — about data use, about transparency, about accountability when automated systems make decisions that affect people's lives — will shape the technology environment for a generation.
That is why I think it matters that the companies building this technology are engaging seriously and publicly with these questions rather than treating them as a distraction from the engineering work. The Future Computed is part of that engagement, and it is worth your time.
Go and read it.