A well-reported look at the frontiers of information technology as brought to the world courtesy of artificial intelligence.
“I think this will be the most transformative and beneficial technology humanity has yet invented,” Silicon Valley tech tycoon Sam Altman once exulted of ChatGPT, the AI engine built on a vast corpus of words. Hao, a writer for The Atlantic and other publications, takes a more measured view of the accomplishments of Altman and his OpenAI, a tech firm with significant transparency issues and a curious structure, part nonprofit, part for-profit. Hao opens with Altman’s firing in November 2023 at the hands of his board and his quick return to the company with few of those issues resolved, a drama that, Hao writes, “highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence?” It’s an urgent question indeed, given that AI increasingly governs us, making decisions about judicial sentencing, college admissions, health insurance payouts, and so on. Moreover, Hao writes, AI development has become increasingly secretive, with the evolving product put to uses that “could amplify and exploit the fault lines in our society.” Against booster promises that AI will solve the climate crisis and discover a cure for cancer, Hao—who found employees blocked from speaking with her “beyond sanctioned conversations”—looks at some unhappy realities: For one, data centers consume huge amounts of energy, with one planned facility using nearly as much power as New York City; for another, most of the corpus of AI’s large language models overlooks the developing world, where, not coincidentally, a great deal of AI-related grunt work is being done for low wages in places like Kenya and Chile.
A pointed account that raises needed questions about how AI is to be regulated to do no—or at least less—harm.