AI, "Humanity", and Dr. Manhattan Syndrome
A communications intervention
Back in January, it came to light via FEC reporting that OpenAI’s president and co-founder Greg Brockman and his wife had made a monumental $25 million donation to MAGA Inc. last September—one of the largest individual political donations of 2025.
When interviewed by WIRED about his newfound political largesse, Brockman explained the check in rather grand terms. “This mission, in my mind, is bigger than companies, bigger than corporate structures,” he said. “We are embarking on a journey to develop this technology that’s going to be the most impactful thing humanity has ever created.”
The word that sticks out to me here is “humanity”. He writes a $25 million check with his wife to a partisan political operation, one with very specific policy positions affecting very specific people, and explains it in the language of humanity. The kind that lives in essays and mission statements, not the kind that has healthcare anxieties or gets deported or loses jobs or disagrees with you about politics. Capital-H, abstract, floating-above-the-fray Humanity.
Brockman is not an outlier. If you’ve worked in or around big tech for any length of time, you’ve met the type, probably several dozen of them. They’re everywhere in AI. They care enormously about Humanity. They’d do anything for Humanity. They just can’t be bothered with actual people.
And if these executives and companies don’t see and address the disconnect in their public messaging, they’re doomed to keep losing the battle for hearts and minds the industry desperately needs them to win.
The View From Orbit
When I contemplate a mascot for this type of executive, the image that comes to mind is blue, nude, and levitating: Dr. Manhattan.
For those who haven’t read writer Alan Moore and illustrator Dave Gibbons’ “Watchmen”, here’s the quick version: Jon Osterman is a nuclear physicist who gets disintegrated in a lab accident and reconstitutes himself as a being of godlike power. He can see across time. He can manipulate matter at the atomic level. He is, for all practical purposes, omniscient and omnipotent. And over the arc of the story, he gradually loses the ability to give a shit about people.
This isn’t a flaw in his character. Moore wrote it as the inevitable consequence of operating at that altitude. Manhattan can perceive the entire arc of human civilization. He understands the quantum mechanics underlying all of existence. He genuinely does care about humanity’s survival in some detached cosmic sense. But he can’t maintain a relationship with the woman he loves or comfort someone who’s grieving. Individual suffering becomes statistically insignificant when you’re tracking the movements of atoms and the trajectory of species.
The crucial part—the part that makes “Watchmen” more than a comic book—is that Manhattan doesn’t experience this as a loss, but as clarity. He thinks he’s seeing more clearly than everyone else. The people around him can tell that something essential has been lost, but he can’t see it himself because the view from orbit is so intoxicating.
Replace “nuclear physicist” with “AI executive,” and you have a disturbingly accurate portrait of a particular mode of tech leadership that has exploded across the industry over the past few years. (Minus the nudity, thankfully.)
Why the Abstraction is Irresistible
I don’t think these guys are truly indifferent. I’ve met and worked with many Dr. Manhattan types over two decades in this industry—an industry I'm personally and professionally invested in, as someone who builds with these tools daily and whose consulting business is substantially tied to AI. They are, in the majority of cases, formidably intelligent, voraciously curious, and capable of accomplishing amazing feats. The problem lies not in any of these qualities, but in the comfort their worldview provides.
Humanity, the concept, is an extraordinarily comfortable thing to care about. It’s theoretical. It’s malleable. You can model it and optimize for it. You can write essays about it on a blank white page at the top of your company’s domain hierarchy, and nobody can pin you down on specifics.
People, on the other hand, are a nightmare. They’re real and present, messy, inconsistent, and contradictory. They get angry at you, sue you, organize against you, show up outside your office with signs. They have the temerity to worry about their job rather than the species-level trajectory of labor markets. They want to know why their kid is using your chatbot to cheat on homework instead of appreciating that you’re building the most important technology in human history.
Humanity holds still for your grand plans. People do not.
There’s a second force at work beyond comfort, one I’ve written about before: centering Humanity in your rhetoric casts you as the hero of civilization’s story. When Brockman says the mission is “bigger than companies, bigger than corporate structures,” he’s writing himself into a narrative where the $25 million to MAGA Inc. isn’t a political act with winners and losers, but a strategic move in the grand project. The partisan specifics dissolve into the universal mission. And if you can’t see that? Well, you’re just not thinking at the right altitude.
This is the rhetorical equivalent of a judo throw. By elevating the conversation to the civilizational plane, any critique of the specific decision looks petty and small by comparison. You’re worried about Medicare funding? I’m worried about the survival of Humanity. Who sounds more serious?
We’ve Already Run This Experiment
Here’s where it gets really instructive for the AI industry, and where Moore’s choice to root Dr. Manhattan in nuclear physics turns out to be more than a narrative device. The American nuclear industry already ran this exact communications exercise and failed catastrophically.
It started with a campaign draped in the language of progress. Eisenhower’s “Atoms for Peace” initiative, launched in 1953, was internally conceived as “psychological warfare”—a PR effort to manage public fear of nuclear weapons by redirecting attention toward the peaceful atom. They partnered with Walt Disney and sent traveling exhibits around the country. AEC Chairman Lewis Strauss promised that nuclear energy would give future generations electrical power “too cheap to meter.” All Humanity-scale rhetoric: the betterment of the species, the march of progress, the inevitable arc of science lifting all boats.
When the public began raising concerns about safety and costs, the industry commissioned research to understand why people weren’t getting on board. What came back was what social scientists now call the “deficit model”—the idea that public opposition was rooted in a deficit of knowledge, not a surplus of legitimate concern. The prescription was always more education, more persuasion, more explaining. Never more listening. The National Academies later observed that “trust is especially undermined if experts dismiss public concerns, or when these concerns are perceived to be dismissed.”
Let me be clear about causation, because the AI parallel only works if we’re honest about it. The communications failures didn’t kill nuclear power. The disasters did. But two decades of talking over the public meant the industry had built precisely zero reservoir of human-scale trust to draw on when the real crises hit. Nuclear pioneer Alvin Weinberg admitted in 1976 (three years before Three Mile Island) that “the public perception and acceptance of nuclear energy appears to be the question that we missed rather badly.” After TMI and Chernobyl confirmed the public’s worst suspicions, over a hundred planned U.S. reactors were cancelled.
Moore wrote “Watchmen” in 1986 and 1987, steeped in exactly this nuclear anxiety. Manhattan is the nuclear establishment made flesh: brilliant, powerful, operating with genuinely good intentions at the civilizational level, and absolutely useless to the individual humans whose lives were shaped by the technology he embodied.
The Wrong Solution to the Right Problem
Which brings us back to our Dr. Manhattan Syndrome-afflicted AI execs and what might be the most revealing detail in the WIRED piece: Brockman’s proximate motivation for ramping up political spending is that public opinion has turned against AI.
He’s very right about the problem. Pew Research Center data from mid-2025 shows that 50% of Americans now say they’re more concerned than excited about AI, up from 37% in 2021. Fifty-seven percent rate AI’s societal risks as high, while only 25% say the same about its benefits. Fifty-nine percent lack confidence that U.S. companies will develop and use AI responsibly. Those numbers are terrifying for anyone who builds or advocates for AI products.
But think about what the people behind those numbers are actually worried about. They’re not anxious about AI in the abstract, per se, but its implications. They’re anxious about their job, their kid’s homework, their creative work getting scraped without permission, their privacy. Human-scale concerns that are specific, personal, and grounded in the daily texture of individual lives.
And Brockman’s response to this very specific, very human anxiety is to... float further up into the philosophical stratosphere while writing a mega-check to a partisan PAC and explaining it in the language of civilizational mission. It’s like a doctor who, hearing a patient say, “My knee hurts,” delivers a lecture on the elegance of the musculoskeletal system. The patient doesn’t need you to appreciate the beauty of human biology. They need you to look at their damn knee.
The Pew data on AI skepticism, importantly, cuts across party lines. This isn’t a red or blue problem. By picking a partisan side and then wrapping it in Humanity language, Brockman has managed to simultaneously politicize a non-partisan anxiety and fail to address the underlying concern. He’s told half the country he’s against them politically, and told the other half he cares about them only as an abstraction. Nobody gets to feel like an actual human in this interaction.
This is the Dr. Manhattan Syndrome trap in action. The more godlike your perspective, the more you lose touch with the people your technology actually affects, and the less capable you become of addressing their concerns in terms they recognize as genuine.
A Thousand Songs in Your Pocket
There is another way to talk about transformative technology. Steve Jobs demonstrated it for twenty-five years.
Jobs rarely talked about “Humanity” as an abstraction or a project to be managed. He almost never spoke about the “trajectory of the species” in his product launches. Instead, he talked about you. Your pocket. Your music. Your photos. What you create with this thing. The entire Apple communications architecture was built around the second person singular.
When Jobs introduced the iPod, he didn’t say, “This device represents a paradigm shift in humanity’s relationship with media.” He said, “A thousand songs in your pocket.” When he launched the iPhone, he didn’t claim it would transform the trajectory of civilization. He showed you what you could do with it.
Even “Think Different”—the closest Apple ever got to a grand civilizational statement—was populated with specific people. Einstein. Gandhi. Lennon. Earhart. Actual humans with names and faces and stories, not a blank white page with portentous text.
Jobs was arguably more of a genuine visionary than any of these AI leaders. The man reinvented multiple industries. He had more right to talk in civilizational terms than Brockman does. But when he did, he framed it as a contribution to the species, not a control mechanism over it. He understood that to impact humanity, you have to build tools for individual humans. It was a strategic and philosophical choice rooted in the understanding that people don’t connect with abstractions.
Jobs stayed at ground level because that’s where the customers live. Dr. Manhattan floated away because the view from orbit was more beautiful. One of them built the most valuable company on earth. The other left the planet entirely.
This shouldn’t be read as an argument against having any kind of civilizational vision. You may need one, depending on your mission. The engineers grinding through hard problems on brutal timelines aren’t signing up for “a better chatbot.” The all-hands, the investor pitch, the recruiting dinner: these are exactly the right rooms for Manhattan altitude. Jobs had his own reality distortion field, too. He just kept it largely inside the building.
The error isn’t thinking at a civilizational scale. It’s delivering the recruiting pitch at the press conference. Different rooms. Different audiences. Different concerns.
It’s Not Too Late to Come Back Down
I know that comms practitioners at OpenAI, Anthropic, Google, and other major players in the AI industry read this newsletter. Some of you are friends, some are acquaintances. This part is for you.
The nuclear establishment had decades of calcified one-way communication before Three Mile Island blew the lid off. They had built an entire institutional culture around talking over the public’s head. By the time the crisis hit, the patterns were so deeply embedded that recovery was essentially impossible. That industry still suffers from it today, despite having one of the best safety records in the energy sector.
AI is, for all intents and purposes, barely three years old as a public-facing technology. ChatGPT launched in November 2022. The trust deficit right now is a trend line, not a fixed reality. The Pew numbers are moving in the wrong direction, but they haven’t calcified yet. There is still time to change the trajectory. But there has to be will, too.
And I don’t think the leaders of these companies are incapable of introspection and change, either. Dr. Manhattan Syndrome isn’t a permanent condition. It’s a habit of altitude, reinforced by an ecosystem—investors, conference organizers, friendly podcasters, peers—that rewards civilizational rhetoric and never demands specificity.
What breaking it looks like isn’t complicated, though it is uncomfortable. It means talking about people, not Humanity. It means naming trade-offs rather than papering them over with mission statements. It means your CEO sitting across from a skeptical reporter and engaging with specific concerns about job displacement, creative rights, privacy—on those terms, not by retreating to the cosmos. It means making your customer the protagonist of your story instead of casting your company as the hero of civilization.
Some of you comms people already know all this and have been making these arguments internally, only to be overruled by leadership teams drunk on the view from orbit. I’ve been that person. I know how it feels. Maybe this gives you a piece of evidence to bring into the next room where the decisions get made.
But here’s the real pitch: it works. Trust is built at the human level, one person and one concern at a time, not at the level of civilizational altitude. The companies that figure this out in the next few years will own the next era of technology. The ones that keep floating above it all will eventually discover, as Manhattan did, that they’ve drifted so far from the people they claim to serve that they’ve become irrelevant to them.
Here’s the final irony of the Dr. Manhattan metaphor, and the one most worth sitting with. The actual hero of “Watchmen” isn’t Manhattan, the guy with the godlike power, the civilizational perspective, and the ability to see across time. It’s the messy, compromised, ground-level humans making ugly choices in real time with imperfect information, conflicting loyalties, and the full weight of consequences on their shoulders.
Manhattan has the power. But he’s the least useful person in the room when it actually matters. He can reshape matter, but he can’t reshape public trust. He’s not the villain. He’s just irrelevant. And for someone who claims to care about Humanity, that might be the worst thing you can be.
You don’t have to be him. Put some pants on and come back down to earth. There are actual people down here who could use your help.