Vision Matters
On continuing Anthropic & Pentagon tension, Musk vs. OpenAI, and an underreported AI phone rumor
Rather than one deep-dive news story, this week I’m bringing you three quick takes on recent AI news. There is so much happening in AI that it’s impossible to keep up with all of it. Here are three recent stories and the thread between them, seen through an integral humanism lens. Please share your thoughts in the comments!
🧠 This Week’s Point
1: Anthropic Is Not Invited Back to the Pentagon… Yet.
If you’ve been following the saga between Anthropic (maker of Claude) and the Pentagon since my March piece, here is an update of sorts.
On May 1, the Pentagon announced AI deals with eight companies: OpenAI (maker of ChatGPT), Google, SpaceX, NVIDIA, Microsoft, Oracle, AWS, and a startup called Reflection. These are Anthropic’s major competitors, and they were not in the room. The company remains formally designated a “supply risk to national security”, a label normally reserved for foreign adversaries, after refusing to let Claude be used for fully autonomous weapons or mass domestic surveillance of Americans.
Given recent history, it’s not surprising that Anthropic was excluded from this group. But the White House and the Pentagon do not appear to be on the same page. Defense Secretary Pete Hegseth called Anthropic CEO Dario Amodei an “ideological lunatic” at a Senate hearing last week. Days earlier, Trump told CNBC a deal with Anthropic is “possible” and that he likes “high IQ people”. GW Law professor Jessica Tillipman said it well in The Hill: “I see this frankly as the White House allowing the Pentagon to try to save face on one hand, while still negotiating on the other”.
The biggest wild card right now, and the strongest card in Anthropic’s hand, is Mythos, Anthropic’s new cybersecurity model. It can find undiscovered vulnerabilities in critical systems, which, in the wrong hands, could serve as a roadmap for attacking them. The NSA is already using it, and the Pentagon’s own CTO called it “a separate national security moment” while maintaining the blacklist. Axios reported that the White House is now floating guidance to let other agencies use Anthropic regardless, which further complicates the Pentagon’s position.
While policy inconsistencies aren’t unusual for this administration, this one has put Anthropic in an interesting position from an AI ethics point of view. They held the line and opted out of their Pentagon partnership based on their principles about how AI should and should not be used. Anthropic’s move around Mythos — warning about the dangers and creating an industry working group to test it — might have been a bit of PR, but it was also probably the responsible thing to do.
It makes me hope both that Anthropic’s ethics are earnestly held and that they remain so as the company grows in power and influence. Anthropic is well positioned right now as a company holding both the moral high ground and a technical one; it’s much easier to make the world a better place when you’re highly competent. It’s worth watching closely what happens with Anthropic’s continued talks with the White House, and whether the company will continue to wield its increasing power in a benevolent way.
2: Musk and OpenAI’s Nonprofit Status
A trial two years in the making finally landed in a federal courtroom in Oakland last week, and it has all the drama of a Bravo reality show.
Elon Musk, who cofounded OpenAI in 2015, donated $38 million, and left the board in 2018, is suing CEO Sam Altman and board president Greg Brockman, as well as Microsoft, alleging that they betrayed the nonprofit’s mission to enrich themselves. Musk seems likely to lose, and the jury’s verdict is advisory only, with the judge holding the final say.
Two days before the trial began, Musk texted Brockman about a settlement. When Brockman suggested both sides drop their claims, Musk replied, “By the end of this week, you and Sam will be the most hated men in America.” The courtroom has been full of back-and-forth claims about what was and wasn’t promised and who deserves credit for OpenAI’s success, and from my point of view, none of the parties come across as primarily concerned with building AGI that “benefits humanity.”
Brockman’s own journal, which has become a large part of the trial this week, notes: “Can't see us turning this into a for-profit without a very nasty fight… His [Elon’s] story will correctly be that we weren't honest with him in the end”. Another entry asked, “Financially, what will take me to $1B?” His equity stake in OpenAI’s for-profit subsidiary is now worth $30B, even though he contributed no money to the nonprofit.
Having worked in tech nonprofits for over a decade, I am glad, in a way, that OpenAI is becoming more of a true for-profit, though it is still governed by a nonprofit board. It’s simply more honest. It was never a “charity” in the traditional sense, and nonprofit status can create moral cover for work that is not so humanitarian. But how to reconcile the work and dollars put into the company’s nonprofit beginnings (not to mention its consumption of so much free training data) with its current $852 billion valuation is a true moral quagmire.
3: OpenAI Might Be Building a Smartphone
This one is a rumor from a credible source: Ming-Chi Kuo, a technology analyst known for accurate predictions of Apple’s products and plans. I was surprised this story wasn’t more widely shared. We’ll see if Kuo’s track record translates to OpenAI’s world; the truth is, this seems like a risky but logical move for OpenAI, however bad an idea for humanity it might be.
In late April it was reported that OpenAI is developing a smartphone in partnership with Qualcomm, MediaTek, and LuxShare (a major Apple manufacturer). The smartphone would have no apps, just an AI agent that would handle everything from ordering a car to booking a table to replying to your messages. It would do this through continuous conversation, with persistent knowledge of your location, activities, and environment, running in the background.
This is my nightmare phone.
It seems clear that OpenAI faces a serious challenge in the hardware space: there is still little to show for its partnership with Jony Ive, and the global smartphone supply chain is a logistical labyrinth. But OpenAI has become a behemoth that can take this risk, and it would clearly track with their ambitions.
The vision of this type of smartphone is concerning to me on two main levels.
First, phones are already a real challenge for attention and presence in the world. I’ve long been concerned about the way they capture not only children’s attention but adults’ too, and more and more people have noticed and pushed back. But Bricks and other screen time management apps can only go so far.
I’ve personally looked into getting the Lightphone, a kind of anti-smartphone designed to give you back your attention. I ordered one around when this Substack started last December, thinking it would be at least a great experiment to write about. In the almost six months since, I’ve received three emails telling me my phone will be delayed. I honestly feel for the company. They are dealing with supply chain issues that a company their size, unlike an Apple or Samsung, probably can’t solve. It seems like a great, bold idea — but they can’t get the phone to me. OpenAI could.
My second concern is that if OpenAI’s smartphone vision takes over, even if competitors adopt a similar model, it would drastically reshape the Internet. If apps don’t exist on a smartphone, then services wouldn’t need the same kind of human interface; they’d need an AI one. What would the purpose of a web browser be? Of a website URL? I’m sure these would still be accessed in other ways, at least for some time, but widespread adoption would drastically change the shape of the World Wide Web.
🏹 Vision Matters
The throughline in these stories feels incredibly clear to me: whose vision leads AI matters deeply. Whether it’s our day-to-day use of smartphones, how war is conducted, or what nonprofit status means, AI and the personal ambitions of the people building it are going to have an outsize effect on our lives.
And as much as I know that the leaders of some of these companies are deeply deserving of criticism and scrutiny, there is also a deeply human element here of how fallible we all are, and how the possibility of becoming a multi-billionaire could affect the long-term vision of the absolute best of us. It can’t just be up to a small handful of fallible (mostly) men to direct what the AI future looks like.
🫀 The Human Bit
This morning on a walk I saw bees buzzing around some beautiful pink coral vines, almost ran into a very frantic squirrel, and then saw this monarch fly past me and land on a mango tree, all in the space of about sixty seconds.
It connected to a Daily episode I listened to soon afterward that had an excellent roundtable of the Artemis astronauts answering kids’ questions about space. It’s as adorable as you’d expect, and I highly recommend a listen. But one part included homework from Reid Wiseman. He said “the next time you see something in bloom or something growing out of the ground, just stop for a second, and look at it, and just be impressed by it… the simplest little thing can be the most impressive thing you’ve seen all week.”
Well, color me impressed.
Stay human ✌🏼
Emily








