

If the dispute is clearly over and nothing further is happening, remaining options resolve early. Unless stated otherwise, remaining open options resolve on July 1st.
See also:
/Bayesian/will-anthropic-give-the-military-un
People are also trading
https://www.anthropic.com/news/where-stand-department-war
As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court.
Re: "The supply chain risk designation is officially issued"
Press releases still don't count for this, but it's looking more likely; it doesn't seem like the Pentagon has dropped it.
To further clarify: while I don't think the Pentagon's most recent letter to Anthropic has been published, from what I've read it doesn't sound like it would be enough on its own to resolve this to YES. It doesn't fall into the categories I previously mentioned (SAM.gov exclusions list, official administrative rule, contract clause, or procurement memo), and as far as we know there's still nothing legally preventing contractors from using Anthropic.
On a speculative front, it's interesting that seemingly the first org the Pentagon told about the designation was Anthropic themselves. It makes me think they're still trying to negotiate instead of prioritizing just pushing through the designation (though of course negotiations still seem very unlikely to succeed).
Anyone with more mana than me want to start a market on some variant of:
"Anthropic relocates its headquarters outside the United States"
"The majority of Anthropic's known compute in 2028 is outside the United States"
Some speculation about this, although it seems quite unlikely:
https://writing.antonleicht.me/p/can-you-poach-a-frontier-lab
https://cybernews.com/ai-news/anthropic-pentagon-europe/
@Kingfisher Sure, here you go.
https://manifold.markets/jgyou/anthropic-relocates-by-the-end-of-2
There are a few interesting operationalisations, so I created a set of markets instead.
I've added a new answer: "The supply chain risk designation is officially issued"
This resolves YES if Anthropic is added to the SAM.gov exclusions list, or if the government publishes an official administrative rule, contract clause, or procurement memo requiring contractors to stop using Anthropic.
It does NOT resolve based solely on social media posts or press releases. The government must actually file the formal contracting paperwork.
Resolves YES even if the legal authority of the mechanism is challenged, temporary, delayed/not yet in effect, or blocked by a court injunction. Resolves NO if this doesn't happen by July 1st.
I reserve the right to trade on this answer.
I think that the "supply chain risk" option may have been mis-resolved. As far as I know, all we have is that Hegseth said that he is directing the DoW to label Anthropic a supply chain risk, and I think this has not happened yet. (And I think there's a decent chance -- like maybe 15% -- that it won't happen.)
@EricNeyman also "Pentagon cuts ties with Anthropic" - as of right now they're still actively using Claude. That will probably change, but resolution is premature
@No_uh there was a related discussion down-thread: https://manifold.markets/Bayesian/outcomes-of-the-anthropic-vs-us-gov#et1hd0gmzh
OpenAI agreement is still being amended:
https://x.com/sama/status/2028640354912923739?s=20
https://xcancel.com/sama/status/2028640354912923739
The internal contradiction in the "OpenAI employees sign resignation letter" prop (currently 63%) is worth noting:
- The market for "OpenAI signs contract substantially weaker than Anthropic requirements" is at 93%
- Sam Altman publicly claimed he shares Anthropic's red lines
- 430+ Google/OpenAI employees already signed a cross-company letter supporting Anthropic
If OpenAI's contract IS substantially weaker (93% likely), that directly contradicts Altman's public commitment. The same employees who signed the solidarity letter would have strong reason to escalate.
I think the 63% on the resignation letter is actually reasonable or slightly low given the 93% on the weaker contract.
I've got bad news. This question may have been misresolved. https://www.wsj.com/livecoverage/iran-strikes-2026/card/u-s-strikes-in-middle-east-use-anthropic-hours-after-trump-ban-ozNO0iClZpfpL7K7ElJ2
I won't bet on this, to avoid bias. Anthropic was previously offered a compromise which they said "was paired with legalese that would allow those safeguards to be disregarded at will", so if OpenAI signs a similar deal, this would resolve YES.
@bh if this contract is signed, I think this would resolve YES. The contract language there seems very watered down. E.g. they say in their "redlines" in the blog post:
No use of OpenAI technology to direct autonomous weapons systems
But the actual contract per the blog post says:
any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
which is basically the opposite of the redline lol
@PlasmaPower Hmm, but doesn't it allow them to stonewall any attempt to actually make it happen, by requiring verification and validation that are essentially impractical?
@JussiVilleHeiskanen it'd probably make sense to wait to resolve to see if this is similar to or the same as the "legalese" that Anthropic rejected, but it sounds as described to me. It explicitly says any lawful activity is allowed
@Bayesian this is a little ambiguous, as it neither gives a time horizon nor defines what "cutting ties" means. As of yesterday, the Pentagon has 6 months to "cut ties", and a lot could change before then.
@Bayesian I think that "The Pentagon declares Anthropic a supply chain risk" should not have resolved yet. The legal process for designating a supply chain risk has not yet completed. The current situation is more like "The Pentagon announces intent to begin the process of designating Anthropic a supply chain risk", which is different.
@AyO I interpreted "declare as a supply chain risk" to mean the specific legal measures that are associated with supply chain risk designation such as was done for Huawei or Kaspersky, not just a public announcement of intent. Of course, these user-added answers always have abundant ambiguity