2026
Claude's April Outage Streak Is Now a Pattern
After incidents on April 1, April 6, April 7, and April 13, Claude's reliability stopped looking like isolated turbulence and started looking like an operating condition.
2026
Ben Thompson treats Mythos and Glasswing as either a genuine governance problem or a carefully staged danger narrative, which is exactly the tension the site is arguing about.
2026
Stella Laurenzo’s public complaint makes the engineering backlash legible: degraded reading behavior, more full-file rewrites, stop-hook violations, and weaker trust in Claude Code for serious work.
2026
The Google and Broadcom deal reads less like triumphant diversification and more like proof that Anthropic remains structurally dependent on whoever can keep Claude running.
2026
The second visible outage in less than a day reinforced that the operational problems were not just anecdotal user frustration but an externally visible service pattern.
2026
Anthropic blocked Claude subscriptions from powering third-party agent tools like OpenClaw, converting what users thought was flat-rate access into metered usage.
2026
Anthropic’s growth story and its customer pain story are converging on the same constraint: the company is running headlong into the economics of scarce frontier compute.
2026
The same period that produced reliability complaints also produced discussion of source-code leaks and security, which deepens the case that Anthropic's coding products were under visible stress.
2026
The New Yorker asks whether Anthropic is using constitutional language to legitimize private power, which is a cleaner outside framing of the governance concern than most tech press coverage manages.
2026
Anthropic reduced effective usage during peak hours even for paid plans, which is the exact kind of quiet economic redefinition that turns a premium product into a moving target.
2026
Anthropic pushed Claude Code further toward self-managed permissions just as users were already questioning its reliability and behavior on real engineering work.
2026
Anthropic’s own help center confirms the temporary March 2026 off-peak promo, which reads less like generosity once you place it next to later peak-hour tightening and third-party cutoffs.
2026
Anthropic’s enterprise revenue acceleration strengthens the business case while also making every later policy softening and government compromise more economically understandable.
2026
Thompson’s central question is whether Anthropic can simultaneously claim frontier importance and reserve final say over sovereign military use, which directly supports the site’s private-power thesis.
2026
TIME reported the core change plainly: the company removed the most legible hard stop in the RSP after years of using it as proof of superior restraint.
2026
The Palantir partnership is the cleanest evidence that Anthropic’s national-security posture was operational long before the public fight made it impossible to ignore.
2026
Anthropic’s own risk messaging was becoming more apocalyptic at the same time the company was deepening enterprise and government commercialization.
2026
The New Yorker captures Anthropic’s self-image as an interpretability lab forced into commercialization, which is exactly the tension driving the site’s critique.
2026
The books case did not end the legal overhang; it created a template for a broader music-side piracy story built on the same provenance concerns.
2025
Stratechery’s aligned-enterprise framing is useful because it makes explicit what the homepage should show: Anthropic is increasingly a coding-and-enterprise company first.
2025
AP gives the factual version of Anthropic’s multicloud reality: AWS is still primary, but Google capacity is becoming too large to describe as incidental.
2025
The company’s own engineering postmortem is useful because it proves recurrent quality problems were serious enough to require a formal public explanation.
2025
Reuters makes the point cleanly: Anthropic’s fair-use talking point survived, but the piracy-based claims were serious enough to resolve with a landmark settlement.
2025
WIRED’s OpenAI-access story matters because it shows Anthropic enforcing terms around the same rival benchmarking behavior that remains standard across the frontier-model market.
2025
The class-certification order is one of the harshest public documents in the entire Anthropic record because it describes the acquisition behavior as deliberate and scalable.
2025
The Authors Guild response is valuable because it captures the litigation narrative from the side most focused on the provenance story Anthropic would prefer to subordinate.
2025
This is still the key legal document: Anthropic got a doctrinal win on training while keeping a separate, uglier story about pirate-library acquisition fully alive.
2025
Thompson’s Big Five check-in is useful because it shows how Anthropic’s enterprise-first positioning changed the economics of dependence without removing the dependence itself.
2025
Anthropic’s product story was increasingly about agents and coding workflows, which is important because it explains both the revenue mix and the sensitivity to reliability regressions.
2025
Anthropic’s own argument to regulators made the dependence problem unusually explicit: Google’s ability to invest in the company was presented as competitively important.
2024
The MCP launch matters because it shows Anthropic moving from model vendor to ecosystem gatekeeper, widening Claude’s strategic footprint well before the later agent boom.
2024
The Palantir-AWS announcement shows Anthropic was already willing to place Claude inside U.S. intelligence and defense workflows, making the later Pentagon dispute a fight over limits, not participation.
2024
Anthropic's public call for mandatory safety tests reads both as genuine policy advocacy and as an attempt to translate its safety identity into industry rule-making leverage.
2024
Anthropic’s own launch language made the ambition explicit: Claude was no longer just a chatbot but a system meant to operate software, take actions, and justify a more agentic premium.
2024
WIRED is useful here because it translates Anthropic’s benchmark-heavy computer-use pitch into plain language: promising, but not independently verified and still far from human reliability.
2024
The October 2024 RSP revision is where the safety policy became more elastic in its own words, adding nuance and implementation flexibility while preserving the branding value of a formal commitment.
2024
Dario’s own essay is the clearest primary-source record of his optimism turn: AI is framed not only as dangerous and urgent, but as the engine of a compressed century of human progress.
2024
The elections post is useful because it shows Anthropic translating safety talk into deployed controls, policy copy, and public process claims at the exact moment AI integrity became politically salient.
2024
The UK competition review is a useful external check on the dependence story: Amazon’s stake and governance rights were material enough to force a formal regulator to assess the relationship.
2024
The authors’ complaint is one of the strongest early public legal documents on the provenance issue, because it directly alleges large-scale use of copyrighted and pirated books to train Claude.
2024
This interview is valuable because Dario openly ties concentration, state capacity, inequality, and scaling pressure to Anthropic’s competitive position instead of pretending the company sits outside normal frontier-market incentives.
2024
Anthropic’s 3.5 Sonnet launch sharpened the product identity that later drove revenue: faster iteration, stronger coding, and benchmark-led positioning aimed squarely at daily work.
2024
TIME’s governance feature is one of the best mainstream treatments of the LTBT because it shows how much of Anthropic’s differentiation rested on structure, signaling, and narrative discipline.
2024
TIME’s rivalry framing is useful because it shows that Anthropic’s investors were never just investors; they were platform actors with their own product and distribution agendas.
2024
The Verge makes the frenemy structure legible: Anthropic was strategically useful to Amazon precisely because Amazon also needed leverage in the frontier-model race.
2024
The New Yorker provides the social and ideological backdrop for Anthropic’s safety politics, which helps explain why the company’s moral positioning resonated so strongly with capital and media.
2024
The Claude 3 launch is where Anthropic fully embraced benchmark competition as market communication, pairing safety branding with explicit performance supremacy claims.
2023
Anthropic sold Claude 2.1 on a now-familiar bundle: larger context, lower hallucination rates, tool use, and pricing changes framed as reliability for enterprise buyers.
2023
Anthropic’s own Copyright Office comment is useful because it records the company’s fair-use position in its own voice before the later books cases hardened into courtroom fights.
2023
The Fortune interview captures the cleanest public account of Anthropic’s founding split: safety, trust, and incentive alignment were presented as the reasons a new lab had to exist.
2023
This early Stratechery read is still useful because it shows how obvious the cloud-distribution logic of the Anthropic deal was from the beginning.
2023
Compute scarcity, not just cash, was the center of gravity in the Amazon deal, which is why the “independent alternative” story always needed qualification.
2023
When Dario said he was not sure there were limits to AI, he made explicit the scaling worldview that sits underneath both Anthropic’s safety warnings and its drive toward larger systems.
2023
Anthropic’s Long-Term Benefit Trust is one of the strongest primary sources on how the company wanted to be seen: structurally different, financially insulated, and more governable than its rival.
2023
The original Responsible Scaling Policy is where Anthropic converted safety language into a concrete governance object that could be marketed, compared, and later softened.
2023
The company’s Amazon announcement is a primary source for how quickly Anthropic translated safety branding into a hyperscaler partnership story about access, scale, and infrastructure.
2023
Once Books3 became searchable, the training-data dispute stopped being abstract. Authors could identify titles, and the provenance problem became publicly legible.
2023
The Atlantic’s early Books3 reporting matters because it shows the piracy-source problem was publicly traceable well before the court record forced companies to answer for it.
2023
The Senate hearing is the cleanest 2023 public record of Anthropic arguing for regulation without slowing frontier progress, while treating misuse and geopolitical competition as central policy facts.
2023
The Claude 2 launch fused frontier benchmark signaling with broader public access, establishing a pattern Anthropic would keep using: capability scores, long context, coding improvements, and safety claims in one packet.
2023
This page is a primary-source explanation of the company’s moral branding: Anthropic was not just training a model, it was publishing a value system and asking the market to trust it.
2023
Claude’s first broad release is where Anthropic’s research identity became a commercial interface, packaging steerability and harmlessness as reasons to choose its assistant over competitors.
2023
This essay is one of the clearest early statements of the Anthropic worldview: scale matters, catastrophic risk is real, and the lab should be judged on its ability to manage both.
2022
The Constitutional AI paper matters because it explains the technical and philosophical basis for Anthropic’s core claim that Claude could be aligned in a more legible and scalable way.
2022
Anthropic’s early red-teaming work is important because it shows the safety identity predates Claude’s commercial rise, which sharpens the question of what later changed and what merely scaled.
2022
The Series B announcement already contains the full Anthropic formula: more capital, more infrastructure, more scale, and a promise that safety research will justify all three.
2021
The founding claim about reliable, steerable systems was not fake. The point of the index is to show what happened after that promise met hyperscaler capital, enterprise demand, and defense work.
2020
The scaling-laws paper matters because it is the pre-Anthropic technical premise behind Dario’s later rhetoric: if scale predictably buys capability, then frontier AI becomes both a safety emergency and a race.
When a corporation elects to drape itself in the vestments of moral authority, to raise billions not on the promise of profit alone but on the claim that it, uniquely among its competitors, can be trusted with the most consequential technology of the century, it does not merely invite scrutiny. It demands prosecution.
Anthropic's founders have maintained, with the practiced solemnity of the recently converted, that they departed OpenAI on grounds of conscience. The record tells a grubbier story. Shouting matches in conference rooms. Accusations of board manipulation. A vice president's ultimatum about reporting structure, not model risk. The safety narrative was erected after the fact, a retrospective nobility laid over what the Wall Street Journal and the New Yorker have since documented as a power struggle between strong personalities who could no longer share an office, let alone a mission. What follows is a specific and evidenced examination of the distance between that founding promise and what came after: the governance structures announced as bulwarks and functioning as furniture, the safety commitments drafted with great fanfare and quietly fed into the shredder when the contracts arrived.
The question is not whether Anthropic is worse than its rivals. It may not be. The question is whether a company that sold the world its conscience ever actually had the inventory.
Read Merchants of Safety
Anthropic may be a real company with real revenue and real technical accomplishments. It may also be a terrible investment. These two facts coexist more comfortably than an S-1 risks disclosure would suggest. The public story is cleaner, nobler, and more tightly managed than the business underneath it, and the distance between the two is where ordinary investors lose their money.
Our five-year analysis tracks what actually happened from 2021 to 2026. The cautious disclosures gave way to narrative management. The safety commitments aged like milk. The release behavior drifted, quietly and then not quietly at all, from every prior public commitment. We mapped the divergence so you would not have to take anyone's word for it, including ours.
Read the full analysis