It's interesting that Amazon don't appear interested in acquiring Anthropic, which would have seemed like somewhat of a natural fit given that they are already partnered, Anthropic have apparently optimized (or at least adapted) for Trainium, and Amazon don't have their own frontier model.
It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
I don't see pure "AI" plays like OpenAI and Anthropic surviving as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
LOL of course they don't want to own Anthropic, else they themselves would be responsible for coming up with the $10s of billions in Monopoly money that Anthropic has committed to pay AMZN for compute in the next few years. Better to take an impressive looking stake and leave some other idiot holding the buck.
I think you know what you wrote is wrong. It's called holding the bag.
Amazon also uses Claude under the hood for their "Rufus" shopping search assistant which is all over amazon.com.
It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.
I wonder if someone has already made an OpenRouter-esque service that can connect Claude Code to this network of chat widgets. There are enough of them to spread your messages out over to cover an entire Claude Pro subscription easily.
Are you sure? While Amazon doesn't own a "true" frontier model, they have their own foundation model called Nova.
I assume that if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume that if their consumer-facing AI is using Claude at all, it would be a Sonnet or Haiku model from 1+ versions back, simply due to cost.
After watching The Thinking Game documentary, maybe Amazon has little appetite for "research" companies that don't actually solve real world problems, like Deepseek did.
I think they’re waiting for bargain bin deals once the bubble collapses.
Would have made a lot of sense a few years ago, but not now.
Why are you assuming Anthropic is for sale? They have a clear path to profitability, booming growth, and a massive and mission driven founding team.
They could make more money by keeping control of the company, and they'd retain that control too.
Assuming by "they" you mean current shareholders (who include Google and Amazon and VCs) if they are selling at least in part, why would at least some of them not be willing to sell their entire stakes?
> They could make more money by keeping control of the company, and they'd retain that control too.
It depends on how much they can sell for.
> They have a clear path to profitability
I'd love to see evidence for such a thing, because it's not clear to me at all that this is the case.
I personally think they're the best of the model providers but not sure if any foundation model companies (pure play) have a path to profitability.
Why exit now and become a stuffed, AI-driven animal when you can keep running this ship yourself, doing your dream job and getting all the woos and panties?
Maybe Anthropic simply don’t want to be acquired
Amazon and Microsoft are protecting themselves from the bubble.
Yes, repackaging and reselling AI is a starkly better business than creating frontier models
Lol, no one would want to buy that trash.
Same w/ Perplexity.
Does this mean that Anthropic has more than reached AGI, seeing as OpenAI has officially defined "AGI" as any AI that manages to create more than a hectocorn's worth (100 unicorns, or $100B) in economic value?
If they have reached AGI (whatever the definition), we should be prioritizing looking for signs of misanthropy.
That S-1 is gonna make for a fun read. It'll make Adam Neumann blush.
Dario Amodei gives off strong Adam Neumann vibes. He claimed "AI will replace 90% of developers within 6 months" about a year ago...
That WeWork S-1 was gold
Elevating the world's consciousness! https://www.wework.com/newsroom/wecompany
I was thinking this is going to happen because last night I got an email about them fixing how they collect sales taxes. Having been part of a couple of IPO/acquisitions, I thought to myself: "Nobody cares about sales taxes until they need to IPO or sell."
I love Claude, but looking at Google it seems like it will just be a matter of time before Google/Gemini is the better product, just going by how much Google has improved its AI game in the last couple of months. I'm putting my money on Google; I assume the reason Anthropic is doing an IPO right now is to be able to cash in on the investment before Google surpasses them.
It's a hot take, I know :D
It could be smart for them to go public now, with so much talk of a bubble or a potential stock market correction.
"Be first, be smarter, or cheat" well. Being first might really be the best game theory move if the collapse will start from you.
But they aren't the first. Google is the first frontier model lab to go public.
...this -> those bags won't hold themselves now, will they?
Source: https://giftarticle.ft.com/giftarticle/actions/redeem/3ffefa...
Retail investors yoloing into AI at peak bubble vibes sounds about right
Just how much of the market do retail investors control? I thought they were a drop in the bucket.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept hyping my company (successfully) and drove the share price from $10 to $1,000 thanks to retail hype, I could 100x the value of my company, let's say from $100m to $10B, while the amount of money actually changing hands would be minuscule in comparison.
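For intuition, here is a rough back-of-the-envelope sketch in Python; the share count, prices, and traded fraction are invented for illustration, not figures for any real company.

    # Toy example: a small float trading at ever-higher prices can imply a huge
    # market cap even though comparatively little money actually changes hands.
    shares_outstanding = 10_000_000   # hypothetical total share count
    old_price, new_price = 10, 1_000  # $10 -> $1,000 per share
    traded_fraction = 0.01            # assume only 1% of shares ever trade

    old_cap = shares_outstanding * old_price   # $100M
    new_cap = shares_outstanding * new_price   # $10B "paper" valuation
    dollars_traded = int(shares_outstanding * traded_fraction * new_price)

    print(f"market cap: ${old_cap:,} -> ${new_cap:,}")
    print(f"rough dollars traded at the new price: ${dollars_traded:,}")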
Retail is a big deal these days. Used to be sub 10%, now it’s in the 30-40% of daily volume range IIUC.
You can easily look up the numbers you are asking for, the TLDR is that the volume in most stocks is high enough that you can’t manipulate it much. If it’s even 2x overpriced then there’s 100m on the table for whoever spots this and shorts, ie enough money that plenty of smart people will be spending effort on modeling and valuation studies.
This is the real note - if the company was truly valuable, they wouldn't IPO, they'd get slurped up by someone big.
Modern IPOs are mainly dumping on retail and index investors.
Index investors aren't exposed to IPOs, since the common indexes (SPX etc) don't include IPOs (and if you invest in a YOLO index that does, that's on you).
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
https://www.ey.com/en_us/insights/ipo/trends
> In a statement, an Anthropic spokesperson said: “We have not made any decisions about when, or even whether, to go public.”
They are going public.
Well, they have to. Every grift needs bagholders.
If they get to be a memestock, they might even keep the grift going for a good while. See Tesla as a good example of this.
Anthropic is burning roughly $1B a quarter right now, has no clear path to profitability, and is still riding on the same “we’re the safe AI” narrative that’s starting to wear thin as everyone else catches up on safety tooling. Their revenue run-rate is reportedly in the low single-digit billions at best, which would put them at a price-to-sales multiple of 50–100× if they actually hit that valuation. For context, OpenAI at its last round was “only” ~$80B on similar (or higher) revenue expectations.
The moat feels increasingly shaky too. Claude is great, but the gap to GPT-4o, Gemini 2, and the open-source frontier is shrinking fast, and they’re still heavily dependent on AWS credits rather than owning their own infra like Google or Meta. At $300B they’d be priced for perfection in a world where perfection doesn’t exist yet.
I’d be shocked if it actually prices anywhere near that. Curious what others think.
> reportedly in the low single-digit billions at best
They are expected to hit $9 billion by end of year, meaning the valuation multiple is only about 30x. Which is still steep, but at that growth rate not totally unreasonable.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
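To make the multiple arithmetic explicit, here's a quick sketch plugging the reported ~$300B valuation against a few revenue run-rate scenarios (these are the figures claimed in the comments above, not audited numbers):

    # Price-to-sales multiple = valuation / annual revenue run-rate.
    valuation = 300e9  # the reported ~$300B IPO valuation

    for revenue in (3e9, 6e9, 9e9):  # run-rate scenarios discussed above
        multiple = valuation / revenue
        print(f"${revenue/1e9:.0f}B run-rate -> {multiple:.0f}x price-to-sales")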
The optimistic view is that Anthropic is one of about four labs in the world capable of generating truly state-of-the-art models. Also, Claude Code is arguably the best tool in its category at the moment. They have the developer market locked in.
The problem as I see it is that neither of those things are significant moats. Both OpenAI and Google have far better branding and a much larger user base, and Google also has far lower costs due to TPUs. Claude Code is neat but in the long run will definitely be replicated.
The missing piece here is that Anthropic is not playing the same game. Consumer branding and a larger user base are concerns for OpenAI vs Google. Personal chatbot/companion/search isn’t their focus.
Anthropic is going for the enterprise and for developers. They have scooped up more of the enterprise API market than either Google or OpenAI, and almost half the developer market. Those big, long contracts and integration into developer workflows can end up as pretty strong moats.
> Claude Code is arguably the best tool in its category at the moment. They have the developer market locked in.
I am old enough (> 1 year old) to remember when Cursor had won the developer market from the previous winner copilot.
Google or Apple should have locked down Anthropic.
> Cursor had won the developer market from the previous winner copilot
It’s a fair point, but the counter-point is that back then these tools were IDE plugins you could code up in a weekend, i.e. closer to a consumer app.
Now Claude Code is a somewhat mature enterprise platform with plenty of integrations that you’d need to chase too, and long-term enterprise sales contracts you’d need to sell into. I.e. much more like an enterprise SaaS play.
I don’t want to push this argument too far, as I think their actual competitors (e.g. Google) could crank out the work required in 6-12 months if they decided to move in that direction, but it does protect them from some of the frothy VC-funded upstarts that simply can’t structurally compete in multi-year enterprise SaaS.
If they had, they would have killed it.
Google should be stomping everyone else, but its ad addiction in search will hold it back. Innovator's dilemma...
Cursor still wins over Claude Code because Cursor has privacy mode
> They have the developer market locked in.
When has anything ever been 'locked in'? If someone comes along with a better tool, people will switch.
> They have the developer market locked in
Developers will jump ship to a better tool in the blink of an eye. I wouldn't call it locked in at all. In fact, people do use Claude Code and Codex simultaneously in some cases.
Most of the secret sauce of Claude Code is visible to the world anyway, in the form of the minified JavaScript bundle they send. If you’re ever wondering about its inner workings you can simply ask it to deminify itself
> the gap to GPT-4o, Gemini 2 ... is shrinking fast
Are you ... aware that OpenAI and Google have launched more recent models?
Almost every single AI doomer I listen to hasn't updated any of their priors in the last 2 years. These people are completely unaware of what is actually happening at the frontier or how much progress has been made.
Their ignorance is your opportunity.
That jumped out at me too. Like a time-traveling comment or something!
Like "someone" who's knowledge cutoff is from a while back...
This is what happens when someone copies and pastes their old comment, note the other tells.
Nope, more like an LLM that doesn't know about GPT-5 and Gemini 3
You haven’t actually looked at their fundamentals. They’re profitable serving current models including training costs and are only losing money on R&D for future training runs, but if you project future revenue growth onto future generations of models you get a clear path to profitability.
They charge higher prices than OpenAI and have faster-growing API demand. They have great margins on inference compared to the rest of the industry.
Sure, the revenue growth could stop, but it hasn’t and there is no reason to think it will.
> They’re profitable serving current models including training costs
I hear this a lot; do you have a good source (apart from their CEO saying it in an interview)? I might have more faith in him but, checks notes, it's late 2025 and AI is not writing all our code yet (amongst other mental things he's said).
1. Sounds like exactly when early investors and insiders would want to cash in and when retail investors who “have heard of the company and like the product” will buy without a lot of financial analysis.
2. A 300bn IPO can mean actually raising 300bn by selling 100% of the company. But it could also mean selling just 1% for 3bn, right? Which seems like a trivial amount for the market to absorb, no?
> A 300bn IPO ... raising 3bn
It would be so massively oversubscribed that it would become a $600bn company by the end of the day (which is a good tactic for future fundraising too).
I suspect if/when Anthropic does its next raise VCs will be buyers still not sellers.
Is this comment created by AI? Account created in the last 24 hours, lots of long AI-speak.
Okay, let’s see you guys get past the inference costs disclosure. According to the WSJ it is enough to kill the frontier shop business model. It’s one of the biggest things blocking OpenAI.
https://www.wsj.com/tech/ai/big-techs-soaring-profits-have-a...
Inference costs aren't a problem; selling inference is almost certainly profitable. The problem is that it's (probably) not profitable enough to cover the training and other R&D costs.
Don't forget all the other costs of their business, like paying sales and solutions people (expensive, not going away any time soon).
You did not parse that article properly. It regurgitates only what everyone else keeps saying: when you conflate R&D costs with operating costs, then you can say these companies are 'unprofitable'. I'd propose that with proper GAAP accounting they are profitable right now; by proper I mean that you amortize the costs of R&D over the useful life of the models as best you can.
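As a toy illustration of that amortization argument (the dollar figures and the two-year useful life below are made-up assumptions, not Anthropic's actual numbers):

    # Expensing a training run all at once vs. amortizing it over the model's
    # useful life changes the reported profit picture for the same cash flows.
    training_cost = 2_000_000_000             # hypothetical cost of one training run
    useful_life_quarters = 8                  # assume the model earns for ~2 years
    quarterly_inference_profit = 400_000_000  # hypothetical inference margin per quarter

    expensed_quarter = quarterly_inference_profit - training_cost
    amortized_quarter = quarterly_inference_profit - training_cost / useful_life_quarters

    print(f"quarter with full expensing:  {expensed_quarter/1e9:+.2f}B")
    print(f"quarter with amortization:    {amortized_quarter/1e9:+.2f}B")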
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling over the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid off, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any company in history.
All that said, be cautious shorting these stocks when they go public.
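A crude sketch of that "each R&D cycle has paid for itself" claim, with invented numbers for both the 10x-per-cycle training spend and the assumed payback multiple:

    # Toy model: each generation's training spend is 10x the previous one, but the
    # inference margin it unlocks is assumed to exceed it by a fixed multiple.
    train_spend = 100e6        # hypothetical first-generation training cost
    payback_multiple = 1.5     # assumed inference-margin return per dollar trained

    for gen in range(1, 5):
        inference_margin = train_spend * payback_multiple
        print(f"gen {gen}: trained for ${train_spend/1e9:.2f}B, "
              f"returned ${inference_margin/1e9:.2f}B in inference margin")
        train_spend *= 10      # the 10x-per-cycle scaling mentioned above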
Do you mean as part of going public they need to make public how much they spend on inference versus how much they make?
Yes, to IPO you have to submit an S-1 form, which requires the last 3 years of your full financials and much more. You can’t just IPO without disclosing how your business works, whether it makes or loses money, and how much.
AGI will become IPO and everyone will forget and move on.
This seems contrary to their stated goal to prioritize AI safety.
It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders.
Unless you're a benefit corp, this is true for private companies as well. Quick q - which of the AI companies are benefit corps?
"We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers."
- Google cofounders Larry Page and Sergey Brin
Then came the dot-com bubble.
Do you think they currently exist to prioritize AI safety? That shit won’t pay the bills, will it? Then they don’t exist. Goals are nice, OKRs yay, but at the end of the day, we all know the dollar drives everything.
No that's not what they think, that's why they used sarcasm.