_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   How AI conquered the US economy: A visual FAQ
       
       
        cess11 wrote 5 hours 14 min ago:
         On the face of it, it looks positively Habsburgian.
       
        Animats wrote 7 hours 8 min ago:
        See "Spending on AI data centers is so massive that it’s taken a
        bigger chunk of GDP growth than shopping—and it could crash the
        American economy"[1]
        The US is having a giant AI boom and a recession in the rest of the
        economy.
        
         Historically, the classic example of this is the Railway Mania of the
         mid-19th century.[2] That started in 1830, with the Liverpool and
        Manchester Railway.[3] This was when the industrial revolution got out
        of beta. There were earlier railroads, but with dual tracks, stations,
        signals, tickets, schedules, and reasonably good steam engines, the
        Liverpool and Manchester worked like a real service. It was profitable.
        Then lots of others started building and over-building railroads, with
        varying degrees of success. See Panic of 1847.
        
        It was really too early for good railroads. Volume production of steel
        didn't exist. Early railroads were built with wood and iron, not very
        well. Around 1880-1900, everything was rebuilt in steel, and got
        bigger, better, and safer.
        
        Consider the early Internet. We had TCP/IP working across the US in the
        early 1980s, but it wasn't a big deal commercially for another 10-15
        years, it wasn't everywhere until the early 2000s, and it wasn't out of
        bubble mode until 2010 or so. [1] [2]
        
   URI  [1]: https://fortune.com/2025/08/06/data-center-artificial-intellig...
   URI  [2]: https://en.wikipedia.org/wiki/Railway_Mania
   URI  [3]: https://youtu.be/pDEnsraYx3k?t=505
       
          jcgrillo wrote 1 hour 42 min ago:
           The really scary thing is that all the value in the AI boom is
           predicated on the belief that the technology is "early" and that it
           will improve over time. We're seeing the opposite: all the competing
          models are basically converging on the same benchmark performance
          numbers, as we saw yesterday with the gpt-5 debacle. This suggests
          that performance is actually topping out, which makes intuitive sense
          if advancements in LLMs are proportional to their training data.
          They've already used up all the data. So it very well could be what
          we see right now is basically as good as it gets, or at least
          approximately so. The market is not ready for that.
       
        scotty79 wrote 7 hours 39 min ago:
         Measuring such things in dollars rubs me the wrong way. Is using
         inflation-adjusted dollars common, or do people just compare nominal
         values?
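         
         A minimal sketch of the nominal-vs-real comparison asked about above:
         deflate a past nominal amount by a price index. The index values
         below are invented for illustration, not actual CPI data.
         
```python
# Convert a nominal dollar amount into "real" (inflation-adjusted)
# dollars of a chosen base year using a price-index deflator.
# The index values used here are hypothetical, not real CPI figures.

def to_real(nominal: float, index_then: float, index_base: float) -> float:
    """Express a past nominal amount in base-year dollars."""
    return nominal * index_base / index_then

# $100 spent when the index stood at 80, restated in dollars of a year
# when the index stands at 120:
print(to_real(100.0, index_then=80.0, index_base=120.0))  # 150.0
```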
       
        throwmeaway222 wrote 9 hours 55 min ago:
        AI bills are in the thousands of dollars per day for some small AI
        startups, and in the 10-100k range per day for large companies.
        
         So the many people who have speculated that OpenAI has no moat are
         wrong.
       
          antonvs wrote 5 hours 57 min ago:
          A moat is protection from competition. What you’ve described is not
          a moat.
       
          jjtheblunt wrote 9 hours 40 min ago:
          What is the moat you are implying?  (affordability?)
       
        knowaveragejoe wrote 10 hours 14 min ago:
        > METR also conducted an in-depth study that asked experienced
        developers to code with a popular AI assistant. After they finished
        their tasks, the developers claimed that using the AI had made them 20
        percent more productive. But independent evaluators in the study
        actually concluded that using AI did the opposite: it increased task
        completion time by about 20 percent.
        
        This somewhat reflects my experience... I can totally see the
        back-and-forth dance taking longer in some cases.
        
        But I also think there is more than efficiency being unlocked here.
        Sure, a developer might have cranked out a rough MVP in less time, but
        with this they're also often continuously updating a README, tests and
        other infrastructure as they go.
        
         One could argue about whether that's its own footgun - relying on
         tests that don't really test what they should, letting critical bugs
         through later.
       
        johnnienaked wrote 13 hours 2 min ago:
        All I see is B-U-B-B-L-E
       
        ThinkBeat wrote 13 hours 51 min ago:
         The fact that there is massive spending on AI in the tech sector,
         isn't that just a -possible- sign of another bust coming down the
         road?
        
        We have seen it before, again and again.
       
        jameslk wrote 14 hours 14 min ago:
         I found this analysis insightful: [1]
         
         > However, this pace is likely unsustainable going forward. The sharp
         acceleration in capex is likely behind us, and the recent growth rate
         may not be maintained. Any sustained weakness in final demand will
         almost certainly affect future investment, as AI demand ultimately
         depends on business revenues and profits, which are tied to nominal
         GDP. Realized and forecasted capex remain elevated, while free cash
         flow and cash and cash equivalents are declining for hyperscalers.
        
   URI  [1]: https://x.com/dampedspring/status/1953070287093731685
       
        maerF0x0 wrote 14 hours 21 min ago:
        > “They’re generating unprecedented amounts of free cash flow,”
        Cembalest told me. “They make oodles and oodles of money, which is
        why they can afford to be pouring hundreds of billions of dollars of
        capital spending each year into AI-related R&D and infrastructure.”
        
         IMO this should be a trigger for investors that they have not been
         receiving their profits; instead, the profits are being dumped into
         CEOs' next big bets, which will fuel their stock-based compensation
         gains. The government is also culpable here, for creating tax
         incentives and for the lack of laws requiring profits to be returned
         as dividends (they can always be DRIP'd back into the company as new
         shares if desired; it's absurd to say it's better for investors when
         the alternative actually gives more choice).
       
        jimmydoe wrote 14 hours 28 min ago:
         This matches the tech job market: if you are not in a top corp or lab,
         your hard work is most likely subsidizing the $1.5M paychecks of
         OpenAI employees.
       
        csours wrote 14 hours 34 min ago:
        > Nobody can say for sure whether the AI boom is evidence of the next
        Industrial Revolution or the next big bubble. All we know is that
        it’s happening.
        
         In hindsight, it will be clear, and future generations (if any exist)
         will ask: "Why didn't you understand what was happening at the time?"
        
        My answer: Noise. Just because you can find someone who wrote down the
        answer at the time, doesn't mean that they really understood the
        answer, at least not to the extent that we will understand with
        hindsight.
        
        Future history is contingent.
       
        GolfPopper wrote 15 hours 4 min ago:
         Remember, the appropriate way to parse use of "the economy" in popular
         press is to read it as "rich people's yacht money".
       
        krunck wrote 22 hours 0 min ago:
         Please stop using stacked bar charts where individual lines (plus a
         Total line) would help the poor reader comprehend the data better.
       
        ChrisArchitect wrote 22 hours 24 min ago:
        Related:
        
        AI is propping up the US economy
        
   URI  [1]: https://news.ycombinator.com/item?id=44802916
       
          GoatInGrey wrote 15 hours 33 min ago:
          I'm noticing how that article is myopically discussing equity
          valuations rather than actual economic output and worker
          productivity.
       
        doyouevensunbro wrote 22 hours 44 min ago:
        > because the oligarchs demanded it
        
        There, summed it up for you.
       
          voidhorse wrote 11 hours 23 min ago:
           Not sure why you were downvoted. It's true and it actually answers
          the question in the article title, which the article doesn't even
          explore, let alone answer.
       
        biophysboy wrote 22 hours 55 min ago:
        >“The top 100 AI companies on Stripe achieved annualized revenues of
        $1 million in a median period of just 11.5 months—four months ahead
        of the fastest-growing SaaS companies.”
        
        This chart is extremely sparse and very confusing. Why not just plot a
        random sample of firms from both industries?
        
        I'd be curious to see the shape of the annualized revenue distribution
        after a fixed time duration for SaaS and AI firms. Then I could judge
         whether it's fair to filter by the top 100. Maybe AI has a rapid decay
        rate at low annualized revenue values but a slower decay rate at higher
        values, when compared to SaaS. Considering that AI has higher marginal
        costs and thus a larger price of entry, this seems plausible to me. If
        this is the case, this chart is cherry picking.
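         
         The selection worry above can be made concrete with a toy simulation:
         under invented lognormal revenue distributions where "AI" has a
         heavier right tail but a lower typical value than "SaaS", a top-100
         comparison favors AI even though the median firm does not. Every
         parameter here is hypothetical.
         
```python
# Toy illustration of a top-k selection effect: two hypothetical
# revenue distributions, where "AI" has a heavier right tail but a
# lower median than "SaaS". All parameters are invented.
import random
import statistics

random.seed(0)

# Lognormal draws: AI gets higher variance (heavier tail), lower median.
ai = [random.lognormvariate(0.0, 2.0) for _ in range(5000)]
saas = [random.lognormvariate(0.5, 1.0) for _ in range(5000)]

def top_mean(xs, k=100):
    """Mean of the k largest values, mimicking a 'top 100 firms' cut."""
    return statistics.mean(sorted(xs, reverse=True)[:k])

# The typical firm looks better in SaaS...
print(statistics.median(ai) < statistics.median(saas))
# ...while the top-100 cut looks better in AI.
print(top_mean(ai) > top_mean(saas))
```
         
         With these assumed distributions the two comparisons come out opposite
         ways, which is exactly the kind of distributional difference a
         top-100 chart can hide.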
       
        hackable_sand wrote 22 hours 59 min ago:
        What about food and housing? Why can't America invest in food and
        housing instead?
       
          booleandilemma wrote 51 min ago:
          Having food and housing would make the population too comfortable.
          They want everyone a little on edge. You don't work, you don't eat.
       
          daedrdev wrote 15 hours 12 min ago:
           The US systematically taxes and forbids new housing in many ways, as
          local voters desire. Setback requirements, 100K+ hookup costs,
          stairway standards, density limits, parking minimums and regulations,
          community input, allowing rejection of new housing despite it
          following all rules, abuse of environmental regulations (which ends
          up hurting the environment by blocking density), affordable housing
           requirements (a tax on each new housing block to fund affordable
           units on the side) all prevent new housing from being built.
       
          GoatInGrey wrote 15 hours 37 min ago:
          Because investing in housing means actually changing things.  There's
          a "Don't just do something, stand there!" strategy of maximizing
          comfort and minimizing effort, that must be overcome.
       
          righthand wrote 22 hours 2 min ago:
          Is anyone starving in America? Why would there need to be focus on
          food production? We have huge food commodities.
       
            vdupras wrote 16 hours 2 min ago:
            Let them eat cake, right?
            
   URI      [1]: https://www.ers.usda.gov/topics/food-nutrition-assistance/...
       
          margalabargala wrote 22 hours 43 min ago:
          America has spent a century investing in food. We invested in food so
          hard we now have to pay farmers not to grow things, because otherwise
          the price crash would cause problems. Food in America is very cheap.
       
            jimt1234 wrote 7 hours 2 min ago:
            > Food in America is very cheap.
            
            And that's a whole different problem. Cheap != inexpensive.
       
            hackable_sand wrote 21 hours 41 min ago:
            It's reassuring to be reminded that every child in America must
            justify their existence or starve to death.
       
              rank0 wrote 9 hours 32 min ago:
              Christ…
              
              What does that even mean and what do you want changed?
       
              margalabargala wrote 20 hours 8 min ago:
              Okay, that's too far. That's not true at all.
              
              Children in America do not starve to death. There is no famine,
              economically manmade or otherwise.
              
              This is America. We will happily allow and encourage your child
              to go into arbitrary amounts of debt from a young age to be fed
              at school.
       
        bravetraveler wrote 23 hours 1 min ago:
        They mention rate of adoption, compared to the internet. Consider the
        barriers to entry. Before we all got sick of receiving AOL CDs, the
        prospect of 'going online' was incredibly expensive and sometimes
        laborious.
        
        More people subscribe to/play with a $20/m service than own/admin
        state-of-the-art machines?! Say it ain't so /s
       
          thewebguyd wrote 21 hours 9 min ago:
          > More people subscribe to/play with a $20/m service than own/admin
          state-of-the-art machines?! Say it ain't so /s
          
          The problem is, $20/m isn't going to be profitable without better
          hardware, or more optimized models. Even the $200/month plan isn't
          making money for OpenAI. These companies are still in the "sell at a
          loss to capture marketshare" stage.
          
          We don't even know if being an "AI Company" is viable in the first
          place - just developing models and selling access. Models will become
          a commodity, and if hardware costs ever come down, open models will
          win.
          
          What happens when OpenAI, Anthropic, etc. can't be profitable without
          charging a price that consumers won't/can't afford to pay?
       
        vannevar wrote 23 hours 12 min ago:
        >Nobody can say for sure whether the AI boom is evidence of the next
        Industrial Revolution or the next big bubble.
        
        Like the Internet boom, it's both. The rosy predictions of the dotcom
        era eventually came true. But they did not come true fast enough to
        avoid the dotcom bust. And so it will be with AI.
       
          GoatInGrey wrote 15 hours 48 min ago:
          My suspicion is that there's a there there, but it doesn't align with
          the predictions.  This is supported by the tension between AI doom
          articles and the leading models experiencing diminishing performance
          gains while remaining error-prone.  This is to speak nothing of the
           apparent LLM convergence limit of a ketamine-addled junior
           developer, a boundary the models seem destined to approach
           indefinitely without ever breaching.
          
           The "bust" in this scenario would hit the valuations (P/E ratios)
           of both the labs and their enterprise customers, and of AI
           businesses dependent on exponential cost/performance growth curves
           in the models.  The correction would shake the dummies (poorly
           capitalized or scoped businesses) out of the tree, leaving only the
           viable business and pricing models standing.
          
          That's my personal prediction as of writing.
       
        OtherShrezzing wrote 23 hours 17 min ago:
        >Without AI, US economic growth would be meager.
        
        The assumption here is that, without AI, none of that capital would
        have been deployed anywhere. That intuitively doesn't sound realistic.
        The article follows on with:
        
        >In the last two years, about 60 percent of the stock market’s growth
        has come from AI-related companies, such as Microsoft, Nvidia, and
        Meta.
        
        Which is a statement that's been broadly true since 2020, long before
        ChatGPT started the current boom.  We had the Magnificent Seven, and
        before that the FAANG group. The US stock market has been tightly
         concentrated around a few small groups for decades now.
        
        >You see it in the business data. According to Stripe, firms that
        self-describe as “AI companies” are dominating revenue growth on
        the platform, and they’re far surpassing the growth rate of any other
        group.
        
        The current Venn Diagram of "startups" and "AI companies" is two mostly
        concentric circles. Again, you could have written the following
        statement at any time in the last four decades:
        
        > According to [datasource], firms that self-describe as “startups”
        are dominating revenue growth on the platform, and they’re far
        surpassing the growth rate of any other group.
       
          johnnyanmac wrote 6 hours 8 min ago:
          >The assumption here is that, without AI, none of that capital would
          have been deployed anywhere.
          
           With this recessionary behavior, it might not be that far-fetched an
           assumption. I'm not sure where else that money would flow, outside
           of being hoarded up in assets, if there weren't this big speculation
           everyone wants to take advantage of. Especially when you consider
           that there's so much money flowing into AI, but AI is not as of yet
           profitable.
       
            AlecSchueler wrote 2 hours 26 min ago:
             That's it: it's unusual times, and the AI boom could be covering
             things up. I personally divested completely from US businesses at
             the start of the year, but I can understand the allure of wanting
             to ride that particular gravy train.
       
          golly_ned wrote 9 hours 16 min ago:
          Derek Thompson is not well suited to this kind of work. He is much
          better suited to his usual lame, predictable, centrist, lukewarm
          commentary on tired political and social topics.
       
          johnnienaked wrote 13 hours 0 min ago:
          The capital would have, should have maybe, been deployed in stock
          buybacks and dividends. Investment already doesn't happen in the US,
          and that's an expected thing for a well-developed country with a
          trade deficit.
       
          camgunz wrote 15 hours 31 min ago:
          I think it's more likely the assumption is you'd expect a far more
          diversified market. If we're really in a situation where the
           rational, good-reasons move is to effectively ignore 98% of
          companies, that doesn't say good things about our economy (verging on
          some kind of technostate). You get into weird effects like "why
          invest in other companies" leading to "why start a company that will
          just get ignored" leading to even more consolidation and less
          dynamism.
       
            istjohn wrote 11 hours 3 min ago:
            But startups are thriving. That doesn't suggest decreasing dynamism
            to me. It suggests that there are abundant gains to be had by
            exploiting technological progress, and the legacy economy is not
            availing itself of these opportunities. A thriving tech startup
            sector is surely key to dynamism.
       
              camgunz wrote 4 hours 59 min ago:
              This metric has obvious problems, but 120/134 of YC's S25 batch
              are AI-based [0]. 90% is, I guess, better than 98%, but woof. So,
              depends on what you mean by "thriving", but if diversity factors
              in there at all then at least YC is proving you wrong.
              
              [0]: [1] (search for "AI" and it gives 120)
              
   URI        [1]: https://www.ycombinator.com/companies?batch=Summer%20202...
       
              BrenBarn wrote 5 hours 55 min ago:
              If "thriving" means "moving quickly towards being bought by one
              of the big companies" then that's an illusory diversity.
       
              johnnyanmac wrote 6 hours 6 min ago:
              AI startups are getting a lot of money. "Thriving" is a stretch
              from there unless the acquisition of money itself is the success
              bar.
              
              Meanwhile, I hear pretty much any startup not associated with AI
              is finding it harder to get funding.
       
              UncleOxidant wrote 8 hours 38 min ago:
              > But startups are thriving.
              
              What do you base this statement on? Is there data?
       
              Supermancho wrote 9 hours 1 min ago:
              > But startups are thriving
              
              I'm just some guy with an opinion. I worked in startups for 20
              years. Startups are called exciting or thriving or good bets
              because a tiny few are successful and even fewer are trying to
              compete with established companies. Capital is pumped into lots
               of little ones regardless of the technology du jour or market
              opportunity. They generally don't last. Statistically, if you
              were an AI startup from 6 years ago, you're long gone and you
              made out with what you could scrape together on the way out.
              Startups are thriving by feeding off dreams of grandeur, with
              very few happening upon the right combination of personality,
              capability and market enough to last for a decade. Is that
              thriving or thrashing? Don't confuse the velocity of gambling
              with the volume of opportunity.
       
              cman1444 wrote 9 hours 40 min ago:
              I agree, except that it seems dynamism is almost restricted to
              digital tech. I wish tech would spread its dynamism a little
              better into legacy industries, and give some productivity
              gains/disruption to those areas.
       
                bobthepanda wrote 4 hours 8 min ago:
                there's been plenty of disruption in traditional industries
                like retail, automotive, media, communications, etc.
                
                part of the problem is that the remaining set of industries is
                pretty tough to make dynamic using technology simply because
                the explosive market size isn't there for various reasons. if
                you wanted to disrupt aviation, for example, a plane takes tens
                of billions of dollars to bring to market, and an airline
                requires outlaying billions in capex on planes.
       
                  Supernaut wrote 2 hours 11 min ago:
                  Such a shame. The passenger aircraft industry could certainly
                  stand to be shaken up with some of that "move fast and break
                  things" startup dynamism!
       
          conductr wrote 15 hours 46 min ago:
          I think the money is chasing growth in a market that is mostly
          mature. Tech is really the only hope in that situation, so that's
          where the dollars land.
       
          rsanek wrote 20 hours 6 min ago:
          >assumption here is that, without AI, none of that capital would have
          been deployed anywhere. That intuitively doesn't sound realistic
          
          For the longest time, capex at FAANG was quite low. These companies
          are clearly responding specifically to AI. I don't think it's
          realistic to expect that they would raise capex for no reason.
          
          >a statement that's been broadly true since 2020, long before ChatGPT
          started the current boom
          
          I guess it depends on your definition of "long before," but the
          ChatGPT release is about mid-way between now and 2020.
          
          As for the startup vs. AI company point, have you read Stripe's
          whitepaper on this? They go into detail on how it seems like AI
          companies are indeed a different breed.
          
   URI    [1]: https://stripe.com/en-br/lp/indexingai
       
            trod1234 wrote 19 hours 47 min ago:
            The sunsetting of research tax breaks would explain why they threw
            everything into this.
            
             They also view labor as a replaceable cost, as most
             accountant-driven companies that no longer innovate do. They
             forget that if you don't hire people, and pay people, you don't
             have any sales demand, and this grows worse as the overall
             concentration and intensity of money in few hands grows. Most AI
             companies are funded on extreme leverage from banks that are
             money-printing, and this coincided with 2020, when deposit
             requirements were set to 0%, effectively removing fractional
             banking as a system.
       
          thiago_fm wrote 21 hours 49 min ago:
           I agree, at any time in US history there have always been those
           5-10 companies leading economic progress.
          
          This is very common, and this happens in literally every country.
          
           But their CAPEX would be much smaller: if you look at current CAPEX
           from Big Tech, most of it is NVidia GPUs.
          
           If a bubble is happening, when it pops, the depreciation applied to
           all that NVidia hardware will absolutely melt the balance sheets
           and earnings of all cloud companies, and of companies building
           their own data centers like Meta and X.ai.
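           
           The depreciation mechanics can be sketched with straight-line
           depreciation: GPU capex hits earnings as an annual expense, and an
           impairment (if the bubble pops) charges the remaining book value at
           once. All figures below are invented.
           
```python
# Sketch of how GPU capex flows through earnings: straight-line
# depreciation spreads the cost over an assumed useful life, and an
# impairment charge recognizes the remaining book value at once.
# All figures are hypothetical.

def straight_line(cost: float, life_years: int) -> float:
    """Annual depreciation expense, assuming zero salvage value."""
    return cost / life_years

capex = 40_000_000_000  # hypothetical GPU spend: $40B
life = 5                # assumed useful life in years

annual = straight_line(capex, life)   # $8B/year against earnings

# If the hardware is impaired after two years, the remaining book
# value is charged to earnings immediately:
remaining = capex - 2 * annual        # $24B one-time write-down
print(annual, remaining)
```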
       
            rwmj wrote 4 hours 11 min ago:
            And NVidia don't even make the GPUs!  They're all made on a brave
            little island off the coast of China, while 7 huge US companies
            shuffle them around and exchange vast amounts of money for them.
       
          onlyrealcuzzo wrote 22 hours 23 min ago:
          1. People aren't going to take on risk and deploy capital if they
          can't get a return.
          
          2. If people think they can get an abnormally high return, they will
          invest more than otherwise.
          
          3. Whatever other money would've got invested would've gone wherever
          it could've gotten the highest returns, which is unlikely to have the
          same ratio as US AI investments - the big tech companies did share
          repurchases for a decade because they didn't have any more R&D to
          invest in (according to their shareholders).
          
          So while it's unlikely the US would've had $0 investment if not for
          AI, it's probably even less likely we would've had just as much
          investment.
       
            sgt101 wrote 4 hours 34 min ago:
            A big dynamic in this is that the business models for two of the
            biggest players are at stake.
            
            Google is facing a significant danger that its search advertising
            business is going to just disappear. If people are using AI to find
             stuff or get recommendations, then they aren't using Google. Why
             should a photographer continue to spend $200 a month on ads when
             their clients are coming from OpenAI? Especially when OpenAI is
             using the organic search results. Meta is facing the same sort of
             issue: if the eyeballs aren't on Insta, the ad $$$ go somewhere
            else.
            
            So if they have money, and can get money, they will invest it to
            protect their businesses - all of it.
            
             MS and VCs are doing the opposite: they are investing with the
            idea of taking the attention that Google and Meta currently have,
            but they are also following the "I'm scared" signal that Google and
            Meta have put in the market.
       
              on_meds wrote 1 hour 9 min ago:
               Every major consumer-facing AI player (besides Anthropic and
               Apple) is currently incorporating advertisements into their
               product, or exploring how to, including OpenAI.
              
               So I’m not as optimistic that Google’s advertising business
               is in danger; instead, it’s just transforming.
       
            whoknowsidont wrote 11 hours 57 min ago:
            >1. People aren't going to take on risk and deploy capital if they
            can't get a return.
            
            I'm sorry, what? This has happened all the time, and in increasing
            volleys, since 2008.
       
            bayarearefugee wrote 20 hours 50 min ago:
            > 1. People aren't going to take on risk and deploy capital if they
            can't get a return.
            
            > 2. If people think they can get an abnormally high return, they
            will invest more than otherwise.
            
            Sounds like a good argument for wealth taxes to limit this natural
            hoarding of wealth absent unreasonably good returns.
       
              johnnyanmac wrote 6 hours 2 min ago:
              we 1000% need all kinds of wealth taxes. The money's been hoarded
              for decades and I don't think that money is going to the next
              generation when the boomers kick the bucket. Inheritance tax,
              ultra high income tax, taxes on stocks if possible. The
              government wants to defund food stamps and healthcare while
              giving the robber barons trillions in tax breaks. Something's
              gotta give.
       
            metalliqaz wrote 21 hours 30 min ago:
            >  1. People aren't going to take on risk and deploy capital if
            they can't get a return.
            
            This doesn't seem to align with the behavior I've observed in
            modern VCs.  It truly amazes me the kind of money that gets
            deployed into silly things that are long shots at best.
       
              disgruntledphd2 wrote 21 hours 28 min ago:
              When you think about all of VC being like 1% of a mostly boring
              portfolio it makes more sense (from the perspective of the people
              putting the money in).
       
            jlarocco wrote 21 hours 34 min ago:
            > it's probably even less likely we would've had just as much
            investment.
            
            I doubt it.  Investors aren't going to just sit on money and let it
            lose value to inflation.
            
            On the other hand, you could claim non-AI companies wouldn't  start
            a new bubble, so there'd be fewer returns to reinvest, and that
            might be true, but it's kind of circular.
       
              onlyrealcuzzo wrote 21 hours 2 min ago:
               Correct - that's why you'd put it in Treasuries, which have a
              positive real return for the first time in ~25 years - or, as I
              mentioned elsewhere - invest it somewhere else if you see a
              better option.
       
                BobbyJo wrote 14 hours 44 min ago:
                Which is an even better argument when you look at how yields
                have been behaving. AI is sucking the air out of the room.
       
                monocasa wrote 16 hours 3 min ago:
                 From a certain macro perspective, if no one is going to beat
                 the Treasury, where is the Treasury going to get that money?
       
                  msgodel wrote 5 hours 9 min ago:
                  From people who want to lend them money? Or do you mean where
                  is the Treasury going to find production?
       
                  bdangubic wrote 11 hours 58 min ago:
                  they’ll print it of course :)
       
                    johnnyanmac wrote 6 hours 4 min ago:
                    Sadly, this is exactly what the administration unironically
                    desires. We're arguably in a recession and Trump wants to
                    speed it up to a depression, ransacking the US and making
                    off with the money.
       
            jayd16 wrote 21 hours 45 min ago:
            Why is it "unlikely" that the alternative is not US investment by
            these US companies?
            
            The big US software firms have the cash and they would invest in
            whatever the market fad is, and thus, bring it into the US economy.
       
              onlyrealcuzzo wrote 21 hours 1 min ago:
              No - traditionally they return it as share buybacks, because they
              don't have any good investments.
       
                UncleOxidant wrote 8 hours 35 min ago:
                 Or, like Apple, they just sit on the cash waiting for better
                 opportunities.
       
                  onlyrealcuzzo wrote 1 hour 55 min ago:
                  Apple had huge share buybacks.
                  
                  Companies need to have a healthy pile of cash as a percentage
                  of their expenses.
                  
                  Apple just has enormous revenues and expenses.
       
                  int_19h wrote 6 hours 26 min ago:
                  It's not just Apple, they all sit on huge piles of cash.
       
          thrance wrote 22 hours 25 min ago:
          > > Without AI, US economic growth would be meager.
          
          > The assumption here is that, without AI, none of that capital would
          have been deployed anywhere. That intuitively doesn't sound
          realistic.
          
           That's the really damning thing about all of this: maybe all this
           capital could have been invested into actually growing the economy,
           instead of fueling this speculation bubble that will burst sooner or
           later, taking any illusion of growth down with it.
       
            justonceokay wrote 21 hours 25 min ago:
            If the economy in my life has taught me anything, it’s that there
             will always be another bubble. The Innovator’s Dilemma mentions
             that bubbles aren’t even a bad thing, in the sense that useful
             technologies are often made during them; it’s just that the
             market is messy and lots of people end up invested in the bubble.
             It’s the “throw spaghetti at the wall” approach to market
             growth. Not too different from evolution, in which most mutations
             are useless but all mutations have the potential to be
             transformative.
       
              azemetre wrote 10 hours 32 min ago:
               I took their comment to mean that we could have given school
               children free lunch or implemented universal childcare. Like
               actually useful things that would unlock tremendous economic
               value, but instead I was given the lying machine and told to
               make it productive.
       
            ryandrake wrote 22 hours 19 min ago:
            Or all that money might have been churning around chasing other
            speculative technologies. Or it might have been sitting in US
            Treasuries making 5% waiting for something promising. Who knows
            what is happening in the parallel alternate universe? Right now, it
            feels like everyone is just spamming dollars and hoping that AI
            actually becomes a big industry, to justify all of this economic
            activity. I'm reminded of Danny DeVito's character's speech in the
            movie Other People's Money, after the company's president made an
            impassioned speech about why its investors should keep investing:
            
            "Amen. And amen. And amen. You have to forgive me. I'm not familiar
            with the local custom. Where I come from, you always say "Amen"
            after you hear a prayer. Because that's what you just heard - a
            prayer."
            
            At this point, everyone is just praying that AI ends up a net
            positive, rather than bursting and plunging the world into a 5+
            year recession.
       
              pfannkuchen wrote 12 hours 50 min ago:
              > Or it might have been sitting in US Treasuries making 5%
              
              Nit: if this happened, I believe the treasury yield would
              plummet.
       
          biophysboy wrote 22 hours 36 min ago:
           It's also not fair to compare AI firms with others using growth,
           because AI is a novel technology. Why would there be explosive
           growth in rideshare apps when it's a mature niche with established
           incumbents?
       
            dragontamer wrote 22 hours 25 min ago:
            I think the explosive growth that people want is in manufacturing.
            Ex: US screws, bolts, rivets, dies, pcbs, assembly and such.
            
            The dollars are being diverted elsewhere.
            
             Intel, a chip maker who could directly serve the AI boom, has
             failed to deploy its 2nm or 1.8nm fabs and has instead written
             them off. The next-generation fabs are failing. So even as AI gets
             a lot of dollars, the money doesn't seem to be going to the
             correct places.
       
              trhway wrote 6 hours 22 min ago:
              >I think the explosive growth that people want is in
              manufacturing. Ex: US screws, bolts, rivets, dies, pcbs, assembly
              and such.
              
               And the [only] way to get that explosive growth is robotics. That
               is the Post-Post-Industrial Revolution that we're stepping into -
               it is when manufacturing stops being separate from the
               knowledge-based economy and instead becomes a part of it, as a
               physical-world form of output from the knowledge-based economy.
              
              >The dollars are being diverted elsewhere.
              
               The dollars are going in exactly the right direction - AI. After
               LLMs, companies like NVDA and Google are making the next steps -
               foundational world models and robotics.
              
              >Intel a chip maker who can directly serve the AI boom
              
              Intel is a managers' gravy train - just like for example Sun
              Microsystems was 20 years ago. Forget about it.
              
              >Intel ... has failed to deploy its 2nm or 1.8nm fabs and instead
              written them off. So even as AI gets a lot of dollars it doesn't
              seem to be going to the correct places.
              
               The dollars go to NVDA instead of Intel. That seems like exactly
               the correct place.
       
              biophysboy wrote 21 hours 48 min ago:
              They're not going to get it. The political economy of East Asia
              is simply better suited for advanced manufacturing. The US wants
              the manufacturing of East Asia without its politics. Sometimes
              for good reason - being an export economy has its downsides!
       
                geodel wrote 21 hours 0 min ago:
                 Indeed. Just now our kid's therapist told us they are moving
                 out of the current school district because a chemical plant is
                 coming up nearby. More than the pollution, it is the attitude
                 that any kind of physical product factory is a blight on
                 Disney-fied suburbia and its white-collar folks.
       
                dragontamer wrote 21 hours 40 min ago:
                Taiwan isn't some backwater island making low skilled items.
                
                USA lost mass manufacturing (screws and rivets and zippers),
                but now we are losing cream of the crop world class
                manufacturing (Intel vs TSMC).
                
                 If we cannot manufacture, then we likely cannot win the next
                 war. That's the politics at play. The last major war between
                 industrialized nations showed that technology and
                 manufacturing were the key to success. Now I don't think the
                 USA has to manufacture all by itself, but it needs a
                 reasonable plan to get every critical component in our supply
                 chain.
                
                In WW2, that pretty much all came down to ball bearings. The
                future is hard to predict but maybe it's chips next time.
                
                Maybe we give up on the cheapest of screws or nails. But we
                need to hold onto elite status on some item.
       
                  dangus wrote 16 hours 5 min ago:
                  I think this is a gross oversimplification and an incorrect
                  assessment of the US’ economic manufacturing capabilities.
                  
                  The US completely controls critical steps of the chip making
                  process as well as the production of the intellectual
                  property needed to produce competitive chips, and the
                  lithography machines are controlled by a close ally that
                  would abide by US sanctions.
                  
                  The actual war planes and ships and missiles are of course
                  still built in the USA. Modern warfare with stuff that China
                  makes like drones and batteries only gets you so far. They
                  can’t make a commercially competitive aviation jet engine
                  without US and Western European suppliers.
                  
                  And the US/NAFTA has a ton of existing manufacturing
                  capability in a lot of the “screws and rivets”
                  categories. For example, there are lots of automotive parts
                  and assembly companies in the US. The industry isn’t as big
                  as it used to be but it’s still significant. The US is the
                  largest manufacturing exporter besides China.
       
                  biophysboy wrote 20 hours 39 min ago:
                  > Taiwan isn't some backwater island making low skilled
                  items.
                  
                  Definitely not! Wasn't trying to imply this.
                  
                  > If we cannot manufacture then we likely cannot win the next
                  war.
                  
                  If you think a war is imminent (a big claim!), then our only
                  chance is to partner with specialized allies that set up shop
                  here (e.g. Taiwan, Japan, South Korea). Trying to resurrect
                  Intel's vertically integrated business model to compete with
                  TSMC's contractor model is a mistake, IMO.
       
        dsign wrote 23 hours 17 min ago:
         I don't think AI is having much impact on the bits of the economy that
         have to do with labor and consumption. Folks who are getting displaced
         by AI are, for now, probably being re-hired to fix AI mess-ups later.
        
         But if, or when, AI gets a little better, then we will start to see a
         much more pronounced impact. The thing competent AIs will do is
         super-charge the rate at which profits go to neither labor nor social
         security, and this time they will have a legit reason: "you really
         didn't use any humans to pave the roads that my autonomous trucks use.
         Why should I pay for medical expenses for the humans, and generally for
         the well-being of their pesky flesh? You want to shut down our digital
         CEO? You first need to break through our lines of (digital) lawyers and
         ChatGPT-dependent bought politicians."
       
          tehjoker wrote 16 hours 0 min ago:
          Well, if you don't use humans, then you're using machine labor that
          will drive prices down to at-cost in a competitive environment and
          strip profits in the end. Profits come from underpaying labor.
       
        andsoitis wrote 23 hours 20 min ago:
        > Artificial intelligence has a few simple ingredients: computer chips,
        racks of servers in data centers, huge amounts of electricity, and
        networking and cooling systems that keep everything running without
        overheating.
        
        What about the software? What about the data? What about the models?
       
        freetonik wrote 23 hours 38 min ago:
        Interesting that the profits of those bottom 490 companies of S&P 500
        do not rise with the help of AI technology, which is supposedly sold to
        them at a reduced rate as AI vendors are bleeding money.
       
          onlyrealcuzzo wrote 22 hours 16 min ago:
          We'll never know what would've happened without AI.
          
           1. Their profits could otherwise be down.
          
          2. The plan might be to invest a bunch up front in severance and AI
          Integration that is supposed to pay off in the future.
          
           3. In the future that payoff may or may not happen, and it'll be
           hard to tell, because it may come at the same time a recession
           would otherwise be hitting, which smooths it out.
          
          It's almost as if it's not that simple.
       
          roncesvalles wrote 23 hours 4 min ago:
          Other than NVIDIA, the profits of the S&P 10 haven't risen either.
          It's just that the market is pricing them very optimistically.
          
          IMO this is an extremely scary situation in the stock market. The AI
          bubble burst is going to be more painful than the Dotcom bubble
          burst. Note that an "AI bubble burst" doesn't necessitate a belief
          that AI is "useless" -- the Internet wasn't useless and the Dotcom
          burst still happened. The market can crash when it froths up too
          early even though the optimistic hypotheses driving the froth
          actually do come true eventually.
       
            jayd16 wrote 21 hours 27 min ago:
            I'm curious to see the bubble burst.  I personally don't think it
            will be anything like the dotcom era.
            
             The benefits have just not been that wide ranging to the average
             person. Maybe I'm wrong, but I don't see AI hype as a cornerstone
             of US jobs, so there are no jobs to suddenly dry up. The big
             companies are still flush with cash on hand, aren't they?
             
             If/when the fad dies, I'd think it would die with a whimper.
       
              kevinventullo wrote 7 hours 33 min ago:
               > The benefits have just not been that wide ranging to the
               average person
              
              I think you might be underestimating how many non-technical
              people are using LLM’s daily.
       
              felixfurtak wrote 10 hours 49 min ago:
              It's definitely more like the railways of the 1880s. Lots of
              companies, with more-or-less the same product, competing for a
              highly extrapolated theoretical demand. Out of all that
              excitement comes an eventual crash and rationalization. Railways
              today are much more boring, regulated and utilitarian affairs.
              Many are state owned and still often loss-making.
       
              aDyslecticCrow wrote 19 hours 46 min ago:
               I think AI has great potential to change as much as the internet.
               But I don't consider LLMs to be the right type to do that.
               
               Self-driving cars and intelligent robotics are the real goldmine.
               But we still don't seem to have the right architecture or
               methods.
               
               I say that because self-driving cars are entirely stagnant
               despite the boom in AI interest and resources.
               
               Personally I think we need a major breakthrough in reinforcement
               learning, computer vision (which is still mostly stuck at
               feedforward CNNs) and few-shot learning. The transformer is a
               major leap, but it's not enough on its own.
       
                jayd16 wrote 13 hours 39 min ago:
                 I'm not saying things couldn't change. I'm only looking at the
                 landscape as it is now and imagining what would happen if the
                 funding stops because of a lack of consumer interest.
                 
                 In general I do not agree that the economy is overleveraged on
                 AI, just like it is not overleveraged on cryptocurrency. If
                 the money dries up, I don't expect economy-wide layoffs.
       
                  aDyslecticCrow wrote 1 hour 27 min ago:
                   If money dries up, a bunch of startups living on speculative
                   assets will trip, and the S&P will fall in valuation. But
                   that doesn't actually affect the core economy much.
                   
                   I think we're in a massive AI bubble, but it's a bubble that
                   doesn't affect normal consumers much, so it's not too
                   dangerous or concerning.
                   
                   As for my prediction of what is needed for truly useful AI
                   automation... I suspect the current bubble will pop before
                   we solve it.
       
            andsoitis wrote 21 hours 51 min ago:
            > Other than NVIDIA, the profits of the S&P 10 haven't risen
            either.
            
            That’s not correct. Did you mean something else?
       
            Workaccount2 wrote 22 hours 45 min ago:
            We are still in the "land grab" phase where companies are offering
            generous AI plans to capture users.
            
            Once users get hooked on AI and it becomes an indispensable
            companion for doing whatever, these companies will start charging
            the true cost of using these models.
            
            It would not be surprising if the $20 plans of today are actually
            just introductory rate $70 plans.
       
              esafak wrote 22 hours 28 min ago:
               I'd be surprised, because (free) open-source models are
               continually closing the gap, exerting downward pressure on the
               price.
       
                bravesoul2 wrote 8 hours 5 min ago:
                 Meaning the real value is in infra. Running local is super
                 expensive, as you need to provision 100% and the table stakes
                 are high. So people will tend to use a cloud offering. If the
                 model doesn't differentiate, then it's about how you mix the
                 models, the capacity and uptime, the "GPT wrappers", etc.
       
                pegasus wrote 21 hours 28 min ago:
                They're not really free, someone still has to pay for the
                compute cost.
       
                cg5280 wrote 21 hours 37 min ago:
                Hopefully we see enough efficiency gains over time that this is
                true. The models I can run on my (expensive) local hardware are
                pretty terrible compared to the free models provided by Big
                LLM. I would hate to be chained to hardware I can't afford
                forever.
       
                  aDyslecticCrow wrote 19 hours 42 min ago:
                   The breakthrough of diffusion for token generation brought
                   compute down a lot. But there are no local open-source
                   versions yet.
                   
                   Distillation for specialisation can also raise the capacity
                   of the local models if we need it for specific things.
                   
                   So it's chugging along nicely.
       
                Workaccount2 wrote 22 hours 20 min ago:
                 I don't think it will be much of an issue for large providers,
                 any more than open source software has ever been a concern for
                 Microsoft. The AI market is the entirety of the population, not
                 just the small sliver who knows what "VRAM" means and is
                 willing to spend thousands on hardware.
       
                  jayd16 wrote 21 hours 33 min ago:
                  > anymore than open source software has ever been a concern
                  for Microsoft.
                  
                  So a big concern then? (Although not a death sentence)
       
                    jononor wrote 19 hours 44 min ago:
                     The modern Microsoft, with Azure, Office 365, etc., is not
                     much threatened by open source software. Especially with
                     Azure, open source is a fantastic complement which they
                     would like the world to produce as much of as possible. The
                     same goes for AI models. They would look to charge for AI
                     hosting and services, at a premium due to already being
                     integrated in businesses. They are going to bundle it with
                     all their existing moneymakers, and then just jack up the
                     price. No sale needed, just a bump in the invoices that are
                     flowing anyway.
       
                      jayd16 wrote 13 hours 9 min ago:
                      The phrasing was "has ever been" but even in the modern
                      era, you're only looking at their winners.
                      
                      Shouldn't something like Kubernetes or Android's flavor
                      of open source be on the radar?  Seems like there are
                      plenty of large players that might turn their 4th place
                      closed source API into a first place open ecosystem.
       
                      const_cast wrote 16 hours 26 min ago:
                      They're definitely very threatened by open source - a lot
                      of software infrastructure these days is built off of
                      open source software. In the 2000s, it wasn't. It was
                       Microsoft, MSS, COM, Windows Server, etc. all the way
                       down. Microsoft has basically been eaten alive by open
                       source software; it's just hard to tell because they were
                       so huge that, even taken down a few pegs, they're still
                       big.
                      
                      Even today, Azure and AWS are not really cheaper or
                      better - for most situations, they're more expensive, and
                      less flexible than what can be done with OS
                      infrastructure. For companies who are successful making
                      software, Azure is more of a kneecap and a regret. Many
                      switch away from cloud, despite that process being
                      deliberately painful - a shocking mirror of how it was
                      switching away from Microsoft infrastructure of the past.
       
                  esafak wrote 22 hours 18 min ago:
                  You can get open source models hosted for cheap too; e.g.,
                  through OpenRouter, AWS Bedrock, etc. You do not have to run
                  it yourself.
       
        snitzr wrote 23 hours 41 min ago:
        Billion-dollar Clippy.
       
          PessimalDecimal wrote 23 hours 17 min ago:
          Trillions, right?
       
        hnhg wrote 23 hours 45 min ago:
        I found this the most interesting part of the whole essay - "the ten
        largest companies in the S&P 500 have so dominated net income growth in
        the last six years that it’s becoming more useful to think about an
         S&P 10 vs an S&P 490" - which then took me here: [1] Can anyone shed
         light on what is going on between these two groups? I wasn't convinced
        by the rest of the argument in the article, and I would like something
        that didn't just rely on "AI" as an explanation.
        
   URI  [1]: https://insight-public.sgmarkets.com/quant-motion-pictures/out...
       
          atleastoptimal wrote 12 hours 1 min ago:
          People have more room to buy more digital goods. There's far less
          room to buy more physical goods. People aren't going to double their
          stomach size to eat more McDonalds, but there's no limit to how much
          more data, software or AI tokens a person could require.
       
          k-i-r-t-h-i wrote 19 hours 19 min ago:
           A power law explains the distribution, but the distribution is
           getting more extreme over the years, likely due to market
           structure, macro conditions, tech economics, etc.
       
          whitej125 wrote 22 hours 30 min ago:
           Something that might be of additional interest: look at how the top
           10 of the S&P 500 has changed over the decades[1].
          
          At any point in time the world thinks that those top 10 are
          unstoppable.  In the 90's and early 00's... GE was unstoppable and
          the executive world was filled with acolytes of Jack Welch.  Yet here
          we are.
          
          Five years ago I think a lot of us saw Apple and Google and Microsoft
          as unstoppable.  But 5-10 years from now I bet you we'll see new
          logos in the top 10.  NVDA is already there.  Is Apple going to
          continue dominance or go the way of Sony?  Is the business model of
           the internet changing such that Google can't react quickly enough?
           Will OpenAI go public (or any foundational model player)?
          
          I don't know what the future will be but I'm pretty sure it will be
          different.
          
   URI    [1]: https://www.visualcapitalist.com/ranked-the-largest-sp-500-c...
       
            onlyrealcuzzo wrote 22 hours 20 min ago:
            There was always some subset of the S&P that mattered way more than
             the rest, just like the S&P matters way more than the Russell.
            
            Typically, you probably need to go down to the S&P 25 rather than
            the S&P 10.
       
          foolswisdom wrote 22 hours 36 min ago:
          The primary goal of big companies is (/has become) maintaining market
          dominance, but this doesn't always translate to a well run business
          with great profits, it depends on internal and external factors.
           Maybe profits should have actually gone down due to tariffs and
           uncertainty but the big companies have kept profit stable.
       
            andsoitis wrote 21 hours 56 min ago:
             > Maybe profits should have actually gone down due to tariffs and
             uncertainty but the big companies have kept profit stable.
            
            If you’re referencing Trump’s tariffs, they have only come into
            effect now, so the economic effects will be felt in the months and
            years ahead.
       
          nowayno583 wrote 22 hours 41 min ago:
          It is a very complex phenomenon, with no single driving force. The
          usual culprit is uncertainty, which itself can have a ton of root
          causes (say, tariffs changing every few weeks, or higher inflation
          due to government subsidies).
          
          In more uncertain scenarios small companies can't take risks as well
          as big companies. The last 2 years have seen AI, which is a large
          risk these big companies invested in, pay off. But due to uncertainty
          smallish companies couldn't capitalize.
          
          But that's only one possible explanation!
       
            automatic6131 wrote 22 hours 37 min ago:
            > The last 2 years have seen AI, which is a large risk these big
            companies invested in, pay off
            
            LOL. It's paying off right now, because There Is No Alternative.
            But at some point, the companies and investors are going to want to
            make back these hundreds of billions. And the only people making
            money are Nvidia, and sort-of Microsoft through selling more Azure.
            
             Once it becomes clear that there's no trillion-dollar industry in
             cheating-at-homework-for-schoolkids, and Nvidia stops selling more
             in year X than in year X-1, people will very quickly realize that
             the last 2 years have been a massive bubble.
       
              nowayno583 wrote 22 hours 32 min ago:
               That's a very out-of-the-money view! If you are right, you could
               make some very good money!
       
                automatic6131 wrote 21 hours 42 min ago:
                 No - as you and I both know, I can't. Because it's a
                 qualitative view, and not a quantitative one. I would need to
                 know _when_, quite precisely, I will turn out to be right.
                
                And I don't know, because I have about 60 minutes a week to
                think about this, and also good quantitative market analysis is
                really hard.
                
                 So whilst it may sound like a good riposte to go "wow, I bet
                 you make so much money shorting!" knowing that I don't and
                 can't, it's also facile. Because I don't mind if I'm right in
                12, 24 or 60 months. Fwiw, I thought I'd be right in 12 months,
                12 months ago. Oops. Good thing I didn't attempt to "make
                money" in an endeavor where the upside is 100% of your wager,
                and the downside theoretically infinite.
       
                  nowayno583 wrote 20 hours 39 min ago:
                  Your reasoning is correct if you think about negotiating
                   options, or going all in on a trade, but it's not quite right
                  for stocks. The borrowing rates for MSFT and NVDA - even for
                  a retail investor - are less than 1% yearly. So if your view
                  is right you could hold a short on them for years. The market
                  cap for these companies has already incorporated a large
                  capex investment for AI DCs. As long as you use a reasonable
                  rebalancing strategy, and you are right that their current
                  investment in AI will not pay off, you will make money.
                  
                  Mind you, this is a view that exists - a few large hedge
                  funds and sell side firms currently hold negative
                  positions/views on these companies.
                  
                  However, the fact of the matter is, fewer people are willing
                  to take that bet than the opposite view. So it is reasonable
                  to state that view with care.
                  
                  You might be right at the end of the day, but it is very much
                  not obvious that this bet has not (or will not) pay off.
       
                    kevinventullo wrote 7 hours 45 min ago:
                    Won’t you get margin called if the stock goes up in the
                    meantime?
       
          moi2388 wrote 23 hours 7 min ago:
          They are 40% of the S&P 500, so it makes sense that they are primary
          drivers of its growth.
          
          They are also all tech companies, which had a really amazing run
          during Covid.
          
          They also resemble companies with growth potential, whereas other
          companies such as P&G or Walmart might’ve saturated their market
           already.
       
            andsoitis wrote 21 hours 53 min ago:
            > They are also all tech companies, which had a really amazing run
            during Covid.
            
            Only 8 out of the 10 are. Berkshire and JP Morgan are not. It is
            also arguable whether Tesla is a tech company or whether it is a
            car company.
       
              ahmeneeroe-v2 wrote 21 hours 40 min ago:
              Berkshire holds ~$60B+ of Apple and is also exposed to AI through
              its power-utility arm, Berkshire Hathaway Energy.
       
                andsoitis wrote 21 hours 29 min ago:
                > Berkshire holds ~$60B+ of Apple and is also exposed to AI
                through its power-utility arm, Berkshire Hathaway Energy.
                
                Apple is 22% of BRK’s holdings. The next biggest of their
                investments are Amex, BoA, Coke, Chevron.
                
                They are not a tech company.
       
                  ahmeneeroe-v2 wrote 18 hours 20 min ago:
                  BRK has significant AI exposure through both Apple and
                  Berkshire Hathaway Energy. So while they are not a tech
                  company, they have more exposure to the AI boom than
                  basically any other non-tech company.
       
          rogerkirkness wrote 23 hours 15 min ago:
          Winner takes most is now true at the global economy level.
       
        stackbutterflow wrote 23 hours 57 min ago:
        Predicting the future is always hard.
        
        But the only thing I've seen in my life that most resembles what is
        happening with AI, the hype, its usefulness beyond the hype, vapid
        projects, solid projects, etc, is the rise of the internet.
        
        Based on this I would say we're in the 1999-2000 era. If it's true what
        does it mean for the future?
       
          api wrote 22 hours 48 min ago:
          I too lived through the dot.com bubble and AI feels identical in so
          many ways.
          
          AI is real just like the net was real, but the current environment is
          very bubbly and will probably crash.
       
            bwfan123 wrote 11 hours 23 min ago:
            I lived through dot-com, and there are so many parallels. Large
            amount of money is chasing a dream which wont materialize in the
            near term.
            
            Recent deja-vus are articles like this:
            
            "The 20-Somethings Are Swarming San Francisco’s A.I. Boom" and
            
            "Tech Billboards Are All Over San Francisco. Can You Decode Them?"
            
            If I recall correctly, after the 2000 bust, folks fled silicon
            valley abandoning their leased BMWs at the SFO airport. 101 had no
            traffic jams. I wonder if that will repeat this time around.
       
            thewebguyd wrote 21 hours 21 min ago:
            It definitely feels identical. We had companies that never had any
            hope of being profitable (or even doing anything related to the
            early internet to begin with), but put .com in your name and
            suddenly you are flooded with hype and cash.
            
            Same thing now with AI. The capital is going to dry up eventually,
             no one is profitable right now and it's questionable whether or not
            they can be at a price consumers would be willing or able to pay.
            
             Models are going to become a commodity, just being an "AI Company"
             isn't a moat, and yet every one of the big names is being invested
             in as if it will capture the entire market, when it's not clear
             there will even be a market in the first place.
            
            Investors are going to get nervous, eventually, and start expecting
            a return, just like .com. Once everyone realizes AGI isn't going to
            happen, and realize you aren't going to meet the expected return
            running a $200/month chatbot, it'll be game over.
       
          keiferski wrote 23 hours 17 min ago:
          Well, there’s a fundamental difference: the Internet blew up
          because it enabled people to connect with each other more easily:
          culturally, economically, politically.
          
          AI is more-or-less replacing people, not connecting them. In many
          cases this is economically valuable, but in others I think it just
          pushes the human connection into another venue. I wouldn’t be
          surprised if in-person meetup groups really make a comeback, for
          example.
          
          So if a prediction about AI involves it replacing human cultural
          activities (say, the idea that YouTube will just be replaced by AI
          videos and real people will be left out of a job), then I’m quite
          bearish. People will find other ways to connect with each other
          instead.
       
            LinuxAmbulance wrote 21 hours 16 min ago:
            Businesses are overly optimistic about AI replacing people.
            
            For very simple jobs, like working in a call center? Sure.
            
            But the vast majority of all jobs aren't ones that AI can replace.
            Anything that requires any amount of context sensitive human
            decision making, for example.
            
            There's no way that AI can deliver on the hype we have now, and
            it's going to crash. The only question is how hard - a whimper or a
            bang?
       
              prewett wrote 11 hours 19 min ago:
               As a customer, nothing infuriates me like an AI call center. If I
              have to call, it's because I have an account problem that
              requires me to speak with someone to resolve it.
              
              I moved states and Xfinity was billing me for the month after I
              cancelled. I called, pressed 5 (or whatever) for billing. "It
              looks like your cable modem is disconnected. Power-cycling your
              modem resolves most problems. Would you like to do that now?" No.
              "Most problems can be resolved by power-cycling your modem, would
              you like to try that now?" No, my problem is about billing, and
              my modem is off-line because I CANCELLED MY SERVICE! They asked
              three more times (for a total of five) before I could progress.
               For reasons I have now forgotten I had to call back several times,
              going through the whole thing again.
              
               There are names for someone who pays no attention to what you
               say, and none of them are complimentary. AI is, fundamentally, an
              inhuman jerk.
              
              (It turned out that they can only get their database to update
              once a month, or something, and despite the fact that nobody
              could help me, they issued me a refund in a month when their
              database finally updated. The local people wanted to help, but
              could not because my new state is in a different region and the
              regions cannot access each other.)
       
              SrslyJosh wrote 13 hours 24 min ago:
              > For very simple jobs, like working in a call center? Sure.
              
              Klarna would like a word.
              
              > Anything that requires any amount of context sensitive human
              decision making, for example.
              
              That describes a significant percentage of call center work.
       
            dfedbeef wrote 22 hours 35 min ago:
            There's also the difference that the internet worked.
       
              justonceokay wrote 21 hours 15 min ago:
               In a classically disruptive way, the internet provided an
               existing service (information exchange) in a form that was in
               many ways far less pleasant than the existing channels
               (newspapers, libraries, phone). Remember that the early Internet
               was mostly text, very low resolution, uncredentialed, flaky,
               expensive, and too technical for most people.
              
              The only reason that we can have such nice things today like
              retina display screens and live video and secure payment
              processing is because the original Internet provided enough value
              without these things.
              
              In my first and maybe only ever comment on this website defending
              AI, I do believe that in 30 or 40 years we might see this first
              wave of generative AI in a similar way to the early Internet.
       
          baggachipz wrote 23 hours 40 min ago:
          Classic repeat of the Gartner Hype Cycle. This bubble pop will dwarf
          the dot-bomb era. There's also no guarantee that the "slope of
          enlightenment" phase will amount to much beyond coding assistants.
          GenAI in its current form will never be reliable enough to do
          so-called "Agentic" tasks in everyday lives.
          
          This bubble also seems to combine the worst of the two huge previous
          bubbles; the hype of the dot-com bubble plus the housing bubble in
          the way of massive data center buildout using massive debt and
          security bundling.
       
            bspammer wrote 3 hours 19 min ago:
             Last week I tested out the agent mode of ChatGPT by asking it to
             plan a week's meals, then add all the ingredients to an online
             shopping basket for me. It worked pretty much flawlessly; the only
             problem was it ran out of time before it could add the last few
             ingredients, which doesn't exactly seem like an unsolvable problem.
       
            Traubenfuchs wrote 21 hours 39 min ago:
             I fully agree that there will be a pop; there must be. Current
             valuations and investments are based on monumentally
             society-destroying assumptions. But with every disappointing,
             incremental, non-revolutionary model generation, the chance
             increases that the world at large realizes that those assumptions
             are wrong.
            
            What should I do with my ETF? Sell now, wait for the inevitable
            crash? Be all modern long term investment style: "just keep
            invested what you don't need in the next 10 years bro"?
            
            This really keeps me up at night.
       
              Dr4kn wrote 13 hours 6 min ago:
              If you're sure enough that there is going to be a big crash I
              would move the money into gold, bonds or other more secure
              assets. After a crash you can invest again.
              
               I don't know why Buffett sold a lot of shares over the last few
              years to sit on a huge pile of cash, but I could guess.
              
               The job market looks like shit, people have no money to buy stuff
              and credit card debt is skyrocketing. When people can't buy stuff
              it is bad for the economy. Even if AI is revolutionary then we
              would need people spending money to keep the economy going, and
              with more AI taking jobs that wouldn't happen.
              
              If AI doesn't work out the market is going to crash and the only
              companies keeping the market growing are going to wipe out all
              that growth.
              
              No matter how I look at it I don't see a thriving market.
       
            ben_w wrote 23 hours 17 min ago:
            Mm. Partial agree, partial disagree.
            
            These things, as they are right now, are essentially at the
            performance level of an intern or recent graduate in approximately
            all academic topics (but not necessarily practical topics), that
            can run on high-end consumer hardware. The learning curves suggest
            to me limited opportunities for further quality improvements within
            the foreseeable future… though "foreseeable future" here means
            "18 months".
            
            I definitely agree it's a bubble. Many of these companies are
            priced with the assumption that they get most of the market; they
            obviously can't all get most of the market, and because these
            models are accessible to the upper end of consumer hardware,
            there's a reasonable chance none of them will be able to capture
            any of the market because open models will be zero cost and the
            inference hardware is something you had anyway so it's all running
            locally.
            
            Other than that, to the extent that I agree with you that:
            
            > GenAI in its current form will never be reliable enough to do
            so-called "Agentic" tasks in everyday lives
            
            I do so only in that not everyone wants (or would even benefit
            from) a book-smart-no-practical-experience intern, and not all
            economic tasks are such that book-smarts count for much anyway.
             This set of AI advancements didn't suddenly cause all car
             manufacturers to agree that this was the one weird trick
             holding back level 5 self driving, for example.
            
            But for those of us who can make use of them, these models are
            already useful (and, like all power tools, dangerous when used
            incautiously) beyond merely being coding assistants.
       
            brookst wrote 23 hours 23 min ago:
            The Internet in its 1999 form was never going to be fast enough or
            secure enough to support commerce, banking, or business operations.
       
              falcor84 wrote 22 hours 53 min ago:
              Exactly, it took an evolution, but there was no discontinuity. At
              some point, things evolved enough for people like Tim O'Reilly to
               say that we now have "Web 2.0", but it was all just small steps
              by people like those of us here on this thread, gradually making
              things better and more reliable.
       
            thecupisblue wrote 23 hours 30 min ago:
            > GenAI in its current form will never be reliable enough to do
            so-called "Agentic" tasks in everyday lives
            
             No, but GenAI in its current form is insanely useful and is
            already shifting the productivity gears into a higher level. Even
            without 100% reliable "agentic" task execution and AGI, this is
            already some next level stuff, especially for non-technical people.
       
              RSHEPP wrote 9 hours 11 min ago:
              I would love to figure out why I don't see this at my company.
              People still shipping at the same rate as before, customers
              bringing up more and more bugs, problems that require planning
              for scale are not being thought about (more bugs), zero tests
               still being written. All the code I am seeing generated is
               one-shot garbage, with no context around our system or the
              codebase as a whole.
       
              lm28469 wrote 22 hours 49 min ago:
              > especially for non-technical people.
              
              The people who use llms to write reports for other people who use
              llms to read said reports ? It may alleviate a few pain points
              but it generates an insane amount of useless noise
       
                thecupisblue wrote 22 hours 33 min ago:
                Considering they were already creating useless noise, they can
                create it faster now.
                
                But once you get out of the tech circles and bullshit jobs,
                there is a lot of quality usage, as much as there is shit
                usage. I've met everyone from lawyers and doctors to architects
                and accountants who are using some form of GenAI actively in
                their work.
                
                Yes, it makes mistakes, yes, it hallucinates, but it gets a lot
                of fluff work out of the way, letting people deal with actual
                problems.
       
                  bluefirebrand wrote 11 hours 13 min ago:
                  > I've met everyone from lawyers and doctors to architects
                  and accountants who are using some form of GenAI actively in
                  their work
                  
                  I can't wait for the first massive medical mistakes from LLM
                  reliance
       
              ducktective wrote 23 hours 25 min ago:
              Very simple question:
              
              How do people trust the output of LLMs? In the fields I know
              about, sometimes the answers are impressive, sometimes totally
              wrong (hallucinations). When the answer is correct, I always feel
              like I could have simply googled the issue and some variation of
              the answer lies deep in some pages of some forum or stack
              exchange or reddit.
              
              However, in the fields I'm not familiar with, I'm clueless how
              much I can trust the answer.
       
                simianwords wrote 21 hours 55 min ago:
                Your internal verifier model in your head is actually good
                enough and not random. It knows how the world works and
                subconsciously applies a lot of sniff tests it has learned over
                the years.
                
                Sure a lot of answers from llms may be inaccurate - but you
                mostly identify them as such because your ability to verify
                (using various heuristics) is good too.
                
                Do you learn from asking people advice? Do you learn from
                reading comments on Reddit? You still do without trusting them
                fully because you have sniff tests.
       
                  bluefirebrand wrote 16 hours 4 min ago:
                  > You still do without trusting them fully because you have
                  sniff tests
                  
                  LLMs produce way too much noise and way too inconsistent
                  quality for a sniff test to be terribly valuable in my
                  opinion
       
                    simianwords wrote 6 hours 37 min ago:
                    that's where i disagree. the noise is not that high at all
                    and is vastly exaggerated. of course if you go too deep
                    into niche topics you will experience this.
       
                    geraldwhen wrote 12 hours 14 min ago:
                    The problem is that content is dead. You can’t find
                    answers any more on Google because every website is ai
                    generated and littered with ads.
                    
                    YouTube videos aren’t much better. Minutes of fluff are
                    added to hit a juicy 10 minute mark so you can see more
                    ads.
                    
                    The internet is a dead place.
       
                      bluefirebrand wrote 11 hours 13 min ago:
                      I have zero belief that AI won't follow this trend as
                      well
       
                thecupisblue wrote 22 hours 30 min ago:
                If you are a subject matter expert, as is expected to be of the
                person working on the task, then you will recognise the issue.
                
                Otherwise, common sense, quick google search or let another LLM
                evaluate it.
       
                svara wrote 22 hours 52 min ago:
                This is really strange to me...
                
                Of course you don't trust the answer.
                
                That doesn't mean you can't work with it.
                
                One of the key use cases for me other than coding is as a much
                better search engine.
                
                You can ask a really detailed and specific question that would
                be really hard to Google, and o3 or whatever high end model
                will know a lot about exactly this question.
                
                It's up to you as a thinking human to decide what to do with
                that. You can use that as a starting point for in depth
                literature research, think through the arguments it makes from
                first principles, follow it up with Google searches for key
                terms it surfaces...
                
                There's a whole class of searches I would never have done on
                Google because they would have taken half a day to do properly
                that you can do in fifteen minutes like this.
       
                  dfedbeef wrote 22 hours 34 min ago:
                  Such as
       
                    svara wrote 22 hours 28 min ago:
                    I went through my ChatGPT history to pick a few examples
                    that I'm both comfortable sharing and that illustrate the
                    use-case well:
                    
                    > There are some classic supply chain challenges such as
                    the bullwhip effect. How come modern supply chains seem so
                    resilient? Such effects don't really seem to occur anymore,
                    at least not in big volume products.
                    
                    > When the US used nuclear weapons against Japan, did Japan
                     know what it was? That is, did they understand the
                     possibility in principle of a weapon based on a nuclear
                    chain reaction?
                    
                    > As of July 2025, equities have shown a remarkable
                    resilience since the great financial crisis. Even COVID was
                    only a temporary issue in equity prices. What are the main
                    macroeconomic reasons behind this strength of equities.
                    
                    > If I have two consecutive legs of my air trip booked on
                    separate tickets, but it's the same airline (also answer
                    this for same alliance), will they allow me to check my
                    baggage to the final destination across the two tickets?
                    
                    > what would be the primary naics code for the business
                    with website at [redacted]
                    
                    I probably wouldn't have bothered to search any of these on
                    Google because it would just have been too tedious.
                    
                    With the airline one, for example, the goal is to get a
                    number of relevant links directly to various airline's
                    official regulations, which o3 did successfully (along with
                    some IATA regulations).
                    
                    For something like the first or second, the goal is to
                    surface the names of the relevant people / theories
                    involved, so that you know where to dig if you wish.
       
                dsign wrote 22 hours 59 min ago:
                This is true.
                
                But I've seen some harnesses (i.e., whatever Gemini Pro uses)
                do impressive things. The way I model it is like this: an LLM,
                like a person, has a chance to produce wrong output. A quorum
                 of people and some experiments/study usually arrives at a "less
                wrong" answer. The same can be done with an LLM, and to an
                extent, is being done by things like Gemini Pro and o3 and
                their agentic "eyes" and "arms". As the price of hardware and
                compute goes down (if it does, which is a big "if"), harnesses
                will become better by being able to deploy more computation,
                even if the LLM models themselves remain at their current
                level.
                
                Here's an example: there is a certain kind of work we haven't
                quite yet figured how to have LLMs do: creating frameworks and
                sticking to them, e.g. creating and structuring a codebase in a
                consistent way. But, in theory, if one could have 10 instances
                of an LLM "discuss" if a function in code conforms to an agreed
                convention, well, that would solve that problem.
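                 
                 That 10-instance idea is essentially majority voting, and can
                 be sketched in a few lines (`llm_judge` here is a
                 hypothetical, deterministic stand-in for a real model call,
                 with two of the ten instances deliberately wrong to simulate
                 per-call errors):

```python
def llm_judge(code: str, instance: int) -> bool:
    """Hypothetical stand-in for one LLM instance judging whether a
    function conforms to the codebase convention. Instances 0 and 1
    are deliberately wrong, simulating per-call errors."""
    truly_conforms = "snake_case" in code  # toy stand-in 'convention'
    return not truly_conforms if instance < 2 else truly_conforms

def quorum_verdict(code: str, n: int = 10) -> bool:
    """Ask n independent instances and take the majority vote; the
    quorum can be right even when individual instances are not."""
    votes = sum(llm_judge(code, i) for i in range(n))
    return votes > n // 2
```

                 Even with a 20% per-instance error rate, the majority
                 verdict comes out correct; the open question is whether
                 real LLM errors are independent enough for the vote to help.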
                
                There are also avenues of improvement that open with more
                computation. Namely, today we use "one-shot" models... you
                train them, then you use them many times. But the structure,
                the weights of the model aren't being retrained on the output
                of their actions. Doing that in a per-model-instance basis is
                also a matter of having sufficient computation at some
                affordable price. Doing that in a per-model basis is practical
                already today, the only limitation are legal terms, NDAs, and
                regulation.
                
                I say all of this objectively. I don't like where this is
                going; I think this is going to take us to a wild world where
                most things are gonna be way tougher for us humans. But I don't
                want to (be forced to) enter that world wearing rosy lenses.
       
                threetonesun wrote 23 hours 7 min ago:
                There's a few cases:
                
                1. For coding, and the reason coders are so excited about GenAI
                is it can often be 90% right, but it's doing all of the writing
                and researching for me. If I can reduce how much I need to
                actually type/write to more reviewing/editing, that's a huge
                improvement day to day. And the other 10% can be covered by
                tests or adding human code to verify correctness.
                
                2. There are cases where 90% right is better than the current
                state. Go look at Amazon product descriptions, especially
                things sold from Asia in the United States. They're probably
                closer to 50% or 70% right. An LLM being "less wrong" is
                actually an improvement, and while you might argue a product
                description should simply be correct, the market already
                disagrees with you.
                
                3. For something like a medical question, the magic is really
                just taking plain language questions and giving concise
                results. As you said, you can find this in Google / other
                search engines, but they dropped the ball so badly on summaries
                and aggregating content in favor of serving ads that people
                immediately saw the value of AI chat interfaces. Should you
                trust what it tells you? Absolutely not! But in terms of "give
                me a concise answer to the question as I asked it" it is a step
                above traditional searches. Is the information wrong? Maybe!
                But I'd argue that if you wanted to ask your doctor about
                something that quick LLM response might be better than what
                you'd find on Internet forums.
       
                jcranmer wrote 23 hours 12 min ago:
                One of the most amusing things to me is the amount of AI
                testimonials that basically go "once I help the AI over the
                things I know that it struggles with, when it gets to the
                things I don't know, wow, it's amazing at how much it knows and
                can do!" It's not so much Gell-Mann amnesia as it is Gell-Mann
                whiplash.
       
                likium wrote 23 hours 12 min ago:
                We place plenty of trust with strangers to do their jobs to
                keep society going. What’s their error rate?
                It all ends up with the track record, perception and experience
                of the LLMs. Kinda like self-driving cars.
       
                  rwmj wrote 22 hours 33 min ago:
                  When it really matters, professionals have insurance that
                  pays out when they screw up.
       
                    likium wrote 19 hours 6 min ago:
                     I do believe that's where we're heading: people employed
                     to hold accountability for AI.
       
                  morpheos137 wrote 22 hours 42 min ago:
                  Strangers have an economic incentive to perform. AI does not.
                  What AI program is currently able to modify its behavior
                   autonomously to increase its own profitability? Most if not
                  all current public models are simply chat bots trained on old
                  data scraped off the web. Wow we have created an economy
                  based on cultivated Wikipedia and Reddit content from the
                  2010s linked together by bots that can make grammatical
                  sentences and cogent sounding paragraphs. Isn't that great? I
                  don't know, about 10 years ago before google broke itself, I
                  could find information on any topic easily and judge its
                  truth using my grounded human intelligence better than any AI
                  today.
                  
                  For one thing AI can not even count. Ask google's AI to draw
                  a woman wearing a straw hat. More often than not the woman is
                  wearing a well drawn hat while holding another in her hand.
                  Why? Frequently she has three arms. Why? Tesla self driving
                  vision can't differentiate between the sky and a light
                  colored tractor trailer turning across traffic resulting in a
                  fatality in Florida.
                  
                  For something to be intelligent it needs to be able to think
                  and evaluate the correctness of its thinking correctly. Not
                  just regurgitate old web scrapings.
                  
                   It is pathetic, really.
                  
                   Show me one application where black-box LLM AI is generating
                   a profit that an effectively trained human or rules-based
                  system couldn't do better.
                  
                  Even if AI is able to replace a human at some tasks, that
                  is not a good thing for a consumption-based economy with
                  an already low labor force participation rate.
                  
                  During the first industrial revolution, human labor was
                  scarce, so machines could economically replace and augment
                  labor and raise standards of living. In the present day,
                  labor is not scarce, so automation is a solution in search
                  of a problem, and a problem in itself if it increasingly
                  leads to unemployment without a universal basic income to
                  support consumption. If your economy produces too much
                  with nobody to buy it, economic contraction follows.
                  Already, young people today struggle to buy a house.
                  Instead of investing in chatbots, maybe our economy should
                  be employing more people in building trades and production
                  occupations, where they can earn an income to support
                  consumption, including of durable items like a house or a
                  car. Instead, because of the FOMO and hype around AI,
                  investors are chasing greater returns by directing money
                  toward sci-fi fantasy, and when that doesn't materialize,
                  an economic contraction will result.
       
                    likium wrote 18 hours 25 min ago:
                    My point is that humans make mistakes too, and we trust
                    them not because we inspect everything they say or do,
                    but because of how society is set up.
                    
                    I'm not sure how up to date you are but most AIs with tool
                    calling can do math. Image generation hasn't been
                    generating weird stuff since last year. Waymo sees >82%
                    fewer injuries/crashes than human drivers[1].
                    
                    RL _is_ modifying its behavior to increase its own
                    profitability, and companies training these models will
                    optimize for revenue when the wallet runs dry.
                    
                    I do feel the bit about being economically replaced. As a
                    frontend-focused dev, nowadays LLMs can run circles around
                    me. I'm uncertain where we go, but I would hate for people
                    to have to do menial jobs just to make a living.
                    
                    [1] 
                    
   URI              [1]: https://www.theverge.com/news/658952/waymo-injury-...
       
                      bluefirebrand wrote 16 hours 2 min ago:
                      > My point is humans make mistakes too, and we trust
                      them,
                      
                      We trust them because they are intrinsically and
                      extrinsically motivated not to mess up
                      
                      AI has no motivation
       
                keiferski wrote 23 hours 16 min ago:
                I get around this by not valuing the AI for its output, but for
                its process.
                
                Treat it like a brilliant but clumsy assistant that does tasks
                for you without complaint – but whose work needs to be double
                checked.
       
          baxtr wrote 23 hours 46 min ago:
          "It is difficult to make predictions, especially about the future" -
          Yogi Berra (?)
          
          But let’s assume we can for a moment.
          
          If we’re living in a 1999 moment, then we might be on a Gartner
          Hype Cycle like curve. And I assume we’re on the first peak.
          
          Which means that the "trough of disillusionment" will follow.
          
           This is the phase in the Hype Cycle, following the initial
           peak of inflated expectations, where interest in a technology
           wanes as it fails to deliver on early promises.
       
        amunozo wrote 23 hours 57 min ago:
        This is going to end badly, I am afraid.
       
          paulnpace wrote 1 hour 9 min ago:
           Everywhere I use "A.I.", I am forced to use it, and the thing
           I am forced to use it for has been made worse as a result.
           
           Those comparing the "A.I." bubble to the .com bubble are
           missing the point: even a mostly normie user such as myself,
           logging on with my 14.4 kbps modem, instantly had something
           new and useful. With "A.I.", I haven't found anything useful,
           at least for myself.
       
          m_ke wrote 23 hours 50 min ago:
             Could all pop today if GPT-5 doesn't benchmark-hack hard on
             some new made-up task.
       
            falcor84 wrote 22 hours 47 min ago:
            I don't see how it would "all pop" - same as with the internet
            bubble, even if the massive valuations disappear, it seems clear to
            me that the technology is already massively disruptive and will
            continue growing its impact on the economy even if we never reach
            AGI.
       
              m_ke wrote 22 hours 19 min ago:
              Exactly like the internet bubble. I've been working in Deep
              Learning since 2014 and am very bullish on the technology but the
              trillions of dollars required for the next round of scaling will
              not be there if GPT-5 is not on the exponential growth curve that
              sama has been painting for the last few years.
              
               Just like the dot-com bubble, we'll need to wash out a ton
               of "unicorn" companies selling $1s for $0.50 before we see
               the long-term gains.
       
                falcor84 wrote 15 hours 11 min ago:
                > Exactly like the internet bubble.
                
                So is this just about a bit of investor money lost? Because the
                internet obviously didn't decline at all after 2000, and even
                the investors who lost a lot but stayed in the game likely
                recouped their money relatively quickly. As I see it, the
                lesson from the dot-com bust is that we should stay in the
                game.
                
                And as for GPT-5 being on the exponential growth curve -
                according to METR, it's well above it:
                
   URI          [1]: https://metr.org/blog/2025-03-19-measuring-ai-ability-...
       
                  0xdde wrote 15 hours 1 min ago:
                  I wouldn't say "well above" when the curve falls well within
                  the error bars. I wonder how different the plot would look if
                  they reported the median as their point estimate rather than
                  mean.
       
            mewpmewp2 wrote 23 hours 46 min ago:
             I don't expect GPT-5 to be anything special; it seems OpenAI
             hasn't been able to keep its lead. But even the current
             level of LLMs justifies the market valuations to me. Of
             course, I might eat my words about OpenAI being behind, but
             we'll see.
       
              apwell23 wrote 23 hours 42 min ago:
              > I don't expect GPT-5 to be anything special
              
              because ?
       
                Workaccount2 wrote 22 hours 38 min ago:
                Well word on the street is that the OSS models released this
                week were Meta-Style benchmaxxed and their real world
                performance is incredibly underwhelming.
       
                input_sh wrote 23 hours 20 min ago:
                Because everything past GPT 3.5 has been pretty unremarkable?
                Doubt anyone in the world would be able to tell a difference in
                a blind test between 4.0, 4o, 4.5 and 4.1.
       
                  falcor84 wrote 22 hours 32 min ago:
                  I would absolutely take you on a blind test between 4.0 and
                  4.5 - the improvement is significant.
                  
                   And while I do want your money, we can just look at
                   LMArena, which does blind testing to arrive at an
                   Elo-based score: it shows 4.0 at 1318 and 4.5 at 1438,
                   which makes 4.5 over twice as likely to be judged
                   better on an arbitrary prompt, and the difference is
                   larger still on coding and reasoning tasks.
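                   For what it's worth, the "twice as likely" reading can
                   be checked with the standard Elo expected-score
                   formula; a quick sketch in Python (the function name
                   is mine):

```python
# Elo expected-score formula: P(A beats B) = 1 / (1 + 10**((Rb - Ra) / 400))
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Probability that player A is preferred over player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# The two LMArena scores cited above: 4.5 at 1438, 4.0 at 1318.
p = elo_win_prob(1438, 1318)
odds = p / (1 - p)
print(f"P(4.5 preferred) = {p:.3f}, odds ~ {odds:.2f}:1")
```

                   A 120-point Elo gap works out to roughly a 0.67 win
                   probability, i.e. about 2:1 odds of being judged
                   better on an arbitrary prompt.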
       
                  apwell23 wrote 22 hours 50 min ago:
                  > Doubt anyone in the world would be able to tell a
                  difference in a blind test between 4.0, 4o, 4.5 and 4.1.
                  
                   But this isn't 4.6, it's 5.
                   
                   I can tell the difference between 3 and 4.
       
                    dwater wrote 22 hours 28 min ago:
                    That's a very Spinal Tap argument for why it will be more
                    than just an incremental improvement.
       
        croes wrote 1 day ago:
         I see hardware and AI companies' revenues rise.
        
         Shouldn't the customers' revenue also rise if AI fulfills its
         productivity promises?
        
        Seems like the only ones getting rich in this gold rush are the shovel
        sellers. Business as usual.
       
          sofixa wrote 23 hours 22 min ago:
          > Shouldn’t the customers‘ revenue also rise if AI fulfills its
          productivity promises
          
          Not necessarily, see the Jevons paradox.
       
            croes wrote 20 hours 11 min ago:
            Jevron is about higher resource consumption and costs but the
            output and therefore the revenue should rise too.
            
            Maybe not the profit but at least the revenue.
       
            metalliqaz wrote 20 hours 48 min ago:
            Applying the Jevons paradox to this scenario should still result in
            revenues going up, assuming the employee labor being optimized adds
            value to the company. (they would add more)
       
          thecupisblue wrote 23 hours 29 min ago:
          The biggest problem is the inability of corporate middle management
          to actually leverage GenAI.
       
          mewpmewp2 wrote 23 hours 47 min ago:
          If it's automation it could also reduce costs of the customers. But
          that is a very complex question. It could be that there isn't enough
          competition in AI and so the customers are getting only marginal
          gains while AI company gets the most. It could also be that for
          customers the revenue / profits will be delayed as implementation
          will take time, and it could be upfront investment.
       
       
   DIR <- back to front page