_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   McKinsey wonders how to sell AI apps with no measurable benefits
       
       
        rchaud wrote 2 min ago:
        McKinsey 2 years ago:
        
         > The firm’s earlier research suggested that 2027 would be the first
         year when AI technology would be able to match the typical human’s
         performance in tasks that involve “natural-language understanding.”
         Now, McKinsey reckons it will happen this year.

         > Generative AI will give humans a new “superpower”, and the
         economy a much-needed productivity injection, said Lareina Yee, a
         senior partner at the firm and chair of McKinsey Technology, in the
         report.
        
        -
        
   URI  [1]: https://archive.is/mhYIn
       
        rsynnott wrote 41 min ago:
        > Software vendors keen to monetize AI should tread cautiously, since
        they risk inflating costs for their customers without delivering any
        promised benefits such as reducing employee head count.
        
        ... Wait, why would the _vendor_ care about that? It's the customers
        who should be cautious; unscrupulous vendors will absolutely sell them
        useless snake oil with no qualms, if they're willing to buy it.
        
        > These leaders are increasingly making budget trade-offs between head
        count investment and AI deployment, and expect vendors to engage them
        on value and outcomes, not just features.
        
        The cheek of them! Actually demanding that the product be useful!
       
        capestart wrote 43 min ago:
         The hype surrounding AI is exaggerated, and a good deal of the tools
         are not providing real value. Customers are seeing their costs
         increase without the expected productivity gains or layoffs. The
         absence of quantifiable ROI and opaque pricing structures are major
         deterrents. It will be difficult to convince people of AI's real
         potential until the sellers establish unmistakable advantages and
         fix the pricing.
       
        silexia wrote 51 min ago:
        Did McKinsey write this with AI?
       
        afavour wrote 51 min ago:
        Occurs to me that AI is a fundamental threat to the likes of McKinsey.
        You bring in the consultants when you want to make a decision but don't
        want any of the responsibility for making it. In the future they'll
        just give that task to an anonymous AI. "Nothing we can do!"
       
        _fizz_buzz_ wrote 56 min ago:
        I recently talked to someone who works at a company that builds fairly
        complicated machinery (induction heating for a certain material
        processing). He works in management and they did a week long workshop
        with a bunch of the managers to figure out where AI will make their
        company more efficient. What they came up with was that they could feed
        a spec from a customer into an AI and the AI will create the CAD
        drawings, wiring diagrams, software etc. by itself. And they wrote a
         report on it. And I just had to break it to him: the thing that AI is
         actually best at is replacing these week-long workshops where managers
         are bs-ing around to write reports. Also, it shouldn't be the managers
         deciding top-down where to deploy AI. Get the engineers, technicians,
         programmers etc. together and have them run a workshop to plan where
         to use AI, because they are probably already experimenting with it and
         understand where it works well and where it doesn't quite cut it yet.
       
        npilk wrote 1 hour 4 min ago:
        > For every $1 spent on model development, firms should expect to have
        to spend $3 on change management, which means user training and
        performance monitoring
        
        I think the general point here is true, but it's also brilliant framing
        from a company selling consulting services.
       
          caminante wrote 51 min ago:
          It couldn't get sillier. Oh wait!
          
          > Price levels: How should vendors set price levels when the cost of
          inferencing is dropping rapidly? How should they balance value
          capture with scaling adoption?
          
          This is written for B2B target clients as if it's pulling back the
          veil on pricing strategy and negotiating. Hire McKinsey to get you
          the BEST™ deal in town.
       
        btbuildem wrote 1 hour 16 min ago:
        This is because they're trying to reduce the wrong headcount. The
        largest inefficiencies in corpo orgs lie in the ways they organize
        their knowledge and information stores, and in how they manage decision
        making.
        
        The rank and file generally have a really good grasp on their subset of
        the domain -- they have expertise and experience, as well as local
        context. Small teams, their managers -- those are the ones who actually
        perform, and deliver value.
        
        As you move up the hierarchy, access to information does not scale.
         People in the middle are generally mediocre performers, buried in
         process, ritual and politics. In addition to these burdens, the
        information systems do their best to obscure knowledge, with the usual
        excuses of Safe and Secure (tm) -- things are siloed, search does not
        work, archives are sunsetted, etc.
        
         In some orgs tribalism also plays an outsized role, with teams acting
         competitively, which largely results in wasted resources and seven
        versions of the same failed attempt at New Shiny Thing.
        
        Then as we look higher yet in the hierarchy, the so-called decision
        makers don't really do anything that cannot be described as "maximize
        profit" or "cut costs", all while fighting not to get pulled down by
        the Lord of the Flies shenanigans of their underlings. They are the
        most replaceable.
        
        A successful "AI Transformation" would come in top-down, going after
        the most expensive headcount first. Only truly valuable contributors
         would remain at that level. Organizational knowledge bases would make
         it possible to search, analyze and reason about the institutional
         knowledge accrued in corporate archives over the years, enabling much
         more effective decision making. Meanwhile, the ICs would benefit from
         the AI boost,
        outsourcing some menial tasks to the machine, with the dual benefit of
        levelling up their roles, and feeding the machine more context about
        the lower-level work done across the org.
       
          fluidcruft wrote 8 min ago:
          I think another barrier is that end users don't trust IT to not pull
          the rug out from under us. It's quite a bit of effort to learn and
          figure out workflows for actually getting work done and IT doesn't
          tend to give a shit about that. Particularly enterprise IT's attitude
          about trials can kiss my ass. Enterprise IT has their timeline, and I
          have my deadlines. I'll get to it when I have time.
          
          But particularly we're always dealing with IT security "experts"
           taking things away and breaking everything and never bothering to
           figure out how we're supposed to use computers to actually get any
           work done
          ("hmmm. we didn't think about that... we'll get back to you" is a
          common response from these certified goons). Apparently the security
          gods have decided we can't have department file servers anymore
          because backups are too difficult to protect against ransomware or
          something so we're all distracted with that pronouncement from the
          mountain trying to figure out how to get anything done at the moment.
       
          josefritzishere wrote 19 min ago:
           Management will not volunteer to replace themselves. So that would
           mean that if that's all that AI is truly good at... the product is
           unsalable.
       
            apwell23 wrote 1 min ago:
            do you mean a VP laying off a middle manager would send a signal
            that he can be similarly replaced?
       
          tempodox wrote 35 min ago:
          Logically you’re right, but power does not follow logic.  And so
          it’s the lowest levels that get replaced by “AI”.
       
          re-thc wrote 42 min ago:
          > A successful "AI Transformation" would come in top-down, going
          after the most expensive headcount first.
          
          Do you still need an "AI Transformation" then? Sounds like just axe
          the CEO or cut their enormous salary = profit?
       
          nostrademons wrote 53 min ago:
          I've wondered sometimes what the root of this dynamic is, and why
          corporations are as inefficient as they are.  I've come to the
          conclusion that it's deliberate.
          
          When I look at top-level decision-makers at my Mag-7 employer, they
          are smart people.  Many of them were go-getters in their earlier
          career, responsible for driving some very successful initiatives, and
          that's why they're at the top of the company.  And they're very
          intentional about team structure: being close enough to senior
          directors and VPs to see some of their thinking, I can tell that they
          understand exactly who the competent people are, who gets things
          done, who likes to work on what, and then they put those people at
          the bottom of the hierarchy with incompetent risk-averse people above
          them.  Then they'll pull them out and have them report directly to a
          senior person when there's a strategic initiative that needs doing,
          complete it, and then re-org them back under a middle-manager that
          ensures nothing gets done.
          
          I think the reason for this is that if you have a wildly successful
           company, the last thing you want to do is screw it up. You're on top
           of the world, money is rolling in from your monopoly - and you're in
          zugzwang.  Your best move is not to play, because any substantive
          shift in your product or marketplace risks moving you to a position
          where you aren't so advantaged.  So CEOs of successful companies have
          a job to do, and that job is to ensure that nothing happens.  But
          people's natural inclination is to do things, and if they aren't
          doing things inside your company they will probably be doing things
          outside your company that risk toppling it.  So you put one section
          of the company to work digging holes, and put the other section to
          work filling them in, and now everybody is happy and productive and
          yet there's no net external change to your company's position.
          
          Why even have employees then?  Why not just milk your monopoly, keep
          the team lean, and let everybody involved have a big share of the
           profits?  Some companies do actually function like this: Nintendo
           and Valve famously run with fairly small employee counts and just
           milk their profits, and some HFT shops like RenTech just pay out
           huge employee dividends and milk their position.
          
          But the problem is largely politics.  For one, owning a monopoly
          invites scrutiny; there are a lot of things that are illegal, and if
          you're not very careful, you can end up on the wrong side of them. 
          Two, owning an incredibly lucrative business makes you a target for
          competition, and for rule-changes or political action that affect
          your incredibly lucrative business.  Perhaps that's why examples of
          highly-profitable businesses that stay small often involve staying
          secret (eg. HFT) or being in an industry that everybody else
          dismisses as inconsequential (eg. gaming or dating).
          
          By having the huge org that does nothing, the CEO can say "Look, I
          provide jobs.  We're not a monopoly because we have an unfair
          advantage, we compete fairly and just have a lot of people working
          very hard."  And they can devote a bunch of people to that legal
          compliance and PR to make sure they stay on the right side of the
          government, and it also gives them the optionality to pull all those
          talented people out and unmuzzle them when there actually is a
          competitive threat.
       
            apwell23 wrote 8 min ago:
            >  So CEOs of successful companies have a job to do, and that job
            is to ensure that nothing happens. But people's natural inclination
            is to do things, and if they aren't doing things inside your
            company they will probably be doing things outside your company
            that risk toppling it. So you put one section of the company to
            work digging holes, and put the other section to work filling them
            in, and now everybody is happy and productive and yet there's no
            net external change to your company's position.
            
             I work at a large music streamer and this perfectly describes my
             workplace. When I was outside I never understood why that company
             needs thousands and thousands of ppl to run what looks like a
             stagnant product that hasn't changed much in years.
       
            potatolicious wrote 40 min ago:
            > "Why even have employees then? Why not just milk your monopoly,
            keep the team lean, and let everybody involved have a big share of
            the profits?"
            
            So we're seeing this play out. There are two factors that exist in
            tension here:
            
             - The valuation of many of these companies depends on the perception
            that they are The Future. Part of that is heavy R&D spending and
            the reputation that they hire The Best. Even if the company mostly
            just wants to sit and milk its market position, keeping the stock
            price afloat requires looking like they're also innovative and
            forging the future.
            
            - Some companies are embracing the milk-it-for-all-its-worth life
            stage of their company. You see this in some of the Mag-7 where
            compensation targets are scaling down, explicit and implicit
            layoffs, etc. This gear-shifting takes time but IMO is in fact
            happening.
            
            The tightrope they're all trying to walk is how to do the latter
            without risking their reputation as the former, because the mythos
            that they are the engines of future growth is what keeps the stock
            price ticking.
       
            arethuza wrote 45 min ago:
            Have you seen the "sociopath, clueless, loser" model?
            
   URI      [1]: https://www.ribbonfarm.com/2009/10/07/the-gervais-principl...
       
              apwell23 wrote 7 min ago:
               These articles used to be so popular on HN back in the day. Now
               it's back to "let's pretend we are a meritocracy; politics is
               just learning how to work with other ppl".
       
              nostrademons wrote 41 min ago:
              Yeah I have.  Was a major influence on my thinking, though like
              all models, it's incomplete.  A lot of my comment is filling in
              the holes in that series - what are the mechanisms by which
              sociopaths make the organization function?  Why does the system
              as a whole function like this?
       
          lenerdenator wrote 1 hour 1 min ago:
          > This is because they're trying to reduce the wrong headcount.
          
          > A successful "AI Transformation" would come in top-down, going
          after the most expensive headcount first.
          
           This isn't a mistake. McKinsey consultants and the executives at
           their client companies are part of the same clique. You don't get
           into
          either without going to the right schools, being in the right
          fraternities, and knowing the right people. "Maximize profit" and
          "cut costs" are to be read as "keep the most money for ourselves in
          the form of earnings per share and dividends" and "pay fewer people".
          And since you can convert shares to money by gutting companies,
          there's no real incentive to remain competitive in the greater
          marketplace.
       
        daft_pink wrote 1 hour 16 min ago:
         I definitely find some of the features AI is being put into, like my
         PDF reader, useless and troubling.
       
          harvey9 wrote 1 hour 1 min ago:
          Adobe Reader on my work pc doesn't even do ctrl-f very well.
       
        luciferin wrote 1 hour 18 min ago:
         I am curious if the timing has impacted the inability to measure a
        benefit. AI is rolling out at the same time as widespread return to
        office campaigns. Remote work was widely studied and touted as
        improving efficiency, but no one is showing the drop for RTO. Is AI in
        part just balancing it out?  There's also an ongoing massive brain
        drain. Many companies are either laying off their most tenured and
        competent employees, or they are making life miserable for them in the
        hopes that they quit.
        
        All of this said, using AI in your back end takes a huge amount of time
        from your users and employees. You have to vary multiple prompts, you
        have to make the output sane, touch it up, etc. The most useful part of
        AI for me has been using it to learn something new, or push through a
        task that I otherwise couldn't do. I was able to partially rewrite a
        logging window to reduce CPU use significantly. It took me over two
        weeks of back and forth with AI to figure out a workable solution and
         implement it into the software. A competent programmer probably could
        have done it better than I did in less than an hour. There's no
        business benefit to a help desk person being able to spend 2 weeks
        writing code that an engineer would be much better suited to handling.
        But maybe that engineer could write it in 10 minutes instead of an hour
        if they used AI to understand the software first.
       
          SecretDreams wrote 1 hour 13 min ago:
          > Is AI in part just balancing it out?
          
          Likely, no. In my industry, I see a fraction of ICs using it well, a
          fraction of leadership using it for absolute dog shit idea
          generation, and the remainder using it to make their jobs easier in
          the short run, while incurring debt in the long run since nobody is
          "learning" from AI summaries and most people don't seem to be reading
          the generated "AI notes" sent in emails.
          
          By and large, I think AI is going to hurt my workplace based on the
           current trajectory, but it won't be realized until we are in a hole
           that's hard to dig out of.
       
        its-kostya wrote 1 hour 19 min ago:
        We have to accept that sometimes technology that was envisioned to
        change the future one way, may be beneficial in other ways instead -
        and that's okay. We are very clearly still in the phase of "throw AI at
        everything and see where it is useful." For example, just yesterday I
        was sent a contract to sign via DigiSign. There was a "Summarize
        contract with AI" button. Having read the contract in full, I was
        curious how good the summary would be. The summary was very low
        fidelity and did not go into the weeds of the contract and I would be
        essentially signing the contract blind. Although AI is pretty good at
         summarizing key points of things like articles and conversations, this
         was a very poor use case imho. But hey, they tried it and hopefully
         will see it is a waste. Nothing wrong with iterating; we just have to
         converge on acceptable use cases.
       
          cozzyd wrote 1 hour 18 min ago:
          I wonder if the AI summary could be legally perilous in this case
       
            rsynnott wrote 33 min ago:
            I assume it had the usual "this is AI, and thus probably bullshit"
            disclaimers that one finds on such things.
       
            HelloMcFly wrote 48 min ago:
            Easy to imagine that many organizations using it don't necessarily
            want the signees to really read the document in full anyway, much
            less get an informative summary with Reasons To Be Cautious of
            Signing as one of the summary categories.
       
        rybosworld wrote 1 hour 19 min ago:
        Looking at this from a software dev perspective:
        
        - The moment AI is actually good enough to replace us, it will also be
        incredibly easy to create new software/apps/whatever. There could/would
         be a billion solo-dev SaaS companies eating the lunch of every
        traditional tech org.
        
        - People (Executives) seem to underestimate just how much of the work
        is iterating and refining a product over a long time. Getting an LLM
        good enough to complete a Jira task is missing the point.
        
         - IMO LLMs are also completely draining the motivation of workers. A
         lot of software devs are intrinsically motivated by solving the
         problem. If your role is being watered down to "prompt the chatbot and
         babysit what comes out", the motivation disappears. This also
        absolutely destroys any of the creativity/discovery that comes out of
        solving the task hands-on.
       
          resolutefunctor wrote 55 min ago:
          Your perspective on your last point is interesting. I actually feel
           the opposite; it's become a motivator for me.
          
          I used to love coding, and did it a ton. Then it became less and less
          part of my job, and I started hating coding. It was so frustrating
          when I knew exactly what needed to be done in the code, but had to
          spend the time doing low value stuff like typing syntax, tracing
          through the code to find the right file to edit, etc when I'm already
          strapped for time.
          
          LLMs and agentic coding tools have allowed me to not spend time on
          the low-value tasks of typing, but instead on the high-value tasks of
          solving problems like you mentioned. Just interesting the different
          perspectives we have.
       
            rybosworld wrote 23 min ago:
            That's a fair point and I actually agree with you. A large part of
            writing code is doing something menial as you said.
            
            I think both of the viewpoints are valid depending on where you're
            at in your career.
            
            We can imagine a junior developer who isn't quite bored with those
            low-value tasks just yet.
            
            As you grow more senior/experienced, the novel problems become
             harder to find - and those are the ones you want to work on. AI
            can certainly help you cut through the chaff so you have more time
            to focus on those.
            
            But trends are trends and AI is increasingly getting better at
            solving the novel/interesting problems that I think you're
            referring to.
            
            Everyone's different and I know there are folks who are excited to
            not have to write a single line of code. I'd wager that's not
            actually most engineers/developers though.
            
            People still garden by hand because it's innately satisfying.
       
        alberth wrote 1 hour 21 min ago:
        McKinsey has pitched my company on projects where their compensation is
        entirely outcome-based — for example, if a project generates $20
        million in incremental revenue, they would earn 10% of that amount.
        
        I have to admit, the results they demonstrated — which we validated
        using our own data — were impressive.
        
        The challenge, however, is that outcome-based contracts are hard for
        companies to manage, since they still need to plan and budget for
        potential costs upfront.
        
         So even when you have measurable benefits, it's still not so easy.
        
        EDIT:
        
        To clarify the issue — companies are used to budgeting for
        initiatives with fixed costs. But in an outcome-based contract, the
        cost is variable.
        
        As a result, finance teams struggle to plan or allocate budgets because
        the final amount could range widely — for example, $200K, $2M, or
        even $20M — depending on the results achieved.
        
        Additionally, you almost then need a partial FTE just to manage these
        contracts to ensure you don't overpay because the results are wrongly
        measured, etc.
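
         To make the budgeting problem concrete, here is a rough
         back-of-the-envelope sketch in Python (the 10% fee and the scenario
         figures are purely illustrative, not from any actual contract):

           # Hypothetical sketch: why a contingent fee is hard to budget.
           # Assumes a 10% fee on incremental revenue, as in the example
           # above; the scenario outcomes are made-up placeholders.
           FEE_RATE = 0.10

           scenarios = {                   # incremental revenue, in dollars
               "pessimistic":   2_000_000,
               "expected":     20_000_000,
               "optimistic":  200_000_000,
           }

           for name, revenue in scenarios.items():
               fee = FEE_RATE * revenue
               print(f"{name:>12}: +${revenue:,.0f} -> fee ${fee:,.0f}")

           # Fees come out to $200K, $2M and $20M: the same line item spans
           # two orders of magnitude, so there is no single number a finance
           # team can pencil into next year's budget.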
       
          dahcryn wrote 16 min ago:
           That's usually in areas where they are very certain.
           
           I'd be surprised if they'd do that for GenAI projects, maybe only
           for really good clients that pay them $50M+ a year anyway.
       
          athrowaway3z wrote 48 min ago:
          Well, that's fucking scary. I'd be digging deep if I was on the
          board.
          
          Either
          
           - the execs are leaving a laughably easy $20M on the table that
           McKinsey knew they'd make (how did they know, and why didn't we)
          
          - they're dealing with insider information - especially dangerous if
          McKinsey is changing dependencies around.
          
          - they're doing some creative accounting
       
          halper wrote 55 min ago:
          How is that hard? They put 90% of their estimated revenue as net
          revenue (post-McK tax) in the budget? Seems about as hard as the
          underlying problem, which is guessing ("forecasting") the revenue.
       
          caminante wrote 1 hour 2 min ago:
          What's upfront about a backloaded earnout?
          
          You model it as a fixed %, variable cost and run revenue
          sensitivities. It either meets your investment criteria or doesn't.
       
            mschild wrote 55 min ago:
             I'd imagine the opportunity cost and manpower. Even though
             McKinsey would do the work, they will still need access to people
             and information to accomplish it.
       
              caminante wrote 28 min ago:
              Parent said McK's fee is 100% contingent.
              
              If the company doesn't have the resources available to execute
              something they've validated, then that's a funding issue that can
              be solved.
              
              Either way, McK's structure doesn't make it "hard for a company
              to manage." The investment committee approves or rejects.
       
        jstummbillig wrote 1 hour 29 min ago:
        > Many software firms trumpet potential use cases for AI, but only 30
        percent have published quantifiable return on investment from real
        customer deployments.
        
        "Only" 30%. Interesting framing.
       
          rsynnott wrote 35 min ago:
          This means that only 30% are even _claiming_ to have shown anything
          quantifiable. Given that such claims tend to be essentially puffery,
          the _real_ rate is presumably far lower.
       
          walkabout wrote 1 hour 13 min ago:
           This kind of "data driven" corporate stuff is, IME, so bullshitty
           and hand-wavy that if only 30% are even able to claim to have found
           quantifiable ROI (most of them with laughably bad methodology that a
           slightly-clever 10th grader who half paid attention in their science
           and/or stats classes could spot), I'd assume only 5% or fewer
           actually found ROI.
       
        rubzah wrote 1 hour 30 min ago:
        Dot-bomb 2.0.
       
        croes wrote 1 hour 31 min ago:
         Quite ironic that this comes from McKinsey, because their service
         also rarely has a measurable benefit.
       
          walkabout wrote 1 hour 21 min ago:
          Our ability to measure management productivity in general is
          basically nonexistent. It's an area of academic study and AFAIK the
          state-of-the-art remains not much better than [shrug emoji].
          
          Remember that when they're wrecking your productivity by trying to
          twist your job into something they can measure in a spreadsheet.
       
        fuzzfactor wrote 1 hour 32 min ago:
        >Consultant says software vendors risk hiking prices without cutting
        costs or boosting productivity
        
        From what I know of the firm, it looks like clients have come to the
        right place if they want a consultant with great experience at hiking
        prices without cutting costs or boosting productivity.
       
        add-sub-mul-div wrote 1 hour 33 min ago:
        It has to be tough for them. They can't not be in on the grift, it's
        too big and they're too big. But the grift doesn't really benefit them.
       
          walkabout wrote 1 hour 18 min ago:
          They're probably salty that the only jobs of theirs they can figure
          out how to automate away with AI are the
          too-cheap-to-bother-automating Indian workers who author their
          PowerPoint decks.
       
            jncfhnb wrote 56 min ago:
             The people in India don’t author the slides. They just make them
            look nice. We have some AI for it.
       
        hansmayer wrote 1 hour 38 min ago:
        Did the report from Deloitte come in yet ? :)
       
        oytis wrote 1 hour 39 min ago:
        Next they should start wondering why to sell AI apps with no measurable
        benefits
       
          jncfhnb wrote 1 hour 0 min ago:
          Because you want to sell what people want to buy
       
            oytis wrote 9 min ago:
            If people wanted to buy it, then "how" wouldn't be a difficulty
       
          joelthelion wrote 1 hour 21 min ago:
          That one is easy to answer from their perspective :)
       
        richardw wrote 1 hour 47 min ago:
        Suggest posting the original:
        
   URI  [1]: https://www.mckinsey.com/industries/technology-media-and-telec...
       
        jmuguy wrote 1 hour 47 min ago:
        I hadn't ever tried Notion before but I sort of vaguely understood it
        was a nice way to make some documentation and wiki type content.  I had
        a need for something like a table that I could filter that I would
        normally just do in Google Sheets.  So I go check out Notion and their
         entire site is focused on AI.  Look at what this agent can do, or that.
         I signed up and the entire signup flow is also focused on AI.  Finally
         I was able to locate what I thought was their core offering - the wikis
        etc.  And ended up pretty impressed with the features they have for all
        of that.
        
        Now maybe Notion customers love all these AI features but it was super
        weird to see that stuff so prominently given my understanding of what
        the company was all about.
       
          ozgrakkurt wrote 37 min ago:
           Would strongly recommend avoiding Notion. They have super scummy
           practices for billing, removing users from the company account, etc.
       
          throwaway0123_5 wrote 42 min ago:
          I'm a heavy Notion user and haven't once used the AI features. I use
          AI on a near-daily basis outside Notion, but it just isn't something
          I need from Notion. On the other hand at least it isn't that
          intrusive in Notion unlike in some other apps.
       
          walkabout wrote 1 hour 23 min ago:
          Approximately 95% of my experience using "AI" so far is as something
          I accidentally activate then waste a few seconds figuring out how to
          make it stop. What little I've seen of other people's experiences
          with it on e.g. screen sharing calls mirrors my own. I saw someone
          the other day wrestling with Microsoft's AI stuff while editing a
          document and it was comically similar to Clippy trying to help but
          just fucking things up, except kinda worse because it was a lot less
          polite about it.
          
          (And I develop "AI" tools at my day job right now...)
       
          tveita wrote 1 hour 24 min ago:
          It's for investors AFAICT. When Masayoshi Son opens your home page it
          better say 'AI' in big bold letters.
          
          Is your product a search engine? It's AI now. [1][2]
          
          Is it a cache? Actually, it's AI. [3] A load balancer? Believe it or
           not, AI. [4]
          
   URI    [1]: https://www.elastic.co/
   URI    [2]: https://vespa.ai/
   URI    [3]: https://redis.io/
   URI    [4]: https://www.f5.com/
       
            rkachowski wrote 1 hour 2 min ago:
            whoa I'm out of the loop, what the fuck happened to redis?
       
              afavour wrote 47 min ago:
              Venture capital
       
          an0malous wrote 1 hour 25 min ago:
          The startup I work at is doing the same strategy pivot, we’re
          integrating AI into every feature of the platform. Every textbox or
           input field has the option to generate the value from AI. Features
           that no one used when they were a simple form with a button can now
           be done through our chatbot. We have two key product metrics for the
          entire company and one of them is how many AI tokens our users are
          generating.
       
            Aperocky wrote 8 min ago:
            AI tokens that you pay for?
       
            liveoneggs wrote 46 min ago:
             My job is talking like this too but I don't understand why we need
            to keep any of the textboxes at all if the bot is populating
            everything.
       
          mountainriver wrote 1 hour 35 min ago:
           You kind of have to, or a competitor will come out AI-first and may
           get a bunch of funding.
       
          wintermutestwin wrote 1 hour 38 min ago:
          Notion customer here and their AI crap keeps interrupting my
          workflow. Pretty stupid move on their part because they have
          motivated me to ditch the subscription.
       
          geerlingguy wrote 1 hour 40 min ago:
          They used to be like a really easy to use collaborative wiki. And I
          used it for a couple distributed projects and loved that aspect.
          
          But I'm guessing their growth was linear, and hard fought, after
          initial success over tools like Atlassian's which are annoying and
          expensive.
          
          So to get back to hypergrowth, they had to stuff AI in every nook and
          cranny.
       
            pydry wrote 1 hour 29 min ago:
             The sad part is that it wasn't entirely nonsensical to use AI to
             improve Notion's use as a knowledge base, but the way they
             actually used it was the most hamfisted possible.
       
          inquirerGeneral wrote 1 hour 45 min ago:
           Just read a single interview with the CEO; they are all in on AI.
       
        richardw wrote 1 hour 50 min ago:
        First AI came for the artists
        
        And I did not speak out
        
        Because I was not an artist
       
          jncfhnb wrote 1 hour 3 min ago:
          Kind of the opposite of the article’s sentiment
       
          bluefirebrand wrote 1 hour 23 min ago:
          Speak for yourself. I've been shouting against AI from minute 1, in
          defense of artists and anyone else
          
          It doesn't matter. People are convinced it's a miracle technology, so
          I'm just a backwards luddite resisting progress
       
        mojuba wrote 1 hour 52 min ago:
        AI in its present form is probably the strangest and the most
        paradoxical tech ever invented.
        
        These things are clearly useful once you know where they excel and
        where they will likely complicate things for you. And even then,
        there's a lot of trial and error involved and that's due to the
        non-deterministic nature of these systems.
        
        On the one hand it's impressive that I can spawn a task in Claude's app
        "what are my options for a flight from X to Y [+ a bunch of additional
        requirements]" while doing groceries, then receive a pretty good
        answer.
        
        Isn't it magic? (if you forget about the necessity of adding "keep it
        short" all the time). Pretty much a personal assistant without the
        ability of performing actions on my behalf, like booking tickets - a
        bit too early for that.
        
        Then there's coding. My Copilot has helped me dive into a gigantic
        pre-existing project in an unfamiliar programming language pretty fast
        and yet I have to correct and babysit it all the time by intuition. Did
        it save me time? Probably, but I'm not 100% sure!
        
         The paradox is that there's probably no going back from AI where it
         already kind of works for us, individually or at the org level, yet
         most of us don't seem to be fully satisfied with it.
        
        The article here pretty much confirms the paradox of AI: yes, orgs
        implement it, can't go back from it and yet can't reduce the headcount
        either.
        
        My prediction at the moment is that AI is indeed a bubble but we will
        probably go through a series of micro-bursts instead of one gigantic
        burst. AI is here to stay almost like a drug that we will be willing to
        pay for without seeing clear quantifiable benefits.
       
          johndhi wrote 1 hour 27 min ago:
          I feel like one benefit of humans is you can find someone you can
          truly trust under almost all circumstances and delegate to them.
          
          With AI you have a thing you can't quite trust under any circumstance
          even if it's pretty good at everything.
       
            mojuba wrote 44 min ago:
             A hammer doesn't always work as desired; it depends on your skills
            plus some random failures. When it works however, you can see the
            result and are satisfied with it - congratulations, you saved some
            time by not using a rock for the same task.
       
              johndhi wrote 18 min ago:
              I can trust a hammer will be a hammer, though.
       
          deepsquirrelnet wrote 1 hour 28 min ago:
          It’s a result of the lack of rigor in how it’s being used.
          Machine learning has been useful for years despite less than 100%
          accuracy, and the way you trust it is through measurement. Most
          people using or developing with AI today have punted on that because
          it’s hard or time consuming. Even people who hold titles of machine
          learning engineer seem to have forgotten.
          
          We will eventually reach a point where people are teaching each other
          how to perform evaluation. And then we’ll probably realize that it
           was being avoided because it’s expensive to even get to the point
          where you can take a measurement and perhaps you didn’t want to
          know the answer.
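
           For what I mean by measurement, here is a minimal sketch of an
           evaluation harness in Python (the eval set and the stand-in model
           are hypothetical; a real eval set would be far larger):

             # Minimal eval-harness sketch: "trust through measurement"
             # means scoring model outputs against labeled cases.
             from typing import Callable

             def accuracy(model: Callable[[str], str],
                          eval_set: list[tuple[str, str]]) -> float:
                 """Fraction of cases where the output matches the label."""
                 hits = sum(1 for prompt, expected in eval_set
                            if model(prompt).strip() == expected)
                 return hits / len(eval_set)

             # Illustrative usage with a toy stand-in for the model under
             # test; a real harness would call the deployed model here.
             eval_set = [("2+2=", "4"), ("capital of France?", "Paris")]
             print(accuracy(lambda p: {"2+2=": "4"}.get(p, ""), eval_set))
             # -> 0.5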
       
        Printerisreal wrote 1 hour 54 min ago:
         The truth is out, people. They are confessing it now.
       
        JCM9 wrote 1 hour 57 min ago:
         What most companies and CEOs fail to grasp is that, with all the talk
         of headcount cuts from AI, customers are expecting that AI will LOWER
         pricing and costs, not raise them. The challenge is that the cost
         cutting story is mostly vaporware (as many other studies have shown),
         so CEOs are in a tough spot. They can’t both boast to shareholders
         about how much cost savings they got from rolling out AI and then
         charge customers more.
        
        All this is pretty textbook setup for how this bubble finally implodes
        as companies fail to deliver on their AI investments and come under
        fire from shareholders for spending a ton with little return to show
        for it.
       
          gorbachev wrote 1 hour 50 min ago:
          I can't wait until the AI vendors start charging according to the
          true costs of their tools (+ profit margin). Let's see what the cost
          savings are then.
          
          Additionally once the AI vendors have locked these companies to their
          ecosystem, the enshittification will start and the companies who
          reduced their headcount to bare minimum will start to see why that
          was a really, really bad idea.
       
        givemeethekeys wrote 2 hours 2 min ago:
        > Software vendors keen to monetize AI should tread cautiously, since
        they risk inflating costs for their customers without delivering any
        promised benefits such as reducing employee head count.
        
         That's easy. Reduce the headcount first, and then let the remaining
         team of poor and desperate, I mean, elite engineers and support teams
         use AI. /s
         
         When will boards replace executive leadership with AI? If Return to
         Office taught us anything, it was that we only need a couple of them,
         and the rest copy and paste. Well, AI can do that! Also /s, but maybe
         just 50%.
       
        StableAlkyne wrote 2 hours 4 min ago:
        > while quoting an HR executive at a Fortune 100 company griping: "All
        of these copilots are supposed to make work more efficient with fewer
        people, but my business leaders are also saying they can't reduce head
        count yet."
        
        I'm surprised McKinsey convinced someone to say the quiet part out loud
       
          dahcryn wrote 21 min ago:
           Both make a lot of sense, but the biggest mistake they make is to
           see people as capacity, or as a counter.
           
           Each human can be a bit more productive; I fully believe 10-15% is
           possible with today's tools if we do it right. But each human has a
           unique set of experience and knowledge. If we are a team of 10 and
           we all do our jobs 10% faster, that doesn't mean you can let one of
           us go. It just means we all do our jobs 10% faster, which we
           probably waste by drinking more coffee or taking longer lunch
           breaks.
       
          loudmax wrote 59 min ago:
          Organizations that successfully adapt are those that use new
          technology to empower their existing workers to become more
          productive.  Organizations looking to replace humans with robots are
          run by idiots and they will fail.
       
          jf22 wrote 1 hour 0 min ago:
          This part was never quiet...
          
          The quiet part out loud phrase is overused.
       
          treis wrote 1 hour 5 min ago:
          I think they can.  IME LLMs have me working somewhat less and doing
          somewhat more. It's not a tidal wave but I'm stuck a little bit less
           on bugs, and at some things like regex or SQL I'm much faster.  It's
          something like 5-10% more productive.  That level of slack is easy to
          take up by doing more but theoretically it means being able to lose 1
          out of every 10-20 devs.
       
          iamleppert wrote 1 hour 50 min ago:
          How does it make sense to trade one group of labor (human) who are
          generally loosely connected, having little collective power for
          another (AI)? What you're really doing isn't making work more
          "efficient", you're just outsourcing work to another party -- one who
          you have very little control over. A party that is very well
          capitalized, who is probably interested in taking more and more of
          your margin once they figure out how your business works (and that's
          going to be really easy because you help them train AI models to do
          your business).
       
            newsclues wrote 1 hour 45 min ago:
            It’s the same as robots in a factory.
       
              rswail wrote 1 hour 16 min ago:
               Except that the people that make robots for factories aren't
               interested in making whatever that factory is making.
       
                iamleppert wrote 45 min ago:
                That's not required. All that is required is becoming a sole
                source of labor, or a source that is the only realistic choice
                economically.
                
                If you ask me, that's the real long game on AI. That is exactly
                why all these billionaires keep pouring money in. They know
                it's the only way to continue growth is to start taking over
                large sections of the economy.
       
          _fat_santa wrote 1 hour 53 min ago:
          I find it all quite strange:
          
          - AI companies of course will try and sell you that you can reduce
          headcount with AI
          
           - CEOs will parrot this talking point without ever taking a closer
           look.
           
           - Everyone lower down on the org chart minus the engineers is
           wondering why the change hasn't started yet.
           
           - Meanwhile engineers are tearing their hair out because they know
           that AI in its current state will likely not replace any workers.
           
           Pretty soon we will have articles like "That time that CEOs thought
           that AI could replace workers".
       
            rsynnott wrote 37 min ago:
            > Pretty soon we will have articles like "That time that CEO's
            thought that AI could replace workers".
            
            Yup, it's just the latest management fad. Remember Six Sigma? Or
            Agile (in its full-blown cultish form; some aspects can be mildly
            useful)? Or matrix management? Business leaders, as a class, seem
            almost uniquely susceptible to fads. There is always _some_ magic
            which is going to radically increase productivity, if everyone just
            believes hard enough.
       
            parliament32 wrote 1 hour 17 min ago:
            > ripping their hair out
            
             I mean, nah, we've seen enough of these cycles to know exactly how
            this will end.. with a sigh and a whimper and the Next Big Thing
            taking the spotlight. After all, where are all the articles about
            how "that time that CEOs thought blockchain could replace
            databases" etc?
       
            itake wrote 1 hour 38 min ago:
            The incentive structure for managers (and literally everyone up the
             chain) is to maximize headcount. The more people you manage, the
             more power you have within the organization.
            
            No one wants to say on their resume, "I manage 5 people, but trust
             me, with AI, it's like managing 20 people!"
            
            Managers also don't pay people's salaries. The Tech Tools budget is
            a different budget than People salaries.
            
            Also keep in mind, for any problem space, there is an unlimited
             number of things to do. 20 people working 20% more efficiently
             won't reach infinity any faster than 10 people.
       
              deaux wrote 1 hour 18 min ago:
              > The incentive structure for managers (and literally everyone up
              the chain) is to maximize headcount. More people you managed, the
              more power you have within the organization
              
              Ding ding ding!
              
              AI can absolutely reduce headcount. It already could 2 years ago,
              when we were just getting started. At the time I worked at a
               company that did just that, successfully automating away
               thousands of jobs which couldn't be automated pre-LLMs. The
               reason it ""worked"" was
              because it was outsourced headcount, so there was very limited
              political incentive to keep them if they were replaceable.
              
              The bigger and older the company, the more ossified the
               structures are that want to keep headcount equal, and
              ideally grow it. This is by far the biggest cause of all these
              "failed" AI projects. It's super obvious when you start noticing
              that for jobs that were being outsourced, or done by
              temp/contracted workers, those are much more rapidly being
              replaced. As well as the fact that tech startups are hiring much
              less than before. Not talking about YC-and-co startups here,
              those are global exceptions indeed affected a lot by ZIRP and
              what not. I'm talking about the 99.9% of startups that don't get
              big VC funds.
              
              A lot of the narrative on HN that it isn't happening and AI is
              all a scam is IMO out of reasonable fear.
              
              If you're still not convinced, think about it this way. Before
              LLMs were a thing, if I asked you what the success rate of
              software projects at non-tech companies was, what would you have
              said? 90% failure rate? To my knowledge, the numbers are indeed
              close. And what's the biggest reason? Almost never "this problem
               cannot be technically solved". You'd probably name other, more
              common reasons.
              
              Why would this be any different for AI? Why would those same
              reasons suddenly disappear? They don't. All the politics, all the
              enterprise salesmen, the lack of understanding of actual needs,
              the personal KPIs to hit - they're all still there. And the
              politics are even worse than with trad. enterprise software now
              that the premise of headcount reduction looms larger than ever.
       
                sofixa wrote 34 min ago:
                > The bigger and older the company, the more ossified the
                structures are that have a want to keep headcount equal, and
                ideally grow it.
                
                I don't know, most of the companies doing regular layoffs
                 whenever they can get away with it are pretty big and old. Be
                it in tech - IBM/Meta/Google/Microsoft, or in physical things -
                car manufacturers, shipyards, etc.
       
                ckcheng wrote 1 hour 0 min ago:
                Yes, and it’s instructive to see how automation has reduced
                head count in oil and gas majors. The reduction comes when
                there’s a shock financially or economically and layoffs are
                needed for survival. Until then, head count will be stable.
                
                Trucks in the oil sands can already operate autonomously in
                controlled mining sites, but wide adoption is happening slowly,
                waiting for driver turnover and equipment replacement cycles.
       
              2OEH8eoCRo0 wrote 1 hour 18 min ago:
              > for any problem space, there is an unlimited number of things
              to do.
              
              That's what I've wondered. We don't just run out of work,
               products, features, etc. We can just build more, but so can the
               competition, right?
       
              skeeter2020 wrote 1 hour 20 min ago:
              Maybe 40 years ago or in some cultures, but I've always focused
              on $ / person. If we have a smaller team that can generate $2M in
              ARR per developer that's far superior to $200K. The problem is
              once you have 20 people doing the job nobody thinks it's possible
              to do it with 10. You're right that "there is an unlimited number
              of things to do" and there's really obvious things that must be
              done and must not be done, but the majority IME are should or
              could be done, and in every org I've experienced it's a challenge
              to constrain the # of parallel initiatives, which is the
              necessary first step to reducing active headcount.
       
                paulsutter wrote 1 hour 16 min ago:
                Exactly, it’s much easier with a new organization.
                
                In my previous company, we would speculate about where to use
                AI and we were never sure.
                
                In the new company we use AI for everything and produce more
                with substantially fewer people
       
                  ToucanLoucan wrote 1 hour 1 min ago:
                  Does anyone want what you're producing though?
                  
                  I don't mean to be dismissive and crappy right out of the
                  gate with that question, I'm merely drawing on my experience
                  with AI and the broader trends I see emerging: AI is
                  leveraged when you need knowledge products for the sake of
                  having products, not when they're particularly for something.
                  I've noticed a very strange phenomenon where middle managers
                  will generate long, meandering report emails to communicate
                  what is, frankly, not complicated or terribly deep
                  information, and send them to other people, who then
                  paradoxically use AI to summarize those emails, likely into
                  something quite similar to what was prompted to be generated
                  in the first place.
                  
                  I've also noticed it being leveraged heavily in spaces where
                  a product existing, like a news release, article, social
                  media post, etc. is in itself the point, and the quality of
                  it is a highly secondary notion.
                  
                  This has led me to conclude that AI is best leveraged in such
                  cases where nobody including the creator of a given thing
                   really... cares much what the thing is, if it's good, or if
                   it does its job well. It exists because it should exist, and
                   its existence performs the function far more than anything
                   to do with the actual thing that exists.
                  
                  And in my organization at least, our "cultural opinion" on
                  such things would be... well if nobody cares what it says,
                  and nobody is actually reading it... then why the hell are we
                  generating it and then summarizing it? Just skip the whole
                   damn thing and send a short list email of what needs
                  communicating and be done.
       
                    delusional wrote 18 min ago:
                    > Does anyone want what you're producing though?
                    
                    He's either lying or hard-selling. The company in his
                    profile "neofactory.ai" says they "will build our first
                    production line in Dallas, TX in Q3." well, we just entered
                    Q4, so not that. Despite that it has no mentions online and
                    the website is just a "contact us" form.
       
                    silverquiet wrote 52 min ago:
                    The anthropologist David Graeber wrote a book called
                    "Bullshit Jobs" that explored the subject. It shouldn't be
                    surprising that a prodigious bullshit generator could find
                    a use in those roles.
       
                  ipython wrote 1 hour 8 min ago:
                  Do you have any examples of the types of tasks you’ve found
                   the most success with using AI?
       
            btbuildem wrote 1 hour 39 min ago:
            I am still of the conviction that "reducing employee head count"
            with AI should start at the top of the org chart. The current
             iterations of AI already talk like the C-suites, and deliver
             approximately the same value. It would provide additional
             benefits, in
            that AIs refuse to do unethical things and generally reason
            acceptably well. The cost cutting would be immense!
            
             I am not kidding. In any large corp, the decision makers refuse to
            take any risks, show no creativity, move as a flock with other
            orgs, and stay middle-of-the-road, boring, beige khaki. The current
            AIs are perfect for this.
       
              visarga wrote 3 min ago:
              > I am still of the conviction that "reducing employee head
              count" with AI should start at the top of the org chart. The
              current iterations of AI already talk like the C-suites
              
              That is exactly what it can't do. We need someone to hold liable
               for key decisions.
       
              mbesto wrote 34 min ago:
              > In any large corps, the decision makers refuse to take any
              risks, show no creativity, move as a flock with other orgs, and
              stay middle-of-the-road, boring, beige khaki.
              
              It's hard to take this sentiment seriously from a source that
              doesn't have direct experience with the c-suite. The average
              person only gets to see the "public relations" view of the
              c-suite (mostly the CEO) so I can certainly see why a "LLM based
              mouthpiece" might be better.
              
              The c-suite is involved in thousands of decisions that 90% of the
              rest of the world is not privy to.
              
              FWIW - As a consumer, I'm highly critical of the robotic-like
              external personas the c-suite take on so I can appreciate the
              sentiment, but it's simply not rooted in any real experience.
       
              skeeter2020 wrote 1 hour 17 min ago:
              It's not the top IME, but the big fat middle of the org chart
              (company age seems to mirror physical age maybe?) where middle to
               senior managers can hide out, deliver little demonstrable value
              and ride with the tides. Some of these people are far better at
              surfing the waves than they are at performing the tasks of their
              job title, and they will outlast you, both your political skills
              and your tolerance for BS.
       
              A4ET8a8uTh0_v2 wrote 1 hour 22 min ago:
              One could argue that they deliver a better value than meat
              leaders.
       
              jordanb wrote 1 hour 33 min ago:
              "Wow this AI both writes and reads email? That's about 90 percent
              of my job and -- I presume -- 90 percent of what happens around
              here!"
       
                imglorp wrote 1 hour 25 min ago:
                 AND it can sit in meetings all day and not forget any decisions;
                that's the other 90 percent of a manager's day.
       
                  rwmj wrote 1 hour 14 min ago:
                  And just like senior managers, every time you ask it a
                  question, it starts a new context.
       
                    walkabout wrote 59 min ago:
                    Can it turn simple yes-or-no questions, or "hey who's the
                    person I need to ask about X?" into scheduled phone calls
                    that inexplicably invite two or three other people as an
                    excuse to fill up its calendar so it looks very busy?
       
              johndhi wrote 1 hour 37 min ago:
              Not a crazy idea. Sergey at Google said it's best at replacing
              managers fwiw
       
            newsclues wrote 1 hour 46 min ago:
            AI is most capable of replacing the humans who have the power to
            decide or influence the choice to replace humans with AI.
            
            But managers will not obsolete themselves.
            
            So right now AI should be used to monitor and analyze the workforce
            and find the efficiency that can be achieved with AI.
       
          jordanb wrote 2 hours 2 min ago:
          Also strange that this executive is worried about how the business
          continues to function after the people are gone. That's not the
          McKinsey Way!
       
        bgwalter wrote 2 hours 5 min ago:
        That is good news for once. If McKinsey is on the case, "AI" will soon
        fail (after McKinsey has raked in consultancy fees).
       
        jordanb wrote 2 hours 5 min ago:
        Maybe they could send them to Harvard Business School?
       
       
   DIR <- back to front page