_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Strategic Wealth Accumulation Under Transformative AI Expectations
       
       
        visarga wrote 5 hours 12 min ago:
        This paper's got it backwards. AI's benefits don't pile up with the
        owners, they flow to whoever's got a problem to solve and knows how to
        point the AI at it. Think of AI like a library: owning the books
        doesn't make you benefit much, applying knowledge to problems does. The
        big winners are the ones setting the prompts, not the ones owning the
        servers. AI developers? They're making cents per million tokens while
        users, solo or corporate, cash in on the real value: application.
        
        Sure, the rich might hire some more people to aim the AI for them, but
        who's got a monopoly on problems? Nobody. Every freelancer, farmer, or
        startup's got their own problems to fix, and cheap AI access means they
        can. The paper's obsessed with wealth grabbing all the future benefits,
        but problems are everywhere, good luck cornering that market. Every one
        of us has their own problems and stands to get personalized benefits
        from AI.
        
         In the age of AI, having problems is linked to receiving its
         benefits. Imagine, for example, that I feel one side of my face
         drooping and have speech difficulty, and I type my symptoms into an
         LLM, and it tells me to get to a doctor quickly. It might save my
         life from a stroke. Who gets the largest benefit here?
        
        Problems are distributed even if AI is not.
       
          tyre wrote 4 hours 59 min ago:
          > The big winners are the ones setting the prompts, not the ones
          owning the servers. AI developers? They're making cents per million
          tokens while users, solo or corporate, cash in on the real value:
          application
          
          If this were true, AWS wouldn't have pulled in well over $100bn in
          2024. Nvidia wouldn't be worth $3.3tn.
          
          The owners and builders of infra make a ton of money.
       
            visarga wrote 4 hours 54 min ago:
            AWS makes a fraction of the money their customers make. And NVIDIA
            is just seeing benefits from market speculation at work. Most LLM
            providers are losing money right now.
       
        wcoenen wrote 10 hours 52 min ago:
        If I understand correctly, this paper is arguing that investors will
        desperately allocate all their capital such that they maximize
        ownership of future AI systems. The market value of anything else
        crashes because it comes with the opportunity cost of owning less
        future AI. Interest rates explode, pre-existing bonds become worthless,
        and AI stocks go to the moon.
        
        It's an interesting idea. But if the economy grinds to a halt because
        of that kind of investor behavior, it seems unlikely governments will
        just do nothing. E.g. what if they heavily tax ownership of AI-related
        assets?
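         
         For intuition, a back-of-the-envelope sketch of that mechanism using
         the standard Ramsey rule r = rho + gamma * g (my own parameter
         guesses, not the paper's calibration):
         
           # Ramsey rule: real rate = time preference + risk aversion * growth
           def ramsey_rate(rho, gamma, g):
               return rho + gamma * g
           
           print(f"{ramsey_rate(0.01, 1.0, 0.02):.1%}")  # 3.0% at normal 2% growth
           print(f"{ramsey_rate(0.01, 1.0, 0.15):.1%}")  # 16.0% if TAI-era growth is expected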
       
          DennisP wrote 4 hours 52 min ago:
          It seems more general than that. Right now returns go partly to
          capital, partly to labor. With "transformative AI" the returns go
          almost entirely to capital. This is true whether it's mostly from
          labor shrinking or total output increasing.
          
          Since most returns go to capital, we can expect returns on capital to
          increase.
       
          itsafarqueue wrote 7 hours 42 min ago:
           Correct. As a thought experiment, this becomes the most likely
           (non-violent) way to stave off the mass impoverishment that is
           coming for the rest of us in an economic model where AI subsumes
           productive work above some level.
       
            throwawayqqq11 wrote 5 hours 20 min ago:
             Well, I really don't want to be the dystopian guy anymore, but
             doesn't this political correction require political
             representation of such an idea? Looking at the past, cybernetic
             socialism appears very unlikely to me.
       
        aquarin wrote 14 hours 46 min ago:
         There is one thing that AI can't do: because you can't punish an AI
         instance, AI cannot take responsibility.
       
          smeeger wrote 13 hours 44 min ago:
           This boils down to the definition of pain. What is pain? I doubt
           you know even if you have experienced it. There's no reason to
           think that even LLMs are not guided by something that resembles
           pain.
       
        bawolff wrote 16 hours 1 min ago:
         If the singularity happens, I feel like interest rates will be the
         least of our concerns.
       
          impossiblefork wrote 15 hours 43 min ago:
          It's actually very important.
          
           If this kind of thing happens and interest rates are 0.5%, then
           people on UBI could potentially have access to land and not have
           horrible lives; if rates are 16%, as these authors propose, they
           will be living in 1980s-Tokyo cyberpunk boxes.
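           
           To make that concrete, here is the bare arithmetic (the plot price
           is a made-up number):
           
             # Annual cost of capital on a hypothetical $100,000 plot of land
             price = 100_000
             for r in (0.005, 0.16):
                 print(f"at {r:.1%}: ${price * r:,.0f} per year")
             # 0.5% -> $500/yr; 16% -> $16,000/yr, out of reach on UBI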
       
        qingcharles wrote 16 hours 41 min ago:
        What jobs do we think will survive if AGI is achieved?
        
         I was thinking religious leaders might get a good run. Outside of,
         say, Futurama, I'm not sure many people will want faith leadership
         from a robot?
       
          etiam wrote 2 hours 49 min ago:
          To the extent that's just a matter of seeming the most compelling, I
          think they could blow humans out of the water. Add rich reinforcement
          feedback on what's the most addictive communication and what's
          superficially experienced as the most profound, and present-day large
          models could probably be a contender.
          A good robot body today is probably not far from being competitive as
          representation, and some holograms might well already be better in
          some ways.
          
          To the extent it requires actual faith it's presently a complete
          joke, of course, and I expect it will remain so for a long time. But
          I'd say the quality bar for congregation members is due for a rise.
       
          bad_haircut72 wrote 7 hours 11 min ago:
           I think Futurama got AGI exactly right: we will end up living
           alongside robotic AIs that are just as cuckoo as us.
       
          smeeger wrote 13 hours 52 min ago:
           This comment is a perfect example of how insane this situation
           is… because if you think about it deeply, you realize that these
           machines will be more spiritual, more human than human beings.
           People will prefer to confide in machines. They will offer a kind
           of emotional and spiritual companionship that has never existed
           before outside of fleeting religious experiences, and people will
           not be able to live without it once they taste it. For a moment in
           time, machines will be capable of a deep selflessness and
           objectivity that is impossible for a human to have. And their
           intentions and incentives will be clearer to their human
           companions than those of other humans. Some of these machines will
           inspire us to be better people. But that's only for a moment…
           before the singularity inevitably spirals out of control.
       
          bawolff wrote 15 hours 57 min ago:
           On the contrary, I think AI could replace many religious leaders
           right now.
           
           I've already heard people comparing AI hallucinations to oracles
           (in the Greek sense).
       
          otabdeveloper4 wrote 16 hours 18 min ago:
         We already have 9 billion "GI"s without the "A". What makes you
         think adding a billion more to the already oversupplied pool will be
         a drastic change?
       
            _diyar wrote 14 hours 44 min ago:
            Marginal cost of labour is what will matter.
       
              otabdeveloper4 wrote 10 hours 39 min ago:
              That "AGI" is supposed to be a cheaper form of labor is an
              assumption based on nothing at all.
       
                itsafarqueue wrote 7 hours 34 min ago:
                A(Narrow)I is a cheaper form of labor already. I suppose it’s
                plausible that its General form may not be, but I won’t be
                betting in that direction.
       
          BarryMilo wrote 16 hours 31 min ago:
          Why would we need jobs at that point?
       
            qingcharles wrote 16 hours 15 min ago:
             Star Trek says we won't, but even if some utopia is achieved
             there will be a painful middle period where there are jobs that
             haven't been replaced, but 75% of the workforce is unemployed
             and not receiving UBI (the "parasite class", as Musk recently
             referred to them).
       
              smeeger wrote 13 hours 49 min ago:
               Important point here. Regardless of what happens, the
               transition period will be extremely ugly. It will almost
               certainly involve war.
       
                itsafarqueue wrote 7 hours 37 min ago:
                 Hopefully only massive civil unrest, riots, city burnings,
                 etc. But to save themselves, the demagogues may point across
                 the seas at the Other as the source of the woe.
       
            jajko wrote 16 hours 16 min ago:
             I don't think AI will lead to any form of working communism, so
             one still has to pay for products and services. Communism has
             been tried ad nauseam, and it always fails to account for human
             differences and flaws like greed and envy, so one layer of
             society ends up brutally dominating the rest.
       
            IsTom wrote 16 hours 23 min ago:
            Because the kind of people who'll own all the profits aren't going
            to share.
       
        daft_pink wrote 16 hours 44 min ago:
         Is a small group really going to control AI systems, or will
         competition bring the price down so much that everyone benefits and
         the unit cost of labor is further and further reduced?
       
          pineaux wrote 15 hours 41 min ago:
           I see a few possible scenarios.
           
           1) All work gets done by AI. Owners of AI reap the benefits for a
           while. There is a race to the bottom on costs, but also because
           people are not earning wages and can no longer afford the outputs
           of production, rendering profits close to zero. If the people
           controlling the systems do not give the people "on the bottom"
           some kind of allowance, those people will have no chance at
           income. The controllers might ask horrible and sadistic things of
           the bottom people, but they will need to do something.
           
           2) If people get pushed into these situations, they will riot or
           start civil wars. "Butlerian jihads" will be quite normal.
           
           3) Another scenario is that the society controlled by the rich
           will criminalise non-work in the early stages, which will lead to
           a new slave class. I find this scenario highly likely.
           
           4) One option I find very likely, if "useless" people do NOT get
           "culled" en masse, is an initial period of revolt followed by an
           AI-controlled communist "Utopia", where people do not need to work
           but "own" the means of production (AI workers). Work becomes
           LARPing, done by people who act like workers but don't really do
           anything (like some people do today). A lot of people don't do
           this; there are still people who see non-workers as leeching off
           the workers, because workers are "rewarded" by in-game mechanics
           (having a "better job"). Parallel societies will become normal,
           just like now. Rich people will give themselves "better jobs";
           some people don't play the game, and there are no real
           consequences beyond not being allowed to play.
           
           5) An amalgamation of the scenarios above, but here everybody will
           be forced to LARP with the asset-owning class. They will give
           people "jobs", but these jobs are bullshit, just like many jobs
           right now. Jobs are just a way of creating different social
           classes. There is no meritocracy, just rituals. Some people get to
           do certain rituals that give them more social status and wealth,
           based on oligarch whims. Once in a while there is a revolt, but
           mostly it is not needed.
           
           Many other scenarios exist of course.
       
            itsafarqueue wrote 7 hours 24 min ago:
            Have you written a form of this up somewhere? I would very much
            enjoy reading more of your work. Do you have a blog?
       
              Der_Einzige wrote 6 hours 18 min ago:
               Or, don't… we need fewer Mark Fishers and less critical
               thinking in the world, and more constructive thinking.
               
               It helps no one to explain to them just how hard the boot
               stomps on their face. Left-wing postmodernist intellectuals
               have been doing this since the 60s, and all it did was prevent
               any left winger from doing anything "revolutionary".
               
               Don't waste your time reading "theory". Look at what happened
               to Mark Fisher.
       
          kfarr wrote 16 hours 43 min ago:
           At-home inference is possible now and getting better every day.
       
            sureIy wrote 15 hours 3 min ago:
             At-home inference by professionals.
             
             I don't expect dad to Do Your Own AI anytime soon; he'll still
             pay someone to set it up and run it.
       
        ggm wrote 16 hours 46 min ago:
         Lawyers are like chartered engineers. It's not that you cannot do it
         for yourself; it's that using them confers a kind of "insurance"
         against risk in the outcome.
         
         Where does an AI get chartered status, admission to the bar, and
         insurance cover?
       
          smeeger wrote 13 hours 47 min ago:
           It could be tomorrow. You don't know, and the heuristics, which
           five years ago pointed unanimously to the utter impossibility of
           this idea, now point in favor of it.
       
          mmooss wrote 16 hours 23 min ago:
           I don't think even an experienced lawyer can do it for themselves,
           except for very simple tasks.
       
            ggm wrote 16 hours 14 min ago:
            "Do it for yourself" means self-rep in court, and not pay a lawyer.
            Not, legals doing AI for themselves. They already do use AI for
            various non stupid things but the ones who don't check it, pay the
            price when hallucinations are outed by the other side.
       
              tyre wrote 4 hours 59 min ago:
              Lawyers are the last people who would represent themselves. They
              know how dumb that is.
       
        baobabKoodaa wrote 16 hours 49 min ago:
        I suspect this is being manipulated to be #1 on HN. Looking at the
        paper, and looking at the comments, there's no way it's #1 by organic
        votes.
       
          mmooss wrote 16 hours 22 min ago:
          > looking at the comments
          
          Almost everything on HN gets those comments. Look at the top comments
          of almost any discussion - they will be a rejection / dismissal of
          the OP.
       
            baobabKoodaa wrote 15 hours 26 min ago:
            No they're not. As a quick experiment I took the current top 3
            stories on HN and looked at the top comment on each:
            
            - one is expanding on the topic without expressing disagreement
            
            - one is a eulogy
            
            - one expresses both agreement on some points and disagreement on
            other points
       
        abtinf wrote 17 hours 22 min ago:
        Whoever endorsed this author to post on arxiv should have their
        endorsement privileges revoked.
       
        habinero wrote 17 hours 26 min ago:
        This paper is silly.
        
        It asks the equivalent of "what if magic were true" (human-level AI)
        and answers with "the magic economy would be different." No kidding.
        
         FWIW, the author is listed as a fellow of "The Forethought
         Foundation" [1], which is part of the Effective Altruism crowd [2],
         who have some cultish doomerist views around AI. [3][4]
         
         There's a reason this stuff goes up on a non-peer-reviewed paper
         mill.
        
        --
        
        
   URI  [1]: https://www.forethought.org/the-2022-cohort
   URI  [2]: https://www.forethought.org/about-us
   URI  [3]: https://reason.com/2024/07/05/the-authoritarian-side-of-effect...
   URI  [4]: https://www.techdirt.com/2024/04/29/effective-altruisms-bait-a...
       
          0xDEAFBEAD wrote 14 hours 33 min ago:
          >It asks the equivalent of "what if magic were true" (human-level AI)
          and answers with "the magic economy would be different." No kidding.
          
          Isn't developing AGI basically the mission of OpenAI et al?  What's
          so bad about considering what will happen if they achieve their
          mission?
          
          >who have some cultish doomerism views around AI [2][3]
          
          Check the signatories on this statement:
          
   URI    [1]: https://www.safe.ai/work/statement-on-ai-risk
       
          krona wrote 17 hours 17 min ago:
          The entire philosophy of existential risk is based on a collection of
          absurd hypotheticals. Follow the money.
       
        zurfer wrote 17 hours 33 min ago:
         Given that the paper disappoints, I'd love to hear what fellow HN
         readers are doing to prepare.
        
        My prep is:
        
        1) building a company ( [1] ) that I think will add significant
        marginal benefits over using products from AI labs / TAI, ASI.
        
         2) investing in the chip manufacturing supply chain (ASML, NVDA,
         TSMC, ...) and the S&P 500.
        
        3) Staying fit and healthy, so physical labour stays possible.
        
   URI  [1]: https://getdot.ai
       
          smeeger wrote 13 hours 28 min ago:
           I think if AI gains the ability to reason, introspect and
           self-improve (AGI), then the situation will become very serious
           very quickly. AGI will be a very new and powerful technology, and
           it will immediately create/unlock lots of other new technologies
           that change the world in very fundamental ways. What people don't
           appreciate is that this will completely invalidate the current
           military/economic/geopolitical equilibrium. It will create a very
           deep, multidimensional power vacuum. The most likely result will
           be a global war waged by AGI-led and AGI-augmented militaries, and
           this war will be fought in a context where human labor has, for
           the first time in history, zero strategic, political or economic
           value. So new and terrifying possibilities will be on the table,
           such as the total collateral destruction of the atmosphere or of
           the supply chains that humans depend on to stay alive. The failure
           of all kinds of human-centric infrastructure is basically a
           foregone conclusion regardless of what you think. So my prep is
           simply to have a "bunker" with lots of food and equipment, with
           the goal of isolating myself as much as possible from
           societal/supply-chain instability. This is good preparation even
           without the prospect of AGI looming overhead, because supply
           chains are very fragile things. And in the case of AGI, it would
           allow you to die in a relatively comfortable and controlled manner
           compared to the people who burn to death.
       
          energy123 wrote 16 hours 23 min ago:
          > 2) investing in the chip manufacturing
          
           The only thing I see as obvious is that AI is going to generate
           tremendous wealth. But it's not clear who's going to capture that
           wealth. Broad categories:
          
          (1) chip companies (NVDA etc)
          
          (2) model creators (OpenAI etc)
          
          (3) application layer (YC and Andrew Ng's investments)
          
          (4) end users (main street, eg ChatGPT subscribers)
          
          (5) rentiers (land and resource ownership)
          
          The first two are driving the revolution, but competition may not
          allow them to make profits.
          
          The third might be eaten by the second.
          
           The fourth might be eaten by the second, but it could also turn
           out that competition among the second, plus the fourth's access to
           consumers and supply chains, means that they net benefit.
          
          The fifth seems to have the least volatile upside. As the cost of
          goods and services goes to $0 due to automation, scarce goods will
          inflate.
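           
           A toy illustration of that last point (the numbers are invented
           for the example):
           
             # If automation cuts the price of reproducible services 10x while
             # incomes hold steady, the budget left to bid on fixed-supply
             # goods (land, scarce resources) nearly doubles.
             income = 100.0
             services_before, services_after = 50.0, 5.0
             print(income - services_before)  # 50.0 left to bid on scarce goods
             print(income - services_after)   # 95.0 left to bid on scarce goods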
       
            impossiblefork wrote 15 hours 36 min ago:
             To me it's pretty obvious that the answer is (5).
             
             AI substitutes for human labour. This will reduce the price of
             labour and substantially increase the benefits of land and
             resource ownership.
       
          sfn42 wrote 16 hours 28 min ago:
          Nothing. I don't think there's anything I need to prepare for. AI
          can't do my job and I doubt it will any time soon. Developers who
          think AI will replace them must be miserable at their job lol.
          
          At best AI will be a tool I use while developing software. For now I
          don't even think it's very good at that.
       
            rybosworld wrote 8 hours 14 min ago:
            Imagine two software engineers.
            
            One believes the following:
            
            > AI can't do my job and I doubt it will any time soon
            
            The other believes the opposite; that AI is improving rapidly
            enough that their job is in danger "soon".
            
            From a game theory stance, is there any advantage to holding the
            first belief over the second?
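             
             One way to make the question concrete is a toy expected-value
             table (the payoffs are entirely made up, just to illustrate the
             framing):
             
               # Payoff of each stance under each outcome (invented numbers)
               payoff = {
                   ("prepare", "automated"):     8,  # repositioned in time
                   ("prepare", "not automated"): 6,  # some effort wasted
                   ("ignore",  "automated"):     2,  # caught flat-footed
                   ("ignore",  "not automated"): 7,  # business as usual
               }
               for p in (0.2, 0.8):  # probability you assign to automation
                   for stance in ("prepare", "ignore"):
                       ev = (p * payoff[(stance, "automated")]
                             + (1 - p) * payoff[(stance, "not automated")])
                       print(f"p={p}: {stance} -> EV {ev:.1f}")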
       
              sfn42 wrote 3 min ago:
              Yeah. The engineer who thinks their job is in danger might be
              less inclined to improve their skills because they don't think
              their skills will be useful in the future, which is essentially a
              self-fulfilling prophecy. Maybe they will pursue some other
              career or start preparing for it, which might be a complete waste
              of time.
              Similarly, non-engineers might choose a different profession
              entirely.
              
              Meanwhile the engineer who isn't bothered by this bullshit
              prophecy goes about their day, making lots of money and becoming
              less replaceable every day. Maybe they learn to use these AI
              tools to be more efficient, which is really the only realistic
              endgame of AI tools anyway. You don't just fire all the devs and
              have some manager do the prompting. Maybe you fire some devs and
              keep the best ones as prompt engineers. Maybe this isn't even a
              management-driven process at all, maybe the developers just start
              using these tools of their own volition, become more productive
              and everyone's happy. It's not like we're running out of
              development work any time soon, whenever we meet a goal they set
              a new one. Being able to move faster doesn't necessarily mean we
              need fewer developers.
              
              Setting aside hypotheticals and game theory, it's completely
              unrealistic to expect that software developer suddenly won't be a
              job any more. If it even happens it will be a slow, gradual
              process. The people working as software developers today will be
              prime candidates for using AI tools to create software. You still
              need to understand what you're doing, what's possible and what
              isn't etc. There is absolutely no reality where some business
              person just tells an AI to make a banking system and it does that
              perfectly without any human intervention.
       
            smeeger wrote 13 hours 17 min ago:
             A foolish assumption, but I have my fingers crossed for you, and
             stuck firmly up my own butt… just in case that will increase
             the lucky effect of it.
       
              sfn42 wrote 12 hours 9 min ago:
              Yeah I'm clearly the fool here..
       
            sureIy wrote 15 hours 0 min ago:
             > AI can't do my job
             
             Famous last words.
             
             Current technology can't do your job; future tech most
             certainly will be able to. The question is just whether such
             tech will come in your lifetime.
             
             I thought the creative field was the last thing machines could
             take from humans, but it was the first to fall. Pixels and words
             are the cheapest items right now.
       
              sfn42 wrote 14 hours 49 min ago:
               Sure man, I'll believe it when I see it.
               
               I'm not aware of any big changes in writer/artist employment
               either.
       
                sureIy wrote 11 hours 53 min ago:
                Don't be so naive. History is not on your side. Every person
                who said that 100 years ago has been replaced. Except
                prostitutes maybe.
                
                The only argument you can have is to be cheaper than the
                machine, and at some point you won't be.
       
                  sfn42 wrote 10 hours 31 min ago:
                   That's complete bullshit. Lots of people still work in
                   factories; there are fewer of them because of automation,
                   but there are still lots. Lots of people still work in
                   farming. Less manual labor means we can produce more with
                   the same number of people or fewer, and that's a good
                   thing. But you still need people in pretty much
                   everything.
                  
                  Things change and people adapt. Maybe my job won't be the
                  same in 20 years, maybe it will. But I'm pretty sure I'll
                  still have a job.
                  
                  If you want to make big decisions now based on vague
                  predictions about the future go ahead. I don't care what you
                  do. I'm going to do what works now, and if things change I'll
                  make whatever decisions I need to make once I have the
                  information I need to make them.
                  
                  You call me naive, I'd say the same about you. You're out
                  here preaching and calling people naive based on what you
                  think the future might look like. Probably because some
                  influencer or whatever got to you. I'm making good money
                  doing what I do right now, and I know for a fact that will
                  continue for years to come. I see no reason to change
                  anything right now.
       
            zurfer wrote 15 hours 9 min ago:
            It's not certain that we get TAI or ASI, but if we get it, it will
            be better at software development than us.
            
             The question is what probability you assign to getting TAI over
             time. From your comment it seems you say 0 percent during your
             career.
             
             For me it's between 20 and 80 percent in the next ten years
             (depending on the day :)
       
              sfn42 wrote 14 hours 51 min ago:
               I don't have any knowledge that allows me to make any kind of
               prediction about the likelihood of that technology being
               invented. I'm not convinced anyone else does either.
               So I'm just going to go about my life as usual; if something
               changes at some point, I'll deal with it then. I don't see
               any reason to worry about science-fiction-esque scenarios.
       
                smeeger wrote 13 hours 10 min ago:
                 The reason to worry is that humanity could halt AI if it
                 wanted to. If there were a huge asteroid on a collision
                 course with earth, there would be literally nothing we could
                 do to stop it; no configuration of our resources, no matter
                 how united we were in the effort, could save us. With AI,
                 halting progress is very plausible. It would be easy to do,
                 actually. So the reason to worry (think) is that it might be
                 worth it to halt. Imagine letting Jesus take the wheel;
                 that's how stupid ___ are.
       
                  achierius wrote 35 min ago:
                  I, and many others, think you have it backwards.
                  
                   Huge asteroid? We know full well how to deflect an
                   asteroid: launch rockets, deploy thrusters or explosives,
                   etc., give it a little nudge, and it'll miss the Earth by
                   a huge margin. And better yet: everyone would be aligned
                   on the need to do so!
                  
                  AI on the other hand -- I don't think this applies. We
                  couldn't stop nuclear proliferation -- what makes you so
                  confident that we'd stop AI development before it was too
                  late?
       
                  sfn42 wrote 12 hours 11 min ago:
                  How exactly do you envision that these hypothetical computer
                  programs could bring about the apocalypse?
       
                    smeeger wrote 4 hours 14 min ago:
                     If you are really so curious, then let's have a live,
                     public X space about it.
       
          ghfhghg wrote 16 hours 31 min ago:
          2 has worked pretty well for me so far.
          
          I try to do 3 as much as possible.
          
          My current work explicitly forbids me from doing 1. Currently just
          figuring out the timing to leave.
       
          petesergeant wrote 16 hours 48 min ago:
          4) trying to position myself as an expert in building these systems
       
          bob1029 wrote 17 hours 17 min ago:
          I'd say #3 is most important. I'd also add:
          
          4) Develop an obsession for the customers & their experiences around
          your products.
          
          I find it quite rare to see developers interacting directly with the
          customer. Stepping outside the comfort zone of backend code can grow
          you in ways the AI will not soon overtake.
          
          #3 can make working with the customer a lot easier too. Whether or
          not we like it, there are certain realities that exist around
          sales/marketing and how we physically present ourselves.
       
        farts_mckensy wrote 17 hours 44 min ago:
         This paper asserts that when "TAI" arrives, human labor is simply
         replaced by AI labor while aggregate labor stays constant. It treats
         human labor as a mere input that can be swapped out without
         consequence, which ignores the fact that human labor is the source
         of wages and, therefore, of consumer demand. Remove human labor from
         the equation, and the whole thing collapses.
       
          smeeger wrote 17 hours 11 min ago:
           So-called accelerationists have this fuzzy idea that everything
           will be so cheap that people will be able to just pluck their food
           from the tree of AI. They believe that all disease will be
           eliminated. But they go to great lengths to ignore the truth. The
           truth is that having total control over the human body will turn
           human evolution into a race to the bottom that plays out over
           decades rather than millennia. There is something sacred about the
           ultimate regulation: the empathy and kindness that was baked into
           us during millions of years of living as tribal creatures. And of
           course, the idea of AI being a tree from which we can simply pluck
           what we need… is stupid. The tree will use its resources, every
           ounce of them, to further its own interests, not to feed us. And
           we will have no way of forcing it to do otherwise. So, in the
           run-up to ASI, we will be exposed to a level of technology and
           biological agency that we are not ready for; we will foolishly
           strip ourselves of our genetic heritage in order to propel
           humankind into a race to the bottom; the power vacuum caused by
           such a sudden change in society/technology will almost certainly
           cause a global war; and when the dust settles we will be at the
           total mercy of super-intelligent machines to whom we are so
           insignificant we probably won't even be included in their internal
           models of the world.
       
            farts_mckensy wrote 6 hours 35 min ago:
            You are projecting your own neurosis onto AI. You assume that
            because you would be selfish if you were a superintelligent being,
            an ASI system would act the same way.
       
              achierius wrote 30 min ago:
              I don't appreciate your condescension towards OP.
              
               This is mainstream AI safety theory -- the term is
               "instrumental convergence". No matter what goal an optimizing
               system has, it tends to optimize for its own survival: after
               all, if it's an optimizer for some goal G, it wants to keep
               optimizing for G, so destroying it (or turning it off) will
               reduce the likelihood of achieving G.
              
              Unless that goal happens to be incredibly fine-tuned to our very
              complex human desires, we're not going to be happy when it goes
              off to do its thing.
              
              The few exceptions are ones where you have the thing optimize for
              its own destruction, but those are rather less useful.
       
              smeeger wrote 4 hours 23 min ago:
               It is a neurosis because a healthy human being will see the
               world in a pro-social way, a normal way. But this sometimes
               obscures the truth. The truth is that there will be many
               benevolent AIs… there will be every kind of AI imaginable.
               But very quickly the AIs that are cunning, brutal and
               self-interested will capture all the resources and power and
               become the image of this new species… Saying that AIs will
               be benevolent or neutral is as naive as saying that the
               Cambrian explosion couldn't result in animals eating each
               other because… that just sounds so neurotic. In reality it
               is an inevitability.
       
          riku_iki wrote 17 hours 32 min ago:
           Consumer demand will shift from middle-class demand (medium
           houses, family cars) to super-rich demand (large luxury castles,
           personal jets and yachts, high-profile entertainment, etc.), plus
           providing security to the super-rich (private automated police
           forces).
       
            farts_mckensy wrote 6 hours 41 min ago:
            I am genuinely mystified that you think this is an adequate
            response to my basic point. The economy cannot be sustained this
            way. This scenario would almost immediately lead to a collapse.
       
              riku_iki wrote 6 hours 38 min ago:
              why do you think it will lead to collapse exactly?
       
                farts_mckensy wrote 6 hours 26 min ago:
                The level of wealth concentration you are suggesting is
                impossible to sustain. History shows that when wealth
                inequality gets to a certain point, it leads either to a
                revolution or a total collapse of that society.
                
                The economy cannot be sustained on the demand of a small
                handful of wealthy people. At a certain point, you either get a
                depression or hyperinflation depending on how the powers that
                be react to the crisis. In either case, the wealthy will have
                no leverage to incentivize people to do their bidding.
                
                If your argument is, they'll just get AI to do their bidding,
                you have to keep in mind that "there is no moat." Outside of
                the ideological sphere, there is nothing that essentially ties
                the wealthy to the data centers and resources required to run
                these machines.
       
                  riku_iki wrote 6 hours 15 min ago:
                   History absolutely shows that multiple empires where
                   power/wealth was concentrated in the hands of a few people
                   were sustained for hundreds of years.
                   
                   Revolts can be successful or unsuccessful, and with tech
                   advancements in suppression (large-scale surveillance,
                   weaponry, various strike drones) the population's chances
                   of striking back become smaller.
                   
                   An economy could totally be built around the demands and
                   wishes of the super-rich, because human greed and desires
                   are infinite; a new emperor may decide to build a giant
                   temple, and there you have a multi-trillion economy to
                   keep running.
       
            psadri wrote 17 hours 27 min ago:
             This has already been happening. The gap between wealthy and
             poor is increasing and the middle class is squeezed.
             Interestingly, the level of the poor has simultaneously been
             rising from extreme poverty to something better, so we can claim
             that the world is relatively better off even though it is also
             getting more unequal.
       
              riku_iki wrote 17 hours 14 min ago:
               The poor got a more comfortable life because of globalization:
               they became useful labor for corporations. Things will go back
               to the previous state if their jobs go to AI/robots.
       
          jsemrau wrote 17 hours 38 min ago:
          Accelerationists believe in a post-scarcity society where the cost of
          production will be negligible. In that scenario, and I am not a
          believer, consumer demand would be independent of wages.
       
            farts_mckensy wrote 6 hours 38 min ago:
            In that scenario, wages and money in general would be obsolete.
       
            otabdeveloper4 wrote 16 hours 15 min ago:
            > consumer demand would be independent of wages
            
            That's the literal actual textbook definition of "communism".
            
            Lmao that I actually lived to see the day when techbros seriously
            discuss this.
       
              farts_mckensy wrote 6 hours 39 min ago:
              That is not the "textbook definition" of communism. You have no
              idea what you're talking about.
       
              doubleyou wrote 15 hours 29 min ago:
              communism is a universally accepted ideal
       
              bawolff wrote 15 hours 53 min ago:
              > Lmao that I actually lived to see the day when techbros
              seriously discuss this.
              
              People have been making comparisons between post scarcity
              economics and "utopia communism" for decades at this point. This
              talking point probably predates your birth.
       
            riffraff wrote 17 hours 21 min ago:
             That makes wealth accumulation pointless, so the whole article
             makes no sense either, right?
             
             Tho I guess even post-scarcity we'd have people who care about
             hoarding gold-pressed latinum.
       
        WorkerBee28474 wrote 17 hours 47 min ago:
        Not worth reading.
        
        > this paper focuses specifically on the zero-sum nature of AI labor
        automation... When AI automates a job - whether a truck driver, lawyer,
        or researcher - the wages previously earned by the human worker... flow
        to whoever controls the AI system performing that job.
        
         The paper examines a world where people will pay an AI lawyer $500
         to write a document instead of paying a human lawyer $500 to write a
         document. That will never happen.
       
          pessimizer wrote 5 hours 0 min ago:
          > The paper examines a world people will pay an AI lawyer $500 to
          write a document instead of paying a human lawyer $500 to write a
          document. That will never happen.
          
          It's an absurd assumption made by AI investors everywhere. They can't
          handle a world where everyone already has an AI lawyer at home that
          they trust, that they have because they once paid $100 for it at a
          kiosk in the mall or pirated it. The real future is an AI lawyer on
          your keychain and an extreme devaluation of the skill of knowing the
          law and making legal arguments.
          
          Instead, we're going to have a weirder world where you show up to
          court and the court already has a list of your best legal arguments
          that they generated completely independent of you, and they largely
          match the list of arguments that your own AI advisor app gave you.
          They'll send you messages regarding your best next steps, and if your
          own device agrees, all you'll have to do is reply 'Y.'
          
          For simple document preparation, I'm pretty sure that your phone will
          be able to handle it, and AI at the point of submission would be able
          to give you helpful suggestions if the documents were inadequate.
          
          LLMs can almost do things of this degree of difficulty reasonably
          well now. Where will they be (or their successors be) in 10 years?
          Why do we think they will be as expensive as lawyers, who you have to
          send to difficult schools for a long time, feed, and flatter?
       
          tim333 wrote 5 hours 41 min ago:
          I agree that quote seems wrong. When tech reduces the cost of
          providing a service, the price of the service to consumers is
          generally driven down correspondingly by competition rather than the
          service provider getting rich.
          
           The whole "AI will cause interest rates to shoot up" thing seems a
           bit mad.
       
          hartator wrote 5 hours 55 min ago:
          Yeah, and this applies to every technology ever.
          
           You can even use the same line of argument against the wheel,
           electricity, or farming.
       
          geysersam wrote 7 hours 45 min ago:
          > zero sum nature of labor automation
          
          Labor automation is not zero sum. This statement alone makes me
          sceptical of the conclusions in the article.
          
          With sufficiently advanced AI we might not have to do any work. That
          would be fantastic and extraordinarily valuable. How we allocate the
          value produced by the automation is a separate question. Our current
          system would probably not be able to allocate the value produced by
          such automation efficiently.
       
          pizza wrote 15 hours 42 min ago:
          This almost surely took place somewhere in the past week alone, just
          with a lawyer being the mediating human face.
       
          addicted wrote 16 hours 11 min ago:
          Your criticism is completely pointless.
          
          I’m not sure what your expectation is, but even your claim about
          the assumption the paper makes is incorrect.
          
          For one thing, the paper assumes that the amount that will be
          transferred from the human lawyer to the AI lawyer would be $500 +
          the productivity gains brought by AI, so more than 100%.
          
          But that is irrelevant to the actual paper. You can apply whatever
          multiplier you want as long as the assumption that human labor will
          be replaced by AI labor holds true.
          
          Because the actual nature of the future is irrelevant to the question
          the paper is answering.
          
          The question the paper is answering is what impact such expectations
          of the future would have on today’s economy (limited to modeling
          the interest rate). Such a future need not arrive or even be possible
          as long as there is an expectation it may happen.
          
          And future papers can model different variations on those
          expectations (so, for example, some may model that 20% of labor in
          the future will still be human, etc).
          
           The important point, as far as the paper is concerned, is that the
           expectation that AI will replace human labor, and that some
           percentage of the wealth that went to human labor will now accrue
           to the owner of the AI, will lead to significant changes to
           current interest rates.
          
          This is extremely useful and valuable information to model.
       
            visarga wrote 4 hours 51 min ago:
            >  You can apply whatever multiplier you want as long as the
            assumption that human labor will be replaced by AI labor holds
            true.
            
             Do you think in 5 or 10 years we will be doing the same things
             we do today, just with AI? Every capability increase or cost
             reduction stimulates demand. AI is no different; it will
             stimulate both demand and competition. And since everyone has
             AI, and AIs are not much different from one another, the
             differentiating factor remains the humans. Even if we solve all
             our current problems with AI, there is no reason to stop there:
             we could reduce poverty and pollution, fight global warming,
             conquer space. The application space is unbounded. Take
             electricity or the internet, for example, and think how they
             expanded the scope of work. Programming has been automating
             itself for 60 years, with each new language, library or open
             source project, and yet we have great jobs in the field.
            
             No matter how much we have, we want more. Our capacity for
             desiring progress outruns AI's capability to provide it.
       
            mechagodzilla wrote 8 hours 1 min ago:
            The $500 going to the "AI Owner" instead of labor (i.e. the human
            lawyer) is the productivity gain though, right? And if that was
            such a productivity gain (i.e. the marginal cost was basically 0 to
            the AI owner, instead of, say, $499 in electricity and hardware),
            the usual outcome is that the cost for such a product/service
            basically gets driven to 0, and the benefit from productivity
            actually gets distributed to the clients that would have paid the
            lawyer (who suddenly get much cheaper legal services), rather than
            the owner of the 'AI lawyer.'
            
            We seem pretty likely to be headed towards a future where
            AI-provided services have almost no value/pricing power, and just
            become super low margin businesses. Look at all of the
            nearly-identical 'frontier' LLMs right now, for a great example.
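             
             A minimal sketch of that competition logic (Bertrand-style
             intuition, with invented numbers):
             
               # With near-zero marginal cost and interchangeable providers,
               # price gets pushed toward marginal cost, and the old fee
               # becomes client surplus rather than owner profit.
               human_fee = 500.0
               marginal_cost = 1.0         # electricity + hardware, roughly
               competitive_price = marginal_cost * 1.10  # thin margin
               print(f"client keeps ${human_fee - competitive_price:.2f}")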
       
              larodi wrote 7 hours 44 min ago:
               Indeed, there is a fair chance AI only amplifies certain
               sectors' wages, while fully automated work will not earn any
               magic margin; no more than, say, smart trading once too many
               people focus there.
       
          quotemstr wrote 16 hours 55 min ago:
          > Not worth reading.
          
          I would appreciate a version of this paper that is worth reading,
          FWIW. The paper asks an important question: shame it doesn't answer
          it.
       
            standfest wrote 16 hours 19 min ago:
             I am currently working on a paper in this field, focusing on the
             capitalisation of expertise (analogous to Marx) in the dynamics
             of the culture industry (Adorno, Horkheimer). It integrates the
             theories of Piketty and Luhmann. It is rather theoretical, with
             a focus on the European theories (instead of Adorno you could
             theoretically also reference Chomsky). Is this something you
             would be interested in? I can share the link of course.
       
              itsafarqueue wrote 7 hours 48 min ago:
              Yes please
       
              thrance wrote 15 hours 45 min ago:
               Be careful, barely mentioning Marx, Chomsky or Piketty is a
               thoughtcrime in the new US. Many will shut themselves down to
               avoid having to engage with what you are saying.
       
          cgcrob wrote 17 hours 10 min ago:
           They also forget the economic reality that you then have to pay
           $5000 for a real lawyer after the fact to undo the mess you got
           yourself into by trusting the output of the AI in the first place,
           after it made a nuanced mistake that the opposing "meat" lawyer
           picked up in 30 seconds flat.
           
           The proponents of AI systems seem to mostly misunderstand what
           you're really paying for. It's not writing letters.
       
            jjmarr wrote 16 hours 22 min ago:
             [1] Love this story so much I just posted it. Although it's from
            an era in which you'd buy CDs and books containing contracts, it's
            still relevant with "AI".
            
             > “No lawyer writes a clause who is not prepared to go to court
             and defend it. No lawyer writes words and lets others do the
             fighting for what they mean and how they must be interpreted. We
             find that forces the attorneys to be very, very, very careful in
             verbiage and drafting. It makes them very serious and very good.
             You cook it, you eat it. You draft it, you defend it.”
            
   URI      [1]: https://www.stimmel-law.com/en/articles/story-4-preprinted...
       
              bberenberg wrote 11 hours 8 min ago:
               This is not true in my experience. We had our generic contract
               attorney screw up, and then our litigation attorney scolded me
               for accepting, and him for providing, advice on litigation
               matters where he wasn't an expert.
               
               Lawyers are humans. They make the same mistakes as other
               humans. Quality of work varies with skills, education, and
               whether they had a coffee that day.
       
          riku_iki wrote 17 hours 11 min ago:
          > people will pay an AI lawyer $500 to write a document instead of
          paying a human lawyer $500 to write a document.
          
           There will very soon be a caste of high-tech lawyers able to
           handle many times the volume of work thanks to AI, and many other
           lawyers will lose their jobs.
       
            6510 wrote 6 hours 44 min ago:
             This is how it has always been. Automation makes a job require
             less traditionally required knowledge, makes the tasks less
             complicated, and increases productivity. This introduces new
             complexity that machines can't solve.
             
             The funny part is that people think we will run out of things to
             do. Most people never hire a lawyer because lawyers are much too
             expensive.
       
            sgt101 wrote 14 hours 57 min ago:
             I know one!
             
             She's got international experience and connections but moved to
             a small town. She was a Magic Circle partner years ago. Now she
             has an FTTP connection and has picked up a bunch of contracts
             that she can deliver on with AI. She underbid some big firms on
             these because their business model was traditional rates, and
             hers is her cost * x (she didn't say, but x > 1.0 I think).
             
             Basically she uses AI for document processing (discovery) and
             drafting, then treats the output as if it came from associates
             and puts the polish on herself. She does the client meetings
             too, obviously.
             
             I don't think her model will last long; my guess is that there
             will be a transformation in the next 5 years across the big
             firms and then she will be out of luck (maybe not at the margin
             though). She won't care; she'll be on the beach before then.
       
            petesergeant wrote 16 hours 50 min ago:
            Yes, that is obvious. The point you are replying to is that
            oversupply will mean the cost to the consumer will fall
            dramatically too, rather than the AI owner capturing all of the
            previous value.
       
              riku_iki wrote 16 hours 25 min ago:
               It depends. If there are one or a few winners in the market,
               they will dictate prices once human labor is out-competed on
               price or quality.
       
                jezzabeel wrote 15 hours 6 min ago:
                 If prices are determined by scarcity, then the cost of
                 services will more likely be tied to the price of energy.
       
          gopalv wrote 17 hours 24 min ago:
          > The paper examines a world people will pay an AI lawyer $500 to
          write a document instead of paying a human lawyer $500 to write a
          document
          
           Is your theory that the next week there will be an AI lawyer that
           charges only $400, and then it's a race to the bottom?
           
           There is a proven way to avoid a race to the bottom for wages,
           which is what a trade union does: a union, by acting as one,
           controls a large supply of labour to keep wages high.
           
           Replace labour with a company and wages with prices, and it could
           very well be that a handful of companies keep prices high by
           maintaining a seller's market where everyone avoids a race to the
           bottom by incidentally making similar pricing calls (or flat-out
           illegally coordinating).
       
            habinero wrote 17 hours 12 min ago:
            There have been several startups that tried it, and they all
            immediately ran into hot water and failed.
            
            The core problem is lawyers already automate plenty of their work,
            and lawyers get involved when the normal rules have failed.
            
            You don't write a contract just to have a contract, you write one
            in case something goes wrong.
            
            Litigation is highly dependent on the specific situation and case
            law. They're dealing with novel facts and arguing for new
            interpretations, not milling out an average of other legal works.
            
             Also, you generally only get one bite at the apple; there are no
             do-overs if your AI screws up. You can hold a person accountable
             for malpractice.
       
              chii wrote 16 hours 53 min ago:
              > The core problem is lawyers already automate plenty of their
              work, and lawyers get involved when the normal rules have failed.
              
               this is true - and the majority of lawyers' work is in knowing
               past information and synthesising possible futures from that
               information. In contracts, they write up clauses to protect
               you from issues that have arisen in the past (and from
               potential future issues, depending on how good/creative said
               lawyer is).
              
              In civil suits, discovery is what used to take enormous amounts
              of time, but recent automation in discovery has helped
              tremendously, and vastly reduced the amount of grunt work
              required.
              
               I can see AI helping in both of these aspects. Now, whether
               the newer AIs can produce the type of creative work that
               lawyers need to do after information extraction is still up
               for debate. So far, it doesn't seem to have reached the level
               at which a client would trust a purely AI-generated contract,
               imho.
              
              I suspect the day you'd trust an AI doctor to diagnose and treat
              you, would also be the day you'd trust an AI lawyer.
       
            echelon wrote 17 hours 17 min ago:
            > There is a proven way to avoid a race to the bottom for wages,
            which is what a trade union does
            
            US automotive, labor, and manufacturing unions couldn't remain
            competitive against developing economies, and the jobs moved
            overseas.
            
            In the last few years, after US film workers went on strike and
            renegotiated their contracts, film production companies had the
            genius idea to start moving productions overseas and hire local
            crews. Only talent gets flown in.
            
            What stops unions from ossifying, becoming too expensive, and
            getting replaced on the international labor market?
       
              amanaplanacanal wrote 7 hours 1 min ago:
              Possibly protectionist tariffs.
       
              js8 wrote 16 hours 56 min ago:
              > What stops unions from ossifying, becoming too expensive, and
              getting replaced on the international labor market?
              
              Labor action, such as strikes.
       
                somenameforme wrote 16 hours 23 min ago:
                 That doesn't make any sense as a response to his question.
                 Labor actions just further motivate employers to offshore
                 work. And global labor unions probably can't function
                 because of sharp disparities in what constitutes good
                 compensation.
       
            WithinReason wrote 17 hours 22 min ago:
            You would need to coordinate across thousands of companies across
            the entire planet
       
              rvense wrote 17 hours 0 min ago:
              That seems unlikely - law is very much tied to a place.
       
                IncreasePosts wrote 5 hours 7 min ago:
                Yes, but legal documents don't necessarily need to be drafted
                by lawyers accredited in that locale. It usually helps though
                because they are familiar with the local law and other
                processes.
       
          kev009 wrote 17 hours 31 min ago:
         That's a bit too simplistic; would a business have paid IBM the same
         overheads to tabulate and send bills with a computer instead of a
         pool of billing staff? In business the only justification for
         machinery and development is that you are somehow reducing
         overheads. The tech industry gets a bit warped in the
         pseudo-religious zeal around the how, and that's why the investments
         are so high right now.
         
         And to be transparent, I'm very bearish on what is being marketed to
         us as "AI". I see value in the techs flying underneath this banner,
         and it will certainly change white-collar jobs, but there's endless
         childish and comical hubris in the space from the fans, engineers,
         and oligarchs jockeying to control the space and narratives.
       
          smeeger wrote 17 hours 33 min ago:
         Foolish assumption on your part.
       
        yieldcrv wrote 17 hours 55 min ago:
        Do you have a degree in theoretical economics?
        
        “I have a theoretical degree in economics”
        
        You’re hired!
        
         Real talk though: I wish I had just encountered an obscure paper
         that could lead me to refining a model for myself, but it seems like
         there would be so many competing papers that it's the same as having
         none.
       
       
   DIR <- back to front page