_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Boring is good
       
       
        mallowdram wrote 1 hour 13 min ago:
        The computer is not boring per se; it's neutral when unplugged. The
        idea that it accelerates binary computation primarily for prediction,
        extracting value from the arbitrary, is not only exciting, it's
        illusory.
        
        The writer fails to grasp what rabbit hole we've gone down since the
        70s/90s, when we began applying the principles of prediction to
        computation and then horizontalized them via the web. This was the most
        exciting time because it added a vast illusory value to the arbitrary;
        it was a time of massive piracy that posed as corporatocracy.
        
        Once this prediction became automated by AI, yes, now the piracy
        becomes boring, and in turn reveals what was going on all along.
       
        Aldipower wrote 3 hours 33 min ago:
        This is a good article, but
        
        > We keep asking them to do “intelligent things” and find out a)
        they really aren’t that good at it, and b) replacing that human task
        is far more complex than we originally thought
        
        I never thought that. From the beginning, these were marketing
        statements that were easy to see through.
       
        Hendrikto wrote 5 hours 36 min ago:
        > Whenever there is hype, we shuffled into the easy path, forcing the
        tech into the product without understanding its weaknesses. We are more
        worried about being left behind than actually doing something of value.
       
        otikik wrote 5 hours 38 min ago:
        > We’re here to solve problems, not look cool.
        
        Shots fired
       
        liampulles wrote 6 hours 1 min ago:
        LLMs are useful in contexts where fuzzy, hazily accurate output is
        acceptable. A developer trying to hack together some solution through
        trial and error, for example.
        They are less useful in contexts where accuracy is expected or legally
        required. An audit log, for example.
        
        Many businesses have misjudged where that distinction lies; some don't
        even recognise a distinction at all. This will improve over time.
       
        DeathArrow wrote 9 hours 35 min ago:
        The investment fund that acquired the company that acquired our company
        requests that all companies it owns go big on cloud and AI, no matter
        what, because this raises valuation and they can sell them for bigger
        profits.
        
        I have nothing against cloud or AI per se, but I still believe in the
        right tool for the right job and in not doing things just for the sake
        of it. While raising valuation is a good thing, raising costs, delaying
        more useful features and adding complexity should also be taken into
        account.
       
          sfn42 wrote 15 min ago:
          Easy, just quit. Go work for a serious company instead of these
          grifters.
       
        mindwok wrote 10 hours 48 min ago:
        I feel that with LLMs and AI, people are furiously trying to argue the
        reality they desire into existence. I've never read more articles
        predicting the future than on this topic (I am guilty of it, too.)
       
          qustrolabe wrote 6 hours 9 min ago:
          Predicting AGI and all those "complete replacement of jobs" claims
          are boring examples of this. What's more annoying to me are the
          people arguing into existence the reality they want, where AI is
          completely useless and fake, can't do anything, makes you stupid,
          and ten more of the loudest clickbait titles of the past year.
       
          fuzztester wrote 9 hours 10 min ago:
          >I feel that with LLMs and AI, people are furiously trying to argue
          the reality they desire into existence.
          
          The emperor's new clothes ...
       
        nine_k wrote 10 hours 53 min ago:
        The author of "Choose boring technology" regretted the choice of the
        word "boring" [1].
        
        Anyway, boring is bad. Boring is what spends your attention on
        irrelevant things. Cobol's syntax is boring in a bad way. Go's error
        handling is boring in a bad way. Manually clicking through screens
        again and again because you failed to write UI tests is boring in a bad
        way.
        
        What could be "boring in a good way" is something that gets things done
        and gets out of your way. Things like HTTPS, or S3, or your keyboard
        once you have learned touch typing, are "boring in a good way". They
        have no concealed surprises, are well-tested in practice, and do what
        they say on the tin, every time.
        
        New and shiny things can be "boring in the good way", e.g. uv [2]. Old
        and established things can be full of (nasty) surprises, and, in this
        regard, the opposite of boring, e.g. C++.
        
        [1]
        
   URI  [1]: https://boringtechnology.club/#30
   URI  [2]: https://github.com/astral-sh/uv
       
          tpoacher wrote 6 hours 0 min ago:
          > The author of "Choose boring technology" regretted the choice of
          the word "boring"
          
          Well, yes, but only in the sense that people kept giving him beef
          about how boring is a bad word in their mind, not because it was a
          bad word for this context per se. Which is somewhat ironic given your
          comment!
          
          I suppose what you're getting at is the difference between boring,
          and "boooooriiiiiing".
       
            adastra22 wrote 4 hours 55 min ago:
            If “boring” conveys the wrong, and at times even the opposite,
            meaning in listeners' ears, then “boring” was a bad word choice,
            even in context.
       
              worthless-trash wrote 3 hours 49 min ago:
              There is no good choice of words, because you can not control how
              people interpret language.
              
              Everything at some point will be interpreted incorrectly.
       
          buster wrote 7 hours 56 min ago:
          What you describe is the difference between tedious and simple.
          
          Boring is good. I don't want to be excited by technology, I want to
          be bored in the sense that it's simple and out of my way.
          
          Same for KISS. I tend to tell people to not only keep things simple,
          but boring even. Some new code I need to read and fix or extend? I
          want to be bored. Bored means it's obvious and simple.
          
          The difference? There are many complex libraries. By definition they
          are not simple technology.
          
          For example a crypto library. Probably one of the most complex tasks.
          I would consider it a good library if it's boring to use/extend/read.
       
          prmph wrote 8 hours 38 min ago:
          Why conflate boring with old? "Boring" in this context means: proven
          and stable. Yes, that would take some time to become apparent, but
          the converse is not necessarily the case: a tech does not become
          "boring" in a good way simply because it is old.
          
          All this was my understanding before, so I'm not sure why you think
          "boring" was meant to be equivalent to "old".
       
            ozim wrote 8 hours 22 min ago:
            The problem is that the official definition is „not interesting,
            tedious”, with the synonym „dull”.
            
            As much as I do get the idea, I can see how promoting tools that
            are tedious to use and dull is something that really misses the
            mark.
            
            Well known and mature tools are still sharp and lots of them are
            not tedious to use.
       
              Ekaros wrote 7 hours 49 min ago:
              What is the opposite of dull? Exciting?
              
              I do not want my browser to be exciting. I do not want it to
              change every week. Say, moving buttons to different places.
              Changing how the address bar operates. Maybe trying new shortcut
              keys...
              
              Same goes for most useful software. I actually do want it to be
              dull: to do its job, not get in the way, and not make my day more
              interesting by forcing me to fight against it.
       
                hbarka wrote 13 min ago:
                Boring is the classic turn signal stalk on my car’s steering
                column. Exciting is when Elon Musk decided to make it a push
                button. He did not consider that British drivers would have to
                do hand gymnastics in a roundabout. A few months later there
                was a third-party Chinese-made turn signal stalk add-on. And of
                course the boring turn signal indicator was brought back by
                Tesla. We could also talk about the wing doors on the Model X
                or the exciting /s Cybertruck.
       
                ozim wrote 6 hours 6 min ago:
                Well known and mature tools are still sharp and lots of them
                are not tedious to use..
                
                I picked "sharp" not "exciting".
                
                A dull knife doesn't do its job; you want tools that do the
                job efficiently. "Boring", it seems, was interpreted as picking
                tools that don't do the job efficiently. That is most likely
                why the creator of the original idea found "Choose boring
                technology" misunderstood.
       
          kmarc wrote 8 hours 46 min ago:
          What you just described fits my definition of boring, which is some
          function of (time passed, individual at keyboard)
          
          Cobol was (and for some, still is) exciting at first, but _becomes_
          boring once you master it, and the ecosystem evolves to fix or work
          around its shortcomings. Believe it or not, even UX/UI testers can
          deal with and find happiness in clicking through UIs for the
          ten-thousandth time (sure, the last time I saw such a tester was
          around 2010).
          
          This doesn't mean the technology itself becomes bad or stays good. It
          just means the understanding (and usage patterns) solidifies, so it
          becomes less exciting, hence: "boring".
          
          But you can't sell a book with the title "Choose well-established
          technology". Because people would be like, no sht, Sherlock, I don't
          need a book to know that.
       
        fnord77 wrote 12 hours 9 min ago:
        > He uses the example of the dynamo, an old-fashioned term for a
        powerful electric motor.
        
        um, a dynamo is a generator: it takes mechanical energy and turns it
        into electricity.
       
          adastra22 wrote 4 hours 39 min ago:
          While technically correct (the best kind of correct), they are or can
          be literally the same mechanism. Apply a mechanical force and you
          will create DC current. Apply a voltage and you will generate motion.
       
        sothatsit wrote 14 hours 2 min ago:
        I tend to think that the reason people over-index on complex use-cases
        for LLMs is actually reliability, not a lack of interest in boring
        projects.
        
        If an LLM can solve a complex problem 50% of the time, then that is
        still very valuable. But if you are writing a system of small LLMs
        doing small tasks, then even 1% error rates can compound into highly
        unreliable systems when stacked together.
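
        To put rough numbers on that compounding (a minimal sketch, assuming
        each step fails independently at a 1% rate):

            # End-to-end reliability of a pipeline of independent steps,
            # each of which succeeds with probability p.
            def chain_reliability(p: float, steps: int) -> float:
                return p ** steps

            print(chain_reliability(0.99, 20))    # ~0.82
            print(chain_reliability(0.99, 100))   # ~0.37

        Twenty 99%-reliable steps chained together already fail almost one
        time in five.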
        
        The cost of LLMs occasionally giving you wrong answers is worth it for
        answers to harder tasks, in a way that it is not worth it for smaller
        tasks. For those smaller tasks, usually you can get much closer to 100%
        reliability, and more importantly much greater predictability, with
        hand-engineered code. This makes it much harder to find areas where
        small LLMs can add value for small boring tasks. Better auto-complete
        is the only real-world example I can think of.
       
          raincole wrote 8 hours 1 min ago:
          Yeah. Is it even proven that LLMs don't hallucinate for smaller
          tasks? The author seems to imply that. I fail to see how it could be
          true.
       
            adastra22 wrote 4 hours 50 min ago:
            No? That is trivially not the case. Ask an LLM something outside
            its training data and it will hallucinate the answer. How can it do
            anything else? Maybe its hallucination ends up being correct, but
            not all of the time.
       
          a_bonobo wrote 13 hours 19 min ago:
          >If an LLM can solve a complex problem 50% of the time, then that is
          still very valuable
          
          I'd adjust that statement - If an LLM can solve a complex problem 50%
          of the time and I can evaluate correctness of the output, then that
          is still very valuable. I've seen too many people blindly pass on LLM
          output - for a short while it was a trend in the scientific
          literature to have LLMs evaluate output of other LLMs? Who knows how
          correct that was. Luckily that has ended.
       
            empiko wrote 3 hours 3 min ago:
            > Who knows how correct that was. Luckily that has ended.
            
            What do you mean it ended? I still see tons of NLP papers with this
            methodology.
       
            adastra22 wrote 4 hours 52 min ago:
            > for a short while it was a trend in the scientific literature to
            have LLMs evaluate output of other LLMs? Who knows how correct that
            was.
            
            Highly reliable. So much so that this is basically how modern LLM
            systems work internally. Also, speaking from personal experience in
            the projects I work on, it is the chief way to counteract
            hallucination, poisoned context windows, and scaling beyond the
            interaction limit.
            
            LLMs evaluating LLM output works surprisingly well.
       
            danpalmer wrote 10 hours 15 min ago:
            > I've seen too many people blindly pass on LLM output
            
            I misread this the first time and realised both interpretations are
            happening. I've seen people copy-paste out of ChatGPT without
            reading, and I've seen people "pass on" or reject content simply
            because it has been AI generated.
       
            sothatsit wrote 13 hours 11 min ago:
            True! This is what has me more excited about LLMs producing Lean
            proofs than written maths proofs. The Lean proofs can be checked
            mechanically for correctness, whereas the maths proofs require
            experts to verify them and look for mistakes.
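
            As a toy illustration of what "machine-checked" means here (a
            minimal sketch, assuming Lean 4):

                -- Lean verifies this proof mechanically; accepting it does
                -- not depend on a human reviewer reading the argument.
                theorem my_add_comm (a b : Nat) : a + b = b + a :=
                  Nat.add_comm a b

            If the statement or the proof were wrong, the file simply would
            not compile.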
            
            That said, I do think there are lots of problems where verification
            is easier than doing the task itself, especially in computer
            science. Actually, I think it is easier to list the tasks that
            aren't easier to verify than to do from scratch. Security is one
            major one.
       
              hansvm wrote 11 hours 37 min ago:
              Even there it's risky. LLMs are good at subtly misstating the
              problem, so it's relatively easy to make them prove things which
              look like the thing you wanted but which are mostly unrelated.
       
                sothatsit wrote 8 hours 16 min ago:
                Yes, Lean only lets you be confident in the contents of the
                proof, not how it was formed. But, I still think that's pretty
                cool and valuable.
       
        alberth wrote 14 hours 8 min ago:
        OT: Since the author is a former Apple UX designer who worked on the
        Human Interface Guidelines, I hope he shares his thoughts on the recent
        macOS 26 and iOS updates - especially on Liquid Glass.
        
   URI  [1]: https://jenson.org/about-scott/
       
        sporkxrocket wrote 14 hours 9 min ago:
        I like this article, and I didn't expect to, because volumes have been
        written about how you should be boring and how building things in an
        interesting way just for the hell of it is bad (something I don't agree
        with).
        
        Small models doing interesting (boring to the author) use-cases is a
        fine frontier!
        
        I don't agree at all with this though:
        
        > "LLMs are not intelligent and they never will be."
        
        LLMs already write code better than most humans. The problem is we
        expect them to one-shot things that a human may spend many
        hours/days/weeks/months doing. We're lacking coordination for long-term
        LLM work. The models themselves are probably even more powerful than we
        realize, we just need to get them to "think" as long as a human would.
       
          liampulles wrote 5 hours 37 min ago:
          "think" is doing a lot of heavy lifting there. Let's remember that
          neural networks are designed primarily not as an accurate brain
          models but as matrix models that can be calculated on in a feasible
          time scale on GPUs.
       
          da_chicken wrote 13 hours 6 min ago:
          The issue is one that's been stated here before: LLMs are language
          models. They are not world models. They are not problem models. They
          do not actually understand the world or the underlying entities
          represented by language, or the problems being addressed. LLMs
          understand the shape of a correct answer, and how the components of
          language fit together to form a correct answer. They do that because
          they have seen enough language to know what correct answers look
          like.
          
          In human terms, we would call that knowing how to bullshit. But just
          like a college student hitting junior year, sooner or later you'll
          learn that bullshitting only gets you so far.
          
          That's what we've really done. We've taught computers how to
          bullshit. We've also managed to finally invent something that lets us
          communicate relatively directly with a computer using human
          languages. The language processing capabilities of an LLM are an
          astonishing multi-generational leap. These types of models will
          absolutely be the foundation for computing interfaces in the future.
          But they're still language models.
          
          To me it feels like we've invented a new keyboard, and people are
          fascinated by the stories the thing produces.
       
            adastra22 wrote 4 hours 41 min ago:
            I could make the exact same argument about the activation loops
            happening in your brain when you typed this out.
            
            Transformer architectures are not replicas of human brain
            architecture, but they are not categorically different either.
       
            rbranson wrote 8 hours 55 min ago:
            Is it bullshitting to perform nearly perfect language to language
            translation or to generate photorealistic depictions from text
            quite reliably? or to reliably perform named entity extraction or
            any of the other millions of real-world tasks LLMs already perform
            quite well?
       
              da_chicken wrote 4 hours 27 min ago:
              Picking another task like translation which doesn't really
              require any knowledge outside of language processing is not a
              particularly good way to convince me that LLMs are doing anything
              other than language processing. Additionally, "near perfect" is a
              bit overselling it, IMX, given that they still struggle with
              idioms and cultural expressions.
              
              Image generation is a bit better, except it's still not really
              aware of what the picture is, either. It's aware of how images
              are described by others, not of the truth of the generated image.
              It makes pictures of dragons quite well, but if you ask it
              for a contour map of a region, is it going to represent it
              accurately? It's not concerned about truth, it's concerned about
              truthiness or the appearance of truth. We know when that
              distinction is important. It doesn't.
       
            zoom6628 wrote 9 hours 33 min ago:
            THIS !
       
          bigstrat2003 wrote 13 hours 29 min ago:
          > LLMs already write code better than most humans.
          
          If you mean better than most humans considering the set of all
          humans, sure. But they write code worse than most humans who have
          learned how to write code. That's not very promising for them
          developing intelligence.
       
            adastra22 wrote 4 hours 43 min ago:
            I have been writing code for nearly 30 years. I have my name on a
            couple of ingenious patents.
            
            I have fully integrated LLMs into my daily workflow, and am
            routinely surprised at the innovative solutions that Claude
            sometimes comes up with. There is a lot of generated code that is
            far beyond my own capabilities, even as an expert in the field in
            which I’m working.
       
        tibbar wrote 14 hours 21 min ago:
        I think this is, essentially, a wishful take. The biggest barrier to
        models being able to do more advanced knowledge work is creating
        appropriately annotated training data, followed by a few specific
        technical improvements the labs are working on. Models have already
        nearly maxed out "work on a well-defined puzzle that can be feasibly
        solved in a few hours" -- stunning! -- and now labs will turn to
        expanding other dimensions.
       
          adastra22 wrote 4 hours 46 min ago:
          There are plenty of ways of writing more capable software stacks
          using LLMs which don’t rely on reinforcement learning. If anything,
          the AI labs have too much of a focus on building larger models with
          bigger or better sets of labeled data, where algorithmic changes will
          let you do more with the same tools.
       
        mikepalmer wrote 14 hours 41 min ago:
        "LLMs are not intelligent and they never will be."
        
        If he means they will never outperform humans at cognitive or robotics
        tasks, that's a strong claim!
        
        If he just means they aren't conscious... then let's don't debate it
        any more here. :-)
        
        I agree that we could be in a bubble at the moment though.
       
          vjvjvjvjghv wrote 2 hours 22 min ago:
          The word “never” is a dangerous one. I remember that computers
          would “never” beat humans at chess. And people would never use
          the internet for banking.
       
          fakwandi_priv wrote 8 hours 59 min ago:
          I have heard people on both ends of the spectrum:
          
          - LLM's are too limited in capabilities and make too many mistakes
          - We're still in the DOS era of LLM's
          
          I'm leaning more towards the 2nd, but in either case Pandora's box
          has been opened and you can already see the effects of the direction
          our civilization is moving in with this technology.
       
            Aldipower wrote 3 hours 32 min ago:
            DOS actually worked pretty well; at least it was honest.
       
              pjmlp wrote 3 hours 17 min ago:
              I had lots of fun with DOS, even if I was the only PC guy in a
              circle of friends that were Amiga owners.
       
        stephenlf wrote 15 hours 17 min ago:
        Great take. I personally find the thought of spec-driven development
        tedious and boring. But maybe that’s a good thing.
       
        akagusu wrote 2 days ago:
        I also agree that boring is good, but in our current society you won't
        get a job for being boring, and when you get a job, it is guaranteed
        you are not being paid to solve problems.
       
          kmarc wrote 8 hours 44 min ago:
          Some early retirees who started learning Cobol just 8 years ago would
          very much disagree with you :-)
       
          com2kid wrote 14 hours 8 min ago:
          > but in our current society you won't get a job for being boring,
          
          One can argue that every other field of engineering outside of
          Software Engineering, specializes in making complex things into
          boring things.
          
          We are the unique snowflakes that take business use cases and build
          castles in the clouds that may or may not actually solve the business
          problem at hand.
       
            voxelghost wrote 12 hours 39 min ago:
            We're everything from the Architect to the concrete guy, to the
            framer, carpenter, sparky, and plumber.
            
            ... and if it all falls down, don't blame us - you clicked the EULA
            /s
       
          Tagbert wrote 14 hours 43 min ago:
          One of my main job functions is to watch out for and solve problems.
       
          keyle wrote 15 hours 16 min ago:
          > and when you get a job, it's is guaranteed you are not being paid
          to solve problems
          
          That's just your experience, based on your geolocation and chain of
          events.
       
       
   DIR <- back to front page