_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Pnpm has a new setting to stave off supply chain attacks
       
       
        lloydatkinson wrote 7 min ago:
         Why must this be in a workspace file, which you may not even have if
         you aren't in a monorepo? Why not in package.json? Anyway, it's
         progress.
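         
         For reference, a sketch of how the setting looks in
         pnpm-workspace.yaml (value in minutes, assuming the format from the
         pnpm docs):
         
           minimumReleaseAge: 1440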
       
        tripplyons wrote 26 min ago:
        If I run:
        
        pnpm config set -g minimumReleaseAge 1440
        
        Does that work as well? I can't tell if the global settings are the
        same as workspace settings, and it lets me set nonsense keys this way,
        so I'm not sure if there is a global equivalent.
       
        NamlchakKhandro wrote 1 hour 47 min ago:
        npm is now the tutorial filter.
        
        If I see someone using npm as a cli tool unironically...
       
        user1999919 wrote 1 hour 55 min ago:
        but what will we ever do without our: "developers are lazy, developers
        are dumb, left-pad degeneracy!"
       
        tuananh wrote 2 hours 20 min ago:
         in corp settings, you usually have a proxy registry. you can set up
         a firewall there for this kind of thing, filtering based on license,
         cve, release date, etc...
       
        Ozzie_osman wrote 2 hours 38 min ago:
        Does anyone understand why npm isn't adding these sorts of features?
       
          JoshuaEN wrote 1 hour 3 min ago:
          There was an NPM RFC for this feature (though not as focused on
          supply chain attacks) in 2022, but the main response mirrored some of
          the other comments in here.
          
          "waiting a length of time doesn’t increase security, and if such a
          practice became common then it would just delay discovery of
          vulnerabilities until after that time anyways"
          
   URI    [1]: https://github.com/npm/rfcs/issues/646#issuecomment-12824971...
       
        chr15m wrote 2 hours 59 min ago:
        The correct value for this setting is infinity seconds. Upgrades should
        be considered and deliberate, not automatic.
       
          JoshuaEN wrote 44 min ago:
          I don't think this is realistic in the default npm ecosystem where
          projects can have 1000s of dependencies (with the majority being
          transitive with fuzzy versions).
          
           Though pnpm does have a setting to help with this too [1]:
           time-based resolution, which effectively pins subdependencies
           based on the published time of the direct dependency.
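           
           In pnpm-workspace.yaml that would look something like this (a
           sketch, assuming the key name from the linked settings page):
           
             resolutionMode: time-based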
          
   URI    [1]: https://pnpm.io/settings#resolutionmode
       
          esafak wrote 54 min ago:
          No, it isn't. Upgrades should be routine, like exercising. With your
          approach it becomes increasingly difficult and eventually impossible
          to upgrade anything since it requires moving a mountain. An update a 
          ̶d̶a̶y̶ week makes the tech debt go away.
       
          kibwen wrote 2 hours 28 min ago:
          The downside of this approach is that this is how you create an
          ecosystem where legitimate security fixes never end up getting
          applied. There's no free lunch, you need to decide whether you're
           more concerned about vulnerabilities from intentional backdoors (and thus
          never update anything automatically) or vulnerabilities from ordinary
          unintentional bugs (and thus have a mechanism for getting security
          updates automatically).
       
        bamboozled wrote 3 hours 23 min ago:
         Can anyone tell me if yarn is just as vulnerable as NPM? Isn't it
         the packages that are vulnerable and not the package manager
         software itself?
       
        __MatrixMan__ wrote 4 hours 1 min ago:
         It's not a bad idea; it might help in certain cases.
        
        But the real solution to this kind of attack is to stop resolving
         packages by name and instead resolve them by hash, then bind a name
        to that hash for local use.
        
        That would of course be a whole different, mostly unexplored, world,
        but there's just no getting around the fact that blindly accepting
        updated versions of something based on its name is always going to
        create juicy attack surface around the resolution of that name to some
        bits.
       
          mort96 wrote 3 hours 30 min ago:
          The problem here isn't, "someone introduced malware into an existing
          version of a package". The problem is, "people want to stay up to
          date, so when a new patch version is released, everyone upgrades to
          that new patch version".
       
            __MatrixMan__ wrote 2 hours 39 min ago:
            The problem is that they implicitly do so.  If they had to enter
            the hash of the latest and greatest version, the onus would be on
            them at that time to scrutinize it. At worst the spread of the
            malicious package would be slowed, and at best it would be stopped.
       
              mort96 wrote 1 hour 0 min ago:
              Surely you'd achieve the same thing by making people manually
              enter a new version number?
              
               I'm not inherently against the idea of specifying a hash; it
               would protect against NPM hosting infrastructure being
               compromised, but again, that's not what we're seeing here.
       
          frankdejonge wrote 3 hours 46 min ago:
          Resolving by hash is a half solution at best. Not having automated
          dependency upgrades also has severe security downsides. Apart from
           that, lock files basically already do what you describe: they
           contain the hashes, the resolution is based off the name, and the
           hash ensures the integrity of the resolved package. The problem is
          upgrade automation and supply chain scanning. The biggest issue there
          is that scanning is not done where the vulnerability is introduced
          because there is no money for it.
       
            __MatrixMan__ wrote 2 hours 33 min ago:
            Do you suppose that automated dependency upgrades are less likely
            to introduce malicious code than to remove it? They're about
            compliance, not security. If I can get you to use malicious code in
            the first place I can also trick you into upgrading from safe code
            to the vulnerable code in the name of "security".
            
            As for lock files, they prevent skulduggery after the maintainer
            has said "yeah, I trust this thing and my users should too" but the
             attacks we're seeing are upstream of that point because maintainers
            are auto-trusting things based on their name+version pair, not
            based on their contents.
       
              debazel wrote 1 hour 42 min ago:
              > If I can get you to use malicious code in the first place I can
              also trick you into upgrading from safe code to the vulnerable
              code in the name of "security".
              
               Isn't the whole point that malicious actors usually only have a
               very short window where they can actually get you to install
               anything, before being shut out again? That's the whole point
               of having a delay in the package manager.
       
                __MatrixMan__ wrote 1 hour 10 min ago:
                Who is going to discover it in that time? Not the maintainers,
                they've already released it. Their window for scrutiny has
                passed.
                
                There is some sense in giving the early adopters some time to
                raise the alarm and opting into late adoption, but isn't that
                better handled by defensive use of semantic versioning?
                
                 Consider the xz-utils backdoor. It was introduced a month before
                it was discovered, and it was discovered by a user.
                
                If that user had waited a few days, it would just have been
                discovered a few days later, during which time it may have been
                added to an even wider scope of downstream packages. That is,
                supposing they didn't apply reduced scrutiny due to their
                perception that it was safe due to the soak period.
                
                 It's not nothing, but it's susceptible to creating a false
                 sense of security.
       
                  debazel wrote 1 hour 1 min ago:
                  The maintainers did notice in both of the recent attacks, but
                  it takes time to regain access to your compromised account to
                  take the package down, contact npm, etc.
                  
                  All recent attacks have also been noticed within hours of
                  release by security companies that automatically scan all
                  newly released packages published to npm.
                  
                  So as far as I know all recent attacks would have been
                  avoided by adding a short delay.
       
          mirekrusin wrote 3 hours 55 min ago:
           name + version are immutable, you can't republish packages in npm
           under an existing version.
          
          you can only unpublish.
          
          content hash integrity is verified in lockfiles.
          
          the problem is with dependencies using semver ranges, especially wide
          ones like "debug": "*"
          
           initiatives like provenance statements [0] / code signing are also
           a good complement to delayed dependency updates.
          
           also, not running postinstall scripts by default / whitelisting
           them is a good default in pnpm.
          
          modifying (especially adding) keys in npmjs.org should be behind
          dedicated 2fa (as well as changing 2fa)
          
          [0]
          
   URI    [1]: https://docs.npmjs.com/generating-provenance-statements
       
            __MatrixMan__ wrote 2 hours 44 min ago:
            Those are promises that npm intends to keep, but whether they do or
            not isn't something that you as a package user can verify.  Plus
            there's also the possibility that the server you got those bits
            from was merely masquerading as npm.
            
            The only immutability that counts is immutability that you can
            verify, which brings us back to cryptographic hashes.
       
        h4ch1 wrote 4 hours 9 min ago:
         There's an open discussion about adding something similar to bun as
         well [1]
         
         minimumReleaseAge doesn't seem to be a bulletproof solution, so
         there's still some research/testing to be done in this area.
        
   URI  [1]: https://github.com/oven-sh/bun/issues/22679
       
        keraf wrote 4 hours 51 min ago:
        I might be naive but why isn't any package manager (npm, pnpm, bun,
        yarn, ...) pushing for a permission system, where packages have to
         define in the package.json what permissions they need access to?
        À la Deno but scoped to dependencies or like mobile apps do with their
        manifest.
        
        I know it would take time for packages to adopt this but it could be
        implemented as parameters when installing a new dependency, like `npm i
        ping --allow-net`. I wouldn't give a library like chalk access to I/O,
        processes or network.
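         
         Something like this hypothetical manifest entry (nothing like it
         exists in npm today, purely a sketch):
         
           "permissions": {
             "chalk": [],
             "ping": ["net"]
           }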
       
          IanCal wrote 4 hours 39 min ago:
          I feel like that would require work from the language side, or at
          least runtimes. Is there a way of stopping code in one package from,
          say, hitting the network?
          
          You might be able to do this around install scripts, though disk
          writing is likely needed for all (but perhaps locations could be
          controlled).
       
            Filligree wrote 4 hours 35 min ago:
            We've seen a lot of stunningly incompetent attacks that
            nevertheless get to a lot of people.
            
            Yeah, it needs work from the language runtime, but I think even a
            hacky, leaky 'security' abstraction would be helpful, because the
            majority of malware developers probably aren't able to break out of
            a language-level sandbox, even if the language still allows you to
            do unsafe array access.
            
            Then we can iterate.
       
        the_mitsuhiko wrote 4 hours 51 min ago:
         I think uv should get some credit for being an early supporter of
         this. They originally added it as a hidden way to create stable
         fixtures for their own tests, but it has become a pretty popular
         flag to use.
        
        This for instance will only install packages that are older than 14
        days:
        
        uv sync --exclude-newer $(date -u -v-14d '+%Y-%m-%dT%H:%M:%SZ')
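         
         Note that -v-14d is BSD/macOS date syntax; with GNU coreutils date
         the equivalent would be something like:
         
         uv sync --exclude-newer $(date -u -d '14 days ago' '+%Y-%m-%dT%H:%M:%SZ')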
        
        It's great to see this kind of stuff being adopted in more places.
       
          mcintyre1994 wrote 4 hours 44 min ago:
          Nice, but I think the config file is a much better implementation for
          protecting against supply chain attacks, particularly those targeting
          developers rather than runtime. You don’t want to rely on every
          developer passing a flag every time they install. This does suffer
          from the risk of using `npm install` instead of `pnpm install`
          though.
          
           It would also be nice to have this as a flag so you can use it on
           projects that haven't configured it, though. I wonder if that
           could be added too.
       
            ramses0 wrote 2 hours 14 min ago:
             Just Minimum Version Selection in conjunction with "Minimum
             non-Vulnerable Version" (and this "--minAge") would do a lot,
             and would effectively suss out a lot of poorly/casually
             maintained packages (eg: "finished" ones). [1] MVS makes tons of
             sense: you shouldn't randomly pick up "new" packages that
             haven't been "certified" by package maintainers in their own
             dependencies.
            
            In the case of a vulnerable sub-dependency, you're effectively
            having to "do the work" to certify that PackageX is compatible with
            PackageY, and "--minAge" gives industry (and maintainers) time to
            scan before insta-pwning anyone who is unlucky that day.
            
   URI      [1]: https://research.swtch.com/vgo-mvs#upgrade_timing
       
            cap11235 wrote 2 hours 45 min ago:
            You can put the uv setting in pyproject.toml or uv.toml.
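             
             For example, in pyproject.toml (a sketch, assuming uv's
             documented exclude-newer key):
             
               [tool.uv]
               exclude-newer = "2025-01-01T00:00:00Z"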
       
              fainpul wrote 1 hour 39 min ago:
              But then you have to hardcode a timestamp, since this is not
              gonna work in uv.toml:
              
                 exclude-newer = $(date -u -v-14d '+%Y-%m-%dT%H:%M:%SZ')
       
              mcintyre1994 wrote 2 hours 14 min ago:
              Nice, supporting both definitely seems ideal.
       
        _betty_ wrote 5 hours 27 min ago:
        how about requiring some kind of interaction if they want to run an
        install script?
       
          jsheard wrote 5 hours 13 min ago:
          Pnpm already did that:
          
   URI    [1]: https://github.com/pnpm/pnpm/releases/tag/v10.0.0
       
        wallrat wrote 5 hours 29 min ago:
        There are a few commercial products that allow you to do this also for
         other ecosystems (e.g. maven, nuget, pypi etc), including ours. [1]
         Good to see some OSS alternatives showing up!
        
   URI  [1]: https://docs.bytesafe.dev/policies/delay-upstream/
       
        gausswho wrote 5 hours 44 min ago:
         'Delayed dependency updates' is a response to supply chain attacks in
        the JavaScript world, but it aptly describes how I have come to
        approach technology broadly.
        
        Large tech companies, as with most industry, have realized most people
        will pay with their privacy and data long before they'll pay with
        money. We live in a time of the Attention Currency, after all.
        
        But you don't need to be a canary to live a technology-enabled life.
         Much software that you pay for with your privacy and data has free or cheap
        open-source alternatives that approach the same or higher quality. When
        you orient your way of consuming to 'eh, I can wait till the version
        that respects me is built', life becomes more enjoyable in myriad ways.
        
         I don't take this to absolute levels. I pay for fancy-pants LLMs,
        currently. But I look forward to the day not too far away where I can
        get today's quality for libre in my homelab.
       
        progx wrote 6 hours 18 min ago:
         That doesn't really solve the problem.
         
         A better (not perfect) solution: every package should be analysed by
         AI on each update, before it is publicly available, to detect
         dangerous code and assign a rating.
         
         A rating threshold would be defined in package.json: when the remote
         package is below that value it can be updated; if it is higher, a
         warning should appear.
         
         But this will cost money. I hope that companies like GitHub will
         allow package repositories to use their services for free, or we
         should find a way to distribute this service across us (the users
         and devs), like a BOINC client.
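         
         Hypothetically, that threshold might look something like this in
         package.json (no registry or package manager supports such a key
         today, purely a sketch):
         
           "maxRiskRating": 3
         
         where anything the scanner rates above 3 warns instead of
         installing.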
       
          philipwhiuk wrote 1 hour 5 min ago:
          A better solution is restricting package permissions.
       
          jonkoops wrote 6 hours 16 min ago:
          Ah, yes! The universal and uncheatable LLM! Surely nothing can go
          wrong.
       
            NitpickLawyer wrote 5 hours 23 min ago:
            Perfect is the enemy of good. Current LLM systems + "traditional
            tools" for scanning can get you pretty far into detecting the low
            hanging fruit. Hell, I bet even a semantic search with small
            embedding models could give you a good insight into "what's in the
            release notes matches what's in the code". Simply flag it for being
            delayed a few hours, till a human can view it. Or run additional
            checks.
       
            progx wrote 6 hours 13 min ago:
            I can't wait to read about your solution.
       
              orphea wrote 4 hours 59 min ago:
              You don't need to be a chef to tell that the soup is too salty.
       
            progx wrote 6 hours 14 min ago:
               As I wrote, "not perfect". But better than anything else, or nothing.
       
              robertlagrant wrote 6 hours 1 min ago:
              The Politician's Syllogism[0] is instructive.
              
              [0]
              
   URI        [1]: https://en.wikipedia.org/wiki/Politician's_syllogism
       
                progx wrote 5 hours 53 min ago:
                 OK, are we on reddit or facebook now?
                 
                 I thought we discussed problems and possible solutions here.
                 
                 My fault.
       
                  rpdillon wrote 1 hour 1 min ago:
                  I'm not sure why everyone is so hostile. Your idea has merit,
                   along the lines of a heuristic that triggers a human
                   review as a follow-up. I'd be surprised if this isn't
                   exactly the direction things go, though I doubt the tools
                   will be given away for free; rather they'll be made part
                   of the platform itself, or offered as an add-on service.
       
                  robertlagrant wrote 4 hours 29 min ago:
                  I don't think "we should use AI to solve this" is a solution
                  proposal.
       
        OskarS wrote 6 hours 22 min ago:
        I have a question: when I’ve seen people discussing this setting,
         people talk about using something like "3 days" or "7 days" as the
        timeout, which seems insanely short to me for production use. As a C++
        developer, I would be hesitant to use any dependency in the first six
        months of release in production, unless there’s some critical CVE or
        something (then again, we make client side applications with
        essentially no networking, so security isn’t as critical for us,
        stability is much more important).
        
        Does the JS ecosystem really move so fast that you can’t wait a month
        or two before updating your packages?
       
          diegof79 wrote 3 hours 5 min ago:
          Transitive dependencies are the main issue.
          
          Suppose you have a package P1 with version 1.0.0 that depends on D1
          with version ^1.0.0. The “^” indicates a range query. Without
          going into semver details, it helps update D1 automatically for minor
          patches or non-breaking feature additions.
          
          In your project, everything looks fine as P1 is pinned to 1.0.0.
          Then, you install P2 that also uses D1. A new patch version of D1
          (1.0.1) was released. The package manager automatically upgrades to
          1.0.1 because it matches the expression ^1.0.0, as specified by P1
          and P2 authors.
          
          This can lead to surprises. JS package managers use lock files to
          prevent changes during installs. However, they still change the lock
          file for additions or manual version upgrades, resolving to newer
          minor dependencies if the version range matches. This is often
          desirable for bug fixes and security updates. But, it opens the door
          to this type of attack.
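           
           A sketch of that flow with the hypothetical packages above:
           
             P1 1.0.0 -> depends on D1 ^1.0.0
             P2 1.0.0 -> depends on D1 ^1.0.0
             D1 1.0.1 is published (it matches ^1.0.0)
             installing P2 re-resolves D1 to 1.0.1 for the whole project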
          
          To answer your question, yes, the JS ecosystem moves faster, and pkg
          managers make it easy to create small libraries. This results in many
          “small” libraries as transitive dependencies. Rewriting these
          libraries with your own code works for simple cases like left-pad,
          but you can’t rewrite a webserver or a build tool that also has
          many small transitive dependencies. For example, the chalk library is
          used by many CLI tools to show color output.
       
          dtech wrote 3 hours 10 min ago:
           Waiting 6 months to upgrade a dependency seems crazy; that's
           definitely not a thing in other languages, though maybe at some
           companies. (It might be due to prioritization, but not due to some
           rule of thumb.)
           
           In the JVM ecosystem it's quite common to have Dependabot or
           Renovate automatically create PRs for dependency upgrades within a
           few hours of a release. If it's manual, it's highly irregular and
           depends on the company.
       
            slowroll22 wrote 1 hour 53 min ago:
             At a previous place I worked, on some of our products 6 months
             was the minimum - and explicitly a year for a few of the
             dependencies.
            
            The main deciding factors were the process and frequency it was
            released / upgraded by us or our customers.
            
            The on-prem installs had the longest delay because once it was out
            there it was harder for us to address issues. Some customers also
             had a change freeze in place once things had been approved,
             which was a pain to deal with if we needed to patch something
             for them.
            
            Products that had a shorter release or update cycle (e.g. the
            mobile app) had a shorter delay (but still a delay) because any
            issue could be addressed faster.
            
            The services that were hosted by us had the shortest delay on the
            order of days to weeks.
            
            There were obviously exceptions in both directions but we tried to
            avoid them.
            
             Prioritisation wasn't really an issue - a lot of dependencies
             were upgraded on internal builds so we had more time to test and
             verify before committing once they reached our stability rules.
            
            Other factors that influenced us: 
             - Blast radius - a buggy dependency in our desktop/server
             applications had more chance to cause damage than in our hosted
             web application, so dependencies there rolled a little slower.
            
             - Language (more like ergonomics of the language) - updating our
             C++ deps was a lot more cumbersome than JS deps.
       
              esafak wrote 57 min ago:
              As long as you can quickly upgrade a package when there's a
              security patch you're good. You make it sound like that's not the
              case, though.
       
          patwolf wrote 3 hours 12 min ago:
          It's common to have npm auditing enabled, which means your CI/CD will
          force you to update to a brand new version of a package because a
          security vulnerability was reported in an older one.
          
          I've also had cases where I've found a bug in a package, submitted a
          bug report or PR, and then immediately pulled in the new version as
          soon as it was fixed. Things move fast in the JavaScript/npm/GitHub
          ecosystem.
       
          codemonkey-zeta wrote 3 hours 25 min ago:
          I think the surface area for bugs in a C++ dependency is way bigger
          than a JS one. Pulling in a new node module is not going to segfault
          my app, for example.
       
          creesch wrote 5 hours 10 min ago:
          > Does the JS ecosystem really move so fast that you can’t wait a
          month or two before updating your packages?
          
          Really depends on the context and where the code is being used. As
           others have pointed out, most js packages use semantic versioning.
           For code that is exposed to the outside world, you generally want
           to apply patch releases (the last of the three numbers) rather
           quickly, as those will contain hotfixes, including fixes for CVEs.
          
          For the major and minor releases it really depends on what sort of
          dependencies you are using and how stable they are.
          
           The issue isn't really unique to the JavaScript ecosystem either. A
          bigger java project (certainly with a lot of spring related
          dependencies) will also see a lot of movement.
          
          That isn't to say that some tropes about the JavaScript ecosystem
          being extremely volatile aren't entirely true. But in this case I do
          think the context is the bigger difference.
          
          > then again, we make client side applications with essentially no
          networking, so security isn’t as critical for us, stability is much
          more important)
          
          By its nature, most JavaScript will be network connected in some
          fashion in environments with plenty of bad actors.
       
          pandemic_region wrote 5 hours 24 min ago:
          > Does the JS ecosystem really move so fast that you can’t wait a
          month or two before updating your packages?
          
          In 2 months, a typical js framework goes through the full Gartner
          Hype Cycle and moves to being unmaintained with an archived git repo
          and dozens of virus infected forks with similar names.
       
          ozim wrote 5 hours 54 min ago:
           NPM packages follow semantic versioning, so minor versions should
           be fine to auto-update. (There is still the issue that what a
           package maintainer considers minor might not be minor for you -
           but let's stick to the ideal world here.)
           
           I don't think people update major versions every month; it is more
           like every 6 months or once a year.
           
           I guess the problem is that people think auto-updating minor
           versions in the CI/CD pipeline will keep them more secure, since
           bug fixes land in minor versions, but in reality we see that this
           is not the case and attackers use it to spread malware.
       
          progx wrote 6 hours 15 min ago:
           Yes, but this is not only a JS problem; in PHP (composer) it is
           the same.
           
           Normally old major or minor versions don't get updates, only the
           latest.
           
           E.g. 4.1.47 (no update), 4.2.1 (gets the update).
           
           So if the problem is in 4.1 you must "upgrade" to 4.2.
           
           With "perfect" semver this shouldn't be a problem, since 4.2 only
           adds new features... but... back to reality, the world is not
           perfect.
       
        omnicognate wrote 6 hours 33 min ago:
        Should have included the units in the name or required a choice of unit
        to be selected as part of the value. Sorry, just a bugbear of mine.
       
          homebrewer wrote 5 hours 2 min ago:
          The new setting is consistent with the old ones, which is more
          important IMHO:
          
   URI    [1]: https://pnpm.io/settings#modulescachemaxage
       
            rtpg wrote 4 hours 36 min ago:
            the name could have included it though right?
       
              TheRoque wrote 1 hour 1 min ago:
               If the others don't include it, it could be another
               inconsistency though.
       
          fzeindl wrote 6 hours 20 min ago:
          ISO8601 durations should be used, like PT3M.
       
            mort96 wrote 3 hours 34 min ago:
             Oh wow, I've never looked at ISO8601 durations before and had no
             idea they were this ugly. Please, no, don't make me deal with
             ISO8601. I'd rather write a number of seconds, or a format like
             'X weeks' or 'Y hours Z minutes'. ISO8601 looks exclusively like
             a data interchange format.
       
            aa-jv wrote 6 hours 9 min ago:
            Should be easy, just add the ISO8601-duration package to your
            project ..
            
            /s
       
          zokier wrote 6 hours 24 min ago:
          Or just use ISO8601 standard notation (e.g. "P1D" for one day)
       
            1oooqooq wrote 2 hours 38 min ago:
            or PT1400M or P0.5DT700M?
            
            oh, you can use commas too.
            
            and if you're still not thinking this is fun, here's a quote from
            Wikipedia "But keep in mind that "PT36H" is not the same as
            "P1DT12H" when switching from or to Daylight saving time."
            
            just add a unit to your period parameters. sigh.
       
        postepowanieadm wrote 6 hours 54 min ago:
        If everyone is going to wait 3 days before installing the latest
        version of a compromised package, it will take more than 3 days to
        detect an incident.
       
          kibwen wrote 2 hours 37 min ago:
          Also, if everyone is going to wait 3 days before installing the
          latest version of a compromised package, it will take more than 3
          days to broadly disseminate the fix for a compromise in the wild. The
          knife cuts both ways.
       
          acdha wrote 2 hours 54 min ago:
          Think about how the three major recent incidents were caught: not by
          individual users installing packages but by security companies
          running automated scans on new uploads flagging things for audits.
          This would work quite well in that model, and it’s cheap in many
          cases where there isn’t a burning need to install something which
          just came out.
       
          blamestross wrote 4 hours 4 min ago:
          1) Checks and audits will still happen (if they are happening at all)
          
          2) Real chances for owners to notice they have been compromised
          
          3) Adopt early before that commons is fully tragedy-ed.
       
          singulasar wrote 5 hours 24 min ago:
          Not really, app sec companies scan npm constantly for updated
          packages to check for malware. Many attacks get caught that way.
          e.g. the debug + chalk supply chain attack was caught like this:
          
   URI    [1]: https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
       
          anematode wrote 6 hours 39 min ago:
          A lot of people will still use npm, so they'll be the canaries in the
          coal mine :)
          
          More seriously, automated scanners seem to do a good job already of
          finding malicious packages. It's a wonder that npm themselves haven't
          already deployed an automated countermeasure.
       
            mcintyre1994 wrote 5 hours 0 min ago:
             In the case of the chalk/debug etc hack, the first detection
             seemed to come from a CI build failure it caused: [1]
             
             > It started with a cryptic build failure in our CI/CD pipeline,
             which my colleague noticed
            
            > This seemingly minor error was the first sign of a sophisticated
            supply chain attack. We traced the failure to a small dependency,
            error-ex. Our package-lock.json specified the stable version 1.3.2
            or newer, so it installed the latest version 1.3.3, which got
            published just a few minutes earlier.
            
   URI      [1]: https://jdstaerk.substack.com/p/we-just-found-malicious-co...
       
              DougBTX wrote 2 hours 15 min ago:
              > Our package-lock.json specified the stable version 1.3.2 or
              newer
              
              Is that possible? I thought the lock files restricted to a
              specific version with an integrity check hash. Is it possible
              that it would install a newer version which doesn't match the
              hash in the lock file? Do they just mean package.json here?
       
                Mattwmaster58 wrote 1 hour 49 min ago:
                > Is that possible?
                
                 This comes up every time npm install is discussed. Yes, npm
                 install will "ignore" your lockfile and install the latest
                 dependencies it can that satisfy the constraints of your
                 package.json. Yes, you should use npm clean-install. One
                 shortcoming is that the implementation insists on deleting
                 the entire node_modules folder, so package installs can
                 actually take quite a bit of time, even when all the
                 packages are being served from the npm disk cache:
                
   URI          [1]: https://github.com/npm/cli/issues/564
       
                streptomycin wrote 2 hours 9 min ago:
                If they were for some reason doing `npm install` rather than
                `npm ci`, then `npm install` does update packages in the lock
                file. Personally I always found that confusing, and yarn/pnpm
                don't behave that way. I think most people do `npm ci` in CI,
                unless they are using CI to specifically test if `npm install`
                still works, which I guess maybe would be a good idea if you
                use npm since it doesn't like obeying the lock file.
       
                  Rockslide wrote 1 hour 46 min ago:
                  How does this get repeated over and over, when it's simply
                  not true? At least not anymore. npm install will only update
                  the lockfile if you make changes to your package.json.
                  Otherwise, it will install the versions from the lockfile.
       
            vasachi wrote 6 hours 20 min ago:
            If only there was a high-ranking official at Microsoft, who could
            prioritize security[1]! /s
            
   URI      [1]: https://blogs.microsoft.com/blog/2024/05/03/prioritizing-s...
       
       
   DIR <- back to front page