_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
   URI   Will Nix Overtake Docker?
   DIR   text version
        beanjuiceII wrote 2 days ago:
        until nix works on windows natively I wouldn't give it a second
        thought, what is it like on osx?
        Hakashiro wrote 2 days ago:
        Short answer (without reading the link): no.
        Long answer: nooooooooooooooo.
        jillesvangurp wrote 2 days ago:
        I have some experience using docker/docker-compose, like world+dog.
        Mostly it's good. Docker is simple, or it can be if you don't over
        engineer things. Simple is why docker got popular. The other reason it
        is popular is because it decouples the process of provisioning software
        from deployment and infrastructure provisioning which these days are
        separate activities typically done by different
        people/teams/organizations. Splitting responsibilities for this is a
        big win. If you self host all of that might be internal to your org but
        it still allows you to split the responsibility across teams.
        I have no experience with Nix. But a couple of red flags that jump out
        of this thread for me are that "it's complicated", that Haskell is
        somehow involved, and that it is vaguely suggested to be somehow
        similar to Docker, which is clearly not the case.
        When it comes to putting stuff in a docker container, I guess you could
        use Nix. But why would you? It makes about as much sense as using
        Ansible or Puppet for that.
        Nothing against Haskell, but it evokes an image that might be a bit
        harsh to some but probably rings true to many: the wrong people getting
        excited for the wrong kind of reasons. I'm talking over-engineering
        here. People getting overly excited and creating a mess of hard to
        understand code that becomes a burden for whoever has to train up to
        maintain that code. People wielding hammers looking for ever more nails
        to whack.
        I've unceremoniously ditched multiple configuration languages over the
        years. Usually it feels good getting rid of all the over engineered
        crap when you realize you don't need most of it anymore. Of course the
        flip side is that what you replace it with might end up being similar.
        Been there and done that.
        However, I seem to only need a few lines of Dockerfile to replace what
        used to be a convoluted mess of puppet or ansible scripts to provision
        a thing that runs our thing. No longer needed. No more obsessing over
        how to, for example, get NTP to tell the time to a server and why that somehow
        requires wrapping your head around a 600 line puppet module that does
        god knows what. Not a thing anymore.
        Nix sounds more like that than like Docker. I used to know people that
        actually used puppet to set up their laptop. Nix might be a better
        product for that. But I'm looking to minimize the need for that
        entirely. Dockerized development environments are great for that. A
        laptop with a browser running vs code is all you need these days.
        People are using freaking ipads to do development even.
        kohlerm wrote 2 days ago:
        Nix will not overtake Docker because
        1. The overlap is not that big
        2. Nix has usability issues
        That being said, Nix, even with its usability issues, works better for
        reproducible development environments than anything else (including
        Docker) ATM.
        Therefore, in my view, either Nix or something very similar will be the
        solution for this and will replace Docker (some people run builds using
        Docker containers) for this use case.
        qalmakka wrote 2 days ago:
        Would peaches replace bananas? The comparison IMHO makes very little
        sense. They are different tools that solve different issues in
        different ways.
        It's true that Docker is often abused in very creative ways to do what
        Nix does, but it still remains a different tool with a different goal
        in mind.
        1_player wrote 2 days ago:
        For containers? No way, Dockerfiles are much simpler than nix
        For workstations? ostree is the future, not nix, IMO. Check out
        Silverblue. Flatpak takes care of the desktop aspect.
        NixOS is an interesting attempt at solving the reproducibility problem,
        but it hasn't been adopted because it's just a stepping stone towards a
        better solution. It has far too many quirks to hit mainstream, it's too
        complex, under-documented and it's still based upon package management,
        which is becoming obsolete as the Linux world is moving fast towards
        containerising everything.
          megumax wrote 2 days ago:
          >Dockerfiles are much simpler than nix configuration
          I highly doubt this. Dockerfiles are simpler for a hello world
          microservice, but they get worse and unmaintainable when you add
          lots of them with docker-compose.
          >ostree is the future, not nix
          >flatpak takes care of the desktop aspect
          I really hate this approach, it just moves the state
          from the root system to the user. The root directory is immutable,
          but you move the whole responsibility to Flatpak, which is not even
          comparable to nix.
          Meanwhile, in the case of Nix, the whole system is reproducible and
          settings are more or less immutable.
          >it's too complex
          My whole nix config (with specific configs for vscode, neovim,
          chromium, wireguard, rust, python, nodejs, c++, go) is less than 600
          lines of code, and it's readable. Also, I configure most of my open
          source projects via nix as well, since it's pretty easy to share the
          environment with the other developers.
          Nix works everywhere: on every Linux distro, on macOS, and even on
          Windows via Cygwin or WSL.
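          To give a concrete flavour, a declarative snippet of the kind I'm
          describing looks roughly like this (a NixOS module; the package
          names are illustrative, not my actual config):

              { pkgs, ... }:
              {
                environment.systemPackages = with pkgs; [
                  vscode
                  neovim
                  chromium
                  go
                  nodejs
                  rustc
                  cargo
                ];
              }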
          >under documented
          That's a really good point: nix maintainers cared more about
          usability than about documentation. But it looks like that's
          changing; they moved the docs to mdBook and a team is working
          specifically on improving the docs.
          pxc wrote 2 days ago:
          > it's still based upon package management, which is getting obsolete
          as the Linux world is moving fast towards containerising everything.
          Container images still rely on package management. There's no fully
          containerized future where everything is just `./configure && make &&
          make install`'d all the way up from Linux From Scratch or whatever.
          Fedora Silverblue's ostree image is still managed by dnf and RPM
          underneath, too.
          viraptor wrote 2 days ago:
          There's a third use case (and possibly more): For developers of a
          project, where you want to manage some software per-environment,
          rather than via flatpak. But you also want closer integration with
          the system, not building a new system off to the side (like containers).
        KingMachiavelli wrote 2 days ago:
        Eventually? It is possible. You could probably make a thin wrapper to
        convert basic Dockerfiles into a native Nix system configuration. But
        running Docker images on Nix is easy and making Docker images from Nix
        is pretty easy so there really isn't a need to get rid of the
        Dockerfile itself.
        Also given that a blog post posted today is already sort-of out of date
        [1] when it comes to Nix, it is going to take a while. (Not that it
        really matters for the context of this blog post).
        [1] Despite being an 'experimental' feature, the future of Nix is
        Flakes which are most simply a function definition (inputs & outputs)
        with the inputs pinned via hashes.
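        As a rough sketch (the input branch and package below are only
        illustrative), a minimal flake is just such a function; the actual
        pins end up in the generated flake.lock:

            {
              inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

              outputs = { self, nixpkgs }: {
                packages.x86_64-linux.hello =
                  nixpkgs.legacyPackages.x86_64-linux.hello;
              };
            }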
        DeathArrow wrote 2 days ago:
        >Both tools can solve the age-old problem of "it works on my machine".
        But Docker does much more than that. People use it especially to
        isolate running software and deploy microservices.
        4ad wrote 2 days ago:
        I don't like nix[1], but I sure hope it will overtake docker. Any
        theoretical foundation is better than no foundation.
   URI  [1]: https://lobste.rs/s/sizjqf/migrating_from_docker_podman#c_opli...
          pxc wrote 2 days ago:
          It kinda sounds like your main gripe with Nix is that it's a
          source-based package manager with transparent binary caching rather
          than a binary package manager.
          I think that's a mistake, but it's true that Nixpkgs suffers in some
          places from missing or overlapping abstractions.
        greatgib wrote 2 days ago:
        Throughout my life, I have worked with a lot of build systems, but I
        find Nix syntax and commands to be completely awful!
        I see that it could be useful, but it completely turns me off.
          soraminazuki wrote 2 days ago:
          The Nix syntax being complex is something that is taken as a given
          in online circles, but it's actually one of the simplest languages
          that I've learned. The Nix syntax is in essence just JSON with
          functions, and it consists of the following building blocks:
          * Primitive values (strings, numbers, paths, booleans, null)
          * Lists and sets
          * Variables
          * Functions
          * Conditionals
          * Assertions
          * Arithmetic operators
          And that's all there is. For people familiar with programming, it
          should only take around 10 minutes or so to grasp the syntax. It
          would take much, much longer for commonly used programming languages
          like C, C++, Javascript, Python, Ruby, Perl, ... you name it.
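          As a tiny illustration (the names here are made up), an expression
          using most of those building blocks fits in a few lines:

              let
                # a function taking a set, with a default argument
                greet = { name, excited ? false }:
                  "Hello, ${name}" + (if excited then "!" else ".");
              in {
                # a set containing a string built by calling the function
                message = greet { name = "world"; excited = true; };
                # and a list of numbers
                numbers = [ 1 2 3 ];
              }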
          The official docs do a decent job at explaining the syntax[1]. I
          didn't have any prior experience with functional languages, but I
          didn't have much trouble grasping the syntax once I'd read through
          the documentation.
   URI    [1]: https://nixos.org/manual/nix/stable/#ch-expression-language
            greatgib wrote 2 days ago:
            It is not necessarily what it can do that's the problem, but how it does it.
            Looking at the page you link for example: [1] <<{ stdenv, fetchurl,
            perl }:>>
            Nix functions generally have the form { x, y, ..., z }: e where x,
            y, etc. are the names of the expected arguments, and where e is the
            body of the function.
            Strange, but let's say ok. But then, in the example, where does the
            function end?
            <<"So we have to build a package. Building something from other
            stuff is called a derivation in Nix (as opposed to sources, which
            are built by humans instead of computers). >>
            Looking at that, I say whaaaattt? "derivation" does not mean
            anything obvious to anyone used to building packages. And then, as
            the subtitle said, did they decide to use a weird name just for
            the pleasure of confusing everyone?
            A set is just a list of key/value pairs where each key is a string
            and each value is an arbitrary Nix expression. They take the
            general form { name1 = expr1; ... nameN = exprN; }.
            So, looking at the code from afar, functions, sets, everything looks
            mixed up ...
   URI      [1]: https://nixos.org/manual/nix/stable/expressions/expression...
              soraminazuki wrote 2 days ago:
              The section you linked to was not specifically about syntax, but
              a high level overview that is explained in detail later on. For
              syntax, you should look at [1]. It should explain things like how
              you write functions and where they end.
              > Looking at that, I say whaaaattt? "derivation" does not mean
              anything obvious to anyone used to build packages. And then, as
              the subtitle said, did they decide to use a weird name just to
              confuse everyone for the pleasure?
              The same kind of reaction can be made to any programming concept
              introduced in an introductory material. I don't think it's fair
              to ridicule Nix for defining its own technical terms, especially
              when the writing you referred to explains what it is. It is
              somewhat light on details, but you can't possibly expect an
              introductory section to explain it all at once. It should be
              enough to keep you going.
              But to answer your question, a derivation is the low-level build
              instruction that Nix's package builder can parse and build
              packages out of. You use the Nix language to convert high level
              package definitions to Nix derivations before feeding them to the
              package builders.
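              To make it concrete, a whole (hypothetical) package definition
              is roughly the following; the { stdenv, fetchurl, perl }: header
              is the function's argument list, the mkDerivation call is its
              body, and the function ends at the final closing brace (the
              version and hash here are placeholders):

                  { stdenv, fetchurl, perl }:

                  stdenv.mkDerivation {
                    pname = "hello";
                    version = "2.10";
                    src = fetchurl {
                      url = "mirror://gnu/hello/hello-2.10.tar.gz";
                      # placeholder; Nix insists on a real hash here
                      sha256 = "0000000000000000000000000000000000000000000000000000";
                    };
                    buildInputs = [ perl ];
                  }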
   URI        [1]: https://nixos.org/manual/nix/stable/expressions/expressi...
          pxc wrote 2 days ago:
          I don't get all the hate for the Nix language. It seems really simple
          and flexible to me.
          What do you dislike about the language?
            greatgib wrote 2 days ago:
            I can't really say, but when I see it, I have the feeling that it
            is made of the worst parts of perl, bash, javascript, ... all
            combined together.
        aitchnyu wrote 2 days ago:
        I used Docker Compose to, among other things, manage Postgres, which
        wants to listen to port 5432 and use a central dir to hold databases.
        Can a non-Docker solution allow me to run multiple instances of Postgres?
          tempodox wrote 2 days ago:
          Deploy them into different directories and have them listen on
          different ports.  You can configure the deployment directory when
          building from source and then set the port numbers in the respective
          “postgresql.conf” files.
            mst wrote 2 days ago:
            Right, a bunch of apps I've worked on use
            p3rl.org/Test::Postgresql58 to spin up a temporary postgres
            instance per test file - clean state and parallelisation are
            wonderful things to have for fast and reliable testing cycles.
          Bayart wrote 2 days ago:
          Without going into stuff like Kubernetes, basically what you want is
          to run your postgres instances behind a load-balancer listening to
          5432 and forwarding to whatever instance. The most common with
          postgres is probably HAProxy.
          If your problem is that you want to run unrelated instances of
          postgres... you don't really have a problem to begin with. Either use
          docker run to run your postgres image on different ports with
          different volumes, or just make a docker-compose YAML file with a
          bunch of postgres services. It's not like you have to have a single
          instance.
        whateveracct wrote 2 days ago:
        Who cares? I've spent years ignoring mainstream, "senior", etc devs'
        opinions on both Nix and Haskell. "Nobody uses it," "it's too
        complicated," "bet on mainstream X," "it's too hard to learn."
        Guess what? It has paid off in terms of personal efficiency - and I can
        make a good living doing it to boot with some effort.
        If you want to aim for the mountaintop, don't let the weight of cynical
        complainers hold you back. Remember: It takes guts to be amazing.
        throwaway984393 wrote 2 days ago:
        I only noticed recently that I have a tendency to search for reasons to
        use a tool. I literally googled "Do I need to learn Go?". You'd think
        it would work the other way around: I can't accomplish something with a
        language I use today, so I go searching for a language that solves my
        problem. Instead I'm literally looking for excuses to learn the most
        popular thing. It's absurd. (and really it's resume-driven development)
        In terms of Nix versus Docker, they're completely different levels of
        abstraction and solve completely different problems. Docker is (among
        other things... it does a lot of things) an abstraction for reproducing
        network services on arbitrary platforms. Nix is an abstraction for
        installing and setting up files.
        kortex wrote 2 days ago:
        As TFA itself answers, "no". But I can give a different reason: nix has
        too high a barrier to entry, relative to Docker.
        Docker might not be simple; there's a lot of moving parts to manage,
        some hidden gotchas, and networks are a mess. But it's comparatively
        easy, and once you bake an image, it's pretty simple. Dockerfiles are
        basically just sh. Package managers are the usual suspects. Images are
        easy to distribute. It's very familiar.
        Nix is not easy. It's a new syntax to learn, it's a new way of
        building, it's an entirely new way of thinking about the problem. But
        in the end, it does simplify packaging in the Rich Hickey sense of the
        easy/simple dichotomy. No braiding of packages and shared libs. But
        using Nix is not so simple. There's all kinds of bindings and bodges
        and bespoke shims to make it all work.
        History tends to show that easy+non-simple things beat out
        simple+non-easy things (devs are lazy). Easy-kinda-simple vs
        non-easy-kinda-simple-but-long-term-gains? No contest in my opinion.
        I think Nix is a beautiful idea but it's an uphill battle.
          stevebmark wrote 2 days ago:
          I agree, there’s a reason no one uses Nix. It has terrible DX
            jessaustin wrote 23 hours 22 min ago:
            Does it seem weird to anyone that Nix insists on installing its
            "/nix" directory on the root directory rather than somewhere
            sensible like "/usr" or even "/usr/local" ?
            kortex wrote 2 days ago:
            Well "no one" is a bit absolute. But colloquially, I know what you
            mean. Relative to Docker, it's way less popular. But it has 5.2k
            stars on [1] .
            But yeah. The DX "needs work". which is a nice way of saying, I
            find it downright painful to use.
   URI      [1]: https://github.com/NixOS/nix
            rch wrote 2 days ago:
            Quite a few people use Nix.
        qudat wrote 2 days ago:
        As far as I understand nix, the nix examples do not create a
        reproducible environment because they do not lock the versions down
        for each package, including nixpkgs.
   URI  [1]: https://nix.dev/tutorials/towards-reproducibility-pinning-nixp...
          amarshall wrote 2 days ago:
          Ass that article says, it’s possible but awkward. It gets better
          with Flakes
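          The awkward pre-Flakes version is roughly this kind of pin (the
          commit and hash below are placeholders you'd fill in yourself):

              let
                pkgs = import (builtins.fetchTarball {
                  url = "https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz";
                  sha256 = "0000000000000000000000000000000000000000000000000000";
                }) {};
              in
              pkgs.mkShell { buildInputs = [ pkgs.python3 ]; }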
   URI    [1]: https://nixos.wiki/wiki/Flakes
            amarshall wrote 2 days ago:
            *As (oops, mobile typing)
        yewenjie wrote 2 days ago:
        I love Nix and NixOS, but sometimes it is extremely painful to get
        complex software which expects FHS to work. For example, I have spent a
        day trying to get Android Studio for React Native set up and failed.
        solmag wrote 2 days ago:
        I was about to write "they aren't the same thing", but then I
        remembered that Nix refers to the language, the package manager and the OS.
        frogperson wrote 2 days ago:
        No, I don't think it will.  A Dockerfile can be read in seconds and
        explained to a junior in a few minutes.  Nix is way too complicated to
        see mainstream use.
        mac-chaffee wrote 2 days ago:
        One excellent crossover between Nix and Docker is [1]. If you ever need
        a container with a handful of well-known utils installed, just specify
        them in the image URL. Positively magical!
   URI  [1]: https://nixery.dev/
        InTheArena wrote 2 days ago:
        AmericanBlarney wrote 2 days ago:
        kristianpaul wrote 2 days ago:
        Hopefully Systemd eventually does.
        r3trohack3r wrote 2 days ago:
        I know the author calls this out - but Docker and Nix _are_ different.
        For immutable distribution they solve the same problem. But they solve
        it in fundamentally different ways.
        If you go back and read the last 20 years or so of LISA papers, it can
        be … humbling/frustrating. You can read two papers back to back,
        separated by 15 years, and they’ll be describing a new solution to
        the same exact problem that still hasn’t been properly solved.
        Dependency management has been horribly broken in our industry and
        we’ve never really managed to address it - until Nix. The Nix LISA
        paper [1] was a breath of fresh air, it really cut to the core of the
        problems with modern dependency management and addressed them head on.
        Docker declared bankruptcy on dependency management and just resorted
        to fs snapshots to guarantee bit perfect deployments.
   URI  [1]: https://edolstra.github.io/pubs/nspfssd-lisa2004-final.pdf
        paxys wrote 2 days ago:
        The article starts and ends with "they are very different tools for
        different use cases". What was even the point of writing the whole
        thing hypothesizing whether one will replace the other?
          cbrewster wrote 2 days ago:
          Author here. It's titled this way because Docker seems to come up in
          a lot of discussions around Nix and is often compared with Nix.
          Partly because of their overlap in functionality but also because
          people may not understand the difference between the two.
          The goal here was to a) highlight the different use-cases between the
          two tools b) compare the tools in the areas that they overlap and c)
          show how both tools can be used together.
        cglong wrote 2 days ago:
        Does Nix have an answer to Docker's support for Windows containers? The
        ability to mix legacy Windows applications with cloud-native ones is a
        huge boon for those interested in migrating to a modern stack.
          pxc wrote 2 days ago:
          Nix doesn't have any Windows support, though there's intermittent
          developer interest that sometimes manifests as a demo before the
          bitrot sets in
          For now you can cross-compile to Windows using Nix and you can run
          Nix in WSL
        justicezyx wrote 2 days ago:
        > This question could be dismissed by saying that Nix and Docker are
        different tools that solve different problems.
        > One is a container deployment toolkit and the other is a package and
        configuration manager.
        My reading, based on the arrangement of the two words Nix and Docker,
        is that Nix is a "container deployment toolkit" and docker a "package
        and configuration manager".
        Of course, this interpretation is wrong.
        Further, even if you apply "container deployment toolkit" as a
        description of docker, it still misses a core part of docker, i.e.,
        docker is also a container image building toolkit. The building part
        and the deployment part are different steps of seemingly equal
        importance.
        dimgl wrote 2 days ago:
        As someone who has never used Nix, from first glance the question I ask
        myself is... why?
          pxc wrote 2 days ago:
            - is fast (faster than Homebrew or Docker)
            - is multi-user (don't need root privileges to install software)
            - lets you try software without permanently installing it
            - gives you fast, reliable undo without requiring a special
              filesystem
            - lets you automatically (re)produce a setup that never drifts
              from its official definition
            - can safely perform upgrades and configuration changes without
              thinking about what has been done to the system or environment
              in the past
            - has tons and tons of software already packaged for it at this
              point, which you can just use without any special effort
          rgoulter wrote 2 days ago:
          Nix is difficult, but there are advantages to it.
          A couple of comparisons come to mind: with Terraform (or
          CloudFormation etc.), you put in a lot of effort up front in order to
          reduce maintenance effort later. You could manually go through and
          set up a VM instance or two; but with Terraform, you have a
          declaration in code of what the system should be.
          I think another comparison is: Vim or Emacs are more difficult to use
          compared to using VSCode. In terms of DX, there are advantages to
          Emacs over VSCode. If modal editing seems to make sense, it's
          probably worth looking at vim. But, if you just want to get the job
          done right now and not fuss about learning an unfamiliar tool, VSCode
          is surely a better choice.
        snicker7 wrote 2 days ago:
        Guix supports exporting to Docker containers. A guix.scm file then
        becomes a reproducible alternative to a Dockerfile.
        However, Guix has its own container implementation. And this is
        significantly lighter than Docker (instead of layers, links to
        /gnu/store) and is rootless. If Guix containers had a
        runc-compatibility layer and better docs/tutorials, it would be hard to
        go back to Docker/podman.
          rekado wrote 2 days ago:
          I have been using Guix-generated reproducible Docker images for
          years. I'll never go back to using Docker directly.
          Could you please elaborate on what you mean by "runc-compatibility
          layer"? And what about the docs requires clarification?
          (I wrote the original Docker export backend for "guix pack", so I
          have an interest in improving this feature where necessary.)
            snicker7 wrote 2 days ago:
            I am referring to a hypothetical dockerless OCI compatibility layer
            built on top of `guix shell --container` or `guix system container`.
            I sort of got this idea from runj [1], an attempt to create an OCI
            compatible runtime for FreeBSD jails.
            The main advantage is to use certain OCI-compatible tools (e.g.
            k8s) with Guix without needing to use Docker.
   URI      [1]: https://samuel.karp.dev/blog/2021/03/runj-a-new-oci-runtim...
        dreyfan wrote 2 days ago:
        What's the easiest way to deploy a docker container to a VM? Preferably
        without ten layers of cloud provider interconnected services or
        expensive and complex kubernetes layers? I just want to send a
        container and get back an IP address I can send some traffic to the
        exposed port(s).
          reducesuffering wrote 2 days ago:
          Dokku. I'm a DevOps noob, just recently picked up Docker, and had
          such an easy time with Dokku, I decided on it for my current deploy
          process. Once you set up a few Dokku commands on the server, you can
          git commit, git push to the dokku remote and it runs your docker
          container automagically.
          umvi wrote 2 days ago:
          Docker save the container image to a zip file, ftp it over to the VM,
          then docker load it?
        exdsq wrote 2 days ago:
        Oh god I hope not. Having worked in > 100kloc nix environments I am
        completely turned off of the idea. I really really tried, I installed
        NixOS as my main OS and used Nix whenever I could to try and pick it
        up, but it's such a complex beast I felt it slowed everything down.
        Simple tasks that would take 10 minutes in Docker suddenly became
        DevOps tickets. I suddenly had to write bindings for tools rather than
        apt-get them in a Dockerfile. Build times in Haskell were bad enough
        but suddenly you had nix in the mix and I'd be wasting half a day as
        Nix built the entire world - solved with internal caching servers but
        have fun trying to work out how to push something to that correctly.
        There's a blog post that goes around from time to time about how a
        company has three risk tokens to allocate per project on non-boring
        technologies. Nix seriously needs all three. It's the only technology
        I've ever vetoed at a startup because I've seen the hell hole it can
        become - perhaps not for the DevOps engineers who know it, but the team
        who have to suddenly work within the resulting environment.
          kaladin-jasnah wrote 2 days ago:
          I truly hope not as well. I spent an extensive period of time trying
          to learn Nix and integrate it into my homelab. Granted, it's not an
          enterprise environment, but it is the most painful, unstable, broken,
          buggy, bloated, hard-to-use, poorly-documented mess I've ever tried.
          I hope no one touches Nix with a ten-foot pole.
          That's not to say it's not a great idea. It's just a huge pain to get
          to work, and the cost of trying to get a perfectly-working Nix
          environment is just *not* worth it. In my case, I was working with
          orchestrating virtual machines, which… well, there's a project
          called nixops that claims to do this. The thing is it flat-out didn't
          work half the time (it also used python2.7 and I believe one of its
          dependencies was marked as insecure for a long period of time). I
          got so frustrated with this "advertised" nixops tool that I had to
          write my own replacement for nixops, and while it was a fun
          experience, I was so burnt-out from dealing with Nix and its breaking
          randomly every ten seconds, I just gave up on my homelab.
          If you want any program that doesn't exist for Nix, you can't just
          use the program's build script. You have to manually make a Nix
          derivation, and then pray and hope it will actually compile. Want to
          deploy your Ruby on Rails application on NixOS? Prepare to spend
          three days trying to set this up, since there's only one other
          application and the entire process is poorly documented (even for a
          "regular" Ruby program that isn't Rails).
          Without additional work, Nix will never be worth its cost (did I
          mention the interpreter took forever and sometimes failed on errors
          in files that had NOTHING to do with yours, leaving you to manually
          debug each line?). You could spend the days upon days upon days
          trying to get the damned thing to work for something far more useful
          instead. Since once you get Nix to work, it will break.
          I'm sorry if I offended any Nix users/developers, but the product is
          just not ready for anything yet IMHO. I just don't have time to deal
          with it anymore, and I'd rather get on doing something more fun than
          dealing with a bloated, undocumented system that I can just replace
          with Docker and get my work done in 5 minutes instead of 5 days.
            pxc wrote 2 days ago:
            > did I mention the interpreter took forever and sometimes failed
            on errors in files that had NOTHING to do with yours, leaving you
            to manually debug each line?
            This is a common (and horrible) issue with dynamic languages that
            pass functions or blocks of code around. There have been some major
            improvements to Nix error messages which were included in the last
            release, and there's also ongoing work to address this through
            gradual typing.
            There was a talk on the latter recently, with some examples that I
            think make the overall issue pretty clear:
   URI      [1]: https://www.youtube.com/watch?v=B0J-SclpRYE
              kaladin-jasnah wrote 1 day ago:
              >  There have been some major improvements to Nix error messages
              which were included in the last release
              That's good to hear (and I think I heard about it before). The
              problem is, there are still lingering issues with Nix, like how
              long it takes to figure out how to compile a new program,
              (exceptionally) poor documentation, packages being unmaintained,
              the nixpkgs repository being a gigantic blackhole that takes
              forever to eval, etc.
              Don't get me wrong. I really like the idea behind Nix. Even with
              this fix though, I'm still not sure I would enjoy using Nix
              (since many, many other problems) or giving it another shot
              because of my emotional response to it that's been caused by
              burnout trying to wrangle with it.
                pxc wrote 1 day ago:
                > I'm still not sure I would enjoy using Nix (since many, many
                other problems) or giving it another shot because of my
                emotional response to it that's been caused by burnout trying
                to wrangle with it.
                I'm not sure you would, either, and I won't ask you to give it
                another shot right now. I understand the feeling. Maybe it's
                something to revisit after more time than has already passed.
          nvarsj wrote 2 days ago:
          I'd say don't blame the tool for what sounds like an architectural
          mess.
          TFA points out using nix as a reproducible build environment - which
          it is excellent at. Create a shell.nix in your repo, and every dev
          uses the same exact tools within the comfort of their own
          machine/shell. Docker is much more painful for this kind of local dev.
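          A minimal sketch of what that looks like (the tool list is just an
          example):

              # shell.nix
              { pkgs ? import <nixpkgs> {} }:

              pkgs.mkShell {
                buildInputs = [
                  pkgs.go
                  pkgs.nodejs
                  pkgs.postgresql
                ];
              }

          Every dev then runs nix-shell in the repo and gets the same tools.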
            exdsq wrote 2 days ago:
            Potentially it was a mess, but the guys building it contribute to
            nix so I don’t think that’s to blame. Dev environments are
            okay, but that’s not the limit of nix - it gets used for build
            servers, CI servers, linting, replacing stack, cabal, yarn, etc.,
            creating docker, hell I’m pretty sure I saw nix creating nix
          chriswarbo wrote 2 days ago:
          > apt-get them in a Dockerfile
          > Nix built the entire world
          You're comparing apples to oranges.
          Docker just runs commands. If you want to run commands, you can do
          that in Nix using e.g. nixpkgs.runCommand.
          apt-get just fetches prebuilt binaries. If you want to fetch prebuilt
          binaries, you can do that in Nix using e.g. nixpkgs.fetchurl.
          You can even use runCommand to run apt-get (I do this to use Debian's
          Chromium package on 32-bit NixOS).
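          For instance, fetching and unpacking a prebuilt tarball looks
          roughly like this (the URL and hash are placeholders, purely to
          sketch the shape):

              { pkgs ? import <nixpkgs> {} }:

              let
                tarball = pkgs.fetchurl {
                  url = "https://example.com/some-tool.tar.gz";
                  sha256 = "0000000000000000000000000000000000000000000000000000";
                };
              in
              pkgs.runCommand "some-tool" {} ''
                mkdir -p $out
                tar -xzf ${tarball} -C $out
              ''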
          In contrast, it sounds like you were using Nix as a build tool, to
          define your entire OS from scratch (presumably using Nixpkgs/NixOS).
          In which case apt-get isn't a comparable command; instead, the Debian
          equivalent would be:
              apt-build world
          Guess what that does?
          fulafel wrote 2 days ago:
          Was/would it be hard to switch back to traditional containers or was
          there some kind of lock-in effect? Or was the consensus just pro-Nix?
            exdsq wrote 2 days ago:
            The consensus among those who mattered to make that change was
            pro-Nix, but I have no idea how you'd go back to containers because
            by that point builds were declarative and the implementation was
            spread over abstractions and repositories! So if DevOps decided to
            quit because another company was using SuperNix2, which I think is
            a plausible thing if you have a team using a risky technology, then
            it would have required hiring consultants to do it - I think it
            would have been a months-long refactor going in blind.
          throwaway894345 wrote 2 days ago:
          This mirrors my experience as a developer in an org that used Nix. If
          you want the dev team to have a strong dependency on the devops team
          for every little (often unpredictable) aspect of their workflow, Nix
          is the tool for the job.
          Don’t get me wrong, I’m completely bought in on the vision of
          reproducible builds but there’s a long way to go before it’s
          usable in real organizations. I’ve heard that some orgs manage it,
          but I really have no idea how. Maybe some critical mass of people who
          already know and love Nix (and, implicitly, packaging obscure
          dependencies).
            vineyardmike wrote 2 days ago:
            Many orgs get reproducible builds by using different build tools
            and abstractions. Eg Amazon gets it via their Brazil tool.
              throwaway894345 wrote 1 day ago:
              I was talking about Nix in particular, but yes, if you’re an
              enormous and highly lucrative tech company you can pay a team of
              engineers to build your own reproducible build tool. If the same
              team is very talented they can probably even get Bazel working.
              ;) But I don’t think any large orgs at all use Nix, presumably
              because it doesn’t scale beyond a small number of
              (enthusiastic) developers.
            hitpointdrew wrote 2 days ago:
            >  If you want the dev team to have a strong dependency on the
            devops team for every little (often unpredictable) aspect of their
            workflow
            I have never used nix, but from the article the author only
            concentrated on the fact that docker and nix create reproducible
            environments, and completely misses the other benefits of containers.
            As a devops guy if someone hands me a nix project, how do I deploy
            that so it is highly available and scales by itself?
            With containers I just put it in kubernetes.
              jokethrowaway wrote 2 days ago:
              That's because nix is not a docker alternative.
              I'd love a nix-like docker alternative to be fair but that'd be a
              massive undertaking.
                throwaway894345 wrote 2 days ago:
                What is a "nix-like docker alternative"? Is that "using Nix to
                build container images"? Because Nix already does that.
              Nullabillity wrote 2 days ago:
              You can use Nix as a better docker build; see [1] or [2].
   URI        [1]: https://grahamc.com/blog/nix-and-layered-docker-images
   URI        [2]: https://nixery.dev/
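              A rough sketch of what that looks like (image name and contents
              are illustrative):

                  { pkgs ? import <nixpkgs> {} }:

                  pkgs.dockerTools.buildLayeredImage {
                    name = "hello-image";
                    tag = "latest";
                    contents = [ pkgs.hello ];
                    config.Cmd = [ "${pkgs.hello}/bin/hello" ];
                  }

              nix-build then produces a tarball that docker load accepts, so
              you can push it like any other image.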
                throwaway894345 wrote 2 days ago:
                The dream is using something like Nix to not only reproducibly
                build a container image but also all of the infrastructure
                manifests which reference it. I _think_ this is achievable in
                Nix if you're willing to deal with all of the pain of actually
                using Nix; however, this would depend on pushing an image to a
                container registry as part of the Nix build and I'm pretty sure
                that violates Nix's idioms/conventions? I've certainly never
                heard of anyone building their entire infrastructure manifests
                this way.
                  verdverm wrote 2 days ago:
                  See [1], the next-gen tool by the creators of docker that
                  does this.
                  CUE + BuildKit
   URI            [1]: https://dagger.io
                  Nullabillity wrote 2 days ago:
                  > The dream is using something like Nix to not only
                  reproducibly build a container image but also all of the
                  infrastructure manifests which reference it. I _think_ this
                  is achievable in Nix if you're willing to deal with all of
                  the pain of actually using Nix;
                  We did that at an old job. Basically, Nix built images + K8s
                  manifests + a script to push the images. Our CI job boiled
                  down to `nix-build && ./result/push-images && kubectl apply
                  -f ./result/manifests -R`. This is similar to how NixOS'
                  nixos-rebuild boils down to `nix-build && ./result/activate`.
          Barrin92 wrote 2 days ago:
          >Build times in Haskell were bad enough
          this one's very true. Every time I try to build something in Haskell
          on my laptop it feels like we're moving closer to the heat death of
          the universe. Is there some good read on how/why Haskell compilation
          times are so long compared to some other languages?
            Zababa wrote 2 days ago:
            I can't offer an answer specific to Haskell, but I know that OCaml
            does some things to make the compilation fast: interface files, no
            circular dependencies, no forward declarations, not a lot of
            optimizations. From what I understand, those tradeoffs come from
            languages designed by Niklaus Wirth, where efficiency of
            compilation was important.
            In general, I feel like every language that wasn't made to compile
            fast like Go, OCaml, Pascal and derivatives is going to be called
            "slow to compile". There's Java and C# that are kind of a
            middle-ground, since they emit bytecode for a JIT compiler. So my
            answer to "why Haskell compilation times are so long compared to
            some other languages?" would be "because compilation times don't
            take priority over some other points for Haskell and its users".
              funcDropShadow wrote 2 days ago:
              Haskell compiles so slowly because it needs way more optimizations
              than OCaml before it performs well. The primary reason for that
              is its lazy evaluation semantics. Prior to GHC many researchers
              believed it to be impossible to execute lazy languages with
              speed comparable to imperative ones. Haskell and GHC were
              primarily developed as research tools. I.e. they valued the
              efficient exploration of new ideas in programming languages and
              implementations higher than the resulting language and
              implementation, at least in the beginning. And I would say
              Haskell was extremely successful considering those priorities.
              That does not mean that the implementation hasn't been moving
              towards industry-readiness for a long time.
            wereHamster wrote 2 days ago:
            > moving closer to the heat death of the universe
            So… you're saying your laptop is super cool? Because the heat
            death of the universe is when thermodynamic energy is equally
            distributed everywhere, which, given the large space of the
            universe, means really cold.
              xelxebar wrote 2 days ago:
              Heh. Though while we're playing Pedantics, it's worth pointing
              out that the defining characteristic of Heat Death is maximum
              entropy, not uniform temperature. That said, it's entirely
              possible that the whole system of Energy Company + OP's computer
              actually does get cooler on average when compiling Haskell.
              Here's a fun game. Have you ever played Follow the Energy? For
              example, depressing keys on your keyboard takes energy. Where
              does that energy come from? Well, the work performed by your
              fingers of course! But where does that energy come from? Well,
              the muscles in your fingers, of course! But where does that come
              from? Well, your food! But what about that? Well, your food's
              food! That? Eventually plants. That? The sun! That? Nuclear
              fusion. That? Gravitational potential! That? Etc...
              But here's the funny thing. In this image, it kind of seems like
              energy gets "used up". The sun provides energy to the earth via
              solar radiation; the plants consume this energy; animals eat the
              plants, obtaining their energy; etc. However, energy is
              conserved. What gives?
              Better yet, if the earth were not radiating just as much energy
              as it received, it should be heating up. However, the
              earth-atmosphere system is mostly constant temperature! This
              implies that the total energy flux is zero. If X Watts
              (Energy/Time) is coming in, then the earth actually radiates out
              X Watts as well.
              So... if the total energy flux is zero, then how does your
              keyboard key actually get pressed? What gives?
              The key is entropy. X Watts of solar radiation impinge upon the
              earth, but these photons are "hotter" (i.e. higher frequency) on
              average than those that radiate out. The balance is in numbers.
              You need more cooler photons to balance the energy of the hotter
              ones. E=hf; energy (E) is proportional (constant h) to frequency
              (f), after all.
              This means that while energy is conserved, the flow of that
              energy increases the entropy of the system. In a very real sense,
              typing on your keyboard happens because of the waste heat
              generated in the process.
              All driving us closer to Heat Death...
                MereInterest wrote 2 days ago:
                > The sun! That? Nuclear fusion. That? Gravitational potential!
                That? Etc...
                Minor pedantry on top of pedantry.  The gravitational potential
                energy of a collapsing protostar gets you past the activation
                barrier for nuclear fusion, but isn't itself the source of
                energy beyond that.
                Think of it like lighting campfire using friction.  The
                friction heats up the kindling, but that investment allows you
                to access the potential chemical energy of the wood.  The
                gravitational potential energy is converted into heat, and that
                investment allows you to access the potential nuclear energy of
                the unfused hydrogen.
                  xelxebar wrote 22 hours 26 min ago:
                  Thanks! That's one place in the chain that I paused for quite
                  a while, deliberating what to write down. Overall, the little
                  "causal chain" I wrote down is littered with little (but
                  useful!) fibs, and the one you point out is probably the most
                  significant.
                  Nice analogy, btw.
                solohan wrote 2 days ago:
                Thanks for elaborating on this. I guess the confusion also
                stems from the word "heat" which we usually associate with
                something that's warmer than some human day-to-day reference.
                Heat in the physical sense though just refers to the kinetic
                energy of a number of particles, which can be harvested (per
                the post above) provided there are differences in kinetic
                energy throughout space.
                It's also interesting to play the "Follow the Energy" game to
                its logical conclusion, namely that nearly all of the energy
                in the Universe (including that which you expend typing on your
                keyboard) originates from the gravitational potential created
                in the Big Bang (whatever that is, by the way). This begs the
                question: how was the entropy of the early Universe so low
                that it could increase by such an enormous amount, to produce
                intelligent beings such as ourselves, typing things on a
                keyboard while we should be working?
                It's really one of the most fundamental questions in cosmology,
                and one of the (many) reasons why I love physics.
              solohan wrote 2 days ago:
              Actually, the heat death is when all potential energy in the
              universe has been converted to heat. So converting an excess of
              stored chemical energy in their laptop battery to heat by
              compiling a load of Haskell would be a fine way of increasing the
              entropy of the Universe. Thus moving ever so slightly closer to
              the inevitable heat death.
            wyager wrote 2 days ago:
            If you're compiling a project for the first time it's probably
            pulling in a ton of libraries and compiling them all from scratch.
            But let's be real here - we're talking maybe 3-5 minutes for the
            first build of a huge Haskell codebase, one or two orders of
            magnitude faster on second build.
            It seems comparable to what I've noticed with Rust or C, and pretty
            fast compared to some medium/large C++ codebases I've built.
            I'm not really sure what workflows people are using where this is a
            problem, so this may be less of a pain point for me than it is for
            you. Personally I build Haskell projects once to pull in the
            dependencies, then use ghcid to get ~instantaneous typechecking on
            changes I make to the codebase.
              exdsq wrote 2 days ago:
              I wish it took 3 - 5 minutes, I’ve built projects that take
              far longer than that. I know this isn’t just me though, because
              the GHC build documentation tells you to go read stuff for a
              while as it builds. Longest I can remember is 2 hours.
              AzzieElbab wrote 2 days ago:
              From my experience with Haskell and Scala, derivations and not
              dependencies is what’s driving compile time. You can probably
              hide a crypto miner behind all that compiler magic. In Scala I
              usually move code that invokes a lot of derivations into sub
              projects, not a clue on how to do that in Haskell
          rgoulter wrote 2 days ago:
          > There's a blog post that goes around from time to time about how a
          company has three risk tokens to allocate per project on non-boring
          technology:
   URI    [1]: https://mcfunley.com/choose-boring-technology
            kwertyoowiyop wrote 2 days ago:
            Strangely, once a team adds one risk, they feel more comfortable
            adding a second risk.  Ditto for the third risk, etc.  The
            psychology is weird.
            Cthulhu_ wrote 2 days ago:
            I wish I had seen that (and fully understood the implications)
            years ago, I mean it's been in the back of my head all the time but
            never explained this well. I've referenced it a couple of times
            now, especially the slides showing the graphs of technologies and
            links which is a really succinct way of saying "adding one
            technology multiplies complexity".
            laumars wrote 2 days ago:
            I hadn’t seen that before but it perfectly sums up my approach to
            tech. I’m going to share this the next time someone questions why
            I take a pragmatic approach rather than jumping on the next new
            tech stack
            Thanks for sharing
            exdsq wrote 2 days ago:
            Thanks, this is the one! Highly recommend it, especially when
            you're in the HN echo chamber of new and interesting technologies
              amarshall wrote 2 days ago:
              Worth noting in this context of OP, though, is that Nix predates
              Docker by about a decade.
                roguas wrote 9 hours 8 min ago:
                Nix's popularity is still lower than Docker's, even back in the
                dotcloud era (I think this was the og company). Boring means common,
                unnuanced, rather than older or first.
                exdsq wrote 2 days ago:
                Huh, cool! This is like Python being older than Java
                :mindblown:. Still, old technologies can be just as risky as
                new ones.
                eximius wrote 2 days ago:
                That is extremely concerning, then, that it is so difficult to
                use after all that time.
                I had hopes in the back of my head that "maybe it'll get
                better/more ergonomic in the next few years".
                  KronisLV wrote 2 days ago:
                  > That is extremely concerning, then, that it is so difficult
                  to use after all that time.
                  > I had hopes in the back of my head that "maybe it'll get
                  better/more ergonomic in the next few years".
                  I feel much the same way about wanting to run something like
                  FreeBSD and having it just work, as opposed to running into
                  weirdness because of the driver situation, which
                  GNU/Linux seems to be getting better at (even though you
                  sometimes are forced to install proprietary ones for an optimal
                  experience; this should get better in the next decade).
                  So, might have to wait for a bunch more years, or just pick
                  something else, like OpenBSD, or just run in a set of
                  constraints for having a really predictable and well
                  supported hardware configuration, which isn't always
                  possible. Alas, my off brand Polish netbook will just have to
                  wait before I can (hopefully) run BSD on it comfortably.
                  Well, short of making the missing pieces of puzzle myself,
                  for which I'd probably also need a decade or so of systems
                  level development experience, as opposed to just web dev and
                  occasionally dabbling in lower level stuff to mediocre
                  success. Of course, by then the netbook itself might not be
                  relevant, so who knows.
                  I also feel much the same way about GNU/Hurd - a project that
                  is conceptually interesting, but isn't yet quite stable, even
                  though apparently you can get Debian for it: [1] Now, I have
                  almost no experience with it, apart from reading about
                  it, but some people tried figuring out when it could be
                  released based on the bug reports and how quickly they were
                  being addressed, and the number that they came up with was
                  around 2070 or so.
                  In summary, there are probably projects out there, for which
                  their age isn't necessarily related to how usable they are,
                  whereas other pieces of software will have never truly
                  achieved that stability in the first place. Not all old
                  software is good (edit: at least not for certain use cases).
                  Of course, there are exceptions, for example, you can look at
                  Apache2/httpd: it is regarded as old and dated for the most
                  part, however just recently an update was released, which
                  added mod_md. It now lets you get Let's Encrypt certificates
                  without external tools (like Certbot), in some ways setting
                  it ahead of Nginx: [2] Not all old software is always boring.
                  Same for Docker and any other tools. If developers use tool A
                  for use case X, instead of tool B, then maybe there are some
                  very good reasons for this widespread usage? That's also
                  probably the answer to this debate - regardless of their
                  conceptual/technical benefits, the usability will probably
                  decide which tool will win in the long term.
   URI            [1]: https://www.gnu.org/software/hurd/
   URI            [2]: https://httpd.apache.org/docs/2.4/mod/mod_md.html
                    lproven wrote 1 day ago:
                    My suspicion is that the HURD folk seized on new shiny that
                    was not finished enough, and it's dogged the project ever
                    since. Periodically the already-inadequate number of people
                    have got diverted by investigating L4, then CoyotOS, then
                    I also submit that to understand why that happened, one
                    needs to consider the context at the time that the decision
                    was made: [1] [2] «
                    RMS was a very strong believer -- wrongly, I think -- in a
                    very greedy-algorithm approach to code reuse issues. My
                    first choice was to take the BSD 4.4-Lite release and make
                    a kernel. I knew the code, I knew how to do it. It is now
                    perfectly obvious to me that this would have succeeded
                    splendidly and the world would be a very different place
                    RMS wanted to work together with people from Berkeley on
                    such an effort. Some of them were interested, but some seem
                    to have been deliberately dragging their feet: and the
                    reason now seems to be that they had the goal of spinning
                    off BSDI. A GNU based on 4.4-Lite would undercut BSDI.
                    So RMS said to himself, "Mach is a working kernel, 4.4-Lite
                    is only partial, we will go with Mach." It was a decision
                    which I strongly opposed. But ultimately it was not my
                    decision to make, and I made the best go I could at working
                    with Mach and doing something new from that standpoint.
                     This was all way before Linux; we're talking 1991 or so. »
                      -- Friar Thomas Bushnell
                    They looked at BSD. The BSD people were not sure, so RMS
                    decided on another path.
                    If GNU had picked the BSD kernel, there could have been a
                    working Free xNix before Linux and things would have been
                    very different.
                    Secondly, I think it's important when discussing
                    microkernels and microkernel OSes to consider more than the
                    most famous one: Mac OS X.
                     Mac OS X is based on Mach, but it's not a pure microkernel:
                     its XNU kernel contains a large, in-kernel "Unix server"
                     derived from BSD code. This was done for performance
                     reasons – remember, macOS is Mac OS X is NeXTstep,
                     written for a 25MHz 68040 in around 1987-1988.
                    There are better examples. QNX is probably the best: a
                    working, true-microkernel, Unix-like OS, dating from 1982.
                    At one point shared-source but not any longer.
                    There is also Minix 3, which is a different OS from Minix 1
                    & 2, the OS that inspired Linux and upon which Linux was
                    initially bootstrapped.
                    Minix 3 is still quite limited: no SMP, some missing APIs
                    etc. But QNX proves that  a true microkernel, on generic
                    hardware such as x86 and ARM, can support SMP and so on.
   URI              [1]: http://www.h-online.com/open/features/GNU-HURD-Alt...
   URI              [2]: http://www.groklaw.net/article.php?story=200507272...
                  beermonster wrote 2 days ago:
                   Is it? Or is it just a tool that requires a different mindset
                   from those coming to it from another, regular distro? If the
                   latter, then I’m not sure it’s something they will see as
                   a problem, but rather as its very ethos.
                    amarshall wrote 2 days ago:
                    The latter. As a (bad) analogy: it’s a bit like learning
                    Haskell. If you’ve only ever worked in
                    strictly-evaluated, imperative languages, it’s not gonna
                    be easy to learn. Nevertheless, Haskell predates e.g. Go by
                    two decades.
                    Nix (and Haskell) has its warts, and undoubtedly a new
                    system would avoid them (compatibility needs make some
                    changes very challenging), but the fundamental difficulty
                    remains because it is fundamentally different and solving a
                    truly difficult problem set.
                    rgoulter wrote 2 days ago:
                    Maybe an apt comparison is git.
                    It's just as easy to make a mistake with git today as it
                    was however many years ago; git hasn't fundamentally
                    changed in ways that make it easier. Git still more/less
                    requires you to have a good understanding of what's going
                    on in order to be comfortable using it.
                    But, since use of git is now widespread, it's less of an
                    issue. And the git CLI has seen some UX improvements.
                    Nix is very weird. I'm sure there are some ways its UX
                    could be nicer; but to some extent, doing things in a
                    strict/pure way is always going to have friction if the
                    tools/environments don't easily support that.
                      112233 wrote 2 days ago:
                      > It's just as easy to make a mistake with git today as
                      it was however many years ago; git hasn't fundamentally
                      changed in ways that make it easier.
                      Easier? Who needs that? Soon then you will have common
                      unlearned folk and peasants trying to use git. 
                       They had the super obtuse git reset, which was four
                       different things bolted together, so they fixed it by
                       adding git restore, which does slightly different things,
                       but you still need both...
                      pjmlp wrote 2 days ago:
                       I avoid most git usability issues by using it as SVN.
                       When something goes wrong I just bork the whole repo and
                       clone it again, then manually merge the last set of saved
                       changes.
                        carlmr wrote 2 days ago:
                        Since learning about reset --hard, I need about 80%
                        less of that.
                          clem wrote 2 days ago:
                          "git reflog" has also been a blessing in getting out
                          of scrapes.
                        tomcam wrote 2 days ago:
                        Thank you! Me too. I can finally come out of the
                          pjmlp wrote 2 days ago:
                           It has been a long journey since I started with RCS
                           back in 1998, but given that git is now everywhere,
                           the less I deal with its complexity the better.
                           On some bad days I even miss Clearcase.
                      eitland wrote 2 days ago:
                      > It's just as easy to make a mistake with git today as
                      it was however many years ago;
                       Pro tip from an actual pro:
                       git became significantly easier for me once I decided
                       1. to just use a good, comfortable GUI (at least one TUI
                       also looks good) for anything beyond commit and push.
                       (Maybe not everything can be done from your GUI of
                       choice, but at least you get a good overview of the
                       situation before you dive in with the CLI.)
                      2. and to just stick to the basics. (And when you need to
                      help someone else who didn't, take it carefully even if
                      that means you don't look like a wizard.)
                       In fact, even working on the command line feels easier
                       now that I have worked in a good GUI for a while.
                       Don't believe experts who claim that you need to study
                       vim and stick to the git CLI to "get it".
                       But of course: if the CLI is your thing, use it, just
                       stop writing blog posts claiming it is the only true way.
                        carlmr wrote 2 days ago:
                        Which git GUI do you recommend? The one in VSCode I
                        find even more confusing than the CLI.
                        I do agree with you that some workflows are just easier
                        with a GUI, since I used to use TortoiseSVN and it was
                        much nicer for diffing two commits than the CLI is. But
                        I haven't really dug into git GUIs.
                          peanball wrote 1 day ago:
                          Fork, hands down.
   URI                    [1]: https://fork.dev
                          rgoulter wrote 1 day ago:
                          I found this website useful for helping me understand
                          what was going on with git. [1] GUI vs CLI shouldn't
                          be about which is a less confusing way of using git.
                          (Though, yes, magit is excellent, and there are very
                          few tools which come close).
   URI                    [1]: https://learngitbranching.js.org/
                          McP wrote 2 days ago:
                           [1] It's the most similar I've found to TortoiseHg
                          which is the standard, and excellent, Mercurial GUI.
                          (TortoiseGit does not live up to the family name)
   URI                    [1]: https://www.syntevo.com/smartgit/
                          eitland wrote 2 days ago:
                           I recommend finding one or more that you like; my
                           preferences don't always match everyone else's ;-)
                           At times I have used ungit or Git Extensions (which,
                           despite its name, contains a full desktop
                           application).
                          Today I use VS Code with a combination of Git Branch
                          for visualization and Git Lens for the rest. Git Lens
                          might need some configuration (I rarely use all the
                          panes, but I use the interactive rebase tool from it
                           as well as some annotation features. For the visual
                          rebase tool to work I had to configure VS Code as my
                          git editor, but that is how I like it anyway.)
                          Again: try a few, find one that makes sense for you.
                           Jetbrains tools, for example, are generally good,
                           liked by many and well thought of, I understand, but
                           still their git visualization (and a couple of other
                           things) consistently manages to confuse me.
                          KronisLV wrote 2 days ago:
                          Here are some recommendations:
                          SourceTree: [1] Windows and Mac. Free. Feels
                          sluggish, but is also really dependable, the graph
                          view is lovely and it covers most of the common
                          things that you want to do - also, staging/discarding
                          chunks or even individual lines of code is lovely.
                          Oh, and the Git LFS integration, and creating patches
                          is also really easy. And it gives you the underlying
                          Git commands it uses, in case you care about that.
                          GitKraken: [2] Windows, Mac and Linux. May need
                          commercial license. Feels like a step up from
                          SourceTree, but i find that using this for commercial
                          needs is a no go. If that's not an issue, however, it
                          has a good UI, is nice to work with and just
                          generally doesn't have anything i'd object to. IIRC
                          it saved my hide years back by letting me do a ctrl+z
                          for a repo after accidentally forcing to the wrong
                          remote, so that i could fix what i had done (memory
                          might fail me, was years ago), just generally feels
                          intuitive like that.
                          Git Cola: [3] Windows, Mac and Linux. Free and open
                          source. Perhaps one of the more basic interfaces, but
                          as far as free software goes, it does what it sets
                          out to do, and does it well. I use this on Linux,
                          whenever i want to have that visual feedback about
                          the state of the repo/staging area or just don't feel
                          like using the CLI.
                          TortoiseGit: [4] Windows only. Free. Recommending
                          this just because you mentioned TortoiseSVN. If you
                          just want a similar workflow, this is perhaps your
                          best option. Honestly, there is definitely some merit
                          to having a nice file system integration, i rather
                          enjoyed that with SVN.
                          Whatever your IDE has built in: look at your IDE
                          On any platform that your IDE runs on. Same licensing
                          as your IDE. Some people just shop around for an IDE
                          that they enjoy and then just use whatever VCS
                          workflows that they provide. I'd say that VS Code
                          with some plugins is really nice, though others swear
                          by JetBrains' IDEs, whereas others are fine with even
                          just NetBeans or Eclipse (Java example, you can
                          replace that with Visual Studio or whatever). If
                          you're working within a particular stack/IDE, that's
                          not too bad of an idea.
                          The CLI: [5] Windows, Mac and Linux. Free and open
                          source. You'll probably want to know a bit of the CLI
                          anyways, just in case. Personally, i'm still way too
                          used to using a GUI since dealing with branches and
                          change sets just feels like something that's more
                          easy when visual, but the CLI has occasionally helped
                          me out nonetheless.
                           Actually, here's a useful list of some of the Git
                           GUIs: [6]
                          For example, did you know that there are some simple
                          GUI tools already built in, git-gui and gitk?
   URI                    [1]: https://www.sourcetreeapp.com/
   URI                    [2]: https://www.gitkraken.com/
   URI                    [3]: https://git-cola.github.io/
   URI                    [4]: https://tortoisegit.org/
   URI                    [5]: https://git-scm.com/
   URI                    [6]: https://git-scm.com/downloads/guis
                          copperx wrote 2 days ago:
                          Magit is the most recommended one. But it comes with
                          an Emacs dependency.
              gurjeet wrote 2 days ago:
              See also:
   URI        [1]: http://boringtechnology.club/
          lvass wrote 2 days ago:
           I've always kept my usage of the Nix language to a minimum and found it
          to be a joy every time. But it's no wonder you'd have trouble on
          something that humongous. Why did it get so large?
            rgoulter wrote 2 days ago:
            > I've always kept my usage of Nix language to a minimum and found
            it to be a joy every time.
            The trouble is, it's more significant to consider what the pain is
            in the worst case.
            e.g. Two things which count against Nix:
            With stuff like Docker, I can get away with not really
            understanding what's going on, and have a quick + dirty fix. And
            with Nix, the community is smaller, so it's less likely you'll find
            an answer to the problem you're having right now.
            I worked at a company where the DevOps team used Nix for the
            production/staging. For local development (and ephemeral
            development deployments), the dev team came up with a
            docker-compose setup. -- That said, there was at least one case
            where the devs contributed to the Nix stuff by way of copy-pasting
            to setup a new deployment.
            aspaceman wrote 2 days ago:
            I'm not the author and my opinion might not be warranted, but I
            would guess:
            * When you modify your local nix file, there's only a couple things
            you need to change. It stays small. You can keep track of all these
            changes because they're in your head.
             * The company needs a lot of little changes here and there, so
             they build up. Foobar needs rooster v1.2 while Barfoom needs
             rooster v0.5. A lot different than managing a user's config. No
             longer possible to squeeze all the details into one brain.
            This isn't to say Nix would be bad at handling any of these things.
            But I would agree with the OP. I have trouble teaching new devs how
            APT and shell work. Couldn't even imagine trying to explain Nix
            syntax to them.
             The more Nix is used for, the more likely a dev will have to touch
             it. And teaching Nix to someone who doesn't care is like teaching
             Makefiles / Git to someone who doesn't care. Doesn't work. They
             just get mad and say it's too complicated.
            exdsq wrote 2 days ago:
            Nix can do everything and that's an issue - suddenly the CI
            servers, AWS deployments, build servers, testing, linting, dev
            package management, dev system environments, and more are all
            written in Nix packages. You need to write nix bindings. You need
            to add nix caching. And, as is fitting with a functional language,
            it can be beautifully (read: painfully) abstract. Some of the guys
            on the team were contributors to Nix so it wasn't like we weren't
            following best practices either.
            I'm sure if everyone was willing to put in weeks/months of effort
            it'll be as fun to use as Haskell, but Nix isn't the thing I do so
            I need something simple - docker/docker-compose works perfectly
            well for that.
              svenhy wrote 2 days ago:
               No, Nix can't do everything; it can do the one thing it does,
               namely maintain immutable paths and let you define derivations
               over any number of those paths using namespaced execution of
               build scripts.
               All this is controlled using expressions in the Nix language,
               which is well suited for the intended purpose. In important
               respects it is the opposite of Haskell.
               Nix does what it does so well that it is easy to use it where a
               better solution exists (looking at you, NixOps).
               So I agree with some parts of your criticism; on the other hand,
               if Nix is used to do the sensible subset of "CI servers, AWS
               deployments, build servers, testing, linting, dev package
               management, dev system environments, and more", that would be
               great. And what that subset is depends heavily on the given
               project.
              For me, being able to derive the "CI servers, AWS deployments,
              build servers, testing, linting, dev package management, dev
              system environments" things from a single source of truth,
              actually does sell Nix pretty nicely.
               When approaching a complex, multi-component source repository,
               with the requirement for dev environments, CI builds and
               generation of production-ready Docker images, a careful and not
               over-eager Nix build seems a sensible choice, given that the
               layered nature of Dockerfiles and docker-compose obscures the
               DAG nature of built artifacts and dependencies.
              Nix also doesn't build anything by itself, it merely invokes
              low-level build tools in isolated environments that only contain
              the specified dependencies.
               Sure, when using Nix one is forced to explicitly state the
               dependencies of each (sub-component) build step, which results
               in more code than a bunch of RUN ... calls in Dockerfiles.
               Both approaches have their sweet spots.
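               To make the explicit-dependency point concrete, here is a rough,
               hypothetical sketch of a single build step written as a Nix
               derivation (the name and commands are made up); the point is
               that every input has to be named, nothing is picked up
               implicitly from the host:
                 with import <nixpkgs> {};
                 stdenv.mkDerivation {
                   pname = "my-component";           # hypothetical name
                   version = "0.1.0";
                   src = ./.;
                   nativeBuildInputs = [ gnumake ];  # build deps listed explicitly
                   buildPhase = "make";
                   installPhase = "make install PREFIX=$out";
                 }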
               An investment into learning Nix is done once. And you are
               completely right that refactoring to a Nix-based build takes
               weeks in a serious legacy software project. I wonder myself if
               this will be worth it. Currently, I think it might be: part of
               the weeks put into the build are not due to Nix and would have
               been necessary for Docker-based builds too, and one can also do
               a lot of stuff much more easily with Nix, which leads to
               "feature creep" like "oh, let me add this preconfigured
               vim/emacs/vs-code into the build env" that one would probably
               not do if it weren't so easy to customize everything with
               overlays and overrides.
              But that is a good thing, and it is much better to have a capable
              tool and critical discussions with colleagues to limit the usage
              of that tool, than the other way around.
               Heck, I remember when we switched to a Maven build and had a
               build with all the bells and whistles. That took weeks, too, and
               all the generated artifacts, like source code documentation
               sites, were probably a waste of time to implement.
               I am not sure if that proves or disproves powerful, declarative
               and modular build systems.
               Dockerfiles and docker-compose.yamls have a learning curve too,
               but more importantly, if you have to invest more time in build
               maintenance than you would with a Nix-based build, it is only a
               matter of time until that costs more than the Nix approach.
              lvass wrote 2 days ago:
              Sounds like your scientists were so preoccupied with whether or
              not they could, they didn’t stop to think if they should. Would
              it be nicer if you just used Nix to build the application and a
              shell, in the simplest possible manner?
                exdsq wrote 2 days ago:
                This is totally fine and I like it, my worry is scope creep
                once it’s in. Nix repl to experiment with new languages is
                really cool. Reversing updates to my OS is amazing. If I could
                 have a rule that that is the limit - personal dev environments
                 - but with additional support for docker, then I'd be very
                 happy.
          taeric wrote 2 days ago:
          Funny, because I feel that simple tasks that would take minutes in my
          machine are now a dev adventure with docker.
          And I mean funny.  I suspect it is different mindsets.    And I
          personally like that both seem to be thriving.
            oauea wrote 2 days ago:
            If you can do it in minutes on your machine, you can spend those
            minutes updating your Dockerfile to automate the steps instead.
            It's essentially the same thing.
              taeric wrote 2 days ago:
               Unless auth is concerned. There are loads of tools I run that I
               want to just be me, not whatever user is configured in the
               container.
               And learning how to manage that mapping was a heck of a time.
                oauea wrote 2 days ago:
                 How are you going to run those tools on CI? Or by your
                 colleagues?
                These are all questions that need answering anyway, regardless
                of your usage of Docker.
                  taeric wrote 2 days ago:
                  I already authed as me on my machine.  Annoying to also have
                  to auth as me in the container.
                  Or have we reached the stage where folks have secrets
                  management for containers fully orchestrated at a personal
                  user level?
            goodpoint wrote 2 days ago:
            It's well documented that Docker and especially docker hub had a
            terrible impact on security.
            Once you factor in the efforts required in the long term to
            mitigate a decade of bundling gigabytes of applications and
            libraries it's a huge "dev adventure"
              oauea wrote 2 days ago:
              Just build your own images like you would otherwise install the
              software on bare metal. Base those images on official images, not
              community images.
                goodpoint wrote 2 days ago:
                > Just build your own images
                This is like saying "just wear a mask": it's not enough.
                You rely on what other people are doing now and what they've
                done in the past.
                 This is an ecosystem problem, not an individual problem.
                kryptk wrote 2 days ago:
                Personally, you would have to pry my Bitnami images out of my
                 cold dead hands. There is just no way my team of 2 can do
                 anywhere near as good a job.
                  oauea wrote 2 days ago:
                  Fair, there are a few high quality "vendors" that we also
                  trust, and Bitnami is one of them. RandomJoe42/SomeImage is
                  out, though.
              Frost1x wrote 2 days ago:
              More like developer Towers of Babel waiting to collapse on every
              build attempt. Docker is fine if it's used correctly but it does
              add a layer of complexity, it doesn't abstract it away..
              The way I see it rampantly (ab)used is as a shortcut to get some
              software up and running by leveraging a public image and passing
              tech debt for some component of one's system onto the maintainer
              of the Docker image, then cobbling together multistage builds
              from those and microservice architectures to try and support this
              tech debt model. Sometimes it works, many times it breaks, often
              it turns into complete and utter nightmares to deal with.
              "Amazing Andy got this fantastically complex system up and
              running in a week all by their self way under time and budget and
              it works. Now I just want to add this feature or modify this one
              thing and you're telling me that's going to take how long?" Yea,
              I've seen this more times than I care to admit.
                kwertyoowiyop wrote 2 days ago:
                “You mean Amazing Architect Andy who just left for Google?”
                  KronisLV wrote 2 days ago:
                  That's at least partially why we have CV driven development
                  be a thing in the industry. Either you jump on the NoSQL, AI,
                  ML, blockchain, serverless, microservice, container, $FOO
                  bandwagons, or you'll miss out on job opportunities and will
                  have your CV stagnate.
                  That's not to say that those technologies don't have good use
                  cases in particular circumstances, it's just that you as an
                  IC probably don't have strong incentives to care about where
                  the project will be in 5-10 years when you will have left for
                  another company in 2-3. A lot of accidental complexity is an
                  unavoidable reality when you have people who put their own
                  careers ahead of most other concerns.
            aspaceman wrote 2 days ago:
            I feel similarly with Docker. But it's easier to explain to newer
            folks because it's only a single layer of abstraction above shell
              jokethrowaway wrote 2 days ago:
              I can't say Docker is simpler than Nix.
              I found Nix way easier, but the documentation is very... concise.
              aequitas wrote 2 days ago:
               The issue with simple solutions is that the underlying problem
               still has the same complexity. This complexity rises up sooner
               or later and, depending on your abstraction, will either be
               nicely caught within it or make a complete mess.
              taeric wrote 2 days ago:
              I call this.  It isn't a single abstraction over shell.  It is a
              single closure over shell.  And that is surprisingly hard to
              Again, though, I am glad both are thriving.  Competition of
              implementation is... not that compelling to me.  Competition of
              approach, is amazing.
              dnautics wrote 2 days ago:
              I really wish something like singularity containers had taken
              over -- that was literally just shell commands.
                mst wrote 2 days ago:
                Possibly [1] would be of interest.
   URI          [1]: https://buildah.io/
                  dnautics wrote 1 day ago:
                  I do use podman, not really stoked about buildah because it's
                  still a bunch of buildah commands in a shell-driven DSL,
                  which I suppose is better than a dockerfile.  Singularity was
                  literally a build file that was a shell script divided into
                  sections that denoted phases.
        zamadatix wrote 2 days ago:
         Somewhat unrelated and likely a dumb question, but I figure folks
        looking at this article can help steer me in the right direction: if I
        want containers to act like Linux VMs what's the best option? Like
        Docker minus the assumption that I only want a few specific directories
        to persist or the idea I'm interested in layering containers in a way
        that lets me reproduce building them. Like if I just want another
         separate full instance of Alpine with its own network view and own
        filesystem view and own everything but sharing the host kernel instead
        of being fully virtualized.
          SkyMarshal wrote 2 days ago:
           In addition to systemd-nspawn, Kata Containers might also work for
           this:
   URI    [1]: https://katacontainers.io/
          CameronNemo wrote 2 days ago:
          LXD is probably the most polished option. I believe it is packaged
          and up to date on Alpine.
          Hendrikto wrote 2 days ago:
          That’s basically what systemd-nspawn does.
          oddlama wrote 2 days ago:
          You might want to have a look at systemd-nspawn [1], which is
          basically containerized chroot based on linux namespaces.
   URI    [1]: https://www.freedesktop.org/software/systemd/man/systemd-nsp...
            zamadatix wrote 2 days ago:
            This is exactly what I needed! Thank you!
            CameronNemo wrote 2 days ago:
            I don't think you can easily use systemd-nspawn on Alpine.
        throwaway9870 wrote 2 days ago:
        > If you run docker build twice with the same Dockerfile you might get
        2 images that behave in different ways. For example, a third-party
        package could silently be updated and cause breakage. Aggressively
        pinning dependency versions helps, but doesn't completely prevent this
        If "aggressive" means fully, then why doesn't that fix the issue?
          cbrewster wrote 2 days ago:
          Author here. In our case, we had a large base Docker image called
          Polygott ( [1] ) it pulls in dependencies for 50+ different languages
          from various repositories. We would pin things where possible, but
          its still very difficult to ensure reproducible builds.
           Additionally, docker builds have free access to the network to do
           anything they would like. Nix goes to great lengths to sandbox
           builds and limit network access. Anything fetched from the network
           requires a pinned sha256 hash to ensure the remote data hasn't
           changed. ( [2] )
   URI    [1]: https://github.com/replit/polygott
   URI    [2]: https://nixos.wiki/wiki/Nix#Sandboxing
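           As a hedged illustration (not from the post; the URL and hash below
           are placeholders), fetching a remote source with a pinned hash in
           Nix looks roughly like this, and the build fails if the downloaded
           bytes ever stop matching the hash:
             pkgs.fetchurl {
               url = "https://example.com/some-dep-1.2.3.tar.gz";  # placeholder
               sha256 = "0000000000000000000000000000000000000000000000000000";
             }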
            treis wrote 2 days ago:
            Why do you need reproducible builds for Docker?  The whole point is
            that you build it once and then you use that container on as many
            servers as you want.
              cbrewster wrote 2 days ago:
              What happens when you need to update some dependency within that
              image? Now you have to do an image rebuild. If you're lucky only
              the top-most layers will be rebuilt and the base layers stay the
              same, if you're unlucky nearly the whole image is rebuilt.
              Usually we just want to update some subset of dependencies, but
              during the rebuild some other dependencies may get updated
              unintentionally (eg if they aren't pinned to a particular
              version). For most, this may not be an issue but at Replit,
              everyone's projects use this base Docker image. Unintended
              updates can cause breakage for our users.
                treis wrote 2 days ago:
                 That's not really what a reproducible build is, though.
                 Reproducible builds mean you get the exact same thing from
                 your build script today or three weeks from now. Getting
                 unexpected changes with an updated dependency is a different
                 problem from not having a reproducible build.
                  cbrewster wrote 2 days ago:
                   Fair, but it's still a real issue and solved in a similar
                   way:
                  Nix has finer grained reproducibility -- not only at the
                  environment level but also at the derivation level. Being
                  able to pick and choose which dependencies to update while
                  ensuring other packages are left exactly the same is valuable
                  to us.
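                   As a rough sketch of that (the package names are
                   hypothetical), a nixpkgs overlay can pin a single dependency
                   while leaving everything else untouched:
                     self: super: {
                       # hypothetical: hold just this package at an older version
                       nodejs = super.nodejs-14_x;
                     }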
            throwaway9870 wrote 2 days ago:
            That makes sense. I think the real issue isn't Docker vs Nix, it is
            that some package managers are almost impossible to use to build
            reproducible images. I worked with debootstrap 10+ years ago trying
             to script reproducible builds and found it exceedingly hard. Gentoo
            made it almost trivial (Google used it for ChromeOS so perhaps they
            felt similar). I will look into Nix.
            It appears that with the proper package manager support, Docker
            would be fine?
            I come from a hardware background and seem to be a lot more
            paranoid than most software folks. I would struggle to trust a
            build where so much is not pinned.
          lilyball wrote 2 days ago:
          Probably because of non-deterministic builds. Especially if the
          dependency pulls info from the network at build time.
            throwaway9870 wrote 2 days ago:
            I guess that would not fit my definition of "fully".
            Do people really build images this way? It sounds completely insane
            to pull packages like that randomly from the Internet.
              Volundr wrote 2 days ago:
               Both my professional experience and public examples would seem
              to suggest that's the norm. Example, here's the official postgres
              Dockerfile: [1] .
              Do you work in an environment that maintains custom copies of
              every dependency in company managed repos? If so, my experience
               suggests you're the outlier, not the people running apt, npm, etc
              inside their Dockerfiles.
   URI        [1]: https://github.com/docker-library/postgres/blob/3bb48045...
                throwaway9870 wrote 2 days ago:
                Not custom copies, but a locked down cache of packages. For
                Gentoo you can do this by locking the portage tree you use and
                keeping a copy of the distfiles from the first run, for Python
                it was a requirements.txt file with a cache of the tar files
                from PyPi, for go it was including 3rd party code in repo. I
                don't know what the team did for npm.
                 It was really nice doing a full image rebuild and knowing that
                 the only thing that changed was what you explicitly changed.
                  Volundr wrote 2 days ago:
                  I'm genuinely curious about this. How are you distributing
                  these caches so that if I do a build on my machine it'll
                  produce the exact same image as on yours? If I'm
                  understanding what you mean by "cache" (I'm thinking the
                  node_modules folder for example for NPM) it'd certainly work,
                  but it feels like a logistical nightmare to me.
                    throwaway9870 wrote 2 days ago:
                    We had a VM in a datacenter that hosted everything and was
                    accessed over a VPN. Again, I don't know the specifics of
                    the NPM setup, but for everything else it was HTTP serving
                    static files right off disk. It was a manual process to add
                    new files, but generally they got added in batches so it
                    wasn't too painful. Gentoo has hashes for everything, so at
                    least for OS packages, you could not overwrite anything
                    without breaking the build and noticing.
                    mason55 wrote 2 days ago:
                    We host our own Maven and PyPi. External artifacts get
                     pulled into the cache and devs get all their artifacts from
                    our cache. If an artifact already exists in the cache then
                    we never update it.
                     Super easily, honestly; it's one of those things that we
                     never even think about until someone upstream does
                     something that would have screwed us anyway, like
                     republishing a version.
          Volundr wrote 2 days ago:
          A lot of package repositories prevent this now, but theoretically
          your package repository could allow a new version of the package to
          be uploaded with the same version number. Docker tags have this issue
          in fact, so even your base image could change underneath you.
            throwaway9870 wrote 2 days ago:
            I can't imagine using a package system for building images that
            doesn't allow a full local cache of exactly the packages I want
              Volundr wrote 2 days ago:
              I mean sure, you can absolutely retain your own docker
              repository, your own NPM repository, etc and then configure them
              in your Dockerfile before installing packages. Pretty much every
               technical problem has a technical solution, but it's more effort
               than "just" fully pinning your versions.
                throwaway9870 wrote 2 days ago:
                If there are too many packages to pin, then I assume there are
                   too many packages to audit changes in for each image build
                   you do? If that is true, how do you have any confidence that
                   the image is going to function correctly?
                  Volundr wrote 2 days ago:
                  I think we're talking about two different things. I'm
                  thinking of pinning packages as say, my npm.lock specifies I
                  will use version 2.2.19 of tailwind css. Or my Dockerfile has
                  redis:6.2.6 as it's base image. Both of those are fully
                  pinned, but nothing stops Tailwind or Redis from uploading
                  new versions of the package and reusing the version number (I
                  believe NPM actually disallows this, Docker hub does not). So
                  my build is not technically reproducible.
                  I can solve this by maintaining my own Docker registry, and
                   my own NPM registry, but that's more work than just fully
                  specifying versions in a configuration file.
                  As for auditing changes, well most people don't. They update
                  their dependencies and run their tests and if the tests pass,
                  ship it! Most companies don't have a team auditing every
                  single dependency that's pulled in, much less every update
                  after a dependency has been approved. They simply trust the
                  authors of Redis not to screw them.
                  It's great if you do, I'm certainly not arguing against it,
                  but it's far from the norm.
                    etcet wrote 2 days ago:
                    You can reference docker images by their SHA digest which
                    is immutable, e.g.:
                         docker pull <image>@sha256:<digest>
        cetra3 wrote 2 days ago:
        One thing in the blog that is not true is that you can only inherit
        from a single layer.  You can use multi-stage builds to grab & blend
        layers: [1] . It is not as powerful as nix, granted, but it is
   URI  [1]: https://docs.docker.com/develop/develop-images/multistage-buil...
          flyinprogrammer wrote 2 days ago:
           Except that multi-stage builds are deceptively not grabbing or
           blending layers - they simply let you copy files from an earlier
           stage.
          And that is a big miss. Being able to describe "now append this layer
          and all, but only, its file changes from this previous layer" would
          be pretty epic.
          cbrewster wrote 2 days ago:
          While you can copy files from different stages, I wouldn't consider
          this to be the same thing as composing two base images together. Like
          the example in the post, you can't take the Rust and NodeJS images
          and tell Docker to magically merge them. You can copy binaries &
          libraries from one to the other but that seems extremely tedious and
          error prone.
          Whereas Nix makes it rather trivial to compose together the packages
          you need (eg Rust, NodeJS) in your environment.
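           For illustration, a minimal shell.nix along those lines might look
           like the sketch below (attribute names are from nixpkgs, the rest
           is made up):
             with import <nixpkgs> {};
             mkShell {
               # Rust and NodeJS composed into one environment
               buildInputs = [ rustc cargo nodejs ];
             }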
        awinter-py wrote 2 days ago:
        IMO initial value of docker for local development is enabling me to run
        two copies of postgres without them shitting on each other. I get that
        nix is supposed to be hermetic, but does it enable two of something?
        nix being really good at package management is something docker needs
         to imitate -- out-of-order apt-get without re-downloading
        all the packages, for example, seems like it would shrink most cloud
        build costs. I guess this is what the article means by trouble 'sharing
        docker buildkit (or moby buildx or wherever the branding has settled)
         is supposed to improve caching, but creating a simple API for
        caching that plays nicely with system + language package managers would
        really move the needle here; reorderable package installs would be the
          soraminazuki wrote 2 days ago:
          You can, if you take steps to containerize the postgres processes.
           This can be done with NixOS's nixos-container, or with any other
           container runtime including Docker. nixos-container is easy to use
           if you already use NixOS.
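           As a rough sketch of what that can look like declaratively (the
           container name and addresses are made up; the options come from the
           NixOS container module):
             containers.pg-one = {
               privateNetwork = true;          # its own network view
               hostAddress   = "10.250.0.1";   # placeholder addresses
               localAddress  = "10.250.0.2";
               config = { pkgs, ... }: {
                 services.postgresql.enable = true;
               };
             };
             # a second container, e.g. containers.pg-two, gets its own state
             # and ports, so two postgres instances don't step on each other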
           This separation of concerns is one of the things I like about Nix when
          compared to something like Docker. For instance, if you use the
          Docker image format for packaging, then you're also forced to buy
          into its specific sandboxing model. With Nix, you can choose to run
          applications the way you see fit.
          otabdeveloper4 wrote 2 days ago:
          > I get that nix is supposed to be hermetic, but does it enable two
          of something?
          No, it doesn't solve the TCP port isolation problem. (But Docker
          doesn't really either. Linux network namespaces should, but nobody
          bothered to develop tools for that yet.)
            nextaccountic wrote 2 days ago:
            Doesn't docker-compose set up a private network interface?
            awinter-py wrote 2 days ago:
            on docker for linux you get different hosts for the containers.
            you'll still need to BYO way to assign them names -- I personally
            use direnv for this
            I think this doesn't work as well on docker desktop for mac
          xxpor wrote 2 days ago:
           Nix supports this quite well. [1] The Nix and container mindsets are
          very similar in that they refer to all of their dependencies,
          including down to glibc.
   URI    [1]: https://nixos.org/manual/nix/stable/#multiple-versions
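           For example, a hedged sketch of a shell that exposes two major
           versions of the same package side by side (each resolves to its own
           /nix/store path, so the two don't clash on disk):
             with import <nixpkgs> {};
             mkShell {
               buildInputs = [ postgresql_12 postgresql_13 ];
             }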
            tyingq wrote 2 days ago:
            Does Nix understand network namespacing?  Or would 2 postgres
            instances clash over tcp listen ports?
            I get you could configure different ports, or virtual interfaces,
            but it sounds like either of those would be outside of the nix
              xxpor wrote 2 days ago:
               You would have to deal with that yourself. But it's typically
               just a variable in the service definition, so it's very easy to
               change.
              wyager wrote 2 days ago:
              I don't think it would be possible to do this in an OS-agnostic
              way. LXC and jails are too different, and I'm not even sure what
              the option would be in macOS.
                ratorx wrote 2 days ago:
                I don’t think NixOS (where service configuration would live)
                is system-agnostic. It relies heavily on systemd already.
            xyzzy_plugh wrote 2 days ago:
            Not quite. This is true for dependencies such as libraries but for
            services it's significantly trickier.
            Postgres, for example, would require configuring each version to
            use a distinct space for storage and configuration if you want to
            run them concurrently. It's still pretty easy with NixOS, but not
            as simple as you make it seem.
              xxpor wrote 2 days ago:
              It's one line of override configuration, how is that not trivial?
              Edit: Totally granted, figuring out that one line the first time
              might take 6 hours, but once you know how to do it you're good to
              go the next time. The documentation could certainly be improved.
          jonstewart wrote 2 days ago:
          You can use Nix to create a tarball that can then be launched as a
          Docker container. However, I haven’t figured out a way to make Nix
          play nicely with container image layering—you get a small container
          image for deployment, but you’ll have lots of such
          largely-duplicative tarballs in the CI pipeline and the latency for
          generating them is annoying.
            chriswarbo wrote 2 days ago:
            I do this, but I don't use Docker; I just create a .tar.gz file for
            each layer, run them through the `sha256sum` command to get a
            digest, and generate a JSON config file (via builtins.toJSON). The
            result can be run by anything that supports OCI containers (e.g.
            AWS ECS).
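             A hedged sketch of the JSON-generation part (field values are
             made up, not the commenter's actual code):
               # render a minimal OCI-style image config into the store as JSON
               pkgs.writeText "config.json" (builtins.toJSON {
                 architecture = "amd64";
                 os = "linux";
                 config.Cmd = [ "/bin/my-app" ];   # hypothetical entrypoint
               })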
            clhodapp wrote 2 days ago:
            Have you tried dockerTools.buildLayeredImage ( [1] )? It applies
            some smart heuristics ( [2] ) to create images made of pretty
             reusable layers. This is subject to some false cache misses due to
             some laziness in Docker's cache implementation ( [3] ), but that
             is Docker's fault, not Nix's, and it affects Dockerfile builds as
             well.
   URI      [1]: https://nixos.org/manual/nixpkgs/stable/#ssec-pkgs-dockerT...
   URI      [2]: https://grahamc.com/blog/nix-and-layered-docker-images
   URI      [3]: https://github.com/moby/moby/issues/38446
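             For reference, a minimal sketch of what using it can look like
             (the image name and contents are made up):
               pkgs.dockerTools.buildLayeredImage {
                 name = "my-service";
                 contents = [ pkgs.nodejs ];   # split into reusable layers
                 config.Cmd = [ "${pkgs.nodejs}/bin/node" "server.js" ];
               }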
        jonstewart wrote 2 days ago:
        I tried using Nix and NixOS early in the year, but documentation is an
        issue. Also, while I appreciate NixOS’s focus on reproducible
        configuration, I also have 25 years worth of mediocre Unix sysadmin
        experience, and it would be nice if the system found a way to
        accommodate Unix. My last straw was trying to get nix to schedule some
        task through systemd, when what I wanted was a simple crontab.
          pxc wrote 2 days ago:
          NixOS includes a cron daemon (Vixie cron), and you can add crontab
          entries directly either as strings or as files: [1] You can use,
            services.cron.enable = true;
            services.cron.systemCronJobs = [ ''
              */30 * * * * someuser bash /path/to/some/script/idk.sh
            '' ];
          or if you really don't want to worry about the Nix syntax for a list
          of strings, and possible escaping issues, you can just create a
          crontab file in /etc/nixos next to configuration.nix and write
             services.cron.cronFiles = [ ./crontab ];
   URI    [1]: https://search.nixos.org/options?channel=21.05&show=services...
          l0b0 wrote 2 days ago:
          After having configured a bunch of crontabs, I found the Nix+systemd
          timers approach a massive improvement.
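           For comparison, a rough sketch (the unit name and script path are
           made up) of a periodic job expressed as NixOS systemd timer options:
             systemd.services.my-nightly-job = {
               serviceConfig.Type = "oneshot";
               script = "/path/to/some/script.sh";   # hypothetical script
             };
             systemd.timers.my-nightly-job = {
               wantedBy = [ "timers.target" ];
               timerConfig.OnCalendar = "daily";
             };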
          Gigachad wrote 2 days ago:
          Kind of a tangent, but from what I have seen, crontab doesn't
           actually exist on modern distros; I believe they have a tool that
           converts the crontab config into systemd timers. As well as fstab.
            nightfly wrote 2 days ago:
            Cron still exists on Ubuntu at least.
            jonstewart wrote 2 days ago:
            Yes, so I was doubly frustrated in this instance. I know the
            BSD-style init script system that I learned with Slackware is not
            geared well towards modern hardware (i.e., laptops), but it was so
            much simpler to read and comprehend.
              Gigachad wrote 2 days ago:
              Systemd timers do have several advantages but they come with more
              complexity which is why a translation layer is usually applied on
              user focused distros.
              You can enable/disable timers, trigger them manually, see when
              they last triggered, see when they will next trigger. And
              probably a bunch of other stuff. They also have individual files
              so automation can easily add or remove timers. For a power user,
              systemd timers are much better. But if you just want to quickly
              add one thing, it probably isn’t worth the effort to learn
        salamander014 wrote 2 days ago:
        I think people are missing the forest for the trees with this.
        In my view, the reason Docker has all the hype is because I can look at
        a Dockerfile, and know what's up. In seconds. Sometimes in
        It's a user experience thing. Yes, Nix is better for 'technical people
        that spent the time learning the tool', but Dockerfiles rely almost
        entirely on existing System knowledge.
        Yes, Nix is 'better', but the fact is Docker is 'good enough' and also
        'stupid simple' to get started.
        Also Docker-Compose, I don't know why people hate on YAML. But it takes
        that same KISS attitude to build complex systems that can also be used
        as sanity checks for migrating to things like kubernetes.
        Being able to spin up a complex full stack app with one command and a
         compose file that doesn't take any brain cells to read is worth its
         weight in gold.
        This is like the 'general case tool' vs 'DSL' debate. If it's easy to
        use, people will use it.
          Apofis wrote 2 days ago:
           I'm a big fan of Docker-Compose so far because of its powerful
           simplicity and because it's introducing me to GitOps,
           Infrastructure-As-Code,
          and Terraform, all of which I'm really starting to like... and I
          really hate doing DevOps work.
          I think your point is very valid, it has got to be simple and
          increase productivity instead of impede it. Using something better
          but getting stuck in the minutia every day is a waste, and not
          something anybody in senior leadership should ever approve.
          aequitas wrote 2 days ago:
          > In my view, the reason Docker has all the hype is because I can
          look at a Dockerfile, and know what's up. In seconds. Sometimes in
           Most of the time this just gives me more questions than answers,
           like: what does the entrypoint.sh file in this repo do? Only to
           discover a plethora of shell scripts for setting up the runtime
           environment based on different environment variables, most of the
           time not aligned with any common standard or with how you would
           generally set up the application itself.
          fulafel wrote 2 days ago:
          Lots of people seem to be building containers with non-Dockerfile
          based things though, especially in the JVM world.
            random_kris wrote 2 days ago:
            You mean through maven configuration? At the end of the day it is
            still a dockerfile but constructed using Maven's xml.
            I hate it haha
              fulafel wrote 2 days ago:
              I thought at least some of those worked out without generating
              intermediate Dockerfiles or invoking "docker build". After all
              container images are just basically tar files with some metadata.
              Or do you mean it's conceptually the same, just implemented
              differently? I agree there.
              tikkabhuna wrote 2 days ago:
              We use Google Jib with Gradle ( [1] ) and love it. It does some
              slight optimisations (just use the classes rather than jars) and
              removes some decision making about where files are laid out.
              It also builds into the Gradle lifecycle neatly. I don't need a
              separate tool for building images.
              I'm sure writing Maven xml wouldn't be fun though!
   URI        [1]: https://github.com/GoogleContainerTools/jib
          kaba0 wrote 2 days ago:
             That’s because Docker just pushes dependency management to one
             layer below; it doesn’t solve it.
            mmcnl wrote 2 days ago:
            But 99% of the times that's exactly what you need.
          wyager wrote 2 days ago:
          I think you are confusing a property of your expertise with a
          property of the tool. As someone who doesn't use docker all the time,
          I find it kind of a pain in the ass to read realistic dockerfiles or
          work with docker-compose. As a juxtaposition I found freebsd jails
          much more pleasant and sane to work with for security
          containerization. For deployment management I'm not sure if there are
          competitors to docker but it's not hard to imagine something vastly
          more pleasant to use.
          nirui wrote 2 days ago:
          Agreed with your opinion about `Dockerfile`. The article had me for a
          second until I saw the script code. I mean, my time is not infinite
          and I'd rather spend it on things that are really important to me,
          not learning to write "yet-another build script" for a small system.
          So unless it's mainstream already, I'm not going to touch it.
          `Dockerfile` is light enough for me to not hate it too much.
          For the `docker-compose.yaml` story however, I can offer one reason:
          when you have so many variants (versions), so many setting options,
          and so many data types (array, object, string, etc.), it's hard to
          find references to write one from scratch (you have to read multiple
          documents to get it right). Your knowledge of docker command-line
          parameters does not translate to `docker-compose.yaml` smoothly (some
          options changed names, some don't work). And sometimes, some features
          work differently under docker-compose.
            chriswarbo wrote 2 days ago:
            > I mean, my time is not infinite and I'd rather spend it on
            things that are really important to me, not learning to write
            "yet-another build script" for a small system.
            You don't have to jump into the deep end with Nix. If you're happy
            to just run shell commands (like Dockerfiles provide), then all you
            need is this:
                (import <nixpkgs> {}).runCommand "my-package" {} ''
                  PUT YOUR BASH CODE HERE
                ''
              nirui wrote 13 hours 5 min ago:
              No, please don't interpret it like this.
              No matter what the format Nix script look like, it's still a
              script language designed to address something that has already
              been addressed (or can be addressed with light expansions). The
              very idea of "Hey let's build this whole new thing that does this
              specific old task a little bit better at the cost of learning
              many new concepts (and making many mistakes)" is not good at the
              end of the day.
              I would rather say, if the dudes there really want to create a
              new language, fine, but at least make it big. By that, I mean
              don't just try to build a tool that performs the old task a
              little bit better (at the cost of learning), make a tool that does
              new things (in other words, "enables new possibilities") far
              better. Perhaps after that, the toolset could become something
              worth learning.
              (Currently, there are many ways to create reproducible builds.
              And even if you have reproducible builds, it does not mean the
              build will reproduce the same runtime result all the time. All
              factors combine, the benefit you can receive from the toolset is
              just not great enough at the moment. Hope you understand my
              point.)
          cbrewster wrote 2 days ago:
          Author here. As with most things, it's all about the trade-offs.
          Docker has certainly proved itself and that approach has worked on a
          massive scale. However, it's not a silver bullet. For us at Replit,
          our Docker approach was causing issues: our base image was large and
          unmaintainable and we had almost no way of knowing what changed
          between subsequent builds of the base image.
          We've been able to utilize Nix to address both of those issues, and
          others who may be in a similar scenario might also find Nix to be a
          good fit.
          Of course Nix comes with its own set of opinions and complexities but
          it has been a worthwhile trade-off for us.
            mayli wrote 2 days ago:
            Correct, that's one of the cases where docker's layered image
            system doesn't work well. Nix is almost the perfect tool to perform
            incremental builds and deployments for the Replit requirements.
            I wish that Docker had the ability to merge multiple parent layers
            like git does; then you could build the gigantic image by just
            updating a single layer.
            The only hack Docker offers is the multi-stage build; however, that
            won't work reliably in some cases, such as resolving conflicts.
              KronisLV wrote 2 days ago:
              Disclaimer: the following is still experimental, and will
              probably remain so for a while.
              There is actually the --squash flag that you can use during
              builds, to compress all of the layers: [1] For example:
                $ docker build --squash -t my-image .
              In practice it can lead to smaller images, though in my
              experience, as long as you leverage the existing layering system
              efficiently, you end up shuffling around less data anyway:
                - layer N: whatever the base image needs
                - layer N+1: whatever system packages your container needs
                - layer N+2: whatever dependencies your application needs
                - layer N+3: your application, after it has been built
              That way, I recently got a 300 MB Java app delivery down to a
              few dozen MB actually being transferred: since nothing in the
              dependencies or the base image needed to change, it just sent
              the latest application version, which was stored in the last
              layer.
              The above ordering also helps immensely with Docker build
              caching. No changes in your pom.xml or whatever file you use for
              keeping track of dependencies? The cached layers on your CI
              server can be used, no need to install everything again. No
              additional packages need to be installed? Cache. That way, you
              can just rebuild the application and push the new layer to your
              registry of choice, keeping all of the others present.
              Using that sort of instruction ordering makes for faster builds,
              less network traffic and ergo, faster redeploys.
              I even scheduled weekly base image builds and daily builds to
              have the dependencies ready (though that can largely be done away
              with by using something like Nexus as a proxy/mirror/cache for
              the actual dependencies too). It's pretty good.
              Edit: actually, I think I'm reading the parent comment wrong;
              maybe they just want to update a layer in the middle? I'm not
              sure. That would be nice too, to be honest.
   URI        [1]: https://docs.docker.com/engine/reference/commandline/bui...
            AmericanBlarney wrote 2 days ago:
            Those sound like issues with your Docker usage - there are options
            to keep the base image quite streamlined (e.g. alpine or
            distroless images).
              cbrewster wrote 2 days ago:
              For context, I'm referencing our (legacy) base image for projects
              on Replit:
              Polygott ( [1] ).
              The image contains dependencies needed for 50+ languages. This
              means repls by default are packed with lots of commonly used
              tools. However, the image is massive, takes a long time to build,
              and is difficult to deploy.
              Unfortunately, slimming the image down is not really an option:
              people rely on all the tools we provide out of the box.
   URI        [1]: https://github.com/replit/polygott/
                darkwater wrote 2 days ago:
                > For context, I'm referencing our (legacy) base image for
                projects on Replit: Polygott ( [1] ).
                May I ask why you didn't use something like Ansible to build
                such a complex image? With appropriate package version pinning
                (which is the real crux here) it should work well enough to
                get a reproducible build.
                I understand it would already have been something different
                from a pure Dockerfile, so it's not that fair to compare.
   URI          [1]: https://github.com/replit/polygott/
                  chriswarbo wrote 2 days ago:
                  > May I ask why you didn't use something like Ansible
                  They did; it's called Nix, and they wrote a blog post about
                  it ;)
        bjtitus wrote 2 days ago:
        I've set up my new M1 MacBook Pro using Nix and it's been going
        relatively well. Home Manager manages global tooling like Neovim,
        random CLI tools, and config files while I've set up `default.nix`
        files to use with `nix-shell` per-project. The setup of each project
        can be a little tedious as I still find the language confusing, but once
        everything is set up the reliable re-creation is excellent. I love
        opening the nix shell and knowing I have each tool I need for that
        project without worrying about polluting my user space or conflicts
        between projects.
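        For illustration, a minimal per-project default.nix along those lines
        might look something like this (the tool list is just a placeholder):
            { pkgs ? import <nixpkgs> {} }:
            pkgs.mkShell {
              # tools made available inside `nix-shell` for this project
              buildInputs = [ pkgs.nodejs pkgs.yarn pkgs.jq ];
            }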
          rowanG077 wrote 2 days ago:
          Wow, I didn't know nix arm support was that far along already. Honestly
          I always felt that osx is kind of second class in the nix ecosystem.
          revscat wrote 2 days ago:
          I have a Makefile that I use when I need to spin up a new laptop [1].
          From your description it sounds like it is functionally similar. What
          does nix bring that a Makefile like this one doesn’t?
   URI    [1]: https://github.com/jchilders/dotfiles/blob/main/Makefile
            kohlerm wrote 2 days ago:
            Your Makefile is not a reproducible build. You would need to pin
            all the versions of the tools being used. Also, as others have
            mentioned, Nix ensures that you do not rely on unspecified
            dependencies.
            muxator wrote 2 days ago:
            Not an answer to your question, but do you feel safe doing ( [1] )
            > sudo curl -fsSL [2] | /bin/bash
            piping the output of a curl command to sh without first checking
            the sha256 of the file you just got?
            In a similar situation I would not be comfortable without at least
            getting a specific version of the tool I'm downloading and then
            hardcoding a hash of its content that I computed after manually
            downloading and inspecting it.
   URI      [1]: https://github.com/jchilders/dotfiles/blob/main/Makefile#L...
   URI      [2]: https://raw.githubusercontent.com/Homebrew/install/master/...
            amarshall wrote 2 days ago:
            Drift. Nix (effectively) does not mutate in-place, it rebuilds and
            links. Delete a target from that Makefile and it doesn’t actually
            remove that thing. It’s the same problem Ansible etc. have.
            It’s not until you actually fully reinstall the OS that all the
            implicit dependencies reveal themselves. Sure one can write
            “clean” targets but the point of Nix is that such manual
            management is unneeded.
            mateuszf wrote 2 days ago:
            I guess it's reproducible - creates a system that has exactly the
            same versions of all programs and all libraries.
          lilyball wrote 2 days ago:
          I recommend also using direnv, with nix-direnv (home-manager has a
          setting to trivially enable nix-direnv). This lets you integrate your
          shell.nix environment into your existing shell without having to run
          `nix-shell` or use bash.
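          With home-manager, the nix-direnv integration can be enabled with a
          couple of options (a minimal sketch; option names as of recent
          home-manager releases):
              {
                programs.direnv.enable = true;
                programs.direnv.nix-direnv.enable = true;
              }
          Then a one-line .envrc containing `use nix` in the project directory
          loads the shell.nix environment automatically when you cd into it.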
            bjtitus wrote 2 days ago:
            Oh wow, this is fantastic! Thanks for the tip and I'd like to thank
            you for all of the work you do on nix + Darwin. I'm using your
            `nix-env.fish` package and it's made things much easier than when I
            tried this setup a few years ago.
              lilyball wrote 1 day ago:
              I'm glad to hear it! Though these days I don't actually use it at
              all, I finally switched to using nix-darwin[1] which means I get
              a NixOS-like fish initialization. Also prior to that I got the
              fish package updated with a new override option fishEnvPreInit
              that lets you write an overlay like
                self: super: {
                  fish = super.fish.override {
                    fishEnvPreInit = sourceBash: sourceBash
              This would source the given bash script (using fenv IIRC) at the
              same point that NixOS fish would load its environment, thus
              producing the same behavior (notably, setting up all of the
              various directories prior to running any user code, unlike
              nix-env.fish that has to try and patch things up after the fact).
              The downside is that it means you have to recompile fish.
   URI        [1]: https://github.com/LnL7/nix-darwin
          linsomniac wrote 2 days ago:
          My MBP left Shenzhen early this morning and I'm super interested in
          details of how you did this.  Are there any examples of doing this
          that you followed or recommend?  Ansible is more my tooling of
          choice, though I'm fascinated by Nix, but I wasn't even sure how to
          get started with using Ansible to set up the Mac.
            linsomniac wrote 2 days ago:
            I just watched this Jeff Geerling video about setting up his M1
            Macs using Ansible and his provided playbooks.    That's probably the
            direction I'll end up going.
   URI      [1]: https://www.youtube.com/watch?v=1VhPVu5EK5o
        42jd wrote 2 days ago:
        If you're interested in digging deeper into building docker containers
        with nix this is my favorite post on the topic: [1] . Essentially you
        can use any nix package (including your own) to create the environment.
        And if you really want to understand it, Graham Christensen (the
        contributor of the buildLayeredImage function) wrote a really good blog
        post on how it works: [2] .
   URI  [1]: https://thewagner.net/blog/2021/02/25/building-container-image...
   URI  [2]: https://grahamc.com/blog/nix-and-layered-docker-images
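        As a rough sketch of what that looks like (the package choice and image
        name are just placeholders), a buildLayeredImage expression can be as
        small as:
            { pkgs ? import <nixpkgs> {} }:
            pkgs.dockerTools.buildLayeredImage {
              name       = "hello-image";   # placeholder image name
              tag        = "latest";
              contents   = [ pkgs.hello pkgs.coreutils ];
              config.Cmd = [ "/bin/hello" ];
            }
        Building that with nix-build produces an image tarball that can be
        loaded with `docker load`.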
        edude03 wrote 2 days ago:
        No, it definitely (but unfortunately) will not. Nix does everything
        Docker does better than Docker does, except, most crucially, integrate
        with non-Nix tooling.
        Nix vs Docker is like Rust vs JavaScript - you can point out every
        reason js is terrible and rust is better, but for the common developer
        looking to get things done, they’ll often gravitate to the tool that
        gets them the biggest impact with the least upfront investment, even if
        that tool ends up causing major predictable headaches in the future.
          lolpython wrote 2 days ago:
          Yes, and Nix can still lose on maintainability in the long run,
          considering it is more difficult to onboard new devs with it. They
          have to learn the Nix expression language and write custom bindings
          for a lot of software instead of calling the native package manager
          inside Docker.
            CameronNemo wrote 2 days ago:
            I don't understand how these are comparable. I also don't
            understand what you mean by "bindings". Do you mean writing nix
            derivations for new packages? I would much rather do that than
            fiddle with Debian packaging. Or do you mean writing nix modules to
            configure a service? There are certainly some (IMO) over engineered
            nixos modules, but there are also some dead simple ones.
              lolpython wrote 2 days ago:
              > I don't understand how these are comparable
              They fulfill similar business functions - allowing you to run the
              same code on a bunch of dev machines and on prod (modulo
              modifications for e.g. database storage in Docker's case). Nix
              people get hung up on the fact that Docker runs containers, but
              it doesn't really matter that much. Often Docker is the shortest
              path to getting software running on multiple machines.
              > I also don't understand what you mean by "bindings"
              I am referring to derivations and modules. Both are glue that you
              have to write for existing software that is already packaged.
              With Docker you leverage the existing packaging ecosystem like
              pip or apt. The packages are already written for you, and you can
              follow the installation instructions from a project repository
              and they translate seamlessly into Docker.
              For example with ML & Python - If you want PyTorch with CUDA
              support, you can follow the official documentation [0] and
              basically copy and paste the installation instructions into
              Dockerfile RUN statements. If anything breaks you can file an
              issue on the PyTorch issue tracker which has a wide audience.
              With Nix you have to write glue on top of the installation
              yourself, or a maintainer does it with a much smaller code review
              and support audience. Sometimes the audience is just the author,
              given that Nix project people commit directly to master
              frequently and do self-merges of PRs [1]. And there are other
              hurdles like compiling Python C extensions, which are pervasive.
              Another example is with software systems, I guess this would be a
              Nix module. Here's GitLab: [2] where it was really difficult to
              translate the services into Nix. But a lot of company internal
              services can look like GitLab with a mish-mash of odd dependencies
              and languages. And writing a Dockerfile for this is much easier
              than Nix, since you can copy from the existing README specifying
              the Debian or language-specific dependencies. (edit: and if there
              are conflicts between dependencies of the services they can go
              into different containers. Getting the benefit of Nix -
              reproducibility - without the extra effort.)
   URI        [1]: https://pytorch.org/get-started/locally/
   URI        [2]: https://discourse.nixos.org/t/proposal-require-pr-author...
   URI        [3]: https://news.ycombinator.com/item?id=14717852
                soraminazuki wrote 2 days ago:
                > and if there are conflicts between dependencies of the
                services they can go into different containers. Getting the
                benefit of Nix - reproducibility - without the extra effort
                To be clear, are you suggesting that
                    RUN sudo apt-get update && sudo apt-get -y install ...
                is somehow reproducible? I'm asking because I was surprised to
                see the above as being described as "reproducible" of all
                things. Splitting that into many different containers would
                likely exacerbate the reproducibility problem instead of
                improving it.
                > With Docker you leverage the existing packaging ecosystem
                like pip or apt. The packages are already written for you
                This is even more true for Nix, which has the largest and most
                up-to-date package repositories out there [1]. Plus, with Nix,
                you can easily make a new package based on existing packages
                with a mere few lines of code if the existing packages don't
                fit your needs. Other package managers besides Guix don't
                offer you that flexibility, so you'd have to compile from
                scratch. That's way more tedious, hard to maintain, and
                definitely not reproducible.
   URI          [1]: https://repology.org/repositories/graphs
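                To illustrate the "few lines" claim, a sketch of a package
                variant built on top of an existing one (the patch file here
                is hypothetical):
                    with import <nixpkgs> {};
                    htop.overrideAttrs (old: {
                      pname   = "htop-patched";
                      # hypothetical local patch applied on top of the stock build
                      patches = (old.patches or []) ++ [ ./my-local.patch ];
                    })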
                  lolpython wrote 2 days ago:
                  I am frustrated that you ignored my main argument about
                  Docker sufficiently serving the same business function as
                  Nix, with lower maintenance. Instead you focused on a
                  semantic argument about one word in my post - "reproducible".
                  You have the wrong idea about reproducibility, where you say
                  basically everything that is not Nix or Guix is not
                  reproducible. This ignores things like conda and techniques
                  like package version pinning that allow researchers and
                  businesspeople to get the same results from the same code.
                  Here's a definition of "reproducible" from Wikipedia:
                  Any results should be documented by making all data and code
                  available in such a way that the computations can be executed
                  again with identical results. [1] -----
                  > To be clear, are you suggesting that
                  >     RUN sudo apt-get update && sudo apt-get -y install ...
                  No, you need to pin the dependency versions. With Python this
                  practice is already normalized with requirements.txt or conda
                  yml files. So you would take an existing project and do:
                      RUN conda env create -f environment.yml
                  which would likely be copy-pasted from the project README.
                  The yml file specifies version numbers for dependencies. The
                  SAT solver is deterministic. For other languages like C, maybe
                  the project didn't specify dependency versions. So you need to
                  figure them out when you first get a successful build, then
                  specify their versions in apt. You can specify version
                  numbers in the apt-get install line.
                  Yes, this is reproducible. Definitely good enough for most
                  business use cases. When I say reproducible I do not mean
                  ivory tower math proof reproducible. I just mean that the
                  code will run on the relevant machines they are targeting. As
                  I wrote in my initial comment. And as I defined at the top of
                  this comment.
                  Also Nix provides a worse experience for pinning dependency
                  versions since it does not have a native concept of version
                  numbers [0]. Instead people have to grep through the Nixpkgs
                  repo to find the correct hash of their dependency version.
                  > This is even more true for Nix, which has the largest and
                  most up-to-date package repositories out there
                  No, Docker has the closure (to borrow Nix's terminology) of
                  all of the package managers in that graph. If you add the
                  height of the Debian packages with Pypi and Hackage you
                  already have Nix beat. You can keep adding - cargo, ruby
                  gems, etc all in their native package managers. If Nix were
                  better off then people would be adapting Nix packages to
                  other ecosystems. But the reality is the other way around.
                  > Plus, with Nix, you can easily make a new package based on
                  existing packages with a mere few lines of code if the
                  existing packages don't fit your needs. Other package
                  managers besides Guix don't offer you that flexibility so
                  you'd have to compile from scratch
                  With Nix, you are forced to make new packages based on
                  existing packages. That is not a benefit. Regarding "if the
                  existing packages don't fit your needs", compiling from
                  source is not a big deal since Docker caches output artifacts.
   URI            [1]: https://en.wikipedia.org/wiki/Reproducibility
   URI            [2]: https://github.com/NixOS/nixpkgs/issues/93327
                    soraminazuki wrote 2 days ago:
                    > I am frustrated that you ignored my main argument
                    Your main argument was that Docker "sufficiently" serves
                    the same goal of reproducibility. I just pointed out how it
                    doesn't come anywhere close. Addressing the core of an
                    argument is far from a "semantic" argument.
                    > where you say basically everything that is not Nix or
                    Guix is not reproducible
                    My definition of reproducibility is that you get identical
                    build results every time, which should meet the definition
                    you quoted from Wikipedia. "docker build," which runs
                    arbitrary shell commands with network access, is the
                    farthest thing possible from any sane definition of
                    reproducibility.
                    > Also Nix provides a worse experience for pinning
                    dependency versions
                    The exact opposite is true. No other system-level package
                    manager like apt or yum truly supports pinning packages.
                    With apt or yum, packages in a repository snapshot are
                    tightly coupled together since they're all installed into a
                    single shared location. It's not possible to swap out or
                    pin a subset of packages without the risk of breakage.
                    Nix provides a truly working way to pin packages. Packages
                    are installed into their own isolated locations to avoid
                    collisions and dependencies are explicitly specified. This
                    makes it possible to mix packages from stable channels,
                    unstable channels, and even specific git commits of those
                    channels. This can't be done with apt or yum.
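                    As a sketch of what that mixing looks like (the commit
                    hashes in the URLs are placeholders):
                        let
                          stable   = import (fetchTarball
                            "https://github.com/NixOS/nixpkgs/archive/<commit-a>.tar.gz") {};
                          unstable = import (fetchTarball
                            "https://github.com/NixOS/nixpkgs/archive/<commit-b>.tar.gz") {};
                        in
                          # packages from two different pinned snapshots, side by side
                          [ stable.postgresql unstable.nodejs ]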
                    Language-level package managers are somewhat more flexible
                    regarding pinning, but still have problems. More on that
                    below.
                    > No, you need to pin the dependency versions. With Python
                    this practice is already normalized with requirements.txt
                    or conda yml files.
                    Yet CI builds constantly break because Python dependencies
                    are a moving target. And no, the SAT solver doesn't make
                    the builds deterministic. The fact that you even need a SAT
                    solver just makes it clear that dependency management is
                    getting out of hand and we need better tools.
                    > If you add the height of the Debian packages with Pypi
                    and Hackage you already have Nix beat. ... If Nix were
                    better off then people would be adapting Nix packages to
                    other ecosystems.
                    I don't know why you believe Nix and only Nix has to
                    compete with all other package managers combined. You must
                    really dislike it if you can convince yourself that is a
                    fair comparison.
                    But Nix can be used along with other package managers, so I
                    don't see the point here. The only anomaly here is the
                    Docker image, a monolithic binary blob that doesn't compose
                    well like packages in other package managers do.
                    And speaking of fairness:
                    > or a maintainer does it with a much smaller code review
                    Where did you get this idea from? Nix has a growing
                    community and Nixpkgs is one of the most active
                    repositories on GitHub. Other package repositories, with the
                    possible exception of Homebrew and AUR, have a much higher
                    barrier to entry, which would most definitely result in a
                    "smaller code review."
                    > Nix project people commit directly to master frequently
                    and do self-merges of PRs
                    Self-merges are nowhere near being unique to Nixpkgs so
                    it's unfair to only call Nixpkgs out for it. And if you
                    count language-specific package repositories like NPM or
                    PyPI, you should assume there is zero code review for most
                    packages.
                    While regrettably there are self-merges in Nixpkgs, they are
                    definitely in the minority and a lot of those changes are
                    especially trivial stuff. Since Nixpkgs has a vibrant
                    community, things like this tend to get attention, and some
                    community members are keeping an eye on it and are quick to
                    bring these instances up. It's also worth noting that the
                    Nix community is especially invested in automated testing
                    compared to other package managers, and these tests are run
                    on PRs that end up being self-merged.
                    > With Nix, you are forced to make new packages based on
                    existing packages.
                    That is 100% FUD.
                    > "if the existing packages doesn't fit your needs",
                    compiling from source is not a big deal since Docker caches
                    output artifacts.
                    It is a big deal that you can't reuse code with non-Nix
                    package managers. Docker caching isn't relevant and does
                    nothing to deal with maintainability or reproducibility
                    issues. Our company maintains custom OpenSSL RPMs, and it
                    has been a constant source of pain due to RPM's lack of
                    code reusability. Now we also have to maintain our own
                    version of every single package that relies on our build of
                    OpenSSL, which is a nightmare. This wouldn't have been a
                    problem with Nix.
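                    For comparison, with Nix the same thing is a small overlay
                    (the patch file here is hypothetical), and every package in
                    the overlaid set that depends on openssl is rebuilt against
                    it automatically:
                        self: super: {
                          openssl = super.openssl.overrideAttrs (old: {
                            # hypothetical local fix carried on top of upstream
                            patches = (old.patches or []) ++ [ ./our-openssl-fix.patch ];
                          });
                        }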
                      lolpython wrote 2 days ago:
                      > You must really dislike it [Nix] if you can convince
                      yourself that is a fair comparison.
                      No, I do not give a single fuck about Nix versus Docker.
                      I have no personal attachment to either. I am just
                      worried that pushing Nix at my company would be some form
                      of professional malpractice given the downsides. I
                      literally have a meeting tomorrow about incorporating
                      Docker into a different team's product. I've used both
                      Docker and Nix before. If Nix would be better for them, I
                      would tell them as much. I'd be fine continuing this
                      discussion we are having, some parts were interesting.
                      But unfortunately you seem incapable of formulating an
                      argument without resorting to personal attacks and
                      condescension. And I cannot tolerate that.
                chriswarbo wrote 2 days ago:
                > With Docker you leverage the existing packaging ecosystem
                like pip or apt.
                    with import <nixpkgs> {};
                    runCommand "my-python-package" {
                      buildInputs = [ pythonPackages.pip ];
                    } ''
                      cd ${/my/project/dir}
                      pip install
                    ''
                  iterati wrote 2 days ago:
                  That compared to `RUN pip install ` is probably one of the
                  things people are complaining about, no?
                    chriswarbo wrote 2 days ago:
                    I think the complaint is about things like:
                        (import <nixpkgs> {}).pythonPackages.callPackage
                          ({ buildPythonPackage, dep1, dep2, dep3, pip }:
                            buildPythonPackage {
                              pname                 = "my-package";
                              version               = "123";
                              propagatedBuildInputs = [ dep1 dep2 dep3 ];
                              doCheck               = true;
                              src                   = /my/package/dir;
                            }) {}
                    That's how Nixpkgs tends to do things, which has nice
                    features like building each dependency separately, allowing
                    easy overrides, etc. but it requires knowledge of how
                    Nixpkgs orchestrates its Python packages.
                    In contrast, 'runCommand' lets us just run a shell script
                    like 'pip install', which is easier but doesn't have those
                    niceties. Also, depending on the platform, the Nix sandbox
                    may have to be disabled for 'pip install' to work, since
                    Nix tries to prevent network access (I think it's enabled
                    by default on Linux, but not on macOS)
          3np wrote 2 days ago:
          There are just too many edge cases and system/language oddities that
          make me continuously reach for Docker as the default, even after 4 years
          of NixOS as daily driver.
          xxpor wrote 2 days ago:
          My favorite feature of NixOS so far though is the ease of creating
          containers via the configuration.nix file. There are a few services I
          run that don't have nix packages, but do have containers. It's
          essentially like writing a docker compose file, but instead of YAML,
          you use Nix and all of the niceties that come with it. Seems like the
          best of both worlds (as someone who isn't themselves creating the
          containers).
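          For reference, a minimal sketch of such a declarative container in
          configuration.nix, assuming the oci-containers module (the image name
          and port are placeholders):
              {
                virtualisation.oci-containers.containers.myservice = {
                  image = "ghcr.io/example/myservice:1.0";  # placeholder image
                  ports = [ "8080:8080" ];
                  environment = { TZ = "UTC"; };
                };
              }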
          abathur wrote 2 days ago:
          I like to compare Nix to ice-nine from Cat's Cradle, in that it tends
          towards restructuring whatever it comes into contact with.
            doublepg23 wrote 2 days ago:
            Funny you mention that. Guix, which is a fork (of sorts) of Nix, is
            written in Guile Scheme, which uses ice-9 as its namespace in a lot
            of places.
   URI      [1]: https://lists.gnu.org/archive/html/guile-devel/2010-07/msg...
              rekado wrote 2 days ago:
              This gets repeated a lot, but Guix really is not a fork of Nix. 
              Not by any definition of "fork".  Also not "a fork of sorts". 
              The term "fork" only applies to one executable: "guix-daemon",
              which is a literal fork of "nix-daemon".  They have, of course,
              diverged a lot, but this (and nothing else) is truly a fork.
              Aside from this one executable there is no relationship between
              the two projects.
              The daemon takes .drv files that list what preconditions a build
              has (= other .drv files), what build script to run (= a generated
              Guile script), and what outputs will exist when the .drv file has
              been processed.  It processes these .drv files in an isolated
              environment (chroot + namespaces) where only the inputs mentioned
              in the .drv file are made available.
              The drv files are not shared among the projects; they are not
              generated in even superficially similar ways.  Guix is not a fork
              of Nix.  "guix-daemon" is a fork of "nix-daemon".
                Both are implementations of the same idea: functional package
                management.
                ShamelessC wrote 2 days ago:
                This fits my definition of a fork (of sorts). Thanks for the
                nuance though.
   DIR <- back to front page