_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
       
       
       COMMENT PAGE FOR:
   URI   Why is the Rust compiler so slow?
       
       
        Scuds wrote 7 hours 22 min ago:
         I have a Mac M4 Pro and it's 2 minutes to compile all of Deno, which is
         my go-to for bigass Rust projects.
        
        ```
        
        > cargo clean && time cargo build
        
        cargo build  713.80s user 91.57s system 706% cpu 1:53.97 total
        
        > cargo clean && time cargo build --release
        
        cargo build --release  1619.53s user 142.65s system 354% cpu 8:17.05
        total
        
        ```
        
         This is without incremental compilation. And it's not like you have to
         babysit a release build if you have a CI/CD system.
       
          hu3 wrote 3 hours 6 min ago:
          Interesting, M1 Max took 6 minutes and M1 Air took 11 minutes
          according to this article:
          
   URI    [1]: https://corrode.dev/blog/tips-for-faster-rust-compile-times/...
       
            Scuds wrote 1 hour 38 min ago:
             Oh yes, Apple hardware continues to improve, and the M4 Pro is
             still the single-threaded champion of anything under 300 W.
            
             FWIW, the last stage, where the binary is produced, takes the
             longest and is single-threaded; that's also where the largest
             difference between release and debug shows up.
       
        WalterBright wrote 7 hours 49 min ago:
        I suspect that it is because the borrow checker needs to do data flow
        analysis. DFA is normally done in the optimizer, and we all know that
        optimized builds are quite a bit slower, and that is due to DFA.
        
        DFA in the front end means slow.
       
          steveklabnik wrote 7 hours 40 min ago:
           As said many times in this thread already, the borrow checker
           virtually never accounts for a significant share of compile time.
          
          Also, the article goes into great depth as to what is happening
          there, and the borrow checker never comes up.
       
            WalterBright wrote 5 hours 52 min ago:
            That makes me curious as to how Rust does the DFA. What I do is
            construct the data flow equations as bit vectors, which are then
            solved iteratively until a solution is converged on.
            
            Doing this on the whole program eats a lot of memory.
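
             As a minimal sketch of that style (one bitmask per block, iterated
             to a fixpoint; illustrative only, not rustc's or any real
             compiler's code):
             
               // Classic iterative bit-vector liveness over a tiny CFG.
               struct Block {
                   uses: u64,        // variables read before being written here
                   defs: u64,        // variables written in this block
                   succ: Vec<usize>, // indices of successor blocks
               }
               
               fn liveness(blocks: &[Block]) -> Vec<(u64, u64)> {
                   // One (live_in, live_out) pair per block, solved iteratively.
                   let mut live = vec![(0u64, 0u64); blocks.len()];
                   let mut changed = true;
                   while changed {
                       changed = false;
                       for (i, b) in blocks.iter().enumerate() {
                           let out = b.succ.iter().fold(0, |acc, &s| acc | live[s].0);
                           let inn = b.uses | (out & !b.defs);
                           if (inn, out) != live[i] {
                               live[i] = (inn, out);
                               changed = true;
                           }
                       }
                   }
                   live
               }
               
               fn main() {
                   // Two-block loop: 0 -> 1 -> 0.
                   let blocks = [
                       Block { uses: 0b01, defs: 0b10, succ: vec![1] },
                       Block { uses: 0b10, defs: 0b01, succ: vec![0] },
                   ];
                   println!("{:?}", liveness(&blocks));
               }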
       
              steveklabnik wrote 5 hours 2 min ago:
              I'm not an expert on this part of the compiler, but what I can
              tell you is that Rust uses multiple forms of IR, and the IR that
              the borrow checker operates on ( [1] ) already encodes control
              flow. So doing that construction, at least, is just a normal part
              of the compiler, and isn't part of the borrow checker itself.
              
               However, in my understanding, it takes that IR, builds a borrow
               graph, computes liveness, and then does inference. A number of
               details have changed over the years, but it's vaguely
               Datalog-shaped.
              
   URI        [1]: https://rustc-dev-guide.rust-lang.org/mir/index.html
       
        dminik wrote 11 hours 2 min ago:
         One aspect that I find interesting is that Rust projects often seem
         deceptively small.
        
         First, dependencies don't translate easily into perceived size. In
         C++, dependencies on large projects are often vendored (or even not
         used at all). And so it is easy to look at your ~400,000-line codebase
         and go "it's slow, but there's a lot of code here after all".
        
         Second (and a much worse problem) are macros. They hit the same
         issue: a macro that expands to tens or hundreds of lines can very
         quickly take your 10,000-line project and turn it into a million-line
         behemoth.
        
        Third are generics. They also suffer the same problem. Every generic
        instantiation is eating your CPU.
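
         To make the monomorphization point concrete, here's a toy sketch (my
         own example, not from any particular crate); each concrete type
         stamps out its own copy of the function, so codegen cost scales with
         the number of instantiations rather than with source lines:
         
           // Each call with a new concrete T compiles to a separate copy:
           fn max_of<T: PartialOrd>(items: &[T]) -> Option<&T> {
               items.iter().fold(None, |best, x| match best {
                   Some(b) if b >= x => Some(b),
                   _ => Some(x),
               })
           }
           
           fn main() {
               let _ = max_of(&[1, 2, 3]);   // instantiates max_of::<i32>
               let _ = max_of(&[1.0, 2.0]);  // instantiates max_of::<f64>
               let _ = max_of(&["a", "b"]);  // instantiates max_of::<&str>
           }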
        
         But I do want to offer a bit of an excuse for Rust, because these are
         great features. They turn what would have taken 100,000 lines of C or
         25,000 lines of C++ into a few thousand lines of Rust.
        
         However, there is definitely overuse here, which makes the ecosystem
         seem slow. For instance, at work we use async-graphql. The library
         itself is great, but it's an absolute proc-macro hog. There are
         issues about its performance that have been open in the repository
         for years. You can literally feel the compiler getting slower with
         each type you add.
       
          1vuio0pswjnm7 wrote 3 hours 26 min ago:
          "They turn what would have taken 100000 lines of C or 25000 lines of
          C++ to a few 1000s lines of Rust."
          
           How do they do with smaller programs that would have taken far less
           than 100,000 lines of C?
          
           For whatever reasons, many authors choose to rewrite small C
           utilities, or create similar ones, using Rust.^1  Perhaps there are
           countless examples of 100,000-line C programs rewritten in Rust, but
           the ones I see continually submitted to HN, GitHub, and elsewhere
           are much smaller.
          
           How does Rust compilation time compare with C for smaller programs?
          
          NB. I am not asking about program size.  I am asking about
          compilation speed.
          
          (Also curious about resource usage, e.g. storage, memory.  For
          example, last time I measured, Rust compiler toolchain is about 2x as
          large as the GCC toolchain I am using.)
          
           1. Some of these programs, due to their size, seem unlikely to have
           undetected memory-safety issues in any language. Their size makes
           them relatively easy to audit, unlike a 100,000-line C program.
       
            dminik wrote 48 min ago:
            Well, unfortunately I don't have an exact answer for you, but I do
            have the next best thing: speculation.
            
             I had a quick look and found this article that compares a partial
             port of a C++ codebase (at around 17kloc). [1] The resulting Rust
             code apparently ended up slightly larger. This isn't entirely
             surprising to me despite the 25:1 figure from above. Certain
             things are much more macro-able than others (like
             de/serialization). Note that C++ is actually well positioned to
             level the field here with C++26 reflection.
             
             Despite the slightly larger size, the compile times seem roughly
             equal, with Rust scaling worse as the size increases. As a
             side-note, I did find this part relevant to some of my thoughts
             from above:
            
            > For example, proc macros would let me replace three different
            code generators
            
             Now, I know that C isn't C++. But I think that when restricting
             yourself to a mostly C-like subset (no proc-macros, mostly no
             generics outside Option/Result), the result would likely mirror
             the one above. Depending on the domain and the work needed, either
             language could be much shorter or longer. For example, anything
             involving text would likely be much shorter in Rust, as the
             stdlib has UTF-8 handling built in. On the other hand, writing
             custom data structures would likely favor C.
            
            I am interested to see if TRACTOR could help here. Being able to
            port C code to Rust and then observe the compile times would be
            pretty interesting.
            
   URI      [1]: https://quick-lint-js.com/blog/cpp-vs-rust-build-times/
   URI      [2]: https://www.darpa.mil/research/programs/translating-all-c-...
       
            lor_louis wrote 2 hours 16 min ago:
             I write a lot of C and Rust, and my personal experience is that
             for smaller C programs, Rust tends to have a slightly higher line
             count, mostly because it forces the user to handle every possible
             error.
            
            A truly robust C program will generally be much larger than the
            equivalent Rust program.
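
             For a flavor of what that forced handling looks like (a contrived
             sketch; the file name is made up):
             
               use std::fs;
               use std::io;
               
               // The error is part of the signature: the caller must either
               // propagate it with `?` or handle it explicitly.
               fn read_config(path: &str) -> Result<String, io::Error> {
                   let text = fs::read_to_string(path)?;
                   Ok(text.trim().to_owned())
               }
               
               fn main() {
                   match read_config("app.conf") {
                       Ok(cfg) => println!("loaded {} bytes", cfg.len()),
                       Err(e) => eprintln!("could not read config: {e}"),
                   }
               }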
       
          epage wrote 5 hours 18 min ago:
          > Second (and a much worse problem) are macros. They actually hit the
          same issue. A macro that expands to 10s or 100s of lines can very
          quickly take your 10000 line project and turn it into a million line
          behemoth.
          
           Recently, support was added to help people analyze this. See:
          
   URI    [1]: https://nnethercote.github.io/2025/06/26/how-much-code-does-...
       
          jvanderbot wrote 9 hours 57 min ago:
           > You can literally feel the compiler getting slower for each type
           you add.
          
          YSK: The compiler performance is IIRC exponential in the "depth" of
          types. And oh boy does GraphQL love their nested types.
       
            tukantje wrote 1 hour 28 min ago:
             Unironically why TypeScript is a perfect fit for GraphQL.
       
        fschuett wrote 13 hours 43 min ago:
        For deploying Rust servers, I use Spin WASM functions[1], so no Docker
        / Kubernetes is necessary. Not affiliated with them, just saying. I
        just build the final WASM binary and then the rest is managed by the
        runtime.
        
         Sadly, the compile time is just as bad, but I think in this case the
         allocator is the biggest culprit, since disabling optimization would
         degrade run-time performance. The Rust team should maybe look into
         shipping their own bundled allocator; "native" allocators are highly
         unpredictable.
        
   URI  [1]: https://www.fermyon.com
       
        jeden wrote 13 hours 52 min ago:
         Why does the Rust compiler create such BIG executables?!
       
        mellosouls wrote 13 hours 52 min ago:
        Related from a couple of weeks ago: [1] (Rust compiler performance; 287
        points, 261 comments)
        
   URI  [1]: https://news.ycombinator.com/item?id=44234080
       
        s_ting765 wrote 15 hours 34 min ago:
         OP could have skipped all this by compiling with a cache on the host
         system and copying the statically linked binary into the Docker image
         build.
       
        TZubiri wrote 15 hours 43 min ago:
        >Every time I wanted to make a change, I would:
        
        >Build a new statically linked binary (with
        --target=x86_64-unknown-linux-musl)
        >Copy it to my server
        >Restart the website
        
         Isn't it a basic C compiler feature that you can compile a file as an
         object file, and then link the objects into a single executable? Then
         you only recompile the file you changed.
        
        Not sure what I'm missing.
       
          pornel wrote 13 hours 21 min ago:
          That's how Rust works already.
          
           The problem has been created by Docker, which destroys all of the
           state. If this were C, you'd also end up losing all of the object
           files and rebuilding them every time.
       
            TZubiri wrote 7 hours 18 min ago:
             Nope, reread the article: Docker wasn't part of the problem, it's
             part of the 'solution' according to OP.
       
        feelamee wrote 15 hours 59 min ago:
        > Vim hangs when you open it
        
         You can enable word wrapping as a workaround (`:set wrap`).
         Lifehack: it can be hard to navigate such a file with just `h, j, k,
         l`, but you can use `gj`, `gk`, etc. With `g`, vim operates on visual
         lines; without it, on actual lines split by LF/CRLF.
       
          mmh0000 wrote 15 hours 15 min ago:
          With a little bit of vimrc magic you can make it transparent:
          
            "Make k/j up/down work more naturally by going to the next
          displayed line vs
            "going to the next logical line (for when word-wrapping is on):
             noremap k gk
             noremap j gj
             noremap <Up> gk
             noremap <Down> gj
             "Same as above, but for arrow keys in insert mode:
             inoremap <Up> <Esc>gka
             inoremap <Down> <Esc>gja
       
        amelius wrote 16 hours 8 min ago:
        Meanwhile, other languages have a JIT compiler which compiles code as
        it runs. This would be great for development even if it turns out to be
        slower overall.
       
          akkad33 wrote 15 hours 51 min ago:
           Actually, JITs can be faster than AOT compilation because they can
           optimize for the architecture they are running on. There were claims
           that Julia, a JIT-compiled language, can beat C in some benchmarks.
       
            amelius wrote 15 hours 14 min ago:
            In fact, JITs can be faster because they can specialize code, i.e.
            make optimizations based on live data.
       
        ac130kz wrote 20 hours 0 min ago:
         tl;dr: as always, don't use musl if you want performance and
         compatibility.
       
          ac130kz wrote 8 hours 40 min ago:
          Some "smart" folks even downvote this advice. Yeah, I've seen
          articles on musl's horrible performance back in 2017-2018, and
          apparently it still holds, yet I get a downvote.
       
            9rx wrote 4 hours 55 min ago:
            You make it sound like a "downvote" has meaning or something.
            
            It doesn't.
       
        leoh wrote 22 hours 18 min ago:
        tl;dr: it’s slow because it finds far more bugs before runtime than
        literally any other mainstream compiled language
       
        edude03 wrote 23 hours 41 min ago:
         First time someone I know in real life has made it to the HN front
         page (hey sharnoff, congrats). Anyway -
         
         I think this post (accidentally?) conflates two different sources of
         slowness:
         
         1) Building in Docker
         2) The compiler being "slow"
        
         They mention they could use bind mounts yet want a clean build
         environment; personally, I think that may be misguided. Rust with
         incremental builds is actually pretty fast, and the time you lose
         fighting Docker's caching would likely be made up in build times,
         since you'd generally build and deploy far more often than you'd
         fight the cache (and in that case you'd delete the cache and build
         from scratch anyway).
        
         So - for developers who build Rust containers, I highly recommend
         either using cache mounts or building outside the container and
         adding just the binary to the image.
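
         For the cache-mount route, roughly this shape (BuildKit syntax; the
         crate and binary name `app` are placeholders):
         
           # syntax=docker/dockerfile:1
           FROM rust:1-slim AS build
           WORKDIR /src
           COPY . .
           # Cache mounts persist across builds but vanish after the RUN,
           # so the binary has to be copied out inside the same step.
           RUN --mount=type=cache,target=/usr/local/cargo/registry \
               --mount=type=cache,target=/src/target \
               cargo build --release && cp target/release/app /app
           
           FROM debian:stable-slim
           COPY --from=build /app /usr/local/bin/app
           CMD ["app"]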
        
         2) The compiler being slow: having experienced OCaml, Go, and Scala
         for comparison, the Rust compiler is slower than Go's and OCaml's,
         sure, but for non-interactive (i.e., not REPL-like) workflows this
         tends not to matter in my experience. Realistically, using
         incremental builds in dev mode takes seconds; then, once the code is
         working, you push to CI, at which point you can often accept the
         (worst-case?) scenario that it takes 20 minutes to build your
         container, since you're free to go do other things.
        
         So while I appreciate the deep research and great explanations, I
         don't think the Rust compiler is actually slow, just slower than what
         people might be used to coming from TypeScript or Go, for example.
       
        jmyeet wrote 1 day ago:
        Early design decisions favored run-time over compile-time [1]:
        
        > * Borrowing — Rust’s defining feature. Its sophisticated pointer
        analysis spends compile-time to make run-time safe.
        
        > * Monomorphization — Rust translates each generic instantiation
        into its own machine code, creating code bloat and increasing compile
        time.
        
        > * Stack unwinding — stack unwinding after unrecoverable exceptions
        traverses the callstack backwards and runs cleanup code. It requires
        lots of compile-time book-keeping and code generation.
        
        > * Build scripts — build scripts allow arbitrary code to be run at
        compile-time, and pull in their own dependencies that need to be
        compiled. Their unknown side-effects and unknown inputs and outputs
        limit assumptions tools can make about them, which e.g. limits caching
        opportunities.
        
        > * Macros — macros require multiple passes to expand, expand to
        often surprising amounts of hidden code, and impose limitations on
        partial parsing. Procedural macros have negative impacts similar to
        build scripts.
        
         > * LLVM backend — LLVM produces good machine code, but runs
         relatively slowly.
         
         > * Relying too much on the LLVM optimizer — Rust is well-known for
         generating a large quantity of LLVM IR and letting LLVM optimize it
         away. This is exacerbated by duplication from monomorphization.
        
        > * Split compiler/package manager — although it is normal for
        languages to have a package manager separate from the compiler, in Rust
        at least this results in both cargo and rustc having imperfect and
        redundant information about the overall compilation pipeline. As more
        parts of the pipeline are short-circuited for efficiency, more metadata
        needs to be transferred between instances of the compiler, mostly
        through the filesystem, which has overhead.
        
        > * Per-compilation-unit code-generation — rustc generates machine
        code each time it compiles a crate, but it doesn’t need to — with
        most Rust projects being statically linked, the machine code isn’t
        needed until the final link step. There may be efficiencies to be
        achieved by completely separating analysis and code generation.
        
        > * Single-threaded compiler — ideally, all CPUs are occupied for the
        entire compilation. This is not close to true with Rust today. And with
        the original compiler being single-threaded, the language is not as
        friendly to parallel compilation as it might be. There are efforts
        going into parallelizing the compiler, but it may never use all your
        cores.
        
        > * Trait coherence — Rust’s traits have a property called
        “coherence”, which makes it impossible to define implementations
        that conflict with each other. Trait coherence imposes restrictions on
        where code is allowed to live. As such, it is difficult to decompose
         Rust abstractions into small, easily-parallelizable compilation units.
        
        > * Tests next to code — Rust encourages tests to reside in the same
        codebase as the code they are testing. With Rust’s compilation model,
        this requires compiling and linking that code twice, which is
        expensive, particularly for large crates.
        
        [1] 
        
   URI  [1]: https://www.pingcap.com/blog/rust-compilation-model-calamity/
       
        rednafi wrote 1 day ago:
        I’m glad that Go went the other way around: compilation speed over
        optimization.
        
        For the kind of work I do — writing servers, networking, and glue
        code — fast compilation is absolutely paramount. At the same time, I
        want some type safety, but not the overly obnoxious kind that won’t
        let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the
        price. Not having to deal with sigil soup is another plus point.
        
        I guess Google’s years of experience led to the conclusion that, for
        software development to scale, a simple type system, GC, and wicked
        fast compilation speed are more important than raw runtime throughput
         and semantic correctness. Given the amount of networking and
         large-scale infrastructure software written in Go, I think they
         absolutely
        nailed it.
        
        But of course there are places where GC can’t be tolerated or
        correctness matters more than development speed. But I don’t work in
        that arena and am quite happy with the tradeoffs that Go made.
       
          liampulles wrote 7 hours 54 min ago:
          As the story goes, a couple of Google developers designed Go while
          waiting for one of their C++ projects to compile.
       
            zenlot wrote 4 hours 27 min ago:
             If we wanted fast compile times only, we'd be using Pascal. No
             need for Go. In fact, if there were ever a case for me to use
             Go, I'd rather go for Pascal or Delphi. But there isn't; it just
             doesn't fit anywhere.
       
              rednafi wrote 2 hours 44 min ago:
              I understand the sentiment as I feel the same about Rust. I’d
              rather raw dog C++ than touch Rust. Doesn’t make sense and I
              could come up with some BS like you did and make my case anyway.
       
          frollogaston wrote 8 hours 28 min ago:
          Same but with Python and NodeJS cause I'm doing less
          performance-critical stuff. Dealing with type safety and slow builds
          would cost way more than it's worth.
       
            rednafi wrote 6 hours 42 min ago:
            Python and NodeJS bring a whole lot of other problems. But yeah at
            a smaller scale these languages are faster to work with.
            
            At the same time, I have worked at places where people had to
            rewrite major parts of their backend in other languages because
            Python/Node was no longer sufficient.
       
              frollogaston wrote 6 hours 25 min ago:
              I'd have to see it, cause rewrites happen all the time. We had a
              system written in C++, then rewritten in Golang because C++ was
              supposedly not good enough, then rewritten back into C++.
       
          danielscrubs wrote 14 hours 23 min ago:
           One day I would like to just change Pascal's syntax a bit to be
           Pythonic and blow the socks off junior and Go developers.
       
            rednafi wrote 8 hours 51 min ago:
            Sounds like the guy who wanted to write curl in a weekend. /s
       
            the_sleaze_ wrote 9 hours 50 min ago:
            That's what they did to Erlang with Elixir and now there are a lot
            of people saying it's the Greatest Of All Time.
            
            I'd be interested in this project if you do decide to pursue it.
       
          paldepind2 wrote 14 hours 44 min ago:
          > I guess Google’s years of experience led to the conclusion that,
          for software development to scale, a simple type system, GC, and
          wicked fast compilation speed are more important than raw runtime
          throughput and semantic correctness.
          
          I'm a fan of Go, but I don't think it's the product of some awesome
          collective Google wisdom and experience. Had it been, I think they'd
          have come to the conclusion that statically eliminating null pointer
          exceptions was a worthwhile endeavor, just to mention one thing.
           Instead, I think it's just the product of some people at Google
           making a language the way they wanted to.
       
            melodyogonna wrote 10 hours 48 min ago:
            But those people at Google were veteran researchers who wanted to
            make a language that could scale for Google's use cases; these
            things are well documented.
            
            For example, Ken Thompson has said his job at Google was just to
            find things he could make better.
       
              nine_k wrote 8 hours 21 min ago:
               They also built a language that can be learned in a weekend
               (well, now two) and is small enough for a fresh grad hire to
               learn on the job.
               
               Go has a very low barrier to entry, but also a relatively low
               ceiling. The proliferation of codegen tools for Go is a
               testament to its limited expressive power.
              
              It doesn't mean that Go didn't hit a sweet spot. For certain
              tasks, it very much did.
       
          silverwind wrote 15 hours 22 min ago:
           You can have the best of both worlds: a fast but sloppy compiler,
           and slow but thorough checkers/linters. I think it's ideal that
           way, but Rust seems to have chosen to needlessly combine both
           actions into one.
       
          mike_hearn wrote 16 hours 53 min ago:
          > fast compilation is absolutely paramount. At the same time, I want
          some type safety, but not the overly obnoxious kind that won’t let
          me sloppily prototype. Also, the GC helps
          
          Well, that point in the design space was already occupied by Java
          which also has extremely fast builds. Go exists primarily because the
          designers wanted to make a new programming language, as far as I can
          tell. It has some nice implementation aspects but it picked up its
          users mostly from the Python/Ruby/JS world rather than C/C++/Java,
          which was the original target market they had in mind (i.e. Google
          servers). Scripting language users were in the market for a language
          that had a type system but not one that was too advanced, and which
           kept the scripting "feel" of very fast turnaround times. But not
           Java, because that was old and unhip, and all the interesting
           intellectual space (writing libraries, giving conference talks) was
           already camped on.
       
            frollogaston wrote 8 hours 22 min ago:
             Golang having solid n:m green threading from day 1 was its big
             deal. Java has had no good way to do IO-heavy multitasking,
             leading to all
            those async/promise frameworks that jack up your code. I cannot
            even read the Java code we have at work. Java recently got virtual
            threads, but even if that fixes the problem, it'll be a while
            before things change to that. Python had the same problem before
            asyncio. This isn't even a niche thing, your typical web backend
            needs cooperative multitasking.
            
            I'm also not fond of any of the Golang syntax, especially not
            having exceptions. Or if you want explicit errors, fine, at least
            provide nice unwrap syntax like Rust does.
       
              cogman10 wrote 6 hours 6 min ago:
              Java 21 has n:m green threads, but with caveats.  Java 24 removed
              a major source of the caveats.
       
                computably wrote 3 hours 23 min ago:
                I'm sure you're already aware but for those who aren't - Java
                21 was released in 2023. Golang 1.0 was released in 2012.
       
                frollogaston wrote 3 hours 48 min ago:
                Yeah, I'm hoping that fixes things eventually. We still can't
                use that at work, and even once we can, there's 10yo code still
                using 3 different old ways of async.
       
              aaronblohowiak wrote 6 hours 39 min ago:
               By FAR my biggest complaint about Golang was null instead of
               Option. It could have been special-cased like their slice and
               map types and would have been so, so, so much better than nil
               checks, IMHO. Really, a big miss.
       
            rednafi wrote 9 hours 1 min ago:
            Java absolutely does not fill in the niche that Go targeted. Even
            without OO theology, JVM applications are heavy and memory
            intensive. Plus the startup time of the VM alone is a show stopper
            for the type of work I do. Also yes, Java isn’t hip and you
            couldn’t pay me to write it anymore.
       
            k__ wrote 10 hours 45 min ago:
            "it picked up its users mostly from the Python/Ruby/JS world rather
            than C/C++/Java"
            
            And with the increasing performance of Bun, it seems that Go is
            about to get a whooping by JS.
            
            (Which isn't really true, as most of the Bun perf comes from Zig,
            but they are targeting JS Devs.)
       
              rednafi wrote 8 hours 55 min ago:
               Runtimes like Bun and Deno, or type systems like TypeScript,
               don't change the fact that it's still JS underneath — a
               crappily designed language that should've never left throwaway
               frontend code.
              
              None of these runtimes make JS anywhere even close to
              single-threaded Go perf, let alone multithreaded (goroutine)
              perf.
       
                frollogaston wrote 8 hours 14 min ago:
                JS is perfectly designed for what it does, frontend and
                non-CPU-intensive backend code. It's never going to reach
                singlethreaded Golang perf.
       
                  rednafi wrote 6 hours 45 min ago:
                  JavaScript was never designed for non-browser usage. It’s
                  the community’s unquenchable thirst to use the same
                  language everywhere that brought us here.
       
                    frollogaston wrote 6 hours 30 min ago:
                    NodeJS about page makes its case pretty succinctly, JS was
                    a good fit for IO-bound concurrent backends because of the
                    event loop. This was at a time when no other major
                    language/runtime had a good answer for this unless you
                    count Erlang. Plenty of people using it didn't even come
                    from the web frontend side, myself included.
                    
                    npm was also maybe the first default package manager that
                    "just works," unlike Python or browser JS.
       
                      AgentME wrote 4 hours 41 min ago:
                      A lot of people don't realize NodeJS was made because the
                      creator wanted to make a runtime dedicated to
                      asynchronous IO, didn't want to use a language with a
                      pre-existing standard library and ecosystem built around
                      synchronous IO, and realized that JS almost uniquely fit
                      the bill. It was not built for the purpose of letting
                      people who only knew JS use JS on the server.
       
                        frollogaston wrote 3 hours 57 min ago:
                        This tracks, cause the NodeJS website also doesn't
                        mention anything about people already knowing JS. And
                        imports worked completely differently between browser
                        and Node, so they were pretty separate things at least
                        at the start.
       
            loudmax wrote 11 hours 49 min ago:
            As a system administrator, I vastly prefer to deploy Go programs
            over Java programs.  Go programs are typically distributed as a
            single executable file with no reliance on external libraries.    I
            can usually run `./program -h` and it tells me about all the flags.
            
            Java programs rely on the JVM, of which there are many variants. 
            Run time options are often split into multiple XML files -- one
            file for logging, another to control the number of threads and so
            on.  Checking for the running process using `ps | grep` yields some
            long line that wraps the terminal window, or doesn't fit neatly
            into columns shown in `htop` or `btop`.
            
            These complaints are mostly about conventions and idioms, not the
            languages themselves.  I appreciate that the Java ecosystem is
            extremely powerful and flexible.  It is possible to compile Java
            programs into standalone binaries, though I rarely see these in
            practice.  Containers can mitigate the messiness, and that helps,
            up until the point when you need to debug some weird runtime issue.
            
            I wouldn't argue that people should stop programming in Java, as
            there are places where it really is the best choice. For example
            deep legacy codebases, or where you need the power of the JVM for
            dynamic runtime performance optimizations.
            
            There are a lot of places where Go is the best choice (eg. simple
            network services, CLI utilities), and in those cases, please,
            please deploy simple Go programs.  Most of the time, developers
            will reach for whatever language they're most comfortable with.
            
            What I like most about Go is how convenient it is, by default. This
            makes a big difference.
       
            rsanheim wrote 16 hours 13 min ago:
            Java still had slow startup and warmup time circa 2005-2007, on the
            order of 1-3 seconds for hello world and quite a bit more for real
            apps. That is horrendous for anything CLI based.
            
            And you left out classloader/classpath/JAR dependency hell, which
            was horrid circa late 90s/early 2000s...and I'm guessing was still
            a struggle when Go really started development.    Especially at
            Google's scale.
            
            Don't get me wrong, Java has come a long way and is a fine language
            and the JVM is fantastic. But the java of 2025 is not the same as
            mid-to-late 2000s.
       
              mike_hearn wrote 12 hours 17 min ago:
              Maybe so, although I don't recall it being that bad.
              
              But Go wasn't designed for CLI apps. It was designed for writing
              highly multi-threaded servers at Google, according to the
              designers, hence the focus on features like goroutines. And in
              that context startup time just doesn't matter. Startup time of
              servers at Google was (in that era) dominated by cluster
              scheduling, connecting to backends, loading reference data and so
              on. Nothing that a change in programming language would have
              fixed.
              
              Google didn't use classloader based frameworks so that also
              wasn't relevant.
       
                frollogaston wrote 8 hours 3 min ago:
                Golang is frequently used for CLIs, even if it wasn't designed
                for that exactly
       
                  zenlot wrote 4 hours 30 min ago:
                  If you want to write CLI tool, you use Rust.
       
                    rednafi wrote 2 hours 48 min ago:
                    Who said that? Go is pretty amazing to whip up great CLIs
                    quickly.
       
                    frollogaston wrote 4 hours 3 min ago:
                    Or you use Golang? A lot of the time there isn't a big
                    difference.
       
          mark38848 wrote 19 hours 16 min ago:
          What are obnoxious types? Types either represent the data correctly
          or not. I think you can force types to shut up the compiler in any
          language including Haskell, Idris, PureScript...
       
            throwawaymaths wrote 11 hours 26 min ago:
            > Types either represent the data correctly or not.
            
             No. Two types can represent the same payload, but one might be a
             simple structure while the other is three or twenty nested
             type/template abstractions deep, created by a proc macro, so you
             can't easily chase down how it was made.
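
             Something like this, as a deliberately silly example:
             
               #![allow(dead_code)]
               // Same payload either way; wildly different type-level depth
               // for the compiler (and the reader) to chew through:
               type Simple = Vec<u8>;
               type Wrapped =
                   std::sync::Arc<std::sync::Mutex<Option<Box<Result<Vec<u8>, String>>>>>;
               fn main() {}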
       
            ratorx wrote 15 hours 53 min ago:
            This might work for the types you create, but what about all the
            code written in the language that expects the “proper”
            structure?
            
            > Types either represent the data or not
            
             This is definitely required, but it is only really the first
             step. Where types get really useful is when you need to change
             them later on. The key aspects here are how easily you can change
             them, and how much the language tooling can help.
       
            Mawr wrote 18 hours 13 min ago:
            I'd say you already get like 70% of the benefit of a type system
            with just the basic "you can't pass an int where string is
            expected". Being able to define your own types based on the basic
            ones, like "type Email string", so it's no longer possible to pass
            a "string" where "Email" is expected gets you to 80%. Add Result
            and Optional types (or arguably just sum types if you prefer) and
            you're at 95%. Anything more and you're pushing into diminishing
            returns.
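
             In Rust terms, that 80% step is the newtype pattern; a minimal
             sketch:
             
               // A plain String can no longer be passed where an Email is
               // expected; the wrapper exists purely at the type level.
               struct Email(String);
               
               fn send_welcome(to: &Email) {
                   println!("sending welcome mail to {}", to.0);
               }
               
               fn main() {
                   let addr = Email("user@example.com".to_string());
                   send_welcome(&addr);
                   // send_welcome(&"oops".to_string()); // compile error
               }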
       
              hgomersall wrote 16 hours 16 min ago:
              Well it depends what you're doing. 95% is like, just your opinion
              man. The rust type system allows, in many cases, APIs that you
              cannot use wrongly, or are highly resistant to incorrect usage,
              but to do that requires careful thinking about. To be clear, such
              APIs are just as relevant internally to a project as externally
              if you want to design a system that is long term maintainable and
              robust and I would argue is the point when the type system starts
              to get really useful (rather than diminishing returns).
       
                rednafi wrote 8 hours 46 min ago:
                > The rust type system allows, in many cases, APIs that you
                cannot use wrongly, or are highly resistant to incorrect usage,
                but to do that requires careful thinking about
                
                I need none of that guarantee and all of the compilation speed
                along with a language where juniors in my team can contribute
                quickly. Different problem space.
       
          ode wrote 1 day ago:
          Is Go still in heavy use at Google these days?
       
            fsmv wrote 11 hours 16 min ago:
            Go has never been in heavy use at Google
       
              melodyogonna wrote 10 hours 42 min ago:
              Isn't it heavily used in Google Cloud?
       
            hu3 wrote 22 hours 54 min ago:
            What would they use for networking if not Go?
       
              homebrewer wrote 13 hours 35 min ago:
               Last time I paid any attention to Google's high-level
               conference presenters (like Titus Winters), they hardly used Go
               at all. Judging by the sibling comment, this hasn't changed
               much. For some reason people think that half of Google is
               written in Go at this point, when in reality, if you listen to
               what they themselves are saying, it's 99% C++ and Java, with a
               tiny bit of Python and other languages where it makes sense.
              
              It's just a project from a few very talented people who happen to
              draw their salary from Google's coffers.
       
                fireflash38 wrote 7 hours 54 min ago:
                K8s isn't entirely in go?
       
                  frollogaston wrote 6 hours 18 min ago:
                  They don't really use K8S internally
       
              surajrmal wrote 17 hours 16 min ago:
              C++ and Java. Go is still used, but it's never caught up to the
              big two.
       
                frollogaston wrote 7 hours 57 min ago:
                And probably more Java than C++
       
          galangalalgol wrote 1 day ago:
           That is exactly what Go was meant for, and there is nothing better
           than picking the right tool for the job. The only footgun I have
           seen people run into is that parallelism with mutable shared state
           through channels can be subtly and exploitably wrong. I don't feel
           like most people use channels like that though? I use Rust because
           that isn't the job I have. I usually have to cram slow algorithms
           into slower hardware, and the problems are usually almost but not
           quite embarrassingly parallel.
       
            bjackman wrote 16 hours 7 min ago:
            I think a lot of the materials that the Go folks put out in the
            early days encourage a very channel-heavy style of programming that
            leads to extremely bad places.
            
            Nowadays the culture seems to have evolved a bit. I now go into
            high alert mode if I see a channel cross a function boundary or a
            goroutine that wasn't created via errgroup or similar.
            
            People also seem to have chilled out about the "share by
            communicating" thing. It's usually better to just use a mutex and I
            think people recognise that now.
       
              rednafi wrote 8 hours 54 min ago:
               This is true. I have been writing Go for years and still think
               channels are a bit too low-level. They probably would've
               benefited from a different layer of abstraction.
       
        ozgrakkurt wrote 1 day ago:
         The Rust compiler is very, very fast, but the language has too many
         features.
         
         The slowness is because everyone has to write code with generics and
         macros in Java Enterprise style in order to show they are smart with
         Rust.
         
         This is really sad to see, but most libraries abuse codegen features
         really hard.
         
         You have to write a lot of things manually if you want fast
         compilation in Rust.
         
         Compilation speed just doesn't seem to be a priority in general with
         the community.
       
          skeezyboy wrote 11 hours 56 min ago:
          >Compilation speed of code just doesn’t seem to be a priority in
          general with the community.
          
          they have only one priority, memory safety (from a certain class of
          memory bugs)
       
          aquariusDue wrote 1 day ago:
           Yeah, for application code, in my experience, the more I stick to
           the dumb way of doing things, the less I fight the borrow checker,
           and I hit fewer trait issues.
          
          Refactoring seems to take about the same time too so no loss on that
          front. After all is said and done I'm just left with various logic
          bugs to fix which is par for the course (at least for me) and a sense
          of wondering if I actually did everything properly.
          
           I suppose maybe two years from now we'll have people suggesting
           that we avoid generics and temper macro usage. These days most
           people have heard the advice about not stressing over cloning and
           unwrapping (though expect is much better, imo) on the first pass,
           more or less.
          
          Something something shiny tool syndrome?
       
        cratermoon wrote 1 day ago:
         Some code that can make Rust compilation pathologically slow is
         complex const expressions. Because the compiler can evaluate a subset
         of expressions at compile time[1], a complex expression can take an
         unbounded amount of time to evaluate. The long_running_const_eval
         lint will by default abort the compilation if the evaluation takes
         too long.
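
         A contrived sketch of the failure mode (const evaluation runs an
         interpreter inside the compiler):
         
           const fn fib(n: u64) -> u64 {
               if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
           }
           
           // Fine: a few thousand interpreter steps at compile time.
           const SMALL: u64 = fib(20);
           
           // Uncommenting this asks rustc for on the order of 2^40 calls;
           // the long_running_const_eval lint aborts the build instead.
           // const HUGE: u64 = fib(40);
           
           fn main() { println!("{SMALL}"); }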
        
        1
        
   URI  [1]: https://doc.rust-lang.org/reference/const_eval.html
       
        duped wrote 1 day ago:
        A lot of people are replying to the title instead of the article.
        
        > To get your Rust program in a container, the typical approach you
        might find would be something like:
        
        If you have `cargo build --target x86_64-unknown-linux-musl` in your
        build process you do not need to do this anywhere in your Dockerfile.
        You should compile and copy into /sbin or something.
        
         If you really want to build in a Docker image, I would suggest using
         `cargo --target-dir=/target ...`, then running with `docker run
         --mount type=bind,...`, and then copying out of the bind mount into
         /bin or wherever.
       
          remram wrote 10 hours 30 min ago:
          The author dismissed that option saying "I value that docker build
          can have a clean environment every time", so this is self-inflicted.
       
          edude03 wrote 23 hours 51 min ago:
          Many docker users develop on arm64-darwin and deploy to x86_64
          (g)libc, so I don't think that'll work generally.
       
            duped wrote 22 hours 21 min ago:
            Those users are wrong :shrug:
       
        Devasta wrote 1 day ago:
        Slow compile times are a feature, get to make a cuppa.
       
          zozbot234 wrote 1 day ago:
          > Slow compile times are a feature
          
          xkcd is always relevant:
          
   URI    [1]: https://xkcd.com/303/
       
            randomNumber7 wrote 1 day ago:
             On the other hand, you go insane if you try to work in a way
             where you do something useful during the 5-10 minute compile
             times you often have with C++ projects.
             
             When I had to deal with this I would just open the newspaper and
             read an article in front of my boss.
       
              PhilipRoman wrote 11 hours 0 min ago:
              Slow compile times really mess with your brain. When I wanted to
              test two different solutions, I would keep multiple separate
              clones (each one takes about 80GB, mind you) and do the manual
              equivalent of branch prediction by compiling both, just in case I
              needed the other one as well.
       
        juped wrote 1 day ago:
        I don't think rustc is that slow. It's usually cargo/the dozens of
        crates that make it take a long time, even if you've set up a cache and
        rustc is doing nothing but hitting the cache.
       
        smcleod wrote 1 day ago:
         I've got to say, when I come across an open source project and
         realise it's in Rust, I flinch a bit, knowing how incredibly slow the
         build process is. It's certainly been one of the deterrents to
         learning it.
       
        kenoath69 wrote 1 day ago:
         Where is Cranelift mentioned?
         
         My 2c on this: I nearly ditched Rust for game development due to the
         compile times. In digging, it turned out that LLVM is very slow
         regardless of opt level. Indeed, it's what the Jai devs have been
         saying.
         
         So Cranelift might be relevant for OP. I will shill it endlessly; it
         took my game from 16 seconds to 4 seconds. Incredible work, Cranelift
         team.
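
         Setup is nightly-only for now; this is my sketch of the documented
         config, so check the rustc_codegen_cranelift README for the current
         incantation:
         
           # rustup component add rustc-codegen-cranelift-preview \
           #     --toolchain nightly
           
           # .cargo/config.toml
           [unstable]
           codegen-backend = true
           
           # Cargo.toml
           [profile.dev]
           codegen-backend = "cranelift"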
       
          BreakfastB0b wrote 1 day ago:
           I participated in the most recent Bevy game jam, and the community
           has a new tool that came out of Dioxus called subsecond which, as
           the name suggests, provides sub-second hot reloading of systems. It
           made prototyping very pleasant, especially when iterating on UI.
          
   URI    [1]: https://github.com/TheBevyFlock/bevy_simple_subsecond_system
       
          lll-o-lll wrote 1 day ago:
          Wait. You were going to ditch rust because of 16 second build times?
       
            Mawr wrote 17 hours 37 min ago:
            "Wait. You were going to ditch subversion for git because of 16
            second branch merge times?"
            
            Performance matters.
       
            kenoath69 wrote 21 hours 38 min ago:
            Pulling out Instagram 100 times in every workday, yes, it's a total
            disaster
       
              johnisgood wrote 13 hours 4 min ago:
              It may also contribute to smoking. :D Or (over-)eating... or
              whatever your vice is.
       
            metaltyphoon wrote 1 day ago:
             Over time that adds up when your coding consists of a REPL-like
             workflow.
       
            sarchertech wrote 1 day ago:
             16 seconds is infuriating for something that needs to be manually
             tested, like "does this jump feel too floaty?"
            
            But it’s also probable that 16 seconds was fairly early in
            development and it would get much worse from there.
       
          norman784 wrote 1 day ago:
           Nice. I checked a while ago and there was no support for macOS
           aarch64, but it seems that now it is supported.
       
          jiehong wrote 1 day ago:
           I think that's what the Zig team is also doing to allow very fast
           build times: removing LLVM.
       
            norman784 wrote 1 day ago:
             Yes, the Zig author commented[0] on that a while ago.
            
            [0]
            
   URI      [1]: https://news.ycombinator.com/item?id=44390972
       
        gz09 wrote 1 day ago:
        Unfortunately, removing debug symbols in most cases isn't a good/useful
        option
       
          magackame wrote 1 day ago:
          What "most" cases are you thinking of? Also don't forget that a
          binary that in release weights 10 MB, when compiled with debug
          symbols can weight 300 MB, which is way less practical to distribute.
       
        o11c wrote 1 day ago:
        TL;DR `async` considered harmful.
        
        For all the C++ laughing in this thread, there's really only one thing
        that makes C++ slow - non-`extern` templates - and C++ gives you a lot
        more space to speed them up than Rust does.
       
          int_19h wrote 1 day ago:
          C++ also has async these days.
          
          As for templates, I can't think of anything about them that would
          speed up things substantially wrt Rust aside from extern template and
          manually managing your instantiations in separate .cpp files. Since
          otherwise it's fundamentally the same problem - recompiling the same
          code over and over again because it's parametrized with different
          types every time.
          
          Indeed, out of the box I would actually expect C++ to do worse
          because a C++ header template has potentially different environment
          in every translation unit in which that header is included, so
          without precompiled headers the compiler pretty much has to assume
          the worst...
       
            sgt wrote 8 hours 28 min ago:
            What happened with Zig and async? Last I heard they might never
            implement it.
       
        OtomotO wrote 1 day ago:
        It's not. It's just doing way more work than many other compilers, due
        to a sane type system.
        
         Personally I don't care anymore, since I do hotpatching: [1] Zig is
         faster, but then again, Zig isn't memory safe, so personally I don't
         care. It's an impressive language, I love the syntax, the simplicity.
         But I don't trust myself to keep all the memory-relevant invariants
         in my head anymore as I used to many years ago. So Zig isn't for me;
         I'm simply not the target audience.
        
   URI  [1]: https://lib.rs/crates/subsecond
       
        aappleby wrote 1 day ago:
         You had a functional and minimal deployment process (compile, copy,
         restart) and now you have...
       
          canyp wrote 20 hours 26 min ago:
          ...Kubernetes.
          
          Damn, this makes such a great ad.
       
        charcircuit wrote 1 day ago:
        Why doesn't the Rust ecosystem optimize around compile time? It seems a
        lot of these frameworks and libraries encourage doing things which are
        slow to compile.
       
          kzrdude wrote 9 hours 16 min ago:
           Lots of developers in the ecosystem avoid proc macros, for example.
           But going as far as avoiding monomorphization and generics is not
           that widespread.
       
          int_19h wrote 1 day ago:
          It would be more accurate to say that idiomatic Rust encourages doing
          things which are slow to compile: lots of small generic functions
          everywhere. And the most effective way to speed this up is to avoid
          monomorphization by using RTTI to provide a single generic compiled
           implementation that can be reused for different types, like what
           Swift does with generics across module boundaries. But this is less
          efficient at runtime because of all the runtime checks and
          computations that now need to be done to deal with objects of
          different sizes etc, many direct or even inlined calls now become
          virtual etc.
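
           Roughly the difference between these two, as a toy sketch:
           
             // Monomorphized: one copy of machine code per concrete I,
             // statically dispatched and inlinable.
             fn sum_generic<I: Iterator<Item = u32>>(iter: I) -> u32 {
                 iter.sum()
             }
             
             // Type-erased: compiled once, but every `next` is a virtual call.
             fn sum_dyn(iter: &mut dyn Iterator<Item = u32>) -> u32 {
                 let mut total = 0;
                 while let Some(x) = iter.next() {
                     total += x;
                 }
                 total
             }
             
             fn main() {
                 let v = vec![1u32, 2, 3];
                 assert_eq!(sum_generic(v.iter().copied()), 6);
                 assert_eq!(sum_dyn(&mut v.iter().copied()), 6);
             }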
          
          Here's a somewhat dated but still good overview of various approaches
          to generics in different languages including C++, Rust, Swift, and
          Zig and their tradeoffs:
          
   URI    [1]: https://thume.ca/2019/07/14/a-tour-of-metaprogramming-models...
       
          nicoburns wrote 1 day ago:
          It's starting to, but a lot of people are using Rust because they
          need (or want) the best possible runtime performance, so that tends
          to be prioritised a lot of the time.
       
          steveklabnik wrote 1 day ago:
          The ecosystem is vast, and different people have different
          priorities. Simple as that.
       
        taylorallred wrote 1 day ago:
        So there's this guy you may have heard of called Ryan Fleury who makes
        the RAD debugger for Epic. The whole thing is made with 278k lines of C
        and is built as a unity build (all the code is included into one file
        that is compiled as a single translation unit). On a decent windows
        machine it takes 1.5 seconds to do a clean compile. This seems like a
        clear case-study that compilation can be incredibly fast and makes me
        wonder why other languages like Rust and Swift can't just do something
        similar to achieve similar speeds.
       
          barchar wrote 26 min ago:
          Rust does do this. The unit of compilation is the whole crate and the
          compiler creates appropriately sized chunks of LLVM IR to balance
          duplicate work and incrementality.
          
           Rust is generally faster to compile on a per-source-line basis than
           C++. But Rust projects compile all their dependencies as well.
       
          motorest wrote 13 hours 6 min ago:
          > This seems like a clear case-study that compilation can be
          incredibly fast (...)
          
          Have you tried troubleshooting a compiler error in a unity build?
          
          Yeah.
       
            moffkalast wrote 6 hours 27 min ago:
            It compiles in 2 seconds! Does it run? No, but it was fast!
       
          john-h-k wrote 14 hours 52 min ago:
          My C compiler, which is pretty naive and around ~90,000 lines, can
          compile _itself_ in around 1 second. Clang can do it in like 0.4.
          
          The simple truth is a C compiler doesn’t need to do very much!
       
          TZubiri wrote 15 hours 38 min ago:
           I guess you can do that, but if for some reason you needed to
           compile separately (suppose you sell the system to a third-party
           client, and they need to modify module 1, module 2, and the main
           loop), it would be pretty trivial to remove some #include
           "module3.c" lines and add some -o module3 options to the compiler.
           Right?
           
           I'm not sure what Rust or Docker have to do with this basic issue;
           it just feels like young blood attempting 2020 solutions before
           exploring 1970 solutions.
       
          weinzierl wrote 15 hours 55 min ago:
          This is sometimes called amalgamation and you can do it in Rust as
          well, either manually or with tools. The point is that apart from
          very specific niches it is just not a practical approach.
          
          It's not that it can't be done but that it usually is not worth the
          hassle and our goal should be for compilation to be fast despite not
          everything being in one file.
          
          Turbo Pascal is a prime example of a compiler that won the market
          not least because of its - for the time - outstanding compilation
          speed.
          
          In the same vein, a language can be designed for fast compilation.
          Pascal in general was designed for single-pass compilation, which
          made it naturally fast. All the necessary forward declarations were
          a pain though, and the victory of languages that are not designed
          for single-pass compilation proves that, while doable, it was not
          worth it in the end.
       
          rowanG077 wrote 16 hours 11 min ago:
          C hardly requires the compiler to do any expensive work: no
          templates, no generics, super simple types, no high-level
          structures.
       
            dgb23 wrote 13 hours 15 min ago:
            Are we seeing similar compilation speed when a Rust program doesn't
            use these types of features?
       
          troupo wrote 17 hours 4 min ago:
          There's also Jonathan Blow's jai where he routinely builds an entire
          game from scratch in a few seconds (hopefully public beta will be
          released by the end of this year).
       
          1vuio0pswjnm7 wrote 20 hours 51 min ago:
          Alpha. Windows-only.
          
   URI    [1]: https://codeload.github.com/EpicGamesExt/raddebugger/tar.gz/...
       
          ben-schaaf wrote 22 hours 17 min ago:
          Every claim I've seen about unity builds being fast just never rings
          true to me. I just downloaded the rad debugger and ran the build
          script on a 7950x (about as fast as you can get). A debug build took
          5s, a release build 34s with either gcc or clang.
          
          Maybe it's an MSVC thing - it does seem to have some multi-threading
          stuff. In any case raddbg non-clean builds take longer than any of
          my rust projects.
       
            taylorallred wrote 3 hours 3 min ago:
            This is true. After making my earlier comment, I went home and
            tested MSVC and Clang and got similar numbers. I had 1.5s in my
            head from using it earlier but maybe some changes made it slower.
            Either way, it's a lot of code and stays on the order of seconds or
            tens of seconds rather than minutes.
       
            maccard wrote 17 hours 7 min ago:
            I use unity builds day in day out. The speed up is an order of
            magnitude on a 2m+ LOC project.
            
            If you want to see the difference download unreal engine and
            compile the editor with and without unity builds enabled.
            
            My experience has been the polar opposite of yours - similar size
            rust projects are an order of magnitude slower than C++ ones. Could
            you share an example of a project to compare with?
       
              ben-schaaf wrote 9 hours 58 min ago:
              > If you want to see the difference download unreal engine and
              compile the editor with and without unity builds enabled.
              
              UE doesn't use a full unity build, it groups some files together
              into small "modules". I can see how this approach may have some
              benefits; you're trading off a faster clean build for a slower
              incremental build.
              
              I tested compiling UnrealFrontend, and a default setup with the
              hybrid unity build took 148s. I noticed it was only using half my
              cores due to memory constraints. I disabled unity and upped the
              parallelism and got 182s, so 22% slower while still using less
              memory. A similarly configured unity build was 108s, so best case
              is ~2x.
              
              On the other hand, only changing the file
              TraceTools/SFilterPreset.cpp resulted in 10s compilation time
              under a unity build, and only 2s without unity.
              
              I can see how this approach has its benefits (and drawbacks). But
              to be clear this isn't what projects like raddbg and sqlite3 are
              doing. They're doing a single translation unit for the entire
              project. No parallelism, no incremental builds, just a single
              compiler invocation. This is usually what I've seen people mean
              by a unity build.
              
              > My experience has been the polar opposite of yours - similar
              size rust projects are an order of magnitude slower than C++
              ones. Could you share an example of a project to compare with?
              
              I just did a release build of egui in 35s, about the same as
              raddbg's release build. This includes compiling dependencies like
              wgpu, serde and about 290 other dependencies which add up to well
              over a million lines of code.
              
              Note I do have mold configured as my linker, which speeds things
              up significantly.
       
              almostgotcaught wrote 10 hours 12 min ago:
              How many LOC is unreal? I'm trying to estimate whether making
              LLVM compatible with UNITY_BUILD would be worth the effort.
              
              EDIT: i signed up to get access to unreal to take a look at how
              they do unity builds, and it turns out they have their own build
              tool (not CMake) that orchestrates the build. So does anyone
              know (can someone comment) whether unity builds for them
              (unreal) means literally one file for literally all project
              source files, or if it's "higher-granularity" like UNITY_BUILD
              in CMake (i.e., single file per object)?
       
                maccard wrote 6 hours 7 min ago:
                The build tool groups files into chunks of roughly equal size
                based on file length, and dispatches those compiles in
                parallel.
       
                  almostgotcaught wrote 5 hours 49 min ago:
                  how many groups do people usually use to get a fast build
                  (alternatively what is the group size)?
       
                    maccard wrote 2 hours 16 min ago:
                    It’s about 300kb before preprocessor expansion by
                    default. I’ve never changed it.
       
                Culonavirus wrote 9 hours 22 min ago:
                At least 10M (from what I remember, maybe more now)
       
          glandium wrote 23 hours 29 min ago:
          That is kind of surprising. The sqlite "unity" build has about the
          same number of lines of C and takes a lot longer than that to
          compile.
       
          vbezhenar wrote 1 day ago:
          I encountered one project in the 2000s with a few dozen KLoC of
          C++. It compiled in a fraction of a second on an old computer. My
          hello world code with Boost took a few seconds to compile. So it's
          not just about the language, it's about structuring your code and
          avoiding features with heavy compilation cost. I'm pretty sure you
          could write Doom with C macros and it wouldn't compile fast. I'm
          also pretty sure you can write Rust code in a way that compiles
          very fast.
       
            herewulf wrote 16 hours 47 min ago:
            My anecdata would be that the average C++ developer puts includes
            inside of every header file which includes more headers to the
            point where everything is including everything else and a single
            .cpp file draws huge swaths of unnecessary code in and the project
            takes eons to compile on a fast computer.
            
            That's my 2000s development experience. Fortunately I've spent a
            good chunk of the 2010s and most of the 2020s using other
            languages.
            
            The classic XKCD compilation comic exists for a reason.
       
            taylorallred wrote 1 day ago:
            I'd be very interested to see a list of features/patterns and the
            cost that they incur on the compiler. Ideally, people should be
            able to use the whole language without having to wait so long for
            the result.
       
              LtdJorge wrote 16 hours 40 min ago:
              There is an experimental Cranelift backend [1] for rustc to
              improve compilation performance in debug builds.
              
   URI        [1]: https://github.com/rust-lang/rustc_codegen_cranelift
       
              vbezhenar wrote 1 day ago:
              So there are a few distinctive patterns I observed in that
              project. Please note that many of these are considered
              anti-patterns by many people, so I don't necessarily suggest
              using them.
              
              1. Use pointers, and do not include the header file for a class
              if you only need a pointer to that class. I think that's a
              pretty established pattern in C++: if you want to declare a
              pointer to a class in your header, you just write `class
              SomeClass;` instead of `#include "SomeClass.hpp"`.
              
              2. Do not use the STL or iostreams. That project used only libc
              and the POSIX API. I know the author really hated the STL and
              considered it a huge mistake to have been included in the
              standard language.
              
              3. Avoid templates unless absolutely necessary. Templates force
              you to write your code in the header file, so it'll be parsed
              again for every include, compiled into multiple copies, etc.
              And even when you use templates, try to split the class into a
              generic and a non-generic part, so some code can be moved from
              header to source. Generally prefer run-time polymorphism to
              generic compile-time polymorphism.
       
                dieortin wrote 1 day ago:
                Why use C++ at that point? Also, pre-declaring classes instead
                of including the corresponding headers has quite a few
                drawbacks.
       
                  maccard wrote 17 hours 10 min ago:
                  References, for one. Also there’s a huge difference between
                  “avoid templates unless necessary” and “don’t use
                  templates”.
       
                  kortilla wrote 23 hours 36 min ago:
                  RAII? shared pointers?
       
              kccqzy wrote 1 day ago:
              Templates as one single feature can be hugely variable. Its
              effect on compilation time can be unmeasurable. Or you can easily
              write a few dozen lines that will take hours to compile.
       
          maxk42 wrote 1 day ago:
          Rust is doing a lot more under the hood. C doesn't track variable
          lifetimes or ownership, has no generics, doesn't handle dependency
          management, and has no compile-time execution (beyond the limited
          language that is the preprocessor). The rust compiler also makes
          intelligent (scary intelligent!) suggestions when you've made a
          mistake: it needs a lot of context to be able to do that.
          
          The rust compiler is actually pretty fast for all the work it's
          doing. It's just an absolutely insane amount of additional work.
          You shouldn't expect it to compile as fast as C.
       
          ceronman wrote 1 day ago:
          I bet that if you take those 278k lines of code and rewrite them in
          simple Rust, without using generics, or macros, and using a single
          crate, without dependencies, you could achieve very similar compile
          times. The Rust compiler can be very fast if the code is simple. It's
          when you have dependencies and heavy abstractions  (macros, generics,
          traits, deep dependency trees) that things become slow.
       
            taylorallred wrote 1 day ago:
            I'm curious about that point you made about dependencies. This Rust
            project ( [1] ) is made with essentially no dependencies, is 17,426
            lines of code, and on an M4 Max it compiles in 1.83s debug and
            5.40s release. The code seems pretty simple as well.
            Edit: Note also that this is 10k more lines than the OP's project.
            This certainly makes those deps suspicious.
            
   URI      [1]: https://github.com/microsoft/edit
       
              MindSpunk wrote 1 day ago:
              The 'essentially no dependencies' isn't entirely true. It depends
              on the 'windows' crate, which is Microsoft's auto-generated Win32
              bindings. The 'windows' crate is huge, and would be leading to
              hundreds of thousands of LoC being pulled in.
              
              There's some other dependencies in there that are only used when
              building for test/benchmarking like serde, zstd, and criterion.
              You would need to be certain you're building only the library and
              not the test harness to be sure those aren't being built too.
       
            90s_dev wrote 1 day ago:
            I can't help but think the borrow checker alone would slow this
            down by at least 1 or 2 orders of magnitude.
       
              tomjakubowski wrote 1 day ago:
              The borrow checker is really not that expensive.  On a random
              example, a release build of the regex crate, I see <1% of time
              spent in borrowck.  >80% is spent in codegen and LLVM.
       
              FridgeSeal wrote 1 day ago:
              Again, as has been often repeated, and backed up with data, the
              borrow checker is a tiny fraction of a Rust app's build time;
              the biggest chunk of time is spent in LLVM.
       
              steveklabnik wrote 1 day ago:
              Your intuition would be wrong: the borrow checker does not take
              much time at all.
       
          Aurornis wrote 1 day ago:
          > makes me wonder why other languages like Rust and Swift can't just
          do something similar to achieve similar speeds.
          
          One of the primary features of Rust is the extensive compile-time
          checking. Monomorphization is also a complex operation, which is not
          exclusive to Rust.
          
          C compile times should be very fast because it's a relatively
          low-level language.
          
          On the grand scale of programming languages and their compile-time
          complexity, C code is closer to assembly language than modern
          languages like Rust or Swift.
       
          dhosek wrote 1 day ago:
          Because Rust and Swift are doing much more work than a C compiler
          would? The analysis necessary for the borrow checker is not free,
          likewise with a lot of other compile-time checks in both languages.
          C can be fast because it effectively does no compile-time checking
          of things beyond basic syntax, so you can call foo(char) with
          foo(int) and other unholy things.
       
            jvanderbot wrote 1 day ago:
            If you'd like the rust compiler to operate quickly:
            
            * Make no nested types - these slow compile times a lot
            
            * Include no crates, or ones that emphasize compiler speed
            
            C is still v. fast though. That's why I love it (and Rust).
       
              windward wrote 16 hours 28 min ago:
              >Make no nested types
              
              I wouldn't like it that much
       
            Thiez wrote 1 day ago:
            This explanation gets repeated over and over again in discussions
            about the speed of the Rust compiler, but apart from rare
            pathological cases, the majority of time in a release build is not
            spent doing compile-time checks, but in LLVM. Rust has zero-cost
            abstractions, but the zero-cost refers to runtime; sadly there's
            a lot of junk generated at compile time that LLVM has to work to
            remove. Which it does, very well, but at the cost of slower
            compilation.
       
              vbezhenar wrote 1 day ago:
              Is it possible to generate less junk? It sounds like the
              compiler developers took shortcuts, which could be improved over
              time.
       
                LtdJorge wrote 16 hours 43 min ago:
                Well, zero-cost abstractions are still abstractions. It’s not
                junk per se, but things that will be optimized out if the IR
                has enough information to safely do so; basically lots of
                extra metadata to actually prove to LLVM that these things are
                safe.
       
                zozbot234 wrote 1 day ago:
                You can address the junk problem manually by having generic
                functions delegate as much of their work as possible to
                non-generic or "less" generic functions, where a "less"
                generic function is one that depends only on a known subset of
                type traits, such as size or alignment. Delegating can help
                the compiler generate fewer redundant copies of your code,
                even if it can't avoid monomorphization altogether.
       
                  andrepd wrote 17 hours 18 min ago:
                  Isn't something like this blocked on the lack of
                  specialisation?
       
                    dwattttt wrote 13 hours 39 min ago:
                    I believe the specific advice they're referring to has been
                    stable for a while. You take your generic function & split
                    it into a thin generic wrapper, and a non-generic worker.
                    
                    As an example, say your function takes anything that can be
                    turned into a String. You'd write a generic wrapper that
                    does the ToString step, then change the existing function
                    to just take a String. That way when your function is
                    called, only the thin outer function is monomorphised, and
                    the bulk of the work is a single implementation.
                    
                    It's not _that_ commonly known, as it only becomes a
                    problem for a library that becomes popular.
       
                      estebank wrote 10 hours 4 min ago:
                      To illustrate:
                      
                        fn foo<S: Into<String>>(s: S) {
                            fn inner(s: String) { ... }
                            inner(s.into())
                        }
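
                      Only the thin `foo` shim is monomorphized per caller
                      type; `inner`, which holds the actual body, is compiled
                      once.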
       
                rcxdude wrote 1 day ago:
                Probably, but it's the kind of thing that needs a lot of fairly
                significant overhauls in the compiler architecture to really
                move the needle on, as far as I understand.
       
            taylorallred wrote 1 day ago:
            These languages do more at compile time, yes. However, I learned
            from Ryan's discord server that he did a unity build in a C++
            codebase and got similar results (just a few seconds slower than
            the C code). Also, you could see in the article that most of the
            time was being spent in LLVM and linking. With a unity build, you
            nearly cut out the link step entirely. Rust and Swift do some
            sophisticated things (Hindley-Milner type inference, generics,
            etc.) but I have my doubts that those cause the most slowdown.
       
            steveklabnik wrote 1 day ago:
            The borrow checker is usually a blip on the overall graph of
            compilation time.
            
            The overall principle is sound though: it's true that doing some
            work is more than doing no work. But the borrow checker and other
            safety checks are not the root of compile time performance in Rust.
       
              kimixa wrote 1 day ago:
              While the borrow checker is one big difference, it's certainly
              not the only thing the rust compiler offers on top of C that
              takes more work.
              
              Stuff like inserting bounds checking puts more work on the
              optimization passes and codegen backend as it simply has to deal
              with more instructions. And that then puts more symbols and
              larger sections in the input to the linker, slowing that down.
              Even if the frontend "proves" a check is unnecessary, that
              calculation isn't free. Many of those features are related to
              "safety" due to the goals of the language. I doubt the syntax
              itself really makes much of a difference, as the parser isn't
              normally high on the profiled times either.
              
              Generally it provides stricter checks that are normally punted to
              a linter tool in the c/c++ world - and nobody has accused
              clang-tidy of being fast :P
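
              As a small illustration (a toy sketch, not anything from a real
              profile): a plain indexed loop carries a bounds check on every
              access that the optimizer must either prove away or keep:
              
                fn sum_indexed(v: &[i32]) -> i32 {
                    let mut s = 0;
                    for i in 0..v.len() {
                        // each v[i] implies a bounds check unless LLVM can
                        // prove i < v.len() and eliminate it
                        s += v[i];
                    }
                    s
                }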
       
                simonask wrote 15 hours 10 min ago:
                It truly is not about bounds checks. Index lookups are rare in
                practical Rust code, and the amount of code generated from
                them is minuscule.
                
                But it _is_ about the sheer volume of stuff passed to LLVM, as
                you say, which comes from a couple of places, mostly related to
                monomorphization (generics), but also many calls to tiny
                inlined functions. Incidentally, this is also what makes many
                "modern" C++ projects slow to compile.
                
                In my experience, similarly sized Rust and C++ projects seem to
                see similar compilation times. Sometimes C++ wins due to better
                parallelization (translation units in Rust are crates, not
                source files).
       
            drivebyhooting wrote 1 day ago:
            That’s not a good example. Foo(int) is analyzed by the compiler
            and a type conversion is inserted. The language spec might be bad,
            but this isn’t the compiler cutting corners.
       
          lordofgibbons wrote 1 day ago:
          The more your compiler does for you at build time, the longer it will
          take to build, it's that simple.
          
          Go has sub-second build times even on massive code-bases. Why?
          because it doesn't do a lot at build time. It has a simple module
          system, a (relatively) simple type system, and leaves a whole bunch
          of stuff to be handled by the GC at runtime. It's great for its
          intended use case.
          
          When you have things like macros, advanced type systems, and want
          robustness guarantees at build time.. then you have to pay for that.
       
            phplovesong wrote 9 hours 58 min ago:
            That's not really true. As a counterexample, OCaml has a very
            advanced type system, full type inference, generics and all that
            jazz. Still, it's on par with Go in compile speed, or even faster.
       
            jstanley wrote 13 hours 45 min ago:
            > Go has sub-second build times even on massive code-bases.
            
            Unless you use sqlite, in which case your build takes a million
            years.
       
              infogulch wrote 4 hours 48 min ago:
              Try [1] - it runs sqlite in WASM with wazero, a pure Go WASM
              runtime, so it builds without any CGo required. Most of the
              benchmarks are within a few % of the performance of
              mattn/go-sqlite3.
              
   URI        [1]: https://github.com/ncruces/go-sqlite3
       
              Groxx wrote 11 hours 56 min ago:
              Yeah, I deal with multiple Go projects that take a couple minutes
              to link the final binary, much less build all the intermediates.
              
              Compilation speed depends on what you do with a language. "Fast"
              is not an absolute, and for most people it depends heavily on
              community habits. Rust habits tend to favor extreme
              optimizability and/or extreme compile-time guarantees, and that's
              obviously going to be slower than simpler code.
       
            Mawr wrote 18 hours 20 min ago:
            Not really. The root reason behind Go's fast compilation is that it
            was specifically designed to compile fast. The implementation
            details are just a natural consequence of that design decision.
            
            Since fast compilation was a goal, every part of the design was
            looked at through a rough "can this be a horrible bottleneck?", and
            discarded if so. For example, the import (package) system was
            designed to avoid the horrible, inefficient mess of C++. It's
            obvious that you never want to compile the same package more than
            once and that you need to support parallel package compilation.
            These may be blindingly obvious, but if you don't think about
            compilation speed at design time, you'll get this wrong and will
            never be able to fix it.
            
            As far as optimizations vs compile speed goes, it's just a simple
            case of diminishing returns. Since Rust has maximum possible
            performance as a goal, it's forced to go well into diminishing
            returns territory, sacrificing a ton of compile speed for minor
            performance improvements. Go has far more modest performance
            goals, so it can get 80% of the possible performance for only 20%
            of the compile cost. Rust can't afford to relax its stance
            because it's competing with languages like C++, and to some
            extent C, that are willing to go to any length to squeeze out an
            extra 1% of performance.
       
            Zardoz84 wrote 1 day ago:
            Dlang compilers do more than any C++ compiler (metaprogramming, a
            better template system and compile-time execution) and they are
            hugely faster. Language syntax design plays a role here.
       
            cogman10 wrote 1 day ago:
            Yes, but I'd also add that Go specifically does not optimize
            well.
            
            The compiler is optimized for compilation speed, not runtime
            performance. Generally speaking, it does well enough, especially
            because its use case is often applications where "good enough" is
            good enough (i.e., IO-heavy applications).
            
            You can see that with "gccgo": slower to compile, faster to run.
       
              pclmulqdq wrote 4 hours 38 min ago:
              Go defaults to an unoptimized build. If you want it to run heavy
              optimization passes, you can turn those on with flags. Rust
              defaults to doing most of those optimizations on every build and
              allows you to turn them off.
       
              cherryteastain wrote 1 day ago:
              Is gccgo really faster? Last time I looked it looked like it was
              abandoned (stuck at go 1.18, had no generics support) and was not
              really faster than the "actual" compiler.
       
                cogman10 wrote 21 hours 31 min ago:
                Digging around, looks like it's workload dependent.
                
                For pure computational workloads it'll be faster. However,
                anything with heavy allocation will suffer, as apparently the
                gccgo GC and GC-related optimizations aren't as good as the
                standard toolchain's.
       
            duped wrote 1 day ago:
            I think this is mostly a myth. If you look at Rust compiler
            benchmarks, while typechecking isn't _free_ it's also not the
            bottleneck.
            
            A big reason that amalgamation builds of C and C++ can absolutely
            fly is because they aren't reparsing headers and generating exactly
            one object file so the linker has no work to do.
            
            Once you add static linking to the toolchain (in all of its forms)
            things get really fucking slow.
            
            Codegen is also a problem. Rust tends to generate a lot more code
            than C or C++, so while the compiler is done doing most of its
            typechecking work, the backend and assembler have a lot more to
            churn through.
       
              benreesman wrote 12 hours 17 min ago:
              The meme that static linking is slow or produces anything other
              than the best executables is demonstrably false and the result of
              surprisingly sinister agendas. Get out readelf and nm and ps
              sometime and do the arithmetic: most programs don't link much of
              glibc (and its static link is broken by design; musl is better
              at just about everything). Matt Godbolt has a great talk about
              how dynamic linking actually works that should give anyone
              pause.
              
              DLLs got their start when early windowing systems didn't quite
              fit on the workstations of the era in the late 80s / early 90s.
              
              In about 4 minutes both Microsoft and GNU were like, "let me get
              this straight, it will never work on another system and I can
              silently change it whenever I want?" Debian went along because it
              gives distro maintainers degrees of freedom they like and don't
              bear the costs of.
              
              Fast forward 30 years and Docker is too profitable a problem to
              fix by the simple expedient of calling a stable kernel ABI on
              anything, and don't even get me started on how penetrated
              everything but libressl and libsodium are. Protip: TLS is popular
              with the establishment because even Wireshark requires special
              settings and privileges for a user to see their own traffic,
              security patches my ass. eBPF is easier.
              
              Dynamic linking moves control from users to vendors and
              governments at ruinous cost in performance, props up bloated
              industries like the cloud compute and Docker industrial complex,
              and should die in a fire.
              
              Don't take my word for it, swing by cat-v.org sometimes and see
              what the authors of Unix have to say about it.
              
              I'll save the rant about how rustc somehow manages to be slower
              than clang++ and clang-tidy combined for another day.
       
                duped wrote 10 hours 43 min ago:
                I think you're confused about my comment and this thread - I'm
                talking about build times.
       
                  benreesman wrote 8 hours 28 min ago:
                  You said something false and important and I took the
                  opportunity to educate anyone reading about why this aspect
                  of their computing experience is a mess. All of that is
                  germane to how we ended up in a situation where someone is
                  calling rustc with a Dockerfile and this is considered
                  normal.
       
                    duped wrote 2 hours 37 min ago:
                    Seems like you still misunderstand both the comment and
                    context and getting overly emotional/conspiratorial. You
                    might want to work on those feelings.
       
                      benreesman wrote 2 hours 5 min ago:
                      No one is trying to take anyone's multi-gigabyte pile of
                      dynamic library closure to deploy what should be a few
                      hundred kilobytes of arbitrarily portable, secure by
                      construction, built to last executable.
                      
                      But people should make an informed choice, and there
                      isn't any noble or high minded or well-meaning reason to
                      try to shout that information down.
                      
                      Don't confidently assert falsehoods unless you're
                      prepared to have them refuted. You're entitled to peddle
                      memes and I'm entitled to reply with corrections.
       
                jrmg wrote 10 hours 59 min ago:
                …surprisingly sinister agendas.
                
                …
                
                Dynamic linking moves control from users to vendors and
                governments at ruinous cost in performance, props up bloated
                industries...
                
                This is ridiculous. Not everything is a conspiracy!
       
                  trinix912 wrote 3 hours 55 min ago:
                  Had they left "governments" out of there it would've been
                  almost fine, but damn I didn't know it's now governments
                  changing DLLs for us!
       
                    benreesman wrote 2 hours 50 min ago:
                     [1] [2] [3] [4] [5] [6] [7]
                    
   URI              [1]: https://en.wikipedia.org/wiki/Equation_Group
   URI              [2]: https://en.wikipedia.org/wiki/Advanced_persistent_...
   URI              [3]: https://en.wikipedia.org/wiki/Operation_Olympic_Ga...
   URI              [4]: https://simple.wikipedia.org/wiki/Stuxnet
   URI              [5]: https://en.wikipedia.org/wiki/Cozy_Bear
   URI              [6]: https://en.wikipedia.org/wiki/Fancy_Bear
   URI              [7]: https://en.wikipedia.org/wiki/Tailored_Access_Oper...
       
                  computably wrote 5 hours 38 min ago:
                  Bad incentives != conspiracy
       
                  benreesman wrote 8 hours 31 min ago:
                  I didn't say anything was a conspiracy, let alone everything.
                  I said inferior software is promoted by vendors on Linux as
                  well as on MacOS and Windows with unpleasant consequences for
                  users in a way that serves those vendors and the even more
                  powerful institutions to which they are beholden. Sinister
                  intentions are everywhere in this business (go read the
                  opinions of the people who run YC), that's not even remotely
                  controversial.
                  
                  In fact, if there was anything remotely controversial about
                  a bunch of extremely specific, extremely falsifiable claims
                  I made, one imagines your rebuttal would have mentioned at
                  least one.
                  
                  I said inflammatory things (Docker is both arsonist and
                  fireman, at ruinous cost), but they're fucking true. That
                  Alpine in the Docker jank? Links musl!
       
                  k__ wrote 10 hours 48 min ago:
                  That's an even more reasonable fear than trusting trust, and
                  people seem to take that seriously.
       
                jelder wrote 11 hours 16 min ago:
                CppCon 2018: Matt Godbolt “The Bits Between the Bits: How We
                Get to main()"
                
   URI          [1]: https://www.youtube.com/watch?v=dOfucXtyEsU
       
              blizdiddy wrote 12 hours 35 min ago:
              Go is static by default and still fast as hell
       
                vintagedave wrote 12 hours 10 min ago:
                Delphi is static by default and incredibly fast too.
       
                  zenlot wrote 4 hours 53 min ago:
                  FreePascal to the game please
       
                    tukantje wrote 1 hour 23 min ago:
                    Will the real FORTRAN please stand up?
       
              the-lazy-guy wrote 12 hours 52 min ago:
              > Once you add static linking to the toolchain (in all of its
              forms) things get really fucking slow.
              
              Could you expand on that, please? Every time you run a
              dynamically linked program, it is linked at runtime (unless it
              explicitly avoids linking unnecessary stuff by dlopening things
              lazily, which pretty much never happens). If it is fine to link
              on every program launch, linking at build time should not be a
              problem at all.
              
              If you want link-time optimization, that's another story. But
              you absolutely don't have to do that if you care about build
              speed.
       
              windward wrote 16 hours 36 min ago:
              >Codegen is also a problem. Rust tends to generate a lot more
              code than C or C++
              
              Wouldn't you say a lot of that comes from the macros and (by way
              of monomorphisation) the type system?
       
                jandrewrogers wrote 9 hours 11 min ago:
                Modern C++ in particular does a lot of similar, albeit not
                identical, codegen due to its extensive metaprogramming
                facilities. (C is, of course, dead simple.) I've never looked
                into it too much but anecdotally Rust does seem to generate
                significantly more code than C++ in cases where I would
                intuitively expect the codegen to be similar. For whatever
                reason, the "in theory" doesn't translate to "in practice"
                reliably.
                
                I suspect this leaks into both compile-time and run-time costs.
       
              fingerlocks wrote 1 day ago:
              The swift compiler is definitely bottlenecked by type checking.
              For example, as a language requirement, generic types are left
              more or less intact after compilation. They are type checked
              independently of how they are instantiated. This is unlike C++
              templates, which effectively copy-paste the resolved types into
              the generic code at every point of type resolution.
              
              This has tradeoffs: increased ABI stability at the cost of
              longer compile times.
       
                slavapestov wrote 9 hours 41 min ago:
                > This has tradeoffs: increased ABI stability at the cost of
                longer compile times.
                
                Nah. Slow type checking in Swift is primarily caused by the
                fact that functions and operators can be overloaded on type.
                
                Separately-compiled generics don't introduce any algorithmic
                complexity and are actually good for compile time, because you
                don't have to re-type check every template expansion more than
                once.
       
                  choeger wrote 4 hours 31 min ago:
                  Separate compilation also enables easy parallelization of
                  type checking.
       
                  fingerlocks wrote 4 hours 41 min ago:
                  You’re absolutely right. I realized this later but it was
                  too late to edit the post.
       
                windward wrote 16 hours 33 min ago:
                >This is unlike C++ templates which are effectively
                copy-pasting the resolved type with the generic for every
                occurrence of type resolution.
                
                Even this can lead to unworkable compile times, to the point
                that code is rewritten.
       
                willtemperley wrote 20 hours 43 min ago:
                A lot can be done by the programmer to mitigate slow builds in
                Swift. Breaking up long expressions into smaller ones and using
                explicit types where type inference is expensive for example.
                
                I’d like to see tooling for this to pinpoint bottlenecks -
                it’s not always obvious what’s making builds slow.
       
                  never_inline wrote 12 hours 48 min ago:
                  > Breaking up long expressions into smaller ones
                  
                  If it improves compile time, that sounds like a bug in the
                  compiler or the design of the language itself.
       
                  ykonstant wrote 17 hours 3 min ago:
                  >I’d like to see tooling for this to pinpoint bottlenecks -
                  it’s not always obvious what’s making builds slow.
                  
                  I second this enthusiastically.
       
                    glhaynes wrote 14 hours 32 min ago:
                    I'll third it. I've started to see more and more cargo
                    culting of "fixes" that I'm extremely suspicious do nothing
                    aside from making the code bulkier.
       
              treyd wrote 1 day ago:
              Not only does it generate more code, the initially generated code
              before optimizations is also often worse.  For example, heavy use
              of iterators means a ton of generics being instantiated and a ton
              of call code for setting up and tearing down call frames.  This
              gets heavily inlined and flattened out, so in the end it's
              extremely well-optimized, but it's a lot of work for the
              compiler.  Writing it all out classically with for loops and ifs
              is possible, but it's harder to read.
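
              For a concrete sense of it (a toy example, not taken from any
              real crate): these two functions optimize to similar machine
              code, but the first instantiates Filter, Map, and Sum machinery
              along the way:
              
                fn sum_doubled_positives(data: &[i32]) -> i32 {
                    // iterator style: several generic adapters are
                    // instantiated, then inlined and flattened out
                    data.iter().filter(|&&x| x > 0).map(|&x| x * 2).sum()
                }
                
                fn sum_doubled_positives_loop(data: &[i32]) -> i32 {
                    // "classic" style: no adapter types to instantiate
                    let mut total = 0;
                    for &x in data {
                        if x > 0 {
                            total += x * 2;
                        }
                    }
                    total
                }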
       
                estebank wrote 10 hours 30 min ago:
                For loops are sugar around an Iterator instantiation:
                
                  for i in 0..10 {}
                
                translates to roughly
                
                  let mut iter = Range { start: 0, end: 10 }.into_iter();
                  while let Some(i) = iter.next() {}
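
                (`Range` already implements `Iterator`, so `into_iter()` is
                effectively the identity here; the point is that even a plain
                `for` loop goes through the same generic machinery.)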
       
            ChadNauseam wrote 1 day ago:
            That the type system is responsible for rust's slow builds is a
            common and enduring myth. `cargo check` (which just does
            typechecking) is actually usually pretty fast. Most of the build
            time is spent in the code generation phase. Some macros do cause
            problems as you mention, since the code that contains the macro
            must be compiled before the code that uses it, so they reduce
            parallelism.
       
              tedunangst wrote 1 day ago:
              I just ran cargo check on nushell, and it took a minute and a
              half. I didn't time how long it took to compile, maybe five
              minutes earlier today? So I would call it faster, but still not
              fast.
              
              I was all excited to conduct the "cargo check; mrustc; cc" is
              100x faster experiment, but I think at best, the multiple is
              going to be pretty small.
       
                CryZe wrote 15 hours 51 min ago:
                A ton of that is actually still doing codegen (for the proc
                macros for example).
       
                ChadNauseam wrote 1 day ago:
                Did you do it from a clean build? In that case, it's actually a
                slightly misleading metric, since rust needs to actually
                compile macros in order to typecheck code that uses them. (And
                therefore must also compile all the code that the macro depends
                on.) My bad for suggesting it, haha. Incremental cargo check is
                often a better way of seeing how long typechecking takes, since
                usually you haven't modified any macros that will need to be
                recompiled. On my project at work, incremental cargo check
                takes `1.71s`.
       
                  estebank wrote 10 hours 11 min ago:
                  Side note: there's an effort to cache proc macro invocations
                  so that they get executed only once if the item they
                  annotate hasn't changed: [1] There are multiple caveats on
                  providing this to users (we can't assume that macro
                  invocations are idempotent, so the new behavior would have
                  to be opt-in, and this only benefits incremental
                  compilation), but it's on our radar.
                  
   URI            [1]: https://github.com/rust-lang/rust/pull/129102
       
              rstuart4133 wrote 1 day ago:
              > Most of the build time is spent in the code generation phase.
              
              I can believe that, but even so it's caused by the type system
              monomorphising everything. When you use qsort from libc, you
              are using pre-compiled code from a library. When you use
              slice::sort(), you get custom assembly compiled to suit your
              application. Thus there is a lot more code generation going on,
              and that is caused by the tradeoffs they've made with the type
              system.
              
              Rust's approach gives you all sorts of advantages, like fast
              code and strong compile-time type checking. But it comes with
              warts too, like fat binaries, and a bug in slice::sort() can't
              be fixed by just shipping a new std dynamic library, because
              there is no such library. It's been recompiled, just for you.
              
              FWIW, modern C++ (like Boost) that places everything in
              templates in .h files suffers from the same problem. If Swift
              suffers from it too, I'd wager it's the same cause.
       
                badmintonbaseba wrote 15 hours 33 min ago:
                It's partly the type system. You can implement a std::sort
                (or slice::sort()) that just delegates to qsort or a
                qsort-like implementation and get roughly the same compile
                time performance as using qsort directly.
                
                But not having to is a win, as the monomorphised sorts are just
                much faster at runtime than having to do an indirect call for
                each comparison.
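
                A minimal sketch of that delegation (illustrative only, not
                how std actually implements sort): one type-erased core that
                is compiled once, plus a thin generic shim, paying an
                indirect call per comparison:
                
                  use std::cmp::Ordering;
                  
                  // Compiled once for all element types: sorts raw bytes
                  // through an indirect comparator, qsort-style (insertion
                  // sort stands in for a real algorithm).
                  unsafe fn sort_erased(
                      base: *mut u8,
                      len: usize,
                      size: usize,
                      cmp: fn(*const u8, *const u8) -> Ordering,
                  ) {
                      for i in 1..len {
                          let mut j = i;
                          while j > 0 {
                              let a = base.add((j - 1) * size);
                              let b = base.add(j * size);
                              if cmp(a, b) != Ordering::Greater {
                                  break;
                              }
                              std::ptr::swap_nonoverlapping(a, b, size);
                              j -= 1;
                          }
                      }
                  }
                  
                  // Thin shim: the only part monomorphized per T.
                  fn sort_delegating<T: Ord>(slice: &mut [T]) {
                      fn cmp<T: Ord>(a: *const u8, b: *const u8) -> Ordering {
                          unsafe { (*(a as *const T)).cmp(&*(b as *const T)) }
                      }
                      unsafe {
                          sort_erased(
                              slice.as_mut_ptr().cast(),
                              slice.len(),
                              std::mem::size_of::<T>(),
                              cmp::<T>,
                          );
                      }
                  }
                
                std's slice::sort() instead monomorphizes the whole algorithm
                per element type, which is exactly what buys the inlined
                comparisons (and the extra codegen).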
       
                  estebank wrote 10 hours 22 min ago:
                  This is a pattern a crate author can rely on (write a
                  function that uses generics and immediately delegates to a
                  function that uses trait objects, or converts to the needed
                  types eagerly, so the common logic gets compiled only once),
                  and there have been multiple efforts to have the compiler do
                  that automatically. It has been called polymorphization and
                  it comes up every now and then:
                  
   URI            [1]: https://internals.rust-lang.org/t/add-back-polymorph...
       
          tptacek wrote 1 day ago:
          I don't think it's interesting to observe that C code can be compiled
          quickly (so can Go, a language designed specifically for fast
          compilation). It's not a problem intrinsic to compilation; the
          interesting hard problem is to make Rust's semantics compile quickly.
          This is a FAQ on the Rust website.
       
          js2 wrote 1 day ago:
          "Just". Probably because there's a lot of complexity you're waving
          away. Almost nothing is ever simple as "just".
       
            taylorallred wrote 1 day ago:
            That "just" was too flippant. My bad. What I meant to convey is
            "hey, there's some fast compiling going on here and it wasn't that
            hard to pull off. Can we at least take a look at why that is and
            maybe do the same thing?".
       
              steveklabnik wrote 1 day ago:
              > "hey, there's some fast compiling going on here and it wasn't
              that hard to pull off. Can we at least take a look at why that is
              and maybe do the same thing?".
              
              Do you really believe that nobody over the course of Rust's
              lifetime has ever taken a look at C compilers and thought about
              if techniques they use could apply to the Rust compiler?
       
                taylorallred wrote 1 day ago:
                Of course not. But it wouldn't surprise me if nobody thought to
                use a unity build. (Maybe they did. Idk. I'm curious).
       
                  ameliaquining wrote 1 day ago:
                  Can you explain why a unity build would help? Conventional
                  wisdom is that Rust compilation is slow in part because it
                  has too few translation units (one per crate, plus codegen
                  units which only sometimes work), not too many.
       
                  steveklabnik wrote 1 day ago:
                  Rust and C have differences around compilation units: Rust's
                  already tend to be much larger than C on average, because the
                  entire crate (aka tree of modules) is the compilation unit in
                  Rust, as opposed to the file-based (okay not if you're on
                  some weird architecture) compilation unit of C.
                  
                  Unity builds are useful for C programs because they tend to
                  reduce header processing overhead, whereas Rust does not have
                  the preprocessor or header files at all.
                  
                  They also can help with reducing the number of object files
                  (down to one from many), so that the linker has less work to
                  do, this is already sort of done (though not to literally
                  one) due to what I mentioned above.
                  
                  In general, the conventional advice is to do the exact
                  opposite: breaking large Rust projects into more, smaller
                  compilation units can help do less "spurious" rebuilding, so
                  smaller changes have less overall impact.
                  
                  Basically, Rust's compile time issues lie elsewhere.
       
            pixelpoet wrote 1 day ago:
            At a previous company, we had a rule: whoever says "just" gets to
            implement it :)
       
              forrestthewoods wrote 1 day ago:
              I wanted to ban “just” but your rule is better. Brilliant.
       
        ic_fly2 wrote 1 day ago:
        This is such a weird cannon-on-sparrows approach.
        
        The local builds are fast, why would you rebuild docker for small
        changes?
        
        Also, why does a personal page need so much Rust and so many
        dependencies? For a larger project with more complex stuff you'd have
        a test suite that takes time too. Run both in parallel in your CI and
        call it a day.
       
        senderista wrote 1 day ago:
        WRT compilation efficiency, the C/C++ model of compiling separate
        translation units in parallel seems like an advance over the Rust model
        (but obviously forecloses opportunities for whole-program
        optimization).
       
          woodruffw wrote 1 day ago:
          Rust can and does compile separate translation units in parallel;
          it's just that the translation unit is (roughly) a crate instead of a
          single C or C++ source file.
       
            EnPissant wrote 1 day ago:
            And even for crates, Rust has incremental compilation.
       
        RS-232 wrote 1 day ago:
        Is there an equivalent of ninja for rust yet?
       
          steveklabnik wrote 1 day ago:
          It depends on what you mean by 'equivalent of ninja.'
          
          Cargo is the standard build system for Rust projects, though some
          users use other ones. (And some build those on top of Cargo too.)
       
        b0a04gl wrote 1 day ago:
        rust prioritises build-time correctness: no runtime linker, no
        dynamic deps. all checks (types, traits, ownership) happen before
        execution. this makes builds sensitive to upstream changes. docker
        uses content-hash layers, so small context edits invalidate caches.
        without careful layer ordering, rust gets fully recompiled on every
        change.
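
        a common mitigation: copy only Cargo.toml and Cargo.lock into an
        early layer, build the dependencies there, then copy the actual
        sources in a later layer, so dependency compilation stays cached
        across source-only edits.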
       
        namibj wrote 1 day ago:
        Incremental compilation is good. If you want, freeze the initial
        incremental cache after a single fresh build and use it for
        building/deploying updates, to mitigate the risk of intermediate
        states gradually corrupting the cache.
        
        This works great with docker: upon a new compiler version or major
        website update, rebuild the layer with the incremental cache;
        otherwise just run from the snapshot, build the newest website
        version/state, and upload/deploy the resulting static binary. Just
        set it up so that mere code changes won't force rebuilding the layer
        that caches/materializes the fresh clean build's incremental
        compilation cache.
       
          maccard wrote 17 hours 4 min ago:
          The intermediates for my project are 150GB+ alone. Last time I worked
          with docker images that large we had massive massive problems.
       
        AndyKelley wrote 1 day ago:
        My homepage takes 73ms to rebuild: 17ms to recompile the static site
        generator, then 56ms to run it.
        
            andy@bark ~/d/andrewkelley.me (master)> zig build --watch
        -fincremental
            Build Summary: 3/3 steps succeeded
            install success
            └─ run exe compile success 57ms MaxRSS:3M
               └─ compile exe compile Debug native success 331ms
            Build Summary: 3/3 steps succeeded
            install success
            └─ run exe compile success 56ms MaxRSS:3M
               └─ compile exe compile Debug native success 17ms
            watching 75 directories, 1 processes
       
          nicoburns wrote 1 day ago:
          My non-static Rust website (includes an actual webserver as well as a
          react-like framework for templating) takes 1.25s to do an incremental
          recompile with "cargo watch" (which is an external watcher that just
          kills the process and reruns "cargo run").
          
          And it can be considerably faster if you use something like
          subsecond [1] (which does incremental linking and hotpatches the
          running binary). It's not quite as fast as Zig, but it's close.
          
          However, if that 331ms build above is a clean (uncached) build then
          that's a lot faster than a clean build of my website which takes
          ~12s.
          
   URI    [1]: https://news.ycombinator.com/item?id=44369642
       
            AndyKelley wrote 1 day ago:
            The 331ms time is mostly uncached. In this case the build script
            was already cached (must be re-done if the build script is edited),
            and compiler_rt was already cached (must be done exactly once per
            target; almost never rebuilt).
       
              nicoburns wrote 1 day ago:
              Impressive!
       
          ww520 wrote 1 day ago:
          Nice. Didn't realize zig build had --watch and -fincremental added.
          I was mostly using "watchexec -e zig zig build" to recompile on
          file changes.
       
            Graziano_M wrote 1 day ago:
            New to 0.14.0!
       
          qualeed wrote 1 day ago:
          Neat, I guess?
          
          This comment would be a lot better if it engaged with the posted
          article, or really had any sort of insight beyond a single compile
          time metric. What do you want me to take away from your comment? Zig
          good and Rust bad?
       
            kristoff_it wrote 1 day ago:
            I think the most relevant thing is that building a simple website
            can (and should) take milliseconds, not minutes, and that --
            quoting from the post:
            
            > A brief note: 50 seconds is fine, actually!
            
            50 seconds should actually not be considered fine.
       
              qualeed wrote 1 day ago:
              As you've just demonstrated, that point can be made without even
              mentioning Zig, let alone copy/pasting some compile time stuff
              with no other comment or context. Which is why I thought (well,
              hoped) there might be something more to it than just a dunk
              attempt.
              
              Now we get all of this off-topic discussion about Zig. Which I
              guess is good for you Zig folk... But it's pretty off-putting for
              me.
              
              whoisyc's comment is extremely on point. As the VP of community,
              I'd really encourage you to think about what they said.
       
                maccard wrote 17 hours 2 min ago:
                I disagree. Zig and go are perfect frames of reference to say
                “actually no, Rust really is slow. Here are examples for you
                to go and see for yourself”
       
                kristoff_it wrote 1 day ago:
                > As you've just demonstrated, that point can be made without
                even mentioning Zig, let alone copy/pasting some compile time
                stuff with no other comment or context. Which is why I thought
                (well, hoped) there might be something more to it than just a
                dunk attempt.
                
                Having concrete proof that something can be done more
                efficiently is extremely important and, no, I haven't
                "demonstrated" anything, since my earlier comment would have
                had way less substance to it without the previous context.
                
                The comment from Andrew is not just random compiler stats, but
                a datapoint showing a comparable example having dramatically
                different performance characteristics.
                
                You can find in this very HN submission various comments that
                assume that Rust's compiler performance is impossible to
                improve because of reasons that actually are mostly (if not
                entirely) irrelevant. Case in point, see people talking about 
                how Rust compilation must take longer because of the borrow
                checker (and other safety checks) and Steve pointing out that,
                no, actually that part of the compilation pipeline is very
                small.
                
                > Now we get all of this off-topic discussion about Zig.
                
                So no, I would argue the opposite: this discussion is very much
                on topic.
       
          whoisyc wrote 1 day ago:
          Just like every submission about C/C++ gets a comment about how great
          Rust is, every submission about Rust gets a comment about how great
          Zig is. Like a clockwork.
          
          Edit: apparently I am replying to the main Zig author? Language
          evangelism is by far the worst part of Rust and has likely stirred up
          more anti Rust sentiment than “converting” people to Rust. If you
          truly care for your language you should use whatever leverage you
          have to steer your community away from evangelism, not embrace it.
       
            frollogaston wrote 7 hours 47 min ago:
            It's unlikely that anyone was going to use Rust anyway but decided
            not to because they got too annoyed hearing about it.
       
            AlienRobot wrote 11 hours 55 min ago:
            If you can't be proud of a programming language you made, what is
            even the point?
       
          vlovich123 wrote 1 day ago:
          Zig isn’t memory safe though right?
       
            kristoff_it wrote 1 day ago:
            How confident are you that memory safety (or lack thereof) is a
            significant variable in how fast a compiler is?
       
            ummonk wrote 1 day ago:
            Zig is less memory safe than Rust, but more than C/C++. Neither Zig
            nor Rust is fundamentally memory safe.
       
              Ar-Curunir wrote 1 day ago:
              What? Zig is definitively not memory-safe, while safe Rust, is,
              by definition, memory-safe. Unsafe rust is not memory-safe, but
              you generally don't need to have a lot of it around.
       
                rurban wrote 14 hours 5 min ago:
                By definition yes. There were a lot of lies to persuade
                managers. You can write a lot into your documentation.
                
                But by implementation and spec definitely not.
       
                Graziano_M wrote 1 day ago:
                The second you have any `unsafe`, Rust is _by definition_ not
                memory-safe.
       
                  Meneth wrote 13 hours 40 min ago:
                  And the majority of the Rust standard library uses `unsafe`.
       
                    Measter wrote 11 hours 2 min ago:
                    Prove it. Show me the stats that the standard library is
                    over 50% unsafe.
       
                      9rx wrote 5 hours 53 min ago:
                      "Over 50%" only holds if the statement is intended to be
                      binary. It may be that he considers direct use, no use,
                      and transitive use to be all different. In which case it
                      is possible that the majority[1] use unsafe, even if more
                      than 50% does not.
                      
                      [1] The cult of the orange man would call this a
                      plurality, which may be what has tripped you up, but the
                      civilized world calls it a majority.
       
                  Ar-Curunir wrote 16 hours 25 min ago:
                  By that definition, Python is not memory-safe, Java is not
                  memory-safe, Go is not memory-safe, and so on. All of these
                  languages contain escape hatches to do memory-unsafe stuff,
                  yet no one is calling them memory unsafe.
       
                    ummonk wrote 9 hours 53 min ago:
                    Go is more memory unsafe than Java or Rust. Data races in
                    concurrent Go code can cause memory corruption, unlike in
                    concurrent Java code. Safe Rust is designed to avoid data
                    races altogether using static analysis.
       
                ummonk wrote 1 day ago:
                Safe Rust is demonstrably not memory-safe:
                
   URI          [1]: https://github.com/Speykious/cve-rs/tree/main
       
                  steveklabnik wrote 1 day ago:
                  This is a compiler bug. This has no bearing on the language
                  itself. Bugs happen, and they will be fixed, even this one.
       
                    ummonk wrote 22 hours 46 min ago:
                    It's a 10 year old bug which will eventually be fixed but
                    may require changes to how Rust handles type variance.
                    
                    Until you guys write an actual formal specification, the
                    compiler is the language.
       
                      steveklabnik wrote 22 hours 27 min ago:
                      It’s a ten year old bug because it has never been found
                      in the wild, ever, in those ten years. Low impact, high
                      implementation effort bugs take less priority than bugs
                      that affect real users.
                      
                      The project is adopting Ferrocene for the spec.
       
                        ummonk wrote 22 hours 11 min ago:
                        Ferrocene is intended to document the behavior of the
                        current version of the rustc compiler, so it's just an
                        effort to formalize "the compiler is the language".
                        
                        Yes, the soundness hole itself is low impact and
                        doesn't need to be prioritized but it undermines the
                        binary "Zig is definitively not memory-safe, while safe
                        Rust, is, by definition, memory-safe" argument that was
                        made in response to me. Now you're dealing with
                        qualitative / quantitative questions of practical
                        impact, in which my original statement holds: "Zig is
                        less memory safe than Rust, but more than C/C++.
                        Neither Zig nor Rust is fundamentally memory safe."
                        
                        You can of course declare that Safe Rust is by
                        definition memory safe, but that doesn't make it any
                        more true than declaring that Rust solves the halting
                        problem or that it proves P=NP. RustBelt is proven
                        sound. Rust by contrast, as documented by Ferrocene, is
                        currently fundamentally unsound (though you won't hit
                        the soundness issues in practice).
       
                          _flux wrote 12 hours 29 min ago:
                          I believe these two statements should show the
                          fundamental difference:
                          
                          - If a safe Rust program exhibits a memory safety
                          problem, it is a bug in the Rust compiler that is to
                          be fixed
                          - If a Zig program exhibits a memory safety problem,
                          it is a bug in the Zig program that needs to be
                          fixed, not in the compiler
                          
                          Wouldn't you agree?
                          
                          > Ferrocene is intended to document the behavior of
                          the current version of the rustc compiler, so it's
                          just an effort to formalize "the compiler is the
                          language".
                          
                          I must admit I haven't read the specification, but I
                          doubt they attempt to be "bug for bug" compatible in
                          the sense that the spec enumerates memory safety bugs
                          present in the Rust compiler. But am I then mistaken?
       
                            vlovich123 wrote 10 hours 1 min ago:
                            > If a safe Rust program exhibits a memory safety
                            problem, it is a bug in the Rust compiler that is
                            to be fixed - If a Zig program exhibits a memory
                            safety problem, it is a bug in the Zig program that
                            needs to be fixed, not in the compiler
                            
                            That is the absolute best description of memory
                            safety I’ve heard expressed.
       
                            ummonk wrote 10 hours 15 min ago:
                            No, I don't agree. A compiler bug is something that
                            gets fixed in a patch release after it's reported,
                            or perhaps some platform-specific regression that
                            gets fixed in the next release after it's reported.
                            What we're discussing by contrast is a soundness
                            hole in the language itself - one which will most
                            likely require breaking changes to the language to
                            close (i.e. some older programs that were perfectly
                            safe will fail to compile as a side effect of
                            tightening up the Rust language to prevent this
                            soundness hole).
                            
                            As to the Ferrocene specification, it explicitly
                            states "Any difference between the FLS and the
                            behavior of the Rust compiler is considered an
                            error on our part and the FLS will be updated
                            accordingly."
                            
                            Proposals to fix the soundness hole in Rust either
                            change the variance rules themselves, or require
                            where clauses in certain places. Either of these
                            changes would require corresponding changes to
                            chapter 4 of the Ferrocene specification.
       
                              steveklabnik wrote 9 hours 16 min ago:
                              > As to the Ferrocene specification, it
                              explicitly states "Any difference between the FLS
                              and the behavior of the Rust compiler is
                              considered an error on our part and the FLS will
                              be updated accordingly."
                              
                              Right, this wording is from before it was adopted
                              as the actual spec: it came from outside the
                              project, and so could not be the spec at the time.
                              
                              Also, these goalposts are moving: it was "Rust
                              doesn't have a spec" and now it's "I don't like
                              the spec."
                              
                              Fixing this soundness hole does not require a
                              breaking change to the language. It is an
                              implementation bug, not a problem with the
                              language as specified. But even if it were,
                              Rust's policies around soundness do allow for
                              this, and the project has done it in the past.
       
                                ummonk wrote 8 hours 0 min ago:
                                The goalposts haven't moved. The goalposts were
                                always "the current compiler is the language".
                                
                                If there is a proposed fix to the soundness
                                hole that wouldn't reject some existing sound
                                Rust code, please link to it; none of the
                                proposed fixes I've seen do so. And yes, Rust's
                                policies do allow for breaking changes in
                                pursuit of soundness - likely some day many
                                years from now safe Rust will indeed be sound
                                and guaranteed to be memory safe.
       
                              Ar-Curunir wrote 10 hours 2 min ago:
                              And Rust has and will make those breaking
                              changes, while Zig will likely not. In fact there
                              are documented and blessed ways to break memory
                              safety in Zig, and no one is calling them
                              soundness bugs!
                              
                              I really don’t see how you can claim with a
                              straight face that the two approaches are the
                              same.
       
                                ummonk wrote 7 hours 33 min ago:
                                "In fact there are documented and blessed ways
                                to break memory safety in Zig" - just as there
                                are in Rust... even the Rust standard library
                                makes liberal use of them (thereby making any
                                program which invokes those parts of the
                                standard library transitively unsafe by
                                definition).
                                
                                Look, I'm not saying the Zig and Rust
                                approaches are the same. I explicitly stated
                                that Rust is more memory safe than Zig (which
                                is in turn more memory safe than C/C++).
                                
                                This is because Rust has clearly delineated a
                                "safe" subset of Rust which you have to
                                explicitly opt out of that is mostly sound (and
                                has a goal of eventually being entirely sound),
                                has a culture of encouraging the use of the
                                safe subset, and has taken a good approach to
                                the interfacing of safe and unsafe code (i.e.
                                if unsafe code is properly written and
                                satisfies the exposed contract - despite the
                                compiler being unable to verify this - then
                                safe code can safely be linked with it).
                                
                                All of this results in extremely low risk of
                                memory corruption for Rust programs in practice
                                (far lower than any other commonly used non-GC
                                language with the sole exception of SPARK).
                                
                                What you can't do though is reject the notion
                                of memory safety being a sliding scale and draw
                                a binary distinction between languages that are
                                100% perfectly memory safe and languages that
                                are memory unsafe. Well you can, but Rust will
                                fall on the side of memory unsafe for many
                                years to come. Java (ignoring vendor-specific
                                extensions) falls on the safe side though - the
                                language semantics as specified are sound and
                                it doesn't even have an unsafe subset.
       
            pixelpoet wrote 1 day ago:
            It isn't a lot of things, but I would argue that its exceptionally
            (heh) good exception handling model / philosophy (making it good,
            required, and performant) is more important than memory safety,
            especially when a lot of performance-oriented / bit-banging Rust
            code just gets shoved into unsafe blocks anyway. Even C/C++ can be
            made memory safe, cf. [1]
            
            What I'm more interested to know is what the runtime performance
            tradeoff is like now; one really has to assume that it's slower
            than LLVM-generated code, otherwise that monumental achievement
            seems to have somehow been eclipsed in a very short time, with
            much shorter compile times to boot.
            
   URI      [1]: https://github.com/pizlonator/llvm-project-deluge
       
              jorvi wrote 22 hours 18 min ago:
              > especially when a lot of performance-oriented / bit-banging
              Rust code just gets shoved into Unsafe blocks anyway. Even C/C++
              can be made memory safe, cf.
              
              Your first claim is unverifiable and the second one is just so,
              so wrong. Even big projects with very talented, well-paid C or
              C++ devs eventually end up with CVEs, ~80% of them
              memory-related. Humans are just not capable of 0% error rate in
              their code.
              
              If Zig somehow got more popular than C/C++, we would still be
              stuck in the same CVE bog because of memory unsafety. No thank
              you.
       
                dgb23 wrote 12 hours 55 min ago:
                > If Zig somehow got more popular than C/C++, we would still be
                stuck in the same CVE bog because of memory unsafety. No thank
                you.
                
                Zig does a lot of things to prevent or detect memory safety
                related bugs. I personally haven't encountered a single one so
                far, while learning the language.
                
                > ~80% of them memory-related.
                
                I assume you're referencing the 70% that MS has published? I
                think they categorized null pointer exceptions as memory safety
                bugs as well among other things. Zig is strict about those, has
                error unions, and is strict and explicit around casting. It can
                also detect memory leaks and use after free among other things.
                It's a language that's very explicit about a lot of things,
                such as control flow, allocation strategies etc. And there's
                comptime, which is a very potent tool to guarantee all sorts of
                things that go well beyond memory safety.
                
                I almost want to say that your comment presents a false
                dichotomy in terms of the safety concern, but I'm not an expert
                in either Rust or Zig. I think however it's a bit broad and
                unfair.
       
              vlovich123 wrote 1 day ago:
              > Even C/C++ can be made memory safe, cf. [1] > Fil-C achieves
              this using a combination of concurrent garbage collection and
              invisible capabilities (each pointer in memory has a
              corresponding capability, not visible to the C address space)
              
              With significant performance and memory overhead. That just isn't
              the same ballpark that Rust is playing in although hugely
              important if you want to bring forward performance insensitive C
              code into a more secure execution environment.
              
   URI        [1]: https://github.com/pizlonator/llvm-project-deluge
       
                mike_hearn wrote 16 hours 40 min ago:
                Fil-C has advanced a lot since I last looked at it:
                
                > Fil-C is currently 1.5x slower than normal C in good cases,
                and about 4x slower in the worst cases.
                
                with room for optimization still. Compatibility has improved
                massively too, due to big changes to how it works. The early
                versions were kind of toys, but if Filip's claims about the
                current version hold up then this is starting to look like a
                very useful bit of kit. And he has the kind of background that
                means we should take this seriously. There's a LOT of use cases
                for taking stuff written in C and eliminating memory safety
                issues for only a 50% slowdown.
       
          taylorallred wrote 1 day ago:
          @AndyKelley I'm super curious what you think the main factors are
          that make languages like Zig super fast at compiling where languages
          like Rust and Swift are quite slow. What's the key difference?
       
            AlienRobot wrote 11 hours 51 min ago:
            One difference that Zig has is that it doesn't have multiline
            comments or multiline strings, meaning that the parser can parse
            any line correctly without context. I assume this makes
            parallelization trivial.
            
            There is no operator overloading, as in C, so A + B can only mean
            one thing.
            
            You can't redeclare a variable, so foo can only map to one thing.
            
            The list goes on.
            
            Basically it was designed to compile faster, and that means many
            issues on Github have been getting rejected in order to keep it
            that way. It's full of compromises.
       
            steveklabnik wrote 1 day ago:
            I'm not Andrew, but Rust has made several language design decisions
            that make compiler performance difficult. Some aspects of compiler
            speed come down to that.
            
            One major difference is the way each project considers compiler
            performance:
            
            The Rust team has always cared to some degree about this. But, from
            my recollection of many RFCs, "how does this impact compiler
            performance" wasn't a first-class concern. And that also doesn't
            really speak to a lot of the features that were basically
            implemented before the RFC system existed. So while it's important,
            it's secondary to other things. And so while a bunch of
            hard-working people have put in a ton of work to improve
            performance, they also run up against these more fundamental
            limitations at the limit.
            
            Andrew has pretty clearly made compiler performance a first-class
            concern, and that's affected language design decisions. Naturally
            this leads to a very performant compiler.
       
              rtpg wrote 20 hours 30 min ago:
              >  Rust has made several language design decisions that make
              compiler performance difficult
              
              Do you have a list off the top of your head/do you know of a
              decent list? I've now read many "compiler slow" thoughtpieces by
              many people and I have yet to see someone point at a specific
              feature and say "this is just intrinsically harder".
              
              I believe that it likely exists, but would be good to know what
              feature to get mad at! Half joking of course
       
                steveklabnik wrote 8 hours 57 min ago:
                Brian Anderson wrote up his thoughts here, and it's a good
                intro to the topic: [1] Let's dig into this bit of that, to
                give you some more color:
                
                > Split compiler/package manager — although it is normal for
                languages to have a package manager separate from the compiler,
                in Rust at least this results in both cargo and rustc having
                imperfect and redundant information about the overall
                compilation pipeline. As more parts of the pipeline are
                short-circuited for efficiency, more metadata needs to be
                transferred between instances of the compiler, mostly through
                the filesystem, which has overhead.
                
                > Per-compilation-unit code-generation — rustc generates
                machine code each time it compiles a crate, but it doesn’t
                need to — with most Rust projects being statically linked,
                the machine code isn’t needed until the final link step.
                There may be efficiencies to be achieved by completely
                separating analysis and code generation.
                
                Rust decided to go with the classic separate compilation model
                that languages like C use. Let's talk about that compared to
                Zig, since it was already brought up in this thread.
                
                So imagine we have a project, A, and it depends on B. B is a
                huge library, 200,000 lines of code, but we only use one
                function from it in A, and that function is ten lines. Yes,
                this is probably a bad project management decision, but we're
                using extremes here to make a point.
                
                Cargo will compile B first, and then A, and then link things
                together. That's the classic model. And it works. But it's
                slow: rust had to compile all 200,000 lines of code in B, even
                though we only are gonna need ten lines. We do all of this
                work, and then we throw it away at the end. A ton of wasted
                time and effort. This is often mitigated by the fact that you
                compile B once, and then compile A a lot, but this still puts a
                lot of pressure on the linker, and generics also makes this
                more complex, but I'm getting a bit astray of the main point
                here, so I'll leave that alone for now.
                
                Zig, on the other hand, does not do this. It requires that you
                compile your whole program all at once. This means that they
                can drive the compilation process beginning from main, in other
                words, only compile the code that's actually reachable in your
                program. This means that in the equivalent situation, Zig only
                compiles those ten lines from B, and never bothers with the
                rest. That's just always going to be faster.
                
                Of course, there are pros and cons to both of these decisions,
                Rust made the choice it did here for good reasons. But it does
                mean it's just going to be slower.
                
   URI          [1]: https://www.pingcap.com/blog/rust-compilation-model-ca...
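                
                If you want to see how much of your own build this accounts
                for, two real tools help (a sketch, not a prescription):
                
                ```
                # per-crate compile times and pipeline parallelism;
                # built into cargo, writes an HTML report under target/
                cargo build --timings
                
                # count the monomorphized copies handed to the LLVM backend
                cargo install cargo-llvm-lines
                cargo llvm-lines | head -20
                ```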
       
                mike_hearn wrote 16 hours 50 min ago:
                Rust heavily uses value types with specialized generics, which
                explodes the work needed by the compiler. It can - sometimes -
                improve performance. But it always slows down compilation.
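                
                A toy sketch of that effect (hypothetical code, not from the
                thread): each concrete type stamps out its own copy of a
                generic function, while a trait object is compiled once and
                dispatches through a vtable:
                
                ```
                use std::fmt::Display;
                
                // monomorphized: codegen emits one copy per concrete T used
                fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
                    let mut max = items[0];
                    for &x in &items[1..] {
                        if x > max { max = x; }
                    }
                    max
                }
                
                // dynamic dispatch: compiled exactly once
                fn print_all(items: &[Box<dyn Display>]) {
                    for x in items {
                        println!("{x}");
                    }
                }
                
                fn main() {
                    largest(&[1, 2, 3]);    // instantiates largest::<i32>
                    largest(&[1.0, 2.0]);   // instantiates largest::<f64>
                    let xs: Vec<Box<dyn Display>> =
                        vec![Box::new(1), Box::new("hi")];
                    print_all(&xs);
                }
                ```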
       
                Mawr wrote 17 hours 51 min ago:
                You can have your house built fast, cheap, or well. Pick two;
                or a bit of all three that adds up to the same effort required.
                You can't have all three.
                
                You can't have a language with 100% of the possible runtime
                perf, 100% of the possible compile speed and 100% of the
                possible programmer ease-of-use.
                
                At best you can abuse the law of diminishing returns aka the
                80-20 rule, but that's not easy to balance and you run the risk
                of creating a language that's okay at everything, but without
                any strong selling points, like the stellar runtime performance
                Rust is known for.
                
                So a better way to think about it is: Given Rust's numerous
                benefits, is having subpar compilation time really that big of
                a deal?
       
                  rtfeldman wrote 8 hours 33 min ago:
                  > Given Rust's numerous benefits, is having subpar
                  compilation time really that big of a deal?
                  
                  As someone who uses Rust as a daily driver at work at zed.dev
                  (about 600K LoC of Rust), and Zig outside of work on
                  roc-lang.org (which was about 300K LoC of Rust before we
                  decided to rewrite it in Zig, in significant part because of
                  Rust's compilation speed), yes - it is an absolutely huge
                  deal.
                  
                  I like a lot of things about Rust, but its build times are my
                  biggest pain point.
       
            coolsunglasses wrote 1 day ago:
            I'm also curious because I've (recently) compiled more or less
            identical programs in Zig and Rust and they took the same amount of
            time to compile. I'm guessing people are just making Zig programs
            with less code and fewer dependencies and not really comparing
            apples to apples.
       
              kristoff_it wrote 1 day ago:
              Zig is starting to migrate to custom backends for debug builds
              (instead of using LLVM) plus incremental compilation.
              
              All Zig code is built in a single compilation unit and everything
              is compiled from scratch every time you change something,
              including all dependencies and all the parts of the stdlib that
              you use in your project.
              
              So you've been comparing Zig rebuilds that do all the work every
              time with Rust rebuilds that cache all dependencies.
              
              Once incremental is fully released you will see instant rebuilds.
       
                metaltyphoon wrote 1 day ago:
                When does this land in Zig? Will aarch64 be supported?
       
                  mlugg wrote 1 day ago:
                  When targeting x86_64, the self-hosted backend is already
                  enabled by default on the latest builds of Zig (when
                  compiling in Debug mode). The self-hosted aarch64 backend
                  currently isn't generally usable (so we still default to LLVM
                  when targeting aarch64), but it's likely to be the next ISA
                  we focus on codegen for.
       
                    metaltyphoon wrote 1 day ago:
                    I assume x86_64 is Linux only correct?
       
                      AndyKelley wrote 22 hours 3 min ago:
                      Not quite- any ELF or MachO target is enabled by default
                      already. Windows is waiting on some COFF linker bug
                      fixes.
       
            AndyKelley wrote 1 day ago:
            Basically, not depending on LLVM or LLD. The above is only possible
            because we invested years into making our own x86_64 backend and
            our own linker. You can see all the people ridiculing this decision
            2 years ago
            
   URI      [1]: https://news.ycombinator.com/item?id=36529456
       
              VeejayRampay wrote 15 hours 9 min ago:
              what is even the point of quoting reactions from two years ago?
              
              this is a terrible look for your whole community
       
                elktown wrote 9 hours 14 min ago:
                Honestly I think it's good to highlight it. As a industry we're
                too hampered by "Don't even try that, use the existing thing"
                and it's causing these end results.
       
              zozbot234 wrote 1 day ago:
              The Rust folks have cranelift and wild BTW. There are
              alternatives to LLVM and LLD, even though they might not be as
              obvious to most users.
       
              unclad5968 wrote 1 day ago:
              LLVM isn't a good scapegoat. A C application equivalent in size
              to a Rust or C++ application will compile an order of magnitude
              quicker, and they all use LLVM. I'm not a compiler expert, but it
              doesn't seem right to me that the only possible path to quick
              compilation for Zig was a custom backend.
       
                int_19h wrote 1 day ago:
                It will compile an order of magnitude quicker because it often
                isn't doing the same thing - e.g. functions that would be
                aggressively inlined in C++ or Rust or Zig are compiled
                separately and linked normally in C, and generally there's no
                equivalent of compile-time generics in C code (because you
                have to either spell out all the instantiations by hand or use
                the preprocessor or a code generator to do something that is
                two lines of code in C++).
       
                MobiusHorizons wrote 1 day ago:
                Be that as it may, many C compilers are still an order of
                magnitude faster than LLVM. Probably the best example is tcc,
                although it is not the only one. C is a much simpler language
                than rust, so it is expected that compilation should take less
                time for C. That doesn’t mean llvm isn’t a significant
                contributor to compilation speed. I believe cranelift
                compilation of rust is also much faster than the llvm path
       
                  unclad5968 wrote 1 day ago:
                  > That doesn’t mean llvm isn’t a significant contributor
                  to compilation speed.
                  
                  That's not what I said. I said it's unlikely that fast
                  compilation cannot be achieved while using LLVM which, I
                  would argue, is proven by the existence of a fast compiler
                  that uses LLVM.
       
          echelon wrote 1 day ago:
          Zig is a small and simple language. It doesn't need a complicated
          compiler.
          
          Rust is a large and robust language meant for serious systems
          programming. The scope of problems Rust addresses is large, and Rust
          seeks to be deployed to very large scale software problems.
          
          These two are not the same and do not merit an apples to apples
          comparison.
          
          edit: I made some changes to my phrasing. I described Zig as a "toy"
          language, which wasn't the right wording.
          
          These languages are at different stages of maturity, have different
          levels of complexity, and have different customers. They shouldn't be
          measured against each other so superficially.
       
            ummonk wrote 1 day ago:
            This is an amusing argument to make in favor of Rust, since it's
            exactly the kind of dismissive statement that Ada proponents make
            about other languages including Rust.
       
            steveklabnik wrote 1 day ago:
            Come on now. This isn't acceptable behavior.
            
            (EDIT: The parent has since edited this comment to contain more
            than just "zig bad rust good", but I still think the combative-ness
            and insulting tone at the time I made this comment isn't cool.)
       
              echelon wrote 1 day ago:
              > but I still think the combative-ness and insulting tone at the
              time I made this comment isn't cool
              
              Respectfully, the parent only offers up a Zig compile time
              metric. That's it. That's the entire comment.
              
              This HN post about Rust is now being dominated by a cheap shot
              Zig one liner humblebrag from the lead author of Zig.
              
              I think this thread needs a little more nuance.
       
                Mawr wrote 17 hours 39 min ago:
                > Respectfully, the parent only offers up a Zig compile time
                metric. That's it. That's the entire comment.
                
                That's correct, but slinging cheap shots at each other is not
                how discussions on this site are supposed to be.
                
                > I think this thread needs a little more nuance.
                
                Yes, but your comment offers none.
       
                steveklabnik wrote 1 day ago:
                FWIW, I think your revised comment is far better, even though I
                disagree with some of the framing, there's at least some
                substance there.
                
                Being frustrated by perceived bad behavior doesn't mean
                responding with more bad behavior is a good way to improve the
                discourse, if that's your goal here.
       
                  echelon wrote 1 day ago:
                  You're 100% right, Steve. Thank you for your voice of
                  moderation. You've been amazing to this community.
       
                    steveklabnik wrote 1 day ago:
                    It's all good. I'm very guilty of bad behavior myself a lot
                    of the time. It's on all of us to give gentle nudges when
                    we see each other getting out of line. I deserve to be told
                    the same if you see me doing this too!
       
        MangoToupe wrote 1 day ago:
        I don't really consider it to be slow at all. It seems about as
        performant as any other language of this complexity, and it's far
        faster than the 15-minute C++ and Scala build times I'd place in
        the same category.
       
          BanterTrouble wrote 12 hours 17 min ago:
          The memory usage is quite large compared to C/C++ when compiling. I
          use Virtual Machines for Demos on my YouTube Channel and compiling
          something large in Rust requires 8GB+.
          
          In C/C++ I don't even have to worry about it.
       
            gpm wrote 8 hours 34 min ago:
            I can't agree, I've had C/C++ builds of well known open source
            projects try to use >100GB of memory...
       
              BanterTrouble wrote 8 hours 12 min ago:
              Maybe something else is going on then. I've done builds of some
              large open source projects and most of the time they were maxing
              the cores (I was building with -j32), but memory usage was fine.
              
              Out of interest, what were they?
       
                gpm wrote 6 hours 7 min ago:
                > Out of interest what were they?
                
                The one that immediately comes to mind is cvc5... not super
                recently though.
                
                I suspect that "tried to" is doing a bit of work here. The fact
                that it was failing and swapping out probably meant that the
                more memory heavy g++ processes were going slower than the
                memory light ones, resulting in more of them running
                simultaneously than would likely have happened in a normal
                successful build. Still, this was on a system with 32GB of ram,
                so it was using roughly that before swapping would slow down
                more memory intensive processes.
       
            windward wrote 12 hours 3 min ago:
            I can't say the same. Telling people to use `-j$(nproc)` in lieu of
            `-j` to avoid the wrath of the OOM-killer is a rite of passage
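            
            (i.e. something like the following, so the job count tracks the
            core count instead of being unbounded:)
            
            ```
            # a bare -j lets make spawn unlimited parallel jobs; capping it
            # at the number of CPUs keeps peak memory in check
            make -j"$(nproc)"
            ```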
       
          mountainriver wrote 21 hours 52 min ago:
          I also don’t understand this, the rust compiler hardly bothers me
          at all when I’m working. I feel like this is due to how bad it was
          early on and people just sticking to that narrative
       
          randomNumber7 wrote 1 day ago:
          Since C++ templates are Turing-complete, it is pointless to complain
          about the compile times without considering the actual code :)
       
        ecshafer wrote 1 day ago:
        The Rust compiler is slow. But if you want more features from your
        compiler, you have to accept a slower compiler; there isn't a way
        around that. However, this blog post doesn't really seem to be about
        that so much as an annoyance with how they deploy binaries.
       
        adastra22 wrote 1 day ago:
        As a former C++ developer, claims that rust compilation is slow leave
        me scratching my head.
       
          oreally wrote 15 hours 24 min ago:
          Classic case of:
          
          New features: yes
          
          Talking to users and fixing actual problems: lolno, I CBF
       
          MobiusHorizons wrote 1 day ago:
          Things can still be slow in absolute terms without being as slow as
          C++. The issues with compiling C++ are incredibly well understood and
          documented. It is one of the worst languages on earth for compile
          times. Rust doesn’t share those language level issues, so the
          expectations are understandably higher.
       
            const_cast wrote 1 day ago:
            Rust shares pretty much every language-level issue C++ has with
            compile times, no? Monomorphization explosion, turing-complete
            compile time macros, complex type system.
       
              steveklabnik wrote 21 hours 59 min ago:
              There's a lot of overlap, but not that simple. Unless you also
              discount C issues that C++ inherits. Even then, there's
              subtleties and differences between the two that matter.
       
            int_19h wrote 1 day ago:
            But it does share some of those issues. Specifically, while Rust
            generics aren't as unstructured as C++ templates, the main burden
            in C++ is actually compiling all those tiny instantiations, and
            Rust's monomorphization has the exact same problem, responsible
            for the bulk of its compile times.
       
          eikenberry wrote 1 day ago:
          Which is one of the reasons why Rust is considered to be targeting
          C++'s developers. C++ devs already have the Stockholm syndrome needed
          to tolerate the tooling.
       
            galangalalgol wrote 1 day ago:
            Also, modern C++ with value semantics is more functional than many
            other languages people might come to Rust from, and that keeps the
            borrow checker from being as annoying. If people are used to
            making webs of stateful classes with references to each other, the
            borrow checker is horrific - but that is because that design
            pattern is horrific if you multithread it.
       
            zozbot234 wrote 1 day ago:
            > Stockholm syndrome
            
            A.k.a. "Remember the Vasa!"
            
   URI      [1]: https://news.ycombinator.com/item?id=17172057
       
            MyOutfitIsVague wrote 1 day ago:
            Rust's compilation is slow, but the tooling is just about the best
            that any programming language has.
       
              adastra22 wrote 17 hours 5 min ago:
              Slow compared to what? I’m still scratching my head at this. My
              cargo builds are insanely fast, never taking more than a minute
              or two even on large projects. The only ahead-of-time compiled
              language I’ve used with faster compilation speed is Go, and
              that is a language specifically designed around (and arguably
              crippled by) the requirement for fast compilation. Rust is
              comparable to C compilation, and definitely faster than C++,
              Haskell, Java, Fortran, Algol, and Common Lisp.
       
                johnisgood wrote 4 hours 6 min ago:
                Just a few days ago I used cargo to install something. It took
                like two minutes at the last stage. Definitely not comparable
                to C or Fortran. I never had to wait that much before. With
                C++? Definitely. Never with C though.
       
              GuB-42 wrote 1 day ago:
              How good is the debugger? "edit and continue"? Hot reload? Full
              IDE?
              
              I don't know enough Rust, but I find these aspects are seriously
              lacking in C++ on Linux, and it is one of the few things I think
              Windows has it better for developers. Is Rust better?
       
                steveklabnik wrote 21 hours 59 min ago:
                > debugger
                
                I've only ever really used a debugger on embedded; we used gdb
                there. I know VS Code has a debugger that works, and I'm sure
                other IDEs do too.
                
                > edit and continue
                
                Hard to do in a pre-compiled language with no runtime, if
                you're asking about what I think you're asking about.
                
                > Hot reload
                
                Other folks gave you good links, but this stuff is pretty new,
                so I wouldn't claim that this is great and often good and such.
                
                > Full IDE
                
                I'm not aware of Rust-specific IDEs, but many IDEs have good
                support for Rust. VS Code is the most popular amongst users
                according to the annual survey. The Rust Project distributes an
                official LSP server, so you can use that with any editor that
                supports it.
       
                  izacus wrote 16 hours 32 min ago:
                  So the answer is very clear "no" on all accounts, just like
                  for other languages built by people who don't understand the
                  value of good tooling.
       
                    frollogaston wrote 7 hours 54 min ago:
                    I haven't used Rust much, but the tooling felt very solid.
                    There's a default package manager that works well, unlike
                    many other languages including C++ and somehow Python.
                    Debugging is fine. Idk why you expected edit-and-continue,
                    it's not like you get that in C++ either.
       
                      GuB-42 wrote 1 hour 28 min ago:
                      You have "edit and continue" in Visual Studio (the real
                      IDE, not VS Code).
                      
                      And I mentioned it as a downside of C++ on Linux, and I
                      would expect a language that has "the best" tooling to
                      have that.
                      
                      C++ tooling isn't that great, but it has one thing going
                      for it: it is popular in the video game industry, and the
                      video game industry has some of the best tools.
                      
                      And sure enough, if by tooling you mean "package
                      management", I'd say everything is better than C++, and
                      on the other side, it seems that cargo is pretty good. I
                      don't know how they tackle the "left-pad" problem that
                      plagues npm though. By that I mean supply-chain attacks.
       
                        frollogaston wrote 1 hour 12 min ago:
                        It's not like npm is particularly bad at handling
                        supply-chain attacks, it's just a very popular
                        ecosystem and gets targeted more as a result. Idk how
                        you truly solve this without code audits, and if
                        anything the more popular/visible packages will be
                        audited more.
                        
                        Btw, left-pad fallout wasn't all that bad. It's not
                        like the author put something malicious into the code.
                        For less than a day, people couldn't download that dep
                        from npm. If someone really needed to fix a build, they
                        could copy in a backup. Pretty sure a typical C++ or
                        Python project build gets broken on its own more often
                        than that.
       
                          GuB-42 wrote 22 min ago:
                          > Idk how you truly solve this without code audits
                          
                          Idk either, but code audits are definitely a
                          solution. Take Debian packages for instance. Debian
                          has package maintainers, and while they may not do
                          full audits, they will at least test it before
                          publishing. In addition, it doesn't get in the
                          "stable" release before an extensive testing phase.
                          Security patches are usually backported.
                          
                          Or do like with the Apple App Store, where you don't
                          get to publish anything unreviewed.
                          
                          These are not perfect solutions; there is no such
                          thing as a perfect solution. For instance, Debian is
                          famously lagging behind in versions, and the App
                          Store will sometimes reject your app for no good
                          reason, while being expensive. In every case there is
                          some barrier to entry, a slow process, and it costs
                          time and money, but that mitigates some of the
                          issues.
                          
                          Npm seems to have very little safeguards, has a
                          culture of always taking the latest version, and as a
                          result is often victim to supply-chain attacks. I
                          don't think it is just popularity. Debian is really
                          popular too, but AFAIK, it doesn't have this problem,
                          in fact, one of the best known supply-chain attack is
                          the xz library, and Debian didn't fall to it.
       
                mdaniel wrote 23 hours 18 min ago:
                > How good is the debugger? "edit and continue"?
                
                Relevant: Subsecond: A runtime hotpatching engine for Rust
                hot-reloading - [1] - June, 2024 (36 comments)
                
                > Full IDE? [2] (newly free for non-commercial use)
                
                > find these aspects are seriously lacking in C++ on Linux [3]
                (same, non-commercial)
                
   URI          [1]: https://news.ycombinator.com/item?id=44369642
   URI          [2]: https://www.jetbrains.com/rust/
   URI          [3]: https://www.jetbrains.com/clion/
       
                adastra22 wrote 1 day ago:
                No idea because I never do that. Nor does any rust programmer I
                know. Which may answer your question ;)
       
                  GuB-42 wrote 43 min ago:
                  If programmers don't use a debugger, that's because the
                  debugger is bad.
                  
                  For me, the ideal debugger is one you never leave. That is
                  you are in a debugging session, maybe with your code running,
                  or stopped on a breakpoint, and you write your code without
                  leaving the session. You can see the values of your variables
                  as you type them, the branches that will be taken, etc...
                   When an error condition happens, you break on it, and still
                   without leaving the session, you can fix the code, roll back
                   to before the error happened, and see if it passes. IntelliJ is
                  close to that, and it seems like some tools from the video
                  game industry are even better.
                  
                   Debugging should be natural; it shouldn't be something you
                   pull out as a last resort. If it is not natural, it is a bad
                   debugger, or a bad development environment, period. Maybe
                   there are reasons why it is bad, maybe the language designers
                   have other priorities, which is acceptable, but it doesn't
                   change the fact that it is bad. Same goes for slow compile
                   times, by the way.
       
                  frollogaston wrote 7 hours 49 min ago:
                  "Rust devs don't use debuggers" isn't a good answer. The one
                  time I used Rust for some project like 7 years ago, I did
                  have to use a debugger, and it was fine.
       
          shadowgovt wrote 1 day ago:
           I thoroughly enjoy all the work C does on encapsulation and on
           reducing compilation to separate compile and link steps... only for
           C++ to come along and undo almost all of it through the simple
           expedient of requiring templates for everything.
           
           Oops, changed one template in one header. And that impacts... 98% of
           my code.
       
        tmtvl wrote 1 day ago:
         Just set up a build server and have your Docker containers fetch
         prebuilt binaries from that?
       
        kelnos wrote 1 day ago:
        > This is... not ideal.
        
        What?  That's absolutely ideal!  It's incredibly simple.  I wish
        deployment processes were always that simple!  Docker is not going to
        make your deployment process simpler than that.
        
        I did enjoy the deep dive into figuring out what was taking a long time
        when compiling.
       
          quectophoton wrote 1 day ago:
          One thing I like about Alpine Linux is how easy and dumbproof it is
          to make packages. It's not some wild beast like trying to create
          `.deb` files.
          
           If anyone out there is already fully committed to using only Alpine
           Linux, I'd recommend trying to create a native package at least
           once.
       
            eddd-ddde wrote 22 hours 31 min ago:
            I'm not familiar with .deb packages, but one thing I love about
            Arch Linux is PKGBUILD and makepkg. It is ridiculously easy to make
            a package.
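             
             To illustrate, a minimal PKGBUILD is roughly this shape (a
             hypothetical sketch: the project name, URL, and checksum are
             placeholders):
             
             ```
             # Hypothetical minimal PKGBUILD for a Rust binary.
             pkgname=mysite
             pkgver=1.0.0
             pkgrel=1
             pkgdesc="Example web server"
             arch=('x86_64')
             license=('MIT')
             makedepends=('rust')
             source=("https://example.com/$pkgname-$pkgver.tar.gz")
             sha256sums=('SKIP')
             
             build() {
               cd "$pkgname-$pkgver"
               cargo build --release --locked
             }
             
             package() {
               cd "$pkgname-$pkgver"
               install -Dm755 "target/release/$pkgname" \
                 "$pkgdir/usr/bin/$pkgname"
             }
             ```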
       
        ahartmetz wrote 1 day ago:
        That person seems to be confused. Installing a single, statically
        linked binary is clearly simpler than managing a container?!
       
          vorgol wrote 1 day ago:
          Exactly. I immediately thought of the grug brain dev when I read
          that.
       
          jerf wrote 1 day ago:
           Also strikes me as not fully understanding what exactly Docker is
           doing. In reference to building everything in a Docker image:
          
          "Unfortunately, this will rebuild everything from scratch whenever
          there's any change."
          
           In this situation, with only one person as the builder and no need
           for CI or CD or whatever, there's nothing wrong with building
           locally with all the local conveniences and just slurping the result
           into a Docker container. Double-check any settings that may
           accidentally embed local paths, if those paths contain anything that
           would bother you. (In my case it would merely reveal that, yes,
           someone with my username built it and they have a "src" directory...
           you can tell how worried I am about both those tidbits by the fact I
           just posted them publicly.)
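           
           A minimal sketch of that workflow (hypothetical binary name; assumes
           a statically linked musl build so the binary carries no runtime
           library dependencies):
           
           ```
           # Built on the host first, with all the local caches warm:
           #   cargo build --release --target x86_64-unknown-linux-musl
           FROM scratch
           COPY target/x86_64-unknown-linux-musl/release/mysite /mysite
           ENTRYPOINT ["/mysite"]
           ```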
          
           It's good for CI/CD in a professional setting to ensure that you can
           build a project from a hard drive, a magnetic needle, and a monkey
           trained to scratch a minimal kernel onto it, and bootstrap from
           there, but personal projects don't need that.
       
            linkage wrote 1 day ago:
            Half the point of containerization is to have reproducible builds.
            You want a build environment that you can trust will be identical
            100% of the time. Your host machine is not that. If you run `pacman
            -Syu`, you no longer have the same build environment as you did
            earlier.
            
            If you now copy your binary to the container and it implicitly
            expects there to be a shared library in /usr/lib or wherever, it
            could blow up at runtime because of a library version mismatch.
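             
             If you do build in a container, pinning the builder image keeps
             the environment fixed (a sketch; the digest is a placeholder to
             fill in):
             
             ```
             # Pin the toolchain image so the build environment can't drift
             # with host updates. Replace <digest> with a real one.
             FROM rust:1.78-slim@sha256:<digest> AS builder
             ```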
       
              missingdays wrote 12 hours 38 min ago:
               Nobody is suggesting copying the binary into the Docker
               container.
               
               When developing locally, use `cargo test` in your CLI. When
               deploying to the server, build the Docker image on CI. If it
               takes 5 minutes to build it, so be it.
       
            scuff3d wrote 1 day ago:
            Thank you! I got a couple minutes in and was confused as hell.
            There is no reason to do the builds in the container.
            
             Even at work, I have a few projects where we had to build a Java
             uber jar (all the dependencies bundled into one big jar), and when
             we need it containerized we just copy the jar in.
            
             I honestly don't see much reason to do builds in the container
             unless there is some limitation in my CI/CD pipeline where I don't
             have access to the necessary build tools.
       
              mike_hearn wrote 16 hours 52 min ago:
              It's pretty clear that this whole project was god-tier level
              procrastination so I wouldn't worry too much about the details.
              The original stated problem could have been solved with a 5-line
              shell script.
       
                scuff3d wrote 7 hours 46 min ago:
                Not strictly related, but I got to the parts about using a
                dependency to speed up builds in the container, and that his
                website has "hundreds" of Rust dependencies, and I was reminded
                why I get so annoyed with Rust. It's a great language, but the
                practice of just duct taping a bunch of dependencies together
                drives me nuts.
       
                  aaronblohowiak wrote 3 hours 40 min ago:
                   What language are you using where you aren't pulling in deps
                   for much of the work?
       
                    scuff3d wrote 2 hours 40 min ago:
                    At work I prefer languages that take a "batteries included"
                    approach. So Go and Python are good examples. You can get
                    really far with just what is offered in the standard
                    libraries. Though obviously you can still pull in a
                    shitload of dependencies if you want.
                    
                    In my own time I usually write C or Zig.
       
          hu3 wrote 1 day ago:
          From the article, the goal was not to simplify, but rather to
          modernize:
          
          > So instead, I'd like to switch to deploying my website with
          containers (be it Docker, Kubernetes, or otherwise), matching the
          vast majority of software deployed any time in the last decade.
          
          Containers offer many benefits. To name some: process isolation,
          increased security, standardized logging and mature horizontal
          scalability.
       
            eeZah7Ux wrote 1 day ago:
            > process isolation, increased security
            
            no, that's sandboxing.
       
            a3w wrote 1 day ago:
             Increased security compared to bare hardware, but lower than VMs.
             Also lower than jails and rkt (Rocket), which seems to be dead.
       
            dwattttt wrote 1 day ago:
            Mightily resisting the urge to be flippant, but all of those
            benefits were achieved before Docker.
            
            Docker is a (the, in some areas) modern way to do it, but far from
            the only way.
       
            adastra22 wrote 1 day ago:
            So put the binary in the container. Why does it have to be compiled
            within the container?
       
              hu3 wrote 1 day ago:
               That is what they are doing. It's a two-stage Dockerfile.
              
              First stage compiles the code. This is good for isolation and
              reproducibility.
              
              Second stage is a lightweight container to run the compiled
              binary.
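               
               Roughly this shape (a hedged sketch, not the author's exact
               Dockerfile; image tags and the binary name are placeholders):
               
               ```
               # Stage 1: build in a pinned toolchain image.
               FROM rust:1.78 AS builder
               WORKDIR /app
               COPY . .
               RUN cargo build --release
               
               # Stage 2: copy only the artifact into a small runtime image.
               FROM debian:bookworm-slim
               COPY --from=builder /app/target/release/mysite /usr/local/bin/
               CMD ["mysite"]
               ```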
              
               Why is the author being attacked (by multiple comments) for not
               making things simpler, when that was never claimed as the goal?
               They are modernizing it.
              
              Containers are good practice for CI/CD anyway.
       
                adastra22 wrote 1 day ago:
                 Because he spends a good deal of the intro complaining that
                 this makes his dev practice slow. So don’t do it! It has
                 nothing to do with Docker but rather with the fact that he is
                 wiping the cache on every triggered build.
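                 
                 If the build does stay in Docker, BuildKit cache mounts avoid
                 the wipe (a sketch; the binary name is a placeholder, and
                 /usr/local/cargo is where the official rust image puts
                 CARGO_HOME):
                 
                 ```
                 # Persist cargo's registry and the target dir across builds.
                 # Cache mounts are not part of the final image, so copy the
                 # binary out within the same RUN.
                 RUN --mount=type=cache,target=/usr/local/cargo/registry \
                     --mount=type=cache,target=/app/target \
                     cargo build --release && \
                     cp /app/target/release/mysite /usr/local/bin/
                 ```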
       
                AndrewDucker wrote 1 day ago:
                I'm not sure why "complicate things unnecessarily" is
                considered more modern.
                
                Don't do what you don't need to do.
       
                  hu3 wrote 1 day ago:
                   You realize the author is compiling a Rust webserver for a
                   static website, right?
                  
                  They are already long past the point of "complicate things
                  unnecessarily".
                  
                  A simple Dockerfile pales in comparison.
       
                MobiusHorizons wrote 1 day ago:
                That’s a reasonable deployment strategy, but a pretty
                terrible local development strategy
       
                  taberiand wrote 1 day ago:
                   Devcontainers are a good compromise though - you can develop
                   within a context that is very nearly identical to
                   production; with a bit of finagling you could even use the
                   same Dockerfile for the devcontainer, the build image, and
                   the deployed image.
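                   
                   For instance, a minimal devcontainer.json can point at that
                   shared Dockerfile and pick a stage (the stage name here is
                   hypothetical):
                   
                   ```
                   {
                     "name": "site-dev",
                     "build": {
                       "dockerfile": "../Dockerfile",
                       "target": "builder"
                     }
                   }
                   ```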
       
       