_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Rosetta 2 creator leaves Apple to work on Lean full-time
       
       
        revskill wrote 1 hour 6 min ago:
         The LinkedIn back button is weird. Instead of returning to HN
         after pressing back, it goes to LinkedIn's homepage.
       
        danielktdoranie wrote 2 hours 31 min ago:
        I am pretty sure “lean” is that codeine cough syrup rappers drink
       
        ein0p wrote 2 hours 45 min ago:
        Apple just seems to be bleeding talent left and right. I wonder what's
        going on over there to cause people to leave when the job market is as
        uncertain as it is right now.
       
          blitzar wrote 1 hour 59 min ago:
           The 20th million doesn't hit as hard as the 19th, and when you
           make 2x your salary on the dividends from your stock, you start
           to wonder why not just do something more interesting.
       
          raverbashing wrote 2 hours 29 min ago:
          Citation needed?
          
          I mean, there will always be long tenured people leaving, even
          without offers on the table
          
          Some jobs get old eventually
       
        evaneykelen wrote 3 hours 50 min ago:
         In a previous discussion [1], Gary Davidian is mentioned, who
         also — initially single-handedly — did amazing work on
         architecture changes at Apple. There’s an interview with him in
         the Computer History Museum archive. [2]
        
   URI  [1]: https://news.ycombinator.com/item?id=28914208
   URI  [2]: https://youtu.be/MVEKt_H3FsI?si=BbRRV51ql1V6DD4r
       
        cwzwarich wrote 9 hours 18 min ago:
        This is me! Didn’t expect to see this on here, but I’m looking
        forward to working with everyone else at the Lean FRO and the wider
        Lean community to help make Lean even better.
        
        My background is in mathematics and I’ve had an interest in
        interactive theorem provers since before I was ever a professional
        software engineer, so it’s a bit of a dream come true to be able to
        pursue this full-time.
       
          singularity2001 wrote 3 hours 24 min ago:
           Sorry to hijack the discussion, but do you see any chance of
           consolidating the theoretical framework of real numbers with
           practical calculations on floats?
           That is, if I prove the correctness of some theorem for real
           numbers, ideally I would just use that as the algorithm to
           compute things with floats.
          
           Also, I was shocked to learn that the simple general comparison
           of (the equality of) two real numbers is not decidable, which is
           very logical if you think about it but an enormous hindrance for
           practical applications. Is there any workaround for that?
       
            zozbot234 wrote 3 hours 17 min ago:
            You can use floats to accelerate interval arithmetic (which is
            "exact" in the sense of constructive real numbers) but that
            requires setting the correct rounding modes, and being aware of
            quirks in existing hardware floating point implementations, some of
            which may e.g. introduce non-exact outputs in several of the least
            significant digits, or even flush "small" (for unclear definitions
            of "small", not always restricted to FP-denormal numbers) results
            to zero.
            
            Equality is not computable in the general case, but apartness can
            be stated exactly.  For some practical cases, one may also be able
            to prove that two real numbers are indeed equal.
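
             As a toy illustration of the interval idea (a sketch only:
             real implementations set hardware rounding modes rather than
             nudging with nextafter, and every name below is invented):

```python
import math

# A toy interval type with outward rounding via math.nextafter, so the
# true real-number result is always contained in the computed interval.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Round the lower bound down and the upper bound up.
        lo = math.nextafter(self.lo + other.lo, -math.inf)
        hi = math.nextafter(self.hi + other.hi, math.inf)
        return Interval(lo, hi)

    def apart_from(self, other):
        # Apartness is decidable: the intervals provably do not overlap.
        return self.hi < other.lo or other.hi < self.lo

x = Interval(0.1, 0.1) + Interval(0.2, 0.2)
y = Interval(0.3, 0.3)
# x and y overlap, so we cannot conclude they are apart -- mirroring the
# undecidability of real-number equality. Against [1.0, 1.0] they are
# provably apart.
print(x.apart_from(y))                    # False
print(x.apart_from(Interval(1.0, 1.0)))  # True
```

             Note the asymmetry: the method can certify apartness, but
             overlapping intervals prove nothing about equality.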
       
          cookiengineer wrote 4 hours 39 min ago:
           Do you have any book recommendations regarding disassembly,
           syscalls, x86-64 assembly, etc.?
          
          What do I need to know to be able to build something as advanced as
          rosetta?
          
          I am assuming that you reimplemented the syscalls for each host/guest
          system as a reliable abstraction layer to test against. But so many
          things are way beyond my level of understanding.
          
           Did you build your own assembly-level debugger? What kind of
           tools did you use along the way? Were reversing tools useful at
           all (like Ghidra, Binary Ninja, etc.)?
       
            peterkelly wrote 1 hour 19 min ago:
            "Virtual Machines: Versatile Platforms for Systems and Processes"
            by Jim Smith and Ravi Nair is a great book on the topic.
       
          steego wrote 7 hours 3 min ago:
          This is exciting!
          
          Given your experience with Rosetta 2 and your deep understanding of
          code translation and optimization, what specific areas in Lean’s
          code generation pipeline do you see as ‘low-hanging fruit’ for
          improvement?
          
          Additionally, which unique features or capabilities of Lean do you
          find most promising or exciting to leverage in pushing the boundaries
          of efficient and high-quality code generation?
       
          croemer wrote 7 hours 54 min ago:
           Were you really _the_ creator of Rosetta 2? How big was the
           team, and what was your role in it?
          
          Rosetta 2 is amazing, I'm genuinely surprised this is the work of
          just one person!
       
            spockz wrote 4 hours 20 min ago:
             It is my experience that it is easier to create good-quality
             things as an individual than as a team, especially for the
             core of a product. Also look at Asahi.
            
            However, to really finish/polish a product you need a larger group
            of people. To get the UI just right, to get the documentation
            right, to advocate the product, to support it.
            
             It is easily possible to have 10 people working on the team
             with only a single core person. Then find someone to act as
             product manager, so that as the core person you can focus on
             the core of the product while still setting the direction,
             without having to chase all the other work.
            
            It is possible, but not easy to set up in most organisations. You
            need a lot of individual credit/authority and/or the business case
            needs to be very evident.
       
            cwzwarich wrote 7 hours 21 min ago:
            I was the only person working on it for ~2 years, and I wrote the
            majority of the code in the first version that shipped. That said,
            I’m definitely glad that I eventually found someone else (and
            later a whole team) to work on it with me, and it wouldn’t have
            been as successful without that.
            
            When people think of a binary translator, they usually just think
            of the ISA aspects, as opposed to the complicated interactions with
            the OS etc. that can consume just as much (or even more)
            engineering effort overall.
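             
             To make that distinction concrete, here is a deliberately toy
             sketch (invented two-instruction ISA, invented syscall
             numbering; nothing to do with Rosetta's actual design) of why
             translated code still needs its own OS-interaction layer:

```python
# Translate a made-up guest ISA into host (Python) closures ahead of time.
# The ISA mapping is the "easy" part; the toy syscall shows why OS
# requests cannot simply be passed through to the host.

def emulate_syscall(state, number):
    # Guest ABI and host ABI differ, so each syscall is marshalled
    # explicitly instead of being forwarded.
    if number == 1:                 # toy "write accumulator" syscall
        state["out"].append(state["acc"])

def translate(program):
    """Ahead-of-time translation of guest instructions to host closures."""
    host_code = []
    for op, arg in program:
        if op == "addi":            # guest: acc += immediate
            host_code.append(
                lambda st, n=arg: st.__setitem__("acc", st["acc"] + n))
        elif op == "syscall":       # guest: OS request, must be emulated
            host_code.append(lambda st, n=arg: emulate_syscall(st, n))
        else:
            raise ValueError(f"unknown opcode {op}")
    return host_code

def run(program):
    state = {"acc": 0, "out": []}
    for host_insn in translate(program):
        host_insn(state)
    return state["out"]

print(run([("addi", 2), ("addi", 3), ("syscall", 1)]))  # [5]
```

             Even in this toy, the syscall shim is a separate subsystem
             from the instruction translator, which is the point of the
             comment above.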
       
              zmb_ wrote 4 hours 59 min ago:
               As someone frustrated in a team of 10+ that is struggling to
               ship even seemingly trivial things due to processes,
               overheads, and inefficiencies, I would really appreciate
               some insight into how you organize the work to allow a
               single developer to achieve this.
              
              How do you communicate with the rest of the organization? What is
              the lifecycle and release process like? Do you write requirements
              and specs for others (like validation or integration) to base
              their work on? Basically, what does the day to day work look
              like?
       
                tonyedgecombe wrote 3 hours 56 min ago:
                >How do you communicate with the rest of the organization?
                
                I wonder if Apple's renowned secrecy is a help with this. If
                nobody outside your small team knows what you are doing then it
                is hard for them to stick their oar in.
       
              sitkack wrote 6 hours 18 min ago:
               It is fascinating that this amazing system was largely the
               work of one person.  You mentioned that interacting with the
               OS was super difficult. What were the most enjoyable aspects
               of building Rosetta?
       
                porphyra wrote 5 hours 20 min ago:
                 I am also amazed that this was largely the work of one
                 person. Having a seamless and performant Rosetta 2 was a
                 major factor in why Apple's transition from Intel to Apple
                 Silicon was viable in the first place!
       
              archagon wrote 6 hours 32 min ago:
              It's a shame that Apple's stated intent is to throw the project
              away after a while. Personally, I really hope it sticks around
              forever, though I'm not optimistic.
       
                wtallis wrote 4 hours 50 min ago:
                Rosetta 2 can't go away until Apple is ready to also retire
                Game Porting Toolkit.  At most, they might drop support for
                running regular x86 macOS applications while keeping it around
                for Linux VMs and Windows applications, but that would be
                pretty weird.
       
                  nubinetwork wrote 3 hours 5 min ago:
                  > game porting toolkit
                  
                   I don't understand why Apple even bothers these days. I
                   wouldn't be surprised if Apple's gaming market is a
                   quarter of what the Linux gaming market currently is
                   (thanks to Valve and their work on Proton and, by
                   extension, Wine)...
       
                    drexlspivey wrote 2 hours 50 min ago:
                     Because people want to use their fancy new hardware to
                     play games? Linux market share wouldn’t be
                     increasing so fast if Valve didn’t do the work, so
                     why shouldn’t Apple do the same?
       
                      nubinetwork wrote 2 hours 20 min ago:
                      > so why shouldn't Apple do the same?
                      
                      If Apple truly cared, they would stop blocking older
                      games from being run on newer versions of OSX...
       
                  duskwuff wrote 4 hours 13 min ago:
                  In principle, the Linux Rosetta binaries should remain usable
                  well into the future. Even if Apple discontinues support for
                  Rosetta in their VMs, there's very little (beyond a simple,
                  easily removed runtime check) preventing them from being used
                  standalone.
       
                    vbezhenar wrote 1 hour 59 min ago:
                     AFAIK Linux Rosetta does not work standalone but uses
                     some channels to exchange x86 and ARM binary code
                     between the Linux guest and the macOS host. The actual
                     translation happens in macOS.
       
                    saagarjha wrote 2 hours 5 min ago:
                    The kernel could drop support.
       
                K7PJP wrote 5 hours 3 min ago:
                Where did Apple state that Rosetta 2 was to be deprecated?
       
                  larusso wrote 3 hours 51 min ago:
                   I think they're assuming from the past that this will
                   happen. When Apple moved from PowerPC to x86 there was
                   Rosetta 1. It got deprecated as well.
       
                    skissane wrote 2 hours 30 min ago:
                    I think it is different this time. A lot of developers use
                    Rosetta 2 for Linux to run x86-64 Linux Docker containers
                    under macOS (including me). They'll be upset if Apple
                    discontinues Rosetta 2 for Linux. By contrast, once the
                    PPC-to-Intel transition was under way, Rosetta was only
                    used for running old software, and as time went by that
                    software became increasingly outdated and use of it
                    declined. While I think Rosetta 2 for macOS usage will
                    likely decline over time too, I think Rosetta 2 for Linux
                    usage is going to be much more stable and Apple will likely
                    maintain it for a lot longer. Maybe if we eventually see a
                    gradual migration of data centres from x86-64 to ARM,
                    Rosetta 2 for Linux usage might begin to also decline, and
                    then Apple may actually kill it. But, if such a migration
                    happens, it is going to take a decade or more for us to get
                    there.
       
                      larusso wrote 30 min ago:
                      I just pointed out what happened in the past. I have no
                      clue if Apple will deprecate it and what reason they put
                      forward doing so. I personally like the fact that I can
                      run both arm and x86 binaries. But I think judging Apple
                      that if they don’t have a personal reason to support
                      Linux (they also use it for their services) they will
                      remove it. But deprecated dons’t mean it will be
                      removed anytime soon. Apple keeps APIs and frameworks as
                      long as they don’t interfere with something else.
       
                      CodesInChaos wrote 37 min ago:
                       What's the advantage of running x86-64 Linux Docker
                       containers over running ARM Linux Docker containers?
                       Aren't most distributions and packages available for
                       both platforms?
       
              croemer wrote 7 hours 12 min ago:
              That's super impressive. I remember being astonished that the x86
              executable of Python running through Rosetta 2 on my M1 was just
              a factor of 2 slower than the native version.
              
              QEMU was something like a factor of 5-10x slower than native,
              IIRC.
       
                lostmsu wrote 6 hours 20 min ago:
                QEMU probably had to account for differences in memory models.
                A fork with that stuff removed might be able to easily catch
                up.
       
                  bonzini wrote 5 hours 7 min ago:
                   QEMU loses a bit from being a generic translator instead
                   of being specialized for x86->ARM like Rosetta 2, Box64,
                   or FEX-Emu. It does a lot of spilling, for example, even
                   though x86 has far fewer registers than AArch64.
                   
                   Flags are also tricky, though they're pretty well
                   optimized. In the end the main issue with them is also
                   the spilling, but QEMU's generic architecture makes it
                   expensive to handle consecutive jump instructions, for
                   example.
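                   
                   One standard trick in this space is lazy condition-code
                   evaluation: record the operands of the last flag-setting
                   operation and materialize a flag only when a branch
                   actually reads it. A toy sketch (details invented;
                   QEMU's real implementation differs substantially):

```python
# Lazy flag evaluation for an emulated 32-bit x86-style ADD: instead of
# computing ZF/CF after every arithmetic op, store the operands and
# derive each flag on demand.
class LazyFlags:
    def __init__(self):
        self.op = None
        self.a = self.b = self.result = 0
        self.bits = 32

    def record_add(self, a, b, bits=32):
        # Remember just enough state to derive any flag later.
        self.op, self.a, self.b, self.bits = "add", a, b, bits
        self.result = (a + b) & ((1 << bits) - 1)
        return self.result

    def zf(self):
        # Zero flag: derived from the stored result only when asked.
        return self.result == 0

    def cf(self):
        # Carry flag: the unsigned sum overflowed the register width.
        return self.a + self.b >= (1 << self.bits)

f = LazyFlags()
f.record_add(0xFFFFFFFF, 1)  # 32-bit add wraps to 0
print(f.zf(), f.cf())        # True True
```

                   The win is that straight-line code which never branches
                   on a flag pays almost nothing for flag bookkeeping.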
       
                    croemer wrote 3 hours 55 min ago:
                    I found this blog post reverse engineering Rosetta 2
                    translated code:
                    
   URI              [1]: https://dougallj.wordpress.com/2022/11/09/why-is-r...
       
                      bonzini wrote 3 hours 28 min ago:
                       Interesting. Yeah, always being able to use Arm
                       flags is probably a big thing, since they even added
                       hardware support for that.
                       
                       It's a huge achievement for a single person to have
                       written most of that.
       
                        RandomThoughts3 wrote 42 min ago:
                        > It's a huge achievement for a single person to have
                        written most of that.
                        
                         QEMU was mostly Fabrice Bellard by himself at the
                         beginning, and plenty of emulators are
                         single-person projects.
                         
                         It’s a field which lends itself well to
                         single-person development. How to properly
                         architect a compiler/interpreter/emulator has been
                         studied to death, and everyone mostly uses the
                         same core principles, so there is little guesswork
                         as to how to start (provided you have taken the
                         time to study the field). If you are ready to put
                         in the effort, you can reach a working translator
                         from hard work alone. Then the interesting work of
                         optimising it starts.
                         
                         Don’t get me wrong, Rosetta 2 is a very
                         impressive achievement because the performance is
                         really good. I tip my metaphorical hat to whoever
                         did it. My post is more in the spirit of: you can
                         do something in the same ballpark too, if
                         that’s your kick.
       
          adamnemecek wrote 8 hours 46 min ago:
          What was the tipping point that made you want to work on Lean?
       
            cwzwarich wrote 6 hours 7 min ago:
            I don't think there was a single tipping point, just a growing
            accumulation of factors:
            
            - the release of Lean 4 slightly over a year ago, which impressed
            me both as a proof assistant and a programming language
            
            - the rapid progress in formalization of mathematics from 2017
            onward, almost all of which was happening in Lean
            
            - the growing relevance of formal reasoning in the wake of
            improvements in AI
            
            - seeing Lean's potential (a lot of which is not yet realized) for
            SW verification (especially of SW itself written in Lean)
            
            - the establishment of the Lean FRO at the right time, intersecting
            all of the above
       
              a1o wrote 2 hours 40 min ago:
                 How does Lean compare with Coq? (I am not familiar with
                 Lean, but am familiar with Coq.)
       
                denotational wrote 2 hours 16 min ago:
                Mario Carneiro’s MS Thesis has a good overview of the type
                theory and how it compares to Coq:
                
   URI          [1]: https://github.com/digama0/lean-type-theory/releases/d...
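                 
                 For a taste of the surface syntax (a trivial, hypothetical
                 example, not from the thesis), Lean 4 proofs can be
                 written as terms much like Coq's Gallina, or via tactics:

```lean
-- A trivial Lean 4 theorem, proved by a term
-- (roughly what `plus_comm` looks like in Coq).
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The same statement in tactic mode, Lean's analog of Coq's Ltac style.
example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

                 The deeper differences (definitional proof irrelevance,
                 quotient types, the kernel design) are what the thesis
                 above covers.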
       
          brcmthrowaway wrote 9 hours 9 min ago:
           Surprised you didn't go into something AI adjacent.
       
            fantod wrote 8 hours 30 min ago:
            I don't know what his reasons are but it makes sense to me. Yes,
            there are incredible results coming out of the AI world but the
            methods aren't necessarily that interesting (i.e. intellectually
            stimulating) and it can be frustrating working in a field with this
            much noise.
       
              uoaei wrote 8 hours 1 min ago:
              I don't want to come across as too harsh but having studied
              machine learning since 2015 I find the most recent crop of people
              excited about working on AI are deep in Dunning-Kruger. I think I
              conflate this a bit with the fascination of results over process
              (I suppose that befuddlement is what led me to physics over
              engineering) but working in ML research for so long it's hard to
              gin up a perspective that these things are actually
              teleologically useful, and not just randomly good enough most of
              the time to keep up the illusion.
       
                deet wrote 6 hours 26 min ago:
                I feel that way sometimes too.
                
                 But then I think about how maddeningly unpredictable human
                 thought and perception are, with phenomena like optical
                 illusions, cognitive biases, and limited working memory.
                 Yet they still produce incredibly powerful results.
                
                Not saying ML is anywhere near humans yet, despite all the
                recent advances, but perhaps a fully explainable AI system,
                with precise logic, 100% predictable, isn’t actually needed
                to get most of what we need out of AI. And given the
                “analog” nature of the universe maybe it’s not even
                possible to have something perfect.
       
                  hn_throwaway_99 wrote 5 hours 37 min ago:
                  > But then I think about how maddeningly unpredictable human
                  thought and perception is, with phenomena like optical
                  illusions, cognitive biases, a limited working memory.
                  
                   I agree with your general point (I think), but I think
                   that "unpredictable" is really the wrong word here.
                   Optical illusions, cognitive biases, and limited working
                   memory are mostly extremely predictable, and make
                   perfect sense if you look at the role that evolution
                   played in developing the human mind. E.g. many optical
                   illusions are due to the fact that the brain needs to
                   recreate a 3-D model from a 2-D image, and it has to do
                   this by doing what is statistically most likely in the
                   world we live in (or, really, the world of African
                   savannahs where humans first evolved and walked
                   upright). Thus, it's possible to "trick" this system by
                   creating a 2-D image from a 3-D set of objects that is
                   statistically unlikely in the natural world.
                  
                  FWIW Stephen Pinker's book "How the Mind Works" has a lot of
                  good examples of optical illusions and cognitive biases and
                  the theorized evolutionary bases for these things.
       
                croemer wrote 7 hours 25 min ago:
                What do you mean by "things that are actually teleologically
                useful"?
                
                Fellow physicist here by the way
       
                  uoaei wrote 6 hours 49 min ago:
                  Like useful in an intentional way: purpose-built and achieves
                  success via accurate, parsimonious models. The telos here
                  being the stated goal of a structurally sound agent that can
                  emulate a human being, as opposed to the accidental,
                  max-entropy implementations we have today.
       
                    croemer wrote 4 hours 1 min ago:
                    I see, so humans are also not usefully intelligent in an
                    intentional way, because they also follow the 2nd law of
                    thermodynamics and maximize entropy and aren't
                    deterministic?
       
                      throw646577 wrote 1 hour 4 min ago:
                      Pure, refined “but humans also”.
       
                    calf wrote 5 hours 39 min ago:
                    Is a guide dog teleologically useful?
       
                      jrflowers wrote 4 hours 39 min ago:
                      Not if you’re taste testing ceviche
       
                  Baeocystin wrote 7 hours 17 min ago:
                  Not OP, but I'm assuming he means that they are maddeningly
                  black-boxy, if you want to know how the sausage is made.
       
            adamnemecek wrote 8 hours 46 min ago:
            Lean is AI adjacent.
       
              saagarjha wrote 8 hours 19 min ago:
              Only because the AI people find it interesting. It's not really
              AI in itself.
       
                trenchgun wrote 3 hours 1 min ago:
                It's not ML, but it is AI
       
                zozbot234 wrote 4 hours 27 min ago:
                Proof automation definitely counts as AI.  Not all AI is based
                on machine learning or statistical methods, GOFAI is a thing
                too.
       
                fspeech wrote 5 hours 7 min ago:
                 If you want superhuman performance like the AlphaZero
                 series, you need a verifier (valuation network) to tell
                 you if you are on the right track. Lean (a proof checker)
                 can in general act as a trusted critic.
       
                ofrzeta wrote 5 hours 31 min ago:
                They do have AI on their roadmap, though:
                
   URI          [1]: https://lean-fro.org/about/roadmap-y2/
       
                mkl wrote 7 hours 34 min ago:
                It's not AI in itself, but it's one of the best possibilities
                for enabling AI systems to generate mathematical proofs that
                can be automatically verified to be correct, which is needed at
                the scale they can potentially operate.
                
                Of course it has many non-AI uses too.
       
                cwzwarich wrote 8 hours 5 min ago:
                If you’re interested in applications of AI to mathematics,
                you’re faced with the problem of what to do when the ratio of
                plausible proofs to humans that can check them radically
                changes. There are definitely some in the AI world who feel
                that the existing highly social construct of informal
                mathematical proof will remain intact, just with humans
                replaced by agents, but amongst mathematicians there is a
                growing realization that formalization is the best way to deal
                with this epistemological crisis.
                
                It helps that work done in Lean (on Mathlib and other
                developments) is reaching an inflection point just as these
                questions become practically relevant from AI.
       
        brcmthrowaway wrote 9 hours 19 min ago:
        What is Lean FRO?
       
          thih9 wrote 1 hour 40 min ago:
           Background about the organization: [1]. Their proof assistant /
           programming language: [2]
          
   URI    [1]: https://en.m.wikipedia.org/wiki/Convergent_Research
   URI    [2]: https://en.m.wikipedia.org/wiki/Lean_(proof_assistant)
       
          bagels wrote 5 hours 20 min ago:
          "we aim to tackle the challenges of scalability, usability, and proof
          (Mathematics) automation in the Lean proof assistant."
       
          threeseed wrote 8 hours 2 min ago:
          
          
   URI    [1]: https://lean-lang.org/lean4/doc/
       
            jemmyw wrote 7 hours 55 min ago:
            There are a lot of broken links in the docs. Like most of the
            feature links.
       
              kmill wrote 4 hours 27 min ago:
              There's a completely new language reference in the process of
              being written: [1] (by David Thrane Christiansen, co-author of
              The Little Typer, and Lean FRO member)
              
               Some links here seem to be broken at the moment — and
               David's currently on vacation so they likely won't be fixed
               until January — but if you see for example [2], it's
               supposed to be [3].
              
   URI        [1]: https://lean-lang.org/doc/reference/latest/
   URI        [2]: https://lean-lang.org/basic-types/strings/
   URI        [3]: https://lean-lang.org/doc/reference/latest/basic-types/s...
       
            croemer wrote 7 hours 59 min ago:
             That answers the Lean part; FRO stands for Focused Research
             Organization.
       
          cwzwarich wrote 9 hours 17 min ago:
          
          
   URI    [1]: https://lean-fro.org/about/
       
       
   DIR <- back to front page