_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
COMMENT PAGE FOR:
URI Rosetta 2 creator leaves Apple to work on Lean full-time
revskill wrote 1 day ago:
The LinkedIn back button is weird. Instead of going back to HN when you press back, it goes to LinkedIn's homepage.
Yujf wrote 1 day ago:
It's not weird, it's just disgusting. The back button should go back
danielktdoranie wrote 1 day ago:
I am pretty sure "lean" is that codeine cough syrup rappers drink
dilsmatchanov wrote 1 day ago:
URI [1]: https://youtu.be/4Or-5OLCNDA?si=mzd_o0573HPgCVrl&t=51
hinkley wrote 22 hours 43 min ago:
The audio on this is about the worst I've ever heard on YouTube.
I fast forwarded and at least he stops playing that loud music over
his quiet voice, but damn.
He gets off topic a lot (bullies, amphetamine salts??) and spends
the entire time talking to the commenters not the video recording.
Surely, there's a better video out there than this.
ein0p wrote 1 day ago:
Apple just seems to be bleeding talent left and right. I wonder what's
going on over there to cause people to leave when the job market is as
uncertain as it is right now.
turnsout wrote 1 day ago:
You could have posted this in 1985 and been right. Talented people
have options.
tchbnl wrote 1 day ago:
Sometimes people just want to work on cool stuff and have the luxury
of being able to do that. Rosetta 2 is shipped and done.
blitzar wrote 1 day ago:
The 20th million doesn't hit as hard as the 19th, and when you make 2x your salary on the dividends from your stock, you start to wonder why not just do something more interesting.
raverbashing wrote 1 day ago:
Citation needed?
I mean, there will always be long tenured people leaving, even
without offers on the table
Some jobs get old eventually
evaneykelen wrote 1 day ago:
In a previous discussion, the name of Gary Davidian was mentioned; he also, initially single-handedly, did amazing work on architecture changes at Apple. There's an interview with him in the Computer History Museum archive. [1]
URI [1]: https://news.ycombinator.com/item?id=28914208
URI [2]: https://youtu.be/MVEKt_H3FsI?si=BbRRV51ql1V6DD4r
markus_zhang wrote 1 day ago:
From the wiki it looks like Davidian's emulator perhaps uses interpretation, as the wiki says Eric's uses dynamic recompilation and Connectix's is even faster, so maybe more optimization.
I tried to find the source code of any without any success.
lproven wrote 5 hours 42 min ago:
All this stuff is or was proprietary, closed-source tech, and
what's more, it was tech that gave certain companies strong
competitive advantage at particular points in time -- so they had
strong incentives to make sure it did not leak.
(I see posters in this thread who do not know what I thought were
well-documented parts of the story, so I am trying to spell out the
context here.)
Some large reputable companies have histories of stealing others'
code, ideas, implementation methods, algorithms etc. and passing
them off as their own. IBM, Microsoft, Sun, Apple, Google, Oracle,
Digital Research, Lotus -- all were dominant players, all were so
accused. Most either backed down, or re-wrote, or re-implemented to
avoid being sued.
Microsoft more than almost anyone, and it only thrived because it
was able to pay other companies off, or simply wait for them to go
broke.
Sometimes, how code works can be deduced simply by studying what it
does. I worked out how Rsync worked because someone asked me to
explain what it did in detail.
Powerquest's PartitionMagic was amazing, black magic tech when it
came out. I didn't review v1 because I did not believe what the
packaging said; when I reviewed v2, a reader wrote in accusing my
employers of doing an elaborate April Fool's joke and pointed out
that my name is an anagram of APRIL VENOM.
(If I ever branch out into fiction, that's my pseudonym.)
Now, the revolutionary functionality of PartitionMagic is just an
option in one screen of some installation programs. It's valueless
now. Once people saw it working, they could work out how it was
done, and then do it, and it ceased to have value.
Very fast emulation is not such a thing. Setting aside sheer
Moore's Law/Dennard scaling brute horsepower, efficient emulation
during the short window of processor architecture transitions is a
massive commercial asset.
Apple has done it 3 times between 4 architectures.
68000 -> PowerPC
PowerPC -> x86
x86-64 -> Arm64
Nobody else has ever done so many.
IBM bought Transitive for QuickTransit, but it's not clear how it
used it. Its major architecture change was IBM i: originally OS/400 on the AS/400, a derivative of the System/36 minicomputer, which it successfully moved to POWER servers. However, there is a
translation layer in the architecture, so it didn't need Transitive
for that.
But IBM has bought many radical tech companies and not used the
tech. E.g. Rembo, an amazing Linux-based boot-time network-boot
fleet deployment tool it never really commercialised.
Microsoft bought Connectix for VirtualPC, kept the disk formats and
management UI and threw away everything else, because Intel and AMD
bundled the core virtualisation tech.
I know a little of the binary translation tech because the man who
wrote it flew across the Atlantic for me to interview him.
All thrown away, but today, it's valueless anyway.
At the time, though, very valuable.
cwzwarich wrote 1 day ago:
This is me! Didn't expect to see this on here, but I'm looking
forward to working with everyone else at the Lean FRO and the wider
Lean community to help make Lean even better.
My background is in mathematics and I've had an interest in
interactive theorem provers since before I was ever a professional
software engineer, so it's a bit of a dream come true to be able to
pursue this full-time.
yieldcrv wrote 20 hours 59 min ago:
Out of curiosity, if you've been at a FAANG since at least 2009, have you ever retired or taken a "sabbatical" for a year or two, since you would have made enough money to retire and live passively at amounts similar to your annual compensation, taxed way better?
Just curious how those decisions have formed; it's totally fine if FAANG, or specifically Apple, was fulfilling for you. I also wonder if it's financial fear to an irrational extent, just because I see that on Blind a lot.
cwzwarich wrote 20 hours 17 min ago:
I did go to Mozilla Research to work on Servo/Rust for a bit in
2015, which didn't turn out to be the best decision.
I always assumed that I would stick around at Apple until some
singular event that would motivate me to quit, and that would be
it. I have been so lucky at Apple to have been in the right place
at the right time for several projects: relatively early iPhone
team, original iPad team, involved in the GCC -> Clang transition,
involved in the 64-bit ARM transition, involved in early Apple
Watch development, first engineer working full-time on the Apple
silicon transition for the Mac, etc. Obviously I was doing
something right if I kept getting these chances, but if I went to
another FAANG I wouldn't have the same history, and I don't think it would be the same experience.
My projected path to parting ways with Apple didn't really take place, since I'm now working at a non-profit dedicated to
developing an interactive theorem prover and left Apple without any
animosity in either direction.
markus_zhang wrote 17 hours 21 min ago:
Wow that's really a ton of memorable experience. I hope you write
a book or some blog posts or do an interview.
grecy wrote 19 hours 9 min ago:
It would be incredible if you could write a book someday about
all those experiences. I would very happily buy that book.
Thanks for all that incredible work and your insights here.
lenkite wrote 22 hours 5 min ago:
Can Lean do what TLA+ does - model-check thorny concurrency problems?
mattgreenrocks wrote 1 day ago:
Rosetta 2 is easily one of the most technically impressive things
I've seen in my life. I've done some fairly intense work applying
binary translation (DynamoRIO) and Rosetta 2 still feels totally
magical to me.
cwzwarich wrote 19 hours 55 min ago:
Thanks. It means a lot coming from someone with experience in our
niche field.
markus_zhang wrote 1 day ago:
Thank you! My work laptop is an M4 MacBook Pro, so I really appreciate the beauty of Rosetta. Thank you for the effort!
I just checked your LinkedIn and realized you have been at Apple since 2009 (with a one-year detour to Mozilla). You also graduated from
Waterloo as a Pure Math Graduate student (I absolutely love Waterloo,
the best Math/CS school IMO in my country - at the age of 40+ I'd go
without doubt if they accept me).
May I ask, what is the path that led you to the Rosetta 2 project?
I even checked your graduate paper: ( [1] ), but it doesn't look like
it's related to compiler theory.
(I myself studied Mathematics back in the day, but I was not a good
student and I studied Statistics, which I joked was NOT part of Mathematics, so I didn't take any serious Algebra classes and
understand nothing of your paper)
URI [1]: https://uwspace.uwaterloo.ca/items/4bc518ca-a846-43ce-92f0-8...
cwzwarich wrote 1 day ago:
> May I ask, what is the path that led you to the Rosetta 2 project?
The member of senior management who was best poised to suggest who
should work on it already knew me and thought I would be the best
choice. Getting opportunities in large companies is a combination
of nurturing relationships and luck.
computerdork wrote 21 hours 29 min ago:
Btw, followup question, and don't take this the wrong way at all,
but what is impressive is someone with a mathematical background
worked on something that seems to be one of the pinnacles of
software engineering: a translator working at the binary level
that creates executables interacting directly with the OS. Did
you also double major in CS back in school? Or did you pick up the
knowledge afterwards? Yeah, it seems like a long list: operating
systems, compilers, computer architecture, UNIX/MacOS
systems-calls and internals...
... not to mention all the performance considerations and
optimizations, also requiring a strong sense of algorithms and
computational complexity. Wow!
Yeah, seems like most mathematicians (and physicists) I know who
go into tech don't get past learning a couple of programming
languages and don't have an interest in learning the depths of how a computer works. Very impressive!
cwzwarich wrote 21 hours 12 min ago:
I had an interest in programming at an early age. My dad would
always bring home the old computer magazines from the IT
department at work and I would pore over them. I got a bit
obsessed with MIT AI Lab myths in books like Levy's Hackers.
In a stroke of luck, I found a copy of SICP at the local
bookstore in middle school and kept struggling through it.
I originally wasn't going to go to university, but my parents
suggested I go for CS. I transferred into Pure Math in my first
term after the intro Java programming course asked us to
implement tic-tac-toe without using arrays.
Basically all of the low-level programming and systems stuff
was learned on the job, but it helped that my first job at
Apple was working on WebKit's interpreter (and later JIT),
coming out of a Google Summer of Code doing the same thing. One
of my coworkers on that project was an alumnus of the original
Rosetta from Transitive, and he later ended up managing the
group doing the transition to Apple silicon on the SWE side (I
was part of HW Technologies). An interesting example of how
things loop back in the industry.
computerdork wrote 16 hours 20 min ago:
Don't think I have ever heard of the SICP book as my school
split all these different subjects it covers into different
courses. But looked it up and wow, it goes into a lot of
advanced subjects! (recursion, trees, Big O and computational
complexity, Data abstractions, Objects, and ends with
compilers). That is some survey and it's amazing you were
able to work through this in middle school - I was just
learning Algebra and the history of the early Americans back
then - Incredible
It now makes sense that you were able to work on Rosetta 2 mainly on your own!
markus_zhang wrote 16 hours 34 min ago:
I can relate regarding introductory CS courses. I switched to Statistics after the first course, a Java class, where the prof complained that back in his time no one gave introductory programming classes, as they expected the students to self-teach.
I kinda agreed with him.
markus_zhang wrote 1 day ago:
Thank you for the information! I'm sure your skills are well
trusted.
flkenosad wrote 1 day ago:
Waterloo really is the best CS school in the world.
mixmastamyk wrote 23 hours 6 min ago:
Belgium, London, ABBA, Canada, or San Dimas? Why better than
others?
Insanity wrote 22 hours 25 min ago:
lol, pretty sure it is Waterloo Canada / Ontario. People like
to "idolize" their alma mater.
markus_zhang wrote 1 day ago:
I have never been there; what do you consider to be its speciality compared to, say, MIT and Berkeley?
Me1000 wrote 13 hours 5 min ago:
Didn't go there but worked with a lot of great folks who did.
The main thing I think is that they require their students to
get industry experience before they graduate.
I don't remember the actual requirement, unfortunately.
singularity2001 wrote 1 day ago:
Sorry to hijack the discussion, but do you see any chance of consolidating the theoretical framework of real numbers with practical calculations on floats?
That is, if I prove the correctness of some theorem for real numbers, ideally I would just use that as the algorithm to compute things with floats.
Also, I was shocked to learn that the simple general comparison of (the equality of) two real numbers is not decidable, which is very logical if you think about it but an enormous hindrance for practical applications. Is there any workaround for that?
zozbot234 wrote 1 day ago:
You can use floats to accelerate interval arithmetic (which is
"exact" in the sense of constructive real numbers) but that
requires setting the correct rounding modes, and being aware of
quirks in existing hardware floating point implementations, some of
which may e.g. introduce non-exact outputs in several of the least
significant digits, or even flush "small" (for unclear definitions
of "small", not always restricted to FP-denormal numbers) results
to zero.
Equality is not computable in the general case, but apartness can
be stated exactly. For some practical cases, one may also be able
to prove that two real numbers are indeed equal.
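A minimal sketch of that idea in Python (illustrative only, not any particular library's API): instead of switching the hardware rounding mode, each endpoint is widened outward by one ulp with math.nextafter, so the true real result always stays inside the interval, and apartness of two reals can be certified by disjoint intervals.

    import math

    def widen(lo, hi):
        # Round outward by one ulp in each direction (a conservative
        # stand-in for setting the FPU rounding mode).
        return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

    def iadd(a, b):
        return widen(a[0] + b[0], a[1] + b[1])

    def imul(a, b):
        p = [x * y for x in a for y in b]
        return widen(min(p), max(p))

    def apart(a, b):
        # Semi-decidable: disjoint intervals prove the two reals differ;
        # overlapping intervals prove nothing (refine and retry).
        return a[1] < b[0] or b[1] < a[0]

    third = (0.3333333333333333, 0.33333333333333337)  # interval containing 1/3
    print(iadd(third, third))          # an interval containing 2/3
    print(apart(third, (0.5, 0.5)))    # True: 1/3 is provably apart from 1/2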
cookiengineer wrote 1 day ago:
Do you have book recommendations in regards to disassembly, syscalls,
x86/64 assembler etc?
What do I need to know to be able to build something as advanced as
rosetta?
I am assuming that you reimplemented the syscalls for each host/guest
system as a reliable abstraction layer to test against. But so many
things are way beyond my level of understanding.
Did you build your own assembler debugger? What kind of tools did you
use along the way? Were reversing tools useful at all (like ghidra,
binaryninja etc)?
peterkelly wrote 1 day ago:
"Virtual Machines: Versatile Platforms for Systems and Processes"
by Jim Smith and Ravi Nair is a great book on the topic.
markus_zhang wrote 1 day ago:
Thank you. Other than papers, I think this is one of the rare
books that talk extensively about dynamic recompilation. I was
hoping to learn more about the PPC M68K emulator (early version
interpreter style and later version dynamic recompilation style)
and definitely will read it.
steego wrote 1 day ago:
This is exciting!
Given your experience with Rosetta 2 and your deep understanding of
code translation and optimization, what specific areas in Lean's code generation pipeline do you see as "low-hanging fruit" for
improvement?
Additionally, which unique features or capabilities of Lean do you
find most promising or exciting to leverage in pushing the boundaries
of efficient and high-quality code generation?
croemer wrote 1 day ago:
Were you really _the_ creator of Rosetta 2? How big was the team,
what was your role in it?
Rosetta 2 is amazing, I'm genuinely surprised this is the work of
just one person!
spockz wrote 1 day ago:
It is my experience that it is easier to create good quality things
as an individual than as a team. Especially for the core of a
product. Also look at Asahi.
However, to really finish/polish a product you need a larger group
of people. To get the UI just right, to get the documentation
right, to advocate the product, to support it.
It is easily possible to have 10 people working on the team with only a single core person. Then find someone to act as product manager so that, as the core person, you can focus on the core of the product while still setting the direction, without having to chase all the other work.
It is possible, but not easy to set up in most organisations. You
need a lot of individual credit/authority and/or the business case
needs to be very evident.
cwzwarich wrote 1 day ago:
I was the only person working on it for ~2 years, and I wrote the
majority of the code in the first version that shipped. That said,
I'm definitely glad that I eventually found someone else (and later a whole team) to work on it with me, and it wouldn't have
been as successful without that.
When people think of a binary translator, they usually just think
of the ISA aspects, as opposed to the complicated interactions with
the OS etc. that can consume just as much (or even more)
engineering effort overall.
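To give a feel for that OS-side work, here is a hypothetical sketch in Python (not Rosetta 2's actual design; the syscall numbers and handlers are made up): every guest system call has to be intercepted and its numbering, argument layout, and error conventions mapped onto the host's, one case at a time.

    import os

    class HostOS:
        # Each handler is where ABI differences get fixed up.
        def write(self, fd, data):
            return os.write(fd, data)
        def exit(self, code):
            raise SystemExit(code)

    GUEST_SYSCALLS = {1: "write", 60: "exit"}  # illustrative guest numbering

    def emulate_syscall(host, number, args):
        name = GUEST_SYSCALLS.get(number)
        if name is None:
            # Every unhandled call is another piece of engineering work.
            raise NotImplementedError(f"guest syscall {number} not translated")
        return getattr(host, name)(*args)

    emulate_syscall(HostOS(), 1, (1, b"hello from the guest\n"))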
zmb_ wrote 1 day ago:
As someone frustrated in a team of 10+ that is struggling to ship
even seemingly trivial things due to processes and overheads and
inefficiencies, I would really appreciate some insights on how do
you organize the work to allow a single developer to achieve
this.
How do you communicate with the rest of the organization? What is
the lifecycle and release process like? Do you write requirements
and specs for others (like validation or integration) to base
their work on? Basically, what does the day to day work look
like?
cwzwarich wrote 23 hours 59 min ago:
Well, the first thing to realize about scaling codebases with
developers is that an N developer team will usually produce a
codebase that requires N developers to maintain. So by starting
small and staying small until you reach a certain critical mass
of fundamental decisions, you can avoid some of the problems
that you get from having too many developers too early. You can
easily also fall into the reverse trap: a historical core with
pieces that fit too well together, but most of the developers
on the team don't intuitively understand the reasons behind all of the past decisions (because they weren't there when
they happened). This can lead to poorly affixed additions to a
system in response to new features or requirements.
As far as Rosetta in particular was concerned, I think I was
just in the right environment to consistently be in a flow
state. I have had fleeting moments of depression upon the
realization that I will probably never be this productive for
an extended period of time ever again.
markus_zhang wrote 17 hours 29 min ago:
Thanks for sharing. Do you have an estimate of the LoC? I know it's a BS indicator, but I'm just curious. I'd imagine it's something difficult but not too large.
janderson215 wrote 20 hours 19 min ago:
Thank you for what you did with Rosetta 2. It is outstanding.
On your last point, I've felt something like that myself and I hold onto hope that it isn't true for myself (and now
for you in your future endeavors). But even if it is true,
you achieved something superhuman in your niche and the vast
majority of people throughout the history of time have no
idea what that is like. Tasting Heaven cannot last too long
while on Earth. Maybe AI will bring us a little bit closer to
that Heaven.
tonyedgecombe wrote 1 day ago:
>How do you communicate with the rest of the organization?
I wonder if Apple's renowned secrecy is a help with this. If
nobody outside your small team knows what you are doing then it
is hard for them to stick their oar in.
ladberg wrote 1 day ago:
For the record I was interning on Cameron's team while he
worked on Rosetta 2 and didn't even know myself what he
worked on (the rest of the team and I were working on
something else). I only found out later after it was
released!
iwontberude wrote 23 hours 32 min ago:
Apple is like this, I have seen plenty of instances where
you have one person carrying a team of 5 or more on their
back. I always wonder how they manage to compensate them
when it's clear they are getting 10x more done. Hopefully they get paid 10x, but something tells me that isn't
true.
tonyedgecombe wrote 5 hours 44 min ago:
When I was consulting I saw that everywhere. A team of
ten people would have one or two primary contributors and
often one person who had a negative impact on
productivity.
markus_zhang wrote 17 hours 28 min ago:
Maybe getting interesting work is a better perk than $$,
especially when Apple is already paying top dollars?
I'd imagine a lot of people are willing to do things for
free.
sitkack wrote 1 day ago:
That is fascinating that this amazing system was the work of
largely one person. You mentioned that interacting with the OS
was super difficult. What were the most enjoyable aspects of
building Rosetta?
porphyra wrote 1 day ago:
I am also amazed that this was the work of largely one person.
Having seamless and performant Rosetta 2 was a major factor why
the Apple transition from Intel to Apple Silicon was viable in
the first place!
archagon wrote 1 day ago:
It's a shame that Apple's stated intent is to throw the project
away after a while. Personally, I really hope it sticks around
forever, though I'm not optimistic.
wtallis wrote 1 day ago:
Rosetta 2 can't go away until Apple is ready to also retire
Game Porting Toolkit. At most, they might drop support for
running regular x86 macOS applications while keeping it around
for Linux VMs and Windows applications, but that would be
pretty weird.
nubinetwork wrote 1 day ago:
> game porting toolkit
I don't understand why Apple even bothers these days, I
wouldn't be surprised if Apple's gaming market is a quarter
of what the Linux gaming market currently is (thanks to Valve
and their work on proton and by extension wine)...
archagon wrote 19 hours 17 min ago:
I suspect this was a project spearheaded by some clever
geeks deep in the company and promoted upwards by
management. Not a top-down initiative.
drexlspivey wrote 1 day ago:
Because people want to use their fancy new hardware to play
games? Linux market share wouldn't be increasing so fast if Valve hadn't done the work, so why shouldn't Apple do the same?
nubinetwork wrote 1 day ago:
> so why shouldn't Apple do the same?
If Apple truly cared, they would stop blocking older
games from being run on newer versions of OSX...
bzzzt wrote 1 day ago:
They don't really block games though. It's more like
they don't want to maintain the roads the games need to
run on.
Transitioning to ARM wouldn't have been possible if they had to support two x86 ABIs and an extra 32-bit ARM ABI. Throw
in another migration and you have an untestable number
of legacy combinations.
duskwuff wrote 1 day ago:
In principle, the Linux Rosetta binaries should remain usable
well into the future. Even if Apple discontinues support for
Rosetta in their VMs, there's very little (beyond a simple,
easily removed runtime check) preventing them from being used
standalone.
vbezhenar wrote 1 day ago:
AFAIK Linux Rosetta does not work standalone, but uses some channels to exchange x86 and ARM binary code between the Linux guest and the macOS host. The actual translation happens in macOS.
duskwuff wrote 22 hours 39 min ago:
You'd think so, but no. With a patch to remove the
runtime check, Rosetta works on Asahi Linux, with no
macOS kernel present at all.
saagarjha wrote 1 day ago:
The kernel could drop support.
K7PJP wrote 1 day ago:
Where did Apple state that Rosetta 2 was to be deprecated?
larusso wrote 1 day ago:
I think they're assuming from the past that this will happen. When Apple moved from PowerPC to x86 there was Rosetta 1. It
got deprecated as well.
GeekyBear wrote 1 day ago:
The first Rosetta was based on licensed technology, used at
a time when Apple was still pinching pennies.
It made financial sense to stop paying the licensing fee to
include it in each new version of the OS as quickly as
possible.
There is no financial incentive to remove the current
version of Rosetta, since it was developed in-house.
larusso wrote 21 hours 55 min ago:
Thanks, didn't know that.
GeekyBear wrote 19 hours 30 min ago:
It was interesting tech, licensed by Silicon Graphics,
Apple, IBM, and Sun. IBM ended up buying out the
company that brought it to market.
URI [1]: https://www.wikipedia.org/wiki/QuickTransit
skissane wrote 1 day ago:
I think it is different this time. A lot of developers use
Rosetta 2 for Linux to run x86-64 Linux Docker containers
under macOS (including me). They'll be upset if Apple
discontinues Rosetta 2 for Linux. By contrast, once the
PPC-to-Intel transition was under way, Rosetta was only
used for running old software, and as time went by that
software became increasingly outdated and use of it
declined. While I think Rosetta 2 for macOS usage will
likely decline over time too, I think Rosetta 2 for Linux
usage is going to be much more stable and Apple will likely
maintain it for a lot longer. Maybe if we eventually see a
gradual migration of data centres from x86-64 to ARM,
Rosetta 2 for Linux usage might begin to also decline, and
then Apple may actually kill it. But, if such a migration
happens, it is going to take a decade or more for us to get
there.
larusso wrote 1 day ago:
I just pointed out what happened in the past. I have no
clue if Apple will deprecate it and what reason they put
forward doing so. I personally like the fact that I can
run both arm and x86 binaries. But I think judging Apple
that if they donât have a personal reason to support
Linux (they also use it for their services) they will
remove it. But deprecated donsât mean it will be
removed anytime soon. Apple keeps APIs and frameworks as
long as they donât interfere with something else.
CodesInChaos wrote 1 day ago:
What's the advantage of running x86-64 Linux Docker
containers over running ARM Linux Docker containers?
Aren't most distributions and packages available for
both platforms?
rickette wrote 19 hours 7 min ago:
Lots and lots of Docker images are only linux/amd64
compatible. Without Rosetta 2 I wouldn't be able to do
my job, especially in a team with a mix of Mac and
Linux workstations and most images being built as linux/amd64 only.
0x0 wrote 23 hours 49 min ago:
Microsoft SQL Server is only available as an x86-64
docker container binary. They actually had a native(?)
arm64 docker container under the name "azure-sql-edge",
which was (and still is) super useful as you can run it
"natively" in an arm64 qemu linux for example, but alas
that version was not long lived, as Microsoft decided
to stop developing it again, which feels like a huge
step backwards. [1] There's probably other
closed-source linux software being distributed as
amd64-only binaries (rosetta 2 for linux VMs isn't
limited to docker containers).
URI [1]: https://techcommunity.microsoft.com/blog/sqlse...
spockz wrote 1 day ago:
Some images are still only available for amd64, like Oracle databases. Even if there is an arm64 image of a recent version of the app, it may not exist for older versions
that you want to test against.
xienze wrote 1 day ago:
The advantage is the fact that they exist. Not every
Docker container is built for multiple platforms.
croemer wrote 1 day ago:
That's super impressive. I remember being astonished that the x86
executable of Python running through Rosetta 2 on my M1 was just
a factor of 2 slower than the native version.
QEMU was something like a factor of 5-10x slower than native,
IIRC.
lostmsu wrote 1 day ago:
QEMU probably had to account for differences in memory models.
A fork with that stuff removed might be able to easily catch
up.
bonzini wrote 1 day ago:
QEMU loses a bit from being a generic translator instead of
being specialized for x86->ARM like Rosetta 2, Box64 or
FEXEmu. It does a lot of spilling for example even though x86
has a lot fewer registers than aarch64.
Flags are also tricky, though they're pretty well optimized.
In the end the main issue with them is also the spilling, but
QEMU's generic architecture makes it expensive to handle
consecutive jump instructions for example.
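A toy illustration of the flags point in Python (not QEMU's actual code): a generic emulator may have to materialize the x86 flag bits in software after each arithmetic instruction, whereas an x86-to-ARM translator can often map them straight onto the host's NZCV flags.

    # Emulating the flag side effects of an 8-bit x86 ADD in software.
    def add8_with_flags(a, b):
        result = (a + b) & 0xFF
        flags = {
            "CF": int(a + b > 0xFF),                    # carry out of bit 7
            "ZF": int(result == 0),                     # result is zero
            "SF": result >> 7,                          # sign bit
            "OF": ((a ^ result) & (b ^ result)) >> 7,   # signed overflow
            "PF": int(bin(result).count("1") % 2 == 0), # even parity of low byte
        }
        return result, flags

    print(add8_with_flags(0x7F, 0x01))  # 0x80 with SF=1 and OF=1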
croemer wrote 1 day ago:
I found this blog post reverse engineering Rosetta 2
translated code:
URI [1]: https://dougallj.wordpress.com/2022/11/09/why-is-r...
bonzini wrote 1 day ago:
Interesting. Yeah, being able to use Arm flags always is
probably a big thing, since they even added hardware
support for that.
It's a huge achievement for a single person to have
written most of that.
RandomThoughts3 wrote 1 day ago:
> It's a huge achievement for a single person to have
written most of that.
Qemu was mostly Fabrice Bellard by himself at the
beginning and plenty of emulators are single person
project.
It's a field which lends itself well to single-person development. How to properly architect a compiler/interpreter/emulator has been studied to death, and everyone mostly uses the same core principles, so there is little guesswork as to how to start (provided
you have taken the time to study the field). If you are
ready to do the work, you can reach a working
translator from hard work alone. Then, the interesting
work of optimising it starts.
Don't get me wrong, Rosetta 2 is a very impressive achievement because the performance is really good. I tip my metaphorical hat to whoever did it. My post is more in the spirit of: you can do something in the same ballpark too, if that's your kick.
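As a concrete (toy) example of those core principles, here is a minimal sketch in Python of the classic dynamic-translation loop: translate each guest basic block once, cache the result keyed by guest PC, and keep executing out of the cache. The miniature ISA and names here are made up purely for illustration.

    # Toy guest program: acc = 3; while acc != 0: acc -= 1
    GUEST_PROGRAM = [
        ("set", 3),      # 0
        ("dec", None),   # 1
        ("jnz", 1),      # 2: jump to 1 if acc != 0
        ("halt", None),  # 3
    ]

    def translate_block(pc):
        # "Translate" the straight-line run starting at pc into a host
        # function (here just a Python closure); a real translator would
        # emit native code instead.
        ops = []
        while True:
            op, arg = GUEST_PROGRAM[pc]
            ops.append((op, arg))
            pc += 1
            if op in ("jnz", "halt"):
                break
        def block(state, ops=ops, fallthrough=pc):
            for op, arg in ops:
                if op == "set":
                    state["acc"] = arg
                elif op == "dec":
                    state["acc"] -= 1
                elif op == "jnz":
                    return arg if state["acc"] != 0 else fallthrough
                elif op == "halt":
                    return None
            return fallthrough
        return block

    def run():
        cache, state, pc = {}, {"acc": 0}, 0
        while pc is not None:
            if pc not in cache:
                cache[pc] = translate_block(pc)  # translate once, reuse forever
            pc = cache[pc](state)
        return state

    print(run())  # {'acc': 0}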
adamnemecek wrote 1 day ago:
What was the tipping point that made you want to work on Lean?
cwzwarich wrote 1 day ago:
I don't think there was a single tipping point, just a growing
accumulation of factors:
- the release of Lean 4 slightly over a year ago, which impressed
me both as a proof assistant and a programming language
- the rapid progress in formalization of mathematics from 2017
onward, almost all of which was happening in Lean
- the growing relevance of formal reasoning in the wake of
improvements in AI
- seeing Lean's potential (a lot of which is not yet realized) for
SW verification (especially of SW itself written in Lean)
- the establishment of the Lean FRO at the right time, intersecting
all of the above
a1o wrote 1 day ago:
How does Lean compare with Coq? (I am not familiar with Lean but
am familiar with Coq)
denotational wrote 1 day ago:
Mario Carneiro's MS thesis has a good overview of the type
theory and how it compares to Coq:
URI [1]: https://github.com/digama0/lean-type-theory/releases/d...
a1o wrote 17 hours 26 min ago:
Thank you!
brcmthrowaway wrote 1 day ago:
Surprised you didn't go into something AI-adjacent
fantod wrote 1 day ago:
I don't know what his reasons are but it makes sense to me. Yes,
there are incredible results coming out of the AI world but the
methods aren't necessarily that interesting (i.e. intellectually
stimulating) and it can be frustrating working in a field with this
much noise.
uoaei wrote 1 day ago:
I don't want to come across as too harsh but having studied
machine learning since 2015 I find the most recent crop of people
excited about working on AI are deep in Dunning-Kruger. I think I
conflate this a bit with the fascination of results over process
(I suppose that befuddlement is what led me to physics over
engineering) but working in ML research for so long it's hard to
gin up a perspective that these things are actually
teleologically useful, and not just randomly good enough most of
the time to keep up the illusion.
deet wrote 1 day ago:
I feel that way sometimes too.
But then I think about how maddeningly unpredictable human
thought and perception is, with phenomena like optical
illusions, cognitive biases, and limited working memory. Yet it still produces incredibly powerful results.
Not saying ML is anywhere near humans yet, despite all the
recent advances, but perhaps a fully explainable AI system,
with precise logic, 100% predictable, isn't actually needed to get most of what we need out of AI. And given the "analog" nature of the universe, maybe it's not even possible to have something perfect.
hn_throwaway_99 wrote 1 day ago:
> But then I think about how maddeningly unpredictable human
thought and perception is, with phenomena like optical
illusions, cognitive biases, a limited working memory.
I agree with your general point (I think), but I think that
"unpredictable" is really the wrong word here. Optical
illusions, cognitive biases and limited working memory are
mostly extremely predictable, and make perfect sense if you
look at the role that evolution played in developing the
human mind. E.g. many optical illusions are due to the fact
that the brain needs to recreate a 3-D model from a 2-D
image, and it has to do this by doing what is statistically
most likely in the world we live in (or, really, the world of
African savannahs where humans first evolved and walked
upright). This, it's possible to "tricks" this system by
creating a 2D image from a 3D set of objects that is
statistically unlikely in the natural world.
FWIW Stephen Pinker's book "How the Mind Works" has a lot of
good examples of optical illusions and cognitive biases and
the theorized evolutionary bases for these things.
croemer wrote 1 day ago:
What do you mean by "things that are actually teleologically
useful"?
Fellow physicist here by the way
uoaei wrote 1 day ago:
Like useful in an intentional way: purpose-built and achieves
success via accurate, parsimonious models. The telos here
being the stated goal of a structurally sound agent that can
emulate a human being, as opposed to the accidental,
max-entropy implementations we have today.
bobxmax wrote 20 hours 34 min ago:
Sounds like an arbitrary telos, especially in a world where
one of the most useful inventions in human existence has
been turning dead dinosaurs into flying metal containers to
transport ourselves great distances in.
uoaei wrote 10 hours 39 min ago:
Every goal is equally arbitrary, I'm speaking to the
assumed ideology of the AI fanatics.
croemer wrote 1 day ago:
I see, so humans are also not usefully intelligent in an
intentional way, because they also follow the 2nd law of
thermodynamics and maximize entropy and aren't
deterministic?
throw646577 wrote 1 day ago:
Pure, refined "but humans also".
croemer wrote 47 min ago:
What do you mean by "Pure, refined"?
You're right that "but humans also" is better than my
"and humans also"
calf wrote 1 day ago:
Is a guide dog teleologically useful?
jrflowers wrote 1 day ago:
Not if youâre taste testing ceviche
Baeocystin wrote 1 day ago:
Not OP, but I'm assuming he means that they are maddeningly
black-boxy, if you want to know how the sausage is made.
adamnemecek wrote 1 day ago:
Lean is AI adjacent.
saagarjha wrote 1 day ago:
Only because the AI people find it interesting. It's not really
AI in itself.
trenchgun wrote 1 day ago:
It's not ML, but it is AI
zozbot234 wrote 1 day ago:
Proof automation definitely counts as AI. Not all AI is based
on machine learning or statistical methods, GOFAI is a thing
too.
fspeech wrote 1 day ago:
If you want to have superhuman performance like AlphaZero
series you need a verifier (valuation network) to tell you if
you are on the right track. Lean (proof checker) in general can
act as a trusted critic.
ofrzeta wrote 1 day ago:
They do have AI on their roadmap, though:
URI [1]: https://lean-fro.org/about/roadmap-y2/
bobxmax wrote 20 hours 33 min ago:
Seems more like applying Lean to AI development, no?
ofrzeta wrote 18 hours 22 min ago:
Partially, I guess, but also: "We will seek to provide
tooling, data, and other support that enables AI
organizations and researchers to advance Lean's
contribution at the intersection of AI, math, and science."
mkl wrote 1 day ago:
It's not AI in itself, but it's one of the best possibilities
for enabling AI systems to generate mathematical proofs that
can be automatically verified to be correct, which is needed at
the scale they can potentially operate.
Of course it has many non-AI uses too.
cwzwarich wrote 1 day ago:
If you're interested in applications of AI to mathematics, you're faced with the problem of what to do when the ratio of
plausible proofs to humans that can check them radically
changes. There are definitely some in the AI world who feel
that the existing highly social construct of informal
mathematical proof will remain intact, just with humans
replaced by agents, but amongst mathematicians there is a
growing realization that formalization is the best way to deal
with this epistemological crisis.
It helps that work done in Lean (on Mathlib and other
developments) is reaching an inflection point just as these
questions become practically relevant from AI.
brcmthrowaway wrote 1 day ago:
What is Lean FRO?
yairchu wrote 5 hours 4 min ago:
Lean is a currently-niche programming language / proof-assistant.
A proof assistant is basically a tool to construct mathematical
proofs, which verifies that the proofs are correct like how a
compiler verifies types in your programs.
IIUC a regular programming language with a certain set of restrictions already doubles as a proof assistant, as discovered by Curry and Howard. By restrictions, I mean something like how Rust
forces you to follow certain rules compared to Java.
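For a flavor of what that looks like, here is a tiny Lean 4 example (the theorem is just for show): the checker accepts the file only if the proof actually establishes the stated proposition, much as a compiler accepts a program only if it type-checks.

    -- A toy theorem: addition of natural numbers is commutative.
    theorem toy_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- A wrong proof of the same statement would be rejected at check time,
    -- the same way an ill-typed program is rejected by a compiler.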
thih9 wrote 1 day ago:
Background about the organization: [1] Their proof assistant /
programming language:
URI [1]: https://en.m.wikipedia.org/wiki/Convergent_Research
URI [2]: https://en.m.wikipedia.org/wiki/Lean_(proof_assistant)
bagels wrote 1 day ago:
"we aim to tackle the challenges of scalability, usability, and proof
(Mathematics) automation in the Lean proof assistant."
threeseed wrote 1 day ago:
URI [1]: https://lean-lang.org/lean4/doc/
jemmyw wrote 1 day ago:
There are a lot of broken links in the docs. Like most of the
feature links.
kmill wrote 1 day ago:
There's a completely new language reference in the process of
being written: [1] (by David Thrane Christiansen, co-author of
The Little Typer, and Lean FRO member)
Some links here seem to be broken at the moment (and David's currently on vacation, so they likely won't be fixed until January), but if you see for example [2] it's supposed to be [1]
basic-types/strin...
URI [1]: https://lean-lang.org/doc/reference/latest/
URI [2]: https://lean-lang.org/basic-types/strings/
URI [3]: https://lean-lang.org/doc/reference/latest/basic-types/s...
croemer wrote 1 day ago:
That answers the Lean part, FRO stands for Focused Research
Organization
swat535 wrote 19 hours 10 min ago:
That doesn't say much... Research on what? It looks like Lean is a programming language, but everything else is pretty abstract to me.
cwzwarich wrote 1 day ago:
URI [1]: https://lean-fro.org/about/
hinkley wrote 22 hours 57 min ago:
Yeah, that really doesn't help.
lproven wrote 5 hours 41 min ago:
FWIW, not having done a proof since about 1982, it doesn't help
me either.
lambdas wrote 15 hours 56 min ago:
"We aim to tackle the challenges of scalability, usability, and proof automation in the Lean proof assistant." [1] Yep. Truly a mystery.
URI [1]: https://lean-lang.org/