_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   JEP 483: Ahead-of-Time Class Loading and Linking
       
       
        s6af7ygt wrote 1 hour 10 min ago:
        I'm a dunce
       
          dtech wrote 1 hour 8 min ago:
          Read the article, this doesn't reduce JIT capabilities at all.
       
        fulafel wrote 3 hours 10 min ago:
        What does this mean for Clojure? At least loading the Clojure runtime
        should benefit, but what about app code loading.
       
          diggan wrote 1 hour 12 min ago:
           I feel like for the Clojure applications where you need really
           fast startup, like tiny CLI utilities that don't do a lot of
           work, the improvement would be too marginal to matter much. The
           example they use in the JEP seems to have gone from a ~4 second
           startup to ~2 seconds, which for a tiny CLI would still seem
           pretty slow.
          You're better off trying to use Babashka, ClojureScript or any of the
          other solutions that give a fast startup.
          
           And for bigger applications (like web services and the like),
           you don't really care whether it takes 5 seconds or 10 seconds
           to start; you only restart the server during deployment anyway,
           so why would startup time matter so much?
       
            dtech wrote 1 hour 6 min ago:
            The 4 second application is a web server. They also give a basic
            example starting in 0.031s, fine for a CLI.
            
            One of the use cases for startup time is AWS lambda and similar.
       
              diggan wrote 36 min ago:
              > The 4 second application is a web server. They also give a
              basic example starting in 0.031s, fine for a CLI.
              
              Sure, my comment was more about the relative improvement. In the
              case of the 0.031s example (which is the number without the
              improvement), it gets down to 0.018s with this new AOT class
              loading. What value do you get from something starting in 0.018s
              instead of 0.031s? The difference is so marginal for that
              particular use case.
              
              > One of the use cases for startup time is AWS lambda and
              similar.
              
               I suppose that's one use case where it does make sense to
               really focus on startup times. But again, I'd rather use
               something where fast startup already exists (Babashka,
               ClojureScript) instead of having to add yet another build
               step to the process.
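               For context, the extra build step the JEP describes is a
               training run plus a cache-creation step; a rough sketch of
               the workflow (the jar, class, and file names here are
               illustrative):

```shell
# Training run: record which classes the application loads and links
# (flags per JEP 483; app.jar and com.example.App are illustrative).
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -cp app.jar com.example.App

# Create the AOT cache from the recorded configuration.
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp app.jar

# Production run: start with the cache for faster loading and linking.
java -XX:AOTCache=app.aot -cp app.jar com.example.App
```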
       
              bobnamob wrote 51 min ago:
               Prebuilding a cache through a training run will be
               difficult between lambda invocations, though, and
               snapstart [1] already "solves" a lot of the issues a class
               cache might address.
              
              Of course, I wouldn't be surprised if the boffins at lambda add
              some integration between snapstart and class caching once their
              leadership can get it funded
              
   URI        [1]: https://docs.aws.amazon.com/lambda/latest/dg/snapstart.h...
       
          funcDropShadow wrote 1 hour 53 min ago:
          It should benefit, if namespaces are AOT-compiled by Clojure.
       
        o11c wrote 8 hours 37 min ago:
        The concern that jumps out at me is: what about flags that affect code
        generation? Some are tied to the subarch (e.g. "does this amd64 have
        avx2?" - relevant if the cache is backed up and restored to a slightly
        different machine, or sometimes even if it reboots with a different
         kernel config), others to java's own flags (do compressed
         pointers affect codegen? disabling intrinsics?).
       
          lxgr wrote 8 hours 24 min ago:
          I don’t see any mention that code is actually going to be stored in
          a JITted form, so possibly it’s just architecture-independent
          loading and linking data being cached?
       
            MBCook wrote 7 hours 30 min ago:
            My impression from reading this was it was about knowing which
            classes reference which other classes when and which jars
            everything is in.
            
            So I think you’re right.
            
            So a bit more linker style optimization than compiler related
            caching stuff.
       
              brabel wrote 3 hours 56 min ago:
              The JEP explains what this does:
              
              "The AOT cache builds upon CDS by not only reading and parsing
              class files ahead-of-time but also loading and linking them."
              
              While CDS (which has been available for years now) only caches a
              parsed form of the class files that got loaded by the
              application, the AOT cache will also "load and link" the classes.
              
               The ClassLoader.loadClass method docs explain what loading
               means: [1]
               
               1. find the class (usually by looking at the file index of
               the jar, which is just a zip archive, but ClassLoaders can
               implement this in many ways).
               
               2. link the class, which is done by the resolveClass
               method: [2] and explained in the Java Language
               Specification: [3] "Three different activities are
               involved in linking: verification, preparation, and
               resolution of symbolic references."
              
              Hence, I assume the AOT cache will somehow keep even symbolic
              references between classes, which is quite interesting.
              
   URI        [1]: https://docs.oracle.com/en/java/javase/21/docs/api/java....
   URI        [2]: https://docs.oracle.com/en/java/javase/21/docs/api/java....
   URI        [3]: https://docs.oracle.com/javase/specs/jls/se21/html/jls-1...
       
        layer8 wrote 9 hours 35 min ago:
        > [example hello world] program runs in 0.031 seconds on JDK 23. After
        doing the small amount of additional work required to create an AOT
         cache it runs in 0.018 seconds on JDK NN — an improvement of 42%.
        The AOT cache occupies 11.4 megabytes.
        
         That’s not immediately convincing that it will be worth it. It’s
         a start, I guess.
       
          dgfitz wrote 6 hours 46 min ago:
          How so?
          
           RAM is almost free if you’re not on embedded, and while
           embedded can run Java, it isn’t common.
       
            imtringued wrote 38 min ago:
            RAM might be inexpensive, but this hasn't stopped cloud providers
            from being stingy with RAM and price gouging.
            
            At current RAM prices you'd expect the smallest instances to have
            2GB, yet they still charge $4/month for 512MB, which isn't enough
            to run the average JVM web server.
       
            pdpi wrote 5 hours 51 min ago:
            That’s not an in-memory cache either. AIUI it’s storing those
            artefacts to disk
       
              lmz wrote 3 hours 14 min ago:
              Container sizes may be affected though.
       
                bobnamob wrote 47 min ago:
                So you're now weighing the increased container pull time (due
                to size) vs the class load time you're saving through the
                cache.
                
                It's nice to at least have the option of making that tradeoff
                
                (And I suspect for plenty of applications, the class cache will
                be worth more time than (an also probably cached) image pull)
       
        foolfoolz wrote 9 hours 38 min ago:
         i’m curious if any of this was inspired by aws lambda snapstart
       
          layer8 wrote 9 hours 26 min ago:
          Maybe read the History section.
       
        petesoper wrote 10 hours 18 min ago:
        Sweet!
       
       
   DIR <- back to front page