_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Infinigen
       
       
        fireant wrote 10 hours 36 min ago:
        It looks great, but I'm missing what's innovative about this? AAA
         procedural foliage has been done for 20 years, terrain too. Blender
        has had procedural geo nodes for a long time too. What is so
        interesting here?
       
        w_for_wumbo wrote 16 hours 21 min ago:
        This feels like we've got all the pieces of the puzzle to create a
        reality experience - I'm pretty sure with visuals like this and haptic
        feedback that your brain will fill in any gaps once you adapt to this
        given enough time.
        
        You could use this with a VR headset, monitoring heart-beat,
        temperature and adapt the environment based off the experiencer's
        desires.
        
        It feels like we're on the precipice of recreating an experience of
        reality, which may reveal more about our existing reality than we ever
        expected.
       
        szundi wrote 18 hours 30 min ago:
         These don’t seem to be photorealistic images by today’s
         standards.
       
        ginko wrote 19 hours 12 min ago:
        Isn't their logo just the (old) Visual Studio logo?
       
        culi wrote 20 hours 39 min ago:
        If only demoscenes were still as prominent today as they used to be.
        They'd be having a field day with this
       
          tandr wrote 12 hours 16 min ago:
           Maybe this IS the demoscene of today? I've seen more insanely
           beautiful computer-generated pictures in the last couple of years
           than in the previous ten, AI or no AI.
       
        vdjskshi wrote 22 hours 9 min ago:
        An LLM frontend for this would be amazing.
        
         You could describe the scene you want in plain English and iterate
         with conversation.
        
        It could automatically create scenes from novels and poems.
       
          aDyslecticCrow wrote 18 hours 55 min ago:
          The whole point of this is to generate diverse training data with
          accurate labels for model training. If you actually want to create
          nice scenes, use normal blender and some plugins and free online
          assets.
       
          lylabs wrote 21 hours 13 min ago:
          true, but that's very easy to do as long as the LLM can create the
          code, which shouldn't be difficult... idk blender's code lang used (i
          know godot is very python-like), but most code is nowadays some form
          of cpp (or it is cpp), so with a few-shot it might be even possible
          to get a cpp trained llm gen the correct code
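For what it's worth, Blender's embedded scripting language is Python, exposed through the bpy module, so an LLM frontend would be generating Python rather than C++. A minimal sketch of the kind of scene-scatter helper such generated code might contain; the function name and parameters are hypothetical, and the bpy calls are shown only in comments since they require running inside Blender:

```python
import random

def scatter_positions(n, extent, seed=0):
    """Generate n random (x, y, z) ground positions inside a
    square of side 2*extent, deterministic for a given seed."""
    rng = random.Random(seed)
    return [
        (rng.uniform(-extent, extent), rng.uniform(-extent, extent), 0.0)
        for _ in range(n)
    ]

# Inside Blender, these positions could drive actual object creation:
#   import bpy
#   for pos in scatter_positions(20, 10.0):
#       bpy.ops.mesh.primitive_ico_sphere_add(location=pos)
```

Seeding the generator keeps the scene reproducible across runs, which matters if the scene is meant to serve as labeled training data.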
       
        amarcheschi wrote 1 day ago:
         It feels like L-systems on mega steroids, cool
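For readers unfamiliar with the reference: an L-system (Lindenmayer system) grows strings by repeatedly applying rewrite rules in parallel, and is a classic basis for procedural plants. A minimal sketch in Python, using Lindenmayer's original algae example:

```python
def lsystem(axiom, rules, iterations):
    """Rewrite every symbol of the string in parallel, once per
    iteration; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # ABAABABA
```

In graphical use, each symbol of the final string is interpreted as a turtle-graphics command (draw, turn, push/pop state), which is how branching plant shapes emerge from a handful of rules.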
       
        nailer wrote 1 day ago:
        I was very confused by “math rules only” - as opposed to what? But
        it seems they don’t think an LLM is maths. Which it absolutely is.
       
          dcuthbertson wrote 23 hours 29 min ago:
          While mathematics are necessary to build LLMs, they are not a kind of
          math or a distinct branch of mathematics.
       
            nailer wrote 20 hours 4 min ago:
            Yes I’d agree.
       
        dualogy wrote 1 day ago:
        Also discussed 30 days ago, 33 comments:
        
   URI  [1]: https://news.ycombinator.com/item?id=42485423
       
        DidYaWipe wrote 1 day ago:
        Is what?
       
          qwertox wrote 1 day ago:
          "Infinigen is a procedural generator of 3D scenes".
       
            DidYaWipe wrote 17 hours 25 min ago:
            That's not in the title, where it belongs.
            
            Look at all the sibling posts that are done right:
            
            TabBoo – add random jumpscares to websites you're trying to avoid
            
            Stratoshark, a sibling application to Wireshark
            
            Hunyuan3D 2.0 – High-Resolution 3D Assets Generation
            
            Flame: A small language model for spreadsheet formulas
            
            JReleaser: quick and effortless way to release your project
            
            The infantile downvoting of anyone who calls out useless titles
            just plays into HN's rep. Knock it off.
       
        draven wrote 1 day ago:
        Watching the video I thought "No Man's Sky as a python lib."
       
          amlib wrote 22 hours 37 min ago:
          Also terragen but for everything
       
            darknavi wrote 17 hours 0 min ago:
            I miss terragen! What a fun way to waste an afternoon on a rainy
            day as a kid.
            
            I just looked it up and WOW it has come a long way (and wasn't it
            free before?  Maybe I misremember).
       
              tinco wrote 15 hours 24 min ago:
               I think it has always had a trial version. Back in the early
               2000s my friend and I would mess around with the settings, hit
               render, and then go upstairs to play with Lego or something for
               a couple of hours, and when it was done it would just be fancy
               terrain without details like grass, trees, or even rivers I
               think.
       
        saikatsg wrote 1 day ago:
        Source:
        
   URI  [1]: https://github.com/princeton-vl/infinigen
       
        janalsncm wrote 1 day ago:
        This seems extremely cool. I’m wondering if it can be used to create
        procedural video game assets.
       
          ANewFormation wrote 19 hours 59 min ago:
           This looks extremely similar to something that Unreal already
           supports natively. Here is a demo video from them:
          
   URI    [1]: https://youtube.com/watch?v=8tBNZhuWMac
       
          kannonboy wrote 1 day ago:
          From the homepage it sounds like they've prioritised geometry
          fidelity for CV research rather than performance:
          
          > Infinigen is optimized for computer vision research, particularly
          3D vision. Infinigen does not use bump/normal-maps,
          full-transparency, or other techniques which fake geometric detail.
          All fine details of geometry from Infinigen are real, ensuring
          accurate 3D ground truth.
          
          So I suspect the assets wouldn't be particularly optimised for video
          games. Perhaps a good starting point though!
       
            ghfhghg wrote 19 hours 17 min ago:
            That's actually a fairly ideal fit for nanite meshes.
       
            cma wrote 21 hours 50 min ago:
            I doubt they prioritized it.  To get normal maps you usually first
            need a high resolution mesh, but then need other steps to get good
            decimation for lods and normal bake. That's mostly extra work, not
            alternative work that wasn't prioritized.  If by transparency they
            mean faking aggregates, you also need full geo there before
            sampling and baking down into planes or some other impostor
            technique.
       
            jeffhuys wrote 1 day ago:
            Well, we've come a long way. Look at nanite - it might actually be
            compatible...
       
              mfost wrote 22 hours 32 min ago:
               Epic did say that with Nanite you might in some situations
               forgo normal maps and save disk space even though you have
               super-detailed models, so it DOES fit in this context.
               
               Also, video games typically take a high-poly model and bake a
               normal map corresponding to it onto a lower-poly model anyway,
               so it might also be used that way. I think Doom 3 was the
               first game to show the technique?
       
                ghfhghg wrote 19 hours 15 min ago:
                 With Nanite, normal maps are less necessary because the
                 detail is preserved in the mesh. You could make the
                 argument that micro-detail normal maps are still useful,
                 but those aren't always generated from the mesh, especially
                 if they are tiling.
       
        feverzsj wrote 1 day ago:
        I like the "zero AI" part.
       
        markisus wrote 1 day ago:
        This project generates synthetic computer vision training data. The
        arxiv paper has more detail including some cool pictures of random
        creatures it can generate. The images are nice but all of them are
        nature settings so I assume one would have to supplement this type of
        data with another data set for training a computer vision model.
       
          kannonboy wrote 1 day ago:
          The same authors also created Infinigen Indoors[1] to generate indoor
          scenes for computer vision applications such as robotics & AR.
          
   URI    [1]: https://arxiv.org/abs/2406.11824
       
       
   DIR <- back to front page