_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI WorldGrow: Generating Infinite 3D World
deterministic wrote 17 hours 8 min ago:
The problem is not generating worlds. The problem is generating
interesting worlds.
tantalor wrote 1 day ago:
Oh great, it's the Severance simulator.
kittikitti wrote 1 day ago:
This is great and really cool! Thank you for sharing.
theknarf wrote 1 day ago:
Did they reinvent "wave function collapse" ( [1] )?
URI [1]: https://github.com/mxgmn/WaveFunctionCollapse
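For readers unfamiliar with WFC: it generates a grid by keeping a set of candidate tiles per cell, repeatedly collapsing the most constrained cell, and propagating adjacency rules to neighbours. A minimal sketch of that idea, with a made-up toy tileset (the tile names and rules are illustrative, not from the linked repo):

```python
import random

# Toy wave-function-collapse sketch: each cell starts as the full set of
# tiles; we collapse the most constrained cell and propagate adjacency
# constraints until every cell is decided.
TILES = {"land", "coast", "sea"}
# Which tiles may sit next to each other (symmetric).
ALLOWED = {
    "land": {"land", "coast"},
    "coast": {"land", "coast", "sea"},
    "sea": {"coast", "sea"},
}

def collapse(width, height, rng=random):
    grid = [[set(TILES) for _ in range(width)] for _ in range(height)]
    while True:
        # Find the undecided cell with the fewest remaining options.
        open_cells = [(len(grid[y][x]), y, x)
                      for y in range(height) for x in range(width)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            return [[next(iter(c)) for c in row] for row in grid]
        _, y, x = min(open_cells)
        grid[y][x] = {rng.choice(sorted(grid[y][x]))}
        # Propagate: a neighbour may only keep tiles compatible with at
        # least one tile still possible in the adjacent cell.
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if 0 <= ny < height and 0 <= nx < width:
                    ok = {t for t in grid[ny][nx]
                          if any(t in ALLOWED[s] for s in grid[cy][cx])}
                    if ok != grid[ny][nx]:
                        grid[ny][nx] = ok
                        stack.append((ny, nx))

for row in collapse(8, 8, random.Random(1)):
    print(" ".join(t[0] for t in row))
```

This toy ruleset cannot reach a contradiction (because "coast" is compatible with everything); real WFC implementations also need backtracking or restarts when a cell's option set empties.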
mock-possum wrote 3 hours 52 min ago:
Whoa this is awesome, thanks for the link!
fallat wrote 1 day ago:
No. WFC is fundamentally different from this.
swiftcoder wrote 1 day ago:
Indeed, but it does serve more or less the same purpose in procgen
pipelines (and folks have tweaked WFC for infinite worlds
before[1]).
URI [1]: https://www.youtube.com/watch?v=7ffT_8wViBA
ivanjermakov wrote 1 day ago:
WFC?
nikolay wrote 19 hours 32 min ago:
WFC = Wave Function Collapse
splintercell wrote 1 day ago:
Is it just me, or are some of the places it generates just not
realistic? Like a small dead-space area of some kind, with a giant
window looking into it.
hobofan wrote 1 day ago:
Yeah, I think either the method doesn't work well or there is
something off with their tuning.
Their block-by-block generation seems too local in its
considerations: each 3x3 section (i.e. the ones generated from the
immediate neighbors) looks a lot more coherent than the 4x4 sections
and above. I think it might need to be extended to be less local, and
it might also need to be paired with some sort of guidance system
(e.g. one that, in the office example, would generate the overall
floor layout).
icoder wrote 1 day ago:
It's not just you. The generated stuff - in my opinion - doesn't make
any sense at all, with regard to structure or meaning. Unless,
perhaps, the aim was to generate some kind of badly designed Ikea
store.
oniony wrote 1 day ago:
Not only not realistic but also not explicit: not so much as a peachy
bottom in sight.
keyle wrote 1 day ago:
This is cool. And could be fun in games. Not sure I get the point
otherwise... The thought that came to mind was "Architectural slop".
flohofwoe wrote 1 day ago:
Games have used procedural world generation since at least the 1980s,
even on 8-bit home computers. Glancing through the video and webpage,
the results don't look much different from what's possible with
traditional Wave Function Collapse tbh.
fjfaase wrote 1 day ago:
I wonder if they also have a strategy for deleting generate tiles,
otherwise the infinite is limited to the size of available memory. I
also wonder if with their method can exactly recreate tiles that have
been deleted. Or in other words, that they have a method for generating
unique seeds for all tiles. The paper does not give much technical
details. If the seed has a limited size and there is a method for
generating seeds for each 2D coordinate, I wonder if it is possible to
make a non-repeating infinite world. I think it is not possible with a
limited size seed.
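The standard trick for regenerating deleted tiles is to derive each tile's seed by hashing the world seed together with the tile's coordinates; then a tile can be evicted from memory and later rebuilt bit-identically. A sketch of that general technique (illustrative only; the paper doesn't say whether it does anything like this):

```python
import hashlib
import random

WORLD_SEED = 42  # assumed global seed for this example

def tile_seed(x, y):
    """Deterministic 64-bit seed for the tile at integer coords (x, y)."""
    key = f"{WORLD_SEED}:{x}:{y}".encode()
    return int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "big")

def generate_tile(x, y, size=4):
    """Generate a toy tile: a size x size grid of heights in [0, 9]."""
    rng = random.Random(tile_seed(x, y))
    return [[rng.randint(0, 9) for _ in range(size)] for _ in range(size)]

# The same coordinates always reproduce the same tile, so a cache can
# drop far-away tiles without losing anything:
assert generate_tile(3, -7) == generate_tile(3, -7)
```

On the non-repeating question: a 64-bit seed admits at most 2^64 distinct tiles, so individual tiles must eventually repeat, though the overall arrangement across the infinite plane need not be periodic.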
gcr wrote 1 day ago:
This could be a great way to make backrooms horror environments!
I've dreamed of a NeRF-powered backrooms walking simulator for quite a
while now. This approach is "worse" because the mesh seems explicit
rather than just the world becoming what you look at, but that's
arguably better for real-world use cases of course.
endymion-light wrote 1 day ago:
i'm thinking a new version of LSD dream emulator could be really
interesting.
grumbelbart2 wrote 1 day ago:
> backrooms horror environments
True, it sounds (and looks) a lot like
URI [1]: https://scp-wiki.wikidot.com/scp-3008
gpderetta wrote 20 hours 34 min ago:
That SCP was literally the first thing that came to my mind when
looking at the intro video!
Garlef wrote 1 day ago:
I don't think generating virtual space is the issue.
It's about generating interesting virtual space!
otikik wrote 1 day ago:
Indeed, this has been described in the past as "The Oatmeal Problem"
[1]
URI [1]: https://www.challies.com/articles/no-mans-sky-and-10000-bowl...
xwiz wrote 1 day ago:
Kate Compton's GDC talk:
URI [1]: https://www.gdcvault.com/play/1024213/Practical-Procedural...
zparky wrote 1 day ago:
Important to note that article was written 9 years ago and NMS has
received numerous content updates since. There's a lot more to the
game now.
nsxwolf wrote 1 day ago:
There is, but the procedural generation part is not what makes it
fun to me. It's what you create and how you choose to "live" in
the game. It really is like the real universe - isotropic, the
same in all directions - it only takes a few hours to be
overwhelmed by how pointless it all seems, knowing there's an
infinity of anything you discover elsewhere.
Once you build a base or create some goal for yourself, it
becomes interesting.
keyle wrote 1 day ago:
You reminded me of this [1]. And Valve, I think, used to have a series
on level design, working from big to small with "anchor points", but
I seem to have misplaced the link.
URI [1]: https://book.leveldesignbook.com/process/layout
analog8374 wrote 1 day ago:
Consider the levels generated in any roguelike.
Consider the patterns generated by cellular automata.
Both tend to stay interesting in the small scale but lose it to
boring chaos in the large.
For this reason I think the better approach is to start with a simple
level-scale form and then refine it into smaller parts, and then to
refine those parts and so on.
(Vs plugging away at tunnel-building like a mole)
nonethewiser wrote 1 day ago:
>Both tend to stay interesting in the small scale but lose it to
boring chaos in the large.
I think that's a good way to put it. I started writing a reply
before reading your comment entirely and arrived at basically the
same conclusion as this but more verbosely:
> For this reason I think the better approach is to start with a
simple level-scale form and then refine it into smaller parts, and
then to refine those parts and so on.
It seems hard to get away from having some sort of overarching
goal, and then constantly looking back at it, at progressively
smaller levels. Like: what is the universe of the thing you are
generating randomly? Is it a dungeon in a roguelike? Is it meant to
be one of many floors? Or is it a space inside a building? Is it a
house? Is it an office? Is the office a standalone building or a
skyscraper?
Perhaps a good algorithm would start big and go small:
- assume the universe to generate is a world
- pick a location and assign stuff to generate; let's say it's a city
- pick a type of city thing to generate; let's say it's a skyscraper
- etc., going smaller and smaller
- look at the city so far; pick another type of city thing to
generate based on what has been generated so far
- look at the world so far; pick another type of thing to generate
Or maybe instead of looking back you could pre-divide into zones.
But then, if you want to make an entire universe (as in multiple
worlds), you either just make random worlds, which leads back to
your original problem (boring chaos at large scale), or you go up
another level and generate more intelligently.
Point being, you need some sort of top-down perspective on it.
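The "start big and go small" steps above can be sketched as recursive subdivision: fix the top-level form first (here, one rectangular floor), then split it into smaller regions until they are room-sized, so large-scale structure is decided before small-scale detail. All names and parameters here are illustrative:

```python
import random

def subdivide(x, y, w, h, rng, min_size=4):
    """Recursively split a w x h region; return a list of room rects."""
    # Stop when the region can't be split into two halves >= min_size.
    if w < 2 * min_size and h < 2 * min_size:
        return [(x, y, w, h)]
    # Split along the longer axis so rooms stay roughly square.
    if w >= h:
        cut = rng.randint(min_size, w - min_size)
        return (subdivide(x, y, cut, h, rng, min_size) +
                subdivide(x + cut, y, w - cut, h, rng, min_size))
    else:
        cut = rng.randint(min_size, h - min_size)
        return (subdivide(x, y, w, cut, rng, min_size) +
                subdivide(x, y + cut, w, h - cut, rng, min_size))

rooms = subdivide(0, 0, 32, 24, random.Random(0))
print(len(rooms), "rooms; total area:", sum(w * h for _, _, w, h in rooms))
```

Because every split sees the region it lives in, the result partitions the floor exactly (total room area equals the floor area), which is the kind of global coherence that purely local, tile-at-a-time growth struggles to guarantee.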
analog8374 wrote 1 day ago:
Here are 2 graphical examples of that strategy [1]
URI [1]: http://fleen.org
URI [2]: https://www.flickr.com/photos/jonathanmccabe/albums/7215...
anthk wrote 1 day ago:
Nethack/Slashem and DCSS, maybe.
The levels are made to fit in an 80x24 terminal, with maybe
a max of 7 or 8 - can't remember - rooms per level.
The worlds from Cataclysm DDA:Bright Nights are pretty regular,
and you have an overworld, labs, subways...
rootlocus wrote 1 day ago:
Or at least coherent.
jpalomaki wrote 1 day ago:
> "The generated scenes are walkable and suitable for
navigation/planning evaluation."
Maybe the idea is to create environments for AI robotics training.
james-bcn wrote 1 day ago:
Yep. People have been doing this kind of stuff for computer games for
decades. It's actually not that difficult. It's not clear what novel
problem is being solved here.
agravier wrote 1 day ago:
Do you have some particular piece of software or tech demo or game
in mind with interesting very large generated 3D worlds?
NBJack wrote 1 day ago:
Age of Empires got me into tinkering with content generation. The
flexible map rules were fantastic in making this possible.
Minecraft is of course the poster child for very large worlds of
interest these days.
Dwarf Fortress crafts an entire continent complete with a
multi-century history, the results of which you can explore
freely in adventure mode.
Most of the recent examples of 3D worlds like the post tend to do
it through wave function collapse.
omnibrain wrote 1 day ago:
> Minecraft is of course the poster child for very large worlds
of interest these days.
Minecraft used to create very interesting worlds until they
changed the algorithm and the landscapes became plain and
boring. It took them about 10 years, until the Caves and Cliffs
update, to make the world generation interesting again.
antonvdi wrote 1 day ago:
Minecraft surely fits those criteria.
SiempreViernes wrote 1 day ago:
In Mario 64 there is a staircase you can run up forever, granted
it looks the same no matter how long you have Mario run up the
stairs, but that certainly fits "big but uninteresting 3d world."
bogwog wrote 1 day ago:
> big but uninteresting 3d world.
I know 'interesting' is subjective, but your comment is
demonstrably false. Just type "mario 64 staircase" into
youtube, and look at the hundreds (thousands? millions?) of
videos and many millions of views.
f17428d27584 wrote 1 day ago:
People are interested in it as a form of trivia. It is
extremely uninteresting from the perspective of the player
and more importantly how the word was actually used, which
was in reference to the quality of world generation.
Redefining "interesting" just so you can provide a
completely irrelevant "correction" is bad faith trolling.
bogwog wrote 1 day ago:
Not sure why you're so defensive about this. I'm not
trolling. Whether something is interesting or not is
subjective, which is my point. You might think you know why
that staircase is interesting to people (it's just trivia),
but that's just your opinion. This is a tech community, so
you're obviously unimpressed by the technology used to make
it, but most people don't care about that at all.
There's no secret formula to culture. Some programmers and
AI people seem to think there is some magic AI model that
will be able to produce cultural hits at the click of a
button. If you're a boring person, you're not likely to
"get" why something is interesting, or why that part can't
just be automated away. No technology can help with that.
sirtaj wrote 1 day ago:
Valheim and No Man's Sky are ones I've played recently.
jsheard wrote 1 day ago:
Yeah but those traditional procgen techniques don't use AI, and
this one does use AI. They solved the problem of them not being AI
enough for the AI era. AI!
embedding-shape wrote 1 day ago:
It is only a paper as of now:
> The code is being prepared for public release; pretrained weights and
full training/inference pipelines are planned.
Any ideas of how it would differ from and improve on
"traditional" PCG? Seems like it'd give you more resource consumption,
worse results, and less control, none of which is a benefit.
tantalor wrote 1 day ago:
An unpublished paper.
glenneroo wrote 1 day ago:
The description in the linked YouTube video for some reason has more
info than the github repo:
> We tackle the challenge of generating the infinitely extendable 3D
world - large, continuous environments with coherent geometry and
realistic appearance. Existing methods face key challenges:
2D-lifting approaches suffer from geometric and appearance
inconsistencies across views, 3D implicit representations are hard to
scale up, and current 3D foundation models are mostly object-centric,
limiting their applicability to scene-level generation. Our key
insight is leveraging strong generation priors from pre-trained 3D
models for structured scene block generation. To this end, we propose
WorldGrow, a hierarchical framework for unbounded 3D scene synthesis.
Our method features three core components: (1) a data curation
pipeline that extracts high-quality scene blocks for training, making
the 3D structured latent representations suitable for scene
generation; (2) a 3D block inpainting mechanism that enables
context-aware scene extension; and (3) a coarse-to-fine generation
strategy that ensures both global layout plausibility and local
geometric/textural fidelity. Evaluated on the large-scale 3D-FRONT
dataset, WorldGrow achieves SOTA performance in geometry
reconstruction, while uniquely supporting infinite scene generation
with photorealistic and structurally consistent outputs. These
results highlight its capability for constructing large-scale virtual
environments and potential for building future world models.
embedding-shape wrote 1 day ago:
That seems to compare against other similar "generating infinite 3D
worlds" approaches, but not against traditional PCG, which would give
you all of that plus higher quality, better performance, and
more/better control.
jackdoe wrote 1 day ago:
cant wait for the new diablo :)
pjmlp wrote 1 day ago:
With a quarter the size of the development team, 'cause productivity!
speedgoose wrote 1 day ago:
It looks more like the Stanley parable.
DIR <- back to front page