Hacker News on Gopher (unofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI Doom crash after 2.5 years of real-world runtime confirmed on real hardware

shadowgovt wrote 5 hours 11 min ago:
"I hope someone got fired for that blunder." /s

cestith wrote 6 hours 25 min ago:
Once upon a time, Windows NT 4 had a similar bug. Their counter was high precision, though, and was for uptime of the system. Back before Service Pack 3 (or was it SP2?) we had a scheduled task reboot the system on the first of the month. Otherwise it would crash after about 42 days of uptime, because apparently nobody at Microsoft tested their own server OS to run for that long.

EbNar wrote 6 hours 29 min ago:
Love the look of that board :-)

patchtopic wrote 6 hours 42 min ago:
I haven't opened my DOOM software box, it's still in the shrinkwrap. I guess I can take it back and ask for a refund now?

glitchc wrote 6 hours 45 min ago:
I love the post, but your blurry text is hurting my eyes. Looks like it's intentionally blurry but I can't figure out why. This can't be a holdover from older systems, they had razor-sharp text rendering on CRTs.

jraph wrote 5 hours 38 min ago:
Looks crisp on my setup, but I block fonts and scripts. Reader mode is your friend :-)

qiine wrote 7 hours 14 min ago:
In games I worked on I use time to pan textures for animated FX. After a few hours precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem. Yet this keeps bothering me...

kwertyoowiyop wrote 8 hours 20 min ago:
CNR. Please attach video.

otikik wrote 10 hours 53 min ago:
Quick! John Carmack needs to be brought into this immediately.

piker wrote 12 hours 13 min ago:
Props again to the id team. No doubt something like that engineered by most folks today would have died long before the 2 year mark due to memory fragmentation if not outright leaks.

shultays wrote 12 hours 19 min ago:
Does that hardware trap overflows or something? I had read an article about how DOOM's engine works and noticed how a variable for tracking the demo kept being incremented even after the next demo started. This variable was compared with a second one storing its previous value. Doesn't sound like something that would crash; I wonder what the actual crash was.

Sharlin wrote 9 hours 51 min ago:
Signed overflow is undefined behavior in C, so pretty much anything could happen. Though this crash seems to be deterministic between platforms and compilers, so probably not about that. TFA says the variable is being compared to its previous value, and that comparison presumably assumes new < old cannot happen. And when it does, it could easily lead to e.g. stack corruption. C after all happily goes to UB land if, for example, some execution path doesn't return a value in a function that's supposed to return a value.

account42 wrote 6 hours 21 min ago:
Just because the language standard allows for anything to happen doesn't mean that actually anything can happen with real compilers. It's still a good question to think about how it could actually lead to a crash.

Sharlin wrote 2 hours 44 min ago:
That's what I said?
It's easy to come up with scenarios where signed overflow breaks a program in a crashy way if the optimizer, for example, optimizes out a check for said overflow because it's allowed to assume that `++i < 0` can never happen if i is initialized to >= 0. That's something that very real optimizers take advantage of in the very real world, not just on paper. For example, GCC needs -fwrapv to give you guaranteed wrapping behavior (there's actually -ftrapv which raises a SIGFPE on overflow - that's likely the easiest way to cause this crash!) But I specifically said that it doesn't look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What's almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.

phkahler wrote 8 hours 46 min ago:
That doesn't make sense. If new < old can't happen there is no need to make a comparison. Stack corruption? Nah, it's a counter, not an index or pointer, or it would fail sooner. But then what is the failure? IDK

Sharlin wrote 7 hours 23 min ago:
Um, there are the cases new == old and new > old. And all the more specific cases new == old + n. I haven't seen the code so this is just speculation, but there are plenty of ways an unexpected, "can never happen" comparison result causes immediate UB because there's no execution path to handle it, causing garbage to be returned from a function (and if that garbage was supposed to be a pointer, well...) or even execution never hitting a `ret` and just proceeding to execute whatever is next in memory. Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing. int foo[5] = { ... }; foo[i % 5] = bar; Everything is fine as long as i isn't negative. But if it is... (note that negative % positive == negative in C)

account42 wrote 6 hours 17 min ago:
Dividing by a difference that is suddenly zero is another possibility.

ogurechny wrote 5 hours 11 min ago:
The error states that the window can't be created. It might be a problem with parameters to the window creation function (that should not depend on game state), or maybe the system is out of memory. Resources allocated in memory are never cleaned up because cleanup time overflows? Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window? There are various 32-bit integer counters in Doom code. I find it quite strange that the author neither names the specific one, nor what it does, nor tries to debug what happens by simply initialising it with some big value. Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, the Doom gameplay tick (or "tic"), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release) and too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow. Would be hilarious if it really is such an easy mistake.
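[Editor's note: the back-of-the-envelope arithmetic in the comment above is easy to check mechanically. The sketch below only illustrates that arithmetic; the post does not show the actual counter or its update rate in the Doom4CE port, so the rates used here are assumptions taken from this thread.]

    #include <stdio.h>

    /* How long a 32-bit counter incremented at a fixed rate lasts.
       2^32 is the full wrap of an unsigned counter; 2^31 is where a
       signed counter first flips negative. */
    static void report(const char *label, double rate_hz, double limit)
    {
        double seconds = limit / rate_hz;
        printf("%-32s %8.1f days (%.2f years)\n",
               label, seconds / 86400.0, seconds / (86400.0 * 365.25));
    }

    int main(void)
    {
        const double full_wrap = 4294967296.0;  /* 2^32 */
        const double sign_flip = 2147483648.0;  /* 2^31 */

        report("60 Hz frames, unsigned wrap",  60.0,   full_wrap); /* ~2.3 years */
        report("70.086 Hz VGA, unsigned wrap", 70.086, full_wrap); /* ~1.9 years */
        report("35 Hz tics, unsigned wrap",    35.0,   full_wrap); /* ~3.9 years */
        report("35 Hz tics, signed sign flip", 35.0,   sign_flip); /* ~1.9 years */
        return 0;
    }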
BearOso wrote 3 hours 16 min ago:
The VGA 320x200 mode, either 13h or "Mode Y", ran at 70.086 Hz, so that adding up to ~2.5 years is just coincidental. It's a shame the source code for Doom isn't available, and that the author couldn't just link directly to a specific line in a gitweb repository. /s

jraph wrote 8 hours 14 min ago:
Assuming new > old doesn't mean you actually make the comparison, but rather that the code is written with the belief that new > old. This code behaves correctly under this assumption, but might be doing something very bad that leads to a crash if new < old. An actual analysis would be needed to understand the actual cause of the crash.

johnjames87 wrote 12 hours 28 min ago:
Literally unplayable

Zobat wrote 13 hours 36 min ago:
This is a level of testing that exceeds what the testers I know commit to. I myself was annoyed the five or so times yesterday we had to sit and wait to check the error handling after a 30 second timeout in the system I work on.

moomin wrote 13 hours 41 min ago:
Literally unplayable.

ustad wrote 14 hours 11 min ago:
Was this specific to the PDA port or the core Doom code? @ID_AA_Carmack Are you going to write a patch to fix this?

jraph wrote 14 hours 49 min ago:
Notably, DOOM crashed before Windows CE.

chatmasta wrote 2 hours 20 min ago:
Seriously... I'm most impressed that this PDA kept an application running for 2.5 years. I'd be shocked if any modern hardware could do this, even while disconnected from the Internet.

jraph wrote 46 min ago:
I'd be more impressed by current software not crashing for 2.5 years than hardware, but that might be because I'm a software developer, not a hardware developer :-)

wingi wrote 12 hours 55 min ago:
Yes, great achievement!

DeathArrow wrote 15 hours 55 min ago:
It's good it didn't take a billion years to overflow. That would have been quite a long wait.

jeffrallen wrote 15 hours 59 min ago:
This headline gave me a heart attack... I misread the site's name as Lenovo, and as I'm responsible for a whole lot of their servers running for years in a critical role... heart attack. Maybe I need my morning coffee. :)

minki_the_avali wrote 14 hours 10 min ago:
I mean I wouldn't mind getting a subdomain there but I do like lenowo more :3

ZsoltT wrote 16 hours 59 min ago:
glitchless?

spjt wrote 17 hours 44 min ago:
Just be glad you knew what the bug was before you started. After 2.5 years... "Shit, I forgot to enable debug logging"

casey2 wrote 18 hours 47 min ago:
Has this ever come up in a TAS of custom levels?

LorenDB wrote 18 hours 50 min ago:
Since we've hugged the site to death, have an archive.org link: [1] Sadly it appears that archive.org didn't capture all of the site formatting, but at least the text is there.
URI [1]: https://web.archive.org/web/20250916234009/https://lenowo.org/...

Insanity wrote 19 hours 8 min ago:
Literally unplayable, someone should fix that. Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn't do it for me.

pizza234 wrote 11 hours 2 min ago:
I'm under the impression that since Doom Eternal (the first after Doom 2016), the gameplay has considerably shifted to an "interconnected arenas" style, and with more sophisticated combat mechanics. Many games have started adopting this design, for example, Shadow Warrior 3. I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.
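[Editor's note: to make the "code assumes new > old" failure mode discussed further up more concrete, here is a minimal sketch. It is not the Doom4CE code (the post does not show it) and the names are invented; it only illustrates Sharlin's point that a counter which has wrapped negative turns a "can never happen" value into an out-of-bounds array index, because negative % positive is negative in C.]

    #include <stdio.h>

    #define PALETTE_SIZE 5                      /* hypothetical lookup table */
    static int palette[PALETTE_SIZE] = { 10, 20, 30, 40, 50 };

    /* Written under the assumption that 'tic' only ever counts up from 0. */
    static int colour_for(int tic)
    {
        int idx = tic % PALETTE_SIZE;           /* negative once tic has wrapped */
        printf("tic=%d -> index %d\n", tic, idx);
        if (idx < 0)                            /* the check such code typically lacks */
            return -1;
        return palette[idx];                    /* palette[-2] would be out-of-bounds UB */
    }

    int main(void)
    {
        colour_for(123);                        /* fine: index 3 */
        colour_for(-2147483647);                /* wrapped counter: index -2 */
        return 0;
    }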
billyp-rva wrote 6 hours 32 min ago:
The enemy cap all but forces the arena-style gameplay. Doom 2016 tried to hide it more, but it still felt very stifling.

jjbinx007 wrote 13 hours 56 min ago:
This caters for people who prefer the classic Doom style of gameplay in FPS games:
URI [1]: https://www.reddit.com/r/boomershooters/

Insanity wrote 7 hours 59 min ago:
Ahh yes, I'm quite happy that a few years ago this became a trend!

jama211 wrote 15 hours 35 min ago:
Same. Something about the metroidvania design with the home hub of the later ones didn't give the same feeling. It should be run, kill, find secrets, end, next level.

lapetitejort wrote 27 min ago:
> find secrets
I'll be honest, I don't like this part. I'm a rabid collector. If the game gives a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault of course.

bombela wrote 5 hours 22 min ago:
The latest DOOM: Dark Ages ditched the home hub. I think it's a really great DOOM game.

Insanity wrote 4 hours 53 min ago:
I was quite excited for it, despite not enjoying Eternal as much. But after about ~2 hours of playing it, I lost interest. I'm happy you're enjoying it, sadly it didn't click for me. Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.

Insanity wrote 5 hours 57 min ago:
This is exactly how I want my FPS games to be. Just linear, run & gun. TBH, I can even do without weapon upgrades or any "RPG" style elements. It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.

jeffwask wrote 6 hours 49 min ago:
I just finished Robocop: Rogue City and it was exactly this: a linear, level-by-level shooter that felt like a pure Robocop power fantasy movie. I played new game plus, it was so much fun, and I never do that. It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula and they are playable in a dozen or so hours. Tight, compact, linear, fun story and gameplay... No MTX or always-online BS and they don't waste my time with busy work.

shpongled wrote 17 hours 9 min ago:
2016 remains one of the greatest single-player FPS games I've played (Titanfall 2 is the other)

bitwize wrote 18 hours 13 min ago:
Fun fact: Doom is now a Microsoft property, along with Quake, StarCraft, WarCraft, Overwatch, all of the adventure games from Infocom and Sierra, and of course Halo. Microsoft pretty much owns most of PC gaming. Which is what they've wanted since 1996 or so.

Novosell wrote 11 hours 34 min ago:
They own Minecraft as well.

kodarna wrote 14 hours 35 min ago:
They own the past of PC gaming, as well as Call of Duty, but that is more popular on consoles than PC nowadays. Those listed are small time compared to Counter-Strike 2, Dota 2, League of Legends, Valorant, Roblox, Apex Legends, Marvel Rivals and a number of hard-hitting games every year such as Witcher 3, Elden Ring, Baldur's Gate 3 etc.

account42 wrote 6 hours 9 min ago:
So in other words they own the part of PC gaming that's actually good.

nurettin wrote 15 hours 50 min ago:
> Microsoft pretty much owns most of PC gaming.
So Valve next?
Lightkey wrote 14 hours 19 min ago: They missed that window when Sierra was still the publisher for Half-Life. Besides, Valve is not a publicly traded company and Gabe Newell as former manager at Microsoft has no interest in getting back together. Valve is betting everything on Linux right now to be more independent from Microsoft. account42 wrote 6 hours 1 min ago: All the more reason for Microsoft to make a play now while Valve still at least somewhat depends on them. And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve. simoncion wrote 6 hours 20 min ago: > Valve is betting everything on Linux right now... They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so. And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does. I'm really gladdened by the effort put in to making this work. [0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup. Insanity wrote 4 hours 43 min ago: I used to game on Linux back in the late 2000s through Wine. And I always found the mouse support to be jarring, even if I could get support to a decent level, for some reason the mouse input was never quite as fluid as it should have been. And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot. Spoom wrote 29 min ago: Stating my bias up front, I've been using Linux since Windows Vista, and I'm a fan. That said, I have experienced the same things you did whenever I needed to run Wine for... well, anything. It was clunky as hell. You should absolutely revisit. Proton has changed the game. Literally the only game I've tried that was remotely difficult to play in SteamOS is Minecraft, likely because Microsoft owns it now. But I was able to get that working too (if anyone's wondering: you want Minecraft Bedrock Launcher, which is in the Discover store if you're on the Steam Deck and here[1] if you're somewhere else; basically it downloads and runs the Android version of Minecraft through a small translation layer, which is essentially identical to the Windows version). Speed also is greatly improved from previous solutions. Games played through Proton are often very close in terms of performance to playing them natively. jerf wrote 3 hours 53 min ago: It has come lightyears. ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: [1] And I find if anything it tends to the conservative. I've encountered a few things where it was overoptimistic but its outweighed by the stuff that was supported even better than ProtonDB said. In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR. 
URI [1]: https://www.protondb.com/profile

lukan wrote 10 hours 12 min ago:
"Valve is betting everything on Linux right now"
Not everything, but they do invest in it.

tomwojcik wrote 14 hours 48 min ago:
As long as Gabe is alive, no way.

account42 wrote 5 hours 59 min ago:
*in control of Valve. Old age can make him give that up before death.

HeckFeck wrote 12 hours 23 min ago:
We must find a way to extend his life indefinitely.

xmonkee wrote 18 hours 55 min ago:
Same. And love those brutality mods.

jbreckmckye wrote 21 hours 6 min ago:
About a year ago I was looking at Crash Bandicoot timer systems and I found that Crash 3 has a constantly incrementing int32. It only resets if you die. Left for 2.26 years, it will overflow. When it does finally overflow, we get "minus" time and the game breaks in funny ways. I did a video about it:
URI [1]: https://youtu.be/f7ZzoyVLu58

teeray wrote 6 hours 5 min ago:
The true Time Twister unlocked

xhrpost wrote 6 hours 53 min ago:
Is it common to default to a signed integer for tracking a timer? I realize being unsigned it would still overflow, but at least you'd get twice the time, no?

jbreckmckye wrote 45 min ago:
Some C programmers take the view that unsigneds have too many disadvantages: undefined behaviour for overflows, and weird type promotion rules. So, they try and avoid uints.

aidenn0 wrote 5 hours 16 min ago:
If you get to right before you need to be (taking as long as you want), then wait until overflow, then you still have 12h to do the last tiny part if it's unsigned.

ThrowawayTestr wrote 17 hours 56 min ago:
Great video, just subscribed

jonhohle wrote 19 hours 35 min ago:
I think many games were that way. SotN definitely has a global timer. On a native 32-bit system it makes sense, especially when the life of a game was a few months to a few years on the retail shelf. No player is going to leave their system running for 2.27 years, so what's the point of even testing it? Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?

rybosome wrote 29 min ago:
It's a totally reasonable choice in that context. I wonder if any sense that this is criticism (or actual criticism) is based on implementers of SaaS who have it so deeply ingrained that "haha what if the users of this software did this really extreme thing" is more like "oh shit what if the users of this software did this really extreme thing". When I worked on Google Cloud Storage, I once shipped a feature that briefly broke single-shot uploads of more than 2GB. I didn't consider this use case because it was so absurd - anything larger than 2MB is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson remained to me that whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don't want to support it indefinitely (which we tried to do, it was hard). Anyway, in this case that level of paranoia would make no sense. The programmers of this age made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.

account42 wrote 6 hours 31 min ago:
For some games the timer is stored in save files so it doesn't even have to be continuous play time. 2 years is still longer than anyone is expected to spend on a game.
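[Editor's note: for readers following the signed-vs-unsigned subthread above, a small self-contained illustration of the C semantics being discussed, with invented variable names. Unsigned arithmetic is defined to wrap modulo 2^N, signed overflow is undefined behaviour, and a mixed signed/unsigned comparison silently converts the signed operand, which is the promotion gotcha mentioned above.]

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Unsigned wrap-around is well defined: UINT32_MAX + 1 == 0. */
        uint32_t u = UINT32_MAX;
        u = u + 1;
        printf("unsigned wrap: %u\n", u);            /* prints 0 */

        /* Signed overflow (INT32_MAX + 1) is undefined behaviour, so it is
           deliberately not performed here; a signed counter that keeps
           incrementing past INT32_MAX has no guaranteed result at all. */

        /* The promotion gotcha: in the comparison below the int converts to
           unsigned, so -1 becomes a huge value and the "obvious" branch is
           never taken. */
        int      elapsed = -1;                       /* a timer that "went backwards" */
        unsigned limit   = 1000;
        if (elapsed < limit)
            printf("this never prints\n");
        else
            printf("-1 compares as %u here\n", (unsigned)elapsed);
        return 0;
    }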
lentil_soup wrote 12 hours 51 min ago:
They're still made like this. Just now I made a frame counter that just increments every frame on an int64. It would eventually wrap around but I doubt anyone will still be around to see it happen :|

technion wrote 12 hours 57 min ago:
Let's say you're pedantic with code. I've been trying to be lately - clippy has an overflow lint for Rust I try to use. Error: game running for two years, rebooting so you can't cheese a timer. Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.

account42 wrote 6 hours 26 min ago:
There are always limits to what a program can do. The only fix is to choose large enough integers (and appropriate units) so that you can represent long enough times / large enough sizes / etc. that anyone could reasonably encounter. What sizes make sense also depends on how they impact performance, and for a game from the 32-bit era, a crash (controlled abort or not) after over two years is probably a better choice than slowing everything down by using a 64-bit integer.

jraph wrote 14 hours 53 min ago:
Isn't this common in the computer game scene? Shouldn't you assume your game will be disassembled, deconstructed, reverse engineered? Although for old games released before the internet was widespread in the general population, it might not have been this obvious.

sim7c00 wrote 12 hours 15 min ago:
As long as it doesn't lead to online cheats, having such code is fine. If someone wants to reverse the game, find an obscure, almost untriggerable bug and then trigger it or play with it. A 2.6 year game session is crazy if it's not a server, and if it's a server, that's still really crazy even for some open-world, open-ended game... it's a long time to keep a server up w/o restarts or anything (updates?). Looking at the various comments, there might even be some kind of weird appeal to leave such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?

lstodd wrote 2 hours 37 min ago:
> if its a server, thats still really crazy even for some open-world open-ended game... its a long time to keep a server up w/o restarts or anything (updates?).
Pretty much doable even without resorting to VM migrations or ksplice. My last one had uptime in the 1700s (days). Basically I leased it, put a Debian on it and that was that until I didn't need it anymore.

Gamemaster1379 wrote 18 hours 12 min ago:
Can be more than timers too. There's a funny one in Paper Mario where a block technically can be hit so many times it'll reset and award items again. Hit enough times it'll eventually crash. Of course it'd take around 30 years for the first rollover and 400 or so for the crash.
URI [1]: https://n64squid.com/paper-mario-reward-block-glitch/

stevage wrote 20 hours 13 min ago:
You really managed to make the whole video without making a single "crash" pun? (Those freezes come close enough that you could call them crashes...)

jsheard wrote 20 hours 49 min ago:
There's a weapon in Final Fantasy 9 which can only be obtained by reaching a lategame area in less than 12 hours of play time, or 10 hours on the PAL version due to an oversight. Alternatively you can just leave the game running for two years until the timer wraps around. Slow and steady wins the race.
URI [1]: https://finalfantasy.fandom.com/wiki/Excalibur_II_(Final_Fan...
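[Editor's note: on the question a few comments up of whether a bug like this can be handled better, one conventional defensive pattern is sketched below, assuming a free-running tick counter such as Doom's 35 Hz tic; the function and variable names are hypothetical. Keeping the counter unsigned and always computing elapsed time by subtraction stays correct across a wrap, because unsigned subtraction is defined modulo 2^32; only comparisons of absolute counter values break.]

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical free-running counter, incremented e.g. 35 times per second. */
    static uint32_t now_tics;

    /* Tics elapsed since 'since'. Correct even after now_tics wraps past zero,
       as long as the measured interval is shorter than the full 2^32 range
       (about 3.9 years at 35 Hz). */
    static uint32_t tics_since(uint32_t since)
    {
        return now_tics - since;   /* unsigned subtraction wraps, by definition */
    }

    int main(void)
    {
        uint32_t started = UINT32_MAX - 2;     /* shortly before the wrap */
        now_tics = 5;                          /* shortly after the wrap */
        printf("%u tics elapsed across the wrap\n", tics_since(started)); /* prints 8 */
        return 0;
    }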
elcritch wrote 7 hours 24 min ago:
We should rally together to force game companies to use 32-bit timers rather than 64-bit ones so we can keep finding these fun little glitches. The time to protect overflows is now! ;)

BolexNOLA wrote 9 hours 1 min ago:
Lord have mercy, Fandom really has become unbearable with the ads and pop-ups.

coldpie wrote 7 hours 42 min ago:
Install an ad blocker.

BolexNOLA wrote 7 hours 30 min ago:
I opened this on an iPhone which has fewer adblock options. Desktop is better locked down. Regardless, I can still complain about how intrusive the ads are.

account42 wrote 6 hours 33 min ago:
Don't accept devices that limit your ad blocker options.

BolexNOLA wrote 6 hours 28 min ago:
Does this discussion strike you as one where I'm deliberating whether or not to chuck my smartphone and buy into a new ecosystem to avoid ads on Fandom? These types of comments are always very unhelpful.

ogurechny wrote 4 hours 24 min ago:
No, that's just a reminder that you had a choice, and chose empty talk about "ecosystems" over the ability to control what you can see on "your" screen. You've stepped on a rake once, you got some experience, why repeat it over and over again?

BolexNOLA wrote 2 hours 37 min ago:
Or another option: we could remember that the ultimate offender here is Fandom. My choice of device is irrelevant when assessing their crappy site.

JustExAWS wrote 7 hours 12 min ago:
I just opened this on my iPhone with 1Blocker installed. I saw no ads. It's been around since iOS 8.

BolexNOLA wrote 5 hours 57 min ago:
Never heard of it, appreciate the rec!
Edit: ah, only works on Safari

mrguyorama wrote 5 hours 34 min ago:
You are on iOS. There is only Safari. Any other "web browser" is just a skin over Safari.

BolexNOLA wrote 4 hours 53 min ago:
Yes, I know everything is wrapped around Safari. But I like having Firefox syncing across devices.
Edit: ah, forgot my VPN was off, usually clears all that up for me. Much better now

coldpie wrote 7 hours 26 min ago:
There are many ad block options on iPhone. I currently use Wipr 2, but in the past I've used both 1Blocker and AdBlock Pro with success.

Gravityloss wrote 9 hours 26 min ago:
Am reminded by this quote from Ferdinand Porsche: "The perfect racing car crosses the finish line first and subsequently falls into its component parts." Games fit this philosophy, compared to many other pieces of software that are expected to be long-lived, receiving a lot of maintenance and changes, and evolving.

aleks224 wrote 3 hours 30 min ago:
There's a quote in the Bible that says something similar: "Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit." (John 12:24)

WJW wrote 8 hours 43 min ago:
The Porsche quote reflects a wider design philosophy that says "Ideally, all components of a system last as long as the design life of the entire system and there should be no component that lives significantly longer. If there is such a component, it has been overengineered and thus the system will be more expensive to the end consumer than it needs to be." It kinda skips over maintenance, but overall most people find it unobjectionable when stated like this. But plenty of people will find complaints when they try to drive their car beyond its design specs and more or less everything starts failing at once.
creaturemachine wrote 6 hours 32 min ago:
Porsche was talking about racing, where the primary focus is reaching the finish line faster than anyone else, and over-engineering can easily get in the way of that goal. Back in the real world, no race team would agree that their cars should disintegrate after one race.

AzN1337c0d3r wrote 5 hours 58 min ago:
> Back in the real world, no race team would agree that their cars should disintegrate after one race.
Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?

pfdietz wrote 3 hours 50 min ago:
Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.

kllrnohj wrote 4 hours 52 min ago:
Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines. You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.

jperras wrote 5 hours 24 min ago:
If you go back further than that, teams used to destroy entire engines for a single qualifying. The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it. They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap. After which the engine was basically unusable, and so they'd put in a new one for the race.

gnatolf wrote 37 min ago:
Current examples would be drag racing cars that have motors that are designed and used in a way that they only survive for about 800 total revolutions.

creaturemachine wrote 5 hours 25 min ago:
Yup, cigarette money enabled all kinds of shenanigans. Engine swaps for qualification, new engines every race, spare third cars, it goes on. 2004 was the first year that specified engines must last the entire race weekend and introduced penalties for swaps.

lostlogin wrote 3 hours 33 min ago:
> cigarette money enabled all kinds of shenanigans.
It still does. New Zealand has a crop of tobacco-funded politicians.

lawlessone wrote 2 hours 52 min ago:
> New Zealand has a crop of tobacco funded politicians.
When they leave politics do they just rapidly age and dissolve like that guy in the Indiana Jones film?

TylerE wrote 5 hours 8 min ago:
F1 income is way, way higher than in the 80s.

ortusdux wrote 6 hours 43 min ago:
Anyone can build a bridge, but it takes an engineer to barely build a bridge.

The_Fox wrote 2 hours 12 min ago:
This is a great quote for the topic, but the quote is normally about a bridge that barely stands. I'm chuckling at the thought of barely building something. (All in good fun, thank you.)

mikepurvis wrote 5 hours 32 min ago:
Alan Weisman's lovely book The World Without Us speculates a bit about this, basically saying that more recently built structures would be the first to collapse because they've all been engineered so close to the line. Meanwhile, stuff that has already been standing for 100+ years like the Brooklyn Bridge will probably still be there in another 100 years even without any maintenance, just on account of how overbuilt it all had to be in an era before finite element analysis.
ortusdux wrote 5 hours 0 min ago:
There was an aluminum extrusion company that falsified test records for years. They got away with it because what's a few % when your customer's safety factor is 2. Once they got into weight-sensitive aerospace applications, where sometimes the factor is 1.2, rockets started blowing up on the launch pad.
URI [1]: https://www.justice.gov/archives/opa/pr/aluminum-e...

aYsY4dDQ2NrcNzA wrote 4 hours 18 min ago:
Should have resulted in jail time. A monetary fine is no deterrent.

Scramblejams wrote 1 hour 28 min ago:
It did result in jail time. The linked document states that the testing lab supervisor was sentenced to 3 years. (Not sure how much of that time was actually served, apparently he was suffering from dementia.) More info: [1] Also a correction to GP: they were payload deployment failures, they didn't blow up on the pad. More here: [2]
URI [1]: https://www.oregonlive.com/portland/2018/08/co...
URI [2]: https://arstechnica.com/science/2019/05/nasa-f...

signalToNose wrote 8 hours 6 min ago:
Consumer protection laws prevent businesses from following this to its extreme. For many businesses the ideal would be to just sell stuff that immediately breaks down as soon as it's sold. It has then fulfilled its purpose from their point of view.

delichon wrote 7 hours 45 min ago:
I run sous vide cookers 24/7, and they uniformly break within 90 days or less. But they don't like to admit their smaller duty cycle, so they don't, and keep sending me warranty replacements instead. I keep buying different brands looking for one with a longer life. I'll bet most people do that when their gadgets die, and purposely making products that die as soon as sold isn't often a successful business model.

FuriouslyAdrift wrote 3 hours 37 min ago:
Try a Breville PolyScience... [1] Or if you want something even beefier: [2]
URI [1]: https://www.breville.com/en-us/product/csv750
URI [2]: https://sammic.com/en/smartvide-xl

delichon wrote 2 hours 22 min ago:
It looks like the Breville is the most affordable at $600. Currently I'm paying optimistically $45/90 days or $0.50/day. For the Breville to match that it would need to survive for 3.29 years. Will it?

FuriouslyAdrift wrote 2 hours 12 min ago:
Maybe... the Sammic is made for a high-volume commercial kitchen.

muzani wrote 5 hours 59 min ago:
What do you sous vide 24/7? It sounds like it would be party grounds for bacteria. Also curious if the bags and other components break as well.

delichon wrote 5 hours 43 min ago:
Beef, lamb, sometimes pork. I have a daily meal of a cheap, tough cut of meat cooked for 48 hours at 150F. Sous vide is generally not a bacterial growth risk above 140F. At 150F throughout, you get decent pasteurization in under two minutes. Two days of that is such extreme overkill that I'm concerned about the nutritional effect of overcooking. The Food Saver style vacuum sealers fail fast for me, so I bought a $400 chamber sealer, and I'm on year 5 with it.

Nathanael_M wrote 5 hours 39 min ago:
I think I love you? This is great. Do you have them running in arrays of 3? What's your favourite cut? What's the best cost:deliciousness cut? What bags do you use to minimize plastic leaching?

delichon wrote 5 hours 28 min ago:
It's just me, so I only need one running at a time. Every day I take one serving out and put another one in. I clean the tank about once per week, or if something breaks. My favorite is short ribs, my daily drivers are chuck roast or shank. The prices have skyrocketed in the last few years.
I buy in bulk on sale and portion it into bags with a chamber-style vacuum sealer. It goes straight from the freezer into the tank.

Nathanael_M wrote 5 hours 19 min ago:
Do you take pride in knowing that you eat cooler than anyone else? Because you should. Short rib is shocking where I am. Even chuck is pushing past $15 a pound. What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems impractical for daily consumption.

delichon wrote 4 hours 57 min ago:
Chuck on sale is now $8 a pound, more than double since Covid started. I am eating less of it and more ground beef, pork and eggs. I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat. I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.

cestith wrote 6 hours 37 min ago:
A friend of mine gets new headphones/headsets every six to eighteen months, and hasn't bought a pair entirely out of pocket in years. For him it's all down to buying the Microcenter protection plan every time they're replaced. They fail, he takes them back, he gets store credit for the purchase price, and he buys a new set and a new plan. He doesn't even care about the manufacturer's warranty anymore. Personally, for most of my headphones I look for metal mechanical connections instead of plastic and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven't really hashed out the numbers together. I'm typing this while wearing a HyperX gaming headset I bought refurbished that's old enough that I've replaced the earpads while everything else continues to work. Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.

hnuser123456 wrote 7 hours 18 min ago:
Are there not industrial ones meant to last longer? Maybe you can buy a used but good condition one of those.

mattkrause wrote 6 hours 11 min ago:
Definitely -- get something meant for a lab. I worked in one that had a 150F water bath running day and night.

WJW wrote 7 hours 4 min ago:
There are, and if you really have the workload that you need to cook stuff 24/7 (what in god's name is OP cooking btw?) then you should definitely get one of those. Maybe not even secondhand but just a new one. The cheap consumer-grade ones are meant for people who use them once or twice a year. This is a fine example of what I meant about people complaining when they use products beyond their design parameters.

lawlessone wrote 2 hours 48 min ago:
If the manufacturers keep replacing the machines because they're within warranty, isn't this cheaper for OP?

tracker1 wrote 6 hours 17 min ago:
I got one that seems to be kind of in the middle, it's better built than most of the consumer models but not quite as "industrial" feeling as some of the commercial models. I use it a few times a week for a few hours each. I'm on a mostly carnivore, mostly ruminant meat diet and for costs tend to do a lot of ground beef... I sous vide a bunch of burgers in 1/2 lb ring molds, refrigerate and sear off when hungry. This lets me have safer burgers that aren't overcooked. I do 133F for 2.5+ hours. I also do steaks about once or twice a week.
I have to say it's probably the best kitchen investment I could have made in terms of impact on the output quality.

elzbardico wrote 6 hours 30 min ago:
It is easy to have to run a bunch of sous vide cookers 24/7 if you have a small restaurant or food delivery business.

compiler-guy wrote 5 hours 1 min ago:
In which case one shouldn't be using consumer-grade kitchen equipment.

elzbardico wrote 4 hours 24 min ago:
Call it vibe cooking.

account42 wrote 7 hours 21 min ago:
Well, from an evil business perspective their options are either
- the product doesn't break and you don't buy a replacement from them because you still have a working product
- the product breaks and there is a greater than 0% chance that you will buy a replacement product from them
Of course in practice it's more complicated, but I wouldn't be so quick to declare that the math doesn't work out.

rlander wrote 7 hours 27 min ago:
That's not a small cycle count for a normal household. 90 x 24 = 2,160 total hours. I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week. That works out to roughly 15 years of usable machine time for the average person. Not bad at all.

plywoodShadow wrote 6 hours 52 min ago:
2160/12 is 180 weeks, or roughly 3.5 years, not 15 years

josephg wrote 7 hours 16 min ago:
Photography is the same way. Most SLR / DSLR / mirrorless cameras have a mechanical shutter which is expected to last around 200k-1m activations. I've had a camera for a bit over a year. I've used it quite heavily, and my shutter count is at about 13k photos. At this rate, the shutter will probably last for 20+ years - which seems fine. If I'm still using the camera by then, spending a few hundred dollars to replace the shutter mechanism sounds totally reasonable.

account42 wrote 7 hours 17 min ago:
You think a measly 360 uses at your 6 hours typical operation is even remotely acceptable for a glorified heating element? And yes, 15 years is bad. I don't want to replace my entire household every 15 years FFS.

47282847 wrote 7 hours 20 min ago:
Assuming linearity, which I doubt is the case.

doubled112 wrote 8 hours 16 min ago:
When the design spec seems to be a 3-year-long lease I can see why people get bothered.

lelandfe wrote 19 hours 58 min ago:
So the invisible 12h timer runs during cutscenes. During Excalibur 2 runs, I used to open and close the PS1 disc tray to skip (normally unskippable) cutscenes. Never knew why that worked. (I also never managed to get it)

jonhohle wrote 19 hours 28 min ago:
I'm going to wager that the cutscenes are all XA audio/video DMA'd from the disc. Opening the disc kills the DMA and the error recovery is just to end the cutscene and continue. The program is in RAM, so a little interruption on reading doesn't hurt unless you need to time it to avoid an error reading the file for the next section of gameplay.

ad133 wrote 11 hours 10 min ago:
This is significantly better handling than the previous game (Final Fantasy VIII). My disk 1 (it had four disks) got scratched over time (I was a child after all), and the failure mode was just to crash - thus the game was unplayable. The game had a lot of cutscenes.

Insanity wrote 18 hours 58 min ago:
That's a solid guess. And if that's the case, that's actually pretty good error handling!

Jare wrote 14 hours 16 min ago:
I recall that handling disc eject was an explicit part of the Tech Requirements Doc (things the console manufacturer requires you to comply with). They'd typically check while playing, while loading and while streaming.
p1necone wrote 19 hours 33 min ago:
> Never knew why that worked.
I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.

debo_ wrote 20 hours 16 min ago:
So that's why it's called Excalibur 2!

jbreckmckye wrote 20 hours 41 min ago:
Oh yeah. The sword you pick up in Memoria. The problem there is that the PAL version runs slower; the way PSX games "translated" between the two video systems was just to have longer VSync pauses for PAL. So the game is actually slower, not interpolated.

reactordev wrote 20 hours 0 min ago:
Longer VSync pauses but larger frame time deltas, so it's basically the same speed of play. The only thing that was even noticeable was the UI lag.

fredoralive wrote 15 hours 2 min ago:
Erm. No, like lots of games during the era quite a lot of stuff is tied to the frame rate, so the 50Hz region game just runs slower than the 60Hz one as next to nobody bothers to adjust for it. The clock for the hidden weapon does run at the same rate for both unfortunately, hence it being harder to get in 50Hz regions.

reactordev wrote 12 hours 39 min ago:
Incorrect. I'm looking at the source code. It's not perfect but it's not just "slowed down to 50Hz" like people claim.

jbreckmckye wrote 11 hours 8 min ago:
When you say looking at the source code, what do you mean here? AFAIK the source for FF9 PSX (and all the PSX FF games) has been lost as Square just used short-term archives. Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI). In terms of timers, the BIOS does grant you access to root timers, but these are largely modulated by a hardware oscillator. (Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play.)

anthk wrote 8 hours 42 min ago:
FF VII-IX were reimplemented under a custom engine.

reactordev wrote 4 hours 48 min ago:
Except I'm looking at the original source, not the remake: the crappy C/C++ Square engine, not C# Unity code. There are a number of timers and things used. But the claim that it runs slower is absolutely false. It's just perceived that way because it's "drawn" slower.

jbreckmckye wrote 3 hours 35 min ago:
Firstly, could you elaborate on what code you're looking at? Square have never shared the source code for these titles and were not even practicing real version control at this time (see: Eidos FF7/8 debacle). Secondly, it absolutely will run slower. Animations will take longer to complete; FMVs will play at a different rate; controller sampling will be reduced. My scepticism isn't coming from hearsay or ignorance: I have written PlayStation software, and PSX software is not parallelised, even though it can support threading and cooperative concurrency. The control flow of the title is very locked into the VSync loop, from your first ResetGraph(0) right to your final DrawOTable(*p). In addition, I have done a bunch of reversing work on the other two PSX games, and they are not monolithic programs. They can't be, because there simply isn't enough RAM to store the .TEXT of the entire thing at once. So when you say "the source code", I'm inclined to ask: for which module? The kernel or one of the overlays?

reactordev wrote 9 hours 0 min ago:
It's definitely not lost...

jbreckmckye wrote 8 hours 34 min ago:
What code are you looking at?
FFIX for PSX would have been written in C (or possibly C++) with PSY-Q. It will not be one program - those games were composed of multiple overlays that are banked in / out over the PlayStation's limited memory. From what I know the PC release was a port to a new framework, which supports the same script engines, but otherwise is fresh code. This is how it can support mobile, widescreen, Steam achievements etc.

mungoman2 wrote 14 hours 48 min ago:
Wouldn't a slower tick make it easier, as you get more wall time to do the same challenge?

fredoralive wrote 14 hours 34 min ago:
No? Wall time (that the challenge runs on) is unchanged, game time (VSync) is running at 83% of full speed (50Hz vs 60Hz), so if something tied to frame rate (animation, walking speed etc.) takes 1 second to do on NTSC, it'll take 1.2 seconds to do on PAL etc.

serf wrote 21 hours 11 min ago:
The easy way to e-Nostradamus predictions: "See this crash? I predicted it years ago. Don't ask me how, I couldn't tell you."
p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.

prmoustache wrote 14 hours 11 min ago:
I had an iPaq for a while and I don't remember seeing OS/hardware crashes.

JoshGlazebrook wrote 21 hours 18 min ago:
2038 is going to be a fun year.

cestith wrote 6 hours 29 min ago:
You have 13 years to upgrade to 64-bit ints or switch to a long long for time_t. Lots of embedded stuff or unsupported closed-source stuff is going to need special attention or to be replaced. I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don't have to worry about that.

chatmasta wrote 2 hours 19 min ago:
You've got 13 years to update unless any of your code includes dates in the future. Just stay away from anything related to mortgages, insurance policies, eight-year PhD programs, retirement accounts...

account42 wrote 5 hours 49 min ago:
Most 32-bit games won't be updated, we'll have to resort to faking the time to play many of them.

cestith wrote 2 hours 19 min ago:
Most 32-bit games written for some form of Unix will use the system time_t if they care about time. The ones written properly, anyway. Modern Unix systems have a 64-bit time_t, even on 32-bit hardware and OS. If it's on some other OS and uses the Unix epoch on a signed 32-bit integer, that's another design flaw.

pjc50 wrote 11 hours 12 min ago:
Fixing that is my retirement plan.

kevin_thibedeau wrote 19 hours 17 min ago:
Everybody is sleeping on 2036 for NTP. That's when the fun begins.

wiredpancake wrote 18 hours 17 min ago:
Assuming correct implementation of the NTP spec and adherence to the "eras" functions, NTP should be resistant to this failure in 2036. The problem being so many micro-controllers, non-interfaceable or cheaply designed computers/devices/machines might not follow the standards and therefore be susceptible, although your iPhone, laptop and fridge should all be fine.

jonhohle wrote 19 hours 44 min ago:
That seems much closer than it did in y2k.

aaronbrethorst wrote 14 hours 37 min ago:
[ 25 ] Now [ 13 ] yep

sunrunner wrote 21 hours 20 min ago:
Not a comment on the post, but I sure wish Jira would load even half as quickly as this site.

9dev wrote 17 hours 32 min ago:
We recently moved to Linear and couldn't be happier, can recommend!

andrewinardeer wrote 20 hours 2 min ago:
Perhaps it's hosted on a disposable vape?

Shared404 wrote 1 hour 27 min ago:
Pretty sure the dead sibling to this comment shouldn't be dead.
Source: [1]
badass
URI [1]: https://lenowo.org/viewtopic.php?t=28

gmane wrote 19 hours 55 min ago:
Commenting on my Epic from an LG Fridge.

stevage wrote 20 hours 12 min ago:
It's not loading for me at all.

hughes wrote 20 hours 37 min ago:
Is this a joke because the site isn't loading at all?

sunrunner wrote 20 hours 6 min ago:
At the time of writing the comment it was practically instantaneous for me and the comment was genuine. Now it seems to be having trouble and I'm choosing to retroactively make the comment a joke about Jira ;)

SpicyUme wrote 20 hours 24 min ago:
Came back to check this since the tab never loaded. I'm guessing traffic caused some issues?

fifteen1506 wrote 12 hours 4 min ago:
It just supports 1536 concurrent users [0]. Which is fine unless you get to the HN front page. [0]
URI [1]: https://lenowo.org/viewtopic.php?t=28

minki_the_avali wrote 14 hours 6 min ago:
You folks overflowed the 32 MB of RAM that my forum is running on and caused it to restart a few times due to the high number of simultaneous connections. It has recovered now though.

Insanity wrote 18 hours 56 min ago:
I'm guessing HN hug of death. Probably smarter than just auto-scaling to handle any surge traffic and then getting swamped by crawlers & higher bills.

antsar wrote 21 hours 14 min ago:
It takes serious hardware investment [0] to pull that off. [0]
URI [1]: https://lenowo.org/viewtopic.php?t=28

fifteen1506 wrote 12 hours 5 min ago:
Meta-Meta-Meta:
Update: After the recent Hacker News "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera surprisingly is fully sufficient for its purpose. In other words, I will not be moving the forum in the near future as 32 MB of RAM seem to be enough to run it.
Source:
URI [1]: https://lenowo.org/viewtopic.php?t=28

skilled wrote 14 hours 22 min ago:
> Host it on the Fritzbox 7950 instead? It's a router..
oh my god that made me laugh

ranger_danger wrote 21 hours 23 min ago:
Seems to be a PocketPC port of Doom, with no source given or even a snippet of the relevant code/variable name/etc. shown at all.

unixhero wrote 21 hours 15 min ago:
Yes. I think it seems like it was the OS that overflowed, and not Doom in this case.

jama211 wrote 15 hours 33 min ago:
They explained it was in the game code though?

unixhero wrote 12 hours 59 min ago:
To me, that error message was caused by some panic, and then the OS began gracefully shutting down the application, in this case DooM, which would not have been done by the program itself. Therefore I conclude it was the OS. I am not an OS developer, so I take my own conclusion with a grain of salt.

nomel wrote 21 hours 12 min ago:
It's also running on very old hardware, potentially with some electrolytic capacitors that have dried up. And, there's always the possibility that it's a gamma ray [1]!
URI [1]: https://www.bbc.com/future/article/20221011-how-space-weat...

0cf8612b2e1e wrote 21 hours 34 min ago:
I am going to need to see this replicated before I can believe it.

DIR <- back to front page