_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
       
       
       COMMENT PAGE FOR:
   URI   The Cloudflare outage might be a good thing
       
       
        torginus wrote 5 min ago:
        What happens if you don't use Cloudflare and just host everything on a
        server?
        
        Can't you run a website like that if you don't host heavy content?
        
         How common are DDoS attacks anyway? And aren't there local (to
         the server) defenses that analyze user behavior to a decent
         accuracy (at least well enough to tell that visitors are using a
         real browser and behaving more or less like a human would),
         making attacks expensive?
        
         Can't you buy a list of ISP ranges from a GeoIP provider (you
         can)? At least then you'd know which addresses belong to real
         humans.
        
         I don't think botnets are that big of a problem (maybe in some
         obscure places of the world, but you can temporarily ban an IP
         range if there's a lot of suspicious traffic coming from it).
        
         If lots of legit networks (as in, belonging to people who are
         paying an ISP for their network connections) have botnets, that
         means most PCs are compromised, which is a much more severe
         issue.
       
        nicman23 wrote 54 min ago:
         I hate that I cannot just scrape things for my own usage and
         have to use things like Camoufox instead of curl.
       
        jcattle wrote 54 min ago:
        "The Cloudflare outage was a good thing [...] they're a warning. They
        can force redundancy and resilience into systems."
        
        - he says. On Github.
       
          Afforess wrote 28 min ago:
          Thanks for doing the meme! [1] You are very intelligent!
          
   URI    [1]: https://knowyourmeme.com/memes/we-should-improve-society-som...
       
            jcattle wrote 18 min ago:
             That's fair. However, I don't think I would have written
             that if those thoughts were shared on a blogging platform.
            
            Most blogging platforms do not qualify as critical infrastructure.
            GitHub with all its CI/CD and supply chain attacks does.
            
             There is a certain irony in this being written on critical
             (centralized) infrastructure without any apparent need.
            
            Maybe it was intended, maybe not, in any case I found it funny.
       
              rkomorn wrote 14 min ago:
               I agree. I think the whole point is that someone like the
               TFA author has a pretty broad choice of places to publish
               this, and choosing GitHub is somewhat ironic.
              
              Reminds me of the guy who posted an open letter to Mark
              Zuckerberg like "we are not for sale" on LinkedIn, a place that
              literally sells access to its users as their main product.
       
        notepad0x90 wrote 1 hour 29 min ago:
        Does the author of this post not see the irony of posting this content
        on Github?
        
         My counterargument is that "centralization" in a technical sense
         isn't about what company owns things but about how services are
         operated. Cloudflare is very decentralized.
        
         Furthermore, I've seen regional outages caused by things like
         anchors dropped by ships in the wrong place or a shark biting a
         cable, and regional power outages caused by squirrels, etc.
         Outages happen.
        
        If everyone ran their own server from their own home, AT&T or Level3
        could have an outage and still take out similar swathes of the
        internet.
        
         With CDNs like Cloudflare, if Level3 had an outage, your website
         wouldn't go down just because your home's or VPS host's upstream
         transit happens to be Level3 (or whatever they call themselves
         these days), since your content (at least the static parts) is
         cached globally.
        
         The only real reasonable alternative is something like IPFS,
         web3, and similar approaches.
        
         Cloudflare has always called itself a content transport
         provider; think of it as such. But also, Cloudflare is just one
         player, and there are several very big players. Every big cloud
         provider has a competing product, not to mention companies like
         Akamai.
        
         People are rage-posting about Cloudflare especially because it
         has made CDNs accessible to everyone. You can easily set up a
         free Cloudflare account and be on your merry way. This isn't
         something you should be angry about. You're free to pay for any
         number of other CDNs; many do.
        
         If you don't like how Cloudflare has so much market share, then
         come up with a similarly competitive alternative and profit.
         Just this HN thread alone is enough for me to think there is a
         market for more players. Or just spread the word about the
         competition that exists today. Use frontdoor, cloudfront,
         netlify, flycdn, akamai, etc. It's hardly a monopoly.
       
        tomschwiha wrote 1 hour 32 min ago:
         For me personally, I didn't notice the downtime in the first
         hour or so. On some websites assets were not loading, but that's
         it. The Turnstile outage probably impacted me most. Could be
         because I'm EU based and Cloudflare is not "so" widespread here
         as in other parts of the world.
       
        rzerowan wrote 1 hour 47 min ago:
         So we're going backwards to a world where there are basically 5
         computers running everything and everyone is basically accessing
         the world through a dumb terminal, even though the digital slab
         in our pockets has more compute than a roomful of the early-gen
         devices. Hopefully critical infra shifts back to managed metal
         or private clouds - though I don't see it; with the last decade
         of cloud evangelism pushing legacy systems to the cloud, it
         doesn't look like reversing anytime soon.
       
          zwnow wrote 1 hour 45 min ago:
           I agree, considering all the Cloudflare/AWS/Azure apologists I
           see all around... Learning AWS is already the #1 tip on social
           media to "become employed as a dev in 2025 guaranteed", and I
           always just sigh when seeing this. I wouldn't touch it with a
           stick.
       
        vasco wrote 1 hour 48 min ago:
        I'll die on the hill that centralization is more efficient than
        decentralization and that rare outages of hugely centralized systems
        that are otherwise highly reliable are much better than full
        decentralization with much worse reliability.
        
         In other words, when AWS or Cloudflare go down it's catastrophic
         in the sense that everyone sees the issues at the same time, but
         smaller providers usually have many more ongoing issues, which
         just happen to be "chronic" rather than "acute" pains.
       
          torginus wrote 13 min ago:
          And the irony is that people are pushing for decentralization like
          microservices and k8s - on centralized platforms like AWS.
       
          GeneralMaximus wrote 1 hour 2 min ago:
          Efficient in terms of what, exactly?
          
          There are multiple dimensions to this problem. Putting everything
          behind Cloudflare might give you better uptime, reliability,
          performance, etc. but it also has the effect of centralizing power
          into the hands of a single entity. Instead of twisting the arms of
          ten different CXOs, your local politician now only needs to twist the
          arm of a single CXO to knock your entire business off the internet.
          
          I live in India, where the government has always been hostile to the
          ideals of freedom of speech and expression. Complete internet
          blackouts are common in several states, and major ISPs block websites
          without due process or an appeals mechanism. Nobody is safe from
           this, not even Github[1]. In countries like India,
           decentralization is a preventative measure.
          
          And I'm not even going to talk about abuse of monopoly power and all
          that. What happens when Cloudflare has their Apple moment? When they
          jack up their prices 10x, or refuse to serve customers that might use
          their CDNs to serve "inappropriate" content? When the definition of
          "inappropriate" is left fuzzy, so that it applies to everything from
          CSAM to political commentary?
          
          No thanks.
          
   URI    [1]: https://en.wikipedia.org/wiki/Censorship_of_GitHub#India
       
            vasco wrote 40 min ago:
            The fix to government censorship must be political, not technical.
       
          Xelbair wrote 1 hour 28 min ago:
           >I'll die on the hill that hyperoptimized systems are more
           efficient than anti-fragile ones.
           
           Of course they are; the issue is what level of failure we're
           going to accept.
       
        0xbadcafebee wrote 2 hours 38 min ago:
        Centralization has nothing to do with the problems of society and
        technology. And if you think the internet is all controlled by just a
        couple companies, you don't actually understand how it works. The
        internet is wildly decentralized. Even Cloudflare is. It offers tons of
        services, all of which are completely optional and can be used
        individually. You can also stop using them at any time, and use any of
        their competitors (of which there are many).
        
        If, on the off chance, people just get "addicted" to Cloudflare, and
        Cloudflare's now-obviously-terrible engineering causes society to
        become less reliable, then people will respond to that. Either
        competitors will pop up, or people will depend on them less, or
        governments will (finally!) impose some regulations around the
        operation of technical infrastructure.
        
         We actually have too much freedom on the Internet. Companies are free
        to build internet systems any way they want - including in very
        unreliable ways - because we impose no regulations or standards
        requirements on them. Those people are then free to sell products to
        real people based on this shoddy design, with no penalty for the
        products falling apart. So far we haven't had any gigantic disasters
        (Great Chicago Fire, Triangle Shirtwaist Factory Fire, MGM Grand Hotel
        Fire), but we have had major disruptions.
        
        We already dealt with this problem in the rest of society. Buildings
        have building codes, fire codes, electrical codes. They prescribe and
        require testing procedures, provide standard building methods to ensure
        strength in extreme weather, resist a spreading fire long enough to
        allow people to escape, etc. All measures to ensure the safety and
        reliability of the things we interact with and depend on. You can build
        anything you want - say, a preschool? - but you aren't allowed to build
        it in a shoddy manner. We have that for physical infrastructure; now we
        need it for virtual infrastructure. A software building code.
       
          DeathArrow wrote 2 hours 6 min ago:
          Centralization means having a single point of failure for everything.
          If your government, mobile phone or car stops working, it doesn't
          mean all governments, all cars and all mobile phones stop working.
          
           Centralization makes mass surveillance easier and makes
           selectively denying service easier. Centralization also means
           that once someone hacks into the system, they gain access to
           all the data, not just a part of it.
       
        almosthere wrote 2 hours 38 min ago:
        how many people are still on us-east-1
       
          mcny wrote 2 hours 29 min ago:
           My old employer used Azure. It irritated me to no end when
           they said we must rename all our resources to match the
           convention of naming everything in US East with an "eu-"
           prefix (for "Eastern United States", I guess).
           
           A total clown show.
       
        Surac wrote 2 hours 55 min ago:
         The thing I learned from the incident is that Rust offers an
         unwrap function. It puzzles me why the hell they built such a
         function in the first place.
       
          aw1621107 wrote 2 hours 47 min ago:
           > It puzzles me why the hell they built such a function in the
           first place.
          
          One reason is similar to why most programming languages don't return
          an Option when indexing into an array/vector/list/etc. There are
          always tradeoffs to make, especially when your strangeness budget is
          going to other things.
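           
           A minimal sketch of that tradeoff in Rust (illustrative, not
           from the parent comment): unwrap() trades explicit handling
           for brevity and panics if the value is absent, while the
           Option-returning get() forces the caller to deal with the
           missing case.
           
               fn main() {
                   let v = vec![1, 2, 3];
               
                   // Explicit: get() returns Option<&i32>, so absence
                   // must be handled and no panic is possible.
                   match v.get(10) {
                       Some(x) => println!("got {x}"),
                       None => println!("index 10 is out of bounds"),
                   }
               
                   // Convenient but fallible: this line would panic with
                   // "called `Option::unwrap()` on a `None` value".
                   // let x = v.get(10).unwrap();
               
                   // Plain indexing panics too, as in most languages:
                   // let y = v[10];
               }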
       
        zie1ony wrote 3 hours 5 min ago:
         My friend wasn't able to get an RTG during the outage. They had
         to use an ultrasound machine on his broken arm to see inside.
       
          Aurornis wrote 2 hours 40 min ago:
          > My friend wasn't able to do RTG during the outage.
          
          What is RTG?
       
            teiferer wrote 1 hour 43 min ago:
            Wilhelm Röntgen, Nobel Prize in 1901, experimentally discovered
            X-rays.
       
            digestives wrote 2 hours 25 min ago:
             X-ray. In some languages (like Polish), the abbreviation
             comes from the Röntgen unit: [1]
            
   URI      [1]: https://en.wikipedia.org/wiki/Roentgen_(unit)
       
            soni96pl wrote 2 hours 31 min ago:
            X-ray
       
        oidar wrote 3 hours 6 min ago:
         I wonder what life without Cloudflare would look like. What
         practices would fill the gaps if a company didn't - or wasn't
         allowed to - satisfy the concerns that Cloudflare addresses?
       
        throwaway81523 wrote 3 hours 10 min ago:
         Now just wait till every company on earth really does replace
         most of its employees with ChatGPT... and then OpenAI's data
         center goes offline with a fiber cut or something. All work
         everywhere stops. The Cloudflare outage is nothing compared to
         that.
       
          delaminator wrote 41 min ago:
           That was this outage. ChatGPT and Claude are both behind
           Cloudflare’s bot detection. You couldn’t log into either web
           frontend.
          
          And the error message said you were blocking them. We had support
          tickets coming in demanding to know why ChatGPT was being blocked.
          
          We also couldn’t log into our supplier’s B2B system to place our
          customer orders.
          
          So all the advice of “just self host” is moot when you’re in a
          food web.
       
          teiferer wrote 1 hour 42 min ago:
          > goes offline with a fiber cut
          
          If a fiber cut brings your network down then you have fundamental
          network design issues and need to change hiring practices.
       
          DeathArrow wrote 2 hours 5 min ago:
          That's why it's better to have redundancy. Hire Claude and Deepseek,
          too.
       
        tonyhart7 wrote 3 hours 16 min ago:
         I don't like this argument, since you can apply it to Google,
         Microsoft, AWS, Facebook, etc.
         
         The tech world is dominated by US companies, and what is the
         alternative to most of these services? There are a lot fewer
         than you might think, and even then you must make compromises in
         certain areas.
       
        L-four wrote 3 hours 21 min ago:
         It's a tragedy of the commons. Even if you don't use Cloudflare,
         does it matter, if no one can pay for your products?
       
        stroebs wrote 3 hours 53 min ago:
        The problem is far more nuanced than the internet simply becoming too
        centralised.
        
        I want to host my gas station network’s air machine infrastructure,
        and I only want people in the US to be able to access it. That simple
        task is literally impossible with what we have allowed the internet to
        become.
        
         FWIW I love Cloudflare’s products and make use of many of them,
         but I can’t advocate for using them in my professional job,
         since we actually require distributed infrastructure that
         won’t fail globally in random ways we can’t control.
       
          notepad0x90 wrote 1 hour 28 min ago:
           Is Cloudflare having more outages than AWS, GCP, or Azure?
           Honestly curious; I don't know the answer.
       
            nananana9 wrote 55 min ago:
            Definitely not.
            
            I was a bit shocked when my mother called me for IT help and sent
            me a screenshot of a Cloudflare error page with Cloudflare being
             the broken link and not the server. I assumed it was a bug
             in the error page and told her that the server was down.
       
          Xelbair wrote 1 hour 35 min ago:
          Genuine question - why are you spending time and effort on geofencing
          when you could spend it on improving your software/service?
          
           It takes time and effort for no gain toward any sensible
           business goal. People outside of the US won't need it, bad
           actors will spoof their location, and it might inconvenience
           your real customers.
          
           And if you want secure communication, just set up a zero-trust
           network.
       
          Joel_Mckay wrote 2 hours 20 min ago:
           Client-side SSL certificates with embedded user account
           identification are trivial, and work well for publicly exposed
           systems where IPsec or dynamic frame sizes are problematic
           (corporate networks often mangle traffic).
           
           Accordingly, connections from unauthorized users are
           effectively restricted, but are also not necessarily
           pigeonholed to a single point of failure. [1] Best of luck =3
          
   URI    [1]: https://www.rabbitmq.com/docs/ssl
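           
           A toy sketch of the identification step in Rust (pure std; the
           subject string and account format here are assumptions, and
           the actual certificate validation is done by the TLS layer,
           e.g. as configured per the RabbitMQ docs above): once the
           handshake has verified the client certificate against the
           internal CA, the application maps the certificate subject to a
           user account.
           
               // Extract the account identity embedded in a verified
               // client certificate's subject (hypothetical format).
               fn account_from_subject(subject: &str) -> Option<&str> {
                   // e.g. "CN=alice@example.com,O=ExampleCorp"
                   subject
                       .split(',')
                       .find_map(|p| p.trim().strip_prefix("CN="))
               }
               
               fn main() {
                   let subject = "CN=alice@example.com,O=ExampleCorp";
                   match account_from_subject(subject) {
                       Some(user) => println!("authenticated as {user}"),
                       None => println!("no account identity in cert"),
                   }
               }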
       
          asimovDev wrote 2 hours 30 min ago:
          not a sysadmin here. why wouldn't this be behind a VPN or some kind
          of whitelist where only confirmed IPs from the offices / gas stations
          have access to the infrastructure?
       
            yardstick wrote 2 hours 4 min ago:
            In practice, many gas stations have VPNs to various services,
            typically via multiple VPN links for redundancy. There’s no
            reason why this couldn’t be yet another service going over a VPN.
            
            Gas stations didn’t stop selling gas during this outage. They
            have planned for a high degree of network availability for their
            core services. My guess is this particular station is an
             independent, or the air pumping solution is not on anyone’s
             high-risk list.
       
          zrm wrote 2 hours 37 min ago:
          > I want to host my gas station network’s air machine
          infrastructure, and I only want people in the US to be able to access
          it. That simple task is literally impossible with what we have
          allowed the internet to become.
          
          That task was never simple and is unrelated to Cloudflare or AWS. The
          internet at a fundamental level only knows where the next hop is, not
          where the source or destination is. And even if it did, it would only
          know where the machine is, not where the person writing the code that
          runs on the machine is.
       
            teiferer wrote 1 hour 53 min ago:
            And that is a good thing and we should embrace it instead of giving
            in to some idiotic ideas from a non-technical C-suite demanding
            geofencing.
       
          Aurornis wrote 2 hours 42 min ago:
          > and I only want people in the US to be able to access it. That
          simple task is literally impossible with what we have allowed the
          internet to become.
          
          Is anyone else as confused as I am about how common anti-openness and
          anti-freedom comments are becoming on HN? I don’t even understand
          what this comment wants: Banning VPNs? Walling off the rest of the
          world from US internet? Strict government identity and citizenship
          verification of people allowed to use the internet?
          
          It’s weird to see these comments get traction after growing up in
          an internet where tech comments were relentlessly pro freedom and
          openness on the web. Now it seems like every day I open HN and there
           are calls to lock things down, shut down websites, institute
           age (and therefore identity) verification requirements. It’s
           all so foreign, and it feels like the vibe shift happened
           overnight.
       
            dmoy wrote 2 hours 30 min ago:
            > Is anyone else as confused as I am about how common anti-openness
            and anti-freedom comments are becoming on HN?
            
            In this specific case I don't think it's about being anti-open? 
            It's that a business with only physical presence in one country
            selling a service that is only accessible physically inside the
            country.... doesn't.... have any need for selling compressed air to
            someone who isn't like 15 minutes away from one of their gas
            stations?
            
            If we're being charitable to GP, that's my read at least.
            
            If it was a digital services company, sure.  Meatspace in only one
            region though, is a different thing?
       
              tensegrist wrote 49 min ago:
              "only need US customers to be able to" vs "want non-US customers
              to be unable to"
       
              teiferer wrote 1 hour 55 min ago:
              > In this specific case I don't think it's about being anti-open?
              It's that a business with only physical presence in one country
              selling a service that is only accessible physically inside the
              country.... doesn't.... have any need for selling compressed air
              to someone who isn't like 15 minutes away from one of their gas
              stations?
              
              But that person might be physically further away at the time they
              want to order something or gather information etc. Maybe they are
              on holidays in Spain and want to access their account to pay a
              bill. Maybe they are in Mexico on a work trip and want to help
              their aunt back home to use some service for which they need to
              log in from abroad.
              
              The other day I helped a neighbor (over here in Europe) prepare
              for a trip to Canada where he wanted to make adjustments to a car
              sharing account. The website always timed out. It was geofenced.
              I helped him set up a VPN. That illustrated how locked in this
              all has become, geofencing without thinking twice.
       
              vpribish wrote 2 hours 16 min ago:
              you're being obtuse, GP clearly wants a locked down internet
       
          Fnoord wrote 3 hours 50 min ago:
           Literally impossible? On the contrary: geofencing is easy. I
           block all kinds of nefarious countries on my firewall, and I
           don't miss them (no loss in not being able to connect to/from
           a mafia state like Russia). Now, if I were to block FAMAG... or
           Cloudflare...
       
            stroebs wrote 3 hours 5 min ago:
            Yes, literally impossible. The barrier to entry for anyone on the
            internet to create a proxy or VPN to bypass your geofencing is
            significantly lower than your cost to prevent them.
       
              Joel_Mckay wrote 1 hour 57 min ago:
               Actually, the 140k Tor exit nodes, VPNs, and compromised
               proxy servers have been indexed.
               
               It takes 24 minutes to compile these firewall rules, but
               the black-list along with tripwires has proven effective
               at banning game cheats - for example, dropping connections
               from TX with a hop count and latency significantly
               different from their peers.
               
               Preemptively banning all bad-reputation cloud IP ranges
               except whitelisted hosts has zero impact on clients. =3
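               
               A minimal sketch of the matching step in Rust (pure std;
               the ranges below are TEST-NET placeholders, not the actual
               indexed lists): a connection is dropped when its source
               address falls inside any black-listed CIDR range.
               
                   use std::net::Ipv4Addr;
                   
                   // True if addr falls inside net/prefix.
                   fn in_cidr(addr: Ipv4Addr, net: Ipv4Addr,
                              prefix: u32) -> bool {
                       let mask = if prefix == 0 {
                           0
                       } else {
                           u32::MAX << (32 - prefix)
                       };
                       (u32::from(addr) & mask) == (u32::from(net) & mask)
                   }
                   
                   fn main() {
                       // Hypothetical bad-reputation ranges.
                       let blacklist = [
                           (Ipv4Addr::new(203, 0, 113, 0), 24),
                           (Ipv4Addr::new(198, 51, 100, 0), 24),
                       ];
                       let client = Ipv4Addr::new(203, 0, 113, 7);
                       let banned = blacklist
                           .iter()
                           .any(|&(net, p)| in_cidr(client, net, p));
                       println!("{client} banned: {banned}");
                   }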
       
              Dylan16807 wrote 2 hours 17 min ago:
              I don't understand why you want to allow any random guy anywhere
              in the US but not people country hopping on VPNs.  For your air
              machine infrastructure.
              
              It's a bit weird that you can't do this simple thing, but what's
              the motivation for this simple thing?
       
              Aurornis wrote 2 hours 41 min ago:
              I don’t even understand where this line of reasoning is going.
              Did you want a separate network blocked off from the world? A ban
              on VPNs? What are we supposed to believe could have been
              disallowed to make this happen?
       
        timenotwasted wrote 4 hours 9 min ago:
        "Embrace outages, and build redundancy." — It feels like back in the
         day this was championed pretty hard, especially by places like
         Netflix (Chaos Monkey), but as downtime has become more expected
         it seems we are sliding backwards. I have a tendency to rely too
         much on feelings, so
        I'm sure someone could point me to some data that proves otherwise but
         for now that's my read on things. Personally, I've been leaning
         a lot more into self-hosting things I used to just mindlessly
         leave on the cloud.
       
        0x073 wrote 4 hours 13 min ago:
         The outage wasn’t a good thing, since nothing is changing as a
         result. (How many outages has Cloudflare had?)
       
        krick wrote 4 hours 16 min ago:
         It would be a good thing if it caused anything to change. It
         obviously won't. As if a single person reading this post wasn't
         aware
        that the Internet is centralized, and couldn't name specifically a few
        sources of centralization (Cloudflare, AWS, Gmail, Github). As if it's
        the first time this happens. As if after the last time AWS failed (or
        the one before that, or one before…) anybody stopped using AWS. As if
        anybody could viably stop using them.
       
          tcfhgj wrote 29 min ago:
          > As if anybody could viably stop using them.
          
          You can, and even save money.
       
          stingraycharles wrote 1 hour 17 min ago:
           It’s just a function of costs vs. benefits. For most people,
           building redundancy at this layer costs far more than the
           benefits are worth.
          
          If Cloudflare or AWS go down, the outage is usually so big that
          smaller players have an excuse and people accept that.
          
          It’s as simple as that.
          
          “Why isn’t your site working?” “Half the internet is down,
          here read this news article: …” “Oh, okay, let me know when
          it’s back!”
       
          testdelacc1 wrote 2 hours 6 min ago:
          If anything, centralisation shields companies using a hyperscaler
          from criticism. You’ll see downtime no matter where you host. If
          you self host and go down for a few hours, customers blame you. If
          you host on AWS and “the internet goes down”, then customers
          treat it akin to an act of God, like a natural disaster that affects
          everyone.
          
          It’s not great being down for hours, but that will happen
          regardless. Most companies prefer the option that helps them avoid
          the ire of their customers.
          
           Where it’s a bigger problem is when all of a critical
           industry, like retail banking in a country, chooses AWS. When
           AWS goes down all citizens
          lose access to their money. They can’t pay for groceries or
          transport. They’re stranded and starving, life grinds to a halt.
          But even then, this is not the bank’s problem because they’re not
          doing worse than their competitors. It’s something for the banking
          regulator and government to worry about. I’m not saying the bank
          shouldn’t worry about it, I’m saying in practice they don’t
          worry about it unless the regulator makes them worry.
          
          I completely empathise with people frustrated with this status quo.
          It’s not great that we’ve normalised a few large outages a year.
          But for most companies, this is the rational thing to do. And barring
          a few critical industries like banking, it’s also rational for
          governments to not intervene.
       
            DeathArrow wrote 2 hours 1 min ago:
            >If anything, centralisation shields companies using a hyperscaler
            from criticism. You’ll see downtime no matter where you host. If
            you self host and go down for a few hours, customers blame you.
            
            What if you host on AWS and only you go down? How does hosting on
            AWS shield you from criticism?
       
              testdelacc1 wrote 1 hour 51 min ago:
              This discussion is assuming that the outage is entirely out of
              your control because the underlying datacenter you relied on went
              down.
              
              Outages because of bad code do happen and the criticism is fully
              on the company. They can be mitigated by better testing and quick
              rollbacks, which is good. But outages at the datacenter level -
              nothing you can do about that. You just wait until the datacenter
              is fixed.
              
              This discussion started because companies are actually fine with
              this state of affairs. They are risking major outages but so are
              all their competitors so it’s fine actually. The juice isn’t
              worth the squeeze to them, unless an external entity like the
              banking regulator makes them care.
       
          sjamaan wrote 2 hours 59 min ago:
          Same with the big Crowdstrike fail of 2024. Especially when everyone
          kept repeating the laughable statement that these guys have their
          shit in order, so it couldn't possibly be a simple fuckup on their
          end. Guess what, they don't, and it was. And nobody has realized the
          importance of diversity for resilience, so all the major stuff is
          still running on Windows and using Crowdstrike.
       
            c0l0 wrote 44 min ago:
            I wrote [1] in response to the CrowdStrike fallout, and was tempted
            to repost it for the recent CloudFlare whoopsie. It's just too bad
            that publishing rants won't change the darned status quo! :')
            
   URI      [1]: https://johannes.truschnigg.info/writing/2024-07-impending...
       
          ehhthing wrote 3 hours 26 min ago:
          With the rise in unfriendly bots on the internet as well as DDoS
          botnets reaching 15 Tbps, I don’t think many people have much of a
          choice.
       
            swiftcoder wrote 1 hour 18 min ago:
             The cynic in me wonders how much blame the world's leading
             vendor of DDoS prevention might share in the creation of
             that particular problem.
       
              immibis wrote 8 min ago:
               They provide free service to DDoS-for-hire operations and
               do not terminate them when reported.
       
          captainkrtek wrote 4 hours 6 min ago:
          > It would be a good thing, if it would cause anything to change. It
          obviously won't.
          
           I agree wholeheartedly. The only change is internal to these
           organizations (e.g. Cloudflare, AWS). Improvements will be
           made to the relevant systems, and some teams internally will
           also audit for similar behavior, add tests, and fix some bugs.
          
           However, nothing external will change. The cycle of pretending
           like you are going to implement multi-region fades after a
           week, and each company goes on leveraging all these services
           to the Nth degree, waiting for the next outage.
          
           Not advocating that organizations should/could do much; it's
           all pros/cons. But the collective blast radius is still
           impressive.
       
            chii wrote 3 hours 33 min ago:
             The root cause is customers refusing to punish these
             downtimes.
             
             Check out how hard customers punish blackouts from the grid
             - both via wallet and via voting/gov't. It's why grids are
             now more reliable.
            
            So unless the backbone infrastructure gets the same flak, nothing
            is going to change. After all, any change is expensive, and the
            cost of that change needs to be worth it.
       
              whatevaa wrote 2 hours 5 min ago:
              Grid reliability depends on where you live. In some places, UPS
              or even a generator is a must have. So it's a bad example, I
              would say.
       
              mopsi wrote 2 hours 44 min ago:
               Downtimes happen one way or another. The upside of using
               Cloudflare is that bringing things back online is their
               problem and not mine, unlike when I self-host. :]
              
              Their infrastructure went down for a pretty good reason (let the
              one who has never caused that kind of error cast the first stone)
              and was brought back within a reasonable time.
       
              MikeNotThePope wrote 3 hours 11 min ago:
               Is a little downtime such a bad thing? Trying to avoid
               some bumps and bruises in your business has diminishing
               returns.
       
                Xelbair wrote 1 hour 39 min ago:
                Even more so when most of the internet is also down.
                
                 What are customers going to do? Go to a competitor
                 that's also down?
                
                 It is extremely annoying and will ruin your day, but as
                 the movie quote goes - if everyone is special, no one
                 is.
       
                  immibis wrote 9 min ago:
                  They could go to your competitor that's up. If you choose to
                  be up, your competitor's customers could go to you.
       
                krige wrote 2 hours 45 min ago:
                What's "a little downtime" to you might be work ruined and day
                wasted for someone else.
       
                  fragmede wrote 1 hour 35 min ago:
                   It's 2025. That downtime could be the difference
                   between my cat pics not loading fast enough and
                   someone's teleoperated robot surgeon glitching out.
       
                aaron_m04 wrote 2 hours 51 min ago:
                Depends on the business.
       
        theideaofcoffee wrote 4 hours 29 min ago:
        > They [outages] can force redundancy and resilience into systems.
        
         They won’t until either the monetary pain of outages becomes
         greater than the inefficiency of holding on to more systems to
         support that redundancy, or government steps in with clear
         regulation forcing their hand. And I’m not sure about the
         latter. So I’m not holding my breath about anything changing.
         It will continue to be a circus of doing everything on a
         shoestring, because line must go up every quarter or a
         shareholder doesn’t keep their wings.
       
          morshu9001 wrote 3 hours 53 min ago:
          That's ok though, not every website needs 5 9s
       
        chasing0entropy wrote 4 hours 40 min ago:
         Spot-on article, but without a call to action. What can we do to
         combat the migration of society to a centralized,
         corporate-government intertwined entity with no regard for
         unprofitable privacy or individualism?
       
          adrianN wrote 3 hours 55 min ago:
          Individuals are unlikely to be able to do something about the
          centralization problem except vote for politicians that want to
          implement countermeasures. I don’t know of any politicians (with a
          chance to win anything) that have that on their agenda.
       
            teiferer wrote 1 hour 48 min ago:
            There is a crucial step between having an opinion and voting. It's
             conversations within society. That's what makes democracy
             work and facilitates change. If you only take your opinion,
             isolated from everybody else, and vote from that, there
             isn't much democracy going on and your chance for change is
             slim. It's when broad conversations are happening that
             movements have an impact.
            
             And that step is here on HN. That's why it's very relevant
             to observe that the HN crowd is increasingly happy to
             support a non-free internet, be it walled gardens,
             geofencing, etc.
       
            turtletontine wrote 3 hours 13 min ago:
            That’s called antitrust, and is absolutely a cause you can vote
            for. Some of the Biden administration’s biggest achievements were
            in antitrust, and the head of the FTC for Biden has joined
            Mamdani’s transition team.
       
          card_zero wrote 4 hours 19 min ago:
          We could quibble about the premise.
       
          DANmode wrote 4 hours 21 min ago:
          Learn how to host anything, today.
       
            rurban wrote 3 hours 37 min ago:
             If you host, you are running on my cPanel SW. 70% of the
             internet is doing that. Also a kinda centralized point of
             failure, but I haven't heard of any bugs in the last 14
             years.
       
            randallsquared wrote 3 hours 56 min ago:
            Have you tried that? I gave up on hosting my own email server seven
            or eight years ago, after it became clear that there would be an
            endless fight with various entities to accept my mail. Hosting a
             webserver without the expectation that you'll need some
             high-powered DDoS defense seems naive in the current day,
             and good luck doing that with a server or two.
       
              IgorPartola wrote 3 hours 40 min ago:
               I had never hosted my own email before. It took me roughly
               a day to set it up on a vanilla FreeBSD install running on
               Vultr’s free tier plan, and it has been running
               flawlessly for nearly a year. I did not use AI at all,
               just the FreeBSD, Postfix, and Dovecot handbooks. I do
               have a fair bit of Linux admin and development experience,
               but all in all this has been a weirdly painless
               experience.
              
              If you don’t love this approach, Mail-in-a-box works incredibly
              well even if the author of all the Python code behind it insists
              on using tabs instead of spaces :)
              
              And you can always grab a really good deal from a small hosting
              company, likely with decades of experience in what they do, via
              LowEndBox/LowEndTalk. The deal would likely blow
              AWS/DO/Vultr/Google Cloud out of the water in terms of value. I
              have been snagging deals from there for ages and I lost a virtual
              host twice. Once was a new company that turned out to be shady
              and another was when I rented a VPS in Cairo and a revolution
              broke out. They brought everything back up after a couple of
              months.
              
              For example I just bought a lifetime email hosting system with
              250GB of storage, email, video, full office suite, calendar,
              contacts, and file storage for $75. Configuration here is down to
              setting the DNS records they give you and adding users. Company
              behind it has been around for ages and is one of the best
              regarded in the LET community.
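               
               For reference, the DNS side is typically just a handful of
               records. A hypothetical sketch (domain, host, and selector
               names are placeholders; the DKIM key is elided):
               
                   example.com.                  MX  10 mail.example.com.
                   example.com.                  TXT "v=spf1 mx -all"
                   mail._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=..."
                   _dmarc.example.com.           TXT "v=DMARC1; p=quarantine"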
       
                dmoy wrote 2 hours 21 min ago:
                It's not insurmountable to set up initially.  And when you get
                email denied from whatever org (your lawyer, your mom, some
                random business, whatever), each individual one isn't
                 insurmountable to fix. It does get old after a while.
                
                 It also depends on how much you are emailing, and who.
                 If it's always the same set of known entities, you might
                 be totally fine with self-hosting. Someone else who's
                 regularly emailing a lot of new people or businesses
                 might incur a lot of overhead - costing more in their
                 time than a Fastmail or Protonmail subscription or
                 whatever.
       
            imsurajkadam wrote 4 hours 6 min ago:
             Even if you learn to host, many other services you rely on
             are going to depend on those centralised platforms. If you
             are thinking of hosting every single thing on your own, it
             is going to be more work than you can even imagine, and
             definitely super hard to organise as well.
       
        charcircuit wrote 4 hours 42 min ago:
        >It's ironic because the internet was actually designed for
        decentralisation, a system that governments could use to coordinate
        their response in the event of nuclear war
        
        This is not true. The internet was never designed to withstand nuclear
        war.
       
          bblb wrote 3 hours 45 min ago:
           Perhaps. Perhaps not. But it will survive it. It will survive
           a complete nuclear winter. It's too useful to die, and will be
           one of the first things to be fixed after global annihilation.
           
           But the Internet is not hosting companies or cloud providers.
           The Internet does not care if they don't build their systems
           resiliently enough and let the SPOFs creep in. The Internet
           does its thing and the packets keep flowing. Maybe BGP and DNS
           could use some additional armoring, but there are ways around
           both of them in case of actual emergency.
       
          anonym29 wrote 4 hours 30 min ago:
           ARPANET was literally invented during the Cold War for the
           specific and explicit purpose of networked communications
           resilience for government and military in the event major
           networking hubs went offline due to one or more successful
           nuclear attacks against the United States.
       
            charcircuit wrote 4 hours 21 min ago:
            It literally wasn't. It's an urban myth.
            
            >Bob Taylor initiated the ARPANET project in 1966 to enable
            resource sharing between remote computers.
            
            >The ARPANET was not started to create a Command and Control System
            that would survive a nuclear attack, as many now claim.
            
   URI      [1]: https://en.wikipedia.org/wiki/ARPANET
       
              oidar wrote 4 hours 10 min ago:
               Per interviews, the initial impetus wasn't to withstand a
               nuclear attack - but after it was first set up, it was
               most certainly a major part of the thought process in
               design.
              
   URI        [1]: https://web.archive.org/web/20151104224529/https://www.w...
       
                charcircuit wrote 4 hours 2 min ago:
                >but after it was first set up
                
                Your link is talking about work Baran did before ARPANET was
                created. The timeline doesn't back your point. And when ARPANET
                was created after Baran's work with Rand:
                
                >Wired: The myth of the Arpanet – which still persists – is
                that it was developed to withstand nuclear strikes. That's
                wrong, isn't it?
                
                 >Paul Baran: Yes. Bob Taylor had a couple of computer
                terminals speaking to different machines, and his idea was to
                have some way of having a terminal speak to any of them and
                have a network. That's really the origin of the Arpanet. The
                method used to connect things together was an open issue for a
                time.
       
                  oidar wrote 3 hours 17 min ago:
                  Read the whole article. And peruse the oral history here: [1]
                  - the genesis was most definitely related to the cold war.
                  
                  "A preferred alternative would be to have the ability to
                  withstand a first strike and the capability of returning the
                  damage in kind. This reduces the overwhelming advantage by a
                  first strike, and allows much tighter control over nuclear
                  weapons. This is sometimes called Second Strike Capability."
                  
   URI            [1]: https://ethw.org/Oral-History:Paul_Baran
       
              anonym29 wrote 4 hours 12 min ago:
              The stated research goals are not necessarily the same as the
              strategic funding motivations. The DoD clearly recognized
              packet-switching's survivability and dynamic routing potential
              when the US Air Force funded the invention of networked packet
              switching by Paul Baran six years earlier, in 1960, for which the
              explicit purpose was "nuclear-survivable military
              communications".
              
              There is zero reason to believe ARPA would've funded the work
              were it not for internal military recognition of the utility of
              the underlying technology.
              
              To assume that the project lead was told EVERY motivation of the
              top secret military intelligence committee that was responsible
              for 100% of the funding of the project takes either a special
               kind of naïveté or complete ignorance of
               compartmentalization practices within military R&D and
               procurement.
              
              ARPANET would never have been were it not for ARPA funding, and
              ARPA never would've funded it were it not for the existence of
              packet-switched networking, which itself was invented and funded,
              again, six years before Bob Taylor even entered the picture, for
              the SOLE purpose of "nuclear-survivable military communications".
              
              Consider the following sequence of events:
              
              1. US Air Force desires nuclear-survivable military
              communications, funds Paul Baran's research at RAND
              
              2. Baran proves packet-switching is conceptually viable for
              nuclear-survivable communications
              
               3. His specific implementation doesn't meet rigorous Air
               Force deployment standards (their implementation partner,
               AT&T, refuses - which is entirely to be expected for what
               was then a complex new technology that not a single AT&T
               engineer understood or had ever interacted with during the
               course of their education), but the concept is now proven
               and documented
              
              4. ARPA sees the strategic potential of packet-switched networks
              for the explicit and sole purpose of nuclear-survivable
              communications, and decides to fund a more robust development
              effort
              
              5. They use academic resource-sharing as the development/testing
              environment (lower stakes, work out the kinks, get future
              engineers conceptually familiar with the underlying technology
              paradigms)
              
              6. Researchers, including Bob Taylor, genuinely focus on resource
              sharing because that's what they're told their actual job is,
              even though that's not actually the true purpose of their work
              
               7. Once mature, the technology gets deployed for its
               originally intended strategic purposes (MILNET split-off
               in 1983)
              
               Under this timeline, the sole true reason for ARPA's
               funding of ARPANET is nuclear-survivable military
               communication; Bob Taylor, being the military's R&D pawn,
               is never told that (standard compartmentalization
               practice). Bob Taylor can credibly and honestly state that
               he was tasked with implementing resource
              honestly state that he was tasked with implementing resource
              sharing across academic networks, which is true, but was never
              the actual underlying motivation to fund his research.
              
              ...and the myth of "ARPANET wasn't created for nuclear
              survivability" is born.
       
          chasing0entropy wrote 4 hours 30 min ago:
          Arpanet absolutely was designed to be a physically resilient network
          which could survive the loss of multiple physical switch locations.
       
       