_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
       
       
       COMMENT PAGE FOR:
   URI   Peerweb: Decentralized website hosting via WebTorrent
       
       
        tiku wrote 1 hour 29 min ago:
        Napster.. so what happens if peerweb.lol goes down?
       
        kkfx wrote 1 hour 57 min ago:
         In the past ZeroNet was performant enough to realistically share
         websites, but it's abandoned (ZeroNet Conservancy exists, but no
         active peers seem to exist). This lets a client use a website without
         installing anything, which is nice, but how to get things visible
         initially is, well... a human challenge...
       
        als0 wrote 1 hour 59 min ago:
        Why does every sentence have an emoji?
       
        keepamovin wrote 2 hours 30 min ago:
        I'm glad to see this was not unexpectedly fast to load. Would not want
        to upset those distributed expectations! I wonder if there's a business
        model in selling speed on a robust network that is on average too slow.
         Is there any way to incentivize more nodes through micropayments
        distributed from people who pay for their site to be served faster?
        
        Ultimately I guess the distributed web is felled by economics thus far.
       
        likiiio wrote 2 hours 43 min ago:
         Can sanitization be disabled? I.e. can this be used to access static
        websites as-is?
       
        bawolff wrote 4 hours 53 min ago:
        > Enhanced security with DOMPurify integration!
        
        >  XSS Protection - All HTML sanitized with DOMPurify
        >  Malicious Code Removal - Dangerous tags and attributes filtered
        >  Sandboxed Execution - Sites run in isolated iframe environment
        
         I don't think that makes much sense. You probably just want the iframe
         sandbox rather than removing all JS. Or, ideally, put the torrent hash
         in the subdomain to use the same-origin policy.
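         
         A minimal sketch of that isolation approach (not Peerweb's actual
         code; the gateway origin mentioned below is hypothetical):
         
             // Render a fetched site in a sandboxed iframe. With
             // allow-scripts but without allow-same-origin, the frame
             // gets an opaque origin, so its scripts can't touch the
             // embedding page or its storage.
             function mountSite(infohash: string, html: string): void {
               const frame = document.createElement("iframe");
               frame.setAttribute("sandbox", "allow-scripts");
               frame.srcdoc = html;   // inline the already-fetched HTML
               frame.name = infohash; // purely informational
               document.body.appendChild(frame);
             }
             // Alternative: serve each site from its own hypothetical
             // origin, e.g. https://<infohash>.gateway.example/, so two
             // hosted sites can never share cookies or localStorage.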
       
        kruhft wrote 6 hours 5 min ago:
        This is probably going to be taken down like my site was that used Web
        Torrent.
        
        dropclickpaste.com is for sale. kruhft.at.gmail.com
       
        1vuio0pswjnm7 wrote 7 hours 34 min ago:
        No Javascript [1] /releases/expanded_ass...
        
        If the address is a hash perhaps it could contain a public key
        
   URI  [1]: https://github.com/Omodaka9375/peerweb
   URI  [2]: https://github.com/Omodaka9375/peerweb/releases/expanded_asset...
       
        wackget wrote 7 hours 42 min ago:
        Nice idea. Shame absolutely everything about the website screams AI
        slop.
       
        misir wrote 10 hours 53 min ago:
         I wonder if these colors are a kind of watermark, hardcoded in the
         system instructions. Almost all slopware made with Claude has the
         same color palette. So much for a random token generator being this
         consistent.
       
          orbital-decay wrote 8 hours 47 min ago:
           [1] Ask any modern (post-GPT-2) LLM about a random color/name/city
          repeatedly a few dozen times, and you'll see it's not that random.
          You can influence this with a prompt, obviously, but if the prompt
          stays the same each time, the output is always very similar despite
          the existence of thousands of valid alternatives. Which is the case
          for any vibecoded thing that doesn't specify the color palette, in
          particular.
          
           This effect is largely responsible for slop (as in annoying
           stereotypes). It's fixable in principle, but there's very little
           research, and I don't see the big AI shops caring enough.
          
   URI    [1]: https://en.wikipedia.org/wiki/Mode_collapse
       
          karanSF wrote 10 hours 22 min ago:
          Emojis on every line are an AI tell. The times I do use AI (shhhh...)
          I always remove them and tweak the language a bit.
       
            rudhdb773b wrote 6 hours 53 min ago:
            Isn't it mostly ChatGPT that does that?
            
            Grok almost never uses emojis.
       
            netule wrote 9 hours 30 min ago:
            Before LLMs became big, I used emojis in my PRs and merge requests
            for fun and to break up the monotony a bit. Now I avoid them, lest
            I be accused of being a bot.
       
          IhateAI wrote 10 hours 48 min ago:
           Yep, and I refuse to use sites that look like this. Lovable-built
           frontends/landing pages have a similar feel. Instant loss of trust
           and of any desire to try it out.
       
            bawolff wrote 4 hours 57 min ago:
             It's interesting - AI has a certain style. You can see it in
            pictures and even text content. It does instantly get my guard up.
       
            j45 wrote 9 hours 26 min ago:
             That's interesting - do you think it's because it's familiar to
             you?
             
             Would it be the case for folks who have no idea what Lovable is?
             
             A familiar UI is similar to what Tailwind or Bootstrap offers; do
             they do something different to keep it fresh?
             
             Average internet users/consumers are likely used to the default
             Shopify checkout.
       
              IhateAI wrote 8 hours 18 min ago:
               It's probably more of a "me" problem, but I'm sure there are
               plenty of others who share my sentiment. It doesn't really have
               anything to do with it being familiar; familiar can be good.
               What I'm talking about is a familiar ugliness and lack of
               intention.
              
              The Stripe or Shopify checkout is familiar, but it only became
              familiar because it was well designed and people wanted to keep
              using it.
              
               Also, when it's obvious someone used an LLM, it bleeds into my
               overall opinion of the product, whether the product is good or
               not. I assume less effort was put into the project, which is
               probably a fair assumption.
       
        DJBunnies wrote 10 hours 55 min ago:
        Every time I try these they never work, including this one.
        
        I’m not sure what the value prop is over just using a torrent client?
        
        Maybe when they’re less buggy they’ll become a thing.
       
          bawolff wrote 4 hours 59 min ago:
           If it actually worked I could certainly see the value prop of not
          making users download a separate program. Generally downloading a
          separate program is a pretty big ask.
       
          Sephr wrote 8 hours 46 min ago:
          I'm planning to eventually launch an open source platform with the
          same name (peerweb.com) that I hope will be vastly more usable, with
          a distributed anti-abuse protocol, automatic asset distribution
          prioritization for highly-requested files, streaming UGC APIs (e.g.
          start uploading a video and immediately get a working sharable link
          before upload completion), proper integration with site URLs (no ugly
          uuids etc. visible or required in your site URLs), and adjustable
          latency thresholds to failover to normal CDNs whenever peers take too
          long to respond.
          
          I put the project on hiatus years ago but I'm starting it back up
          soon! My project is not vibe coded and has thus far been manually
          architected with a deep consideration for both user and site owner
          expectations in the web ecosystem.
       
        bricss wrote 10 hours 59 min ago:
        Somebody has to revive Nullsoft WASTE p2p from 2003 tho
       
        fooker wrote 11 hours 9 min ago:
        What do you all think of the chances that we have decentralized AI
        infrastructure like this at some point?
       
        littlecranky67 wrote 12 hours 53 min ago:
         Cool. Some people complained about broken demos, so I uploaded the
         mdwiki.info website [2] unaltered and it seems to work fine [1].
         MDwiki is a single .html file that fetches custom markdown via AJAX,
         relative to the html file, and renders it via JavaScript.
        
   URI  [1]: https://peerweb.lol/?orc=b549f37bb4519d1abd2952483610b8078e6e5...
   URI  [2]: https://dynalon.github.io/mdwiki/
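         
         A rough sketch of that single-file pattern (not MDwiki's actual code;
         renderMarkdown stands in for whatever renderer the page bundles):
         
             // One index.html whose script fetches a markdown page
             // relative to its own URL and renders it client-side.
             declare function renderMarkdown(md: string): string; // stand-in
             
             async function loadPage(page = "index.md"): Promise<void> {
               // A relative fetch resolves against the .html file's own
               // location, so the site works wherever that file is hosted.
               const res = await fetch(page);
               const markdown = await res.text();
               document.getElementById("content")!.innerHTML =
                 renderMarkdown(markdown);
             }
             
             loadPage(location.hash ? location.hash.slice(1) : "index.md");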
       
          Timwi wrote 11 hours 10 min ago:
          Why is it called MDwiki? It's clearly not a wiki.
       
            littlecranky67 wrote 3 hours 55 min ago:
            The idea is to host it on github, and people send changes to the
            content via pull requests (vs. editing like in wikipedia). There is
            no backend, just plain files.
       
            jmercouris wrote 10 hours 25 min ago:
            Sure, in a sense, but “wiki” actually just means “quick”.
       
        Uptrenda wrote 13 hours 5 min ago:
         I feel like if it were combined with federated caching servers it
         would actually work. Then you would have persistence, and the P2P
         part would help take load off popular content. There are now P2P
         databases that seem to operate this way, combining the best of both
         worlds.
       
        rickcarlino wrote 13 hours 22 min ago:
        Similar project I vibe coded a few weeks ago: "Gnutella/Limewire but
        WebRTC". [1] It works, though probably needs some cleanup and security
        review before being used seriously (thus no running public instance).
        
   URI  [1]: https://github.com/RickCarlino/hazelhop
       
        dpweb wrote 13 hours 24 min ago:
        Useless if it takes > 5 sec. to load a page
       
          TuringTest wrote 11 hours 47 min ago:
          You never lived the 90's
       
            alfiedotwtf wrote 7 hours 42 min ago:
            lol.
            
            Not only did it take > 5 seconds to load a page, images were
            progressively loaded as fast as two at a time over the next minute
            or so - if there were no errors during transfer!
       
        journal wrote 13 hours 34 min ago:
         I wish stuff like this was more like double-click, agree, and use.
         They always make it complicated, to the point where you're spending
         time trying to figure out whether you should continue spending more
         time on it.
       
        logicallee wrote 13 hours 39 min ago:
         I tried this; the "Functionality test page" is stuck on "Loading peer
         web site... connecting to peers". I can't load any website from this.
        
   URI  [1]: https://imgur.com/gallery/loaidng-peerweb-site-uICLGhK
       
          davidcollantes wrote 13 hours 30 min ago:
           Yes, none work for me. They either don't have peers, or the few
           that do are on a very slow network.
       
        SLWW wrote 14 hours 1 min ago:
        I can't imagine that Peerweb has much in the way of stopping certain
        types of material from being uploaded.
       
          j45 wrote 9 hours 30 min ago:
           Smaller sites likely have a smaller footprint
       
          b00ty4breakfast wrote 12 hours 16 min ago:
          you can't stop someone from verbally describing certain objectionable
          material, therefore we should regulate the medium thru which sound
          travels and suck up all the oxygen on the planet.  it's the only way
          to save the children
       
        cyrusradfar wrote 14 hours 13 min ago:
        OT: Can someone vibe-code Geocities back to life?
       
          800xl wrote 13 hours 53 min ago:
          Check out neocities.org
       
            cyrusradfar wrote 12 hours 31 min ago:
            you made my life. Thank you life long internet friend.
       
          AreShoesFeet000 wrote 14 hours 4 min ago:
          give me the tokens.
       
          ipaddr wrote 14 hours 9 min ago:
          That would take forever.  If you can get the domain I'll hand code it
          in perl.
       
            awesome_dude wrote 13 hours 16 min ago:
            Neat!!
       
        gnarbarian wrote 14 hours 15 min ago:
         Love this. I've been working on something similar for months now [1]:
         it's a GPGPU, decentralized, heterogeneous HPC P2P compute platform
         that runs in the browser.
        
   URI  [1]: https://metaversejs.github.io/peercompute/
       
        dcreater wrote 14 hours 27 min ago:
         Good, important idea. Unfortunately, a bad, low-effort, vibe-coded
         execution.
       
          j45 wrote 9 hours 25 min ago:
          Still a shipped idea, driven by someone.  The author has some other
          interesting ideas.
       
        dana321 wrote 14 hours 29 min ago:
        None of the demo sites work for me.
        
        Probably needs more testing and debugging.
       
        BrouteMinou wrote 14 hours 30 min ago:
        Nice, I clicked on the first demo, and I got stuck at connecting with
        peers.
        
        I like the idea though.
       
        kamranjon wrote 14 hours 33 min ago:
        I think one of the values of (what appears to be) AI generated projects
        like this is that they can make me aware of the underlying technology
        that I might not have heard about - for example WebTorrent: [1] Pretty
        cool! Not sure what this offers over WebTorrent itself, but I was happy
        to learn about its existence.
        
   URI  [1]: https://webtorrent.io/faq
       
        mcjiggerlog wrote 14 hours 46 min ago:
        This is cool - I actually worked on something similar way back in the
        day: [1] . It avoided the need to have any kind of intermediary website
        entirely.
        
        The cool thing was it worked at the browser level using experimental
        libdweb support, though that has unfortunately since been abandoned.
        You could literally load URLs like wtp://tomjwatson.com/blog directly
        in your browser.
        
   URI  [1]: https://github.com/tom-james-watson/wtp-ext
       
          astrobe_ wrote 4 hours 10 min ago:
          What were your plans for advertising website updates? Classic RSS
          feed or something else?
       
            mcjiggerlog wrote 3 hours 56 min ago:
            At the time there was a bit of momentum behind the idea of mutable
            torrents:
            
   URI      [1]: https://torrentfreak.com/mutable-torrents-proposal-makes-b...
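             
             For context, the rough shape of that proposal (BEP 46): the magnet
             link carries a public key instead of a fixed infohash, and the DHT
             maps that key to the current infohash plus a version counter,
             accepting only updates signed by the matching private key. A
             sketch, not real client code:
             
                 // Mutable-torrent magnet: the urn carries a public key,
                 // not an infohash (the hex below is a placeholder).
                 const mutableMagnet =
                   "magnet:?xs=urn:btpk:" + "<64-hex-public-key>";
                 
                 // What the DHT hands back for that key, conceptually:
                 interface MutablePointer {
                   seq: number;       // version counter; higher wins
                   infohash: string;  // latest snapshot of the site
                   sig: string;       // made by the publisher's key
                 }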
       
        j45 wrote 14 hours 59 min ago:
        In its own reimagined way from what’s possible in 2026, this could
        kick off a new kind of geocities.
       
        xd1936 wrote 15 hours 5 min ago:
        Fun! I wish WebTorrent had caught on more. I've always thought it had a
        worthy place in the modern P2P conversation.
        
        In 2020, I messed around with a PoC for what hosting and distributing
        Linux distros could look like using WebTorrent[1]. The protocol project
        as a whole has a lovely and brilliant design but has stayed mostly
        stagnant in recent years. There are only a couple of WebRTC-enabled
        torrent trackers that have remained active and stable.
        
   URI  [1]: https://github.com/leoherzog/LinuxExchange
       
          bluedino wrote 12 hours 35 min ago:
          Was there ever a web-based Jigdo?
       
          r14c wrote 13 hours 42 min ago:
           I think the issue has generally been that WebTorrent doesn't work
           enough like the real thing to do its job properly. There are huge
           BitTorrent-based streaming media networks out there, illicit, sure,
           but it's a proven technology. If browsers had real torrent clients
           we would be having a very different conversation, imo.
           
           I don't remember the WebTorrent issue numbers off the top of my
           head, but there are a number of long-standing issues that seem
           blocked on WebRTC limitations.
       
            1vuio0pswjnm7 wrote 7 hours 48 min ago:
            "If browsers had real torrent clients we would be having a very
            different conversation imo"
            
            The elinks text-only browser has a "real" torrent client
       
            embedding-shape wrote 13 hours 11 min ago:
            I think we still have the same blocker as we had back when
            WebTorrent first appeared; browsers cannot be real torrent clients
            and open connections without some initial routing for the
            discovery, and they cannot open bi-directional unordered
            connections between two browsers.
            
             If we could, say, do peer discovery via Bluetooth and open sockets
             directly from a browser page, we could in theory have local-first
             websites running in the browser that make P2P connections straight
             between browsers.
       
              miki123211 wrote 5 hours 35 min ago:
               Could you run some kind of hybrid DHT where part of it was
               WebRTC and part plain HTTP(S)/WebSocket?
              
               There are some nodes (desktop clients with UPnP, dedicated
               servers) that can accept browser connections. Those nodes could
               then help you exchange offers/answers to give you connections
               with the WebRTC-only ones, and those could facilitate
               offer/answer exchanges with their peers in turn.
              
              It'd be dog-slow compared to the single-udp-packet-in,
              single-udp-packet-out philosophy of traditional mainline DHT, but
              I don't see why the idea couldn't work in principle.
              
              I think a much bigger problem is content discovery and update
              distribution. You can't really do decentralized search because
              it'd very quickly get sybil-attacked to death. You'd always need
              some kind of centralized, trusted content index, but not
              necessarily one hosted on a centralized server. If you could have
              a reliable way to go from a pubkey to the latest hash signed by
               that pubkey in a decentralized way, plus e.g. a SQLite extension to
              get pages on-demand via WebTorrent, that would get you a long way
              towards solving the problem.
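               
               A rough sketch of what that pubkey-to-latest-hash check could
               look like on the client, using WebCrypto's ECDSA/P-256 as a
               stand-in scheme; the record layout and field names are made up:
               
                   // Hypothetical record published under a site's key:
                   interface SignedPointer {
                     seq: number;         // higher seq wins
                     contentHash: string; // latest published hash
                     sig: ArrayBuffer;    // signature by the site key
                   }
                   
                   // Any peer can relay the record; the signature is
                   // what makes it trustworthy.
                   async function verifyPointer(
                     rawPubKey: ArrayBuffer,  // P-256 point from the URL
                     rec: SignedPointer,
                   ): Promise<boolean> {
                     const key = await crypto.subtle.importKey(
                       "raw", rawPubKey,
                       { name: "ECDSA", namedCurve: "P-256" },
                       false, ["verify"]);
                     const payload = new TextEncoder()
                       .encode(`${rec.seq}:${rec.contentHash}`);
                     return crypto.subtle.verify(
                       { name: "ECDSA", hash: "SHA-256" },
                       key, rec.sig, payload);
                   }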
       
                namibj wrote 5 hours 9 min ago:
                 What you're asking for exists; it updates through a version
                 counter. It just works on the mainline DHT, btw.
       
              Seattle3503 wrote 11 hours 6 min ago:
              If a tracker could be connected to via WebRTC and had additional
              STUN functionality, would that suffice? Are there additional
              WebRTC limitations?
              
              > they cannot open bi-directional unordered connections between
              two browsers.
              
              Last I checked, DataChannels were bidirectional
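               
               DataChannels are indeed bidirectional, and a single channel can
               also be configured unordered and unreliable, which is closer to
               what torrent-style transfers want. The remaining gap is
               signaling: the offer/answer still has to travel over some side
               channel, e.g. a tracker reachable over WebSocket. A tiny
               illustration (handlePiece and requestPiece are stand-ins):
               
                   // Stand-ins for the torrent layer:
                   declare function handlePiece(data: unknown): void;
                   declare function requestPiece(): ArrayBuffer;
                   
                   const pc = new RTCPeerConnection();
                   // Unordered, no retransmits: UDP-like semantics, so a
                   // lost piece doesn't block later ones; the torrent
                   // layer re-requests instead.
                   const ch = pc.createDataChannel("pieces", {
                     ordered: false,
                     maxRetransmits: 0,
                   });
                   ch.onmessage = (ev) => handlePiece(ev.data); // receive
                   ch.onopen = () => ch.send(requestPiece());   // send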
       
                embedding-shape wrote 10 hours 53 min ago:
                Yes, but it's STUN that sucks. If the software ships with a
                public (on the internet) relay/STUN server for connecting the
                two clients, it won't work if either aren't connected to the
                internet, even though the clients could still be on the same
                network and reach each other.
       
                  westurner wrote 7 hours 3 min ago:
                  /? STUN: [1] There is a Native Sockets spec draft that only
                  Chrome implements;
                  
                  "Direct Sockets API": [2] :
                  
                  > The Direct Sockets API addresses this limitation by
                  enabling Isolated Web Apps (IWAs) to establish direct TCP and
                  UDP connections without a relay server. With IWAs, thanks to
                  additional security measures—such as strict Content
                  Security Policy (CSP) and cross-origin isolation— this API
                  can be safely exposed.
                  
                   Though there's UPnP XML, it lacks auth for port-forwarding
                   permissions. There's also IPv6.
                  
                  Similar: "Breaking the QR Limit: The Discovery of a
                  Serverless WebRTC Protocol – Magarcia" [3] re: Quick Share,
                  Wi-Fi Direct, Wi-Fi Aware, BLE Beacons, BSSIDs and the
                  Geolocation API
                  
   URI            [1]: https://hn.algolia.com/?dateRange=all&page=0&prefix=...
   URI            [2]: https://developer.chrome.com/docs/iwa/direct-sockets
   URI            [3]: https://news.ycombinator.com/item?id=46829296
       
                  jychang wrote 10 hours 45 min ago:
                   That seems like a non-issue for the purposes of this
                   discussion, though, in terms of user uptake. TikTok,
                   Facebook, and other websites aren't exactly focused on
                   serving people on the same network.
       
          cranberryturkey wrote 15 hours 1 min ago:
          
          
   URI    [1]: http://bittorrented.com
       
            xd1936 wrote 13 hours 57 min ago:
            Oh wow
       
        sroerick wrote 15 hours 11 min ago:
        This is pretty interesting!
        
         I think serving video is a particularly interesting use of WebTorrent.
         I think it would be good if you could add this as a front end to
         basically make sites DDoS-proof. So you host a regular site as usual,
         but with a JS front end that serves the site P2P the more traffic
         there is.
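         
         A rough sketch of that hybrid front end using the WebTorrent browser
         library: try the swarm first and fall back to the normal origin if no
         peer answers in time. The magnet URI, timeout, and asset-lookup
         details are made up:
         
             import WebTorrent from "webtorrent";
             
             const client = new WebTorrent();
             const SITE_MAGNET = "magnet:?xt=urn:btih:<site-infohash>";
             
             // Join the swarm once; every visitor also seeds the site back.
             const swarm: Promise<any> =
               new Promise((res) => client.add(SITE_MAGNET, res));
             
             async function loadAsset(path: string, timeoutMs = 4000) {
               const fromOrigin = () => fetch(path).then((r) => r.blob());
               const fromSwarm = swarm.then((torrent) =>
                 new Promise<Blob>((resolve, reject) => {
                   const file = torrent.files
                     .find((f: any) => f.path.endsWith(path));
                   if (!file) return reject(new Error("not in torrent"));
                   file.getBlob((err: Error | null, blob?: Blob) =>
                     err || !blob ? reject(err) : resolve(blob));
                 }));
               const slow = new Promise<Blob>((_, rej) =>
                 setTimeout(() => rej(new Error("swarm too slow")),
                            timeoutMs));
               // Whichever answers first wins; origin is the safety net.
               return Promise.race([fromSwarm, slow]).catch(fromOrigin);
             }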
       
          stanac wrote 14 hours 27 min ago:
          There is PeerTube for video content.
       
          NewsaHackO wrote 14 hours 31 min ago:
          I think it is very difficult (and dangerous to the host) to serve
          user-uploaded videos at scale, particularly from a moderation
          standpoint. The problem is even worse if everyone is anonymous. There
          is a reason YouTube has such a monopoly on personal video hosting.
          Maybe developments in AI moderation will make it more palatable in
          the future.
       
            t-3 wrote 7 hours 48 min ago:
            The "host" is the user in this case. Every user that watches the
            video, shares the video. Given that discovery doesn't appear to be
            a part of this platform, any links would undoubtedly be shared
            "peer-to-peer" as well, so if you aren't looking at illegal things
            and don't have friends sending you illegal things to watch, it's
            perfectly safe.
       
              lgats wrote 1 hour 59 min ago:
              webtorrent!
       
        turtleyacht wrote 27 days ago:
        Github:
        
   URI  [1]: https://github.com/omodaka9375/peerweb
       
          dang wrote 15 hours 19 min ago:
           Thanks! We'll put that link in the toptext.
       
        elbci wrote 27 days ago:
        I don't get it, I upload my files to your site, then I send my friends
        links to your site? How is this not a single point of failure?
       
          toomuchtodo wrote 15 hours 11 min ago:
           IPFS [1] unfortunately requires a gateway (whether remote or running
           locally). If you can use content identifiers that are supported by
           web primitives, you get the distributed nature without the IPFS
           scaffolding. Content is versioned by hash, although I haven't looked
           to see if mutable torrents [2] [3] are used in this implementation.
           Searching via distributed hash tables for torrent metadata,
           cryptographically signed by the publisher, remains a requirement
           imho.
           
           BitTorrent, in my experience, "just works," whether you're relying
           on a torrent server or a magnet link to join a swarm and retrieve
           data. So, this is an interesting experiment in the IPFS, torrent,
           Filecoin distributed content space.
          
   URI    [1]: https://ipfs.tech/
   URI    [2]: https://news.ycombinator.com/item?id=29920271
   URI    [3]: https://www.bittorrent.org/beps/bep_0046.html
       
            amelius wrote 11 hours 16 min ago:
             You don't hear much about IPFS these days, but I remember that one
             big problem with it was illegal content and how to deal with it.
       
          dang wrote 15 hours 17 min ago:
          [sorry for the weird timestamps - the OP was submitted a while ago
          and I just re-upped it.]
       
            logicallee wrote 13 hours 36 min ago:
             Did the test sites work for you when you tried it? Because none
             worked for me, nor for at least two other commenters here. [1] [2]
            
   URI      [1]: https://news.ycombinator.com/item?id=46830158
   URI      [2]: https://news.ycombinator.com/item?id=46830183
       
          dtj1123 wrote 27 days ago:
          This isn't my site, nor do I have any opinions on the implementation
          here. I do however find the idea of serving web pages via torrent
          interesting.
       
            elbci wrote 26 days ago:
            p2p storage as in torrent or IPFS or whatever is the part that we
            kinda' solved already. Serving/searching/addressing without the
            (centralized) DNS is still missing for a (urgently needed) p2p
            censorship resistant internet. Unfortunately this guy just uses
            some buzzwords to offer nothing new - why would I share links to
            that site instead of sharing torrent magnet links?
       
              recursivegirth wrote 14 hours 19 min ago:
              Thinking about this a little bit... could we use a blockchain
              ledger as an authoritative source for DNS records?
              
               Users can publish their DNS + pub key to the append-only
               blockchain, signed with their private key.
              
              Use a torrent file to connect to an initial tracker to download
              the blockchain.
              
              Once the blockchain is downloaded, every computer would have a
              full copy of the DNS database and could use that for
              discoverability.
              
              I have no experience with blockchains or building trackers, so
              maybe this is a dumb idea.
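               
               Purely as an illustration of that shape (signature checks and
               consensus entirely omitted): an append-only chain of signed name
               records that clients could fetch in bulk, e.g. over BitTorrent,
               and then query locally:
               
                   interface NameRecord {
                     name: string;     // e.g. "example.peer"
                     pubKey: string;   // later updates must match it
                     target: string;   // infohash the name resolves to
                     sig: string;      // signature by that key
                   }
                   
                   interface Block {
                     prevHash: string;       // hash of the previous block
                     records: NameRecord[];
                     hash: string;           // hash of (prevHash, records)
                   }
                   
                   // Local resolution over the downloaded chain; the last
                   // record for a name wins (signature checks omitted).
                   function resolveName(chain: Block[], name: string) {
                     return chain.flatMap((b) => b.records)
                       .filter((r) => r.name === name)
                       .at(-1)?.target;
                   }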
       
                soulofmischief wrote 13 hours 40 min ago:
                Look into IPFS and ENS.
       
                theendisney wrote 13 hours 44 min ago:
                 It's been tried/done, but it attracted the same audience of
                 investors looking to make a quick buck as opposed to looking
                 to actually make it work.
                 
                 From what I've seen, you need some minimum percentage of
                 make-it-happen-ers among those interested in a project.
                 
                 It seems the guy running the extension just left, with minimal
                 influence on the value. [1]
                
   URI          [1]: https://addons.mozilla.org/en-US/firefox/addon/b-dns/
   URI          [2]: https://www.coinbase.com/en-nl/price/namecoin
       
              sroerick wrote 15 hours 10 min ago:
              This is a great point.
              
              One issue I've had with IPFS is that there's nothing baked into
              the protocol to maintain peer health, which really limits the
              ability to keep the swarm connected and healthy.
       
                theendisney wrote 13 hours 1 min ago:
                 I used to add webseeds, but clients seem to love just
                 downloading from there rather than from my conventional
                 seeding.
                
                Some new ideas are needed in this space.
       
       