_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Linear sent me down a local-first rabbit hole
       
       
        mizzao wrote 9 min ago:
        Is this technical architecture so different from Meteor back in the
        day? Just curious for those who have a deeper understanding.
       
        antgiant wrote 13 min ago:
        I’ve been working on a small browser app that is local first and have
        been trying to figure out how to pair it with static hosting. It feels
        like this should be possible but so far the tooling all seems stuck in
        the mindset of having a server somewhere.
        
         My use case is scoring live events that may or may not have an Internet
         connection. So normal usage is a single person, but sometimes it would
        be nice to allow for multi person scoring without relying on
        centralized infrastructure.
       
        10us wrote 1 hour 9 min ago:
         Man, why aren't CouchDB / PouchDB listed? They still work like a charm!
       
        qweiopqweiop wrote 1 hour 48 min ago:
         It's starting to feel to me that a lot of tech is just converging on
         other platforms' solutions. This, for example, sounds incredibly similar
        to how a mobile app works (on the surface). Of course it goes the other
        way too, with mobile tech taking declarative UIs from the Web.
       
        croes wrote 2 hours 52 min ago:
        But how is conflicting data handled?
        
         For instance, one user closes something while another aborts the same thing.
       
        b_e_n_t_o_n wrote 3 hours 3 min ago:
        Local first is super interesting and absolutely needed - I think most
        of the bugs I run into with web apps have to do with sync, exacerbated
        by poor internet connectivity. The local properties don't interest me
         as much as request ordering and explicit transactions. You aren't
         guaranteed that requests resolve in order, which can result in a lot
         of inconsistencies. These local-first sync abstractions are a bit like
        bringing a bazooka to a water gun fight - it would be interesting to
        see some halfway approaches to this problem.
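         A sketch of one such halfway approach: keep the server authoritative,
         but serialize mutations per entity so responses can never apply out of
         order. Everything here (the class, the send callback) is illustrative,
         not taken from any particular library:

```typescript
// Serialize mutations per entity: each new request waits for the
// previous one on the same key, so responses always apply in order.
type Send = (payload: string) => Promise<string>;

class MutationQueue {
  private tails = new Map<string, Promise<unknown>>();

  constructor(private send: Send) {}

  enqueue(entityId: string, payload: string): Promise<string> {
    const prev = this.tails.get(entityId) ?? Promise.resolve();
    // Chain onto the previous mutation; swallow its failure so the
    // queue keeps draining even after an error.
    const next = prev.catch(() => {}).then(() => this.send(payload));
    this.tails.set(entityId, next);
    return next;
  }
}
```

         Without the queue, two rapid edits to the same issue could resolve in
         reverse order; with it, the second request is only sent once the first
         settles.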
       
        madisvain wrote 3 hours 22 min ago:
        Local first is amazing. I have been building a local first application
        for Invoicing since 2020 called Upcount [1] .
        
         First I used PouchDB, which is also awesome [2], but have now switched to
        SQLite and Turso [3] which seems to fit my needs much better.
        
   URI  [1]: https://www.upcount.app/
   URI  [2]: https://pouchdb.com/
   URI  [3]: https://turso.tech/
       
        blixt wrote 3 hours 28 min ago:
        I've been very impressed by Jazz -- it enables great DX (you're mostly
        writing sync, imperative code) and great UX (everything feels instant,
        you can work offline, etc).
        
        Main problems I have are related to distribution and longevity -- as
        the article mentions, it only grows in data (which is not a big deal if
        most clients don't have to see that), and another thing I think is more
        important is that it's lacking good solutions for public indexes that
        change very often (you can in theory have a public readable list of
        ids). However, I recently spoke with Anselm, who said these things have
        solutions in the works.
        
        All in all local-first benefits often come with a lot of costs that are
        not critical to most use cases (such as the need for much more state).
        But if Jazz figures out the main weaknesses it has compared to
        traditional central server solutions, it's basically a very good
        replacement for something like Firebase's Firestore in just about every
        regard.
       
        yanis_t wrote 3 hours 35 min ago:
        I don't get it. You still have to sync the state one way or another,
        network latency is still there.
       
          WickyNilliams wrote 2 hours 27 min ago:
          The latency is off the critical path with local first. You sync
          changes over the network sure, but your local mutations are stored
          directly and immediately in a local DB.
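           In code, that split looks roughly like this (a toy in-memory store
           stands in for the local DB; syncToServer is a placeholder):

```typescript
// Mutations hit the local store synchronously; the network sync runs
// later, off the critical path, so the UI never waits on it.
interface Mutation { id: string; field: string; value: string }

class LocalFirstStore {
  private rows = new Map<string, Record<string, string>>();
  private pending: Mutation[] = [];

  constructor(private syncToServer: (batch: Mutation[]) => Promise<void>) {}

  // Synchronous: the caller sees the change instantly.
  apply(m: Mutation): void {
    const row = this.rows.get(m.id) ?? {};
    row[m.field] = m.value;
    this.rows.set(m.id, row);
    this.pending.push(m);
  }

  read(id: string): Record<string, string> | undefined {
    return this.rows.get(id);
  }

  // Called later, e.g. on an interval or when connectivity returns.
  async flush(): Promise<void> {
    const batch = this.pending;
    this.pending = [];
    await this.syncToServer(batch);
  }
}
```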
       
          croes wrote 2 hours 54 min ago:
          But the user gets instant results
       
          Aldipower wrote 3 hours 7 min ago:
           Me neither. Considering we are talking about collaborative network
           applications, you are losing the single source of truth (the server
           database) with the local-first approach. And it just adds so much
           more complexity. Also, as your app grows, you probably end up
           implementing the business logic twice: on the server and locally. I
           really do not get it.
       
            jitl wrote 1 hour 1 min ago:
            You can use the same business logic code on both the client and
            server.
            
            With the Linear approach, the server remains the source of truth.
       
        minikomi wrote 3 hours 37 min ago:
         My kingdom for a team organised by org-mode files in a git repo
       
        preaching5271 wrote 3 hours 43 min ago:
        Automerge + Keyhive is the future
        
   URI  [1]: https://www.inkandswitch.com/project/keyhive/
       
        Cassandra99 wrote 3 hours 49 min ago:
        I developed an open-source task management software based on CRDT with
        a local-first approach. The motivation was that I primarily manage
        personal tasks without needing collaboration features, and tools like
        Linear are overly complex for my use case.
        
        This architecture offers several advantages:
        
        1. Data is stored locally, resulting in extremely fast software
        response times
        2. Supports convenient full database export and import
        3. Server-side logic is lightweight, requiring minimal performance
        overhead and development complexity, with all business logic
        implemented on the client
        4. Simplified feature development, requiring only local logic
        operations
        
        There are also some limitations:
        
        1. Only suitable for text data storage; object storage services are
        recommended for images and large files
        2. Synchronization-related code requires extra caution in development,
        as bugs could have serious consequences
        3. Implementing collaborative features with end-to-end encryption is
        relatively complex
        
        The technical architecture is designed as follows:
        
        1. Built on the Loro CRDT open-source library, allowing me to focus on
        business logic development
        
        2. Data processing flow:
        User operations trigger CRDT model updates, which export JSON state to
        update the UI. Simultaneously, data is written to the local database
        and synchronized with the server.
        
        3. The local storage layer is abstracted through three unified
        interfaces (list, save, read), using platform-appropriate storage
        solutions: IndexedDB for browsers, file system for Electron desktop,
        and Capacitor Filesystem for iOS and Android.
        
        4. Implemented end-to-end encryption and incremental synchronization.
        Before syncing, the system calculates differences based on server and
        client versions, encrypts data using AES before uploading. The server
        maintains a base version with its content and incremental patches
        between versions. When accumulated patches reach a certain size, the
        system uploads an encrypted full database as the new base version,
        keeping subsequent patches lightweight.
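         As a sketch, the three-interface storage layer in point 3 might look
         like this in TypeScript (the interface names follow the comment; the
         in-memory backend and byte-array payloads are my assumptions):

```typescript
// The three unified operations described above; each platform
// (IndexedDB, Electron's fs, Capacitor Filesystem) supplies its own
// implementation behind this interface.
interface DocStorage {
  list(): Promise<string[]>;
  save(key: string, bytes: Uint8Array): Promise<void>;
  read(key: string): Promise<Uint8Array | null>;
}

// In-memory backend, handy for tests.
class MemoryStorage implements DocStorage {
  private docs = new Map<string, Uint8Array>();

  async list(): Promise<string[]> {
    return [...this.docs.keys()];
  }

  async save(key: string, bytes: Uint8Array): Promise<void> {
    this.docs.set(key, bytes);
  }

  async read(key: string): Promise<Uint8Array | null> {
    return this.docs.get(key) ?? null;
  }
}
```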
        
        If you're interested in this project, please visit
        
   URI  [1]: https://github.com/hamsterbase/tasks
       
        mentalgear wrote 4 hours 44 min ago:
        Local-First & Sync-Engines are the future. Here's a great filterable
        datatable overview of the local-first framework landscape: [1] My
         favorite so far is Triplit.dev (which can also be combined with
         TanStack DB); two more I'd like to explore are PowerSync and NextGraph.
        Also, the recent LocalFirst Conf has some great videos, currently
        watching the NextGraph one ( [2] ).
        
   URI  [1]: https://www.localfirst.fm/landscape
   URI  [2]: https://www.youtube.com/watch?v=gaadDmZWIzE
       
        terencege wrote 4 hours 45 min ago:
         I'm also building a local-first editor and rolling my own CRDTs. There
         are enormous challenges to making it work. For example, for the storage
         size issue mentioned in the blog, I ended up using yjs' approach, which
         only increases the clock for upserts; for deletions it removes the
         content and keeps only the deleted item ids, which can be efficiently
         compressed since most ids are contiguous.
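         The compression trick works because tombstoned ids tend to be
         contiguous, so a sorted id list collapses into a handful of
         [start, end] ranges. A minimal sketch (not yjs' actual encoding):

```typescript
// Collapse a sorted list of deleted-item ids into [start, end] ranges.
// Contiguous runs - the common case for tombstones - shrink to one pair.
function compressIds(sorted: number[]): [number, number][] {
  const ranges: [number, number][] = [];
  for (const id of sorted) {
    const last = ranges[ranges.length - 1];
    if (last && id === last[1] + 1) {
      last[1] = id; // extend the current run
    } else {
      ranges.push([id, id]); // start a new run
    }
  }
  return ranges;
}
```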
       
          jddj wrote 4 hours 32 min ago:
          In case you missed it and it's relevant, there was an automerge v3
          announcement posted the other day here which claimed some nice
          compression numbers as well
       
            terencege wrote 4 hours 20 min ago:
             As far as I know, automerge uses a DAG history log and garbage
             collects by comparing the version clock heads of two clients. That
             is different from yjs. I have not followed their compression
             approach in v3 yet; I'll check when I have time.
       
        mkarliner wrote 4 hours 47 min ago:
        Meteor was/is a very similar technology. And I did some fairly major
        projects with it.
       
          mentalgear wrote 4 hours 40 min ago:
          Meteor was amazing, I don't understand why it never got sustainable
          traction.
       
            hobofan wrote 16 min ago:
            I think this blog post may provide some insight: [1] Roughly:
            Meteor required too much vertical integration on each part of the
            stack to survive the strongly changing landscape at the time. On
             top of that, a lot of the team's focus shifted to Apollo (which at
            least from a commercial point of view seems to have been a good
            decision).
            
   URI      [1]: https://medium.com/@sachagreif/an-open-letter-to-the-new-o...
       
            thrown-0825 wrote 1 hour 50 min ago:
            Tight coupling to MongoDB, fragmented ecosystem / packages, and
            react came out soon after and kind of stole its lunch money.
            
            It also had some pretty serious performance bottlenecks, especially
            when observing large tables for changes that need to be synced to
            subscribing clients.
            
            I agree though, it was a great framework for its day. Auth
            bootstrapping in particular was absolutely painless.
       
            h4ch1 wrote 1 hour 56 min ago:
             Seems like Meteor is still actively developed and is
             framework-agnostic!
            
   URI      [1]: https://github.com/meteor/meteor
       
            dustingetz wrote 2 hours 22 min ago:
            non-relational, document oriented pubsub architecture based on
            MongoDB, good for not much more than chat apps. For toy apps (in
            2012-2016) – use firebase (also for chat apps), for crud-spectrum
            and enterprise apps - use sql. And then React happened and consumed
            the entire spectrum of frontend architectures, bringing us to
            GraphQL, which didn't, but the hype wave left little oxygen
            remaining for anything else. (Even if it had, still Meteor was not
            better.)
       
        incorrecthorse wrote 5 hours 8 min ago:
        > For the uninitiated, Linear is a project management tool that feels
        impossibly fast. Click an issue, it opens instantly. Update a status
        and watch in a second browser, it updates almost as fast as the source.
         No loading states, no page refreshes - just instant interactions.
        
         How garbage the web has become, when a low-latency click action gets
         qualified as "impossibly fast". This is ridiculous.
       
          presentation wrote 2 min ago:
          Since it’s so easy then I’m rooting for you to make some millions
          with performant replacements for other business tools, should be a
          piece of cake
       
          o_m wrote 45 min ago:
          Back in 2018 I worked for a client that required we used Jira. It was
          so slow that the project manager set everything up in Excel during
          our planning meetings. After the meeting she would manually transfer
          it to Jira. She spent most of her time doing this. Each click in the
          interface took multiple seconds to respond, so it was impossible to
          get into a flow.
       
            esafak wrote 28 min ago:
            Stockholm syndrome
       
            ben_w wrote 29 min ago:
            Hm. While I'm not even remotely excited by Jira (or any other PM
            software), I've never noticed it being that bad. Annoying?
            Absolutely! But not that painfully slow.
            
            Were some extras installed? Or is this one of those tools that
            needs a highly performant network?
       
              davey48016 wrote 19 min ago:
               I've seen on-prem Jira at large companies get that slow. I'm not
              sure if it's the plugins or just the company being stingy on
              hardware.
       
          zwnow wrote 1 hour 0 min ago:
          Web applications have become too big and heavy. Corps want to control
          everything. A simple example would be a simple note taking app which
          apparently also has to sync throughout devices. They are going to
          store every note you take on their servers, who knows if they really
          delete your deleted notes. They'll also track how often you visited
          your notes for whatever reasons. Wouldn't surprise me if the app also
          required geolocation and stuff like that for whatever reason. Mix
          that with lots of users and you will have loading times unheard of
          with small scale apps. Web apps should scale down but like with
          everything we need more more more bigger better faster.
       
          jallmann wrote 1 hour 54 min ago:
          Linear is actually so slow for me that I dread having to go into it
          and do stuff. I don’t care if the ticket takes 500ms to load, just
          give me the ticket and not a fake blinking cursor for 10 seconds or
          random refreshes while it (slowly) tries to re-sync.
          
          Everything I read about Linear screams over-engineering to me. It is
          just a ticket tracker, and a rather painful one to use at that.
          
          This seems to be endemic to the space though, eg Asana tried to
          invent their own language at one point.
       
          andy99 wrote 2 hours 29 min ago:
          I also winced at "impossibly fast" and realize that it must refer to
          some technical perspective that is lost on most users. I'm not a
          front end dev, I use linear, I'd say I didn't notice speed, it seems
          to work about the same as any other web app. I don't doubt it's got
          cool optimizations, but I think they're lost on most people that use
          it. (I don't mean to say optimization isn't cool)
       
            wooque wrote 1 hour 31 min ago:
             Second this. I use Linear as well and I didn't notice anything
             close to "impossibly fast"; it's faster than Jira for sure, but
             nothing spectacular.
       
              dijit wrote 1 hour 0 min ago:
               If you get used to Jira, especially Ubisoft's internally hosted
               Jira (which ran on an oversubscribed 10-year-old server that was
               constantly thrashing and hosted half a world away) ... well, it's
               easy for things to feel "impossibly fast".
              
               In fact, at the Better Software Conference this year, people
               discussed the fact that if you care about performance, users
               think your software didn't actually do the work, because
               they're not used to useful things being snappy.
       
          fleabitdev wrote 2 hours 48 min ago:
          I was also surprised to read this, because Linear has always felt a
          little sluggish to me.
          
          I just profiled it to double-check. On an M4 MacBook Pro, clicking
          between the "Inbox" and "My issues" tabs takes about 100ms to 150ms.
          Opening an issue, or navigating from an issue back to the list of
          issues, takes about 80ms. Each navigation includes one function call
          which blocks the main thread for 50ms - perhaps a React rendering
          function?
          
          Linear has done very good work to optimise away network activity, but
          their performance bottleneck has now moved elsewhere. They've already
          made impressive improvements over the status quo (about 500ms to
          1500ms for most dynamic content), so it would be great to see them
          close that last gap and achieve single-frame responsiveness.
       
            m-s-y wrote 38 min ago:
            150ms is sluggish?
            4000ms is normal?
            
            The comments are absolutely wild in here with respect to
            expectations.
       
              layer8 wrote 11 min ago:
              150 ms is definitely on the “not instantaneous” side: [1] The
              stated 500 ms to 1500 ms are unfortunately quite frequent in
              practice.
              
   URI        [1]: https://ux.stackexchange.com/a/42688
       
          lwansbrough wrote 4 hours 4 min ago:
          Trite remark. The author was referring to behaviour that has nothing
          to do with “how the web has become.”
          
          It is specifically to do with behaviour that is enabled by using
          shared resources (like IndexedDB across multiple tabs), which is not
          simple HTML.
          
           To do something similar over the network, you have until the next
           frame deadline. That’s 8-16ms, round trip. So ~4ms out and ~4ms
           back, with ~0ms budget for processing. Good luck!
       
          jitl wrote 5 hours 5 min ago:
          A web request to a data center even with a very fast backend server
          will struggle to beat 8ms (120hz display) or even 16ms (60hz
          display), the budget for next frame painting a navigation. You need
          to have the data local to the device and ideally already in memory to
          hit 8ms navigation.
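           The budget arithmetic is simple enough to spell out:

```typescript
// Milliseconds available per frame at a given display refresh rate.
const frameBudgetMs = (hz: number): number => 1000 / hz;

// 120 Hz leaves ~8.3 ms and 60 Hz ~16.7 ms; a round trip plus render
// must fit inside that window for next-frame navigation.
```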
       
            delusional wrote 2 hours 17 min ago:
            I can't help but feel this is missing the point. Ideally, next
            refresh click latency is a fantastic goal, we're just not even
            close to that.
            
            For me, on the web today, the click feedback for a large website
            like YouTube is 2 seconds for first change and 4 seconds for
            content display. 4000 milliseconds. I'm not even on some bad
             connection in Africa. This is a gigabit connection with 12ms of
            latency according to fast.com.
            
            If you can bring that down to even 200ms, that'll feel
             comparatively instantaneous for me. When the whole internet feels
             like that, we can talk about taking it to 16ms.
       
            dustingetz wrote 3 hours 23 min ago:
            actually if you live near a city the edge network is 6ms RTT ping
            away, that’s 3ms each direction, so if e.g. a virtual scroll
            frontend is windowing over a server array retained in memory, you
            can get there and back over websocket, inclusive of the windowing,
            streaming records in and out of the DOM at the edges of the
            viewport, and paint the frame, all in less than 8ms 120hz frame
            budget, and the device is idle, with only the visible resultset in
            client memory. That’s 120hz network. Even if you don’t live
            near a city, you can probably still hit 60hz. It is not 2005
            anymore. We have massively multiplayer video games, competitive
            multiplayer shooters and can render them in the cloud now. Linear
            is office software, it is not e-sports, we’re not running it on
            the subway or in Africa. And AI happens in the cloud, Linear’s
            website lead text is about agents.
       
              Joeri wrote 39 min ago:
              Those are theoretical numbers for a small elite. Real world
              numbers for most of the planet are orders of magnitude worse.
       
                dustingetz wrote 23 min ago:
                 These are my actual numbers from my house in the Philadelphia
                 suburbs right now, 200 miles away from the EWR data center
                 outside NYC. Feel free to double them; you’re still inside
                 the 60hz frame budget with better-than-e-sports latency.
       
            ahofmann wrote 4 hours 49 min ago:
             This is not the point; other numbers matter more than yours.
            
            In 2005 we wrote entire games for browsers without any frontend
            framework (jQuery wasn't invented yet) and managed to generate
            responses in under 80 ms in PHP. Most users had their first bytes
            in 200 ms and it felt instant to them, because browsers are
            incredibly fast, when treated right.
            
            So the Internet was indeed much faster then, as opposed to now.
            Just look at GitHub. They used to be fast. Now they rewrite their
            frontend in react and it feels sluggish and slow.
       
              DanielHB wrote 4 hours 9 min ago:
              Unless you are running some really complicated globally
              distributed backend your roundtrip will always be higher than
              80ms for all users outside your immediate geographical area. And
              the techniques to "fix" this usually only mitigate the problem in
              read-scenarios.
              
              The techniques Linear uses are not so much about backend
              performance and can be applicable for any client-server setup
              really. Not a JS/web specific problem.
       
                porker wrote 22 min ago:
                > Unless you are running some really complicated globally
                distributed backend your roundtrip will always be higher than
                80ms for all users outside your immediate geographical area.
                
                Many of us don't have to worry about this. My entire country is
                within 25ms RTT of an in-country server. I can include a dozen
                more countries within an 80ms RTT. Lots of businesses focus
                just on their country and that's profitable enough, so for them
                they never have to think about higher RTTs.
       
                imiric wrote 3 hours 6 min ago:
                > Unless you are running some really complicated globally
                distributed backend your roundtrip will always be higher than
                80ms for all users outside your immediate geographical area.
                
                The bottleneck is not the roundtrip time. It is the bloated and
                inefficient frontend frameworks, and the insane architectures
                built around them.
                
                Here's the creator of Datastar demonstrating a WebGL app being
                updated at 144FPS from the server: [1] This is not magic. It's
                using standard web technologies (SSE), and a fast and efficient
                event processing system (NATS), all in a fraction of the size
                and complexity of modern web frameworks and stacks.
                
                Sure, we can say that this is an ideal scenario, that the
                server is geographically close and that we can't escape the
                rules of physics, but there's a world of difference between a
                web UI updating at even 200ms, and the abysmal state of most
                modern web apps. The UX can be vastly improved by addressing
                the source of the bottleneck, starting by rethinking how web
                apps are built and deployed from first principles, which is
                what Datastar does.
                
   URI          [1]: https://www.youtube.com/watch?v=0K71AyAF6E4&t=848
       
                  mike_hearn wrote 8 min ago:
                  To see this first hand try this website if you're in Europe
                  (maybe it's also fast in the US, not sure): [1] ?
                  
                  The entire thing is a JavaFX app (i.e. desktop app),
                  streaming DOM diffs to the browser to render its UI. Every
                  click is processed server side (scrolling is client side).
                  Yet it's actually one of the faster websites out there, at
                  least for me. It looks and feels like a really fast and
                  modern website, and the only time you know it's not the same
                  thing is if you go offline or have bad connectivity.
                  
                  If you have enough knowledge to efficiently use your
                  database, like by using pipelining and stored procedures with
                  DB enforced security, you can even let users run the whole
                  GUI locally if they want to, and just have it do the
                  underlying queries over the internet. So you get the best of
                  both worlds.
                  
                  There was a discussion yesterday on HN about the DOM and how
                  it'd be possible to do better, but the blog post didn't
                  propose anything concrete beyond simplifying and splitting
                  layout out from styling in CSS. The nice thing about JavaFX
                  is it's basically that post-DOM vision. You get a "DOM" of
                  scene graph nodes that correspond to real UI elements you
                  care about instead of a pile of divs, it's reactive in the
                  Vue sense (you can bind any attribute to a lazily computed
                  reactive expression or collection), it has CSS but a
                  simplified version that fixes a lot of the problems with web
                  CSS and so on and so forth.
                  
   URI            [1]: https://www.jpro.one/
       
                ahofmann wrote 3 hours 58 min ago:
                 My take is that a performant backend gets you so much runway,
                that you can reduce a lot of complexity in the frontend. And
                yes, sometimes that means to have globally distributed
                databases.
                
                 But the industry is going the other way: building frontends
                 that try to hide slow backends and, while doing that, handle so
                 much state (and visual fluff) that they get fatter and slower
                 every day.
       
              Zanfa wrote 4 hours 39 min ago:
              > Now they rewrite their frontend in react and it feels sluggish
              and slow.
              
              And decided to drop legacy features such as  tags and broke
              browser navigation in their new code viewer. Right click on a
              file to open in a new tab doesn’t work.
       
        petralithic wrote 5 hours 15 min ago:
        ElectricSQL and TanStack DB are great, but I wonder why they focus so
        much on local first for the web over other platforms, as in, I see
        mobile being the primary local first use case since you may not always
        have internet. In contrast, typically if you're using a web browser to
        any capacity, you'll have internet.
        
         Also, the former technologies are local first in theory, but without
         conflict resolution they can break down easily. This has been my
         experience making mobile apps that need to be local first, which led me
         to using CRDTs for that use case.
       
          owebmaster wrote 1 hour 27 min ago:
          Because web apps run in a web browser, which is the opposite of a
          local first platform.
          
          Local-first is actually the default in any native app
       
          946789987649 wrote 3 hours 33 min ago:
           In this case it's not about being able to use the product at all, but
           about the joy of using an incredibly fast and responsive product,
           which is why you want it to be local-first.
       
          jitl wrote 4 hours 52 min ago:
          Because building local first with web technologies is like infinity
          harder than building local first with native app toolkits.
          
           A native app is installed and available offline by default. A website
           needs a bunch of weird shenanigans with AppManifest or
           ServiceWorker, which are more like a bunch of parts you can maybe
           assemble into offline availability.
          
          Native apps can just… make files, read and write from files with
          whatever 30 year old C code, and the files will be there on your
          storage. Web you have to fuck around with IndexedDB (total pain in
          the ass), localStorage (completely insufficient for any serious
          scale, will drop concurrent writes), or OriginPrivateFileSystem. User
          needs to visit regularly (at least once a month?) or Apple will erase
          all the local browser state. You can use JavaScript or hit C code
          with a wrench until it builds for WASM w/ Emscripten, and even then
          struggle to make sync C deal with waiting on async web APIs.
          
           Apple has offered CoreData + CloudKit since 2015, a complete
           first-party solution for local apps that sync, no backend required.
           I’m not a Google enthusiast; maybe Firebase is their equivalent? Idk.
       
            mike_hearn wrote 3 hours 25 min ago:
            Well .... that's all true, until you want to deploy. Historically
            deploying desktop apps has been a pain in the ass. App stores
            barely help. That's why devs put up with the web's problems.
            
            Ad: unless you use Conveyor, my company's product, which makes it
            as easy as shipping a web app (nearly): [1] You are expected to
            bring your own runtime. It can ship anything but has integrated
            support for Electron and JVM apps, Flutter works too although
            Flutter Desktop is a bit weak.
            
   URI      [1]: https://hydraulic.dev/
       
            agos wrote 4 hours 21 min ago:
             And if you didn't like or care to learn CoreData? Just jam a
             SQLite DB into your application and read from it; it's just C. This
             was already working before Angular or even Backbone.
       
        mbaranturkmen wrote 5 hours 17 min ago:
         How is this approach better than using react-query with persisted
         storage, periodically syncing the local storage and the server storage?
         Perhaps I am missing something.
       
          petralithic wrote 5 hours 13 min ago:
           That approach is precisely what the new TanStack DB does, which, in
           case you didn't know, has the same creator as React Query. The former
           extends the latter's principles to syncing via ElectricSQL; the two
           organizations have a partnership with each other.
       
        Gravityloss wrote 6 hours 13 min ago:
        Some problem on the site. Too much traffic?
        
             Secure Connection Failed
             An error occurred during a connection to bytemash.net.
             Error code: PR_END_OF_FILE_ERROR
       
          jcusch wrote 5 hours 59 min ago:
          It looks like I was missing a www subdomain CNAME for the underlying
          github pages site. I think it's fixed now.
       
            Gravityloss wrote 5 hours 15 min ago:
            I still see the same error
       
              Gravityloss wrote 4 hours 29 min ago:
              Ok, it works, problem was probably on my end.
       
       
   DIR <- back to front page