_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   XSLT – Native, zero-config build system for the Web
       
       
        riedel wrote 2 hours 6 min ago:
         Funnily enough, back in the '90s, working as a web designer in my
         high school years (whatever you would call web design these days),
         I remember building a DSSSL-dialect-based pipeline to generate
         websites from a published newsfeed. I still like XSLT
         transformations. I even used the Ananas XI reader [0] to transform
         actual text, using XSLT for both transforming and templating. I
         have, however, met few people who also appreciated this. Often such
         tooling was replaced once someone else took over the job...
        
        [0]
        
   URI  [1]: http://www.ananas.org/xi/
       
        giantrobot wrote 3 hours 29 min ago:
         This elides a huge advantage of this approach: your blog (or
         whatever) is just raw data. Consuming it with a browser applies the
         linked stylesheet and spits out HTML. But you can consume the
         endpoint with anything.
        
        For instance you could share a music playlist as an XSPF document. In
        the browser your style sheet could make it into a nice web page with
        audio tags to play the content. But that exact same endpoint opened
        with VLC would just treat it as a normal playlist.
        
        You can just publish raw data (with robust schema validation) and each
        user agent will handle it appropriately. Even a bare bones style sheet
         could just say "open this URL with some particular application."
        
        Since the XSLT engine is built into browsers you get a free
        transformation engine without any JavaScript.
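         
         A minimal sketch of the idea (file names and markup are
         illustrative, simplified rather than real XSPF): the XML is the
         endpoint, and the processing instruction is all a browser needs to
         render it.

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="playlist.xsl"?>
<!-- playlist.xml: the raw data; a non-browser client reads it directly -->
<playlist>
  <track>
    <title>Song One</title>
    <location>https://example.com/one.mp3</location>
  </track>
</playlist>

<!-- playlist.xsl: how a browser turns the same data into HTML -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/playlist">
    <html>
      <body>
        <xsl:for-each select="track">
          <p><xsl:value-of select="title"/></p>
          <!-- {location} is an attribute value template -->
          <audio controls="controls" src="{location}"/>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

         (The two documents above would be served as separate files.)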
       
        michaelsbradley wrote 6 hours 34 min ago:
         Grug-speak is really not that endearing; I could do without it
         entirely, but maybe that’s just me. Still, exploration of old-ish
         ideas years after their hype cycles can be worthwhile indeed!
       
          fkyoureadthedoc wrote 5 hours 46 min ago:
          Yes, one line of it would be plenty. I didn't make it past the second
          paragraph, and don't care enough about the content to let ChatGPT
          make it less annoying.
       
        CamouflagedKiwi wrote 8 hours 41 min ago:
        I worked with XSLT a few companies ago. They had several XSLT documents
        as a transformation to various output formats (this was a pretty minor
        part of the overall product).
        
        I'm not sure I've ever seen something less popular. Feature requests
        and the odd bug would build up, eventually an engineer would be
        assigned to it for a week and they'd fix a bunch of things, then
        essentially would rather quit than keep doing it, so next time it'd be
        someone else's turn.
        
        I don't even think it was particularly bad. It seemed like it was just
        always like that. Thank goodness it isn't so popular any more so it
        doesn't turn up jammed into random places as it did then.
       
        flakiness wrote 8 hours 54 min ago:
         You call XML-based transformation "zero-config"? I feel old.
       
        bmacho wrote 9 hours 16 min ago:
         What incoherent writing, lol. I'm not sure that grug necessarily
         means incoherent, but I'm sure there is a type of genius whose
         every sentence is painfully clear. Wouldn't it be better to cater
         to that?
        
        Anyway.
        
         Paco Grug talks about how they want a website (e.g. a blog) without
         a server-side build step: just data, the shape of the data, and the
         building happening automagically, this time on the client. HTML has
         JavaScript and frames for that, but HTML painfully lacks
         transclusion, for the header menu, sidebar, and footer, which
         birthed myriad web servers and web server technologies.
        
        It seems that .xml can do it too, e.g. transclusion and probably more.
        The repo doesn't really showcase it.
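         
         For what it's worth, XSLT's document() function covers the
         transclusion case: a shared header can live in its own file and be
         copied in client-side. A minimal sketch (file and element names
         are hypothetical):

```xml
<!-- in the site's stylesheet: pull a shared fragment from another file -->
<xsl:template match="/page">
  <html>
    <body>
      <!-- document() loads header.xml relative to the stylesheet -->
      <xsl:copy-of select="document('header.xml')/header/*"/>
      <xsl:apply-templates select="content"/>
    </body>
  </html>
</xsl:template>
```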
        
         Anyway, I downloaded the repo and ran it on a local webserver; it
         works. It also works with JavaScript disabled, on an old browser.
         (Though not when opened as a file.) Nice technology; maybe it is
         possible to use it for something useful (in a very specific niche).
         For most other things a JavaScript/build-step/dynamic-webserver
         approach is better.
        
         Also, I think that for a blog you'll want the posts in separate
         files, and you can't just dump them in a folder and expect the
         browser to find them. You'll need a
         webserver/build-step/JavaScript for that.
       
        noisy_boy wrote 9 hours 41 min ago:
        I used XSLT in the past for trade message transformation from one
        format of XML (produced by an upstream system) to another (used by the
         downstream consuming system). It works reasonably well for not
         overly complex stuff, but debugging is a pain once the complexity
         increases. I'd prefer not to do that again.
       
        darwi wrote 10 hours 19 min ago:
        The x86-cpuid-db project [1] heavily uses XSLT 3.0 through the
        “saxonche” PIP package.
        
        It has worked amazingly well for us, and the generated files are
        already merged in the Linux Kernel.
        
   URI  [1]: https://gitlab.com/x86-cpuid.org/x86-cpuid-db
       
          pyuser583 wrote 9 hours 11 min ago:
           Thank you! I've been looking for Python support for XSLT 3.0!
           Not looking very hard, but this still saved me some time!
       
        stuaxo wrote 10 hours 42 min ago:
        Thanks, I've been wanting this for 25 years.
       
        jonathaneunice wrote 10 hours 48 min ago:
        Blast from the past:
        
        "XSLT is a failure wrapped in pain"
        
        original article seems offline but relevant HN discussion:
        
   URI  [1]: https://news.ycombinator.com/item?id=8708617
       
        ulrischa wrote 10 hours 55 min ago:
         Throw PHP into the mix and you have a wonderful solution for
         templating with bulletproof standards:
        
        // XML
        $xml_doc = new DOMDocument();
        $xml_doc->load("file1.xml");
        
        // XSL
        $xsl_doc = new DOMDocument();
        $xsl_doc->load("file.xsl");
        
        // Proc
        $proc = new XSLTProcessor();
        $proc->importStylesheet($xsl_doc);
        $newdom = $proc->transformToDoc($xml_doc);
        
        print $newdom->saveXML();
        
         XSLT lacks functionality? No problem, use PHP functions in XSLT:
         [1] RTFM
        
   URI  [1]: https://www.php.net/manual/en/xsltprocessor.registerphpfunctio...
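        
         On the stylesheet side, that mechanism looks roughly like this (a
         sketch; it assumes $proc->registerPHPFunctions() was called on the
         processor before the transform):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:php="http://php.net/xsl">
  <xsl:template match="title">
    <!-- call an ordinary PHP function from inside the transform -->
    <h1><xsl:value-of select="php:function('strtoupper', string(.))"/></h1>
  </xsl:template>
</xsl:stylesheet>
```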
       
        alganet wrote 11 hours 0 min ago:
        I remember learning XSLT from this: [1] Still a great resource.
        
        --
        
         I would say CSS selectors superseded XPath for the web. If one
         could do XSLT using CSS selectors instead, it would feel fresh and
         modern.
        
   URI  [1]: https://zvon.org/xxl/XSLTutorial/Books/Output/contents.html
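        
         To make the comparison concrete, roughly equivalent selectors over
         hypothetical markup (the XPath 1.0 class test is verbose because
         @class is a space-separated list):

```
CSS selector:  ul.nav > li a
XPath (1.0):   //ul[contains(concat(' ', @class, ' '), ' nav ')]/li//a
```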
       
        ako wrote 11 hours 14 min ago:
        I built an actual shipping product that used this approach over 25
         years ago. The server would hold the state of every session; that
         would be serialized to XML, and then XSLT templates would be used
         to render HTML. The idea was that this would allow customers to
         customize the visual appearance of the web pages, but XSLT was too
         difficult. Not a success.
       
          xhrpost wrote 11 hours 9 min ago:
          I did something like this at an employer a while ago as well. Taking
          it a step further, we wanted to be able to dynamically build the
          templates that the browser would then use for building the HTML.
          Senior dev felt the best way would be to have a "master" xslt that
          would then generate the xslt for the browser. I ended up building the
          initial implementation and it was a bit of a mind bender. Fun, but
           not developer-friendly, for sure.
       
        PedroBatista wrote 11 hours 17 min ago:
        I still have PTSD from XSLT in college.
        
         Recently I needed a solution to a problem, and what XSLT promises
         is a big part of the solution, so I'm in an existential and
         emotional crisis.
       
        kiliancs wrote 11 hours 18 min ago:
        - article schema
        - page schema
        - non-technical users can author & upload
        
        And the browser takes care of the rendering.
        
        Good times.
       
        Evidlo wrote 11 hours 18 min ago:
        I also did a similar XSL blog demo a few years ago.  Here is the demo:
        
   URI  [1]: https://evidlo.github.io/xsl-website
       
        nashashmi wrote 11 hours 30 min ago:
         This gist page uses "me not know, but me know now" to express that
         even a caveman can do it (no offense to cavemen).
        
        I learned one thing: Apply XSL to an XML by editing the XML. But can we
        flip it?
        
        The web works in MVC ways. Web servers are controllers that output the
        view populated with data.
        
         (XML) Data is in the backend. The (XSLT) view page is the front
         end. (XPath) Query filters request (XML) data, like controllers do.
       
        dingi wrote 11 hours 38 min ago:
        XML needs a renaissance because it solves problems modern formats still
        fumble with. Robust schema validation, namespaces, mixed content, and
         powerful tooling like XPath/XSLT. It's verbose, yes. It can be
         made to look like shit and make you wanna throw up, but it's also
         battle-tested and structured for complexity. We ditched it too
         soon chasing simplicity.
       
        a4isms wrote 11 hours 40 min ago:
        A long time ago, in a dystopic project far, far, away:
        
        Depressed and quite pessimistic about the team’s ability to
        orchestrate Java development in parallel with the rapid changes to the
        workbook, he came up with the solution: a series of XSLT files that
        would automatically build Java classes to handle the Struts actions
        defined by the XML that was built by Visual Basic from the workbook
         that was written in Excel. [1] HN discussions: [2] [3]
        
   URI  [1]: https://raganwald.com/2008/02/21/mouse-trap.html
   URI  [2]: https://news.ycombinator.com/item?id=120379
   URI  [3]: https://news.ycombinator.com/item?id=947952
       
        ozim wrote 11 hours 42 min ago:
         Huh? If I have to write XML, why bother? I would just write HTML
         directly.
       
        codelikeawolf wrote 11 hours 46 min ago:
         I know XML and XSLT get a lot of hate. To some extent, the hate for
        XSLT is warranted. But I have to work with XML files for my job, and it
        was pretty refreshing to not have to install any libraries to work with
        them in a web app. We use XML as the serialization format for a
        spaceflight mission planning app, so there's a lot of complex data that
        would be trickier to represent with JSON. At the end of the day, HTML
        is spicy XML, so you can use all the native web APIs to
        read/write/query/manipulate XML files and even apply XSLT
        transformations.
        
        I suspect some of the hate towards XML from the web dev community boils
         down to it being "old". I'll admit that I used to have the same opinion
        until I actually started working with it. It's a little bit more of a
        PITA than working with JSON, but I think I'm getting a much more
        expressive and powerful serialization format for the cost of the added
        complexity.
       
          nashashmi wrote 11 hours 13 min ago:
          Do you find it wrong that the XML needs to call the XSL instead of
          vice versa? As in XSLT calling XML data?
       
        jarofgreen wrote 11 hours 47 min ago:
        > can use HTML import? nope not exist
        
        Well, Apache says hi: [1] (Look for "include")
        
   URI  [1]: https://httpd.apache.org/docs/2.4/howto/ssi.html
       
          Evidlo wrote 11 hours 22 min ago:
          Doesn't work on Github Pages, but this will.
       
            jarofgreen wrote 10 hours 50 min ago:
            True - just thought people would be interested in some options
       
        mlok wrote 11 hours 55 min ago:
        I believe some people might find Zjs Components interesting for this
        matter : [1] Paper abstract :
        
        ZjsComponent: A Pragmatic Approach to Modular, Reusable UI Fragments
        for Web Development
        
            In this paper, I present ZjsComponent, a lightweight and
        framework-agnostic web component designed for creating modular,
        reusable UI elements with minimal developer overhead. ZjsComponent is
        an example implementation of an approach to creating components and
        object instances that can be used purely from HTML. Unlike traditional
        approaches to components, the approach implemented by ZjsComponent does
        not require build-steps, transpiling, pre-compilation, any specific
        ecosystem or any other dependency. All that is required is that the
        browser can load and execute Javascript as needed by Web Components.
        ZjsComponent allows dynamic loading and isolation of HTML+JS fragments,
        offering developers a simple way to build reusable interfaces with
        ease. This approach is dependency-free, provides significant DOM and
        code isolation, and supports simple lifecycle hooks as well as
        traditional methods expected of an instance of a class.
        
   URI  [1]: https://news.ycombinator.com/item?id=44290315
       
        samuell wrote 12 hours 2 min ago:
         In the early 2000s, XSLT allowed me as a late teenager with some
         HTML experience but without real coding skills (I could copy some
         lines of PHP from various forums and get it to work) to build a
         somewhat fancy intranet for a local car shop, complete with
         automatic styling of a feed of car info from a nationwide online
         sales portal.
        
        Somehow it took me many years, basically until starting uni and taking
        a proper programming class, before I started feeling like I could
        realize my ideas in a normal programming language.
        
        XSLT was a kind of tech that allowed a non-coder like me to step by
        step figure out how to get things to show on the screen.
        
        I think XSLT really has some strong points, in this regard at least.
       
          samuell wrote 11 hours 59 min ago:
          In later years, I returned to XSLT to try parsing a structured text
          format for tool definitions in the Galaxy bioinformatics platform.
          
           Turns out you can do a lot with the regex support in XSLT 2.0!
           [1] The result? A Java-based tool for creating CLI commands via
           a wizard: [2]
          
   URI    [1]: https://saml.rilspace.com/exercise-in-xslt-regex-partial-gal...
   URI    [2]: https://www.youtube.com/watch?v=WMjXsBVqp7s
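          
           The regex support in question centers on xsl:analyze-string; a
           minimal XSLT 2.0 sketch of tokenizing "name=value" pairs
           (element names here are hypothetical):

```xml
<xsl:analyze-string select="$line" regex="(\w+)=(\w+)">
  <xsl:matching-substring>
    <!-- regex-group() exposes the capture groups of the match -->
    <param name="{regex-group(1)}" value="{regex-group(2)}"/>
  </xsl:matching-substring>
  <xsl:non-matching-substring/>
</xsl:analyze-string>
```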
       
        patwolf wrote 12 hours 31 min ago:
        I'm old enough to remember when Google released AJAXSLT in 2005. It was
        a JS implementation of XSLT so that you could consistently use XSLT in
        the browser.
        
        The funny thing is that the concept of AJAX was fairly new at the time,
        and so for them it made sense that the future of "fat" web pages
        (that's the term they use in their doc) was to use AJAX to download XML
        and transform it. But then people quickly learned that if you could
        just use JS to generate content, why bother with XML at all?
        
         Back in 2005 I was evaluating some web framework concepts from R&D
         at the company I worked at, and they were still very much in an
         XML mindset. I remember they created an HTML table widget that
         loaded XML documents and used XPath to select content to render in
         the cells.
       
        jkmathes wrote 12 hours 43 min ago:
        To show how wild things got w/ XML and XSLT in the early 2000s, I
        worked for a company that built an ASIC to parse XML at wire speed and
        process XSLT natively in the chip - because the anticipated future of
        the internet was all XML/XSLT. Intel bought the company and the guts
        made their way into the SSE accelerators.
       
          Alifatisk wrote 11 hours 15 min ago:
          > ASIC to parse XML at wire speed and process XSLT natively in the
          chip
          
          Just imagine how fast websites would have rendered if we went that
          route
       
          stopthe wrote 11 hours 36 min ago:
          IBM is still selling hardware that roughly matches your description:
          DataPower Gateway.
       
        mattbis wrote 12 hours 59 min ago:
         Please let this come back, since I was highly skilled at it and
         nobody uses it and I am the sads... It was a bit functional, a
         good challenge, and fun. And I would like to be paid to write the
         complicated stylesheets again. Thanks
       
        DonHopkins wrote 13 hours 2 min ago:
        A trip down memory lane to the Museum of Obsolete Technology (with
        video demos):
        
         Here's how to use XSLT to make Punkemon Pie Menus! [ WARNING: IE 5
        required! ;) ]
        
        The "htc" files are ActiveX components written in JScript, aka "Dynamic
        HTML (DHTML) behaviors": [1] >HTML Components (HTCs) are a legacy
        technology used to implement components in script as Dynamic HTML
        (DHTML) "behaviors" in the Microsoft Internet Explorer web browser.
        Such files typically use an .htc extension and the "text/x-component"
        MIME type.
        
        JavaScript Pie Menus, using Internet Explorer "HTC" components, xsl,
        and xml: [2] >Pie menus for JavaScript on Internet Explorer version 5,
        configured in XML, rendered with dynamic HTML, by Don Hopkins.
        
        punkemonpiemenus.html: [3] punkemon.xsl: [3] punkemon.xml: [3]
        punkemonpiemenus.xml: [3] piemenu.htc: [3] Also an XML Schema driven
        pie menu editor:
        
        piemenuschemaeditor.html: [3] piemenuschemaeditor.xsl: [3]
        piemenuschema.xml: [3] piemenuschemaeditor.htc: [3]
        piemenuxmlschema-1.0.xsd: [3] Here's an earlier version that uses
        ActiveX OLE Control pie menus, xsl, and xml, not as fancy or schema
        driven:
        
        ActiveX Pie Menus: [13] >Demo of the free ActiveX Pie Menu Control,
        developed and demonstrated by Don Hopkins.
        
        ActiveXPieMenuEditor.html: [3] piemenueditor.xsl: [3]
        piemenueditor.html: [3] piemenueditor.htc: [3] piemenumetadata.xml: [3]
        Fasteroids (Asteroids comparing Pie Menus -vs- Linear Menus):
        
        fasteroids.html: [3] fasteroids.htc: [3] If that wasn't obsolete
        enough, here is the "ConnectedTV Skin Editor". It was a set of HTC
        components, XML, and XML Schemas, and a schema driven wysiwyg skin
        editor for ConnectedTV: a Palm Pilot app that turned your Palm into a
        personalized TV guide + smart remote.
        
        Full fresh lineup of national and local broadcast + TiVo + Dish TV
        guides with customized channel groups, channel and show filtering and
        favorites, hot sync your custom tv guide with just the shows you watch,
        weeks worth of schedules you could download and hot sync nightly with
        the latest guide updates.
        
         Integrated with a trainable consumer IR remote controller with
         custom touch screen user interfaces, with 5-function "finger pie
         menus" that let you easily tap or stroke up/down/left/right to
         stack multiple gesture controls on each button (conveniently
         opposite and orthogonal: volume up/down, channel next/previous,
         page next/previous, time forward/back, show next/previous,
         mute/unmute, favorite/ignore, etc.). Finger pies are perfect for
         the kind of opposite, directionally oriented commands on remote
         controls, and you need a lot fewer 5-way buttons than
         single-purpose physical buttons on normal remotes, so you could
         pack a huge amount of functionality into one screen, or have any
         number of less dense screens customized for just the devices you
         have and the features you use. Goodbye, TiVo Monolith Monster
         remote controls: only a few of their buttons were actually useful,
         and ConnectedTV could put 5x the number of functions on each
         gesture-activated finger pie menu button.
        
         The skin editor let you make custom user interfaces by laying out
         and editing, wysiwyg, any number of buttons however you liked, and
         binding tap/left/right/up/down to page navigation, TV guide time,
         channel, and category navigation, sending IR commands to change
         the channel (it sends multiple digits per tap on a station or
         show, so you can forget the numbers), volume, mute, rewind/skip on
         TiVo, etc.
        
        Also you could use finger pies easily and reliably on the couch in a
        dark room with your finger instead of the stylus. Users tended to lose
        their Palm stylus in the couch cushions (which you sure don't wanna go
        fishing around for if JD Vance has been visiting) while eating popcorn
        and doing bong hits and watching tv and patting the dog and listening
        to music and playing video games in their media cave, so non-stylus
        finger gesture control was crucial.
        
         Finger pies were like iPhone swipe gestures, but years earlier and
         much cheaper (you could get a low-end Palm for dirt cheap and
         dedicate it to the TV). And self-revealing (they prompt with
         labels, give feedback with nice clicky sounds, and train you to
         use the gestures efficiently) instead of invisible, mysterious
         iPhone gestures you have to discover and figure out without visual
         affordances. After filtering out all the stuff you never watch and
         favoriting the shows you do, it was much easier to find just the
         shows you like and see what was on right now.
        
        More on the origin of the term "Finger Pie" for Beatles fans (but I
        digress ;) : [21] [22] It was really nice to have the TV guide NOT on
        the TV screen taking you away from watching the current show, and NOT
        to have to wait 10 minutes while it slowly scrolled the two visible
        rows to through 247 channels to finally see the channel you wanted to
        watch (by that time you'll miss a lot of the show, but be offered lots
        of useless shit and psychic advice to purchase from an 800 number with
        your credit card!).
        
        Kids these days don't remember how horrible and annoying those slow
        scrolling TV guides with ads for tele-psychics and sham wows and
        exercise machines used to be.
        
        I can objectively say that it was much better than the infamous ad
        laden TV Guide Scroll: [23] Using those slow scrolling non-interactive
        TV guides with obnoxious ads was so painful that you needed to apply
        HEAD ON directly to the forehead again and again and again to ease the
        pain. [24] You could use the skin editor to create your own control
        panels and buttons for whatever TV, TiVO, DVR, HiFi, Amplifier, CD,
        DVD, etc players you wanted to use together. And we had some nice color
        hires skins for the beautiful silver folding Sony Clie. [25] It was
        also nice to be able to curate and capture just the buttons you wanted
        for the devices that you actually use together, and put them all onto
        one page, or factor them out into different pages per device. You could
        ignore the 3 digit channel number and never peck numbers again, just
        stroke up on your favorite shows to switch the channel automatically.
        
        We ran out of money because it was so expensive to license the nightly
        feed of TV guide (downloading a huge sql dump every night of the latest
        schedules as they got updated), and because all of our competitors were
        just stealing their data by scraping it from TV guide web sites instead
        of licensing it legally. (We didn't have Uber or OpenAI to look up to
        for edgy legal business practice inspiration.)
        
        Oh well, it was fun while it lasted, during the days that everybody was
        carrying a Palm Pilot around beaming their contacts back and forth with
        IR. What a time that was, right before and after 9/11 2001. I remember
        somebody pointedly commented that building a Palm app at that time in
        history was kind of like opening a flower shop at the base of the World
        Trade Center. ;( [26] [27] [28] Connected TV User Guide:
        
        Overview: [29] Setting Up: [30] Using: [31] Memory: [32] Sony: [33] 
        
   URI  [1]: https://en.wikipedia.org/wiki/HTML_Components
   URI  [2]: https://www.youtube.com/watch?v=R5k4gJK-aWw
   URI  [3]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [4]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [5]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [6]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [7]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [8]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [9]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [10]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [11]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [12]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [13]: https://www.youtube.com/watch?v=nnC8x9x3Xag
   URI  [14]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [15]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [16]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [17]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [18]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [19]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [20]: https://github.com/SimHacker/IE-JScript-HTC-PieMenus/blob/main...
   URI  [21]: https://news.ycombinator.com/item?id=16615023
   URI  [22]: https://donhopkins.medium.com/gesture-space-842e3cdc7102
   URI  [23]: https://www.youtube.com/watch?v=JkGR29TSueM
   URI  [24]: https://www.youtube.com/watch?v=Is3icfcbmbs
   URI  [25]: https://en.wikipedia.org/wiki/Sony_CLI%C3%89_PEG-TG50
   URI  [26]: https://github.com/SimHacker/ConnectedTVSkinEditor
   URI  [27]: https://www.pencomputing.com/palm/Pen44/connectedTV.html
   URI  [28]: https://uk.pcmag.com/first-looks/29965/turn-your-palm-into-a-t...
   URI  [29]: https://donhopkins.com/home/ConnectedTVUserGuide/Guide1-Overvi...
   URI  [30]: https://donhopkins.com/home/ConnectedTVUserGuide/Guide2-Settin...
   URI  [31]: https://donhopkins.com/home/ConnectedTVUserGuide/Guide3-Using....
   URI  [32]: https://donhopkins.com/home/ConnectedTVUserGuide/Guide4-Memory...
   URI  [33]: https://donhopkins.com/home/ConnectedTVUserGuide/Guide5-Sony.h...
       
        FjordWarden wrote 13 hours 4 min ago:
         You don't even need XML anymore to do XML, "thanks" to iXML, where
         you can provide a grammar for any language and have it work as if
         you were working with XML. Not saying that is a good idea, though.
       
          bokchoi wrote 11 hours 20 min ago:
          Invisible XML? [1] This is the first I've seen it.  Interesting...
          
   URI    [1]: https://www.w3.org/community/reports/ixml/CG-FINAL-ixml-2023...
       
        imdsm wrote 13 hours 5 min ago:
        no more xml
        
        me have make vomit from seeing xml
       
        beAbU wrote 13 hours 10 min ago:
        Man, I'm sure this is good and all, but I still have ptsd from trying
        to understand XSLT back in my uni days 15 years ago...
       
        Devasta wrote 13 hours 41 min ago:
         Abandoning XML tech was and forever will be the web's biggest
         mistake. The past 20 years have been just fumbling about, trying
         to implement things that it would have provided easily.
       
        shireboy wrote 13 hours 47 min ago:
         At my first intranet job in the early 2000s, reporting was done
         this way. You could query a DB via ASP to get some XML, then
         transform it using XSLT and get a big HTML report you could print.
         I got pretty good at XSLT. Nowadays I steer towards a reporting
         system for reports, but for other scenarios you're typically doing
         one of the stacks he mentioned: JSON or MD +
         angular/vue/react/next/nuxt/etc.
        
         I’ve kinda gotten to a point, and I'm curious if others feel the
         same: it’s all just strings. You get some strings from somewhere,
         write some more
        strings to make those strings show other strings to the browser.
        Sometimes the strings reference non strings for things like
        video/audio/image. But even those get sent over network with strings in
        the http header.  Sometimes people have strong feelings about their
        favorite strings, and there are pros and cons to various strings.  Some
        ways let you write less strings to do more.  Some are faster.  Some
        have angle brackets, some have curly brackets, some have none at all! 
        But at the end of the day- it’s just strings.
       
          tokinonagare wrote 13 hours 22 min ago:
           My first personal page was made this way too. A nightmare to
           debug, since "view source" only gave the XML code, not the
           computed XHTML.
       
        egorfine wrote 14 hours 17 min ago:
        XSLT was truly cool.
        
        I have created a CMS that supported different building blocks
        (plugins), each would output its data in XML and supply its XSLT for
        processing. The CMS called each block, applied the concatenated XSLT
        and output HTML.
        
        It was novel at the time and really nice and handy to use.
       
          anentropic wrote 13 hours 19 min ago:
          I remember doing the same around 25 years ago...!
          
          all in VBScript, god help me
          
          It felt like a great idea at the time, but it was incredibly slow to
          generate all the HTML pages that way.
          
          Looking back I always assumed it was partly because computers back
          then were too weak, although reading other comments in this thread it
          seems like even today people are having performance problems with
          XSLT.
       
        donatj wrote 14 hours 28 min ago:
         Internet Explorer also had the ability to render XML directly into
         HTML tables, without using any JS, via the datasrc attribute. I
         had to deal
        with this nonsense early in my career in the early 2000s, along with
        people regularly complaining that it did not work in Firefox.
        
   URI  [1]: https://learn.microsoft.com/en-us/previous-versions/windows/de...
       
        thom wrote 14 hours 39 min ago:
        XSLT was many people’s first foray into functional programming
        (usually unwilling, because their company got a Google Search Appliance
        or something). I can’t imagine ever reaching for it again personally,
        but it was useful and somewhat mind-expanding in its heyday.
       
          bambax wrote 14 hours 33 min ago:
          I made many transformation pipelines with XSLT back in the day, and
          even a validation engine using Schematron; it was one of the most
          pleasant experiences I've had.
          
          It never broke, ever.
          
          It could have bugs, of course! -- but only "programmer bugs"
          (behavior coded in a certain way that should have been coded in
          another); it never suddenly stopped working for no reason like
          everything does nowadays.
       
        tempfile wrote 14 hours 47 min ago:
        XSLT is probably the #1 reason people get turned off from XML and swear
        it off as a mistaken technology. I actually quite like XML, so I have
        been trying lately to tease out exactly what it is that makes XSLT a
        mistake.
        
        XML is a semi-structured format, which (apart from & < >) includes
        plain text as a more or less degenerate case. I don't think we have any
        other realistic format for marking up plain text with arbitrary
        semantics. You can have, for example, a recipe format with <ingredient>
        as part of its schema, and it's trivial to write an XPath to pull out
        all the <ingredient>s (to put them in your shopping list, or whatever).
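        A minimal sketch of that idea (the element and file names are
        hypothetical): ingredients marked up in-line in prose, and the
        one-line XPath that harvests them all.

        ```xml
        <!-- recipe.xml: semantic markup embedded in ordinary prose -->
        <recipe>
          <p>Cream the <ingredient>butter</ingredient> with the
             <ingredient>sugar</ingredient>, then fold in the
             <ingredient>flour</ingredient>.</p>
        </recipe>

        <!-- An XPath to pull out every ingredient, wherever it occurs:
               //ingredient -->
        ```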
        
        Obviously, XSLT is code. Nobody denies this really. One thing about
        code is that it's inherently structured. Only the craziest of literate
        programmers would try to embed executable code inside of text. But I
        don't think that's the biggest problem. Code is special in that special
        purpose programming languages always leak outside the domain they're
        designed for. If you try and write a little language that's really
        well-scoped to transforming XML, you are definitely going to want to
        call stuff outside it sooner or later.
        
        Combined with the fact that there really isn't any value in ever
        parsing or processing a stylesheet, it seems like it was doomed never
        to pan out.
       
        donatzsky wrote 15 hours 12 min ago:
        A (very) relevant post from 3 months ago:
        
        Xee: A Modern XPath and XSLT Engine in Rust
        
   URI  [1]: https://news.ycombinator.com/item?id=43502291
       
        sneak wrote 15 hours 16 min ago:
        TBH if we were going with old, bad standards, I would rather write m4
        macros.  It’s preinstalled everywhere too, unlike a browser.
       
        scotty79 wrote 15 hours 24 min ago:
        Long time ago somebody wanted to put a searchable directory of products
        on a CD. It was maybe 100MB. There was no sqlite back then and the best
        browser you could count on your client having was probably IE 5.5
        
        JS was waay too slow, but it turned out that even back then XSLT was
        blazing fast. So I basically generated XML with all the data, wrote a
        simple XSLT with one clever XPath that generated search input form, did
        the search and displayed the results, slapped the xml file in CD
        auto-run and called it a day. It was finding results in a second or
        less. One of my best hacks ever.
        
        Since then I've always wanted to make an HTML templating system that
        compiles to XSLT and does the HTML generation on the client side. I
        wrote some, but back then Firefox didn't support displaying XML+XSLT
        directly, and I didn't like the workaround I came up with. Then AJAX
        came, then JS got faster, and client-side rendering with JS became
        viable. But I still think it's a good idea to send just dynamic XML
        with static XSLTs preloaded and cached, if we ever want to come back
        to a purely server-driven request-response flow. Especially if a
        binary format for XML catches on.
        
   URI  [1]: https://en.wikipedia.org/wiki/Efficient_XML_Interchange
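        For reference, the browser-side hookup described above is a single
        processing instruction in the XML document itself (the file and
        element names here are hypothetical):

        ```xml
        <?xml version="1.0"?>
        <?xml-stylesheet type="text/xsl" href="products.xsl"?>
        <!-- The browser fetches products.xsl, applies it to this document,
             and renders the resulting HTML; no JS involved. -->
        <products>
          <product>
            <name>Widget</name>
            <price>9.99</price>
          </product>
        </products>
        ```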
       
        hamdouni wrote 15 hours 33 min ago:
        Still maintaining an e-commerce site using XML/xslt and Java/servlet...
        Passed easily each wave of tech and survived 2 databases migrations
        (mainframe/db2 => sqlserver => ERP)
       
        kimi wrote 15 hours 44 min ago:
        Just my two cents - the worst pieces of tech I ever worked with in my
        40+ year career were Hibernate (second) and XSLT templating for an
        email templating system around 2005. Would not touch it with a stick if
        I can avoid it.
       
        bayindirh wrote 15 hours 44 min ago:
        People love to complain about verbosity of XML, and it looks
        complicated from a distance, but I love how I can create a good file
        format based on XML, validate with a DTD and format with XSLT if I need
        to make it very human readable.
        
        XML is the C++ of text based file formats if you ask me. It's mature,
        batteries included, powerful and can be used with any language, if you
        prefer.
        
        Like old and mature languages with their own quirks, it's sadly
        fashionable to complain about it. If it doesn't fit the use case, it's
        fine, but treating it like an abomination is not.
       
          guerrilla wrote 12 hours 32 min ago:
          Why DTD and not XSD?
       
        smackeyacky wrote 15 hours 54 min ago:
        It’s weird to see the hate for XSLT. I loved it, but maybe I just
        like stack-based languages.
       
        meinersbur wrote 15 hours 57 min ago:
        There is a classic DailyWTF about this technique: [1] > [...] the idea
        of building a website like this in XML and then transforming it using
        XSL is absurd in and of itself [...]
        
        In the comments the creators comment on it, like that it was a mess to
        debug. But I could not find anything wrong with the technique itself,
        assuming that it is working.
        
   URI  [1]: https://thedailywtf.com/articles/Sketchy-Skecherscom
       
          jcmeyrignac wrote 13 hours 7 min ago:
          There are two main problems with XSLT.
          The first is that manipulating strings is a pain: splitting and
          concatenating them is verbose as hell and difficult to read.
          The second is that it quickly becomes a mess when you use the
          "priority" attribute to overload templates.
          I compare XSLT to regular expressions: great flexibility, but
          impossible to maintain due to poor readability. To my knowledge,
          it's impossible to trace.
       
        cess11 wrote 15 hours 58 min ago:
        XML is great, one just needs the appropriate tooling. XSLT, like XSD,
        is XML too, so the same tooling applies to those as well.
        
        If you're manually writing the <>-stuff in an editor you're doing it
        wrong; do it programmatically or with applications that abstract it
        away.
        
        Use things like JAXB or other mature libraries, eXist-db ( [1] ),
        programs that can produce visualisations and so on.
        
   URI  [1]: http://exist-db.org
       
        xg15 wrote 16 hours 2 min ago:
        I remember Blizzard actually using this concept for their battle.net
        site like, 10 years ago. I always found it really cool, but at some
        point I think they replaced it with a "regular" SPA stack.
        
        I think one big problem with popularizing that approach is that XSLT as
        a language frankly sucks. As an architecture component, it's absolutely
        the right idea, but as long as actually developing in it is a world of
        pain, I don't see how people would have any incentive to adopt it.
        
        The tragic thing is that there are other pure-functional XML
        transformation languages that are really well-designed - like XQuery.
        But there is no browser that supports those.
       
          mdaniel wrote 9 hours 57 min ago:
          > like XQuery
          
          My favorite thing about XQuery is that it supports logically named
          functions, not just templates that happen to work upon whatever one
          provides it as with XSLT. I think golang's text/template suffers from
          the same problem - good luck being disciplined enough to always give
          it the right context, or you get bad outcomes
          
          An example I had lying around:
          
            declare function local:find-outline-num(
                $from as element(),
                $num  as xs:integer ) as element()*
            {
              for $el in $from
                  /following-sibling::h:div[@class = concat('outline-', $num)]
                  /*[local-name() = concat('h', $num)]
              return $el
            };
       
        ZYbCRq22HbJ2y7 wrote 16 hours 17 min ago:
        When I was a teenager around 2002, I made what one might call a
        blogging platform today, using ASP, XHTML, XSLT, and XML. It worked
        well in browsers at that time. When I look back on it, it depresses me
        that I didn't realize someone could make money hacking together web
        applications until about a decade later.
       
          Calwestjobs wrote 16 hours 12 min ago:
          EPUB is this, compressed into one file/package. So you could be
          Amazon ;)
       
        badmintonbaseba wrote 16 hours 22 min ago:
        I have worked for a company that (probably still is) heavily invested
        in XSLT for XML templating. It's not good, and they would probably
        migrate from it if they could.
        
          1. Even though there are newer XSLT standards, XSLT 1.0 is still
        dominant. It is quite limited and weird compared to the newer
        standards.
        
          2. Resolving performance problems of XSLT templates is hell. XSLT is
        a Turing-complete functional-style language, with performance very much
        abstracted away. There are XSLT templates that worked fine for most
        documents, but then one document came in with a ~100 row table and it
        blew up. Turns out that the template that processed the table is O(N^2)
        or worse, without any obvious way to optimize it (it might even have an
        XPath on each row that itself is O(N) or worse). I don't exactly know
        how it manifested, but as I recall the document was processed by XSLT
        for more than 7 minutes.
        
        JS might have other problems, but not being able to resolve algorithmic
        complexity issues is not one of them.
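        A hypothetical sketch of how such a template goes quadratic (element
        names invented): numbering each row by counting its preceding
        siblings makes every row an O(N) scan, so the whole table is O(N^2).

        ```xml
        <!-- Quadratic: re-counts all earlier rows for every single row -->
        <xsl:template match="row">
          <tr>
            <td><xsl:value-of select="count(preceding-sibling::row) + 1"/></td>
            <td><xsl:value-of select="."/></td>
          </tr>
        </xsl:template>
        ```

        In this particular case position() would be an O(1) fix, but as the
        parent comment says, the costly XPath is often far less obvious.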
       
          larodi wrote 8 hours 5 min ago:
          XSLT is not easy. It’s Prolog on shrooms, so to speak, and it has a
          steep learning curve. Once mastered it gives sudoku-level
          satisfaction, but it can hardly ever be a standard approach to
          building or templating, as people normally need much less to
          achieve their goals.
          
          Besides, XML is not universally loved.
       
            j45 wrote 4 hours 34 min ago:
            Universal love is one factor; the best tool for a job may leave
            only a few choices, XML included.
            
            It's not my first choice, but I won't rule it out because I know
            how relatively flexible and capable it can be.
            
            XSLT might just need a higher abstraction level on top of it?
       
          ChrisMarshallNY wrote 9 hours 56 min ago:
          > Even though there are newer XSLT standards, XSLT 1.0 is still
          dominant.
          
          I'm pretty sure that's because implementing XSLT 2.0 needs a
          proprietary library (Saxon XSLT[0]). It was certainly the case in the
          oughts, when I was working with XSLT (I still wake up screaming).
          
          XSLT 1.0 was pretty much worthless. I found that I needed XSLT 2.0,
          to get what I wanted. I think they are up to XSLT 3.0.
          
          [0]
          
   URI    [1]: https://en.wikipedia.org/wiki/Saxon_XSLT
       
            dragonwriter wrote 9 hours 35 min ago:
            Are you saying it is specified that you literally cannot implement
            it other than on top of, or by mimicking bug-for-bug, that library
            (the way it was impossible to implement WebSQL without a particular
            version of SQLite) or is Saxon XSLT just the only existing
            implementation of the spec?
       
              ChrisMarshallNY wrote 8 hours 59 min ago:
              Support required libxml/libxslt, which tops out at 1.0. I guess
              you could implement your own, as it’s an open standard, but I
              don’t think anyone ever bothered to.
              
              I think the guy behind Saxon may be one of the XSLT authors.
       
                int_19h wrote 5 hours 1 min ago:
                The author of Saxon is on the W3C committee for XPath, XSLT,
                and XQuery.
                
                That said, Saxon does (or at least did) have an open source
                version. It doesn't have all the features, e.g. no schema
                validation or query optimization, but all within the boundaries
                of the spec. The bigger problem there is that Saxon is written
                in Java, and browsers understandably don't want to take a
                dependency on that just for XSLT 2+.
       
          bambax wrote 12 hours 49 min ago:
          > XSLT 1.0 is still dominant
          
          How, where? In 2013 I was still working a lot with XSLT and 1.0 was
          completely dead everywhere one looked. Saxon was free for XSLT 2 and
          was excellent.
          
          I used to do transformation of both huge documents, and large number
          of small documents, with zero performance problems.
       
            int_19h wrote 5 hours 3 min ago:
            In the browsers.
       
            pmarreck wrote 11 hours 33 min ago:
            Probably corps. I was working at Factset in the early 2000's when
            there was a big push for it and I imagine the same thing was
            reflected across every Microsoft shop across corporate America at
            the time, which (at the time) Microsoft was winning big marketshare
            in. (I bet there are still a ton of internal web apps that only
            work with IE... sigh)
            
            Obviously, that means there's a lot of legacy processes likely
            still using it.
            
            The easiest way to improve the situation seems to be to upgrade to
            a newer version of XSLT.
       
            PantaloonFlames wrote 11 hours 54 min ago:
            I recently had the occasion to work with a client that was heavily
            invested in XML processing for a set of integrations. They’re
            migrating / modernizing, but they’re so heavily invested in XSL
            that they don’t want to migrate away from it. So I conducted some
            perf tests, and the performance I found for XSLT in .NET
            (“Core”) was slightly to significantly better than that of
            Java (current) and Saxon. But both were fast.
            
            In the early days the XSL was all interpreted, and slow. From
            ~2004 or so, the XSLT engines came to be JIT-compiled. XSL
            benchmarks used to be a thing, but rapidly declined in value from
            then on because the perf differences just stopped mattering.
       
          nithril wrote 13 hours 44 min ago:
          XSLT/XPath have evolved since XSLT 1.0.
          
          Features like keys (indexes) are now available to greatly speed up
          processing. A good XSLT implementation like Saxon definitely helps
          on the perf front as well.
          
          When it comes to transforming XML into something else, XSLT is
          quite handy, since it structures the logic.
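          As a hedged sketch (the element and key names are invented), the
          keys mentioned above turn repeated scans into indexed lookups:

          ```xml
          <!-- Build an index over <customer> elements, keyed by @id -->
          <xsl:key name="cust-by-id" match="customer" use="@id"/>

          <xsl:template match="order">
            <!-- key() is an indexed lookup, roughly O(1), instead of an
                 O(N) search like //customer[@id = current()/@customer-id] -->
            <xsl:value-of select="key('cust-by-id', @customer-id)/name"/>
          </xsl:template>
          ```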
       
            echelon wrote 11 hours 19 min ago:
            XSLT just needs a different, non-XML serialization.
            
            XML (the data structure) needs a non-XML serialization.
            
            Similar to how the Semantic Web's OWL has multiple
            serializations, only one of them being the XML one (OWL can be
            represented in Functional, Turtle, Manchester, JSON, and
            N-Triples syntaxes).
       
              j45 wrote 4 hours 36 min ago:
              This is very understandable. Where I get stuck is the remaining
              gap between XML and XSLT, and what JSON or another format can
              cover.
              
              Trying to close the gap often ends up creating more complexity
              than intended, or maybe even more than XML in some hands.
              
              It definitely would be an interesting piece.
       
              int_19h wrote 5 hours 3 min ago:
              XQuery is pretty close to "XSLT with sane syntax", if that's what
              you mean.
              
              But the fundamental problem here is the same: no matter what new
              things are added to the spec, the best you can hope for in
              browsers is XSLT 1.0, even though we've had XSLT 3.0 for 8 years
              now.
       
              bokchoi wrote 6 hours 9 min ago:
              I just posted this in another comment:
              
   URI        [1]: https://github.com/Juniper/libslax/wiki/Intro
       
              marcosdumay wrote 6 hours 40 min ago:
              > XML (the data structure) needs a non-XML serialization.
              
              KDL is a very interesting attempt, but my impression is that
              people are already trying to shove way too much unnecessary
              complexity into it.
              
              IMO, KDL's document transformation is not a really good example
              of a better XSLT, though. I mean, it's better, but it probably
              can still be improved a lot.
       
              jimbokun wrote 6 hours 57 min ago:
              You're looking for S-expressions.
       
                SkiFire13 wrote 2 hours 5 min ago:
                S-expressions only represent nested lists. You need some other
                convention _on top of them_ to represent other kind of data,
                and that's generally the hard part.
       
              alganet wrote 10 hours 42 min ago:
              > XML (the data structure) needs a non-XML serialization.
              
              That's YAML, and it is arguably worse. Here's a sample YAML 1.2
              document straight from their spec:
              
                  %TAG !e! tag:example.com,2000:app/
                  ---
                  - !local foo
                  - !!str bar
                  - !e!tag%21 baz
              
              Nightmare fuel. Just by looking at it, can you tell what it does?
              
              --
              
              Some notes:
              
              - SemWeb also has JSON-LD serialization. It's a good compromise
              that fits modern tooling nicely.
              
              - XML is still a damn good compromise between human readable and
              machine readable. Not perfect, but what is perfect anyway?
              
              - HTML5 is now more complex than XHTML ever was (all sorts of
              historical caveats in this claim, I know, don't worry).
              
              - Markup beauty is relative, we should accept that.
       
            thechao wrote 11 hours 40 min ago:
            Can you name a non-Saxon XSLT processor? I'd really like one.
            Preferably, open-source.
       
              jraph wrote 9 hours 19 min ago:
              - Browsers are XSLT (1.0) processors.
              
              - Xee: [1]
              - xrust: [2]
              - XJSLT (compiles XSLT to JS): [3]
              
              Xee is WIP AFAIK and I don't know the maturity of xrust and
              XJSLT.
              
   URI        [1]: https://github.com/Paligo/xee
   URI        [2]: https://docs.rs/xrust/latest/xrust/xslt/
   URI        [3]: https://github.com/egh/xjslt
       
                thechao wrote 6 hours 5 min ago:
                Yeah... I posted too quickly: I want XSLT 3. The 1 & 2 specs
                are good first attempts, but are very difficult to use,
                effectively. As another poster also commented: it'd be nice if
                the implementation wasn't tied to XML, as well!
                
                Also, I want a cookie & a pony.
       
              badmintonbaseba wrote 9 hours 42 min ago:
              I only know libxslt, but it's XSLT 1.0 and some of EXSLT. I
              don't recommend it.
       
            sam_lowry_ wrote 12 hours 48 min ago:
            Keys were a thing in XSLT 1.x already.
            
            XSLT 2+ was more about side effects.
            
            I never really grokked later XSLT and XPath standards though.
            
            XSLT 1.0 had a steep learning curve, but it was elegant the way
            poetry is elegant, because of the extra restrictions imposed on it
            compared to prose. You really had to stretch your mind to do
            useful stuff with it. Does anyone remember Muenchian grouping? It
            was gorgeous.
            
            Newer standards lost elegance and kept the ugly syntax.
            
            No wonder they lost mindshare.
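            For anyone who never saw it, a minimal sketch of the Muenchian
            method (element names invented): a key plus a generate-id()
            comparison stands in for the group-by that XSLT 1.0 lacked.

            ```xml
            <!-- Index every <item> by its @category -->
            <xsl:key name="by-cat" match="item" use="@category"/>

            <!-- An item leads its group iff it is the first node returned
                 by its own key lookup -->
            <xsl:for-each select="item[generate-id() =
                                 generate-id(key('by-cat', @category)[1])]">
              <h2><xsl:value-of select="@category"/></h2>
              <xsl:for-each select="key('by-cat', @category)">
                <p><xsl:value-of select="name"/></p>
              </xsl:for-each>
            </xsl:for-each>
            ```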
       
              bokchoi wrote 6 hours 11 min ago:
              I haven't tried it yet, but I came across this alternate syntax
              for XSLT which is much more friendly: [1] It looks like it was
              developed by Juniper and has shipped in their routers?
              
   URI        [1]: https://github.com/Juniper/libslax/wiki/Intro
       
              jerf wrote 11 hours 33 min ago:
              "Newer standards lost elegance and kept the ugly syntax."
              
              My biggest problem with XSLT is that I've never encountered a
              problem that I wouldn't rather solve with an XPath library and
              literally any other general purpose programming language.
              
              When XSLT was the only thing with XPath you could rely on, maybe
              it had an edge, but once everyone has an XPath library what's
              left is a very quirky and restrictive language that I really
              don't like. And I speak Haskell, so the critic reaching for the
              reply button can take a pass on the "Oh you must not like
              functional programming" routine... no, Haskell is included in
              that set of "literally any other general purpose programming
              language" above.
       
                smrtinsert wrote 5 hours 21 min ago:
                Pretty true. I created a simplified XPath layer on a StAX
                parser back in the day and it was a breakthrough in XML
                usability.
       
                yoz wrote 8 hours 0 min ago:
                Serious question: would it be worth the effort to treat XSLT as
                a compilation target for a friendlier language, either extant
                or new?
                
                There's clearly value in XSLT's near-universal support as a
                web-native system. It provides templating out of the box
                without invoking JavaScript, and there's demand for that[1].
                But it still lacks decent in-browser debugging which JS has in
                spades.
                
   URI          [1]: https://justinfagnani.com/2025/06/26/the-time-is-right...
       
                  bokchoi wrote 6 hours 10 min ago:
                  I just posted this in another comment:
                  
   URI            [1]: https://github.com/Juniper/libslax/wiki/Intro
       
                  jerf wrote 7 hours 23 min ago:
                  It would at least be an interesting project. If someone put
                  the elbow grease into it, it is distinctly possible that an
                  XSLT stylesheet could be not just converted to JS (which is
                  obviously true and just a matter of effort), but converted
                  to something that is at least on the edge of human-usable
                  and editable, and some light refactoring away from being
                  decent code.
       
          agumonkey wrote 15 hours 1 min ago:
          It's odd because XSLT was clearly made in an era when processing
          long source XML was the norm, and nested loops would obviously blow
          up..
       
            j16sdiz wrote 14 hours 20 min ago:
            It was the era when everything walked the DOM tree, not streams.
            
            Streaming was not supported until later versions.
       
              agumonkey wrote 13 hours 55 min ago:
              Hmm my memory is fuzzy but I remember seeing backend processing
              of xml files a lot around 2005.
       
                count wrote 11 hours 30 min ago:
                Yeah, I was using Novell DirXML to do XSLT processing of
                inbound/outbound data in 2000 ( [1] ) for directory services
                stuff.    It was full XML body (albeit small document sizes, as
                they were usually user or identity style manifests from HR
                systems), no streaming as we know it today.
                
   URI          [1]: https://support.novell.com/techcenter/articles/ana2000...
       
                  agumonkey wrote 10 hours 51 min ago:
                  Ok, I never heard of the pre and post xml streaming era.. I
                  got taught.
       
                reactordev wrote 13 hours 47 min ago:
                But they worked on the xml body as a whole, in memory, which is
                where all the headaches started. Then we introduced WSDLs on
                top, and then we figured out streaming.
       
          nolok wrote 15 hours 19 min ago:
          It's generally speaking part of the problem with the entire "XML as
          a savior" mindset of that earlier era, and a big reason why we left
          it behind; it doesn't matter if it's XSLT or SOAP or even XHTML in
          a way... Those were defined as machine languages meant for machines
          talking to machines, and invariably something goes south, and they
          were not really made for us to intervene in the middle. It can be
          done, but it's way more work than it should be, especially since
          they were clearly never designed around the idea that those
          machines would sometimes speak "wrong", or a different "dialect".
          
          It looks great, then you design your stuff and it goes great, then
          you deploy to the real world and everything catches fire instantly,
          and every time you put one fire out another starts.
       
            vjvjvjvjghv wrote 6 hours 45 min ago:
            Now we have "JSON as savior". I see it way too often where new
            people come into a project and the first thing they want to do is
            to replace all XML with JSON, just because. Never mind that this
            solves basically nothing and often introduces its own set of
            problems. I am not a big fan of XML but to me it's pretty low in
            the hierarchy of design problems.
       
              SoftTalker wrote 5 hours 54 min ago:
              The only problem with XML is the verbosity of the markup.
              Otherwise it's a nice way to structure data without the bizarre
              idiosyncracies of YAML or JSON.
       
                vjvjvjvjghv wrote 3 hours 13 min ago:
                XML has its own set of idiosyncrasies like everything being a
                string. Or no explicit markup of arrays. The whole confusion
                around attributes vs values. And many others.
                
                JSON has its own set of problems like lack of comments and for
                some reason no date type.
                
                But in the end they are just data file formats. We have bigger
                things to worry about.
       
                int_19h wrote 4 hours 59 min ago:
                I mean, XML has its own bizarre idiosyncrasies like the whole
                attribute vs child element distinction (which maps nicely to
                text markup but less so for object graphs).
                
                I would say that the main benefit of XML is that it has a very
                mature ecosystem around it that JSON is still very much
                catching up with.
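                A tiny illustration of that idiosyncrasy (names invented):
                the same datum has two equally legal shapes, and every schema
                and consumer has to pick one.

                ```xml
                <!-- The same fact as an attribute... -->
                <user id="42"/>

                <!-- ...or as a child element -->
                <user>
                  <id>42</id>
                </user>
                ```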
       
            jimbokun wrote 7 hours 8 min ago:
            It was very odd that a simple markup language was somehow seen as
            the savior for all computing problems.
            
            Markup languages are a fine and useful and powerful way for
            modeling documents, as in narrative documents with structure meant
            for human consumption.
            
            XML never had much to recommend it as the general purpose format
            for modeling all structured data, including data meant primarily
            for machines to produce and consume.
       
            em-bee wrote 11 hours 56 min ago:
            > Those were defined as machine language meant for machine
            talking to machine
            
            i don't believe this is true. machine language doesn't need the
            kind of verbosity that xml provides. sgml/html/xml were designed to
            allow humans to produce machine readable data. so they were meant
            for humans to talk to machines and vice versa.
       
              soulofmischief wrote 7 hours 53 min ago:
              Yes, I think the main difference is having imperative vs
              declarative computation. With declarative computation, the
              performance of your code is dependent on the performance and
              expressiveness of the declarative layer, such as XML/XSLT. XSLT
              lacks the expressiveness to get around its own performance
              limitations.
       
            chriswarbo wrote 12 hours 23 min ago:
            > part of the problem with the entire "XML as a savior" mindset of
            that earlier era
            
            I think part of the problem is focusing on the wrong aspect. In the
            case of XSLT, I'd argue its most important properties are being
            pure, declarative, and extensible. Those can have knock-on effects,
            like enabling parallel processing, untrusted input, static
            analysis, etc. The fact it's written in XML is less important.
            
            Its biggest competitor is JS, which might have nicer syntax but it
            loses those core features of being pure and declarative (we can
            implement pure/declarative things inside JS if we like, but
            requiring a JS interpreter at all is bad news for parallelism,
            security, static analysis, etc.).
            
            When fashions change (e.g. XML giving way to JS, and JSON), we can
            end up throwing out good ideas (like a standard way to declare pure
            data transformations).
            
            (Of course, there's another layer to this, since XML itself was a
            more fashionable alternative to S-expressions; and XSLT is sort of
            like Lisp macros. Everything old is new again...)
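             
             To make that concrete, an XSLT rule is a pure mapping from
             input nodes to output nodes (a minimal sketch; the `person`
             and `people` element names are just illustrative):
             
             ```xml
             <xsl:stylesheet version="1.0"
                 xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
               <!-- each template is a pure function of its input node:
                    no shared state, so an engine is free to analyse the
                    rules statically or apply them in parallel -->
               <xsl:template match="/people">
                 <ul><xsl:apply-templates select="person"/></ul>
               </xsl:template>
               <xsl:template match="person">
                 <li><xsl:value-of select="name"/></li>
               </xsl:template>
             </xsl:stylesheet>
             ```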
       
            diggan wrote 14 hours 12 min ago:
            > It's generally speaking part of the problem with the entire "XML
            as a savior" mindset of that earlier era and a big reason of why we
            left them
            
            Generally speaking I feel like this is true for a lot of stuff in
            programming circles, XML included.
            
            New technology appears, some people play around with it. Others
            come up with using it for something else. Give it some time, and
            eventually people start putting it everywhere. Soon "X is not for
            Y" blogposts appear, and usage finally starts to decrease as people
            rediscover "use the right tool for the right problem". Wait yet
            some more time, and a new technology appears, and the same cycle
            begins again.
            
            Seen it with so many things by now that I think "we'll" (the
            software community) forever be stuck in this cycle and the only way
            to win is to explicitly jump out of the cycle and watch it from
            afar, pick up the pieces that actually make sense to continue using
            and ignore the rest.
       
              colejohnson66 wrote 11 hours 25 min ago:
              A controversial opinion, but JSON is that too. Not as bad as XML
               was (~~there's no "JSLT"~~),
              but wasting cycles to manifest structured data in an unstructured
              textual format has massive overhead on the source and destination
              sides. It only took off because "JavaScript everywhere" was
              taking off — performance be damned. Protobufs and other binary
              formats already existed, but JSON was appealing because it's
              easily inspectable (it's plaintext) and easy to use —
              `JSON.stringify` and `JSON.parse` were already there.
              
              We eventually said, "what if we made databases based on JSON" and
              then came MongoDB. Worse performance than a relational database,
              but who cares! It's JSON! People have mostly moved away from
              document databases, but that's because they realized it was a bad
              idea for the majority of usecases.
       
                jimbokun wrote 6 hours 59 min ago:
                Both XML and JSON were poor replacements for s-expressions. 
                Combined with Lisp and Lisp macros, a more powerful data
                manipulation text format and language has never been created.
       
                imtringued wrote 9 hours 1 min ago:
                The fact that you bring up protobufs as the primary replacement
                for JSON speaks volumes. It's like you're worried about a
                problem that only exists in your own head.
                
                >wasting cycles to manifest structured data in an unstructured
                textual format
                
                 JSON IS a structured textual format, you doofus. What
                 you're complaining about is that the message defines its
                 own schema.
                
                >has massive overhead on the source and destination sides
                
                The people that care about the overhead use MessagePack or CBOR
                instead.
                
                I personally hope that I will never have to touch anything
                based on protobufs in my entire life. Protobuf is a garbage
                format that fails at the basics. You need the schema one way or
                another, so why isn't there a way to negotiate the schema at
                runtime in protobuf? Easily half or more of the questionable
                design decisions in protobuffers would go away if the client
                retrieved the schema at runtime. The compiler based workflow in
                Protobuf doesn't buy you a significant amount of performance in
                the average JS or JVM based webserver since you're copying from
                a JS object or POJO to a native protobuf message anyway. It's
                 inviting an absurd amount of pain for essentially no
                 benefit. What I'm seeing here is a motte-and-bailey
                 justification for making the world a worse place. The
                 motte being the
                argument that text based formats are computationally wasteful,
                which is easily defended. The bailey being the implicit
                argument that hard coding the schema the way protobuf does is
                the only way to implement a binary format.
                
                Note that I'm not arguing particularly in favor of MessagePack
                here or even against protobuf as it exists on the wire. If
                anything, I'm arguing the opposite. You could have the benefits
                of JSON and protobuf in one. A solution so good that it makes
                everything else obsolete.
       
                  colejohnson66 wrote 8 hours 47 min ago:
                  I didn't say protobufs were a valid replacement - you only
                  think I did. "Protobufs and other binary formats already
                  existed, [..]". I was only using it as an example of a binary
                   format that most programmers have heard of; more people
                   know of protobufs than of MessagePack and CBOR.
                  
                  Please avoid snark.
       
                diggan wrote 9 hours 13 min ago:
                Yup, agree with everything you said!
                
                 I think the only part left out is about people currently
                 believing in the currently hyped thing, "because this
                 time it's right!" or whatever they claim. Kind of like
                 how TypeScript people always appear when you say that
                 TypeScript is currently one of those hyped things and
                 will eventually be overshadowed by something else, just
                 like the languages before it; sure enough, someone soon
                 shares why TypeScript happens to be different.
       
                ako wrote 11 hours 9 min ago:
                There is JSLT: [1] and it can be useful if you need to
                transform a json document into another json structure.
                
   URI          [1]: https://github.com/schibsted/jslt
       
                  nolok wrote 9 hours 24 min ago:
                   The people who made that are either very funny in a
                   sarcastic way, or in severe need of a history lesson on
                   the area they're working in.
       
                    ako wrote 7 hours 17 min ago:
                    What is a better alternative if you just need to transform
                    JSON from one structure to another JSON structure?
       
                      rorylaitila wrote 6 hours 21 min ago:
                      Imperative code. Easy to mentally parse, comment, log,
                      splice in other data. Why add another dependency just to
                      go from json>json? That'd need an exceptional
                      justification.
       
                      asa400 wrote 6 hours 31 min ago:
                      Load it into a full programming language runtime and use
                      the great collections libraries available in almost all
                      languages to transform it and then serialize it into your
                      target format. I want to use maps and vectors and real
                      integers and functions and date libraries and spec
                      libraries. String to string processing is hell.
       
              colonwqbang wrote 12 hours 52 min ago:
              There have been many such cycles, but the XML hysteria of the 00s
              is the worst I can think of. It lasted a long time and the square
              peg XML was shoved into so many round holes.
       
                0x445442 wrote 12 hours 19 min ago:
                IDK, the XML hysteria is similar by comparison to the dynamic
                and functional languages hysterias. And it pales in comparison
                to the micro services, SPA and the current AI hysterias.
       
                  vjvjvjvjghv wrote 6 hours 44 min ago:
                  Exactly. Compared to microservices XML is a pretty minor
                  problem.
       
                  homebrewer wrote 9 hours 41 min ago:
                   IMHO it's pretty comparable; the difference is only in
                   the magnitude of insanity. After all, the industry did
                   crap out hardware XML accelerators that were supposed
                   to improve the performance of massive amounts of XML
                   transformations. Is that not the GPU/TPU craze of
                   today? See e.g. [1] and [2].
                  
   URI            [1]: https://en.wikipedia.org/wiki/XML_appliance
   URI            [2]: https://www.serverwatch.com/hardware/power-up-xml-da...
       
                    bogeholm wrote 6 hours 1 min ago:
                    From your first link
                    
                    > An XML appliance is a special-purpose network device used
                    to secure, manage and mediate XML traffic.
                    
                    Holy moly
       
                    soulofmischief wrote 8 hours 4 min ago:
                    At least arrays of numbers are naturally much closer to the
                    hardware, we've definitely come a long way in that regard.
       
                  xorcist wrote 9 hours 54 min ago:
                  Agreed. Also, Docker.
       
          mark_and_sweep wrote 15 hours 23 min ago:
          From my experience, most simple websites are fine with XSLT 1.0 and
          don't experience any performance problems.
       
            badmintonbaseba wrote 15 hours 14 min ago:
            Sure, performance might never become a problem, it is relatively
            rare. But when it does there is very little you can do about it.
       
          bux93 wrote 15 hours 26 min ago:
          Are you using the commercial version of Saxon? It's not expensive,
          and IMHO worth it for the features it supports (including the newer
          standards) and the performance. If I remember correctly (it was a
          long time ago) it does some clever optimizations.
       
            rjsw wrote 14 hours 59 min ago:
            The final free version of Saxon is a lot faster than earlier ones
            too. My guess is that it compiles the XSLT in some way for the JVM
            to use.
       
            badmintonbaseba wrote 15 hours 16 min ago:
            We didn't use Saxon, I don't work there anymore. We also supported
            client-side (browser) XSLT processing, as well as server-side. It
            might have helped on the server side, maybe could even resolve some
            algorithmic complexities with some memoization (possibly trading
            off memory consumption).
            
            But in the end the core problem is XSLT, the language. Despite
            being a complete programming language, your options are very
            limited for resolving performance issues when working within the
            language.
       
              halffullbrain wrote 15 hours 7 min ago:
              O(n^2) issues can typically be solved using keyed lookups, but I
              agree that the base processing speed is slow and the language
              really is too obscure to provide good DX.
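               
               A sketch of the keyed-lookup fix (the `customer`/`order`
               elements here are hypothetical): declare an index once with
               `xsl:key`, then `key()` replaces the per-node scan.
               
               ```xml
               <xsl:stylesheet version="1.0"
                   xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                 <!-- index every <order> by its customer-id attribute -->
                 <xsl:key name="orders-by-customer" match="order"
                          use="@customer-id"/>
               
                 <xsl:template match="customer">
                   <!-- key() hits the index instead of rescanning the
                        whole document for each customer, turning a
                        nested O(n^2) scan into roughly O(n) -->
                   <xsl:copy-of select="key('orders-by-customer', @id)"/>
                 </xsl:template>
               </xsl:stylesheet>
               ```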
              
              I worked with a guy who knew all about complexity analysis, but
              was quick to assert that "n is always small". That didn't hold -
              but he'd left the team by the time this became apparent.
       
          woodpanel wrote 15 hours 46 min ago:
          Same here.
          
           I've seen a couple of blue-chip websites that could be
           completely taken down just by requesting the sitemap (more than
           once per minute).
          
           PS: That being said, it is an implementation issue. But it may
           speak for itself that 100% of the XSLT projects I've seen had
           it.
       
        rpigab wrote 16 hours 31 min ago:
        My first resume was in XSLT, because I didn't want to duplicate HTML
        tags and style around, it worked really well, and it was fun to see the
        xml first when clicking "view source".
       
        julius wrote 16 hours 32 min ago:
        Anyone with recent real-world experience?
        
        From talking to AI, it seems the main issues would be:
        
        - SEO (googlebot)
        
        - Social Media Sharing
        
        - CSP heavy envs could be trouble
        
        Is this right?
       
        JimDabell wrote 16 hours 40 min ago:
        I used XSLT as a build system for websites way back in 1999–2000. The
        developer ergonomics were terrible. Looking at the example given, it
        doesn’t seem like anything much has changed.
        
        Has there been any progress on making this into something developers
        would actually like to use? As far as I can tell, it’s only ever used
        in situations where it’s a last resort, such as making Atom/RSS feeds
        viewable in browsers that don’t support them.
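         
         For reference, that last-resort use needs only one processing
         instruction at the top of the feed (the stylesheet filename here
         is made up):
         
         ```xml
         <?xml version="1.0" encoding="UTF-8"?>
         <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
         <feed xmlns="http://www.w3.org/2005/Atom">
           <title>Example feed</title>
           <!-- a browser applies feed.xsl and renders a friendly page;
                feed readers ignore the processing instruction and parse
                the Atom data as usual -->
         </feed>
         ```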
       
        almaight wrote 16 hours 48 min ago:
         What is needed more now is YAML, especially visualization of the
         YAML formats that k8s supports by default. Conversely, in the
         DevOps community, people need to generate YAML through HTML to
         run CI/CD. For example, this tool: k8s-generator.vercel.app
       
        aarroyoc wrote 16 hours 58 min ago:
         It's worth mentioning that the current XSLT version is 3.0, but
         browsers are only compatible with XSLT 1.0.
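         
         One example of the gap (a fragment sketch, not a complete
         stylesheet): the identity transform must be spelled out in the
         1.0 that browsers support, while 3.0 (supported by processors
         like Saxon, not by browsers) has a one-line declaration:
         
         ```xml
         <!-- XSLT 1.0 (what browsers support): explicit identity rule -->
         <xsl:template match="@*|node()">
           <xsl:copy>
             <xsl:apply-templates select="@*|node()"/>
           </xsl:copy>
         </xsl:template>
         
         <!-- XSLT 3.0 equivalent, not available in browsers -->
         <xsl:mode on-no-match="shallow-copy"/>
         ```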
       
        p2detar wrote 17 hours 6 min ago:
        I have last used XSLT probably about 2 decades ago. Back then XML was
        king. Companies were transferring data almost always using XML and
        translating it to a visual web-friendly format with XSLT was pretty
        neat. Cool tech and very impressive.
       
        tgma wrote 17 hours 8 min ago:
         [1] is an XML page styled with XSLT and updated by a bash script
         in CI.
        
   URI  [1]: https://packages.grpc.io
       
        pjmlp wrote 17 hours 9 min ago:
        I love XSLT, that is what I ported my site to after the CGI phase.
        
         Unfortunately that sentiment is not shared by many, and many
         developers have always had trouble understanding the FP approach
         of its design and looking beyond the XML.
        
         25 years later we have JSON and YAML formats reinventing the
         wheel, mostly badly, for what we already had nicely available in
         the XML ecosystem.
        
        Schemas, validation, graphical transformation tools, structured
        editors, comments, plugins, namespaces,...
       
          masklinn wrote 16 hours 31 min ago:
          > many developers always had issues understanding the FP approach of
          its design, looking beyond the XML.
          
          It would probably help if xslt was not a god-awful language even
          before it was expressed via an even worse syntax.
       
            pjmlp wrote 15 hours 23 min ago:
            The root cause is that many failed to grasp XML isn't to be
            manually written by hand on vi, rather it is a tool oriented
            format.
            
            Now ironically, we have to reach for tooling to work around the
            design flaws of json and yaml.
       
              masklinn wrote 15 hours 3 min ago:
              > The root cause is that many failed to grasp XML isn't to be
              manually written by hand on vi, rather it is a tool oriented
              format.
              
              That reads like an indictment of using XML for a programming
              language.
              
              Not that it has anything to do with the semantics of XSLT.
       
                pjmlp wrote 14 hours 59 min ago:
                I don't see why separate both.
                
                 XML is tooling-based, and there have been plenty of tools
                 to write XSLT in, including debugging and processing of
                 example fragments; naturally not something the vi crowd
                 ever became aware of amid their complaints.
       
          windowsworkstoo wrote 16 hours 59 min ago:
          Agree, when MS moved their office file formats to xml, I made plenty
          of money building extremely customizable templating engines all based
          on a very small amount of XSLT - it worked great given all the
          structure and metadata available in xml
       
        preaching5271 wrote 17 hours 19 min ago:
         Can't take it seriously with that language, sorry.
       
        em-bee wrote 17 hours 25 min ago:
        i have a static website with a menu. keeping the menu synchronized over
        the half dozen pages is a pain.
        
         my only options to fix this are javascript, xslt or a server side html
        generator. (and before you ask, static site generators are no better,
        they just make the generation part manual instead of automatic.)
        
        i don't actually care if the site is static. i only care that
        maintenance is simple.
        
        build tools are not simple. they tend to suffer from bitrot because
        they are not bundled with the hosting of the site or the site content.
        
        server side html generators (aka content management systems, etc.) are
        large and tie me to a particular platform.
        
        frontend frameworks by default require a build step and of course need
        javascript in the browser. some frameworks can be included without
        build tools, and that's better, but also overkill for large sites. and
        of course then you are tied to the framework.
        
        another option is writing custom javascript code to include an html
        snippet from another file.
        
        or maybe i can try to rig include with xslt. will that shut up the
        people who want to view my site without javascript?
        
         at some point there was discussion of an html include element,
         but it has been dropped. why?
       
          rossant wrote 13 hours 48 min ago:
          Frames. Use frames. They're the future. Definitely.
       
            em-bee wrote 13 hours 7 min ago:
            on stackoverflow on the question how to include html, one answer
            does indeed suggest frames...
       
          bambax wrote 14 hours 26 min ago:
          > i have a static website with a menu. keeping the menu synchronized
          over the half dozen pages is a pain
          
          You can totally do that with PHP? It can find all the pages, generate
          the menu, transform markdown to html for the current page, all on the
          fly in one go, and it feels instantaneous. If you experience some
          level of traffic you can put a CDN in front but usually it's not even
          necessary.
       
            em-bee wrote 12 hours 54 min ago:
            that's the server side html generator i already mentioned. ok, this
            one is not large, but it still ties me to a limited set of server
            platforms that support running php. and if i have to write code i
            may as well write javascript and get a platform independent
            solution.
            
            the point is, none of the solutions are completely satisfactory.
            every approach has its downsides. but most critically, all this
            complaining about people picking the wrong solution is just
            bickering that my chosen solution does not align with their
            preference.
            
            my preferred solution btw is to take a build-less frontend
            framework, and build my site with that. i did that with aurelia,
            and recently built a proof of concept with react.
       
              ndriscoll wrote 12 hours 16 min ago:
              You didn't actually indicate a downside to using xslt, and yes it
              would fit your use case of a static include for a shared menu,
              though the better way to do it is to move all of the shared
              pieces of your site into the template and then each page is just
              its content. Sort of like using a shared CSS file.
              
               To just do the menu, if your site is xhtml, IIRC you could
               link to the template, use a placeholder element (e.g.
               <my-menu/>) in the page, and then the template just gives a
               rule to expand that into your menu.
       
                em-bee wrote 12 hours 7 min ago:
                the downside to xslt is xslt itself, and lack of maintenance of
                 xslt support in the browser. (browsers only support xslt
                 1.0 and it looks like even that may be dropped in the
                 future, making its use not futureproof without server
                 side support)
       
                  ndriscoll wrote 7 hours 30 min ago:
                  I'm not sure how xslt itself is a downside. It's a pretty
                  natural template language to use if you already know HTML.
                  You don't need more than 1.0 for your simple use-case. e.g.
                   here's a complete example (tested in Firefox and Chrome):
                   
                       <?xml version="1.0" encoding="UTF-8"?>
                       <xsl:stylesheet version="1.0"
                           xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                         <xsl:output method="html"/>
                   
                         <!-- identity: copy everything else through -->
                         <xsl:template match="@*|node()">
                           <xsl:copy>
                             <xsl:apply-templates select="@*|node()"/>
                           </xsl:copy>
                         </xsl:template>
                   
                         <!-- expand <my-menu/> into the shared menu -->
                         <xsl:template match="my-menu">
                           <ul>
                             <li><a href="page1.xhtml">Page 1</a></li>
                             <li><a href="page2.xhtml">Page 2</a></li>
                             <li><a href="contact.xhtml">Contact</a></li>
                           </ul>
                         </xsl:template>
                       </xsl:stylesheet>
                   
                   Then here's a page to use it:
                   
                       <?xml version="1.0" encoding="UTF-8"?>
                       <?xml-stylesheet type="text/xsl" href="template.xsl"?>
                       <html>
                         <head>
                           <title>Welcome to my page</title>
                         </head>
                         <body>
                           <my-menu/>
                           <h1>Welcome to the page!</h1>
                           <p>This is the content</p>
                         </body>
                       </html>
                   
                   Anywhere you want more templates, you add another rule:
                   
                       <xsl:template match="my-footer">
                         <footer>Shared footer goes here</footer>
                       </xsl:template>
                   
                   And now you can use your custom <my-menu/> element
                   directly in your HTML. You can of course also have
                   attributes and children for your custom elements and do
                   all sorts of programming things like variables and
                   conditionals with XSLT if you dip your toes in a little
                   further.
                   
                   As far as longevity goes, it's lasted 25 years now, so that's
                   something. As far as I know, there are a bunch of government
                   services out there that still use it (which is great!
                   Governments should be making things cheap and simple, not
                   chasing trends), so removing support for it is somewhat
                   infeasible. If it were removed, you could always make a
                   Makefile that runs `xsltproc` on all of your xhtml files to
                   spit out html files, so worst case you have a build step, but
                   it's the world's easiest build step.
                   
                   One nice benefit of doing things this way is that just like
                   with CSS files, the more you pull into the template, the
                   smaller all of your pages can be since you have a single
                   static file for most of the page, and each page is only its
                   unique data. If you lean into it a little more and are
                   building an application, you can also have each page be its
                   own "API endpoint" by returning XML in your native domain
                   model. Databases can also output such XML directly, so you
                   can make highly efficient single queries to build entire
                   pages.
       
          rsolva wrote 16 hours 24 min ago:
          I recently tried building a website using Server Side Includes (SSI)
          with apache/nginx to make templates for the head, header and footer.
          Then I found myself missing the way Hugo does things, using a base
          template and injecting the content into the base template instead.
          
          This was easy do achieve with PHP with a super minimal setup, so I
          thought, why not? Still no build steps!
          
          PHP is quite ubiquitous and stable these days so it is practically
          equivalent to making a static site. Just a few sprinkles of dynamism
          to avoid repeting HTML all over the place.
       
        w3news wrote 17 hours 25 min ago:
        I remember that I did the same in 2005-2006, just combine XML with
        XSL(T) to let the browser transform the XML into HTML.
        After that, also combined XML with XSL(T) with PHP.
         At the time this was a modern way of working: separating concerns
         in the frontend. Around 2008-2009 I stopped using this method and
         started using e.g. Smarty.
         I still like the idea of using all the native methods from
         browsers that are described at the W3C.
         No frameworks or libraries needed; keep it simple and robust.
        
         I think there are just a few who know XSL(T) these days, or who
         need a refresher (like me).
       
        nmeofthestate wrote 17 hours 28 min ago:
        XSLT is cool and was quite mind-expanding for me when it came out - I
        wouldn't say it's "grug brain" level technology at all. An XML language
        for manipulating XML - can get quite confusing and "meta". I wouldn't
        pick it as a tool these days.
       
        brospars wrote 17 hours 33 min ago:
        All that fuss just to deploy a static website on Vercel? :p
       
        elcapitan wrote 17 hours 37 min ago:
        > how I can run it? open XML file 
        > open blog.xml -a Safari
        
        This didn't work for me on my browsers (FF/Chrome/Safari) on Mac,
        apparently XSLT only works there when accessed through HTTP:
        
            $ python3 -m http.server --directory .
            $ open http://localhost:8000/blog.xml
        
         I remember long hours using XSLT to transform custom XML formats
         into some other representation used by wxWindows in the 2000s;
         maybe I should give it a shot again for the Web :)
       
          notpushkin wrote 16 hours 56 min ago:
          > --directory .
          
           Huh, neat! Didn't know it supported that. (python3 -m http.server
          will default to current directory anyway though)
       
            susam wrote 16 hours 27 min ago:
             Yes! I often use a command like this to test my statically
             generated website locally:
            
              python3 -m http.server -d _site/
            
            Example:
            
   URI      [1]: https://github.com/susam/susam.net/blob/0.3.0/Makefile#L26...
       
        sivanmz wrote 17 hours 46 min ago:
        I worked with XSLT almost from the beginning of my career and it was a
        blessing in disguise. Shoutout to Michael Kay.
       
          azurezyq wrote 17 hours 30 min ago:
           My first internship was at Intel, on an XSLT 2.0 processor.
           Michael Kay is a legend indeed. IIRC, Saxon was his one-man
           creation. Crazy!
       
        p0w3n3d wrote 17 hours 47 min ago:
        Ok, so it might be a long shot, but I would say that
        
        1. the browsers were inconsistent in 1990-2000 so we started using JS
        to make them behave the same
        
        2. meanwhile the only thing we needed were good CSS styles which were
        not yet present and consistent behaviour
        
        3. over the years the browsers started behaving the same (mainly
        because Highlander rules - there can be only one, but Firefox is also
        coping well)
        
        4. but we already got used to having frameworks that would make the
        pages look the same on all browsers. Also the paradigm was switched to
        have json data rendered
        
         5. with current technology we could cope with server-generated
         old-school web pages because they would have a low footprint,
         work faster and require less memory.
        
        Why do I say that? Recently we started working on a migration from a
         legacy system. It looks like the 2000s standard of one page per
         HTTP request. Every action like add, remove, etc. requires an
         HTTP refresh. However it works much faster than our react system.
         Because:
        
        1. Nowadays the internet is much faster
        
        2. Phones have a lot of memory which is wasted by js frameworks
        
        3. in the backend all's almost same old story - CRUD CRUD and CRUD (+
        pagination, + transactions)
       
          ozim wrote 15 hours 29 min ago:
           AJAX and updating the DOM wasn't there just to "make things
           faster"; it was implemented to change the paradigm of "web
           sites" or "web documents", because the web was for displaying
           documents. A full page reload makes sense if you are working in
           a document paradigm.
          
          It works well here on HN for example as it is quite simple.
          
          There are a lot of other examples where people most likely should do
          a simple website instead of using JS framework.
          
          But "we could all go back to full page reloads" is not true, as there
          really are proper "web applications" out there for which full page
          reloads would be a terrible UX.
          
          To summarize there are:
          
          "websites", "web documents", "web forms" that mostly could get away
          with full page reloads
          
          "web applications" that need complex stuff presented and manipulated
          while full page reload would not be a good solution
       
            alganet wrote 11 hours 12 min ago:
            > full page reloads
            
            grug remember ancestor used frames
            
            then UX shaman said frame bad all sour faced frame ugly they said,
            multiple scrollbar bad
            
            then 20 years later people use fancy js to emulate frames grug
            remember ancestor was right
            
   URI      [1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Referenc...
       
              kbolino wrote 9 hours 56 min ago:
              Classic frames were quite bad. Every frame on a page was a
              separate, independent, coequal instance of the browser engine.
              This is almost never what you actually want. The
              header/footer/sidebar frames are subordinate and should not
              navigate freely. Bookmarks should return me to the frameset state
              as I left it, not the default for that URL. History should
              contain the frameset state I saw, not separate entries for each
              individual frame.
              
              Even with these problems, classic frames might have been
              salvageable, but nobody bothered to fix them.
       
                p0w3n3d wrote 5 hours 57 min ago:
                Iframes are no longer the thing? I must have slept over this
                scene
       
                  kbolino wrote 5 hours 52 min ago:
                   By "classic frames", I mean <frameset>/<frame>, not
                   <iframe>. Though iframes have *some* of the same problems,
                   they don't have *all* of the same problems. They also tend
                   to be used differently, though you can certainly create a
                   frameset-like experience using only iframes.
       
                alganet wrote 9 hours 21 min ago:
                You can see frames in action on the POSIX spec: [1] They can
                navigate targeting any other frame. For example, clicking
                "System Interfaces" updates the bottom-left navigation menu,
                while keeping the state of the main document frame.
                
                 It's quite simple, it just uses the `target` attribute
                 (`target="_blank"` remains popular as a vestigial limb of
                 this whole approach).
                
                This also worked with multiple windows (yes, there were
                multi-window websites that could present interactions that
                handled multiple windows).
                
                 The popular iframe is sort of salvaged from frame tech; it
                 is still used extensively and not deprecated.
                
   URI          [1]: https://pubs.opengroup.org/onlinepubs/9799919799/
       
                  kbolino wrote 9 hours 19 min ago:
                  An iframe is inherently subordinate. This solves one of the
                  major issues with classic frames.
                  
                  Classic frames are simple. Too simple. Your link goes to the
                  default state of that frameset. Can you link me any
                  non-default state? Can I share a link to my current state
                  with you?
       
                bmacho wrote 9 hours 36 min ago:
                > Every frame on a page was a separate, independent, coequal
                instance of the browser engine. This is almost never what you
                actually want.
                
                 Most frames were used for a menu, navigation, a frame for
                 data, a frame for additional information about the data. And
                 they are great for that. I don't think that frames are
                 different instances of the browser engine(?) but that
                 doesn't matter in the slightest(?). They are fast and
                 lightweight.
                
                > The header/footer/sidebar frames are subordinate and should
                not navigate freely.
                
                 They have the ability to navigate freely but obviously they
                 don't do that; they navigate other frames.
       
                  kbolino wrote 9 hours 23 min ago:
                  With a frameset page:
                  
                  History doesn't work right
                  
                  Bookmarks don't work right -- this applies to link sharing
                  and incoming links too
                  
                  Back button doesn't work right
                  
                  The concept is good. The implementation is bad.
       
                    alganet wrote 8 hours 43 min ago:
                    > History doesn't work right
                    
                    > Bookmarks don't work right -- this applies to link
                    sharing and incoming links too
                    
                    > Back button doesn't work right
                    
                    Statements that apply to many JS webpages too.
                    
                    pushState/popState came years after frames lost popularity.
                    These issues are not related to their downfall.
                    
                    Relax, dude. I'm not claiming we should use frames today.
                    I'm saying they were simple good tools for the time.
       
                      kbolino wrote 8 hours 32 min ago:
                      They were never good. They were always broken in these
                      ways. For some sites, it wasn't a big deal, because the
                      only link that ever mattered was the main link. But a lot
                      of places that used frames were like the POSIX specs or
                      Javadocs, and they sucked for anything other than
                      immediate, personal use. They were not deprecated because
                      designers hated scrollbars (they do hate them, and that
                      sucks too, but it's beside the point).
                      
                      And, ironically, the best way to fix these problems with
                      frames is to use JavaScript.
       
                        alganet wrote 8 hours 8 min ago:
                        > They were never good
                        
                        They were good enough.
                        
                        > For some sites, it wasn't a big deal
                        
                        Precisely my point.
                        
                        > POSIX specs or Javadocs
                        
                        Hey, they work for me.
                        
                        > the best way to fix these problems with frames is to
                        use JavaScript.
                        
                        Some small amounts of javascript. Mainly, proxy the
                        state for the main frame to the address bar. No need
                        for virtual dom, babel, react, etc.
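                         
                         A minimal sketch of that idea (the frame names and
                         query-param format here are illustrative, not from
                         any real site): mirror each frame's location into
                         the address bar so bookmarks and shared links
                         capture the frameset state.
                         
                         ```javascript
                         // Pure helpers: encode/decode frameset state as
                         // query params. In a browser you'd call
                         // history.replaceState(null, "", encodeFrameState(...))
                         // whenever a frame finishes navigating, and restore
                         // from location.search on load.
                         function encodeFrameState(frames) {
                           const params = new URLSearchParams();
                           for (const [name, src] of Object.entries(frames)) {
                             params.set(name, src);
                           }
                           return "?" + params.toString();
                         }
                         
                         function decodeFrameState(query) {
                           const frames = {};
                           for (const [name, src] of new URLSearchParams(query)) {
                             frames[name] = src;
                           }
                           return frames;
                         }
                         ```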
                        
                        --
                        
                        _Again_, you're arguing like I'm defending frames for
                        use today. That's not what I'm doing.
                        
                        Many websites follow a "left navigation, center
                        content" overall layout, in which the navigation stays
                        somehow stationary and the content is updated. Frames
                        were broken, but were in the right direction. You're
                        nitpicking on the ways they were broken instead of
                        seeing the big picture.
       
                          kbolino wrote 7 hours 50 min ago:
                          Directionally correct but badly done can poison an
                          idea. Frames sucked and never got better.
                          
                          Along with other issues, this gave rise to AJAX and
                          SPAs and JS frameworks. A big part of how we got
                          where we are today is because the people making the
                          web standards decided to screw around with XHTML and
                          "the semantic web" (another directionally correct but
                          badly done thing!) and other BS for about a decade
                          instead of improving the status quo.
                          
                          So we can and often should return to ancestor but if
                          we're going to lay blame and trace the history, we
                          ought to do it right.
       
                            alganet wrote 7 hours 9 min ago:
                            Your history is off, and you are mixing different
                            eras and browser standards with other initiatives.
                            
                             Frames gave way to (the incorrect use of)
                             tables. The table era was way worse than what we
                             have today. Transparent gif spacers, colspan...
                             it was all hacks.
                            
                            The table era gave birth to a renewal of web
                            standards. This ran mostly separately from the
                            semantic web (W3C is a consortium, not a single
                            central group).
                            
                            The table era finally gave way to the jQuery era.
                            Roughly around this time, browser standards got
                            their shit together... but vendors didn't.
                            
                            Finally, the jQuery era ended with the rise of full
                            JS frameworks (backbone first, then ember, winjs,
                            angular, react). Vendors operating outside
                            standards still dominate in this era.
                            
                             There are at least two whole generations between
                             frames and SPAs. That's why I used the word
                             "ancestor": it's 90s tech I barely remember
                             because I was a teenager. All the following eras
                             I lived through and experienced first hand.
                            
                             The poison on the frames idea wore off ages ago.
                             The fact that websites built without frames
                             still resemble how frames were used is proof of
                             that; they just don't share the same
                             implementation. The "idea" is seen with kind
                             eyes today.
       
                              kbolino wrote 6 hours 43 min ago:
                              I feel like we're mostly in violent agreement.
                              
                              The key point about frames in the original
                              context of this thread as I understood it was
                              that they allowed a site to only load the content
                              that actually changes. So accounting for the
                              table-layout era doesn't really change my
                              perspective: frames were so bad, that web sites
                              were willing to regress to full-page-loads
                              instead, at least until AJAX came along -- though
                              that also coincides with the rise of the (still
                              ongoing) div-layout era.
                              
                              I agree wholeheartedly that the concept of
                              partial page reloading in a rectilinear grid is
                              alive and well. Doing that with JavaScript and
                              CSS is the whole premise of an SPA as I
                              understand it, and those details are key to the
                              difference between now and the heyday of frames.
                              But there was also a time when full-page-loading
                              was the norm between the two eras, reflecting the
                              disillusionment with frames as they were
                              implemented and ossified.
                              
                               The W3C (*) spent a good few years working on
                               multiple things, most of which didn't pan out.
                              Maybe I'm being too harsh, but it feels like a
                              lot of their working groups just went off and
                              disconnected from practice and industry for far
                              too long. Maybe that was tangential to the
                              ~decade-long stagnation of web standards, but
                              that doesn't really change the point of my
                              criticism.
                              
                              * = Ecma has a part in this too, since JavaScript
                              was standardized by them instead of W3C for
                              whatever reason, and they also went off into
                              la-la land for roughly the same period of time
       
                                alganet wrote 5 hours 52 min ago:
                                > I feel like we're mostly in violent
                                agreement.
                                
                                Probably, yes!
                                
                                > So accounting for the table-layout era
                                doesn't really change my perspective: frames
                                were so bad, that web sites were willing to
                                regress to full-page-loads instead
                                
                                That's where we disagree.
                                
                                 From my point of view, what brought sites to
                                 full page loads was designers. Design folk
                                 wanted to break out of the "left side
                                 navigation, right content" mold and make
                                 good-looking visual experiences.
                                
                                 This all started with sites like this: [1]
                                 This website is an interstitial fossil
                                 between frames and the full table nightmare.
                                 The homepage represents what (at the time)
                                 was a radical way of experiencing the web.
                                 
                                 It still carries vestiges of frames in other
                                 sections: [2]
                                
                                However, the home is their crown jewel and it
                                is representative of the years that followed.
                                
                                This new visual experience was enough to
                                discard partial loading. And for a while, it
                                stayed like this.
                                
                                JS up to this point was still a toy. DHTML,
                                hover tricks, trinkets following the mouse
                                cursor. It was unthinkable to use it to manage
                                content.
                                
                                It was not until CSS zen garden, in 2003, that
                                things started to shift: [3] Now, some people
                                were saying that you could do pretty websites
                                without tables. By this time, frames were
                                already forgotten and obsolete.
                                
                                 So, JS never killed frames. There was a
                                 whole generation in between that never used
                                 frames, but also never used JS to manage
                                 content (no AJAX, no innerHTML shenanigans,
                                 nothing).
                                
                                Today, websites look more like the POSIX spec
                                (in structure and how content is loaded) than
                                the SpaceJam website that defined a generation.
                                The frames idea is kind of back in town. It
                                doesn't matter that we don't use the same 90s
                                tech, they were right about content over style,
                                right about partial loading, right about a lot
                                of structural things.
                                
   URI                          [1]: https://www.spacejam.com/1996/
   URI                          [2]: https://www.spacejam.com/1996/cmp/jamc...
   URI                          [3]: https://csszengarden.com/pages/about/
       
                                  kbolino wrote 5 hours 13 min ago:
                                  I appreciate looking at things from a
                                  different perspective! I can see your line of
                                  argument now.
                                  
                                  I should clarify. I don't think JS killed
                                  frames, that's not what I meant. If anything,
                                  I think JS could have saved frames. But the
                                  failure of frames left a gap that eventually
                                  JS (esp. with AJAX) filled. Lots of other
                                  stuff was going on at this time too,
                                  including alternative tech like Java, Flash,
                                  and ActiveX, all of which were trying to do
                                  more by bypassing the "standard" tech stack
                                  entirely.
                                  
                                  I think the ossification of web standards
                                  from ca. 1999 to 2012, combined with the
                                  rapidly growing user base, and with web
                                  developers/designers aggressively pushing the
                                  envelope of what the tech could do, put the
                                  standard stuff on the back foot pretty badly.
                                  Really, I'm talking about the whole ecosystem
                                  and not just the standards bodies themselves;
                                  there was an era where e.g. improving HTML
                                  itself was just not the active mentality.
                                  Both inside and outside of W3C (etc.), it
                                  seemed that nobody cared to make the standard
                                  stuff better. W3C focused on unproductive
                                  tangents; web devs focused on non-standard
                                  tech or "not the intended use" (like tables
                                  for layout).
                                  
                                   So I think we can say that frames died a
                                  somewhat unfair death, caused partly by their
                                  initial shortcomings, partly by people trying
                                  to break outside of the (literal) boxes they
                                  imposed, and partly by the inability of the
                                  standard tech to evolve and address those
                                  shortcomings in a timely fashion. But just as
                                  there was a reason they failed, there was a
                                  reason they existed too.
       
                    bmacho wrote 9 hours 13 min ago:
                    Yup, they are not enough for an SPA, not without
                    javascript. And if you have javascript to handle history,
                    URL, bookmarks and all that, you can just use divs without
                    frames.
       
                      kbolino wrote 9 hours 5 min ago:
                      This has nothing to do with SPAs.
                      
                      Take the POSIX specs linked in a sibling comment.
                      
                      Or take the classic Javadocs. I am currently looking at
                      the docs for java.util.ArrayList. Here's a link to it
                      from my browser's URL bar: [1] But you didn't go to the
                      docs for java.util.ArrayList, you went to the starting
                      page. Ok, fine, I'll link you directly to the ArrayList
                      docs, for which I had to "view frame source" and grab the
                      URL: [1] java/util/ArrayLis...
                      
                       Ok, but now you don't see any of the other frames, do
                       you? And I had one of those frames pointing at the
                       java.util package. So none of these links show you what
                       I saw.
                      
                      And if I look in my history, there is no entry that
                      corresponds to what I actually saw. There are separate
                      entries for each frame, but none of them load the
                      frameset page with the correct state.
                      
                      These are strongly hyperlinked reference documents.
                      Classic use of HTML. No JavaScript or even CSS needed.
                      
   URI                [1]: https://docs.oracle.com/javase/8/docs/api/
   URI                [2]: https://docs.oracle.com/javase/8/docs/api/java/u...
       
                        bmacho wrote 8 hours 45 min ago:
                         This is exactly what I wrote? But let me rephrase it:
                         frames alone are not enough for an SPA; they can't
                         keep state, you need javascript or a dynamic
                         webserver for that.
                        
                        > Ok, fine, I'll link you directly to the ArrayList
                        docs, for which I had to "view frame source" and grab
                        the URL:
                        
                         You could've just right-clicked on the "frames" link
                         and copied the URL: [1]. They use javascript to
                         navigate based on the search params in the URL. It's
                         not great,
                        it should update the URL as you navigate, maybe you can
                        send them a PR for that. (And to change state of the
                        boxes on the left too.)
                        
                        Also browser history handling is really messy and hard
                        to get right, regardless of frames.
                        
                        > And if I look in my history, there is no entry that
                        corresponds to what I actually saw.
                        
                        ? If you write a javascript +1 button that updates a
                        counter, there won't be a corresponding entry in your
                        history for the actual states of your counter. I don't
                        see how that is a fundamental problem with
                        javascript(?).
                        
   URI                  [1]: https://docs.oracle.com/javase/8/docs/api/inde...
       
                          kbolino wrote 8 hours 36 min ago:
                          It's cool that they have that link. Most frame sites
                          didn't. JS actually isn't necessary to make that
                          work, they could have just interpolated the requested
                          page server-side. But it only correctly points to one
                          frame. It's the most important frame, to be fair, but
                          it doesn't do anything for the other two frames.
                          
                          I don't understand how pre-HTML5, non-AJAX reference
                          docs qualify as an "SPA". This is just an ordinary
                          web site.
       
            alerighi wrote 13 hours 25 min ago:
             Yes, of course for web applications you can't do full page
             reloads (you couldn't back in the day either, when web
             applications existed in the form of Java applets or Flash
             content).
            
             Let's face it, most uses of JS frameworks are for blogs or
             things where you wouldn't even notice a full page reload:
             nowadays browsers are advanced and only repaint the screen when
             they have finished loading the content, meaning that out of the
             box they mostly do what React does (only re-render DOM elements
             that changed). So a page reload on a page where only one button
             changes at the UI level does not result in a flicker or a
             visible reload of the whole page.
            
             BTW, even React now suggests running the code server-side where
             possible (it's the default in Next.js), since that makes the
             project easier to maintain, debug, and test, as well as scoring
             better on SEO with search engines.
            
             I'm still a fan of the "old" MVC model of classical frameworks
             such as Laravel, Django, Rails, etc. To me they make projects
             that are easier to maintain overall, because all code runs in
             the backend (except maybe some jQuery animation client side),
             the model is well separated from the view, there is no API to
             maintain, etc.
       
          bob1029 wrote 15 hours 44 min ago:
          > at the current technology we could cope with server generated
          old-school web pages because they would have low footprint, work
          faster and require less memory
          
          I've got a .NET/Kestrel/SQLite stack that can crank out SSR responses
          in no more than ~4 milliseconds. Average response time is measured in
          hundreds of microseconds when running release builds. This is with
          multiple queries per page, many using complex joins to compose
          view-specific response shapes. Getting the data in the right shape
          before interpolating HTML strings can really help with performance in
          some of those edges like building a table with 100k rows. LINQ is
          fast, but approaches like materializing a collection per row can get
          super expensive as the # of items grows.
          
          The closer together you can get the HTML templating engine and the
          database, the better things will go in my experience. At the end of
          the day, all of that fancy structured DOM is just a stream of bytes
          that needs to be fed to the client. Worrying about elaborate
          AST/parser approaches when you could just use StringBuilder and
          clever SQL queries has created an entire pointless, self-serving
          industry. The only arguments I've ever heard against using something
          approximating this boil down to arrogant security hall monitors who
           think developers can't be trusted to use the HTML escape function
          properly.
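           
           A hypothetical sketch of that "just escape and concatenate"
           approach, in JS rather than .NET for brevity (the function names
           and row shape are illustrative):
           
           ```javascript
           // Escape untrusted values before interpolating them into HTML.
           function escapeHtml(s) {
             return String(s)
               .replace(/&/g, "&amp;")
               .replace(/</g, "&lt;")
               .replace(/>/g, "&gt;")
               .replace(/"/g, "&quot;");
           }
           
           // Shape the rows first (the equivalent of a well-shaped SQL
           // result set), then build one flat string -- no DOM, no AST.
           function renderTable(rows) {
             const parts = ["<table>"];
             for (const row of rows) {
               parts.push(
                 "<tr><td>" + escapeHtml(row.name) +
                 "</td><td>" + escapeHtml(row.value) + "</td></tr>"
               );
             }
             parts.push("</table>");
             return parts.join("");
           }
           ```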
       
            chriswarbo wrote 11 hours 58 min ago:
            > arrogant security hall monitors who think developers cant be
            trusted to use the HTML escape function properly.
            
            Unfortunately, they're not actually wrong though :-(
            
            Still, there are ways to enforce escaping (like preventing
            "stringly typed" programming) which work perfectly well with
            streams of bytes, and don't impose any runtime overhead (e.g.
            equivalent to Haskell's `newtype`)
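             
             An illustrative analogue of that idea (names are made up):
             wrap escaped output in a distinct type so raw strings can't
             flow into markup by accident. In Haskell or TypeScript the
             check happens at compile time with zero runtime cost; plain JS
             can only approximate it at runtime.
             
             ```javascript
             // Only escapeHtml can produce SafeHtml values, so any string
             // reaching emit() without passing through it is rejected.
             class SafeHtml {
               constructor(escaped) { this.escaped = escaped; }
             }
             
             function escapeHtml(s) {
               const escaped = String(s)
                 .replace(/&/g, "&amp;")
                 .replace(/</g, "&lt;")
                 .replace(/>/g, "&gt;");
               return new SafeHtml(escaped);
             }
             
             function emit(fragment) {
               if (!(fragment instanceof SafeHtml)) {
                 throw new TypeError("refusing to emit unescaped string");
               }
               return fragment.escaped;
             }
             ```
             
             `emit(escapeHtml(userInput))` works; `emit(userInput)` throws.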
       
          viraptor wrote 17 hours 6 min ago:
          That timeline doesn't sound right to me. JS was rarely used to
          standardise behaviour - we had lots of user agent detection and
          relying on quirks ordering to force the right layout. JS really was
          for the interactivity at the beginning - DHTML and later AJAX. I
          don't think it even had easy access to layout related things? (I may
          be mistaken though) CSS didn't really make things more consistent
           either - once it became capable it was still a mess. Sure, CSS
           Zen Garden was great and everyone was so impressed with semantic
           markup while coding tables everywhere. It took ages for anything
           to actually pass the first two Acid tests. I'm not sure frameworks
           ever really impacted the "consistent looks" side of things - by
           the time we grew out of jQuery, CSS was the looks thing.
          
          Then again, it was a long time. Maybe it's me misremembering.
       
            middleagedman wrote 15 hours 37 min ago:
            Old guy here. Agreed- the actual story of web development and
            JavaScript’s use was much different.
            
             HTML was the original standard, not JS. HTML was evolving early
             on, but the web was much more standard than it is today.
            
             Early-mid 1990s web was awesome. HTML served over HTTP, and
             pages used header tags, text, hr, then some background color
             variation and images. CGI in a cgi-bin dir was used for
             server-side functionality, often written in Perl or C: [1]
             Back then, if you
            learned a little HTML, you could serve up audio, animated gifs, and
            links to files, or Apache could just list files in directories to
            browse like a fileserver without any search. People might get a
            friend to let them have access to their server and put content up
            in it or university, etc. You might be on a server where they had a
            cgi-bin script or two to email people or save/retrieve from a
            database, etc. There was also a mailto in addition to href for the
            a (anchor) tag for hyperlinks so you could just put you email
            address there.
            
             Then a ton of new things were appearing. PHP on the
             server side. JavaScript came out but wasn’t used much except
             for a couple of party tricks. ColdFusion on the server side.
             Around the same time was VBScript, which was nice but just for
             IE/Windows, and it was big. Perl and then PHP were also big on
             the server side. If you installed Java you could use applets,
             which were neat little applications on the page. Java Web
             Server came out on the server side and there were JSPs. Tomcat
             came out on the server side. ActionScript came out to basically
             replace VBScript but do it on the server side with ASPs.
             VBScript support went away.
            
             During this whole time, JavaScript had just evolved into more
             party tricks and things like form validation. It was fun, but
             it was PHP, ASP, JSP/Struts/etc. on the server side in the
             early 2000s, with Rails coming out and ColdFusion mostly going
             away. Facebook was PHP mid-2000s, and the LAMP stack, etc.
             People were breaking up images using tables, with CSS coming
             out to slow adoption. It wasn’t until the mid-to-late 2000s
             that JavaScript started being used much for UI, and Google’s
             fostering of it and development of V8 got it taken more
             seriously, because it was slow before then. And when it finally
             got big, there was an awful several years of
             framework-after-framework super-JavaScript ADHD, which drove a
             lot of developers to leave web development, because of the move
             from server-side to client-side, along with NoSQL DBs and
             seemingly stupid things happening like client-side credential
             storage, ignoring ACID for data, etc.
            
            So, all that to say: it wasn’t until 2007-2011 that JS took
            off.
            
   URI      [1]: https://en.m.wikipedia.org/wiki/Common_Gateway_Interface
       
              p0w3n3d wrote 2 hours 27 min ago:
               I must contradict. In 2005-6 I was doing PRADO development;
               it was already on the market as a framework that
               extensively used JavaScript (mimicking Microsoft's ASP.NET
               forms) to make AJAX requests and regenerate the state of
               components placed on the web page using the DOM.
              
               The thing was that it was really hard to write code that
               did the same DOM manipulation and placement on all the
               browsers, so a framework that could do that was a great
               help. I started my webpage development around 2000 with
               `if (document.forms /* is ie */) ...` and was finding ways
               to run IE on my Linux computer to test the webpage
               rendering there. CSS 2 was released in 1998 and could have
               changed everything; it was the deus ex machina everyone
               expected, except it didn't work, especially on IE (which
               had the majority of the market, and especially if you
               developed a business application you had to count on it
               being the majority of your clients, if not the only ones).
               So in CSS 2 you could __allegedly__ do the things you
               really needed, like placing elements together or in related
               positions instead of calculating browser sizes etc., but it
               didn't work correctly, so you had to fall back to
               JavaScript: `document.getElementById().position =
               screenWidth/2` etc.
              
               So according to my memory: (1) these were the dark times,
               mainly because of m$ being lazy and abusing their market
               position; (2) we used JavaScript to position elements,
               colorize them, make complicated bevels, borders etc.; (3)
               this created the gap that Google could use to gain power
               (and we admired them at the time as the saviours of the
               web); (4) Opera was the thing and a Resistance icon
               (boasting that it fulfilled all the standards and was fast,
               though it failed a few times too).
              
              also DSL, LAN internet sharing and AOL (in Poland 0202122
              ppp/ppp), tshshshshshs, tidutidumtidum, tshshshshshsh ...
       
              nasduia wrote 14 hours 18 min ago:
              Though much less awesome was all the Flash, Realplayer and other
              plugins required.
       
                sim7c00 wrote 12 hours 47 min ago:
                Realplayer. christ, forgot all about that one.... thanks...
                frozenface
       
                  p0w3n3d wrote 12 hours 24 min ago:
                  ah the feelings. those were the times
       
                    viraptor wrote 12 hours 16 min ago:
                    If your site didn't have a flash animated menu, was it even
                    a real website at that time?
       
            jonwinstanley wrote 16 hours 56 min ago:
            For me, JQuery was the thing that fixed the browser
            inconsistencies. If you used JQuery for everything, your code
            worked in all the browsers.
            
            This was maybe 2008?
       
              viraptor wrote 14 hours 25 min ago:
               I wasn't clear: jQuery was definitely used for browser
               inconsistencies, but in behaviour, not layout. It had just
               a small overlap with CSS functionality (at first, until it
               all got exposed to JS)
       
              Cthulhu_ wrote 15 hours 37 min ago:
              Before jQuery there was Prototype.js, part of early AJAX support
              in RoR, which fixed inconsistencies in how browsers could fetch
              data, especially in the era between IE 5 and 7 (native JS
              `XMLHttpRequest` was only available from IE 7 onwards, before
              that it was some ActiveX thing. The other browsers supported it
              from the get go). My memory is vague, but it also added stuff
              like selectors, and on top of that was script.aculo.us which
              added animations and other such fanciness.
              
              jQuery took over very quickly though for all of those.
       
                arkh wrote 15 hours 21 min ago:
                > native JS `XMLHttpRequest` was only available from IE 7
                onwards, before that it was some ActiveX thing.
                
                Almost sure it was available on IE6. But even if not, you could
                emulate it using hidden iframes to call pages which embedded
                some javascript interacting with the main page. I still have
                fond memories of using mootools for lightweight nice animations
                and less fond ones of dojo.
       
                  JimDabell wrote 13 hours 46 min ago:
                  Internet Explorer 5–6 was the ActiveX control. Then other
                  browsers implemented XMLHTTPRequest based on how that ActiveX
                  control worked, then Internet Explorer 7 implemented it
                  without ActiveX the same way as the other browsers, and then
                  WHATWG standardised it.
                  
                  Kuro5hin had a dynamic commenting system based on iframes
                  like you describe.
       
              benediktwerner wrote 16 hours 23 min ago:
              Wasn't it more about inconsistencies in JS though? For stuff
              which didn't need JS at all, there also shouldn't be much need
              for JQuery.
       
                dspillett wrote 15 hours 24 min ago:
                jQuery, along with a number of similar attempts and more
                single-item-focused polyfills¹ was as much about DOM
                inconsistencies as JS ones. It was also about making dealing
                with the DOM more convenient² even where it was already
                consistent between commonly used browsers.
                
                DOM manipulation of that sort is JS dependent, of course, but I
                think considering language features and the environment, like
                the DOM, to be separate-but-related concerns is valid. There
                were less kitchen-sink-y libraries that only concentrated on
                language features or specific DOM features. Some may even
                consider a few parts in a third section: the standard library,
                though that feature set might be rather small (not much more
                than the XMLHTTPRequest replacement/wrappers?) to consider its
                own thing.
                
                > For stuff which didn't need JS at all, there also shouldn't
                be much need for JQuery.
                
                That much is mostly true, as it by default didn't do anything
                to change non-scripted pages. Some polyfills for static HTML
                (for features that were inconsistent, or missing entirely in,
                usually, old-IE) were implemented as jQuery plugins though.
                
                --------
                
                [1] Though I don't think they were called that back then, the
                term coming later IIRC.
                
                [2] Method chaining³, better built-in searching and filtering
                functions⁴, and so forth.
                
                [3] This divides opinions a bit though was generally popular,
                some other libraries did the same, others tried different
                approaches.
                
                [4] Which we ended up coding repeatedly in slightly different
                ways when needed otherwise.
       
              JimDabell wrote 16 hours 24 min ago:
              jQuery in ~2008 was when it kinda took off, but jQuery was itself
              an outgrowth of work done before it on browser compatibility with
              JavaScript. In particular, events.
              
              Internet Explorer didn’t support DOM events, so
              addEventListener wasn’t cross-browser compatible. A lot of
              people put work in to come up with an addEvent that worked
              consistently cross-browser.
              
              The DOMContentLoaded event didn’t exist, only the load event.
              The load event wasn’t really suitable for setting up things
              like event handlers because it would wait until all external
              resources like images had been loaded too, which was a
              significant delay during which time the user could be interacting
              with the page. Getting JavaScript to run consistently after the
              DOM was available, but without waiting for images was a bit
              tricky.
              
              These kinds of things were iterated on in a series of blog posts
              from several different web developers. One blogger would publish
              one solution, people would find shortcomings with it, then
              another blogger would publish a version that fixed some things,
              and so on.
              
              This is an example of the kind of thing that was happening, and
              you’ll note that it refers to work on this going back to 2001:
              [1] When jQuery came along, it was really trying to achieve two
              things: firstly, incorporating things like this to help browser
              compatibility; and second, to provide a “fluent” API where
              you could chain API calls together.
              
   URI        [1]: https://robertnyman.com/2006/08/30/event-handling-in-jav...
       
              jbverschoor wrote 16 hours 49 min ago:
              Probably 2005.
              
              2002, I was using “JSRS”, and returning http 204/no content,
              which causes the browser to NOT refresh/load the page.
              
              Just for small interactive things, like a start/pause button for
              scheduled tasks. The progress bar etc.
              
              But yeah, in my opinion we lost about 15 years of proper
              progress.
              
               "The network is the computer" came true.
              
              The SUN/JEE model is great.
              
              It’s just that monopolies stifle progress and better standards.
              
              Standards are pretty much dead, and everything is at the
              application layer.
              
              That said..  I think XSLT sucks, although I haven’t touched it
              in almost 20 years. The projects I was on, there was this
              designer/xslt guru. He could do anything with it.
              
              XPath is quite nice though
       
                JimDabell wrote 16 hours 19 min ago:
                > But yeah, in my opinion we lost about 15 years of proper
                progress.
                
                Internet Explorer 6 was released in 2001 and didn’t drop
                below 3% worldwide until 2015. So that’s a solid 14 years of
                paralysis in browser compatibility.
       
                  jbverschoor wrote 14 hours 7 min ago:
                  Time flies when you’re having fun
       
          em-bee wrote 17 hours 18 min ago:
           with current technology we could cope with server-generated
           old-school web pages just fine: they have a low footprint, work
           faster and require less memory.
          
          unless you have a high latency internet connection:
          
   URI    [1]: https://news.ycombinator.com/item?id=44326816
       
            p0w3n3d wrote 17 hours 14 min ago:
             however, when you have a high-latency connection, the "thick
             client" json-filled webapp will only have its advantages if
             most of the business logic happens in the browser. E.g.
             Google Docs: great, and much better than it used to be with
             the 2000s design style. An application that searches for
             apartments to rent? Not really, I would say.
            
            -- edit --
            
             by the way, in 2005 I programmed using a very funny PHP
             framework, PRADO, that sent every change in the UI to the
             server. Boy, was it slow and server-heavy. This was a
             direction we should never have gone...
       
              catmanjan wrote 16 hours 14 min ago:
              Lol you'd hate to see what blazor is doing then
       
                Tade0 wrote 16 hours 7 min ago:
                Or Phoenix.LiveView for that matter.
       
                  p0w3n3d wrote 2 hours 23 min ago:
                  I have no hate/love relation to that matter. Tbh I don't
                  care, but my phone gets hot when it has to load another
                  5/10/20/100MB Single Page Application that displays a few
                  lines of nicely formatted text, an animated background and a
                  button "subscribe"
                  
                  By the way, GWT did it before.
       
              em-bee wrote 16 hours 59 min ago:
               > Application that searches the apartments to rent? Not
               really I would say.
               
               not a good example. i can't find it now, but there was a
               story/comment about a realtor app that people used to sell
               houses. often when they were out with a potential buyer
               they had bad internet access, and loading new data and
               pictures for houses was a pain. it wasn't until they
               switched to using a frontend framework to preload
               everything, with occasional updates, that the app became
               usable.
               
               high latency affects any interaction with a site. even
               hackernews is a pain to read over a high-latency connection
               and would improve if new comments were loaded in the
               background. the problem creeps up on you faster than you
               think.
       
                _heimdall wrote 13 hours 57 min ago:
                Prefetching pages doesn't require a frontend framework though.
                All it takes is a simple script to preload all or specific
                anchor links on the page, or you could get fancier with a
                service worker and a site manifest if you want to preload pages
                that may not be linked on the current page.
       
                  chriswarbo wrote 11 hours 51 min ago:
                   It shouldn't need any scripts; browsers support link
                   prefetching natively [1]. It can also be imposed by the
                   client, e.g. via a web accelerator [2].
                  
   URI            [1]: https://en.wikipedia.org/wiki/Link_prefetching
   URI            [2]: https://en.wikipedia.org/wiki/Web_accelerator
       
                    _heimdall wrote 11 hours 41 min ago:
                    Yep, that works as well. I'll reach for a script still if I
                    want more logic around when to prefetch, like only
                    prefetching on link hover or focus. A script is also needed
                    for any links that you need to preload but aren't included
                    on the current page.
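                     For what it's worth, the script-free variant is a
                     one-line hint in the document head (the href here is
                     a made-up example):

```html
<!-- ask the browser to fetch this page into its cache
     while the user is still reading the current one -->
<link rel="prefetch" href="/listings/page-2.html">
```

                     For hover- or focus-triggered prefetching you do
                     still need a small script.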
       
        intellectronica wrote 17 hours 48 min ago:
        Blast from the past. I actually used XSLT quite a bit in the early 00s.
        Eventually I think everyone figured out XML is an ugly way to write
        S-expressions.
       
        fergie wrote 17 hours 50 min ago:
         What is this "XSLT works natively in the browser" sorcery? The last
        time I used XSLT was like 20 years ago- but I used it A LOT, FOR YEARS.
        In those days you needed a massive wobbly tower of enterprise Java to
        make it work which sort of detracted from the elegance of XSLT itself.
        But if XSLT actually works in the browser- has the holy grail of
        host-anywhere static templating actually been sitting under our noses
        this whole time?
       
          marcosdumay wrote 6 hours 31 min ago:
          Do you remember when people started talking about XHTML?
          
          It was exactly because of the "holy grail of host-anywhere static
          templating". But somehow everybody that knew about it made a vow of
          silence and was forbidden from actually saying it.
       
          Mikhail_Edoshin wrote 10 hours 41 min ago:
           Chrome has libxslt; Firefox has something called
           "Transformiix". Both are 1.0. Chrome has no extensions other
           than 'exsl:node-set'; Firefox has quite a few, although not all
           of EXSLT.
          
          Plug: here is a small project to get the basic information about the
          XSLT processor and available extensions. To use with a browser find
          the 'out/detect.xslt' file there and drag it into the browser. Works
          with Chrome and Firefox; didn't work with Safari, but I only have an
          old Windows version of it.
          
   URI    [1]: https://github.com/MikhailEdoshin/xslt-detect-ext/
       
          _heimdall wrote 12 hours 50 min ago:
          XSLT works, though if I'm not mistaken browsers are all stuck on
          older versions of the spec. Firefox has a particularly annoying bug
          that I run into related to `disable-output-escaping` not really
          working when you need to encode HTML from the document to render as
          actual DOM (it renders the raw HTML text).
       
          deanebarker wrote 13 hours 0 min ago:
          > massive wobbly tower of enterprise Java to make it work
          
          ??
          
          I was transforming XML with, like, three lines of VBScript in classic
          ASP.
       
            g8oz wrote 9 hours 57 min ago:
            The MSXML parser was pretty darn solid.
       
          bambax wrote 14 hours 31 min ago:
          > In those days you needed a massive wobbly tower of enterprise Java
          to make it work
          
          You needed the jvm and saxon and that was about it...
       
            fergie wrote 11 hours 10 min ago:
            How deep was your file tree? Be honest! ;)
       
          jillesvangurp wrote 15 hours 39 min ago:
          > massive wobbly tower of enterprise Java to make it work
          
          It wasn't that bad. We used tomcat and some apache libraries for
          this. Worked fine.
          
          Our CMS was spitting out XML files with embedded HTML that were very
          cachable. We handled personalization and rendering to HTML (and js)
          server side with a caching proxy. The XSL transformation ran after
          the cache and was fast enough to keep up with a lot of traffic.
          Basically the point of the XML here was to put all the ready HTML in
          blobs and all the stuff that needed personalization as XML tags. So
          the final transform was pretty fast. The XSL transformer was heavily
          optimized and the trick was to stream its output straight to the
          response output stream and not do in memory buffering of the full
           content. That's still a good trick BTW, one that most
           frameworks get wrong out of the box because in-memory buffering
           is easier for the user. It can make a big difference for large
           responses.
          
          These days, you can run whatever you want in a browser via wasm of
          course. But back then javascript was a mess and designers delivered
          photoshop files, at best. Which you then had to cut up into frames
          and tables and what not. I remember Google Maps and Gmail had just
          come out and we were doing a pretty javascript heavy UI for our CMS
          and having to support both Netscape and Internet Explorer, which both
          had very different ideas about how to do stuff.
       
          arccy wrote 16 hours 6 min ago:
          it works, i think the most visible ones are where people style their
          atom / rss feeds instead of rendering separate xml / html pages
          
   URI    [1]: https://rknight.me/blog/styling-rss-and-atom-feeds/
       
          rsolva wrote 16 hours 36 min ago:
           Browsers support XSLT 1.0 only, and from what I understand,
           there has been talk of deprecating it.
           
           I would rather they introduced support for v3, as that would
           make it easier to serve static webpages with native support for
           templating.
       
            smartmic wrote 16 hours 21 min ago:
             I'm also more concerned about the deprecation risk. However,
             you can still do a lot with XSLT 1.0. There is also SaxonJS,
             which lets you run XSLT 3.0. However, embedding JavaScript in
             order to use XSLT defeats the purpose of this exercise.
       
              imtringued wrote 7 hours 39 min ago:
               It doesn't really defeat the purpose. It just shows how
               much of the fuss about avoiding JS is about insisting on
               ideological purity rather than accomplishing any particular
               goal.
               
               What exactly is the difference between generating HTML
               using the browser's XSLT 1.0 runtime and SaxonJS's XSLT 3.0
               runtime? Before you say the goal is to not have to deal
               with JS: you've already accomplished that goal. You don't
               need to touch NPM, webpack, React, JSX, etc.
               
               Blocking first-party JS is lunacy, by the way.
       
                ndriscoll wrote 6 hours 7 min ago:
                 > What exactly is the difference between generating HTML
                 using the browser's XSLT 1.0 runtime and SaxonJS's XSLT
                 3.0 runtime?
                
                Several hundred kB (compressed) of runtime, for one. It could
                make sense for browsers to have something like that built-in
                like they did with pdf.js, though Saxon is proprietary so it
                would not be that thing.
       
                rsolva wrote 7 hours 7 min ago:
                 I speak only for myself, but I greatly value having no
                 dependencies or build processes. Just put the files on a
                 web server and have them run for the next 20 years.
                
                It might not scale for larger businesses, but for regular
                people on the web who just want to put something out in the
                world and have minimal churn keeping it up, it can have great
                value!
       
          Symbiote wrote 17 hours 38 min ago:
          I worked with a site using XSLT in the browser in 2008, but I think
          support goes back to the early 2000s.
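           The wiring is minimal: a processing instruction at the top of
           the XML document points the browser at the stylesheet. A small
           sketch (both filenames invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="page.xsl"?>
<!-- the browser fetches page.xsl and applies it before rendering -->
<page>
  <title>Hello</title>
</page>
```

           No server-side processing is involved at all; the transform
           runs entirely in the browser.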
       
            fergie wrote 17 hours 28 min ago:
            I was _really_ deep into XSLT- I even wrote the XSLT 2 parser for
            Wikipedia in like 2009, so I'm not sure why I haven't been aware of
            browser native support for transformations until now. Or maybe I
            was and I just forgot.
       
              rjsw wrote 14 hours 41 min ago:
              It was a feature of IE5.
              
               I updated an XSLT system to work with the then-latest
               Firefox a couple of years ago. We have scripts in a
               different directory from the documents being transformed,
               which requires a security setting to be changed in Firefox
               to make it work; I don't know if an equivalent thing is
               needed for Chrome.
       
        chrismorgan wrote 17 hours 57 min ago:
        I’m disappointed that this uses a custom XML format, rather than RSS
        (tolerable) or Atom (better). Then you could just drop it into a feed
        reader fine.
        
        A few years ago, I decided to style my own feeds, and ended up with
        this: [1] . [2] is pretty detailed, I don’t think you’ll find one
        with more comprehensive feature support. (I wrote a variant of it for
        RSS too, since I was contemplating podcasts at the time and almost all
        podcast software is stupid and doesn’t support Atom, and it’s all
        Apple’s fault: [3] .)
        
        At the time, I strongly considered making the next iteration of my
        website serve all blog stuff as Atom documents—post lists as feeds,
        and individual pages as entries. In the end, I’ve decided to head in
        a completely different direction (involving a lot of handwriting!), but
        I don’t think the idea is bad.
        
   URI  [1]: https://chrismorgan.info/blog/tags/fun/feed.xml
   URI  [2]: https://chrismorgan.info/atom.xsl
   URI  [3]: https://temp.chrismorgan.info/2022-05-10-rss.xsl
       
          Lex-2008 wrote 3 hours 4 min ago:
          Hey, thanks a lot for the atom.xsl! Used it to learn a lot while
          converting main page of my blog to an Atom feed half a year ago.
       
        jbaiter wrote 17 hours 58 min ago:
        Does anybody remember Cocoon? It was an XSLT Web Framework that built
        upon Spring. It was pretty neat, you could do the stuff XSLT was great
        at with stylesheets that were mapped to HTTP routes, and it was very
        easy to extend it with custom functions and supporting Java code to do
        the stuff it wasn't really great at. Though I must say that as the XSLT
        stylesheets grew in complexity, they got *really* hard to understand,
        especially compared to something like a Jinja template.
       
          evanelias wrote 10 hours 34 min ago:
          Yes! In the mid 00's, two places I worked (major US universities)
          used Cocoon heavily. It was a good fit for reporting systems that had
          to generate multiple output formats, such as HTML and PDF.
       
        cyphax wrote 18 hours 0 min ago:
        In my first job, when .net didn't yet exist, xml + xslt was the
        templating engine we used for html and (html) e-mail and sometimes csv.
        I'd write queries in sql server using "for xml" and it would output all
        data needed for a page and feed it to an xsl template (all server side)
        which would output html. Microsoft had a caching xsl parser that would
         result in less than 10ms to load such a page. Up until we thought
         "hey, let's start using XML namespaces, that sounds like a good
         idea!". It was a bit less fun after that!
        Looking back it was a pretty good stack, and it would still work fine
        today imho. I never started disliking it, but after leaving that job I
        never wrote another stylesheet.
       
        rossant wrote 18 hours 3 min ago:
        I made a website based on XML documents and XSLT transformations about
        20 years ago. I really liked the concept. The infrastructure could have
        been made much simpler but I guess I wanted to have an excuse to play
        with these technologies.
        
         After spending months working on my development machine, I
         deployed the website to my VPS, only to realize to my utter
         dismay that the XSLT module was not enabled in the PHP
         configuration. I had to ask the (small) company to update their
         PHP installation just for me, which they promptly did.
       
        susam wrote 18 hours 4 min ago:
        These days I use XSLT to style my feeds.  For example: [1]
        
   URI  [1]: https://susam.net/feed.xml
   URI  [2]: https://susam.net/feed.xsl
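         For anyone curious, the shape of such a stylesheet is roughly
         this: a minimal sketch for an Atom feed (not the actual feed.xsl
         linked above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:output method="html"/>
  <!-- render the whole feed as a simple HTML page -->
  <xsl:template match="/atom:feed">
    <html>
      <body>
        <h1><xsl:value-of select="atom:title"/></h1>
        <ul>
          <xsl:for-each select="atom:entry">
            <li>
              <a href="{atom:link/@href}">
                <xsl:value-of select="atom:title"/>
              </a>
            </li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

         The feed itself then starts with `<?xml-stylesheet
         type="text/xsl" href="feed.xsl"?>`, so browsers apply it while
         feed readers simply ignore it.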
       
          dev0p wrote 13 hours 11 min ago:
          I always forget XML can do that. It just feels wrong for some reason.
       
          pacifika wrote 13 hours 13 min ago:
             This does make me wonder why a blog is not just an RSS feed.
       
            sumtechguy wrote 12 hours 34 min ago:
            with xslt it probably could be.
       
            _heimdall wrote 12 hours 53 min ago:
            I've built my personal site on XSLT a couple times just to see how
            far I could push it.
            
             It works surprisingly well; the only issue I ever ran into
             was a decades-old bug in Firefox where it doesn't support
             rendering HTML content directly from the XML document. I.e.
             if the blog post content is HTML in CDATA, I needed a quick
             script to force Firefox to render that text via innerHTML
             rather than rendering the raw CDATA text.
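             The construct in question looks roughly like this (a sketch;
             the `post`/`content` element names are invented):

```xml
<!-- 'content' holds escaped or CDATA HTML such as <p>hello</p>;
     disable-output-escaping asks the processor to emit it raw,
     which Firefox's in-browser XSLT does not honour -->
<xsl:template match="post">
  <div class="post-body">
    <xsl:value-of select="content" disable-output-escaping="yes"/>
  </div>
</xsl:template>
```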
       
          kome wrote 16 hours 32 min ago:
          beautiful, well done! i hope people will copy that for their own
          websites. and use it creatively.
       
        petesergeant wrote 18 hours 5 min ago:
        XSLT is great fun as a general functional programming language! You can
        build native functional data-structures[1], implement graph-traversal
        algorithms[2], and even write test assertions[3]!
        
         1: [1] 2: [2] 3: [3]
        
   URI  [1]: https://github.com/pjlsergeant/xslt-fever-dream/blob/main/util...
   URI  [2]: https://github.com/pjlsergeant/xslt-fever-dream/blob/main/util...
   URI  [3]: https://github.com/pjlsergeant/xslt-fever-dream/blob/main/util...
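         In the same spirit, a small sketch (not taken from the linked
         repo) of XSLT 1.0 as a functional language: there are no loops
         with mutable state, so a fold over a node-set becomes a
         self-calling named template with an accumulator parameter:

```xml
<xsl:template name="sum-list">
  <xsl:param name="nodes"/>
  <xsl:param name="acc" select="0"/>
  <xsl:choose>
    <!-- base case: nothing left, emit the accumulator -->
    <xsl:when test="not($nodes)">
      <xsl:value-of select="$acc"/>
    </xsl:when>
    <!-- recursive case: fold the head into the accumulator -->
    <xsl:otherwise>
      <xsl:call-template name="sum-list">
        <xsl:with-param name="nodes"
                        select="$nodes[position() &gt; 1]"/>
        <xsl:with-param name="acc" select="$acc + $nodes[1]"/>
      </xsl:call-template>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```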
       
          bmacho wrote 16 hours 48 min ago:
          Files are missing from the repo(?). What about util-map.xsl,
          test-map.xsl, util-serialize.xsl
       
            petesergeant wrote 16 hours 41 min ago:
            I've updated this, as well as included instructions on running the
            built-in unit tests, which are of course also written in XSLT.
       
        Hendrikto wrote 18 hours 6 min ago:
        I hate this grug brain writing style. It sounds bad and is hard to
        read. Please just write normal, full sentences.
       
          antonvs wrote 17 hours 9 min ago:
          Presumably part of the goal is to implicitly claim that what's being
          described is so simple a caveman could understand it. But writing
          such a post about XSLT is like satire. Next up, grug brain article
          about the Coq proof assistant?
       
          jurip wrote 17 hours 31 min ago:
          Yeah I don't get it. I had to stop reading after a couple of
          sentences, I just can't deal with that.
       
          s4i wrote 17 hours 54 min ago:
          Maybe it’s just the way the author writes?
       
        _def wrote 18 hours 9 min ago:
         We've come full circle again. Yes, this has worked great for many
         years; XML is just so much clutter.
       
          kome wrote 16 hours 37 min ago:
           clutter? i find it MUCH more elegant and simple, both
           conceptually and practically, than the absolute clown-car of
           the modern js-driven web, css framework hacks, etc etc
       
        alexjplant wrote 18 hours 10 min ago:
        One of my first projects as a professional software engineer at the
        ripe age of 19 was customizing a pair of Google Search Appliances that
        my employer had bought. They'd shelled out hundreds of thousands of
        dollars to rack yellow-faced Dell servers running CentOS with some
        Google-y Python because they thought that being able to perform
        full-text searches of vast CIFS document stores would streamline their
        business development processes. Circa 2011 XHTML was all the rage and
        the GSA's modus operandi was to transform search results served from
        the backend in XML into XHTML via XSLT. I took the stock template and
        turned it into an unholy abomination that served something resembling
        the rest of the corporate intranet portal by way of assets and markup
        stolen from rendered Coldfusion application pages, StackOverflow, and
        W3Schools tutorials.
        
        I learned quickly to leave this particular experience off of my resume
        as sundry DoD contractors contacted me on LinkedIn for my "XML
        expertise" to participate in various documentation modernization
        projects.
        
        The next time you sigh as you use JSX to iterate over an array of
        Typescript interfaces deserialized from a JSON response remember this
        post - you could be me doing the same in XSLT :-).
       
        tannhaeuser wrote 18 hours 17 min ago:
        I had done a couple of nontrivial projects with XSLT at the time, and
        the problem with it is its lack of good mnemonics, discoverability
        from source code, and other ergonomics, coupled with the fact that
        it's used so rarely that you find yourself basically relearning it
        after not having touched it for a couple of weeks. Template
        specificity matching is a particularly bad idea under those
        circumstances.
        
        XSLT technically makes more sense the more your template consists of
        large amounts of boilerplate XML literals, because it uses XML itself
        as its language syntax. But even though it uses XML as its
        meta-syntax, it has lots of microsyntax (XPath, variables,
        parameters) that you need to cram into XML attributes, with the usual
        quoting restrictions and lack of syntax highlighting. There's really
        nothing in XSLT that couldn't be implemented better in a
        general-purpose language with proper testing and library
        infrastructure such as Prolog/Datalog (in fact DSSSL, XSLT's close
        predecessor for templating full SGML/HTML and not just the XML
        subset, was based on Scheme) or just, you know, vanilla JavaScript,
        which was introduced for DOM manipulation.
        
        Note that maintenance of libxml2/libxslt is currently understaffed
        [1], and it's a miracle to me that XSLT (version 1.0, from 1999) still
        ships as a native implementation in browsers, unlike e.g. PDF.js.
        
        [1] 
        
   URI  [1]: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
       
        CiaranMcNulty wrote 18 hours 18 min ago:
        It's sad how the bloat of '00s enterprise XML made the tech seem
        outdated and drove everyone to 'cleaner' JSON, because things like XSLT
        and XPath were very mature and solved a lot of the problems we still
        struggle with in other formats.
        
        I'm probably guilty of some of the bad practice: I have fond memories
        of (ab)using XSLT includes back in the day with PHP stream wrappers to
        have stuff like ``
        
        This may be out-of-date bias, but I'm still a little uneasy letting
        the browser do the transform locally, just because it used to be a
        minefield of incompatibility.
       
          tootie wrote 5 hours 0 min ago:
          I never enjoyed XSLT. It always felt like a square peg for a round
          hole. I do miss XML, though. It had so, so many power features that
          too few people knew how to use. XSD was incredibly good for domain
          modeling. It had an include system for composing files. And nobody
          really made good use of mixed content, but it was a crazy powerful
          feature: you could embed structured content in unstructured content
          inside structured content.
       
            int_19h wrote 4 hours 52 min ago:
            The original idea was good: having a purely declarative language
            running on the client which just does the model -> view
            transformation, and having the server serve the models. XSLT as an
            implementation of that idea is pretty bad, but mostly because using
            XML as the underlying syntax for a PL is very unergonomic. If the
            initial version of XSLT looked more like XQuery does, I think it
            would have been a lot more popular.
       
              tootie wrote 4 hours 8 min ago:
              Yeah, the idea makes sense. More sense than CSS, which ended up
              requiring years and years of refinement and pre-processors to
              be usable.
       
          kllrnohj wrote 7 hours 54 min ago:
          The game Rimworld stores all its game configuration data in XML and
          uses XPath for modding and it's so incredibly good. It's a seriously
          underrated combination for enabling relatively stable local
          modifications of data. I don't know of any other game that does this,
          probably because XML has a reputation of being "obsolete" or
          whatever. But it's just such a robust system for this use case.
          
   URI    [1]: https://rimworldwiki.com/wiki/Modding_Tutorials/PatchOperati...
       
          tannhaeuser wrote 12 hours 14 min ago:
          > bloat of '00s enterprise XML
          
          True, and it's even more sad that XML was originally just intended as
          a simplified subset of SGML (HTML's meta syntax with tag inference
          and other shortforms) for delivery of markup on the web and to evolve
          markup vocabularies and capabilities of browsers (of which only SVG
          and MathML made it). But when the web hype took over, W3C (MS) came
          up with SOAP, WS-this and WS-that, and a number of programming
          languages based on XML including XSLT (don't tell HNers it was
          originally Scheme but absolutely had to be XML just like JavaScript
          had to be named after Java; such was the madness).
       
          aitchnyu wrote 15 hours 23 min ago:
          In the 2003 The Art of Unix Programming, the author advocated bespoke
          text formats and writing parsers for them. Writing xml by hand is his
          list of war crimes. Since then syntax highlighting and autocomplete
          and autoformatting narrowed the effort gap and tolerant parsers
          (browsers being the main example) got a bad rap. Would Markdown and
          Yaml exist with modern editors?
       
          Cthulhu_ wrote 15 hours 34 min ago:
          It's been 84 years but I still miss some of the "basics" of XML in
          JSON - a proper standards organization, for one. But things like
          schemas were (or, felt like) so much better defined in XML land, and
          it took nearly a decade for JSON land to catch up.
          
          Last thing I really did with XML was a technology called EXI, a
          transfer method that converted an XML document into a compressed
          binary data stream. Because translating a data structure to ASCII,
          compressing it, sending it over HTTP etc and doing the same thing in
          reverse is a bit silly. At this point protobuf and co are more
          popular, but imagine if XML stayed around. It's all compatible
          standards working with each other (in my idealized mind), whereas
          there's a hard barrier between e.g. protobuf/grpc and JSON APIs.
          Possibly for the better?
       
            bokchoi wrote 11 hours 35 min ago:
            I just learned about EXI as it's being used on a project I work
            on. It's amazingly fast and small! It is a binary representation
            of the XML stream, and it can compress quite small if you have an
            XML Schema to go with your XML.
            
            I was curious about how it is implemented and I found the spec easy
            to read and quite elegant:
            
   URI      [1]: https://www.w3.org/TR/exi/
       
            sumtechguy wrote 12 hours 43 min ago:
            That data transform thing xslt could do was so cool.  You could
            twist it into emitting just about any other format and XML was the
            top layer.  You want it in tab delimited yaml.    Feed it the right
            style sheet and there you go.  Other system wants CSV.    Sure thing
            different style sheet and there you go.
            
            For a transport tech XML was OK.  Just wasted 20% of your bandwidth
            on being a text encoding.  Plus wrapping your head around those
            style sheets was a mind twister.  Not surprised people despise it. 
            As it has the ability to be wickedly complex for no real reason.
       
            chrisweekly wrote 13 hours 14 min ago:
            84 years? nope.
       
          rwmj wrote 16 hours 19 min ago:
          XML is fine.  A bit wordy, but I appreciate its precision and
          expressiveness compared to YAML.
          
          XPath is kind of fine.    It's hard to remember all the syntax but I
          can usually get there with a bit of experimentation.
          
          XSLT is absolutely insane nonsense and needs to die in a fire.
       
            tclancy wrote 10 hours 31 min ago:
            That's funny, I would reverse those. I loved XSLT though it took me
            a long time for it to click; it was my gateway drug to concepts
            like functional programming and idempotency. XPath is pretty great
            too. The problem was XML, but it isn't inherent to it -- it
            empowered (for good and bad) lots of people who had never heard of
            data normalization to publish data and some of it was good but,
            like Irish Alzheimer's, we only remember the bad ones.
       
            cturner wrote 13 hours 21 min ago:
            It depends what you use it for. I worked on a interbank messaging
            platform that normalised everything into a series of standard xml
            formats, and then used xslt for representing data to the client.
            Common use case - we could rerender data to what a receiver’s
            risk system were expecting in config (not compiled code). You could
            have people trained in xslt doing that, they did not need to be
            more experienced developers. Fixes were fast.  It was good for
            this. Another time i worked on a production pipeline for a
            publisher of education books. Again, data stored in normalised xml.
            Xslt is well suited to mangling in that scenario.
       
          maxloh wrote 16 hours 53 min ago:
          However, XML is actually a worse format to transfer over the
          internet. It's bloated and consumes more bandwidth.
       
            bokchoi wrote 11 hours 32 min ago:
            Check out EXI.    It compresses the xml stream into a binary encoding
            and is quite small and fast:
            
   URI      [1]: https://www.w3.org/TR/exi/
       
            JimDabell wrote 16 hours 5 min ago:
            XML is a great format for what it’s intended for.
            
            XML is a markup language system. You typically have a document, and
            various parts of it can be marked up with metadata, to an arbitrary
            degree.
            
            JSON is a data format. You typically have a fixed schema and things
            are located within it at known positions.
            
            Both of these have use-cases where they are better than the other.
            For something like a web page, you want a markup language that you
            progressively render by stepping through the byte stream. For
            something like a config file, you want a data format where you can
            look up specific keys.
            
            Generally speaking, if you’re thinking about parsing something by
            streaming its contents and reacting to what you see, that’s the
            kind of application where XML fits. But if you’re thinking about
            parsing something by loading it into memory and looking up keys,
            then that’s the kind of application where JSON fits.
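             A rough Python sketch of that distinction, with made-up data
             (not anything from this thread):

```python
import io
import json
import xml.etree.ElementTree as ET

# XML suits streaming: react to elements as they arrive, without
# holding the whole document in memory.
doc = io.BytesIO(b"<log><entry level='warn'>disk full</entry>"
                 b"<entry level='info'>ok</entry></log>")
warnings = []
for _event, elem in ET.iterparse(doc):
    if elem.tag == "entry" and elem.get("level") == "warn":
        warnings.append(elem.text)

# JSON suits lookup: load the whole thing, then address known keys.
config = json.loads('{"server": {"port": 8080}}')
port = config["server"]["port"]

print(warnings, port)  # ['disk full'] 8080
```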
       
            rwmj wrote 16 hours 18 min ago:
            Only if you never use compression.
       
          codeulike wrote 17 hours 33 min ago:
          Xpath would have been nice if you didnt have to pedantically
          namespace every bit of every query
       
            masklinn wrote 16 hours 33 min ago:
            That… has nothing to do with xpath?
            
            If your document has namespaces, xpath has to reflect that. You can
            either tank it or explicitly ignore namespaces by foregoing the
            shorthands and checking `local-name()`.
       
              codeulike wrote 14 hours 18 min ago:
               Ok, perhaps 'namespace the query' wasn't quite the right way
               of explaining it. All I'm saying is, whenever I've used XPath,
               instead of it looking nice like
               
               /*bookstore/*book/*title
               
               it's been some godawful mess like
               
               /*[name()='bookstore']/*[name()='book']/*[name()='title']
               
               ... I guess because they couldn't bear to have it just match
               on tags as they are in the file, and it had to be tethered to
               some namespace stuff that most people don't bother with. A
               lot of XML is ad hoc without a namespace defined anywhere.
               
               It's like
               
               Me: Hello XPath, here's an XML document, please find all the
               bookstore/book/title tags
               
               XPath: *gasps* Sir, I couldn't possibly look for those tags
               unless you tell me which namespace we are in. Are you some
               sort of deviant?
               
               Me: oh ffs *googles xpath name() syntax*
       
                masklinn wrote 12 hours 37 min ago:
                 > the tags as they are in the file
                 
                 ...are not actually relevant, and are not information the
                 average XML processor even receives. If the file uses a
                 default namespace (xmlns), then the elements are namespaced,
                 and anything processing the XML has to either properly
                 handle namespaces or explicitly ignore them.
                 
                 > A lot of XML is ad-hoc without a namespace defined anywhere
                 
                 If the element is not namespaced, XPath does not require a
                 prefix; you just write
                 
                     //bookstore/book/title
       
                ndriscoll wrote 12 hours 45 min ago:
                I don't recall ever needing to do that for unnamespaced tags.
                Are you sure the issue you're having isn't that the tags have a
                namespace?
                
                my:book is a different thing from your:book and you generally
                don't want to accidentally match on both. Keeping them separate
                is the entire point of namespaces. Same as in any programming
                language.
       
                rhdunn wrote 13 hours 2 min ago:
                Newer versions of XPath and XSLT allow
                
                    /*:bookstore/*:book/*:title
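                 The same behavior shows up in Python's stdlib ElementTree,
                 sketched here with a made-up bookstore document: a default
                 xmlns puts every element into that namespace, so a bare path
                 stops matching until the query carries the namespace too:

```python
import xml.etree.ElementTree as ET

plain = ET.fromstring(
    "<bookstore><book><title>Dune</title></book></bookstore>")
# No namespace declared: a bare path just works.
print(plain.find("book/title").text)  # Dune

ns_doc = ET.fromstring(
    "<bookstore xmlns='http://example.com/books'>"
    "<book><title>Dune</title></book></bookstore>")
# Default namespace: the same bare path no longer matches.
print(ns_doc.find("book/title"))  # None
# It matches again once the query names the namespace.
ns = {"b": "http://example.com/books"}
print(ns_doc.find("b:book/b:title", ns).text)  # Dune
```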
       
        podgorniy wrote 18 hours 18 min ago:
        Good old XSLT. It was at the center of attention back when strict XML
        was still a candidate for the next standard. HTML5 won.
       
        captn3m0 wrote 18 hours 20 min ago:
        I use XSLT to generate a markdown README from a Zotero export XML file.
        It works well, but some simple things become much harder - sorting,
        counting, uniqueness. [1] It also feels very arcane - hard to debug and
        understand unfortunately.
        
   URI  [1]: https://github.com/captn3m0/boardgame-research
       
        b0a04gl wrote 18 hours 24 min ago:
        xslt does one thing clean: it walks trees on tree input. both data
        and layout stay in structured memory, no random jumps. browser-native
        xslt eval can hit perf spots most json-to-dom libs miss. the memory
        layout was aligned by design. we dropped it too early just cuz xml
        got unpopular
       
        kstrauser wrote 18 hours 29 min ago:
        Whoa, I just realized how much Zope’s page templates were basically
        XSLT that looked slightly different.
        
        This gives me new appreciation for how powerful XSLT is, and how glad I
        am that I can use almost anything else to get the same end results.
        Give me Jinja or Mustache any day. Just plain old s-exprs for that
        matter. Just please don’t ever make me write XML with XML again.
       
          pornel wrote 16 hours 6 min ago:
          Zope was cool in that you couldn't generate ill-formed markup, and
          optionally wrapping something in `` didn't need repeating the same
          condition for ``.
          
           However, it was a much simpler imperative language with some
           macros.
          
          XSLT is more like a set of queries competing to run against a
          document, and it's easy to make something incomprehensibly complex if
          you're not careful.
       
        murukesh_s wrote 18 hours 32 min ago:
        Sometimes I wish we could have kept XML alive alongside JSON. I miss
        the comments, CDATA, etc., especially when you have to serialize
        complex state. I know there are alternatives to JSON like YAML, but I
        felt XML was better than YAML. We adopted JSON for its simplicity but
        then tried to retrofit schemas and other things that made XML
        complex: we kind of reinvented what XSD did decades ago as JSON
        Schema, and we still lack a good alternative to XSLT.
       
          mike_hearn wrote 17 hours 1 min ago:
          The XSL:T equivalent for JSON is React.
          
          Let's not romanticize XML. I wrote a whole app that used XSL:T about
          25 years ago (it was a military contract and for some reason that
          required the use of an XML database, don't ask me). Yes it had some
          advantages over JSON but XSL:T was a total pain to work with at
          scale. It's a functional language, so you have to get into that
          mindset first. Then it's actually multiple functional languages
          composed together, so you have to learn XPath too, which is only a
          bit more friendly than regular expressions. The language is dominated
          by hacks working around the fact that it uses XML as its syntax. And
          there are (were?) no useful debuggers or other tooling. IIRC you
          didn't even have any equivalent of printf debugging. If you screwed
          up in some way you just got the wrong output.
          
          Compared to that React is much better. The syntax is much cleaner and
          more appropriate, you can mix imperative and FP, you have proper
          debugging and profiling tools, and it supports incremental
          re-transform so it's actually useful for an interactive UI whereas
          XSL:T never was so you needed JS anyway.
       
            bravesoul2 wrote 11 hours 53 min ago:
            The XSL:T equivalent for JSON is jq [1] Learn it. It is insanely
            useful for mungling json in day to day work.
            
   URI      [1]: https://github.com/jqlang/jq
       
          ahofmann wrote 17 hours 18 min ago:
          I just had to explain to some newbies that SOAP is a protocol with
          rigid rules, while REST is an architectural style with flexibility.
          The latter means you have to put in the work and document really
          well, and consumers of the API need tools like Postman to even be
          able to use it. With SOAP, you get most of that for free.
       
            Kwpolska wrote 12 hours 34 min ago:
            Postman is just a terrible GUI for making HTTP requests. Using a
            REST API can be as simple as `curl [1] `, and you can even open
            that link in a browser. SOAP requires sending a ton of XML [0] - it
            is not very usable without a dedicated SOAP-aware tool.
            
            [0]
            
   URI      [1]: https://api.github.com/repos/torvalds/linux
   URI      [2]: https://en.wikipedia.org/wiki/SOAP#Example_message_(encaps...
       
          n_plus_1_acc wrote 18 hours 24 min ago:
          I agree wholeheartedly, but the XML library situation in the JS
          ecosystem is shit.
       
        ryoshu wrote 18 hours 35 min ago:
        Blizzard uses/used XSLT for WoW.
       
          calmbonsai wrote 18 hours 25 min ago:
          Was that before or after the Lua adoption?
       
            shakna wrote 18 hours 19 min ago:
            Before. And after.
            
            XSLT controls the styling, Lua the running functions. When Lua
            adjusts a visible thing, it generates XSLT.
            
            "FrameXML" is a thin Lua wrapper around the base XSLT.
       
        HexDecOctBin wrote 18 hours 36 min ago:
        me busy fixing asan, "illegal instruction", blah blah blah, me sad and
        frustrated, much scowling.
        
        me come to hn, see xml build system, me happy, much smiling, me hit up
        arrow, me thank good stranger.
       
          7bit wrote 17 hours 45 min ago:
          Dear God the writing style on that article
       
        tomduncalf wrote 18 hours 36 min ago:
        Early in my career I worked on a carrier's mobile internet portal in
        the days before smartphones. It was XSLT all the way down, including
        individual XSLT transforms for every single component the CMS had for
        every single handset we supported (hundreds) as they all had different
        capabilities and browser bugs. It was not that fun to write complex
        logic in haha but was kind of an interesting thing to work on, before
        iPhone etc came along and everything could just render normal websites.
       
          calmbonsai wrote 18 hours 26 min ago:
          Same.  I was part of the mobile media messaging (WAP) roll-out at
          Vodafone.  Oh man, XSLT was one of those "theoretical" W3C languages
          that (rightfully) aged like milk.  Never again.
       
            tomduncalf wrote 18 hours 23 min ago:
            Ha! I was at Orange. I suspect all the carriers had similar setups.
            Yeah I don’t miss working with that lol
       
              enqk wrote 17 hours 59 min ago:
              I worked in the same period for a finnish startup (iobox.fi) that
              ended up being acquired by telefonica.
              
              Our mobile and web portal was made of j2ee services producing XML
              which were then transformed by XSLT into HTML or WAP
              
              At the time it blew me away that they expected web designers to
              work in an esoteric language like that
              
              But it was also nicely separated
       
        Dachande663 wrote 18 hours 38 min ago:
        Many, many years back I used Symphony21 [0] for an events website.
        Its whole premise was that you build an XML structure via blueprints,
        and then your theme is just XSLT templates for pages.
        
        Gave it up because it turns out the little things are just a pain:
        formatting dates, showing article numbers and counts, etc.
        
        [0]
        
   URI  [1]: https://www.getsymphony.com/
       
          k4runa wrote 18 hours 34 min ago:
          Wow, blast from the past.
       
        Wololooo wrote 18 hours 39 min ago:
        Me simple man. Me see caveman readme, me like. Sometimes me feel like
        caveman hitting keyboard to make machine do no good stuff. But
        sometimes, stuff good. Me no do websites or web things, but me not know
        about XSLT. Me sometimes hack XML. Me sometimes want show user things.
        Many many different files format makes head hurt. Me like pretty things
        though. Me might use this.
        
        Thank you reading specs.
        
        Thank you making tool.
       
        JonChesterfield wrote 18 hours 44 min ago:
        I looked into this a while ago and concluded that it works fine, but
        browsers are making stroppy noises about deprecating it, so I ended
        up running the transform locally to get HTML5. Disappointing.
       
       
   DIR <- back to front page