_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   Vec<T>
       
       
        germandiago wrote 2 hours 2 min ago:
        Can anyone compare it to C++ in terms of memory layout for typical
        implementations?
       
          swiftcoder wrote 1 hour 30 min ago:
          It should be more-or-less the same on typical implementations. You
          might have 3 pointers instead of 1 pointer and 2 lengths, or some mix
          thereof, but they are all equivalent in memory layout.
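           A minimal sketch of that equivalence (the struct and field names
           here are illustrative, not the actual stdlib/libc++ ones):

```rust
use std::mem::size_of;

// Rust-style header: one pointer plus two counts.
#[allow(dead_code)]
struct VecCounts<T> {
    ptr: *const T,
    cap: usize,
    len: usize,
}

// Typical C++-style header: three pointers.
#[allow(dead_code)]
struct VecPointers<T> {
    begin: *const T,
    end: *const T,            // begin + len
    end_of_storage: *const T, // begin + cap
}

fn main() {
    // Both headers are three machine words on common platforms,
    // so the memory footprint is identical.
    assert_eq!(size_of::<VecCounts<u32>>(), 3 * size_of::<usize>());
    assert_eq!(size_of::<VecPointers<u32>>(), 3 * size_of::<usize>());
}
```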
       
          sesm wrote 1 hour 40 min ago:
         That's one of the major problems I have with Rust docs and the
         community. The best way to explain most things in Rust to an
         experienced dev is by comparison to C++. But all the docs and blog
         posts explicitly target newbies and avoid comparison with C++ at all
         costs.
       
            zozbot234 wrote 1 hour 19 min ago:
            I disagree, Rust and C++ are very different languages with
            significant impedance mismatches once you go beyond the common
            C-like subset.    Referencing C++ as a matter of course in docs and
            blog posts would just cause confusion.    If you want a modern
            language that really is a lot closer to C++ you may want to check
            out Carbon.
       
              sesm wrote 1 hour 3 min ago:
              Rust was created as a C++ replacement, borrows the 'zero cost
              abstractions' motto from C++, relies on RAII for resource
              management like C++ (not many languages do it), has the same
              approach to concurrency (in-place mutation guarded by locks),
              uses the codegen backend that was created for C++. It's mostly a
              C++ subset with more guardrails.
              
              Edit: RAII
       
                zozbot234 wrote 4 min ago:
                It's nowhere near a C++ subset. The way it works under the hood
                is quite different, and even trying to add some of the
                underlying mechanisms to C++ (such as what they call "trivially
                relocatable types", or "destructing moves") has been quite
                difficult.
       
          username223 wrote 1 hour 41 min ago:
          The vector implementation on my machine uses three pointers: start,
          end, and end-of-storage. It's basically the same, but uses addresses
          rather than counts. The valarray implementation uses a pointer and a
          single count.
       
        blacklion wrote 2 hours 27 min ago:
         It is not clear: does the compiler explicitly know about the
         properties of `NonNull` (for example, that it has a neutral value
         that lets `Option<NonNull<T>>` collapse), making it part of the
         compiler, or is it expressed in the type system, making it part of
         the standard library?
         
         Same question about other «signal» types.
       
          dathinab wrote 1 hour 24 min ago:
          >  part of compiler, or it is expressed in type system, so it is part
          of standard library
          
          compiler + standard library (core library to be more specific)
          
           `NonNull` is mostly "just" a library type in the core library, but
           it uses two unstable/nightly attributes to tell the compiler about
           its special properties (see other answers; also, to be clear, it's
           not a `lang_item` attribute but something other types can use,
           too).
          
           The special thing about core/std in Rust is that, due to it being
           part of the specific compiler version, it can use a (carefully
           considered) subset of unstable features, as long as they work for
           that specific use case and don't "leak"/accidentally stabilize
           through it.
          
           But these attributes are in the process of being replaced by a new
           mechanism of pattern annotations. I.e. NonNull will be annotated
           with an attribute which contains a `1..` pattern (i.e. the value
           is 1 or larger), but other patterns are getting support, too. (The
           second attribute to enforce/guarantee niche optimizations might
           still be needed for stability guarantees.)
          
          And I think??? there are plans to stabilize this new mechanism, but
          I'm absolutely not sure what the state of this is (RFC accepted?,
          stabilization open/likely/decided? implementation done/partial?). I'm
          currently not in the loop.
          
           So further in the future you will probably be able to write your
           own types with other niches, e.g. a u8 with `..127` (MSB unused).
       
          zdimension wrote 2 hours 14 min ago:
          For anyone wondering, NonNull (along with other "constrained" types
          like NonZero integers) uses internal compiler attributes ( [1] ):
          
            #[rustc_layout_scalar_valid_range_start(1)]
          
           This gives rustc enough information to perform niche optimizations
           such as collapsing `Option<NonNull<T>>`.
          
          You can technically use those for your own types, but it requires
          nightly, obviously.
          
   URI    [1]: https://doc.rust-lang.org/src/core/ptr/non_null.rs.html#72
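           On stable Rust you can observe the resulting niche optimization
           directly; these size equalities are documented guarantees:

```rust
use std::mem::size_of;
use std::num::NonZeroU64;
use std::ptr::NonNull;

fn main() {
    // None is encoded as the forbidden value 0 (the null pointer),
    // so the Option adds no extra discriminant byte.
    assert_eq!(size_of::<Option<NonNull<u8>>>(), size_of::<NonNull<u8>>());

    // The same trick applies to the NonZero integer types.
    assert_eq!(size_of::<Option<NonZeroU64>>(), size_of::<u64>());
}
```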
       
          conradludgate wrote 2 hours 16 min ago:
           It's a mix of both: [1]
           
             #[rustc_layout_scalar_valid_range_start(1)]
             #[rustc_nonnull_optimization_guaranteed]
           
           This declares that NonNull cannot hold a value outside of the
           range (1..). These magic attributes are only allowed to be used by
           the stdlib, and they're direct compiler instructions. NonNull is
           special because the optimisation is guaranteed, so the compiler
           does have special code to handle it, but other types can be
           optimised without compiler changes.
          
   URI    [1]: https://doc.rust-lang.org/src/core/ptr/non_null.rs.html#72-7...
       
        tialaramex wrote 2 hours 41 min ago:
        Because this is focused on how the data structure works it doesn't
        mention lots of nice API design choices in Rust.
        
        The one I particularly want to call out because it came up this morning
        is providing both Vec::reserve and Vec::reserve_exact
        
        Vec::reserve lets us hint about our upcoming capacity expectations
        without damaging the O(1) amortized growth which is the whole point of
        this collection type, but it can waste some memory to achieve this.
        This is probably what you want in most code.
        
         Vec::reserve_exact is a more narrow idea - we can hint about the
         ultimate capacity needed. If we're wrong and later need more
         capacity, this has a significant performance cost, because we threw
         away the amortized growth promise to get this - but we don't waste
         memory.
         
         C++ only provides the equivalent of Vec::reserve_exact, which makes
         this a footgun: if you use this call when you really needed the
         other one, you trash your perf. As a result, people teaching C++
         tend to just say don't use reservation to, uh, reserve capacity -
         but now you're leaving perf on the table, so that's not great.
       
          lossolo wrote 2 hours 5 min ago:
           > Vec::reserve_exact is a more narrow idea - we can hint about the
           ultimate capacity needed, if we're wrong and later need more
           capacity this has a significant performance cost because we threw
           away the amortized growth promise to get this, but we don't waste
           memory.
          
          The claim that using reserve_exact "throws away the amortized growth
          promise" is wrong. You don't disable amortized growth, you just won't
          get extra headroom from that call. If you guessed too small, you may
          pay an extra reallocation later. Rust's Vec still provides amortized
          O(1) push overall.
          
          > C++ only provides the equivalent of Vec::reserve_exact which makes
          this a footgun but if you use this call when you really needed the
          other one you trash your perf, as a result of which people teaching
          C++ tend to just say don't use reservation to uh, reserve capacity,
          but now you're leaving perf on the table so that's not great.
          
           It's not a reserve_exact-only footgun. The standard says
           reserve(n) makes capacity at least n; implementations may allocate
           exactly n (and many do), but they're allowed to overallocate, and
           future growth remains amortized. Used properly (when you know or
           can bound the size), reserve is a common and recommended
           optimization.
       
            khuey wrote 1 hour 5 min ago:
            > The claim that using reserve_exact "throws away the amortized
            growth promise" is wrong
            
            Well, in some sense this depends on what exactly you think you're
            amortizing over. If you `Vec::reserve` with size N, fill to N, and
            then append a single element you get the usual amortized O(1)
            growth of an append (or at least you can, the docs for
            `Vec::reserve` say it may reserve additional space, not that it
            must). But if you `Vec::reserve_exact` with size N, fill to N, and
            then append a single element you are guaranteeing that that first
            append triggers a potentially O(N) resize.
            
            From that point on in either case you would get the usual amortized
            growth of course.
       
              lossolo wrote 40 min ago:
              Sure, but this isn't a job for reserve_exact. Vec::reserve is
              like a framing hammer made for speed and momentum. You might dent
              a little extra wood (overallocate), but you drive lots of nails
              fast and keep amortized O(1) growth.
               reserve_exact is a jeweler's hammer: great when you know the
               exact setting and won't touch it again - precise fit, zero
               slack, etc.
              
               Now, try to frame a house with a jeweler's hammer, tapping in
               tiny increments and resizing every few boards, and you will
               crawl into quadratic time.
               
               Who is using a jeweler's hammer to frame their house?
              
              So what I'm arguing is if you don't misuse reserve_exact, then as
              you said you still get amortized growth. The OP's example misuses
              the tool and then blames it for not behaving differently on that
              first append.
       
            adgjlsfhk1 wrote 1 hour 30 min ago:
            > The claim that using reserve_exact "throws away the amortized
            growth promise" is wrong
            
            The key part is that multiple calls reserve_exact will cause
            repeated allocations. The classic example is if someone defines a
            `pushfive` function that uses reserve_exact to increase the size by
            5, and then pushes 5 times. Calling this function in a loop will
            take quadratic time since each reserve_exact call increases the
            array size. With reserve, the runtime is linear as expected.
       
              lossolo wrote 1 hour 18 min ago:
               Quadratic pushfive is just a cautionary tale about misusing
               reserve_exact. Basically, use reserve_exact when you know the
               final size (reserve once), or when you're doing a one-off
               tight sizing where memory footprint matters.
               
               Don't pre-reserve inside pushfive: just push the 5 elements
               (Vec handles growth); or, if you know how many times you'll
               call pushfive in a loop, reserve up front once with
               vec.reserve(5 * times); or use reserve instead of
               reserve_exact for incremental growth.
       
                adgjlsfhk1 wrote 55 min ago:
                 that's exactly the footgun. reserve_exact usage needs to be
                 analyzed globally, while `reserve` has the extra fuzziness
                 needed to ensure that you can't mess anything up too badly
                 with it.
       
                  lossolo wrote 31 min ago:
                  Calling that a "footgun" feels really overstated.
                  
                  Yes, reserve is the safer default because it gives you slack
                  and preserves amortized growth even if your usage pattern is
                  messy.
                  
                  But "needs global analysis" isn't unique to reserve_exact.
                  Lots of perf knobs do (chunk sizes, buffering, locking
                  granularity etc). The fix to this is to use the tool where
                  its preconditions actually hold, not to avoid the tool.
                  
                  So what I'm basically saying is that reserve_exact isn't
                  inherently dangerous, it just assumes you know the final size
                  and won't exceed it etc. If you keep bumping it in tiny steps
                  (pushfive style), that's just misuse so treating it as a flaw
                  is unwarranted.
       
          smallstepforman wrote 2 hours 17 min ago:
             From your description, I cannot determine the difference.
             Engineers like exact things; what's this fuzzy concept called
             “hint”?
       
            mytailorisrich wrote 2 hours 3 min ago:
             The difference according to the Rust doc itself:
            
            reserve()
            
            > Reserves capacity for at least additional more elements to be
            inserted in the given Vec. The collection may reserve more space to
            speculatively avoid frequent reallocations.
            
            reserve_exact()
            
            > Reserves the minimum capacity for at least additional more
            elements to be inserted in the given Vec. Unlike reserve, this will
            not deliberately over-allocate to speculatively avoid frequent
            allocations... Prefer reserve if future insertions are expected.
            
            So if you know you'll need n now but will probably need more later,
            use reserve(). If you know n is all you'll need, use
            reserve_exact().
            
            Vec is a contiguous growable array, so the optimization game is to
            minimise allocations, reallocations (to keep it contiguous), and
            unused allocated memory.
       
              kstrauser wrote 2 min ago:
              I could look this up, but I’m enjoying reading this
              conversation.
              
              Do reserve and reserve_exact increment the capacity, or ensure
              there’s still at least that much capacity remaining?
              
               If the former, if I reserve(1) 10 times in a row, does that
               mean it could be rounding up to a thousand elements (say,
               because of the page size) each time?
       
            throwaway173738 wrote 2 hours 13 min ago:
            Suppose I think I may need 128 entries at some point, but the
            vector is allocated with room for 16 entries by default. I may not
            want to allocate and then immediately allocate again. But if I get
            to a 17th entry then I’m already causing allocation. So I might
            as well allocate 128 at that time so there are no more allocations
            at all.
       
              arka2147483647 wrote 1 hour 55 min ago:
               All the functions mentioned above, even the C++ one, will
               reserve at least the number of elements given to reserve() or
               reserve_exact(), but may reserve more than that.
               
               After some pondering, and reading the Rust documentation, I
               came to the conclusion that the difference is this:
               reserve() will grow the underlying memory area to the next
               increment, or more than one increment, while
               reserve_exact() will only grow the underlying memory area to
               the next increment, but no more than that.
               
               Eg, if the grow strategy is powers of two, and we are at
               pow(2), then reserve() may skip from pow(2) to pow(4), but
               reserve_exact() would be constrained to pow(3) as the next
               increment.
               
               Or so I read the documentation. Hopefully someone can confirm?
       
                swiftcoder wrote 1 hour 33 min ago:
                > even the cpp one, will reserve atleast the number of elements
                given
                
                The C++ one, however, will not reserve more than you ask for
                (in the case that you reserve greater than the current
                capacity). It's an exact reservation in the rust sense.
                
                > reserve() will grow the underlaying memory area to the next
                increment, or more than one increment, while reserve_exact()
                will only grow the underlaying memory area to the next
                increment, but no more than that
                
                No, not quite. Reserve will request as many increments as it
                needs, and reserve_exact will request the exact total capacity
                it needs.
                
                Where the docs get confusing, is that the allocator also has a
                say here. In either case, if you ask for 21 items, and the
                allocator decides it prefers to give you a full page of memory
                that can contain, say, 32 items... then the Vec will use all
                the capacity returned by the allocator.
       
                  arka2147483647 wrote 1 hour 9 min ago:
                   [1] says
                  
                  > Increase the capacity of the vector (the total number of
                  elements that the vector can hold without requiring
                  reallocation) to a value that's greater or equal to new_cap.
                  
                   I believe that the behaviour of reserve() is
                   implementation defined.
                  
   URI            [1]: https://en.cppreference.com/w/cpp/container/vector/r...
       
                    tialaramex wrote 53 min ago:
                    Because there's only a single function here, it has to
                    either be Vec::reserve or Vec::reserve_exact
                    
                     If you don't offer Vec::reserve_exact then people who
                     needed that will run out of RAM and dub your stdlib
                     garbage. If you don't offer Vec::reserve then, as we've
                     seen, C++ programmers will say "Skill issue" whenever a
                     noob gets awful performance. So, it's an easy choice.
       
                    mandarax8 wrote 54 min ago:
                     That said, MSVC, GCC, and Clang all implement it to
                     allocate the exact value.
       
                  kbolino wrote 1 hour 25 min ago:
                  As far as I can tell, in the current implementation,
                  reserve_exact is indeed exact. The only situation in which
                  the capacity after calling reserve_exact will not equal
                  length + additional is when it was already greater than that.
                  Even if the allocator gives more than the requested amount of
                  memory, the excess is ignored for the purposes of Vec's
                  capacity: [1] Of course, this can change in the future; in
                  particular, the entire allocator API is still unstable and
                  likely won't stabilize any time soon.
                  
   URI            [1]: https://github.com/rust-lang/rust/blob/4b57d8154a6a7...
       
                    tialaramex wrote 57 min ago:
                    Maybe more interestingly, line 659, slightly above that,
                    explains that we know we got [u8] but today the ordinary
                    Rust allocator promises capacity is correct, so we just
                    ignore the length of that slice.
                    
                    We could, as that comment suggests, check the slice and see
                    if there's enough room for more than our chosen capacity.
                    We could also debug_assert that it's not less room, 'cos
                    the Allocator promised it would be big enough. I dunno if
                    that's worthwhile.
       
              demurgos wrote 1 hour 59 min ago:
              I believe that you are describing `Vec::with_capacity` which
              allows to change the initial reserved memory on construction.
              
              `reserve` and `reserve_exact` are used when mutating an existing
              vec. What you provide is not the total wanted capacity but the
              additional wanted capacity.
              
              `reserve` allows to avoid intermediate allocation.
              
              Let's say that you have a vec with 50 items already and plan to
              run a loop to add 100 more (so 150 in total).
              The initial internal capacity is most likely 64, if you just do
              regular `push` calls without anything else, there will be two
              reallocations: one from 64 to 128 and one from 128 to 256.
              
              If you call `reserve(100)`, you'll be able to skip the
              intermediate 64 to 128 reallocation: it will do a single
              reallocation from 64 to 256 and it will be able to handle the 100
              pushes without any reallocation.
              
               If you call `reserve_exact(100)`, you'll get a single
               reallocation from 64 to a capacity of 150, and also guarantee
               no reallocation during the processing loop.
              
              The difference is that `reserve_exact` is better if these 100
              items were the last ones you intended to push as you get a full
              vec of capacity 150 and containing 150 items. However, if you
              intend to push more items later, maybe 100 more, then you'd need
              to reallocate and break the amortized cost guarantees. With
              `reserve`, you don't break the amortized cost if there are
              follow-up inserts; at the price of not being at 100% usage all
              the time. In the `reserve` case, the capacity of 256 would be
              enough and let you go from 150 to 250 items without any
              reallocation.
              
              In short, a rule of thumb could be:
              
              - If creating a vec and you know the total count, prefer
              `Vec::with_capacity`
              
              - If appending a final chunk of items and then no longer adding
              items, prefer `Vec::reserve_exact`
              
              - If appending a chunk of items which may not be final, prefer
              `Vec::reserve`
       
                rsyring wrote 7 min ago:
                This should be in the docs or a blog post somewhere.  Very
                clear explanation.
       
        assbuttbuttass wrote 2 hours 51 min ago:
        I'm sure all these layers are good engineering and provide meaningful
        safety guarantees, but it sure makes the code harder to understand if
        you want to see how things are implemented.
        
        Another example, I was trying to see how i64::isqrt is implemented, but
        first you have to wade through layers of macros
       
          ameliaquining wrote 18 min ago:
          "The standard library internally overuses macros and you probably
          shouldn't use them that much in your own code" is at least a
          respectable opinion in the Rust community, I think.
       
          ozgrakkurt wrote 2 hours 48 min ago:
          This is just how rust and c++ code is in my experience.
       
            germandiago wrote 1 hour 58 min ago:
             C++ is that way because of compatibility with god knows how many
             standards, but at the end of the day it is simpler than it looks
             at first sight. The uglification of names in the std lib makes
             it disgusting to read. If only that were removed, it would
             improve a lot.
       
        gblargg wrote 3 hours 15 min ago:
         [1] (if their site blocks your browser)
        
   URI  [1]: https://archive.is/Cqr4E
       
        constantcrying wrote 3 hours 22 min ago:
         I gave up half way through. The constant description of deep
         emotional feelings, when it comes to something as sober as data
         types in the Rust standard library, makes this rather hard to read.
         It is difficult to untangle the point the author is trying to make
         from the constant bombardment with deeply emotional writing.
       
          swiftcoder wrote 1 hour 26 min ago:
          If a little lighthearted banter counts as "deeply emotional", I guess
          I understand why your username is "constant crying"
       
            constantcrying wrote 28 min ago:
             Even if you do not expect a professional tone, it is totally
             bizarre to write a technical article (which I would have liked
             to understand, since the question is interesting to me) as if
             you are talking to a toddler. The allusions to grand conspiracy
             theories and the grave emotional turns, which are everywhere,
             make the article sound as if it was written for children.
            
            I do not like lighthearted banter, but this is not the problem. The
            problem is that the author has a hard time writing a single
            paragraph without it. They should seriously consider doing some
            training in technical or scientific writing.
            
            >, I guess I understand why your username is "constant crying"
            
            What a novel comment. But yes, I do hate the constant positivity,
            especially in the face of a subpar product.
       
        James_K wrote 3 hours 46 min ago:
         While safe Rust may be relatively simple to write, and certainly
         easier to write than safe C, this article has somewhat added to my
         belief that unsafe Rust is far too difficult to write. Perhaps some
         of this is deliberate, as a kind of defence mechanism against people
         using it willy-nilly, but it still seems over-designed.
       
          dathinab wrote 1 hour 5 min ago:
           It's a pretty normal experience when looking at standard libraries
           (I mean, look at CPython, where you need to know C, CPython
           interpreter internals, and the C std lib in addition to all the
           complicated meta class magic you normally don't need to care
           about).
          
          Like in Vec we have following things most normal rust code wouldn't
          do:
          
           - `Unique<u8>` instead of `Unique<T>`, to micro optimize code size
          
          - RawVec, RawVecInner, due to the previous point, internal structure
          of libstd, relations to libcore/liballoc usage and more clean
          separation between unsafe and safe code etc. Which a "just my own
          vec" impl might not have
          
          - internal `Unique` helper type to reuse code between various data
          structures in std
          
           - the `NonNull` part is another micro optimization which might
           save 1-4 bytes for `Option<Vec<T>>` (up to 4 due to alignment, but
           I think this optimization isn't guaranteed in this context)
          
           Like it could be just a `Vec<T> { ptr: *const T, cap: usize, len:
           usize, _marker: PhantomData<T> }`, but that wouldn't have quite
           the maximized (and micro optimized) fine tuning which tends to be
           in std libraries but not in random other code. (I mean, e.g., the
           code size micro-opt is irrelevant for most rust code, but std goes
           into _all_ (std using) rust code, so it's needed.)
          
          But anyway, yes, unsafe rust can be hard. Especially if you go beyond
          "straight forward C bindings" and write clever micro optimized data
          structures. It's also normally not needed.
          
          But also "straight forward C bindings" can be very okay from
          complexity _as long as you don't try to be clever_ (which in part
          isn't even rust specific, it tends to just become visible in rust).
       
          ozgrakkurt wrote 2 hours 46 min ago:
          Yes, it is indeed very difficult to write unsafe rust, also very
          difficult to write anything low level and reusable
       
          hmry wrote 3 hours 6 min ago:
          I think most of the complexity here comes from trying to write as
          little unsafe code as possible (and not repeating any unsafe
          operation in multiple places, hence the many layered abstractions).
          
          If you were to implement Vec without layering, it would be no more
          complicated than writing a dynamic array in C (but more verbose due
          to all the unsafe blocks and unabbreviated names)
       
            kbolino wrote 1 hour 57 min ago:
            Notably, some of the abstraction is actually there to prevent the
            compiler from generating a lot of redundant code. The process of
            monomorphization (turning polymorphic generic code into usable
            machine code for particular types) can seriously bloat the size of
            binaries.
       
          tialaramex wrote 3 hours 8 min ago:
          Unsafe Rust does have tooling to help you not make some kinds of
          mistakes, but by its nature you have to be able to press on after the
          tools can't help you.
          
          For example MIRI can understand pointer twiddling under strict
          provenance. It'll synthesize provenance for the not-really-pointers
          it is working with and check what you wrote works when executed. But
          if your algorithm can't work with strict provenance (and thus
          wouldn't work for CHERI hardware for example) but your target is
          maybe an x86-64 Windows PC so you don't care, you can write exposed
          or even just arbitrary "You Gotta Believe Me" pointer twiddling, MIRI
          just can't check it any more when you do that, so, hope you never
          make mistakes.
       
          SonOfLilit wrote 3 hours 42 min ago:
          When I was young I peeked into the C++ stdlib, which is probably the
          best thing to compare this to. It was orders of magnitude worse.
       
            tonyedgecombe wrote 2 hours 11 min ago:
            stdlib code was always impenetrable to me, this was one of the
            reasons I moved away from C++.
       
        Vipsy wrote 3 hours 52 min ago:
        I love how it shows that you can have both performance and safety
        without jumping through hoops. The little details of memory management
        really shape how we end up writing code, often without even noticing.
        Rust makes it possible to get close to the hardware while still keeping
        you out of the usual trouble spots. It’s one of those things you
         appreciate more the longer you use it.
       
          johnisgood wrote 3 hours 6 min ago:
          Erm what? The article contradicts this. I'd push back on "without
          jumping through hoops". The article itself demonstrates 6 layers of
          abstraction (Vec -> RawVec -> RawVecInner -> Unique -> NonNull ->
          *const u8) built on unsafe primitives. The standard library does the
          hoop-jumping for you, which is valuable, but the hoops exist, they're
          just relocated.
          
          I bet Vec's implementation is full of unsafe blocks and careful
          invariant management. Users face different hoops: lifetime wrangling,
          fighting the borrow checker on valid patterns, Pin semantics, etc.
           Rust trades runtime overhead for upfront costs: compile-time
           complexity and developer time wrestling with the borrow checker.
           That is often the right trade, but it's not hoop-free.
          
          The "close to hardware" claim needs qualification too. You're close
          to hardware through abstractions that hide significant complexity.
          Ada/SPARK gives formal correctness guarantees but requires proof
          effort and runtime checks (unless you use SPARK which is a subset of
          Ada). C gives you actual hardware access but manual memory
          management. Each has trade-offs - Rust's aren't magically absent.
       
            zozbot234 wrote 2 hours 42 min ago:
            It's fine [1] [2] (But until two weeks ago, it wasn't - you could
            in theory cause UB by writing purely safe code within the Vec
            implementation.)
            
   URI      [1]: https://github.com/model-checking/verify-rust-std/issues/2...
   URI      [2]: https://github.com/model-checking/verify-rust-std/pull/481
       
              johnisgood wrote 2 hours 32 min ago:
              I had no idea, that is wild, especially considering how everyone
              has been outcrying "Rust is safe". Thanks for the info though.
              
              One should wonder what else is there... but they do not wonder,
              they are sadly rigid with their thinking that Rust is the perfect
              memory safe language with zero runtime overhead.
       
                zozbot234 wrote 2 hours 21 min ago:
                It's not real unsoundness because it only applies within the
                private implementation details of Vec - you still can't cause
                UB from outside code. But it's a real oversight that only
                proves how hard writing unsafe code is.
       
                Klonoar wrote 2 hours 22 min ago:
                You know you can discuss languages without the weird emotional
                labeling from your last paragraph, right?
                
                Plenty of people wonder. It’s part of building the language.
       
            ozy wrote 2 hours 45 min ago:
             Most of those are abstractions, but not runtime overhead. NonNull
             even enables an optimization not available to most other
             languages.
            
            And you can wonder, is this accidental complexity? Or is this
            necessary complexity?
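             The optimization alluded to is the niche: NonNull rules out the
             all-zero bit pattern, so Option can use it for None with no extra
             tag. A quick check (sizes as observed on current rustc; the Vec
             niche is an implementation detail, not a language guarantee):

```rust
use std::mem::size_of;
use std::ptr::NonNull;

fn main() {
    // Option<NonNull<u8>> uses the forbidden null pattern as `None`,
    // so it is exactly pointer-sized.
    assert_eq!(size_of::<Option<NonNull<u8>>>(), size_of::<NonNull<u8>>());

    // The niche propagates: Option<Vec<u8>> costs no more than Vec<u8>,
    // because Vec's internal pointer is non-null.
    assert_eq!(size_of::<Option<Vec<u8>>>(), size_of::<Vec<u8>>());
}
```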
       
          alickz wrote 3 hours 45 min ago:
          I come from a Swift/Kotlin background and I've been learning Rust for
          fun in my spare time
          
          From what I heard online I was expecting it to be a lot harder to
          understand!
          
          The moving/borrowing/stack/heap stuff isn't simple by any means, and
          I'm sure as I go it will get even harder, but it's just not as
          daunting as I'd expected
          
          It helps that I love compiler errors and Rust is full of them :D
          Every error the compiler catches is an error my QA/users don't
       
            Tade0 wrote 3 hours 2 min ago:
            The language and associated tooling keep improving.
            
            Over the course of the last decade I've made several attempts to
            learn Rust, but was always frustrated with having code that
            reasonably should compile, but didn't and I had to run the build to
            even find that out.
            
            Now we have rust-analyzer and non-lexical lifetimes, both
            tremendously improving the overall experience.
            
             I still don't enjoy the fact that borrows are at the struct level,
             so you can't just borrow one field (there's even a discussion on
             that somewhere in Rust's repo), but I can work around that.
            
            To drive the point home: I'm a frontend developer. This is a very
            different environment compared to what I'm used to, yet I can be
            productive in it, even if at a slower pace in comparison.
       
              germandiago wrote 1 hour 52 min ago:
              Rust is not the most productive language by any means... it is
              hardened AF if you avoid unsafe, but productive?
              
              I can code much faster in almost any language compared to Rust.
              It creates mental overhead. For example, to compare it to
              siblings, Swift and C++ (yes, even C++, but with a bit of
              knowledge of good practices) lets you produce stuff more easily
              than Rust.
              
               It is just that Rust, compared to C++, comes extra-hardened. But
               now go refactor your Rust code if you started to add lifetimes
               around... things get coupled quickly. It is particularly good at
               hardening and particularly bad at prototyping.
              
               Hardening and prototyping seem to be at odds once you have
               borrowing without a garbage collector, since you need to mark
               the lifetime in some way.
       
                zozbot234 wrote 1 hour 45 min ago:
                 Rust is not that bad at prototyping; you just need to use the
                 right featureset when writing quick prototype code.  Don't use
                 lifetimes; use clone() and Rc/Arc freely to get around
                 borrow-checker issues. Use .unwrap() or .expect("etc.")
                whenever you're not sure how to pass an error back to the
                caller. Experiment with the "Any" trait for dynamic typing and
                downcasts, etc. The final code will still be very
                high-performance for prototype code, and you can use the added
                boilerplate to guide a refactoring into a "proper"
                implementation once the design stabilizes.
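                 A toy sketch of that prototyping style (all names here are
                 made up for illustration):

```rust
use std::rc::Rc;

fn main() {
    // Rc for shared ownership instead of wrestling with lifetimes.
    let names = Rc::new(vec!["alice".to_string(), "bob".to_string()]);
    let names_for_report = Rc::clone(&names);

    // .expect() instead of threading errors back to the caller.
    let first = names_for_report
        .first()
        .expect("names should not be empty");
    assert_eq!(first, "alice");

    // clone() freely to dodge borrow-checker friction while the
    // design is still in flux; tighten it up once things stabilize.
    let copy: Vec<String> = names.iter().cloned().collect();
    assert_eq!(copy.len(), 2);
}
```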
       
            LoganDark wrote 3 hours 28 min ago:
            > It helps that I love compiler errors and Rust is full of them :D
            Every error the compiler catches is an error my QA/users don't
            
            amen! I despise Python even though I used to love it, and that's
            because it's full of runtime errors and unchecked exceptions. The
            very instant I learned more strongly typed languages, I decided
            never to go back.
       
              germandiago wrote 1 hour 52 min ago:
              Python properly paired with typing is very competent IMHO.
       
                mring33621 wrote 59 min ago:
                I would like to learn how to do this well!
       
              zelphirkalt wrote 2 hours 55 min ago:
              And then comes the point, where you get a scenario, which is
              actually, you'd think, at least, something simple, but then gets
              really tough to express in strongly, statically typed languages,
              and you might take a step back and consider for a moment, how
              simple it would be to express this in a language like Python,
              while still being memory safe.
              
              Statically typing things is great, and I enjoy it too, when it is
              practical, but I don't enjoy it, when it becomes more complicated
              than the actual thing I want to express, due to how the type
              system of that particular language or third party tool (in
              Python) works.
       
                LoganDark wrote 25 min ago:
                I find it fun to statically prove correctness of difficult
                problems. To each their own, though, of course.
       
        5- wrote 3 hours 55 min ago:
        related:
        
   URI  [1]: https://doc.rust-lang.org/nomicon/vec/vec.html
       
          James_K wrote 3 hours 45 min ago:
          Oh, I never knew that Rust had variance. I always just assumed
          everything was invariant. Strange that they've got no way to write it
          down in the type system.
       
            pornel wrote 3 hours 20 min ago:
            In prehistoric Rust, variance used to be named more explicitly.
            However, the terminology of covariant and contravariant subtyping
            of lifetimes is a language theory jargon. This is the right
            perspective for language design, but programmers using the language
            don't necessarily use these terms.
            
             It's been replaced with a "by example" approach. It's much easier
             to teach: just add a fake field that acts as if you had this type
             in your struct. Rust then figures out all of the details it needs.
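             The "fake field" in question is a PhantomData marker; a
             hypothetical sketch (MyBox is made up for illustration):

```rust
use std::marker::PhantomData;

// A raw-pointer wrapper that should behave, for variance and
// drop-check purposes, as if it owned a T. Instead of an explicit
// variance annotation, the marker field states this "by example".
struct MyBox<T> {
    ptr: *const T,
    _marker: PhantomData<T>, // acts as if the struct held a T
}

fn main() {
    let value = 42i32;
    let b = MyBox { ptr: &value, _marker: PhantomData };
    assert!(!b.ptr.is_null());
}
```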
       
              lpghatguy wrote 2 hours 40 min ago:
              Years ago, I introduced Flow gradual typing (JS) to a team. It
              has explicit annotations for type variance which came up when
              building bindings to JS libraries, especially in the early days.
              
              I had a loose grasp on variance then, didn't teach it well, and
              the team didn't understand it either. Among other things, it made
              even very early and unsound TypeScript pretty attractive just
              because we didn't have to annotate type variance!
              
              I'm happy with Rust's solution here! Lifetimes and Fn types
              (especially together) seem to be the main place where variance
              comes up as a concept that you have to explicitly think about.
       
              kibwen wrote 2 hours 53 min ago:
              Variance is an absolute disaster when it comes to language
              pedagogy. One of the smartest things Rust ever did was avoiding
              mentioning it in the surface-level syntax.
       
                branko_d wrote 2 hours 37 min ago:
                Why? It's just a property of type transformation.
                
                 Assuming Parent <- Child ("<-" denotes inheritance):
                 
                 - If Generic<Parent> <- Generic<Child>: it's covariant.
                 
                 - If Generic<Parent> -> Generic<Child>: it's contravariant.
                 
                 - Otherwise: it's invariant.
                
                Or at least it's that straightforward in C#. Are there
                complications in Rust?
       
                  kannanvijayan wrote 1 hour 33 min ago:
                  The difficulty is that even trivial generic types aren't
                  cleanly one or the other.  A mutable reference type is
                  covariant on read, and contra on write.
                  
                  Scala was the first language in my exposure to try to
                  simplify that away by lifting the variance annotations into
                  the type parameter directly.  It reduced some of the power
                  but it made things easier to understand for developers.  A
                  full variance model would annotate specific features
                  (methods) of a type as co/contra/invariant.
                  
                  I'm not sure what approach C# takes - I haven't looked into
                  it.
                  
                  Rust doesn't expose variances for data structures at all.  It
                  exposes them for traits (type classes) and lifetimes, but
                  neither of those are accessible in a higher order form. 
                  Trait usage is highly constrained and so is lifetime usage.
                  
                  Traits mainly can be used as bounds on types.  Some subset of
                  traits, characterized as object traits, can be used as types
                  themselves, but only in an indirect context.  These are
                   highly restricted.  For example, if you have traits T and U
                   where U extends T, and you have a reference `&dyn U`, you
                   couldn't (until trait upcasting stabilized recently) convert
                   that to a `&dyn T` without knowledge of the underlying
                   concrete type.  You _can_ convert `&A where A: U`
                  to `&B where B: T`, but that just falls out of the fact that
                  the compiler has access to all the concrete type and trait
                  definitions there and can validate the transform.
                  
                  Rust's inheritance/variance story is a bit weird.  They've
                  kept it very minimal and constrained.
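                   A small sketch of how that plays out in Rust: shared
                   references are covariant in their target, while mutable
                   references are invariant (the rejected lines are left
                   commented):

```rust
fn main() {
    let mut s: &'static str = "static";
    {
        // Covariance: a longer-lived &'static str coerces to a
        // shorter-lived &'a str when it is only read through.
        let shorter: &str = s;
        assert_eq!(shorter, "static");
    }
    // Invariance of &mut T in T: the lines below are rejected,
    // because writing a short-lived borrow through the mutable
    // reference could leave `s` dangling afterwards.
    // let local = String::from("local");
    // let r: &mut &str = &mut s;
    // *r = &local; // error: `local` does not live long enough
    s = "still static";
    assert_eq!(s, "still static");
}
```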
       
                    zozbot234 wrote 1 hour 13 min ago:
                    Note that the Rust folks are working on a "safe-transmute"
                    facility that may end up introducing a kind of variance to
                    the language.
       
                  sesm wrote 1 hour 38 min ago:
                   It was easy to teach to Java devs just by using one
                   mnemonic: PECS = Producer Extends, Consumer Super.
       
                  Sharlin wrote 2 hours 23 min ago:
                  It's not particularly easy to teach what that actually means
                  and why it's a thing. It's quite easy to show why in general
                  G cannot be a subtype of G even if A is a subtype of B, it's
                  rather more involved pedagogically to explain when it can,
                  and even more confusing when it's actually the other way
                  around (contravariance).
                  
                  Anyway, Rust has no subtyping except for lifetimes ('a
       
                    smallstepforman wrote 2 hours 11 min ago:
                     I’m getting old. I can understand the words, but not the
                     content. At this point, show me disassembly so that I can
                     understand what actually happens on the fundamental
                     byte/CPU instruction level; then I can figure out what
                     you’re trying to explain.
       
                      Sharlin wrote 1 hour 46 min ago:
                      Nothing really happens on the instruction level because
                      this is all type system logic.
       
                  tialaramex wrote 2 hours 25 min ago:
                  Sure, you can keep telling me that and it doesn't stay. I'm
                  completely happy writing Rust, and I am aware it needs
                  variance to work in principle and when I do need that
                  information I know the magic words to type into doc search.
                  
                  It's like how I can hold in my head how classical DH KEX
                  works and I can write a toy version with numbers that are too
                  small - but for the actual KEX we use today, which is
                  Elliptic Curve DH I'm like "Well, basically it's the same
                  idea but the curves hurt my head so I just paste in somebody
                  else's implementation" even in a toy.
                  
                  Sorry?
       
              lyu07282 wrote 3 hours 12 min ago:
              > but programmers using the language don't necessarily use these
              terms.
              
              this always annoyed me about the python type annotations, you are
              supposed to already know what contravariant / covariant /
              invariant means, like: `typing.TypeVar(name, covariant=False,
              contravariant=False, infer_variance=False)`
              
               It's used in documentation and error messages too:
              
              > the SendType of Generator behaves contravariantly, not
              covariantly or invariantly.
              
   URI        [1]: https://docs.python.org/3/library/typing.html#annotating...
       
            tialaramex wrote 3 hours 33 min ago:
             Lifetimes imply the need for the idea of variance. [1]
            
   URI      [1]: https://doc.rust-lang.org/reference/subtyping.html#varianc...
       
              kibwen wrote 2 hours 51 min ago:
               Rather, any language with both generics and subtyping needs to
               consider variance. Rust doesn't have subtyping in general, but
               lifetimes do technically have subtyping relationships, so
               lifetimes are the only place where this matters, and fortunately
               Rust can just infer the right behavior from some simple
               type-level heuristics.
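               A minimal example of that inference at work: 'static outlives
               any 'a, so a longer-lived reference is accepted wherever a
               shorter-lived one is expected.

```rust
// `first_word` only asks for some lifetime 'a; because shared
// references are covariant in their lifetime, a &'static str is
// accepted too ('static is a subtype of every 'a).
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    // &'static str where &'a str is expected: fine.
    assert_eq!(first_word("hello world"), "hello");

    // A shorter, stack-scoped borrow works as well.
    let owned = String::from("foo bar");
    assert_eq!(first_word(&owned), "foo");
}
```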
       
        veber-alex wrote 4 hours 22 min ago:
        This article feels too long and rambly considering how little actual
        technical information it provides.
       
          LoganDark wrote 3 hours 31 min ago:
          I don't think the point of the article is the technical information,
          I think it's more of an emotional expression. Still valuable, just
          differently, I suppose.
       
            germandiago wrote 1 hour 59 min ago:
            Like much of what surrounds Rust. Looks quite emotional to me. If
            you do not know what I mean, go to the Rust reddit and discuss and
            compare on solid grounds without using an extremely flattering
            tone.
            
            You will see armies of fanatics voting negative.
       
              LoganDark wrote 27 min ago:
              I won't deny that there are lots of emotions surrounding Rust,
              both for myself and for many others. But there are different ways
              to write about it, and this article looks more of an emotional
              style ("here's how my journey went") than a technical one
              ("here's how this works"). I still find it fun to read, but not
              everyone will, and that's okay.
       
              spoiler wrote 1 hour 0 min ago:
              To be fair, that is just the typical Reddit (or any social media
              platform) experience
       
              echelon wrote 1 hour 21 min ago:
              It's what you call an "educational post".
              
              The Rust community is doing a fantastic job of training both the
              next generation as well as those looking to dip their toes in.
       
        rossant wrote 4 hours 24 min ago:
        Tangent but nice web design.
       
          herywort wrote 4 hours 14 min ago:
           First thing I did was toggle reader view; scrolling is noticeably
           more responsive there.
       
       
   DIR <- back to front page