_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   How We Test Concurrent Primitives in Kotlin Coroutines
   DIR   text version
       
       
        brabel wrote 3 days ago:
        In the first test example, how does the framework know that `size`
        shouldn't return -1?
       
          the-smug-one wrote 3 days ago:
           There's a finite number of valid sequential execution histories
           for the example given. Run each of these on a single thread,
           collect the resulting states (or return values, or whatever),
           and compare the concurrent outcome against them when model
           checking.
          
          I don't know if that's what LinCheck does, but that's what I'd do if
          I wrote LinCheck :-).
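
        The enumerate-and-compare idea described above can be sketched in
        plain Java. This is a naive illustration under assumed names, not
        how Lincheck is actually implemented: for a hypothetical scenario
        where one thread increments a counter while another reads its
        size, every sequential ordering of the two operations is run and
        its result recorded; a concurrent outcome is then accepted only
        if it matches one of those recorded results, which is why a value
        like -1 from the first test example would be rejected.

        ```java
        import java.util.HashSet;
        import java.util.Set;

        public class NaiveLinearizabilityCheck {

            // Hypothetical scenario: thread A runs increment(),
            // thread B runs size(). Enumerate both sequential orders
            // on a fresh counter and record what size() returned.
            static Set<Integer> validSizeResults() {
                Set<Integer> results = new HashSet<>();

                // Order 1: size() first, then increment() -> size() sees 0
                int counter1 = 0;
                results.add(counter1);
                counter1++;

                // Order 2: increment() first, then size() -> size() sees 1
                int counter2 = 0;
                counter2++;
                results.add(counter2);

                return results;
            }

            // A concurrent run is linearizable iff its observed result
            // matches some sequential history; -1 matches none of them.
            static boolean isLinearizable(int observedSize) {
                return validSizeResults().contains(observedSize);
            }

            public static void main(String[] args) {
                System.out.println(isLinearizable(0));  // true
                System.out.println(isLinearizable(1));  // true
                System.out.println(isLinearizable(-1)); // false
            }
        }
        ```

        Lincheck's model checker is far more sophisticated (it
        systematically explores interleavings and switch points), but the
        acceptance criterion is the same: the concurrent outcome must
        match some sequential history.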
       
        bitcharmer wrote 3 days ago:
        Someone correct me if I'm wrong, but there is no mention of confidence
        intervals for the testing framework they use.
        
         In testing concurrency constructs and primitives with a framework
         like that, there is always a non-zero probability of ending up
         with a false positive. I mean, it would make sense to at least
         hint at the results not being the ultimate source of truth. I
         have seen no mention of this.
        
        What am I missing?
       
          jillesvangurp wrote 3 days ago:
           A false positive would be something breaking when it didn't
           break. This instead tells you that something broke and gives
           you a detailed trace of how it happened, showing you how to
           reproduce the breakage (which is super hard with concurrency
           bugs). If that is somehow expected behavior, you have a flaky
           test, and if you can't rely on your tests, you should fix
           them.
          
           And yes, that is hard when testing asynchronous and concurrent
           code. This framework is intended to make it easier and to get
           more out of your tests without a lot of boilerplate.
       
          the-smug-one wrote 3 days ago:
          Why would it be able to generate false positives?
       
       
   DIR <- back to front page