_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
   URI Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
   URI   How often should I rotate my SSH keys?
   DIR   text version
       
       
        EthanHeilman wrote 2 days ago:
        FTA:
        
        >The approach we take in our own infrastructure, modeled after TLS
        certificate infrastructure and Let’s Encrypt in particular, is to
        authenticate each device+person combination with a separate private
        key; that way, if a credential is stolen, we always know from where.
        
         The startup I co-founded, bastionzero.com, is motivated by this exact
         problem: what is the best way to do SSH key management? We hope to
         improve remote shell access by replacing long-lived SSH keys stored on
         your machine with cryptographic identity attestations from multiple
         sources, e.g. OpenID Connect, FIDO, network, etc. The end goal is to
         eliminate both single points of trust and long-lived secrets.
       
        ozim wrote 2 days ago:
         Unfortunately, the advice in the article applies to people who don't
         lock their computers, so it's talking to a wall.

         I rotate my SSH keys when I get a new laptop/PC. A private key should
         stay on one machine only. I think the longest I've kept a usable
         laptop is 5 to 6 years.

         The people who will click through to the article are probably the
         ones already interested in security, so it's also preaching to the
         choir.
       
        jon-wood wrote 3 days ago:
        If you're on AWS I strongly recommend EC2 Instance Connect, which uses
        the AWS CLI and IAM to issue short lived SSH keys that are
        automatically accepted by EC2 instances you have been granted access to
        via IAM policies.
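         
         Roughly, a session looks like this (the instance ID, zone, and OS
         user below are placeholders): you push a throwaway public key, which
         stays valid for about 60 seconds, then ssh in as usual:
         
         $ ssh-keygen -t ed25519 -f /tmp/eic_key -N ""
         $ aws ec2-instance-connect send-ssh-public-key \
             --instance-id i-0123456789abcdef0 \
             --availability-zone us-east-1a \
             --instance-os-user ec2-user \
             --ssh-public-key file:///tmp/eic_key.pub
         $ ssh -i /tmp/eic_key ec2-user@<instance-address>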
       
        igneo676 wrote 3 days ago:
         Yubikey + OpenPGP + SSH support enabled on your gpg agent
        
        Unless I'm missing something, that essentially solves the issue, no?
        Persistent private key, usable across all your machines that can't be
        exfiltrated. Bonus since you can even generate the key directly on the
        device!
        
        This gets even easier with newer OpenSSH versions if you use FIDO2 auth
        for SSH. I've not played with it personally, but word is you just plug
        in the key and off you go
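         
         For the FIDO2 route, a minimal sketch (assuming a reasonably recent
         OpenSSH, 8.2 or so, on both ends and a FIDO2-capable token):
         
         $ ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
         $ ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@host
         
         The file on disk is only a handle; authentication still needs the
         physical token present (and, depending on flags, a touch) each time.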
       
          EthanHeilman wrote 2 days ago:
           For a single person that's a pretty good solution; just make sure
           you back up your SSH key so you can recover if you lose your
           Yubikey.
           
           In a multi-person enterprise the issue becomes how to manage all
           these keys with on-boarding, off-boarding, logs, and access policies
           as people's roles change. Maybe someone only needs access for a day?
           How do you ensure their key is removed tomorrow morning? Maybe
           someone never set up their Yubikey and their SSH key is just living
           on their laptop and backed up to Dropbox. SSH is a great solution
           for individuals, but I've encountered so many anti-patterns when
           it's used among large teams.
       
        lucideer wrote 3 days ago:
        > if you have your ssh private key on several machines, you have to
        remember to copy it to all those places
        
        Private keys should be per-host. I did copy my private key across
        multiple machines way back when I was a bit greener and first learning
        about how SSH worked. I did that for years. Not everyone knows best
        practice automatically and we should try and educate instead of shaming
        people who do it.
        
        BUT
        
        When you believe you have enough knowledge on the subject to go and
        write an article about SSH key best practice and you're still casually
        doing this as if it's OK, that's different. This is basic stuff; I have
        no interest in following any advice from this author if they think
        copying private keys to different hosts is ok.
       
          seiferteric wrote 2 days ago:
          Also, "ssh -A" allows you to take your keys with you. I also used to
          copy my keys around, but now I just use "-A" option when ssh'ing into
          a different host from my main workstation when I need my keys for
          something (git cloning for example).
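           
           A minimal example of what that looks like in practice (host names
           are placeholders):
           
           $ ssh -A me@remote-host
           me@remote-host$ git clone git@github.com:example/repo.git
           
           One caveat: forwarding exposes your agent socket to the remote
           host, so it's best reserved for hosts you trust.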
       
        rhuber wrote 3 days ago:
        The author of this article should consider following their own advice,
        since they have a woefully outdated RSA-1024 ssh key securing their
        GitHub account.
        
        $ curl -s [1] > blah
        
        $ ssh-keygen -l -f blah
        
        1024 SHA256:1IWAUSXOcCKLcmOdAec8JbDt3T75udA4KSpRosEWUaU no comment
        (RSA)
        
        (update: they have now replaced it with an RSA 2048 bit key. progress.)
        
   URI  [1]: https://github.com/apenwarr.keys
       
          suifbwish wrote 3 days ago:
           It would still take a long-ass time to brute-force a 1024-bit key
           unless there is no brute-force detection. Alternatively, capturing
           the traffic can allow brute-forcing the applied algorithm itself.
       
            Retr0spectrum wrote 3 days ago:
            It's a public key, you can perform the "brute force"
            (factorisation) entirely offline, to derive the private key.
            Hypothetically. For now, RSA-1024 is too expensive to crack, for
            mere mortals.
       
            rhuber wrote 3 days ago:
            I wasn't commenting on the strength of RSA-1024, per se, but on the
            assumed age of that key. OpenSSH's ssh-keygen hasn't defaulted to
            1024 bit RSA keys since before version 4.2, in 2005. (I had to look
            it up: [1] )
            
            You can still generate a 1024 bit RSA key, but someone would have
            to go out of their way to do so, and I can't imagine why they would
            have done that in the past .. decade?
            
   URI      [1]: https://www.openssh.com/releasenotes.html
       
              foolmeonce wrote 3 days ago:
              > I can't imagine why they would have done that in the past ..
              decade?
              
              Maybe they aren't using software keys, but rather a low
              quality/older/small-kb hardware token or following the default
              guide for one? The vast majority supported 2048 in 2010 though..
       
            katbyte wrote 3 days ago:
             Alternatively, there's no reason to use 1024? I've been using 4096
             for maybe a decade now?
       
        tonoto wrote 3 days ago:
         I believe this article is a bit out of context, yes?
         
         In an ordinary environment (at home, at a big corp.) you treat your
         keys as highly personal valuables and would not store keys on shared
         storage (unless there are special circumstances, due to air gaps or
         because shared non-personal jobs need to run with a key, and then
         with a separately generated key).
         
         If I'm ever at the point where I have to rotate my private SSH key,
         then the game is lost anyway.
         
         If the storage on your personal device cannot be encrypted, a
         solution might be to pass along something like ssh-add - <<< "$(pass
         show ssh/env1-key1)", and have that GPG key on a Yubikey, for
         instance.
         
         In my (perhaps too narrow) view, this issue is not the real issue,
         and the real issue should be tackled instead: for instance, by not
         making SSH publicly accessible in the first place and by educating
         staff on locking their personal workstations. With the correct
         mindset, other things should be the focus in the area of security.
       
        martin_a wrote 3 days ago:
        tl;dr: How often should I rotate my ssh keys? More often than never! As
        often as you can.
       
        loftsy wrote 3 days ago:
        I haven't worried about SSH keys in years since moving everything to
        GCP or AWS. One less thing to worry about.
       
        bombcar wrote 3 days ago:
         It would be really nice if there were an "ssh update" feature similar
         to ssh-copy-id but built into the ssh command itself. So I could
         somehow indicate to SSH on my machine that key_2 is a replacement for
         key_1, and on any host that I ssh into that has key_1 in
         authorized_keys, it should be replaced with key_2 (leaving everything
         else the same).
        
        That way I could rotate keys and painlessly update them (and still have
        the old key for rarely accessed machines).
        
        Sure it's not perfect, but it's a sight better than never rotating.
        
        I wonder if there's something similar for the ssh_host_key - somehow to
        say "old key is deprecated but still here so ssh doesn't scream bloody
        murder, but use the new key from now on".
       
          jlgaddis wrote 2 days ago:
          > I wonder if there's something similar for the ssh_host_key ...
          
           There is, since version 6.8 (~6 years ago) [1].
          
          Make sure you have "UpdateHostkeys" set to "yes" (or "ask") on the
          client side. It was off by default when it was first added but I
          think I remember reading at some point that it changed to enabled by
          default.
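           
           Something like this in ~/.ssh/config should do it (a sketch,
           assuming a reasonably recent client):
           
           Host *
               UpdateHostKeys ask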
          
          --
          
   URI    [1]: https://lwn.net/Articles/637156/
       
        egberts wrote 3 days ago:
         I recall a problem with using an enterprise SSH portal a while back,
         which centralized not only keys but logging as well.
         
         Does a MitM affect that design ... still? Asking for a friend.
       
        sshexpert wrote 3 days ago:
        Every 360 degrees
       
        GekkePrutser wrote 3 days ago:
        I just have my SSH keys on a yubikey: They're generated on token and
        the private key can't be exported. This makes them unique and
        unstealable.
        
        They can still be abused "live" while they are inserted via an SSH
        agent, but I turned on the "touch to sign" feature so for every use you
        have to touch the token button as well.
       
          jlgaddis wrote 2 days ago:
          Hopefully your Yubikey was manufactured AFTER the last bug in the
          (Infineon) key generation library was found and fixed.
       
        arichard123 wrote 3 days ago:
        Logically you're going to need some kind of digital key, and logically
        keys can get stolen. So the problem is detecting theft. Rotating keys
        is not detecting theft, YubiKey is not detecting theft. Could some kind
        of jump server where all your ssh connections for your company's
        servers have to originate from do the trick of logging, reporting and
        restricting usage to detect theft? You log in with key1 to an
        unprivileged account on the jump server, and it decides if you look
        genuine (based on ip/ time of day/ time since last usage/ whatever) and
        then alerts an admin if a trigger is ticked. Then if you get access,
        you as a user are given logs to help you detect theft. Then if all is
        well you SSH to your actual destination, perhaps through port
        forwarding using key2 stored on your local machine, or perhaps from the
        jump server with key2 stored there.
       
          crdotson wrote 2 days ago:
          Using a separate secure element built into a smartphone is probably
          the best solution here. Most smartphone users will not lose them
          often, and will notice almost immediately.
       
          jlgaddis wrote 2 days ago:
          > ... perhaps through port forwarding ...
          
          See "ProxyJump" in the "ssh" (client) man page.
       
        ItsMrMe wrote 3 days ago:
         How does an SSH jumphost help with rotating your keys? You still need
         to update the jumphost and all other servers with the new key, right?
       
          jasongill wrote 3 days ago:
          I will never understand people who think a bastion host or "jump box"
          is a good idea. I've seen so many companies that have experienced
          extended downtime because they had an issue and it also broke their
          jumpbox - so nothing could be fixed until their bastion host was
          fixed. Plus, the companies I've bought that used bastion hosts ended
          up having the worst security on the jumpbox machine because it's a
          "set it and forget it" machine that isn't normally treated like a
          production server. I've seen bastion hosts that had root kits for
          years and nobody noticed because they never really logged in to look
          around at the bastion host itself.
          
          I've never understood the appeal, nor have I seen anyone do it well,
          honestly. It seems to get mentioned in all of these novice and
          semi-pro SSH related articles as a good idea and just makes my eye
          twitch when I see it.
       
          orwin wrote 3 days ago:
          I strongly disagree with the article, but:
          
           Have a Yubikey to connect to your jumphost (we called that the
           access VM), then have a cronjob that will, every month, run Ansible,
           generate a key pair on the access VM, and deploy the new keys to the
           accessible VMs (ideally not your log-collecting VM, which should be
           even more secured).
       
            jlgaddis wrote 2 days ago:
            Were you using some awesome new cryptographically secure method to
            manage the passphrases of the newly generated keys (and communicate
            them to the user)?
            
            Or, more likely, were all of these new private keys granting access
            to all of your hosts just sitting there on disk unencrypted -- and,
            as a result, freely available to and easily stolen and used to gain
            access to all of your production machines by the first attacker to
            come along and compromise the bastion host?
       
        gitanovic wrote 3 days ago:
         Actually, an SSH key rotation could be scripted in a cron job and be
         completely transparent to the user...
        
        Every X days generate a new RSA key pair, connect to all the hosts and
        replace the public key matching the previous one in the
        ~/.ssh/authorized_keys
        
        Is there any issue with this solution?
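         
         As a rough sketch of the idea (the file paths, host list, and error
         handling are placeholders, not a drop-in script):
         
         #!/bin/sh
         new=$HOME/.ssh/id_new
         old=$(cat "$HOME/.ssh/id_current.pub")
         ssh-keygen -t rsa -b 4096 -N "" -f "$new"
         for host in $(cat "$HOME/.ssh/hosts.txt"); do
             # install the new key, then drop the old public key
             ssh-copy-id -i "$new.pub" "$host"
             ssh -i "$new" "$host" "grep -vF '$old' \
                 .ssh/authorized_keys > ak.tmp \
                 && mv ak.tmp .ssh/authorized_keys"
         done
         mv "$new" "$HOME/.ssh/id_current"
         mv "$new.pub" "$HOME/.ssh/id_current.pub"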
       
          SahAssar wrote 3 days ago:
           You can't have a passphrase on the key, and a passphrase IMO
           probably provides better security than rotating SSH keys.
       
        ThePhysicist wrote 3 days ago:
        I understand why they don't talk about the CA functionality in OpenSSH
        (they want to sell their own product after all), but OpenSSH supports
        CA-based certificates out of the box, which makes it easy to generate
         auto-expiring SSH credentials. Then again, with a CA-based approach
         the root key becomes the golden key and needs to be protected, though
         that might still be easier than protecting individual keys.
        
        I recommend using a bastion host through which all SSH connections go
        and to use a DevOps tool like Ansible to automate certificate/key
         placement on every single host. In addition, if you have a trusted
         network that people log in to via VPN, you should also limit SSH
         connections to the bastion host to addresses from this network, which
        will make it harder again to use stolen credentials. Finally, only a
        handful of people should have such credentials, most maintenance should
        be performed in an automated way via DevOps tooling.
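         
         For reference, the OpenSSH-only flow is roughly this (a sketch; the
         names and validity period are placeholders):
         
         # on the (ideally offline) CA machine
         $ ssh-keygen -t ed25519 -f user_ca
         # sign a user's public key, valid for one day
         $ ssh-keygen -s user_ca -I alice -n alice -V +1d id_alice.pub
         
         # on every server, in sshd_config
         TrustedUserCAKeys /etc/ssh/user_ca.pub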
       
          SahAssar wrote 3 days ago:
           If you use the U2F support in OpenSSH, the risk of credentials being
           stolen without you knowing basically goes down to zero (as in, if
           you have the physical key, the credential isn't stolen).
       
        anticristi wrote 3 days ago:
        > An even more robust approach is to use some kind of hardware token
        that can sign short-lived ssh keys, and teach all your servers how to
        deal with those. That’s neat, but it’s hard to deploy (needs custom
        ssh settings).
        
         Ahem, no. I've been using Yubikeys for a few years now. They are
         literally braindead simple to use, and they work out of the box in
         recent Ubuntu. Here is an Ansible role to get started: [1] Stop
         making excuses and start protecting your SSH keys!
        
        Disclaimer: I'm not compensated in any way by Yubico, but their product
        is so darn good that I really want people to start using it.
        
   URI  [1]: https://github.com/cristiklein/stateless-workstation-config/bl...
       
          michaelt wrote 3 days ago:
          I looked at using this setup once - but the whole setup looked really
          precarious.
          
          Using U2F in the browser, you just buy the cheapest yubikey, plug it
          in and it works - any OS is fine.
          
          But to do the same with SSH you've got to buy a particular yubikey,
          install five different bits of software, adjust a bunch of config
          files, restart services, adjust your agent autostart files, upload
          your 'subkeys' (whatever those are)... and that's just to support one
          OS.
          
          It just seemed like the kind of second-class-citizen setup that
          barely anyone else is using or testing against, so it'd be constantly
          breaking down.
       
            darkr wrote 3 days ago:
             U2F support is available in recent versions of OpenSSH - though
             ideally you should configure a PIN code as well, which requires
             some configuration. Can't speak for how good this support is as
             I've not used it - but for the web it is, as you say, zero
             configuration, and just works, always.
            
             For GPG/SSH, there is a bit of an initial setup process to set up
             the card and generate keys (ideally generate them on-card, so you
             know they cannot exist elsewhere) - this can be scripted though,
            as we have done. As part of our deployment process we generate all
            needed passphrases and revocation certificates, storing in
            encrypted storage, as well as uploading the public key to a known
            URL, which is also referenced in the smartcard configuration.
            
            Once the card is setup - all you need on a machine is
            gnupg/gpg-agent and a ~/.gnupg/gpg-agent.conf file that looks like:
            
            no-grab
            pinentry-program /usr/bin/pinentry-curses
            default-cache-ttl 2400
            default-cache-ttl-ssh 14400
            enable-ssh-support
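             
             Plus, if your distro doesn't handle it already, pointing SSH at
             the agent socket somewhere in your shell profile (a common setup,
             though the exact paths can vary):
             
             export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
             gpg-connect-agent updatestartuptty /bye >/dev/null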
            
            Using the card on a new machine is as straightforward as fetching
            the public key to your local/default keychain (gpg --card-edit,
            then 'fetch').
            
            Switching between machines is then seamless - we have many
            engineers switching between macOS + Linux multiple times per day
            without issue.
       
          8fingerlouie wrote 3 days ago:
          I'm currently testing the "longevity" of them.
          
          I've had one on my keychain for a couple of years now, and so far it
          appears to hold up pretty darned well. It sits in my pocket
          unprotected with all my other keys, and despite its fair share of
          scratches it still works.
          
          The reason for the "testing" is that they appear kinda "flimsy"
          compared to my Nitrokey, but so far it has stood up to every beating
          i've given it.
          
          +1 for Yubikey.
       
            darkr wrote 3 days ago:
             Have used them for the past 6 or so years, and issued them as
             standard to any engineers needing sensitive system access. Of the
             30 or so keys that have been used and thoroughly abused over that
             time, I've only ever seen one broken - somehow an engineer
             managed to snap off
            the top half of it that attaches to a keyring. The hardware was
            still functional though.
       
              8fingerlouie wrote 3 days ago:
              I have not looked into it, but isn't there some way to "backup" a
              Yubikey ?
              
              There is probably a difference between company usage and personal
               usage. In a company setting I would expect to have backup
               Yubikeys, but for a personal setup a recovery situation would
              involve getting your keys "out of it" until a replacement
              arrives.
       
                darkr wrote 3 days ago:
                When you're generating keys you have two options:
                
                1. Generate keys off card and import them (you can then backup
                these keys)
                
                2. Generate keys on-card
                
                 I always chose the 2nd option; not being able to extract the
                 keys from the card is a strongly desired security feature.
       
                  jlgaddis wrote 2 days ago:
                  On the other hand, I chose the first option (several years
                  ago) -- and doing so saved me from having to generate new
                  keys (and rotate them on every host I used) when I got my
                  Yubikeys replaced the second time (due to a bug in the key
                  generation library used on the Yubikey).
                  
                  --
                  
                  I generated my new GPG key while booted into a "live CD"
                  environment on an air-gapped host.
                  
                  Because I believe in doing things right, I deliberately
                  selected a machine for this task that 1) didn't have Intel
                  AMT/ME, 2) didn't have any wireless network interfaces, 3)
                  had no storage devices installed, and 4) had PS/2 ports for
                   the mouse and keyboard (for "better" entropy)!
                  
                  I set up the new Yubikeys, generated my new GPG master key,
                  and generated a different ("authentication") subkey for each
                  Yubikey. The master (certification) key and signing and
                  encryption subkeys -- but not the authentication subkeys --
                  were exported and then backed up on a brand new USB flash
                  drive that I'd purchased at a retail store, just taken out of
                  the package, and created a small LUKS-encrypted filesystem on
                  -- using an outrageously long, randomly-generated passphrase,
                  of course.
                  
                  The USB flash drive is kept in a sealed envelope inside a
                  tamper-evident bag that's kept in the safe. The passphrase is
                  kept, well, somewhere else, obviously, as is the passphrase
                  for the GPG master key. Using the keys on the Yubikey doesn't
                  require them; only the PIN -- which is long but not as long
                  as the passphrases -- that exists only in my head -- and
                  that's easy enough; I typically do that a few dozen times a
                  day.
                  
                  Since going through that whole process, there have been two
                  times that I've retrieved the USB flash drive and
                  passphrases. Once was to sign a bunch of GPG keys (from a
                  key-signing party) and the other time was in order to rotate
                  my authentication (SSH) subkeys and "renew" (i.e., extend the
                  expiration date of) the others.
                  
                  Was it a huge pain in the ass? Absolutely! Was it worth it,
                  though? Sure. First and foremost, I don't worry about the
                  security of my keys at all and -- perhaps more importantly --
                  I don't have to keep an eye out for the next bug that's found
                  in the third-party (Infineon, IIRC) libraries that Yubico
                  chose to use.
       
        knorker wrote 3 days ago:
         Hardware keys. In my opinion they're no longer optional. Nor are they
         expensive or hard.
        
        E.g. just get a yubikey per employee (or more).
        
        Yes, they can be stolen (put PINs on them), but they can't be copied.
        
         I have just one software key, because I don't have a solution for SSH
         from my phone with a hardware key yet.
       
        juskrey wrote 3 days ago:
         Yeah, if I were an evil mastermind, I'd silently gather all private
         keys I come across, then check them against all the zillions of
         public keys available all over the internet.
       
        mdriley wrote 3 days ago:
        Definitely not appropriate for protecting Real Infrastructure, but for
        my handful of personal machines I put my authorized keys in a Google
        Doc and configure hosts to download it using `AuthorizedKeysCommand`.
        Makes it easy to add and revoke hosts in one place, which also makes
        rotation possible.
        
        I have a hardware-backed "doomsday key" to use if the Google Doc stops
        working.
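         
         On the host side that boils down to a couple of sshd_config lines
         (a sketch; the fetch script path is a placeholder):
         
         AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys %u
         AuthorizedKeysCommandUser nobody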
        
        Writeup and script at
        
   URI  [1]: https://github.com/mmdriley/authorized_keys
       
          c3141 wrote 3 days ago:
          Have you thought about using [1] instead of Google Docs?
          
   URI    [1]: https://github.com/mmdriley.keys
       
          sodality2 wrote 3 days ago:
          I read "I put my keys in a Google Doc" and my heart rate doubled
          until I read "authorized keys".
          
          But still, surely there's a better way than relying on google not
          controlling your "key infrastructure", even for personal use?
       
        avaika wrote 3 days ago:
        Also ssh supports "from" stanza, which allows you to limit networks
        from where you can login. Ideally it should set to either corporate
        network (and make people to vpn to your corporate perimeter before
        going ssh) or to your home ISP range (if you're not a company). It's
        not a replacement for key rotation, but significantly reduces
        probability of described case.
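         
         For the curious, it's a per-key prefix in authorized_keys, something
         like this (the addresses are just examples):
         
         from="198.51.100.0/24,203.0.113.7" ssh-ed25519 AAAA... user@laptop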
       
          jlgaddis wrote 2 days ago:
           > Also, ssh supports a "from" stanza, which allows you to limit the
           networks you can log in from.
          
          Additionally, sshd supports "Match", which can limit where any or all
          of your users can log in from.
          
          There's also 
          "AuthenticationMethods publickey", "PasswordAuthentication no", and
          "PermitRootLogin no", all of which one should also be using --
          ideally, on top of (both host- and network-based) access lists /
          firewall rules preventing access to 22/TCP from everywhere except the
          hosts and/or networks you've explicitly permitted.
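           
           In sshd_config terms, a minimal sketch of the above (the network
           range and user name are just examples):
           
           PasswordAuthentication no
           PermitRootLogin no
           AuthenticationMethods publickey
           # e.g. allow only one account from this particular network
           Match Address 203.0.113.0/24
               AllowUsers admin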
       
        pisipisipisi wrote 3 days ago:
         There is the SeKey app for Macs; use it. There are also Yubikeys,
         which can be used for SSH, and OpenSSH knows how to use FIDO keys, so
         there's even more to choose from.
       
        slaymaker1907 wrote 3 days ago:
         I like using KeePassXC to manage my SSH keys. It can cooperate with
         an SSH agent and avoids leaving SSH keys unencrypted on disk, if you
         care about that. Honestly though, I use it so I can keep my secrets
         together in an easy-to-sync location.
       
        xaduha wrote 3 days ago:
        
        
   URI  [1]: https://en.wikipedia.org/wiki/PKCS_11
       
        tptacek wrote 3 days ago:
        The cool-kids answer to this problem is --- in very Tailscale-fashion
        --- not to have static keys at all. Instead, you issue short-expiry
        certificates, ideally from a strong root of trust, like an IdP that
        does 2FA. There are other benefits; for instance, you don't have to
        directly provision keys to machines anymore.
       
          forty wrote 3 days ago:
           We do that with HashiCorp Vault and ADFS + Duo; it works pretty
           nicely.
           
           The question that can be asked then is: how often should I rotate
           the CA key? ;)
       
            idlewords wrote 3 days ago:
            You never rotate it directly. Instead, generate it from your SSH
            keys.
       
          jkire wrote 3 days ago:
           One thing that has always scared me a bit with using CAs for SSH is
           how you protect the signing certificates. After all, if an attacker
           gets that cert then they get full access to everything, and can
           masquerade as anyone. You end up with a choice between a) having
           lots of SSH keys out in the wild, each with varying degrees of
           access, or b) having a single cert that is on your infrastructure
           but has access to everything. (Not to mention how you deal with
           operating the signing site: what happens if it crashes? How do you
           log in without the site to sign your ssh key? Using standard
           trusted SSH keys for access feels like it's somewhat undermining
           the point of using CAs.)
          
          Has anyone solved this, or got a write up of some best practices for
          running this? All I've managed to find are articles about how to run
          such apps, rather than how it fits into the broader security
          architecture.
          
           Ideally, ideally, what I would actually like is the ability to
           configure OpenSSH to require multiple things to log in, i.e. both
           that the SSH key is trusted and that it has recently been signed by
          the signing service. That way gaining access to the signing
          certificate doesn't help without also gaining a trusted SSH key (it's
          still bad, but not quite Game Over levels of bad). I had a quick look
          to see if I could hack together a patch to do this, but alas I had
          forgotten how weak my C foo is :(
       
            lifeisstillgood wrote 1 day ago:
            Have you found any such write-ups on how this all fits together - I
            am also looking in vain.
       
            jlgaddis wrote 2 days ago:
            > ... what I would actually like is the ability to configure
            OpenSSH to require multiple things to log in, ...
            
            With OpenSSH, you can require multiple authentication methods to
            succeed before access is granted.
            
            For example, "publickey,password" to require password
            authentication after key-based authentication has succeeded. You
            could even do "publickey,publickey,publickey" to require three
            different keys to be used!
            
            This has been supported for several years, by the way. See
            "AuthenticationMethods" in the "sshd_config*" man page.
       
            _n_b_ wrote 3 days ago:
            > how you protect the signing certificates
            
            You get an HSM like this: [1] that stays air-gapped.
            
             Then you build procedures around it, like [2]. Not cheap or easy.
            
   URI      [1]: https://www.veritech.net/product-detail/keyper-hsm/
   URI      [2]: https://www.iana.org/dnssec
       
              foolmeonce wrote 3 days ago:
               If you have no compliance requirements, you can also just use
               any PKCS#11 token (with support for non-extractable keys) to
               secure the key, and set up an air-gapped process on a laptop
               with a boot CD, etc., to minimize the risk of compromising your
               process.
       
          xyzzy_plugh wrote 3 days ago:
          > ideally from a strong root of trust, like an IdP that does 2FA.
          
          I understand the concepts, but how does this work in practice? Do you
          have an example of generating a short-expiry certificate from an IdP,
          such as Google?
       
            ThePhysicist wrote 3 days ago:
            You can do it directly with OpenSSH, no need for third-party
            software. There are many good blog articles / tutorials on the
            subject, e.g. search for "ssh ca certificate". Most people don't
            know that you can do this but it's actually quite easy.
       
              vaylian wrote 3 days ago:
              I did a bit of reading on the topic. But it is still unclear to
               me what the workflow is. What would a typical day look like for
               an
              admin and one of the users?
       
            AlphaSite wrote 3 days ago:
            Vault can do something like this.
       
        kayson wrote 3 days ago:
         The article mentions copying a Let's Encrypt-style architecture where
         keys are tied to user+device and can be rotated frequently, and some
        service blasts the public keys around as necessary. Are there any good,
        existing open source implementations of such a setup?
       
          jlgaddis wrote 2 days ago:
          Assuming one existed, how would it handle the passphrases used to
          protect your ("the user's") new keys and how would it securely
          communicate the new key's passphrase to you ("the user")?
       
        rzimmerman wrote 3 days ago:
        Similar to the recommendations to use a YubiKey/hardware token, SeKey
        on a Mac lets you use a key generated in the Secure Enclave in an
        unexportable form ( [1] )
        
   URI  [1]: https://github.com/sekey/sekey
       
          8fingerlouie wrote 3 days ago:
           So if your Mac dies, you effectively lock yourself out of any
           servers? Or do you have a backup key? If so, how do you protect
           that?
          
          Not criticizing, just genuinely interested in how to best manage
          keys.
       
            lucideer wrote 3 days ago:
            > do you have a backup key ?
            
            You should always have keys per device (as has been discussed in
            other comments here). So if you have >1 device, you'll
            automatically have backups.
            
            > If so, how do you protect that ?
            
            While it may seem bad to have some keys less securely protected
            than others, the ability to revoke a single device means that using
            Yubikey / Secure Enclave / whatever on one device is still better
            than using them on none.
       
            GekkePrutser wrote 3 days ago:
            You can always add more authorized keys.. That's what I do and
            definitely the way to go for this scenario.
       
              archi42 wrote 3 days ago:
              With ssh the situation is quite good, and it should serve as an
              example on "how to do it right".
              
               I find web services to be a huge pain, though: obviously most
               don't offer any kind of 2FA, or maybe Google Authenticator or
               SMS at best (which means those websites must be so bad that
               people don't log in to them on their phone?). But even those
               who do "proper" 2FA often will only allow a single U2F token -
               and enforce GA, SMS, or a secondary email as fallback.
              
              (Putting this rant here so maybe a webdev or even two do it
              better the next time they do some auth stuff ;-))
       
          pram wrote 3 days ago:
          Secretive also does this, and works on any Mac with the T2. I use it
          for all my ssh keys these days. It’s super slick!
          
   URI    [1]: https://github.com/maxgoedjen/secretive
       
          tn1 wrote 3 days ago:
          Does anyone know of the TPM equivalent for this? I found this [1] but
          when I tried it, the Windows Hello prompt only accepts USB security
          keys and not the fingerprint sensor in my laptop (already set up for
          login).  
          My knowledge of WebAuthn is limited but their invocation of the
          relevant API seems like it should work for fingerprints also.
          
   URI    [1]: https://github.com/tavrez/openssh-sk-winhello
       
        rzimmerman wrote 3 days ago:
        > or Bluetooth (which works maybe 97% of the time
        
        Best description of Bluetooth I've ever read.
       
          dylan604 wrote 3 days ago:
          seems too high in my experience with certain devices.
       
        staticassertion wrote 3 days ago:
        >  But the problem with static ssh keys is that if they are stolen,
        it’s undetectable.
        
        It's like, super detectable. You have endpoint logs for the file
        access, you have network logs, you have sshd logs (which contain the
        public key and the IP), etc.
        
        > if you have your ssh private key on several machines, you have to
        remember to copy it to all those places
        
        Your ssh key should never leave a host. That should be a policy and you
        should write rules to detect when that policy is being violated (check
        for processes accessing the file).
        
        If you need access from N computers you should be generating N keys.
        
        The reason rotation isn't recommended is because it leads to bad
        practices (people just add a '1' to their password), it's a hassle, and
        it can never be fast enough to meaningfully impact an attacker - once
        they have SSH keys it's likely they can gain persistence and C2 before
        your rotation takes place. Not because people reuse their passwords in
        multiple places.
        
        Setting up a CA for SSH is definitely a really good practice but I
        think that most companies would find it far simpler to just enforce 2FA
        for SSH access. Still, I'd really like to see an article about how you
        set that up, especially if it targets smaller enterprise customers.
        
        For others who might be interested, here's bless from Netflix: [1]
        edit: Oh, and just to be clear, 2FA for your SSH is not a silver bullet
        - even a yubikey. But it's a cheap, scalable, near-zero overhead way to
        protect against an attacker who's got access to your key (but not one
        who has access to your system, assuming an active session!).
        
   URI  [1]: https://github.com/Netflix/bless
       
          dotancohen wrote 3 days ago:
          > check for processes accessing the file
          
          How? I could check at any point in time if the file is being accessed
          now, but how could I ensure that it hasn't been accessed in the past?
          The ext4 access time can be forged by the same process that accesses
          the file. Should I have a daemon running to check this? If so, then
          why isn't this already a feature in common Linux distros?
       
            jlgaddis wrote 3 days ago:
            > If so, then why isn't this already a feature in common Linux
            distros?
            
            It is. It's called auditd, is quite possibly already installed
            (albeit probably not configured to do much), and can easily ship
            its logs off to another host (natively or via syslogd).
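             
             A minimal rule for the case above might look like this (the key
             path is a placeholder):
             
             # record every read of this private key
             auditctl -w /home/alice/.ssh/id_ed25519 -p r -k ssh-key-read
             # later, review the matching events
             ausearch -k ssh-key-read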
       
              dotancohen wrote 2 days ago:
              Thank you. I'm checking out auditd now.
       
          thegeekbin wrote 3 days ago:
          > Your ssh key should never leave a host.
          
           If you need an ssh key for whatever reason on a host (for example,
           git pulls on a staging machine), you should generate one on that
           box and narrow its scope on the machine that will receive it (e.g.
           GitLab Deployment Keys -- locked in read-only mode, single purpose).
          
          > The reason rotation isn't recommended is because it leads to bad
          practices (people just add a '1' to their password)
          
          To some degree. I personally rotate my keys whenever I change my
          personal/work device (perm. change), or, around every year or so.
          It's not a hard requirement, but just a personal preference.
          
          > it's a hassle
          
           ssh-keygen -b 4096 isn't a hassle... and if you do deployment
           properly it shouldn't be an issue to sync these keys (e.g. an AD
           system holding public keys or cloud directories like JumpCloud can
           sync the moment a key is updated -- even Salt/Ansible/Chef can do
           it easily, with modules pre-written to sync keys already).
          
           In any event, MFA is always a good idea. But my biggest concern is
           that someone would leave ssh open to the public... the time it takes
           to set up an ACL or VPN to connect to the machine is hardly anything
           these days with the amount of automated tooling to do it... so why
           aren't people doing it?
       
            staticassertion wrote 3 days ago:
            > ssh-keygen -b 4096 isn't a hassle...
            
            I was referring to password rotation specifically with those
            points, not SSH key rotation, because the quote in my post was also
            in the context of password rotation.
       
          Uptrenda wrote 3 days ago:
          Posts like this just make me realize how impossible it is to follow
          every security best practice and stay one step ahead of the latest
           techniques. You can maybe try to tick the most common boxes, but
          expecting users to be able to police their machines, networks, and
          other attack surfaces (and still find time to get work done) is
          unrealistic, IMO. I'm starting to think the common factor here
          between secure systems and insecure ones is lucking out on having no
          interested / skilled attackers...
       
            mscarborough wrote 3 days ago:
            It's definitely a balancing act, calculating the ROI for improving
            security vs implementing more immediate customer-focused work.
            
             This common problem is what led to the rise of MSSPs.
            
   URI      [1]: https://en.wikipedia.org/wiki/Managed_security_service
       
          dspillett wrote 3 days ago:
          > Your ssh key should never leave a host.
          
          A lot of people think of the SSH keys like PGP keys (where the one
          private key is your identity) which is not how they are intended to
          be used (by my understanding which seems to agree with yours).
          authorised_keys can contain many so you should never need to
          duplicate a key because you need to access a given account from
          multiple hosts.
          
          > That should be a policy and you should write rules to detect when
          that policy is being violated (check for processes accessing the
          file).
          
          For static locations, one option I like for this is whitelisting the
          source address for each key ( [1] ). You can then monitor abuse of
          the policy by looking for keys with no source limit, though this
          isn't something I've ever done, and it means that a stolen private
          key is more difficult to use from another location.
          
          Of course this doesn't work for connecting directly from client hosts
          that move around (i.e. a user connecting from a laptop that could
          potentially connect from any address unless you enforce VPN access
          for sensitive resources).
          
   URI    [1]: https://unix.stackexchange.com/questions/353044/
       
          mschuster91 wrote 3 days ago:
          > You have endpoint logs for the file access
          
           No endpoint security system I'm aware of that's used outside of
           core banking/telco/government systems logs all file accesses. It
           would crash instantly for one single build of your average NodeJS
           application.
       
            staticassertion wrote 3 days ago:
            File monitoring is particularly slow, though not for any
            fundamental reason. But you can still monitor lots of files
            efficiently by just limiting which files you look for, which lots
            of companies do.
       
            goguy wrote 3 days ago:
             I've worked at places that use the below, or a flavour of it, to
             monitor specific files for modification etc.
            
   URI      [1]: https://www.newnettechnologies.com/nnt-file-integrity-moni...
       
          chmod775 wrote 3 days ago:
          No. You can't assume your logs are trustworthy if you are
          compromised. You have to assume an attacker will remove any traces of
          themselves from those logs or will stop them from being written in
          the first place.
          
          It's pretty standard practice to do this post-exploitation.
          
          Further this requires you to regularly check your log files for
          suspicious activity, which is way more work than just rotating your
          ssh keys - which can be easily automated. Running a script that will
          automatically rotate your ssh keys on all servers in your
          .ssh/known_hosts is trivial.
          
          Also, rotating your ssh keys is something that has a chance to
          prevent intrusion, whereas if you see something in your logs it's
          already too late.
       
            kortilla wrote 3 days ago:
            > You have to assume an attacker will remove any traces of
            themselves from those logs or will stop them from being written in
            the first place.
            
            You’re assuming the attacker has write access to the log storage
            of the system they ssh into. This is not the norm for production
            systems. If your auth processes aren’t shipping off logs
            immediately, your system is broken regardless of ssh.
       
              chmod775 wrote 3 days ago:
              Ship them off to where? A system that also can be accessed with
              your ssh key?
              
              If not, can an attacker gain access to your email server or DNS
              with your ssh key? If either is true, they now have access to
              everything not protected by 2FA that uses an email address they
              now control.
              
              There's so many things to get right.
              
               You can design a system such that there's a very high likelihood
              even a nation-state attacker won't be able to intrude without
              leaving traces - if you make no mistakes.
              
              Or you know, you could also just rotate your ssh keys in addition
              to everything else. "I have logs" is really no excuse to forego
              something that is this easy.
       
                kortilla wrote 3 days ago:
                Holy shit you should not have access to your logging
                systems/DNS servers with the same exact credentials as your
                main app servers. Do you let devs onto the HR system with the
                same ssh key as well?
       
                nitrogen wrote 3 days ago:
                 > Ship them off to where?
                
                The usual networked logging systems that don't have ssh logins,
                e.g. syslog into ELK stack.
       
                  vel0city wrote 3 days ago:
                  curl -X "DELETE" http://log-server:9200/logstash-*
                  
                  Probably works for a large percentage of deployed ELK logging
                  stacks.
       
                    ddoeth wrote 2 days ago:
                    Are people not using elastic authentication?
       
                      CrazyPyroLinux wrote 21 hours 4 min ago:
                      I always just put it behind an authenticating proxy.
       
                      vel0city wrote 1 day ago:
                      Elasticsearch existed for years without any
                      authentication on its community tier version, with xpack
                      behind a call with a sales rep and a heavy pocketbook.
                      These days, authentication is provided after you enable
                      the free xpack plugin, but it is not enabled by default
                      and their install guide doesn't exactly point it out to
                      you right away. It starts delving into JVM tuning options
                      before it even references there's this xpack thing you
                      may want to look into.
       
                  neolog wrote 3 days ago:
                  How do you set up elastic on a server that doesn't run sshd?
       
                    viraptor wrote 3 days ago:
                    Console. Cloud login. Different VPN connection. Different
                    credential set then other hosts. Using a hosted service
                    from a completely separate provider. Setting up
                    infrastructure from images without external access.
                    
                    And I'm sure I missed some - there's a thousand ways to do
                    it.
       
                    tonoto wrote 3 days ago:
                     For real?
                     The console (in various ways/flavours) is one way.
       
                pyrale wrote 3 days ago:
                > Ship them off to where? A system that also can be accessed
                with your ssh key?
                
                If the ssh key you lost has access to all the critical
                infrastructure, then, certainly you have a problem. The
                solution is to not give away write-access keys to your entire
                system.
                
                > If either is true, they now have access to everything not
                protected by 2FA that uses an email address they now control.
                
                The initial question was not whether losing a key would cause a
                breach, but whether a detection mechanism is reliable.
                
                > Or you know, you could also just rotate your ssh keys in
                addition to everything else.
                
                The question is how does it help? If you can't detect a breach,
                it will live for as long as your key rotation policy, if not
                longer. If you can detect and mitigate a breach, it will be
                closed quickly regardless of your rotation policy.
       
                  tonoto wrote 3 days ago:
                   I've not seen an enterprise environment where "ordinary"
                   sysadmins also have access to the log hosts. That must be
                   an extremely rare setup in a healthy environment.
                   Likewise, it's seldom that "logging staff" actually log in
                   to a shell. Logs naturally aggregate to some kind of
                   service with visualization, alerting and filtering
                   capabilities.
                   
                   In those special cases where an "extraordinary" sysadmin
                   gets onto a log host, it is not through the ordinary access
                   paths, such as SSH from where the other sysadmins play
                   around.
       
            viraptor wrote 3 days ago:
            > You can't assume your logs are trustworthy if you are
            compromised.
            
            It's not that simple. In a larger/mature environment you'll have
            log aggregation where the initial login is close to certain to be
            forwarded before it can be messed with. (Unless someone can log in,
            escalate, kill the right daemon, and somehow prevent monitoring
            from noticing a missing endpoint - all before the log gets
            forwarded)
            
            And that's just host logs, without the networking, potential
            forwarding, etc.
       
            staticassertion wrote 3 days ago:
            You can assume a lot of logs are not compromised. For example, logs
            from your network infrastructure. In this case you could trust sshd
            logs (so long as you ship them off quickly) and network logs.
            
            As for endpoint logs, yes, a privileged attacker could disrupt
            them. But honestly, even with regards to your endpoint, attackers
            often don't disable logging - though I do see it, for sure. For
            "blessed" logging like Windows Event Log you'll have an even harder
            time - they do take measures to protect the files on disk, even
            against privileged attackers, and supported methods of deleting the
            event log actually themselves generate a "Someone deleted the event
            log" event, which I would highly recommend you watch out for :)
            
            The simplest advice for dealing with this is to ship logs off of
            the device ASAP and to make sure that disrupting the service
            requires privileges.
       
              chmod775 wrote 3 days ago:
              > For "blessed" logging like Windows Event Log you'll have an
              even harder time [...] even against privileged attackers
              
              Trying to prevent a determined privileged attacker from doing
              something is an exercise in futility, since it's impossible in
              every sense of the word. If they have total control over a
              system, they can do whatever they want, even if you put up a
              bunch of stopgaps.
              
              Most post-exploitation frameworks (prominent example:
              DanderSpritz) have modules to remove stuff from the windows event
              logs without leaving traces.
              
              It's pretty basic stuff.
       
                staticassertion wrote 3 days ago:
                > Trying to prevent a determined privileged attacker from doing
                something is an exercise in futility,
                
                That's not really true at all.
                
                > since it's impossible in every sense of the word
                
                Ok? Lots of things are impossible, and lots of those things are
                also still very very hard and costly. Hash collisions aren't
                impossible, and yet here we are, with a world hinging on them
                being very hard.
                
                > , they can do whatever they want
                
                Not really.
                
                > Most post-exploitation frameworks (prominent example:
                DanderSpritz) have modules to remove stuff from the windows
                event logs without leaving traces.
                
                You realize that:
                
                a) DanderSpritz's logic to bypass the event log was a huge deal
                
                b) It's literally an NSA leaked exploit????? Like are you
                kidding me using an NSA developed exploit as "pretty basic
                stuff" lol
                
                Sorry but I think I'm gonna stop responding here.
       
                  foolmeonce wrote 2 days ago:
                  > b) It's literally an NSA leaked exploit????? Like are you
                  kidding me using an NSA developed exploit as "pretty basic
                  stuff" lol
                  
                  It must have been very expensive and innovative, does that
                  make it hard to copy into your scriptz folder?
       
                    staticassertion wrote 2 days ago:
                    I don't understand the point you're trying to make. That
                    because one exploit exists, and using that exploit is
                    cheap, the entire class of attacks is cheap?
                    
                    Sorry, but this entire thread is nonsense, and it's just a
                    clear demonstration of a lack of threat modeling and
                    frankly a lack of understanding of what attacker
                    capabilities are.
       
                      foolmeonce wrote 2 days ago:
                      The basic principle of real security models is once an
                      attacker has full access all data on the system is
                      potentially tampered with.
                      
                      You can try to play games with ensuring logs leave the
                      system, but everyone takes shortcuts to make sure they
                      can recover the system when networking is down etc.
                      
                      Everyone and their grandma has access to complex scripts,
                      etc that were once very expensive. Whether they invest
                      the energy in learning methods to hide their presence or
                      just go straight to some other goal is going to depend on
                      how they intend to abuse your systems.
                      
                       I think most "security professionals" pretend they are
                       going to catch an oddity that their automated tools
                       would miss and that occurs in the middle of some other
                       crisis or holiday break. I would say good luck with
                       that.
       
                        staticassertion wrote 2 days ago:
                        > The basic principle of real security models is once
                        an attacker has full access all data on the system is
                        potentially tampered with.
                        
                        That's not true at all. Like, not at all.
       
                natmaka wrote 3 days ago:
                A log immediately stored on a "physically" write-once device,
                or printed, may be more difficult to dispose of (w/o physical
                access to the premises).
       
          sneak wrote 3 days ago:
          Forget 2FA - yubikeys can be actual full key generation/storage for a
          GPG key in smartcard mode, and then via gpg-agent those can be used
          for ssh. (gpg-agent replaces ssh-agent.)
          
          My ssh keys never leave the yubikey.
          
          I have a different dedicated yubikey in each computer, with its own
          unique key, and a stolen key is useless without its unlock PIN.
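           
           A rough sketch of that wiring, for anyone who wants to try it
           (paths are the GnuPG defaults):
           
              # ~/.gnupg/gpg-agent.conf
              enable-ssh-support
           
              # in your shell rc: point SSH at gpg-agent's socket
              export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
              gpgconf --launch gpg-agent
           
              # the card's authentication key should now be listed
              ssh-add -L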
       
            ozim wrote 2 days ago:
            Dedicated yubikey in each computer seems like your threat model is
            quite complex.
            
            Do you leave computers in weird places?
            Are you getting robbed once a year?
            Are you a high stakes poker player or CEO of a shady company?
       
              sneak wrote 2 days ago:
              No I just don't like having to stop what I'm doing, get up, go
              into another room or my backpack, and fetch a yubikey,
              interrupting my focus for 30-90 seconds.  I have enough trouble
              getting and staying focused without additional interruptions.
              
              I have like 4 laptops and two main desktops and if I didn't have
              4 yubikeys in them then this would be a multiple-times-daily
              occurrence. Yubikeys aren't that expensive, and I mostly use the
              usb-c "nano" ones which are designed to live 24/7 in a computer's
              port, only sticking out about 2-3mm.  Sometimes I have to move
              them around to other temporary machines but for the most part
              having approximately the same number of keys and computer
              workstations means that this is pretty infrequent.
              
               I even have two Davinci Resolve Studio activation dongles for
               this same reason, even though I can't physically edit video on
               two different computers at once; one would do if I were willing
               to keep track of where it is and shuffle it around between my
               various machines as needed.
              
              It's pure speed/convenience, not a response to some data threat.
       
            justincormack wrote 3 days ago:
             Also, the Apple Secure Enclave can store SSH keys, e.g. using
             sekey, which requires your fingerprint to use.
       
            graindcafe wrote 3 days ago:
            Yubikey with a PIN is 2FA, isn't it?
       
              sneak wrote 3 days ago:
              Technically yes but when people say "yubikey 2fa" they 99.99% of
              the time mean U2F, which is Not This.
       
                toastal wrote 3 days ago:
                From Wikipedia: Universal 2nd Factor (U2F) is an open standard
                that strengthens and simplifies two-factor authentication (2FA)
                using specialized Universal Serial Bus (USB) or near-field
                communication (NFC) devices based on similar security
                technology found in smart cards.
                
                Using Yubikey to mean U2F is like people saying "Google this
                term", "the image is Photoshopped", "Hoover the floor", "grab
                me a Kleenex", or even "take the escalator". It possible
                "Yubikey" could become a generic trademark, but if possible
                people should be wary of using brand names in this way before
                it 'sticks'.
       
                  sneak wrote 3 days ago:
                  Neither Photoshop, Kleenex, nor Google have lost trademark
                  protection.
                  
                  Additionally, the term Yubikey isn't likely to become
                  synonymous with 2FA in any case. Most people don't know that
                  yubikeys work in several different, independent modes, such
                  as FIDO/U2F 2FA, or CCID smartcard, or Yubico OTP (those long
                  annoying strings your yubikey types when it brushes your
                  thigh or hand).
                  
                  The CCID smartcard mode requires a pin, which is technically
                  two factor authentication (knowledge of PIN and possession of
                  yubikey), which is an entirely different thing than FIDO/U2F
                  2FA (which is what most people mean when they talk about
                  using a yubikey for 2FA, not that "yubikey" and "2FA" are
                  interchangeable terms).
                  
                  This is further complicated by the fact that CCID smartcard
                  mode can be used for ssh (via gpg-agent, with ssh keys inside
                  the yubikey itself), AND, separately, OpenSSH (with other
                  keys) can use a yubikey for U2F.
       
            devwastaken wrote 3 days ago:
            My concern with physical keys is what happens if it stops working?
       
              Ciantic wrote 3 days ago:
               If you use the GPG and YubiKey approach, you can create the keys
               on an offline computer, store them on the YubiKey, and make a
               paper copy of the private key. Also, you probably shouldn't have
               only a single way to access the remote computer; I still intend
               to keep a root password that I never use.
              
              I wrote about my endeavour with this approach just few days ago
              [1]:
              
   URI        [1]: https://github.com/Ciantic/thoughts/blob/master/2021/yub...
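               
               A compressed sketch of that flow - the key ID is a placeholder,
               and the printable backup assumes the paperkey tool:
               
                  # on the offline machine: make the paper backup first
                  gpg --export-secret-keys KEYID | paperkey --output key-backup.txt
                  
                  # then move the (sub)keys onto the YubiKey
                  gpg --edit-key KEYID    # run 'keytocard' at the gpg> prompt
               
               keytocard moves the key onto the card, so do the paper backup
               before it.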
       
                sneak wrote 3 days ago:
                You should totally disable password authentication for SSH.
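                 
                 A minimal sketch of the sshd_config change (option names vary
                 slightly across OpenSSH versions), followed by a reload of
                 sshd:
                 
                    # /etc/ssh/sshd_config
                    PasswordAuthentication no
                    KbdInteractiveAuthentication no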
       
                  Ciantic wrote 3 days ago:
                  You are right, but you can still use password for recovery
                  purposes even if it's not used for SSH authentication.
                  
                   Most VPS providers offer "web console" access. It connects
                   like a terminal, e.g. DigitalOcean's web console, which
                   doesn't require SSH access [1]:
                  
   URI            [1]: https://www.digitalocean.com/docs/droplets/resources...
       
              GoblinSlayer wrote 3 days ago:
              Have a higher security fallback.
       
              devoutsalsa wrote 3 days ago:
              I'm more worried about simply losing it.  I travel a lot.
       
                GekkePrutser wrote 3 days ago:
                Just use multiple. That's what I do.
                
                As backup you can also use OpenPGP cards which cost much less
                than a yubikey. Or a cheaper Fido2 token if you use Fido2 for
                SSH access (I don't yet but it's coming into vogue). An OpenPGP
                 card will cost about a tenner; you'll need a card reader to
                 use it, but for backup purposes it's perfect.
       
              darkr wrote 3 days ago:
               Then that key can no longer be used. The way to keep that from
               becoming a permanent lockout is to have more than n+1 redundancy
               in hardware, and ideally in meatware too.
       
              danielheath wrote 3 days ago:
              You need two; one stowed safely and only retrieved to enroll a
              new service, and one to use day-to-day.
       
                sneak wrote 3 days ago:
                enroll a new service? we are talking about gpg smartcard usage,
                for ssh - not u2f 2fa.
                
                you don't need the physical key to "enroll", you just keep a
                copy of its pubkey.
       
                  tmottabr wrote 3 days ago:
                   The idea is to generate the key on the device, so it never
                   leaves the device, and therefore you cannot store it
                   somewhere else like another device.
                   
                   So the idea is for you to have two devices, each with its
                   own key: the first device you use daily, and the second you
                   store in a safe location.
                   
                   If your first, daily-use device stops working or is lost,
                   you use the stored second device to log in to your systems,
                   remove the keys of the lost device, add the keys for a new
                   replacement device, and then put the second device back in
                   its safe location.
       
              sneak wrote 3 days ago:
              I have one key permanently in a usb port of each computer, and
              one on my keychain as a backup (which is also used for U2F and
              has lightning so works with my phone for 2FA), so five in total,
              each with their own RSA key for ssh.
       
          amenghra wrote 3 days ago:
          > Still, I'd really like to see an article about how you set that up,
          especially if it targets smaller enterprise customers.
          
   URI    [1]: https://github.com/square/sharkey
       
            staticassertion wrote 3 days ago:
            Oh I mean I know how to do it, I just mean I'd like to see
            companies talking about how they do it, and I'd really really like
            to see someone make it drop-dead simple.
            
            It's valuable to hear how companies are protecting their
            infrastructure if they're going to break down how they accomplished
            it, how they continue to maintain it, etc.
       
              lifeisstillgood wrote 3 days ago:
              If I don't know how to do it, where do I start to understand ?
              
              Edit: I suppose I am asking how does it "all" fit together.  The
              CA that grants servers or services a short lived key - the other
              servers that then can trust that.  It makes kind of sense but I
              think I am missing some parts as when I try and read how others
              do it, some parts seem to be missing.
              
              Too many blog posts seem to be something something kubernetes
              will arrange it. Or Oauth or ...
              
               For example, this article seems to push back against the idea
               of a central key management service like Vault and instead have
               the device decide to rotate keys. But I am not 100% sure because
               it's one sentence. And how do they provide authentication for a
               service account (say a web worker that processes some incoming
               requests)? That's not device+person. The same idea can happily
               apply, but do they do that?
              
              I think I am just moaning.
       
              bassdrop wrote 3 days ago:
              We use symops[0] to connect to our servers. Not sure about the
              implementation details, but the gist is that we connect through
              our aws credentials (via SSM), which we retrieve using Okta,
              which is then protected using 2fa. No ssh key to be found, and
              you can disable all ssh ingress rules. Since this is via SSM, you
              can also use the full power of AWS IAM to allow/deny access.
              
               Also, Sym can be set up to require approvals, which is great for
               security auditing since it's a third party.
              
              [0]
              
   URI        [1]: https://symops.com/
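               
               For reference, the underlying SSM piece is just a session started
               through the AWS CLI - the instance ID below is a placeholder, and
               the Session Manager plugin must be installed:
               
                  aws ssm start-session --target i-0123456789abcdef0
               
               IAM policies on ssm:StartSession then control who can reach
               which instances.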
       
                staticassertion wrote 3 days ago:
                Ah nice. Yeah, SSM is awesome, we use it as well.
       
              shandor wrote 3 days ago:
              OpenSSH 8.4 added support for FIDO2. It's literally the same
              commands (with some different parameters) to take into use as a
              normal keypair. And FIDO2 tokens are becoming quite affordable.
              To me 2FA with SSH became a solved problem with that.
       
                ldng wrote 3 days ago:
                Could you point to a relevant article explaining that setup
                please?
       
                  shandor wrote 3 days ago:
                  This one had quite a lot of background, but also the needed
                  commands: [1] With a properly set-up token, basically you
                  only need 2 commands:
                  
                     ssh-keygen -t ed25519-sk -O resident -f ~/.ssh/id_mykey_sk
                     ssh-add -K
                  
                   I used a Yubikey and also needed to install 'ykman' to set a
                   PIN for my token, otherwise ssh-add kept failing. Dunno if I
                  omitted something for a proper setup for my token initially,
                  but I don't think that was a problem with OpenSSH in
                  particular.
                  
                  Apart from the small headache with the PIN, the whole thing
                  was almost magical in its simplicity.
                  
   URI            [1]: https://www.stavros.io/posts/u2f-fido2-with-ssh/
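                   
                   After that, the new public key gets enrolled on the server
                   like any other key (user@host is a placeholder):
                   
                      ssh-copy-id -i ~/.ssh/id_mykey_sk.pub user@host
                   
                   and subsequent logins require the token to be plugged in
                   and touched.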
       
                    loulouxiv wrote 2 days ago:
                     The token must be PIN-protected to be eligible for
                     resident credentials. I think it is one of the significant
                     differences between FIDO1 and FIDO2. I also think that you
                     can use ed25519-sk with a non-PIN-protected token, but then
                     you won't be able to authenticate if you can't access the
                     generated key file (if it was deleted or you're on another
                     machine).
       
                      shandor wrote 2 days ago:
                      Thanks for the info. Makes complete sense to require a
                      PIN with resident keys, I'm glad. Now I'll need to write
                      the article myself, with this information included :)
       
        pmorici wrote 3 days ago:
        Hardware tokens like a Yubi key prevent this.
       
          surround wrote 3 days ago:
           A rogue employee could steal a physical key left lying out, much
           like they can steal keys from computers.
       
            sneak wrote 3 days ago:
            Yubikeys used to store gpg keys for use with gpg or ssh require a
            pin code to do any signing/decryption.
            
            You only get some limited number of pin attempts before it locks
            you out.
            
            A stolen key is useless for gpg/ssh.
       
            hiq wrote 3 days ago:
            The point is that you would realize much sooner, since it's real
            theft, not copying.
       
              lanstin wrote 3 days ago:
              Not necessarily in a company. Lots of inventory can be
              unaccounted for.
       
                dastx wrote 3 days ago:
                 Sure, inventory is unaccounted for, but you as an engineer
                 know it's not accounted for. So you go to IT, and you ask for
                 a new one. IT revokes the original key's access and gives you
                 a new one.
                
                No one in that transaction cares if it was lost or stolen.
       
                MrManatee wrote 3 days ago:
                But it’s not enough to steal a blank YubiKey from the office
                storage room. You would have to steal some specific person’s
                activated YubiKey, and that person will notice it if they need
                the key regularly to do their job.
       
        genericstorage3 wrote 3 days ago:
         In addition, anything in node_modules can read your .ssh folder (on
         Windows).
       
          jlgaddis wrote 2 days ago:
          That's why we all protect (encrypt) any private keys on disk with a
          strong passphrase, right?
       
          jtsiskin wrote 3 days ago:
          Woah, this just gave me an uneasy feeling...
       
          usr1106 wrote 3 days ago:
          All code you install to your machine. System packages, Python
          packages in virtualenvs, scripts.
          
           I don't keep important private keys in my .ssh folder. Well, it's
           just security by obscurity. An educated, determined attacker would
           find them. But some random malicious code would not immediately
           find them.
          
          I run the Web browser in firejail (Linux).
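           
           For anyone curious, the basic firejail invocation is just a wrapper
           around the browser; --private gives it a throwaway home directory,
           so it never even sees the real ~/.ssh:
           
              firejail --private firefox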
       
            henvic wrote 3 days ago:
             The main issue here is that in the JavaScript ecosystem there is
             this trend of using external dependencies for everything,
             regardless of size (e.g., leftpad), and trusting future versions
             blindly. Security in the npm / JS / node_modules ecosystem is
             quite reactive, instead of proactive.
             
             If a bad actor releases a malicious version of a widely used
             dependency, most of the time it's for sure going to be gone from
             npm quite fast! However, it'll take some time for it to get
             noticed, and people will invariably get affected.
            
            You shouldn't bring an open honeypot to a place where bears can
            attack you easily, right?
       
              marcosdumay wrote 3 days ago:
              If you remove the "regardless of size" part, you'll be describing
              any modern language dependency system.
              
               And most of them also execute external code on module import...
               which I'm not sure is even relevant, because you will run the
               module at some point anyway.
              
              So, yeah, JS makes the problem one or two orders of magnitude
              larger. But the problem is still there, whether you use npm or
              avoid it.
       
        alex_young wrote 3 days ago:
        Isn’t the real problem here having too many cooks in the kitchen?
        
        If you have a small number of people who have access to your production
        environment, and they practice their trade like they are actually
        trusted with  said production access, that provides a very small attack
        surface which can be analyzed and hardened.
       
          makotoNagano wrote 3 days ago:
          But then you're forcing an organisation style/hierarchy because of a
          tech problem. If I want all my developers to be able to access
          production directly for quick development, then that should be
          possible
       
            alex_young wrote 3 days ago:
            What about setting up a staging environment mirroring production?
            
            Most security experts recommend restricting prod access away from
            your dev team because doing so alleviates risks from a compliance
            perspective, and prevents bugs and regressions from being
            introduced inadvertently.
            
            I’m not providing links here because I do think it’s worth
            googling and discovering more of the nuanced points many others
            have made.  Sure, you’ll find some shops that use another model,
            but for most use cases separate environments exist for a reason.
       
              Agingcoder wrote 2 days ago:
               In my experience (as a developer) this doesn't work in practice,
               since mirroring a complete set of actual production systems in a
               large company is a difficult task unto itself! More often than
               not, you end up with a staging environment built to the security
               guys' recommendations, but which is unfortunately barely usable.
               
               It also makes investigating tricky bugs extremely difficult
               (staging tends to be slightly different from prod, smaller as
               well, with different hardware, network, etc.) since you can't
               reproduce them, and your prod team can't help you much, since
               what you need is actual full-box access to poke around.
              
              I agree with you on the compliance point.
       
        LogicX wrote 3 days ago:
        I don’t get it. Use ssh keys with a passphrase? Then stolen keys are
        useless?
       
          beermonster wrote 3 days ago:
           You'd want to use a strong passphrase. Even if you do, it's only
           protecting the key file at rest. If an attacker has a copy of that
           file, they can perform an offline attack - i.e. attacking the
           passphrase offline with decent hardware and no likelihood of being
           noticed - so it had better be strong!
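           
           One mitigation sketch: re-encrypt the key with the newer OpenSSH
           format and a higher KDF round count, which makes offline guessing
           considerably slower (the path is whatever your key file is):
           
              ssh-keygen -p -o -a 100 -f ~/.ssh/id_ed25519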
       
          staticassertion wrote 3 days ago:
          A passphrase on your key is a great idea, but as soon as you unlock
          it once it's cached in memory. Since processes are not isolated
          within a user they're allowed to scrape the memory of other
          processes. If you check your running processes you're going to see
          that ssh-agent is running as your current user.
          
          But a far more likely scenario is that the attacker will simply
          leverage existing sessions/ steal a socket, which, notably, will
          bypass any sort of 2FA on SSH connections.
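           
           A partial mitigation, for what it's worth, is to add keys with a
           confirmation prompt and a lifetime, so a hijacked agent socket
           can't be used silently or indefinitely (the -c prompt needs an
           askpass helper installed):
           
              ssh-add -c -t 1h ~/.ssh/id_ed25519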
       
            dotancohen wrote 2 days ago:
             Or store your keys in a different user. I personally have the
             users dotancohen and dotancohens (trailing s for Secure) on my
             laptop. I simply su into dotancohens and then from there SSH into
             various servers. The /home/dotancohens/ directory has 0700
             permissions.
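             
             As a minimal sketch, that arrangement is roughly:
             
                chmod 700 /home/dotancohens
                su - dotancohens        # then ssh from this account
             
             so unprivileged processes running as the primary user can't read
             the keys or the second user's agent socket.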
       
              jiveturkey wrote 2 days ago:
              How does this help you? If your dotancohen acct is compromised, I
              will just capture the password for dotancohens when you su to
              that account.
       
                dotancohen wrote 21 hours 43 min ago:
                Hmmm...
                
                 Maybe I should call the su binary directly from /usr/bin/.
                Any thoughts on that? Or should I open a new VT?
       
            chousuke wrote 3 days ago:
             On Linux, you can harden a bit against memory dumping by disabling
             ptrace. Set the "kernel.yama.ptrace_scope" sysctl to 3 and the
             easiest attack will no longer work, even for processes that don't
             explicitly request ptrace protection for themselves.
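             
             For reference, a sketch of making that persistent (note that a
             value of 3 can't be lowered again without a reboot):
             
                # /etc/sysctl.d/10-ptrace.conf
                kernel.yama.ptrace_scope = 3
             
             and apply it with "sudo sysctl --system" (or set it immediately
             with "sudo sysctl -w kernel.yama.ptrace_scope=3").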
       
              ikiris wrote 3 days ago:
              No, you should use short lived certificates, ideally locked
              inside hardware tokens and 2fa.
              
              This is just snake oil that doesn't actually add protection.
       
                chousuke wrote 3 days ago:
                But it does help; in most cases, it requires no effort
                whatsoever, in contrast to using something like SSH
                certificates which may not even be possible, depending on the
                environment.
                
                There's no such thing as perfect security, but that doesn't
                mean you shouldn't lock your door.
       
              staticassertion wrote 3 days ago:
              I actually don't believe that scraping memory is the easiest
              attack, I just mentioned it informationally. I strongly believe
              attackers are more likely to hijack sessions.
              
              But yeah, ptrace is definitely something to watch out for.
              Monitoring ptrace is also something defenders can do if they're
              not in a position to disable it (if you're working for a software
              company your engineers will ptrace).
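               
               For the monitoring route, a sketch of an auditd rule that
               records ptrace syscalls - tune the filter to your environment:
               
                  auditctl -a always,exit -F arch=b64 -S ptrace -k ptrace_watch
               
               The -k key just tags the events so they're easy to search for
               later.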
       
          genericstorage3 wrote 3 days ago:
           Brute force. Usually people pick a simple one.
       
            dawnerd wrote 3 days ago:
            Or worse, none at all since a lot of guides, especially for CI,
            tell you to just use none.
       
          pat2man wrote 3 days ago:
          What if the passphrase gets stolen as well?
       
       
   DIR <- back to front page