Hacker News | COMMENT PAGE FOR: Discord says 70k users may have had their government IDs leaked in breach

dwayne_dibley wrote 12 min ago: I wonder how many people in the UK have actually got their passport out to sign into these services. I'm guessing the average HN user isn't likely to do this, but I'd love to see the numbers for the general populace.

1970-01-01 wrote 32 min ago: The one approach that has never failed is to use a fake identity when signing up for online services. It is a violation of TOS but not a crime to do so. Only give your real information to the government. If companyX requires hard information but cannot protect this PII, then they don't deserve real data.

Aerroon wrote 28 min ago: The problem is that the government has these leaks too.

dwayne_dibley wrote 14 min ago: Sure, but you're reducing the likelihood of your real data getting out there if it's only stored in one place, rather than hundreds.

ratelimitsteve wrote 41 min ago: we, uuuuhhhhhh, we still gonna make every E-Tom, Dick.com and HarryAPI collect people's identifying information?

rsynnott wrote 58 min ago: Ah, the thing that everyone warned would happen has happened.

lofaszvanitt wrote 1 hour 41 min ago: Discord always was a privacy nightmare. How come people upload IDs there? And why does the service store them in hot storage?

kogasa240p wrote 1 hour 54 min ago: This is the end result of forcing private companies to enforce ID verification.

lofaszvanitt wrote 1 hour 39 min ago: No, this is the result of companies that dngaf about your private data. Sue them to oblivion.

quentindanjou wrote 1 hour 31 min ago: Hard disagree. Companies could care about your data and still be subject to a breach. ID verification is the source of the issue.

lofaszvanitt wrote 32 min ago: Anyone with a semblance of security awareness wouldn't store photo IDs in net-accessible storage.

gamerdonkey wrote 51 min ago: Everyone, please, don't fight. It's both. The companies wouldn't have this specific data if it wasn't for the age verification laws. Companies also work to amass as much private data as possible about their users without any influence from government and are often not good stewards of it. Let's also not forget that companies like Discord often support and work with governments on these kinds of laws because they prefer a consolidated regulatory structure and it has the added benefit of making life more difficult for smaller competitors that may enter the space.

SuperSandro2000 wrote 2 hours 9 min ago: If only someone would have warned us

qwertox wrote 2 hours 35 min ago: Pieces of shit. Do they need to look at them on a daily basis, or isn't it enough to use them to confirm identity when received and then encrypt them and move them to offline storage?

daveoc64 wrote 1 hour 23 min ago: It's just a standard helpdesk application. You submit a ticket to Discord with the ID attached when the automated ID verification didn't work for you. Once the ticket is dealt with, Discord could have a policy of deleting the IDs, but they don't.

jacquesm wrote 2 hours 32 min ago: So many companies do not understand this simple principle. Blast radius reduction. But no, they need to have everything online, and instantly accessible all the time.
Because they can't possibly be inconvenienced with a short delay in case they ever want to look at that piece of data that they will never want to look at anyway. It is going to take a long time before companies realize that data they don't need is a liability, not an asset.

prmoustache wrote 2 hours 49 min ago: Why would one give their government ID to Discord?

laylower wrote 3 hours 23 min ago: Will the British Government be held liable for ID thefts from this? If they hadn't created a honeypot with minimal security, would this info now be out there? WTF were they thinking about?

pbohun wrote 3 hours 31 min ago: The Principle of Least Privilege is one of the foundational aspects of security. Governments should be enforcing that, not requiring companies to collect very sensitive information like they are currently doing. Things like "prove your age", digital ID, and Chat Control are actively malicious when it comes to safety, security, and privacy.

atbvu wrote 3 hours 42 min ago: Every time I see a data breach caused by a third-party vendor, I can't help but wonder: why are these big companies so deeply reliant on outsourcing, yet so lax when it comes to controlling security?

kevincox wrote 1 hour 56 min ago: Because the consequences of events like this are minimal, so why would they waste time and effort worrying about it?

theknarf wrote 2 hours 46 min ago: Usually some regulation change that the company is not aware of; they have to run to find a fix as soon as possible, some business guy who doesn't know anything about tech finds a vendor who is ready to sell a solution (they probably created their whole business last month on a gamble that the new regulation would be passed and that businesses would be rushing for a solution). Then they simply buy that solution "for compliance" as a top-down decision, even when internal employees ring the warning bell.

ktosobcy wrote 4 hours 28 min ago: I kinda hope and root for the EU's spec ( [1] ) with "Zero Knowledge Proof" that wouldn't require passing an actual ID to the service…
URI [1]: https://ageverification.dev/Technical%20Specification/architec...

k__ wrote 3 hours 10 min ago: This. We're talking about a solved problem here. Similar to storing passwords as unhashed/plaintext.

Bender wrote 4 hours 18 min ago: My preference would be just requiring site operators to add the RTA header [1] for anything that could potentially be adult in nature or user-contributed content, and let parents decide if devices should have parental controls. Not perfect, nothing is, but it would protect most small children. Teens will easily bypass any method, as many today watch porn together in rated-G/PG video games that allow setting up a streaming player in-game.
[1] - URI [1]: https://www.rtalabel.org/index.php?content=howtofaq#single

buyucu wrote 5 hours 7 min ago: This is why social media should never ever ask for an ID.

b00ty4breakfast wrote 5 hours 40 min ago: I understand I grew up in a different era, but it is beyond absurd to me that a chat application requires government ID from its users. I understand the rationale but I do not find it convincing in the least, especially with the way that security is treated at basically any entity that has this kind of info on file. I do not like this world that we have created and I would like to apply for a full refund.
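To make Bender's RTA suggestion above concrete, here is a minimal sketch of a site tagging itself with the RTA label so that on-device parental controls can block it. It assumes the label string and meta-tag/header usage documented at rtalabel.org; the handler, port, and page content are illustrative only, not anything Discord actually runs.

    # Minimal sketch: serving a page tagged with the RTA label so that
    # client-side parental-control software can block it.
    # Assumes the label string documented at rtalabel.org.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # published RTA label value

    class LabeledHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = (
                "<html><head>"
                f'<meta name="rating" content="{RTA_LABEL}">'
                "</head><body>adult or user-contributed content here</body></html>"
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            # The label can also be exposed as a response header, which
            # filtering software can check without parsing the page.
            self.send_header("Rating", RTA_LABEL)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), LabeledHandler).serve_forever()

The appeal of this approach in the thread's terms is that the site publishes a single static label and collects nothing: the age decision stays on the child's device, so there is no ID database to breach.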
jefozabuss wrote 5 hours 9 min ago: Rationale is likely the requirements of age verification rules by the UK, some US states, etc. We could likely see a bit more of these data leaks in the future I guess, due to how there are more and more countries/states adopting this.

timpera wrote 5 hours 43 min ago: I work at a company where we also store government IDs in Zendesk. I've alerted management multiple times but no one seems to care. It's a disaster waiting to happen…

TavsiE9s wrote 5 hours 14 min ago: Leave paper trails (emails most likely) and keep hard copies.

verytrivial wrote 6 hours 18 min ago: ID checks, driven by prudishness, are an absolute gift to the big social media companies. They're the only entities who (a) already know the check's answers, and (b) have the resources to keep hackers largely at bay. I am not surprised these laws are landing with so little resistance.

MaxikCZ wrote 5 hours 32 min ago: It's as if the big social media companies lobbied for extra red tape, eh?

spacebanana7 wrote 2 hours 18 min ago: Surprisingly they've generally lobbied against it for ideological reasons, despite their economic incentives.

bArray wrote 6 hours 27 min ago: Looking forward to being forced to provide my government ID to access Discord [1], when they have only just suffered a major breach. Good stuff.
URI [1]: https://support.discord.com/hc/en-us/articles/30326565624343-H...

Razengan wrote 7 hours 17 min ago: And how will they pay for it? How did we get to this state anyway? Isn't HN supposed to be populated by the people who work at these companies, the fuck are you guys doing??

bell-cot wrote 37 min ago: Whatever stereotypes you've read, about 0.01% of HNers hold C-level jobs at huge tech companies, to be setting such policies. And even at modest-sized companies, those are decided by Legal Departments and senior business managers. While you might find it cathartic to angrily curse at some convenient Post Office employee for (say) the Postmaster General's latest postage stamp price increase - that is really not a classy move.

neuronic wrote 8 hours 20 min ago: This is why I am really looking forward to PIDs in the European Digital Identity ecosystem (EUDI) [1]. This works with the OpenID Verifiable Credentials spec built on top of OAuth2. There are open source solutions in the competition for building the EUDI Wallet, and the architecture and reference framework is openly accessible [2]. All credentials are kept with the holder (you) at all times. Basically an implementation of the EU eIDAS 2.0 regulation, obviously subject to GDPR. Mandated to be accessible to EU citizens by 2027, when all Member States have developed a Wallet solution. Not associated, but learned about it at work recently; just an awesome project and thought I'd share in this context. [1] [2]
URI [1]: https://commission.europa.eu/strategy-and-policy/priorities-20...
URI [2]: https://eu-digital-identity-wallet.github.io/eudi-doc-architec...
URI [3]: https://github.com/openwallet-foundation/credo-ts

11mariom wrote 8 hours 24 min ago: First problem is - they never should have had such data. Why are you sending them IDs?

miroljub wrote 8 hours 28 min ago: It's great news. Introducing totalitarian laws and rushing companies to implement them, who would've thought something would go wrong? I hope this incident and future data breaches will finally raise awareness of which direction many regimes are going.

armada651 wrote 8 hours 21 min ago: Don't worry, the only thing governments will learn from this is that they need to exert even more control.
They'll use this as a convenient excuse to centralize the age verification in the interest of security, which conveniently gives the government the final say over which web services you're allowed to use.

miroljub wrote 8 hours 13 min ago: The stricter the dictatorship is, the more likely people will resist the regime. That's why many of the traditional totalitarian regimes are populist: they do what their people want them to do, or what they can convince them is good for them. New Western hybrid regimes still haven't realized they can't rule against their own people forever.

croes wrote 8 hours 44 min ago: Why did they have them in the first place?

jonplackett wrote 8 hours 47 min ago: Why are they even storing these? Once they have verified them as old enough, why keep them? These companies should be forced to release a proper account of events - like Google/Cloudflare do when they mess something up.

eternauta3k wrote 9 hours 13 min ago: More governments should provide a system like the German electronic ID*, which lets you prove your age without revealing other information. * Tragically underused because impractical

consp wrote 8 hours 26 min ago: As far as I have heard, zero knowledge proofs have become optional (thus dead) in the EU wallet specification. I expect selective disclosure in all forms to be completely axed next.

OvbiousError wrote 8 hours 47 min ago: In Belgium we have a service called "itsme". Had it for ages, works very well; it used to be mainly for government but banks are also switching to it.

luplex wrote 9 hours 7 min ago: Not just impractical, but also not easy and free to integrate with your service. Seems designed to push you to use a commercial product.
URI [1]: https://www.ausweisapp.bund.de/so-werden-sie-diensteanbieter

eleveriven wrote 9 hours 15 min ago: The whole "it wasn't us, it was our third-party vendor" line is getting way too common. If you're collecting government IDs for age verification, the security bar should be extremely high... no matter who's handling the data.

baobabKoodaa wrote 7 hours 21 min ago: But our subcontractor made a contractual promise to use only sub-subcontractors who use only sub-sub-subcontractors who promise to be secure!

Spivak wrote 54 min ago: Ahh I see you've done work for the government.

teekert wrote 9 hours 41 min ago: I think it is nice that the GDPR forces companies to not keep too much data about people. And you can only have data that you need for the stated purpose (of course this leaves loopholes, but it is good data hygiene to always consider). For example, if you state you want to verify age, you only need the ID for a couple of seconds. So why didn't they think about the risk of a hack before? They could have done the age verification and then immediately deleted the document. The cynical take is of course they did think about it but would take the fine if it came to that... Maybe it is good to make an example out of Discord? Don't keep stuff around if you don't need it should be common sense.

Vipsy wrote 9 hours 44 min ago: One important problem that's mostly ignored is the lack of transparency about the third-party providers handling such sensitive ID documents. When a breach occurs, public statements rarely name the exact vendor responsible, making it difficult for affected users to understand who actually had access and who might still have their data. This opacity delays accountability and creates ongoing risks, since users have no meaningful way to audit or assess the practices of these shadow providers.
Unless this layer of the data-handling ecosystem is discussed and regulated, future breaches will remain inevitable and largely untraceable.

wosined wrote 8 hours 19 min ago: The biggest problem is giving data to people in the first place.

eleveriven wrote 8 hours 55 min ago: The third-party layer is basically the dark matter of data breaches: invisible to users, barely acknowledged by companies, and completely unaccountable when things go wrong.

baby wrote 9 hours 56 min ago: Why does Discord have gov IDs? At this point we already have the tech to prove using zero knowledge that we have an ID.

miohtama wrote 10 hours 0 min ago: Zendesk boasts about this: "Discord's investments in AI-driven self-service with the Zendesk CX platform have enabled the company to provide seamless support."

xaxaxa123 wrote 10 hours 23 min ago: KYC is a bug.

Anduia wrote 10 hours 37 min ago: Discord uses Zendesk (1). However, in the press release they don't name the third party that was compromised, and Zendesk denies that it was their service. What other third party was Discord using if not Zendesk? Whose reputation are they protecting?
URI [1]: https://www.zendesk.fr/customer/discord/

BryantD wrote 3 hours 6 min ago: Do you happen to have a link to Zendesk's denial?

buckle8017 wrote 4 hours 42 min ago: The wording Discord used leaves open the possibility that a Zendesk account was compromised through no fault of Zendesk. Kinda feels like Discord is lying by omission. Edit: Actually my bet is their support staff just sold them out.

Mattwmaster58 wrote 3 hours 9 min ago: vx-underground claims to have communication with the group, and this post of theirs adds to the support agent theory: [1]
> they were able to compromise Discord Zendesk by compromising a "BPO Agent" (outsourced support).
> Of course, as is tradition, it is also entirely possible they're lying
URI [1]: https://xcancel.com/vxunderground/status/19762388156658566...

Draiken wrote 5 hours 9 min ago: I don't understand how we allow these companies to protect each other even in the face of egregious malpractice. This might even be a PR move. They fucked up and can merely say "a third party" did it. Who's gonna verify this? Unless we have whistleblowers we will never know. What a disgrace.

geenat wrote 10 hours 48 min ago: Why are they permanently storing government IDs?

andsoitis wrote 11 hours 14 min ago: What is the use case for uploading your government ID to Discord?

tavavex wrote 9 hours 49 min ago: Two of the other replies are wrong. This isn't actually about the new 18+ age verification stuff that countries seem to be ramming through right now - as far as I know, Discord uses third parties for that service. The link from Discord's statement in the article mentions that this is about appealing account bans of users who were suspected to be under the legal age to use Discord at all (<13 in most places). This is an older thing, which also explains the amount of data that was leaked.

miohtama wrote 10 hours 3 min ago: Online Safety Act for the UK. You will be safe.

hmry wrote 11 hours 8 min ago: Joining "NSFW channels", which usually means porn. But some normal channels are also tagged NSFW to opt out of Discord's forced content filter on public servers, which has occasional baffling false positives.

andsoitis wrote 11 hours 2 min ago: So people are willing to upload their govt IDs to watch porn. Wow.
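As a rough illustration of the "prove it with zero knowledge" idea baby raises above (and the EU wallet approach ktosobcy and neuronic link to), here is a toy sketch in which a trusted issuer, such as a government eID app, signs only the statement "over 18", bound to a one-time nonce chosen by the service. Everything here is simplified for illustration: key distribution, revocation, and the real zero-knowledge machinery are omitted, and this is not the EUDI or eID protocol itself.

    # Toy illustration: age attestation without handing over an ID document.
    # Requires the "cryptography" package. Not the real EUDI/eID protocol.
    import json, os, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # --- Issuer side (government wallet / eID app) -------------------------
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def issue_age_attestation(nonce: bytes) -> bytes:
        """Sign only the fact the service needs ("holder is over 18"), bound
        to the service's one-time nonce so the token can't be replayed."""
        claim = {"age_over_18": True, "nonce": nonce.hex(), "iat": int(time.time())}
        payload = json.dumps(claim, sort_keys=True).encode()
        return payload + issuer_key.sign(payload)  # Ed25519 signature is 64 bytes

    # --- Service side (e.g. a chat platform) -------------------------------
    def verify_age_attestation(token: bytes, expected_nonce: bytes) -> bool:
        payload, signature = token[:-64], token[-64:]
        try:
            issuer_pub.verify(signature, payload)
        except InvalidSignature:
            return False
        claim = json.loads(payload)
        # The service learns one bit (over 18) and nothing else: no name,
        # no birth date, no document image to store or leak later.
        return claim.get("age_over_18") is True and claim.get("nonce") == expected_nonce.hex()

    nonce = os.urandom(16)                       # fresh challenge per check
    token = issue_age_attestation(nonce)         # produced on the user's device
    print(verify_age_attestation(token, nonce))  # True

The point several commenters make is visible in the verifier: the service ends up holding a boolean and a timestamp, not a scan of a passport that can leak years later.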
0xfffafaCrash wrote 11 hours 11 min ago: As the article says, it's used for age verification.

kwar13 wrote 11 hours 35 min ago: How many times the same thing... most even tell you that they verify you and then delete your ID. ZK proofs cannot become mainstream fast enough.

rendall wrote 11 hours 40 min ago: I once accidentally set an incorrect birth year on Twitter. They locked me out of my account and insisted that I upload a government ID to unlock my account.

Aachen wrote 10 hours 31 min ago: Did they accept the edited ID with a DoB matching the account data, or how did you solve that?

rendall wrote 5 hours 39 min ago: I just... sent a scan of my passport. I mean, they promised to delete it, right? Nothing could go wrong?

maxlin wrote 12 hours 6 min ago: .... The government IDs they only started asking for as a bullshit requirement after running for like 10 years without needing them? At some point we'll start seeing companies that rotate your passwords automatically and integrate with your autologins, and send immediate reports of breaches / suddenly failing logins. Wait. Why isn't this a thing?

missingrib wrote 12 hours 12 min ago: Why haven't zero knowledge proofs shined in this area? Can anyone explain?

Nasrudith wrote 7 hours 56 min ago: Aren't ZKPs useless for their paranoid 'children will die if they see boobies' crap, because then they'd allow for a single common token to be shared willy-nilly? Not to mention that surveillance is the clear actual government goal.

jhasse wrote 1 hour 13 min ago: No, Discord would create a new challenge for every user by creating a random nonce.

tzs wrote 11 hours 41 min ago: They are on the way. The EU is field testing such a system now.

tiku wrote 12 hours 22 min ago: Why is it still so hard to identify yourself online?

driverdan wrote 12 hours 45 min ago: Bring back IRC.

amatecha wrote 12 hours 31 min ago: It's still very much alive! Regularly active on a few channels spread across different IRC servers. Still works great.

whatever1 wrote 12 hours 52 min ago: Oh no! Anyway, software engineers are not real engineers so nobody will be held accountable.

quintes wrote 13 hours 2 min ago: Why. I see Australia is intending on blocking YouTube and other platforms. Expect this more regularly.

EGreg wrote 13 hours 6 min ago: Those are rookie numbers. Time to pump up those numbers… we publish this every year or so:
URI [1]: https://qbix.com/blog/

qwertytyyuu wrote 13 hours 12 min ago: Wait, already? I was hoping to hear about it next year. Maybe it's a good thing that it happened early so they can fix?

saagarjha wrote 13 hours 9 min ago: No, it's a good thing it happened early so they can remove it.

codedokode wrote 13 hours 45 min ago: Companies usually promise that the ID would be used only for validation and then immediately deleted. How could so many IDs leak then? Do they verify millions of IDs per month?

ok123456 wrote 1 hour 28 min ago: Discord is a fed honeypot, so why would they.

_ink_ wrote 7 hours 39 min ago: I guess they are required to store everything for years for "compliance". How else are they going to save their butts when someone manages to fake their identity through them?

xxs wrote 8 hours 15 min ago: The fact that deletion is needed at all speaks to a pretty terrible design. The data should simply not be permanently stored. I have quite a lot of experience dealing with personal identity information. Unless the latter has to be reported, it's never stored.
Along with the fact that it's actually deleted to comply with GDPR and friends (when it has to be recorded). In any case, if any personal data is to be stored, it's always encrypted with personal keys.

eleveriven wrote 8 hours 43 min ago: Either the deletion promise is a lie, or the third-party vendor was storing the data anyway.

crossroadsguy wrote 8 hours 25 min ago: Or it's all kosher as per their "internal policy", which translates to "yes, it was deleted on the server where you first uploaded it" but "pre-deletion" it was "transitioned" to "another secure server" for "your convenience" and "everything is as per our T&C that you agreed to and we follow the highest standards of data security and safety. Thank you for your time". If Kafka were alive today, he'd see the world has outdone itself.

whiplash451 wrote 8 hours 56 min ago: The regulation lets identity verification companies store identity data for up to three years. The providers typically do it to train machine learning models for fraud detection.

magicalhippo wrote 12 hours 36 min ago: From the previous[1] statement: The unauthorized party also accessed a "small number" of images of government IDs from "users who had appealed an age determination." It makes sense they have to hang on to the ID in case of processing an appeal, which probably doesn't have the highest priority and hence stretches out in time. [1]
URI [1]: https://www.theverge.com/news/792032/discord-customer-servic...

BLKNSLVR wrote 11 hours 3 min ago: The funny thing about this is that it kinda makes it OK for Discord to still have the records. But...
1. Discord still got hacked despite being a company that must have passed some level of authorised audit in order to be able to store government ID cards. (Who audits the auditors? Is there an independent rating of security audit companies? What was the vulnerability? Was there any Government due diligence?)
2. This is a great example of why "something else" is needed for proof of identity transactions over the wire, and this "something else" should exist, and have existed for long enough to develop a level of trust, before Governments start mandating that private companies audited by other private companies must undertake actions that require the storage of Government ID documents.
Banking-level security and regulation should be required for any aggregator of such sensitive data. That fucking Discord had Government ID docs at all is beyond ridiculous. More so for Governments of countries other than where Discord was incorporated. A state-sponsored Russian / Chinese / North Korean / Iranian Discord alternative would have been an interesting situation. The implicit trust in Discord, and any other "app publisher" requiring ID confirmation, is just peculiar.

ooterness wrote 1 hour 26 min ago: > passed some level of authorised audit in order to be able to store government ID cards.
In a perfect world, maybe. Not in this one.

jeffparsons wrote 8 hours 7 min ago: There is no reason for a company like Discord to ever see the ID. The owner of each relevant form of ID - usually a government agency/department - should provide an attestation service, such that users prove their identity to the agency and the agency tells the company "yes, this user is who they say they are". It's not that hard. Legislators around the world are consistently dropping the ball on this.

magicalhippo wrote 7 hours 50 min ago: Doesn't seem like they did.
From the original article I referenced earlier: One of Discord's third-party customer service providers was compromised by an "unauthorized party," the company says. [...] The unauthorized party "did not gain access to Discord directly."

jeffparsons wrote 7 hours 40 min ago: The third-party company shouldn't ever need to see the IDs, either. Same issue.

magicalhippo wrote 6 hours 50 min ago: When governments do things the wrong way around, like mandating age control before they have a method for doing that in a secure manner, what's a company to do?

BLKNSLVR wrote 12 hours 43 min ago: The Discord message (in Australia at least) specifically says: The information you provide is only used to confirm your age group, then it's deleted. Refer screenshot: [1] I can still swipe the message away, so I haven't done it yet. I'm going to work out how I can fake the face scan. I ain't sending Government ID to some chat app (no matter how big or small), that's over the top. As an aside, I would have thought the age groups should be: 13 to 18, and 18+. They're the only ones that materially matter to the reason this check exists, in Australia at least. I don't want to contribute to their demographic analysis.
URI [1]: https://www.reddit.com/r/discordapp/comments/1nkrxcp/discord...

daveoc64 wrote 1 hour 28 min ago: That is not the system that was compromised. It was Discord's helpdesk software (reported to be Zendesk). If you have problems with that system, you can log a support ticket with the Discord helpdesk, attaching your ID, and they can override it for you.

elAhmo wrote 6 hours 42 min ago: Unless they get fined for this, nothing will change.

peanutz454 wrote 11 hours 28 min ago: When the Australia subreddit was discussing the introduction of ID on Discord, the top comment was something along the lines of "look up OpenFeint". That was the day I uninstalled Discord. It may not be an easy decision, especially if you are part of important social communities, but we cannot accept this level of disregard for our identities.

Krasnol wrote 10 hours 49 min ago: I just looked up "OpenFeint". It took me a while to find the connection to Discord. Not sure if I did, because it seems like some mobile app for people who play mobile games, with some connection to some Japanese network and hosted in China or something?

cokecan wrote 10 hours 25 min ago: OpenFeint was founded by the same guy who founded Discord. From the Wikipedia page: "In 2011, OpenFeint was party to a class action suit with allegations including computer fraud, invasion of privacy, breach of contract, bad faith and seven other statutory violations. According to a news report "OpenFeint's business plan included accessing and disclosing personal information without authorization to mobile-device application developers, advertising networks and web-analytic vendors that market mobile applications"."

leoqa wrote 1 hour 50 min ago: I was entertaining an offer from Discord and also stumbled upon the founder's former company debacle. The platform vision pitched to me in the interview seemed similar, and seeing as how he started to implement spyware, I decided to bail.

Krasnol wrote 9 hours 53 min ago: Oh wow, OK. Now I understand :D

sampli wrote 12 hours 50 min ago: deleted = database column

crossroadsguy wrote 8 hours 19 min ago: Or maybe they define 'delete' as moving data from the "production" env to the "deleted" env, and if someone asked for that data to be deleted even from there, then the next step is moving from "deleted" to "purged".
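sampli's "deleted = database column" quip above is the failure mode several commenters are circling: a soft-delete flag, or a copy parked in another environment, keeps the image around for the next breach. Below is a minimal sketch of the alternative (verify, record only the outcome, hard-delete the blob); the table and column names are invented for illustration, and a real system would also have to purge backups and helpdesk ticket attachments.

    # Sketch of "keep the outcome, not the document": after the age check,
    # the only things retained are a boolean and a timestamp; the image row
    # is hard deleted, not flagged. Table/column names are illustrative only.
    import sqlite3, time

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE id_uploads (user_id INTEGER PRIMARY KEY, image BLOB);
        CREATE TABLE users (user_id INTEGER PRIMARY KEY,
                            age_verified INTEGER DEFAULT 0,
                            age_verified_at INTEGER);
    """)

    def record_verification(user_id: int, passed: bool) -> None:
        if passed:
            db.execute(
                "UPDATE users SET age_verified = 1, age_verified_at = ? WHERE user_id = ?",
                (int(time.time()), user_id),
            )
        # Hard delete the document either way -- no soft-delete column,
        # no copy moved to a "deleted" environment for convenience.
        db.execute("DELETE FROM id_uploads WHERE user_id = ?", (user_id,))
        db.commit()

    db.execute("INSERT INTO users (user_id) VALUES (1)")
    db.execute("INSERT INTO id_uploads VALUES (1, x'00')")   # stand-in for an ID scan
    record_verification(1, passed=True)
    assert db.execute("SELECT COUNT(*) FROM id_uploads").fetchone()[0] == 0
    print(db.execute("SELECT age_verified, age_verified_at FROM users").fetchone())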
o11c wrote 13 hours 7 min ago: Lying is usually legal. And even if lying is illegal in a particular context, it's de facto legal since nobody ever gets punished for it.

schaefer wrote 12 hours 58 min ago: Fraud is not legal. There's a difference between lying on the playground and fraud in a business setting.

o11c wrote 12 hours 27 min ago: Again: fraud is de facto legal. It is ubiquitous in every part of the business world, both internal and consumer-facing.

ocdtrekkie wrote 11 hours 0 min ago: A more useful construct is that civil offenses are only a problem if someone is aware of, motivated, and able to afford to sue you over it. Businesses do a lot of arguably illegal things that are not likely to lead to an actual lawsuit.

eviks wrote 11 hours 34 min ago: De facto is the opposite of de jure, so no, non-enforcement doesn't make it legal.

LoganDark wrote 3 hours 15 min ago: Again, nobody said it was legal. They said de facto legal, which does not mean it's actually legal, just that it's effectively treated as legal.

encrypted_bird wrote 13 hours 20 min ago: Do they actually say in the TOS that they will delete them? If they do, do they say immediately? How immediately? Right away or, perhaps, 1 month? Unless specified in contractual documentation, words like "immediately" or "soon" do not have any single definition, which allows them to stretch it without technically being in breach of contract. Not to mention that oftentimes governments mandate data retention for so-and-so amount of time, so the companies are legally required in such cases to keep the data even if they, miraculously, desire not to.

ChrisArchitect wrote 14 hours 27 min ago: Source:
URI [1]: https://discord.com/press-releases/update-on-security-incident...

elevation wrote 14 hours 34 min ago: I didn't feel comfortable giving Discord my phone number when they demanded it, so I lost access to the open source communities that insist on collaborating there. I wish breaches like this would cause people to reconsider their choices, but sadly it's unlikely most users will move.

nulld3v wrote 13 hours 21 min ago: I also wish open-source communities would move off of Discord for another reason: users are limited to joining a maximum of 100 servers. I've hit the cap and it's driving me crazy. It's really easy to hit it since each friend group, hobby group, gaming community, and open-source community often all have their own servers.

Aachen wrote 11 hours 0 min ago: That limit is per account, right?

noitpmeder wrote 12 hours 36 min ago: I can barely keep up with 6 semi-active Discord servers, each with tens of semi-active channels... much less think about doing it with hundreds. More power to you, you must have figured out a good notification scheme.

jamwil wrote 6 hours 56 min ago: I am super curious how other people use Discord. I'm like you - trying and basically failing to keep up with 6 servers. I just want to watch a power user out of morbid curiosity. I suspect they are also browser tab hoarders, which I'm also curious about.

Gigachad wrote 14 hours 24 min ago: Discord doesn't require a phone number. It's individual community owners who opt to require it. You can create a server that doesn't require one, but it effectively means you can't ban people, since they can just sign up again on a new account.

ikkun wrote 12 hours 26 min ago: I tried making an account once; technically my account was created, but trying to log in only gets me a screen that requires I verify a phone number. I was never even able to attempt to join a server.
I assume it's my browser's privacy settings and ad blocker, but I'm not sure.

frumplestlatz wrote 13 hours 54 min ago: I refuse to use their "create a server" language. It is not a server by any definition of the word server. You can set up a community on their servers. I'm not sure why they chose to use misleading language, but it is misleading.

Aachen wrote 10 hours 54 min ago: Fun fact: Discord called them guilds before realising that they could compete with paid services that set up actual (e.g. Mumble) servers for you by pretending this is equivalent and free. I also have trouble going along with the doublespeak. If a supermarket called their beer apple juice, I'd also not be offering my friends "apple juice", I'd call it what it is. Guild is innocuous enough, and since the API docs still call their communities that, it can be a term to use among those in the know to have common and clear terminology. 'Guilds in Discord represent an isolated collection of users and channels, and are often referred to as "servers" in the UI.' -
URI [1]: https://discord.com/developers/docs/resources/guild

monerozcash wrote 7 hours 34 min ago: This seems like a distinction without a difference. If you used a paid service offering Mumble servers that used some custom software that allowed them to offer multiple ... "servers" on different ports/IP addresses from a single daemon, would you really care? Focusing on the fact that it's not really a "server" because they aren't running as separate processes seems like utterly silly pedantry, and we probably don't even know if that's actually true regarding Discord or not.

wizzwizz4 wrote 7 min ago: The distinction matters. The cost (to my users) of switching from one Mumble server to another is the same, regardless of who hosts the server. The cost of switching from one "Discord server" to another is much lower than the cost of switching between Discord and any Discord clone, keeping people on Discord.

Gigachad wrote 12 hours 32 min ago: It's wrong in terms of the technical implementation and right in terms of user experience. Gamers are well familiar with different communities actually hosting servers and instances for games or voice chat pre-Discord. Discord offers the same experience but without them physically being different servers. Keeping the name guides users, in the same way OSs call it a recycle bin despite it not actually being a bin.

noitpmeder wrote 12 hours 33 min ago: I'm not sure it matters in this situation ...? Server/instance/VM/shard/... when used in this context is pure corporate naming BS. They'd have called it "setting up a new circle jerk" if they thought it would increase metrics.

LoganDark wrote 14 hours 11 min ago: Discord has an account flag that triggers mandatory phone number verification. It happens if you do things like send messages too quickly over the span of about a minute, or send multiple friend requests, or join too many servers, or start too many DMs, or indeed, join any server that is set to require phone number verification.

zahlman wrote 13 hours 59 min ago: I am in dozens of servers and have not encountered this demand for a phone number. I have been in servers that required it for moderators as part of 2FA, and I just declined to moderate there. It had no effect on my use of any other server.

jjulius wrote 13 hours 31 min ago: It has happened to me on two accounts. OP is also not the only other person I've seen who has dealt with it. Bully for you that you haven't encountered it, but it's certainly a thing.
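For what it's worth, the "guild" terminology Aachen quotes is visible directly in Discord's public API: the documented "Get Current User Guilds" endpoint returns an account's "servers" as guild objects, which is roughly the count the 100-server cap nulld3v mentions applies to. A hedged sketch using that documented endpoint; the token is a placeholder you would have to supply, and a user account would use a Bearer OAuth2 token instead of a bot token.

    # Quick look at the "guild" terminology from the Discord developer docs:
    # the documented "Get Current User Guilds" endpoint lists the account's
    # "servers" as guild objects. DISCORD_TOKEN is a placeholder.
    import os
    import requests

    API = "https://discord.com/api/v10"
    token = os.environ["DISCORD_TOKEN"]  # bot token for a bot account

    resp = requests.get(
        f"{API}/users/@me/guilds",
        headers={"Authorization": f"Bot {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    guilds = resp.json()
    print(f"member of {len(guilds)} guilds")
    for g in guilds[:5]:
        print(g["id"], g["name"])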
BoorishBears wrote 13 hours 35 min ago: Anyone on HN should know the fickle nature of fraud detection, especially when the cost of getting it wrong is 0.

malfist wrote 13 hours 55 min ago: Just because something hasn't happened to you doesn't mean it doesn't happen to other people.

xorbax wrote 13 hours 35 min ago: It never happened to him, so it's never happened. Makes this huge data leak a real head-scratcher.

giancarlostoro wrote 14 hours 31 min ago: The issue is if you don't enforce the phone number requirement on your server, you get all the trolls who don't use phone-numbered accounts. I wish Discord would allow you to restrict known VPNs instead of requiring phone numbers. It would solve so many issues. I know a LOT of VPNs won't be caught, but if you block MOST non-residential IP blocks, you'll capture a lot of them.

csmantle wrote 14 hours 11 min ago: Trolls likely have access to phone number farms, though. And in some parts of the world it's extra cheap to mass-register phone numbers. Trolls wouldn't be harmed in a data leak; only normal users get hurt.

noitpmeder wrote 12 hours 32 min ago: Most trolls aren't the kind of trolls that run large-scale networks, they're the 12-year-olds you triggered by saying BLM.

elevation wrote 14 hours 17 min ago: Phone numbers may be required to bring order to a vast international user base, but a few dozen devs and a small user community can function without invasive moderation tactics.

fishgoesblub wrote 14 hours 22 min ago: The communities I'm in don't require a phone number and very rarely get trolls. Proper moderation is the most important part. Occasionally there's a spambot, but they're just hacked accounts from pre-existing real users, and as someone that uses a VPN with Discord, I'd prefer to not be treated as an evil-doer, please.

giancarlostoro wrote 14 hours 17 min ago: Sure, are the communities you're in tens of thousands of users or more? Because things change really quickly depending on how many users are active, if it's a community server, and the subject matter. Even a programming Discord is a hell hole. You cannot ever have enough mods. Things fall through the cracks and people get hurt. You can't moderate DMs or know the wellbeing of tens of thousands of your users who are being harassed in DMs and have no idea how to get help. Discord is full of a lot of youth. There are users who rotate community servers on a VPN / newly spun up alts. They are relentless. I noticed the communities that are massive and do not have this problem to this extent all require a phone number.

mikert89 wrote 14 hours 34 min ago: When can people start going to jail for this kind of thing?

f4uCL9dNSnQm wrote 7 hours 30 min ago: It is the UK. They find it hard to jail people that lied on purpose to jail innocent people, multiple times.

heavyset_go wrote 13 hours 21 min ago: After a revolution.

rr808 wrote 13 hours 29 min ago: You know it'll be the IT pros going to jail, not the execs, right?

sunaookami wrote 10 hours 46 min ago: Good, then they can stop the excuses for implementing the shittiest things that ruined the web and just say no.

EarlKing wrote 14 hours 19 min ago: Yes, good question: When can we start jailing CEOs and their employees for these blatant violations of the CPRA and GDPR?

krainboltgreene wrote 13 hours 7 min ago: Immediately, if you move to China.

JoshTriplett wrote 13 hours 28 min ago: And the politicians who mandate ID-checking requirements, without which the "government IDs" part of this wouldn't have happened.
(To be explicit, not supporting jailing here, just removing from office.)

Imustaskforhelp wrote 13 hours 46 min ago: Was thinking the same exact thing!!

fishmicrowaver wrote 14 hours 41 min ago: You've got to be a complete moron uploading your gov ID to Discord.

sph wrote 7 hours 6 min ago: Are you seriously blaming kids and teenagers (who spend their free time on Discord) because they are not smart enough to know better and form communities elsewhere? You can do better than victim blame, and instead point the finger at Discord and whoever told the British government that delegating ID control to third parties was a good idea.

crossroadsguy wrote 8 hours 17 min ago: What would you say of a lot of FOSS companies/orgs who love to stay on places like Discord? Hell, some entities that pride themselves on "privacy" and "E2EE" shit are specifically on Discord. I think that must go beyond moronity.

axus wrote 13 hours 22 min ago: A bunch of UK users are blocked from the more "free speech" (over 13) channels unless they prove their identity to Discord, to comply with the Online Safety Act.

drawfloat wrote 11 hours 34 min ago: This applies to all users and isn't related to the OSA (though that will probably make leaks like this more likely).

Podrod wrote 11 hours 51 min ago: It's channels marked NSFW that you need verification for, and it's also incredibly easy to bypass with a VPN.

Crosseye_Jack wrote 13 hours 54 min ago: No need to blame the user for the company's actions. Company enacts a policy enforced on them by law, for example requiring proof that a user is above the age of 18 to be able to use a channel where other users may use naughty words (The Horror!!!). User struggles to use the automated age check system (I used the "guess age by letting an AI have a look at a selfie" method and it was a pain in the ass which failed twice before it finally worked) so does what is recommended and makes a support ticket. [0] User, relying on the published policy that Discord will delete the ID directly after it is used for the age check [1], decides they wish to remain in communication with their online friends and uploads their ID. Discord then fail to honour their end of the deal by deleting their users' documents after use, and then get breached. Full blame is on Discord for poorly handling their users' data via their third parties, and on the governments forcing such practices. Discord should have their asses handed to them by the UK's ICO. Sure, us geeks can and will use self-hosted systems and find ways to avoid doing ID checks, but your avg joe isn't going to do that. Hopefully cases like this will help with the push back on governments mandating these kinds of checks, but I see the UK government just falling back to "think of the children" and laying all the blame on Discord (who are not without fault in this case). [0] [1]
URI [1]: https://support.discord.com/hc/en-us/articles/30326565624343...
URI [2]: https://support.discord.com/hc/en-us/articles/30326565624343...

Hawxy wrote 9 hours 4 min ago: > Discord then fail to honour their end of the deal by deleting their users' documents after use, and then get breached.
This wasn't documents uploaded via the automated ID checker, it was users manually sending ID documents to support in order to appeal an automated age decision.
ryandrake wrote 13 hours 7 min ago: > User, relying on the published policy that Discord will delete the ID directly after it is used for the age check [1], decides they wish to remain in communication with their online friends and uploads their ID.
This is the part where the user has to take at least partial blame. You have to be utterly stupid (or at the very least way too sheltered) to believe a statement like this from a company, especially when there are zero consequences to the company for lying about it or negligently failing to live up to their policy.

rs186 wrote 5 hours 20 min ago: Nobody believes the policy or even cares about the policy. They need to use the service, because everyone else is using the service, and they don't have a choice. Plain and simple.

jamwil wrote 7 hours 24 min ago: Pure victim blaming.

ryandrake wrote 1 hour 18 min ago: Calling "victim blaming" is not a retort. There is nothing wrong with dividing up blame among both people who offer a risky choice and people who make the risky decision to accept that choice, just because one of them suffered the downside of that risk. There are a lot of other examples where if you screw something up you might get hurt, and the victim is definitely at fault. It's a spectrum, as someone else put it. Sending your government ID over the Internet is a very risky decision, given the number and frequency of data breaches. The people who got burned here are not totally at fault but they share at least a little responsibility.

jamwil wrote 47 min ago: If Discord says they delete the PII they collect and they ultimately fail to do that, whether by malice or negligence, Discord owns 100% of the blame. If I get drunk and drive the wrong way down the highway and cause a wreck, the blame is not shared because the victim was driving a vehicle, which is known to be a risky activity. I am culpable, full stop.

ryandrake wrote 27 min ago: I hope we agree that there's a spectrum, and sometimes the victim is the one at fault. We just have to disagree about this specific case. I'm OK with that. All the best.

BolexNOLA wrote 12 hours 28 min ago: You don't remember what it was like to just not think about this stuff too much, because all our peers weren't either. How many of us freely and gleefully gave our info to Facebook, Google, etc. all through the 2010s? How many continue to?

Crosseye_Jack wrote 12 hours 28 min ago: In the UK we have the ICO ( [1] ), who have the ability to fine companies who fail to live up to their data retention policies and/or fail to take adequate security measures to prevent or contain serious personal data breaches. If the UK Government is determined to force companies to validate users' IDs to use the company's services, then the government had better be determined to enforce our data protection laws too. Governments cannot have it both ways (esp. as the UK government also wants to roll out new digital IDs that will need to be checked when getting a new job); demanding users hand over ID to access services but not kicking butts when those services fuck things up is just idiotic (OK, it's the government, they make being idiots a profession), but that's not the fault of the user. I'm mad at both Discord (for not securing their customers' data in line with their published policies), and at the government (for forcing them into collecting the data in the first place; if Discord didn't have the data to begin with, it could not be exposed).
But I cannot be mad at users of a service who, through no fault of their own, just wished to continue to be in communication with their friends and were faced with the no-win choice of providing ID or being denied access to a communication platform. (Just to be clear, I was not breached in this leak, so I'm not being salty about the leak, but I see the point of view of the avg user because I see how the avg person uses the net every day.)
URI [1]: https://ico.org.uk/

ryandrake wrote 11 hours 46 min ago: I'd have much more sympathy if this was the first instance ever of a corporation being negligent with people's data, and nobody was expecting it. We have to expect it, now. Corporations have a horrible track record of irresponsibility, and governments have a horrible track record of not punishing them. Data breaches are absolutely routine. Knowing this, it's very foolish to hand over ID through the Internet to someone. The top poster in this thread[1] has it right. At this point, you have to assume everything you submit or type into a web site is public information--that's how bad companies have gotten. I assume if I run out into the middle of the motorway, I'm likely to get hit by a car. That's why I don't do that. 1:
URI [1]: https://news.ycombinator.com/item?id=45522379

Crosseye_Jack wrote 11 hours 6 min ago: > I assume if I run out into the middle of the motorway, I'm likely to get hit by a car. That's why I don't do that.
The problem with this is that governments are now requiring you to cross the motorway if you wish to continue having the friends you have already made, but promise that the motorways are now safe for you to cross and that they will hold to account anyone who makes crossing motorways unsafe, and the DoT has said "It's fine, we have put in crossings on the motorway to allow you to do so safely!" Your avg joe is going to take those reassurances made by multiple parties and assume the activity that would otherwise be risky is safe under these circumstances. When people go on thrill rides at amusement parks and get injured because the operator or manufacturer fucked up, we don't blame the rider, saying "they should know better, look at all of those ride failures in the news!", as they expected the ride to be built to a high standard, maintained, operated correctly, and to have safety watchdogs keeping an eye on everything.

LadyCailin wrote 8 hours 7 min ago: I find it interesting where society draws the line in victim blaming. Because it is absolutely a spectrum, and there isn't really a pattern. Personally, I don't victim blame in this case, except for the people that explicitly voted for these short-sighted "think of the children" politicians, but of course there's no way to single them out here.

ryandrake wrote 1 hour 13 min ago: There's definitely a spectrum. Plenty of examples of people getting hurt through no fault of their own, and I would never assign blame to them. You're out walking your dog and get mugged--you did nothing risky, so you get no blame. But when you decide to do something risky, like skydiving or running in traffic or sending your government ID over the Internet (!!), and you suffer the known and anticipated downside risk, you need to at least share some of the blame. On the other side of the spectrum, if someone buys a penny stock and it loses all its value, that guy gets most of the blame. Some other reply posted "Victim blaming!" as if that shuts down the discussion. It shouldn't.
Nathanba wrote 14 hours 24 min ago: At this point a whole bunch of crypto exchanges, including Chinese ones, have my driver's license, passport and more. It is what it is; any real KYC process will require video identification anyway.

giancarlostoro wrote 14 hours 30 min ago: It is specifically because you got banned for "being under 13". It comes from someone asking a question like "How many candles in this photo?", then you reply "7", then they edit the message to say "How old are you?" and voila, underage ban. What you are overlooking is that Discord is the new MSN Messenger, YIM, etc. Your friends are not backed up in a meaningful way, nor are the servers you're in; if you lose your account, you lose contact with basically your entire internet life and friends. Discord should not keep those IDs longer than a month at a time; once the user is unbanned it should be deleted a week later, or removed from that panel altogether.

Culonavirus wrote 13 hours 10 min ago: You can come up with all kinds of excuses, but Discord is not, and NEVER WAS, a trustworthy company.
> You've got to be a complete moron uploading your gov ID to Discord
^ Still stands.

giancarlostoro wrote 9 hours 23 min ago: I'm not making excuses for companies retaining PII longer than they should. I'm simply stating why someone might give their ID. Another reason is to verify yourself as a bot developer, though supposedly that is usually done via an entirely different third party.

ternera wrote 13 hours 28 min ago: This hits the nail on the head. The big issue here is that the submitted photos were not deleted, and that is quite concerning to me.

giancarlostoro wrote 9 hours 20 min ago: This should be a warning to anyone providing functionality in any way similar to what Discord is doing. Do not keep PII longer than you legally have to. Don't have to keep it at all? Delete it. Leave a redacted record such as "Image verified by x, removed on x after unban" or something simple if you must. Remove PII from ticketing systems, especially on a platform like Discord where users want to be private by design.

tifik wrote 14 hours 46 min ago: I don't know if I just became cynical and jaded, but is this really surprising to anyone in any way? Any time I give out my personal information to anyone for any reason, I basically treat it as 'any member of the public can now access it'. Even if a service doesn't have it in their TOS that they sell it to 3rd parties, they might do it anyway, or there will, sooner or later, be a breach of their poorly secured system. To make it clear - I don't particularly blame any one corporation; this is a systemic issue of governments not having/not enforcing serious security measures. I just completely dropped the expectation of my information being private, and for the very few bits that I do actually want to stay private, I just don't digitalize or reproduce them at all in any way, or allow anyone else to.

troyvit wrote 29 min ago: > I just completely dropped the expectation of my information being private
There are all the reasons in the world to feel that way. The scary thing (says troyvit as he passes out the tinfoil hats) is that privacy laws are all about an "expectation of privacy." In other words, we all expect privacy when we're in our bathrooms, so government surveillance in the bathroom is hard to justify. Now that there are cameras in supermarket checkouts, and we all expect them, legally that's no longer a privacy concern and we can't claim that our privacy is being unreasonably infringed.
And what you're saying is that now we've reached the stage in history where, through incompetence and greed, we shouldn't expect any privacy anyway, and that opens the door for all kinds of surveillance because our expectations have fallen so low. I'm not a lawyer btw, so take it all with a grain of salt.

NoSalt wrote 1 hour 58 min ago: > "this is a systemic issue of governments not having/not enforcing serious security measures"
Is it this, or is it a "systemic issue of governments not minding their own damn business"???

johndhi wrote 3 hours 56 min ago: You really think governments could write rules that would help this? The only rule I can imagine is big penalties for data being breached, no matter the cause, but do we actually think it's a multi-million dollar problem for 70k photos to be released? Hard problem.

AlienRobot wrote 4 hours 44 min ago: I don't think you have become jaded. It's just the truth of the internet. If you upload anything to the internet, it's public. Even the passwords you type are potentially public.

somenameforme wrote 6 hours 46 min ago: > "or there will, sooner or later, be a breach of their poorly secured system."
It doesn't even need to be poorly secured. The oldest form of hacking is social engineering. If a company is storing valuable enough information, all one needs to do is compel the lowest common denominator with access to it to intentionally or inadvertently provide access. You can try to create all sorts of loopholes and redundancies, but in general the reality is that no system is ever going to be truly secure. Another reality is that many of the people with the greatest level of access will not be technical by nature. For instance, apparently the DNC hacks were carried out by a textbook phishing email - 'You've like totally been hacked, click on this anonymizer link that leads to Goog1e.com so we can confirm your identity.'

paganel wrote 6 hours 52 min ago: If "serious security measures" involves anything like that 2FA authentication that any normal person hates with a passion, then you can forget about it. The real, long-term answer to all this consists in having less of our lives in digital presence; that even means fewer digital government thingies and, yes, fewer payments and other money-related issues being handled online.

stackbutterflow wrote 6 hours 55 min ago: For us it's too late. But we must push for better laws and build better systems for those that come after us.

rwky wrote 7 hours 18 min ago: Same. I automatically assume that all information I send to any organisation will end up on the Internet sooner or later, be it by accident or sold to some shady third party.

raxxorraxor wrote 9 hours 6 min ago: > I don't particularly blame any one corporation, this is a systemic issue of governments not having/not enforcing serious security measures
Wrong, governments caused the issue because they demand customers ID themselves. There exists not a single viable security measure aside from not collecting the data. Government is also not able to propose any security measures. It is unlikely that the data will ever be deleted now, no matter whether Discord pays any ransoms or not.

sc11 wrote 8 hours 14 min ago: In the context of age limits, that is wrong. The German eID has a zero-knowledge method of proving that your age is above a certain number without revealing anything else. That method has been around for like 15 years and these days, thanks to smartphones with NFC readers, is quite user-friendly.
In practice it's basically not used anywhere except for cigarette vending machines, because it's much simpler to hire some dubious third-party "wave your ID in front of your camera" service. Edit: mandatory age verification is still an atrocious idea for a number of other reasons, just to be clear.

raxxorraxor wrote 8 hours 9 min ago: I won't use the eID because I don't believe in its promises. I don't need a third party, which would be completely dependent on government, to put a signature on my net access. I would even prefer the dubious service because of the relationship dynamics I mentioned. The best case is that age limits for the net should be enforced on-device by parents. Problem solved, no unnecessary infrastructure needed.

ImPostingOnHN wrote 5 hours 22 min ago: Theoretically you could have anyone sign and attest to your age at any time. So maybe the government gives you an attestation of 0 at birth, with a timestamp (allowing age to be calculated at any time), as part of the normal new-human bureaucracy. And/or maybe you can separately hire an accredited (co-signed?) lab to perform carbon dating on you later on :)

raxxorraxor wrote 1 hour 51 min ago: I totally would prefer the biopsy to a government ID. So carbon dating, here I come.

etiennebausson wrote 8 hours 24 min ago: The companies in question could have a flag in every user's data to confirm they are over the age limit. At worst, keep the birth date, since various aspects of a service can be available depending on age (and users can change locality / country, and therefore be subject to different laws). If you keep on top of it, you have at most 3 days of users' "ongoing verification" sensitive data available for theft. Keeping more than that will always be an invitation to bad actors.

Braxton1980 wrote 52 min ago: Let's say Discord is sued for letting children access the service without verification or whatever. If they only store a boolean or a birthday, then they can't show how they verified the data.

mrweasel wrote 8 hours 24 min ago: No, governments caused the issue by demanding customers ID themselves while failing to provide the necessary tooling for doing so in a secure manner. There are really only a few countries in the world who can provide the services needed to make this work. Off the top of my head: Estonia, Sweden and Denmark (there are probably others).

raxxorraxor wrote 1 hour 53 min ago: No, the problem is in the requirements already, not only in the implementation. I don't want to ID myself if it isn't necessary. A proven security mechanism is to minimize data collection. It is a security risk, even with ZKP. It wouldn't even be hard to correlate the data, especially since governments also force ISPs to save connection info. There is no need for a foul compromise here.

paganel wrote 6 hours 49 min ago: There's no unbreakable secure tooling, none. It might be unbreakable against script-kiddie levels of hacking, even though I have my doubts even about that, but Snowden and the general atmosphere during the last decade or so have proved that State actors can put their hands on almost any piece of data out there, either through genuine hacking or other means involving their monopoly on violence.

TingPing wrote 2 hours 58 min ago: It's absolutely possible to verify something anonymously. Here was an interesting example recently:
URI [1]: https://help.kagi.com/kagi/privacy/privacy-pass.html

paganel wrote 56 min ago: You missed my part about State actors and their monopoly on violence.
I think it used to be called the âhammer metaphorâ or some such, a not very technical solution, if at all, but more than efficient nonetheless. eleveriven wrote 9 hours 10 min ago: What's wild is that the burden keeps falling on individuals to be ultra-cautious, while the systems handling the data rarely face meaningful consequences southernplaces7 wrote 9 hours 19 min ago: I very much do blame the corporations and governments that push for these kinds of policies in some way or another. We see things like this, which happen about as often as fucking rainfall in a mountain forest, and then also see the ever increasing push towards ID verification by corporations and government organizations that pinkie-promise to secure or not retain any of the personal data you were wrist-burned into handing over to them. What a toxic mix of garbage that becomes. The result is crap like the above, making the internet ever worse and basic personal data security (to not even speak of lofty things like digital privacy and using the internet anonymously) pretty much null and void even if you really do try to take the right steps. Braxton1980 wrote 48 min ago: >I very much do blame the corporations and governments that push for these kinds of policies in some way or another 71% want age verification [1] How that's done is the issue but you can't blame the government and corporations from making it happen. URI [1]: https://www.pewresearch.org/short-reads/2023/10/31/81-of-u... eleveriven wrote 9 hours 0 min ago: It's really just creating massive honeypots of sensitive data that will eventually leak. And when it does, the consequences are always on us SequoiaHope wrote 9 hours 49 min ago: It is a common misconception that facts are reported because they are surprising. Facts are reported because they are important. More and more governments are passing age verification laws which put exactly this data in to the hands of even more shady private companies. This breach serves as evidence that those laws are misguided, and spreading news of this event may help build public support for those efforts. some_random wrote 50 min ago: Reminds me of the Panama Papers, which exposed a huge international money laundering/tax evasion ring that no one seemed to care about because "everyone knows they're doing this stuff" NoNotTheDuo wrote 36 min ago: I think it's a combination of "everyone knows they're doing this stuff" and "the ones who could do something about it (i.e. charge/prosecute, change laws, etc.) are implicated". Much like the problem in the US Congress: they are not subject to insider trading laws, so they can make huge sums of money acting on non-public information. The only people that can change that are ... members of the US Congress. alwayseasy wrote 36 min ago: Well, in a few notorious cases the tax services cared and the voters cared. ratelimitsteve wrote 38 min ago: Hey now, that's not fair. Someone cared enough to murder the journalist that published them with a car bomb. r2_pilot wrote 21 min ago: That allegedly would be Yorgen Fenech, via Alfred and George Degiorgio, Vincent Muscat, and as for the explosives, Robert Agius and Jamie Vella. iinnPP wrote 3 hours 36 min ago: Things that cease to be surprising can also cease being important. Which is made clear reading the remainder of the post. It's my take as well, frankly. monooso wrote 3 hours 52 min ago: I don't think there was any suggestion that the story should not have been reported, or that only "surprising" facts should be considered news. 
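etiennebausson's suggestion further up (keep only an over-the-limit flag, and purge any in-flight verification data within a few days) amounts to a small retention job. A minimal sketch in Python, assuming a hypothetical pending_verifications table with a submitted_at Unix timestamp; the table name, schema and three-day window are illustrative, not anything Discord or a specific vendor actually uses:

    import sqlite3
    import time

    RETENTION_SECONDS = 3 * 24 * 60 * 60  # keep in-flight verification data for at most three days

    def purge_stale_verifications(db_path: str) -> int:
        """Delete verification artifacts older than the retention window.

        Assumes a hypothetical `pending_verifications` table whose
        `submitted_at` column holds Unix timestamps.
        """
        cutoff = time.time() - RETENTION_SECONDS
        with sqlite3.connect(db_path) as conn:
            cur = conn.execute(
                "DELETE FROM pending_verifications WHERE submitted_at < ?",
                (cutoff,),
            )
            return cur.rowcount  # rows purged; run this from a cron job or similar

The point of the sketch is only that "at most three days of sensitive data available for theft" is a one-query policy, not a product feature.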
nomilk wrote 5 hours 22 min ago: Wonder if this will cause a surge in demand for fake IDs that are sufficient for age-verification but harmless if leaked.
Telemakhos wrote 2 hours 51 min ago: It might give momentum to age-verification schemes like Apple Wallet [0]. Apple gets the state ID in Wallet and exposes an age verification API to apps like Discord; Discord queries the API and relies on Apple's age verification without ever getting access to the personally identifying information. [0]
URI [1]: https://medium.com/@drewsmith_6943/apple-wallet-id-is-th...
imglorp wrote 2 hours 16 min ago: Maybe not wallets but regular "sign in with X" SSO. If all the X's can agree that one of the claims in the SSO is "is_adult", then at least you limit the exposure of your government ID to X getting breached, while all the "sign in with X" sites won't have access to the ID itself, just the claim. Of course, pretty much every X gets breached anyway, and the walled-garden shenanigans are not attractive, but it's better than every site getting your ID.
dylan604 wrote 1 hour 13 min ago: This makes me hate the Twitter rebrand even more. I'm reading your use of "X" as a generic name to be filled in as needed vs the poorly rebranded Musk-owned platform. Then again, I could see that platform actually promoting its services to do this very thing.
ratelimitsteve wrote 37 min ago: it's time to bring back metasyntactic variables
URI [1]: https://en.wiktionary.org/wiki/foo
dylan604 wrote 3 min ago: as a fan of Mr. Robot, I like to use evilCorp to be replaced by whichever one is being discussed.
imglorp wrote 40 min ago: Oof, I didn't even think about x/twitter... that was a poor choice of variable name! I shall try to eXcrete smarter in the future.
bell-cot wrote 3 hours 23 min ago: Might that be a business model for an enterprising Secretary of State? They carefully verify your real ID, the fake IDs trivially tie back to that if the cops ask (not so useful for committing crimes), there are upcharges for multiple fake IDs, or tweaked ages / weights / photos. More upcharges for "vanity" names... "Really, your honor, it's hardly different from an author getting a DBA or LLC for his pen name."
AznHisoka wrote 4 hours 28 min ago: Heck, I would like a fake name, social security number, and birthdate as well while I am at it.
sph wrote 7 hours 14 min ago: > Facts are reported because they are important. Without going too much off-topic: in a vacuum, you are right. In reality, facts are reported because they sell. It is a good day when important facts like this one happen to coincide with what people want to know more about (the recent UK attempt at stripping the rights of its citizens). Tomorrow, people will have forgotten all about it, and the government can continue to expand its powers without anyone talking about it.
consp wrote 8 hours 41 min ago: In the example you give there is no need to store the ID or all the information in the document. Extracting only the date of birth, name and document number is sufficient. Yes, I know this is a utopia and it won't happen. Edit: afaik storing the photo is only needed in medical cases to alternatively assess having the correct person. A bit much for something as simple as age verification.
ajsnigrutin wrote 4 hours 1 min ago: But why? I mean... this data might be valuable at some time, if nothing else when the company is sold to some other data-gathering company... and the punishment for such a breach will be less than the data is worth. I mean, if the governments did their jobs and multiplied the punishment for a single breach by 70,000 (in this case) and caused the company to go bankrupt... well, only then would the companies reconsider. But until then, they won't.
jasonjayr wrote 5 hours 7 min ago: Even then, for age verification, just verify the ID, record + sign the verification, and DESTROY THE DATA! Don't retain the original document "just in case", or even the birthday or name.
mcintyre1994 wrote 7 hours 37 min ago: This breach is them being irresponsible with customer support software. In the case of automated age verification, the providers say that nothing identifiable gets stored, and they might be lying, but it's feasible that you could run that service the way they say they do. This breach is about the manual alternative to that, where you can appeal to Discord customer support if the automated thing says you're not the right age. They seem to do that in part by having you send a picture of your ID. I'm sure in their database they're then just storing the date of birth etc., but then they obviously just don't bother deleting the private image from the customer service software.
DrewADesign wrote 6 hours 33 min ago: Sounds like a great use case for an automated ML cleanup/reporting feature. Maybe as a daemon as a bolt-on fix, or integrated as a feature into the support software itself.
kmbfjr wrote 5 hours 41 min ago: Add in blockchain and we'll be all set.
boriskourt wrote 8 hours 54 min ago: This is the essential point, and why it's always a bit frustrating seeing the "is anyone surprised" take come up so often here. It lowers the quality of the possible discussion by trivialising it.
troyvit wrote 27 min ago: To me it's an important point. We're all being worn down so much by these idiotic mistakes and intrusions that it's just another Thursday when it happens, like school shootings. I don't know what the great filter looks like on other planets, but here it's because we're smart enough to make all sorts of incredible toys and stupid enough to not know how to use them properly, and we're just going to drive ourselves into the ground.
monooso wrote 3 hours 47 min ago: It's a valid question, which speaks to the frequency with which these things happen. That isn't trivialising the problem.
viridian wrote 1 hour 56 min ago: The person might not intend to be trivializing the problem, but that is the common outcome. This was very observable in the wake of the Snowden leaks, where "is anyone actually surprised?" was a key prong in the narrative that argued that you shouldn't actually care about what the NSA was getting up to.
philipov wrote 2 hours 18 min ago: No, it's very much used to express the sentiment "I don't care about this, and wish people would stop talking about it."
franga2000 wrote 4 hours 7 min ago: "Is anyone surprised" is an important question to ask, although in this case it would be more valuable to ask on a less techy forum. I'm not surprised and many people here are not surprised, but most people are still surprised when they hear something like this, which is why they gladly give their information to anyone that asks. If the majority of Discord users knew breaches are inevitable and refused to give their information, or at least took some protective measures like partial redaction and use-case watermarking, this breach would be less of an issue and/or such breaches would be less common. We need to make sure nobody is surprised. Everyone should rewrite every "upload" button in their head to say "publish".
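The approach consp and jasonjayr describe above (extract what you need at verification time, keep only the signed-off outcome, and never persist the document) is simple to express in code. A minimal sketch, assuming the date of birth comes from whatever OCR or vendor check ran on the uploaded image; the names here are hypothetical, not Discord's or any real vendor's API:

    from dataclasses import dataclass
    from datetime import date, datetime, timezone

    ADULT_AGE = 18

    @dataclass
    class AgeVerificationRecord:
        user_id: str
        over_18: bool
        verified_at: datetime
        # Deliberately no image, no name, no document number.

    def record_age_verification(user_id: str, date_of_birth: date) -> AgeVerificationRecord:
        """Keep only the outcome of a verification, never the document itself."""
        today = date.today()
        age = today.year - date_of_birth.year - (
            (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
        )
        return AgeVerificationRecord(
            user_id=user_id,
            over_18=age >= ADULT_AGE,
            verified_at=datetime.now(timezone.utc),
        )

    # Example: record_age_verification("1234", date(2001, 5, 17)).over_18 -> True

The uploaded image is handled in memory and discarded; only the record above ever reaches storage, which is the data-minimisation argument running through this thread.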
ashtakeaway wrote 24 min ago: It should say "publish" because that's what happens after the fact, not what it's "doing" for an amount of time until it stops.
pessimizer wrote 2 hours 13 min ago: > "Is anyone surprised" is an important question to ask It definitely is not, unless you are doing some sort of survey.
nirui wrote 10 hours 4 min ago: > I basically treat it as 'any member of public can now access it'. Still remember the conversation over "mega apps"? Based on my experience with Alipay, which was a Chinese finance-focused mega app but is now more like a platform of everything plus money, the idea of treating every bit of information you uploaded online as public info is laughable. Back when Alipay was really just a financial app, it made sense for it to collect private information, facial data, government-issued ID etc. But now, as a mega app, the "smaller apps" running inside it can also request permission to read this private information if they want to, and since most users are idiots who don't know how to read, they will just click whatever you want them to click (it really works like this, magic!). Alipay of course pretends to have protection in place, but we all know why it's there: just to make it legally look like it's the user's fault if something goes wrong -- it's not even very delicate or complex. Kinda like what the idea "(you should) treat it (things uploaded online) as 'any member of the public can now access it'" tries to do: blame the user, punch down, easily done. But fundamentally, the information was provided and used in a different context; the user provided the information without knowing exactly how it would be used in the future. It's a bait-and-switch, just that simple. Of course, Discord isn't Alipay, but that's just because they're not a mega app, yet. A much healthier mentality is to ask those companies NOT to collect this data, or refuse to use their products. For example, I've never uploaded my government ID photos to Discord; if some feature requires it, I just don't use that feature.
andsoitis wrote 10 hours 58 min ago: > this is a systemic issue of governments not having/not enforcing serious security measures. To do so seems impractical. Imagine the government machinery that would be required to audit all companies and organizations and services to which someone can upload PII. Not tractable.
stackbutterflow wrote 7 hours 0 min ago: Audit at random? With severe penalties in case of non-compliance.
austhrow743 wrote 10 hours 43 min ago: The systemic solution wouldn't be to do that. It would be to both remove their own requirements that organisations collect this data, and to penalise organisations for collecting it outside of a handful of already heavily regulated industries like banking.
aydyn wrote 10 hours 53 min ago: The enforcement could be done by incentives, making sure the penalty for such breaches is large.
andsoitis wrote 10 hours 39 min ago: Sure, but they would still happen is my point.
0xbadcafebee wrote 11 hours 39 min ago: It's not surprising because there's never been a significant penalty for it, I guess because everybody just got completely used to massive breaches without much reaction. But then again it's very hard to get legislation passed that's not in the interests of big business.
cookiengineer wrote 11 hours 47 min ago: Honestly I don't understand why so many things are tied to one secret _that you have to share with others_ all the time. Why is there no rotation possible? Why is there no API to issue a new secret and mark the previous one as leaked? Why is there no way to have a temporary validation code for travel, which gets auto-revoked once the citizens are back in their home country? It's like governments don't understand what identity actually means, and always confuse it with publicity of secrets. I mean, more modern digital passports now have a public and private key. But they put the private key on the card, which is essentially an anti-pattern and makes the key infrastructure just as pointless. If you as a government agency have a system in place that does not accommodate the use case that passports are stolen all the time, you must be utterly out of touch with reality.
gloosx wrote 9 hours 44 min ago: Governments don't get a damn thing about the internet. They just want to govern, and justify the spending. Their goal is not to build resilient systems; it is to preserve control. The internet was born decentralised, while governments operate through centralised hierarchies. Every system they design ends up reflecting that mindset: central authority, rigid bureaucracy, zero trust in the user. So instead of adopting key rotation, temporary credentials, or privacy-first mechanisms, they recreate 1950s paperwork in digital form and call it innovation.
SeanAnderson wrote 12 hours 6 min ago: ZK proofs for identity can't go mainstream quickly enough. I agree with what you're saying completely. It's frustrating that we have the technology now to verify aspects of someone's identity without revealing it, but that it's going to take forever to become robust enough for mainstream use.
raxxorraxor wrote 9 hours 5 min ago: You mean not collecting IDs is the real answer. The easy solution is the best solution, and it already is mainstream. This is an example of why that was a bad idea in the first place. No damage control for bad solutions will change that.
anjel wrote 30 min ago: Mandated age checks (systemic deanonymization) are the gateway to social credit scores.
immibis wrote 9 hours 13 min ago: Anonymous proofs of age don't work, because (in theory) I could set up a server, plugged into my ID chip, that lets anyone download age proofs from me, and then anyone can be over 18. They don't just need to know someone is over 18 - they also need to know it's the same person using the website.
beeflet wrote 9 hours 11 min ago: Make it so that the proofs are not reusable.
xyzzy123 wrote 11 hours 46 min ago: It's an interesting litmus test, because regulators would not accept ZK age proofs unless the stated purpose of age verification laws (reduce harm to minors) is the _actual_ purpose of those laws. Not some different unstated goal, such as ending online anonymity.
doikor wrote 10 hours 26 min ago: That is exactly what the EU is doing with its age verification law. Basically the service provider just has to accept the certificate and check that it is valid, and all the cert says is "is over X years old". [1] And the fact that the companies have to implement the system themselves is just crazy. It is very obvious that if the government requires such a check, it has to provide the proof/way of checking, just like in the physical world it provides the ID card/passport/etc. used for checking this.
URI [1]: https://ageverification.dev/
dangus wrote 8 hours 36 min ago: > And the fact that the companies have to implement the system themselves is just crazy. Isn't this how most industry regulations work? It's not like the government provides designs to car companies to reduce emissions or improve crash safety.
doikor wrote 5 hours 49 min ago: Government does issue passports for identifying its citizens when traveling. It is the one who made/enforces the law that requires that, so it is the one who has to provide the means to do that. Or are you suggesting that anyone should be able to make their own passport? Or a bit closer example: if there were no official ID cards/passports/etc. (there currently is no official way of proving your age online) and the government made a law that mandates that one has to be over X to buy alcohol, whose job is it to provide the means to prove that you are over X? For the car, a proper analogy would be the government requiring a driver's license. Who provides the driver's license? Should every manufacturer provide its own?
troupo wrote 10 hours 6 min ago: > just like in the physical world it provides the id card/passport/etc used for checking this. In Sweden it wasn't the government that provided ID cards, but the post office and banks. It became the government's job sometime after Sweden joined the EU, after the introduction of the common EUID standard. And even then, online identification is handled by a private company owned by banks:
URI [1]: https://en.wikipedia.org/wiki/BankID_(Sweden)
ninalanyon wrote 8 hours 32 min ago: We have BankID in Norway, run by DNB (I think). A single service that uses my personnummer (like a social security number but actually unique) as my user name and logs me in to almost all government services, banks, insurance companies, etc.
Tor3 wrote 4 hours 12 min ago: And unfortunately it's also used in some places outside the ones you're mentioning, e.g. private persons renting out their camper (I've seen this). Which opens the doors to fraud, as has happened too many times (the fraudsters make it look like a normal BankID lookup, get you to do it twice, and then they have enough to open your bank account and withdraw money. If they can get you to do it three times they also have enough to remove the limit on withdrawals, and empty your account). The system is highly convenient and pretty safe, but it does still need vigilance from the user. Which is tricky, given all those phishing attempts and click-scams which people fall for again and again and again.
doikor wrote 9 hours 59 min ago: Yeah, we have something similar here in Finland with banks doing most of the (strong) identification. This also makes things difficult for immigrants for the first month or two in the country, as a lot of services (like making a phone or internet contract) require this identification to use, but it is also a bit of a hassle to get a bank account (though getting a new bank account in a different bank once you already have one to do the strong verification takes like 2 minutes). There is a government system but most don't use it, though I expect once the EU digital identity wallet thing rolls around a lot of people will switch (or be required to?) to that. [1] But very importantly, this government system, bank ID, the identification part of the EU ID wallet, or really any identification system should not be used for age verification, as it actually identifies the user rather than just giving a proof that the user is over X years old.
URI [1]: https://commission.europa.eu/strategy-and-policy/pri...
Ekaros wrote 9 hours 48 min ago: These systems likely could be extended to just provide age information, if there truly was a wish for it. The suomi.fi systems can be configured to pass or not pass the address, for example. So I see no need to pass the personal identity number.
doikor wrote 9 hours 45 min ago: Yes, and the "backend" (what provides the certificate to the app) for the age verification app for Finland will most likely be suomi.fi (or some dvv.fi thing directly). But we can't realistically expect every service that needs an age check to work with 27 (EU countries) different systems; instead we need to unify it into a single API contract, which is what this age verification app basically does.
mindslight wrote 11 hours 46 min ago: That does not work without treacherous locked-down hardware. The marketing by Google et al is leaving out that fact to privacy-wash what is ultimately a push for digital authoritarianism. Think about it - the claim is that those systems can prove aspects of someone's identity (eg age), without the site where the proof is used obtaining any knowledge about the individual and without the proof provider knowing where the proof is used. If all of these things are true while users are running software they can control, then it's trivial for an activist to set up a proxy that takes requests for proofs from other users and generates proofs based on the activist's identity - with no downside for the activist, since this can never be traced back to them. The only thing that could be done is for proof providers to limit the rate of proofs per identity, so that multiple activists would be required to, say, provide access to Discord to all the kids who want it.
beeflet wrote 9 hours 15 min ago: > Think about it - the claim is that those systems can prove aspects of someone's identity (eg age), without the site where the proof is used obtaining any knowledge about the individual and without the proof provider knowing where the proof is used. That is not necessarily true. There are ZK setups where you can tell when a witness is reused, such as in linkable ring signatures. Another simple example is blind signatures: you know each unblinded signature corresponds to a unique blind signature without knowing who blinded it.
raxxorraxor wrote 8 hours 57 min ago: The easy solution is the best one. Just don't collect the info. Any problems resulting from that need to be handled differently. Proven to work, and we wouldn't be dependent on untrustworthy identity providers.
beeflet wrote 8 hours 52 min ago: I agree. It is possible, but that does not mean it should be done. The thing is, with such a ZK system you are still collecting and compiling all this data; it's just done by some sort of (government?) notary and there is a layer of anonymity between the notary and the verifier (which they can cooperate to undo). The real political problem is the concentration of personal information in one place. The ZK system just allows that place (the notary) to be separate from the verifier.
mindslight wrote 9 hours 7 min ago: Sure, but making use of that introduces new problems. Fundamentally it limits a person to one account/nym per site. This itself removes privacy. An individual should be able to have multiple Discord nyms, right? Then if someone gets their one-account-per-site taken/used by someone else, now administrative processes are required to undo/override that. Then furthermore it still doesn't prevent someone from selling access to all the sites they don't care about. A higher bar than an activist simply giving it away for free, but still.
beeflet wrote 8 hours 36 min ago: > An individual should be able to have multiple Discord nyms, right? Yeah, I think so. I mean, this is like my 20th Hacker News account. I am using my 5th Discord account right now. But at the same time it would be interesting to see how anonymous yet sybil-proof social media would work out. I get the feeling that it's already pretty easy to buy and sell fake IDs, so I don't think it would pan out in practice. I also had the same idea as you: if such a system were to exist, you could sell proofs for all the services you don't use. Usually, these zero-knowledge proofs are backed by some sort of financial cost, not the bureaucratic cost of acquiring an ID. All of these "linkable" ZK proofs are aimed at money systems or voting systems. In the blind-signature based money systems, a big problem used to be dealing with change; you had to go back and spend your unblinded signature at the signatory to get a new one. In a similar fashion, maybe you could make it so that users could produce a new ZK proof by invalidating an old one? So you could retire an old nym if you get banned, and create a new nym, but you could only have one at a time? IDK if that is a reasonable tradeoff.
mindslight wrote 33 min ago: > interesting to see how anonymous yet sybil-proof social media would work out. I agree it could be interesting, but on the other hand we see plenty of people posting tripe under their public meatspace nym. The real problem with social media is the centralized sites optimizing for engagement, which includes boosting sockpuppets into view of the average user. So focusing on controlling users continues to ignore the puppetmaster elephants in the room. I think talking about crypto details is a red herring on this topic though. User-controlled computing devices mean that any two people can run software that behaves as a single client, using the credentials of the first person to give access to the second person. The only way to stop this is to make the first person have skin in the game, which is directly contrary to all of the privacy goals. Chewing on this problem a bit more, it's starting to feel like this "use cryptography to prove aspects of your identity without revealing your identity" is actually a bit of a nerd-snipe. It seems like a worthwhile problem because it copies what we do in meatspace for liquor/stripclubs/gambling/etc. But even the meatspace protocols are falling apart, with a lot of places using ID scanners that query (ie log) a centralized database, rather than a mere employee who doesn't really care to remember you (and especially catalog your purchases). The straightforward answer to both is actually strong privacy laws that mandate companies cannot unnecessarily request or store data in the first place. Then some very simple digital protocols suffice to avoid this issue of identity being implied by knowing one mostly-public number. (FWIW the problem of making change always seemed very simple to me - binary denominations of coins/tokens)
Terr_ wrote 11 hours 23 min ago: If I had my 'druthers, there would be a kind of physical vending machine installed at local city hall or whatever, which leverages physical controls and (dis-)economies of scale. The trusted machine would test your ID (or sometimes accept cash) and dispense single-use tokens to help prove stuff. For example, to prove (A) you are a Real Human, or (B) Real and Over Age X, or (C) you Donated $Y To Some Charity To Show Skin In The Game. That ATM-esque platform would be open-source and audited to try to limit what data the government could collect, using the same TPM that would make it secure in other ways. For example, perhaps it only exposes the sum total of times each ID was used at the machine, but for the previous month only. The black market in resold tokens would be impaired (not wholly prevented, that's impossible) by factors like: 1. The difficulty of scaling the physical portion of the work of acquiring the tokens. 2. Suspicion, if someone is using the machine dozens of times per month - who needs that many social-media signups or whatever? 3. There's no way to test if a token has already been used, except to spend it. By making reseller fraud easy, it makes the black market harder, unless a seller also creates a durable (investigate-able) reputation. I suppose people could watch the vending machine being used, but that adds another hard-to-scale physical requirement.
michaelt wrote 8 hours 36 min ago: > 2. Suspicion, if someone is using the machine dozens of times per month - who needs that many social-media signups or whatever? Anyone who visits pornhub and doesn't want to open an account?
mindslight wrote 10 hours 26 min ago: Yeah, introducing real-world friction is seemingly one of the only ways of actually solving the problems of frictionless digital systems (apart from computational disenfranchisement, of course). It might be a better idea to frame your idea in terms of online interactive proofs rather than offline bearer tokens. It's of course a lot less private/convenient to have to bring a phone or other cell-modem-enabled device to the vending machine, especially for the average person who won't exercise good digital hygiene. Still, some sort of high-latency challenge-proof protocol is likely the way to go, because bearer tokens still seem too frictionless. For example, (3) could be mitigated with an intermediary marketplace that facilitated transactions with escrow. If tokens were worth say $2, then even just getting 10 at a time to sell could be worth it for the right kind of person. And personally I'd just get 10 tokens myself simply to avoid having to go back to the machine as much. In fact the optimal strategy for regular power users might be to get as many tokens as you think you might need to use (even if you have to pay for them), and then when they near expiration you sell them to recoup your time/cost/whatever.
Terr_ wrote 8 hours 57 min ago: My concern with some "bring your phone and use it immediately" scheme is that someone could pierce the privacy by looking at a correlation between the time an account was made or a pattern of network traffic occurred, versus the time someone was using/near the vending machine. Adding large and unpredictable amounts of latency makes that kind of correlation weaker and hopefully impractical.
mindslight wrote 49 min ago: That's what I meant by "high latency". The workflow would be something like: go to sign up to a site, the site issues a challenge which is stored in your browser, then sometime in the next week/month/year you stop by the vending machine, which generates a proof for the challenge, then you can finish the signup flow for the site in the next week/month/year. Of course, this would require people to exercise some restraint with regards to their timing. But the real problem is that nobody actually wants these types of systems, so there is no demand. The motivation only comes as directives from governments, so it's not about the technically best system but rather whatever corporate lobbyists can manage to get mandated.
L-four wrote 12 hours 8 min ago: Developer time is more valuable than user data. The market is being efficient.
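Several of the ideas in the sub-thread above (doikor's certificate that only says "over X", beeflet's non-reusable proofs, Terr_'s single-use tokens) share the same basic shape: an issuer signs a claim that carries no identity, and the verifier checks the signature, an expiry, and whether the token has been seen before. A minimal sketch in Python using the third-party cryptography package; it is deliberately not zero-knowledge or blind-signature based (the issuer here could still log who requested tokens), and every name in it is illustrative rather than any real scheme's API:

    import base64
    import json
    import secrets
    import time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # held by the attestation issuer
    issuer_public = issuer_key.public_key()     # distributed to verifiers
    spent_nonces = set()                        # verifier-side replay protection

    def issue_age_token(over_18: bool, ttl_seconds: int = 600) -> bytes:
        """Sign a claim carrying only the age bit, an expiry and a one-time nonce."""
        claim = {
            "over_18": over_18,
            "expires": int(time.time()) + ttl_seconds,
            "nonce": secrets.token_hex(16),
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        return base64.b64encode(payload) + b"." + base64.b64encode(issuer_key.sign(payload))

    def verify_age_token(token: bytes) -> bool:
        """Accept a token once: valid signature, not expired, nonce never seen before."""
        try:
            payload_b64, sig_b64 = token.split(b".")
            payload = base64.b64decode(payload_b64)
            issuer_public.verify(base64.b64decode(sig_b64), payload)
        except (ValueError, InvalidSignature):
            return False
        claim = json.loads(payload)
        if claim["expires"] < time.time() or claim["nonce"] in spent_nonces:
            return False
        spent_nonces.add(claim["nonce"])  # single use, so a leaked or resold token dies quickly
        return bool(claim["over_18"])

    # token = issue_age_token(True)
    # verify_age_token(token)  # True on first use, False on replay

A deployable version would need blind signatures or an anonymous-credential scheme so the issuer cannot link issuance to use, which is exactly the gap the thread is arguing about.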
baobabKoodaa wrote 7 hours 24 min ago: Externalized costs aren't weighed in that calculation.
kalaksi wrote 10 hours 5 min ago: I think you're assuming an ideal world where there's no information asymmetry, all the market participants receive and understand all the information and the risks, and clients could realistically move to an alternative platform that provably handles things better.
hulitu wrote 10 hours 57 min ago: No. Just greedy.
yibg wrote 12 hours 28 min ago: I blame companies (including Discord) for collecting as much information as they can instead of as little as possible. More data collected -> more data that will eventually get sold / leaked / hacked.
petre wrote 11 hours 52 min ago: Don't governments require them to check people's IDs to make sure they aren't kids?
eviks wrote 11 hours 37 min ago: Do they also require permanently storing the document instead of just the check result?
hulitu wrote 10 hours 34 min ago: Officially, no; unofficially, yes.
throwaway473825 wrote 11 hours 40 min ago: It depends on the implementation. The EU's European Digital Identity Wallet will allow users to prove that they are over 18 without sharing any other personal information.
immibis wrote 9 hours 11 min ago: Anonymous means you can pay someone $2 to use theirs.
whatevertrevor wrote 8 hours 16 min ago: Surely that's solved easily by ensuring a 1:1 association between the proof of age and the account?
bell-cot wrote 54 min ago: Grandpa isn't interested in Discord, so you can open a second account using his Proof of Age. Maybe a third account, using Uncle Ned's. And a fourth account, using...
BolexNOLA wrote 12 hours 52 min ago: I told the 2 servers I hang in about a month ago that if I randomly disappear, it's because I can't log in without an ID and I'm simply not doing it/that they should consider the post my preemptive "goodbye." I included where to contact me for those who want to. Frankly I think anyone on Discord should do the same.
bsimpson wrote 13 hours 22 min ago: For years, I resisted TSA PreCheck on principle, even though I was a frequent traveler. I finally relented when I realized there were places like Thailand that force you to give your biometrics, and almost certainly sell them back to shadowy US agencies.
jonasdegendt wrote 9 hours 38 min ago: > places like Thailand that force you to give your biometrics You're being returned the favor! Anyone that's ever entered the US has had to do the same, and our prints are being stored in a DHS database. Out of curiosity, did you not need to provide prints to get a passport in the first place? I can't imagine a single developed country without biometric passports.
octo888 wrote 5 hours 46 min ago: Fingerprints are not required in the UK to apply for a passport (for UK citizens who didn't apply for naturalisation etc). Biometric doesn't just mean fingerprints.
weird-eye-issue wrote 12 hours 48 min ago: They might not be competent enough
URI [1]: https://www.scmp.com/week-asia/politics/article/3300568/th...
safety1st wrote 10 hours 33 min ago: Thailand has a big problem with identity theft, and another big problem with Chinese criminal syndicates committing various kinds of scams and fraud. So while they might share that biometric data with US government agencies, it seems more likely to me that at least one identity theft racket has acquired some of it.
codedokode wrote 13 hours 44 min ago: Also this is an issue with people willing to send important documents to some company with which they do not even have a written agreement.
01HNNWZ0MV43FF wrote 13 hours 1 min ago: I'm not willing, I just don't have a choice. The US should regulate it from the top down like Europe does.
SamDc73 wrote 12 hours 42 min ago: Not sure what you mean by "like Europe", because in Europe they are trying to implement the European Digital Identity (EUDI) for age verification, which will make stuff like this even worse...
doikor wrote 5 hours 33 min ago: You are not supposed to use EUDI for age verification. Instead you use the age verification system. EUDI is made for working with government agencies, banks, etc. where you need proper identification of the person, and the age verification system is for verifying one's age (it doesn't even say how old you are, just that you are over X years old). [1] The end goal is to unify them into the same app at some point, but the certificates/validation flows are different. Also, as the use cases are very different, a whitelist is used for who is allowed to request the proper identification. With age verification it is just a certificate that anyone can validate against the public key, so no whitelisting is possible (or wanted, really).
URI [1]: https://ageverification.dev/
throwaway473825 wrote 11 hours 47 min ago: On the contrary, third parties will only get to know the age of the users, not their identities.
raxxorraxor wrote 8 hours 54 min ago: That is not true; EUDI is a security problem instead of a solution. It is trivial to correlate the info, and there is a critical path where a breach would expose even more. Best security: don't collect. Nothing comes close, not even the best ZK setup. Also, as a European citizen I really don't want it. Ironically, governments aren't mature enough for that.
hulitu wrote 10 hours 35 min ago: You must be new here. /s
SamDc73 wrote 11 hours 1 min ago: "Linkability is especially problematic because untrusted entities, such as attribute providers and relying parties acting together, can correlate and link auxiliary information to the same user, thereby breaching privacy and enabling tracking, profiling, or de-anonymisation." [1] That's assuming EUDI never gets breached; but if Google and every major tech company has been, it's only a matter of time, and this will have way more personal info... I've been using Discord for 5 years and never uploaded my ID, and I don't want Discord (or any other company) to know my age, or any other identification...
URI [1]: https://www.wi.uni-muenster.de/news/5104-new-publica...
gambiting wrote 9 hours 29 min ago: For sure, but with the EU system you'd just give Discord an expiring certificate that proves you're over 18. They can leak that all they want; it's worthless otherwise. Right now you have to upload your actual ID, which is obviously extremely dangerous if leaked. So yes, even though there are obvious problems that you mentioned, the EU implementation is better.
raxxorraxor wrote 4 hours 19 min ago: EUDI requires Google or Apple, I hope it is DOA. It is bloated even before anyone has adopted it.
SamDc73 wrote 8 hours 34 min ago: I mean leaked from the EUDI side. > the EU implementation is better. It's better than the current implementation, sure, but you can never beat zero identifiers.
gambiting wrote 8 hours 23 min ago: Again, for sure, and I agree with you - but we're talking about institutions that already have our IDs in some form or another, so just asking them to issue a certificate that says "yeah, this user is actually over 18" seems like no-brainer functionality on top of an existing system. Like obviously our government office has a copy of my passport and ID card, but if those leak then we have a much bigger problem as a country.
SamDc73 wrote 44 min ago: > we're talking about institutions that already have our IDs in some form or another The issue isn't who already has our IDs, it's that EUDI introduces new auxiliary information (public keys, signatures, revocation identifiers) that creates globally unique, linkable identifiers. Even if the same institutions issue the wallet, each transaction generates additional personal data that can be misused for tracking and profiling, far beyond the data already stored in government registries.
fourside wrote 13 hours 26 min ago: A big problem is that the Silicon Valley playbook drives companies like Discord to be winner-take-all. It's hard to avoid using them, but then they require that you give up sensitive documents. I shouldn't have to choose between keeping sensitive documents private and being able to participate in most gaming communities. Some open source projects have also started adopting Discord to manage their communities.
robinsonb5 wrote 8 hours 54 min ago: > Some open source projects have also started adopting Discord to manage their communities. And I've chosen not to engage with more than one such community because I'm not prepared even to give Discord my phone number, let alone any kind of ID document. Luckily there's nothing on Discord I care about that much, so I'm not having to make too difficult a choice. I totally get why most people won't take such a stand.
Gigachad wrote 14 hours 22 min ago: It's surprising that it happened to a big name like Discord in this day and age. Huge data breaches of large tech companies are becoming increasingly rare as security in general is getting better.
Suzuran wrote 3 hours 20 min ago: Penetrations of this sort happen differently. If I want the IDs of a bunch of Discord users, I don't go after Discord directly; I find some bot that the targeted users have on their Discord servers, or a third-party service that Discord uses themselves. Then I find some individual person with access to those things, and I harass and/or threaten that person until they give me what I want to make me go away. If I think they might be crooked, I might just offer them a cut of the take. I'm probably not paying them though, not unless I think I can leverage them against other targets and need to keep them around. Either way, an individual person isn't going to be able to hold off a coordinated attack for very long, and law enforcement generally doesn't give a shit about internet randoms attacking individual people.
hulitu wrote 10 hours 32 min ago: > Huge data breaches of large tech companies are becoming increasingly rare as security in general is getting better. Citation needed. /s cough Microsoft cough
eviks wrote 11 hours 37 min ago: It's getting better, but never reaching good, so still no surprise.
tacticus wrote 13 hours 8 min ago: i mean it's only every other week we see orgs like TCS handing out admin
PaulKeeble wrote 14 hours 52 min ago: The hackers claim they have data on 5.5 million users; Discord is saying 70k. Hmmmm
alashow wrote 4 hours 48 min ago:
URI [1]: https://x.com/IntCyberDigest/status/1975846997568737666?t=nD...
selcuka wrote 14 hours 26 min ago: Probably 5.5 million emails/names, 70k photos.
neilv wrote 14 hours 58 min ago: This is not OK, and the reporting is not OK. Opening with: > Discord has identified approximately 70,000 users that may have had their government ID photos exposed as part of a customer service data breach announced last week, spokesperson Nu Wexler tells The Verge. Then a big PR quote, letting a potential wrongdoer further spin it. Then closing with: > In its announcement last week, Discord said that information like names, usernames, emails, the last four digits of credit cards, and IP addresses also may have been impacted by the breach. This is awful corporate PR language, not journalism, on a big story about probable corporate negligence resulting in harm to tens of thousands of people. Here's the bare minimum kind of lede I expect on this reporting: Discord may have leaked sensitive personal information about 70,000 users -- including (but not necessarily limited to) government IDs, names, usernames, email addresses, last 4 digits of SSN, and IP addresses. I'm ready to block both Discord and The Verge.
Hikikomori wrote 8 hours 33 min ago: This is what most of journalism has been for quite some time. Read some of Noam Chomsky's work.
zahlman wrote 14 hours 1 min ago: > Discord may have leaked sensitive personal information about 70,000 users -- including (but not necessarily limited to) government IDs, names, usernames, email addresses, last 4 digits of SSN, and IP addresses. Credit card numbers are not SSNs, and I can't fathom why Discord would have the latter (I certainly never gave them any government ID either). Not to mention, the "last 4 digits" of a credit card number will commonly appear on, for example, store receipts that people commonly just leave behind. Usernames can hardly be called sensitive information, either. The point is all the other stuff being tied to the username.
heavyset_go wrote 13 hours 24 min ago: The fact that the data is digitized, indexed and can be easily correlated with other data points is what turns your seemingly innocuous 4 numbers into a way to better impersonate, phish, or otherwise harm you.
nemomarx wrote 13 hours 37 min ago: Age verification is "scan your government ID or give us a detailed video of your face from various angles, open and close your mouth" etc. Not sure which is better to give out in a breach.
encrypted_bird wrote 13 hours 19 min ago: I think it's less a case of which is better and more of which is less bad...
heavyset_go wrote 13 hours 26 min ago: It's enough data that, combined with photos and videos from social media, could allow for more convincing deepfakes of your average person. It's also enough data to improve surveillance and facial recognition systems, allowing them to identify you more easily.
Spooky23 wrote 13 hours 54 min ago: It's an escalation path. When you store an image of an ID unnecessarily, then associate it with those last four digits, you've created a way to link other data sources to individuals. In most scenarios I've worked with, you toss the ID image once you validate it.
jay_kyburz wrote 13 hours 57 min ago: I think Discord is one of the services that requires age verification in some countries.
lschueller wrote 16 hours 37 min ago: Asking this out of curiosity: is it a requirement that such data is being stored once the verification process is completed?
stravant wrote 12 hours 31 min ago: Why are people assuming they did store it after the process was completed? With the relatively low number leaked here, it could have been information collected actively during an ongoing breach, not a dump of some permanent database.
imtringued wrote 9 hours 44 min ago: There are only a handful of countries where you are legally mandated to dox yourself, and it's a recent change. You'd expect the numbers to be "low" either way.
Spooky23 wrote 13 hours 49 min ago: I'm in a different industry, but when I've had to collect identification for reasons, we extracted metadata at the time of presentation, validated it, and discarded the image. We would never get clearance from counsel to store that in most scenarios, and I can't think of a reason to justify it for an age or name verification.
3eb7988a1663 wrote 14 hours 56 min ago: That is the bonkers thing about this story. Why take on the liability? Get what you need and toss the responsibility. If you must store it (which seems unlikely), put that extra-bad-if-leaked information behind a separate append-only service for which read access is heavily restricted.
nothercastle wrote 12 hours 37 min ago: The data is valuable to sell or train AI on. You can use that data to train AI HR people or whatever.
heavyset_go wrote 13 hours 19 min ago: Because it's free training data and great for building profiles on users so you can make money showing them targeted ads.
tavavex wrote 9 hours 55 min ago: Discord isn't really monetized through 'traditional' targeted advertising, though.
fuomag9 wrote 9 hours 37 min ago: Discord no, but my credit card from Advanzia Bank actually changed their TOS to allow AI training with your submitted documents for their anti-fraud model. I complained to the CNPD of Luxembourg and sent a GDPR request, as they defaulted to doing this WITHOUT asking for consent (super illegal, as doing AI training with your data is definitely not the minimum required to offer the service).
jpalawaga wrote 13 hours 54 min ago: Because there is no liability. If they were fined $10k per leaked ID, then there is a serious liability there. Right now, they publish a press release, go 'oopsie poopsie', maybe have to pay for some anti-fraud things from Equifax if someone asks, and call it a day.
ryandrake wrote 13 hours 11 min ago: > Right now, they publish a press release, go 'oopsie poopsie', maybe have to pay for some anti-fraud things from Equifax if someone asks, and call it a day. Don't forget the usual press release starting with "At [Company], we take security very seriously..."
itake wrote 15 hours 5 min ago: Just a guess, but they may store the original ID card to audit duplicate accounts. If their machine learning models think that two people are the exact same, having the original image, especially a photo of the same ID card, could confirm that.
dathinab wrote 14 hours 6 min ago: IMHO this is a pretty dumb approach to the problem. While there probably are some countries with terribly designed passports, most are designed to be machine readable even with very old (like >10-year-old) OCR systems, so even if you want to do something like that you can extract all the relevant information and just store that, and maybe also extract the photo. This seems initially pointless, but isn't: if you store a copy of a photo of an ID, people can use that to impersonate someone; if you only steal the information on it, that's harder. Outside of impersonation issues, another problem is that it's not uncommon that ID cards/passports technically count as property of the state, and you might not be allowed to store full photo copies of them; the person they are for can't give you permission for it either (as they don't own the passport, technically speaking). Most times that doesn't matter, but if a country wants to screw with you, holding images of IDs/passports is a terrible idea. But then you also should ask yourself what degree of "duplicate" protection you actually need, which isn't a perfect one. If someone can circumvent it by spending multiple thousands to end up with a new full name + fudged ID image, that isn't something a company like Discord really needs to care about. Or in other words, storing a subset of the information on a passport, potentially hashed, is sufficient for way over 90% of all companies' secondary-account-prevention needs. In the end, the reason a company might store a whole photo is that it's convenient, you can retrospectively apply whatever better model you want to use, and in many places the penalties for a data breach aren't too big. So you might even start out with an "it's bad but we only do so for a short time while building a better system" situation, and then, due to the not-so-threatening consequences of not fixing it (or lack of awareness), it is constantly de-prioritized and never happens...
Gigachad wrote 14 hours 20 min ago: Just store the name and the fact that it was verified and delete the photo. You get what you need without holding on to a massive liability.
itake wrote 13 hours 38 min ago: How does this help you identify duplicate accounts? If the original photo is deleted, do you just trust the model to be correct 100% of the time when it rejects the newly created account? Or do you keep the original photo and allow a human to make a final decision?
Gigachad wrote 12 hours 29 min ago: There are a million other signals for duplicate accounts anyway. Location, OS, device fingerprints, communities joined, etc. If those match and the real name matches, that's enough data. And if a few people manage to slip through, it's not really an issue. They will either get banned again for the same reasons, or not violate the rules anymore, so who cares.
selcuka wrote 14 hours 27 min ago: There are image processing methods for hashing people's faces. They don't have to store the actual photo to do that.
itake wrote 13 hours 40 min ago: Models have racial biases, and can't handle aged faces or look-alike faces.
heavyset_go wrote 13 hours 17 min ago: You don't have to use ML models for this.
itake wrote 10 hours 20 min ago: Can you elaborate more? Discord has 656m users. If 10% upload their ID, they'd have 65m ID photos to search through. There are 2 use-cases here: 1/ Safety bans (let's pretend 0.01% of ID card users have been banned for safety reasons: 650k accounts). If a user submits their selfie/ID card, Discord needs to compare the new image with one of the 650k banned (but deleted?) images. I can't possibly think how a human could remember the 650k photos well enough to declare a match. Even if such a human existed with this perfect recall, there can't be very many of them on this planet to hire. 2/ Duplicate account bans. If a user registers, how can support staff search the 65m photos without ML assistance to determine if this is a new user or a fraudster?
AlienRobot wrote 5 hours 1 min ago: Do you understand how image hashing works? You don't need machine learning just to check if two images are potentially identical.
itake wrote 2 hours 17 min ago: yes, I've worked on face recognition databases with 150m and 40m faces for banking and safety. The models are not perfect. Humans should still be in the loop to verify, especially when the consequences of being wrong really suck for the user: losing access to their bank account, getting fired from their job. If you're referring to algorithms like pHash (where they are using the same core image, but just adding a filter), they won't work well, because everyone's ID card mostly looks the same. There will be too many FPs.
Y_Y wrote 5 hours 6 min ago: If they can't handle that many users then they should close signups. The product scales, but safely using users' data doesn't? Hardly an excuse.
selcuka wrote 9 hours 8 min ago: 0.01% of 65M is 6,500. Also, apparently only 70K people uploaded their IDs. That being said, you can still hash faces and metadata (such as ID numbers) instead of storing the whole ID as a scanned photo, if the information is only used for duplicate checking. Hashing does not increase the racial bias. If your model has a bias it will always have a margin of error.
itake wrote 2 hours 21 min ago: neat, but how do users appeal a false positive? Do companies just trust the users, or should the company retain the original information so they can manually verify?
fuzzfactor wrote 14 hours 44 min ago: The best years online were when it was universally recognized that government IDs are completely unsuitable for interaction with the internet in any way. Like it was since the beginning, when government IDs first became a thing.
dathinab wrote 15 hours 5 min ago: In the case of the EU it's more the opposite: GDPR requires data minimisation and ~use-case binding, so if you submit data for age verification there is no technical reason to keep it after knowing your age, so you _have to_ delete it.
GJim wrote 1 hour 7 min ago: I've come a long way down for somebody to have finally said this! The GDPR is your friend. It makes retaining unnecessary personal data a liability. As it should be. Discord is idiotic for operating in the UK and Europe without complying. No excuses.
StanislavPetrov wrote 15 hours 42 min ago: Requirement by whom? Discord isn't required to demand your ID, let alone store it.
nomel wrote 14 hours 35 min ago: It's required in the UK to access non-child-friendly content:
URI [1]: https://support.discord.com/hc/en-us/articles/333624012879...
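The "hash the metadata instead of keeping the document" idea that selcuka and dathinab describe above can be as simple as a keyed hash over the document number and date of birth. A minimal sketch, with an illustrative key and normalisation; it does nothing for the face-matching or appeals questions itake raises, and none of the names reflect anything Discord actually does:

    import hashlib
    import hmac
    import unicodedata

    # Server-side secret kept out of the database; a keyed hash stops leaked
    # digests from being brute-forced offline against the small space of IDs.
    DEDUP_KEY = b"rotate-me-and-store-me-in-a-secrets-manager"

    def dedup_fingerprint(document_number: str, date_of_birth: str) -> str:
        """Derive a stable fingerprint from ID metadata so duplicate sign-ups can
        be flagged without retaining the document image or the raw fields."""
        normalized = unicodedata.normalize(
            "NFKC", document_number.strip().upper() + "|" + date_of_birth.strip()
        )
        return hmac.new(DEDUP_KEY, normalized.encode(), hashlib.sha256).hexdigest()

    # Store only the 64-character digest next to the account; two accounts with
    # the same digest are a likely duplicate, e.g.
    # dedup_fingerprint("ABC123456", "1990-01-01")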