_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI Writing "/etc/hosts" breaks the Substack editor
chrisjj wrote 1 day ago:
> Substack's filter is well-intentioned - protecting their platform
from potential attacks.
There is sadly no evidence in this article that the supposed filter
does protect the platform from potential attacks.
bastawhiz wrote 1 day ago:
I once helped maintain some PHP software that was effectively a CMS.
You'd drop a little PHP snippet into any page (e.g., that you make with
Dreamweaver) and it would automatically integrate it with the CMS
functionality.
We had unending trouble with mod_security. The worst issue I can
remember was that any POST request whose body contained the word
"delete" was automatically rejected. That was the full rule. To this
day I still can't imagine what the developers were thinking.
mattrighetti wrote 1 day ago:
This reminds me of the time I was discussing with friends something we
did in our computer science class that day, and I realised that writing
toString in the WhatsApp client for macOS would crash the application.
At the time I didn't have the skills to understand why, so I recorded
the bug on my phone to share with friends :)
Habgdnv wrote 1 day ago:
I have a lifetime Pastebin account that I hadn't used for some years.
Last year I enrolled in a "linux administration" class and tried to use
that pastebin (famous for sharing code) to share some
code/configurations with other students. When I tried to paste my
homework I kept getting a Cloudflare error page. I don't even remember
what I was pasting, but it was normal linux stuff. I contacted pastebin
support - of course I got ghosted.
I am sharing this in relation to the WAF comments and how much the
companies implementing WAF care about your case.
toogan wrote 1 day ago:
The title would be improved with "Writing the string ...". I first read
it as "Writing the file" which was pretty weird.
TRiG_Ireland wrote 5 hours 35 min ago:
It's in quotation marks, which I'd say makes it clear enough for most
people.
gitroom wrote 1 day ago:
The amount of headaches I've had from WAFs blocking legit stuff is
unreal. I just wish the folks turning those rules on had to use them
for a week themselves.
sudb wrote 1 day ago:
I had a problem recently trying to send LLM-generated text between two
web servers under my control, from AWS to Render - I was getting 403s
for command injection from Render's Cloudflare protection which is
opaque and unconfigurable to users.
The hacky workaround which has been stably working for a while now was
to encode the offending request body and decode it on the destination
server.
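That workaround amounts to something like the following - a minimal sketch using base64, with hypothetical helper names (not the actual code from the comment):

```python
import base64
import json

def encode_body(payload: dict) -> dict:
    """Wrap the real payload in base64 so WAF pattern rules never see it."""
    raw = json.dumps(payload).encode("utf-8")
    return {"b64": base64.b64encode(raw).decode("ascii")}

def decode_body(wrapped: dict) -> dict:
    """Inverse operation, run on the destination server."""
    raw = base64.b64decode(wrapped["b64"])
    return json.loads(raw)
```

Strings like "/etc/hosts" survive the round trip intact but never appear literally in the request body, so substring-matching WAF rules have nothing to match.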
mike-cardwell wrote 1 day ago:
Just rot13 any request data using javascript before posting, and rot13
it again on the server side. Problem solved. (jk)
stefs wrote 1 day ago:
this feels like blocking terms like "null" or "select" just because you
failed to properly parameterize your SQL queries.
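For comparison, parameterization renders such words inert as data - a minimal sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (body TEXT)")
# "Dangerous" words are harmless when bound as parameters rather than
# interpolated into the SQL string.
conn.execute("INSERT INTO posts (body) VALUES (?)", ("select * from null; --",))
row = conn.execute("SELECT body FROM posts").fetchone()
print(row[0])  # the text is stored verbatim; no SQL from it is executed
```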
vintermann wrote 1 day ago:
That reminds me of issues I once had with Microsoft's boneheaded WAF.
We had base64 encoded data in a cookie, and whenever certain particular
characters were produced next to each other in the data - I think the
most common was "--" - the WAF would tilt and stop the "attempted SQL
injection attack". So every so often someone would get an illegal login
cookie and just get locked out of the system until they deleted it or
it expired. Took a while to find out what went wrong, and even longer
to figure out how to remove the more boneheaded rules from the WAF.
0xDEAFBEAD wrote 1 day ago:
Weird idea: What if user content was stored and transmitted encrypted
by default? Then an attacker would have to either (a) identify a
plaintext which encrypts to an attack ciphertext (annoying, and also
you could keep your WAF rules operational for the ciphertext, with
minimal inconvenience to users) or (b) attack the system when plaintext
is being handled (could still dramatically reduce attack surface).
lofaszvanitt wrote 1 day ago:
Using a WAF is the strongest indicator that someone doesn't know what's
happening and where or something underneath is smelly and leaking
profusely.
halffullbrain wrote 1 day ago:
At least, in this case, the WAF in question had the decency to return
403.
I've worked with a WAF installation (totally different product), where
the "WAF fail" tell was HTTP status 200 (!) and "location: /" (and some
garbage cookies), possibly to get browsers to redirect using said
cookies. This was part of the CSRF protection.
Other problems were with "command injection" patterns (like in the
article, except with specific Windows commands, too - they clash with
everyday words which the users submit), and obviously SQL injections
which cover some relevant words, too.
The bottom line is that WAFs in their "hardened/insurance friendly"
standard configs are set up to protect the company from amateurs
exposing buggy, unsupported software or architectures. WAFs are useful
for that, but you still have all the other issues with buggy,
unsupported software.
As others have written, WAFs can be useful to protect against emerging
threats, like we saw with the log4j exploit which CloudFlare rolled out
protection for quite fast.
Unless you want compliance more than customers, you MUST at least have
a process to add exceptions to the "all the rules" circus they put in
front of the buggy apps.
Whack-a-mole security filtering is bad, but whack-a-mole relaxation
rule creation against an unknown filter is really tiring.
Too wrote 1 day ago:
Almost equally fun are the ones that simply drop the connection and
leave you waiting for a timeout.
donatj wrote 1 day ago:
We briefly had a WAF forced upon us and it caused so many problems like
this we were able to turn it off, for now. I'm sure it'll be back.
iefbr14 wrote 2 days ago:
So "/etc/h*sts" is not stopped by the filters? Nice to know for the
hackers :)
julik wrote 2 days ago:
Ok so: there is a blogging/content publishing engine, which is somewhat
of a darling of the startup scene. There is a cloud hosting company
with a variety of products, which is an even dearer darling of the
startup scene. Something is posted on the blogging/content publishing
engine that clearly reveals that
* The product provided for blogging/content publishing did a shitty job
of configuring WAF rules for its use cases (the utility of a "magic WAF
that will just solve all your problems" being out of the picture for
now)
* The WAF product provided by the cloud platform clearly has shitty,
overreaching rules doing arbitrary filtering on arbitrary strings. That
filtering absolutely can (and will) break unrelated content if the
application behind the WAF is developed with a modicum of
security-mindedness. You don't `fopen()` a string input (no, I will not
be surprised - yes, sometimes you do `fopen()` a string input - when
you are using software that is badly written).
So I am wondering:
1. Was this sent to Substack as a bug? They charge money for their
platform, and the inability to store $arbitrary_string on a page you
pay for, as a user, is actually a malfunction and dysfunction. It
might not be the case "it got once enshittified by a CIO who mandated a
WAF of some description to tick a box", it might be the case "we
grabbed a WAF from our cloud vendor and haven't reviewed the rules
because we had no time". I don't think it would be very difficult for
me, as an owner/manager at the blogging platform, to realise that
enabling a rule filtering "anything that resembles a Unix system file
path or a SQL query" is absolutely stupid for a blogging platform - and
go and turn it the hell off at the first user complaint.
2. Similarly - does the cloud vendor know that their WAF refuses
requests with such strings in them, and do they have a checkbox for
"Kill requests which have any character an Average Joe does not type
more frequently than once a week"? There should be a setting for that,
and - thinking about the cloud vendor in question - I can't imagine the
skill level there would be so low as to not have a config option to
turn it off.
So - yes, that's a case of "we enabled a WAF for some
compliance/external reasons/big customer who wants a 'my vendor uses a
WAF' on their checklist", but also the case of "we enabled a WAF but
it's either buggy or we haven't bothered to configure it properly".
To me it feels like this would be 2 emails first ("look, your thing
that I pay you money for clearly and blatantly does , either let me
turn it off or turn it off yourself or review it please") - and a blog
post about it second.
ChrisArchitect wrote 2 days ago:
Just tried to post a tweet with this article title and link and got a
similar error (on desktop twitter.com). Lovely.
HenryBemis wrote 2 days ago:
Aaaahh they are trying to prevent a Little Bobby Tables story..
thayne wrote 2 days ago:
As soon as I saw the headline, I knew this was due to a WAF.
I worked on a project where we had to use a WAF for compliance reasons.
It was a game of whack-a-mole to fix all the places where standard rules
broke the application or blocked legitimate requests.
One notable, and related example is any request with the string "../"
was blocked, because it might be a path traversal attack. Of course, it
is more common that someone just put a relative path in their document.
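Server-side, traversal is better handled by resolving paths and checking containment than by pattern-matching "../" in request bodies - a minimal sketch with a hypothetical helper:

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve user_path under base_dir, rejecting traversal attempts."""
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    # A path that escapes base_dir after resolution is a traversal attempt;
    # a harmless "../" inside document *content* never reaches this check.
    if not target.is_relative_to(base):
        raise ValueError(f"path escapes {base}: {user_path!r}")
    return target
```

This only applies where a string is actually used as a filesystem path, which is exactly the distinction a generic WAF rule cannot make.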
t1234s wrote 2 days ago:
writing "bcc: someone@email.com" sometimes triggers WAF rules
swyx wrote 2 days ago:
substack also does wonderful things like preserve weird bullet points,
lack code block displays, and make it impossible to customize the
landing page of your site beyond the 2 formats they give you.
generally think that Substack has done a good thing for its core
audience of longform newsletter writer creators who want to be Ben
Thompson. however its experience for technical people, for podcasters,
for people who want to start multi-channel media brands, and for people
who write for reach over revenue (but with optional revenue) has been
really poor. (All 4 of these are us with Latent.Space.) I've aired all
these complaints with them and they've done nothing, which is their
prerogative.
i'd love for "new Substack" to emerge. or "Substack for developers".
rmccue wrote 2 days ago:
Ben Thompson is working on Passport, which seems to be a self-hosted
(WordPress-based) Substack: [1] He gave a talk on it at WordCamp Asia
at the start of last year, although I haven't heard of any progress
on it recently.
URI [1]: https://stratechery.com/2021/passport/
ZeroTalent wrote 2 days ago:
You're missing the point of Substack:
1. It's a social media platform with a network that is still easy
to extract organic growth from.
2. 99% email deliverability without configuring anything. It's
whitelisted everywhere.
godelski wrote 2 days ago:
I don't get it. Why aren't those files just protected so they have no
read or write permissions? Isn't this like the standard way to do
things? Put the blog in a private user space with minimal permissions.
Why would random text be parsed? I read the article but this doesn't
make sense to me. They suggested directory traversal, but your text
shouldn't have anything to do with that, and traversal is solved by
permission settings.
tryauuum wrote 2 days ago:
this is the usual approach with web application firewalls, block all
the 100500 known attacks. Doesn't matter if they are not applicable
to your website. Some of them are obviously OS-dependent (having .exe
in the URLs), but it doesn't matter; it's blocked just in case.
I do understand this approach. From the defence point of view it makes
sense, if you have to create a solution to protect millions of
websites it doesn't make sense to tailor it to specifics of a single
one
julik wrote 2 days ago:
I haven't been in the business of writing WAFs, but if I got an
assignment of "block every string in the request body that contains
/etc/passwd or /etc/hosts, including filenames in multipart forms"
- I would strongly debate that with the PMs requesting that. And
- probably - debate for some kind of "This website is served by an
application developed by people who know what they are doing"
checkbox, which would instantly neuter rules like that.
godelski wrote 2 days ago:
Can you help me understand the exploit here? I'd really like to
understand.
(What's below is written not as "this is how it should be done" but
instead "what I understand should be done". To provide context to
what I do and do not understand so that my misunderstandings can be
more directly addressed)
I understand being over zealous, an abundance of caution. But what
I'm confused about is why normal text could lead to an exploit in
the first place. I can only understand this being a problem if
arbitrary text is being executed. Which that would appear to be a
HUGE security hole ripe for exploitation. But path access is
handled by privileges. So even if arbitrary text is being executed,
how can that lead to exploitation without already having a major
security hole?
Maybe I'm not understanding substack? I've only been a reader. But
why is the writer not in a container or chroot? If you want to be
overly zealous, why not use two VMs? Put them in a VM to write; once
they've edited then run tests and then use that as the image for
the userspace in the ephemeral vm that is viewed by readers. Would
this still be exploitable? I don't mean image for the whole vm, I
really do mean a subimage so you can lock that up.
Macha wrote 1 day ago:
Misconfiguring your web server such that
example.com/../../etc/passwd returned the actual file was a
common vulnerability in the early 00s. Or cgi scripts that worked
with real file paths but accepted any path
WAFs blocking the string with the filename then is the "to make
sure nobody ever accidentally leaves your gate open, we've
replaced it with a concrete wall" solution to this problem. You
might never have this problem, and might need to actually use the
gate but the vendor/security team has successfully solved their
problem of checking off a blocked attack, and the consequences
are now your problem
godelski wrote 1 day ago:
> directory traversal attack
There's some serious miscommunication going on. I'm quite
willing to believe it is from my end, but I thought I explained
earlier that I'm well aware of directory traversal. The
example did not clarify things for me when the author mentioned
it.
I asked why setting permissions was not a sufficient solution.
Is someone gaining root? How?
I understand there's the user visiting substack and the person
editing on substack. Certainly this is about the person
editing. This is why I asked about the containerization side.
That's an extra layer in permissions. Not only should that
editor not have permissions to edit `/etc/passwd` (or some
other file), but they wouldn't be able to do so even if gaining
root. They'd need to gain root, break out of the container, and
gain root again (hopefully that container is being run as a
user and not root!).
But even there, I'm still confused about the exploits. How is a
traversal even happening? Why is text even being executed? I
understand we're sending POST requests but why is that POST
request able to do anything other than input string literals
into a text body? Isn't this a prepared statement? Why does
that POST request have permission to access those files in the
first place? Like even if a malicious editor circumvented
defenses and was able to write injections, accessing those
files shouldn't be possible from permissions, right?
My understanding is that for this solution to be effective then
several other critical security flaws have to also have
happened. And how far does this thing need to go? Isn't it
defeatable if I chop up the keywords into benign ones, store as
variables, and then expand them? I guess it stops very low
hanging fruit attackers but again, isn't that also only in the
situations where there are also low hanging fruit attacks
available which can cause far more damage? That's where my
confusion is coming from.
Dylan16807 wrote 1 day ago:
There is no exploit... for this specific site.
But the WAF rule is not site-specific.
Almost all of your comment is asking site-specific questions,
but that's barking up the wrong tree. The WAF is working
under a completely different paradigm.
It especially doesn't know about specific user rules within a
specific site! Or file permissions. None of those are in
scope for the WAF. The WAF is trying to protect a million
sites at once.
> Isn't it defeatable if I chop up the keywords into benign
ones, store as variables, and then expand them?
That might work half the time, but not the other half. The
filter isn't pointless, it's just being badly and annoyingly
applied.
mifydev wrote 2 days ago:
It's /con/con all over again
nickagliano wrote 2 days ago:
As a card-carrying Substack hater, I'm not surprised.
> "How could Substack improve this situation for technical writers?"
They don't care about (technical) writers. All they care about is
building a TikTok clone to "drive discoverability" and make the
attention-metrics go up. Chris Best is memeing about it on his own
platform. Very gross.
paulpauper wrote 2 days ago:
Reminds me of Slashdot and breaking the page by widening it with
certain characters
robertlagrant wrote 2 days ago:
> This case highlights an interesting tension in web security: the
balance between protection and usability.
This isn't a tension. This rule should not be applied at the WAF level.
It doesn't know that this field is safe from $whatever injection
attacks. But the substack backend does. Remove the rule from the WAF
(and add it to the backend, where it belongs) and you are just as
secure and much more usable. No tension.
myflash13 wrote 2 days ago:
I would say it's a decent security practice to apply WAF as a
blanket rule to all endpoints and then remove it selectively when
issues like this occur. It's much, much harder to evaluate every
single public facing endpoint especially when hosting third party
software like Wordpress with plugins.
SonOfLilit wrote 2 days ago:
I don't agree. WAFs usually add more attack surface than they
remove. [1] Of course, Wordpress is basically undefendable, so I'd
never ever host it on a machine that has anything else of value
(including e.g. db credentials that give access to much more than
the public content on the WP installation).
URI [1]: https://www.macchaffee.com/blog/2023/wafs/
worewood wrote 2 days ago:
There is a tension, but it's between paying enough to developers to
actually produce decent code or pay a 3rd-party to firewall the
application.
marcosdumay wrote 2 days ago:
Again, there is no tension.
People will manage to circumvent the firewall if they want to
attack your site. But you will still pay, and get both the DoS
vulnerabilities created by the firewall and the new attack vectors
in the firewall itself.
nicoledevillers wrote 2 days ago:
it was a cf managed waf rule for a vulnerability that doesn't apply to
us. we've disabled it.
silverwind wrote 1 day ago:
Why not review rules before applying them?
SonOfLilit wrote 2 days ago:
This comment deserves to be much higher, assuming this user speaks
for Substack (no previous submissions or comments, but the comment
implies it).
nicoledevillers wrote 2 days ago:
i don't speak for substack, but i do work there and changed the WAF
rule :)
driverdan wrote 2 days ago:
This is a common problem with WAFs and, more specifically, Cloudflare's
default rulesets. If your platform has content that is remotely
technical you'll end up triggering some rules. You end up needing a
test suite to confirm your real content doesn't trigger the rules and
if it does you need to disable them.
jkrems wrote 2 days ago:
Could this be trivially solved client-side by the editor if it just
encoded the slashes, assuming it's HTML or markdown that's stored?
Replacing `/etc/hosts` with an entity-encoded form (say, `&#47;etc&#47;hosts`) for storage seems like
an okay workaround. Potentially even doing so for anything that's added
to the WAF rules automatically by syncing the rules to the editor code.
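That idea might look something like this - a sketch assuming the stored format is HTML and using hypothetical helper names:

```python
# Encode slashes as HTML numeric entities before storage so WAF
# path-matching rules never see "/etc/hosts"; a browser rendering the
# stored HTML displays the original text unchanged.
def encode_slashes(text: str) -> str:
    return text.replace("/", "&#47;")

def decode_slashes(text: str) -> str:
    return text.replace("&#47;", "/")
```

The catch, as the comment notes, is keeping the encoded pattern list in sync with whatever the WAF currently blocks.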
dvorack101 wrote 2 days ago:
Indeed a severe case of paranoia?
1. Create a new post.
2. Include an Image, set filter to All File types and select
"/etc/hosts".
3. You get served with a weird error message box displaying a weird
error message.
4. After this the Substack post editor is broken. Heck, every time I
access the Dashboard, it waits forever to build the page.
I found this text while browsing the source for an error (see original
ascii art: [1]):
SUBSTACK WANTS YOU
TO BUILD A BETTER BUSINESS MODEL FOR WRITING
https://substack.com/jobs
URI [1]: https://pastebin.com/iBDsuer7
julik wrote 2 days ago:
"Who signed off on your WAF rules" would be a great reverse interview
question then.
johnklos wrote 2 days ago:
Content filtering should be highly context dependent. If the WAF is
detached from what it's supposed to filter, this happens. If the WAF
doesn't have the ability to discern between command and content
contexts, then the filtering shouldn't be done via WAF.
This is like spam filtering. I'm an anti-spam advocate, so the idea
that most people can't discuss spam because even the discussion will
set off filters is quite old to me.
People who apologize for email content filtering usually say that spam
would be out of control if they didn't have that in place, in spite of
no personal experience on their end testing different kinds of
filtering.
My email servers filter based on the sending server's configuration:
does the EHLO / HELO string resolve in DNS? Does it resolve back to the
connecting IP? Does the reverse DNS name resolve to the same IP? Does
the delivery have proper SPF / DKIM? Et cetera.
My delivery-based filtering works worlds better than content-based
filtering, plus I don't have to constantly update it. Each kind has
advantages, but I'd rather occasional spam with no false positives than
the chance I'm blocking email because someone used the wrong words.
With web sites and WAF, I think the same applies, and I can understand
when people have a small site and don't know or don't have the
resources to fix things at the actual content level, but the people
running a site like Substack really should know better.
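The delivery-based checks described above can be sketched as a pure function over resolver results (a hypothetical helper; a real server would also verify SPF/DKIM, as the comment says):

```python
def helo_checks(helo_name, peer_ip, forward, reverse):
    """Run the EHLO/HELO sanity checks described in the comment.

    forward(name) -> list of IPs for a DNS name (A/AAAA lookup)
    reverse(ip)   -> PTR name for an IP, or None
    Returns a list of failed-check descriptions (empty list = pass).
    """
    failures = []
    helo_ips = forward(helo_name)
    if not helo_ips:
        failures.append("HELO name does not resolve in DNS")
    elif peer_ip not in helo_ips:
        failures.append("HELO name does not resolve back to connecting IP")
    ptr = reverse(peer_ip)
    if ptr is None:
        failures.append("connecting IP has no reverse DNS")
    elif peer_ip not in forward(ptr):
        failures.append("reverse DNS name does not resolve to the same IP")
    return failures
```

Injecting the resolver functions keeps the policy testable without touching live DNS.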
Anamon wrote 2 min ago:
Yes to smart filtering at the right layer. The reverse DNS checks
et al. are so effective. I recently moved my personal mailbox
from a host who didn't do these kinds of checks to one that does. My
received spam volume instantly went from about 20 a day (across all
my aliases) to less than 1 a week.
myflash13 wrote 2 days ago:
SPF and DKIM are now more commonly implemented correctly by spammers
than by major email providers.
URI [1]: https://news.ycombinator.com/item?id=43468995
johnklos wrote 1 day ago:
Yes, but they are still effective at preventing spam from spammers
who are pretending to be others.
Osiris wrote 2 days ago:
I understand applying path filters in URLS and search strings, but I
find it odd that they would apply the same rules to request body
content, especially content encoded as valid JSON, and especially for a
BLOG platform where the content could be anything.
wglb wrote 2 days ago:
The problem with WAF is discussed in [1] .
One of the authors of the paper has said "WAFs are just speed bump to a
determined attacker."
URI [1]: https://users.ece.cmu.edu/~adrian/731-sp04/readings/Ptacek-New...
chrisjj wrote 1 day ago:
> "WAFs are just speed bump to a determined attacker."
We wish. Speed bumps don't totally immobilise a pseudo-random
selection of innocent vehicles.
p_ing wrote 2 days ago:
Locks are a speedbump for a lockpick.
Doors are a speedbump for a car.
Well yeah, sure, doesn't mean I'm going to have an open doorframe or
a door without a lock.
wat10000 wrote 2 days ago:
The difference is that a door tends to be the only thing between
you and an attacker. A speedbump is better than nothing.
This isn't like having a lock on your door, this is like having a
cheap, easily pickable padlock on your bank vault. If the vault has
a proper lock then the padlock serves no purpose, and if it doesn't
then you're screwed regardless.
9x39 wrote 2 days ago:
I think a WAF is closer to a component of an entry control point,
like on a military base. It's a tool for manned security to
interact with and inspect traffic. Unmanned, they're just an
obstacle to route around, but manned, they're an effective way to
provide asymmetries to the defender.
WAFs can have thousands of rules ranging from basic to the
sophisticated, not unlike mechanisms you can deploy at a
checkpoint.
Security devices like IDSes or WAFs allow deploying filtering
logic without touching an app directly, which can be hard/slow
across team boundaries. They can allow retroactive analysis and
flagging to a central log analysis team. Being able to
investigate whether an adversary came through your door after the
fact is powerful, you might even be able to detect a breach if
you can filter through enough alerts.
People are more likely to get dismissed for not installing an IDS
or WAF than having one. Its effectiveness is orthogonal to the
politics of its existence, most of the time.
Macha wrote 2 days ago:
And to extend the metaphor to cover the false positives these
systems produce, sometimes the padlock seizes shut if the air
temperature is in a certain range, and the team that put it there
refuses to take responsibility for the fact they've locked your
customers from accessing their assets with the valid key.
skybrian wrote 2 days ago:
Did anyone try reporting this to Substack?
josephcsible wrote 2 days ago:
WAFs were created by people who read [1] and didn't realize that TDWTF
isn't a collection of best practices.
URI [1]: https://thedailywtf.com/articles/Injection_Rejection
blenderob wrote 2 days ago:
> This case highlights an interesting tension in web security: the
balance between protection and usability.
But it doesn't. This case highlights a bug, a stupid bug. This case
highlights that people who should know better, don't!
The tension between security and usability is real but this is not it.
Tension between security and usability is usually a tradeoff. When you
implement good security that inconveniences the user. From simple
things like 2FA to locking out the user after 3 failed attempts. Rate
limiting to prevent DoS. It's a tradeoff. You increase security to
degrade user experience. Or you decrease security to increase user
experience.
This is neither. This is both bad security and bad user experience.
What's the tension?
crabbone wrote 1 day ago:
Precisely.
This also reminded me, I think in the PHP 3 era, PHP used to
"sanitize" the contents of URL requests to blanket combat SQL
injections, or perhaps, it was a configuration setting that would be
frequently turned on in shared hosting services. This, of course,
would've very soon been discovered by the authors of PHP sites, and
various techniques were employed to circumvent this restriction,
overall giving probably even worse outcomes than if the "sanitation"
wasn't there to begin with.
TRiG_Ireland wrote 5 hours 58 min ago:
The days of addslashes() and stripslashes()!
myflash13 wrote 2 days ago:
I would say it's a useful security practice in general to apply WAF
as a blanket rule to all endpoints and then remove it selectively
when issues like this occur. It's much, much harder to evaluate
every single public facing endpoint especially when hosting third
party software like Wordpress with plugins.
Null-Set wrote 2 days ago:
This looks like it was caused by this update [1] rule 100741.
It references this CVE [2] which allows the reading of system files.
The example given shows them reading /etc/passwd
URI [1]: https://developers.cloudflare.com/waf/change-log/2025-04-22/
URI [2]: https://github.com/tuo4n8/CVE-2023-22047
mrspuratic wrote 2 days ago:
AFAICT it's also (though I'm very rusty) in ModSecurity; if XML
content processing is enabled then rules like these will trip:
SecRule REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|REQUEST_COOKIES_NAMES|ARGS_NAMES|ARGS|XML:/* "@pmFromFile lfi-os-files.data"
SecRule REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|REQUEST_COOKIES_NAMES|ARGS_NAMES|ARGS|XML:/* "@pmFromFile unix-shell.data" ...
where the referenced files contain the usual list of *nix suspects
including the offending filename (lfi-os-files.data, "local file
inclusion" attacks)
The advantage (whack-a-mole notwithstanding) of a WAF is that it is
orders of magnitude easier to tweak WAF rules than to upgrade, say,
Weblogic, or other teetering piles of middleware.
julik wrote 2 days ago:
So that's why immediately when I hear "WAF" I read "...and the site
will break in weird and exciting ways due to arbitrary, badly
developed heuristics outside of your control, every odd day of
every even week" - I remember the glory days of shared hosting and
mod_security.
Turns out the hunches were right all along.
nottorp wrote 2 days ago:
So everyone should start looking for vulnerabilities in the substack
site?
If that's their idea of security...
teddyh wrote 2 days ago:
> For now, I'll continue using workarounds like "/etc/h*sts" (with
quotes) or alternative spellings when discussing system paths in my
Substack posts.
Ahh, the modern trend of "unalived"¹ etc. comes to every corner of
society eventually.
1. < [1] >
URI [1]: https://knowyourmeme.com/memes/unalive
simonw wrote 2 days ago:
"How could Substack improve this situation for technical writers?"
How about this: don't run a dumb as rocks Web Application Firewall on
an endpoint where people are editing articles that could be about any
topic, including discussing the kind of strings that might trigger a
dumb as rocks WAF.
This is like when forums about web development implement XSS filters
that prevent their members from talking about XSS!
Learn to escape content properly instead.
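Escaping at output time is a one-liner in most languages - for instance, with Python's stdlib:

```python
import html

user_input = '<script>alert("/etc/hosts")</script>'
# Escaping at render time neutralizes markup while preserving the text,
# so there is nothing for a WAF to pattern-match away.
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;/etc/hosts&quot;)&lt;/script&gt;
```

The "/etc/hosts" string passes through untouched: it is only ever data, never a path the server opens.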
awoimbee wrote 1 day ago:
I'm in the position where I have to run a WAF to pass security
certifications.
The only open source WAFs are ModSecurity and its beta successor,
Coraza.
These things are dumb, they just use OWASP's coreruleset which is a
big pile of unreadable garbage.
ZeroTalent wrote 2 days ago:
Hire a cybersec person. I don't think they have one.
serial_dev wrote 2 days ago:
Surprisingly simple solution
SonOfLilit wrote 2 days ago:
After having been bitten once (was teaching a competitive programming
team, half the class got a blank page when submitting solutions, after
an hour of debugging I narrowed it down to a few C++ types and keywords
that cause 403 if they appear in the code, all of which happen to have
meaning in Javascript), and again (working for a bank, we had an API
that you're supposed to submit a python file to, and most python files
would result in 403 but short ones wouldn't... a few hours of debugging
and I narrowed it down to a keyword that sometimes appears in the code)
and then again a few months later (same thing, new cloud environment,
few hours burned on debugging[1]), I had the solution to this problem
in mind _immediately_ when I saw the words "network error".
[1] the second time it happened, a colleague added 'if we got 403,
print "HAHAHA YOU'VE BEEN WAFFED"' to our deployment script, and for
that I am forever thankful, because I saw that error more times than I
expected
netsharc wrote 2 days ago:
+++ATH
simonw wrote 2 days ago:
Do you remember if that was Cloudflare or some other likely WAF?
SonOfLilit wrote 2 days ago:
First time something on-prem, maybe F5. Second time AWS.
Oh, I just remembered I had another encounter with the AWS WAF.
I had a Jenkins instance in our cloud account that I was trying to
integrate with VSTS (imagine GitHub except developed by Microsoft,
and still maintained; never mind that they own GitHub and it's
undoubtedly a better product). Whenever I tried to trigger a build,
it worked, but when VSTS did, it failed. Using a REST monitor
service I was able to record the exact requests VSTS was making and
prove that they work with curl from my machine... after a few
nights of experimenting and diffing I noticed a difference between
the request VSTS made to the REST monitor and my reproduction with
curl: VSTS didn't send a "User-Agent" header, while curl supplied one
by default unless I added (I think) -H "User-Agent:", and therefore
my curl requests did not trigger the first default rule in the AWS
WAF: "if your request doesn't list a user agent, you're a hacker".
HAHAHA I'VE BEEN WAFFED AGAIN.
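That default rule can be sketched as a one-line predicate (hypothetical code; the real rule lives in AWS WAF's managed ruleset, and curl really does add a User-Agent header by default unless you clear it with -H "User-Agent:"):

```python
def waf_allows(headers: dict) -> bool:
    # Sketch of the AWS-WAF-style default rule described above:
    # any request with no User-Agent header is treated as malicious.
    return any(k.lower() == "user-agent" for k in headers)

# curl's default header masked the difference during reproduction;
# a header-less client like VSTS was blocked.
print(waf_allows({"User-Agent": "curl/8.5.0", "Accept": "*/*"}))  # True
print(waf_allows({"Accept": "*/*"}))                              # False
```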
jmmv wrote 2 days ago:
I encountered this a while ago and it was incredibly frustrating. The
"Network error" prevented me from updating a post I had written for
months because I couldn't figure out why my edits (which extended the
length and which I assumed was the problem) couldn't get through.
Trying to contact support was difficult too due to AI chatbots, but
when I finally did reach a human, their "tech support" obviously didn't
bother to look at this in any reasonable timeframe.
It wasn't until some random person on Twitter suggested the possibility
of some magic string tripping over some stupid security logic that I
found the problem and could finally edit my post.
eniac111 wrote 2 days ago:
URI [1]: https://en.wikipedia.org/wiki/Bush_hid_the_facts
matt_heimer wrote 2 days ago:
The people configuring WAF rules at CDNs tend to do a poor job
understanding sites and services that discuss technical content. It's
not just Cloudflare, Akamai has the same problem.
If your site discusses databases then turning on the default SQL
injection attack prevention rules will break your site. And there is
another ruleset for file inclusion where things like /etc/hosts and
/etc/passwd get blocked.
I disagree with other posts here, it is partially a balance between
security and usability. You never know what service was implemented
with possible security exploits and being able to throw every WAF rule
on top of your service does keep it more secure. It's just that those
same rulesets are super annoying when you have a securely implemented
service which needs to discuss technical concepts.
Fine tuning the rules is time consuming. You often have to just
completely turn off the ruleset because when you try to keep the
ruleset on and allow the use-case there are a ton of changes you need
to get implemented (if it's even possible). Page won't load because
/etc/hosts was in a query param? Okay, now that you've fixed that, all
the XHR included resources won't load because /etc/hosts is included in
the referrer. Now that that's fixed things still won't work because
some random JS analytics lib put the URL visited in a cookie, etc,
etc... There is a temptation to just turn the rules off.
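The cascade above can be sketched in a few lines (hypothetical rule engine with made-up request fields, not any particular vendor's implementation):

```python
# Hypothetical sketch of a naive LFI ruleset: any occurrence of a
# blocked path anywhere in the request gets the whole request 403'd.
BLOCKED = ("/etc/hosts", "/etc/passwd")

def waf_verdict(request: dict) -> str:
    # Scan every part of the request, as the rulesets described
    # above effectively do: query string, headers, cookies, body.
    for part in request.values():
        if any(pattern in part for pattern in BLOCKED):
            return "403"
    return "200"

# One legitimate mention of /etc/hosts poisons every channel it leaks into:
assert waf_verdict({"query": "?q=/etc/hosts"}) == "403"
assert waf_verdict({"referer": "https://example.com/post?path=/etc/hosts"}) == "403"
assert waf_verdict({"cookie": "last_url=/post?path=/etc/hosts"}) == "403"
assert waf_verdict({"body": "how to edit your hosts file"}) == "200"
```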
chrisjj wrote 1 day ago:
> The people configuring WAF rules at CDNs tend to do a poor job
understanding sites and services that discuss technical content
They shouldn't be doing that job at all. The content of user data is
none of their business.
matt-p wrote 1 day ago:
In my experience, the false-positive pain required to outweigh the
"WAF is best practice" mindset is just very, very high. Most big
businesses would rather lose or frustrate a small percentage of
customers than not be "safe".
lsofzz wrote 1 day ago:
> The people configuring WAF rules at CDNs tend to do a poor job
understanding sites and services that discuss technical content. It's
not just Cloudflare, Akamai has the same problem.
I agree. There is a business opportunity here. Right in the middle of
your sentences.
Hint: Context-Aware WAF.
Many platforms have emerged in the last decade - some called it smart
WAF, some called it nextgen WAF... all vaporware garbage that
consumes tons and tons of system resources and still manages to do a
shit job of _actually_ WAF'ing web requests.
To be truly context-aware, you need a priori knowledge of the
situation: the user, the page, the interactions, etc.
oakwhiz wrote 1 day ago:
I've had the issue where filling out form fields for some company
website triggers a WAF and then nobody in the company is able to
connect me to the responsible party who can fix the WAF rules. So I'm
just stuck.
julik wrote 2 days ago:
This is what surprises me in this story. I could not, at first
glance, assume that either Substack people or Cloudflare people were
incompetent.
Oh: I resisted tooth and nail about turning on a WAF at one of my
gigs (there was no strict requirement for it, just cargo cult). Turns
out - I was right.
kiitos wrote 2 days ago:
> I disagree with other posts here, it is partially a balance between
security and usability. You never know what service was implemented
with possible security exploits and being able to throw every WAF
rule on top of your service does keep it more secure. It's just that
those same rulesets are super annoying when you have a securely
implemented service which needs to discuss technical concepts.
I might be out of the loop here, but it seems to me that any WAF
that's triggered when the string "/etc/hosts" is literally anywhere
in the content of a requested resource, is pretty obviously broken.
schnable wrote 2 days ago:
I don't think so. This rule, for example, probably blocks attacks on
a dozen old WordPress vulnerabilities.
kiitos wrote 2 days ago:
And a rule that denies everything blocks all vulnerabilities
entirely.
A false positive from a conservative evaluation of a query
parameter or header value is one thing, conceivably
understandable. A false positive due to the content of a blog
post is something else altogether.
afiori wrote 2 days ago:
This is a strawman, especially if like the parent claims this
was improving security for one of the most popular website
backends ever.
Rules like this might very well have had an incredibly positive
impact on tens of thousands of websites, at the cost of some
weird debugging sessions for dozens of programmers (made-up
numbers, obviously).
kiitos wrote 23 hours 12 min ago:
Look, any WAF that blocks a document like
/etc/hosts is a file on Unix hosts
is pretty clearly broken. And you can't meaningfully measure
product metrics like impact for fundamentally broken
products.
afiori wrote 7 hours 54 min ago:
> is pretty clearly broken
agree
> And you can't meaningfully measure product metrics like
impact for fundamentally broken products
disagree
RKFADU_UOFCCLEL wrote 2 days ago:
There's no "trade-off" here. Blocking IPs that send requests with
"1337 h4x0r buzzword /etc/passwd" in them is completely naive and
obtrusive, which is the modus operandi of the CDN being discussed
here. There are plenty of other ways of hosting a website.
stingraycharles wrote 2 days ago:
Yup. We're a database company that needs to be compliant with SOC2,
and I've had extremely long and tiring arguments with our auditor
about why we couldn't adhere to some of these standard WAF rulesets,
because they broke our site (we allow people to spin up a demo env
and trigger queries).
We changed auditors after that.
spydum wrote 2 days ago:
sounds like your security policy is wrong (or doesn't have a
provision for exceptions managed by someone with authority to grant
them), or your auditor was swerving out of his lane.
As far as I've seen: SOC2 doesn't describe any hard security
controls - it just asks to evaluate your policy versus your
implemented controls.
stingraycharles wrote 1 day ago:
You are absolutely correct, which is why we switched auditors. We
use a third party to verify compliance of all our cloud resources
(SecureFrame), and one of their checks is that specific AWS WAF
rulesets are enabled on e.g. CloudFront endpoints. These are
managed rulesets by AWS.
We disabled this check, the auditor swerved out of his lane, I spent
several more hours explaining things he didn't understand, and
things resolved after our CEO had a call with him (you can
imagine how the discussion went).
All in all, if the auditor had been more reasonable it
wouldn't have been an issue, but I've always been wary of
managed firewall rulesets for this reason.
krferriter wrote 2 days ago:
I don't get why you'd have SQL injection filtering of input fields at
the CDN level. Or any validation of input fields aside from length or
maybe some simple type validation (number, date, etc). Your backend
should be able to handle arbitrary byte content in input fields. Your
backend shouldn't be vulnerable to SQL injection if not for a CDN
layer that's doing pre-filtering.
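For illustration, a sketch of why parameterized queries make such pre-filtering unnecessary, using Python's stdlib sqlite3 (the same placeholder mechanism exists in any mainstream database driver):

```python
import sqlite3

# Sketch: with parameterized queries, the backend needs no WAF-style
# pre-filtering; "dangerous" strings are just data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (body TEXT)")

scary = "edit /etc/hosts'; DROP TABLE posts; --"
# The ? placeholder keeps the value out of the SQL text entirely.
conn.execute("INSERT INTO posts (body) VALUES (?)", (scary,))

row = conn.execute("SELECT body FROM posts").fetchone()
assert row[0] == scary  # stored verbatim; no injection, table intact
```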
benregenspan wrote 1 day ago:
It should be thought of as defense-in-depth only. The backend had
better be immune to SQL injection, but what if someone (whether
in-house or vendor) messes that up?
I do wish it were possible to write the rules in a more
context-sensitive way, maybe possible with some standards around
payloads (if the WAF knows that an endpoint is accepting a specific
structured format, and how escapes work in that format, it could
relax accordingly). But that's probably a pipe dream. Since the
backend could be doing anything, paranoid rulesets have to treat
even escaped data as a potential issue and it's up to users to poke
holes.
nijave wrote 2 days ago:
The farther a request makes it into infrastructure, the more
resources it uses.
immibis wrote 2 days ago:
Because someone said "we need security" and someone else said "what
is security" and someone else said "SQL injection is security" and
someone looked up SQL injections and saw the word "select" and
"insert".
WAFs are always a bad idea (possible exception: in allow-but-audit
mode). If you knew the vulnerabilities you'd protect against them
in your application. If you don't know the vulnerabilities all you
get is a fuzzy feeling that Someone Else is Taking Care of it,
meanwhile the vulnerabilities are still there.
Maybe that's what companies pay for? The feeling?
patrakov wrote 22 hours 49 min ago:
> If you knew the vulnerabilities you'd protect against them in
your application.
Correction: it is not your application but someone else's
Certified Stuff (TM) that you can't change, but which is still
vulnerable.
gopher_space wrote 2 days ago:
If your clients will let you pass the buck on security like this
it would be very tempting to work towards the least onerous
insurance metric and no further.
pxc wrote 2 days ago:
WAFs can be a useful site of intervention during incidents or
when high-severity vulns are first made public. It's not a
replacement for fixing the vuln, that still has to happen, but it
gives you a place to mitigate it that may be faster or simpler
than deploying code changes.
ordersofmag wrote 2 days ago:
A simple reason would be if you're just using it as a proxy signal
for bad bots and you want to reduce the load on your real servers
and let them get rejected at the CDN level. Obvious SQL injection
attempt = must be malicious bot = I don't want my servers wasting
their time
chrisjj wrote 1 day ago:
> A simple reason would be if you're just using it as a proxy
signal for bad bots
Who would be that stupid?
coldpie wrote 2 days ago:
> There is a temptation to just turn the rules off
Definitely, though I have seen other solutions, like inserting
non-printable characters in the problematic strings (e.g.
"/etc/hosts" or whatever, you get the idea). And honestly that seems
like a reasonable, if somewhat annoying, workaround to me that still
retains the protections.
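A sketch of that workaround, using a zero-width space (invisible rather than strictly non-printable) as the separator; the helper name is hypothetical, not anything a real platform offers:

```python
# Break the substring the WAF matches on by inserting a zero-width
# space (U+200B) inside it. Readers still see "/etc/hosts"; a naive
# substring filter no longer matches.
ZWSP = "\u200b"

def defang(text: str, needle: str = "/etc/hosts") -> str:
    broken = needle[:4] + ZWSP + needle[4:]
    return text.replace(needle, broken)

safe = defang("To block a domain, add it to /etc/hosts.")
assert "/etc/hosts" not in safe  # the filter's pattern is gone
assert safe.replace(ZWSP, "") == "To block a domain, add it to /etc/hosts."
```

The obvious downside: copy-pasting the "fixed" string carries the invisible character along, which bites anyone who pastes it into a terminal.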
Bluecobra wrote 2 days ago:
Another silly workaround would be to take a screenshot of
"/etc/hosts" and use images instead. Would break text
browsers/reading mode though.
rhdunn wrote 2 days ago:
And accessibility.
mjr00 wrote 2 days ago:
> I disagree with other posts here, it is partially a balance between
security and usability.
And economics. Many people here are blaming incompetent security
teams and app developers, but a lot of seemingly dumb security
policies are due to insurers. If an insurer says "we're going to jack
up premiums by 20% unless you force employees to change their
password once every 90 days", you can argue till you're blue in the
face that it's bad practice, NIST changed its policy to recommend not
regularly rotating passwords over a decade ago, etc., and be totally
correct... but they're still going to jack up premiums if you don't
do it. So you dejectedly sigh, implement a password expiration
policy, and listen to grumbling employees who call you incompetent.
It's been a while since I've been through a process like this, but
given how infamous log4shell became, it wouldn't surprise me if
insurers are now also making it mandatory that common "hacking
strings" like /etc/hosts, /etc/passwd, jndi:, and friends must be
rejected by servers.
wvh wrote 1 day ago:
Having worked with PCI-DSS, some rules seem to only exist to
appease insurance. When criticising decisions, you are told that
passing audits to be able to claim insurance is the whole game,
even when you can demonstrate how you can bypass certain rules in
reality. High-level security has more to do with politics (my
definition) than purely technical ability. I wouldn't go as far as
to call it security theatre, there's too much good stuff there that
many don't think about without having a handy list, but the game is
certainly a lot bigger than just technical skills and hacker vs
anti-hacker.
I still have a nervous tic from having a screen lock timeout
"smaller than or equal to 30 seconds".
620gelato wrote 1 day ago:
> jack up premiums by 20% unless you force employees to change
their password once every 90 days"
Always made me judge my company's security teams as to why they
enable this stupidity. Thankfully they got rid of this gradually,
nearly 2 years ago now (90 days to 365 days to never). New
passwords were just one key l/r/u/d on the keyboard.
Now I'm thinking maybe this is why the app for a govt savings
scheme in my country won't allow password autofill at all. Imagine
expecting a new password every 90 days and not allowing auto fill -
that just makes passwords worse.
patrakov wrote 1 day ago:
Worse. If you are not in the USA, i.e., if NIST is not the correct
authority, that insurer might actually be enforcing what the
"correct" authority believes to be right, i.e., password
expiration.
II2II wrote 1 day ago:
> If an insurer says "we're going to jack up premiums by 20% unless
you force employees to change their password once every 90 days",
you can argue till you're blue in the face that it's bad practice,
NIST changed its policy to recommend not regularly rotating
passwords over a decade ago, etc., and be totally correct... but
they're still going to jack up premiums if you don't do it.
I would argue that password policies are very context dependent. As
much as I detest changing my password every 90 days, I've worked in
places where the culture encouraged password sharing. That sharing
creates a whole slew of problems. On top of that, removing the
requirement to change passwords every 90 days would encourage very
few people to select secure passwords, mostly because they prefer
convenience and do not understand the risks.
If you are dealing with an externally facing service where people
are willing to choose secure passwords and unwilling to share them,
I would agree that regularly changing passwords creates more
problems than it solves.
Aeolun wrote 1 day ago:
> removing the requirement to change passwords every 90 days
would encourage very few people to select secure passwords
When you don't require them to change it, you can just assign
them a random 16-character string and tell them it's their job
to memorize it.
phito wrote 1 day ago:
There's no way I will ever remember it. I will write it down.
Let me choose my own password (passphrase if I need to remember
it)
afiori wrote 2 days ago:
I believe that these kinds of decisions are mostly downstream of
security audits/consultants with varying levels of up-to-date
slideshows.
I believe that this is overall a reasonable approach for companies
that are bigger than "the CEO knows everyone and trusted executives
are also senior IT/Devs/tech experts" and smaller than "we can spin
an internal security audit using in-house resources"
simonw wrote 2 days ago:
I wish IT teams would say "sorry about the password requirement,
it's required by our insurance policy". I'd feel a lot less angry
about stupid password expiration rules if they told me that.
thayne wrote 1 day ago:
Not exactly that, but I've had the security team say "sorry about
the password policy, we agree it is stupid and counterproductive,
but it's required for compliance with X, and we need X to sell to
some big customers."
cratermoon wrote 2 days ago:
Sometime in the past few years I saw a new wrinkle: the password
must be changed every 90 days unless it is above a minimum length
(12 or so, as best I recall), in which case you only need to change
it yearly. Since the industry has realized length trumps dumb
"complexity" checks, it's a welcome change to see that encoded into
policy.
chimeracoder wrote 2 days ago:
> Sometime in the past few years I saw a new wrinkle: password
must be changed every 90 days unless it is above a minimum
length (12 or so as best I recall) in which case you only need
to change it yearly. Since the industry has realized length
trumps dumb "complexity" checks, it's a welcome change to see
that encoded into policy.
This is such a bizarre hybrid policy, especially since forced
password rotations at fixed intervals are already not
recommended for end-user passwords as a security practice.
vladvasiliu wrote 1 day ago:
I think the issue is that some people don't actually
understand what's going on, so in an attempt at goodwill,
they try to "compromise", and "split the difference" if you
will. Hell, some people will consider the windows hello pin
as a password and force a regular rotation. Combined with
policies coming from outside (think insurance and other
compliance stuff) which try to cover as much ground as
possible, you end up with half-assed implementations like
these.
One discourse I hear is that "people will just use the same
password everywhere". To which I'll answer, "but we have
mfa". "yeah, but the insurance guys".
manwe150 wrote 2 days ago:
I think I like this idea that the rotation interval could be
made proportional to length, for example doubling the interval
with each additional character. Security standards already now
acknowledge that forced yearly rotation is a net decrease in
security, so this would incentivize users to pick the longest
password for which they would tolerate the rotation interval.
Is yearly rotation too annoying for you? For merely the effort
of going from 12 -> 14 characters, you could make it 4 years
instead, or 8 years, 16, and so on.
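The proposed scheme is easy to sketch (the base length and intervals are the comment's made-up numbers, not any standard):

```python
# Sketch of the proposal above: a 12-character password rotates
# yearly, and every extra character doubles the interval.
def rotation_interval_years(length: int, base_len: int = 12) -> float:
    if length < base_len:
        return 0.25  # short passwords rotate quarterly
    return float(2 ** (length - base_len))

assert rotation_interval_years(12) == 1   # yearly
assert rotation_interval_years(14) == 4   # the 12 -> 14 example
assert rotation_interval_years(16) == 16
```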
butshouldyou wrote 1 day ago:
Unfortunately, lots of end users refuse to read the password
policy and won't understand why their password reset interval
is "random" or shorter than their colleague's.
connicpu wrote 2 days ago:
Can confirm when I found out I'd be required to regularly
change my password the security of it went down
significantly. At my current job when I was a new employee I
generated a secure random password and spent a week
memorizing it. 6 months later when I found out I was required
to change it, I reverted to a variation of the password I
used to use for everything years ago with some extra
characters at the end that I'll be rotating with each forced
change...
jimmaswell wrote 1 day ago:
I do the same but write the number at the end of the
password on the laptop in sharpie. I work from home so I've
been thinking about making a usb stick that simulates a
keyboard with a button to enter the password.
immibis wrote 1 day ago:
Dangerous. You might accidentally press the button in a
group chat.
3eb7988a1663 wrote 1 day ago:
They would then have an excuse to get one of those
mission control button covers.
byproxy wrote 1 day ago:
Why not make use of a password manager?
connicpu wrote 1 day ago:
I'm not pulling my phone out every time I have to unlock
my computer at work. If IT wants my work account to be
secure they should change their policies.
edoceo wrote 1 day ago:
As discussed here, the policy is from outside the org.
Aeolun wrote 1 day ago:
You can't open the password manager until your computer
is unlocked.
isomorphic- wrote 1 day ago:
You can put the password manager on your phone or
another device.
denkmoon wrote 1 day ago:
and now you're violating a different policy.
smeg_it wrote 2 days ago:
I'm no expert, but I did take a CISSP course a while ago. One thing
I actually remember ;P, is that it recommended long passwords in
lieu of the number, special character, upper, lower ... I don't
remember the exact wording of course and maybe it did recommend
some of that, but it talked about having a sentence rather than all
that mess in 6-8 characters, but many sites still want the short
mess that I never will actually remember
vlovich123 wrote 2 days ago:
While the password recommendation stuff is changing (the US
government updated its guidelines last year), it's generally
best practice to not share passwords, which itself implies using a
password manager anyway, which makes the whole "long passphrase" vs
"complex password" debate moot - just generate 32 lowercase random
characters to make it easier to type, or use the autogenerated
password your password manager recommends.
The long passphrase is more for the key that unlocks your
password manager than for the random passwords you use day to
day.
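A sketch of that suggestion with Python's secrets module (the entropy figure is simple arithmetic, not a claim about any particular manager's generator):

```python
import math
import secrets
import string

# Sketch: an all-lowercase machine-generated password trades
# per-character entropy for typability; length makes up the difference.
password = "".join(secrets.choice(string.ascii_lowercase) for _ in range(32))

bits = 32 * math.log2(26)  # ~150 bits, far beyond any cracking budget
assert len(password) == 32 and password.isalpha() and password.islower()
```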
kbolino wrote 2 days ago:
There's also login passwords, and depending on how many systems
you have to log into, these can be quite numerous. There are
some attempts to address this with smartcards and FIDO tokens
and so on, but it's not nearly universal yet. At least SSH keys
are common for remote login nowadays, but you still need to log
into some computer directly first.
vlovich123 wrote 2 days ago:
I find it rare to have a huge number of machines to log into
that aren't hooked up to a centralized login server. Still,
nothing prevents you from having passwords for each
individual machine that needs it. It's cumbersome to type it
in but it works, which is why I recommended all lowercase
(faster to type on a mobile device).
mcoliver wrote 2 days ago:
entropy is stronger than complexity.
URI [1]: https://xkcd.com/936/
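The arithmetic behind the comic, as a sketch (the 2048-word list size is the comic's assumption):

```python
import math

# xkcd 936: four words drawn uniformly from a 2048-word list carry
# 11 bits each, 44 bits total - versus roughly 28 bits the comic
# estimates for a mangled dictionary word like "Tr0ub4dor&3".
passphrase_bits = 4 * math.log2(2048)
assert passphrase_bits == 44.0
```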
joseda-hg wrote 2 days ago:
I wonder how many people have used Correct Horse Battery Staple
as a password thanks to this comic
smj-edison wrote 2 days ago:
> Makes password "xkcd.com/936"
D-Coder wrote 2 days ago:
Ah, just mix them up randomly: Staple Battery Correct Horse!
betaby wrote 2 days ago:
> but a lot of seemingly dumb security policies are due to
insurers.
I keep hearing that often on HN; however, I've personally never seen
such demands from insurers.
I would greatly appreciate it if someone shared such an insurance
policy. Insurance policies are not trade secrets and are OK to be
public; I can google plenty of commercial car insurance policies,
for example.
bigbuppo wrote 1 day ago:
The fun part is that they don't demand anything, they just send
you a worksheet that you fill out and presumably it impacts your
rates. You just assume that whatever they ask about is what they
want. Some of what they suggest is reasonable, like having
backups that aren't stored on storage directly coupled to your
main environment.
The worst part about cyber insurance, though, is that as soon as
you declare an incident, your computers and cloud accounts now
belong to the insurance company until they have their chosen
people rummage through everything. Your restoration process is
now going to run on their schedule. In other words, the reason
the recovery from a crypto-locker attack takes three weeks is
because of cyber insurance. And to be fair, they should only have
to pay out once for a single incident, so their designated
experts get to be careful and meticulous.
tmpz22 wrote 2 days ago:
This is such an important comment.
Fear of a prospective expectation, compliance requirement, etc.,
even when that requirement does not actually exist, is so
prevalent among software developers.
9x39 wrote 2 days ago:
It cuts both ways. I've struggled to get things like backups or
multifactor authentication approved without being able to point
to some force like regulation or insurance providers that can
dislodge executives' inertia.
My mental model at this point says that if there's a cost to
some important improvement, the politics and incentives today
are such that a typical executive will only do the bare minimum
required by law or some equivalent force, and not a dollar
more.
simonw wrote 2 days ago:
I found an example! [1] Questionnaire Zurich Cyber Insurance
Question 4.2: "Do you have a technically enforced password policy
that ensures use of strong passwords and that passwords are
changed at least quarterly?"
Since this is an insurance questionnaire, presumably your answers
to that question affect the rates you get charged?
(Found that with the help of o4-mini [2] )
URI [1]: https://retail.direct.zurich.ch/resources/definition/pro...
URI [2]: https://chatgpt.com/share/680bc054-77d8-8006-88a1-a6928a...
smithkl42 wrote 1 day ago:
We've been asked that question before on security
questionnaires, and our answer has always been, "Forcing users
to change passwords regularly is widely regarded as a very bad
security practice, and we don't engage in bad security
practices." We've never had anyone complain.
austhrow743 wrote 1 day ago:
I've never had a complaint about anything I put in to a form
requesting a quote for insurance. I just get the quote back.
Did you write that in the comment expecting an insurance
salesperson to call you up and argue passwords with you? Call
their back office and say "hey this guy says our password
question is crap, get our best guys on it!"?
I just can't imagine any outcome other than that it was translated
to just a "no" and increased your premium over what it would
have otherwise been.
gusgus01 wrote 1 day ago:
I've also filled out insurance quote forms several times to
see the interplay of the questions and price. Quite often
many of the questions do not change the quote. So the
existence of the question in a form does not imply a change
in price, or any true guess at the magnitude of the change
at all.
betaby wrote 2 days ago:
A password policy is something rather common, as are 'standard'
firewalls. The question is in the context of a WAF, as in the
article; a WAF requirement is something more invasive, to say the
least.
kiitos wrote 2 days ago:
Directly following is question 4.3: "Are users always prevented
from installing programs on end-user devices?"
Totally bonkers stuff.
pjmlp wrote 2 days ago:
This is standard practice for years in big corporations.
You install software via ticket requests to IT, and devs
might have admin rights, but not root, and only temporarily.
This is nothing new though, back in the timesharing days,
where we would connect to the development server, we only got
as much rights as required for the ongoing development
workflows.
Hence why PCs felt so liberating.
betaby wrote 2 days ago:
It's a standard practice. And at $CURRENT_JOB it's driven by
semi-literate security folks, definitely not insurance.
pjmlp wrote 2 days ago:
Insurance and liability concerns drive the security
folks.
Just wait until more countries adopt cybersecurity laws making
companies liable when software doesn't behave, like in any other
engineering industry.
stefan_ wrote 1 day ago:
Hello, the security folks in those companies made those
up. "cyber insurance" is hogwash. That entire branch
has been taken over by useless middle manager types who
know to type up checklists in Word but have no
understanding of anything.
pjmlp wrote 1 day ago:
As someone that happens to also be one of those
clueless people when assuming DevOps roles in
consulting projects, it is a very bad day when some
clever user is responsible for a security breach.
A breach can turn into enough losses, in money, credibility,
canceled orders, or lawsuits, to close up shop, or to force
firing those who thought the security rules were dumb.
Also, anyone with a security officer title has, in many
countries, legal responsibilities when something goes wrong;
when they sign off on software deliverables that fail, it is
their signature on the approval.
blangk wrote 1 day ago:
Are you arguing non technical people should have root
access to company owned and managed PCs? Because I
can tell you from experience, that will result in a
very bad time at some point. Even if it is just for
the single end user and not the wider org.
9x39 wrote 2 days ago:
A trend for corporate workstations is moving closer to a
phone with a locked-down app store, with all programs from a
company software repo.
Eliminating everything but a business's industry specific
apps, MS Office, and some well-known productivity tools
slashes support calls (no customization!) and frustrates
cyberattacks to some degree when you can't deploy custom
executables.
bigfatkitten wrote 1 day ago:
That's why it's been a requirement for Australian
government agencies for about 15 years.
In around 2011, the Defence Signals Directorate (now the
Australian Signals Directorate) went through and did an
analysis of all of the intrusions they had assisted with
over the previous few years. It turned out that app
whitelisting, patching OS vulns, patching client
applications (Office, Adobe Reader, browsers), and some
basic permission management would have prevented something
like 90% of them.
The "Top 4" was later expanded to the Essential Eight which
includes additional elements such as backups, MFA,
disabling Office macros and using hardened application
configs.
URI [1]: https://www.cyber.gov.au/resources-business-and-go...
michaelt wrote 2 days ago:
Then the users start using cloud webapps to do everything.
I can't install a PDF-to-excel converter, so I'll use this
online service to do it.
At first glance that might seem a poor move for corporate
information security. But crucially, the security of cloud
webapps is not the windows sysadmins' problem - buck
successfully passed.
serial_dev wrote 2 days ago:
I don't think locking down slashes support calls, because
you will now receive support requests any time someone wants
to install something and actually has a good business
reason to do so.
9x39 wrote 2 days ago:
Consider the ones you don't get: ones where PCs have to
be wiped from customization gone wrong, politics and
productivity police calls - "Why is Bob gaming?", "Why is
Alice on Discord?".
It's about the transition from artisanal
hand-configuration to mass-produced fleet standards, and
diverting exceptional behavior and customizations
somewhere else.
Aeolun wrote 1 day ago:
If you don't want exceptional behavior, that's
exactly what you'll get. In more than one way.
Alice is on Discord because half of the products the
company uses now give more or less direct access to
their devs through Discord.
bornfreddy wrote 2 days ago:
Coupled with protection against executing unknown
executables this also actually helps with security.
It's not like (most) users know which exe is
potentially a trojan.
manwe150 wrote 2 days ago:
You can buy insurance for just about anything, not just cars.
Companies frequently buy insurance against various
low-probability incidents such as loss of use, fraud, lawsuit,
etc.
josephcsible wrote 2 days ago:
Why wouldn't the IT people just tell the grumbling employees that
exact explanation?
bigbuppo wrote 1 day ago:
If you've read this thread, it would appear that most people here
on HN aren't actually involved with policy compliance work
dictated from above. Have you ever seen a Show HN dealing with
boring business decisions? No. We do, however, get
URI [1]: https://daale.club/
Aeolun wrote 1 day ago:
I think the problem is that any time we are involved with policy
compliance work, it's because we get a list of inane
requirements, and nobody to challenge about it.
maccard wrote 2 days ago:
In a lot of cases the IT people are just following the rules and
don't know this.
derektank wrote 2 days ago:
IT doesn't always hear the grumbles, hidden away as they
frequently are behind a ticketing system; the help desk
technicians who do hear the grumbles aren't always informed of
the "why" behind certain policies, and don't have the time or
inclination to go look them up if they're even documented; and
it's a very unsatisfying answer even if one receives a detailed
explanation.
Information loss is an inherent property of large organizations.
decasia wrote 1 day ago:
> Information loss is an inherent property of large
organizations.
That's such an interesting axiom, I'm curious if you would want
to say more about it? It feels right intuitively - complexity
doesn't travel easily across contexts and reaching a common
understanding is harder the more people you're talking to.
resonious wrote 1 day ago:
On a more micro level, I find it very hard to write good
documentation. I always forget something that once pointed
out seems obvious. Or worse, the reader is missing some
important context that many other readers are already privy
to. Not to mention, some people don't even seek out docs
before acting.
I imagine this gets amplified in a large org. The docs are
lacking, people might not read them anyway, and you get an
explosion of people who don't understand very much but still
have a job to do.
the8472 wrote 2 days ago:
In small orgs that might happen, in large orgs it's some game of
telephone where the insurance requirements are forwarded to the
security team which makes the policies which are enforced by
several layers of compliance which come down on the local IT
department.
The underlying purpose of the rules, and the agency to apply the
spirit rather than the letter, gets lost early in the chain, and
trying to unwind it can be tedious.
lucianbr wrote 2 days ago:
There should be some limits and some consequences to the insurer as
well. I don't think the insurer is god and should be able to
request anything no matter if it makes sense or not and have people
and companies comply.
If anything, I think this attitude is part of the problem.
Management, IT security, insurers, governing bodies, they all just
impose rules with (sometimes, too often) zero regard for
consequences to anyone else. If no pushback mechanism exists
against insurer requirements, something is broken.
jimmaswell wrote 1 day ago:
This is why everyone should have a union, including highly paid
professionals. Imagine what it would be like. "No, fuck you,
we're going on strike until you stop inconveniencing us to death
with your braindead security theater. No more code until you give
us admin on our own machines, stop wasting our time with useless
Checkmarx scans, and bring the firewall down about ten notches."
the8472 wrote 2 days ago:
URI [1]: https://250bpm.substack.com/p/accountability-sinks
mjr00 wrote 2 days ago:
> There should be some limits and some consequences to the
insurer as well. I don't think the insurer is god and should be
able to request anything no matter if it makes sense or not and
have people and companies comply.
If the insurer requested something unreasonable, you'd go to a
different insurer. It's a competitive market after all. But most
of the complaints about incompetent security practices boil down
to minor nuisances in the grand scheme of things. Forced password
changes once every 90 days is dumb and slightly annoying but
doesn't significantly impact business operations. Having to run
some "enterprise security tool" and go through every false
positive result (of which there will be many) and provide an
explanation as to why it's a false positive is incredibly
annoying and doesn't help your security, but it's also something
you could have a $50k/year security intern do. Turning on a WAF
that happens to reject the 0.0001% of Substack articles which
talk about /etc/hosts isn't going to materially change Substack's
revenue this year.
int_19h wrote 1 day ago:
> Forced password changes once every 90 days is dumb and
slightly annoying but doesn't significantly impact business
operations.
It negatively impacts security, because users then pick simpler
passwords that are easier to rotate through some simple
transformation. Which is why it's considered not just useless,
but an anti-pattern.
vladvasiliu wrote 2 days ago:
The issue is that the Finance dept will show up and ask why you
chose the more expensive insurance. Sure, if you're able to
show how much the annoyances of the cheaper company would cost
you, they'd probably shut it. But I'd argue it's not that easy.
Plus, all these annoyances aren't borne by the security team,
so they don't care that much in the end.
HappMacDonald wrote 2 days ago:
My first thought might be to put together a report showing
the cost that the cheaper insurance would impose upon the
organization which the more expensive up-front option is
saving you.
Perhaps even serve that up as a cost-savings the finance
department is free to then take credit for, I'unno. :P
swiftcoder wrote 2 days ago:
Not just economics, audit processes also really encourage adopting
large rulesets wholesale.
We're SOC2 + HIPAA compliant, which either means convincing the
auditor that our in-house security rules cover 100% of the cases
they care about... or we buy an off-the-shelf WAF that has already
completed the compliance process, and call it a day. The CTO is
going to pick the second option every time.
sgarland wrote 1 day ago:
OS-level monitoring / auditing software also never ceases to
amaze me (for how awful it is). Multiple times, at multiple
companies, I have seen incidents caused because Security
installed or enabled something (AWS GuardDuty, Auditbeat,
CrowdStrike…) that tanked performance. My current place has the
latter two on our ProxySQL EC2 nodes. Auditbeat is consuming two
logical cores on its own. I haven't yet been able to quantify
the impact of CrowdStrike, but from a recent perf report, it
seemed to be using eBPF to hook into every TCP connection,
which is quite a lot for a DB connection pooler.
I understand the need for security tooling, but I don't think
companies often consider the huge performance impact these tools
add.
mjr00 wrote 2 days ago:
Yeah. SOC2 reminds me that I didn't mention sales as well,
another security-as-economics feature. I've seen a lot of
enterprise RFPs that mandate certain security protocols, some of
which are perfectly sensible and others... not so much. Usually
this is less problematic than insurance because the buyer is more
flexible, but sometimes they (specifically, the buyer's company's
security team, who has no interest besides covering their own
ass) refuse to budge.
If your startup is on the verge of getting a 6 figure MRR deal
with a company, but the company's security team mandates you put
in a WAF to "protect their data"... guess you're putting in a
WAF, like it or not.
meindnoch wrote 2 days ago:
>guess you're putting in a WAF, like it or not.
Install the WAF crap, and then feed every request through
rot13(). Everyone is happy!
benaubin wrote 2 days ago:
now you've banned several different arbitrary strings!
connicpu wrote 2 days ago:
Good luck debugging why the string "/rgp/cnffjq" causes
your request to be rejected :)
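(The joke round-trips exactly; a throwaway sketch using Python's stdlib rot13 codec:)

```python
import codecs

# rot13 in both directions: scramble before the WAF sees it,
# unscramble on the other side; the filter only ever sees gibberish.
print(codecs.encode("/etc/passwd", "rot13"))  # -> /rgp/cnffjq
print(codecs.encode("/rgp/cnffjq", "rot13"))  # -> /etc/passwd
```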
throwup238 wrote 2 days ago:
Up until you need to exercise the insurance policy and the
court room "experts" come down on you like a ton of bricks.
Wowfunhappy wrote 2 days ago:
Maybe it wouldn't make a difference, but if I was the IT person
telling users they have to change their passwords every 90 days, I
would 100% include a line in the email blaming the insurance
company.
bigfatkitten wrote 1 day ago:
You would probably have no idea what the requirement actually
said or where it ultimately came from.
It would've gone from the insurer to the legal team, to the GRC
team, to the enterprise security team, to the IT engineering
team, to the IT support team, and then to the user.
Steps #1 to #4 can (and do) introduce their own requirements, or
interpret other requirements in novel ways, and you'd be #5 in
the chain.
foobarchu wrote 2 days ago:
I'm not in an IT dept (developer instead), but I'd bet money that
would get you a thorough dressing down by an executive involved
with the insurance. That sort of blaming goes over well with
those at the bottom of the hierarchy, and poorly with those at
the top.
Wowfunhappy wrote 2 days ago:
The insurance people are not a part of the company, so I'm not
sure who would be offended.
I wouldn't be mean about it. I'm imagining adding a line to the
email such as:
> (Yes, I know this is annoying, but it's required by our
insurance company.)
What is the insurance company going to do, jack up our rates
because we accurately stated what their policy was?
int_19h wrote 1 day ago:
The problem is that this particular insurance company was
picked by someone who does work in yours.
Wowfunhappy wrote 20 hours 6 min ago:
This seems like a pretty awful place to work if my bosses
have such fragile egos they can't handle me saying "our
insurance requires us to do X". I'm not even really passing
judgement on whether X is good or bad.
I'm not saying you're wrong, I've never worked in a company
this large (except for a brief internship), or in IT
specifically. But also, like, come on people, grow up.
paxys wrote 2 days ago:
"You never know..." is the worst form of security, and makes systems
less secure overall. Passwords must be changed every month, just to
be safe. They must be 20 alphanumeric characters (with 5 symbols of
course), just to be safe. We must pass every 3-letter compliance
standard with hundreds of pages of checklists for each. The server
must have WAF enabled, because one of the checklists says so.
Ask the CIO what actual threat all this is preventing, and you'll get
blank stares.
As an engineer what incentive is there to put effort into knowing
where each form input goes and how to sanitize it in a way that makes
sense? You are getting paid to check the box and move on, and every
new hire quickly realizes that. Organizations like these aren't
focused on improving security, they are focused on covering their ass
after the breach happens.
chii wrote 2 days ago:
> Ask the CIO what actual threat all this is preventing
the CIO is securing his job.
reaperducer wrote 2 days ago:
the CIO is securing his job.
Every CIO I have worked for (where n=3) has gotten where they are
because they're a good manager, even though they have near-zero
current technical knowledge.
The fetishizing of "business," in part through MBAs, has been
detrimental to actually getting things done.
A century ago, if someone asked you what you do and you replied,
"I'm a businessman. I have a degree in business," you'd get a
response somewhere between "Yeah, but what do you actually do?"
and outright laughter.
alabastervlog wrote 2 days ago:
It's a relatively recent change, too. Transition from "the
executives and managers mostly came up through 10-25 years of
doing 'lower' jobs in the company, and very much know how the
business actually works" to "we hire MBAs to those roles
directly" was throughout the '70s-'90s.
Finance and business grads have really taken over the economy,
not just through technocratic "here's how to do stuff" advice
but by personally taking all the reins of power. They're even
hard at work taking over medicine and pushing doctors out of
the work-social upper-middle-class. They already did it with
professors. Lawyers seem safe, so far.
pxc wrote 2 days ago:
> Lawyers seem safe, so far.
Nope, lawyers are fucked too. It's just not as advanced yet:
URI [1]: https://www.abajournal.com/web/article/arizona-appro...
tmpz22 wrote 2 days ago:
They're taking over veterinary clinics too! The biggest owner
of veterinary clinics is Mars inc. the candy company!
selimthegrim wrote 2 days ago:
I wonder if Matt Levine has a bit about this
ryandrake wrote 2 days ago:
This looks like a variation of the Scunthorpe problem[1], where a
filter is applied too naively, aggressively, and in this case, to the
wrong content altogether. Applying the filter to "other stuff" sent
to and among the servers might make sense, but there doesn't seem to
be any security benefit to filtering actual text payload that's only
going to be displayed as blog content. This seems like a pretty cut
and dried bug to me.
1:
URI [1]: https://en.wikipedia.org/wiki/Scunthorpe_problem
chrisjj wrote 1 day ago:
> This looks like a variation of the Scunthorpe problem[1], where a
filter is applied too naively
No.
> aggressively
No.
>, and in this case, to the wrong content altogether.
Yes - making it not a Scunthorpe problem.
pmarreck wrote 2 days ago:
Correct. And a great example of it.
rurp wrote 2 days ago:
This is exactly what I was thinking as well, it's a great
Scunthorpe example. Nothing from the body of a user article should
ever be executed in any way. If blocking a list of strings is
providing any security at all you're already in trouble because
attackers will find a way around that specific block list.
gfiorav wrote 2 days ago:
I agree. From a product perspective, I would also support the
decision. Should we make the rules more complex by default,
potentially overlooking SQL injection vulnerabilities? Or should we
blanket-prohibit anything that even remotely resembles SQL, and let
the edge cases figure it out?
I favor the latter approach. That group of Cloudflare users will
understand the complexity of their use case (accepting SQL in
payloads) and will be well-positioned to modify the default rules.
They will know exactly where they want to allow SQL usage.
From Cloudflare's perspective, it is virtually impossible to
reliably cover every conceivable valid use of SQL, and it is
likely that 99% of websites won't host SQL content.
wat10000 wrote 2 days ago:
Sorry, we have to reject your comment due to security. The text
"Cloudflares" is a potential SQL injection.
gfiorav wrote 2 days ago:
You know, I get the spirit of this criticism. But, especially in
the age of AI, we're going to get thousands of barely reviewed
websites on Cloudflare.
If you know what you're doing, turn these protections off. If you
don't, there's one less hole out there.
int_19h wrote 1 day ago:
The problem is that people who don't know what they are doing
join the cargo cult and then impose these requirements on
people who do know what they are doing.
wat10000 wrote 2 days ago:
In all seriousness, I don't see the justification for blocking
"/etc/hosts" but allowing "'". The latter is probably a million
times more likely to trigger a vulnerability.
krferriter wrote 2 days ago:
If your web application is relying on Cloudflare filtration of
input values to prevent SQL injection, your web application is
vulnerable to SQL injection.
p_ing wrote 2 days ago:
Defense in-depth. I would hope few would want a vulnerable web
app and simply protect it via a WAF. But just because your web
app is 'invulnerable' doesn't mean you should forgo the WAF.
immibis wrote 2 days ago:
My defense in depth blocks Content-Length that's a prime number
or divisible by 5. Can't be too safe!
RKFADU_UOFCCLEL wrote 2 days ago:
What? If I construct my queries the right way (e.g., not
concatenating strings together like it's the year 1990), then I
never will want a WAF "helping" me by blocking my users because
they have an apostrophe in their name.
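(The "right way" here is parameterized queries, where the driver passes
values out-of-band instead of splicing them into the SQL text. A minimal
sqlite3 sketch; the table and the name are made up for illustration:)

```python
import sqlite3

# The `?` placeholder sends the value out-of-band, so an apostrophe in
# the input can never terminate the SQL string literal.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "O'Brien"  # exactly the kind of input a naive WAF rule flags
conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

row = conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchone()
print(row[0])  # -> O'Brien
```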
patrakov wrote 22 hours 17 min ago:
> If I construct my queries the right way (e.g., not
concatenating strings together like it's the year 1990)...
(in the anti-WAF camp but playing a pedant here)
In your Django app, you indeed follow best practices and
don't concatenate strings, so you think this security
theater doesn't apply to you. Yet this is precisely how
Django ORM works under the hood, and SQL injections are
periodically found there.
The real solution here is to subscribe to the django-announce
list and update Django, or backport the fix manually.
p_ing wrote 2 days ago:
That's a very narrow view of what a WAF does. You may want to
review the OWASP ruleset at [1] . However, this is just the
ruleset. WAF vendors usually offer features above and beyond
OWASP rule parsing.
And WAF rules can be tuned. There's no reason an apostrophe
in a username or similar needs to be blocked, if it were by a
rule.
URI [1]: https://coreruleset.org/
TheDong wrote 1 day ago:
Okay, I'll look at the "coreruleset" which you say is good.
Let's see what's blocked:
"Division by zero" anywhere in the response body since
that's a php error. Good luck talking about math ([0] and
[1])
Common substrings in webshells, all matched as strings in
response bodies rather than by parsing HTML, so whatever,
don't comment about webshells either [2]. Unless the body is
compressed, in which case the above isn't applied at all.
Security [3].
Also, read this regex and tell me you understand what it's
doing. Tell me its author understands what it matches: [1]
What the coreruleset is doing here is trying to parse HTML,
SQL, HTTP, and various other languages with regular
expressions. This doesn't work. It will never give you a
right result.
It's trying to keep up to date with the string
representation of java and php errors, without even knowing
the version of Java the server is running, and without the
Java maintainers, who constantly add new errors, having any
say.
The only reasons attackers aren't evading the webshell
rules here trivially is because so few people use these
rules in practice that they're not even worth defeating
(and it is quite easy to have your php webshell generate
unique html each load, which cannot be matched by a regular
expression short of /.*/; html is not a regular grammar).
I was ready to see something that made WAFs feel like they
did _anything_ based on your comment, but all I see is a
pile of crap that I would not want anywhere near my site.
Filtering java error strings and php error strings out of
my rust app's responses using regexes to parse html is just
such a clown-world idea of security. Blocking the loading
of web-shells until the attacker changes a single character
in the 'title' block of the output html seems so dumb when
my real problem is that someone could write an arbitrary
executable to my server.
Every WAF ruleset I've read so far has made me sure it's a
huge pile of snake-oil, and this one is no different.
[0]: [1] [2]: [1] [3]:
URI [1]: https://github.com/coreruleset/coreruleset/blob/94...
URI [2]: https://github.com/coreruleset/coreruleset/blob/94...
URI [3]: https://github.com/coreruleset/coreruleset/blob/94...
URI [4]: https://github.com/coreruleset/coreruleset/blob/94...
URI [5]: https://github.com/coreruleset/coreruleset/blob/94...
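(The false-positive mode described above is easy to demonstrate. The
regex below is an illustrative stand-in for an error-leakage rule, not
the actual CRS pattern:)

```python
import re

# Stand-in for a response-body error-leakage rule: block anything
# containing a known error string.
RULE = re.compile(r"division by zero", re.IGNORECASE)

blog_post = "In math class we learned why division by zero is undefined."
php_error = "Warning: Division by zero in /var/www/index.php on line 3"

# Both match: the rule cannot tell content ABOUT errors from errors.
print(bool(RULE.search(blog_post)), bool(RULE.search(php_error)))  # -> True True
```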
p_ing wrote 1 day ago:
The issue is you're picking out bits and pieces that
/seem/ correct to you, but you don't seem to have
experience defending a public website from intrusions.
These rules do in fact work. Like I've said previously,
these rules require tuning for your particular website.
If I'm "talking about math" then I would modify or
disable that rule as needed.
I think this is the forest you're missing. WAF isn't
"install it and walk away". WAF needs to be tested in
conjunction with your release, like any other code would.
The WAF can and does protect against attacks your code
would never think of. It also /logs requests/ in a way
your web server will not, making it invaluable for
auditing.
And when running 3rd party software that has a function
you cannot control, but need to prevent, WAFs can do
that, too. I have a particular query string that must
work from an internal but not external network while
external/internal users leverage the same URL -- WAF can
do that with a custom rule examining the query string and
denying access to the outside world.
Or if I need to prevent [AI] bot scraping. WAF can do
that with a couple of clicks.
WAF also unloads the web server from malicious traffic.
Instead of having to size up or out a web server, I can
have a WAF appliance prevent that traffic from ever
reaching the server.
> Every WAF ruleset I've read so far
You don't appear to have any experience with the
implementation or operation of a WAF, but are attempting
to be authoritative and dismiss a WAF's utility.
krferriter wrote 2 days ago:
But what is being defended against? This is blocking legitimate
user behavior. Would it be defense in depth to also prohibit
semicolons or two consecutive hyphen characters in all content?
If your app is constructing paths to read from the server's
filesystem based on substrings contained within client-provided
field values, throwing an error if `"/etc/hosts"` appears in
any input is not going to save you.
p_ing wrote 2 days ago:
Unknown or unforeseen attacks. The WAF ruleset can be updated
much faster than code. WAFs also provide flexibility in how
requests are responded to, or even disallow access from IP
ranges, certain browsers, etc.
WAFs do throw false positives and do require adjustments OOTB
for most sites, but youâre missing the forest by focusing
on this single case.
Y_Y wrote 2 days ago:
Why not just whitelist the thousand most common words? That should
be good enough for 99% of appropriate content, and the smelly nerds
who make websites or talk about them can take their tiny market
segment and get bent.
_blk wrote 2 days ago:
100!
[good] security just doesn't work as a mixing pattern...
I'm not saying it's necessarily bad to use those additional
protections but they come with severe limitations so the total value
(as in cost/benefit) is hard to gauge.
righthand wrote 2 days ago:
Similar:
Writing `find` as the first word in your search will prevent Firefox
from acting on the "return" key when it is pressed.
Pretty annoying.
righthand wrote 1 day ago:
EDIT: Apparently this is caused by the "findplus" extension.
Removed!
jandrese wrote 2 days ago:
Are you sure you don't have a custom search rule configured in
Firefox? I just tried this on my local instance and there was no
problem.
apetresc wrote 2 days ago:
I can't reproduce this; is it still the case, or some ancient thing?
kmoser wrote 2 days ago:
I can't reproduce it, either, using FF 137.0.2 (64-bit) on Windows.
mrgoldenbrown wrote 2 days ago:
Everything old is new again :) We used to call this the Scunthorpe
problem.
URI [1]: https://en.m.wikipedia.org/wiki/Scunthorpe_problem
odirf wrote 1 day ago:
It is time to add the Substack case to this Wikipedia article.
kreddor wrote 2 days ago:
I remember back in the old days on the Eve Online forums when the
word cockpit would always turn up as "c***pit". I was quite amused by
that.
Too wrote 1 day ago:
This is actually a better solution: replacing dangerous words with
placeholders instead of blocking the whole payload. That at least
gives the user some indication of what is going on. Not that I'm
for any such WAF filters in the first place, but if forced to
choose between the two evils, I'd choose the more informative one.
Matumio wrote 1 day ago:
Not so sure. Imagine you have a base64 encoded payload and it
just happens to encode the forbidden word. Good luck debugging
that, if the payload only gets silently modified.
I suddenly understand why it makes sense to integrity-check a
payload that is already protected by all three of TLS, TCP
checksum and CRC.
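(A sketch of that kind of end-to-end check, with hypothetical seal/verify
helpers: the sender attaches a digest, the receiver recomputes it, so a
middlebox silently rewriting the body is detected rather than tolerated:)

```python
import hashlib

def seal(body: str) -> dict:
    # Attach a digest of the body at the sending side.
    return {"body": body, "sha256": hashlib.sha256(body.encode()).hexdigest()}

def verify(msg: dict) -> bool:
    # Recompute on arrival; any in-flight mutation changes the digest.
    return hashlib.sha256(msg["body"].encode()).hexdigest() == msg["sha256"]

msg = seal("ship this config mentioning /etc/hosts")
print(verify(msg))  # -> True

msg["body"] = msg["body"].replace("/etc/hosts", "[redacted]")
print(verify(msg))  # -> False
```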
Too wrote 17 hours 25 min ago:
Good point, I take that back. Having the payload mutated would
indeed be even scarier. Even more so if it actually contains
real queries; imagine what could happen if /etc/hosts becomes
/etc/*.
reverendsteveii wrote 2 days ago:
"I wonder why it's called Scunthorpe....?"
sits quietly for a second
"Oh nnnnnnnooooooooooooooo lol!"
greghendershott wrote 2 days ago:
See also: Recent scrubbing US government web sites for words like
"diversity", "equity", and "inclusion".
Writing about biology, finance, or geology? Shrug.
Dumb filtering is bad enough when used by smart people with good
intent.
netsharc wrote 1 day ago:
Huh, quick, tell one of Musk's DOGE l33t h4ck3rs about reverse
proxies, and put all government sites behind one that looks for
those words and returns an error... Error 451 would be the most
appropriate!
For bonus, the reverse proxy will run on a system infiltrated by
Russian (why not Chinese as well) hackers.
arp242 wrote 2 days ago:
A few years ago I had an application that allowed me to set any
password,
but then gave mysterious errors when I tried to use that password to
login. Took me a bit to figure out what was going on, but their WAF
blocked my "hacking attempt" of using a ' in the password.
The same application also stored my full password in localStorage and a
cookie (without httponly or secure). Because reasons. Sigh.
I'm going to do a hot take and say that WAFs are bollocks mainly used
by garbage software. I'm not saying a good developer can't make a
mistake and write a path traversal, but if you're really worried about
that then there are better ways to prevent that than this approach
which obviously is going to negatively impact users in weird and
mysterious ways. It's like the naïve /(fuck|shit|...)/g-type "bad word
filter". It shows a fundamental lack of care and/or competency.
Aside: is anyone still storing passwords in /etc/passwd? Storing the
password in a different root-only file (/etc/shadow,
/etc/master.passwd, etc.) has been a thing on every major system since
the 90s AFAIK?
reverendsteveii wrote 2 days ago:
my bank requires non-alphanumeric characters in their passwords but
will reject a password if it has non-alphanumeric characters it
associates with command injection attacks.
as far as WAFs being garbage, they absolutely are, but this is a
great time for a POSIWID analysis. A WAF says its purpose is to
secure web apps. It doesn't do that, but people keep buying them. Now
we're faced with a crossroads: we either have to assume that everyone
is stupid or that the actual purpose of a WAF is something other than
its stated purpose. I personally only assume stupidity as a last
resort. I find it lazy and cynical, and it's often used to dismiss
things as hopeless when they're not actually hopeless. To just say
"Oh well, people are dumb" is a thought-terminating cliche that
ignores potential opportunities. So we do the other thing and
actually take some time to think about who decides to put a WAF
in-place and what value it adds for them. Once you do that, you see
myriad benefits because a WAF is a cheap, quick solution that allows
non-technical people to say they're doing something. You're the
manager of a finance OU that has a development group in it whose
responsibility is some small web app. Your boss just read an article
about cyber security and wants to know what this group two levels
below you is doing about cyber security. Would you rather come back
with "We're gonna need a year, $1 million and every other dev
priority to be pushed back in order to develop a custom solution" or
"We can have one fired up tomorrow for $300/mo, it's developed and
supported by Microsoft and it's basically industry standard." The
negative impact of these things is obvious to us because this is what
we do, but we're not always the decision-makers for stuff like that.
Often the decision-makers are actually that naive and/or they're
motivated less by the ostensible goal of better web app security and
more by the goal of better job security.
As far as /etc/passwd goes, you're right that passwords don't live
there anymore, but user IDs often do, and those can indicate which
services are running as daemons on a given system. This is vital
because if
you can figure out what services are running you can start version
fingerprinting them and then cross-referencing those versions with
the CVE database.
tlb wrote 2 days ago:
It's more that /etc/hosts and /etc/passwd are good for testing
because they always exist with predictable contents on almost every
system. If you inject "cat /etc/passwd" into various URLs, you can
grep the responses for "root:" to see if it worked.
So it's really blocking doorknob-twisting scripts.
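(The doorknob-twist is mechanical on the attacker's side: inject, then
look for the one line guaranteed to be there. A sketch of that detection
step; the helper name and sample bodies are made up:)

```python
def looks_like_passwd(body: str) -> bool:
    # /etc/passwd reliably begins with the root entry, so scanners just
    # grep the response text instead of parsing anything.
    return "root:" in body and ":0:0:" in body

leaked = "root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:..."
normal = "<html><body>search results</body></html>"
print(looks_like_passwd(leaked), looks_like_passwd(normal))  # -> True False
```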
arp242 wrote 2 days ago:
Oh yeah, I've used it for that purpose. Seems rather silly to block
that outright though since you can use many commonly distributed
files.
0xbadcafebee wrote 2 days ago:
Worth noting that people here are assuming that the author's assumption
is correct, that his writing /etc/hosts is causing the 403, and that
this is either a consequence of security filtering, or that this
combination of characters at all that's causing the failure. The only
evidence he has, is he gets back a 403 forbidden to an API request when
he writes certain content. There's a thousand different things that
could be triggering that 403.
It's not likely to be a WAF or content scanner, because the HTTP
request is using PUT (which browser forms don't use) and it's uploading
the content as a JSON content-type in a JSON document. The WAF would
have to specifically look for PUTs, open up the JSON document, parse
it, find the sub-string in a valid string, and reject it. OR it would
have to filter raw characters regardless of the HTTP operation.
Neither of those seem likely. WAFs are designed to filter on specific
kinds of requests, content, and methods. A valid string in a valid JSON
document uploaded by JavaScript using a JSON content-type is not an
attack vector. And this problem is definitely not path traversal
protection, because that is only triggered when the string is in the
URL, not some random part of the content body.
SonOfLilit wrote 2 days ago:
You're being downvoted because WAFs work exactly like this, and it's
intentional and their vendors think this is a good thing. A WAF
vendor would say that a WAF parsing JSON makes it weaker.
immibis wrote 2 days ago:
They're being downvoted because they're saying the author is
incorrect when the author is actually correct.
0xbadcafebee wrote 2 days ago:
It's frightening that so many people are convinced the author is
correct, when the author never proved they were correct.
The author just collected a bunch of correlations and then
decided what the cause was. I've been doing this kind of work for
many, many years. Just because it looks like it's caused by one
thing, doesn't mean it is.
Correlation is not causation. That's not just a pithy quip,
there's a reason why it's important to actually find causation.
immibis wrote 1 day ago:
It's more like I saw a big ball fall down and make a hole in
the floor and concluded it must be heavy.
SonOfLilit wrote 2 days ago:
Having had three opportunities in my life to diagnose this
exact problem and then successfully resolve it by turning off
the WAF rule (see my top level comment) - I don't know you or
your work history, but trust me, the author is much closer to
the truth here than you are.
edit: Also, someone commented here "it was an irrelevant cf WAF
rule, we disabled it". Assuming honesty, seems to confirm that
the author was indeed right.
Null-Set wrote 2 days ago:
See [1] rule 100741.
It references this CVE [2] which allows the reading of system files.
The example given shows them reading /etc/passwd
URI [1]: https://developers.cloudflare.com/waf/change-log/2025-04-22/
URI [2]: https://github.com/tuo4n8/CVE-2023-22047
ryandrake wrote 2 days ago:
If you change a single string in the HTTP payload and it works, what
other explanation makes sense besides a text scanner somewhere along
the path to deploying the content?
apetresc wrote 2 days ago:
It sure looks like the author did his due diligence; he has a chart
of all the different phrases in the payload which triggered the 403
and they all corresponded to paths to common UNIX system
configuration files.
Nobody could prove that's exactly what's happening without seeing
Cloudflare's internal WAF rules, but can you think of any other
reasonable explanation? The endpoint is rejecting a PUT whose
payload contains exactly /etc/hosts, /etc/passwd, or
/etc/ssh/sshd_config, but NOT /etc/password, /etc/ssh, or
/etc/h0sts. What else could it be?
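(The author's due diligence amounts to a substring bisection against the
endpoint. A self-contained sketch of the same experiment, with the live
HTTP call replaced by a simulated filter so it runs offline; the real
version would send each body in a PUT and record the status code:)

```python
import re

def probe(body: str, is_blocked) -> int:
    """Return the status code the (simulated) endpoint would give."""
    return 403 if is_blocked(body) else 200

# Stand-in for the observed behavior: exact sensitive paths trip the
# filter, near-misses do not. The word boundary is what lets
# /etc/password through while /etc/passwd is caught.
BLOCK = re.compile(r"/etc/(hosts|passwd|ssh/sshd_config)\b")
fake_waf = lambda body: bool(BLOCK.search(body))

for s in ["/etc/hosts", "/etc/h0sts", "/etc/passwd", "/etc/password"]:
    print(s, probe(f"draft article mentioning {s}", fake_waf))
```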
simonw wrote 2 days ago:
Yeah, the author clearly put the work in to demonstrate what's
happening here.
pimanrules wrote 2 days ago:
We faced a similar issue in our application. Our internal Red Team was
publishing data with XSS and other injection attack attempts. The
attacks themselves didn't work, but the presence of these entries
caused our internal admin page to stop loading because our corporate
firewall was blocking the network requests with those payloads in them.
So an unsuccessful XSS attack became an effective DoS attack instead.
darkwater wrote 2 days ago:
This is funny and sad at the same time.
netsharc wrote 2 days ago:
Reminds me of an anecdote about an e-commerce platform: someone coded a
leaky webshop, so their workaround was to watch if the string
"OutOfMemoryException" shows up in the logs, and then restart the app.
Another developer in the team decided they wanted to log what customers
searched for, so if someone typed in "OutOfMemoryException" in the
search bar...
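The failure mode in that anecdote can be sketched in a few lines (the names and log formats here are invented for illustration):

```python
# Hypothetical reconstruction of the anecdote above: a naive log watcher
# that restarts the app whenever "OutOfMemoryException" appears anywhere
# in a log line -- including inside a logged user search query.

def should_restart(log_line: str) -> bool:
    # The watcher's entire "parser": a bare substring check.
    return "OutOfMemoryException" in log_line

# The crash line the watcher was written for:
crash = "ERROR com.shop.Cart - java.lang.OutOfMemoryException: heap space"

# A line added later by another developer, logging raw user input:
search = "INFO com.shop.Search - user searched for: OutOfMemoryException"

assert should_restart(crash)   # intended behavior
assert should_restart(search)  # accidental restart: user-triggered DoS
```

Any user who types the magic string into the search box now restarts the shop.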
PhilipRoman wrote 2 days ago:
Careless analysis of free-form text logs is an underrated way to
exploit systems. It's scary how much software blindly logs data
without out-of-band escaping or sanitization.
ycombinatrix wrote 2 days ago:
Why would someone "sanitize" OutOfMemoryException out of their
logs? That is a silly point to make.
MortyWaves wrote 1 day ago:
Absolutely incredible how dense HN can be and that no one has
explained. Obviously that isn't what they are saying; they are
saying it's profoundly stupid to have the server be controlled
by a simple string search at all.
teraflop wrote 2 days ago:
The point is not to sanitize known strings like
"OutOfMemoryException". The point is to sanitize or (preferably)
escape any untrusted data that gets logged, so that it won't be
confused for something else.
swyx wrote 2 days ago:
i think GP's point is how would you even sanitize the string
"OutOfMemoryException" which presumably comes from a trusted
system
i guess demanding "Structured logs for everything or bust" is
the answer? (i'm not a big o11y guy so pardon me if this is
obvious)
noisem4ker wrote 2 days ago:
"o11y" stands for "observability".
Numeronyms are evil and we should stop using them.
vintermann wrote 1 day ago:
t4s, couldn't agree more.
ramon156 wrote 2 days ago:
You're right, avoiding them gives better a11y
j1elo wrote 2 days ago:
Thanks. My mind started running the random string generator
given those restrictions, like a puzzle game. But had no
idea of what it meant until you wrote it. Who invented that
stupid idea and thought it would be a good one?
swyx wrote 2 days ago:
because its easily googlable.
counter point - people are going to use them, better to
expose newbies early and often and then everyone is
better off
shorthands will always be in demand. we used to say
"horseless carriage", then "automobile", then "car". would
you rather use "light amplification by stimulated emission of
radiation" or just "laser"? etc
in the new york times? sure, spell out observability. but
on HN? come on. the term is 7 years old and is used all
over the site. it's earned it
alpaca128 wrote 1 day ago:
Or we could use words that don't require people to use
Google just to be able to read. A dependence on a
search engine for reading text is unnecessary.
If you find typing tedious just use autocomplete or
other tools instead of making it the readers' problem.
immibis wrote 1 day ago:
no we drive to work in our h16e to set up the l47n
stalfosknight wrote 1 day ago:
I've never seen it before.
PhilipRoman wrote 2 days ago:
Low tech example: escape all newlines in user supplied
strings, then add a known prefix to all user supplied data
(let's say a double hashtag ##, but anything else works too).
When you want to search logs for strings coming from your
system, remove/ignore everything after the marker.
It all comes down to understanding whether the intersection
of two grammars is empty.
jethro_tell wrote 2 days ago:
The difficulty here is that in the example above, it's
unlikely, given any amount of scale, that the two people
were on the same team. They were doing different things
with the same data and probably didn't know what the other
was doing.
Sure you could add a convention to your 'how to log' doc
that specifies that all user input should be tagged with
double '#' but who reads docs until things break?
convention is a shitty way to make things work.
There's 100 ways that you could make this work correctly.
Only restarting on a much more specific string, i.e.
including the app name in the log line etc . . . but that's
all just reducing the likelihood that you get burned.
I've also written an OOM-Killer.sh myself, I'm not above
that, but it's one of those edge cases that's impossible to
do correctly, which is why parsing and acting on log data is
generally considered an anti-pattern.
owebmaster wrote 2 days ago:
An OutOfMemoryException log should not be the same as a search
log
Error: OutOfMemoryException
And
Search: OutOfMemoryException
Should not be related in any way
dusanz wrote 2 days ago:
Until someone searches for "Error: OutOfMemoryException"
happysadpanda2 wrote 1 day ago:
I read the gp to mean that error.log (being parsed to look
for OOM) would have no associations with userSearches.log, in
which an end-user searched for OOM
PhilipRoman wrote 2 days ago:
If structured logging is too much, unique prefixes solve this
issue. Basically you need some token that user provided data
is unable to output to the log. If you rigorously escape all
newlines, you can then use start-of-line and end-of-line as
unforgeable tokens. The possibilities are endless and it all
comes down to understanding whether the intersection of two
grammars is empty.
skipants wrote 2 days ago:
I've actually gone through this a few times with our WAF. A user got
IP-banned because the WAF thought a note with the string "system(..."
was PHP injection.
badgersnake wrote 2 days ago:
Seems like a case of somebody installing something they couldn't be
bothered to understand to tick a box marked security.
The outcome is the usual one, stuff breaks and there is no additional
security.
aidog wrote 2 days ago:
It's something I ran into quite a few times in my career. It's a weird
call to get when the client can't save their CMS site after typing
something harmless. I think the worst was a dropdown I had defined
whose value matched one of the mod rules and was therefore blocked.
sfoley wrote 2 days ago:
I cannot reproduce this.
Y_Y wrote 2 days ago:
Does it block `/etc//hosts` or `/etc/./hosts`? This is a ridiculous
kind of whack-a-mole that's doomed to failure. The people who wrote
these should realize that hackers are smarter and more determined than
they are and you should only rely on proven security, like not
executing untrusted input.
tom1337 wrote 1 day ago:
Well I've just created an account on substack to test this but turns
out they've already fixed the issue (or turned off their WAF
completely)
augusto-moura wrote 2 days ago:
How would that be hard? Resolving the absolute path of a string is in
almost every language's stdlib[1]. You can just grep for any string
containing slashes and try to resolve them, and voilà.
Resolving wildcards is trickier but definitely possible if you have a
list of forbidden files.
[1] Edit: changed link because C's realpath has slightly different
behavior
URI [1]: https://nodejs.org/api/path.html#pathresolvepaths
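The suggestion can be sketched without touching the filesystem by using purely lexical normalization (Python's posixpath.normpath here, standing in for the linked Node path.resolve; the blocklist is illustrative):

```python
# Sketch: normalize paths lexically before matching against a blocklist.
# posixpath.normpath collapses "//", "/./", and "seg/../" with no
# filesystem access -- it's pure string processing.
import posixpath

BLOCKED = {"/etc/hosts", "/etc/passwd", "/etc/ssh/sshd_config"}

def is_blocked(candidate: str) -> bool:
    return posixpath.normpath(candidate) in BLOCKED

assert is_blocked("/etc/hosts")
assert is_blocked("/etc//hosts")       # variants a substring rule misses
assert is_blocked("/etc/./hosts")
assert is_blocked("/etc/ssh/../hosts")
assert not is_blocked("/etc/h0sts")
```

Note this is lexical only: it cannot see symlinks, and it still can't tell a hostile path from someone writing a tutorial about /etc/hosts.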
TheDong wrote 1 day ago:
The reason it's doomed to failure is because WAFs operate before
your application, and don't have any clue what the data is.
Here is a WAF matching line: [1] Here's where that file is loaded:
[2] It's loaded with '"@pmFromFile lfi-os-files.data"' which means
"case-insensitive match of values from a file".
So yeah, the reason it can't resolve paths properly is because WAFs
are just regex and substring matching trying to paper over security
issues in an application which can only be solved correctly at the
application level.
URI [1]: https://github.com/coreruleset/coreruleset/blob/943a6216ed...
URI [2]: https://github.com/coreruleset/coreruleset/blob/943a6216ed...
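What @pmFromFile amounts to can be modeled as a case-insensitive substring scan, which is why trivial variants slip through while legitimate prose gets blocked (the pattern list below is abbreviated; the real one is lfi-os-files.data):

```python
# Minimal model of a WAF "@pmFromFile" rule: case-insensitive substring
# matching of the request body against known sensitive file paths.
PATTERNS = ["/etc/hosts", "/etc/passwd", "/etc/ssh/sshd_config"]

def waf_blocks(body: str) -> bool:
    lowered = body.lower()
    return any(p in lowered for p in PATTERNS)

assert waf_blocks("please edit /etc/hosts on the box")  # legit post: blocked
assert waf_blocks("/ETC/HOSTS")                         # case-insensitive
assert not waf_blocks("/etc//hosts")                    # trivial bypass
assert not waf_blocks("/etc/./hosts")                   # another bypass
```

The rule has no idea whether the bytes are a path traversal attempt or a blog post about DNS, which is the core of the complaint in this thread.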
watusername wrote 1 day ago:
> How would that be hard? Getting the absolute path of a string is
in almost all languages stdlibs[1]. You can just grep for any
string containing slashes and try resolve them and voilá
Be very, very careful about this, because if you aren't, this can
actually result in platform-dependent behavior or actual filesystem
access. They are bytes containing funny slashes and dots, so
process them as such.
Edit: s/text/bytes/
myflash13 wrote 2 days ago:
It's not hard, but I think that's more computation than a CDN
should be doing on the edge. If your CDN layer is doing path
resolution on all strings with slashes, that's already some heavy
lifting for a proxy layer.
nickdothutton wrote 2 days ago:
See "enumerating badness" as a losing strategy. I knew this was a bad
idea about 5 minutes after starting my first job in 1995.
eli wrote 2 days ago:
Is a security solution worthless if it can't stop a dedicated
attacker? A lot of WAF rules are blocking probes from off-the-shelf
vulnerability scanners.
kevincox wrote 2 days ago:
IMHO the primary value for WAFs is for quickly blocking known
vulnerabilities with specific rules to mitigate vulnerabilities
while they are being properly patched. Ideally the WAF knows what
software is behind it (example WordPress, Java app, ...) and can
apply filters that may be relevant.
Anything else is just a fuzzy bug injector that will only stop the
simplest scanners and script kiddies if you are lucky.
richardwhiuk wrote 2 days ago:
Every security solution can only stop a certain fraction of
attacks.
ndsipa_pomu wrote 2 days ago:
It's merely security theater.
It reminds me of when airports started scanning people's shoes
because an attacker had used a shoe bomb. Yes, that'll stop an
attacker trying a shoe bomb again, but it disadvantages every
traveller and attackers know to put explosives elsewhere.
geoffpado wrote 2 days ago:
"attacker had used a shoe bomb"
It's even dumber than that. An attacker tried and failed to use
a shoe bomb, and yet his failure has caused untold hours of
useless delay for over 13 years now.
kevin_thibedeau wrote 2 days ago:
Now you have to buy your liberty with pre-check.
da_chicken wrote 2 days ago:
"It's technically better than nothing," is kind of a bizarre
metric.
It's like not allowing the filesystem to use the word "virus" in a
file name. Yes, it technically protects against some viruses, but
it's really not very difficult to avoid while being a significant
problem to a fair number of users with a legitimate use case.
It's not that it's useless. It's that it's stupid.
jrockway wrote 2 days ago:
Yeah, and this seems like a common Fortune 500 mandatory checkbox.
Gotta have a Web Application Firewall! Doesn't matter what the rules
are, as long as there are a few. Once I was told I needed one to
prevent SQL injection attacks... against an application that didn't
use an SQL database.
If you push back you'll always get a lecture on "defense in depth",
and then they really look at you like you're crazy when you suggest
that it's more effective to get up, tap your desk once, and spin
around in a circle three times every Thursday morning. I don't
know... I do this every Thursday and I've never been hacked. Defense
in depth, right? It can't hurt...
vultour wrote 1 day ago:
A large investment bank I worked for blocked every URL that ended
in `.go`. Considering I mostly wrote Golang code it was somewhat
frustrating.
hnlmorg wrote 2 days ago:
I'm going through exactly this joy with a client right now.
"We need SQL injection rules in the WAF"
"But we don't have an SQL database"
"But we need to protect against the possibility of partnering
with another company that needs to use the same datasets and wants
to import them into a SQL database"
In fairness, these people are just trying to do their job too. They
get told by NIST (et al) and Cloud service providers that WAF is
best practice. So it's no wonder they'd trust these snake oil
salesmen over the developers who are asking not to do something
"security" related.
zelphirkalt wrote 1 day ago:
If they want to do their job well, how about adding some thinking
into the mix, for good measure? It would also be good if they
actually knew what they are talking about before trying to tell
the engineers what to do.
immibis wrote 1 day ago:
They don't want to do their job well. They want to look like
they're doing their job well, to people who don't know how to
do the job and whose metrics are completely divorced from
actual merit.
hnlmorg wrote 1 day ago:
> If they want to do their job well, how about adding some
thinking into the mix, for good measure?
That's what the conversation I shared is demonstrating ;)
> It would also be good if they actually knew what they are
talking about before trying to tell the engineers what to do.
Often the people enforcing the rules aren't supposed to be
security specialists. Because you'll have your SMEs (subject
matter experts) and your stakeholders. The stakeholders will
typically be project managers or senior management (for
example) who have different skill sets and priorities to the
SMEs.
The problem is that when it comes to security, it's a
complicated field where caution is better than lack of caution.
So if a particular project does call for following enhanced
security practices, it becomes a ripe field for snake oil
salesmen.
Or to put it another way: no company would get sued for
following security theatre, but they are held accountable if
there is a breach due to not following security best practices.
So often it doesn't matter how logical and sensible the
counter-argument is; it's automatically a losing argument.
bombcar wrote 2 days ago:
I love that having a web application firewall set to allow
EVERYTHING passes the checkbox requirement ...
CoffeeOnWrite wrote 2 days ago:
(I'm in the anti-WAF camp) That does stand to improve your
posture by giving you the ability to quickly apply duct tape to
mitigate an active mild denial of service attack. It's not
utterly useless.
krferriter wrote 2 days ago:
Denial of service prevention and throttling of heavy users is a
fine use, searching for a list of certain byte strings inside
input fields and denying requests that contain them isn't.
elevation wrote 2 days ago:
Doesn't it also add latency to every request?
swyx wrote 2 days ago:
sure but how much? 3-10ms is fine for the fast protection
when shit hits the fan.
formerly_proven wrote 2 days ago:
So does running McAfee on every POST body but some places
really wanna do that regardless. (I at least hope the scanner
isn't running in the kernel for this one).
jrockway wrote 2 days ago:
Yeah, we were asked to do this at my last job by some sort
of security review. This one doesn't bother me as much.
"Display 'network error' whenever a user uploads a file
containing 'SELECT *'" is a bad user experience. "Some
files in this repository have been flagged as containing a
virus and are not visible in the web interface until
allowed by an administrator," is OK with me, though.
tough wrote 2 days ago:
I think the WAF companies must have lobbied to get that into
the checklist; the main point is you need to pay a third party.
CoffeeOnWrite wrote 2 days ago:
You can call your existing reverse proxy a WAF to check
this checklist item. (Your point still stands, on the
median companies may opt to purchase a WAF for various
reasons.)
zelphirkalt wrote 1 day ago:
Often it is just pushing responsibility.
mystifyingpoi wrote 2 days ago:
No one expects any WAF to be a 100% solution that catches all
exfiltration attempts ever, and it should not be treated this way.
But having it is generally better than not having it.
wavemode wrote 2 days ago:
No, that logic doesn't follow. If your application is so hopelessly
vulnerable as to benefit from such naive filtering of the text
"/etc/hosts", then your application is still going to be vulnerable
in precisely the same ways, with just slightly modified inputs.
It is net zero for security and net negative for user experience,
so having it is worse than not having it.
serial_dev wrote 2 days ago:
Net zero for security might be generous.
The way I assume it works in practice on a real team is that
after some time, most of your team will have no idea how the WAF
works and what it protects against, where and how it is
configured... but they know it exists, so they will no longer pay
attention to security because "we have a tool for that",
especially when they should have finished that feature a week
ago...
Macha wrote 2 days ago:
> But having it is generally better than not having it.
The problem is that generally you're breaking actual valid use
cases as the tradeoff to being another layer of defense against
hypothetical vulnerabilities.
Yes, discussing the hosts file is a valid use case.
Yes putting angle brackets in the title of your message is valid
use case your users are going to want.
Yes putting "mismatched" single quotes inside double quotes is a
thing users will do.
Yes your users are going to use backslashes and omit spaces in a
way that looks like attempts at escaping characters.
(All real problems I've seen caused by overzealous security
products)
rcxdude wrote 2 days ago:
Is it? The WAF is also now an attack surface itself, and I don't
think WAFs have exactly proven themselves as something that
meaningfully increases security. They certainly break things
unpredictably, though.
wyager wrote 2 days ago:
> But having it is generally better than not having it.
Why? It obviously has an annoying cost and equally obviously won't
stop any hacker with a lukewarm IQ
simonw wrote 2 days ago:
"But having it is generally better than not having it."
I believe the exact opposite.
One (of many) reasons is that it can make your code less secure, by
hiding your security mistakes from you.
If your WAF obscures escaping issues during your own testing and
usage you could very easily let those escaping issues go unresolved
- leaving you vulnerable to any creative attacker who can outsmart
your WAF.
RamRodification wrote 2 days ago:
If you are in charge of testing code for escaping issues, and you
do that through a WAF, you might not be very good at your job.
smallnix wrote 2 days ago:
Dropping 0.5% of requests will prevent even the most sophisticated
attacks (think APT!). Sometimes.
pyrale wrote 2 days ago:
Dropping 95% is even more secure, plus it gives the lucky few
that get past it a sense of pride and exclusivity.
Y_Y wrote 2 days ago:
Is that like a "sense of pride and accomplishment"?
URI [1]: https://knowyourmeme.com/memes/events/star-wars-battle...
paxys wrote 2 days ago:
> But having it is generally better than not having it.
So is HN and every other site in the world insecure because it
allows users to post "/etc/hosts" ?
mystifyingpoi wrote 2 days ago:
Maybe? I don't know nor care. Assuming that HN has a vuln with
path traversal, a sanely configured WAF would block the traversal
attempt.
latexr wrote 2 days ago:
I propose someone who doesn't know or care how a system works
shouldn't be prescribing what to do to make it secure.
Otherwise this is like suggesting every gate must have a lock
to be secure, even those which aren't connected to any walls.
URI [1]: https://i.imgur.com/ntYUQB1.jpeg
MatthiasPortzel wrote 2 days ago:
> someone who doesn't know or care how a system works
shouldn't be prescribing what to do to make it secure
The part that's not said out loud is that a lot of
"computer security" people aren't concerned with
understanding the system. If they were, they'd be
engineers. They're trying to secure it without
understanding it.
saagarjha wrote 1 day ago:
Good computer security people are engineers.
smallnix wrote 2 days ago:
*some traversal attempts
petercooper wrote 2 days ago:
I ran into a similar issue with OpenRouter last night. OpenRouter is a
"switchboard" style service that provides a single endpoint from
which you can use many different LLMs. It's great, but last night I
started to try using it to see what models are good at processing raw
HTML in various ways.
It turns out OpenRouter's API is protected by Cloudflare, and
something about specific raw chunks of HTML and JavaScript in the POST
request body causes it to block many, though not all, requests. Going
direct to OpenAI or Anthropic with the same prompts is fine. I
wouldn't mind, but these are billable requests to commercial models
and not OpenRouter's free models (which I expect to be heavily
protected from abuse).
esafak wrote 2 days ago:
Did you report it?
petercooper wrote 2 days ago:
Not yet, other than on X, because the project's comms is oriented
around Discord which involves some hoop jumping.
(Update: On the way to doing that, I decided to run my tests again
and they now work without Cloudflare being touchy, so I'll keep an
eye on it!)
(Update 2: They just replied to me on X and said they had fixed
their Cloudflare config - happy days!)
paxys wrote 2 days ago:
This isn't a "security vs usability" trade-off as the author implies.
This has nothing to do with security at all.
/etc/hosts
See, HN didn't complain. Does this mean I have hacked into the site?
No, Substack (or Cloudflare, wherever the problem is) is run by people
who have no idea how text input works.
gav wrote 2 days ago:
It's more so that Cloudflare has a WAF product that checks a box for
security and makes people whose job it is to care about boxes being
checked happy.
For example, I worked with a client that had a test suite of about
7000 or so strings that should return a 500 error, including
/etc/hosts and other ones such as:
../../apache/logs/error.log
AND%20(SELECT%208203%20FROM%20(SELECT(SLEEP(5)))xGId)
/../..//../..//../..//../winnt/system32/netstat.exe?-a
We "failed" and were not in compliance as you could make a request
containing one of those strings--ignoring that neither Apache, SQL,
or Windows were in use.
We ended up deploying a WAF to block all these requests, even though
it didn't improve security in any meaningful way.
WesolyKubeczek wrote 1 day ago:
Why in the world should those be 500 even? Those are all "40x
client fuckup".
I guess someone was told, when compiling those strings, that they
should observe this known-good implementation (that actually
crashed upon receiving such things) and record whatever it returns,
and then mandate it of everyone else from now on.
krferriter wrote 2 days ago:
> For example, I worked with a client that had a test suite of
about 7000 or so strings that should return a 500 error
> We "failed" and were not in compliance as you could make a
request containing one of those strings--ignoring that neither
Apache, SQL, or Windows were in use.
this causes me pain
mystifyingpoi wrote 2 days ago:
> is run by people who have no idea how text input works
That's a very uncharitable view. It's far more likely that they are
simply using some WAF with sane defaults and never caught this.
They'll fix it and move on.
immibis wrote 2 days ago:
with insane defaults FTFY
eli wrote 2 days ago:
It's a text string that is frequently associated with attacks and
vulnerabilities. In general you want your WAF to block those things.
This is indeed the point of a WAF. Except you also don't want it to
get in the way of normal functionality (too much). That is what the
security vs usability trade off is.
This particular rule is obviously off. I suspect it wasn't intended
to apply to the POST payload of user content. Perhaps just URL
parameters.
On a big enough website, users are doing weird stuff all the time and
it can be tricky to write rules that stop the traffic you don't want
while allowing every oddball legitimate request.
SonOfLilit wrote 2 days ago:
Your auditor wants your WAF to block those things. _You_, at least
I, never ever want to have a WAF at all, as they cause much more
harm than good and, as a product category, deserve to die.
orlp wrote 2 days ago:
This is like banning quotes from your website to 'solve' SQL
injection...
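The application-level fix that the analogy points at is parameterized queries rather than character filtering; a minimal sketch using Python's sqlite3 (table and input invented for illustration):

```python
# Sketch: with placeholder binding, user input is passed to the driver
# as data and never spliced into the SQL text, so quote characters
# need no filtering at all.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

# Input full of characters a naive quote-banning filter would reject:
user_input = "don't panic'; DROP TABLE notes; --"

# "?" placeholder: the value is bound, not concatenated.
conn.execute("INSERT INTO notes (body) VALUES (?)", (user_input,))

rows = conn.execute("SELECT body FROM notes").fetchall()
assert rows == [(user_input,)]  # stored verbatim, no injection occurred
```

This is the point made elsewhere in the thread: the vulnerability can only be closed correctly at the application level, not by scanning request bodies for suspicious bytes.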
betenoire wrote 2 days ago:
"magic quotes" have entered the chat :D
macspoofing wrote 2 days ago:
My thought exactly - this isn't an example of balance between
"security vs usability" - this is just wrong behaviour.
DIR <- back to front page