_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI Ideas from "A Philosophy of Software Design"
johnwatson11218 wrote 13 hours 1 min ago:
A good game is supposed to be "easy to learn and hard to master". I
think software abstractions should have this property as well. Too
often the next "fix" in a long chain of failed ideas in overly
engineered software feels like the Batman games where one has to
complete a mini tutorial to learn to use the "bat-whatever" for a
single application/puzzle. Contrast this with the Borderlands
franchise: I can learn to play Borderlands in 5 minutes and explore
the skill tree and whatnot at my leisure, if at all.
You hear about "deus ex machina" as a lazy device in writing, but it
is commonplace in enterprise software.
Load Bearing Abstractions.
elanning wrote 12 hours 53 min ago:
I know the feeling. I like to picture it as a kitchen, and those
"bat-whatever" gadgets are like those silly infomercial cooking
tools that are only good for one specific thing. Meanwhile the good
abstractions are like a nice knife that can be used in so many
different contexts.
Warwolt wrote 17 hours 50 min ago:
I read this book with some colleagues at a work book club, and I think
it's interesting how split the opinion on the book is among readers.
My impression is that there are some good ideas in the book, but it
suffers from not being thorough enough on a theoretical level. Many
definitions given are NOT consistently used, the book frequently falls
back on discussing things in a very OOP-centric way, and a lot of it
came across to me as just opinion pieces.
Some stuff I found was excellent, like the notion of module depth.
When reading reviews on Goodreads, there's a similar disparity between
people who really liked it and people who are critical of it.
rodolphoarruda wrote 17 hours 0 min ago:
> the book frequently falls back on discussing things in a very OOP
centric way
I do procedural most of the time. Do you think I can still benefit
from reading the book? Judging from the summary in the blog post, it
seems to be a nice read even for non-OOP code. It's just my first
impression though.
Warwolt wrote 6 hours 22 min ago:
Yes, it's a pretty good book. I'm somewhat annoyed at the book
_sometimes_ speaking in general terms applicable to any paradigm,
and then sometimes speaking in very OOP-centric terms. It's still a
worthwhile read.
ozgrakkurt wrote 19 hours 23 min ago:
Focusing on these things feels like focusing too much on training
technique when going to the gym. In the end what mostly matters is
getting things done and keeping it as simple as possible. And simple
is very subjective; everyone's simple is what is simple to them. It
feels like everyone should pick a technique and master it, then reuse
it; there is not much point in chasing a global notion of
simple/elegant.
tpoacher wrote 19 hours 44 min ago:
The telegram channel is a great idea. Subscribed!
PunchTornado wrote 20 hours 12 min ago:
Some of his principles strike me as going against what Uncle Bob
preaches. And I agree with him, especially on the focus on reducing
complexity. Uncle Bob preaches modularity and encapsulation over
everything, and you end up with a thousand two-line classes and methods.
cdpd wrote 21 hours 29 min ago:
I think the first example tackles a rather naive approach to
implementing the discount code; IMHO I would ask more about the
business logic around it.
Why is a discount applied to both the order and the shipping? Are they
the same thing? What if the company applies discounts to only shipping
and not orders themselves?
Maybe it comes from experience, but I would focus on understanding the
business side first and then see if the abstraction is feasible. (To be
honest the first approach in the example is not even bad, given that
they are two "independent" business entities.)
gregfjohnson wrote 23 hours 34 min ago:
This short book is (IMHO) one of the best on software design. To me
the main point of the book is the importance of well-designed
abstractions. The "surface area" of a well-designed abstraction is
small, easy to understand, and helpful as you reason through your code
when you use it. The underlying implementation may be deep and
non-trivial, but you find that you don't have any need to worry about
the underlying internal details.
In short:
A beautifully designed abstraction is easy to understand and use.
It is so trustworthy that you don't feel any need to worry about how it
is implemented.
Finally, and most importantly, it enables you to reason with rigor and
precision about the correctness of the code you are writing that makes
use of it.
begueradj wrote 18 hours 56 min ago:
> A beautifully designed abstraction is easy to understand and use.
It's like in "Clean Code", where Ward Cunningham said that clean code
is beautiful code.
Beautiful design, beautiful code, beautiful abstraction, beautiful
class, beautiful function... But isn't that subjective and broad?
cratermoon wrote 10 hours 17 min ago:
Robert M. Pirsig discusses qualia in his writings. One objection
raised by his antagonists is "quality is just what you like",
echoing the idea of broad subjectivity you raise. Yet there is
broad agreement on what counts as quality. Among the aspects we
agree on are complexity and subjective cognitive load.
AnimalMuppet wrote 11 hours 37 min ago:
Yes, it's subjective, but not entirely. After you've done it for a
couple of decades, you start to have a sense of taste, of
aesthetics. Some things seem beautiful, and others ugly. It's
"subjective", but it's also informed by two decades of practice, so
it is far from being purely subjective.
f1shy wrote 23 hours 12 min ago:
That book is an almost perfect summary of what is in my head after 30+
years of programming. I recommend it often to new people, as I see
them make the same mistakes I did back then.
I recommend not losing time with "Clean X" books, but instead
reading this book. Also, as noted in other comments, you can only "get
it" after some real experience, so it is important to practice and
develop a "common sense" of programming.
karmakurtisaani wrote 22 hours 58 min ago:
I disagree that the "Clean X" books are a waste of time. They lay a
nice groundwork for understanding what to aim for when writing code, in
particular when you're early in your career.
When I was starting as a professional coder years ago, I had an
intuitive sense of what good code was, but I had no idea how much
actual thought had been put into it by other people. Reading those
books was a good step toward seriously starting to think about the
subject and looking at code as a craft ("it's not just me,
this code smells!" or "hey, that's a neat idea, better keep this in
mind").
Definitely would recommend to someone starting out their career.
Edit: getting downvoted for a reasonable, justified opinion.
Classy.
nyrikki wrote 10 hours 28 min ago:
There tend to be two camps with the Uncle Bob franchise as I see
it:
Those that fall for the way he sells it, as the 'one true path',
or are told to accept it as being so.
Those who view it as an opinionated lens, with some sensible
defaults, but mostly as one lens to think through.
It is probably better to go back to the earlier SOLID idea.
If you view the SRP as trying to segment code so that only one
group or person needs to modify it, to avoid cross-team coupling,
it works well.
If you use it as a hard rule and worse, listen to your linter,
and mix it in with a literal interpretation of DRY, things go
sideways fast.
He did try to clarify this later, but long after it had done its
damage.
But the reality is that selling his book as the 'one true path'
works.
It is the same reason Scrum and SAFe are popular. People prefer
hard rules to a pile of competing priorities.
Clean architecture is just ports and adapters or onion
architecture repackaged.
Both of which are excellent default approaches, if they work for
the actual problem at hand.
IMHO it is like James Shore's 'The Art of Agile Development',
which is a hard sell compared to the security-blanket feel of
Scrum.
Both work if you are the type of person who has a horses-for-courses
mentality, but lots of people hate Agile because their
organization bought into the false concreteness of Scrum.
Most STEM curriculums follow this pattern too, teaching
something as a received truth, then adding nuance later.
So it isn't just a programming thing.
I do sometimes recommend Uncle Bob books to junior people, but
always encourage them to learn why the suggestions are made, and
for them to explore where they go sideways or are inappropriate.
His books do work well as audiobooks while driving, IMHO.
Even if I know some people will downvote me for saying that.
(Sorry if your org enforced these oversimplified ideals as
governance)
pdpi wrote 14 hours 1 min ago:
Don't know about the rest of the series, but Clean Code isn't
merely a waste of time, it's worse: it's actually a net
negative, and lies at the root of a number of problems related
to incidental complexity.
karmakurtisaani wrote 11 hours 53 min ago:
Care to elaborate?
pdpi wrote 8 hours 48 min ago:
One of the biggest issues with the book is that it is a
Java-centric book that aspires to be a general-purpose
programming book. Because it never commits to being either,
it sucks equally at both. In much the same way, it's a
"business logic"-centric book that aspires to be general
purpose, so it sucks at both (and it especially sucks as
advice for writing mostly-technical/algorithmic code). This
is epitomized by how HashMap.java from OpenJDK[0] breaks
almost every single bit of advice the book gives, and yet is
one of the cleanest pieces of code I've ever read.
One fundamental misunderstanding in the book, and one that I've
heard in some of his talks, is that he equates polymorphism
with inheritance. I'll forgive him never coming across ad hoc
polymorphism as present in Haskell, but the book was published
in 2009, while Java had generics in 2004. Even if he didn't
have the terminology to express the difference between
subtype polymorphism and parametric polymorphism, five years
is plenty of time to gain an intuitive understanding of how
generics are a form of polymorphism.
His advice around preferring polymorphism (and, therefore,
inheritance, and, therefore, a proliferation of classes) over
switch statements and enums was probably wrong-headed at the
time, and today it's just plain wrong. ADTs and pattern
matching have clearly won that fight, and even Java has them
now.
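To make that concrete, here is a minimal sketch of the modern
Java (21+) version -- the Shape types are made up for
illustration, not taken from the book:

    // A sealed hierarchy is the ADT; the switch expression is the
    // pattern matching. No visitor, no inheritance-based dispatch.
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    class Areas {
        static double area(Shape shape) {
            // Exhaustiveness is compiler-checked: adding a new Shape
            // subtype breaks the build until this switch handles it.
            return switch (shape) {
                case Circle c -> Math.PI * c.radius() * c.radius();
                case Rectangle r -> r.width() * r.height();
            };
        }
    }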
Speaking of proliferation of classes, the book pays lip
service to the idea of avoiding side-effects, but then the
concrete advice consistently advocates turning stateless
functions into stateful objects for the sake of avoiding
imagined problems.
One particular bugbear of mine is that I've had literally
dozens of discussions over the years caused by his advice
that comments are always failures to express yourself in
code. Many people accept that as fact from reading it first
hand; with others you can clearly trace the brain rot back to
the book through a series of intermediaries. This has the
effect of producing programmers who don't understand that
high-level strategy comments ("I'm implementing algorithm X")
are incredibly information dense, where one single line
informs how I should interpret the whole function.
Honestly, the list goes on. There are a few nuggets of wisdom
buried in all the nonsense, but it's just plain hard to tell
people "read this chapter, but not that, and ignore these
sections of the chapter you should read". Might as well just
advise against juniors reading the book at all, and only
visiting it when they've had the time to learn enough that
they can cut through the bullshit themselves. (At which point
it's just of dubious value instead of an outright negative.)
0.
URI [1]: https://github.com/openjdk/jdk/blob/master/src/java....
whstl wrote 10 hours 49 min ago:
Not GP but: personally, I find that book's advice is highly
subjective and rooted in aesthetics rather than pragmatism or
experimentation. It encourages an excessive number of very
small methods and very small classes, and brushes off the
problems that it causes.
Not about the book, but: its influence is malignant. Even
Uncle Bob mentioned in a recent interview that he will break
the "10 lines per method" rule if need be. But practitioners
influenced by the book lack his experience, and are often
very strict. I even remember a specific Ruby linter that
capped methods at 5 or 6 lines max IIRC. Working in such a
fragmented codebase is pure madness. This comment from
another user reminded me of some of those codebases: [1]
EDIT: After living in the "Clean Code world" for half a
decade I can categorically say that it produces code that is
not only slow to run (as argued by Casey Muratori [2]), but
also slower to understand, due to the jumping around. The
amount of coupling between incestuous classes and methods
born out of "breaking up the code" makes it incredibly
difficult to refactor.
URI [1]: https://news.ycombinator.com/item?id=42486032
URI [2]: https://www.youtube.com/watch?v=tD5NrevFtbU
karmakurtisaani wrote 10 hours 24 min ago:
I think people get hung up on the small classes/methods
and ignore all the rest. One important lesson being that
the aesthetics do matter and you have to pay attention to
writing maintainable code. These are important lessons for
a beginning developer. If you think otherwise, you've never
worked on a code base which has 300-line functions with
variables named temp, a, and myVar.
Regarding short functions: yes, having them too short will
absolutely cause problems. And you should not use this as
an absolute rule. But when writing code it's very useful to
keep this in mind in order to keep things simple: when you
see your function doing 3 independent things, maybe it's
time to break it into 3 sub-functions.
Edit: I see some criticism concerning too small classes,
class variables being used as de facto global variables and
shitty inheritance. Fully agree that these are plain bad
practices stemming from the OOP craze.
sfpotter wrote 8 hours 24 min ago:
But this line of thinking is exactly what's wrong with
Clean Code. Just seeing your function doing three
independent things is not a signal that you should begin
refactoring.
I've worked on code bases with functions that were longer
than 300 lines with shorter variable names. Whether this
is a problem is completely dependent on the context. If
the function is 300 lines of highly repetitive business
logic where the variable name "x" is used because the
author was too lazy to type out a longer, more
informative variable name, then maybe it's possible to
improve the function by doing some refactoring.
On the other hand, if the function is an implementation
of a complicated numerical optimization algorithm, there
is little duplicated logic, the logic is all highly
specific to the optimization algorithm, and the variable
name "x" refers to the current iterate, then blindly
applying Clean Code dogma will likely make the code
harder to understand and less efficient.
I think the trick here is to cultivate an appreciation
for when it's important to start refactoring. I see some
patterns in when inexperienced developers begin
refactoring these two examples.
In the first example, the junior developer is usually a
little unmoored and doesn't have the confidence to find
something useful to do. They see some repetitive things
in a function and they decide to refactor it. If this
function has a good interface (in the sense of the
book---is a black box, understanding the implementation
not required), refactoring may be harmful. They run the
risk of broadening and weakening the interface by
introducing a new function. Maybe they accidentally
change the ABI. And if you have only changed the
implementation, and no one spends any time looking at the
details of this function because it has a good interface...
what's been gained?
In the second example, the junior developer is usually
panicked and confused by a Big Complicated Function
that's too hard for them to understand. They conflate
their lack of understanding with the length and
complexity of the function. This can easily be a sign of
their lack of expertise. A person with appropriate domain
knowledge may have no trouble whatsoever reading the 300
line function if it's written using the appropriate
idioms etc. But if they refactor it, it now becomes
harder to understand for the expert working on it because
1) it's changed and 2) it may no longer be as idiomatic
as it once was.
whstl wrote 9 hours 57 min ago:
Sure, but nobody is saying that aesthetics don't matter.
Quite the opposite. People have been saying this for
decades, and even government agencies have code-style
guidelines. Also, the idea that big procedures are
problematic is as old as procedural programming itself.
The problem is that, when it comes to aesthetics, one of
the two more-or-less-novel ideas of the book (and the one
that is followed religiously by practitioners) is
downright problematic when followed to the letter.
> when you see your functions doing 3 independent things,
maybe it's time to break it in 3 sub functions
That's true, and I agree! But separation of concerns
doesn't have much to do with 10-lines-per-method. The
"One Level of Abstraction per Function" section, for
example, provides a vastly better heuristic for good
function-size than the number of lines, but unfortunately
it's a very small part of the book.
> I see some criticism concerning [...] class variables
being used as de facto global variables
The criticism is actually about the book recommending
transforming local variables into instance/object
variables... here's the quote:
URI [1]: https://news.ycombinator.com/item?id=42489167
lokar wrote 9 hours 5 min ago:
If the 3 things are related such that they will only
ever be called in order, one after the other (and they
are not really complex), it's better to just do all
the work together.
whstl wrote 8 hours 15 min ago:
Yep, if they're related then I agree 100%.
f1shy wrote 21 hours 32 min ago:
I think you are totally right. The Clean X books are not a waste
of time. I meant that in the sense of "start here, don't
delay this". I would recommend: read aPoSD, then the Clean X
series, then aPoSD again ;)
abcde777666 wrote 23 hours 59 min ago:
A lot of these types of books and posts only deal with the
low-hanging fruit of software design difficulty, such as the provided
discount service example.
The trouble is that kind of thing is pretty much software development
common sense: only the inexperienced don't know it.
The true difficulties of software development are often much gnarlier
in my experience.
For instance, making architectural choices for large and dynamic
software systems, such as say a cutting edge game engine - that can be
really hard to get right, and there's not always a lot of sage advice
out there for how to navigate it - and not just for game engines but
for any equally or more complex software.
I guess my point being - I'd love to see more effort into addressing
the hard design stuff, and less repetition of what's already been
established.
AndyMcConachie wrote 20 hours 5 min ago:
Every example I see in programming books that says something like
"You should do it this way" always comes with the caveat "It
depends."
For the hard stuff that you would like to see covered, the "It
depends" part becomes more important. The correct way of handling the
really tough cases you're talking about is extremely circumstantial.
Thus, a book discussing them generally wouldn't really work. What
would probably work better are examples of these tough design issues
that include the actual code and some discussion about why specific
design decisions were made.
I like reading code from people who had to make tough trade offs in
the real world and hearing in their own words why they made the
decisions they did. I loved reading Lion's Commentary on the UNIX OS
6th edition, for example.
f1shy wrote 23 hours 10 min ago:
I'm with you on 99% of those books. But this one is a little bit
different, IMHO.
suzzer99 wrote 1 day ago:
Something bugs me about that first example. We started with two classes
that had trivial constructors, and changed them into classes that
require an instance of DiscountService be supplied to a constructor.
That doesn't feel like less complexity to me.
I'd probably just make applyDiscount a static utility method that the
two classes import and invoke on their own, at least until it becomes
obvious that something more involved is needed.
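Something like this, roughly -- the names are hypothetical, since I'm
going from the post's example rather than real code:

    // Plain static utility: no interface, no constructor wiring.
    final class DiscountUtil {
        private DiscountUtil() {}

        // Flat percentage discount; the policy here is made up
        // purely for illustration.
        static double applyDiscount(double amount, double percent) {
            return amount * (1.0 - percent / 100.0);
        }
    }

    // Each caller imports and invokes it directly:
    //   double total = DiscountUtil.applyDiscount(orderTotal, 10.0);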
quest88 wrote 10 hours 3 min ago:
Can you articulate why this feels less complex? The author did a
good job articulating the complexity with the trivial example.
One benefit I like in the author's example, vs a static utility
method, is you don't need to concern yourself about the
implementation when it comes to tests.
Let's consider your example. The CheckoutService needs to call
applyDiscount. So how do we get a discount to be applied? Well, we
need to look at the source code of the static function to see that
the order needs a CouponCode of "SUMMER2024". We may also need to see
how much the order is discounted by so we can assert on the correct
total value in the tests (or we can simply assert the total is less
than the total we started with). That means that anytime we update
the CouponCode (e.g. "WINTER2024") we have to update a bunch of
tests.
Now let's consider the example's change. I see that the constructor
takes some DiscountService interface. So now I can create a fake
implementation that's used in the tests. I don't need to dig into
implementation code to see how to trigger a discount. I can create
that fake implementation in the test itself, so I can specify exactly
how much discount will be applied, and the test can assert that the
total is discounted by the amount specified in the test itself; now
we can see it's definitely being applied! Or I can create a
FiftyPercentDiscountFake for all other tests to use.
The example is trivial of course and you likely don't need such an
abstraction for trivial services. After all, this is for example
purposes. But as code becomes less trivial, or many teams are
repeating the same thing if you're at a large enough company, then
this is a tool in our toolbelt to use to manage the complexity.
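To make the test side of this concrete, here's a rough sketch with a
hypothetical DiscountService interface shaped like the post's example
(not the article's actual code):

    // The production code only depends on this narrow interface.
    interface DiscountService {
        double applyDiscount(double total);
    }

    // In tests, a fake makes the discount explicit -- no digging
    // through implementation code for the magic coupon string.
    class FiftyPercentDiscountFake implements DiscountService {
        public double applyDiscount(double total) {
            return total * 0.5;
        }
    }

    // checkout = new CheckoutService(new FiftyPercentDiscountFake());
    // assertEquals(50.0, checkout.total(100.0));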
suzzer99 wrote 5 hours 32 min ago:
Well in the real world you would have all the coupon codes,
amounts, and any other special behavior driven by entries in a
database. It would never be in code. So the code would either be
checking the database on every call, or loading the current valid
coupons into some kind of resident app memory (and keeping them
up-to-date somehow), depending on the platform.
So I'm not going to think about how different coupon codes would
change the code complexity until I see what a real-world
implementation with database-driven coupon codes looks like.
Testing is a whole other animal. It could be that a DiscountService
interface works best from a testing perspective. But within the
scope of the post, a DiscountService interface with different
implementations for Shipping and Retail Price certainly isn't less
complex than the original code.
exoji2e wrote 23 hours 51 min ago:
Yes, and also it's weird to get a negative shipping cost (-5) for
US and code SUMMER2024. Typically you would only apply the discount
code once, and not to both shipping and the total order value.
suzzer99 wrote 12 hours 54 min ago:
So next, they realize they need a different calculation for
shipping, and now their beautiful abstraction has a big if
isShipping() {...} else {...} block. So much for reducing
complexity.
Sometimes it's best to let two identical blocks of code remain
intact if there's a reasonable chance that their logic may diverge
in the future. If they don't diverge, and you find yourself
constantly updating both, then you can factor it out into shared
code. It's usually much easier to create a new layer of abstraction
in the future, than to remove an unneeded one after a ton of code
has grown up around it.
I'm a big fan of WET: write everything twice (stolen from
HackerNews), then maybe the third time think about refactoring into
shared code, especially in new apps where you don't know how things
are going to evolve.
NomDePlum wrote 1 day ago:
A similar article discussing the same book:
URI [1]: https://blog.pragmaticengineer.com/a-philosophy-of-software-de...
galaxyLogic wrote 1 day ago:
"The idea behind exception aggregation is to handle many exceptions
with a single piece of code; rather than writing distinct handlers for
many individual exceptions, handle them all in one place with a single
handler."
This seems similar to how events are handled in a web browser. Each
element can have its own event handlers. But equally well there can
be a single event handler for each event type in a containing element,
perhaps at the top level only.
If you define event handlers of a given type for all DOM elements of
the page in a single location, it becomes much more flexible to modify
how and which events are handled, and for which DOM elements.
So we could say that "error" is just one of the event types; errors
could be handled by the same mechanism as events in general are.
Right? Or is there a clear categorical difference between error events
and other types of events?
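As a rough sketch of the aggregation idea in code (the handler and
exception names here are invented for illustration):

    import java.util.Map;

    class Server {
        // Hypothetical exception type for illustration.
        static class BadRequestException extends RuntimeException {
            BadRequestException(String msg) { super(msg); }
        }

        // One aggregated handler at the top of the call chain,
        // rather than distinct handlers at every call site below.
        String handle(Map<String, String> request) {
            try {
                return dispatch(request); // may throw from many places
            } catch (BadRequestException e) {
                return "400 " + e.getMessage();
            } catch (RuntimeException e) {
                return "500 internal error";
            }
        }

        String dispatch(Map<String, String> request) {
            String path = request.get("path");
            if (path == null) {
                throw new BadRequestException("missing path");
            }
            return "200 OK " + path;
        }
    }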
indoorcomic wrote 1 day ago:
I disagree with the examples in the second idea. The "bad" example is
easier to understand and maintain at a glance, in my opinion. Looking at
the RegisterUser method, I can immediately see the steps it takes to
register a user, whereas with the "good" example I have to think about
it a bit more. Of course, this is a simple example so not much thinking
needs to be done, but in a more realistic application I think this
would hold much more true. In projects I've worked on, I've seen
methods get incredibly bloated due to this. I certainly do agree that
"splitting things up for the sake of splitting them up" can be bad
practice; I'm just not sure this is the best example to demonstrate
that.
estevaoam wrote 1 day ago:
I agree. The main function of a component should describe what it
does, similar to describing the algorithm in natural language. I have
been following this pattern with success. It is much easier to
understand things at a glance.
ricardobeat wrote 1 day ago:
While I heartily agree with limiting complexity as a ground rule, the
example given is not a great one.
First, it's more about repetition/poor design than complexity.
Second, creating a separate service class for applying a discount is
adding unnecessary complexity. You'll end up with a pile of
DiscountService, TaxService, ShippingCostsService, and so on, and they
will be sewn together like patchwork. It seems to be a common pattern
in Java, but surely there are better ways?
marcosdumay wrote 14 hours 31 min ago:
> You'll end up with a pile of DiscountService, TaxService,
ShippingCostsService, and so on
This seems reasonable. If you get a lot of complexity on each one of
those, and their behavior is controlled by different areas of your
business, there's nothing bad in segregating them.
sarchertech wrote 1 day ago:
It's a great book. I feel like a lot of the midlevel engineers I've
worked with over the years who read Clean Code and stopped there would
benefit greatly from it.
recroad wrote 1 day ago:
Here's a recent interview with the author:
URI [1]: https://youtu.be/bopwQ_YV06g?si=S2YOVbXj3MJ2NlEG
bvrmn wrote 1 day ago:
Sadly the article's author doesn't touch the main idea of the book:
a component's public API should be as narrow as possible. John makes a
great deal of that, with concrete examples.
ilrwbwrkhv wrote 1 day ago:
This. That is the biggest idea and also the most applicable and the
easiest to understand when your complexity is going through the roof.
For example in Ruby land it is very common to make a class and then
make a lot of small tiny methods which are one liners or two liners.
I had asked him directly about this and his answer was to avoid doing
it.
Since then my Ruby and Common Lisp code has become much better.
I have since moved to Rust, but the point still applies.
theonething wrote 1 day ago:
> make a lot of small tiny methods which are one liners or two
liners.
I'm presuming you mean public tiny methods? Having private ones
like that can be good if it makes sense to do so (encapsulates logic,
increases readability, etc.)
ilrwbwrkhv wrote 23 hours 53 min ago:
Yes, public "deep" methods. But even with private methods I have
become more conservative.
It is after all an API for you!
Basically the idea that you shouldn't have long methods is
something I don't believe in anymore. Even Carmack made a similar
point:
URI [1]: http://number-none.com/blow/blog/programming/2014/09/26/...
lll-o-lll wrote 18 hours 2 min ago:
I never believed this, even when I was compelled to do it.
What are we achieving with this plethora of small methods? There
are many potential negative patterns that eventuate.
- Lifting variables into "global" (object) state. In more
complex classes, it's often hard to even identify that this
has happened. Is this permanent object state, or just an
internal temporary variable that made it easier to leap around
methods?
- Making code harder to change, as variables must be lifted into
method parameters (so changing a variable, or adding a new one,
leads to multiple modifications). Method1 calls Method2 calls
Method3 with a "dependency" that Method2 never needs.
- Poor naming leading to obtuse code. DoThingPart1,
DoThingPart2.
- Harder-to-follow code, from having to jump around the file (or
worse, multiple files).
There are better and worse ways to structure code to make it
easier to read and reason about; however, blind metric
approaches are not the way.
nyrikki wrote 10 hours 5 min ago:
I don't remember anything about lifting state up in the Clean
series; was it perhaps a React-specific book?
To me that would violate the dependency inversion principle
that most of the books leverage heavily.
I know that some ecosystems like .NET encourage those
singleton classes, but I would appreciate being pointed at
where the Clean series sells this.
I am of the bounded-context camp for component sizing in
general, so it is likely I skimmed and dumped the concept,
like I did with the over-reliance on polymorphism, which is
a sometimes food in my mind.
nyrikki wrote 9 hours 25 min ago:
Yeah, looking into this more, this is a React hypercorrection
of the Law of Demeter, related to language limits vs broad
concepts.
Not being a UI guy, I am not the one to say how universal
this is, but the React context API seems to use stamp
coupling, has to propagate those changes globally, and is
expensive.
Good example of where context is king, with tradeoffs.
whstl wrote 9 hours 45 min ago:
To quote from Clean Code:
"The function is a bit too long and the variables are used
throughout. To split the function into smaller pieces we
need to create a GuessStatisticsMessage class and make the
three variables fields of this class." - Add Meaningful
Context, Page 28
EDIT: And then right below there's an example where the
author lifts variables into object state.
EDIT 2: Funnily enough, ChatGPT managed to refactor it into
something much shorter and IMO much clearer than both
examples in the book:
    private void printGuessStatistics(char candidate, int count) {
        if (count == 0) {
            print("There are no " + candidate + "s");
        } else if (count == 1) {
            print("There is 1 " + candidate);
        } else {
            print("There are " + count + " " + candidate + "s");
        }
    }
nyrikki wrote 8 hours 29 min ago:
Thank you for doing the work to find that for me.
I still don't see that as:
> "Lifting variables into 'global' (object) state"
It is simply the inversion-and-extraction method that is
commonly used. The value of it is lost here, as his
example is poor IMHO; I find that cleaning up deeply
nested arrow code is where it shines.
This method on page 28 is about refactoring to improve
readability, and the location where the variables are
declared is still the same class.
Nothing has changed in that example except adding named
private methods in place of logic inside an if-else-if
ladder.
So this:
    if (count == 0) {
        number = "no";
        verb = "are";
        pluralModifier = "s";
    } else if (count == 1) {
        number = "1";
        verb = "is";
        pluralModifier = "";
    } else {
        number = Integer.toString(count);
        verb = "are";
        pluralModifier = "s";
    }
Is changed to this:
    if (count == 0) {
        thereAreNoLetters();
    } else if (count == 1) {
        thereIsOneLetter();
    } else {
        thereAreManyLetters(count);
    }
Remember, this chapter is about "naming things", not flow
control or even data flow.
It is actually intended to help with most of the concerns
above and has nothing to do with the React-style
anti-parameter-drilling style of 'lifting'.
If you go to page 97 he goes over the Law of Demeter and
argues against exactly what was above, and actually cites
Martin Fowler's refactoring book, which is written in a far
better style and tries to call out nuance.
So my opinion that he gives semi-helpful advice that he
oversells as received wisdom still holds. Obviously the
cost of calling a private method in your language, and how
that impacts your use case, matter here.
whstl wrote 8 hours 18 min ago:
> This method on page 28 is about refactoring to
improve readability, and the location where the
variables are declared is still the same class.
Same class, but method-scope local variables were
unnecessarily turned into long-lived instance variables.
This is what GP means by "Lifting variables into
'global' (object) state". Their phrasing, not mine.
This unnecessary use of state for what could be a
simple variable binding is what GP is criticizing, and
they're arguing that it is problematic for maintenance and
readability. I agree with them.
nyrikki wrote 8 hours 9 min ago:
In that example the constructor parameters were just
moved; is there some cost in overloading the empty
params?
Original (labeled bad by the book):
    private void printGuessStatistics(char candidate, int count) {
        String number;
        String verb;
        String pluralModifier;

Modified version:

    public class GuessStatisticsMessage {
        private String number;
        private String verb;
        private String pluralModifier;

        public String make(char candidate, int count) {
            createPluralDependentMessageParts(count);
            return String.format(
                "There %s %s %s%s",
                verb, number, candidate, pluralModifier
            );
        }
Once again, it was presented in the context of naming
things... not objects or data structures.
Often one has to resort to spherical cows to explain
things.
(Edited to add the overloaded constructor params)
whstl wrote 7 hours 29 min ago:
> In that example the constructor parameters were
just moved
Not really. In the "Original" example there's no
constructor in sight, as that's just a standalone
procedural-style method, not a class.
The second one turns the method into a class.
> is there some cost
As GP mentions, this incurs mental costs: "In more
complex classes, it's often hard to even identify
that this has happened. Is this permanent object
state, or just an internal temporary variable that
made it easier to leap around methods?"
In terms of performance, this depends on the
language/runtime/type, but in general you'll get
heap allocation instead of stack allocation, and it
will use the GC.
Also, if the lifetime of the private fields is the
same as that of GuessStatisticsMessage, you'll waste
memory if they're complex objects that don't need
to live as long as the class. Depends, YMMV. I
once had a memory-leak-ish issue due to that in a
Ruby app.
EDIT:
> Once again, it was presented in the context of
naming things... not objects or data structures
This is fine, but the example is not even good in
terms of names. `make` is a terrible method name
[1], and turning local variables into long-lived
object variables doesn't improve their names, it
only muddies the waters like GP mentioned.
[1] Classic parody/criticism of the style here:
URI [1]: http://steve-yegge.blogspot.com/2006/03/ex...
nyrikki wrote 7 hours 6 min ago:
It all doesn't matter; these are code fragments
that are not in the context of a full program.
Let's look at the labels on these code blocks (in
a chapter called "Chapter 2: Meaningful Names"):
> Listing 2-1
> Variables with unclear context.
> Listing 2-2
> Variables have a context.
What part of that would even suggest that they
are indicating the most performant, idealized
code?
They are incomplete, random code fragments meant
to discuss one of the hardest problems in
computer science... naming things.
It is simply a poor example of "Variables have a
context" being more readable than "Variables with
unclear context", not a suggestion that the
lifting principle is required.
This is _simply_ an example similar to the
extract-and-inversion method that all of us who
get brought in to break up legacy monoliths have
to use, because often things are a big ball of
mud with vars that look more like they were written
in the language Brainfuck or in obfuscated Perl than
anything reasonable.
That is not anything that the Clean camp, or even
Bob, came up with... it is often the reality when
trying to save systems from architectural erosion
or....
But seriously, if you consider 'clean code' to be
cargo-culting design choices without thought, from
any source, you are going to be screwed no matter
what.
The number of competing needs related to a dozen
types of cohesion, along with a dozen types of
coupling, will never reduce to a simple set of
rules.
Strings are interned in the string pool, and even
if you use new (which this code wasn't), anything
newer than 8u20 will dedupe duplicate strings so
that they have only a single instance even if they
aren't in the string pool. So the total space usage
of long-lived strings isn't really massive.
If you choose to save the size of a pointer with
what appears to be a short-lived, single-computation
class anyway, sacrificing readability and
maintainability, you will have far greater
problems than inefficient memory use.
whstl wrote 6 hours 35 min ago:
> not a suggestion that the lifting principle
is required.
But there is such a suggestion. In the quoted
part of the book I sent above, Uncle Bob uses
the words "we need": [1]
-
> This is _simply_ an example
You asked where in the book the idea criticized
by GP came from, and I only answered. I'm not
trying to argue.
-
> What part of that would even suggest that
they are indicating the most performant,
idealized code?
Nowhere. You're the one who asked for "the
cost". I'm not in an argument, I'm only
answering your questions.
URI [1]: https://news.ycombinator.com/item?id=4...
nyrikki wrote 6 hours 13 min ago:
The parameters have moved to a class method,
still within the class, which is now public,
but still taking a String and a primitive int.
The String, being passed as a parameter, is
already in the string pool, or perhaps on the
heap outside the string pool if it was created
with new.
Nothing has been elevated outside of the
block scope.
As private methods don't use the vtable now:
[1] where does anything that was claimed above
about raising variables to global scope apply?
You still have a constructor, btw, just with the
make() method standing in for the missing class
construction.
The parameters are just placeholders; they
have to be instantiated when invoked even if
the implicit construction is hidden.
But the beginning example and the refactored
example have the same instance variables, with
the latter being marked private, but that
doesn't impact the lifecycle.
The question being whether the refactored
version uses the new private methods as
callables or whether optimization removes them.
The point is that blaming performance
regressions on this type of refactoring to
improve readability is on thin ground for
Java.
This example does nothing to suggest that the
global-state APIs etc. that are a problem in
React relate at all to the recommendation,
which still contains everything in a local
scope.
I thank you for taking the time to answer,
but what I am looking for is how this somehow
lifts vars to a global scope, which it
doesn't.
URI [1]: https://github.com/openjdk/jdk11u-de...
whstl wrote 6 hours 7 min ago:
Sure, no problem. Happy to answer
questions.
WillAdams wrote 1 day ago:
Agreed.
I read this book recently, one chapter at a time, and after each,
reviewed the code for my current project, applying the principles to it
in a re-write --- it helped a lot.
Highly recommended.
onemoresoop wrote 1 day ago:
I love this website's format; it seems very pleasant to read.
asimpletune wrote 1 day ago:
I came here to say the same. Having the navigation on the bottom is
great too, with it appearing when you scroll up. Kudos to the owner.
amenghra wrote 1 day ago:
It's not great when it conflicts with the system's default
behavior :/
My screen recording:
URI [1]: https://streamable.com/gvz68h
vkazanov wrote 1 day ago:
In the OOP age of architecture I felt like I had no mouth (and I had
to scream).
This book, as well as the data oriented design approach, is what made
things right for me.
noelwelsh wrote 1 day ago:
I come from an FP background, and this book was interesting to me as
the author very clearly has a very different (imperative, systems)
background. In some cases we have very different opinions, in some
cases I'm completely agreed (e.g. "define errors out of existence" is
extremely common in FP, usually under the term "make illegal states
unrepresentable"), and in other cases I feel they were half-way to FP
but couldn't quite get all the way there (e.g. the editor example is a
classic interpreter, but they didn't make the connection IIRC.) I only
skimmed the book and would like to go back for a more detailed review.
Curious if anyone else with an FP background had the same or different
experience.
0xDEAFBEAD wrote 1 day ago:
I read most of the book a couple years ago, and I thought it was very
good. I wonder if you (or anyone else) can recommend an alternative
book that does a better job of describing your perspective?
zusammen wrote 1 day ago:
I read a few chapters and had the same feeling.
Darmani wrote 1 day ago:
"Define errors out of existence" might sound like "make illegal
states unrepresentable," it's actually not. Instead it's a pastiche
of ideas rather foreign to most FP readers, such as broadening the
space of valid inputs of a function. One of his examples is changing
the substr function to accept out of bounds ranges.
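In Java terms, the substr change looks roughly like this (the
clamping policy is my paraphrase of the book's example, not a quote
from it):

    // Out-of-range indices are clamped rather than rejected, so the
    // error case is "defined out of existence".
    static String substr(String s, int begin, int end) {
        int from = Math.max(0, Math.min(begin, s.length()));
        int to = Math.max(from, Math.min(end, s.length()));
        return s.substring(from, to);
    }

    // substr("hello", -2, 99) returns "hello" instead of throwing.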
You might be interested in my review. I'm a Haskeller at heart,
although the review draws more from my formal methods background.
Spoiler: his main example of a deep module is actually shallow.
URI [1]: https://www.pathsensitive.com/2018/10/book-review-philosophy...
Dansvidania wrote 8 hours 47 min ago:
The example you quote for "define errors out of existence", while
it indeed does not follow "make illegal states unrepresentable",
does follow what IMO is also an FP principle: "a total function is
better than a partial one".
Mawr wrote 20 hours 2 min ago:
> his main example of a deep module is actually shallow.
It's not, you're just ignoring what he said:
"A modern implementation of the Unix I/O interface requires
hundreds of thousands of lines of code, which address complex
issues such as: [... 7 bullet points ...] All these issues, and
many more, are handled by the Unix file system implementation; they
are invisible to programmers who invoke the system calls."
So sure, the `open` interface is big in isolation but when compared
to its implementation it's tiny, which is what you've badly missed.
The book also brings up another example right after this one, that
of a Garbage Collector: "This module has no interface at all; it
works invisibly behind the scenes to reclaim unused memory. [...]
The implementation of a garbage collector is quite complex, but the
complexity is hidden from programmers using the language". Cherry
picking, cherry picking.
Then you proceed to not mention all the other key insights the book
talks about and make up your own example of a stack data structure
not being a deep abstraction. Yes, it's not. So? The book
specifically emphasizes not applying its advice indiscriminately to
every single problem; almost every chapter has a "Taking it too
far" section that shows counterexamples.
Just so you don't attempt to muddy the waters here by claiming that
to be a cop-out: the very point of such books is to provide advice
that applies in general, in most cases, for 80% of the scenarios.
That is very much true for this book.
Overall, your formal background betrays you. Your POV is too
mechanical, attempting to fit the book's practical advice into some
sort of a rigid academic formula. Real world problems are too
complex for such a simplified rigid framework.
Indeed, a big reason why the book is so outstanding is how
wonderfully practical it is despite John Ousterhout's strong
academic background. He's exceptional in his ability to bring his
more formal insights into the realm of real-world engineering. A
breath of fresh air.
Darmani wrote 10 hours 56 min ago:
Hi Mawr,
I don't have much to say to most of your comment --- a lot of the
text reads to me like a rather uncharitable description of the
pedagogical intent of most of my writing.
I'll just respond to the part about deep modules, which brings up
two interesting lessons.
First, you really can't describe an implementation of the Unix IO
interface as being hundreds of thousands of lines.
That's because most of those lines serve many purposes.
Say you're a McDonalds accountant, and you need to compute how
much a Big Mac costs. There's the marginal ingredients and labor.
But then there's everything else: real estate, inventory, and
marketing. You can say that 4 cents of the cost of every menu
item went to running a recent ad campaign. But you can also say:
that ad was about Chicken McNuggets, so we should say 30 cents of
the cost of Chicken McNuggets went to that ad campaign, and 0
cents of everything else. Congratulations! You've just made Big
Macs more profitable.
That's the classic problem of the field of cost accounting, which
teaches that profit is a fictional number for any firm that has
more than one product. The objective number is contribution,
which only considers the marginal cost specific to a single
product.
Deciding how many lines a certain feature takes is an isomorphic
problem. Crediting the entire complexity of the file system
implementation to its POSIX bindings -- actually, a fraction of
the POSIX bindings affected by the filesystem -- is similar to
deciding that the entire marketing, real estate, and logistics
budgets of McDonalds are a cost of Chicken McNuggets but not of
Big Macs. There is a lot of code there, but, as in cost
accounting, there is no definitive way to decide how much to
credit to any specific feature.
All you can objectively discuss is the contribution, i.e.: the
marginal code needed to support a single function. I confess that
I have not calculated the contribution of any implementation of
open() other than the model in SibylFS. But Ousterhout will need
to do so in order to say that the POSIX file API is as deep as he
claims.
Second, it's not at all true that a garbage collector has no
interface. GCs actually have a massive interface. The confusion
here stems from a different source.
Programmers of memory-managed languages do not use the GC. They
use a system that uses the GC. Ousterhout's claim is similar to
saying that renaming a file has no interface, because the user of
Mac's Finder app does not need to write any code to do so. You
can at best ask: what interface does the system provide to the
end-user for accessing some functionality? For Finder, it would
be the keybindings and UI to rename a file. For a memory-managed
language, it's everything the programmer can do that affects
memory usage (variable allocations, scoping, ability to return a
heap-allocated object from a function, etc), as well as forms of
direct access such as finalizers and weak references. If you want
to optimize memory usage in a memory-managed language, you have a
lot to think about. That's the interface to the end user.
If you want to look at the actual interface of a GC, you need to
look at the runtime implementation, and how the rest of the
runtime interfaces with the GC. And it's massive -- GC is a
cross-cutting concern that influences a very large portion of the
runtime code. It's been a while since I've worked with the
internals of any modern runtime, but, off the top of my head, the
compiler needs to emit write barriers and code that traps when
the GC is executing, while the runtime needs to use indirection
for many pointer accesses (if it's a moving GC). Heck, any user
of the JNI needs to interface indirectly with the GC. It's the
reason JNI code uses a special type to reference Java objects
instead of an ordinary pointer.
If you tally up the lines needed to implement either the GC or
the POSIX file API vs. a full spec of its guaranteed behavior,
you may very well find the implementation is longer. But it's far
from as simple a matter as Ousterhout claims.
noelwelsh wrote 20 hours 29 min ago:
Nice review. It reminded me of some of the WTF moments from the
book :-) I should go back to it and write my own.
philosopher1234 wrote 1 day ago:
Your review is great! But I think the idea that it's in
opposition to PoSD is not right; I think it's a further
development and elaboration in the same direction as PoSD.
Darmani wrote 11 hours 34 min ago:
My review has a bit of a negative vibe, but when I look
through my paper copy of PoSD, the margins are full of comments
like "Yes!" and "Well said."
hyperpape wrote 1 day ago:
Does Ousterhout actually say modules must always have a longer
implementation than their spec, or just that this is a generally
desirable feature?
If he did, I agree with you, he was wrong about that. I also agree
that the unix file API is probably not a good example.
But whether or not he did, I think the dissection of edge cases
would be better off emphasizing that he's got something importantly
right that goes against the typical "small modules" dogma. All else
being equal, deeper modules are good--making too many overly small
modules creates excessive integration points and reduces the
advantages of modularity.
P.S. While I'm here, this is not really in response to the parent
post, but the example in the article really does not do justice to
Ousterhout's idea. While he does advocate sometimes just inlining
code and criticizes the pervasive idea that you should shorten any
method of more than n lines, the idea of deep modules involves more
than just inlining code.
Darmani wrote 1 day ago:
I'd say he's in between -- he strongly recommends that most
modules be "deep."
I agree that blindly making lots of tiny things is bad, but his
criteria for how to chunk modules is flawed.
lgas wrote 1 day ago:
> Does Ousterhout actually say modules must always have a longer
implementation than their spec, or just that this is a generally
desirable feature?
I mean the spec is a lower bound on the size of the solution,
right? Because if the solution were shorter than the spec, you
could just use the solution as the new shorter spec.
Darmani wrote 1 day ago:
Not necessarily. The implementation is very often more defined
than the spec. If the implementation is the spec, then it
means that even the smallest change in behavior may break
callers.
musicale wrote 1 day ago:
Nice and seemingly balanced review.
Defining errors out of existence should be mandatory for all golang
programs.
fuzztester wrote 1 day ago:
err, are you serious, sir?
alpinisme wrote 1 day ago:
I haven't looked at the substr function, but is that not similar
to how you can `take 5 [1,2,3]` or `zip [1,2,3] ['a', 'b',
'c', 'd']`?
ninetyninenine wrote 1 day ago:
I thought the book was stupid. It rehashed a bunch of obvious ideas.
It's a bit harsh, I know, but that's my honest opinion, and I
respect other people who like his book.
I too have an FP background, and I felt the author is unqualified to
talk about complexity without knowing FP. Elimination of procedures
and mutations is a formal and concrete reduction of complexity, while
the author's definition of complexity is hand-wavy. Someone should
know what FP is before writing a book like this.
Why? Because FP is basically a formal structure for software
design, and the author tried to talk about philosophy without knowing
some hard formal rules that are well known in the industry. Not
saying these rules are absolute, but you can't talk about design
without talking about this.
The book talks about modularity and things of that nature too, and
totally skips out on understanding the separation between
statefulness and logic. The author completely misses the design
concept of how IO and mutation should be separated from
declarative operations. Imperative shell/functional core is a central
design philosophy that he doesn't touch upon. The book is woefully
incomplete without talking about this. Whether the author's philosophy
aligns with it is open for debate, but you can't talk about what he
talks about without mentioning this in a big way.
rubiquity wrote 1 day ago:
FP weenies gone wild 2024. You design web apps with monads.
Ousterhout has made systems of actual consequence, where mutation is
a reality, not a fantasy you try to pretend doesn't exist.
kfreds wrote 1 day ago:
The book plainly states that it is a philosophy for software
design. Philosophy in this context is closely related to strategy,
which is the art of reducing reality to heuristics, so that we
might more easily figure out how to reach our goals in a complex
environment.
If the book had been titled "Formal methods for software design"
the lack of algorithms for reducing complexity would have been
surprising. As it is about philosophy it should not be surprising
that it focuses on heuristics.
ninetyninenine wrote 1 day ago:
Applying formal methods derived from functional programming is a
design heuristic.
It's a core heuristic and philosophy that is foundational in my
opinion. The author's failure to mention this leaves the book
missing a fundamental issue central to software design.
kfreds wrote 1 day ago:
Well put. This comment makes your criticism of the book much
more clear to me at least.
I agree with you that the separation of Church and state is a
foundational idea of software design and even computing
generally. I find it quite beautiful how it manifests in
hardware as the two categories of digital logic - combinatorial
and sequential. And if we zoom in on the logical expression of
memory we see it again - a latch is simply two feedback loops
and some combinational logic.
For what it's worth I thought the book was brilliant. Its ideas
weren't all obvious to me before I read it. It also inspired me
to read Parnas, Wirth, Hoare, and study the Go runtime and
compiler.
What should be obvious is this: the fact that the ideas were
obvious to you doesn't mean they are obvious to everyone.
Secondly, complexity has many meanings. Managing complexity is
incredibly important in the realm of security. I've been
dabbling in security for 25 years, but I would certainly not
claim to have a deep understanding of functional programming.
Nevertheless I understand complexity quite well. I think that's
what bothered me the most about your original comment - the
idea that people without a background in FP are unqualified to
talk about complexity.
mrkeen wrote 23 hours 6 min ago:
> I would certainly not claim to have a deep understanding of
functional programming.
From a philosophy-of-complexity perspective it's not needed;
all you need to ask is: will my code give the same output
given the same input? (And if not, there's your complexity!)
Of course, this is a big ask of a programmer. Leaving
determinism up to the programmer in an imperative setting is
like leaving memory safety up to the programmer in a C
setting.
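A tiny made-up illustration of that test, in Java for the sake of
the thread:

    import java.util.concurrent.atomic.AtomicLong;

    class Determinism {
        static final AtomicLong counter = new AtomicLong();

        // Same input, same output: nothing else to ask about.
        static long pureNext(long n) {
            return n + 1;
        }

        // Same input, different output on each call: the hidden
        // counter is exactly the complexity leaking out.
        static long statefulNext(long n) {
            return n + counter.incrementAndGet();
        }
    }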
vendiddy wrote 1 day ago:
I too write FP code but I found this book very valuable in how he
treats complexity and his concept of "deep modules".
I acknowledge that he does not cover purity and mutations as a
source of complexity (and they are big sources of complexity) but I
don't think that merits dismissing the entire book on those
grounds.
ninetyninenine wrote 1 day ago:
I'm not dismissing the entire book. It has merit in what it
mentions, but it's missing core foundational concepts.
Because it misses these concepts, the book isn't good in my
opinion.
WillAdams wrote 1 day ago:
In what ways could these concepts be discussed in the structure
of the book in terms of currently prevalent programming
practices?
musicale wrote 1 day ago:
> Someone should know about what fp is before writing a book like
this.
1. Are you quite sure John Ousterhout (who invented Tcl[1],
comparing it to Lisp in section 7 of the original paper) doesn't
"know about what fp is" as you say?
2. Do you think that the main reason functional programming hasn't
taken off in systems programming is that practitioners are
ignorant, or do you think there might be issues with fp systems
that prevent its adoption?
URI [1]: https://web.stanford.edu/~ouster/cgi-bin/papers/tcl-usenix...
ninetyninenine wrote 1 day ago:
The book needs to talk about those issues and trade-offs.
fp with Lisp is only a fraction of fp. I of course am talking
more along the lines of pure fp, which Lisp is not.
musicale wrote 1 day ago:
Sure, fp in Lisp might not always be true (scotsman) fp. ;-)
But omitting fp in the book is not evidence that Ousterhout is
ignorant of fp, and there is certainly evidence to the
contrary.
The likely explanation, given that he's developed a number of
systems from Sprite to Tcl/Tk to RAMCloud to HOMA, is that he
is addressing the current practice of systems programming,
which remains primarily imperative.
UniverseHacker wrote 1 day ago:
If this is so stupid and obvious, why does apparently 99.99% of
software designed by professional engineers seem to be designed by
people completely oblivious to these ideas and considerations?
Following these philosophical principles themselves- it seems like
a simpler and more accessible treatment of these ideas would be
vastly more effective than a more rigorous and complete one-
because the ideas are indeed simple.
markgoho wrote 15 hours 22 min ago:
Sturgeon's law.
also, just because someone writes poor code doesn't mean they
don't know how to write good code - intent isn't always clear,
and it's a mistake to assume ignorance based only on the output
ninetyninenine wrote 1 day ago:
> If this is so stupid and obvious, why does apparently 99.99% of
software designed by professional engineers seem to be designed
by people completely oblivious to these ideas and considerations?
It's similar to why a big portion of the world believes in
Christianity and another portion believes in Buddhism. Basically
only one or none of these religions is correct, rendering at least
one population of people believing in a completely made-up
fantasy concept.
Much of what is preached in software is religion, and what is
preached by the majority can be completely ludicrous. The
majority believing or not knowing something doesn't mean
anything.
Mikhail_Edoshin wrote 13 hours 14 min ago:
Well, there is a parable of seven blind men who met an
elephant, touched different parts of it and later described the
animal in wildly different terms. Listening to these different
tales can we say that only one of these men is right and all
others are wrong? Or maybe all are wrong? Or all are right?
Also, do contradictions in their tales mean that the elephant
does not exist?
vouwfietsman wrote 17 hours 12 min ago:
No, it's not similar at all.
Parent is telling you: "if A is so simple and obvious, why does
nobody do A"; your counter-argument is: "if many people believe A,
it does not mean it is true". This is entirely unrelated. The
point is that these things are NOT obvious to the average
programmer, he/she would benefit from learning this, and
claiming that these things are broadly "stupid and obvious" is
petty and false.
Also, the things you're saying just don't add up with your
criticism that the author is missing some fundamental part of
software philosophy. If the author is only missing something,
then it still makes sense for the majority to learn the things
he is saying, at least, as explained by your parent.
Finally, if anything can be compared to religion, it surely is
the evangelism of functional programming zealots.
ninetyninenine wrote 5 hours 52 min ago:
Author made an argument as to why everybody does A, and I said
everybody doing A doesn't mean shit, and I used religion as
an illustration of how everybody doing A doesn't mean shit.
> If the author is only missing something, then it still
makes sense for the majority to learn the things he is
saying, at least, as explained by your parent.
Sure, but from my pov he's teaching mathematics while
skipping over algebra or addition. We can all agree that
something huge is missing if you don't learn algebra or
addition.
> Finally, if anything can be compared to religion, it surely
is the evangelism of functional programming zealots.
I don't deny it. Nobody can really prove their viewpoint to
be true. Even the atheist is a zealot. The only way to not be
a zealot is to be a zealot about being unsure of everything.
But then that makes you a zealot. People insinuated a lot of
things because I used religion as an analogy.
The ONLY point I was trying to make is that a majority or
large group of people believing in or doing plan A doesn't
mean shit for plan A.
graemep wrote 1 day ago:
> It's similar to why a big portion of the world believes in
Christianity and another portion believes in Buddhism.
Basically only one or none of these religions is correct,
rendering at least one population of people believing in a
completely made-up fantasy concept.
You have picked religions with as little as possible in common.
It would be rather different if you had picked any two
monotheistic religions, for example: one could be entirely
right, and that would mean the other was partially or mostly
right. Despite your choice, there are many things in common: a
path to redemption, monasticism, wealth being a barrier to
redemption, meditation and mysticism... it's quite possible
those common elements might be right.
The same with software. Some things that are widely believed
may be true and others false.
ninetyninenine wrote 5 hours 51 min ago:
Religions claim to be the absolute truth of reality. Thus,
since all religions have details that are conflicting and
opposing, if one is true all the others are false.
musicale wrote 1 day ago:
Good explanation, really. Imperative systems programmers reject
one or more of the fp commandments (perhaps finding them
impractical), and are probably heretics in the eyes of the fp
cult.
UniverseHacker wrote 1 day ago:
Religious "truths" are not factual truths- they are better
thought of as psychological technology or techniques, and are
"true" if they work for the intended purpose. Many
conflicting religious "truths" are all "true." Even
calling them truths is only done to make the religions
accessible to people that can't mentally process nuance, and
the techniques only work for them if labeled as truth.
Intelligent religious scholars understand this well- for
example, Mahayana and Vajrayana Buddhism both teach nearly
opposite and factually incompatible perspectives on almost
everything, yet are often both used by the same religious
teachers for different pupils as appropriate.
The same is true for software design- an approach is not
literally true or false, but either works for its intended
purpose or does not. Conflicting philosophies can both be
"true" just with different underlying goals or values.
To circle back here, my point is that this information is
presented in a simple way that will let people reading it
design better software. Saying they have no right to present it
without a much less accessible and more complex framework that
would likely make it less useful to the intended audience does
not make sense to me.
FWIW, I am also a functional programmer, but would love to see
people that are not follow some of these ideas.
drdeca wrote 1 day ago:
1 Corinthians 15:13-19 (NIV): "If there is no
resurrection of the dead, then not even Christ has been
raised. And if Christ has not been raised, our preaching is
useless and so is your faith. More than that, we are then
found to be false witnesses about God, for we have testified
about God that he raised Christ from the dead. But he did not
raise him if in fact the dead are not raised. For if the dead
are not raised, then Christ has not been raised either. And
if Christ has not been raised, your faith is futile; you are
still in your sins. Then those also who have fallen asleep in
Christ are lost. If only for this life we have hope in
Christ, we are of all people most to be pitied."
--
There is only one kind of truth. "All truths are God's
truths."
If Christianity is not true, then it is false. If
Christianity and Buddhism strictly contradict each other,
then at most one of them is true.
Christianity is not meant to be a, what, psychological trick?
It makes claims, and these claims should be believed if true
and disbelieved if false.
bccdee wrote 13 hours 39 min ago:
> If Christianity is not true, then it is false.
It might be better to think of it as a potentially useful
fiction. Our culture is full of those.
Morality doesn't exist in any objective sense, which means
"murder is wrong" is not, strictly speaking, true. That
doesn't mean it isn't useful for us to collectively treat
it as if it's true. You might even argue that that's the
nature of all shared truth.
AnimalMuppet wrote 11 hours 42 min ago:
The passage drdeca quoted explicitly denies you the space
to treat Christianity as a "useful fiction". (I mean,
lots of people do, but they have to ignore what it
actually teaches in order to do so. You have to create a
fictionalized version if you want a useful fiction, which
I guess shouldn't surprise me...)
UniverseHacker wrote 1 day ago:
It's no trick, it's a spiritual path that can't be
understood without following and practicing it- the path
very much leads to something real that cannot be
experienced or explained any other way. Everything
Christianity teaches is true in the sense that I mean here.
You are not understanding what I am saying and I do not
personally know how to explain it more clearly[1], which,
as I explained above, is why religions pragmatically also
offer this view you hold as the official explanation to lay
people, despite being obvious nonsense as an objective
truth to anyone that thinks very hard about it.
I posit almost all intelligent monastics and religious
people are smart enough to tell the difference between
objective truth and religious truth- but it is taboo to
explain this to lay people as they will be confused and
think it means the religion is "fake" or a "trick", however
I don't feel the need to respect said taboo. Perhaps I will
learn to respect it by trying to explain it to people
unsuccessfully. [1] David Chapman may be able to:
URI [1]: https://vividness.live/visionary-and-objective-tru...
graemep wrote 1 day ago:
> I posit almost all intelligent monastics and religious
people are smart enough to tell the difference between
objective truth and religious truth- but it is taboo to
explain this to lay people as they will be confused and
think it means the religion is "fake" or a "trick",
however I don't feel the need to respect said taboo.
That is positing a conspiracy theory level of deception.
At least as far as Christianity goes, the "intelligent
monastics and religious people" write down their beliefs,
and have done so for millennia, and they read each other's
writings. What you suggest might be possible with an oral
tradition, but not with a written one. Christianity is
very much concerned with objective truth, and one of the
distinguishing characteristics of it (and some other religions
too) is a belief that there is an objective truth.
UniverseHacker wrote 1 day ago:
It's no great conspiracy for a religion to have tiers
of understanding and nuance reserved for people more
intelligent and dedicated in practice- that is one key
purpose of having a distinction between lay people and
monastics. The mystique of this is openly part of the
draw for people to sign up for it.
There's no deception- it's something that (as this
discussion shows) is very subtle and dangerous to the
religions when misunderstood- but not dangerous when
understood correctly. It is written down repeatedly in
religious texts, in a subtle way with plausible
deniability, but clear to those that can read between
the lines. Writing in that way was the essential basic
art of any intellectual until very recently; it is only
now (sort of) safe to plainly state nuanced
philosophical and religious concepts without facing
persecution. Nietzsche argued you still should not do
so even if you can.
It's also both quite obvious and relatively unimportant
on its own to people that would be capable of
understanding nuance, and could be quite harmful to the
faith and the stability of the religion of those not
able to understand.
graemep wrote 15 hours 51 min ago:
There would definitely need to be many people who are
deliberately deceitful. Those who both know how
to "read between the lines" and who clearly seek to
persuade others of the objective facts of
Christianity.
Take CS Lewis as an example. He wrote strong and
clear defences of the incarnation, miracles etc. as
objective facts. He was either trying to deliberately
deceive or he did not actually understand older
writing, and the latter is not really credible given
he was the professor of mediaeval and renaissance
literature at Oxford.
> The mystique of this is openly part of the draw for
people to sign up for it.
Not in my experience of priests, monks and nuns and
people who consider becoming clergy.
UniverseHacker wrote 10 hours 23 min ago:
I haven't read any of CS Lewis's writing for
adults, but unfortunately, it is not at all unusual
for academic liberal arts scholars to have only a
very shallow surface understanding of the ideas in
literature they formally study.
Another possibility is that if you get what I'm
saying here, you might re-read CS Lewis and have a
very different perspective on what he was actually
saying- because those Christian "truths" are
extremely important, and exist for a good reason -
and one can write a strong clear defense of them
from the perspective I am coming from.
I read a lot of old philosophy and religious texts
translated and commented on by "well respected"
scholars, and it is not uncommon at all that I can
tell they are seeing only the surface of the
ideas... which can make it frustrating and
difficult to read when the translator wasn't
'getting it.' The level one needs to be at to be a
well respected philosopher, and just to succeed as
an academic are not close at all, and there is no
guarantee that the latter will be capable of fully
grasping the ideas of the former - it is probably
the norm that they cannot. If they could they would
not be just a translator or scholar, but a powerful
philosopher in their own right.
An intelligent person whose mind is fundamentally
oriented towards communicating deeper meaning does
not operate on the level of obsessing over banal
binary verification of facts- and they need to be
able to assume their reader is already capable of
thinking abstractly in this way as well. To put it
simply one must assume intelligence in the reader
to communicate deep ideas and meaning, and
neglecting to "explain how to be intelligent" is
not deception- when it is not even something that
can be explained.
graemep wrote 1 day ago:
> It is written down repeatedly in religious texts,
in a subtle way with plausible deniability, but clear
to those that can read between the lines.
Can you give me an example of what you mean? From
Christianity, as its the religion I know most about.
roenxi wrote 18 hours 40 min ago:
You might enjoy this comic: [1] It makes a
humourous and compelling argument that a big part
of Christianity is encouraging its adherents to
follow the game-theoretic optimum in a way that
will convince someone even if they are a bit
credulous.
If you approach the bible with a good knowledge of
negotiation and game theory, a lot of it can be
interpreted in that light. There is a lot of good
advice to get people to move to the global optimums
that can be reached if everyone cooperates. It
isn't subtle about it. There is no conspiracy to
hide that it is good advice even to someone who
doesn't particularly believe in afterlives,
miracles or god-given ethics. There is a very neat
division between the common read and the read of
someone with a good grasp of social dynamics,
negotiation and game theory. No conspiracies. Just
a lot of people who can't handle complex social
negotiation.
URI [1]: https://www.smbc-comics.com/comic/2010-06-...
graemep wrote 17 hours 43 min ago:
It's hardly a new idea. One problem is that there
is a lot more to religion than ethics. It also
assumes that religious rules of behaviour are
global optimums. It fails to explain why
religions spread too - why would people believe
in the religion that promotes cooperation, rather
than another one? In fact, I would argue
that, in the west, far more people are moralistic
therapeutic deists than Christians.
There is also a lack of evidence it works. I do
not think Christians are consistently greatly
more socially cooperative than atheists. Maybe
more inclined to help people on the fringes of
society - e.g. running food banks here in the UK,
very active in poverty charities globally - but,
while good, I cannot believe it has a sufficiently
consistent effect to provide an advantage to a
society that follows it.
Fear of hell as a motivator is limited to some
Christian denominations but is not often
mentioned by other denominations (I am mostly
familiar with Catholic and Anglican churches) or
in the Bible, or Christian writings, or in
sermons or in religious discussions. Christian
universalists and others do not believe in any
form of hell at all!
It might work with a religion once established
(religious societies do better because of that
cooperation) but it does not explain how
religions spread in the first place.
It's a lot more likely to apply to a religion that
has been long established in a relatively stable
setting, so it is credible as an explanation of
much of ancient Jewish law that seems strange to
us now (e.g. what to eat, not plucking fruit from
young trees etc.) that often seems off from a
modern perspective.
UniverseHacker wrote 12 hours 2 min ago:
The comic isn't saying this is the main point
of religions, it's only saying it's one thing
that happens within religions. For example,
religious communities have their own social
norms that are fundamental to the religion, and
allow for coordinated actions you don't see
elsewhere, like an Amish barn raising.
I take a Jungian view that a major useful thing
religions offer is a framework for relating to
the unconscious. One key part of that is to
have a clear sense of ethics, and to align one's
actions with it, which is generally good for
your mental health.
Izkata wrote 12 hours 40 min ago:
> so it is credible as an explanation of much
of ancient Jewish law that seems strange to us
now (e.g. what to eat, not plucking fruit from
young trees etc) that often seems off from a
modern perspective.
One example theory I remember reading at some
point was the prohibition against eating
shellfish: In the area the religion arose, it
would have most likely gone bad by the time it
was brought that far inland.
UniverseHacker wrote 11 hours 54 min ago:
That seems like a very forced theory. By the
time shellfish is bad enough to present a
health risk, it smells, looks, and feels
repugnant; one doesn't need a religious
system to know not to eat it.
Shellfish are susceptible to harmful algal
blooms like red tide, which can make them very
dangerous.
Coastal foraging cultures that don't have
bans on eating shellfish, instead have
complex knowledge about when, where, and how
to prepare and eat them. It's the same with
mushrooms- cultures either universally ban
them, or deeply educate everyone about them.
All cultures globally with access to these
foods have a system here- it's not unique to
Judaism.
UniverseHacker wrote 1 day ago:
I'm not a scholar of Christian literature (or a
Christian), and I don't speak Latin, so it would
hardly be appropriate for me to pull out a specific
quote and insist "this is what they really meant."
In truth, my original source for this was my own
understanding being raised in a Christian church-
and voicing this perspective out loud in church as
a young kid didn't go over well, as you might
imagine. To me as a young kid, it was immediately
obvious that there were deeper ethical principles
being explained in these stories, and one had to be
an idiot to worry about whether they were objective
factual details or not, when the point was clearly
to understand and embody the message- to practice
and live it. One was called to have faith that
living these principles wholeheartedly was the
right thing to do and would lead to real spiritual
growth, not to have faith that some particular guy
built a particular boat- such things are
irrelevant.
However St. Augustine is someone that I am
particularly certain had a clear understanding of
this, and I can see it in how he frames most of his
ideas.
Another example, would be that ancient religious
texts are not careful at all to avoid making
numerous objectively factual contradictions- as the
anti-Christian crowd loves to point out over and
over while also completely missing the point. If
the people writing them thought that was important,
they would have avoided doing so- contrary to
modern opinion, ancient theologians and
philosophers like St. Augustine were not idiots.
William Blake is a more modern person that, while
just about the furthest thing from a monastic,
clearly had a deep understanding of what I am
talking about. Carl Jung also extensively
understood and discussed a lot of esoteric things
in Christianity including this, and wrote about
them in a relatively clear modern way.
graemep wrote 17 hours 22 min ago:
> However St. Augustine is someone that I am
particularly certain had a clear understanding of
this, and I can see it in how he frames most of
his ideas.
Can you give me an example of one?
> To me as a young kid, it was immediately
obvious that there were deeper ethical principles
being explained in these stories, and one had to
be an idiot to be worried about if they were
objective factual details or not
Again, an example? You are suggesting for example
that there is no redemption or afterlife but they
convey some point?
> If the people writing them thought that was
important, they would have avoided doing so-
contrary to modern opinion, ancient theologians
and philosophers like St. Augustine were not
idiots.
Does Augustine contradict himself? In a single
work (different views in different works could be
a change of mind)?
UniverseHacker wrote 11 hours 40 min ago:
I am curious where you are coming from- are you
a religious person that feels like my
distinction between religious and objective
truth undermines your beliefs, or are you a
non-religious person that dislikes the idea
that religion may still have value, even if the
beliefs are not based on objective physical
truth?
Myself, I would say I am non-religious, but
have a lot of respect for the purpose and value
religions offer people, and that one benefits
greatly by understanding and filling those
roles and needs in other ways even if not
practicing a religion. I very much dislike the
Richard Dawkins follower crowd that hate
religion with a passion, but have no
understanding of it, and have no connection to
or understanding of their own emotions,
unconscious, or spirituality to their own
detriment.
UniverseHacker wrote 14 hours 14 min ago:
Look at Wikiquote for some of St Augustine's
most well-known quotes with what I am saying in
mind- if you can't see a dozen examples
you're not going to agree with a specific one
I point out either. I am refusing to give a
specific example for a reason- you will almost
certainly disagree immediately with the
specific example - because they are written
with an alternate interpretation possible on
purpose - and then think my whole premise must
be wrong as a result without looking at the
bigger picture, and seeing how often this
plausibly deniable concept keeps coming up.
> You are suggesting for example that there is
no redemption or afterlife
I am suggesting no such thing, only that
dwelling on this issue is to miss the point,
and even worrying about it would be an
obstacle. One must deeply feel these ideas and
practice accordingly to follow this spiritual
path- even getting stuck on arguing that they
are true would be an obstacle to that.
shinycode wrote 1 day ago:
Interested if you can give other books/resources on the subject
(not fp though)
vendiddy wrote 1 day ago:
I disagree with the op and found the book to be very good. If you
haven't read it, I would recommend reading it and judging for
yourself.
ninetyninenine wrote 1 day ago:
Can't, because fp is in itself basically a design philosophy
that can be explained in 3 axioms:
Segregate mutation from logic.
Segregate IO from logic.
Eliminate procedures from logic.
The third axiom comes for free, as it falls out automatically
when someone enforces the first two. That's basically the
imperative shell / functional core design philosophy, which is
basically identical to the rules of pure functional programming.
[1] With fp you can think of these rules as enforced by the
language. Outside of fp we call it functional core / imperative
shell, and these rules can be enforced in an imperative language
as a core design philosophy.
URI [1]: https://medium.com/ssense-tech/a-look-at-the-functional-...
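(A minimal Python sketch of the functional core / imperative shell
split being described; all names are hypothetical:)

    # Functional core: pure logic - no IO, no mutation of shared state.
    def apply_discount(total, rate):
        return total * (1 - rate)

    # Imperative shell: all IO stays at the edges.
    def main():
        raw = input("Order total: ")                   # IO in the shell
        discounted = apply_discount(float(raw), 0.10)  # pure core logic
        print(f"Discounted total: {discounted:.2f}")   # IO in the shell

    if __name__ == "__main__":
        main()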
analog31 wrote 1 day ago:
I also found this useful. I'm not a software developer, but use
programming for problem solving and prototyping. Still, things
that "look like software" sometimes leak out of my lab. FP
always set off my BS alarm, because in my simplistic view, the
whole world has state. But even for my crude work, a sort of
"separation of powers" helps clean up my programs a lot, and
code that doesn't need to have side effects can be a lot
cleaner if it's not mixed with code that does.
skydhash wrote 1 day ago:
FP does not deny state; it merely segregates it between the
before and the after, and everything in between is
transient. Then you combine all the individual functions,
piping them into each other, and the whole reflects the same
structure. Then it becomes easier to reason about your logic,
as you only have to worry about 2 states: the input and the
result. No need to care about individual transformations and
ordering like you do in imperative code.
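(One way to picture that in Python - each step is a pure function,
and the composition exposes only the input and the final result;
names are hypothetical:)

    def normalize(words):
        return [w.lower() for w in words]

    def drop_short(words):
        return [w for w in words if len(w) > 3]

    def count(words):
        return len(words)

    # The intermediate lists are transient; only the input and the
    # result matter to the caller.
    result = count(drop_short(normalize(
        ["A", "Philosophy", "of", "Software", "Design"])))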
rramadass wrote 1 day ago:
I would like to echo the user kfreds' sibling comment. I don't
have a FP background either and hence would very much like to
hear your recommendations on books/videos/articles to
understand FP and design the "FP way".
kfreds wrote 1 day ago:
Thank you. I found this comment illuminating. I too am very
interested to hear any book recommendations you have on the
topic.
What are your favorite books on software design, functional
programming, and/or computing generally? What are your favorite
papers on the topic of complexity (as FP defines it)?
BoiledCabbage wrote 1 day ago:
"Domain Modeling Made Functional" is a great read and sits
next to a lot of these topics. Very easy to follow and learn
from even if you don't know (and never intend to use) the
language.