_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (inofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI Automatic Programming
keepamovin wrote 9 min ago:
Thank you. I and you can be proud. Yes we can! :)
I posted yesterday about how I'd invented a new compression algorithm,
and used an AI to code it. The top comment was like "You or Claude? ...
also ... maybe consider more than just 1-shotting some random idea."
This was apparently based on the signal that I had incorrectly added
ZIP to the list of tools that use LZW (which is a tweak of LZ78, the
dictionary-based variant from the same Lempel-Ziv team behind LZ77,
the back-reference scheme actually used in ZIP). This mistake was
apparently a signal that I had no idea what I was doing, that I was a
script kiddie who had just tried to one-shot some crap idea and ended
up with slop.
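(Editorial aside: for readers tripped up by the same distinction, LZW, like
LZ78, grows a phrase dictionary as it scans the input, while ZIP's Deflate
instead emits LZ77-style back-references. A minimal illustrative sketch of
classic textbook LZW compression - not the algorithm being discussed in
this comment:)

```python
def lzw_compress(data: bytes) -> list[int]:
    # Start with a dictionary of all single-byte phrases (codes 0-255).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate          # extend the current known phrase
        else:
            output.append(dictionary[current])  # emit code for known phrase
            dictionary[candidate] = next_code   # learn the new phrase
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])      # flush the last phrase
    return output
```

Repetitive input quickly hits the learned phrases, e.g.
`lzw_compress(b"ABABAB")` yields `[65, 66, 256, 256]`, where code 256 is the
phrase "AB" learned on the first pass.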
This was despite the code working and the results table being accurate.
Admittedly the readme was hyped and that probably set this person off
too. But they were so far off in their belief that this was Claude's
idea, Claude's solution, and just a one-off, that they misrepresented
not only me and my work, but the whole process it would actually take
to make something like this.
I feel that perhaps someone making such comments does not have much
familiarity with automatic programming. Because here's what actually
happened: the path to get from my idea (intuited in 2013, but beyond my
skills to do easily until using AI) was about as far from a 'one-shot'
as you can get.
The first iteration (Basic LZW + unbounded edit scripts + Huffman) was
roughly 100x slower. I spent hours guiding the implementation through
specific optimization attempts:
- BK-trees for lookups (eventually discarded as slow).
- Then going to Arithmetic coding. First both codes + scripts, later
splitting.
- Various strategies for pruning/resetting unbounded dictionaries.
- Finally landing on a fixed dict size with a Gray-Code-style nearest
neighbor search to cap the exploration.
The AI suggested some tactical fixes (like capping the Levenshtein
table, or splitting edits/codes in arithmetic coding), but the
architectural pivots came from me. I had to find the winning path.
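(Editorial aside: "capping the Levenshtein table" plausibly refers to the
standard banded-DP trick - if only edit distances up to some cap matter,
cells more than `cap` off the diagonal can never contribute and can be
skipped. A hypothetical sketch of that idea, an assumption about the fix
rather than the actual code:)

```python
def capped_levenshtein(a: str, b: str, cap: int):
    # Edit distance if it is <= cap, else None.
    # Only cells within `cap` of the diagonal can hold a value <= cap,
    # so the rest of the DP table is skipped (the "capped table").
    if abs(len(a) - len(b)) > cap:
        return None                      # length gap alone exceeds the cap
    prev = list(range(len(b) + 1))       # row 0: distance from empty prefix
    for i, ca in enumerate(a, 1):
        cur = [i] + [cap + 1] * len(b)   # cap+1 marks out-of-band cells
        lo = max(1, i - cap)
        hi = min(len(b), i + cap)
        for j in range(lo, hi + 1):      # fill only the diagonal band
            cost = 0 if ca == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        if min(cur) > cap:
            return None                  # whole row over cap: give up early
        prev = cur
    return prev[-1] if prev[-1] <= cap else None
```

This keeps the work per comparison at O(cap * len) instead of the full
quadratic table, which matters when probing many dictionary candidates.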
I stopped when the speed hit 'sit-there-and-watch-it-able' (approx 15s
for 2MB) and the ratio consistently beat LZW (interestingly, for
smaller dictionaries, which makes sense, as the edit scripts make each
word more expressive).
That was my bar: Is it real? Does it work? Can it beat LZW? Once it
did, I shared it. I was focused on the benchmark accuracy, not the
marketing copy. I let the AI write the hyped readme - I didn't really
think it mattered. Yes, this person fixated on a small mistake there,
and completely misrepresented, or had the wrong model of, what it
actually took to produce this.
I believe that kind of misperception must be the result of a lack of
familiarity with using these tools in practice. I consider this kind
of "disdain from the unserious & inexperienced" to be a low quality,
low effort comment that essentially equates AI with clueless engineers
and slop.
As antirez lays out: the same LLMs produce very different results
depending on the human who is guiding the process with their
intuition, design, continuous steering, and idea of the software.
Maybe some people are just pissed off - maybe their dev skills sucked
before AI, maybe they still suck with AI, and now they are mad at
everything good people are doing with AI, and at AI itself?
Idk, man. I just reckon this is the age where you can really make
things happen that you couldn't make before, and you should be into it
and positive, if you are serious about making stuff. And making stuff
is never easy. And it's always about you. A master doesn't blame his
tools.
falloutx wrote 21 min ago:
Maybe a language issue, but "Automatic" would imply something
happening without any intervention. Also, I don't like that everyone
is trying to coin a term for this, but there is already a term called
lite coding for this sort of setup, I just coined it.
laserlight wrote 24 min ago:
Have we ever had autocomplete programming? Then why have a new term for
LLM-assisted programming?
doe88 wrote 24 min ago:
@antirez if you're reading this, I think it would be insightful if
you could share what your current AI workflow is, the tools you use,
etc. Thanks!
antirez wrote 20 min ago:
Thanks, sharing a lot on X / BlueSky + YouTube, but once the C course
on YouTube is finished, I'll start a new course on programming in
this way. I need a couple more lessons to declare the C course closed
(later I'll likely restart it with the advanced part). Then I can
start with the AP course.
jpnc wrote 26 min ago:
How does it feel to see all your programming heroes turn into
Linkedin-style influencers?
VadimPR wrote 33 min ago:
"I automatically programmed it" doesn't really roll off the tongue, nor
does it make much sense - I reckon we need a better term.
It certainly quicker (and at times, more fun!) to develop this way,
that is for certain.
sesm wrote 13 min ago:
"Throwaway prototype" - that's the traditional term for this.
falloutx wrote 19 min ago:
I coined the term lite coding for this after reading this article and
now my chatGPT has convinced me that I am a genius
antirez wrote 29 min ago:
You will say "I programmed it"; there is no longer a need for this
distinction. But then you can add that you used automatic programming
in the process. Though shortly there will be no need to refer to this
term, similarly to how today you don't specify that you used an
editor...
baq wrote 11 min ago:
I like to think that the prompt is dark magic and the outputs are
conjured. I get to feel like a wizard.
Imustaskforhelp wrote 15 min ago:
(Yes?) but the editor isn't claiming to take your job in 5 years.
Also I do feel like this is a very substantial leap.
This is sort of like the difference between some and many.
Your editor has some effect on the final result, so crediting
it/mentioning it doesn't really impact it (but people still do
mention their editor choices, and I know some git repos with a
.vscode directory which can show that the creator used vscode; I am
unfamiliar whether the same might be true for other editors too).
But especially with AI, the difference is that I personally feel like
it's doing most of the work. It's literally writing the code which
turns into the binary which runs on the machine, while being a black
box.
I don't really know, because it's something that I am conflicted
about too, but I just want to speak my mind, even if it may be a
little contradictory on the whole AI distinction thing, which is why
I wish to discuss it with ya.
dugmartin wrote 35 min ago:
I have 30+ years of industry experience and I've been leaning heavily
into spec driven development at work and it is a game changer. I love
programming and now I get to program at one level higher: the spec.
I spend hours on a spec, working with Claude Code to first generate and
iterate on all the requirements, going over the requirements using
self-reviews in Claude first using Opus 4.5 and then CoPilot using
GPT-5.2. The self-reviews are prompts to review the spec using all the
roles and perspectives it thinks are appropriate. This self review
process is critical and really polishes the requirements (I normally
run 7-8 rounds of self-review).
Once the requirements are polished and any questions answered by
stakeholders, I use Claude Code again to create an extremely detailed
and phased implementation plan with full code, again all in the spec
(using a new file if the requirements doc is so large it fills the
context window). The implementation plan then goes through the same
multi-round self-review using two models to polish (again, 7 or 8
rounds), finalized with a review by me.
The result? I can then tell Claude Code to implement the plan and it
is usually done in 20 minutes. I've delivered major features using
this process with zero changes in acceptance testing.
What is funny is that everything old is new again. When I started in
industry I worked in defense contracting, working on the project to
build the "black box" for the F-22. When I joined the team they were
already a year into the spec writing process with zero code produced
and they had (iirc) another year on the schedule for the spec. At my
third job I found a literal shelf containing multiple binders that laid
out the spec for a mainframe hosted publishing application written in
the 1970s.
Looking back I've come to realize the agile movement, which was a
backlash against this kind of heavy waterfall process I experienced at
the start of my career, was basically an attempt to "vibe code" the
overall system design. At least for me AI assisted mini-waterfall
("augmented cascade"?) seems a path back to producing better quality
software that doesn't suffer from the agile "oh, I didn't think of
that".
mentos wrote 5 min ago:
I believe the future of programming will be specs so Iâm curious to
ask you as someone who operates this way already, are there any
public specs you could point to worth learning from that you revere?
Iâm thinking the same way past generations were referred to John
Carmackâs Quake code next generations will celebrate great specs.
catdog wrote 6 min ago:
As is so often the case in life, extreme approaches are often bad. If
you do pure waterfall, you risk finding out very late that your plan
might not work out, either because of unforeseen technical
difficulties implementing it, the given requirements actually being
wrong/incomplete, or simply missing the point in time where you have
planned enough. If you do extreme agile, you often end up with a shit
architecture, which, among other things, hurts your future agility,
but you get a result which you can validate against reality.
The "oh, I didn't think of that" is definitely present in both
extremes.
AdamN wrote 23 min ago:
Waterfall can work great when: 1/ the focus is long-term, both in
terms of knowing that the company can take a few years to get the
thing live and that it will be around for many more years, 2/ the
people writing the spec and the code are largely the same people.
Agile was really pushing to make sure companies could get software
live before they died (number 1) and to remedy the anti-pattern that
appeared with number 2, where non-technical business people would
write the (half-assed) spec and then technical people would be
expected to do the monkey work of implementing it.
aglavine wrote 4 min ago:
No.
Agile's core is the feedback loop. I can't believe people still don't
get it. Feedback from reality is always faster than guessing in the
air.
Waterfall is never great. The only time you need something other
than Agile is when lives are at stake; there you need formal
specifications and rigorous testing.
SDD allows better output than traditional programming. It is
similar to waterfall in the sense that the model helps you to write
design docs in hours instead of days and take more into account as
a result. But the feedback loop is there and it is still the key
part in the process.
user3939382 wrote 13 min ago:
I spent my career building software for executives that wanted to
know exactly what they were going to get and when because they have
budgets and deadlines i.e. the real world.
Mostly Iâve seen agile as, letâs do the same thing 3x we could
have done once if we spent time on specs. The key phrase here is
ârequirements analysisâ and if youâre not good at it either
your software sucks or youâre going to iterate needlessly and
waste massive time including on bad architecture. You donât
iterate the foundation of a house.
I see scenarios where Agile makes sense (scoped, in house software,
skunk works) but just like cloud, jwts, and several other things
making it default is often a huge waste of $ for problems you/most
donât have.
Talk to the stakeholders. Write the specs. Analyze. Then build.
âWaterfallâ became like a dirty word. Just because megacorps
flubbed it doesnât mean you switch to flying blind.
reidrac wrote 36 min ago:
> Pre-training is, actually, our collective gift that allows many
individuals to do things they could otherwise never do, like if we are
now linked in a collective mind, in a certain way.
It's not a gift if it was stolen.
Anyway, in my opinion the code that was generated by the LLM is yours
as long as you're responsible for it. When I look at a PR I'm reading
the output of a person, independently of the tools that person used.
There's conflict perhaps when the submitter doesn't take full
ownership of the code. So I agree with antirez on that part.
tonyedgecombe wrote 19 min ago:
>Is not a gift if it was stolen.
Yeah, I had a visceral reaction to that statement.
slim wrote 22 min ago:
It is knowledge, it can't be stolen. It is "stolen" only in the sense
of someone gatekeeping knowledge, which as a practice is, to say the
least, dubious. Because is math stolen? If you stole math to build
your knowledge on top of it, then you own nothing and could be said
to have stolen it yourself.
jakkos wrote 5 min ago:
Are you against copyright, patents, and IP in all forms then?
mgaunard wrote 36 min ago:
I stopped reading at "soon to become the practice of writing software".
That belief has no basis at this point, and it's been demonstrated
not only that AI doesn't improve coding, but also that the associated
costs are not sustainable.
reidrac wrote 34 min ago:
I continued reading, but you're right. Why did the author feel it was
necessary to include that?
jakkos wrote 38 min ago:
> Pre-training is, actually, our collective gift
I feel like this wording isn't great when there are many impactful open
source programmers who have explicitly stated that they don't want
their code used to train these models and licensed their work in a
world where LLMs didn't exist. It wasn't their "gift", it was
unwillingly taken from them.
> I'm a programmer, and I use automatic programming. The code I
generate in this way is mine. My code, my output, my production. I, and
you, can be proud.
I've seen LLMs generate code that I have immediately recognized as
being copied from a book or technical blog post I've read before
(e.g. exact same semantics, very similar comment structure and
variable names). Even if not legally required, crediting where you
got ideas and code from is the least you can do, while LLMs just
launder code as completely your own.
hjoutfbkfd wrote 26 min ago:
when you implement a quicksort, do you credit Hoare in the comments?
jakkos wrote 9 min ago:
No, in the same way that I wouldn't cite Euler every time I used
one of his theorems - because it's so well known that its history
is well documented in countless places.
However, if I was using a more recent/niche/unknown theorem, it
would absolutely be considered bad practice not to cite where I got
it from.
antirez wrote 22 min ago:
Now many will downvote you because this is an algorithm and not
some code. But the reality is that programming is in large part
built looking at somebody else code / techniques, internalizing
them, and reproducing them again with changes. So actually it works
like that for code as well.
p-e-w wrote 30 min ago:
> I feel like this wording isn't great when there are many impactful
open source programmers who have explicitly stated that they don't
want their code used to train these models
That's been the fate of many creators since the dawn of time. Kafka
explicitly stated that he wanted his works to be burned after his
death. So when you're reading about Gregor's awkward interactions
with his sister, you're literally consuming the private thoughts of
a stranger who stated plainly that he didn't want them shared with
anyone.
Yet people still talk about Kafka's "contribution to literature" as
if it were otherwise, with most never even bothering to ask
themselves whether they should be reading that stuff at all.
yuvadam wrote 31 min ago:
I don't think it's possible to separate any open source contribution
from the ones that came before it, as we're all standing on the
shoulders of giants. Every developer learns from their predecessors
and adapts patterns and code from existing projects.
heavyset_go wrote 22 min ago:
You can say that about literally everything, yet we have robust
systems for protecting intellectual property, anyway.
antirez wrote 24 min ago:
Exactly that. And all the books about, for instance, operating
systems, totally based on the work of others: their ideas where
collected and documented, the exact algorithms, and so forth. All
the human culture worked this way. Moreover there is a strong
pattern of the most prolific / known open source developers being
NOT against the fact that their code was used for training: they
can't talk for everybody but it is a signal that for many this use
is within the scope of making source code available.
Imustaskforhelp wrote 24 min ago:
> I don't think it's possible to separate any open source
contribution from the ones that came before it, as we're all
standing on the shoulders of giants. Every developer learns from
their predecessors and adapts patterns and code from existing
projects.
Yes, but you can also ask the developer (whether on libera.irc, or,
if it's a FOSS project, at any FOSS talk) which books and blogs they
followed for code patterns & inspiration, & just talk to them.
I do feel like some aspects of this are gonna get eaten away by the
black box if we do spec-development, imo.
jakkos wrote 25 min ago:
If you fork an open source project and nuke the git history, that's
considered to be a "dick move" because you are erasing the record
of people's contributions.
LLMs are doing this on an industrial scale.
rtpg wrote 42 min ago:
Every time I hear someone mention they vibed a thing or claude gave
them something, it just reads as a sort of admission that I'm about to
read some _very_ "first draft"-feeling code. I get this even from
people who spend a lot of time talking about needing to own code you
send up.
People need to stop apologizing for their work product because of the
tools they use. Just make the work product better and you don't have to
apologize or waste people's time.
Especially given that you have these tools to make cleanup easier (in
theory)!
mccoyb wrote 45 min ago:
A better term might be "feedback engineering" or "verification
engineering" (what feedback loop do I need to construct to ensure
that the output artifact from the agent matches my specification?).
This includes standard testing strategies, but also much more general
processes.
I think of it as steering a probability distribution.
At least to me, this makes it clear where "vibe coding" sits ...
someone who doesn't know how to express precise verification or
feedback loops is going to get "the mean of all software".
noodletheworld wrote 47 min ago:
Vibe Engineering. Automatic Programming. "We need to get beyond the
arguments of slop vs sophistication..."
Everyone seems to want to invent a new word for 'programming with AI'
because 'vibe coding' seems to have come to equate to 'being rubbish
and writing AI slop'.
...buuuut, it doesn't really matter what you call it does it?
If the result is slop, no amount of branding is going to make it not
slop.
People are not stupid. When I say "I vibe coded this shit" I do not
mean, "I used good engineering practices to...". I mean... I was lazy
and slapped out some stupid thing that sort of worked.
/shrug
When AI assisted programming is generally good enough not to be called
slop, we will simply call it 'programming'.
Until then, it's slop.
There is programming, and there is vibe coding. People know what they
mean.
We don't need new words.
songodongo wrote 49 min ago:
Not that I necessarily disagree with any of it, but one word comes to
mind as I read through it: "copium".
marmalade2413 wrote 50 min ago:
I disagree with referring to this as automatic software as if it's a
binary state. It's very much a spectrum, and this kind of software
development is not fully automatic.
There's actually a wealth of literature on defining levels of software
automation (such as: [1] ).
URI [1]: https://doi.org/10.1016/j.apergo.2015.09.013
sandruso wrote 50 min ago:
> Pre-training is, actually, our collective gift that allows many
individuals to do things they could otherwise never do, like if we are
now linked in a collective mind, in a certain way.
The question is whether you can have it all. Can you get faster
results and still be growing your skills? Can we 10x the collective
mind's knowledge with the use of AI, or do we need to spend a lot of
time learning the old way(TM) to move the industry forward?
Also, nobody needs to justify what tools they are using. If there is
a pressure to justify them, we are doing something wrong.
Imustaskforhelp wrote 1 min ago:
People feel ripped off by AI & products which use AI. So this is the
reason why you have to justify the tool use of AI.
fwlr wrote 52 min ago:
It's very healthy to have the "strong anti-disclosure" position
expressed with clarity and passion.
xixixao wrote 52 min ago:
This is a classic false dichotomy. Vibe coding, automatic coding, and
coding are clearly on a spectrum, and I can employ all the shades
during a single project.
pseidemann wrote 40 min ago:
AI is like an instrument which can be played in various ways,
different styles and intensities.
One might say it's spec strumming.
margorczynski wrote 53 min ago:
Vibe coding is an idiotic term and it's a shame that it stuck. If I'm
a project lead and just giving directions to the devs, am I also
"vibe coding"?
I guess a large part of that is that 1-2 years ago the whole process
was much more non-deterministic, and actually getting a sensible
result was much harder.
nubg wrote 45 min ago:
I think if a manager just gave some high-level instructions and then
went mostly hands-off until team members started quitting, dying,
etc., and only then stepped in, that would be vibe managing. Normal
managing would be much more supervision and guidance through
feedback. This aligns 100% with TFA.
DIR <- back to front page