10 Years of Generative AI Slop

2025-01-07

2025 Help Wanted: Artificial engineer, 10 years experience required.

Hot takes on the death of software engineering are legion. Boy, are Venture Capitalists hopped up on hopium over this. Imagine if AI could replace their biggest cost! No more profit sharing with expensive, uppity coders. And the “idea guys”? The idea guys can focus on what they do best, think amazing thoughts while a machine makes it all real. No more 50/50 splits with technical co-founders. Oh joyous day!

But what happens to the software engineering industry after 10 years of the AI productivity that was promised? Are we working four-hour weeks from private islands or are we in homeless tent camps in California? Is Sam Altman or Elon Musk emperor of Earth?

To answer these, we need to estimate the expected speedup on development from generative AI and come to terms with the nature of software development itself. Merrily, our likeliest future is mere generative AI system collapse — dull human programmers, minds muted by AI-assistants, generating code at record rates right up until we bury ourselves in the complexity avalanche.

Yes, the planet got destroyed.  But for a beautiful moment in time we created a lot of value for shareholders.

On speed limits

To estimate the speedup potential from generative AI, we need to guess how much of the software development process can be automated. If you think the answer is “most of it”, please know you are very smart and you should spend more time on Xitter and NFT investing. Go on now, off with ya.

For us remaining idiots, the time breakdown for a typical software engineer is closer to this:

Suspend disbelief that 40% of software engineering time is spent on things generative AI is allegedly good at — coding, writing, research. This is where VC dreams wreck on the rock of Amdahl’s Law. The expected speedup is bounded:

1 / ((1 - 0.40) + (0.40 / AI_MULTIPLIER))

If the AI_MULTIPLIER is 2x on the low end (McKinsey, keep suspending that disbelief) or generative AI turns normies into unicorn 10x devs on the high end (my VC-funded startup needs all the monies), that’s 1.25x and 1.56x efficiency gains, respectively.
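The arithmetic above can be checked with a few lines. This is just Amdahl’s Law applied to the 40% assumption, nothing more:

```python
def amdahl_speedup(ai_fraction: float, ai_multiplier: float) -> float:
    """Upper bound on overall speedup when only `ai_fraction` of the
    work benefits from `ai_multiplier` (Amdahl's Law)."""
    return 1 / ((1 - ai_fraction) + (ai_fraction / ai_multiplier))

# 40% of engineering time is "AI-assistable" (coding, writing, research)
print(round(amdahl_speedup(0.40, 2), 2))    # McKinsey's 2x claim -> 1.25
print(round(amdahl_speedup(0.40, 10), 2))   # 10x-dev fantasy      -> 1.56
print(round(amdahl_speedup(0.40, float("inf")), 2))  # infinite AI -> 1.67
```

Even an infinitely fast AI tops out at 1.67x if the other 60% of the job stays human-paced.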

Respectable, but is it 1 Trillion Dollars of respectable?

On “Unimaginable sums of money”

AI maximalists believe next generation models will capture much more than 40% of software engineering, accelerate product development, and even do our thinking for us. RIP “idea guy”.

If we remove humans from the loop and ignore the fundamental nature of software development, then, sure. Unimaginable sums of money may overcome OpenAI’s failing business and the fact that next-gen models aren’t close to cost-effective.

I can do all things through a16z who strengthens me.
– Silicon Valley 4:13

On the tragic absence of silver allergens in softwerewolves

Even with money no object, the pesky nature of software development may yet prove a problem to our generative AI overlords.

The 1968 NATO Software Engineering Conference recognized the idea of a “software crisis”, and it’s 38 years since Fred Brooks theorized the essential and accidental nature of software complexity. With four decades’ perspective, we can chuckle at his musings that Ada and object-oriented programming were a way out of the wilderness, and yet we credit him with popularizing the idea that there is “no silver bullet” for the complexity that drowns us.

Funnily enough, Fred Brooks mentions AI in his paper.

Many people expect advances in artificial intelligence to provide the revolutionary break-through that will give order-of-magnitude gains in software productivity and quality.
I do not.
Brooks, Frederick P. (1986). “No Silver Bullet—Essence and Accident in Software Engineering”, page 8

The software industry is littered with the wreckage of well-funded projects promising silver bullets: pre-winter AI, CASE tools, RAD tools, VB6, 5GLs, Web 2.0, Web3.

Well did it work for those people?  No, it never does, I mean, these people somehow delude themselves into thinking it might, but ... but it might work for us.

On the nature of software development

Fred Brooks understood what his contemporary Peter Naur knew and what NYPD Detective Robert Thorn discovered: software systems are people, not code.

[Programming] should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.
Naur, Peter (1985). “Programming as Theory Building”

In the “theory building” view of software, the program isn’t machine code, bits in memory, or even higher-level language source code. Those are all reifications of the program’s essence, which is an idea: a theory built and cultivated over time by a team of people.

What keeps the software alive are the programmers who have an accurate mental model (theory) of how it is built and works. That mental model can only be learned by having worked on the project while it grew or by working alongside somebody who did, who can help you absorb the theory. Replace enough of the programmers, and their mental models become disconnected from the reality of the code, and the code dies. That dead code can only be replaced by new code that has been ‘grown’ by the current programmers.
Bjarnason, Baldur (2022). “Out of the Software Crisis”

This is doubly the case with modern SaaS. Customers purchase a service made of software and the team of developers writing and operating that software.

Teams continuously choose automation/toil tradeoffs and the arc of mature systems bends toward automation, but the program theory of how it should behave remains in the care of the team. Whether silicon or meat sacks are doing the work, there are always critical bits of business and operational knowledge that exist exclusively in fallible grey matter, and that theory is critical for growth. These systems are more garden than machine, living things that grow and are symbiotic with their gardeners. System growth is stunted and stops when cloudy minds hold an incomplete theory of the program.

But generative AI promises to effortlessly generate code, reams of it, of questionable provenance, dubious quality, lightly reviewed, and rarely understood.

So you’re telling me there’s a chance.

On the need for a bigger tarpit

Assuming 25M working programmers, a 250-day working year, and a very conservative 8 lines of code per dev/day, that’s 50B lines of code per year. McKinsey Digital claims a 2x productivity boost for devs using generative AI, and if we credulously accept that claim, a fully armed and operational ecosystem fires out 1T lines of code per decade. We are so screwed.
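The napkin math is easy to reproduce, using only the assumptions already stated (25M devs, 250 days, 8 LOC/day, McKinsey’s 2x):

```python
# Back-of-the-envelope code volume under the article's stated assumptions.
DEVS = 25_000_000        # working programmers worldwide
DAYS_PER_YEAR = 250      # working days per year
LOC_PER_DEV_DAY = 8      # very conservative lines of code per dev/day
AI_MULTIPLIER = 2        # McKinsey's claimed generative-AI boost

loc_per_year = DEVS * DAYS_PER_YEAR * LOC_PER_DEV_DAY
loc_per_decade_with_ai = loc_per_year * AI_MULTIPLIER * 10

print(f"{loc_per_year:,}")            # 50,000,000,000  (50B LOC/year)
print(f"{loc_per_decade_with_ai:,}")  # 1,000,000,000,000  (1T LOC/decade)
```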

The act of programming requires an order of magnitude more reading and comprehension than writing. With a tarpit 2x bigger, does the 25% of time spent coding hold steady or do we drown in testing, operations, and overhead because we comprehend our systems even less than we do now?

If programs are theory building, generative AI isn’t optimizing SDLC bottlenecks. While it may grow our code and utility in the short term, generative AI transforms into liability. It puts a drag on theory development as it shrinks the relative portion that people understand. Inaccurate mental models create bugs as the system changes, grows, and interacts with the environment.

On generative research and product development

Notice how far we’ve gotten without mentioning the impact model collapse and LLM hallucinations will have on product development? Using LLMs for “user research” is like thinking conspiracy YouTube is equivalent to scientific health studies. “Do your own research” indeed. From the toilet.

PSA

If you haven’t read Moseley and Marks’ Out of the Tar Pit, please, please stop reading this sentence and fix that problem.

On generative Artificial Indebtedness

A quarterly profit-driven industry means we will take on the high-interest loan generative AI demands. Short-term productivity gains, where the interest comes due long after jumping to a new grift, are venture capitalist catnip. We will likely create an entire generation of generated programs no one wrote and no one understands, maintained by a generation of programmers missing a decade of experience in first-principles engineering.

Mediocre code written at record rates will create entire classes of metastable system failures. The few engineers with skill to wade through the complexity mess will command excessive prices, breaking the venture capitalist cheap labor fever dream.

In ten years, skilled engineers and observability tools will be more crucial than ever. Fred Brooks and Peter Naur will still have their Turing awards. The remaining question: does generative AI system collapse happen before the decade is out? Vegas might set the collapse over-under at 5.5 years. I’m taking the under.

Thanks to my wife - Alicia - and my friend - Holden Galusha - for feedback, proofreading, and editing.

email comments to paul@bauer.codes
