
A Theoretical View on 'Something Big is Happening'

February 19th 2026 · Mart van der Jagt

Introduction

The “Something big is happening” article went viral last week, and when I read it, I understood why. It appeals to a feeling in the stomach, and I share that feeling. Like the author, I work in software, and the leap in AI capabilities is immense. The gap between today and one year ago is huge. I feel like every other day I need to adapt to a new reality: MoltBook, RentAHuman, Opus4.6, GPT5.3, SaaSpocalypse, and now this article. I keep saying how cool it is to be working on the frontier of AI development, but that is also a way to deal with the itch in my stomach I feel every time. It is probably the reason I started writing articles: to assert some measure of control again.

Thoughts on the article

The Something Big article was clearly written with the help of AI. Here are a few examples. The most obvious clue is the em dash: “The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over.” I don’t think most people even know where to find it on their keyboard. AI-generated text also tends to use the “Here’s the… :” pattern a lot, and there are multiple occurrences in the article: “Here’s the thing nobody outside of tech quite understands yet:”, “Here’s a simple commitment that will put you ahead of almost everyone:”.

To be clear, this is not a problem in itself; I also use AI to help me write. However, the writer made certain choices that, especially in this day and age, and especially for an article that went viral (not his fault, of course), are reprehensible. He chose a tone, or let the LLM choose a tone, that played on sentiment, and then wrote the article (or let it be written) toward that sentiment.

I can’t say how much of it he wrote himself, but that is the whole problem now that we have AI to generate text: it is so hard to distinguish authenticity and truth from what merely appears to be true. I believe the only way to overcome this is to force yourself to support your claims with evidence: scientific, empirical, logical. As long as you make the effort to back up your claims, and to try to falsify them.

The claim in this article with the biggest impact is that AI will accelerate exponentially and that there are no signs of flattening:

Graph from the Something Big is Happening article showing exponential AI capability growth

This is a huge claim. It speaks directly to that feeling in the stomach, and it deserves more than a single source to back it up. I don’t have the expertise to confirm or refute it, but I do want to provide an alternative view, if only to show that there are always different perspectives to be found.

Whether an LLM can run for hours at a time says nothing about the quality of the result. It ignores the 80/20 “rule”, where the last 20% of the work costs you 80% of the effort. If you put the SWE-bench bash-only results in a graph, you will see the curve flattening. The huge leap of Claude Opus 4.6 resulted in only a 1.2-point gain on SWE-bench. And yes, the timeframe of this analysis is annoyingly short; yes, on the scale of humanity’s existence 1.2 points is still a gigantic leap; and yes, I don’t even know whether the 80/20 rule has any scientific support. But these caveats are exactly my point: claims this important must be substantially backed up, something the article fails to do.

SWE-bench verified results showing a flattening curve of AI performance improvements
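The flattening argument can be made concrete with a small sketch. The scores below are hypothetical placeholders, not real SWE-bench numbers: on a benchmark bounded at 100%, each new model generation tends to close only a fraction of the remaining headroom, so absolute gains shrink even while models keep “improving”:

```python
# Illustrative sketch of why a bounded benchmark flattens.
# The scores below are hypothetical, NOT real SWE-bench results.
scores = [62.0, 71.0, 76.5, 79.2, 80.4]  # successive model generations

# Absolute gain per generation: keeps shrinking.
deltas = [round(b - a, 1) for a, b in zip(scores, scores[1:])]
print("absolute gains per generation:", deltas)

# Fraction of the remaining headroom (distance to 100%) closed each step.
closed = [round((b - a) / (100.0 - a), 2) for a, b in zip(scores, scores[1:])]
print("fraction of remaining gap closed:", closed)
```

Even under this toy model, a shrinking point gain does not mean the model stopped improving; it means the benchmark has a ceiling, which is exactly why a single curve is weak evidence either way.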

A theoretical view

I do want to offer you an alternative theoretical view: Shell Theory. Although not empirically backed up, it is logically supported. It does not contradict what is written in the article, but it offers a more moderate view of what is happening:

Shell Theory conceptual diagram showing how AI amplifies high-agency users beyond baseline

With AI, everybody gets lifted. You can be content with that, or you can use AI as a lever to learn new things. But that takes a lot of energy and time, and it is hard precisely because it is so easy to get things 80% right already. At some point you will pass beyond the flatline that AI gives you for free, and from there on you will be amplified beyond your own capabilities. If you worry about what is coming next, the best thing you can do is force yourself to show agency and discipline: the same discipline that was required in the pre-AI era to understand and learn about the world.
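The model described above can be sketched roughly as follows. All constants here are my own illustrative assumptions, not part of Shell Theory itself: AI lifts everyone to a free baseline, and only capability built beyond that baseline gets amplified rather than merely added:

```python
# Rough sketch of the Shell Theory idea described above.
# All constants are illustrative assumptions, not empirical values.
AI_BASELINE = 80.0   # the "80% for free" flatline AI gives everyone
THRESHOLD = 80.0     # own capability needed before amplification kicks in
AMPLIFICATION = 1.5  # leverage AI gives to capability beyond the threshold

def effective_output(own_capability: float) -> float:
    """Output with AI: never below the free baseline; capability
    past the threshold is amplified instead of merely added."""
    if own_capability <= THRESHOLD:
        return AI_BASELINE
    return AI_BASELINE + AMPLIFICATION * (own_capability - THRESHOLD)

for cap in (40, 80, 90, 100):
    print(cap, "->", effective_output(cap))
```

The key property of this toy function is the flat region: everyone below the threshold produces the same output, which is why staying there feels comfortable and why the gains only show up after sustained effort.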

If you are interested in reading more about Shell Theory, see this link (still based on Opus 4.5 mid bench). And yes, that article was written with the help of AI: to find the right words, and to find the logical inconsistencies in the argumentation. The model is still the same as it came to mind at 5 AM. And I spent hours and hours, together with Claude Opus 4.5, sharpening the thesis and getting it lifted into the amplification zone.


This article was (with the exception of SVG generation) written 100% without the support of AI.


Interested to find out whether you are in the shell zone? See Am I a Shell?