Warning: This article has a lot of embedded code, so the ball machine is slow unless you have a REALLY fast computer. Play at your own risk.
Seriously, though.
GPT-5, as a simple text completion model, is not a revelation.
This isn’t so surprising. It was becoming clearer with every new raw LLM release that the fundamental improvements from scaling the core text predictor alone were starting to show diminishing returns. But I’m going to argue today that, although the LLM itself is nowhere near as big a leap from GPT-4 as GPT-4 was from GPT-3, we have still seen at least a whole version number of real improvement between the releases of GPT-4 and 5, as much as we saw between 3 and 4. The reasons for that lie mostly in what exists around that LLM core.
August 24, 2025 · 9 minutes · Read more →
Recently I read the AI 2027 paper. I was surprised to see Scott Alexander’s name on it, and I was doubly surprised to see him do his first face-reveal podcast about it with Dwarkesh.
On its face, this is one of the most aggressive predictions I have read for when we will have AGI (at least by the newer definition of AGI: something comparable to or better than humans at all non-bodily tasks). Even as someone who has long been a believer in Ray Kurzweil’s Singularity predictions, 2027 strikes me as very early. I realize that Kurzweil’s AGI date was also the late 2020s, which puts his prediction in line with AI 2027; 2045 was his Singularity prediction. But 2027 still feels early to me.
August 1, 2025 · 21 minutes · Read more →
During my parental leave, which ends tomorrow, I played through quite a few video games - something I love, and one of the easiest ways to spend time while rocking my baby daughter to sleep. And it doesn’t hurt that my amazing wife loves watching games with me about as much as TV or movies, as long as they are beautiful or cooperative in some way. All of these are.
They include, in order:
July 7, 2025 · 8 minutes · Read more →
I have long been of the mind that LLMs and their evolutions are truly thinking, and that they are on their way to solving all of the intellectual tasks that humans can solve today.
To me, it is just too uncanny that the technology that seems to have made the final jump to some degree of competence in tasks requiring what is commonly understood as “thinking” or “understanding”, after a long string of attempts and architectures that failed at those tasks, is a type of neural network. It would be much easier to argue away transformer models as non-thinking stochastic parrots if we had happened to succeed with any architecture other than the one designed to mimic our own brains and the neurons firing off to one another within them. It’s just too weird. They are shaped like us, they sound like us in a lot of ways, and it’s obvious they are thinking something like us too.
June 16, 2025 · 14 minutes · Read more →
Lots of chatter right now about AI replacing software developers.
I agree - AI will take over software development. The question is: what work will be left when this happens?
January 24, 2024 · 2 minutes · Read more →
Editor’s note from 2025:
This article was written as part of the launch of Treekeepers VR and the sole proprietorship Together Again Studios, and it represents some of my core beliefs about the value of VR and where it’s taking us socially. Though I’m no longer actively working on Treekeepers, I still hold that VR and AR are truly the “endgame” of interfaces, one that could save us from some of the social attitudes fostered by today’s social media. Enjoy!
With Together Again Studios and Treekeepers VR, we’re setting out to solve an insidious problem we see all around us:
August 1, 2022 · 3 minutes · Read more →