Is the AI Hype Over?

This time last year, the world was really hyped about AI. Arguably, now at the start of 2025, there’s more talk of “AI” than ever, but while the stock market boys haven’t got the message yet, it sure seems like this hype bubble is floating straight towards a pin… But before we draw any conclusions, let’s take a look at the state of the market and the tools that are currently available. It’s also worth making clear that when we talk about “AI” here, we aren’t talking about true artificial intelligence so much as machine learning and neural networks. This is a novel form of computing for sure, but none of it is anywhere near intelligent. We aren’t talking about AGI either (that’s the scary one); what’s actually on offer ranges from marketing BS to a fancy form of database.

Most AI tools and projects fit into a couple of categories: generative AI – often shortened to GenAI – and analysis tools like optical character recognition. Within GenAI you’ll find text generation (LLMs, or large language models), image generation – think Stable Diffusion or DALL-E – and video generation, like OpenAI’s SORA. Seeing as how ChatGPT stole a lot of the limelight last year, let’s start with that.

ChatGPT had quite an interesting year: throughout 2024 OpenAI kept teasing new models that were record-breaking, game-changing, and as close to AGI as you can get, and yet the reality? Well, sure, o1 is better than GPT-4o, especially at what looks like reasoning, but to say that it’s ‘thinking’ or ‘reasoning’ is to anthropomorphise an inanimate object. To be clear though, the improvements OpenAI have made to their GPT models – and more specifically to their ChatGPT service, which combines multiple neural networks into one seemingly cohesive user experience – are amazing. As an example, their new “chat to ChatGPT” feature is a combination of their Whisper model that converts audio to text, GPT-4o that turns what you said into a response, and then their TTS model that turns text into pretty natural-sounding audio. When you ask ChatGPT to create an image for you, it spins up DALL-E in the background. It’s really cool, but it isn’t as alive as OpenAI makes it out to be. The same goes for the o1 and o1-mini models, with their specialty being deductive reasoning. In tests you’ll regularly find that o1 in particular gives you considerably longer, more detailed, and more, well, reasoned answers. This is really cool – it gets more of the questions you ask right than wrong, which is a great start – although it’s worth remembering that it still isn’t actually reasoning. It shows you its “thought process”, which is great – more transparency into how neural networks work is long overdue, and it’ll sure help kids cheat better on tests – but I suspect a little smoke and mirrors there. OpenAI’s objective, their incentive-driven goal, is to convince you that their models – and to a degree similar models – are more advanced than they are, and more worthy of your $20, or $200, a month.
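If you’re curious what that voice pipeline looks like when you chain it together yourself, here’s a rough sketch using OpenAI’s public Python SDK. To be clear, this is my guess at the shape of it, not OpenAI’s actual internal plumbing – the model names (whisper-1, gpt-4o, tts-1) are just the publicly documented ones.

```python
# Rough sketch of a "talk to ChatGPT" style pipeline: speech -> text -> reply -> speech.
# Not OpenAI's internal implementation - just the same idea chained through their
# public Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech to text (Whisper)
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2. Text to response (a GPT model)
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3. Response text back to audio (TTS)
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("answer.mp3")
```

Three separate models, one seemingly continuous conversation – that’s the trick.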

One concern I have with improving these models is the training data. The way you train a neural network is by giving it a whole lot of training data – labelled data, I should add – so it can build associations. The more data, the more refined the model. You can do plenty of optimisations on the actual code itself – gradient descent is a tricky problem, especially when trying to balance cost against training time – but at the core of it, you need more data. OpenAI and their peers have already scraped the entire internet, trained their models on every book ever written – with or without permission – and anything else they can get their hands on. What now? How can you improve the model with more data when you’ve got the entire collective works of the human race and that isn’t enough? You can get your models to train themselves on their own output, but that very quickly runs you into issues with the model reinforcing undesirable outcomes – so-called model collapse.
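To make “training” concrete, here’s the idea at its absolute smallest – a toy, single-weight model fitted with gradient descent on labelled (x, y) pairs. This is obviously nothing like training an LLM; it’s just a sketch to show that the loop itself is simple, and the data is the scarce ingredient.

```python
# Toy illustration (nothing like a real LLM): fit a one-weight linear model with
# plain gradient descent on labelled (x, y) pairs. "Training" is just repeatedly
# nudging the weight to reduce error on the data you have.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=500)            # inputs
y = 3.0 * x + rng.normal(0, 0.1, size=500)  # labels: true slope 3, plus noise

w = 0.0    # the model's single parameter
lr = 0.1   # learning rate - one of the cost-vs-training-time knobs
for step in range(200):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)      # gradient of mean squared error w.r.t. w
    w -= lr * grad                          # gradient descent step

print(f"learned w = {w:.3f} (true value 3.0)")
```

More (or cleaner) data is what tightens that estimate; a cleverer loop mostly just gets you to the same place faster.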

One area that o1 in particular is meant to be better at is programming – there’s lots of talk about how AI tools are going to replace programmers’ jobs real soon, but… That would assume these models can genuinely do deductive reasoning – well, actually more than that: that they can spot problems in code without ever running it, giving it test data, or even knowing what the code does. For basic CRUD tasks, an AI tool is arguably a fantastic productivity booster. “Set up a React 19 project with a contact form, react-select, and API calls to a NodeJS backend” will give you what an intern would take a month to do. “Write a unit test for this code” – done. But give it an abstract problem – the sort of thing you’d be afraid to post on Stack Overflow because you’d get hate-mailed out of existence – and it’ll struggle. Now again, o1 is better than the rest at this, but it isn’t revolutionary. Programmers don’t need to fear for their jobs, mostly because first the client would need to understand their own requirements, and we all know that’s just not possible, but second because LLMs can’t think. They can’t solve unique and novel problems, especially without an ability to understand the problem – they can’t actually run the code and breakpoint their way through an issue to see where the data is getting changed, nor can they understand the nuance of something like a bug in the framework or the language you’re using. Without the capacity to genuinely think, experiment and, yeah, reason, us coders still have jobs for a while yet.
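For a sense of where the “done” column sits, here’s the sort of thing an LLM will happily knock out on request – the slugify function and its tests below are made up purely for illustration (and in Python with pytest rather than the React/Node stack above, just to keep it short).

```python
# The kind of boilerplate an LLM handles well. The function and tests are
# invented for illustration - they aren't from any real project.
import re

import pytest


def slugify(title: str) -> str:
    """Turn a post title into a URL-friendly slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    if not slug:
        raise ValueError("title contains no usable characters")
    return slug


def test_slugify_basic():
    assert slugify("Is the AI Hype Over?") == "is-the-ai-hype-over"


def test_slugify_collapses_punctuation_and_spaces():
    assert slugify("  Hello,   World!! ") == "hello-world"


def test_slugify_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("!!!")
```

Ask for the same treatment of a genuinely novel, abstract problem, though, and you’re back on your own.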

This whole “not aware of what it’s doing” problem persists across all generative AI tools, and image generation is no different. That is exactly why earlier models just could not draw hands – because they’re not drawing. There is no intrinsic understanding of what the words you type in mean. The model doesn’t know what a person looks like, just how to arrange pixels to get a reward. Obviously with training it can get better – and it has – but much like LLMs there is a limit to the training data, as well as a limit to how far the underlying networks can be pushed for better results. Naturally that translates to video generation too – like OpenAI’s SORA – only more so. One thing SORA generally can’t get right is physics, because it isn’t a simulation, it’s an approximation. Of course, much like hands in pictures, AI video creation tools will get better and more convincing, but, even more than with images, there’s a limit to the training data available.

Then there is the class of AI tools that I think are genuinely game-changing, namely the analysis tools. Optical character recognition, and the wider computer vision market, has improved a lot with the introduction of neural networks. Even basic neural networks can generally figure out characters, numbers and basic shapes, and – in part thanks to generative AI models needing a classification system that isn’t just new-age slave labour – there are now computer vision models that can detect considerably more advanced objects, people, and kinda everything under the sun. My absolute favourite use of AI is in healthcare – and not a new-age version of WebMD that just tells everyone they have cancer for a papercut on their finger or a runny nose, but for actually finding cancer, early. A number of tools have hit the news in the last year for being able to detect cancer in scans well before human eyes can, with a shockingly high success rate. The magic of neural networks is that they are just looking for patterns, and while humans are pretty good at that, if you sic an AI on a singular problem with enough data, it’s going to find patterns we never would have even considered. The downside is that they are generally black boxes with little ability to explain themselves, so while these often proprietary tools can do the job – one called Sybil can detect lung cancer with 80 to 95 percent effectiveness before doctors can find anything – once the endless grind of capitalism takes hold, and it will, there’s no way to share that life-saving knowledge, because it’s locked up in a proprietary, unexplainable neural network. Still, these tools can and will save lives – are already saving lives – and that is absolutely amazing.
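As a toy illustration of the “even basic neural networks can figure out characters” point, a small off-the-shelf classifier on scikit-learn’s bundled 8x8 handwritten digits gets well into the high nineties for accuracy. To be clear, this is nothing like a clinical model such as Sybil – it’s just the same pattern-finding idea at miniature scale.

```python
# Minimal "neural network reads characters" demo on scikit-learn's bundled
# 8x8 handwritten-digit images. A toy, not a medical imaging model.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labelled 8x8 images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small multi-layer perceptron - one hidden layer is plenty for this task.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.2%}")
```

Scale that same “find the pattern in labelled examples” trick up to thousands of CT scans and you get tools like Sybil.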

So, that’s the market; now the hype. The most prevalent concern by far was fear of job loss, and I’ll be honest and say this is one hell of a complicated topic. It isn’t the AI tools themselves that will lay people off, it’s the business owners and decision makers being sold a product that can supposedly save them money, improve efficiency and make their lives easier. Regardless of whether the tools actually do that, the fact that so many decision makers – apparently 67 percent, at least of those surveyed – are considering using some form of AI in their business means the willingness is already there, regardless of the effectiveness. Job losses will come, at least partially, because the tools even in their current state can likely speed up work in some areas, and that’ll be enough to convince the brass. Do I think all jobs are in line for the firing squad? No, of course not, but the low-hanging fruit – copywriters, stock video and photo creators, and assistants – might be worried. Jobs that involve complicated processes, antiquated systems – I’m looking at you, banking and airlines – or troubleshooting and deductive reasoning – i.e. programming – are, I’d imagine, safe.

The thing that frustrates me is that all this hype is driven by eejits in boardrooms who say dumb crap like “we need to integrate AI into our workflows or we’ll be left behind and out of touch!” while having zero understanding of literally anything other than the fancy coffee they boastfully drink all day and their bank balance. They don’t understand their own team’s work, workflows, problems and inefficiencies, and they sure as shit don’t understand what an AI is beyond the hype. People are going to lose their jobs because of their hubris, and that gets on my nerves. But that’s capitalism for ya! One other thing to consider: reports suggest AI is currently consuming around 2 percent of the world’s energy usage. Is AI crap like Copilot+ that nobody seems to want except Microsoft really worth all that energy – literally?

Personally, I’d get out while the going is good. Like it or not, AI is here to stay, at least for a while. With it being the talk of the town – the town being boardrooms – and enough vultures circling the skies – those being VC-funded startups with various AI products that may or may not work – there’s enough momentum here to keep this relevant for a while, and companies doing trial runs of AI tools take a while to prove things out too. But just looking at NVIDIA’s stock price alone… man, that sure looks like a bubble to me. Although what do I know? I’m just an idiot on the internet, and you should never trust idiots on the internet for financial advice. Anyway, that’s the state of AI, at least as I see it. I think the hype is still here, but it’s quietening down – and let’s face it, the joke is getting overdone, with literally everything from your CPU to your toaster being “AI” now. It’s getting dry, and I know I’m bored of it – although I do have ADHD.