Allow me to preface this by saying that I’m not on the “anti-A.I.” bandwagon by any stretch. While I’m sceptical, to a reasonable degree, about some of today’s large-language models, and whether they can really do as much as investors have been promised, I see the potential in A.I. in a lot of ways. I don’t want this piece – discussing one very specific use of A.I. – to be misunderstood! In fact, I’d argue that anyone who claims to be “anti-A.I.” in every possible case doesn’t actually understand what A.I. is and how broad a category it is; it would be like saying you’re “anti-computer,” or “anti-electricity.” The uses for A.I. are vast – it’s an incredibly big category of inventions.
So what are we getting into today, then? If you missed the announcement, graphics card manufacturer (and major supplier of components to A.I. datacentres) Nvidia has recently shown off its new A.I.-powered DLSS 5.0 – a graphical overlay for some video games, which is intended to add more “realism” to environments, character models, and faces.
And… to be blunt, I think it looks like shit.
The DLSS 5 Starfield demo.
Some of today’s generative A.I. models can create photorealistic landscapes, creatures, and even people. A.I. art is a big topic in and of itself, and it can be quite controversial, so we won’t get into all of the arguments around it. Suffice to say that, as someone who runs a small website as a hobby, the only times I’ve used A.I. art (that I’m aware of, anyway) are in a couple of my other articles discussing A.I. – and that was a deliberate choice to help illustrate a point I was making. I’m not actively opposed to A.I. art in all cases; as with any subject, it’s not a black-or-white thing. Not all photographs are “art,” but some can be – and I would suggest A.I. art is probably in that same kind of space.
But we’re off-topic already!
DLSS 5, according to Nvidia, is intended to increase the “visual fidelity” of video games, and the company claims it’s their most significant innovation since real-time ray-tracing almost nine years ago. DLSS 5 uses generative A.I. in some form – the exact details are not clear – and seems to work as a kind of “middle man” during the rendering of frames, upscaling, adding detail, and trying to give games a more photorealistic look.
Nvidia’s helpful explanation of how DLSS 5 works.
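For the more technically-minded: Nvidia hasn’t published the inner workings of DLSS 5, so take the little sketch below as nothing more than my rough mental model of where a generative “enhancement” pass might sit in a game’s render loop. Every function name in it is invented for illustration – it’s not Nvidia’s code or API.

```python
# A hypothetical sketch only – Nvidia hasn't shared DLSS 5's internals, so this is just
# a rough picture of where a generative "enhancement" pass might slot into a render loop.
# All of these function names are invented for illustration.

def render_loop(game, display, ai_enhancer):
    while game.is_running():
        # 1. The engine renders a normal frame, often at a lower internal resolution.
        raw_frame = game.render_frame(resolution=(1920, 1080))

        # 2. A generative pass upscales the frame and "adds detail" before it's shown.
        #    This is the step where an art style could get smoothed over, because the
        #    model is redrawing surfaces, lighting, and faces based on what it learned
        #    from its training data rather than from the game's own assets.
        enhanced_frame = ai_enhancer.enhance(
            raw_frame,
            motion_vectors=game.motion_vectors(),  # temporal data, as earlier DLSS versions use
            target_resolution=(3840, 2160),
        )

        # 3. The enhanced frame is presented instead of the engine's original output.
        display.present(enhanced_frame)
```

The point of laying it out like that is simply to show why the “middle man” description matters: whatever the engine and the artists produce, the thing you actually see is whatever the model decides to draw on top of it.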
On the surface, this sounds like a useful invention, right? At least for *some* games, anyway. Game developers have been chasing photorealism since, really, the very dawn of video games and computer-generated imagery, so any new innovation that brings us closer to that goal should be a cause for celebration. Only… well, is DLSS 5 *actually* making things photorealistic? Or is it simply adding a filter?
The screenshots Nvidia provided – which, I would note, are going to have been *very* carefully selected to show the new tech in the best possible light – all feel, well, kinda samey. And that’s despite the games selected to show off this new technology all being pretty different from one another in terms of art style. Yes, all of the games in question were aiming for some measure of photorealism, but there are incredibly important differences in the way they use more subtle things like light, shading, facial animations, and so on. If DLSS 5 smooths all of that out, resulting in games that look indistinguishable from one another… I’m not sure I’d call that a “breakthrough.”
DLSS 5 running on an Nvidia tech demo.
To Nvidia’s credit, they claim, in their marketing blurb, that DLSS 5 is meant to be “tightly grounded in the game developer’s 3D world and artistic intent.” But based on the screenshots and video that Nvidia itself provided as part of this announcement, I gotta be honest: I’m not seeing that. I see a filter that smooths out a game’s rough edges, sure, and definitely adds more detail – but if those details are all the same, and the end result is that faces in particular end up looking incredibly similar from title to title (and, I would add, not unlike a Snapchat/TikTok filter or other A.I.-generated artwork), then I don’t think it’s going to be of interest – at least, not to me.
There’s already a lot of sameyness and repetitiveness in the way modern games look, thanks to many of the industry’s biggest studios using the same handful of game engines. Unreal Engine 4 and 5 are so commonplace nowadays that you can almost always spot their presence from the moment you boot up a title. And there are advantages to that – don’t get me wrong. As someone who used to work in the industry, I can tell you that one of the biggest issues developers (and studios) faced was that skills in one engine or one programming language don’t automatically translate; if more studios are using the same software, skills are more easily transferrable.
Another of the demo images.
But for players, the end result has been that an increasing number of big-budget titles feel… samey. And DLSS 5, if it can actually do what’s being advertised, might just make that particular trend *worse*, not better. Photorealism is not one singular thing – just go to an art gallery and look at photos, and you can see that, even in the real world, there are completely different ways to capture a portrait, a city, a landscape… and more. DLSS 5 seems, to me, to be trying to apply the same techniques to every game shown off – and the results are more miss than hit.
One of the titles selected was Starfield – and if you know me, you’ll know I’m of the opinion that Starfield needs all the help it can get! I once described Starfield’s NPCs as “dead-eyed, waxy-skinned Madame Tussauds rejects,” so *anything* that could be added to the game to “fix” its NPCs should be great. Right?
Starfield with DLSS 5.
Look at the image above, which is taken from the opening act of Starfield. Look at the two characters – Heller and Lin. Doesn’t Heller just look like… a meme? You know, the edited “Chad face” meme? And what’s with the lighting? The image is horrifically over-lit, completely negating the vibe of the original scene. I can’t believe Nvidia has got me *defending* Bethesda’s “artistic vision” for Starfield, but the original version of the scene genuinely has more character. The dimly-lit, dusty space evokes the feeling of being on a small outpost at the arse end of space; the DLSS 5 version changes the entire tone of the setting, leaving everything feeling washed-out.
Even if you prefer the more brightly-lit version of Starfield’s opening area, can we at least agree that Bethesda lit the original room a certain way on purpose? Starfield has other indoor areas which are much brighter, so it’s not a technical limitation. It’s clearly a creative choice for that room, at that mining outpost, to be lit the way it was. And DLSS 5 blasted right through that, ignoring all of it.
EA Sports FC.
The one game where I thought DLSS 5 worked best (or “least badly,” I guess) was EA Sports FC. Those kinds of sports games have always been interested in pushing photorealism, and I just felt that DLSS 5 looked most in line with the game’s art style. But the EA Sports FC promo images also threw up some pretty weird and jarring artefacting in the background: in the image above, note how the player on the left, when DLSS 5 has been enabled, seems to stick out from the background quite abruptly. Compare that to the same image without DLSS 5; it’s a much smoother transition from face to background – something that, I would argue, looks more natural and less artificial.
The lighting issue also affects Resident Evil Requiem. The provided images of protagonist Leon (seen below) show DLSS 5 completely changing the way he appears in that scene – and again, as with Starfield above, it looks too bright compared to the original. For a horror game, where environments and lighting matter all the more, I can only describe that as being a potentially huge problem.
The Leon image also has a weird “glassy” effect to the sky in the background, despite seemingly being set outdoors. That could also be a bug – a bug in the very demo images that Nvidia is using to introduce this new technology.
Resident Evil Requiem’s entire lighting has changed.
According to reports by folks who’ve seen DLSS 5 for themselves, Nvidia was running the demonstration on not one but *two* of its top-of-the-line RTX 5090 graphics cards. Here in the UK, those cards retail for upwards of £1,800 – so a rig needing two of them is gonna set you back a pretty penny! Cutting-edge innovation often starts expensive and gradually comes down in price – 1080p HD, ray-tracing, 4K, etc. were all in that category. But if DLSS 5 launches, as promised, later this year and it needs the highest of high-end hardware just to get started… well, I guess that rules me out, anyway!
From what I’ve seen, I gotta be honest with you: I’m not impressed. I think there’s potential – in theory – to use A.I. in some way to improve graphical fidelity, add realism, and do the kinds of things that Nvidia is promising DLSS 5 can. But if the end result is games and characters that look like they’re straight out of memes or A.I. art… I don’t see that proving popular and catching on. Even when DLSS 5 had opportunities to genuinely improve some pretty janky-looking character models in a game like Starfield, it still came up short.
If DLSS 5 could’ve improved any of the games selected, it would’ve been Starfield.
Art is complicated, and art is subjective. And I have no doubt that some folks will happily sacrifice “artistic vision” in order to gain a more detailed, photorealistic look. But if there’s one thing we’ve learned from the success of indie games over the past decade-plus, it’s that graphics aren’t the only thing that matters to players. That said, I’m still of the opinion that, if I had to choose between two similar games in the same genre, the better-looking one is going to grab my attention first – and the push for photorealism has led to some absolutely beautiful video games over the past few years. But does adding a generative A.I. layer improve things? Based on the evidence Nvidia chose to submit, I’m gonna say “no.”
However, this could be an idea to keep an eye on. If we haven’t yet reached the ceiling of generative A.I.’s capabilities, and if improvements to this kind of system are possible, it could be an interesting technology for the future. For one thing, it could mean there’s less of a need to remaster and remake older games; if the goal of a remaster, like last year’s Oblivion, for instance, is just to improve the graphical fidelity, well, this kind of system might be able to do that much more easily. So, despite not liking DLSS 5 as it’s been shown today, I can at least see the potential for its use somewhere down the line – assuming that Nvidia can hone it, refine it, and ensure that it really does preserve a game’s unique art style without ruining things like brightness and environmental details, or making faces look… well, like *that*.
Thanks for reading. I’m not a tech expert by any stretch, but I wanted to share my thoughts on this new technology as it pertains to video games. If you want to check out my thoughts on one potential future for generative A.I. in entertainment, click or tap here. And if you want to get my thoughts on last year’s alarming A.I. 2027 paper, you can find that by clicking or tapping here. Until next time!
DLSS and DLSS 5 are trademarks of Nvidia. All titles discussed above are the copyrights of their respective developer, studio, and/or publisher. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.
I recently read the very alarming A.I. 2027 report. I’ll leave a link to the full version at the end of this article, and I highly recommend checking it out if you’re interested in the future of A.I. But for now, I’ll briefly summarise the report. And yes, I know this is a little outside my wheelhouse here on the website, but I think this subject is worth discussing!
A.I. 2027’s core premise is not, as some alarmist headlines have tried to claim, that the end of the world is a mere two years away! But the report’s authors argue that, if artificial intelligence development continues on its current trajectory, late 2027 could be humanity’s final opportunity to remain in control of any A.I. system that emerges – and our last chance to ensure that its goals and priorities align with our own. They present this as an existential risk: a sufficiently advanced A.I., sometime after 2027, could – if not properly restrained and aligned with human interests – literally exterminate the human race. That… that’s kind of disconcerting, right?
Is this our imminent fate?
I confess that I’m way behind the curve when it comes to A.I. Everyone talks about ChatGPT, A.I. art, and other such things… but my knowledge of the subject has been, until recently, surface-level at best. But given the potential A.I. has to be disruptive, perhaps on a scale we haven’t seen since the advent of the world wide web or even the Industrial Revolution… let’s just say I felt the need to catch up and get up to speed!
Artificial intelligence is already changing the way we live, work, and interact. What are “algorithms” if not a form of A.I.? If you’ve ever scrolled social media and felt that the website or app is almost too clever in recommending posts or adverts that appeal to you… you’ve already been caught in its web. And if you’ve noticed fewer job listings in fields like translation, copy-editing, or programming… you’ve seen what might just be the tip of the iceberg when it comes to A.I.-related job losses and workplace changes. I bring this up because I think something that gets lost in this conversation about the future of A.I. is that A.I. is already here, already changing things, and isn’t just some nebulous future idea about something we “might” invent one day. We’ve already invented it.
The M-5 Multitronic Unit from Star Trek.
Predicting the future is hard – nigh-on impossible, really. Go back to television programmes from the 1980s, 1990s, and even the 2000s that tried to look ahead to 2025, and see how wrong many of their predictions and ideas were. A famous example is flying cars – I still haven’t seen one of those, despite the promises made by sci-fi films in the ’80s! So that’s worth acknowledging: some predictions can be incredibly wide of the mark – especially those that promise either an imminent technological utopia or the end of the world. Both of those scenarios are present in A.I. 2027.
As a layman looking in from the outside, I’ve been dimly aware of talk around artificial intelligence, but it hasn’t been something I’ve felt any need to engage with. I don’t have a place in my life for an A.I. chat bot, I’ve never needed to generate A.I. art before (though I am using some A.I.-generated images in this piece), and I actually enjoy the process of writing… so I see no need to use something like ChatGPT to generate text, either. But there’s no denying that, as someone with a creative streak, I feel increasingly… obsolete. A.I. doesn’t just have the potential to revolutionise writing, web design, media criticism, and all the things I talk about here on the website. It’s literally already doing all of those things.
It’s a chat bot.
I could fire up ChatGPT or a similar A.I. programme and, within a matter of moments, it could scan everything I’ve ever published here on the website, and all of my unpublished drafts, too. I could then ask it to write an article or essay on any subject imaginable, using my tone of voice and my writing style. Could you tell the difference? From my perspective, as someone who runs a small website as a hobby and who enjoys writing, there’s no point in using A.I. for that. But if I was working in an office job where lengthy reports were the order of the day… I can see A.I. being a very tempting shortcut. Heck, I’d have killed for an automated essay-writing programme when I was at school!
And that’s exactly what these systems are designed to do. They’re intended to cut costs for big businesses – and by far the largest cost that most companies and corporations have is the wage bill. Let’s rephrase that for the sake of clarity: the entire point of A.I. is to save corporations money by enabling them to lay off staff and automate more and more of their work.
Unemployed people queuing for financial relief during the Great Depression.
Unfortunately, there’s a bit of classism that seems to have crept into this conversation. I grew up in the north of England in the ’80s and ’90s, at a time when deindustrialisation had robbed the area of mines and factories. Automation played a big part in that – work that used to take a dozen people could be done by just one specialist, and then it became “too expensive” to continue to operate. There’s more to what happened in this part of the world, of course, but automation was a major driver. Some of the people who used to tell factory workers and miners to “re-train” or “learn to code” are now themselves on the receiving end of automation-related job losses. And it’s pretty grating to see folks getting worked up about A.I. in the 2020s when they not only didn’t give a shit about the devastation automation brought to working class communities from the ’80s onwards, but actively supported it.
In that sense, I kind of view this consequence of A.I. development as a continuation of a process that’s been ongoing for decades, not something new. For decades, big companies have been looking for shortcuts; ways to cut jobs and pay fewer members of staff while achieving ever greater profit margins. A.I. is what they’re banking on in the 2020s in the same way as manufacturers invested in automated equipment in factories, or mining corporations exchanged pickaxes for machines. The difference? A.I. is coming for white collar, middle class jobs. Earlier automation mostly took jobs away from blue collar, working class folks.
Sights like this were common when I was growing up. Photo: David Wilkinson / Former Addspace furniture factory, Bolton upon Dearne
But that’s just one side to A.I. – the corporate, job-stealing side. The existential risk as posited by A.I. 2027 is much more complex… and I’d be lying if I said I wasn’t concerned. Unlike the 2012 phenomenon or the predictions of Nostradamus, the risk posed by artificial intelligence feels real. I’d say it feels somewhat comparable to the way the millennium bug felt in 1999: a technological danger that humanity has created for itself. A lot of people were genuinely concerned that the millennium bug could cause all kinds of chaos, and while none of those predictions came to pass, that was largely thanks to an enormous amount of work behind the scenes. That effort went mostly unnoticed at the time, but the millennium bug wasn’t, as some have since tried to claim, a completely overblown threat.
So what’s the lesson there? If there are risks to A.I. development… shouldn’t we do everything we can to mitigate them? That seems like the rational course of action, but as the authors of A.I. 2027 explain, current artificial intelligence systems don’t exist in a vacuum. There’s a technological “arms race” between different countries, and slowing development to implement safety measures could mean that the current leader – the United States – would surrender its position on the cutting edge. The result of all this? Rapid, chaotic, unchecked development of A.I. systems in order to remain ahead of the curve.
A graph summarising A.I. 2027’s findings.
There are military applications for A.I. – and if one side in a potential conflict has A.I.-controlled drone weaponry and the other doesn’t… it’d be like trying to fight a modern army with medieval weaponry and tactics. Falling behind on A.I. development must feel, to military planners and political leaders, like even more of an existential risk, because the consequences of failure could be catastrophic. We talked above about the world wide web and the Industrial Revolution; in military terms, A.I. could be at least as impactful as the development of nuclear weapons.
So I think we’ve covered why governments, militaries, and corporations want an A.I.-infused future. But two questions still remain: will they remain in control of it? And what will it mean for the rest of us?
Skynet in Terminator 3.
I’ll tell you a story from my own life that I think might illustrate this next point. In the late 2000s, I was feeling pretty disillusioned with my job and career path. In those days, I was working in the video games industry, on the marketing side of things, but I went through a phase where I wasn’t satisfied, and I started contemplating other career options. For a time, I thought seriously about getting my HGV license – that’s heavy goods vehicle, a.k.a. a truck or lorry. But by that point, to a techie like me, it seemed as if self-driving vehicles were only a few years away… and I genuinely thought that it would be a waste of time to spend a lot of money taking tests and getting a qualification that could become obsolete before I could make the most of it. To use an analogy from history: it felt like jobs based on driving any kind of vehicle, from taxi cabs to trucks, were about to disappear in much the same way as horses and stables did in the first decades of the 20th Century.
Why do I make this point? I was wrong, in the late 2000s, to be thinking like that. Self-driving technology may be “here” in a technical sense, but it isn’t in widespread use on public roads – certainly not here in the UK – and it hasn’t made the kind of impact I would’ve predicted (or feared) back then. The same is true of many other technologies that seemed to be “the wave of the future,” only to completely fizzle out. I owned a MiniDisc player in the early 2000s, expecting that technology would replace CDs! Google Glass, 3D televisions, the hovercraft, Concorde… the list of “revolutionary” technologies which never delivered their promised revolutions goes on.
A self-driving lorry.
The point is this: there are many technologies that seemed, for a time, to be the “next big thing,” primed to revolutionise some aspect of our daily lives. And they didn’t accomplish that. When was the last time you even thought about MiniDiscs, hovercraft, or Google Glass? For a brief moment, all of those things seemed to be on the cutting edge of technological innovation, yet they fizzled out without having the impact some folks expected. Could A.I. be the same? And more importantly: are we perhaps reaching the limitations of A.I.’s current capabilities?
It seems to me that the more A.I.-generated content exists out in the wild, the harder it becomes to filter it out – including for A.I. programmers. We’ve all seen A.I.-generated images that aren’t quite right: hands with too many fingers, a cat with five legs, the New York skyline with buildings in the wrong place. As more of this content is artificially generated, at least some of it is going to be fed back into A.I. algorithms as they trawl the web, right? So… doesn’t that mean there’s going to be a point where A.I. either stagnates or even starts getting worse, rather than better?
Remember when Google Glass was about to change everything?
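If you want a more concrete picture of that feedback-loop worry, here’s a deliberately silly little Python experiment. It’s not a claim about how real A.I. models are actually trained – just a toy analogy showing how quickly variety disappears when each “generation” learns only from the previous generation’s output and favours its most typical results.

```python
# A toy illustration of the feedback-loop worry (sometimes called "model collapse").
# This is an analogy, not a simulation of real A.I. training: each "generation" is just
# a normal distribution fitted to the previous generation's output, keeping only the
# most "typical" samples – and the variety in the data shrinks every time.
import random
import statistics

data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # stand-in for human-made content

for generation in range(1, 6):
    mean = statistics.fmean(data)
    spread = statistics.pstdev(data)
    # The next "model" only ever sees what the previous one produced...
    samples = [random.gauss(mean, spread) for _ in range(1000)]
    # ...and, like a real generator, it favours its most typical output, trimming away
    # the rare, unusual stuff at the edges.
    samples.sort(key=lambda x: abs(x - mean))
    data = samples[:800]
    print(f"generation {generation}: variety remaining = {statistics.pstdev(data):.3f}")
```

Run it and the “variety remaining” number drops generation after generation – which is exactly the kind of stagnation (or outright decline) I’m worried about.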
Then there are jobs that A.I. would struggle to perform. I could absolutely envision a world where many office jobs are replaced by A.I. programmes – similar to how factories that used to be full of workers slowly got replaced by robots and machinery. But could you have an A.I. plumber? I had my bathroom redone a couple of years ago to fit an adapted shower as I’m disabled… and it took several skilled workers more than a week to measure things, reroute an errant pipe, and install all of the fixtures. Earlier today I had to cut down a rogue tree branch that was in danger of damaging my roof. Could A.I. do that? Autonomous robots have come a long way in the past decade or so, but even with full A.I. integration, tasks like those still seem a long way off – not to mention unaffordable, even here in the UK.
One part of A.I. 2027 seemed to offer a “technological utopia” – the kind of post-scarcity society that we’ve seen depicted in Star Trek. And don’t get me wrong… I would love to live in that kind of world. But is that realistic? Are we really only a few years away from fleets of automatons doing everything from diagnosing diseases to plumbing a sink? The rate of technological progress is impressive, for sure, but I don’t believe we’re closing in on that kind of future.
Legions of robots from the film I, Robot.
Then there are political considerations that I would argue very strongly A.I. 2027’s authors have got wrong. The idea of a “universal basic income” has been talked about before, and was even trialled on a small scale in Finland and a couple of other places. But the politics surrounding this are incredibly difficult, even toxic, and I struggle to envision a near-future scenario where universal basic income is actually politically feasible. The current political climate in the United States, as well as here in the UK, seems to be completely opposed to that kind of scheme. I mean, do we really think Donald Trump and the Republican Party would ever go for a policy of universal basic income?
None of this means that A.I. 2027 is wrong in its big-picture assessment of the short-to-medium-term future of artificial intelligence. But given the authors’ political miscalculations in particular, I think there’s enough of a red flag to at least consider the possibility that they might’ve gotten other things wrong. The report’s premise is based on competing A.I. models from different countries eventually agreeing to merge – but isn’t it just as likely that two independent A.I. systems would develop different, even opposing goals? Human beings are tribal by nature, and it’s at least possible that any kind of sentient life we might create would retain that tribalism.
Are we suggesting that Donald Trump (pictured at a recent event with Mark Zuckerberg and Bill Gates) will seriously consider a policy of universal basic income?
I don’t mean to nitpick A.I. 2027 too much, because I think the report highlights some genuinely important issues. But it’s worth acknowledging that it reads, in parts, like both a doomsday scenario and an unrealistic utopia. Both are extremes… and perhaps that’s why neither feels especially likely.
A.I. is coming for a lot of jobs, and some folks who would’ve never expected to be losing their job to automation are going to be impacted. Artists, animators, writers, creative folks… A.I. is coming for us. Programmers, web designers, game developers… basically any office job you can think of, especially so-called “entry-level” positions. They’re all vulnerable to A.I. – and to big corporations seeking to remove employees in order to save money and make even greater profits. That side of A.I. feels real – because, as I noted earlier, it’s already happening. Layoffs in the video games industry are partially driven by A.I. replacing workers. Real-time translation apps mean there’s less of a need for translators. Data entry can be done far more quickly with an A.I. model than with an entire team of people. And A.I. doesn’t take breaks, need maternity leave, or even go home at the end of the workday. None of this is imaginary or a subject of debate. It isn’t “coming soon.” It’s already here and it’s real.
Free translation apps (Google Translate pictured) are getting better all the time, and real-time in-ear translation for spoken languages is also possible.
But the existential risk? A.I. super-intelligence? A machine with unfathomable, god-like knowledge and powers? A technological utopia where A.I. does all the work and us humans can just kick back, claim our universal basic income, and chill out all day? I gotta admit that I’m sceptical.
I think back to my fears of self-driving vehicles poaching jobs from truck drivers. And I reflect on the promises (and concerns) that surrounded technologies like supersonic jets and Google Glass. In just the past few decades, technologies have emerged that seemed ready to reshape the world… but they didn’t. A.I. can do certain things incredibly well, and it’s definitely impacting the world of white collar work. But the more fanciful claims in A.I. 2027 feel, to me, more like sci-fi than a credible threat.
Both a post-labour utopia and killer robots going rogue still feel like sci-fi.
Maybe that’s my own bias showing through, though. I fully admit that, over the years, I’ve made plenty of mistakes when it comes to technology. I mean, I bought a Wii U, for goodness’ sake! And, in a broader sense, it can be difficult for any of us to imagine a world that’s radically different from the one we currently inhabit. My grandfather could vividly recall rushing outside to catch a rare glimpse of a motor car when he was a child, yet by the time he was in his fifties there’d been atomic weapons and humans landing on the moon. The pace of technological change in the first half of the twentieth century was unprecedented, and the lives of people were completely changed.
In my lifetime, too, I’ve seen the birth of the world wide web, the rise of portable electronic devices, and more. When I was a kid, my household had one television set, no video recorder, no computer, and a single landline telephone. By the time I moved out and got my first student flat, we had computers, games consoles, mobile phones, and dial-up internet. Then came broadband, MP3s, streaming, and smartphone apps. When I had my first weekend job I’d get paid in cash, and my first full-time job still paid me by cheque. I’d regularly have to go to the bank to pay those in – but I can’t remember the last time I set foot in a bank. I do all my banking these days on a smartphone app.
There have been huge technological changes within my own lifetime.
I guess this boils down to one massive question: are we anywhere near the limitations of modern A.I. systems? Or, to think about it another way, are today’s A.I. models, with all of their generative capabilities and human-mimicking interactions, genuinely capable of taking the next big leap?
A.I. models appear, to me, to be closer to parrots and mynah birds than actual sentient people. They’re quite capable of replicating “language,” on the very simple basis of “input X requires response Y,” but they don’t understand what they’re saying. That’s why A.I. systems make so many incredibly basic mistakes, and why some enquiries or conversation attempts go completely off the rails. A.I. models, in their current form, also seem to struggle with retaining information, even if all we’re talking about are a few lines of text.
Are today’s LLMs closer, in terms of sentience, to you and me… or to this parrot?
Let me give you an example. While researching for this article, I “spoke” with several A.I. chat bots. And it was an interesting experience! A.I. can do some things incredibly well – it can write you functional computer code, answer questions about everything from history to mathematics, and even write an article or a story that flows naturally and is coherent. But A.I. struggles with some pretty basic things, too, like remembering what’s already been said. In just a short span of time, and without me deliberately trying to trick it, I found the A.I. that I was “speaking” with was giving totally contradictory responses to things it had said just a few minutes earlier. I could scroll up and see the older messages, but the A.I. seemed to not have that ability. It’s programmed to respond to immediate inputs – so if you ask it the same question twice, five minutes apart, you might very well get two different answers. Not differently worded answers – totally different, contradictory responses.
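For what it’s worth, here’s a tiny sketch of why I think that happens. Most chat systems only feed the model a limited window of recent conversation, so anything older simply falls out of view. The `generate_reply` function below is a made-up stand-in, not any real chat bot’s API; the bit that matters is the slicing of the history.

```python
# A toy sketch of why a chat bot can contradict itself: many systems only show the model
# the most recent part of the conversation, so older messages are effectively forgotten.
# `generate_reply` is a hypothetical placeholder, not a real chat bot API.

MAX_TURNS_REMEMBERED = 4  # deliberately tiny; real context windows are bigger, but still finite

def generate_reply(visible_history, question):
    # Stand-in for whatever model actually writes the response.
    return f"(an answer based only on the last {len(visible_history)} messages)"

history = []

def ask(question):
    visible_history = history[-MAX_TURNS_REMEMBERED:]  # older messages are simply dropped
    reply = generate_reply(visible_history, question)
    history.append(("you", question))
    history.append(("bot", reply))
    return reply

# Ask the same question twice with other chat in between: by the second time, the first
# answer has fallen outside the window, so nothing ties the bot to its earlier reply.
print(ask("What's your favourite Star Trek film?"))
print(ask("Tell me about the millennium bug."))
print(ask("And about MiniDisc players."))
print(ask("What's your favourite Star Trek film?"))
```

You and I can still scroll up and read that first answer; the bot, in effect, can’t.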
There’s a massive gulf between technology that’s “almost ready” and technology that actually works. Self-driving vehicles might work perfectly… 98% of the time. But that last 2%? That’s what’s stopping self-driving vehicles from going mainstream. The same goes for some of Elon Musk’s rockets – they’re 99% perfect… but that 1% error causes explosions on the launchpad. A.I. can do some things very well in a limited use case, and can appear very human in its mimicry of language. But is it really just a few years – or even a few months – away from the technological revolution predicted by A.I. 2027?
The small difference between “almost ready” and “actually works” is what leads to rockets exploding on the launchpad.
There’s not just a difference, but a massive, ocean-spanning gulf between a machine that can mimic human language and one that can operate independently. The first is like a parrot imitating its owner: incapable of genuinely understanding anything it says, still fundamentally operating on an “input X yields response Y” basis, and with a very limited ability to retain information within a single “conversation.” An A.I. system that can solve world hunger, operate entire governments, and threaten our extinction still feels like the stuff of sci-fi. And yes, I accept that this was a very limited experiment using free A.I. bots, and that the cutting-edge stuff being worked on behind the scenes is going to be much more capable. But are its capabilities that much greater, and is this technology really the disruptive, revolutionary, and potentially world-ending threat that the authors of A.I. 2027 believe it to be?
Generative A.I. is proving disruptive, and despite what some folks might want to think, the technology is clearly here to stay – at least in some form. On the business side of things, if an A.I. can do the work of an office full of people, well, that office is going to be closed and those folks will be laid off. I’ve also looked ahead to the future of A.I.-generated entertainment, making a bold prediction that A.I. might, one day soon, be able to generate tailor-made films and TV shows, potentially shutting down entire studios and laying off actors, writers, and everyone else.
Part of the Warner Bros. studio in California.
A lot of white collar, middle-class, creative, and upper-income-bracket folks have long considered themselves safe from this kind of automation-driven unemployment. So the arrival of generative A.I. systems that are competent enough to genuinely compete for those roles? That’s been a massive shock, and I think that’s why we see so many people pushing back against A.I. today in a way that they didn’t push back against all kinds of other disruptive technologies. Because, at the end of the day, most technological changes prove to be disruptive to someone. It’s just that this time it’s the turn of online content creators, wealthier folks, and people with a disproportionately large voice.
But when it comes to things like A.I. drones murdering people, A.I. systems going rogue, or A.I. super-pandemics… I’m not convinced it’s something we’re on the cusp of. Nor do I feel we’re just a few years away from a post-labour tech-driven utopia where no one has to work and we can all indulge in artistic or academic pursuits. These two scenarios feel far-fetched to me, even as some A.I. systems expand their reach and their capabilities. It may well be that we’re already close to the ceiling of what current A.I. models are capable of – and the kinds of doomsday or utopian scenarios laid out in A.I. 2027 would seem to require major advances in computing and other technologies that may not even be possible.
What kind of changes or advancements in computing might be necessary to fulfil some of A.I. 2027’s predictions?
The world is ever-changing, and technology in particular is not a static thing. My entire lifetime, really, has seen innovation upon innovation, taking me from an analogue childhood in the ’80s and early ’90s to the tech-focused life of today. I don’t doubt that there will be more changes to come, and that there will be inventions and innovations that, right now, I can’t even conceive of – assuming I live long enough to see them! So I’m not trying to dismiss out of hand the genuine concerns folks have about artificial intelligence. But at the same time, I can’t help but feel that current models could be more limited in their abilities than the A.I. evangelists want their investors to believe.
Right now, A.I. is driving an almost unprecedented level of investment, with a handful of companies making a ton of money. But is this just the beginning of an economic boom that will rival that of the post-war years or the Industrial Revolution? Or is it a speculative bubble about to burst, as we’ve seen repeatedly in recent decades? Whether we’re talking about the dot-com bubble, subprime mortgages, or cryptocurrency crashes, there are plenty of examples of speculative bubbles that got out of hand. Is A.I. just the next one? Are the promises made by A.I. creators genuine, or just an attempt to drum up further investment? Can A.I. really do what investors are being promised?
There’s a lot of money at stake in the A.I. industry.
We can’t escape the reality that all of this is tied to money. A.I. companies need documents like A.I. 2027, because this conversation feeds into the narrative their executives are weaving about the future capabilities of these systems. And the promise of an incredible return on investment is what’s keeping these – otherwise unprofitable – companies in business right now. I’m not accusing anyone of running a deliberate scam, but it’s a pretty well-established way of doing business in the tech space: over-promise, rake in the cash, and only then try to figure out how to make good on at least some of those ideas. That approach has worked for the likes of Apple. But it didn’t go quite so well for companies like Theranos.
The tl;dr is this: it benefits A.I. companies to allow this conversation about their products to do the rounds. It drums up interest and attracts investment – not because investors want to see humanity wiped out and the world end, but because they see the potential for short-term financial gains. A select few companies in the A.I. space have seen their share prices increase four-, five-, and six-fold in just a couple of years – and that’s largely due to the belief that A.I. is the wave of the future. Investors believe that whoever perfects the technology first will become the world’s first trillionaire – and they want in on that. We can’t avoid that side of the issue when discussing A.I. technologies as they exist today – and their future prospects.
Data from Star Trek: The Next Generation.
A.I. is already disrupting entire industries, and we’re all going to have to learn how to use these systems in the workplace in the years ahead. There could very well be fewer entry-level white-collar jobs, fewer graduate-level jobs, and fewer office jobs in general. And the potential uses for A.I. systems on the battlefield could result in a monumental change in how future conflicts unfold. But as I see it, today’s artificial intelligence systems don’t “think.” They regurgitate information when prompted, and they’re closer in actual “intelligence” to a parrot than to a person. Artificial intelligence can do some things very well – better, faster, or more reliably than any person ever could. And that’s going to be fantastic in some use cases: diagnosing diseases earlier, writing computer programmes, or creating individualised education plans for kids with special needs. But there’s a lot that A.I. can’t do – and some of it, with the limitations of computing power, may never be possible.
And it’s those things, in my view, which would be needed to turn the LLMs of today into the super-intelligence of A.I. 2027.
So that’s all for today. I hope this was interesting – though as a total non-expert, I could be completely and utterly wrong about everything! No change there, then. I’ve linked the original A.I. 2027 paper below, and if you haven’t read it, please check it out. There are some great summaries on YouTube, too. I know this was a change from my usual content, but A.I. has been a big deal in sci-fi – and in the Star Trek franchise in particular – for decades, and it’s a big deal right now thanks to the success of the likes of ChatGPT.
If you missed it, I have another piece in which I talk about the possibility of generative A.I. being used to create tailor-made films and TV shows in the near future: you can find it by clicking or tapping here. Thanks for reading, and I hope you found my take to be interesting. Until next time!
Some images generated with A.I. (yes, on purpose!) Some stock photos courtesy of Unsplash, Pixabay, and Wikimedia Commons. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.
I have a longer piece in the pipeline about a document titled A.I. 2027, which you may have seen doing the rounds. I’ll save most of my thoughts on A.I. and its future disruptive potential for that, so be sure to check back. But today, I wanted to tackle a more specific use for artificial intelligence, and how we could see a change in the way we engage with and consume entertainment in the not-so-distant future.
This was prompted, in part, by a conversation I was having with a friend about A.I. 2027. Spoiler alert for my longer piece, but I compared the possibility of a “do-everything” general A.I. system to things like 3D televisions, Google Glass, and the Concorde supersonic jet. All three are examples of technologies that seemed to be on the cusp of revolutionising various aspects of our lives… but didn’t. I can’t help but feel that future predictions of general or super-intelligent A.I. – either as a horseman of the apocalypse or as a revolutionary technology about to deliver some kind of utopia – are, at best, far-fetched! But generative A.I. models, which are more limited in scope, do have the potential to play a huge role in the future of entertainment.
And that’s what I want to address today.
Is A.I. about to revolutionise entertainment?
If you’d asked me in 2005 what the future of entertainment would look like, I would not have been able to predict user-generated content on platforms like YouTube and TikTok becoming massive, popular, and hugely profitable. But in 2025, barely 24 hours go by without me watching at least one video on YouTube or spending a little time scrolling TikTok. The widespread adoption of broadband internet, social media, and smartphones with built-in connectivity and cameras facilitated this transformation in the way literally billions of people engage with entertainment.
It’s not a stretch to say that there are people today – adults as well as kids – who don’t care much for television, films, or even video games. Their primary sources of entertainment come from social media – and from user-generated content specifically. It turns out that a lot of people enjoy watching things no media executive could’ve ever dreamed of: vlogs, workout routines, makeup tutorials, video game “let’s plays,” and even ASMR. If you’d told me in 2005 what some of the most popular YouTube and TikTok pages would look like twenty years later, I’d have struggled to believe it!
These kinds of videos didn’t seem like they’d be the future of entertainment just a few short years ago!
All of this is to say that a revolution in how we engage with media is nothing new. It’s already happened over the past fifteen to twenty years – and that came after a century of changes as we went from live music and theatre productions to the cinema, television, video recording, video games, and so on. Nothing in the entertainment sector stays still for very long, so there are definitely changes coming. Whether my prediction is right… well, if I’m still here in a decade or two, we can revisit this piece and see!
So what is my prediction, exactly? What is this big, revolutionary, A.I.-driven change that I foresee?
In short: user-controlled movies, TV shows, and perhaps even video games. All monetised by big corporations, all licensed and based on subscription models, and all generated by A.I.
Are A.I.-generated films and TV shows going to be part of the future of entertainment?
Imagine Star Trek’s holodeck, where you can tell the computer what you want to see, but on a flat screen. The biggest names in entertainment at the time will have either developed or bought out A.I. systems to power this content, and you’ll see celebrities, actors, and anyone famous copyrighting or trademarking their likeness and voice, ready to be licensed out. Some performers will be contracted solely to one big entertainment powerhouse, others might be “jobbing it” wherever they can make a buck. “Traditional” – i.e. human-created – films, TV shows, and games will still be made, and social media likely won’t go away, either. But A.I.-generated, customisable, tailored entertainment is going to be a big deal very soon.
You can already see the beginnings of this. Google’s Veo software is just one example of text-to-video A.I., and people are already using it to make their own videos of all kinds of things. The real revolution in this technology won’t actually be its development, but its monetisation; how big companies can extract the most money possible for their service will determine how it’s used going forward. Right now, if I ask one of these programmes to generate me a video of Darth Vader in a hot tub, it might well oblige – but Lucasfilm and Disney won’t be happy about that. As soon as there’s a viable method for monetising these things, we’ll see A.I. models go the way of film and TV streamers – walling off their own intellectual property, and licensing it out for a fee.
Google’s Veo video generator is one of several that already exist.
Maybe one of the big names in entertainment today – Netflix, for example – will buy out one of the big A.I. companies, using their software exclusively on their platform. Or conversely, maybe one of the big A.I. companies will buy out someone like Disney or Netflix, in a not dissimilar way to how Amazon was able to purchase the venerable MGM Studios a few years ago. Both of those seem possible – but the end result will be the same: content and IP locked behind a paywall, available only to those willing and able to pay.
But for those lucky folks, a world of possibilities opens up.
You’ll sign into your new A.I.-Netflix hybrid, and along with the pre-made programmes and perhaps other user-generated content, there’ll be a simple prompt: “What would you like to watch today?”
Big entertainment corporations – like Netflix – surely see the potential in A.I.-generated content already.
From there, your only limit will be your imagination. “I want to see a film that’s about two hours long, starring John Wayne as a mercenary in outer space, with Kate Mara as his love interest, and a pink robot voiced by Brent Spiner as the villain. They should pilot a spaceship called the R.M.S. Wombat, there should be a soundtrack by Bon Jovi, and the entire story should be a metaphor for the dangers of secondhand smoking.” A split-second later… and bang! You’re watching your fantasy film with whatever parameters you’ve given the A.I. It’ll be capable of pulling off shocking twists, bringing long-dead actors back to life, and even generating brand-new stories in established fictional universes.
Imagine being able to fire up Paramount’s A.I. (or, let’s be realistic, the A.I. company that owns whatever remains of Paramount by that point) and generate brand-new Star Trek stories. Maybe you’ve always wanted to know what would’ve happened if Captain Picard had died after his assimilation by the Borg, or what might’ve happened if the USS Voyager had been destroyed, with a handful of survivors being taken in by Chakotay and his Maquis crew. Or perhaps you want to see an expanded look at an existing story, potentially filling in some of the blanks in between established events. You could even look at a story from the other side, like seeing the Dominion War from the Cardassian perspective. All of those things feel plausible with A.I. integration.
We might be able to make our own Star Trek stories one day…
As A.I. technology gets better, its ability to retain information will improve. The more you use an A.I. programme, the more it gets to “know” you – or at least, the better it gets at predicting your reactions, your likes, and your dislikes. Our hypothetical A.I.-Netflix hybrid will therefore very quickly learn what kinds of stories you like and what generates the best responses from you, and it will be able to use that information to fine-tune and personalise the content it generates for you.
It’ll be kind of like having your own personal film studio. As the A.I. retains all of the information and storylines it’s generated, it’ll be able to make sequels, prequels, and expand on certain story points. If you really liked a character in an A.I.-generated film, it’ll be able to make a spin-off, just for you, tailored to what you liked about the character and the kinds of stories it knows you responded well to elsewhere. Heck, it could even generate a casual vlog-style series based on your favourite celebrity or character, kind of like how we see A.I. chat bots based on fictional characters today.
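Purely as a thought experiment – and I stress this is me speculating, not describing any real service – the “getting to know you” part could be as simple as keeping a running tally of what you rated highly and folding your strongest preferences back into the next prompt. Something like this, with every name invented for illustration:

```python
# A speculative sketch of "getting to know you": keep a running profile of what a viewer
# rated highly, and fold the strongest preferences back into the next generation prompt.
# Every class and function name here is invented purely for illustration.
from collections import Counter

class ViewerProfile:
    def __init__(self):
        self.liked_tags = Counter()

    def record_rating(self, tags, rating):
        # Ratings above 3 (out of 5) strengthen those tags; ratings below 3 weaken them.
        for tag in tags:
            self.liked_tags[tag] += rating - 3

    def personalise(self, prompt):
        favourites = [tag for tag, score in self.liked_tags.most_common(3) if score > 0]
        if favourites:
            prompt += " Lean towards: " + ", ".join(favourites) + "."
        return prompt

profile = ViewerProfile()
profile.record_rating(["space opera", "slow-burn mystery", "practical-effects look"], rating=5)
profile.record_rating(["jump scares"], rating=1)
print(profile.personalise("A two-hour film about a salvage crew on an abandoned station."))
```

The real systems would obviously be far more sophisticated than a tally of tags, but the principle – your reactions quietly steering what gets generated next – is the same one that already powers social media recommendations.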
Imagine having your own film studio in your TV or phone, ready to turn your ideas and thoughts into real, ready-to-watch content.
By now, you’ve heard the criticisms of A.I. Its datacentres use more energy than some entire countries. It’s stealing people’s art, writing, and more. It’s capable of “lying” or “hallucinating” falsehoods, spreading misinformation. It’s going to put millions of people out of work. And I don’t dispute any of those things, nor am I “championing” the use of A.I. in the entertainment space. This prediction is based on what I’ve seen from my limited engagement with the world of A.I. so far. I don’t actively use A.I. myself; I don’t really have a place in my life for an A.I. chat bot, and I’ve never needed to use A.I. to generate anything. But I see people using it more and more, and to me, the scenario outlined above feels like a plausible next step for the technology as it currently exists.
The big questions on the corporate side are how to lock down their A.I. models and monetise them, and how to prevent competing A.I. systems from “trawling” their content and using intellectual property that they claim ownership of. After all, it’d be no good offering your service for sale if a free competitor could do the exact same thing without the price tag! But if there’s one thing I can say with certainty after more than forty years of existing in this capitalist economy, it’s that you should never underestimate the ability of corporations to find a way to monetise… everything.
Whichever corporation figures this out first is gonna make a lot of money…
Twenty years ago, I wouldn’t have been able to predict the rise of social media, user-generated content, and subscription services. All of those things seemed to come out of nowhere, catching me off-guard. The idea that people would spend hours each day watching what are basically other people’s home videos… that would’ve seemed positively ludicrous, even in 2005. But some people did see that potential, and more importantly, were able to get in early and monetise the heck out of it.
With generative A.I. being the current trend, it’s easy to write it off as a flash in the pan; another 3D television or MiniDisc. And maybe that’s still going to be the case; I haven’t watched a 3D movie in years, and my MiniDisc player has been gathering dust in the attic since the mid-2000s. But right now, with the amount of money being thrown at generative A.I. software, it feels at least plausible to me that, a few years from now, we could all be generating our own high-quality films, TV programmes, and perhaps even video games from simple prompts, with the only limitations being our imagination… and our wallets.
Remember MiniDisc?
I don’t know if that’s the kind of future I want… but I gotta be honest: part of me feels intrigued by the possibilities A.I. could offer. Being able to get tailor-made, fully customisable movies… there’s genuine appeal there, and whoever gets it right first stands to make a ton of money. I don’t think such a marketplace means that films, TV shows, and video games created by human beings will disappear; there will still be a place for creativity, imagination, and innovation. But there could be far, far fewer of those types of films, TV shows, and games being created if the big corporations go all-in on generative A.I. in the way I’ve outlined above. As with all things A.I., that basically means a ton of people are gonna be out of work. That undeniably taints the finished product; A.I. will come with an obvious, and pretty devastating, cost.
But for a lot of people… well, we already know that the cost to human lives doesn’t matter if they enjoy the finished product. Video games are still frequently created under “crunch” conditions, an exploitative practice I saw firsthand when I worked in the video games industry. Batteries rely on cobalt and lithium, mined by underpaid or even enslaved workers in horrible conditions. People pay for cheap clothes and shoes made in sweatshops. The list goes on… and my point is simple: don’t expect some kind of moral crusade against A.I. to change a lot of minds if it hasn’t in the cases we just mentioned.
Whether A.I. is here to stay, and whether I’m even close to being right about its potential future role in entertainment remains to be seen. I don’t know how much time I’ve got left, but if I’m still here in a decade or two, let’s revisit this piece together. Perhaps we’ll share a chuckle about how wrong I was, and how I exaggerated a flash in the pan technological fad way beyond its capabilities. Or not.
All brands and properties mentioned above are the copyright or trademark of their respective studio, distributor, broadcaster, etc. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.