A.I. 2027: My Take

I recently read the very alarming A.I. 2027 report. I’ll leave a link to the full version at the end of this article, and I highly recommend checking it out if you’re interested in the future of A.I. But for now, I’ll briefly summarise the report. And yes, I know this is a little outside my wheelhouse here on the website, but I think this subject is worth discussing!

A.I. 2027’s core premise is not, as some alarmist headlines have tried to claim, that the end of the world is a mere two years away! But the report’s authors argue that, if artificial intelligence development continues on its current trajectory, late 2027 could be humanity’s final opportunity to remain in control of any A.I. system that emerges – and our last chance to ensure that its goals and priorities align with our own. They present this as an existential risk: that a sufficiently advanced A.I. sometime after 2027 could, if not properly restrained and aligned with human interests, literally exterminate the human race. That… that’s kind of disconcerting, right?

Still frame from Terminator 2 showing a person killed by nuclear weapons.
Is this our imminent fate?

I confess that I’m way behind the curve when it comes to A.I. Everyone talks about ChatGPT, A.I. art, and other such things… but my knowledge of the subject has been, until recently, surface-level at best. But given the potential A.I. has to be disruptive, perhaps on a scale we haven’t seen since the advent of the world wide web or even the Industrial Revolution… let’s just say I felt the need to get up to speed!

Artificial intelligence is already changing the way we live, work, and interact. What are “algorithms” if not a form of A.I.? If you’ve ever scrolled social media and felt that the website or app is almost too clever in recommending posts or adverts that appeal to you… you’ve already been caught in its web. And if you’ve noticed fewer job listings in fields like translation, copy-editing, or programming… you’ve seen what might just be the tip of the iceberg when it comes to A.I.-related job losses and workplace changes. I bring this up because I think something that gets lost in this conversation about the future of A.I. is that A.I. is already here, already changing things, and isn’t just some nebulous future idea about something we “might” invent one day. We’ve already invented it.

Still frame of the M-5 computer from Star Trek: The Original Series.
The M-5 Multitronic Unit from Star Trek.

Predicting the future is hard – nigh-on impossible, really. Look back at television programmes from the 1980s, 1990s, and even the 2000s that tried to imagine 2025 and see how wrong many of their predictions and ideas were. A famous example is flying cars – I still haven’t seen one of those, despite the promises made by sci-fi films in the ’80s! So I think it’s worth acknowledging that some predictions can be incredibly wide of the mark – especially those that promise either an imminent technological utopia or the end of the world. Both of those scenarios are present in A.I. 2027.

As a layman looking in from the outside, I’ve been dimly aware of talk around artificial intelligence, but it hasn’t been something I’ve felt any need to engage with. I don’t have a place in my life for an A.I. chat bot, I’ve never needed to generate A.I. art before (though I am using some A.I.-generated images in this piece), and I actually enjoy the process of writing… so I see no need to use something like ChatGPT to generate text, either. But there’s no denying that, as someone with a creative streak, I feel increasingly… obsolete. A.I. doesn’t just have the potential to revolutionise writing, web design, media criticism, and all the things I talk about here on the website. It’s literally already doing all of those things.

A.I.-generated image of a robot chatting to two humans on the computer.
It’s a chat bot.

I could fire up ChatGPT or a similar A.I. programme and, within a matter of moments, it could scan everything I’ve ever published here on the website, and all of my unpublished drafts, too. I could then ask it to write an article or essay on any subject imaginable, using my tone of voice and my writing style. Could you tell the difference? From my perspective, as someone who runs a small website as a hobby and who enjoys writing, there’s no point in using A.I. for that. But if I was working in an office job where lengthy reports were the order of the day… I can see A.I. being a very tempting shortcut. Heck, I’d have killed for an automated essay-writing programme when I was at school!

And that’s exactly what these systems are designed to do. They’re intended to cut costs for big businesses – and by far the largest cost that most companies and corporations have is the wage bill. Let’s rephrase that for the sake of clarity: the entire point of A.I. is to save corporations money by enabling them to lay off staff and automate more and more of their work.

Photo of an unemployment queue in the USA during the Great Depression.
Unemployed people queuing for financial relief during the Great Depression.

Unfortunately, there’s a bit of classism that seems to have crept into this conversation. I grew up in the north of England in the ’80s and ’90s, at a time when deindustrialisation had robbed the area of its mines and factories. Automation played a big part in that – work that used to take a dozen people could be done by a single specialist, and then the whole operation was deemed “too expensive” to keep running. There’s more to what happened in this part of the world, of course, but automation was a major factor. Some of the people who used to tell factory workers and miners to “re-train” or “learn to code” are now themselves on the receiving end of automation-related job losses. And it’s pretty grating to see folks getting worked up about A.I. in the 2020s when they not only didn’t give a shit about the devastation automation brought to working class communities from the ’80s onwards, but actively supported it.

In that sense, I kind of view this consequence of A.I. development as a continuation of a process that’s been ongoing for decades, not something new. For decades, big companies have been looking for shortcuts: ways to cut jobs and pay fewer members of staff while achieving ever greater profit margins. A.I. is what they’re banking on in the 2020s, in the same way that manufacturers once invested in automated factory equipment, or mining corporations exchanged pickaxes for machines. The difference? A.I. is coming for white collar, middle class jobs. Earlier automation mostly took jobs away from blue collar, working class folks.

Photo of an abandoned, derelict factory in the UK.
Sights like this were common when I was growing up.
Photo: David Wilkinson / Former Addspace furniture factory, Bolton upon Dearne

But that’s just one side to A.I. – the corporate, job-stealing side. The existential risk as posited by A.I. 2027 is much more complex… and I’d be lying if I said I wasn’t concerned. Unlike the 2012 phenomenon or the predictions of Nostradamus, the risk posed by artificial intelligence feels real. I’d say it feels somewhat comparable to the way the millennium bug felt in 1999: a technological danger that humanity has created for itself. In 1999, a lot of people were genuinely concerned that the millennium bug could cause all kinds of chaos – and thanks to a huge amount of work behind the scenes, none of those predictions came to pass. That effort went largely unnoticed at the time, but it’s why the millennium bug didn’t have a major impact; it wasn’t, as some have tried to claim, a completely overblown threat.

So what’s the lesson there? If there are risks to A.I. development… shouldn’t we do everything we can to mitigate them? That seems like the rational course of action, but as the authors of A.I. 2027 explain, current artificial intelligence systems don’t exist in a vacuum. There’s a technological “arms race” between different countries, and slowing development to implement safety measures could mean that the current leader – the United States – would surrender its position on the cutting edge. The result of all this? Rapid, chaotic, unchecked development of A.I. systems in order to remain ahead of the curve.

Screenshot of a summary graph from the A.I. 2027 site.
A graph summarising A.I. 2027’s findings.

There are military applications for A.I. – and if one side in a potential conflict has A.I.-controlled drone weaponry and the other doesn’t… it’d be like trying to fight a modern army with medieval weaponry and tactics. Keeping up with A.I. developments must feel, to military planners and political leaders, like even more of an existential risk, because the consequences of falling behind could be catastrophic. We talked above about the world wide web and the Industrial Revolution; in military terms, A.I. could be at least as impactful as the development of nuclear weapons.

So I think we’ve covered why governments, militaries, and corporations want an A.I.-infused future. But two questions remain: will they stay in control of it? And what will it mean for the rest of us?

Still frame from Terminator 3 showing Skynet coming online.
Skynet in Terminator 3.

I’ll tell you a story from my own life that I think might illustrate this next point. In the late 2000s, I was feeling pretty disillusioned with my job and career path. In those days, I was working in the video games industry, on the marketing side of things, but I went through a phase where I wasn’t satisfied, and I started contemplating other career options. For a time, I thought seriously about getting my HGV licence – that’s heavy goods vehicle, a.k.a. a truck or lorry. But by that point, to a techie like me, it seemed as if self-driving vehicles were only a few years away… and I genuinely thought that it would be a waste of time to spend a lot of money taking tests and getting a qualification that could become obsolete before I could make the most of it. To use an analogy from history: it felt like jobs based on driving any kind of vehicle, from taxi cabs to trucks, were about to disappear in much the same way that jobs built around horses and stables did in the first decades of the 20th Century.

Why do I make this point? Because I was wrong to be thinking like that. Self-driving technology may be “here” in a technical sense, but it’s only road-legal in a handful of places, and it hasn’t made the kind of impact I would’ve predicted (or feared) back in the late 2000s. The same is true of many other technologies that seemed to be “the wave of the future,” only to completely fizzle out. I owned a MiniDisc player in the early 2000s, expecting that technology would replace CDs! Google Glass, 3D televisions, the hovercraft, Concorde… the list of “revolutionary” technologies that never delivered their promised revolutions goes on.

A.I.-generated image of a yellow robot driving a lorry.
A self-driving lorry.

The point is this: there are many technologies that seemed, for a time, to be the “next big thing,” primed to revolutionise some aspect of our daily lives. And they didn’t accomplish that. When was the last time you even thought about MiniDiscs, hovercraft, or Google Glass? For a brief moment, all of those things seemed to be on the cutting edge of technological innovation, yet they fizzled out without having the impact some folks expected. Could A.I. be the same? And more importantly: are we perhaps reaching the limitations of A.I.’s current capabilities?

It seems to me that the more A.I.-generated content exists out in the wild, the harder it becomes to filter it out – including for the people building A.I. systems. We’ve all seen A.I.-generated images that aren’t quite right: hands with too many fingers, a cat with five legs, the New York skyline with buildings in the wrong place. As more of this content is artificially generated, at least some of it is going to be fed back into A.I. models as they trawl the web, right? Researchers have started calling this risk “model collapse.” So… doesn’t that mean there’s going to be a point where A.I. either stagnates or even starts getting worse, rather than better?

Stock photo of Google Glass.
Remember when Google Glass was about to change everything?

Then there are jobs that A.I. would struggle to perform. I could absolutely envision a world where many office jobs are replaced by A.I. programmes – similar to how workers on factory floors were slowly replaced by robots and machinery. But could you have an A.I. plumber? I had my bathroom redone a couple of years ago to fit an adapted shower as I’m disabled… and it took several skilled workers more than a week to measure things, reroute an errant pipe, and install all of the fixtures. Earlier today I had to cut down a rogue tree branch that was in danger of damaging my roof. Could A.I. do that? Autonomous robots have come a long way in the past decade or so, but even with full A.I. integration, tasks like those still seem a long way off – not to mention unaffordable, even here in the UK.

One part of A.I. 2027 seemed to offer a “technological utopia” – the kind of post-scarcity society that we’ve seen depicted in Star Trek. And don’t get me wrong… I would love to live in that kind of world. But is that realistic? Are we really only a few years away from fleets of automatons doing everything from diagnosing diseases to plumbing a sink? The rate of technological progress is impressive, for sure, but I don’t believe we’re closing in on that kind of future.

Cropped poster/banner of I, Robot showing a legion of humanoid robots.
Legions of robots from the film I, Robot.

Then there are political considerations that, I would argue very strongly, A.I. 2027’s authors have got wrong. The idea of a “universal basic income” has been talked about before, and was even trialled on a small scale in Finland and a couple of other places. But the politics surrounding this are incredibly difficult, even toxic, and I struggle to envision a near-future scenario where universal basic income is actually politically feasible. The current political climate in the United States, as well as here in the UK, seems to be completely opposed to that kind of scheme. I mean, do we really think Donald Trump and the Republican Party would ever go for a policy of universal basic income?

None of this means that A.I. 2027 is wrong in its big-picture assessment of the short-to-medium-term future of artificial intelligence. But given the authors’ political miscalculations in particular, I think there’s enough of a red flag to at least consider the possibility that they might’ve got other things wrong. The report’s premise is based on competing A.I. models from different countries eventually agreeing to merge – but isn’t it just as likely that two independent A.I. systems would develop different, even opposing goals? Human beings are tribal by nature, and it’s at least possible that any kind of sentient life we might create would retain that tribalism.

Photo of Mark Zuckerberg, Donald Trump, Melania Trump, and Bill Gates at the White House, 2025.
Are we suggesting that Donald Trump (pictured at a recent event with Mark Zuckerberg and Bill Gates) will seriously consider a policy of universal basic income?

I don’t mean to nitpick A.I. 2027 too much, because I think the report highlights some important issues and raises important considerations. But it’s worth acknowledging that it reads, in parts, like both a doomsday scenario and an unrealistic utopia. Both are extremes… and perhaps that’s why neither feels especially likely.

A.I. is coming for a lot of jobs, and some folks who would’ve never expected to lose their jobs to automation are going to be impacted. Artists, animators, writers, creative folks… A.I. is coming for us. Programmers, web designers, game developers… basically any office job you can think of, especially so-called “entry-level” positions. They’re all vulnerable to A.I. – and to big corporations seeking to remove employees in order to save money and make even greater profits. That side of A.I. feels real – because, as I noted earlier, it’s already happening. Layoffs in the video games industry are partially driven by A.I. replacing workers. Real-time translation apps mean there’s less of a need for translators. Data entry can be done faster and more cheaply with an A.I. model than with an entire team of people. And A.I. doesn’t take breaks, need maternity leave, or even go home at the end of the workday. None of this is imaginary or a subject of debate. It isn’t “coming soon.” It’s already here and it’s real.

Screenshot of Google Translate showing real-time text translation.
Free translation apps (Google Translate pictured) are getting better all the time, and real-time in-ear translation for spoken languages is also possible.

But the existential risk? A.I. super-intelligence? A machine with unfathomable, god-like knowledge and powers? A technological utopia where A.I. does all the work and us humans can just kick back, claim our universal basic income, and chill out all day? I gotta admit that I’m sceptical.

I think back to my fears of self-driving vehicles poaching jobs from truck drivers. And I reflect on the promises (and concerns) that surrounded technologies like supersonic jets and Google Glass. Within the past few decades, technologies have emerged that seemed ready to reshape the world… but they didn’t. A.I. can do certain things incredibly well, and it’s definitely impacting the world of white collar work. But the more fanciful claims in A.I. 2027 feel, to me, more like sci-fi than a credible threat.

Still frame from Terminator 2 showing a Terminator wielding weapons.
Both a post-labour utopia and killer robots going rogue still feel like sci-fi.

Maybe that’s my own bias showing through, though. I fully admit that, over the years, I’ve made plenty of mistakes when it comes to technology. I mean, I bought a Wii U, for goodness’ sake! And, in a broader sense, it can be difficult for any of us to imagine a world that’s radically different from the one we currently inhabit. My grandfather could vividly recall rushing outside to catch a rare glimpse of a motor car when he was a child, yet by the time he was in his fifties the world had seen atomic weapons and humans landing on the Moon. The pace of technological change across the twentieth century was unprecedented, and it completely transformed the way people lived.

In my lifetime, too, I’ve seen the birth of the world wide web, the rise of portable electronic devices, and more. When I was a kid, my household had one television set, no video recorder, no computer, and a single landline telephone. By the time I moved out and got my first student flat, we had computers, games consoles, mobile phones, and dial-up internet. Then came broadband, MP3s, streaming, and smartphone apps. When I had my first weekend job I’d get paid in cash, and my first full-time job still paid me by cheque. I’d regularly have to go to the bank to pay those in – but I can’t remember the last time I set foot in a bank. I do all my banking these days on a smartphone app.

Stock photo of a CRT television set.
There have been huge technological changes within my own lifetime.

I guess this boils down to one massive question: are we anywhere near the limitations of modern A.I. systems? Or, to think about it another way, are today’s A.I. models, with all of their generative capabilities and human-mimicking interactions, genuinely capable of taking the next big leap?

A.I. models appear, to me, to be closer to parrots and mynah birds than actual sentient people. They’re quite capable of replicating “language” on the very simple basis of “input X requires response Y,” but they don’t understand what they’re saying. That’s why A.I. systems make so many incredibly basic mistakes, and why some enquiries or conversation attempts go completely off the rails. A.I. models, in their current form, also seem to struggle with retaining information, even when all we’re talking about is a few lines of text.

Stock photo of a red parrot.
Are today’s LLMs closer, in terms of sentience, to you and me… or to this parrot?

Let me give you an example. While researching this article, I “spoke” with several A.I. chat bots. And it was an interesting experience! A.I. can do some things incredibly well – it can write you functional computer code, answer questions about everything from history to mathematics, and even write an article or a story that flows naturally and is coherent. But A.I. struggles with some pretty basic things, too, like remembering what’s already been said. In just a short span of time, and without me deliberately trying to trick it, I found the A.I. that I was “speaking” with giving totally contradictory responses to things it had said just a few minutes earlier. I could scroll up and see the older messages, but the A.I. seemed to not have that ability. It’s programmed to respond to immediate inputs – so if you ask it the same question twice, five minutes apart, you might very well get two different answers. Not differently worded answers – totally different, contradictory responses.
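For the technically curious, here’s a rough sketch of why that can happen – and I should stress that this is a toy illustration with made-up names, not any real chat bot’s code. The model doesn’t “remember” anything by itself; it only sees whatever messages are bundled into the latest request, and once older messages are trimmed to fit the so-called context window, it’s as if they never existed:

type Message = { role: "user" | "assistant"; content: string };

const MAX_TURNS = 6; // pretend only the last six messages fit in the window

function buildPrompt(chatHistory: Message[], newQuestion: string): Message[] {
  // Keep only the most recent turns. Older messages are still on screen for
  // the user to scroll back through, but they are never sent to the model again.
  const recent = chatHistory.slice(-MAX_TURNS);
  return [...recent, { role: "user", content: newQuestion }];
}

// After a twenty-message conversation, only the tail end survives the trim:
const chatHistory: Message[] = Array.from({ length: 20 }, (_, i): Message => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: "message " + (i + 1),
}));
console.log(buildPrompt(chatHistory, "Didn't you say the opposite earlier?").length); // 7

If the free chat bots I tried work anything like that – and I’m only assuming they do something broadly similar – then the contradictions I ran into aren’t mysterious at all: the model simply wasn’t being shown the older messages that I could still see on my screen.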

There’s a massive gulf between technology that’s “almost ready” and technology that actually works. Self-driving vehicles might work perfectly… 98% of the time. But that last 2%? That’s what’s stopping self-driving vehicles from going mainstream. The same goes for some of Elon Musk’s rockets – they’re 99% perfect… but that remaining 1% is what causes explosions on the launchpad. A.I. can do some things very well in a limited use case, and can appear to be very human with its mimicry of language. But is it really just a few years – or even a few months – away from the technological revolution predicted by A.I. 2027?

Still frame of a Nasa rocket exploding near the launchpad.
The small difference between “almost ready” and “actually works” is what leads to rockets exploding on the launchpad.

There’s not just a difference but a massive, ocean-spanning gulf between a machine that can mimic human language and one that can operate independently. The first is like a parrot imitating its owner: incapable of genuinely understanding anything it says, still fundamentally operating on an “input X yields response Y” basis, and with a very limited ability to retain information within a single “conversation.” An A.I. system that can solve world hunger, operate entire governments, and threaten our extinction? That still feels like the stuff of sci-fi. And yes, I accept that this was a very limited experiment using free A.I. bots, and that the cutting-edge stuff being worked on behind the scenes is going to be much more capable. But are its capabilities really that much greater, and is this technology really the disruptive, revolutionary, and potentially world-ending threat that the authors of A.I. 2027 believe it to be?

Generative A.I. is proving disruptive, and despite what some folks might want to think, the technology is clearly here to stay – at least in some form. On the business side of things, if an A.I. can do the work of an office full of people, well, that office is going to be closed and those folks will be laid off. I’ve also looked ahead to the future of A.I.-generated entertainment, making a bold prediction that A.I. might, one day soon, be able to generate tailor-made films and TV shows, potentially shutting down entire studios and laying off actors, writers, and everyone else.

Photo of the Warner Bros. studio in California.
Part of the Warner Bros. studio in California.

A lot of white collar, middle-class, creative, and upper-income-bracket folks have long considered themselves safe from this kind of automation-driven unemployment. So the arrival of generative A.I. systems that are competent enough to genuinely compete for those roles? That’s been a massive shock, and I think that’s why we see so many people pushing back against A.I. today in a way that they didn’t push back against all kinds of other disruptive technologies. Because, at the end of the day, most technological changes prove to be disruptive to someone. It’s just that this time it’s the turn of online content creators, wealthier folks, and people with a disproportionately large voice.

But when it comes to things like A.I. drones murdering people, A.I. systems going rogue, or A.I. super-pandemics… I’m not convinced it’s something we’re on the cusp of. Nor do I feel we’re just a few years away from a post-labour tech-driven utopia where no one has to work and we can all indulge in artistic or academic pursuits. These two scenarios feel far-fetched, to me, even as some A.I. systems expand their reach and their capabilities. I suspect we may be closer than we think to the ceiling of what current A.I. models can do, and that the kinds of doomsday or utopian scenarios laid out in A.I. 2027 would require major advances in computing and other technologies – advances that may not even be possible.

Stock photo of a server rack.
What kind of changes or advancements in computing might be necessary to fulfil some of A.I. 2027’s predictions?

The world is ever-changing, and technology in particular is not a static thing. My entire lifetime, really, has seen innovation upon innovation, taking me from an analogue childhood in the ’80s and early ’90s to the tech-focused life of today. I don’t doubt that there will be more changes to come, and that there will be inventions and innovations that, right now, I can’t even conceive of – assuming I live long enough to see them! So I’m not trying to dismiss out of hand the genuine concerns folks have about artificial intelligence. But at the same time, I can’t help but feel that current models could be more limited in their abilities than the A.I. evangelists want their investors to believe.

Right now, A.I. is driving an almost unprecedented level of investment, with a handful of companies making a ton of money. But is this just the beginning of an economic boom that will rival that of the post-war years or the Industrial Revolution? Or is it a speculative bubble about to burst, as we’ve seen repeatedly in recent decades? Whether we’re talking about the dot-com bubble, subprime mortgages, or cryptocurrency crashes, there are plenty of examples of speculative bubbles that got out of hand. Is A.I. just the next one? Are the promises made by A.I. creators genuine, or just an attempt to drum up further investment? Can A.I. really do what investors are being promised?

Stock photo of a briefcase containing stacks of United States dollars.
There’s a lot of money at stake in the A.I. industry.

We can’t escape the reality that all of this is tied to money. A.I. companies need documents like A.I. 2027, because this conversation feeds into the narrative their executives are weaving about the future capabilities of these systems. And the promise of an incredible return on investment is what’s keeping these – otherwise unprofitable – companies in business right now. I’m not accusing anyone of running a deliberate scam, but it’s a pretty well-established way of doing business in the tech space: over-promise, rake in the cash, and only then try to figure out how to make good on at least some of those ideas. That approach has worked for the likes of Apple. But it didn’t go quite so well for companies like Theranos.

The tl;dr is this: it benefits A.I. companies to allow this conversation about their products to do the rounds. It drums up interest and attracts investment – not because investors want to see humanity wiped out and the world end, but because they see the potential for short-term financial gains. A select few companies in the A.I. space have seen their share prices increase four-, five-, and six-fold in just a couple of years – and that’s largely due to the belief that A.I. is the wave of the future. Investors believe that whoever perfects the technology first will become the world’s first trillionaire – and they want in on that. We can’t avoid that side of the issue when discussing A.I. technologies as they exist today – and their future prospects.

Still frame from Star Trek TNG Season 2 showing Data.
Data from Star Trek: The Next Generation.

A.I. is already disrupting entire industries, and we’re all going to have to learn how to use these systems in the workplace in the years ahead. There could very well be fewer entry-level white-collar jobs, fewer graduate-level jobs, and fewer office jobs in general. And the potential uses for A.I. systems on the battlefield could result in a monumental change in how future conflicts unfold. But as I see it, today’s artificial intelligence systems don’t “think.” They regurgitate information when prompted, and they’re closer in actual “intelligence” to a parrot than to a person. Artificial intelligence can do some things very well – better, faster, or more reliably than any person ever could. And that’s going to be fantastic in some use cases: diagnosing diseases earlier, writing computer programmes, or creating individualised education plans for kids with special needs. But there’s a lot that A.I. can’t do – and some of it, given the limitations of computing power, may never be possible.

And it’s those things, in my view, which would be needed to turn the LLMs of today into the super-intelligence of A.I. 2027.

So that’s all for today. I hope this was interesting – though as a total non-expert, I could be completely and utterly wrong about everything! No change there, then. I’ve linked the original A.I. 2027 paper below, and if you haven’t read it, please check it out. There are some great summaries on YouTube, too. I know this was a change from my usual content, but A.I. has been a big deal in sci-fi – and in the Star Trek franchise in particular – for decades, and it’s a big deal right now thanks to the success of the likes of ChatGPT.

If you missed it, I have another piece in which I talk about the possibility of generative A.I. being used to create tailor-made films and TV shows in the near future: you can find it by clicking or tapping here. Thanks for reading, and I hope you found my take to be interesting. Until next time!


You can find the original A.I. 2027 paper by clicking or tapping here. (Warning: leads to an external website.)


Some images generated with A.I. (yes, on purpose!) Some stock photos courtesy of Unsplash, Pixabay, and Wikimedia Commons. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

The Future of A.I. Entertainment?

I have a longer piece in the pipeline about a document titled A.I. 2027, which you may have seen doing the rounds. I’ll save most of my thoughts on A.I. and its future disruptive potential for that, so be sure to check back. But today, I wanted to tackle a more specific use for artificial intelligence, and how we could see a change in the way we engage with and consume entertainment in the not-so-distant future.

This was prompted, in part, by a conversation I was having with a friend about A.I. 2027. Spoiler alert for my longer piece, but I compared the possibility of a “do-everything” general A.I. system to things like 3D televisions, Google Glass, and the Concorde supersonic jet. All three are examples of technologies that seemed to be on the cusp of revolutionising various aspects of our lives… but didn’t. I can’t help but feel that future predictions of general or super-intelligent A.I. – either as a horseman of the apocalypse or as a revolutionary technology about to deliver some kind of utopia – are, at best, far-fetched! But generative A.I. models, which are more limited in scope, do have the potential to play a huge role in the future of entertainment.

And that’s what I want to address today.

ChatGPT's text box with the words "Ask anything" prominently displayed.
Is A.I. about to revolutionise entertainment?

If you’d asked me in 2005 what the future of entertainment would look like, I would not have been able to predict user-generated content on platforms like YouTube and TikTok becoming massive, popular, and hugely profitable. But in 2025, barely 24 hours go by without me watching at least one video on YouTube or spending a little time scrolling TikTok. The widespread adoption of broadband internet, social media, and smartphones with built-in connectivity and cameras facilitated this transformation in the way literally billions of people engage with entertainment.

It’s not a stretch to say that there are people today – adults as well as kids – who don’t care much for television, films, or even video games. Their primary sources of entertainment come from social media – and from user-generated content specifically. It turns out that a lot of people enjoy watching things no media executive could’ve ever dreamed of: vlogs, workout routines, makeup tutorials, video game “let’s plays,” and even ASMR. If you’d told me in 2005 what some of the most popular YouTube and TikTok pages would look like twenty years later, I’d have struggled to believe it!

Four YouTube thumbnails from different genres of video.
These kinds of videos didn’t seem like they’d be the future of entertainment just a few short years ago!

All of this is to say that a revolution in how we engage with media is nothing new. It’s already happened over the past fifteen to twenty years – and that came after a century of changes as we went from live music and theatre productions to the cinema, television, video recording, video games, and so on. Nothing in the entertainment sector stays still for very long, so there are definitely changes coming. Whether my prediction is right… well, if I’m still here in a decade or two, we can revisit this piece and see!

So what is my prediction, exactly? What is this big, revolutionary, A.I.-driven change that I foresee?

In short: user-controlled movies, TV shows, and perhaps even video games. All monetised by big corporations, all licensed and based on subscription models, and all generated by A.I.

Artwork of a traditional cinema film reel on a gold-coloured background.
Are A.I.-generated films and TV shows going to be part of the future of entertainment?

Imagine Star Trek’s holodeck – where you can tell the computer what you want to see – but on a flat screen. By then, the biggest names in entertainment will have either developed or bought out A.I. systems to power this content, and you’ll see celebrities, actors, and anyone famous locking down the rights to their likeness and voice, ready to be licensed out. Some performers will be contracted solely to one big entertainment powerhouse, others might be “jobbing it” wherever they can make a buck. “Traditional” – i.e. human-created – films, TV shows, and games will still be made, and social media likely won’t go away, either. But A.I.-generated, customisable, tailored entertainment is going to be a big deal very soon.

You can already see the beginnings of this. Google’s Veo software is just one example of text-to-video A.I., and people are already using it to make their own videos of all kinds of things. The real revolution won’t actually be the technology’s development, but its monetisation: how big companies can extract the most money possible for their service will determine how it’s used going forward. Right now, if I ask one of these programmes to generate me a video of Darth Vader in a hot tub, it’ll do it without question – but Lucasfilm and Disney won’t be happy about that. As soon as there’s a viable method for monetising these things, we’ll see A.I. models go the way of film and TV streamers – walling off their own intellectual property, and licensing it out for a fee.

Screenshot of Google's Veo 2 software generating a video from a text prompt.
Google’s Veo video generator is one of several that already exist.

Maybe one of the big names in entertainment today – Netflix, for example – will buy out one of the big A.I. companies, using their software exclusively on their platform. Or conversely, maybe one of the big A.I. companies will buy out someone like Disney or Netflix, in a not dissimilar way to how Amazon was able to purchase the venerable MGM Studios a few years ago. Both of those seem possible – but the end result will be the same: content and IP locked behind a paywall, available only to those willing and able to pay.

But for those lucky folks, a world of possibilities opens up.

You’ll sign into your new A.I.-Netflix hybrid, and along with the pre-made programmes and perhaps other user-generated content, there’ll be a simple prompt: “What would you like to watch today?”

A mock-up of Netflix's logo with the subtitle "Powered by ChatGPT" in the same font.
Big entertainment corporations – like Netflix – surely see the potential in A.I.-generated content already.

From there, your only limit will be your imagination. “I want to see a film that’s about two hours long, starring John Wayne as a mercenary in outer space, with Kate Mara as his love interest, and a pink robot voiced by Brent Spiner as the villain. They should pilot a spaceship called the R.M.S. Wombat, there should be a soundtrack by Bon Jovi, and the entire story should be a metaphor for the dangers of secondhand smoking.” A split-second later… and bang! You’re watching your fantasy film with whatever parameters you’ve given the A.I. It’ll be capable of pulling off shocking twists, bringing long-dead actors back to life, and even generating brand-new stories in established fictional universes.

Imagine being able to fire up Paramount’s A.I. (or, let’s be realistic, the A.I. company that owns whatever remains of Paramount by that point) and generate brand-new Star Trek stories. Maybe you’ve always wanted to know what would’ve happened if Captain Picard died after his assimilation by the Borg, or what might’ve happened if the USS Voyager was destroyed, with a handful of survivors being taken in by Chakotay and his Maquis crew. Or perhaps you want to see an expanded look at an existing story, potentially filling in some of the blanks in between established events. You could even look at a story from the other side, like seeing the Dominion War from the Cardassian perspective. All of those things feel plausible with A.I. integration.

Still frame from Star Trek VI: The Undiscovered Country showing the USS Enterprise-A.
We might be able to make our own Star Trek stories one day…

As A.I. technology gets better, its ability to retain information will improve. The more you use an A.I. programme, the more it gets to “know” you – or at least, the better it gets at predicting your reactions, your likes, and your dislikes. In turn, our hypothetical A.I.-Netflix hybrid will very quickly learn what kinds of stories you like and what generates the best responses from you, and it will use that information to fine-tune and personalise the content it generates for you.

It’ll be kind of like having your own personal film studio. Because the A.I. retains all of the information and storylines it’s generated, it’ll be able to make sequels and prequels, and expand on certain story points. If you really liked a character in an A.I.-generated film, it’ll be able to make a spin-off, just for you, tailored to what you liked about the character and the kinds of stories it knows you responded well to elsewhere. Heck, it could even generate a casual vlog-style series based on your favourite celebrity or character, kind of like the A.I. chat bots based on fictional characters that exist today.

Photo of a Hollywood film studio.
Imagine having your own film studio in your TV or phone, ready to turn your ideas and thoughts into real, ready-to-watch content.

By now, you’ve heard the criticisms of A.I. The data centres that power it use more energy than entire countries. It’s stealing people’s art, writing, and more. It’s capable of “lying” or “hallucinating” falsehoods, spreading misinformation. It’s going to put millions of people out of work. And I don’t dispute any of those things, nor am I “championing” the use of A.I. in the entertainment space. This prediction is based on what I’ve seen from my limited engagement with the world of A.I. so far. I don’t actively use A.I. myself; I don’t really have a place in my life for an A.I. chat bot, and I’ve never needed to use A.I. to generate anything. But I see people using it more and more, and to me, the scenario outlined above feels like a plausible next step for the technology as it currently exists.

The big questions on the corporate side are how to lock down their A.I. models and monetise them, and how to prevent competing A.I. systems from “trawling” their content and using intellectual property they claim ownership of. After all, it’d be no good to offer your service for sale if a free competitor could do the exact same thing without the price tag! But if there’s one thing I can say with certainty after more than forty years of existing in this capitalist economy, it’s that you should never underestimate the ability of corporations to find a way to monetise… everything.

Photo of a briefcase overflowing with $100 bills.
Whichever corporation figures this out first is gonna make a lot of money…

Twenty years ago, I wouldn’t have been able to predict the rise of social media, user-generated content, and subscription services. All of those things seemed to come out of nowhere, catching me off-guard. The idea that people would spend hours each day watching what are basically other people’s home videos… that would’ve seemed positively ludicrous, even in 2005. But some people did see that potential, and more importantly, were able to get in early and monetise the heck out of it.

With generative A.I. being the current trend, it’s easy to write it off as a flash in the pan – another 3D television or MiniDisc. And maybe that’s still going to be the case; I haven’t watched a 3D movie in years, and my MiniDisc player has been gathering dust in the attic since the mid-2000s. But right now, with the amount of money being thrown at generative A.I. software, it feels at least plausible to me that, a few years from now, we could all be generating our own high-quality films, TV programmes, and perhaps even video games from simple prompts, with the only limitations being our imagination… and our wallets.

Stock photo of a Sony MiniDisc.
Remember MiniDisc?

I don’t know if that’s the kind of future I want… but I gotta be honest: part of me feels intrigued by the possibilities A.I. could offer. Being able to get tailor-made, fully customisable movies… there’s genuine appeal there, and whoever gets it right first stands to make a ton of money. I don’t think such a marketplace means that films, TV shows, and video games created by human beings will disappear; there will still be a place for creativity, imagination, and innovation. But there could be far, far fewer of those types of films, TV shows, and games being created if the big corporations go all-in on generative A.I. in the way I’ve outlined above. As with all things A.I., that basically means a ton of people are gonna be out of work. That undeniably taints the finished product; A.I. will come with an obvious, and pretty devastating, cost.

But for a lot of people… well, we already know that the human cost doesn’t matter so long as they enjoy the finished product. Video games are still frequently created under “crunch” conditions, an exploitative practice I saw firsthand when I worked in the video games industry. Batteries rely on cobalt and lithium, mined by underpaid or even enslaved workers in horrible conditions. People pay for cheap clothes and shoes made in sweatshops. The list goes on… and my point is simple: don’t expect some kind of moral crusade against A.I. to change a lot of minds when it hasn’t in any of the cases we just mentioned.

Whether A.I. is here to stay – and whether I’m even close to being right about its potential future role in entertainment – remains to be seen. I don’t know how much time I’ve got left, but if I’m still here in a decade or two, let’s revisit this piece together. Perhaps we’ll share a chuckle about how wrong I was, and how I exaggerated a flash-in-the-pan technological fad way beyond its capabilities. Or not.


All brands and properties mentioned above are the copyright or trademark of their respective studio, distributor, broadcaster, etc. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

YouTube vs. Ad-Blockers

Using an ad-blocker when heading online feels simultaneously like a cheat code and an absolute necessity nowadays. Ad-blocking browser extensions don’t simply prevent you from seeing ads; they also stop companies from tracking at least part of your online activity, including searches and clicks. Although it’s not something that gets discussed often, I’d go so far as to say that an ad-blocker is a pretty important piece of software for anyone who wants a bit of extra privacy and who doesn’t want to see their data shared online. Obviously an ad-blocker on its own isn’t the only thing one needs to remain private and un-tracked online… but it’s a good, easy first step.

At the same time, though, ad-blockers fundamentally challenge the way in which a significant portion of the internet operates. Many websites can only afford to remain online because they’re funded by advertising revenue, and using ad-blocking extensions undermines that funding model. I have no sympathy for big corporations (as you’ll know if you’re a regular around here), but small businesses and community projects that rely on ad revenue aren’t things I’d want to see vanish from the internet.

Advertising is a major source of revenue for websites and apps.

YouTube has recently kicked off an attempt to crack down on ad-blockers, and it’s this effort that I wanted to talk about today – along with online ads and ad-blocking in a more general sense.

As YouTube’s position as the dominant video platform has been seriously challenged over the past three or four years, I can understand Google’s approach to ads and ad-blockers – or at least, I can recognise that there’s a perverse kind of logic to the corporation’s approach, even if I fundamentally disagree with much of what it’s doing. YouTube has been losing out to TikTok in particular, as well as other video sharing apps and websites, and that partially explains Google’s insistence on both serving up more and more ads to users and cracking down on ad-blocking.

If you use an ad-blocker, you might’ve seen this recently.

But Google is failing to learn not one but two vital lessons from the early days of the internet! The first is the “Streisand Effect.” By talking about ad-blocking, Google and YouTube are drawing attention to the existence of ad-blockers… and there are tangible, noticeable effects as a result. I don’t put much stock in Reddit as a bellwether of online discourse, but it’s at least noteworthy that the subreddits for several well-known ad-blockers have seen massive increases in comments and subscribers since the YouTube crackdown began.

In short, Google and YouTube may be falling victim to the “law of unintended consequences”! By making a fuss about ad-blockers and trying to push ad-block users to ditch a useful piece of software, all they’ve managed to do is draw attention to the existence of ad-blocking software… causing more users, not fewer, to begin blocking ads on YouTube and other Google services.

The original “Streisand Effect” photograph.
Image Credit: Copyright (C) 2002 Kenneth & Gabrielle Adelman, California Coastal Records Project, http://www.californiacoastline.org

The second lesson Google and YouTube seem to have missed is that of Napster’s demise. Around the turn of the millennium, Napster was one of the first big file-sharing services. People began sharing music online, and the recording industry tried to sue anyone and everyone involved. Napster was shut down after only a couple of years… but that wasn’t the end of file-sharing. Other applications immediately picked up where Napster had left off, and file-sharing is still going strong to this day.

Even if YouTube were to successfully disable or shut down all of the most prominent ad-blocking extensions and applications, more would arise to take their place. And even now, there’s a raging battle between Google’s army of coders and the people who volunteer for or work on ad-blocking projects. As soon as Google updates its code, ad-blockers find a way around it. There’s no end in sight and no obvious way for Google to strike a killer blow.

Logo for uBlock Origin – a popular ad-blocking extension.

There will always be people who want to get something for nothing, and who want to enjoy YouTube without ever seeing a single ad. That’s inevitable. But the rise in ad-blocking isn’t the fault of “greedy” consumers… it’s YouTube’s own fault.

Ads on YouTube used to be inoffensive enough. They’d show up once every five or six videos, there’d only be a single ad, and most ads were only a few seconds long. YouTube also allowed many ads to be skipped after five seconds – and this combination seemed to be working well enough. It’s never fun to be served up an ad, but they weren’t obtrusive and didn’t get in the way of the YouTube experience for most people.

An example of a skippable ad on YouTube.

But YouTube wasn’t content with that… despite raking in literally billions of dollars in ad revenue. Parent company Google began inserting more ads, longer ads, mid-video ads, and unskippable ads… all of which made the user experience noticeably worse. The delicate equilibrium between advertising and content became unbalanced, so is it any wonder people started looking for ways to skip or avoid the ads? YouTube brought this upon itself!

This is the first time in almost four years of writing on the website that I’ve so much as mentioned ad-blocking. By trying to crack down on ad-blockers, all YouTube has managed to do is draw attention to their existence – and to its own piss-poor advertising situation. It’s a classic case of trying to squeeze users too hard: all this will do is drive more and more people away from the platform and into the arms of gleeful competitors like TikTok.

TikTok has emerged as a major competitor to YouTube in recent years.

If ads on YouTube went back to the way they were a few years ago, maybe the corporation could convince more people to switch off their ad-blockers. But instead, Google is trying to force users to sign up for YouTube Premium – a surprisingly expensive subscription service. This is a case of “invent a problem, sell a solution.” Google makes the ad situation on YouTube worse and worse, then offers to let folks pay to skip the ads. Ad-blockers are an existential threat to that business model.

If Google believes that it can win this fight… I think it’s wrong, and it’s underestimating the resourcefulness of folks online. There may be small victories along the way – Google may shut down one ad-blocking extension, for example, ban ad-blocking on Chrome, or manage to re-code YouTube to temporarily prevent ad-blockers. But there’s a whole lot of determination on the other side, and as we’ve already seen so far in this fight, it’s never long before workarounds are discovered for whatever updates are rolled out to YouTube. Even if Google managed to score a big win and prevented all ad-blocking on YouTube… it wouldn’t be long before cunning volunteers found a way around it.

There are a lot of people who are working to update ad-blockers and keep them functioning.

YouTube is not the juggernaut it once was. TikTok’s rise has seriously challenged its status as the leading video-sharing platform, especially among younger folks, and TikTok’s advertising seems to be a lot better and more user-friendly. I don’t use TikTok all that much, but having tried it out recently, I find ads on that platform far less annoying and far less intrusive.

At this point, doing anything to turn off users or push them away feels like a catastrophically bad decision for YouTube that could blow up in its face. The history of the internet is littered with once-massive websites and companies that failed to keep up with changing user attitudes… so there’s no guarantee that YouTube will just be able to coast on its current and past success and simply win by default. That didn’t work for MySpace.

Remember MySpace?

The internet and social media are constantly in a state of flux. It may seem that YouTube is in a dominant position right now – but the brief history of the web tells us that no website, app, or social media platform is unassailable. Doing anything to upset the apple cart while in that first-place position… it just isn’t a good idea. If YouTube were losing money hand over fist, unable to afford to keep the lights on, maybe this crackdown on ad-blocking would be more understandable. But we’re talking about a multi-billion-dollar website backed by one of the biggest tech companies on the planet.

YouTube’s own anti-consumer practices, coupled with the rise of TikTok and other platforms, have pushed people away at the precise moment that Google wants to rope them into long-term paid subscriptions. That’s unfortunate and puts YouTube in an awkward position, but I don’t think this is the right way to react. As Barbra Streisand found out when she tried to force a photographer to take a picture of her house off the internet, calling attention to something only leads to more people becoming aware of it. YouTube and Google are drawing more attention to ad-blockers… and the result is articles like this one. I’ve literally never mentioned ad-blocking in almost four years of writing here on the website… not until Google and YouTube made it into an issue.

Google and YouTube seem determined to make ad-blocking into an issue.

At the end of the day, this is a fight YouTube won’t be able to win. It may score some successes along the way, and Google may even succeed in temporarily preventing ad-blockers on its platform. But it won’t be long before another ad-blocker pops up with a workaround. Look at what happened with Napster, LimeWire, and The Pirate Bay. Or look at how game developers and publishers are locked in a continual battle with pirates. No sooner has an anti-piracy tool been created than people are figuring out how to crack it and work around it. Spending a lot of money on these endeavours is ultimately fruitless – and it’s better for Google and YouTube to realise that now.

There’s also a very real legal question for Google and YouTube: are they even allowed to use software that detects ad-blockers? Probing a user’s browser, extensions, or device without consent may run afoul of privacy laws in some jurisdictions, so these ad-blocker-blockers may not even be lawful. It would take someone with far better legal knowledge than me to settle that, though!
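To give a sense of what that kind of detection typically looks like in practice – and this is a generic sketch of one well-known trick used by “ad-block detected” scripts across the web, not YouTube’s actual code – a site can plant a “bait” element styled like an ad and then check whether something on the user’s side has hidden it:

function adBlockerLikelyActive(): boolean {
  // Create a decoy element with class names that ad-blocking filter lists
  // commonly hide, then attach it to the page.
  const bait = document.createElement("div");
  bait.className = "ad adsbox ad-banner";
  bait.style.height = "10px";
  document.body.appendChild(bait);

  // If cosmetic filtering has hidden the decoy, its rendered height collapses.
  const blocked = bait.offsetHeight === 0;
  bait.remove();
  return blocked;
}

Whether quietly running checks like that counts as inspecting a user’s browser without consent is exactly the kind of question I’m not qualified to answer.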

YouTube and Google may have fallen afoul of privacy laws.

So that’s the situation. YouTube is trying to push back against the small minority of its PC users who use ad-blocking software… but all that’s happened so far is a rise in the use of ad-blockers and more attention being drawn to ad-blocking. The volunteers behind popular ad-blocking extensions are easily able to counter YouTube’s best efforts right now… and good for them.

I’d encourage anyone who wants a little extra security and privacy online to seriously consider installing one of the available ad-blocking browser add-ons. You can manually “white-list” certain websites, like YouTube, if you want to and if it feels important to you to support smaller businesses and independent content creators.

As for YouTube’s prospects… I’m not so sure. The way people – especially younger people – consume media is changing, and YouTube’s status as the dominant video-sharing platform is already under threat. Maybe this clampdown on ad-blocking will succeed and will bring in a bit more loose change in the short-term, but beyond that? The future of online video platforms looks a lot more like TikTok and a lot less like YouTube.

Some images used above courtesy of Unsplash and Pixabay. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

Thoughts on the Linus Tech Tips/LMG situation

I don’t usually “do drama,” especially not when that drama involves YouTubers and “influencers,” but the monumental shitstorm engulfing Linus Tech Tips has been all over my social media feeds this week and I feel compelled to add my two cents.

If you’ve missed the news somehow, popular tech-entertainment channel Linus Tech Tips – and its parent company, Linus Media Group – has been embroiled in controversy. What started with a video from another tech review channel, Gamers Nexus, has ballooned into accusations of ethical issues with LMG’s reviews, conflicts of interest, mistreatment of a small company and their prototype product, and even reports from at least one former employee of a work environment so toxic that she resorted to self-harm before being forced out.

You may have seen this video among your YouTube recommendations this week.
Image: Gamers Nexus

For some background, I’ve been a long-time viewer of Linus Tech Tips. I’ve been a nerd since, well, forever, and when I first started really getting into YouTube, visiting the platform more regularly and doing more than just looking up occasional how-to videos, Linus Tech Tips was one of the first channels that I subscribed to and would regularly tune in to watch. The hosts – Linus included – usually do a good job at making tech and computing topics accessible to the layperson, and while some of the humour isn’t my cup of tea and veers into being cringeworthy for its own sake, more often than not I’d find myself cracking a smile.

When I decided I was going to build my own PC for the first time, Linus Tech Tips was one of the main resources I consulted. I purchased at least one component for my PC – a CPU cooler, which we’ll discuss in more detail in a moment – entirely because it was highly recommended by Linus. I felt the channel and its hosts were trustworthy – and as someone who doesn’t have a ton of knowledge about tech, components, and the like, I don’t really have the resources or expertise of an organisation like Gamers Nexus when it comes to fact-checking the content that Linus Media Group has been putting out.

The Noctua NH-D15 as seen in a recent LTT video.

We’ve talked before about ethics among reviewers, and I’ve made clear that I don’t believe there can ever be such a thing as a “paid review.” There are reviews, in which a reviewer shares their thoughts and opinions on a product as best they can, and there are advertisements. Anything paid for is an ad, regardless of the terminology used, and Linus Media Group has, in the past, come close to crossing that line in my opinion. Taking paid sponsorships or selling ad space in a video is one thing, but dedicating an entire video to selling (sorry, “showcasing”) a product on a channel that also does supposedly-independent reviews… that kind of thing has felt uncomfortable for a long time, and is certainly useless from a consumer standpoint.

But at least those videos are flagged up as such, and even if I just skimmed a title or clicked without really paying attention, it’s usually obvious within a minute or two that a product “showcase” is, in fact, little more than an overly-long advertisement.

What’s been surprising to learn over the past week or so, at least for me, is the extent to which LMG has become tied up with multiple big tech companies – companies whose products they subsequently review, and whose competitors’ products they also review.

An example of one of LTT’s sponsored “showcase” videos.

There is, as others have pointed out, a clear conflict of interest here. If Linus Tech Tips didn’t pretend to offer “reviews,” but rather stuck to product showcases, entertainment videos, and the kind of general mucking about that viewers have come to expect… I guess that would be okay. The deals between LMG and big companies like Intel and Noctua should still be stated up-front and not hidden, but at least there’d be less of an ethical minefield.

But Linus and co. seem to want to have it all.

In a small way, this has actually affected me. The PC cooler I mentioned, the one recommended so highly by Linus on his various channels, is manufactured by Noctua – a company that LMG has a financial relationship with. Now, I will state up-front that I’m not disappointed by the cooler’s performance in the PC that I built, but it’s made me stop to think. Would I have purchased that model, or even a model by that company, were it not for the strong recommendation from a tech influencer that I felt I could trust? And was Linus’ glowing praise for the Noctua cooler influenced in some way by his company’s financial ties to its manufacturer?

The Noctua cooler that I purchased was featured in a Linus Tech Tips video earlier this year.

To be clear (and because Linus himself has been quick to accuse critics of “libel” in the past) I’m not suggesting that Linus, LTT, or LMG deliberately misled me, nor that they fraudulently or dishonestly recommended me a product. I’m responsible for my own purchasing decisions, and it’s on me to seek out multiple reviews and do my own due diligence before making a financial commitment. But what I am saying is that, when there’s any semblance of a conflict of interest or any reason for a reviewer to look more kindly on a product… that’s a serious problem.

Here on my website, I mostly review films, games, television shows, and episodes of Star Trek. But if I were approached by, say, Paramount Global, and entered into a financial arrangement with that company, there’d be some degree of pressure – real, implied, or just purely imaginary – to keep that relationship going and to, for want of a better term, placate Paramount by glossing over the negatives and accentuating the positives in any future “review.” My reviews of Paramount’s films and TV shows would become suspect as a result – and even if I wasn’t making a conscious choice to look upon their content more positively, there’s a strong chance it would happen anyway.

“Shut up and take my money!” said multiple companies to Linus Media Group…

This is, for me, the crux of the Linus Tech Tips problem, and it’s one that LMG may find very difficult to overcome. Sure, the fact that some of their reviews were badly done or incompetently handled is a bad look – but that’s something that can be corrected, over time, with better quality control, enhanced video production procedures, and the like. And other tech channels, like Gamers Nexus, will be watching and will be ready to jump in and hold Linus’ feet to the fire if those kinds of mistakes and slip-ups happen again.

But the issue of trust is a much trickier and more nebulous one to resolve. Trust has been strained to breaking-point between Linus Tech Tips and many folks in its audience, and revelations of close partnerships and financial ties to big tech companies aren’t something that can be hand-waved away or fixed in short order by internal changes at LMG. This is something that cuts to the very core of Linus Tech Tips’ content: can viewers trust what Linus and his co-hosts are saying about, well, anything?

Linus in a recent apology/explanation video.

No review is ever “entirely objective,” because that’s not how reviews work. There’s always a certain amount of one’s own thoughts, impressions, and biases present, even in reviews that promote stats and data above all else. So I don’t want to come across as asking for the impossible, because I know from my own experience that there simply isn’t such a thing as an “objective” review.

What makes a review valuable is the trust audiences have in the reviewer. I like to think that folks who come to Trekking with Dennis to read my film reviews or Star Trek episode reviews trust me to share my honest thoughts on what I’m watching or playing, and trust that I have enough basic knowledge of the subject matter at hand to write something that’s at least worth their while. I also state in many of my pieces all over the website that I have no “insider information” or special access, and that I’m sharing my thoughts as openly as I can. That’s not to say I’m in any way free from bias, but I am independent. I don’t have financial ties to any of the companies whose films, games, or TV shows I review.

I like to think that my audience can trust my reviews of shows like Star Trek.

Linus Tech Tips can’t make that claim – and whether the hosts are “reviewing” a product from a company they work with or a product from one of that company’s competitors… there’s a very real danger of unconscious bias seeping into their content, making their reviews functionally worthless to consumers. If viewers are tuning in to watch Linus goof about, that’s fine and I guess no real harm was done. But for someone in the position I was in last year, looking for genuine product recommendations… there’s a pretty massive red flag being waved.

Linus Media Group can and should take on board the feedback provided by Gamers Nexus and their community. Slowing down their output, prioritising quality over quantity, and being willing to go back and edit videos or even re-film whole segments if mistakes were made are all pretty easy fixes; low-hanging fruit that should bring significant improvements to the quality and accuracy of their videos going forward. If making those kinds of commitments means reducing their output or hiring additional staff, that shouldn’t be an issue for a company of LMG’s size and financial means. The company is, quite remarkably for a YouTube outfit, worth tens of millions of dollars, in case you didn’t know.

Terren Tong, Chief Executive Officer at Linus Media Group.

But those changes are surface-level at best. The real issue of trust, and whether viewers can have faith in a review published by a company with such significant investments all over the tech space… that’s a whole different matter. And there’s no easy fix here – LMG has to decide what kind of company it wants to be and what kind of content it intends to create. If they’re going to stick to geeking out and mucking around, doing little more than playing with some of the fancy pieces of kit that they spend vast sums of money on, then I think they’ll be okay. But if they want to play at being reviewers and journalists, offering honest advice to consumers… the changes required to win back that kind of trust are much greater, and the will to make them simply may not be present.

How Linus Media Group responds to these criticisms in the days ahead will be telling, and I think it’s not unfair to say that Linus’ initial reaction to Gamers Nexus was poor – so catastrophically poor, in fact, that it added fuel to the fire and made the situation noticeably worse. For me, this isn’t an issue of a simple inaccuracy, a mistake, waiting too long to correct a misstatement, or even that LMG is clearly running too hot and with the wrong priorities. My biggest takeaway from this whole sorry saga is that, at its core, LMG is too heavily involved with too many of the companies in the tech space whose products it purports to review. Whether intentional or not, those relationships are going to lead to bias and conflicts of interest – and ultimately render an LTT review worthless.

An example of a graph produced for a Linus Tech Tips product review.

It goes without saying that a company should treat its employees with dignity and respect at a bare minimum, and I don’t want to ignore nor make light of the very serious accusations levelled at LMG by at least one former member of staff. Those accusations are, at time of writing, being looked into, and if they prove even close to true… LMG is going to be in for another huge wave of trouble to say the least. We’ve all had the misfortune of working for a toxic manager or boss at one time or another, and I extend my sympathies to the ex-employee in question.

Linus’ on-screen persona seems a million miles away from the accusations of sexism, ableism, and toxicity that have been levelled at managers within LMG… but if those accusations are borne out, quite frankly it could be a far worse situation for the company. Other famous YouTubers have been “cancelled” for comparably toxic behaviour, so no one, no matter how big of a fish they may be in their small pond, should consider themselves to be immune from consequences. These allegations from a former employee could prove far worse than anything Gamers Nexus presented and any of the trust issues I’ve been discussing above.

What comes next for Linus Tech Tips?

So that’s about all I have to say, really. This is a disappointing situation, but one that, if I’m being honest, I should’ve seen coming. There have been so many “showcases” and sponsored segments in LTT videos over the years that of course the company must have those deep relationships and financial entanglements. It’s on me that I didn’t really put two and two together until now.

Going forward, I’m not sure what I plan to do. Linus Tech Tips has given me a lot of enjoyment over the last few years, and even if there are conflicts of interest and financial arrangements that now make me question the usefulness of LTT’s reviews on a practical level, some of the channel’s other content – the videos focused more on entertainment and fun – may still be worth watching. However, I will be keeping my ear to the ground to see what comes of the investigation into the complaints raised by at least one ex-employee of LMG. If the accusations levied at LMG are even close to true, then I think I’ll be unsubscribing. There’s no place for such awful treatment of a fellow human being, no matter how “big” and untouchable you think you are.

This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

I quit Twitter

They say that social media isn’t an airport: you don’t need to announce your departure. But having recently jumped ship from the “hellsite” that is Twitter, I wanted to explain why I decided to put an end to an almost two-year experiment with the micro-blogging platform – and my reasons for leaving may not be what you expect.

In 2022, tech entrepreneur (and richest human on Earth) Elon Musk announced that he planned to purchase Twitter. Some of the changes he made when the deal eventually went through haven’t gone down too well – but my decision to quit is entirely unrelated to the Musk situation, even though I now find myself as one voice among many who have chosen to leave Twitter at the same time. In fact, the situation with Elon Musk’s takeover is partly why I wanted to share my thoughts; it would be easy for folks to assume that I left Twitter because of Musk, when really the timing is little more than coincidence.

New Twitter CEO Elon Musk.

First of all, I don’t think I was ever a good fit for Twitter, nor was Twitter the right place for me, because of my writing style. If you’re a regular reader, you may know that I have a somewhat longwinded or exhaustive style (I have reviews of individual Star Trek episodes that break the 10,000-word mark, for instance) and that just doesn’t gel with Twitter’s 280-character limit. I find it nigh-on impossible to condense an article or argument into such a short-form post, and while Twitter does allow for writing threads of posts that are joined together, that sort of defeats the purpose – and I found that many people on Twitter would only read one post out of an entire thread anyway.

As a result of posts being so curtailed, the conversation on Twitter, such as it is, lacks nuance. When we’re talking about the world of entertainment, which is my primary focus here on the website, I found that a Twitter post had barely enough space for me to say that I either liked something or I didn’t; there’s very little room for shades of grey or a more detailed explanation of how some elements of a production worked while others didn’t. And the audience on Twitter seems to largely expect that kind of black-and-white judgement.

No, not that kind of Shades of Gray

In practically all of my reviews here on the website, you’ll find me saying both positive and negative things about a film, television show, or video game – because in almost every case, no production is either absolutely perfect or irredeemably awful. Even in cases where I’ve been scathing about films I didn’t enjoy – such as The Rise of Skywalker or Zack Snyder’s Justice League – I found positive things to say about some aspects of the projects. Likewise, in productions that I loved and had plenty of positive things to say about – like Star Trek: Discovery’s fourth season finale, Coming Home, or the video game Forza Horizon 5 – I wasn’t shy about picking on elements that I felt were less successful. Twitter, at least in my experience, didn’t really allow me to do that – and most people I interacted with there didn’t seem to want that nuanced discussion in any case.

Most things in life are not black-and-white, with either wholly positive or entirely negative opinions, and speaking for myself, it can be those nuances that I find the most interesting. Twitter, by its very nature, forces users into one camp or another, and when there’s already so much division in the world – and in fan communities – that kind of discussion goes nowhere positive. Taking Star Trek as an example, I feel that there have been some incredible Star Trek episodes since the franchise returned to the small screen five years ago… but there have also been some pretty serious missteps by Paramount Global, as well as some disappointing storytelling decisions to boot. But on Twitter, fans are pretty firmly divided into two camps: those who support new Star Trek and those who hate it. I found it difficult to fit in with either; I was too pro-“new Trek” for those who hate it, and too critical of it for those who never wanted to see a negative opinion expressed and who were quick to label anyone saying anything remotely critical of Paramount as a “hater.”

I’ve been critical of Paramount Global over the past couple of years.

On both sides of an increasingly black-or-white argument, I saw people who openly said that they didn’t want fans from “the other side” to even follow them, let alone interact with them, and these divisions seem to go very deep. One fan had in their Twitter bio words to the effect of “I never want to say anything negative about Star Trek ever” – and I just feel that such extremes of positivity or negativity don’t make for a healthy fan community, nor for a supportive one that can discuss in a civil and polite way the franchises they love. And this was by no means exclusive to the Star Trek fan community – similar divisions and arguments seem to plague practically every community on Twitter, and that’s before we even get to the world of politics!

On a similar note, Twitter reminded me a lot of the kind of stupid “he said, she said” arguments that were common on the playground at school. In the roughly two years I spent on Twitter I lost count of the number of times that “drama” erupted in the Star Trek fan community, with everything from financial scams to racist rants to ill-considered compliments triggering huge waves of backlash targeting certain individuals and their friends. Many times I was told that a person I had followed was “toxic,” and that if I wanted to remain part of whatever clique I had inadvertently stumbled upon then I’d better un-follow a whole host of people extra-quick.

It’s a visual metaphor.

If someone made a mistake – not being racist or anything truly nasty, but just saying something ill-advised or even something that was taken the wrong way – they would often find themselves the subject of hate and abuse, leading to exile. Twitter’s community of medieval peasants placed the targets of the day’s ire in the stocks, tarring and feathering them until the humiliation or attacks got too much, forcing them to leave the platform. I saw perfectly decent people who had friends and fans make a single mistake, write one single poorly-worded post, and get run off the site by sanctimonious self-appointed moderators.

I’d heard people call Twitter “toxic” long before I joined, and I’d even seen some folks refer to it as a “hellsite,” but I confess that I was entirely unprepared for the levels of childishness, of toxicity, and of poor behaviour that I encountered. Fortunately, for the duration of my two years on Twitter, none of that hate was ever directed at me personally, but simply being there and seeing these “Twitter dramas” unfold was enough to put me off.

An average Twitter user getting ready for a day of arguing with strangers about meaningless nonsense.

I joined Twitter with the rather shameless intention of promoting some of the articles here on the website, in the hopes that I’d get a few more clicks from Trekkies and people interested in some of the other things I talk about here. But looking at my stats, the posts I shared on Twitter only ever got a few hundred clicks at the very most, meaning that the experiment was a failure and sticking around didn’t feel worthwhile. I was conscious not to come across like a spam-bot, only ever posting links to the website, so I tried to expand what I did to include memes and jokes, often but not always about Star Trek. But even so, I never managed to attract much of a following.

This isn’t intended to sound bitter, and I appreciated that some folks on Twitter did follow my account and share and like some of the posts I made. I’m grateful for that and for their support. But speaking purely practically, being on Twitter didn’t accomplish what I hoped it would, and the few hundred extra hits that the website got weren’t worth falling down that rabbit hole and getting sucked into the politics and drama that swirl around that infernal website.

Social media just isn’t my strong suit!

Twitter became a time-sink for me, and I found myself scrolling through memes, politics, and even some of those damned arguments and toxic conversations that I talked about earlier. I found I could spend hours just endlessly doomscrolling, and honestly it wasn’t good for my mental health – nor for my productivity. While there were some genuinely interesting accounts sharing fun posts, they were drowned out in an ocean of negativity, toxicity, and argument – and perhaps that’s partly why my own account got lost and never managed to find much of an audience, even in the niches I hoped to break into.

The political side of this was, for me, perhaps the worst part – I found myself unable to resist the temptation to fall down these political rabbit holes, and would waste time in a way that wasn’t positive for my mental health. Politics can be fascinating, don’t get me wrong, and Twitter is, in some respects anyway, an interesting platform in which politicians at all levels can interact directly with their constituents. But for many of the same reasons that we’ve just been discussing, those conversations are black-and-white, and Twitter is firmly divided into camps. Toxicity and hate reign supreme, with the most extreme posts getting the most attention and the most feedback. It really is a nightmare.

I found myself spending way too much time reading and thinking about politics.

I’m glad that a platform like Twitter exists, where people from different backgrounds and with different opinions can interact. I think there were good intentions there – at least in the beginning. And despite his personal controversies, I hope that Elon Musk can find a way to make the platform a success going forward; the world needs a place to communicate. For all the talk of toxicity and people falling into echo chambers, the one positive thing to say about Twitter – and social media in general – is that it can, under the right circumstances, expose people to points of view that they may not have considered, and with proper content moderation it should be able to steer people away from conspiracy theories and towards scientific fact. Twitter has failed on that latter point lately – but no more so than any other big social media platform.

But for me, Twitter was never a good fit. I need to have the freedom to write longer posts, to take positions on the subjects I discuss that don’t fall solidly into “love it” or “hate it,” and just in general my communication skills, even online, are lacking. I didn’t know how to use the site to talk to people or make friends, nor how to really take advantage of its algorithms to drive traffic to my own website.

So that’s it. If you used to follow me on Twitter and wondered what happened to my account, now you know. I have no plans to join any other social media platforms at this time, and while I’m happy to have experimented with Twitter and given it as much of a fair shake as I reasonably could, it was definitely time for me to get out.

No statement in the article above should be interpreted as targeting any individual, fan group, social media page, etc. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

To recast or not to recast?

Spoiler Warning: There are spoilers ahead for The Book of Boba Fett. Minor spoilers are also present for Solo: A Star Wars Story and Star Trek: Strange New Worlds.

President of Lucasfilm Kathleen Kennedy recently made a statement that has drawn a lot of attention. In an interview with Vanity Fair magazine, Kennedy stated that one of the lessons that the Disney-owned studio learned from the lukewarm response to Solo: A Star Wars Story in 2018 was that recasting classic characters isn’t possible. In her words, “it does seem so abundantly clear that we can’t do that.”

This has kicked off a discussion online, not least among fans of the Star Wars franchise and Solo in particular. Today I want to add my two cents to the conversation and use the debate around Kennedy and Solo to talk about recasting characters in a general sense, as well as touch on some alternatives that studios have turned to in recent years.

Lucasfilm President Kathleen Kennedy.

First of all, I encourage you to read the full piece in Vanity Fair so you’re aware of what was said and its context. Online debate often centres around a single phrase, soundbite, or fragment of a sentence, and it isn’t always clear how someone’s words were intended. In this case, for example, Kennedy seems to have been talking about the need for Star Wars to move on – something I’ve talked about at length here on the website – and expand beyond the confines of the “Skywalker Saga” and the handful of familiar characters who made up those stories.

I absolutely agree with that sentiment! The Star Wars galaxy is such a vast and exciting setting, one with thousands of years of history, an uncertain future, thousands of planets, trillions of inhabitants, and so much potential! So far, the Star Wars franchise has doubled down on showing us the same handful of characters and the same tiny sliver of that setting over and over again, and I for one am starting to get sick of it! Star Wars can be more than Luke Skywalker – if it’s willing to put in the effort.

Canto Bight, one of many interesting locales in the Star Wars galaxy.

So in that sense, I agree with Kathleen Kennedy! But taken as a starting point for a discussion about recasting characters, I couldn’t disagree more.

It’s profoundly ironic that Kennedy made this statement during the build-up to the broadcast of Obi-Wan Kenobi… a series that centres around a character who was recast from the original Star Wars films! The Star Wars prequel trilogy recast a number of characters – and recast characters were even ham-fistedly edited into the so-called “special editions” of the original films, with the original voice of Boba Fett being re-dubbed, the original Emperor Palpatine being replaced in The Empire Strikes Back, and most notoriously, the Force ghost of Anakin Skywalker being changed in Return of the Jedi.

Disney and Lucasfilm panicked in the late 2010s. The divisive reaction to The Last Jedi and underwhelming numbers for Solo: A Star Wars Story saw the Star Wars franchise refocused to bring back a lot more of what corporate leadership hopes will pass for nostalgia. This accounts for the existence of shows like The Book of Boba Fett and Obi-Wan Kenobi, as well as certain narrative decisions in The Rise of Skywalker. It isn’t the first time that corporate cowardice has got in the way of entertainment, and it likely won’t be the last.

The Book of Boba Fett is an example of Star Wars recycling characters.

I’d argue that the wrong lesson has been learned from Solo if Disney and Lucasfilm believe that the biggest takeaway is that they should never try to recast characters. Alden Ehrenreich’s performance was far from the worst thing about that film, and if audiences and Star Wars fans felt that he “didn’t feel” like Han Solo, the blame needs to be placed on the way the script was written and the way the story was told. Solo was a fairly clumsy overplaying of the nostalgia card in my view; a film with an interesting premise that was hampered by shoehorning in characters from the original trilogy and that made the same mistake with Han Solo as the prequel trilogy had with Anakin Skywalker: overexplaining his backstory.

But all of that is incidental. Even if we accept the premise that Solo was a failure and that the performance of its lead was a significant factor – neither of which I’m fully on board with, but I’ll grant for the sake of argument – is the right response really to say that no classic characters should ever be recast again? That seems like a horrible overreaction!

Alden Ehrenreich in Solo: A Star Wars Story.

The Star Wars franchise has done some very interesting things with digital de-ageing and CGI character creation. The way Luke Skywalker was brought to screen in The Book of Boba Fett as an entirely CGI character was technologically stunning, and at first I thought I was watching a lookalike. The technology needed to create CGI characters and perfectly replicate the face – and even voice – of real actors is already here, and while Disney may be a pioneer of the technology, they’re far from the only ones to be using it. We’ve recently seen digital de-ageing make an appearance over in the Star Trek franchise, for example.

I’ve had an article in the pipeline for well over a year that I really ought to get around to finishing one of these days! It’s all about CGI characters in film and television, because I’m convinced that we’re not far away from a film or TV series bringing back to life a dead actor to play a leading role. I can already picture a snooty director who insists that the lead role in their film could only ever belong to someone like Laurence Olivier or Orson Welles and decides to digitally recreate them rather than cast someone else!

A CGI recreation of Mark Hamill was used to bring Luke Skywalker into The Book of Boba Fett.

That’s connected, in a way, to what we’re talking about here. Some actors and performers are so iconic that I can absolutely see a time – perhaps very soon – when a film or TV series will use a CGI lead mingling with real actors. A digital creation with a fake face and fake voice that are indistinguishable from the real thing. And as that technology improves and becomes more accessible, it may even become possible for amateurs to do something similar. Imagine a Star Trek fan-film where a fully-realistic CGI Captain Picard is the lead. We’re edging closer to that kind of reality!

But is Sir Patrick Stewart inseparable from Captain Picard? Is Mark Hamill the only possible Luke Skywalker? Or are these roles – and many others across the realm of entertainment – more than just one actor? Look at Shakespeare as an example: how many actors have taken on the role of Hamlet, Prospero, or Richard III? And even in cinema, how many different performances have there been of characters like Count Dracula or Ebenezer Scrooge?

Is Christopher Lee the only possible Dracula in all of cinema?

If the argument is that certain characters can only ever be portrayed by one person, doesn’t that limit them and restrict them to a single possible interpretation? There have been very different takes on iconic characters over the years, and while audiences may have a preference for one or another, that doesn’t mean that only one interpretation is valid. Taking a character and giving them to a new actor expands the potential of that character.

In the Star Trek franchise we’ve seen the main characters from The Original Series recast for the Kelvin films, and while I know of some Trekkies who absolutely hated that idea, as time has passed since the 2009 reboot, more of those folks have come around. 2009’s Star Trek did a lot of things right as a reboot of the franchise, and a big part of its success was down to the way those classic characters were reinterpreted.

Star Trek hasn’t been shy about recasting characters!

Some Star Trek characters have been recast multiple times – by my count, Strange New Worlds features the fourth actor to play Captain Pike and the third to play Spock. Early reactions to Strange New Worlds have been incredibly positive – and the series has even drawn praise from some fans who hadn’t enjoyed anything else that modern Star Trek has had to offer. Recasting Pike, Spock, and other classic characters has clearly not harmed Strange New Worlds.

And the same could be true for Star Wars. Maybe Solo wasn’t the best film the franchise has put out, but that shouldn’t mean that experimenting with different takes on classic characters should be entirely shut down. There’s scope for new actors to take on the roles of Luke, Leia, and others – just as there was for Ewan McGregor to become Obi-Wan Kenobi or Alden Ehrenreich to become Han Solo. Arbitrarily deciding that recasting can never work off the back of a single underwhelming film is an unnecessary overreaction – especially considering that recasting has already worked in Star Wars, with an upcoming series standing as testament to that fact.

Obi-Wan Kenobi was recast in 1999 – and the recast character was so popular that he’s about to get his own spin-off series.

At the same time, there’s a place for digital character creation, recreated characters, and CGI characters, and I fully expect to see a lot more of that type of thing in the years ahead. For my two cents, digital de-ageing and CGI characters probably work best as side-characters rather than main protagonists – and I think Star Wars has got away with using them in that context so far. It will be a new challenge to see a film or TV series where the leading role is taken over by a CGI character.

So in conclusion, I’m glad that Kathleen Kennedy is finally willing to consider expanding the Star Wars franchise beyond the tiny fragment of its wonderful and vast setting that we’ve seen so far. That part is the good news! However, I don’t agree that there’s no place for recasting characters in a general sense. New actors have the potential to bring a new interpretation to the role, and if you look across at other films, franchises, and TV shows, it’s abundantly clear that recasting can and does work.

All properties and franchises discussed above are the copyright of their respective studio, distributor, corporation, etc. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

Windows 11… what’s the point?

About a month ago I built myself a brand-new PC. When I was making my plans and getting ready for the build, one of the choices I had was that of the operating system. With the exception of some very early lessons at school using a BBC Micro, and playing a few games on a Commodore 64 owned by a friend, I’ve always used Microsoft products. My first ever PC ran Windows 95, and I’ve used every version of Windows since, either for school, work, or at home. Although I’m not an expert by any means, I consider myself a pretty experienced Windows user!

Microsoft initially promised that Windows 10 would be the “final” version of their landmark operating system, with updates and tweaks but no replacement. This is what Apple has been doing for over twenty years with macOS (formerly known as OS X) so it seemed like something Microsoft could do as well. That promise lasted barely six years – less if we assume that Windows 11 must’ve been in development behind-the-scenes for a while – and before we go any further it’s worth acknowledging that. The broken promises surrounding Windows 10 will have quite understandably soured some people on Windows 11 before they even got started.

The first PC I owned ran a different version of Windows!

I found Windows 10 to be okay, but it had some issues. There were graphical bugs that only afflicted 4K screen resolutions, an unnecessarily complex set of menus and settings, lag on some Bluetooth devices, and more. I reported a few of these issues to Microsoft not long after upgrading to Windows 10… but they ignored all of them. If nothing else, I felt that upgrading to Windows 11 would at least simplify the experience, getting rid of the multiple settings menus and finally allowing me to display extra-large icons.

But alas, Windows 11 has to be the shoddiest “upgrade” I’ve ever come across. It feels like even less of a step forward than the jump from Windows 8 to Windows 8.1, and practically all of my complaints and criticisms about Windows 10 remain in the new operating system. At time of writing, Microsoft charges £120 for the Home version and £220 for the Pro version of Windows 11 – and there is no way in hell that it’s worth the money. You’re better off sticking with Windows 10 in the short-term.

Windows 11… it has TikTok!

As 4K screen resolutions have become more common, you’d think that Microsoft would allow Windows users to take full advantage of a good-looking display. Heck, Microsoft sells its own Surface products with 4K screens – and yet for some reason, incredibly basic things like extra-large icons don’t work with a 4K screen resolution. This issue was reported to Microsoft as early as 2017 – a full five years ago. Throughout the lifetime of Windows 10 they did nothing to fix it, and I’d given up on ever being able to use extra-large icons on Windows 10. But you’ll forgive me for thinking that such a basic, simple thing could’ve been included when a brand-new operating system was released.

Control Panel and Settings menus are also a major area of complaint. As early as Windows 8, Microsoft saw fit to include not one but two settings menus: the classic Control Panel and a new Settings menu. These two menus often overlap, and it can be exceptionally frustrating to spend ages looking for something only to realise you can’t do it from the Settings menu and you have to go back to Control Panel – or vice versa. How difficult would it be to roll both menus into one? This is now the fourth operating system in a row to have this problem, and I know I’m not the only one bothered by it.

The Control Panel still exists… and still clashes with the Settings menu.

To me, the examples above show just how little care and effort Microsoft put into the development of Windows 11. There are a handful of new features – like the ability to install certain Android apps, for instance – but nowhere near enough to justify the cost, nor even enough to justify calling Windows 11 a wholly new OS. It’s Windows 10.1 – a basic shell with a few new shiny features slapped carelessly atop Windows 10.

And that isn’t actually the worst part of it. Some of the “features” that Windows 11 has introduced have made the day-to-day experience of using the operating system significantly worse. One of the most basic features that I’ve used for years in Windows is the ability to see my scheduled calendar events at a single click. Click the bottom-right of the screen to pop open an expanded calendar, then click on a day to see what events are on the agenda. Windows 11 has taken away this phenomenally useful feature, forcing me to open the full calendar app.

The Widgets menu.

This is part of a trend that you’ll notice with Windows 11 from the very beginning: every feature, every useful little app, every widget… they’re all designed to push users to sign up for Outlook and OneDrive accounts. Even if you have a full Microsoft account – and using Windows 11 without one is pretty difficult, as basic things like changing to dark mode aren’t available to you in that case – Outlook and OneDrive are basically required to make the most of many Windows 11 features.

Want to see a slideshow of photos on the Photos widget? Tough luck, you need OneDrive for that. Want to check your schedule on the calendar without having to open the full app? Screw you, sign up for Outlook. This is Microsoft’s approach. To the corporation, it isn’t good enough that you’ve bought the OS; in order to use many of its most basic features they want to fully rope you into every Microsoft account, ecosystem, and most importantly, every possible subscription.

Windows 11 offers a lot of apps… but to take advantage of them you’ll need subscriptions and accounts.

On top of that, there are some unnecessary cosmetic changes and menu changes that have again made everyday tasks more complicated. Right-clicking now brings up a new, smaller menu, one which has replaced basic options like “Copy” and “Paste” with stupid little icons. In order to access really basic options that have been part of Windows for decades – like “Print,” for example – you need to right-click, then click to open a second options menu. Unnecessary menus hidden inside of menus seem to be one of the hallmarks of this underwhelming operating system.

Installing Windows 11 was not a smooth experience, either. Despite not actually being much more complex than Windows 10 in many respects – an OS that can run on most computers made in the last 15 years – Windows 11 has one of the biggest barriers to entry of any Microsoft release to date. By requiring a Trusted Platform Module (or TPM) Windows 11 is effectively off-limits to any PC more than four or five years old. Even pretty expensive PCs with good-quality components don’t comply with this requirement.

This is the screen that greeted me when I first tried to set up Windows 11.

One of the strangest bugs I’ve encountered so far is in the Event Viewer. While tracking down a particularly annoying problem that came about when I built my new PC, I noticed that the Event Viewer is completely flooded with the same message over and over and over again. At time of writing, my PC – which is less than two months old – has more than 20,000 instances of the same “DistributedCOM” warning. Microsoft’s official advice? That’s fine – it’s supposed to look like that!

Microsoft currently plans to end support for Windows 10 – a widely-adopted OS, thanks in part to the corporation’s promises that it would be the “final” Windows version – in late 2025, which is only three-and-a-half years away at time of writing! This cynical attempt to pressure users to upgrade is just disgraceful; previous versions of Windows lasted far longer after their successors were released. Support for Windows 7, for example, only ended two years ago, and Windows 8.1 is still supported at time of writing.

Windows 11 reminds me of Windows ME.

So that, in my experience so far, is Windows 11. It’s as if a team of some of the best software experts in the world sat down to create an operating system designed from the ground up with the sole objective of pissing me off – and they succeeded beyond their wildest expectations.

Windows 11 will be my operating system from this point forward – but only by default. Just like when I had Windows ME, Windows Vista, and Windows 8 and 8.1, I’ll begrudgingly tolerate it. But as soon as there’s a better OS available, I’ll take it. Windows 11 is, in my view, comparable to those failed experiments from Microsoft; the best thing I can say about it is that it may prove to be an incremental step on the way to something better.

We can but hope, right?

Windows 11 is available to purchase now. Windows 11, Windows, and all other properties mentioned above are the copyright of Microsoft Corporation. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

Building my new PC

For a couple of years my PC had been in need of a refresh! I’m disabled and spend most of my time at home, and my PC has been everything for me over the last few years: entertainment centre, games console, workspace, and of course, the place where I write all of these articles and do all of the tasks here on the website! When 2021 rolled around I decided I needed to get my act together and get serious about an upgrade, and over the course of last year I put together a list and began to acquire the components for my new build piece by piece.

Though I perhaps know a little more about computers than your average consumer, I’m by no means an expert – as this article will surely attest. That’s why I’m not calling this a “build guide” or “how to build a PC!” There are plenty of far better-qualified people than me who’ve written step-by-step guides on how to do everything, and there are videos on YouTube too. I’d recommend Linus Tech Tips on YouTube, TechRadar’s step-by-step guide, and the Tom’s Hardware forum if you’re looking for that kind of detail – and I’ll include links to all three at the end of this article. I’m writing this for the website to share my experience as a newbie to PC building – and because I enjoy writing!

I’m no expert, so if you’re looking for a detailed guide you’ll have to go elsewhere!

First of all, it took longer than I’d hoped to get everything in place. I kicked off this project just over a year ago, in early 2021, and I hoped to have made upgrades to my old PC by the summer. I then changed my plans and decided to build an entirely new machine from scratch, adding extra time to the project, but I still had hoped to be finished well before Christmas. In the end it was mid-March before I finally got it done – and there’s one additional task that I’ll aim to complete perhaps later this year or early next year, depending on how things go.

When I set out to build my PC I thought I knew the basics; which components I’d need and roughly how much I’d need to spend on them. But what hit me later on were all of the hidden costs, extras, and accessories: things like additional cables, an extra fan, a new DisplayPort cable, a new surge protector, screwdrivers, a static wrist strap, thermal compound, thermal pads, and so on. Because I’ve also changed where I sit and the orientation of my PC, I’ve also needed to invest in a new monitor arm and additional storage under the table that my PC rests on. All of these smaller things added up and delayed the project by at least a month!

Picking up hidden extras like new cables added to the cost.

Because I’ve never had a lot of money, I’ve always chosen to invest in items that I feel are higher-quality and stand a good chance of lasting a long time. The cheapest products aren’t always the best value or longest-lasting, as I’m sure you’ve discovered for yourself! With that in mind, I sought out components with excellent reviews, and even a single negative review about a product or the company’s customer service was enough to send me into a tailspin as I pondered the upsides and potential drawbacks. This also added a lot of time to the project!

This time around I chose to go with an AMD Ryzen CPU – specifically the Ryzen 7 5800X, an 8-core processor from the Ryzen 5000 series. After more than a decade of Intel’s dominance in the processor space, AMD’s Ryzen chips began getting rave reviews a few years ago, and it seemed like the best fit. I’m not wildly into overclocking, nor do I intend to push the chip far beyond its limits – but I wanted to get something that I thought would be high-quality, fast, and that would really show off what a modern PC is capable of.

My new CPU.

About a decade ago I suffered a major internet outage that left me reeling! For more than six weeks I remained disconnected, growing increasingly frustrated – and increasingly bored. When I got back online I ordered an external hard drive, and on that drive I installed a number of games, made backups of my DVDs, and so on so I’d always have something to do if I was ever in that situation again. I got a second external drive somewhere along the line too, and my workspace has been cluttered with drives, wires, and power cables for the past few years.

With my new PC, I wanted to ditch the external drives altogether. I don’t go places, and I don’t have other computers I might want to plug them into, so their presence was just an annoyance! With that in mind I installed two drives in my new PC: an M.2 drive to serve as my main C: drive – where Windows lives, where other software and apps can be installed, and where I can install most of the games I’d want to play – and a second large hard disk where I can keep all of my stored DVD and Blu-ray rips.

Good old DVD box sets…

I chose a Sabrent Rocket M.2 drive for my new PC’s primary drive – again, on the back of reviews and recommendations – and a large Seagate Exos hard disk for my secondary drive. It should be possible to install games on the second drive as well, if space becomes an issue on the M.2 in future, which is also a nice feature to have. Flexibility is the name of the game in that case!

This is my first experience with an M.2 drive. My old PC had a SATA SSD, but it was a very cheap one that never seemed to be especially fast. I think it was a Kingston model, and it was pretty small as well. Basically everything except for Windows – including my collection of MP3s and photos – ended up on an external drive.

My new M.2 SSD.

This might be the most controversial part of the build, but I went for RAM overkill: 64GB of DDR4 RAM. The RAM can be clocked to 3600MHz, which is apparently recommended for Ryzen chips, though out of the box it ran much slower – DDR4 defaults to a lower speed until the memory’s XMP/DOCP profile is enabled in the BIOS. 64GB of RAM is complete overkill for practically any modern system, so I’m told, but last year I was thinking about getting into YouTube – I had a short-lived foray into podcasting – so I thought I might need the extra if I got serious about video editing or other RAM-intensive tasks.

I chose a decent motherboard to go with all of these components – a “gaming” model from MSI. I also invested in a power supply from Be Quiet that’s rated 80 Plus Titanium – the highest rating available from the premier certification scheme for these kinds of things. I don’t pretend to know the exact details of what makes a “Titanium” better than a “Bronze,” but I think it’s to do with power efficiency at different load levels, particularly during periods of heavy use. It seemed worthwhile to spend the extra money on something more efficient, though, and I made sure to choose a power supply that could more than handle all of the components I was putting into the machine.

The motherboard I chose.

Here’s a problem that I wager most users won’t have to factor in: cats! I have several cats, and they have a tendency to jump on my PC case. With my old machine, I found that the inconveniently-located power button meant that they were frequently turning my PC off with their paws when jumping or walking on the case, so I wanted to choose a new case with a power button either on the front or at least not flat on top. Most cases nowadays seem to put the power button flat on top; the “old days” of horizontal cases, or of power buttons on the vertical front of the case, seem to be long gone!

I chose a Be Quiet case in the end; the power button is still near the top, but it’s located on a sloping panel that means my cats could jump up and down without disturbing it or accidentally switching me off halfway through writing an article… or halfway through the latest episode of Star Trek! The Be Quiet Dark Base 900 is a much larger case than my previous machine, but I think that means that there should be good airflow for keeping all of the components cool.

This is what I mean.

The CPU cooler that I chose was also based on reviews and recommendations: I went with a Noctua NH-D15. I debated using a water cooler – one of the all-in-one systems – but ever since a fellow PC builder I knew ruined his entire system when his homemade water-cooling setup sprang a leak… let’s just say I’ve been put off! I know that today’s all-in-one water coolers are probably safe to use – far safer than the janky piece of crap my feckless friend built in his basement 20+ years ago – but even so, I felt that an air cooler was the way to go. The Noctua NH-D15 is one of the best-reviewed setups on the market, and it has recently been updated with a special AMD Ryzen mounting bracket, so that was the version I picked up.

I chose to add one PCIe card – a WiFi and Bluetooth antenna. I don’t care about the WiFi particularly as I’ve always preferred to use ethernet for my stationary PC, but I wanted to add Bluetooth functionality. I use a Bluetooth keyboard and I have a couple of other Bluetooth devices that I thought I might try to connect, and considering that it wasn’t hugely expensive to add it in, it seemed worthwhile.

The Bluetooth card.

With prices for graphics cards having been sky-high for years, I knew from the start that I would recycle my current one rather than wait months only to pay over-inflated prices. When my GPU crapped out on me a couple of years ago I replaced it with a modern GTX 1660, so it’s not like it’s a horribly outdated component. It would be lovely to back up all of that new hardware with a ray-tracing graphics card that can really take advantage of modern games… but one thing at a time! That’s an upgrade that I hope to get around to either later this year or next year, depending on prices and how well my PC performs.

So those were the main pieces that I chose. It took a while to back up all my files (and back up the most important ones twice, because I’m paranoid like that), but eventually I’d done as much as I could, procrastinated long enough, and was ready to get to building!

I scavenged this graphics card from my old machine. It’ll do for now!

I’m absolutely certain that building a PC in 2022 is significantly easier than it would’ve been fifteen or twenty years ago. Most components slot into place, there are step-by-step guides and video tutorials on how to do everything, and even the instructions that came with the components were easy to understand.

I started by taking the motherboard out of its box, strapping on my anti-static wristband and grounding myself, and making sure I had my new screwdriver kit at the ready! Installing the RAM was the task I chose to do first – it’s something I’d done before and I knew exactly what I was doing. From there I installed the M.2 drive and its heatsink, and then the task I was probably most nervous about: the processor itself.

My chosen RAM.

How many horror stories have you seen of bent pins, misaligned chips, and other CPU disasters? I couldn’t easily tell which way the chip was supposed to be oriented; the little triangle that’s supposed to indicate that was incredibly small and blended in. But after checking, double-checking, and psyching myself up for it, I gingerly placed the chip into its socket… and we had success! Nothing was broken, no pins snapped off, and nothing blew up. Phew!

Next I applied a small amount of thermal compound (I went with Thermal Grizzly’s “Kryonaut” paste instead of the stock paste from Noctua). Doing what I’d seen others do on video, I laid out a small blob of the stuff, no larger than a grain of rice, and then secured the cooler in place. It amazes me that such a large cooler is okay; it looks like it’s hanging there, suspended in mid-air!

The thermal paste I used.

Having done about as much as I could with the motherboard outside of the case, I next had to grab the case itself and start installing the power supply. The Be Quiet power supply that I chose came with a large number of cables, not all of which I ended up using. Some of the cables look very similar to one another, so it took a while to make sure I’d got each one in the right place!

I installed the motherboard, screwing it into the appropriate standoffs in the case. Then I slowly began plugging in each of the various cables, including a bunch of wires that had been dangling inside of the case when I opened it up! I installed the hard disk in the lower corner of the case, and removed all of the other hard disk trays that I’m not using (I’ll hang onto them in case I ever want to add in another drive or something). I hope this will result in slightly better airflow.

A lantern came in handy for extra light – the inside of the case was very dark.

All that was left was to install the GPU and the Bluetooth card in the two PCIe slots. Having done that, which didn’t take very long at all, I checked my watch and was surprised to see it had only been about ninety minutes! Thinking to myself that I’d done a good job, I grabbed a Dr Pepper and went in for a victory sip while the cats sat idly by and watched. To my surprise none of them tried to interfere while I was working… good cats!

But I was far from done, as it turned out. After double-checking every connection and component, I plugged in the PC and hit the power button… and nothing happened. Oh god, panic time! What have I done wrong? How can I even test to see what’s happening if literally nothing is happening?! After a moment of abject panic I tried to think back… what could have gone wrong? Why would absolutely nothing at all happen when I hit the power button?

My reaction!

After checking the very obvious things – was the power supply switched on, was the cable plugged in, was the surge protector turned on, etc. – I homed in on the problem: the power button itself. The power button had to be connected to the motherboard using a two-pin cable, and the connection had to be in a specific orientation (as denoted by a plus and minus symbol). I’d installed it back to front. After reversing the power switch connector I tried again, and to my joy and relief the system sprang to life!

All of the fans seemed to be spinning, and after reaching the BIOS it seemed like everything was showing up: the system detected its USB ports, its M.2 drive, and its hard drive, and it had the right amount of RAM… everything seemed to be right where it should be, so I shut it down and prepared to install Windows 11.

My operating system of choice. Ha.

Ugh. Windows 11. We’ll have to talk about this in more detail on another occasion, but for now suffice to say that Windows 11 appears to have been designed by a team of software experts at Microsoft who were given the explicit brief of creating an operating system that embodies every aspect of the word “irritating.” They succeeded beyond their wildest ambitions.

I was told at first that “This PC can’t run Windows 11!” thanks to the ridiculous hardware requirements that Microsoft placed on the new OS. I knew that wasn’t right, because the Ryzen 7 5800X meets the TPM 2.0 requirement via AMD’s firmware TPM. However, this security feature isn’t enabled in the BIOS by default, so I had to go in and turn it on manually. Having completed this task, Windows 11 happily installed at the second time of asking.

The screen that greeted me first time around.

That should have been the end of the affair, but there was one final twist in this tale of amateur-hour PC building! A couple of days after putting everything together, patting myself on the back, and calling it a job well done, the new PC began experiencing random crashes. There would be no warning, no blue screen… just an instant shutdown as if the power had been cut. I was very worried!

These shutdowns produced no meaningful error messages, just a very basic entry in the Windows Event Viewer that said nothing about the cause. After spending a long time on Google and chasing down replies to years-old forum posts, I tried every software fix I could find: updating and re-installing drivers one by one, uninstalling programmes and apps, rolling back Windows updates, updating the motherboard BIOS… nothing worked. The shutdowns continued, and they seemed to be getting worse. At one point, the system tried and failed to boot five times in a row; it wouldn’t even make it as far as the desktop before losing power.

This was all I could see in the Event Viewer.
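
For anyone chasing a similar ghost: the vague entry that Windows writes after this kind of sudden power loss is usually Event ID 41 from the Kernel-Power source – I’m assuming that’s what I was looking at – and it’s possible to pull just those entries out of the cluttered System log rather than scrolling through the Event Viewer by hand. Here’s a minimal sketch using Windows’ built-in wevtutil tool:

    # A minimal sketch (Windows only): pull the most recent unexpected-shutdown events
    # from the System log. Assumes the vague entries were the usual Kernel-Power
    # Event ID 41 records ("the system has rebooted without cleanly shutting down first").
    import subprocess

    result = subprocess.run(
        [
            "wevtutil", "qe", "System",
            "/q:*[System[(EventID=41)]]",  # filter: Kernel-Power event 41
            "/c:5",                        # only the five most recent matches
            "/rd:true",                    # newest first
            "/f:text",                     # human-readable output
        ],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

It wouldn’t have told me the cause – Event ID 41 only records that power was lost unexpectedly – but it does at least confirm the pattern and the timestamps without wading through everything else in the log.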

After a lot of digging around – not helped by the vagueness of the error message, or by the fact that Windows 11’s Event Viewer is cluttered with warnings that Microsoft insists are totally fine – I eventually relented and opened up the case again to see if there could be a hardware problem. It didn’t seem like a typical hardware issue: if a component had been broken or faulty, I would have expected to see the problem from the very first moment I put the system together, not starting days later after everything had been going smoothly.

Every component appeared to be securely in place; the CPU cooler wasn’t falling off, all the cables were plugged into the power supply securely, and the power supply itself seemed to be in good working order. Running out of options I did something that really isn’t recommended – poking around inside the case while the system was powered on. I poked and prodded at the various components as safely as I could, and eventually I hit upon the problem – the cable connecting the power supply to the CPU was just slightly loose. The tiniest bump or prod on this connection switched the system off in exactly the same fashion as I’d been experiencing.

A simple loose connection between the power supply and the CPU was the cause of the problem.

Rerouting the cable in question, and tying it as securely as I could to the inside of the case, seems to have solved the problem. I can only assume that it came loose in the first place thanks to a combination of my amateur workmanship leaving it susceptible to the smallest of knocks… and the cats jumping on top of the case! They didn’t jump on the new case for a couple of days as they were wary of this new addition to the room, but I think their jumping must’ve been just enough to loosen this CPU power cable and cause those irritating random shutdowns. At time of writing it’s been just over a week since I rerouted the cable and the problem has not returned.

So that’s my PC building journey. It was an interesting experience, and while I can’t honestly say that I saved a lot of money by buying my own components, what I can say is that I got exactly the PC that I wanted. I got to choose every part, I got to make sure that I got components that met my requirements – or the requirements I thought I had, at least! – and I got a new experience out of it, too. At my age, brand-new experiences are few and far between!

One of the guilty kitties!

If you’re looking for a recommendation, I’d say that building a PC isn’t for the total beginner. Sure, most components snap together easily enough, and anyone who’s ever built a Meccano set would be able to do that part of it with a few basic tools and the instructions. But knowing where to begin, and where to look in the event of things not going exactly as planned… that required some background knowledge on the basics of how PCs work. If you’ve taken an interest in technology, though, and you know the difference between a CPU and a GPU, or which way around fans should be pointing, then I’d say it’s a fun project – but it is a project, and that requires some degree of effort, preparation, troubleshooting, and an ability to Google your way to solutions!

I’m glad I attempted this project, and hopefully the new PC will tide me over for the next few years with no trouble. I have vague plans, as mentioned, to get a ray-tracing graphics card in the months ahead, but for now I’m satisfied. I’ve copied over all of my files and backups, and I’ve started installing a few games to play – including a couple of titles that my old PC struggled to get running.

Stay tuned for a review of Windows 11 in the days ahead, because I definitely have some thoughts on Microsoft’s latest operating system. Some very critical thoughts!

Windows 11. It has TikTok!

Below you can find a list of the components that I used to build my new PC.

  • Power Supply: Be Quiet! Dark Power 12 850W Fully Modular 80+ Titanium
  • Case: Be Quiet! Dark Base 900 (Black)
  • Case Fans: Be Quiet! Silent Wings 3 PWM 140mm
  • Extra Case Fan: Noctua NF-A14 PWM 140mm
  • Motherboard: MSI MAG X570 Tomahawk WIFI ATX
  • CPU (Processor): AMD Ryzen 7 5800X 8-Core
  • GPU (Graphics Card): Palit GeForce GTX 1660 6GB
  • RAM: Corsair Vengeance LPX 16GB x4 (64GB total)
  • Solid-State Drive: Sabrent Rocket 2TB NVMe M.2
  • SATA Hard Disk: Seagate Exos Enterprise Class
  • PCIe Bluetooth Card: Gigabyte GC-WB1733D-I
  • Extra Cable: Sabrent SATA III
  • Operating System: Windows 11
  • Tools: iFixit Essential Electronics Toolkit; anti-static wrist strap


I’m not an expert and this article is not intended as advice or a guide. You are solely responsible for the outcome if you choose to build your own PC, and I accept no responsibility for any damage or destruction that may result. Some stock images used above are courtesy of Pixabay. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

Thoughts on the PlayStation 5 downgrade controversy

As we approach the PlayStation 5’s first anniversary in just a few weeks’ time, I don’t see how anyone could possibly challenge the assertion that Sony launched the console far too early. The PlayStation 5 has been out of stock since day one, and occasional shipments of consoles are either sent out to folks who pre-ordered one months earlier, or else are snapped up by bots or the occasional legitimate buyer within minutes of becoming available. To be fair, this isn’t something exclusive to Sony – Microsoft has had similar problems with the Xbox Series X and S.

As a result of ongoing shortages, Sony is clearly scrambling to make more units available as quickly as possible. Failure to do so will see the brand continue to take a hit, and if there isn’t sufficient stock in the run-up to Christmas, with parents and gamers forced to either go without for the second year in a row or buy from a scalper at over-inflated prices, the PlayStation 5 might take a long time to recover. Enter the downgrade controversy, which has been doing the rounds online in recent days.

The PlayStation 5 is still out of stock around the world.

In short, newer PlayStation 5 consoles appear to have smaller, lighter cooling systems, which obviously saves Sony a bit of money and presumably makes consoles easier and cheaper to ship around the world. The internal components are otherwise the same, but the heat-sink and cooling apparatus have been significantly changed. Many folks are calling out Sony for this, saying it represents a significant downgrade. Controversy and argument have ensued.

Let’s get the obvious things out of the way first. No one is suggesting that PlayStation 5 consoles are suddenly going to burst into flames or melt or explode or set your house on fire. That’s the dumbest, most pathetic straw-man argument I’ve seen put forward by some Sony fanboys as they try to defend the company. But it is clear that Sony has started using smaller, presumably less effective cooling solutions in newer consoles, so what’s going on? And is it a risk or a potential problem?

An original PlayStation 5 cooling setup (left) and a new one side-by-side. The new model is clearly a lot smaller.
Image Credit: Austin Evans via YouTube.

Heat is bad for sensitive electronics, like the microprocessors used in video game consoles and PCs. That plain and simple fact is why manufacturers spend so much time and energy figuring out the best cooling solution. If an electronic device – like a PlayStation 5 console – runs at a higher temperature, there’s a higher chance of its components wearing out sooner than if an identical machine were cooled better and more efficiently. That’s the heart of this discussion – not straw-man arguments about “catching fire.”

So the question I have is this: was the original PlayStation 5 cooler over-engineered? Or to put it another way: was the original version essentially “too good” at its job, providing a level of cooling that the PlayStation 5’s internal components didn’t need? Anyone who’s ever built their own PC can tell you that it’s possible to go overboard with cooling – eventually you hit a point of diminishing returns, where adding yet more cooling capacity barely lowers temperatures at all.
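
To illustrate what I mean by diminishing returns, here’s a toy calculation using the rough rule of thumb that a chip’s steady-state temperature is the ambient temperature plus its power draw multiplied by the cooler’s thermal resistance. Every number below is made up for illustration – none of them are measurements from a real PlayStation 5:

    # A toy illustration of diminishing returns in cooling – every number here is assumed.
    # Rough steady-state model: chip_temp = ambient + power_draw * thermal_resistance.

    ambient_c = 25.0     # room temperature in °C (assumed)
    power_w = 200.0      # heat the chip dumps into its cooler, in watts (assumed)

    resistance = 0.40    # cooler's thermal resistance in °C per watt (assumed)
    for upgrade in range(5):
        chip_temp = ambient_c + power_w * resistance
        print(f"Cooler {upgrade}: {resistance:.3f} °C/W -> chip runs at roughly {chip_temp:.0f} °C")
        resistance /= 2  # each successive "upgrade" halves the thermal resistance

The first halving of thermal resistance drops the chip by 40 degrees; the next by 20; the one after that by 10. Past a certain point, a bigger and heavier cooler is doing very little extra work – which is exactly the argument for the original cooler having been over-specified.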

Newer PlayStation 5 consoles may look identical to older ones, but they’re lighter and with smaller coolers.

If that’s the case, and the original PlayStation 5 had more cooling capacity than it could reasonably need, then perhaps this downgrade is perfectly understandable. The difficulty with figuring this out is that the console is less than a year old, and any long-term effects of running hotter simply aren’t known yet. These things can take several years to fully manifest, and by then – if there is an issue – it could be too late for players who’ve bought a downgraded console.

From Sony’s point of view, this is already a PR problem. Disassembly videos and articles by popular tech websites and YouTube channels have already highlighted the downgrade, and whether or not it represents a genuine threat to the longevity of newer PlayStation 5 consoles and their internal components, there’s no denying that some people are concerned. Not only that, but it risks Sony looking cheap, like they’re skimping out on the PlayStation 5’s internals. That perception – regardless of whether it will actually cause a problem for the average player – is a real danger for the company.

Headlines like these – from reputable outlets – could be incredibly damaging for Sony, creating a perception that will be difficult to shift.

As I mentioned at the start of this article, the launch of the PlayStation 5 has been rocky at best for Sony. “Supply chain issues” has become a buzzword for all sorts of companies over the past year or so, but the public’s tolerance for such things is limited. Add into the mix the perception that Sony is trying to circumvent some of their shortages by cheaping out on something as important as cooling and you have the makings of a significant challenge for the console’s reputation – and the company’s.

Sony will need to address this issue, and do so quickly. Their response will be significant for the future success of the PlayStation 5, because if they allow the console to acquire a reputation for being hit-and-miss or for having cheap, low-quality components, that will be hard to shift. Microsoft had to spend a lot of money repairing the damage done by the Xbox 360’s dreaded “red ring of death” in the mid-2000s, not just in terms of replacement hardware but in terms of the console’s reputation. Sony doesn’t have the financial resources of Microsoft, so they need to get this right first time around.

This could turn into a major problem for the PlayStation 5 if Sony doesn’t act fast.

In summary, then, I’m not sure whether this downgrade is a significant issue. If the original PlayStation 5 had an unnecessarily large cooling capacity – which seems unlikely, but you never know – then perhaps this is simply an exercise in efficiency. My suspicion is that Sony is trying to cut its shipping costs, given the sudden jump over the past year in the price of sending products around the world – reducing weight is a great way to save money right now, and PlayStation 5 consoles are heavy. This downgrade does have the potential to be damaging, though, as sensitive electronic components that aren’t sufficiently cooled will wear out more quickly. Any impact, however, probably won’t become apparent for months, if not years.

Where Sony needs to worry is in terms of reputation and PR. Right now there’s a growing perception among consumers that the company is skimping on the PlayStation 5’s internal components, and they need to act fast to prevent that from becoming the headline – especially with the holiday season approaching. The PlayStation 5 has already endured a difficult launch, but until now the biggest issue Sony has had is that folks want to buy a PlayStation 5 but haven’t been able to. If this perception takes hold, the company could soon find that many of those would-be PlayStation 5 buyers have changed their minds and don’t want a console after all – and that will be a much more difficult problem to solve.

The PlayStation 5 is out now – assuming you can find one. PlayStation, PlayStation 5, and other properties mentioned above are owned by Sony and/or Sony Interactive Entertainment. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

So Windows 11 is happening…

I avoided covering the rumours and so-called leaks a few weeks ago, but it turns out that Windows 11 really does exist and will begin being rolled out later this year or early next year. I was surprised to hear that Microsoft planned to release a whole new operating system so soon after Windows 10’s 2015 launch; Windows 10 was billed as the “final” version, with the prospect of updates and tweaks but no replacement. A mere six years later – or fewer, assuming that the new OS has been in development for a while – and Microsoft is ready to abandon that pledge.

Windows 10 is far from perfect. It’s an improvement over past versions of the operating system, of course, but it has its problems. For me, though, the worst thing about Windows 10 has been Microsoft’s lack of care. Bugs and issues which were reported to Microsoft more than five years ago – such as 4K displays not being able to use extra large icons – are still in the OS and it seems Microsoft just opted to ignore them.

Windows 11 is coming. Prepare yourself!

An update to Windows has been needed for a while, not just to address some of these bugs but to give the whole OS a bit of a refresh. But does it need to be a completely new operating system? Windows may remain dominant across the PC space, but a lot of people were initially sold on the upgrade to Windows 10 on the promise that it would be the final version of the OS. Windows 10 had a solid launch because people were keen to upgrade from Windows 7 and Windows 8 on that basis – something that was helped by the upgrade being free at first.

To abandon that promise so soon after making it is going to sour at least some people on Windows 11 – even more so if the new upgrade won’t be free. I can’t find any information on that, by the way, so watch this space. Windows 10 has, over the course of the last few years, come to eclipse Windows 7 and 8 as the most-used operating system around the world, and with a renewed growth in the PC market partly thanks to lockdowns and working from home, I would have argued that Windows 10 is well-placed to ensure Microsoft’s continued dominance of the PC space going forward.

An example of a Windows 11 desktop.

Windows 10 will be Windows 11’s main competitor, at least in the first few months and even years of the new OS’s life. Apple’s Mac is its own walled garden, and Linux, despite some attempts to make “user-friendly” versions, is still a niche, enthusiast product. So Windows as a whole has no major competition in the PC realm – but Windows 11 will have to stand up against Windows 10, an OS with a built-in userbase numbering well over a billion.

Windows 11 will have to strike the right balance between offering improvements and changes but without being so different as to discourage users familiar with the basic Windows interface. Moving the Start button to the centre of the taskbar instead of leaving it in its familiar left-hand position is one of those dumb aesthetic things that’s likely to prove costly. Windows isn’t Mac, and shouldn’t try to imitate everything Apple does. Folks need familiarity, especially considering the prevalence of Windows in the business world, where many users aren’t as tech-savvy and just want something that they know how to use.

Does Windows need to copy Mac?

If Windows 11 can smooth some of the rougher edges of Windows 10, perhaps it will see success. And in the longer term, unless Windows 11 turns out to be another Vista – a flop that has to be replaced in short order – I think it will, simply by default, gradually roll out to more and more devices. As noted above, there simply isn’t a viable alternative for most PC users.

There are some concerning elements, though. I mentioned Vista, and that greatly-disliked operating system brought some elements to Windows that seem superficially similar to Windows 11. Widgets for the taskbar and desktop are the most notable. And from Windows 8, which was also considered a major disappointment, Windows 11 is bringing back the “multi-device” design, with the new OS supposedly being able to work on phones, tablets, touch-screens, and laptops as well as PCs.

Gaming was mentioned as part of Microsoft’s Windows 11 presentation.

One thing Windows 10 got absolutely right was its return to a focus on PC and standard keyboard and mouse input devices. I’m not convinced that enough people want a Windows 11 tablet or laptop to make building the entire OS around that concept worthwhile. Doing so risks making the desktop PC experience worse for users – and considering 99% of folks who use Windows do so on a desktop PC or laptop, that’s a mistake Microsoft can’t afford to repeat.

All that being said, I’ll give Windows 11 a shot when it’s ready. I like to stay up-to-date, and the newest version of Windows is an inevitability for someone who uses a PC daily. Might as well get in on the ground floor and start getting used to things – that’s been my attitude toward every version of Windows since I first owned a Windows 95 PC!

I’ve been using Windows for a while now…

One point to note is that Microsoft’s current policy is to continue to support Windows 10 “through October 14, 2025.” That’s a scant four years away, and if it should happen that support for Windows 10 ends on that date, as Microsoft seems to be implying, then everyone will need to upgrade to Windows 11 at that time. If there’s a free upgrade offered for a limited time, as there was with Windows 10, it would make sense in my opinion to take it.

Despite lofty promises in 2015 about kids being able to grow up with the ever-present, unchanging Windows 10, six years later Microsoft is ready to ditch it in favour of a new operating system. It looks to offer some superficial visual changes, and while I’m hopeful it’ll fix some of the problems with 4K displays that Windows 10 has suffered from, I don’t know that for sure. It feels unnecessary, but as Microsoft is utterly dominant in the PC realm, anyone with a Windows machine should think seriously about taking the upgrade when it rolls out in the months ahead.

When the official Windows 11 upgrade or launch happens, I hope you’ll check back for my full thoughts on the latest version of the operating system. Until then, all that’s left to say is I hope it’s a success along the lines of Windows XP, and not a disappointment like Windows 8 or, god forbid, Windows Vista.

Windows 11 is being released in late 2021 or early 2022 by Microsoft. Windows 11, Windows 10, and all other properties mentioned above are the copyright of Microsoft. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

I pre-ordered Starlink

February was a month in which I was hoping to save some money, putting a little aside for some overdue computer upgrades. But I was pleasantly surprised to hear that pre-orders for Starlink – the satellite internet service operated by SpaceX, the rocket company run by Mars enthusiast Elon Musk – were available here in the UK. I promptly paid my deposit and have signed up for Starlink, which is scheduled to become available in “mid-to-late 2021.”

Usually I encourage people to avoid pre-orders, as they can lead to disappointment. But I’ve been in dire need of upgrading my internet connection for a long time, and with neither fibre broadband nor 5G seemingly on the cards any time soon, Starlink is the best option for me. I live in a rural part of the UK, and while some nearby settlements have been connected to fibre broadband and are enjoying download speeds around 50-60 Mbps and upload speeds of at least 10 Mbps, I’m stuck with copper telephone lines.

A SpaceX rocket launches, taking Starlink satellites into orbit.
Photo Credit: U.S. Air Force/Airman 1st Class Zoe Thacker via WikiMedia Commons

It’s amazing to me in a way just how much data these old-fashioned copper telephone wires can actually transmit. Considering the technology is well over a century old, and that this village had its telephone lines installed sometime in the 1950s or 1960s (yes, rural England was late to the party!) it’s a shock that any internet connection is possible, quite frankly! I remember my father telling me about his childhood in London, when his family was the only one on their street to have a telephone. People would queue up at their front door sometimes to borrow their phone! How times change.

In 2021, the kind of speeds that copper phone lines can deliver just aren’t acceptable, even with a broadband connection running over them. On a good day I can expect around 7-8 Mbps down and barely 1 Mbps up, which means I can download almost one megabyte of data per second at best. That’s adequate for streaming, even in high definition, but it makes downloading large files interminably slow! When it comes to video games, which I predominantly buy digitally on platforms like Steam, this can mean waiting literally an entire day – or even longer – just to download the installation files for some modern titles.
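
To put some rough numbers on that – the game sizes below are typical examples picked for illustration, not figures from my own library – here’s a quick back-of-the-envelope sketch comparing my copper line on a good day with the fibre speeds those nearby villages enjoy:

    # Back-of-the-envelope download times. Game sizes are assumed typical examples;
    # speeds are in megabits per second.

    def download_time_hours(size_gb: float, speed_mbps: float) -> float:
        """Time to download size_gb gigabytes at speed_mbps, in hours."""
        size_megabits = size_gb * 1000 * 8        # 1 GB ≈ 1,000 MB = 8,000 megabits
        return size_megabits / speed_mbps / 3600  # seconds -> hours

    for size_gb in (30, 70, 100):
        slow = download_time_hours(size_gb, 8)    # copper line on a good day
        fast = download_time_hours(size_gb, 60)   # the fibre speeds nearby villages get
        print(f"{size_gb} GB game: ~{slow:.0f} hours at 8 Mbps vs ~{fast:.1f} hours at 60 Mbps")

A 100 GB title works out at around 28 hours on my connection – which is where the “entire day, or even longer” figure comes from – versus under four hours on even a modest fibre line.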

My download speed could be worse… but it could be a heck of a lot better too!

That’s not to mention the unreliability of the service I get from BT – a.k.a. British Telecom. A few years ago, a fault of some kind at BT knocked me offline for over six weeks, and the “best case scenario” download and upload speeds I mentioned often fluctuate and dip below that; some days I can find I have barely 1 Mbps of download speed, meaning doing anything online besides reading text is impossibly slow. As a disabled person who spends a lot of time indoors, I find myself increasingly reliant on the internet for everything from communication to everyday necessities. I do my banking and sort out my bills online. In these pandemic times I use video chat to keep in touch with friends and family. I even order my groceries online! And of course, the online sphere is where I get much of my entertainment, whether that’s in the form of films, television, or video games.

As I said when I criticised the television licence, I don’t watch broadcast TV any more, so the internet has become my primary way of accessing entertainment, news, and really everything else. It’s become a necessity in a way I would never have predicted in the 1990s or even the 2000s – and not just for me, but for almost everybody. So I’ve been in need of an upgrade for a while!

I’ve got a new toy to play with coming soon!

I looked into getting a 4G modem and router, but as it happens the 4G availability in my area wouldn’t improve the situation much. It was also much more expensive – almost double the price I currently pay. And as mentioned, neither fibre nor 5G seem to be coming here any time soon. I could move house of course, but I’m settled here and moving into a town simply for the sake of faster internet is not something I’d realistically want to do.

Then along came Starlink! I’d signed up to be notified about the service over a year ago, excited at the prospect of faster internet via satellite. And now I’ve officially pre-ordered it! Hopefully the company will stay on course for their mid-to-late 2021 launch, and all being well I’ll get connected in late summer or the autumn. Be sure to check back because I’ll let you know all about the experience when I finally get connected.

So that’s it. There isn’t much else to say at this stage, really. I’m very excited about this new, fancy-sounding piece of technology, and I don’t mind paying a little extra if my internet experience will be vastly improved. The pre-order process was simple, and I can’t fault it from that point of view. I’m looking forward to faster internet speeds some time soon!

Starlink is available to pre-order now in the United Kingdom and some other locations in Europe and North America. Availability varies by location but over time the company plans a worldwide rollout of the service. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

Two Microsoft products – a controller and a keyboard

By sheer coincidence, I replaced both my keyboard and game controller in August. And also by coincidence – or at least, not by design – both of my replacements came from the Microsoft Store. This short review will detail my experience with each of them; rolling two items into one article is certainly a rare example of efficiency from me!

First let’s look at the controller. This one, I have to admit, is a bit of a luxury. I’d had an Xbox One controller for years – it may have been the one I got when I bought an Xbox One at launch in 2013. Now that was a bad decision if there ever was one – the console may have improved somewhat in the years since, but at launch it was bad value and offered precious little to play! But we’re off-topic already; you can read more about my Xbox One experience by clicking or tapping here. I decided that it was time to replace the controller – one of the thumbsticks was loose, making it harder to make very precise movements in some games, and in addition its vibration/rumble function didn’t seem to be working right. It still vibrates, but it does so in a much more clunky way than it used to.

I considered a few different controller options, including the Hyperkin Duke, which is a reimagining of the classic Xbox controller from 2001. That controller was one I greatly enjoyed using during the original Xbox era, but unfortunately the new version is difficult to get hold of here in the UK. I found one on Amazon, but at quite a mark-up. So I decided to check out the Xbox Design Lab website, where Microsoft sells customised controllers. I went with an all-blue design, with a metallic D-pad, black Start and Select buttons, and black A, B, X, and Y buttons with the proper colours for the letters. I’m sure some people feel that removing the coloured letters and replacing them with a grey or black design looks more sleek, but the colours can be a great visual reference when it comes to things like quick-time events or any other occasion where split-second button presses are required.

My new controller.

Microsoft stated when I bought the controller that it could take up to a month to arrive, so I wasn’t expecting it much before the beginning of September. To my pleasant surprise, though, it arrived much sooner – on the same day as my keyboard, no less! The design was just what I’d chosen – which it should have been, of course – and so far I’m satisfied with it. Was it worth the extra money to get a different colour compared to buying a standard controller? I don’t play that many games any more, so I guess you could argue that it wasn’t. The control pad is fundamentally no different from a standard Xbox One controller; unlike the Xbox One Elite controller it doesn’t have swappable parts or extra buttons, and its construction is wholly plastic instead of the “rubberised” feel of the Elite. But the Elite controllers are twice the price!

For around £20 more than a standard controller, Xbox Design Lab offers a huge range of colours, and different areas of the controller can be different colours. They brag about millions of colour combinations – most of which you’d never want, of course – but all of the main colours are there, and they have a couple of “fades” and “camo” options too.

Controllers can also be engraved – though to be honest, that’s a pretty impressive-sounding term for what seems to be just laser printing. But for someone who wants their gamertag on their controller – or to make a fun gag gift, perhaps – it’s nice that the option exists.

The Xbox One controller was very similar to the Xbox 360 controller, which was itself not massively different from the second iteration of the original Xbox controller. So I’m not really reviewing the controller from that perspective – I already know I like it, as I’ve been using something similar for years! The Design Lab experience was solid. There were plenty of options, the website worked smoothly and was well laid-out, and the colours on screen matched the product I received perfectly. Add to that the quicker-than-expected delivery and it’s hard to find fault.

The Xbox S controller from 2002 or 2003. The “S” may have originally meant “small”.

As someone who has never really been a “PlayStation guy”, I think I’ll always prefer Xbox’s controllers to those made for the rival console. They feel chunkier and more substantial in my (admittedly oversized) hands, but at the end of the day, once you get used to a particular design you want to stick with it. That’s presumably why the Xbox Series X’s controller won’t be a significant departure from the current design.

Up next we have the keyboard.

I write almost every day, not just for this website but for other projects that I have on the go, as well as typing messages to friends and the like. For the last three years or so, I’ve been using a Corsair Strafe mechanical keyboard. The variant I have has a red backlight and Cherry MX blue mechanical switches – the “clicky” kind. I bought this keyboard on the recommendation of several tech reviewers who said that the blue switches were great for typing.

The Corsair Strafe.

This keyboard has been fine. It was interesting at first to go back to a keyboard that, for all its modern aesthetic, had a very retro feel and sound. It reminded me of the kind of keyboards I first learned to type on when I was very young. I actually remember the first time I ever used a computer, being concerned that the keyboard only had capital letters when I wanted to type something in lowercase! That was when I was at school, and the “computer” in those days was little more than a word processor. And of course there was no internet. How things have changed, eh?

Although the typing experience has been good overall with the Corsair, after very long typing sessions it can get a little tiring on my old fingertips. The spacebar in particular has a strange, almost rough texture to it, and after longer typing sessions my thumb – which I use to hit the spacebar almost all of the time – doesn’t exactly sting, but starts to notice that texture in an unpleasant way. The mechanical keyboard has also proved a nightmare to keep clean, with deep chasms in between the keys that seem to attract dust and cat hair like magnets! Finally, several of the keys have started to wear down, and the backlight now shows through on the edges of several of them. Perhaps that’s simply the result of heavy use, but for something I haven’t owned that long it seems like it shouldn’t have happened so quickly. Regardless, the keyboard doesn’t look as nice as it once did, and while it does still work, I thought I’d try out a replacement.

I don’t need backlighting on a keyboard, as I can type from muscle memory – something that will happen as you spend more time hunched over your computer! And my computer setup is in a well-lit room, so on the occasions where I need to look down to see what I’m doing I don’t need the keyboard to be its own light source. The keyboard I ultimately bought as a replacement is not backlit, and I don’t consider that to be a problem at all.

After looking at several options, both mechanical and non-mechanical, I opted for the Microsoft Surface Bluetooth keyboard. I’ve used a Microsoft mouse in the past (though my current daily driver is a white Logitech G305 wireless mouse) and I’ve always considered Microsoft’s hardware products to be solid and of decent quality, so after ruling out a few other options for a variety of reasons, the Surface keyboard was the one I settled on.

The Microsoft Surface Bluetooth keyboard.

Initial impressions were good. The packaging was premium – as the Xbox controller’s had been too – and I was very impressed with the look and feel of the keyboard. It has almost no give to it when pressure is applied; it’s very solid. The keys, despite being low profile, have a satisfying press, and unlike the loud “click” of the Corsair, are relatively quiet.

The keyboard also has a full number pad, which is important to me as I often use the right Enter key when writing. It takes AAA batteries instead of being rechargeable via USB, which for some people may be off-putting, but it’s a feature I really wanted. AA or AAA batteries last ages in devices like mice and keyboards. I used to use a Logitech MX Master mouse, and that thing needed to be charged every few days, which was incredibly annoying. In comparison, a mouse I have in my bedroom which takes AAs has been using the same pair of batteries for at least a year – probably longer. And since I replaced the MX Master with the G305 I’ve gone through precisely one battery. Why anyone would favour rechargeable devices with such short battery life over devices that take AA or AAA batteries lasting months or years is beyond me. But we’re off-topic again!

The battery cover is magnetic, which is a very neat feature. The magnet seems strong enough to keep the battery compartment closed, which is important for obvious reasons, and I like the modern touch it offers over an older-style plastic latch.

I did have an issue with the keyboard – but it’s one that seems almost unique to me, and one that anyone with a modern setup should be able to avoid. The keyboard connects via Bluetooth. Duh, right? It’s in the name. But my PC doesn’t have Bluetooth connectivity built in, as several years ago I didn’t see any need to spend extra money on that additional feature. Most wireless keyboards come with a dongle so you can plug them into your PC, but presumably Microsoft’s expectation is that the Surface keyboard will be paired with a Surface PC – which must all come with Bluetooth as standard. Like I said, this is a minor gripe that probably won’t affect anyone else who buys this product, but if your PC lacks Bluetooth connectivity like mine, you’ll need to buy a separate dongle to be able to use the keyboard.

As with many things I’ve accumulated over the years, I could have sworn I owned a USB Bluetooth dongle – but I haven’t the faintest idea where it is. I had to get a replacement on Amazon – not a big deal as they aren’t expensive, but it meant waiting an extra couple of days after the keyboard arrived before I could use it! It reminded me of the Christmas when I got a Nintendo 64 – I was all set to play with my new console when there was a power cut! The N64 sat in its box for what seemed like an eternity, unable to be played because the electric was out. Decades later and I’m back in that position. Life is funny like that sometimes.

Ah, memories.

When the dongle finally arrived, pairing the keyboard was easy. From the settings menu in Windows 10 – for which the keyboard has a designated button – it’s possible to see the device’s battery status. The keyboard also uses the standard UK layout – meaning a few symbols are in different places than on a US keyboard – which is obviously important to me as that’s how I’m used to typing. I’m on Windows 10, but the keyboard should be compatible with Windows 8.1 – or indeed any device capable of using Bluetooth.

The typing experience is pleasant. As mentioned, the keys have a satisfying press, and they also have a slightly soft feel that’s definitely nicer than the hard plastic keycaps of the Corsair I’d been using. It feels closer to typing on a laptop – a premium, high-end laptop – than any desktop keyboard I’ve ever used. Microsoft promises a whopping five million presses per key over the lifespan of the keyboard – so let’s put that to the test over the next few months and years!

Unlike in the picture above, the Return/Enter key is full-size, which is something else I greatly appreciate. A single press of the Function button switches between the F-keys (F1 for help, F5 for refreshing web pages, etc.) and a variety of other functions. The aforementioned settings button is one, and there are also volume controls, media keys (play, pause, and so on), and even screen brightness controls. I don’t use such keys that often, but the additional functionality is nice, and not having to hold down a second key to use them is also a neat feature.
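
Out of curiosity, here’s a rough sense of what that five-million-press rating could mean in practice. The typing volumes below are my own assumptions for illustration, not anything Microsoft publishes, and I’m treating the spacebar as the busiest key since it gets hit roughly once per word:

    # A back-of-the-envelope estimate of how long a 5,000,000-press-per-key rating lasts.
    # All the usage figures here are assumptions for illustration only.

    RATED_PRESSES = 5_000_000   # Microsoft's quoted per-key rating
    WORDS_PER_DAY = 2_000       # assumed daily writing volume
    PRESSES_PER_WORD = 1        # the spacebar gets hit roughly once per word

    daily_presses = WORDS_PER_DAY * PRESSES_PER_WORD
    years = RATED_PRESSES / daily_presses / 365
    print(f"At {WORDS_PER_DAY} words a day, the spacebar reaches its rating after ~{years:.0f} years")

Even at a couple of thousand words a day, in other words, the rating works out to something like seven years on the hardest-worked key – assuming, of course, that it holds up in practice.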

Of all the “premium” keyboards I looked at, the Microsoft Surface seemed like the best option for me at this point. I was ready for a change from the clicky mechanical switches I’d been using for the past few years, and as someone who does a lot of typing I wanted something I’d be comfortable with. So far, the Surface has accomplished that and I’m happy with my purchase.

It’s hard to make product recommendations, because I don’t know your circumstances. If you have a spare £20 burning a hole in your pocket and you like customised things, get the Design Lab controller and show off your unique style. But if you’re on a budget, skip that and just get a standard controller. Or better yet, find a pre-owned one or a 360 controller and save even more money.

Likewise for the keyboard. If you write as much as I do on a daily basis and want something solid and premium, the Surface could be a good option if you don’t want a mechanical keyboard. But it’s impossible to deny that you can get a perfectly functional keyboard with a number pad – wired or wireless – for a fraction of the price. I just looked on Amazon, and one of the top results was a Microsoft wired keyboard for £10 – a full £80 less than I paid for the Surface. So the question is – what do you want from a keyboard? If you don’t type a lot – or even if you do but are on a tight budget – save your money. Nothing the Surface does is essential and you could get identical functionality far cheaper.

Speaking for myself, though, I’m happy with what I got. Sometimes it’s worth spending the extra money on a higher-end product, and sometimes it’s worth splurging a little on a cool-looking or custom product just for the fun of it. At the end of the day, it’s up to you to decide what best suits your setup and where you want to invest your money.

The Xbox and Surface brands are the copyright of Microsoft. No sponsorship was involved; these are products I purchased for myself with my own money and the article comprises my genuine impressions regarding them. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

An amazing tech demo

I’m an avid collector of free games on PC. I browse the listings on various digital storefronts and almost any time I see a title being offered for free I snap it up. Many of them have been crap – or at least not my idea of fun – but in amongst the advertisements and first attempts and games of yesteryear there are some real gems. One I picked up recently on Steam falls into that category.

Mýrdalssandur, Iceland is the name of the title, and it really isn’t a “game” as much as it is a tech demo, showing off what Unreal Engine 4 is capable of. Even on my ageing PC this interactive walking simulator looks absolutely incredible, and some of the screenshots and footage I’ve seen captured on far better rigs than mine are unbelievable.

Screenshots honestly don’t do Mýrdalssandur, Iceland justice.

The term “photorealistic” comes to mind, and while it is still possible to tell you’re playing a game, the level of detail, even when zooming in to look at small objects, is phenomenal. One thing that has surprised me over the last decade or so – really emphasised by the success of titles like Minecraft – is the movement among a significant portion of game developers and publishers away from ever-better, more realistic graphics. The jump in quality from titles of the 16-bit era, which is when I first got into gaming in my youth, to the 3D worlds that came later was massive, and graphics continued to improve over the years, pushing ever closer to photorealism.

Shenmue was the first game I can remember playing that genuinely felt cinematic, and small details like individual fingers on the hands of main character Ryo that moved independently were a huge part of how that game felt. Returning to Shenmue today shows it has actually aged rather poorly, even compared to some other titles of its era, but to me at the time it represented a huge leap forward in what games were capable of. Minecraft, as I mentioned above, took me by surprise in how popular it became, partly because I felt that its blocky, pixelated style was simply out of date and would be off-putting to a gaming audience that had, until that point, generally favoured the march toward photorealism.

At least partly inspired by Minecraft’s success, there have been hundreds of titles released over the last decade or so that emulate the graphical style of older eras. Partly this is because such games are cheaper and easier to make – there are inexpensive tools on Steam, for example, that let budding developers make their own titles in that style. A single person in 2020 can make, in a weekend, a 2D pixel-graphics platformer that would’ve taken an entire team of developers months in the 1980s or 1990s. The entire “indie” genre – or a large part of it, anyway – is made up of titles like this, inspired by the likes of Terraria, The Binding of Isaac, and Stardew Valley. The graphical style is from another era, but people do still love those games and there’s a huge market for them.

Promo image for Minecraft, a game which doesn’t care at all about graphics!
Picture Credit: IGDB

In some respects, the growing market for titles that don’t try to do anything graphically new has probably slowed down the advance toward photorealism exemplified by Mýrdalssandur, Iceland. But generally, games in that style are their own genre, doing their own thing off to one side. Some modern games, especially the titles which make the most money for their companies, do try to look as realistic as possible, though. Franchises like FIFA and Madden in the sports genre, and big-budget releases like Call of Duty, use the best graphics engines available to their development teams to try to look better with each iteration, even if they don’t really push the boat out. And almost the entire racing genre – especially those titles that feature real-world cars and are closer to simulators than arcade-style racers – always manages to look great.

Consoles, and the fact that there have been such long console generations in recent years, are definitely a contributing factor to the slower pace of graphical improvement. The Xbox One and PlayStation 4, to use the current lineup, were both released in 2013 – using components available at that time. Every aspect of their hardware is based on technology that is now seven years old, and even at the time of release they were still outmatched by high-end PCs using more expensive components. Every major title released this console generation has been constrained by that technological ceiling: games have to be able to function properly on an Xbox One or PlayStation 4 from 2013, despite the fact that technology has moved on since then. Were it not for that requirement, more games could push graphical boundaries and look even better. I know that’s straying into “PC Master Race” territory, but it’s not untrue to say it.

Seriously… try the experience for yourself to get the full effect.

To get back on topic, Mýrdalssandur, Iceland looks stunning. A casual glance at the screen and you’d think you were looking at a photo or video. The imagery would fit right in with CGI created for the big screen – and looks a heck of a lot better than many of the CGI environments present in films from just a few years ago. My PC has certainly never run a title that looks this good, and I’m amazed to see what my graphics card and older processor can still manage.

If the title is still free on Steam when you’re reading this, I highly recommend checking it out. You won’t want to spend hours playing in this empty world – there isn’t anything to actually do, after all – but as an example of what graphics can be I think it’s well worth a look.

Mýrdalssandur, Iceland is available on Steam, and was free to download and keep at the time this article was written. Mýrdalssandur, Iceland is the copyright of Caves RD, and Unreal Engine 4 is the copyright of Epic Games. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

My thoughts on the “console war”

Barring a major shift in circumstances, which we may yet see if the coronavirus pandemic isn’t sorted out in the next few months, Xbox and PlayStation plan to launch new consoles before Christmas. They will replace this generation’s Xbox One and PlayStation 4, which were released in 2013, and will join the Nintendo Switch to form the “big three” gaming platforms heading into 2021 and beyond.

When I’m in a gaming mood I’m primarily a PC player. I find PC to be a more versatile platform, and the abundance of digital shops on PC means that sales and discounts are plentiful, which I absolutely feel makes PC an appealing choice even if the up-front costs can be higher than a console’s. But that’s a whole different article!

When Google Stadia launched towards the end of last year, I felt it had the potential to be disruptive to the gaming market in all kinds of good ways. To understand why, we need to step back in time.

The Xbox Series X was unveiled late last year.

For a brief moment just after the millennium, there were four companies in the home console market, and they were, broadly speaking, all trying to appeal to the same core audience of gamers: Sega with the Dreamcast, Sony with its PlayStation brand, Nintendo with the GameCube, and Microsoft, which launched its first Xbox console in 2001. This moment wasn’t to last, of course, as the Dreamcast would prove a failure, forcing Sega out of the market altogether. Nintendo’s GameCube was also not a resounding success, and the lessons the company learned led to the creation of the Wii in 2006; from that point on, Nintendo has been fishing in a different pond to the other two console brands.

So since the mid-2000s, when Nintendo decided to go in a completely different direction with the Wii, Xbox and PlayStation have been the two main brands in direct competition. Nintendo’s current offering, the Switch, is a very different platform from anything Microsoft and Sony have, being half-handheld and half-console with a very different hardware setup. As a result, many gamers (myself included) have a primary platform for playing most games and a Nintendo console on the side for its exclusives. I’m currently in the early stages of building my island in Animal Crossing: New Horizons, so stay tuned for my thoughts on that at some point!

The two main competitors, PlayStation and Xbox, have taken very different routes since 2013, and the console market is in danger, I feel, of becoming a monopoly. It needs something major to shake things up – hence my excitement at Stadia potentially doing so.

Microsoft has focused on making Xbox a “multimedia” platform rather than a purely gaming one, and its output reflects that. Microsoft has also seen steady growth in the PC gaming market and has chosen to release some previously exclusive titles on PC as well – the most significant being Halo: The Master Chief Collection, Halo being as close as Xbox has to a signature franchise. Only Halo 5 remains a console exclusive right now, and I have to say it feels like only a matter of time before that, too, is ported to PC. Microsoft has been working hard to turn the Xbox One into a multimedia centre – something people could have in their living rooms to watch television, use streaming services, and even do things like make video calls.

As a result of Xbox’s foray into the PC space and its use of the platform to promote things like video streaming as much as gaming, PlayStation has been the dominant force in this console generation. Sony has offered many more exclusive titles, and the PlayStation 4 has outsold the Xbox One by at least two-to-one, perhaps even more. While Xbox as a brand is still healthy and commercially viable, this lopsidedness doesn’t leave the overall state of the market feeling especially great, as competition between the two companies is necessary to keep quality high and to keep developers pushing the boundaries.

The DualSense controller is all we’ve seen of the PlayStation 5 so far.

Google Stadia is clearly not going to be the disruptive force I hoped for, at least not any time soon. Its minuscule userbase and tiny library of games have seen to that, though I hope Google will continue development, as the core technology is interesting at the very least. And as far as I know, no one else is planning to get in on the home console market right now. There have been past attempts, like the Ouya and other Android-based consoles, but none have been particularly successful. It took a company with the clout and financial resources of Microsoft to break into the market as a newcomer in 2001, and if Google is unable to successfully enter the gaming space, I can see that failure being off-putting for anyone considering investing significant money into a new home console.

So we’re left with a two-plus-one situation in the home console space. PlayStation versus Xbox, with Nintendo off to one side largely doing its own thing. Both the Xbox Series X and PlayStation 5 will be comparable in terms of their internal hardware, especially as both seem to be using AMD’s Zen chips and incorporating ray-tracing graphics, so the choice between systems will be more about marketing than technology. Xbox has already signalled that their multimedia and PC plans will continue into the new generation, and it was even suggested at one stage late last year that every Xbox Series X game will also be available on Xbox One for the first year or two of the new console’s life. This combination will, I feel, give the PlayStation 5 a distinct advantage.

So where do I stand? I’ll be honest, I don’t really have a dog in this fight any more. As someone who plays primarily on PC it’s less important to me. Later in the generation, when prices start to come down, I can perhaps see myself picking up a console, but it would only be if there was some must-play exclusive that didn’t make it to PC. And of the two, PlayStation seems most likely to offer something along those lines so it’s not impossible I’d pick up a PlayStation 5 in the next few years. It certainly won’t be at or near launch, though.

However, I’ve never really been a big PlayStation gamer. In the generations after the first PlayStation launched I owned a Nintendo 64, a Dreamcast, an Xbox, and then an Xbox 360. It wasn’t until much later that I picked up a second-hand PlayStation 3. By then I was less into gaming, and I’ve only played a handful of PlayStation 3 and 4 titles over the last few years. This is purely subjective, but as someone who likes to play some games with a controller instead of keyboard and mouse, I find Xbox controllers more comfortable to use. The original Xbox controller from 2001 – known as the “Duke” – is actually one of my favourites, despite the justifiable criticism it received at the time for its large size!

The Xbox Series X and PlayStation 5 will join the Nintendo Switch in the home console market.

Looking in from the outside as someone who has no plans to purchase either of the new consoles imminently, what I hope is that both are successful for their parent companies and that both turn out to be great platforms for gaming. I’d like to see a bigger stride this console generation than the last, particularly where graphics are concerned, but it seems unlikely. Many PlayStation 4 and Xbox One titles don’t look much different from games released in the latter part of the previous generation, and gameplay and graphics in general have not advanced nearly as far over the last few years as they did in previous generations.

Earlier console generations brought huge advancements over their predecessors. The Nintendo 64, for example, was an incredibly powerful machine compared to the Super Nintendo, which was itself streets ahead of the earlier NES. I remember in the late 1990s and early 2000s when there was talk of genuine photorealism by 2010, 2015, or 2020. While some projects can come close to that, we aren’t there in a general sense. And to make a long story short, the fact that the next generation of consoles will be an iteration on what is already available in terms of graphics and gameplay makes them less exciting to me personally.

What we will see are smaller quality-of-life improvements. Things like longer battery life in wireless peripherals such as controllers, as well as a move from hard drives to solid-state drives, will give console gamers something to appreciate. There might also be faster download speeds, quicker installation from optical discs – which are still going to be present – and support for 4K resolution and video playback. With most new televisions being 4K, that makes a lot of sense.

Overall, the biggest issue currently facing Xbox and PlayStation is the pandemic. Between the disruption to manufacturing and logistics and the wider economic impact on consumer spending, the launches scheduled for later this year may yet be delayed – and even if they aren’t, sales may not initially be as strong as they were in 2013 or 2005/06. The consoles themselves will be of some interest, but what I’m most keen to see is how new games plan to take advantage of the new hardware capabilities. Pushing the boundaries and creating games that are bigger, better, and more visually impressive than ever is something I’ll always be interested to see, even though I don’t really mind which brand or company “wins”.

All brands mentioned above are the copyright of their respective parent companies. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

My new audio setup

For a number of years headphones were my only option for watching a film or listening to music, largely because I was living with other people. I was okay with that, despite the discomfort that using headphones for prolonged periods can bring. But a few years ago I got a mid-range set of PC speakers – USB-powered, from Logitech, which is a good brand in my experience – and they’ve served me well for several years.

When I got a new television a couple of years ago, I hooked up my PC to it for a more enjoyable film-watching experience. Having a bigger screen instead of a small PC monitor has significantly improved my entertainment in that time. But I kept the same set of speakers – the TV’s own speakers are pretty poor in comparison, as the speakers in a lot of modern slim televisions are.

Recently, though, I started looking into audio equipment again. What I really want – one day, when budget allows – is a 5.1 or 7.1 surround sound system. That’s the “gold standard” of home cinema setups, but of course a decent one is prohibitively expensive. Or at least it is on my budget!

So I started to look at headphones again.

Initially I thought maybe I’d get some kind of wireless or Bluetooth setup, but after reading a lot of reviews and trying some out for myself, I came to the conclusion that even the more expensive ones – £200 or more – weren’t all that great. They might be okay for some uses, gaming for example, but for a good all-round entertainment experience – listening to music, watching television and film, and perhaps gaming too – the wireless devices I could find all felt inadequate. In addition, one of the biggest features on modern wireless and Bluetooth headphones is “noise cancellation”, where the headphones use a microphone to measure noise in the environment and then generate an inverted “anti-noise” signal to cancel out external sound. But I have no need for that, so it would be an additional expense for no reason.
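For anyone curious about what’s actually going on inside those headphones, here’s a minimal, purely illustrative sketch of the anti-noise idea in Python. It uses made-up signals and an assumed sample rate, and it is not any manufacturer’s implementation – just the general principle.

```python
# A toy illustration of the "anti-noise" idea behind active noise cancellation,
# assuming idealised, perfectly measured noise. Not any real product's code.
import numpy as np

sample_rate = 48_000                        # samples per second (assumed)
t = np.arange(sample_rate) / sample_rate    # one second of time stamps

music = 0.5 * np.sin(2 * np.pi * 440 * t)           # the sound you want to hear
ambient_noise = 0.2 * np.sin(2 * np.pi * 120 * t)   # a low hum picked up by the mic

anti_noise = -ambient_noise                 # invert the measured noise
at_the_ear = music + ambient_noise + anti_noise     # noise and anti-noise cancel out

print(f"Residual noise: {np.max(np.abs(at_the_ear - music)):.6f}")   # ~0 in this idealised case
```

In the real world the measurement is never perfect, which is part of why noise-cancelling headphones can introduce the faint hiss I mention below.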

I’m not an audiophile by any means, and my ears aren’t as finely-tuned as many people’s seem to be, but I found myself drawn more and more toward headphones marketed to audiophile consumers. What I really wanted, at the end of the day, was a listening experience that would be as true-to-life as possible, without distortion, interference, or anything of that nature. The noise-cancelling technology I mentioned above has a noticeable effect on audio quality, as a background of white noise or hissing is often present as part of the way it works. I didn’t want any of that; I wanted a sound that was as neutral as possible.

As I looked at various headphones, one recommendation that came up again and again was to get an external DAC – a digital-to-analogue converter. PC motherboards have one built in, to translate a digital audio file into the analogue signal that speakers or headphones can actually use. But the DACs inside computers are often of lower quality, and sitting so close to other electronic components can introduce noise and interference.
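To put that in concrete terms, here’s a heavily simplified, illustrative Python sketch of the basic job a DAC does – mapping digital sample values to an output voltage. The bit depth and voltage figures are assumptions for the example, not the spec of any real chip, and real DACs also smooth and filter the result.

```python
# A heavily simplified sketch of what a DAC does: turn digital sample values
# into an output voltage. The full-scale voltage below is an assumption.

BIT_DEPTH = 16               # bits per sample, as on a CD
FULL_SCALE_VOLTS = 2.0       # assumed peak-to-peak output swing

def sample_to_voltage(sample: int) -> float:
    """Map one signed 16-bit PCM sample to an output voltage."""
    max_value = 2 ** (BIT_DEPTH - 1)                 # 32,768 for 16-bit audio
    return (sample / max_value) * (FULL_SCALE_VOLTS / 2)

# A handful of PCM samples from a digital file become a varying voltage
# that an amplifier, and then speakers or headphones, can turn into sound.
for s in (0, 16_384, 32_767, -32_768):
    print(f"{s:>7} -> {sample_to_voltage(s):+.3f} V")
```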

After looking at a number of options, I chose a DAC by a company called iFi. The device, called the iFi Zen, is a combined DAC and amplifier – the amplifier being important because a lot of headphones need more power than a standard PC output can provide to reach a decent volume.

The iFi Zen DAC-amplifier.

The DAC on its own doesn’t do much, obviously, so pairing it with a nice set of headphones was important. After much research into different types of headphones, different technologies, and of course different brands, which took me a solid couple of months, I nailed down my choice – the Sennheiser HD 600. These headphones were originally released in 1997, and to my mind they’re very fancy – even if some audiophile gear sells for ten times the price!

What attracted me to this model was its reputation for being a truly neutral headphone – no artificial attempts are made to boost the bass or otherwise colour the sound it produces. In fact, Sennheiser HD 600s are often used by sound engineers and editors for that very reason. That doesn’t mean listening to a recording of someone speaking is the same as hearing them stand next to you; rather, the reproduction is as close as possible to the way the sound was recorded. And I have to say, I did notice a difference.

It took a little while to get used to, partly due to the directional nature of the sound. My two speakers sit either side of the screen and face out towards the room. Because the headphones sit on either side of my head, and because there’s less distracting noise in the room, sound comes from the left, the right, or the middle, instead of straight at me from no particular direction. That was genuinely a new experience for me – something I hadn’t noticed with previous headphones and headsets I’ve owned. It’s hard to explain in words, but the headphones do a great job of positioning the sound: if two characters on screen are speaking, for example, they sound different depending on where they’re standing. You can even tell when a person is closer or further away, as well as how far to the left or right they are.

My Sennheiser HD 600 box.

When listening to music this isn’t so much of a factor, but watching television is definitely a different – and I think much more enjoyable – experience. I trialled my new setup on last week’s episode of Star Trek: Picard and I was really impressed. Since then I’ve experimented much more, playing many different kinds of music, watching a few different series and films, and of course playing Age of Empires II. I wrote a post the other day about how much fun it has been rediscovering that game – you can find it by clicking or tapping here.

I’d also been used to a more bass-heavy sound, simply because of the way my speakers work, and the first few things I listened to on the headphones sounded flat in comparison. But it was only after using them for a while and then going back to my speakers that I realised the speakers had been distorting and over-emphasising the bass all along. The headphones, as I had hoped, were much closer to a neutral sound.

It’s worth noting that digital audio files – MP3s, and the audio tracks on many digital and streaming videos – are compressed, and lossy compression discards some of the original audio information, reducing the overall quality. I would love to be in a position where I could afford the highest quality of everything: 4K Blu-rays and a Tidal subscription for so-called “master quality” audio. But as I said, I’m not sure my ears are sufficiently finely tuned to notice a difference!
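As a rough, back-of-the-envelope illustration of how much data lossy compression throws away, here’s the arithmetic in Python. These are standard CD and MP3 figures rather than measurements of any particular file or streaming service.

```python
# Rough arithmetic: how much data a lossy MP3 carries compared with
# uncompressed CD-quality audio. Standard textbook figures only.

sample_rate = 44_100    # samples per second (CD standard)
bit_depth = 16          # bits per sample
channels = 2            # stereo

uncompressed_kbps = sample_rate * bit_depth * channels / 1000   # ~1,411 kbps
mp3_kbps = 320          # a common "high quality" MP3 bitrate

print(f"Uncompressed CD audio: {uncompressed_kbps:.0f} kbps")
print(f"A 320 kbps MP3 keeps roughly {mp3_kbps / uncompressed_kbps:.0%} of that data rate")
```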

For my purposes, though, my new setup – modest though it may be – is a great improvement over the speakers, and while I won’t be using it all the time or even every day, when I’m sitting down with a film or television show, or wanting to rock out to some bangin’ tunes, it’ll be my go-to.

This is not a sponsored post or ad. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

The £48,000 computer

Actually it’s £48,087.98. Can’t forget the 98p, am I right? If you’re in America, that’s $64,000 – though from what I’ve seen online it may be closer to $50,000 over there for you guys. Obviously we’re talking about the Mac Pro, the latest computer from Apple.

The Mac Pro starts at £5,499 for the “basic” edition, and even that is absolutely obscene by most people’s standards. Where I live, £48,000 would get you literally acres of arable farmland, or a sizeable deposit on a house, or a very nice car. The list of things you could do with 48 grand could go on and on. But the key phrase in that first sentence is “by most people’s standards” – this isn’t a machine for the likes of you and me.

Some time ago, I found myself working in government. Nothing fancy – I wasn’t an MP or anything – but I worked an office job in a small local government office. The work we did required a spreadsheet program, something like Microsoft Excel, which is part of the Office suite. Office costs around £120, maybe closer to £200 for a business edition, and of course any big organisation gets a sizeable discount that scales with how many copies it needs. For a big organisation like the one I was working for, Office was an essential tool, and all the computers in our office had it installed. Yet despite Excel being perfectly serviceable for what we needed, the department was paying several thousand pounds a year (in the mid-2000s; inflation will have raised the price since then) for the licence to a different piece of spreadsheet software – one marketed as having been designed especially for what we needed! It was, of course, essentially the same product as Excel, just with less compatibility: it wouldn’t open Excel files, nor allow Excel to open its own files, and because other departments in the same building – as well as departments doing the exact same job as us in other locations – didn’t use it, anything they sent to us or we sent to them had to be manually converted. What a colossal waste of everyone’s time – and money. Taxpayers’ money, in this case.

The point of this long story is that there is a whole world out there which most of us don’t ever see, or if we do it’s only a brief glimpse: the world of big business, and of powerful organisations for whom money is – for all practical purposes at least – no object.

These are the kind of people Apple are targeting with the new Mac Pro. Apple knows these customers want to stay on the cutting edge and upgrade their machines as often as possible, and when they go to their boss and ask for the upgrade, it will be granted – even at £48,000 – because the organisation they work for barely notices that amount of money. If you’re working in, say, a film studio, where budgets regularly balloon past the $150 million mark, what’s 50 grand compared to that? And when you consider that someone in an industry like that would get a decent amount of work out of the machine over two or three years, across multiple titles, why not spend the money?

I’m a paid-up member of the Apple “ecosystem” – by which I mean I’m an iPhone guy. And even though I have to concede that iPhones are generally more expensive, I’ve found them to be much better than Android phones, especially at the lower end of the market or on models a year or two old. So there is something to be said for choosing quality over price. But is that a fair argument when looking at a computer – one single computer – that’s the price of a brand new flash car?

I mean, for £48,000 you could buy close to 100 decent computers and kit out a medium-sized business. Sorry – no more “you could buy X for that money”. But you could. Just saying.

For someone who’s already an Apple user, who uses professional software like Final Cut Pro or Logic Pro, and who needs a really powerful machine for work, if they’re bringing in the big bucks and this will help them do their work faster and more efficiently, there’s an argument to be made in its favour.

But despite all that – despite the fact that I can see a potential user base, albeit a small one – the cost of some of the components really does seem inflated.

Adding wheels to the case costs £360 – that’s £90 per wheel. They might be good quality – they’d better be at £90 each – but are they really so much better than anything else on the market? You can buy a wheeled computer stand for less than £10 on Amazon, and it would do basically the same thing.

It’s possible to get a 2TB NVMe solid-state drive for about £250; Apple wants £720 for an unspecified SSD. Is it NVMe (or an Apple equivalent)? There’s a big performance difference between SATA and NVMe, and even if it is NVMe, what makes it so much better than other options on the market that it costs nearly triple the price? On top of that, 2TB isn’t a lot of storage when you’re talking about video files, animation, rendering, audio work, and the like. That SSD will fill up pretty quickly, and the only other option, costing £1,260 more than the base model, is 4TB. I’ve seen folks with YouTube channels who have literally hundreds of terabytes of video – presumably anyone buying a Mac Pro still needs to pay extra for external storage.

I know there’s a market for a machine like this, and I know Apple will have taken the time to make sure every component is high quality and will work well with their software, and that has to be factored into the price. But it still seems excessive, and some of the choices – like £360 for a set of wheels for the case – are just obscene.

But hey, if anyone wants to get me one for Christmas I won’t say no. Not that I really need all that hardware for blogging.

This article contains the thoughts and opinions of one person only and is not intended to cause any offence.