A.I. 2027: My Take

I recently read the very alarming A.I. 2027 report. I’ll leave a link to the full version at the end of this article, and I highly recommend checking it out if you’re interested in the future of A.I. But for now, I’ll briefly summarise the report. And yes, I know this is a little outside my wheelhouse here on the website, but I think this subject is worth discussing!

A.I. 2027’s core premise is not, as some alarmist headlines have claimed, that the end of the world is a mere two years away! Rather, the report’s authors argue that, if artificial intelligence development continues on its current trajectory, late 2027 could be humanity’s final opportunity to remain in control of any A.I. system that emerges – and our last chance to ensure that its goals and priorities align with our own. They present this as an existential risk: a sufficiently advanced A.I., sometime after 2027, could, if not properly restrained and aligned with human interests, literally exterminate the human race. That… that’s kind of disconcerting, right?

Still frame from Terminator 2 showing a person killed by nuclear weapons.
Is this our imminent fate?

I confess that I’m way behind the curve when it comes to A.I. Everyone talks about ChatGPT, A.I. art, and other such things… but my knowledge of the subject has been, until recently, surface-level at best. But given the potential A.I. has to be disruptive, perhaps on a scale we haven’t seen since the advent of the world wide web or even the Industrial Revolution… let’s just say I felt the need to catch up and get up to speed!

Artificial intelligence is already changing the way we live, work, and interact. What are “algorithms” if not a form of A.I.? If you’ve ever scrolled social media and felt that the website or app is almost too clever in recommending posts or adverts that appeal to you… you’ve already been caught in its web. And if you’ve noticed fewer job listings in fields like translation, copy-editing, or programming… you’ve seen what might just be the tip of the iceberg when it comes to A.I.-related job losses and workplace changes. I bring this up because I think something that gets lost in this conversation about the future of A.I. is that A.I. is already here, already changing things, and isn’t just some nebulous future idea about something we “might” invent one day. We’ve already invented it.

Still frame of the M-5 computer from Star Trek: The Original Series.
The M-5 Multitronic Unit from Star Trek.

Predicting the future is hard – nigh-on impossible, really. Look back at television programmes from the 1980s, 1990s, and even the 2000s that tried to imagine 2025, and see how wrong many of their predictions and ideas were. A famous example is flying cars – I still haven’t seen one of those, despite the promises made by sci-fi films in the ’80s! So it’s worth acknowledging that some predictions can be incredibly wide of the mark – especially those that promise either an imminent technological utopia or the end of the world. Both of those scenarios are present in A.I. 2027.

As a layman looking in from the outside, I’ve been dimly aware of talk around artificial intelligence, but it hasn’t been something I’ve felt any need to engage with. I don’t have a place in my life for an A.I. chat bot, I’ve never needed to generate A.I. art before (though I am using some A.I.-generated images in this piece), and I actually enjoy the process of writing… so I see no need to use something like ChatGPT to generate text, either. But there’s no denying that, as someone with a creative streak, I feel increasingly… obsolete. A.I. doesn’t just have the potential to revolutionise writing, web design, media criticism, and all the things I talk about here on the website. It’s literally already doing all of those things.

A.I.-generated image of a robot chatting to two humans on the computer.
It’s a chat bot.

I could fire up ChatGPT or a similar A.I. program and, within a matter of moments, it could scan everything I’ve ever published here on the website – and all of my unpublished drafts, too. I could then ask it to write an article or essay on any subject imaginable, using my tone of voice and my writing style. Could you tell the difference? From my perspective, as someone who runs a small website as a hobby and who enjoys writing, there’s no point in using A.I. for that. But if I was working in an office job where lengthy reports were the order of the day… I can see A.I. being a very tempting shortcut. Heck, I’d have killed for an automated essay-writing program when I was at school!

And that’s exactly what these systems are designed to do. They’re intended to cut costs for big businesses – and by far the largest cost that most companies and corporations have is the wage bill. Let’s rephrase that for the sake of clarity: the entire point of A.I. is to save corporations money by enabling them to lay off staff and automate more and more of their work.

Photo of an unemployment queue in the USA during the Great Depression.
Unemployed people queuing for financial relief during the Great Depression.

Unfortunately, there’s a bit of classism that seems to have crept into this conversation. I grew up in the north of England in the ’80s and ’90s, at a time when deindustrialisation had robbed the area of mines and factories. Automation played a big part in that – work that used to take a dozen people could be done with just one specialist, and then it became “too expensive” to continue to operate. There’s more to what happened in this part of the world, of course, but automation played a big part. Some of the people who used to tell factory workers and miners to “re-train” or “learn to code” are now themselves on the receiving end of automation-related job losses. And it’s pretty grating to see folks getting worked up about A.I. in the 2020s when they not only didn’t give a shit about the devastation automation brought to working class communities from the ’80s onwards, but actively supported it.

In that sense, I kind of view this consequence of A.I. development as a continuation of a process that’s been ongoing for decades, not something new. For decades, big companies have been looking for shortcuts; ways to cut jobs and pay fewer members of staff while achieving ever greater profit margins. A.I. is what they’re banking on in the 2020s in the same way as manufacturers invested in automated equipment in factories, or mining corporations exchanged pickaxes for machines. The difference? A.I. is coming for white collar, middle class jobs. Earlier automation mostly took jobs away from blue collar, working class folks.

Photo of an abandoned, derelict factory in the UK.
Sights like this were common when I was growing up.
Photo: David Wilkinson / Former Addspace furniture factory, Bolton upon Dearne

But that’s just one side to A.I. – the corporate, job-stealing side. The existential risk as posited by A.I. 2027 is much more complex… and I’d be lying if I said I wasn’t concerned. Unlike the 2012 phenomenon or the predictions of Nostradamus, the risk posed by artificial intelligence feels real. I’d say it feels somewhat comparable to the way the millennium bug felt in 1999: a technological danger that humanity had created for itself. In 1999, a lot of people were genuinely concerned that the millennium bug could cause all kinds of chaos – and thanks to a lot of hard work behind the scenes, none of those predictions came to pass. It went largely unnoticed at the time, but it wasn’t, as some have tried to claim, a completely overblown threat; it took real effort to ensure it didn’t have a major impact.

So what’s the lesson there? If there are risks to A.I. development… shouldn’t we do everything we can to mitigate them? That seems like the rational course of action, but as the authors of A.I. 2027 explain, current artificial intelligence systems don’t exist in a vacuum. There’s a technological “arms race” between different countries, and slowing development to implement safety measures could mean that the current leader – the United States – would surrender its position on the cutting edge. The result of all this? Rapid, chaotic, unchecked development of A.I. systems in order to remain ahead of the curve.

Screenshot of a summary graph from the A.I. 2027 site.
A graph summarising A.I. 2027’s findings.

There are military applications for A.I. – and if one side in a potential conflict has A.I.-controlled drone weaponry and the other doesn’t… it’d be like trying to fight a modern army with medieval weaponry and tactics. Keeping up with A.I. developments must feel, to military planners and political leaders, like even more of an existential risk, because the consequences of falling behind could be catastrophic. We talked above about the world wide web and the Industrial Revolution; in military terms, A.I. could be at least as impactful as the development of nuclear weapons.

So I think we’ve covered why governments, militaries, and corporations want an A.I.-infused future. But two questions still remain: will they remain in control of it? And what will it mean for the rest of us?

Still frame from Terminator 3 showing Skynet coming online.
Skynet in Terminator 3.

I’ll tell you a story from my own life that I think illustrates this next point. In the late 2000s, I was feeling pretty disillusioned with my job and career path. In those days, I was working in the video games industry, on the marketing side of things, but I went through a phase where I wasn’t satisfied, and I started contemplating other career options. For a time, I thought seriously about getting my HGV licence – that’s heavy goods vehicle, a.k.a. a truck or lorry. But by that point, it seemed, to a techie like me, as if self-driving vehicles were only a few years away… and I genuinely thought that it would be a waste of time to spend a lot of money taking tests and getting a qualification that could become obsolete before I could make the most of it. To use an analogy from history: it felt like jobs based on driving any kind of vehicle, from taxi cabs to trucks, were about to disappear in much the same way as horses and stables did in the first decades of the 20th century.

Why do I make this point? I was wrong, in the late 2000s, to be thinking like that. Self-driving technology may be “here” in a technical sense, but it isn’t road legal and it doesn’t seem to be making the kind of impact I would’ve predicted (or feared) in the late 2000s. The same is true of many other technologies that seemed to be “the wave of the future,” only to completely fizzle out. I owned a MiniDisc player in the early 2000s, expecting that technology would replace CDs! Google Glass, 3D televisions, the hovercraft, Concorde… the list of “revolutionary” technologies which never delivered their promised revolutions goes on.

A.I.-generated image of a yellow robot driving a lorry.
A self-driving lorry.

The point is this: there are many technologies that seemed, for a time, to be the “next big thing,” primed to revolutionise some aspect of our daily lives. And they didn’t accomplish that. When was the last time you even thought about MiniDiscs, hovercraft, or Google Glass? For a brief moment, all of those things seemed to be on the cutting edge of technological innovation, yet they fizzled out without having the impact some folks expected. Could A.I. be the same? And more importantly: are we perhaps reaching the limitations of A.I.’s current capabilities?

It seems to me that the more A.I.-generated content exists out in the wild, the harder it becomes to filter it out – including for A.I. programmers. We’ve all seen A.I.-generated images that aren’t quite right: hands with too many fingers, a cat with five legs, the New York skyline with buildings in the wrong place. As more of this content is artificially generated, at least some of it is going to be fed back into A.I. algorithms as they trawl the web, right? So… doesn’t that mean there’s going to be a point where A.I. either stagnates or even starts getting worse, rather than better?
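This worry has a name in the research literature – “model collapse” – and the basic mechanism can be shown with a toy statistical sketch of my own (an illustration, not anything from the A.I. 2027 report): fit a simple model to some data, generate synthetic data from the fitted model, train the next “generation” only on that synthetic output, and repeat. Small estimation errors compound, and the variety in the data steadily drains away:

```python
import random
import statistics

random.seed(42)  # fixed seed so this toy run is reproducible

def fit_and_resample(data, n):
    """'Train' a toy model – fit a normal distribution by estimating
    its mean and spread – then generate n synthetic samples from it."""
    mu = statistics.fmean(data)
    # The population std (dividing by n) is slightly biased low, and
    # that tiny bias compounds across hundreds of generations.
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: "real" data with a spread (standard deviation) of 1.0
data = [random.gauss(0.0, 1.0) for _ in range(50)]
start_spread = statistics.pstdev(data)

# Each new generation is trained only on the previous generation's output
for _ in range(500):
    data = fit_and_resample(data, 50)

end_spread = statistics.pstdev(data)
print(f"spread of real data:      {start_spread:.3f}")
print(f"spread after 500 rounds:  {end_spread:.3f}")  # far smaller
```

Real training pipelines are vastly more complicated than fitting a bell curve, of course, but the underlying statistical problem – each generation inheriting only an imperfect summary of the last – is the same one researchers have flagged when models are trained on their own output.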

Stock photo of Google Glass.
Remember when Google Glass was about to change everything?

Then there are jobs that A.I. would struggle to perform. I could absolutely envision a world where many office jobs are replaced by A.I. programs – similar to how factories that used to be full of workers were slowly taken over by robots and machinery. But could you have an A.I. plumber? I had my bathroom redone a couple of years ago to fit an adapted shower as I’m disabled… and it took several skilled workers more than a week to measure things, reroute an errant pipe, and install all of the fixtures. Earlier today I had to cut down a rogue tree branch that was in danger of damaging my roof. Could A.I. do that? Autonomous robots have come a long way in the past decade or so, but even with full A.I. integration, tasks like those still seem a long way off – not to mention unaffordable, even here in the UK.

One part of A.I. 2027 seemed to offer a “technological utopia”: the kind of post-scarcity society that we’ve seen depicted in Star Trek. And don’t get me wrong… I would love to live in that kind of world. But is it realistic? Are we really only a few years away from fleets of automatons doing everything from diagnosing diseases to plumbing a sink? The rate of technological progress is impressive, for sure, but I don’t believe we’re closing in on that kind of future.

Cropped poster/banner of I, Robot showing a legion of humanoid robots.
Legions of robots from the film I, Robot.

Then there are political considerations that, I would argue very strongly, A.I. 2027’s authors have got wrong. The idea of a “universal basic income” has been talked about before, and was even trialled on a small scale in Finland and a couple of other places. But the politics surrounding it are incredibly difficult, even toxic, and I struggle to envision a near-future scenario where universal basic income is actually politically feasible. The current political climate in the United States, as well as here in the UK, seems completely opposed to that kind of scheme. I mean, do we really think Donald Trump and the Republican Party would ever go for a policy of universal basic income?

None of this means that A.I. 2027 is wrong in its big-picture assessment of the short-to-medium-term future of artificial intelligence. But given the authors’ political miscalculations in particular, I think there’s enough of a red flag to at least consider the possibility that they might’ve got other things wrong. The report’s premise rests on competing A.I. models from different countries eventually agreeing to merge – but isn’t it just as likely that two independent A.I. systems would develop different, even opposing, goals? Human beings are tribal by nature, and it’s at least possible that any kind of sentient life we might create would retain that tribalism.

Photo of Mark Zuckerberg, Donald Trump, Melania Trump, and Bill Gates at the White House, 2025.
Are we suggesting that Donald Trump (pictured at a recent event with Mark Zuckerberg and Bill Gates) will seriously consider a policy of universal basic income?

I don’t mean to nitpick A.I. 2027 too much, because I think the report highlights some important issues and raises important considerations. But it’s worth acknowledging that it feels, in parts, like both a doomsday scenario and an unrealistic utopia. Both are extremes… and perhaps that’s why neither feels especially likely.

A.I. is coming for a lot of jobs, and some folks who would never have expected to lose their jobs to automation are going to be impacted. Artists, animators, writers, creative folks… A.I. is coming for us. Programmers, web designers, game developers… basically any office job you can think of, especially so-called “entry-level” positions. They’re all vulnerable to A.I. – and to big corporations seeking to remove employees in order to save money and make even greater profits. That side of A.I. feels real – because, as I noted earlier, it’s already happening. Layoffs in the video games industry are partially driven by A.I. replacing workers. Real-time translation apps mean there’s less need for translators. Data entry can be done far faster with an A.I. model than with an entire team of people. And A.I. doesn’t take breaks, need maternity leave, or even go home at the end of the workday. None of this is imaginary or a subject of debate. It isn’t “coming soon.” It’s already here and it’s real.

Screenshot of Google Translate showing real-time text translation.
Free translation apps (Google Translate pictured) are getting better all the time, and real-time in-ear translation for spoken languages is also possible.

But the existential risk? A.I. super-intelligence? A machine with unfathomable, god-like knowledge and powers? A technological utopia where A.I. does all the work and we humans can just kick back, claim our universal basic income, and chill out all day? I gotta admit that I’m sceptical.

I think back to my fears of self-driving vehicles poaching jobs from truck drivers. And I reflect on the technological promises (and concerns) surrounding things like supersonic jets and Google Glass. Within just the past few decades, technologies have emerged that seemed ready to reshape the world… but didn’t. A.I. can do certain things incredibly well, and it’s definitely impacting the world of white collar work. But the more fanciful claims in A.I. 2027 feel, to me, more like sci-fi than a credible threat.

Still frame from Terminator 2 showing a Terminator wielding weapons.
Both a post-labour utopia and killer robots going rogue still feel like sci-fi.

Maybe that’s my own bias showing through, though. I fully admit that, over the years, I’ve made plenty of mistakes when it comes to technology. I mean, I bought a Wii U, for goodness’ sake! And, in a broader sense, it can be difficult for any of us to imagine a world that’s radically different from the one we currently inhabit. My grandfather could vividly recall rushing outside to catch a rare glimpse of a motor car when he was a child, yet by the time he was in his fifties he’d seen the advent of atomic weapons and humans landing on the moon. The pace of technological change in the first half of the twentieth century was unprecedented, and people’s lives were completely transformed.

In my lifetime, too, I’ve seen the birth of the world wide web, the rise of portable electronic devices, and more. When I was a kid, my household had one television set, no video recorder, no computer, and a single landline telephone. By the time I moved out and got my first student flat, we had computers, games consoles, mobile phones, and dial-up internet. Then came broadband, MP3s, streaming, and smartphone apps. When I had my first weekend job I’d get paid in cash, and my first full-time job still paid me by cheque. I’d regularly have to go to the bank to pay those in – but I can’t remember the last time I set foot in a bank. I do all my banking these days on a smartphone app.

Stock photo of a CRT television set.
There have been huge technological changes within my own lifetime.

I guess this boils down to one massive question: are we anywhere near the limitations of modern A.I. systems? Or, to think about it another way, are today’s A.I. models, with all of their generative capabilities and human-mimicking interactions, genuinely capable of taking the next big leap?

A.I. models appear, to me, to be closer to parrots and mynah birds than to actual sentient people. They’re quite capable of replicating “language,” on the very simple basis of “input X requires response Y,” but they don’t understand what they’re saying. That’s why A.I. systems make so many incredibly basic mistakes, and why some enquiries or conversation attempts go completely off the rails. A.I. models, in their current form, also seem to struggle with retaining information – even when all we’re talking about is a few lines of text.

Stock photo of a red parrot.
Are today’s LLMs closer, in terms of sentience, to you and me… or to this parrot?

Let me give you an example. While researching for this article, I “spoke” with several A.I. chat bots. And it was an interesting experience! A.I. can do some things incredibly well – it can write you functional computer code, answer questions about everything from history to mathematics, and even write an article or a story that flows naturally and is coherent. But A.I. struggles with some pretty basic things, too, like remembering what’s already been said. In just a short span of time, and without me deliberately trying to trick it, I found the A.I. that I was “speaking” with was giving totally contradictory responses to things it had said just a few minutes earlier. I could scroll up and see the older messages, but the A.I. seemed to not have that ability. It’s programmed to respond to immediate inputs – so if you ask it the same question twice, five minutes apart, you might very well get two different answers. Not differently worded answers – totally different, contradictory responses.

There’s a massive gulf between technology that’s “almost ready” and technology that actually works. Self-driving vehicles might work perfectly… 98% of the time. But that last 2%? That’s what’s stopping self-driving vehicles from going mainstream. The same goes for some of Elon Musk’s rockets – they’re 99% perfect… but that 1% error causes explosions on the launchpad. A.I. can do some things very well in a limited use case, and can appear to be very human with its mimicry of language. But is it really just a few years – or even a few months – away from the technological revolution predicted by A.I. 2027?

Still frame of a NASA rocket exploding near the launchpad.
The small difference between “almost ready” and “actually works” is what leads to rockets exploding on the launchpad.

There’s not just a difference, but a massive, ocean-spanning gulf between a machine that can mimic human language and one that can operate independently. The first is like a parrot imitating its owner: incapable of genuinely understanding anything it says, still fundamentally operating on an “input X yields response Y” basis, and with very limited ability to retain information within a single “conversation.” An A.I. system that can solve world hunger, operate entire governments, and threaten our extinction feels… well, it still seems like the stuff of sci-fi. And yes, I accept that this was a very limited experiment using free A.I. bots, and that the cutting-edge stuff being worked on behind the scenes is going to be much more capable. But are its capabilities that much greater? Is this technology really the disruptive, revolutionary, and potentially world-ending threat that the authors of A.I. 2027 believe it to be?

Generative A.I. is proving disruptive, and despite what some folks might want to think, the technology is clearly here to stay – at least in some form. On the business side of things, if an A.I. can do the work of an office full of people, well, that office is going to be closed and those folks will be laid off. I’ve also looked ahead to the future of A.I.-generated entertainment, making a bold prediction that A.I. might, one day soon, be able to generate tailor-made films and TV shows, potentially shutting down entire studios and laying off actors, writers, and everyone else.

Photo of the Warner Bros. studio in California.
Part of the Warner Bros. studio in California.

A lot of white collar, middle-class, creative, and upper-income-bracket folks have long considered themselves safe from this kind of automation-driven unemployment. So the arrival of generative A.I. systems that are competent and genuinely compete for those roles? That’s been a massive shock, and I think that’s why we see so many people pushing back against A.I. today in a way that they didn’t push back against all kinds of other disruptive technologies. Because, at the end of the day, most technological changes prove disruptive to someone. It’s just that this time it’s the turn of online content creators, wealthier folks, and people with a disproportionately large voice.

But when it comes to things like A.I. drones murdering people, A.I. systems going rogue, or A.I. super-pandemics… I’m not convinced it’s something we’re on the cusp of. Nor do I feel we’re just a few years away from a post-labour, tech-driven utopia where no one has to work and we can all indulge in artistic or academic pursuits. Both scenarios feel far-fetched to me, even as some A.I. systems expand their reach and their capabilities. I suspect we’re closer to the ceiling of what current A.I. models are capable of than the hype suggests, and that the kinds of doomsday or utopian scenarios laid out in A.I. 2027 would require major advances in computing and other technologies that may not even be possible.

Stock photo of a server rack.
What kind of changes or advancements in computing might be necessary to fulfil some of A.I. 2027’s predictions?

The world is ever-changing, and technology in particular is not a static thing. My entire lifetime, really, has seen innovation upon innovation, taking me from an analogue childhood in the ’80s and early ’90s to the tech-focused life of today. I don’t doubt that there will be more changes to come, and that there will be inventions and innovations that, right now, I can’t even conceive of – assuming I live long enough to see them! So I’m not trying to dismiss out of hand the genuine concerns folks have about artificial intelligence. But at the same time, I can’t help but feel that current models could be more limited in their abilities than the A.I. evangelists want their investors to believe.

Right now, A.I. is driving an almost unprecedented level of investment, with a handful of companies making a ton of money. But is this just the beginning of an economic boom that will rival that of the post-war years or the Industrial Revolution? Or is it a speculative bubble about to burst, as we’ve seen repeatedly in recent decades? Whether we’re talking about the dot-com bubble, subprime mortgages, or cryptocurrency crashes, there are plenty of examples of speculative bubbles that got out of hand. Is A.I. just the next one? Are the promises made by A.I. creators genuine, or just an attempt to drum up further investment? Can A.I. really do what investors are being promised?

Stock photo of a briefcase containing stacks of United States dollars.
There’s a lot of money at stake in the A.I. industry.

We can’t escape the reality that all of this is tied to money. A.I. companies need documents like A.I. 2027, because this conversation feeds into the narrative their executives are weaving about the future capabilities of these systems. And the promise of an incredible return on investment is what’s keeping these – otherwise unprofitable – companies in business right now. I’m not accusing anyone of running a deliberate scam, but it’s a pretty well-established way of doing business in the tech space: over-promise, rake in the cash, and only then try to figure out how to make good on at least some of those ideas. That approach has worked for the likes of Apple. But it didn’t go quite so well for companies like Theranos.

The tl;dr is this: it benefits A.I. companies to allow this conversation about their products to do the rounds. It drums up interest and attracts investment – not because investors want to see humanity wiped out and the world end, but because they see the potential for short-term financial gains. A select few companies in the A.I. space have seen their share prices increase four-, five-, and six-fold in just a couple of years – and that’s largely due to the belief that A.I. is the wave of the future. Investors believe that whoever perfects the technology first will become the world’s first trillionaire – and they want in on that. We can’t avoid that side of the issue when discussing A.I. technologies as they exist today – and their future prospects.

Still frame from Star Trek TNG Season 2 showing Data.
Data from Star Trek: The Next Generation.

A.I. is already disrupting entire industries, and we’re all going to have to learn how to use these systems in the workplace in the years ahead. There could very well be fewer entry-level white-collar jobs, fewer graduate-level jobs, and fewer office jobs in general. And the potential uses for A.I. systems on the battlefield could result in a monumental change in how future conflicts unfold. But as I see it, today’s artificial intelligence systems don’t “think.” They regurgitate information when prompted, and they’re closer in actual “intelligence” to a parrot than to a person. Artificial intelligence can do some things very well – better, faster, or more reliably than any person ever could. And that’s going to be fantastic in some use cases: diagnosing diseases earlier, writing computer programs, or creating individualised education plans for kids with special needs. But there’s a lot that A.I. can’t do – and some of it, given the limitations of computing power, may never be possible.

And it’s those things, in my view, which would be needed to turn the LLMs of today into the super-intelligence of A.I. 2027.

So that’s all for today. I hope this was interesting – though as a total non-expert, I could be completely and utterly wrong about everything! No change there, then. I’ve linked the original A.I. 2027 paper below, and if you haven’t read it, please check it out. There are also some great summaries on YouTube. I know this was a change from my usual content, but A.I. has been a big deal in sci-fi – and in the Star Trek franchise in particular – for decades, and it’s a big deal right now thanks to the success of the likes of ChatGPT.

If you missed it, I have another piece in which I talk about the possibility of generative A.I. being used to create tailor-made films and TV shows in the near future: you can find it by clicking or tapping here. Thanks for reading, and I hope you found my take to be interesting. Until next time!


You can find the original A.I. 2027 paper by clicking or tapping here. (Warning: leads to an external website.)


Some images generated with A.I. (yes, on purpose!) Some stock photos courtesy of Unsplash, Pixabay, and Wikimedia Commons. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.

Building my new PC

For a couple of years my PC had been in need of a refresh! I’m disabled and spend most of my time at home, and my PC has been everything for me over the last few years: entertainment centre, games console, workspace, and of course, the place where I write all of these articles and do all of the tasks here on the website! When 2021 rolled around I decided I needed to get my act together and get serious about an upgrade, and over the course of last year I put together a list and began to acquire the components for my new build piece by piece.

Though I perhaps know a little more about computers than your average consumer, I’m by no means an expert – as this article will surely attest. That’s why I’m not calling this a “build guide” or “how to build a PC!” There are plenty of far better-qualified people than me who’ve written step-by-step guides on how to do everything, and there are videos on YouTube too. I’d recommend Linus Tech Tips on YouTube, TechRadar’s step-by-step guide, and the Tom’s Hardware forum if you’re looking for that kind of detail – and I’ll include links to all three at the end of this article. I’m writing this for the website to share my experience as a newbie to PC building – and because I enjoy writing!

I’m no expert, so if you’re looking for a detailed guide you’ll have to go elsewhere!

First of all, it took longer than I’d hoped to get everything in place. I kicked off this project just over a year ago, in early 2021, and I hoped to have upgraded my old PC by the summer. I then changed my plans and decided to build an entirely new machine from scratch, adding extra time to the project, but I still hoped to be finished well before Christmas. In the end, it was mid-March before I finally got it done – and there’s one additional task that I’ll aim to complete later this year or early next year, depending on how things go.

When I set out to build my PC I thought I knew the basics: which components I’d need and roughly how much I’d need to spend on them. But what hit me later on were all of the hidden costs, extras, and accessories: things like additional cables, an extra fan, a new DisplayPort cable, a new surge protector, screwdrivers, a static wrist strap, thermal compound, thermal pads, and so on. Because I changed where I sit and the orientation of my PC, I also needed to invest in a new monitor arm and additional storage under the table that my PC rests on. All of these smaller things added up and delayed the project by at least a month!

Picking up hidden extras like new cables added to the cost.

Because I’ve never had a lot of money, I’ve always chosen to invest in items that I feel are higher-quality and stand a good chance of lasting a long time. The cheapest products aren’t always the best value or longest-lasting, as I’m sure you’ve discovered for yourself! With that in mind, I sought out components with excellent reviews, and even a single negative review about a product or the company’s customer service was enough to send me into a tailspin as I pondered the upsides and potential drawbacks. This also added a lot of time to the project!

This time around I chose to go with an AMD Ryzen CPU, specifically a Ryzen 7 5800X 8-core processor from the Zen 3-based Ryzen 5000 series. After more than a decade of Intel’s dominance in the processor space, AMD’s Ryzen chips began getting rave reviews a few years ago, and it seemed like the best fit. I’m not wildly into overclocking, nor do I intend to push the chip beyond its stock limits – but I wanted something high-quality and fast that would really show off what a modern PC is capable of.

My new CPU.

About a decade ago I suffered a major internet outage that left me reeling! For more than six weeks I remained disconnected, growing increasingly frustrated – and increasingly bored. When I got back online I ordered an external hard drive, and on that drive I installed a number of games, made backups of my DVDs, and so on so I’d always have something to do if I was ever in that situation again. I got a second external drive somewhere along the line too, and my workspace has been cluttered with drives, wires, and power cables for the past few years.

With my new PC, I wanted to ditch the external drives altogether. I don’t go places, and I don’t have other computers I might want to plug them into, so their presence was just an annoyance! With that in mind I installed two drives in my new PC: an M.2 SSD to serve as my main C: drive – home to Windows, other software and apps, and most of the games I’d want to play – and a second, larger hard disk to hold all of my stored DVD and Blu-ray rips.

Good old DVD box sets…

I chose a Sabrent Rocket M.2 drive for my new PC’s primary drive – again, on the back of reviews and recommendations – and a large Seagate Exos hard disk for my secondary drive. It should also be possible to install games on the second drive if space ever becomes an issue on the M.2, which is a nice option to have. Flexibility is the name of the game in that case!

This is my first experience with an M.2 drive. My old PC had a SATA SSD, but it was a very cheap one that never seemed to be especially fast. I think it was a Kingston model, and it was pretty small as well. Basically everything except for Windows – including my collection of MP3s and photos – ended up on an external drive.

My new M.2 SSD.

This might be the most controversial part of the build, but I went for RAM overkill: 64GB of DDR4. The kit is rated for 3600MHz, which is apparently recommended for Ryzen chips, though out of the box it ran much slower – DDR4 defaults to a lower JEDEC speed until its memory profile is enabled in the BIOS. 64GB of RAM is complete overkill for practically any modern system, so I’m told, but last year I was thinking about getting into YouTube – I had a short-lived foray into podcasting – so I thought I might need the extra capacity if I got serious about video editing or other RAM-intensive tasks.

I chose a decent motherboard to go with all of these components – a “gaming” model from MSI. I also invested in a power supply from Be Quiet! that’s rated 80 Plus Titanium – the highest rating available from the premier certification scheme for these kinds of things. I don’t pretend to know the exact details of what makes a “Titanium” better than a “Bronze,” but it’s to do with power efficiency: how much of the electricity drawn from the wall actually reaches the components rather than being lost as heat. It seemed worthwhile to spend the extra money on something more efficient, and I made sure to choose a power supply that could more than handle all of the components I was putting into the machine.
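For the curious, here’s a rough sketch of the kind of headroom check involved in sizing a power supply. The wattage figures below are ballpark estimates I’ve plugged in for illustration – not measured values for this build:

```python
# Rough PSU headroom estimate. The draw figures are ballpark assumptions
# (e.g. AMD's stated 142W package power limit for 105W-TDP Ryzen chips,
# and the GTX 1660's 120W rated TDP), not measurements from this machine.
draw_estimates = {
    "CPU (Ryzen 7 5800X, boost)": 142,
    "GPU (GTX 1660)": 120,
    "Motherboard, RAM, drives": 75,
    "Fans, USB devices, misc.": 40,
}
psu_rating = 850  # Be Quiet! Dark Power 12, per the parts list

total = sum(draw_estimates.values())
headroom = psu_rating - total
print(f"Estimated draw: {total} W; headroom: {headroom} W")
# → Estimated draw: 377 W; headroom: 473 W
```

Even with generous estimates, an 850W unit leaves plenty of room for a future graphics card upgrade.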

The motherboard I chose.

Here’s a problem that I wager most users won’t have to factor in: cats! I have several cats, and they have a tendency to jump on my PC case. With my old machine, the inconveniently-located power button meant they were frequently turning my PC off with their paws when jumping or walking on the case, so I wanted a new case with a power button either on the front or at least not sitting flat on top. Most cases nowadays seem to put the button flat on top; the “old days” of horizontal cases and power buttons on the vertical front of the case seem to be long gone!

I chose a Be Quiet! case in the end; the power button is still near the top, but it’s located on a sloping panel, which means my cats can jump up and down without disturbing it or accidentally switching me off halfway through writing an article… or halfway through the latest episode of Star Trek! The Be Quiet! Dark Base 900 is a much larger case than my previous machine’s, but that should mean good airflow for keeping all of the components cool.

This is what I mean.

The CPU cooler that I chose was also based on reviews and recommendations: I went with a Noctua NH-D15. I debated using a water cooler – one of the all-in-one systems – but ever since a fellow PC builder I knew ruined his entire system when his homemade water-cooling loop sprang a leak… let’s just say I’ve been put off! I know that today’s all-in-one water coolers are probably safe to use – far safer than the janky piece of crap my feckless friend built in his basement 20+ years ago – but even so, I felt that an air cooler was the way to go. The Noctua NH-D15 is one of the best-reviewed coolers on the market, and it has recently been updated with a special AMD Ryzen mounting bracket, so that was the version I picked up.

I chose to add one PCIe card – a WiFi and Bluetooth antenna. I don’t care about the WiFi particularly as I’ve always preferred to use ethernet for my stationary PC, but I wanted to add Bluetooth functionality. I use a Bluetooth keyboard and I have a couple of other Bluetooth devices that I thought I might try to connect, and considering that it wasn’t hugely expensive to add it in, it seemed worthwhile.

The Bluetooth card.

With prices for graphics cards having been sky-high for years, I knew from the start that I would reuse my current one rather than wait months only to pay over-inflated prices. When my old GPU crapped out on me a couple of years ago I replaced it with a GTX 1660, so it’s not like it’s a horribly outdated component. It would be lovely to pair all of that new hardware with a ray-tracing graphics card that can really take advantage of modern games… but one thing at a time! That’s an upgrade that I hope to get around to either later this year or next year, depending on prices and how well my PC performs.

So those were the main pieces that I chose. It took a while to back up all my files (and double-back up the most important ones because I’m paranoid like that), but eventually I’d done as much as I could, procrastinated long enough, and was ready to get to building!

I scavenged this graphics card from my old machine. It’ll do for now!

I’m absolutely certain that building a PC in 2022 is significantly easier than it would’ve been fifteen or twenty years ago. Most components slot into place, there are step-by-step guides and video tutorials on how to do everything, and even the instructions that came with the components were easy to understand.

I started by taking the motherboard out of its box, strapping on my anti-static wristband and grounding myself, and making sure I had my new screwdriver kit at the ready! Installing the RAM was the task I chose to do first – it’s something I’d done before and I knew exactly what I was doing. From there I installed the M.2 drive and its heatsink, and then the task I was probably most nervous about: the processor itself.

My chosen RAM.

How many horror stories have you seen of bent pins, misaligned chips, and other CPU disasters? I couldn’t easily tell which way the chip was supposed to be oriented; the little triangle that’s supposed to indicate the correct corner is incredibly small and blends in. But after checking, double-checking, and psyching myself up, I gingerly lowered the chip into its socket… and we had success! Nothing was broken, no pins snapped off, and nothing blew up. Phew!

Next I applied a small amount of thermal compound (I went with Thermal Grizzly’s “Kryonaut” paste instead of the stock tube from Noctua). Doing what I’d seen others do on video, I laid out a small blob of the stuff, no larger than a grain of rice, and then secured the cooler in place. It amazes me that such a large cooler is okay; it looks like it’s hanging there, suspended in mid-air!

The thermal paste I used.

Having done about as much as I could with the motherboard outside of the case, I next had to grab the case itself and start installing the power supply. The Be Quiet power supply that I chose came with a large number of cables, not all of which I ended up using. Some of the cables look very similar to one another, so it took a while to make sure I’d got each one in the right place!

I installed the motherboard, screwing it into the appropriate standoffs in the case. Then I slowly began plugging in each of the various cables, including a bunch of wires that had been dangling inside of the case when I opened it up! I installed the hard disk in the lower corner of the case, and removed all of the other hard disk trays that I’m not using (I’ll hang onto them in case I ever want to add in another drive or something). I hope this will result in slightly better airflow.

A lantern came in handy for extra light – the inside of the case was very dark.

All that was left was to install the GPU and the Bluetooth card in the two PCIe slots. Having done that, which didn’t take very long at all, I checked my watch and was surprised to see it had only been about ninety minutes! Thinking to myself that I’d done a good job, I grabbed a Dr Pepper and went in for a victory sip while the cats sat idly by and watched. To my surprise none of them tried to interfere while I was working… good cats!

But I was far from done, as it turned out. After double-checking every connection and component, I plugged in the PC and hit the power button… and nothing happened. Oh god, panic time! What have I done wrong? How can I even test to see what’s happening if literally nothing is happening?! After a moment of abject panic I tried to think back… what could have gone wrong? Why would absolutely nothing at all happen when I hit the power button?

My reaction!

After checking the very obvious things – was the power supply switched on, was the cable plugged in, was the surge protector turned on, etc. – I homed in on the problem: the power button itself. The power button connects to the motherboard using a two-pin cable, and the connection has to be in a specific orientation (as denoted by a plus and a minus symbol). I’d installed it back to front. After reversing the power switch connector I tried again, and to my joy and relief the system sprang to life!

All of the fans were spinning, and once I reached the BIOS everything seemed to be showing up: the system detected its USB ports, its M.2 drive, its hard drive, and the right amount of RAM… everything was right where it should be, so I shut it down and prepared to install Windows 11.

My operating system of choice. Ha.

Ugh. Windows 11. We’ll have to talk about this in more detail on another occasion, but for now suffice to say that Windows 11 appears to have been designed by a team of software experts at Microsoft who were given the explicit brief of creating an operating system that embodies every aspect of the word “irritating.” They succeeded beyond their wildest ambitions.

I was told at first that “This PC can’t run Windows 11!” thanks to the ridiculous hardware requirements that Microsoft placed on the new OS. I knew that wasn’t right, because the Ryzen 5800X includes a firmware TPM that satisfies Windows 11’s TPM 2.0 requirement. However, this security feature is not enabled in the BIOS by default, so I had to go in and turn it on manually. Having done so, Windows 11 happily installed at the second time of asking.

The screen that greeted me first time around.

That should have been the end of the affair, but there was one final twist in this tale of amateur-hour PC building! A couple of days after putting everything together, slapping myself on the back, and calling it a job well done, the new PC began experiencing random crashes. There would be no warning, no blue screen… just an instant shutdown as if the power had been cut. I was very worried!

These shutdowns produced no error message worth its salt – just a very basic entry in the Windows Event Viewer that said nothing about the cause. After spending a long time on Google chasing down replies to years-old forum posts, I tried as many software fixes as I could find: updating drivers, uninstalling programmes and apps, rolling back Windows updates, re-installing every driver one by one, updating the motherboard BIOS… nothing worked. The shutdowns continued, and they seemed to be getting worse. At one point, the system tried and failed to boot five times in a row; it wouldn’t even make it as far as the desktop before losing power.

This was all I could see in the Event Viewer.
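For anyone chasing similar symptoms: sudden power losses like these usually appear in the System log as critical “Kernel-Power” events with ID 41. As a rough illustration – the sample data and column layout below are hypothetical, not taken from my machine – here’s a minimal sketch of filtering a CSV export from Event Viewer for those events:

```python
# Minimal sketch: counting Kernel-Power (ID 41) events in a CSV export
# from Windows Event Viewer. The sample data below is entirely hypothetical;
# a real export comes from Event Viewer's "Save All Events As..." (CSV) option.
import csv
import io

sample_export = """Level,Date and Time,Source,Event ID,Task Category
Critical,18/03/2022 14:02:11,Kernel-Power,41,(63)
Information,18/03/2022 14:02:30,Kernel-General,12,None
Critical,19/03/2022 09:15:47,Kernel-Power,41,(63)
"""

# Keep only the critical unexpected-shutdown entries.
crashes = [
    row for row in csv.DictReader(io.StringIO(sample_export))
    if row["Source"] == "Kernel-Power" and row["Event ID"] == "41"
]
print(f"Found {len(crashes)} unexpected shutdowns")
# → Found 2 unexpected shutdowns
```

The event itself only tells you that power was lost unexpectedly, not why – which is exactly the frustration described above.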

After a lot of digging around – not helped by the vagueness of the error message, or by the fact that Windows 11’s Event Viewer is cluttered with warnings that Microsoft says are totally fine – I eventually relented and opened up the case again to see if there could be a hardware problem. It didn’t seem like a typical hardware issue: if a component had been broken or non-functional, I would have expected to see the problem from the very first moment I put the system together, not starting days later after everything had been running smoothly.

Every component appeared to be securely in place; the CPU cooler wasn’t falling off, all the cables were plugged into the power supply securely, and the power supply itself seemed to be in good working order. Running out of options, I did something that really isn’t recommended – poking around inside the case while the system was powered on. I poked and prodded at the various components as safely as I could, and eventually I hit upon the problem: the cable connecting the power supply to the CPU was ever so slightly loose. The tiniest bump or prod on this connection switched the system off in exactly the same fashion as I’d been experiencing.

A simple loose connection between the power supply and the CPU was the cause of the problem.

Rerouting the cable in question, and tying it as securely as I could to the inside of the case, seems to have solved the problem. I can only assume that it came loose in the first place thanks to a combination of my amateur workmanship leaving it susceptible to the smallest of knocks… and the cats jumping on top of the case! They didn’t jump on the new case for a couple of days as they were wary of this new addition to the room, but I think their jumping must’ve been just enough to loosen this CPU power cable and cause those irritating random shutdowns. At the time of writing, it’s been just over a week since I rerouted the cable and the problem has not returned.

So that’s my PC building journey. It was an interesting experience, and while I can’t honestly say that I saved a lot of money by buying my own components, what I can say is that I got exactly the PC that I wanted. I got to choose every part, I got to make sure that I got components that met my requirements – or the requirements I thought I had, at least! – and I got a new experience out of it, too. At my age, brand-new experiences are few and far between!

One of the guilty kitties!

If you’re looking for a recommendation, I’d say that building a PC isn’t for the total beginner. Sure, most components snap together easily enough, and anyone who’s ever built a Meccano set could manage that part of it with a few basic tools and the instructions. But knowing where to begin, and where to look in the event of things not going exactly as planned… that requires some background knowledge on the basics of how PCs work. If you’ve taken an interest in technology, though, and you know the difference between a CPU and a GPU, or which way around fans should be pointing, then I’d say it’s a fun project – but it is a project, and that requires some degree of effort, preparation, troubleshooting, and an ability to Google your way to solutions!

I’m glad I attempted this project, and hopefully the new PC will tide me over for the next few years with no trouble. I have vague plans, as mentioned, to get a ray-tracing graphics card in the months ahead, but for now I’m satisfied. I’ve copied over all of my files and backups, and I’ve started installing a few games to play – including a couple of titles that my old PC struggled to get running.

Stay tuned for a review of Windows 11 in the days ahead, because I definitely have some thoughts on Microsoft’s latest operating system. Some very critical thoughts!

Windows 11. It has TikTok!

Below you can find a list of the components that I used to build my new PC.

  • Power Supply: Be Quiet! Dark Power 12 850 Watt Fully Modular 80 Plus Titanium
  • Case: Be Quiet! Dark Base 900 (Black)
  • Case Fans: Be Quiet! Silent Wings 3 PWM 140mm
  • Extra Case Fan: Noctua NF-A14 PWM 140mm
  • Motherboard: MSI MAG X570 Tomahawk WIFI ATX
  • CPU (Processor): AMD Ryzen 7 5800X 8-Core
  • CPU Cooler: Noctua NH-D15
  • Thermal Paste: Thermal Grizzly Kryonaut
  • GPU (Graphics Card): Palit GeForce GTX 1660 6GB
  • RAM: Corsair Vengeance LPX 16GB x4 (64GB total)
  • Solid-State Drive: Sabrent Rocket 2TB NVMe M.2
  • SATA Hard Disk: Seagate Exos Enterprise Class
  • PCIe Bluetooth Card: Gigabyte GC-WB1733D-I
  • Extra Cable: Sabrent SATA III
  • Operating System: Windows 11
  • Tools: iFixit Essential Electronics Toolkit; anti-static wrist strap

Below you can find links to a few websites that I found helpful during my planning and preparation stages:

I’m not an expert and this article is not intended as advice or a guide. You are solely responsible for the outcome if you choose to build your own PC, and I accept no responsibility for any damage or destruction that may result. Some stock images used above are courtesy of Pixabay. This article contains the thoughts and opinions of one person only and is not intended to cause any offence.