The World's Largest CPU Maker that Doesn't Make CPUs…

Would you believe me if I told you the world's largest CPU maker doesn't actually make any CPUs? I mean, that sounds ridiculous, right? And yet, it's true. This little three-letter company changed the way we use technology as a whole, and yet they don't sell anything other than licences. This is the history of ARM, the first virtual CPU maker. As PC enthusiasts we likely think of ARM as a pretty small-time technology – sure, you've got an ARM chip in your phone, but your desktop, your laptop, the servers you use, those are all, probably, still x86. Intel or AMD, right? Well first, that's changing quicker than you might think, and second, I guarantee that you own and use tens or hundreds of ARM chips every single day. Your car is full of ARM chips, and so are your phone, your tablet, your TV, your smart speakers, your video doorbells and your smart light bulbs. That is all ARM. To figure out how ARM managed to ship 32 billion chips in 2022 – without building a single chip – we need to jump back to 1983 and look at the precursor to ARM, a little team bubbling away inside Acorn, a leading UK computer maker.

For a little context, Acorn in the 80s was an incredibly successful UK tech company. They designed the BBC Micro – a very popular 6502-based computer that the BBC (yes, that BBC) commissioned Acorn to make – and the Electron, and were a bit of a hotbed for employees to form their own startups. In the early 1980s Acorn was searching for a new processor for their systems – an upgrade to the 6502 they had been using for the BBC Micro – and concluded that the options all sucked, or weren't available. They originally tried to use Intel's 286 cores, but wanted to modify them and Intel wasn't interested, so they decided to do it themselves. After a visit to the Western Design Center in the States, which was effectively a one-man band updating the 6502, Steve Furber and Sophie Wilson (who's an absolute hero by the way) set about designing a new CPU, specifically with a RISC architecture.

RISC is incredibly important to ARM's history, so let me explain exactly what that means. RISC stands for Reduced Instruction Set Computing, and it's the rival sibling of CISC, Complex Instruction Set Computing. x86 CPUs are CISC (although to be pedantic, modern x86 cores internally break their complex instructions down into simpler, RISC-like micro-operations), whereas ARM and RISC-V chips are, well, RISC. The big idea here is that by reducing the number of possible instructions the CPU can execute – things like ADD, MOVE, STORE and so on – you can simplify the core design down to its bare essentials, whereas with a CISC design, because you have to support more and more different operations, the core gets more complicated and less efficient. That focus on efficiency is where RISC, and therefore ARM, cores really shine.
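
To make that load/store difference concrete, here's a rough sketch in C – the instruction sequences in the comments are simplified, hand-written approximations of what a compiler might emit, not real compiler output:

```c
#include <stdio.h>

static long counter = 0;

int main(void)
{
    /*
     * On a CISC machine like x86, the line below can compile to a single
     * instruction that reads, modifies and writes memory in one go,
     * roughly:
     *     add  QWORD PTR [counter], 1
     *
     * On a RISC (load/store) machine like ARM, memory is only touched by
     * explicit loads and stores, so the same line becomes a few simple
     * instructions, roughly (assuming the address is already in x0):
     *     ldr  x1, [x0]        ; load counter into a register
     *     add  x1, x1, #1      ; do the maths in registers
     *     str  x1, [x0]        ; write it back
     *
     * Fewer, simpler instruction formats mean a simpler decoder and a
     * simpler core - which is how the early ARM chips got away with
     * ~30,000 transistors and milliwatt-level power draw.
     */
    counter += 1;

    printf("counter = %ld\n", counter);
    return 0;
}
```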

Sophie designed a model of the core in BBC BASIC on a BBC Micro fitted with a second 6502 processor, and designed the ARM instruction set, all to convince Hermann Hauser (the co-founder of Acorn) to put together a team to implement her model in hardware. By 1985, in combination with their hardware partner VLSI (not to be confused with LSI Logic, who Jensen worked for before he left to found NVIDIA – video about that in the cards above), Acorn developed their first ARM chip, the ARM1. The ARM1 was mostly used as a second processor alongside the 6502s in BBC Micros, both to better develop software for ARM and to speed up the CAD software they were using to design the ARM2. Despite being made on a 3 micrometre process node with a relatively large 50 mm² die, it drew only 120 milliwatts while running at 6 MHz. Not bad for 1985 – for context, Intel's 1985 offering was the 80386, a 39 mm² die that drew two watts! The ARM2 was an improvement both in clock speed – the first parts ran at 8 MHz, with later versions in 1987 reaching 10 or 12 MHz – and in core design. It had a 32-bit data bus, a 26-bit address space, and twenty-seven 32-bit registers, although it did lack direct memory access. Despite the low power and modest specs – sporting only 30,000 transistors total – the ARM2 managed to offer TWICE the performance of an Intel 80386 running at 16 MHz in the Dhrystone benchmark.

There was a successor, the ARM3, which added 4 KB of cache, and by 1990 Apple had been in talks with Acorn about their RISC core designs and wanted to invest so they could have their own CPU for their then-upcoming Newton PDAs. Acorn and Apple jointly decided that it'd be best if Acorn spun off their ARM business into its own venture, so in 1990 they did just that, with Apple and Acorn each holding 43% of the new company. ARM was actually started with very little – as Robin Saxby, the founding CEO, puts it, "we did have a barn, we started in a barn not a cow shed, and not a garage, it was actually a barn". They started with £1.5 million from Apple, £250,000 from VLSI, £1.5 million worth of IP from Acorn, and 12 Acorn employees. That was it. A twelve-person company operating out of a barn in Cambridgeshire with no tools and no independent customers, but as Robin says, they did have "a barn, some energy, belief and experience". ARM, by the way, initially stood for Acorn RISC Machine, but at Apple's request, upon the investment, it became "Advanced RISC Machines".

ARM6, which confusingly implemented the ARMv3 architecture, was the first chip design that the now independent ARM made, starting in 1991. It moved to a smaller process node – either 1.2 or 0.8 micrometres depending on the version, as ARM7 is part of the same design family – and added new features like a full 32-bit address space, meaning up to 4 GiB of memory could be addressed, and an integrated virtual memory management unit. This design meant that ARM was profitable by 1993 – just three years after being spun off. That is a massive rarity – considering NVIDIA took years to get even close to profitability – but the key really is in their business model.

ARM is not a chip maker – at least not in the same way NVIDIA is, and even NVIDIA isn't quite like AMD and Intel traditionally were, since NVIDIA has always been fabless, as in they don't do the actual manufacturing of their chips – ARM doesn't even sell ARM-branded chips that they get someone else to make. ARM designs the cores and the instruction set, then licenses those core designs to others to make their own chips. Apple is the perfect example of this – not just as a founding investor, but as a customer. Apple doesn't want to do the work of designing their own cores and instruction set – that's a lot of investment in both time and money (much more than the £1.5 million they put into ARM). So when ARM comes along and says, 'hey, we have these cores and this instruction set, you can use them for a few quid', Apple gets the benefit of ARM's work while still being able to design the final chip around their own requirements – near total control of the chip design, without the hassle of designing the core. The key thing though is that ARM collects a royalty on every single chip that's sold. It might be a single penny for all I know, but when you are talking about 32 billion chips sold in 2022, that's still £320 million. It's been reported that ARM's royalty is usually more like one to two percent of the chip's final sale price, and that Apple pays "less than 30 cents per chip", but again, even if you use that 30 cent mark for everyone, that's still $9.6 billion in 2022. According to their 2024 financial statement they made $3.2 billion in revenue that year with a total of just 253 customers – 31 'total access licences' and 222 'flexible access licences'. That's $12.8 million per customer, on average. Not bad, huh?

ARM6 was exactly what Apple was looking for, with the ARM610 going into their Newton PDA. ARM's focus on low power, but still high performance, meant they were the perfect fit for literally anything with a battery, and increasingly anything embedded. They refined their core design throughout the 90s, making use of better process nodes to shrink the dies, drop the power consumption, and increase the frequencies. As Robin points out in his presentation, by 2005 you could get an ARM7TDMI core on a 65 nm process node that was just 0.1 mm² and drew just 9 milliwatts at 350 MHz. For context, Intel's 2005 offerings were the Prescott-based Pentium 4 chips, which used a 90 nm process node and, at least looking at the Extreme Edition, drew 115 watts – almost 13,000 times more power. Admittedly it's likely a hefty amount more performant, but still. Their focus on low power is quite the departure from what we as PC nerds are used to, but to say they were poised for success would be a complete understatement, which brings me nicely onto their key decision in the early 2000s that has led them to such dominance – Cortex. Cortex has been divided up over the years into Cortex-M for microcontrollers, Cortex-R for real-time chips, Cortex-A for application use, and Cortex-X for purely performance-focused application chips. The first Cortex design was actually a microcontroller core, the M3, based on the ARMv7-M architecture, which found its way into a lot of different chips, like the Atmel SAM3 and Silicon Labs Precision32, to name but two. The following year, in 2005, they launched the Cortex-A8, the first widely adopted Cortex application core. Apple used the Cortex-A8 to create their own CPUs for their own devices – the A chips, namely the A4. The A4, fabbed by Samsung funnily enough, and replacing a Samsung-designed chip, powered the first generation iPad, the iPhone 4, the fourth gen iPod touch, and the second generation Apple TV. They used it for a whole lot, and it's easy to see why. With up to 1 GHz clock speeds but just 0.45 mW per megahertz, you're talking about 450 milliwatts of power consumption at 1 GHz. That's incredible, on top of the absolutely tiny footprint of around 4 mm², making it a perfect fit for still pretty high performance, but power-conscious, applications like phones and tablets.

2007 brought about the Cortex-M1 and the A9, with the A9 being an incredibly significant upgrade as basically a multi-core version of the A8, offering up to four cores – although Apple opted for a dual-core config in their A5 chip, which powered the second generation iPad, the iPhone 4S, the 5th gen iPod touch, the first iPad mini and the third gen Apple TV. The A9 also found its way into NVIDIA's Tegra line – the family that powered Microsoft's Zune HD (I know), Audi infotainment displays, Tesla Model S displays, and NVIDIA Shield devices. The A9 was so successful that ARM followed it up with four different models: the A5 for ultra-low-power designs, the A7 for high efficiency, the A12 for mainstream performance and the A15 for high performance. On the microcontroller front we find two very familiar names – familiar to me anyway – the M0(+) and M4(F). Those are the two cores that I use in my own open source hardware – the M4 in the Atmel SAMD51 in the response time tool, and the M0+ in the SAMD21 in the latency tool and the soon-to-be peripherals tool. It's kinda funny to say I'm an ARM device maker! I'm pretty familiar with these cores, and their respective actual microcontrollers, and I can say that the level of performance they offer is pretty remarkable for their age, price and power consumption. I mean my whole board only draws tens of milliamps at five volts, and that's everything, even while running a test and dumping megabytes of data over serial! They are incredible bits of kit, which is likely why the M0 and M0+ cores in particular found their way into the automotive industry. Everything in your car now has a module. There's a module for the windows (in each door), there's a module for the lights, there's a module for the battery, and every single one of those modules has at least one microcontroller – and many of those are M0 cores. Especially in higher-end models, there are likely tens or hundreds of ARM Cortex-M cores in a single car. That's insane – and goes to show just how popular ARM is.

One of ARM's most popular design choices, big.LITTLE, launched alongside the Cortex-A7 in 2011. big.LITTLE is a solution to the intrinsic compromise you need to make when designing a CPU core, balancing efficiency and performance. You can make a core really, really fast, but it's going to suck back a lot of power – even if that's relatively little power compared to Intel (although that's like comparing against the power output of the sun, isn't it…) – or you can make it draw almost no power at all, but it won't be able to do all that much. So ARM's big (little) idea is that you don't compromise. You just put some ultra-efficient cores alongside some high-performance cores, and only turn the high-performance cores on when you really need them. All the stuff that needs to happen in the background on a phone – communicating with the cellular radio, polling for push notifications, or just updating the time – is handled by the low-power cores that, as the name would suggest, draw almost no power at all; then, when you unlock your phone and start doing stuff, the big cores fire up to handle it quickly. This gives you the best of both worlds – high performance and low power. ARM originally designed the A7 to pair with the A15 for that ideal balance of power and performance, and in 2012 they launched their first 64-bit designs, the A53 and A57, which went down a treat. You can find big.LITTLE A53/A57 cores in NVIDIA's Tegra X1 chips, which you'll find in the Nintendo Switch – although the 2019-onwards revisions actually dropped the A53 cores in favour of a die-shrunk quad-core A57 setup only.
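
To make that idea a bit more concrete, here's a minimal sketch of how software can explicitly steer background work onto the efficiency cores of a Linux-based big.LITTLE system, using the standard sched_setaffinity call. In practice the kernel's scheduler makes these decisions for you automatically; the core numbering below (CPUs 0 to 3 as the little cores) is purely an assumption for the example, as real layouts vary from SoC to SoC.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Assumed layout for this sketch: CPUs 0-3 are the little (efficiency)
 * cores and CPUs 4-7 are the big (performance) cores. Real numbering
 * differs per SoC - check /sys/devices/system/cpu/ on the device. */
static int pin_to_little_cores(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 4; cpu++)
        CPU_SET(cpu, &set);

    /* Ask the kernel to only schedule this process on the little cores. */
    return sched_setaffinity(0 /* this process */, sizeof(set), &set);
}

int main(void)
{
    if (pin_to_little_cores() != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Background-style housekeeping now stays on the low-power cores,
     * leaving the big cores idle (and power-gated) until something
     * genuinely demanding comes along. */
    for (int i = 0; i < 10; i++) {
        sleep(1);
    }
    return 0;
}
```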

One of the truly unique things ARM does is spend an inordinate amount of money on R&D. In 2006 Robin Saxby said they aimed to spend about 25 percent of revenue on research and development, but as of 2024 that figure has risen to a hair over 60 percent – $1.9 billion against $3.2 billion in revenue. That is an almost unique position to be in. For context, NVIDIA spends about 7 percent of their revenue on R&D, Intel is a little better at 30 percent or so (although that's likely a sign of their revenue stagnation more than R&D spend), and AMD is around 25 percent right now. The fact that ARM basically doesn't have anything else to spend their money on likely helps, but it also means that anyone else has very little hope of competing. ARM quickly became the industry standard for anything from ultra-low-power to fairly high performance – in 2006 they were the global semiconductor IP leader by two-and-a-half times, so god knows where they're at now – and that means competitors like RISC-V struggle. RISC-V was created at UC Berkeley, which set up the RISC-V Foundation in 2015. RISC-V aims to be a much more open standard, specifically being royalty-free and open source. In theory that's great for prospective chip designers, not having to pay anything for the instruction set or architecture, but in practice the research and development muscle, and sheer market dominance, that ARM has access to dwarfs any cost difference. There are some successful RISC-V products – the RISC-V based variants of Espressif's ESP32 are likely the best example here, and the introduction of RISC-V cores into the Raspberry Pi RP2350 microcontroller is a win there too – but on the whole it's going to take a very long time for any significant shift away from ARM to happen.

Part of that is also that ARM offers a more complete package – since 2005 they've even had their own graphics cores, branded "Mali", with growing performance and support – hell, DirectX 12 and Vulkan 1.3 support are included in recent models, and their new Immortalis cores even support hardware ray tracing! That complete package is why ARM has a functional monopoly on smartphones and tablets. Through their customers – Apple, Samsung, Qualcomm and MediaTek – functionally every phone you can buy right now uses ARM cores, and many use ARM GPUs too. There are still a few markets where ARM has yet to make substantial headway – namely the PC and desktop market and, to a lesser extent, the server room – although both of those have had some recent developments to help adoption along. In the PC space, companies like Ampere (and partners like System76) are making ARM desktops a usable option. While the current options are more developer workstations, they are also somewhat required to be able to develop and test ARM-compatible software, and at least according to Jeff Geerling they're getting pretty damn good at it. The most obvious example of ARM in personal computers, though, comes from the anti-PC, Apple, and their M series chips. Apple took what they learned from over a decade of making their own ARM-based A series chips for their iPhones and iPads and upped the game significantly. The M1 was a revolution in terms of both performance and efficiency. We're talking an order of magnitude less power for the same or better performance. It's worth noting though that Apple's ARM cores aren't stock ARM Cortex designs. ARM has two different licensing options: either you pay for their core designs and use them as-is, or you pay extra for an architecture licence that gives you free rein to design your own cores around the ARM instruction set. That's what Apple did with the M series – Apple designed their own cores to be their ideal blend of performance and efficiency, and thanks to Apple's closed ecosystem they have the might to make that sort of tectonic shift and just drag everyone along for the ride. On Windows and Linux that isn't quite as easy, hence the distinct lack of proper ARM support for mainstream desktop apps. Some do – apparently Adobe of all people have a decent ARM version of the Creative Cloud suite – but many don't, and of course for gaming you're a bit stuck. But if people like Ampere keep doing what they are doing, we might see ARM desktops sooner than you might think.

And that goes for servers too. It really only takes one or two big adopters to see a seismic shift in platform choice. Amazon, with their Graviton CPUs, and NVIDIA, with their Grace CPUs, mean that ARM is projecting that by the end of this year around 50% of the CPUs shipped to the major data centre operators will be ARM-based, up from about 15% last year. That is a bigger shift than we've effectively ever seen – faster than AMD with their EPYC lineup – and it's easy to see why. AnandTech have a great test from 2020 pitting Amazon's Graviton2 against AMD EPYC and Intel Xeon Platinum chips, and the Graviton2 does incredibly well, all at an assumed significant power drop – somewhere around 100 watts instead of 180 to 420 watts for the same sort of performance. While AMD has definitely improved on that with their later Zen architecture revisions – responding to ARM in some way with their 'compact' Zen 4c and Zen 5c cores, and finding a unique advantage in 3D V-Cache – as Wendell says, there's a whole bunch of sockets that have moved to ARM and won't be coming back. That's great news for ARM.

Things haven't been completely perfect for ARM though, especially in the last couple of years. ARM's joint venture in China – Arm China, majority-owned by local investors, with the rest held on the ARM and SoftBank side – went rogue in 2020. Well, I should say, its then CEO, Allen Wu, went rogue. To boil a years-long and complicated topic down: in China, companies have an official seal stamp called a "chop". That seal is required to be present on all official documents, and basically if there's no stamp, the document isn't legally binding. In 2020 Arm China's board ousted Wu over conflicts of interest, but Wu refused to stamp his own dismissal and basically held the company hostage for a staggering two years. He reportedly tried to fire his board-appointed replacements, installed his own security team, and even announced Arm China had fully separated from Arm Holdings – so much so that they'd set up their own R&D department! Finally though, two years on, with the help of the local authorities, Arm China was able to replace Wu with two co-CEOs, Liu Renchen and Eric Chen, and a new company chop was created.

The other big drama from the 2020s was that in 2020 NVIDIA announced they'd be acquiring ARM for $40 billion, purchasing it outright from SoftBank (who had taken ARM private by buying out public shareholders for $32 billion in 2016), heralding the creation of the "World's Premier Computing Company for the Age of AI". The problem? Both the UK and the US governments essentially forbade it. The FTC sued to block the merger, calling it an "illegal vertical merger" that would give NVIDIA too much market power, and the UK ordered an inquiry, apparently concerned with both competition and national security. The EU and China weren't too sure about it either. So, in 2022 they called it off. SoftBank opted to take ARM public (again) in 2023, raising $4.87 billion and valuing ARM at $54.5 billion. As of writing this in May of 2025 that value has only risen with the AI tide, sitting at almost $134 billion today.

ARM is one of the stranger tech companies. They are worth an extraordinary amount of money – more than Intel and not too dissimilar from AMD – despite not making any physical products. They design the internals of chips that they then license to others to design into their own products. They truly sell the shovels during a gold rush. Personally I expect to see even more dominance from ARM in the server market, and growing adoption in portable devices like laptops, with desktops being the last to fall, as it were. It's interesting to see AMD and Intel now facing competition from a wholly different platform and architecture, forcing them to innovate not just to keep up with each other, but with the almost amorphous cloud that is ARM. I am also really interested to see how RISC-V does – as a strong supporter of open source projects I do hope they can succeed – although it's an uphill battle for a number of reasons, not least financially. ARM has all the money, and when you have a royalty-free open source project, money doesn't just roll in. Still, a rising tide lifts us all, and more competition is always good for us as consumers.