THE FUTURE OF PCS IS IN APPLE’S ARMS NOW

It’s official. After 15 long and fruitful years, Apple is finally breaking up with Intel.

The writing’s been on the wall for a while, so the announcement at WWDC 2020 earlier this week didn’t come as a surprise. But the news is exciting nonetheless, and some of us PC nerds are already drooling at the prospect of sleek yet powerful new hardware from Apple.

But what exactly is the announcement, and why is everyone so excited?

Apple’s announcement is about as significant as the US government announcing that fossil fuel-based vehicles will be phased out and replaced by electric vehicles within two years. Such a decision would involve a paradigm shift in transportation design, one that would include a fundamental reassessment of how vehicles function, of charging infrastructure, and of transport in general.

What Apple has done is announce an equivalent fundamental shift in the way computers will be designed and function.

ARM vs. Intel (x86): An intro to ISAs

ARM and x86 are what are known as ISAs, or Instruction Set Architectures. Think of an ISA as a language that allows hardware and software to communicate with each other. Each is unique, comes with its own rules and quirks (think grammar, puns, etc.), and each was developed in its native environment and refined to suit that environment.

What’s important here is that software and hardware designed to communicate in one ISA cannot just be switched over to another on a whim. It is possible, of course, but it takes a great deal of time and effort, and even more time to perfect.
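To make that concrete, here’s a minimal Swift sketch (an illustration, not anything from Apple’s announcement) of what the divide looks like from a developer’s seat. Most high-level code recompiles for either ISA unchanged, but anything tied to a specific instruction set, such as hand-tuned SIMD routines or inline assembly, needs its own path per architecture:

    // A minimal sketch: the same Swift source can target either ISA, but it must
    // be compiled separately for each, and ISA-specific code needs its own branch.
    func currentISA() -> String {
        #if arch(arm64)
        return "arm64"      // Apple silicon, iPhone, iPad
        #elseif arch(x86_64)
        return "x86_64"     // Intel-based Macs
        #else
        return "something else"
        #endif
    }

    print("This binary was compiled for \(currentISA())")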

The ARM ISA is fundamentally an efficiency-first ISA that prioritizes performance per watt. It’s designed to rapidly deal with smaller, simpler instructions at very low power. ARM is also more modular, allowing chip designers to more easily add and remove components and features as they see fit.

The x86 ISA, an architecture developed by Intel and later expanded on by AMD, is designed to perform more complex calculations rapidly. It’s a more complex instruction set and inherently better suited to heavy-duty computing where power consumption isn’t as significant a factor. It’s also less modular.

Since devices like mobile phones and tablets are small, can’t dissipate much heat, and have limited battery capacity, ARM makes a lot of sense on such platforms.

On the other hand, for desktop PCs, where power consumption and heat aren’t significant limiting factors, a brawnier, more power-hungry ISA like x86 is a better fit.

That being said, do bear in mind that there are supercomputers powered by ARM-based chips and ultra-low-power mobile devices powered by x86-based ones. In the end, it’s the implementation and design that matter.

Legacy code, or why change is hard

An operating system like macOS or iOS isn’t just a single entity. It’s a sophisticated platform with many sub-systems and protocols that also happens to support millions of apps and services.

With time, money, and effort, you can teach an OS to communicate in a new language. But the moment you do, you make all those millions of apps incompatible with your OS unless they, too, are taught the new language. All the developers who have built apps for one ISA must now switch to an entirely new ISA and update their existing apps for it.

Usually, it’s simply not worth the effort of switching to a different ISA.

And there’s more. A desktop OS has vastly different requirements than a mobile OS. An average desktop PC like an iMac or Windows PC must process a lot more data than a mobile device. A desktop OS interfaces with several high-speed peripherals such as SSDs, graphics cards, sound cards, USB devices, and more. Real multitasking also requires a lot more RAM and the ability to utilize that RAM fully.

For example, we simply don’t know yet whether Apple’s ARM designs can handle features like 10 Gbps Ethernet, PCIe graphics cards, and Thunderbolt 3 SSDs and peripherals.

As fast as it is, Apple can’t just pop the A13 Bionic out of an iPhone 11, drop it into a desktop or laptop motherboard, and have it deliver the same performance as a high-end desktop/laptop part. The CPU will have to be redesigned, though Apple has already started experimenting with some designs. Hint: T1 and T2.

How change can happen, and why it’s a good thing

This is the second time Apple is switching to a more efficient architecture. The first switch happened 15 years ago, when Apple moved from IBM’s hotter, more power-hungry PowerPC architecture to Intel’s far more efficient x86 architecture.

It was this switch that allowed devices like the MacBook Air to exist, and it’s why copycat Ultrabooks are now a thing.

Now imagine what a switch to ARM can do for the laptop industry.

Why wouldn’t you want a tablet-sized PC that outclasses today’s beefy gaming notebooks while giving you all-day battery life and a thermal design that won’t make noise or roast your thighs? These are only some of the more immediate benefits an ARM transition can bring. In theory, anyway.

Taking the lessons from its first transition from PowerPC to x86, Apple is now laying the groundwork for a smoother, more seamless transition from x86 to ARM. Apple isn’t jumping in blind.

You may not be aware of it, but just about every new MacBook since mid-2018 ships with an ARM-based Apple chip (called the T2) that acts as the controller for that blisteringly fast SSD and handles features like high-speed, real-time disk encryption.

I won’t be surprised if Apple ships T3 and T4 chips with newer x86 Macs over the next few years as it starts off-loading more computing work to them in a bid to further the development of its ARM-based platform.

Apple knows it can challenge Intel now, and it’s spent years ironing out wrinkles.

Apple’s engineers have put years of work into macOS as well and have already ported macOS and powerful core apps like Logic Pro and Final Cut Pro to ARM.

Given that most developers likely don’t have the time and money that Apple has, switching from x86 to the ARM ISA will be a challenge. Here’s what Apple has done to ease their burden:

  1. Apple updated Xcode, the Apple app you use to build Apple apps, with features that will make the porting process easier for developers. Apple claims that most devs should be able to switch their x86 apps over to ARM in just a few days. Apps in development from now on will support both ISAs by default.
  2. Apple released Rosetta 2, a toolkit of sorts that will automatically translate legacy x86 apps to the ARM ISA at install time. It will also act as a real-time translation layer between the x86 and ARM ISAs when needed.

There’s a performance cost to the latter approach, and some things won’t translate correctly, but at least you’ll have a functional, if perhaps not performant, app where previously you had none.
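As an aside for the curious: macOS exposes a sysctl key, sysctl.proc_translated, that a process can query to find out whether it is currently running natively or being translated by Rosetta. A minimal Swift sketch of that check:

    import Foundation

    // A minimal sketch: ask the kernel whether this process is being translated
    // by Rosetta 2. The key doesn't exist on Intel Macs, so a failed lookup is
    // treated the same as "running natively."
    func isTranslatedByRosetta() -> Bool {
        var translated: Int32 = 0
        var size = MemoryLayout<Int32>.size
        guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
            return false
        }
        return translated == 1
    }

    print(isTranslatedByRosetta() ? "Running under Rosetta 2" : "Running natively")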

Once recompiled, apps will by default be available in both ARM and legacy x86 variants, and the appropriate version will be installed on whichever Mac you’re using. As a bonus, iPadOS and iOS apps should also soon be available on the Mac.

This entire transition process, according to Apple, will take at least two years. Even after this period, Intel-based Macs will continue to be supported for “years to come.” The first ARM-powered Macs will ship this year, says Apple.

While certainly exciting, there are still several unanswered questions. I’ll get to those in a bit.

But first…

Y tho?

Reason #1: Efficiency

The original laptops were desktop PCs with batteries and screens attached to them (they still are, it’s just that the design is now more specialized). As portable PCs, it made sense that they would run desktop operating systems, and by extension, x86 or PowerPC (in that era).

As time progressed, it was simply more convenient for everyone to stick with x86 for laptops, and as long as x86-based designs continued to evolve and get more efficient, why fix what wasn’t broken?

This brings us to reason #2.

Reason#2: Stagnation

Give an average user a 2016 MacBook Pro, and they will not be able to tell it apart from a 2020 model. It looks the same and, for most use-cases, performs the same. While the 2020 models boast better Intel GPUs and more cores, the real benefit in later models comes from Apple’s T1/T2 chips, which allowed Apple to introduce features like Touch ID, the debatably useful Touch Bar, certain audio/video transcoding features, real-time disk encryption, and more.

On the other hand, the 2019 iPhone 11 Pro is worlds apart from 2016’s iPhone 7. The 2019 model is 3-5 times as powerful and boasts several new cameras, new AI and AR features, slimmer bezels, longer battery life, Face ID, motion tracking, always-on Siri, and dozens of other features. A user is in no doubt as to which is the better option.

Intel, for all its prowess, hit a proverbial brick wall after Skylake, its 6th-gen Core architecture that first appeared in 2015. Since then, Intel has only iterated on the design, making minor, incremental adjustments to performance. The 8th Gen CPUs were a big bump in that they featured twice the number of cores, but these were still effectively based on Skylake, and more cores came at the cost of additional heat since the process node didn’t change.

The manufacturing process node is an indicator of the size of the transistors that make up a chip like a CPU core. If you can make the node smaller, you can cram more (and more efficient) cores into the same area as a CPU built on a larger, older node. If the node doesn’t change, adding cores simply makes the chip bigger, and as a rule of thumb, a larger chip generates more heat. And again, the performance benefits are not tangible to the average user.

10th Gen Ice Lake CPUs, which still aren’t mainstream, are built on a much smaller node and offer real benefits like AI acceleration, better power and thermal efficiency, and vastly improved graphics performance, not to mention better Thunderbolt support and WiFi 6. But it’s already a case of too little, too late for Intel. Apple knows it can do better, and it’s already made up its mind.

Reason #3: Controlling the pace of innovation

An x86-based CPU is like a human brain: a centralized core that handles all the intelligent functions of a computer. This is good for more substantial work because you have a focal point for heat and energy consumption that you can base your design around. The problems appear when you want certain specialized features.

An ARM-based design can be more modular and flexible. You can delegate tasks to dedicated hardware, resulting in a more efficient, decentralized design.

Always-on Siri, for example, can’t happen on x86 without a significant power cost. On its ARM-based iPhone chips, Apple could simply add an ultra-low-power module whose only job is to listen for the wake phrase (“Hey Siri,” in this case) and wake up the rest of the system only at that point.

When Apple wanted motion-tracking features, it added the motion co-processor. When it wanted more efficient video transcoding, it added a dedicated video transcoder. When it wanted more efficient HDR image/video capture, it designed a more powerful ISP (image signal processor). The list is endless.
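Developers already tap into this dedicated hardware through high-level frameworks. As one illustrative example (the model path below is hypothetical), Core ML lets an app ask for its machine-learning work to be scheduled on the Neural Engine and GPU rather than the CPU alone:

    import CoreML
    import Foundation

    // A minimal sketch: hint to Core ML that inference may run on any available
    // dedicated hardware (Neural Engine, GPU) rather than being limited to the CPU.
    let config = MLModelConfiguration()
    config.computeUnits = .all   // CPU + GPU + Neural Engine, where available

    // Hypothetical path to a compiled Core ML model, purely for illustration.
    let modelURL = URL(fileURLWithPath: "/path/to/SceneClassifier.mlmodelc")

    do {
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        print("Loaded model: \(model.modelDescription)")
    } catch {
        print("Failed to load model: \(error)")
    }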

All this is still possible with x86, but Apple is at Intel’s mercy — and on Intel’s timeline — when it comes to the pace of innovation.

If it switches to its own CPU designs, Apple’s in-house team of brilliant chip designers and software engineers will ensure that Apple innovates on its own terms.

This total control of the hardware-software stack is why an iPhone with 3 GB RAM outperforms an Android phone with 8 GB RAM. It’s why AirPods sync so seamlessly. It’s why features like Continuity and AirDrop exist, and why they can’t be replicated anytime soon.

Reason #4: Because it can

Apple has the clout to get developers to make the switch. App developers make more money on Apple platforms; that’s just a fact. If they want to continue earning, they’ll have to make the switch whether they like it or not. For the consumer, it’s a win-win.

Microsoft has been attempting a similar switch for years now with its UWP platform and its own planned ARM transition, but it’s slow going, and critical apps like Chrome are still not available on that platform.

Users don’t care, and in the end, we’re the ones who’ll benefit.

Do you care what ISA your device is running? Were you even aware of it before you heard Apple talk about the change or read this article?

If we can open Word, Chrome, and Photoshop, and use them the way we always have, we don’t care what platform we’re on.

All we want is slimmer, lighter devices with excellent battery life. As long as we get that and as long as we aren’t the ones having to make an effort to learn a new language, why should we care?

Unanswered questions

Apple’s ARM transition isn’t a pipe dream slated for several years down the line. It’s already started. The WWDC demos we saw of macOS Big Sur (the new macOS) running so beautifully on Apple’s 6K Pro Display XDR were powered by the same chip that powers Apple’s latest iPad Pro, albeit with a lot more RAM and most likely a far more capable cooling system.

This chip was shown editing three 4K streams in real time in Final Cut Pro with LUTs, text effects, and animations; handling 3D rendering with lighting effects in a brute of a program like Maya; and running a modern game like Shadow of the Tomb Raider (SoTR) at 1080p with most of the fancy rendering effects turned on.

This is great, but the devil is in the details.

Sure, FCP on ARM can handle 4K streams, but 4K streams in what format? Does it only work that well with optimized ProRes 422 and H.265-encoded video (formats natively supported by Apple silicon)? What about Canon RAW or other such non-standard formats?

Sure, a game like SoTR was running on ARM, but at what framerate? A barely playable 30?

Will features like ray-tracing be supported?

Will Apple support third-party accelerators like graphics cards and sound cards?

Will there be support for Thunderbolt 3 and compatible peripherals? The dev kit only supports regular USB 3.2 Gen 2.

What’s going to happen to the $50,000 Mac Pro?

Will my MacBook become obsolete in two years?

Will new software features like AI-based scene reframing in video (as demoed at WWDC) be entirely dependent on hardware? As a developer or creator, will I be forced to upgrade my Mac annually just to stay relevant and competitive?

At this point, it’s simply too early to tell, but I also think these are questions not worth worrying about at the moment. The transition is inevitable, it will take time, and the potential benefits appear to outweigh the concerns.

It might be a rocky few years for certain types of professionals, to be sure. Still, for everyone else, this is the most exciting announcement in the PC space in years, and it might precipitate the push to an entirely ARM-based world for the average consumer.

The PC’s future does lie in Apple’s ARMs now.
