Apple’s official unveiling of three M1-powered Macs failed to convince me to part with my hard-earned cash in exchange for making the leap to Apple Silicon.
Yes, not even that clip of Apple's senior vice president of Software Engineering Craig Federighi gazing lovingly at his M1 Mac worked on me.
Here’s why.
No matter how fast the Apple Silicon M1 is (and early benchmarks suggest that it is very fast, with single-core performance outpacing other Macs and multi-core performance not far behind the leaders), it still feels like early days.
M1 is clearly much more than I and many others in the tech industry had expected. Right out of the gate, Apple has launched a processor that is going to leave Intel and AMD scrambling to catch up, especially in terms of performance per watt.
And by the time the “big chip players” can match M1, Apple will have moved on.
This really is the beginning of something big, something that will radically change the Mac ecosystem, and quite possibly the Windows PC ecosystem as well.
So why do I not want in on the ground floor?
If I wasn't already sitting behind a high-spec MacBook Pro, or if I were an iOS developer, then I'd probably have pulled the trigger on one. But right now, I'm curious to see how three areas pan out over the coming couple of years.
Firstly, is this 16GB RAM ceiling going to go away soon? This is a limitation that I see high-end pros running up against hard, and it's something that's difficult to work around. It's possible that the M1 will change our expectations of how much RAM is needed, but as someone who uses applications that demand lots and lots of RAM, 16GB feels meager.
Next, there's the issue of discrete GPUs. Sure, the M1 seems to deliver a lot of graphics power, but it's another potential bottleneck, compounded by the fact that there's no option to add an external GPU to M1 systems.
Finally, there's the realization that today's M1 chips, no matter how good they are, will be tomorrow's base models, massively outperformed by the next iteration, and that the kinks and bottlenecks that now exist around RAM and the GPU may well have evaporated by then.