Published: Fri 02 September 2022
By Koen Martens
In News.
tags: frontpage-news
I’m always looking back to the seventies and eighties with envy when it comes to computers. The first microprocessors were just invented (the Intel 4004 for example – widely credited as being the first microprocessor – was released in 1971) and everything was possible. The computing power we now take for granted as something in our pocket was just a dream back then, with only a fraction of that power occupying a good-sized room. There was promise, there was excitement, there was optimism. We were setting foot on uncharted territory, discovering a whole new world.
Much has changed, but essentially, the model of computing is still the same as that early Intel 4004: a central processing unit (CPU) and memory, connected by a bunch of wires. The memory contains instructions, which tell the CPU what to do, and data, which is what the CPU operates on. The CPU continuously looks in memory for the next instruction, fetches the associated data, performs the operation, and stores the result back into memory.
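To make that loop concrete, here is a minimal sketch in C of a toy machine with a handful of made-up opcodes. The instruction set and memory layout are purely illustrative – not any real CPU – but the fetch/decode/execute rhythm is the same one the 4004 and its descendants follow:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy opcodes -- purely illustrative, not a real instruction set. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    /* One flat memory holds both instructions and data,
       just like the classic model described above. */
    uint8_t mem[32] = {
        /* program: load mem[16], add mem[17], store to mem[18], halt */
        OP_LOAD, 16, OP_ADD, 17, OP_STORE, 18, OP_HALT,
    };
    mem[16] = 2;        /* data */
    mem[17] = 3;

    uint8_t pc  = 0;    /* program counter: where the next instruction lives */
    uint8_t acc = 0;    /* accumulator: the CPU's single working register    */

    for (;;) {
        uint8_t op = mem[pc++];            /* fetch the instruction ...      */
        if (op == OP_HALT) break;
        uint8_t arg = mem[pc++];           /* ... and its operand address    */
        switch (op) {                      /* decode and execute             */
            case OP_LOAD:  acc = mem[arg];   break;
            case OP_ADD:   acc += mem[arg];  break;
            case OP_STORE: mem[arg] = acc;   break;
        }
    }
    printf("result: %d\n", mem[18]);       /* prints 5 */
    return 0;
}
```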
While the past decades have seen that basic model enriched with all kinds of bells and whistles, the essence of what a CPU does hasn’t changed. Sure, processors have gotten faster thanks to various optimisations and tricks, and they can do much more complex operations. For example, early microprocessors could only do addition and subtraction, not division or multiplication. The latter two had to be implemented in software in terms of addition and subtraction, which understandably made those operations slow. Memories have gotten bigger, and we’ve added even bigger (but slower) storage such as floppy drives and hard disks. But when you get down to it, all the contemporary CPU does is look into memory for the next instruction, fetch the data, do the thing and store the result back into memory. It’s gotten a bit boring, to be honest.
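As an aside, that software trick is easy to picture. A naive version – just repeated addition; real routines used cleverer shift-and-add schemes, but the principle is the same – might look like this:

```c
#include <stdio.h>

/* Multiply two non-negative integers using only addition,
   the way early CPUs without a multiply instruction had to. */
unsigned multiply(unsigned a, unsigned b) {
    unsigned result = 0;
    while (b-- > 0)       /* add 'a' to the result 'b' times */
        result += a;
    return result;
}

int main(void) {
    printf("%u\n", multiply(6, 7));  /* prints 42 */
    return 0;
}
```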
However, lately there have been some interesting developments. Of course, there’s the whole promise of quantum computing. A few years ago I was part of a team that built a quantum computer. It’s an exciting yet mystifying field, and its pioneers are in somewhat the same position as those who pioneered the digital revolution in the seventies. But it’s a long road before any of that leads to practical applications (although preliminary, limited results have been achieved by purpose-built machines that exploit quantum effects to solve a narrow set of hard problems faster than classical computers).
I’m a bit more down-to-earth and pragmatic, and what has me really excited are recent developments in CPU design. One issue that has plagued computer security is the fact that memory is organised as a linear array of bytes, each of which has an address. Managing what is stored where is a matter of manipulating pointers. For example, let’s say I want to store the sentence ‘Hello, world’ in memory. Each character is a byte, so that would be twelve bytes. If I want to keep track of this sentence, I need to store the address in memory where I put the first letter, as well as the length. And that’s often where things go awry: if I overwrite the sentence but don’t check the length of the new data, it might exceed the twelve bytes I initially allocated, overwriting whatever comes after it. Not a good thing, and this principle is often exploited to break IT security.
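A classic C sketch of that kind of bug – hypothetical, but recognisably the pattern behind many real-world buffer overflows:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char greeting[13] = "Hello, world";   /* 12 characters + terminating '\0' */
    char secret[16]   = "do not touch";   /* happens to live right after it   */

    /* Overwrite the greeting without checking the length of the new data.
       The new string is longer than 12 bytes, so strcpy happily writes
       past the end of 'greeting' and into whatever comes after it. */
    strcpy(greeting, "Hello, wonderful world");

    printf("%s\n", secret);   /* undefined behaviour: 'secret' may be clobbered */
    return 0;
}
```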
ARM, whose microprocessor designs are used in many of our modern-day mobile gadgets, has recently introduced a proof of concept of a CPU that prevents such mishaps from happening. Where traditional CPUs manipulate the addresses and sizes of information in memory like any other data, ARM’s prototype of the CHERI architecture promotes this information to a hardware primitive, which makes it hard – never say never – to exceed boundaries and overwrite data inadvertently. If you’re interested, my favourite news outlet The Register has a nice write-up.
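To get a feel for the idea – and only the idea, since real CHERI capabilities are tagged values enforced by the hardware itself, not something you build in C – you can picture a pointer that carries its own bounds and is checked on every access. A rough software caricature:

```c
#include <stdio.h>
#include <stdlib.h>

/* A software caricature of a capability: a pointer bundled with the bounds
   it is allowed to touch.  In CHERI this lives in hardware and cannot be
   forged or widened by ordinary code; here it is just a struct to
   illustrate the principle. */
typedef struct {
    unsigned char *base;
    size_t         length;
} capability;

/* Every access goes through a bounds check instead of raw pointer arithmetic. */
int cap_write(capability cap, size_t offset, unsigned char value) {
    if (offset >= cap.length) {
        fprintf(stderr, "capability violation: offset %zu out of bounds\n", offset);
        return -1;
    }
    cap.base[offset] = value;
    return 0;
}

int main(void) {
    unsigned char buf[12];
    capability cap = { buf, sizeof buf };

    cap_write(cap, 5, 'x');     /* fine                        */
    cap_write(cap, 12, 'x');    /* rejected: one past the end  */
    return 0;
}
```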
While ARM dominates the mobile market, with Intel and AMD splitting the desktop and laptop market, there is a third contender gaining traction and respect: RISC-V. Where ARM, Intel and AMD protect their designs and architectures with aggressive intellectual property rights management, RISC-V proposes an entirely open architecture. Anyone can create a CPU based on this architecture, which is a great boon to innovation. It can’t yet tout a proven track record of decades of existence, but there are more and more RISC-V based designs available, ranging from tiny microcontrollers that are barely able to manage a remote-controlled garage door to powerful beasts that can run a desktop operating system.
But what has got me really excited is recent news of a processor that departs radically from the classical architecture. In a recent article in Nature, scientists introduce the NeuRRAM. Instead of separating the CPU and memory, the two are integrated. This is known as ‘in-memory computing’, and it removes the need to haul data back and forth between memory and CPU, which slows things down. It’s actually akin to how our brain works – or at least, our current understanding thereof – in that there are many tiny processing units surrounded by tiny amounts of memory, with short communication paths between them. In the brain, processing and memory are combined in neurons, which communicate with nearby neurons through the release of electrical charges.
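As I understand it, the workhorse of such a chip is an array of resistive memory cells used as an analog multiply-accumulate engine: neural network weights are stored in place as conductances, inputs arrive as voltages, and the outputs are the currents that add up along each column. A crude digital caricature of that computation (ignoring all the analog subtleties) looks like this:

```c
#include <stdio.h>

#define ROWS 4   /* inputs  (rows of the crossbar)    */
#define COLS 3   /* outputs (columns of the crossbar) */

int main(void) {
    /* The weights stay put in the array -- in the real chip they are stored
       as the conductance of resistive memory cells, so this data never has
       to travel to a separate CPU. Values here are arbitrary examples. */
    double weights[ROWS][COLS] = {
        { 0.1, 0.4, 0.2 },
        { 0.3, 0.1, 0.5 },
        { 0.2, 0.2, 0.1 },
        { 0.4, 0.3, 0.3 },
    };
    double inputs[ROWS]  = { 1.0, 0.5, 0.0, 1.0 };  /* applied as voltages  */
    double outputs[COLS] = { 0 };                   /* read out as currents */

    /* Each column sums input * weight over all rows -- in the analog array
       this summation happens "for free" as currents adding up on a wire. */
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            outputs[c] += inputs[r] * weights[r][c];

    for (int c = 0; c < COLS; c++)
        printf("output[%d] = %.2f\n", c, outputs[c]);
    return 0;
}
```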
The scientists who made the chip prove themselves to be true wordsmiths as well, throwing around terms such as ‘transposable neurosynaptic array (TNSA)’ and CMOS neuron circuits. Pure science fiction!
Note, though, that the design in itself is not new; technologies like it have been studied for a while. However, this is one of the first actual chips to implement the idea and to be genuinely good at processing information.
Its primary design goal is speeding up artificial intelligence workloads, and the team demonstrates a number of use cases running on the chip. While I’m highly sceptical of what we call artificial intelligence nowadays – I’m more inclined to call it ‘statistics on steroids’ – this design has got me excited. Maybe it’s just because it’s something new, something different. Or maybe it’s because of the promise of more advanced computers that may one day be able to approximate what our brain is capable of doing.
Whether that is something we should desire, though, I haven’t decided yet. I mean, why are we, as the human race, pursuing artificial intelligence? To replace expensive, whiny and fallible humans in various professions? Sure, computers work twenty-four hours a day without complaining. And if they do break, you just replace them. No-one will mourn a dead computer.
The algorithms we have now are capable of amazing feats on the face of it, but they’re nowhere close to implementing actual intelligence as we understand it in human beings. Despite attempts by big tech, we can’t yet replace a human being with an algorithm. But I wonder: if we do get there at some point, won’t we have recreated the human mind in some form or other, creating artificial sentient beings? And once they are that, should we not treat them with the same respect and ethical considerations as an actual fellow human being?
Anyway, food for thought. I’m happy to see all these exciting new developments in CPU design, though. Honestly, it had all gotten a bit boring, but these are the things that make my nerdy core glow. I’m eager to find out all about them, to find excuses to use this new technology in a hobby project, and I hope to have a need for something like it in my day job. Currently I’m working on more down-to-earth IoT chip design, but you never know!