Computing since Von Neumann

The most profoundly impactful moment for computing, since Von Neumann

I stole and twisted that quote from a film to get your attention (… do you know which one?). It just might end up being true, though. I am part of a team from the Atos Scientific Community who are tracking and assessing the implications of advances in memory technology. It’s what we are calling “Computing Memory”.

Let me start by explaining the Von Neumann architecture. This is how electronic digital computers were envisioned in the 1940s, and its fundamental principles are still with us today. The core design comprises a Central Processing Unit (CPU) and addressable memory. Information (data) is input into the computer, processed, and an output produced. Both the program (the instructions required to execute some calculation or operation) and the data are held in that same memory. There are other architectures where this is not the case, but the Von Neumann design, in which it is, has prevailed.
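
To make the idea concrete, here is a toy sketch in Python – an invented three-instruction machine, nothing like a real instruction set – of a CPU fetching and executing a program that lives in the very same memory as the data it works on.

```python
# Toy Von Neumann machine: program and data share one addressable memory,
# and the CPU loops fetch -> decode -> execute over it.
memory = [
    ("LOAD", 6),     # 0: copy the value at address 6 into the accumulator
    ("ADD", 7),      # 1: add the value at address 7
    ("STORE", 8),    # 2: write the accumulator back to address 8
    ("HALT", None),  # 3: stop
    None, None,      # 4-5: unused
    2,               # 6: data
    3,               # 7: data
    None,            # 8: the result lands here
]

pc, acc = 0, 0                # program counter and accumulator
while True:
    op, addr = memory[pc]     # fetch and decode: instructions come from the same memory as the data
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[8])              # prints 5
```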

Now, data is stored somewhere, in some mass storage device. Over the years this medium has changed – from punched cards and paper tape, to magnetic spinning disks and, more recently, Solid State Drives built from Flash memory (not to be confused with the memory we discussed above). Data is retrieved from mass storage and written into memory so that it can be addressed by the CPU and operations carried out on it. Then the result might be moved back out to mass storage and kept there. That’s basically what happens – move data in; process it; move the resulting data out again.
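
In code, that pattern looks something like the sketch below (the file names are purely illustrative): an expensive copy in from storage, cheap work in memory, and an expensive copy back out.

```python
# Move data in -> process it -> move the result out again.
# "input.dat" and "output.dat" are illustrative file names only.
with open("input.dat", "rb") as f:
    data = f.read()                        # slow: copy from mass storage into memory

result = bytes(b ^ 0xFF for b in data)     # fast: the CPU works on the data in memory

with open("output.dat", "wb") as f:
    f.write(result)                        # slow: copy the result back to mass storage
```
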
So what’s all the fuss about?
We’re about to see an extraordinary step change in memory technology. It will shake the foundations of what I’ve just described.

You see, mass storage in its various forms is slow. Very slow compared to the way the CPU has evolved over the last 70 years. (Has it really been that long?!) You will know Moore’s Law, I’m sure. No, it’s not about speed – it’s about the density of transistors – but the side effect has been ever-increasing processor speed. We haven’t seen the same increase in storage speed, though we have seen an explosion in storage capacity requirements. “We produced more data on the planet over the last two years than in all previous years put together” … or so I keep reading; every few years someone says it again, and again.

But you can turn the power off to mass storage, and the data remains. You don’t lose it. And that’s useful.

Compare this to memory. Memory is relatively fast, but when you turn the power off, everything goes. It’s volatile. It’s just the nature of it – memory cells hold their data as electrical charge or transistor state, and when no power is applied, that state is lost.

So there are two things here. Firstly, it takes a long time to get data into memory from mass storage, and secondly, you have to power memory continuously in order to use it.

What if you had the best of both worlds: fast memory, with mass-storage capacity, that doesn’t need power to retain its information?

Well, that’s what’s just about to happen, and it’s tremendously exciting. Technology pioneers are now launching products that do just this (or nearly this). For instance, “3D” memory technologies, which stack multiple layers rather than a single layer, are increasing capacity significantly.

HP in particular has been developing technology to create a new fundamental electronic component that was theoretically invented (or should I say discovered) and described as the Memristor back in 1971. This must be extraordinarily difficult, though, as whilst HP announced that they had found a way to do this in 2008, they haven’t yet been able to bring it to market. There is some controversy at the moment due to an article published in Nature this year which claims that the device HP announced in 2008 is not strictly a true Memristor (and cannot be), but a device that merely acts like one.

I don’t suppose that matters if HP can make a practical reality of it, as it’s what we do with it that is important in practice.

And that brings me to the implications. What if we do have large-scale, fast, non-volatile memory at our disposal? What does that mean?

Firstly, it means we don’t need all of that separate mass storage. All of our data resides in memory all of the time. No time is wasted retrieving data from elsewhere, or moving it back, and that is going to have a huge beneficial impact on overall system performance. I’m not saying that mass storage will disappear immediately – I am sure the situation will evolve over time. We know nothing of the economics yet, so it might be the case that HP (or whoever) will need to charge handsomely for the privilege of accessing this new technology in order to recover their R&D costs.
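
You can get a taste of what “the data just lives in memory” feels like today with memory-mapped files: the operating system makes a file on storage addressable as if it were memory, so you read and update it in place rather than shuttling copies around. A minimal sketch in Python (the file name is illustrative, and a real non-volatile-memory system would of course go much further):

```python
import mmap

# Map a file into the address space so it can be treated as byte-addressable memory.
# "dataset.bin" is an illustrative name; the file must already exist.
with open("dataset.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        header = mem[:4]                    # read a slice as if it were ordinary memory
        mem[0:1] = bytes([mem[0] ^ 0xFF])   # update in place, with no explicit read/write calls
```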

Secondly, if you only need power to change the state of a bit, not to maintain that state, this is going to cause a monumental reduction in power requirements. Remember, if there is no mass storage (no spinning disks), and now very little power is needed for the memory itself, we are slashing the cost of computation.

This creates two new scenarios for us. The first is a massive high-performance computation capability, where petabytes of information can be accessed and processed almost instantly. This is going to change the high-performance computing landscape. Further than that, we can think of it as taking the compute capability to the data, where it needs to be. It’s a fundamental shift, and this is why I am comparing it to the Von Neumann model.

Secondly, it’s going to drive a new wave of changes in the economics of computing. It is going to be so cheap, comparatively, to perform computation that there will be a new market for cost-efficient computing measured purely by power usage. In fact, energy will become the metric to live or die by in the computing industry as we strive to deliver our services.

About Mike Smith

Chief Technology Officer and member of the Atos Scientific Community. Mike has been in the IT industry for over 20 years, designing and implementing complex infrastructures that underpin key Government and private sector solutions. He sets Atos technical strategy, researches new technologies and supports the consulting and architect communities. Previously Mike held technical and management positions at British Rail, Sema Group and Schlumberger. He has a daughter and a son, both keen on anything but technology. Mike’s sporting passion rests with Test Match Special, and he is jealous/proud of his son’s ice hockey skills.

  • Mark Cuddy

    Hi Mike,

    I haven’t read much about this so really interesting….what implications would it have for data security?