Modern computer architecture
#1
Seeing this section appears to be lacking anything really geeky:
Are current computers (PCs, iPads, smartphones etc.) still based on the von Neumann architecture?
I know they were when I did computing/electronics at college - but that was over 15 years ago now!
And if it's been replaced, what's replaced it? And if we're still using it, why hasn't anyone clever enough come along since to come up with something more befitting of modern/emerging technology?
Answers on a postcard
#2
Yes and no.
The two main architectures are the von Neumann and the Harvard architecture. The von Neumann variant has its advantage in being deterministic and not subject to the race conditions that can happen with the parallelism of the Harvard architecture. Current processors use a "modified" Harvard model, which means that the CPU itself (with its split instruction and data caches) works as a Harvard architecture, while RAM access is done via a von Neumann architecture (so code can be moved around like data - very handy). AFAIK this is true for current x86 (PC) and ARM (= many mobile) processors.
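To make the "code can be moved like data" point concrete, here's a minimal sketch (my own illustration, assuming a POSIX system with GCC or Clang and an x86-64 host for the machine-code bytes): a tiny function is written into memory as plain data and then executed as code. The explicit cache sync is where the modified-Harvard split shows through - x86 keeps the I- and D-caches coherent in hardware, so the builtin compiles to a no-op there, but on ARM skipping it risks executing stale instruction bytes.

/* Minimal self-contained sketch: write code as data, then run it.
 * Assumes POSIX mmap and the GCC/Clang __builtin___clear_cache builtin.
 * Note: some hardened kernels refuse writable+executable mappings. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Get a page we may both write (as data) and execute (as code). */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;

    memcpy(buf, code, sizeof code);        /* goes through the D-cache */

    /* On split-cache (Harvard-style) cores the I-cache must be told;
     * a no-op on x86, real cache maintenance on ARM. */
    __builtin___clear_cache((char *)buf, (char *)buf + sizeof code);

    int (*fn)(void) = (int (*)(void))buf;  /* now fetch it as code */
    printf("%d\n", fn());                  /* prints 42 */
    return 0;
}

Compiles with a plain cc jit.c on Linux; every JIT (JavaScript engines, the JVM and so on) relies on exactly this data-to-code round trip.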
By the way: these "hybrids" were already around in the early 90s (Amiga 3000, Apple Mac IIx etc.)
#4
Could you be a bit more precise?
#5
Shouldn't we be suspicious of shared cache memory (von Neumann) across multiple cores in the (Harvard) chip? This never really seemed sensible to me - unless one core/process space, like a singleton design I guess, manages every other core, but I'm not sure if this occurs.
BTW, I'm not being a smart a$$; I'm actually asking as a 31-bit assembler sysprog of old.
#6
I have to admit that my response was a bit brief and very general, but I think it served its purpose
I'm not entirely sure I got your point correctly, but I think you are referring to the shared L2 cache in, for example, the Core architecture? And yes, it's a bit counterintuitive having two cores randomly reading data from one cache. The thing is that this "advanced smart cache", where memory can be allocated (and locked) by each core dynamically, has proven to be more efficient than having the cache completely (!) separated for each core - in which case the two cores would have to communicate (and even replicate data) over the memory bus whenever data exchange is necessary. L1 cache for data and instructions exists on each core separately, though, and each core has its own prefetchers (instruction, IP (data), DCU (multiple-read detection)) -> true Harvard architecture. Basically it's like programming for multithreaded environments, where mutexes exist to block certain areas from parallel access. (http://www.behardware.com/art/imprimer/623/ is a nice writeup I found while looking for some pictures I originally intended to link - page 6 is the interesting part.)
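To ground that mutex analogy with a sketch (my own, and only an analogy - real cache-line arbitration is done by the coherence hardware, not by software locks): two POSIX threads stand in for two cores, and the lock serialises their access to one shared location, much as a core "allocates and locks" a region of the shared cache.

/* Two threads (stand-ins for two cores) update one shared counter
 * (stand-in for a shared cache line); the mutex ensures neither ever
 * sees a torn or stale value.  Compile with: cc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* "allocate and lock" the region */
        counter++;                    /* safe: no other thread in here  */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", counter);         /* always 2000000 with the lock */
    return 0;
}

Drop the lock/unlock pair and the increments race, and the final count comes out short - which is roughly the hazard the shared cache's hardware locking prevents.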
As for memory/RAM access beyond the caches: there is the memory controller (in the northbridge or on the CPU) managing all memory access over the memory bus - or a singleton, if you want.
Anyways... Hope I didn't kill the geek thread. More of this please
#9
Scooby Regular
Join Date: Mar 2008
Location: Aberdare / Daventry
Posts: 5,365
Likes: 0
Received 0 Likes
on
0 Posts
They are not so much going for new architectures any more as for smaller process technology.
Intel are tooling up for 14nm transistors whilst teaming up with Toshiba aiming for 11nm.
#10
The thing is, it's hardly innovative really (except from a manufacturing point of view) - it's just pushing through any issue/limitation by using brute force for processing speed/throughput.
So really we're still on tweaked versions of Harvard and von Neumann, both of which predate the silicon transistor. It makes me wonder, after all this time, is there still a better way?
I suppose it's like reinventing the internal combustion engine: why mess with something that already works? Just tweak it to make it more powerful/reliable. However, it does show the brilliance of those who came up with these computer architectures, who have yet to be superseded by anything from our current generation of boffins.