newbie: quick question: Von Neumann vs. Harvard architecture


Post by malak » Tue, 24 Aug 2004 22:24:51


hi,

i just read that the main disadvantage of the Harvard architecture is
that it doesn't allow self-modifying code, right? but i was thinking
that the OS doesn't allow that anyway, as the code segment is usually
read-only and writes to it will raise a GPF. so why do we still use von
Neumann? Harvard would be faster, right?
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by Joe Pfeiff » Wed, 25 Aug 2004 00:13:44


XXXX@XXXXX.COM (sebastian) writes:


I'd always heard the distinction as Harvard vs. Princeton, not Harvard
vs. von Neumann. At any rate, virtually everything today uses a
Harvard architecture for the caches (which does indeed mean
self-modifying code is only possible with hacks). Pentium 4 even
takes things a step beyond that, as the instruction cache (they call
it a trace cache) contains decoded instructions.
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.yqcomputer.com/ ~pfeiffer

 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by Kai Harrek » Wed, 25 Aug 2004 00:52:38


XXXX@XXXXX.COM (sebastian) writes:


Harvard requires you to partition your memory - not optimal from a
cost perspective.

Also, self-modifying code is used inside OSes (the "trampoline" code
in Linux comes to mind).


Kai
--
Kai Harrekilde-Petersen <khp(at)harrekilde(dot)dk>
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by nmm1 » Wed, 25 Aug 2004 01:02:14


In article < XXXX@XXXXX.COM >,
Kai Harrekilde-Petersen < XXXX@XXXXX.COM > writes:
|> XXXX@XXXXX.COM (sebastian) writes:
|>
|> > i just read that the main disadvantage of the Harvard architecture is
|> > that it doesn't allow self-modifying code, right? but i was thinking
|> > that the OS doesn't allow that anyway, as the code segment is usually
|> > read-only and writes to it will raise a GPF. so why do we still use von
|> > Neumann? Harvard would be faster, right?
|>
|> Harvard requires you to partition your memory - not optimal from a
|> cost perspective.

At worst a factor of two.

|> Also, self-modifying code is used inside OSes (the "trampoline" code
|> in Linux comes to mind).

But it doesn't have to be, any more than it is essential in
applications.


Regards,
Nick Maclaren.
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by Bernd Pays » Wed, 25 Aug 2004 01:10:46


Actually, demand-paging a program is "self-modifying" code, too (though the
writer to code space is the hard disk). So is resolving calls to shared
libraries.

The speed argument is also quite bogus. Today, CPUs have separate L1
caches, since that is the place where you need the additional bandwidth of
the Harvard "architecture" (it's really a Harvard implementation here). All
other caches are unified, since their bandwidth can then be arbitrated
between program and data on demand, and especially for the external memory,
the extra pins for a Harvard architecture are not worth the price.

The strangest Harvard architectures can be found in embedded processors,
like the 8051, which shares the instruction/data bus, so the only advantage
left is that you get two 64K spaces (one for program, one for data)
instead of being limited to a single 64K space.
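
(For illustration, here is roughly what that split looks like to a C
programmer on the 8051. This is only a sketch: it assumes the SDCC
compiler, whose __code and __xdata qualifiers are extensions rather than
standard C, and the names and sizes are made up.)

#include <stdint.h>

/* a table placed in the 64K program (code) space */
static __code const uint8_t sine_table[256] = { 0 /* filled at build time */ };

/* a buffer placed in the 64K external data space */
static __xdata uint8_t sample_buf[256];

void copy_table(void)
{
    uint16_t i;
    for (i = 0; i < 256; i++)
        sample_buf[i] = sine_table[i];  /* MOVC reads code space, MOVX writes xdata */
}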

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.yqcomputer.com/ ~paysan/
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by Christophe » Wed, 25 Aug 2004 03:25:50


XXXX@XXXXX.COM (Nick Maclaren) writes:

That may be true for the memory itself, but not for the caches. I
would imagine that putting a hard partition in all of your caches
(i.e., not using a unified L2 or L3 cache) could cost you much more
than a factor of two in performance, and upping your cache size to
match what you would have without a Harvard architecture (i.e.,
doubling your cache size) may more than double the cost of the
chip...

Chris
--
Chris Colohan Email: XXXX@XXXXX.COM PGP: finger XXXX@XXXXX.COM
Web: www.colohan.com Phone: (412)268-4751
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by nmm1 » Wed, 25 Aug 2004 05:39:14

In article < XXXX@XXXXX.COM >,


There is no need to implement that architecture incompetently, you
know! Provided the caches carry a tag saying which class of object
they are caching, and all classes can be stored equivalently, you
can share caches ad lib.
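
Something like this, as a rough C sketch (the field names and the
64-byte line size are purely illustrative, not from any real design):

#include <stdint.h>
#include <stdbool.h>

enum line_class { LINE_INSTR, LINE_DATA };   /* which class of object the line caches */

struct cache_line {
    uint64_t        tag;        /* address tag, as in any cache          */
    enum line_class cls;        /* instruction or data reference stream  */
    bool            valid;
    uint8_t         bytes[64];  /* the cached block itself               */
};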


Regards,
Nick Maclaren.
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by Christophe » Wed, 25 Aug 2004 07:16:39


XXXX@XXXXX.COM (Nick Maclaren) writes:


Of course you could do that... but once you have done that, haven't
you effectively reverted to a von Neumann architecture? (Or a
hybrid architecture?)

I guess we are mostly arguing about nomenclature. Perhaps the
question is better stated "is it better to partition or share the
memory system"? Current machine designs imply that:

a) closer to the CPU, the extra dedicated bandwidth and the performance
advantages you get from specializing the structures for each stream
make partitioning worthwhile; and

b) further from the CPU, the flexibility of allocating bandwidth and
storage between the instruction and data reference streams makes
sharing worthwhile.

The scheme you mention (tagging lines with a type and sharing) just
shows that it is easy to fake a Harvard architecture interface on top
of a Von Neumann architecture machine.

Or do I have my terms mixed up? When people talk about Von Neumann
versus Harvard, are they just referring to the ISA and not the
hardware implementation? Do we really have a term to properly
describe today's hybrid designs (separate caches close to the CPU,
shared far away from the CPU)?

...

Or perhaps you are thinking of a design like this?

data refs --> L1 dcache --\                /--> dmemory
                            ==> L2 ucache ==
inst refs --> L1 icache --/   (w/ tags)    \--> imemory

I have to admit I am somewhat baffled as to what advantages such a
design might have. Do any systems exist in this (or a similar)
configuration?

Chris
--
Chris Colohan Email: XXXX@XXXXX.COM PGP: finger XXXX@XXXXX.COM
Web: www.colohan.com Phone: (412)268-4751
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by hayne » Wed, 25 Aug 2004 12:07:14

The Harvard and von Neumann (or Princeton) terms are more complicated
today than they once were. In early days Harvard meant that instructions
and data were stored in separate memories; and von Neumann meant that
they were stored in the same memory and instructions could be operated
on as if they were data. This was important in the early days of very
costly, unreliable hardware because self-modifying code was a way to
get effects that we now get with things like index registers.

Later on we get into other concepts that might be called virtual Harvard
or quasi-Harvard. For instance the Burroughs 6500 and its descendants,
later the Unisys A-series, had tag bits on each word of memory so that
only words tagged as instructions could be executed. And it required a
specially-blessed compiler program to set those tag bits to "instruction"
after creating the instruction words. But everything is stored in the
same physical memory, so you don't have the problem of running out of
instruction memory when you have data memory left over, and vice versa.
But self-modifying code is impossible by design. Except you could write
an interpreter that simulates a machine that can modify its own code.

A fairly late example of a pure Harvard machine was the Bell System #1
electronic switching system, which had separate memories for program and
data. The program store was writable only offline, while the data store
was read/write. And then we got into microcontrollers and microcomputers
that might have all the instructions in ROM and a little bit of RAM for
data.

Then what would you call a microprogrammed machine, where the micro
instructions are in a separate read-only memory but the programmer-
accessible instructions and data are all in the same memory? That's
Harvard at the micro level and von Neumann at the programmer level.

Then as others have pointed out there are modern machines that are
conceptually von Neumann, but have separate caches for instructions
and data, so that self-modifying code is difficult even though the
main memory is unified.

You raise the suggestion that Harvard architecture would be faster.
That's what the separate caches are intended to accomplish. In earlier
days there were machines with separate paths to memory for instructions
and data - was that Harvard or von Neumann?
--

jhhaynes at earthlink dot net
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by josmal » Wed, 25 Aug 2004 18:52:50


I wouldn't call that a disadvantage. But the bad thing about a pure
Harvard architecture is that it disallows a JIT (just-in-time)
compiler. And in some cases even getting a compiler for a Harvard
architecture is impossible.


Of course it would be faster, but the cost would be higher, depending on
your implementation. There are some cases where a Harvard architecture
costs no more than a von Neumann architecture, and in those markets
Harvard machines do exist.

Jouni Osmala
Helsinki University of Technology
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by anto » Wed, 25 Aug 2004 19:08:49


XXXX@XXXXX.COM (sebastian) writes:

It also does not allow any kind of run-time code generation (e.g., JIT
compilers), and other code generation and execution often becomes
harder.


Code sections are usually read-only, right. But on a von Neumann
machine you can generate code into a writable area, and then execute
it (that's what JITs do). And if you really want self-modifying code,
you can then change that code. Or, on some OSes, you can use a linker
option to make the code segment writable (usually -N on Unix).
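
For example, here is a minimal sketch of the idea (assuming a Unix-like
OS with mmap() and an x86-64 CPU; the six code bytes are just
"mov eax, 42; ret", and some hardened systems will refuse a mapping that
is both writable and executable):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* the "generated" code: mov eax, 42 ; ret */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* ask the OS for a page we may both write and execute */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memcpy(buf, code, sizeof code);          /* write the code as data */
    int (*fn)(void) = (int (*)(void))buf;    /* then treat it as code  */
    printf("%d\n", fn());                    /* prints 42              */

    munmap(buf, 4096);
    return 0;
}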


Apart from the partitioning disadvantages mentioned by others, it
makes a bunch of system stuff easier to write.

- anton
--
M. Anton Ertl Some things have to be seen to be believed
XXXX@XXXXX.COM Most things have to be believed to be seen
http://www.yqcomputer.com/
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by nmm1 » Wed, 25 Aug 2004 19:52:34


In article < XXXX@XXXXX.COM >,
XXXX@XXXXX.COM (Jouni Osmala) writes:

|> >
|> > i just read that the main disadvantage of the Harvard architecture is
|> > that it doesn't allow self-modifying code, right?
|>
|> I wouldn't call that a disadvantage. But the bad thing about a pure
|> Harvard architecture is that it disallows a JIT (just-in-time)
|> compiler. And in some cases even getting a compiler for a Harvard
|> architecture is impossible.

While that is true for one meaning of the term 'compiler', it is not
true for all. Compilers can compile into a form of code suited to easy
(and fast) interpretation.


Regards,
Nick Maclaren.
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by jsavar » Thu, 26 Aug 2004 01:29:13

On 23 Aug 2004 06:24:51 -0700, XXXX@XXXXX.COM (sebastian) wrote, in
part:


But von Neumann is just as fast if you make the path to memory twice as
wide.

John Savard
http://www.yqcomputer.com/ ~jsavard/index.html
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by Scott Moor » Sat, 28 Aug 2004 16:50:55


Coupla comments. First, Harvard and "von Neumann" are not mutually
exclusive: Harvard is generally considered to be a type of stored-program
computer. Second, it's more correctly known as the "stored-program
computer", since that is what the original paper described. It is often
called "von Neumann" because that was the name on the paper describing it,
even though computer historians agree that von Neumann acted only as the
scribe for the group.

Most OSes allow self-modifying code in some fashion. It's pretty much
required to accomplish certain important tasks, such as compatibility
between calling conventions, and code speedups. I disagree that it is
"bad". Certainly having the main code write-protected is a good idea,
but your data is your data: read it, write it, execute it if you like.
They are all valid operations. Executing your data area is no more
dangerous than reading or writing it; both will cause page faults if
they go astray. If it WERE impossible to execute data space, many of my
best programs would fail. Why? Because in my command-line interpreters,
I commonly compile the line into machine code and execute that, which
gives much better execution times than interpreting the script.
Programming does not need another pointless restriction.
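
As a rough sketch of that pattern (again only an assumption of a
Unix-like OS and x86-64, not my actual interpreter), done the way
stricter systems prefer: write the generated code while the page is
writable, then mprotect() it to read+execute before calling it. The
stub bytes are just "mov eax, 7; ret" standing in for a real compiled
command line.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t pagesz = 4096;
    unsigned char stub[] = { 0xb8, 0x07, 0x00, 0x00, 0x00, 0xc3 };

    void *buf = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memcpy(buf, stub, sizeof stub);                   /* "compile" phase      */

    if (mprotect(buf, pagesz, PROT_READ | PROT_EXEC)  /* flip to executable   */
        != 0)
        return 1;

    int (*run)(void) = (int (*)(void))buf;
    printf("%d\n", run());                            /* prints 7             */

    munmap(buf, pagesz);
    return 0;
}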

Harvard was both a memory-protection method and a way to extend the
address space. With code and data occupying two disjoint address spaces,
the addressable memory is "doubled" (or something like it). For this
reason, Harvard had a brief return to popularity with early
microprocessors and their 16-bit address limits.

However, Harvard is also a design concept: you apply it any time you
split the code and data spaces, even in the context of unified memory.
Thus, caches can be split into data and code.

I personally use an assembler and compiler that link for the Harvard
model, even though I have never used them on a true Harvard computer.
Why? It simply gives a good abstraction of the linking process. Code
ends up in one block, data ends up in another, and they are located
independently in memory. This covers both the embedded case, where the
code lives in a separate device from the data, and paged OSes, because
most systems cannot have code and data live in the same page.

--
Samiam is Scott A. Moore

Personal web site: http:/www.moorecad.com/scott
My electronics engineering consulting site: http://www.yqcomputer.com/
ISO 7185 Standard Pascal web site: http://www.yqcomputer.com/
Classic Basic Games web site: http://www.yqcomputer.com/
The IP Pascal web site, a high performance, highly portable ISO 7185 Pascal
compiler system: http://www.yqcomputer.com/

Being right is more powerful than large corporations or governments.
The right argument may not be pervasive, but the facts eventually are.
 
 
 

newbie: quick question: Von Neumann vs. Harvard architecture

Post by rpw3 » Mon, 30 Aug 2004 21:14:01


+---------------
| A fairly late example of a pure Harvard machine was the Bell System #1
| electronic switching system, which had separate memories for program and
| data. The program store was writable only offline, while the data store
| was read/write.
+---------------

An even later (circa 1985) example was the original AMD Am29000 RISC,
which was pure Harvard on the external bus interfaces[1]. That didn't
last all that long, though, and within a few years the Am29030 introduced
an internal I-cache and merged the external I & D busses into a single
data bus. [Internally it was still quasi-Harvard, of course.]


-Rob

[1] Well, the I & D data busses were separate, but it shared the A bus
for addresses (with a tag bit). That really didn't matter that much,
since for good performance the instruction memory was almost always
made burst-capable (and often the data memory as well), so you only
needed the A bus to start an I-burst at branches, then it could be
used to start D cycles the rest of the time.

-----
Rob Warnock < XXXX@XXXXX.COM >
627 26th Avenue <URL: http://www.yqcomputer.com/ ;
San Mateo, CA 94403 (650)572-2607