The Sage Keeps Asking People to "Prove" Things, Here's His Chance (#1)

Post by Kevin G. R » Thu, 11 Sep 2003 22:06:05


>So let's start with your first point:

Frankly, I personally do not care whether "the sage" thinks this can be proven
for any HLL but not for assembler, because hardware independence, while a good
thing in some circumstances, is not the be-all and end-all for computer
languages and translation systems (e.g., compilers, assemblers, interpreters).

This is a little bit like saying that a ground transportation vehicle MUST be
able to do 0 to 60 (mph) in 10 seconds or less. (As if school buses, tractor-
trailers, dump trucks, and, oh yeah, the suburban station-wagon/minivan are
NOT ground transportation vehicles.) For sports cars, yeah; for junior's
first car - I don't think so.

Why not add demands for efficient numerics, symbolic manipulation,
transparent garbage collection and native support for reading/writing
MS Word documents? While all these can be useful in some context, are
they relevant? I don't think so. Neither is the demand for hardware
independence.

One example: in embedded-systems work, hardware independence is often NOT
a virtue, it can be a DISvirtue, and C/C++ is being used in that domain.
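
To make that concrete, here is a minimal C sketch of the kind of code embedded
work routinely requires; the register addresses and the TX_READY bit are made
up for the example (a real part's datasheet would supply them):

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers at made-up addresses.
       Code like this is deliberately hardware-DEPENDENT: the addresses,
       the bit layout, and the need for 'volatile' all come from one
       specific chip, and porting it means rewriting it. */
    #define UART0_STATUS (*(volatile uint32_t *)0x4000C000u)
    #define UART0_DATA   (*(volatile uint32_t *)0x4000C004u)
    #define TX_READY     (1u << 5)

    static void uart_putc(uint8_t c)
    {
        while (!(UART0_STATUS & TX_READY))  /* spin until transmitter is free */
            ;
        UART0_DATA = c;     /* the write goes straight to the device */
    }

Nailing code to one chip like this is precisely why hardware independence buys
you little in that domain.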

Besides, C started life as an HLA for the PDP-11 (once the attempt to
reproduce BCPL was dropped). So talking "processor independence" and
"C" in the same sentence is an oxymoron. Implementing C on many machines
meant forcing the assumptions of one processor architecture onto very
different hardware. Stack orientation, whilst common in many of today's
processors, was foreign to many of the machines that C was implemented
upon.

So although you have attempted to put the debate into objective terms,
I think it is STILL predicated upon assumptions (unproven, unprovable,
and of debatable applicability, however desirable they may be in some
situations) that are not really acceptable.

This is mucho work on your part, Randy, and I suspect the effort is
for naught.

Sincerely
Kevin
 
 
 

The Sage Keeps Asking People to "Prove" Things, Here's His Chance (#1)

Post by blackmarli » Fri, 12 Sep 2003 06:05:11

Randall Hyde" < XXXX@XXXXX.COM > wrote in message news:<2vw7b.5267$ XXXX@XXXXX.COM >...

Firstly, I will state that I see no reason why I should not
be constrained by the "ground rules" too; besides, I normally
follow most of them anyway.

[snip]


As this is, in my opinion, the most difficult point to disprove,
I will have a go. (I like a challenge.) My antithesis is that
assembly can be made portable, provided certain conditions are
met and/or standards are followed.

To start with, let me state that any language which provides an
extensible interface and calls to operating system primitives
(using said operating system's API) can result in non-portable
programmes, but with the required level of discipline on the part
of the programmer, i.e. only using well-specified standard library
code, this can be avoided. Where that is impossible, placing all
non-portable code in a small, well-specified and easily rewritten
module minimises the workload during transfer to a different system.
(The latter is the technique used in programming vim [ www.vim.org ],
which, while written in C, uses some non-portable interfaces which
must be replaced during porting -- vim is available on many
architectures as a result of this programming style.)
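
As a rough illustration of that second approach, here is a tiny OS layer in C;
the os_file_size() name and the per-platform file split are just one plausible
way to carve it up, not vim's actual scheme:

    /* os.h -- the portable core only ever includes this interface. */
    #ifndef OS_H
    #define OS_H
    long os_file_size(const char *path);
    #endif

    /* os_unix.c -- one small, easily rewritten module per platform. */
    #include <sys/stat.h>
    #include "os.h"

    long os_file_size(const char *path)
    {
        struct stat st;
        return stat(path, &st) == 0 ? (long)st.st_size : -1L;
    }

    /* os_win32.c would implement the same signature with Win32 calls;
       only these little modules change when the program is ported. */

The point is that the bulk of the program stays identical from system to
system; only the small modules behind the interface get rewritten.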

Firstly, assembly can avoid operating system dependencies by
using a well-defined library which is available for several
operating systems. The HLA Standard Library
[ http://webster.cs.ucr.edu/Page_asm/0_RHUCRLib.html ] is an
example of such a piece of code; it is available for both
Linux and Windows, and there is no doubt that it could be
rewritten for other operating systems on the x86 architecture.

Secondly, on the issue of architecture, assembly still has
a limited degree of portability. On the Apple Mac, during
the transition from the Motorola 680x0-series processors to
the faster PowerPC architecture, existing code could not just
be thrown away: old applications needed to be supported, so the
PowerPC-based Mac had to be able to run software for a completely
different processor, the 680x0. The solution Apple chose was
interpretation: 680x0 code would execute in an emulation programme
on the PowerPC, thereby allowing old applications to run on the
new hardware (normally at a faster rate, due to improvements in
the more modern PowerPC processor series)
[ http://www.kearney.net/~mhoffman/macmac.html ]. Apple also
launched similar emulators to run on Solaris and HP/UX, allowing
Mac machine code to run on totally alien systems
[ http://www.byte.com/art/9404/sec4/art3.htm ].
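
For anyone who has not seen one, the heart of such an emulator is a
fetch-decode-dispatch loop. Here is a minimal C sketch for an invented
two-instruction guest machine (the real 680x0 emulator also had to model
registers, condition codes, traps, and so on):

    #include <stdio.h>
    #include <stdint.h>

    /* Invented guest opcodes -- purely illustrative. */
    enum { OP_HALT = 0, OP_ADDI = 1, OP_PRINT = 2 };

    int main(void)
    {
        /* A tiny "guest binary": acc += 40; acc += 2; print; halt. */
        const uint8_t program[] = { OP_ADDI, 40, OP_ADDI, 2, OP_PRINT, OP_HALT };
        int32_t acc = 0;

        for (size_t pc = 0; ; ) {
            switch (program[pc++]) {              /* fetch and decode */
            case OP_ADDI:  acc += program[pc++]; break;
            case OP_PRINT: printf("%d\n", acc);  break;
            case OP_HALT:  return 0;             /* guest asked to stop */
            }
        }
    }

Every guest instruction costs several host instructions, which is why Apple
leaned on the PowerPC's speed advantage to make emulation acceptable.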

Another strategy may be employed: binary translation
[ http://www.ifi.unizh.ch/groups/richter/people/pilz/oct/ ]. This
technique converts an existing executable file from one
processor's binary format to that of a completely different
processor. Such translations are a fairly hot research topic
at the moment.
[ http://www.itee.uq.edu.au/~csmweb/decompilation/bintrans.html ]
[ "Binary Translation" -- Sites, R.L., Chernoff, A., Kirk, M.B.,
Marks, M.P. & Robinson, S.G. -- Communications of the ACM, Feb.
1993, Vol. 36, No. 2, pp. 69-81 ]. Interestingly, a similar
technique is used in the M$ CLR [ "A Technical Overview of the
Common Language Infrastructure" -- Meijer, E. & Gough, J. --
http://research.microsoft.com/~emeijer/ ].

Java byte code implements yet another way to make machine code
portable: JIT compilation. This is a dynamic form of binary
translation where routines are translated to native code as they
are first executed, and the translated code is cached for reuse.
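
To give a feel for what translation boils down to, here is a deliberately
tiny C sketch that turns the same invented two-opcode guest format into
native x86-64 code and runs it. It assumes a Linux/x86-64 host (mmap() and
the SysV calling convention); the guest opcodes are made up for the example:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    enum { OP_ADDI = 1, OP_RET = 2 };   /* invented guest opcodes */

    int main(void)
    {
        const unsigned char guest[] = { OP_ADDI, 40, OP_ADDI, 2, OP_RET };
        unsigned char native[64];
        size_t n = 0, i = 0;

        native[n++] = 0x31; native[n++] = 0xC0;       /* xor eax,eax */
        while (i < sizeof guest) {
            switch (guest[i++]) {
            case OP_ADDI:                             /* add eax, imm32 */
                native[n++] = 0x05;
                native[n++] = guest[i++];
                native[n++] = 0; native[n++] = 0; native[n++] = 0;
                break;
            case OP_RET:
                native[n++] = 0xC3;                   /* ret */
                break;
            }
        }

        void *buf = mmap(NULL, sizeof native,
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memcpy(buf, native, n);

        int (*fn)(void) = (int (*)(void))buf;         /* jump into the result */
        printf("translated guest returned %d\n", fn());   /* prints 42 */
        return 0;
    }

A JIT does the same thing lazily, translating each routine the first time it
is called instead of converting the whole file up front.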
 
 
 

The Sage Keeps Asking People to "Prove" Things, Here's His Chance (#1)

Post by Randall Hyde » Fri, 12 Sep 2003 12:06:22

Well, as I expected, TS wasn't up to defending
all these claims he has made about HLLs vs. C/C++.
That's okay.
Nevertheless, it's not a bad idea to go over each of these
points and present *my* take on them.
BTW, TS, you "punted", so don't expect any replies (from
me) to any comments you make. You had your chance, and I would
even have helped you, but this is for everyone else's
benefit.


First, let's begin by discussing how important this attribute is to someone
who wants to use a language.

The problem with "hardware-independence" is that you always wind
up with a "lowest-common-denominator" result. *Every* language
I've bothered to learn exhibits *some* hardware dependencies. For example,
most programming languages I've come across (not all, but most) assume an
imperative execution model (e.g., a von Neumann or Harvard architecture).
There are clearly some that don't immediately fit this mold (e.g., Prolog),
but even those languages that aren't directly imperative are written with
an imperative execution model in mind. For example, very few common
languages would run well on a dataflow machine (and, conversely, languages
invented for dataflow machines rarely do well on imperative machines).
Likewise, few languages do well on SIMD machines or even MIMD
machines (though some languages are okay at *coarse* MIMD).

How many languages work well on decimal machines?
How many would work well on a ternary machine?

Today, most popular programming languages *assume* that their
code executes on a binary machine. Many algorithms depend upon
execution on a binary machine (e.g., ever used the "<<" operator in
C/C++?). Yeah, the language could be tweaked to produce
portable results on a ternary architecture, but the performance hit
would be unacceptable to most people (even Knuth was unable
to get away with machine independence in his "MIX" VM; he
had to break down and use a binary shift operator in a couple
of algorithms).
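
As a concrete instance of the "<<" point, consider this C fragment; on a
binary machine the shift and the multiply are the same computation, which is
exactly the hardware assumption being baked into the source:

    #include <stdio.h>

    int main(void)
    {
        unsigned x = 5;

        /* x << 3 only means "multiply by 8" because the representation
           is base 2; a ternary machine has no cheap equivalent, and a
           compiler there would have to synthesise it as a multiply. */
        printf("%u\n", x << 3);   /* 40 */
        printf("%u\n", x * 8u);   /* 40 -- the representation-neutral form */
        return 0;
    }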

Now to the specific question. Are assembly and C/C++
hardware independent?

Well, what does hardware independent mean?
As best I can tell from the original paper on which TS based
his post, this means "CPU independence".

Clearly, C/C++ is *fairly* CPU-independent. The proof of
that is the wide variety of C/C++ compilers running on
different CPUs. While C/C++ does contain some hardware
specific features, there is no question that C/C++ is fairly
hardware independent. Indeed, on the basis of real-world
implementations, it's probably not too outrageous to claim
that C/C++ is the most hardware independent language
available today (yes, other languages like Ada have been
*designed* to be less hardware dependent, but C/C++
has been ported to more CPUs than just about any other
language, so arguing that other languages are more hardware
independent than C/C++ is a non-starter).

Is assembly language hardware independent?
Well, the obvious answer is "no."
However, what is "assembly language"?
Based on comments I've read in TS' posts,
"assembly language" is not a specific language (e.g.,
MASM, HLA, RosASM, whatever), but a generic
term applying to *all* assembly languages. In this sense,
"assembly language" is probably the one language that *is*
ported to every CPU out there. After all, *every* CPU has
a machine instruction set (and almost every manufacturer
provides an assembler of some sort for their CPUs), therefore,
much as one can argue that C/C++ is hardware independent because it
has been implemented on virtually every CPU, the same can be argued
for "assembly language" in this generic sense.
 
 
 

The Sage Keeps Asking People to "Prove" Things, Here's His Chance (#1)

Post by Kevin G. R » Fri, 12 Sep 2003 21:40:04

>Indeed, on the basis of real-world

C + LINT perhaps, not C alone. The old Fortran-77 standard
language is probably more hardware independent than is C
w/o using LINT. I haven't really done much with C++,
so I will not comment on that.