Programming paradigms

Programming paradigms

Post by shantan » Fri, 26 Dec 2003 10:10:27


Hi,

Does anyone know of any work being done to understand the strengths of
the various paradigms? By this I mean something like clear denotational,
axiomatic and operational semantics for the object-oriented, functional,
logic and imperative paradigms. Building on that, has anyone looked into
trying to build a language with the strengths of all of them?

I would be much obliged to anyone who knows the answer to this.

Regards
Shantanu
 
 
 

Programming paradigms

Post by Feue » Fri, 26 Dec 2003 15:20:16


Yes.


What's that have to do with your first question?


All the time.


What for?

David

 
 
 

Programming paradigms

Post by Joachim Du » Fri, 26 Dec 2003 18:16:09


Yes to the last one. Take a look at http://www.yqcomputer.com/~pvr/ .

Regards,
Jo
 
 
 

Programming paradigms

Post by shantan » Sat, 27 Dec 2003 09:58:47

Thanks a ton, Jo, for the site! I am downloading the book and will get
back to the discussion as I get to know more...

Thanks again!

Regards
Shantanu
 
 
 

Programming paradigms

Post by shinj » Sun, 28 Dec 2003 09:09:39


That's a very interesting question. It also ties into the question of
how a specific language or paradigm affects the way we think about
solutions to programming problems. Do people more naturally understand
the imperative paradigm, or is that just due to history and the
prevalence of imperative languages? Do people think more effectively in
one paradigm than another? How about infix vs. prefix notation,
especially with regard to mathematical notation? And does the OO
paradigm really help us tame complexity, or is that just marketing with
little justification? Etc., etc.

I know I've only added a lot more questions without answering your
original question, but it's an interesting area. I hope some people
here with more experience will be able to answer it.
 
 
 

Programming paradigms

Post by Joachim Du » Sun, 28 Dec 2003 17:21:38

Shinji Ikari wrote:

There is an influence. People who are programming in a language that
doesn't have recursion will find recursive algorithms later than
iterative ones, for example.


It's the latter. It's quite hard to teach beginners how to decompose an
intention into a series of steps that will achieve that intention.
However, once they have mastered that step, there aren't many serious
roadblocks ahead.

However, I'm not sure that things would be very different if another
paradigm became standard. For functional programming, the basic building
block would be recursion, and it's not easy to wrap one's mind around it
and its ramifications (fixed-point theory, tail call optimization and
iterative vs. general recursion, accumulators, the definition of fold and
similar HOFs as tail-recursive functions - recursion is not just
difficult, it also comes with a whole lot of strings attached).
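
To make the accumulator point concrete, here's a tiny sketch (in Python,
purely for illustration - not any of the languages in this thread, and
sum_rec, fold_left and sum_acc are made-up names): a naive recursive sum
next to an accumulator-passing fold. In a language with tail call
optimization the second form runs in constant stack space; Python itself
doesn't optimize tail calls, so only the shape matters here.

# Naive structural recursion: one stack frame per element.
def sum_rec(xs):
    if not xs:
        return 0
    return xs[0] + sum_rec(xs[1:])

# Accumulator-passing ("tail-recursive") formulation of a left fold.
# With tail call optimization this would run in constant stack space.
def fold_left(f, acc, xs):
    if not xs:
        return acc
    return fold_left(f, f(acc, xs[0]), xs[1:])

def sum_acc(xs):
    return fold_left(lambda acc, x: acc + x, 0, xs)

print(sum_rec([1, 2, 3, 4]))   # 10
print(sum_acc([1, 2, 3, 4]))   # 10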


The effectiveness of thought processes is so difficult to measure that
I'd distrust any study in that direction.
People use one thought process and stick with it, so they can't easily
come to conclusions via self-observation. Worse: they forget how their
thought processes worked before they adapted them to programming. (At
least that's the case for me: there was a quite marked transition from
being an ordinary person to becoming a programmer. I adopted a mode of
"structural thinking", while my previous thinking was far more "fuzzy".
Unfortunately, I almost immediately forgot how I had been thinking
before the transition, and I have never been able to do a direct
comparison of the effectiveness of the different kinds of thought
processes. My conjecture at that time was that "programmer thinking" was
slightly more efficient overall, though there'd be some areas where
effectiveness would drop a bit; I was far from sure though, and now that
my previous thinking modes are just a hazy memory, I'd be unable to say
anything definitive.)


Utterly irrelevant. Programming isn't about writing down mathematical
formulae, it's about transforming specifications (which are often quite
hazy) into a program, while keeping an eye on issues like
maintainability, efficiency, and programmer time budget.
A solid mathematical background for the language can help, since that
allows more powerful automated reasoning over the programs, which in
turn makes the secondary goals easier to attain. But that doesn't extend
to the notation.
Of course, if the programming language has a notation that's close to
the application domain, it helps if you can express things in a way that
the customer will understand. However, this pertains only to the top
levels of the system, where the data being processed is near to the
abstractions that the customer uses. For the internals, notational
nearness is irrelevant since these are objects that the customer never
sees, and couldn't interpret even if he saw them. (That's one of the
reasons why "4GLs" have failed: it's easy to write the top 10% of the
system in them, but the adaptation to customer thought processes usually
made them clumsy and awkward to use when building the remaining 90% that
went "under the hood".)


OO is a large bundle of ideas, thought to have a synergy.
That synergy diminishes as the language becomes higher-level,
particularly once the language has easy-to-use constructs that iterate
over collection data types.
Personally, I think that many ideas that went into
 
 
 

Programming paradigms

Post by Alaric B S » Mon, 29 Dec 2003 00:01:30


I like this question. I've seen OO be a useful thing, allowing
algorithms to abstract away from the implementation details of the
objects they deal with, not just for reasons of layer separation but so
that the algorithm can be applied to objects with validly different
implementations of the relevant interfaces. And I've seen it be useless
(just emulating a system of named-field tuples and procedures, as in C
or Pascal, but with slightly clunky syntax).

This comes down to distinctions of ideology, so I'm not sure that one
uber-model will capture both, apart from the sum of the OO and non-OO
models of program structuring, with separate notions of 'method' and
'function'.

The multimethod / typeclass approach seems to be a winner, since it lets
you extend classes without having to change the class. I think it's
still worth having 'private' and 'public' fields; I would define the
distinction as "implementations of methods on the class can access
fields directly. Fields marked 'public' get getter/setter pairs created
for public access automatically". I'm a fan of forcing access via
getter/setter methods, or at least language syntax that lets you insert
code into the get/set actions of class fields without altering all the
source that references it. There ought to be a way to define private
methods, too, that are not externally accessible but act as helpers to
other methods.
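
A rough Python sketch of that convention (the Account class and its
fields are hypothetical, and Python's leading-underscore "privacy" is
only a convention rather than enforcement): method bodies touch the
field directly, while outside code goes through a property standing in
for the auto-generated getter/setter pair, so code can later be hooked
into the get/set actions without touching the callers.

class Account:
    def __init__(self, owner, balance):
        self._owner = owner        # "private": only methods use it directly
        self._balance = balance

    # "Public" field: reads and writes go through a getter/setter pair,
    # so code can later be inserted into the get/set actions without
    # altering any of the source that references the field.
    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

    # Private helper: by convention not part of the external interface.
    def _audit_line(self):
        return "%s: %d" % (self._owner, self._balance)

acct = Account("alice", 100)
acct.balance = 75              # plain field syntax, but the setter runs
print(acct.balance)            # 75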

ABS
 
 
 

Programming paradigms

Post by Marcin 'Qr » Mon, 29 Dec 2003 01:03:21


For my pet language I couldn't find a good way to have private fields.

There can be private fields if you provide "methods" in the sense of other
fields which hold functions (let's call them internal methods) which refer
to those private fields - no problem this way. But it's not possible to
have a field which can be accessed only from generic functions. Access
control + generic functions + dynamic typing don't mix. Pick any two and
it works. I chose generic functions and dynamic typing.


In my language you can define what it means to access a particular
field (read or write). This applies to plain variables too. So there can
be constant bindings, mutable bindings, lazy bindings, thread-local
variables, variables which mirror some external state in real time, etc.
Variables are first-class objects, but an expression which is just a
variable name runs the reader.
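
There's no Python equivalent of that language, but as a loose analogue,
here is a sketch of "defining what a read means" with a descriptor that
implements a lazy binding (Lazy, Config and expensive_load are made-up
names): the first read runs the computation, later reads hit the cache.

def expensive_load(path):
    print("loading", path, "...")      # stands in for real work
    return {"path": path}

class Lazy:
    """Field whose read semantics are: compute on first access, then cache."""
    def __init__(self, thunk):
        self.thunk = thunk

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.thunk(obj)
        # Cache in the instance dict; later reads bypass the descriptor.
        obj.__dict__[self.name] = value
        return value

class Config:
    settings = Lazy(lambda self: expensive_load(self.path))

    def __init__(self, path):
        self.path = path

cfg = Config("app.ini")
print(cfg.settings)    # first read triggers the load
print(cfg.settings)    # cached; no second load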

Object fields are normally not accessed through getters/setters because
it would require introducing lots of global names.


Private generic functions - no problem. Private internal methods accessed
from other internal methods - no problem. But there are no private methods
that are accessible from generic functions but not from just anybody who
holds a reference to the object. Is it good enough?

Followup-To: comp.lang.misc; although my language is roughly as
functional as SML/OCaml, this topic is independent of functional
programming.

--
__("< Marcin Kowalczyk
\__/ XXXX@XXXXX.COM
^^ http://www.yqcomputer.com/ ~qrczak/
 
 
 

Programming paradigms

Post by Neelakanta » Fri, 09 Jan 2004 06:43:34

In article < XXXX@XXXXX.COM >, Marcin 'Qrczak' Kowalczyk wrote:



FWIW, Dylan has all three. Access control isn't at the class or method
level, though -- the module system controls the visibility of
identifiers. It's a fairly common pattern in Dylan to export an
abstract base class, and then keep the implementation classes hidden
from view. That way users can't do gf dispatch on implementation-
specific details.
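
A rough sketch of that pattern in Python terms (module, class and
function names are invented, and Python only hides names by convention
where Dylan's module system actually enforces it): export an abstract
base class plus a factory, keep the concrete classes module-private,
and callers have nothing implementation-specific to dispatch on.

# shapes.py
from abc import ABC, abstractmethod

__all__ = ["Shape", "make_circle"]     # the exported interface

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class _Circle(Shape):                  # implementation class, kept hidden
    def __init__(self, radius):
        self._radius = radius

    def area(self):
        return 3.141592653589793 * self._radius ** 2

def make_circle(radius):
    """Factory: callers get a Shape without ever naming the concrete class."""
    return _Circle(radius)

Client code imports Shape and make_circle; since _Circle is never
exported, there's nothing concrete for callers to specialize on, which
is roughly the effect described above, except that Dylan enforces it
through the module system rather than leaving it to convention.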

I vaguely recall that Mozart/Oz is a dynamically-typed language with a
more sophisticated device for abstraction, but can't remember any
details. Anyone know the score there?

--
Neel Krishnaswami
XXXX@XXXXX.COM
 
 
 

Programming paradigms

Post by Borci » Fri, 09 Jan 2004 08:44:27


Paradigms is a programming language replacing variables by variouses, and as
canonical example of the utility of variouses as compared to variables, how
they help capture for the well-being of Paradigms, the descriptive
documentation protocol in terms of idiosyncrasies such as saying "various
paradigms" to mean "variouses in Paradigms".

HTH, B.
 
 
 

Programming paradigms

Post by Marcin 'Qr » Fri, 09 Jan 2004 09:26:19


Hmm, right. It does this by creating a method for each field, where the
automatically generated implementation of the method does the necessary
magic to access the field.

But I'm afraid of implicitly defining such a lot of methods. What
namespaces should they be put in? Which fields should generate methods
put in the same generic function, and which fields should generate
separate generics?

Dylan, as far as I remember, merges such implicitly defined generics from
all modules in a library, which is a higher level of packaging than a
module. When several modules in a single library use the same field name,
the accessors are put in the same generic function, and when this happens
in separate libraries, it depends on whether one of them uses the other
(the implicit generics are not generated if their names are already in
scope) - do I understand that right? I don't like these rules, even if
they work in many cases...


Sure, I too hate tying access control to definitions of classes. The
problem is that I don't want to pollute the global module namespace with
all the field names. For example, when I represent an abstract syntax
tree or a semantic tree, there are lots of fields - it's enough that the
node constructors have their own global names.

--
__("< Marcin Kowalczyk
\__/ XXXX@XXXXX.COM
^^ http://www.yqcomputer.com/ ~qrczak/
 
 
 

Programming paradigms

Post by pvr » Fri, 09 Jan 2004 18:16:40


For controlling visibility, Oz has two mechanisms: lexical scoping
(as usual, for static visibility) and name values (for programming
other kinds of visibility). Name values are a simple idea: they are
unforgeable constants that do not have a print representation.
The only way to get access to a particular name value is to be
passed it by someone else who has access.

Some of the common idioms (such as private methods) are
supported with syntax (called "linguistic abstraction" in my book),
which is defined by translating into a program that uses name values.
But name values are quite general: they can express just about any
access control rules. Whether or not you want to provide syntax
for some of these is another question.
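
As a loose illustration of the idea outside Oz (a Python sketch with
invented names, and with the caveat that a Python object does have a
print representation - the point is only that the value can't be
reconstructed from one): a fresh object() plays the role of the name
value, and a "private" method is simply one that demands it.

# The name value: unforgeable in the sense that the only way to obtain
# this exact object is to be handed a reference to it.
_bump_key = object()

class Counter:
    def __init__(self):
        self._n = 0

    def bump(self, key):
        # Usable only by callers that were passed the name value.
        if key is not _bump_key:
            raise PermissionError("missing capability")
        self._n += 1
        return self._n

c = Counter()
print(c.bump(_bump_key))     # a holder of the name value: allowed
# c.bump(object())           # anyone else: PermissionError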

Peter
 
 
 

Programming paradigms

Post by Joachim Du » Fri, 09 Jan 2004 23:04:27


In some operating systems, a similar concept was introduced under the
name of "capability".
BTW I think that having a print representation wouldn't hurt. The main
point is unforgeability: that it's impossible to create a name value
(capability) from a print representation (or, for that matter, from any
uncontrolled data structure).

Regards,
Jo
 
 
 

Programming paradigms

Post by Lex Spoo » Sat, 10 Jan 2004 01:40:13


XXXX@XXXXX.COM (Shinji Ikari) writes:



There is increasing evidence that narrative is built into human
brains, so that would suggest that people *understand* imperative
constructs at a deep level.

There is also the issue that imperative languages have a closer match
to the underlying hardware; when you are *programming*, as opposed to
just writing down some math, you care about this.

There is also the observation that experts in any language seem to do
quite well....

Still, ease of understanding is not the only criterion. There are
also things like:

- concise programs

- mental ease of typical modifications

- concise changes under typical modifications

- likelihood of remaining errors, under a particular testing regime

- time to achieve satisfactory error levels, under a particular
testing regime

- communicability of issues and possibilities to clients


If you really want to answer questions like this, then it seems you
need to commit on some issues. What kinds of problems are being
solved? How much and in what ways will the requirements be likely to
change? What are the correctness requirements? What kind of
verification will be performed? Who are the clients, and how well do
they need to understand the situation?


-Lex
 
 
 

Programming paradigms

Post by Neelakanta » Sat, 10 Jan 2004 05:33:58


Hi Lex, I'm not sure I agree with this. I mean, I totally agree that
narrative appears to be fundamental to human reasoning, but I don't
think that implies an "imperative worldview".

The reason I am skeptical is that the organizing principle for
human knowledge appears to be cause and effect: machine learning
researchers find that getting domain experts to put their knowledge
into propositional or probabilistic form is very hard, but that domain
experts find it pretty easy to express their knowledge in terms that
are easily encoded by Bayesian belief networks. (This obviously isn't
conclusive evidence, but it is very suggestive.)

Causality gives you a notion of sequence because causes must happen
before effects, but it doesn't have a global store that must be
updated. The key device of causal reasoning is the counterfactual, a
kind of story-like reasoning that's almost impossible to encode in an
imperative language. I would imagine that a "natural" programming
language would be some kind of "modal logic language", like a Prolog
augmented with the causal operators David Lewis invented way back in
the 1970s.

You know, making such a language could make a seriously fun project.
I wish I knew something about modal logic!

--
Neel Krishnaswami
XXXX@XXXXX.COM