## curried and uncurried functions

### curried and uncurried functions

rincewind schrieb:

One never knows ...

This seems to depend on the language you use. For example, in Haskell
this question does not make sense since there are curried functions
only, so no need to "create curried versions". There is also no need to
"create uncurried versions" explicitly, since we have

uncurry f (a, b) = f a b

I think a function taking a tuple is more costly unless the tuples are
already there. But if you have the choice,
f 1 2
should be cheaper than
f (1,2)
because the tuple must be constructed, which involves passing 2
arguments, and then this tuple must be passed to f. So, not taking
possible optimizations into account, we have 2 function calls and 3
arguments to pass in the latter case, while only 1 function call and 2
arguments in the former. And we have still not considered the cost of
deconstructing the tuple, which has to take place in f itself.
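To make the comparison concrete, here is a minimal sketch (the function names are illustrative, not from the thread) of the two styles, together with the Prelude conversions between them:

```haskell
addC :: Int -> Int -> Int      -- curried style: addC 1 2
addC x y = x + y

addT :: (Int, Int) -> Int      -- tupled style: addT (1, 2) builds a pair first
addT (x, y) = x + y

-- The Prelude conversions; curry addT behaves like addC and vice versa:
addC' :: Int -> Int -> Int
addC' = curry addT

addT' :: (Int, Int) -> Int
addT' = uncurry addC
```

All four compute the same sum; the difference under discussion is only in how the arguments travel to the function.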

### curried and uncurried functions

Tuple construction is a primitive operation, so it does not count as argument passing.

*Not taking* optimizations into account, you forgot the cost of one
closure construction in the curried case, which will most likely
outweigh the cost of constructing a simple pair.

*Taking* optimizations into account, it is easier to optimize tupled
functions than curried ones. And you can potentially optimize them more
aggressively, because there are fewer possible use cases (no partial
application).

Just like "deconstructing" the additional closure. ;-)

So, cost is not a good argument for curried style. Convenience is.

### curried and uncurried functions

XXXX@XXXXX.COM schrieb:

Not so. Depends on the language. In Haskell, for example, you may well
write:
(,) 42 -- a -> (Int, a)
constructing a function that takes one argument and constructs a tuple.
So, tuple construction is itself curried and an application like
f (42, 43)
is really
f ((,) 42 43)
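A quick sketch of the point (pairWith42 is an illustrative name, not a standard function):

```haskell
pairWith42 :: a -> (Int, a)
pairWith42 = (,) 42            -- partial application of the pair constructor

-- Written out in prefix form, (42, 43) is the fully applied constructor:
fullPair :: (Int, Int)
fullPair = (,) 42 43           -- same value as (42, 43)
```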

Closure construction should be a more primitive operation than tuple
construction in most languages, since the latter may include the
former, see above.

This looks like a poorly founded statement to me.

This statement is self-defeating, IMHO. If we have a language that can
somehow call "uncurried" functions faster, what would prevent the
compiler from generating an uncurried call when it sees one?

The closure can be optimized away at the call site, the tuple
deconstruction cannot. I admit, however, that this costs only 2 indirect
accesses.

### curried and uncurried functions

So, what would be the implementation of (,)? I'm not aware of any
compiler that would not treat (,) as sugar for \x.\y.(x,y). After all,
at some point you still need a tuple construction primitive (like other
forms of aggregation).

No, see above.

Having worked on functional language implementations quite a bit, I
have some confidence in the validity of this statement. ;-)

With tupled functions you can simply read the "true arity" of a
function from its type. With currying you cannot: a function of type
t->u->v may either be defined as a 2-argument function, or as a
1-argument function returning another 1-argument function. So unless you
statically know the exact call target you always have to do some form
of dynamic arity matching (unless you are willing to pay for a lot of
eta expansions and call indirections to normalize arities). If done
right, this is not expensive; but it's not free either.
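A sketch of the ambiguity (names are illustrative): both definitions below have the type Int -> Int -> Int, but only the first is a "true" 2-argument function; a caller that sees only the type cannot tell them apart.

```haskell
twoArg :: Int -> Int -> Int
twoArg x y = x + y                 -- defined over both arguments at once

oneArg :: Int -> Int -> Int
oneArg x =                         -- does work after the first argument,
  let shifted = x * 10             -- then returns a 1-argument closure
  in \y -> shifted + y
```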

In both cases, polymorphism may introduce additional complications,
though.

The cost calculation was for the hypothetical unoptimized approach,
wasn't it?

### curried and uncurried functions

XXXX@XXXXX.COM schrieb:

Why should it be different from that for Pair in
data Pair a b = Pair a b;

Indeed. Like I said, it's curried at the bottom. :)

Sure.

### curried and uncurried functions

"Ingo Menger" < XXXX@XXXXX.COM > writes:

Of course it can. Good compilers for, e.g., SML do this on a regular
basis.

### curried and uncurried functions

Matthias Blume schrieb:

To be sure, I meant that when the tuple elements are accessed, there
will be one more indirection compared with normal arguments.
Of course, the following function will not "deconstruct":

f :: (a,b) -> (a,b)
f = id
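For instance (firstOf is an illustrative helper, not from the thread), f passes the pair through untouched; deconstruction only happens where something actually pattern-matches:

```haskell
f :: (a, b) -> (a, b)
f = id

firstOf :: (a, b) -> a
firstOf p = case f p of (x, _) -> x  -- this match deconstructs; f itself does not
```

So firstOf (1, undefined) still returns 1: the second component is never forced, because f never looks inside the pair.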

### curried and uncurried functions

Ingo Menger writes:

It isn't. Constructor application is likewise primitive (it falls under
the "other forms of aggregation" I mentioned). Even if the language uses
curried notation for constructor application.

No, at the bottom you have (x,y) (which you can read as a mixfix
constructor application if you want), which is a primitive construction.

### curried and uncurried functions

"Ingo Menger" < XXXX@XXXXX.COM > writes:

No, usually there won't be. Most SML compilers will try very hard to
flatten the runtime representation of arguments and use registers (or
other machine-specific efficient calling conventions) for the
components.

### curried and uncurried functions

One (possibly minor) consideration is the readability of the type of
the function. In the SML idiom of functions of several variables being
functions of tuples it is easy to distinguish at a glance "ordinary"
functions like max (a,b) from higher-order functions by their type: the
type-expression of an ordinary function only has one arrow. With the
Haskell curried idiom, everything with more than one variable is in
effect a higher-order function, so type signatures tend to have a lot
of arrows, even if the programmer isn't thinking of the function as
higher-order. Thus, it *could* be argued that the uncurried version
leads to types that better reflect the programmer's intention, with a
possible gain in readability. In Haskell, I feel like I never quite
know what the "real" arrow is in a type-expression with a long list of
arrows. Nevertheless, this is a matter of taste (and my taste comes
from having learned SML first and only now trying to learn some
Haskell) and a good rule of thumb would be to follow the standard idiom
of whatever language you are using.
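A small illustration of the signatures in question (the names are made up for this sketch):

```haskell
-- SML idiom: a single arrow marks an "ordinary" 2-variable function
maxT :: (Int, Int) -> Int
maxT (a, b) = max a b

-- Haskell idiom: two arrows, superficially the same shape as a
-- genuinely higher-order function such as applyTo below
maxC :: Int -> Int -> Int
maxC a b = max a b

applyTo :: (Int -> Int) -> Int -> Int
applyTo g x = g x
```

Only parenthesization distinguishes maxC's type from applyTo's at a glance, which is the readability concern described above.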

### curried and uncurried functions

Hmm... my impression is that it's probably a wash. In the first-order case
(calling a statically known function) it obviously makes no difference one
way or the other. In the higher-order case, you have to do an argument-count
check when calling a curried function. But a function with a tuple parameter
can't simply be represented by a multi-parameter closure, because it might
be a specialization of a function that doesn't take a tuple parameter (like
id) or it might be called by something that doesn't know the parameter type
(like map). As far as I can tell this necessitates a runtime check of some kind.

That problem doesn't occur in MLton (which eliminates polymorphism at
compile time), and I can easily imagine that tuple arguments are more
efficient there. On the other hand, curried arguments are at least as
efficient as tuple arguments, even at higher order, in jhc (another
whole-program compiler that works by firstifying everything).

-- Ben

### curried and uncurried functions

This may be deliberate. In strict languages there's a sharp distinction
between values and functions, and it makes sense to reflect that with a
binary distinction in the type (0 or 1 arrows). In Haskell a value is, for
most purposes, just like a 0-argument function, and it makes sense to
reflect that in function types that go to T instead of () -> T as the
argument count goes to zero. In ML there is a "real" function arrow where
the computation and side effects happen, but in Haskell all you can do is
feed arguments to the monster until it becomes a datatype; only then (by
case analysis) can you force the computation and find out if something diverged.

(Actually this is true only in the absence of `seq`; with `seq` you can
force the computation early, which destroys this nice property. Haskell has
lots of nice properties that are destroyed by certain builtins, like type
safety and unsafePerformIO...)
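A sketch of that difference (illustrative names): without `seq`, a function that ignores an argument never forces it; adding `seq` forces the argument early and so distinguishes a diverging argument from an unused one.

```haskell
ignoreFirst :: a -> b -> b
ignoreFirst _ y = y                 -- never forces its first argument

strictFirst :: a -> b -> b
strictFirst x y = x `seq` y         -- seq forces x before returning y
```

Here ignoreFirst undefined 1 evaluates to 1, while strictFirst undefined 1 raises the undefined error.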

-- Ben

### curried and uncurried functions

Not if the tuple is optimised away to leave a regular 2-argument function
with arguments passed in registers, which it probably is (e.g. in OCaml).

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.yqcomputer.com/

### curried and uncurried functions

Yes, as I said, polymorphism induces additional complication (and may
more or less equalise the two approaches again). But polymorphism can
be compiled away (at least in sufficiently conservative languages),
higher-orderness can't.

What do you mean by "firstifying", monomorphisation? That is not
generally possible in Haskell, so how do they do it? I'm not familiar
with the workings of JHC, but if they really monomorphise, and tupling
is less efficient than currying nevertheless, then I guess that they
simply did not bother to optimise tupling properly (which is a
reasonable approach for Haskell, of course).

### curried and uncurried functions

XXXX@XXXXX.COM schrieb:

Ok.

Still, this packing function needs to allocate memory on the heap,
which must later be garbage collected, etc. This is exactly the reason
why the optimizer tries to optimize away construction when it can be sure
that the aggregate is only there to be deconstructed immediately.
But, if this is so, why then not write f a b instead of f (a,b) in the
first place? (Given that the compiler can somehow recognize fully
saturated applications (which is more or less trivial) and thus can
avoid building the closure.)