question on typing processes and higher-order functions a

Post by neuendo » Thu, 27 May 2004 06:15:54

At 01:08 PM 5/25/2004, Bertram Ludaescher wrote:

I'm confused: Are you saying that this is what Ptolemy does, and you don't
like it, or that Ptolemy does not do this, and you would like it to?

Could you consider this to be another higher-order function, with type
expandArray :: <[int]> -> <int>?

I don't think this makes sense... SDF actor functions don't have access to
the whole stream; they have access only to a fixed-length prefix of the stream.

Why not another HOF: expandStream :: <<b>> -> <b>?

I think that the advantage of expandArray over expandStream is that arrays
are generally finite, while streams are not, and hence it is more likely
that the computation I've specified actually processes the data being
created... Note that there are two ways to implement expandStream: does it
produce an infinite stream consisting of the first element of each input
stream, or does it produce an infinite stream that begins with the first
infinite input stream, and then never gets to the other ones?
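To make the ambiguity concrete, here is a minimal sketch in Python (not
Ptolemy actor code), modeling streams as lazy generators. The two readings of
expandStream :: <<b>> -> <b> give genuinely different results on a stream of
infinite streams; the names and the example input are my own.

```python
from itertools import count, islice

def inner_streams():
    """An infinite stream of infinite streams: [0,1,2,...], [10,11,...], [20,...], ..."""
    for i in count():
        yield count(i * 10)

def expand_heads(streams):
    """Reading 1: emit the first element of each inner stream.
    Productive even when every inner stream is infinite."""
    for s in streams:
        yield next(s)

def expand_concat(streams):
    """Reading 2: concatenate the inner streams in order.
    If the first inner stream is infinite, the rest are never reached."""
    for s in streams:
        yield from s

print(list(islice(expand_heads(inner_streams()), 5)))   # [0, 10, 20, 30, 40]
print(list(islice(expand_concat(inner_streams()), 5)))  # [0, 1, 2, 3, 4]
```

Both readings are productive here, but the second one only ever consumes the
first inner stream, which is exactly the question raised above.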

Posted to the ptolemy-hackers mailing list. Please send administrative
mail for this list to: XXXX@XXXXX.COM

question on typing processes and higher-order functions a

Post by neuendo » Fri, 28 May 2004 05:45:50

I think Edward understood your original question better than I did...

The way I would perhaps summarize this is that, despite our efforts
to orthogonalize computation from communication, Ptolemy still
results in the construction of actors that are explicit about how data
is packaged into tokens. From the point of view of describing the functions
that some actors perform, however, there is no apparent reason why this
needs to be explicit in the definition of how to compute factors.

Maybe a better way to think of it is to separate the dataflow interface from
the definition of the behavior. Define factors using a function closure, then
wrap this function closure in different dataflow behaviors... One that maps
the results onto different ports. One that maps them onto a sequence from a
single port. One that maps them onto a single data token from a single port.
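A toy sketch of that separation, in Python rather than the Ptolemy II API
(the wrapper names and the token model are hypothetical): one closure
computes factors, and three wrappers decide how its results are packaged.

```python
def factors(n):
    """The behavior: a plain function closure, unaware of tokens or ports."""
    return [d for d in range(1, n + 1) if n % d == 0]

# Wrapper 1: one array-valued token on a single port.
def fire_array_token(n):
    return factors(n)          # the whole result as one token

# Wrapper 2: a sequence of tokens on a single port.
def fire_token_sequence(n):
    yield from factors(n)      # one token per element

# Wrapper 3: one token per distinct port (only valid when the count is fixed).
def fire_multiport(n, arity):
    fs = factors(n)
    assert len(fs) == arity, "multiport wrapper needs a statically fixed count"
    return tuple(fs)           # element i goes to port i
```

Note that wrapper 3 is the one that forces the static-size question discussed
next: it is only meaningful when the number of results is known in advance.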

In terms of a type system to select automatically between the above options,
I think one issue comes down to "How do I select between different possible
implementations?" If this is left implicit, is the behavior of a model
still well-defined? What are the criteria for making such a selection?
What information is needed to evaluate the criteria? Can this information be
extracted automatically?

Note that for your example, the behavior of the system is more static
(and more implementation freedom is probably available) if the number of
factors is fixed. Hence, part of the original specification needs to give
properties of the collections (such as fixed size) that are important for
making the decision. The difficulty with inferring the size of arrays in
Java is that I can say:

new int[x];

where the value of x cannot be determined statically... In Java, the sizes
of array allocations are not decidable in general. Note that there is
nothing to prevent building an analysis that says "these arrays have this
size, and these arrays I can't tell, so you have to give me more information
before I will let you put an SDF around your function". Edward: the
"dependent type problem" you refer to is a red herring... It just says that
for languages with dynamic allocation of arrays, it is undecidable to
determine the size of every array object... Language implementations
(e.g. Java compilers) often analyze array bounds anyway (e.g. to remove
bounds checks). In languages where memory management is very important
(e.g. loop nest optimization for Fortran 77 on supercomputers), such array
size analysis is crucial to efficient performance.
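The kind of partial analysis described above can be sketched in a few lines.
This toy version (the allocation representation and function name are my own,
not any real compiler's API) reports a size when the allocation expression is
a literal and honestly gives up otherwise:

```python
def array_size(alloc):
    """alloc is a pair ('new_array', size_expr), where size_expr is either an
    int literal or a variable name.  Return the size when it is statically
    known, or None when the analysis must ask for more information."""
    kind, size_expr = alloc
    assert kind == "new_array"
    return size_expr if isinstance(size_expr, int) else None

print(array_size(("new_array", 8)))    # 8     -- like `new int[8]`
print(array_size(("new_array", "x")))  # None  -- like `new int[x]`
```

A real analysis would also propagate constants through assignments, but the
shape is the same: answer where you can, demand annotations where you can't.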


At 12:59 PM 5/26/2004, Bertram Ludaescher wrote:
