I think Edward understood your original question better than I did...
The way I would perhaps summarize this is that, despite our efforts
to orthogonalize computation from communication, Ptolemy still
results in the construction of actors that are explicit about how data
is packaged into tokens. From the point of view of describing the functions
that some actors perform, however, there is no apparent reason why this
packaging needs to be explicit in the definition of how to compute factors.
Maybe a better way to think of it is to separate the dataflow interface
from the definition of the behavior. Define factors using a function closure,
then wrap this function closure in different dataflow behaviors... One that
maps the factors onto different ports. One that maps them onto a sequence of
tokens from a single port. One that maps them onto a single data token from a
single port.
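The separation above can be sketched in plain Java. This is a hypothetical illustration, not the Ptolemy II actor API: the factors computation is an ordinary closure, and separate wrapper methods decide how its results are packaged as tokens (one token per factor in a sequence, or one array token). All names here are made up for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

public class FactorsWrappers {
    // The core behavior, with no commitment to token packaging:
    // compute the prime factors of n.
    static final IntFunction<List<Integer>> FACTORS = n -> {
        List<Integer> result = new ArrayList<>();
        for (int d = 2; d * d <= n; d++) {
            while (n % d == 0) { result.add(d); n /= d; }
        }
        if (n > 1) result.add(n);
        return result;
    };

    // Dataflow wrapper 1: the factors become a sequence of tokens
    // on a single port (here, just a list of individual values).
    static List<Integer> asTokenSequence(int n) {
        return FACTORS.apply(n);
    }

    // Dataflow wrapper 2: the factors are packaged into a single
    // array-valued token. (A third wrapper could distribute them
    // across different ports; omitted here for brevity.)
    static int[] asSingleArrayToken(int n) {
        return FACTORS.apply(n).stream().mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) {
        System.out.println(asTokenSequence(12));           // prints [2, 2, 3]
        System.out.println(asSingleArrayToken(12).length); // prints 3
    }
}
```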
In terms of a type system to select automatically between the above options,
I think one issue comes down to "How do I select between different possible
implementations?" If this is left implicit, is the behavior of a model
still well-defined? What are the criteria for making such a selection?
What information is needed to evaluate the criteria? Can this information be
determined statically?
Note that for your example, the behavior of the system is more statically
determined (and more implementation freedom is probably available) if the
number of factors is fixed.
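A fixed number of factors gives the actor a constant production rate, which is exactly what lets the usual SDF balance equations yield a static schedule. A minimal sketch of that computation for a two-actor chain (the rates and names are illustrative, not Ptolemy II code):

```java
public class SdfBalance {
    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    // Smallest positive firing counts for a chain A -> B, from the
    // balance equation: produce * fires(A) == consume * fires(B).
    static int[] repetitions(int produce, int consume) {
        int g = gcd(produce, consume);
        return new int[] { consume / g, produce / g }; // { fires(A), fires(B) }
    }

    public static void main(String[] args) {
        // Suppose the factors actor always emits 3 tokens per firing
        // and a downstream actor consumes 2 per firing.
        int[] r = repetitions(3, 2);
        System.out.println(r[0] + "," + r[1]); // prints 2,3
    }
}
```

If the production rate is data-dependent instead of fixed, no such repetitions vector exists and the model falls outside SDF, which is why the fixed-size property matters.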
Hence, part of the original specification needs to give properties of the
collections (such as fixed size) that are important for making the decision.
The difficulty with inferring the size of arrays in Java is because I can say:

    int[] a = new int[x];

where the value of x cannot be determined statically... In Java, the sizes of
allocations are not decidable in general. Note that there is nothing to
prevent building an analysis that says "these arrays have this size, and
these arrays I can't tell, so you have to give me more information before I
will let you put an SDF interface around your function". Edward: the
"dependent type problem" you refer to is a red herring... It just says that
for languages with dynamic allocation of arrays, it is undecidable to
determine the size of every array object... Language implementations (e.g.
Java compilers) often analyze array bounds anyway (e.g. to remove bounds
checks). In domains where memory management is very important (e.g. loop nest
optimization for Fortran 77 on supercomputers), such array size analysis is
crucial to efficient performance.
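A toy version of the analysis described above can make the point concrete: allocation-size expressions are either compile-time constants or involve runtime values, and the analysis reports a size only in the constant case, answering "can't tell" otherwise. The types and names here are illustrative, not from any real compiler framework.

```java
import java.util.Optional;

public class SizeAnalysis {
    // An allocation-size expression is either a constant
    // (e.g. new int[16]) or depends on a runtime value (e.g. new int[x]).
    interface SizeExpr {}
    record Const(int value) implements SizeExpr {}
    record RuntimeValue(String name) implements SizeExpr {}

    // The analysis: a size is reported only when it is a constant.
    static Optional<Integer> analyze(SizeExpr e) {
        if (e instanceof Const c) return Optional.of(c.value);
        return Optional.empty(); // "I can't tell; give me more information"
    }

    public static void main(String[] args) {
        System.out.println(analyze(new Const(16)));         // prints Optional[16]
        System.out.println(analyze(new RuntimeValue("x"))); // prints Optional.empty
    }
}
```

A real analysis would additionally propagate constants through assignments and arithmetic, so some expressions over runtime values still resolve; the "empty" answer remains unavoidable in general, which is all the undecidability result says.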
At 12:59 PM 5/26/2004, Bertram Ludaescher wrote:
Posted to the ptolemy-hackers mailing list.