Sławomir J. wrote:
Excuse me for interrupting but...huh? You say "by definition" the node
has to be "there" waiting for the data? Why? Where? A dataflow graph
is just a data structure that tells how data and operations are related,
so that operations can fire when their needed data is available. Yes,
one way to implement such a data structure might be to statically embed
operations on the platform and to have them each wait (perhaps even
poll) for their input data, but to criticize dataflow as a whole because
you don't like one possible implementation of it won't get you far.
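To make that concrete, here is a minimal sketch (all names are illustrative, not taken from any real dataflow system) of a dataflow graph as a plain data structure, with operations fired only as their input data becomes available:

```python
# A dataflow graph as plain data: each node names an operation, the
# data items it consumes, and the item it produces. Nothing here
# dictates where a node runs, or that it holds resources while idle.
graph = {
    "sq_x": {"op": lambda x: x * x,   "inputs": ["x"],        "output": "x2"},
    "sq_y": {"op": lambda y: y * y,   "inputs": ["y"],        "output": "y2"},
    "add":  {"op": lambda a, b: a + b, "inputs": ["x2", "y2"], "output": "sum"},
}

def run(graph, initial):
    """Fire any node whose inputs are all present; repeat until done."""
    data = dict(initial)          # data items available so far
    pending = set(graph)          # nodes that have not fired yet
    while pending:
        ready = [n for n in pending
                 if all(i in data for i in graph[n]["inputs"])]
        if not ready:
            break                 # nothing can fire: missing input
        for n in ready:
            node = graph[n]
            args = [data[i] for i in node["inputs"]]
            data[node["output"]] = node["op"](*args)
            pending.remove(n)
    return data

result = run(graph, {"x": 3, "y": 4})
print(result["sum"])  # → 25
```

The point of the sketch is that the graph is inert data: the `run` loop (or any smarter scheduler) decides when each operation executes, and no operation exists as a waiting entity before its inputs are ready.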
For the most part, dataflow certainly doesn't dictate "where" an
operation needs to execute, nor that the operation itself should consume
any resources at all until all of its input data is ready. These are
advantages of dataflow over threaded programming, and even apparently
over your proposal, where a node must maintain internal run state while
blocked waiting for completion of other nodes. In fact, with your
pre-reservation calls in your x^2+y^2 example, you clearly dictate that
the end node (D) maintain state all the time it's waiting for its inputs
to come down the line. And, the end node apparently must then manually
decide each time an x^2 or y^2 comes down the line whether its pair
exists so that it can perform the addition and release the reservation.
Stuff like that is done automatically in mature dataflow languages.
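The pairing that the end node would otherwise do by hand can be sketched as a hypothetical tagged-token scheme (illustrative only, not any particular language's mechanism): each arriving value carries a tag, and the runtime fires the addition only when both members of a matching pair are present:

```python
from collections import defaultdict

class AddNode:
    """Runtime-side pair matching: the addition fires only when both
    the x^2 and y^2 tokens with the same tag have arrived. The node
    holds no run state of its own between firings."""
    def __init__(self):
        self.waiting = defaultdict(dict)   # tag -> partially filled pair
        self.results = {}                  # tag -> completed sum

    def receive(self, tag, port, value):
        slot = self.waiting[tag]
        slot[port] = value
        if "x2" in slot and "y2" in slot:  # pair complete: fire
            self.results[tag] = slot["x2"] + slot["y2"]
            del self.waiting[tag]          # "reservation" released here

node = AddNode()
node.receive(7, "x2", 9)    # x^2 for request 7 arrives first
node.receive(8, "y2", 1)    # y^2 for a different request
node.receive(7, "y2", 16)   # request 7's pair completes and fires
print(node.results)          # {7: 25}
```

Here the matching and the release of the partial pair happen inside the runtime's `receive` logic, not in user code at the end node.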
Because a dataflow graph is just a data structure, decisions of when and
where the operations will execute can be made dynamically by a runtime
system, taking into account not only which operations are ready to fire,
but also dynamic state of the platform such as processor load, faults,
network weather, etc. The stressflow implementations you recommend
apparently have no such flexibility.
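As a hypothetical sketch of that flexibility (worker names and the load metric are invented for illustration): a runtime can dispatch each ready operation to whichever worker is currently least loaded, something a statically wired scheme cannot do:

```python
import heapq

# Workers tracked as (load, name) pairs in a min-heap; a real runtime
# would also weigh faults, network conditions, locality, etc.
workers = [(0, "cpu0"), (0, "cpu1"), (0, "cpu2")]
heapq.heapify(workers)

def dispatch(ready_ops):
    """Assign each ready operation to the least-loaded worker."""
    placement = {}
    for op in ready_ops:
        load, name = heapq.heappop(workers)     # least-loaded worker
        placement[op] = name
        heapq.heappush(workers, (load + 1, name))  # account for new work
    return placement

placement = dispatch(["sq_x", "sq_y", "add", "mul"])
print(placement)  # e.g. {'sq_x': 'cpu0', 'sq_y': 'cpu1', ...}
```

Because the graph itself says nothing about placement, this decision can be remade at every firing.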
Then again, you apparently try to make this sound like an advantage.
For example, on the web page, you say:
The key implementation rule of stressflow is this: if atom A
communicates with atoms B and C, then all that is necessary to assure
synchronization and data cohesion between A, B, and C is direct
connection between A, B, and C. No execution layers or threads that
queue or sequence connections between A, B, and C together with any
other connections, and no "middlemen" in between. Being able to
accomplish this, stressflow becomes superior internal implementation
method for any "software wiring" method.
I will assume that you believe ScalPL (formerly Software Cabling)
qualifies as a "software wiring method", making your final sentence
false. If one wants to use a sufficiently restricted subset of ScalPL
(though still as powerful as Stressflow), then one does not *need* a
runtime system to facilitate the interactions, and as I have mentioned
in the past, early implementations of Software Cabling ("F-Nets" or
"LGDF2") didn't use one. A runtime system is nice to have, though, for
(for example) exploiting platform dynamics as mentioned above.
There are other advantages to an execution layer. In real life, the A,
B, and C of your statement may not be so clear-cut, and may not be known
until runtime. For example, some A may be related (at different times)
to any of B1, B2, B3, ..., B400000, and to any of C1, C2, ..., C546000.
While it is often technically possible to treat all of these
dependences statically and to test and enforce each (combinati