Casey Hawthorne
2009-11-16 19:08:27 UTC
Parallelism and Concurrency: I would separate the two concepts:
Parallelism - a single task split across multiple cores
Concurrency - several distinct tasks distributed among multiple cores
(a small sketch of the difference follows below)
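
As a rough illustration in Haskell (my own sketch, not from any
library; it assumes GHC with the parallel package, compiled with
-threaded):

    import Control.Parallel (par, pseq)
    import Control.Concurrent (forkIO)

    -- Parallelism: one task (a sum) split across cores.
    parSum :: [Int] -> Int
    parSum xs = a `par` (b `pseq` (a + b))
      where
        (ys, zs) = splitAt (length xs `div` 2) xs
        a = sum ys
        b = sum zs

    -- Concurrency: two unrelated tasks making independent progress.
    main :: IO ()
    main = do
      _ <- forkIO (putStrLn "task one, off on its own")
      putStrLn ("task two: " ++ show (parSum [1 .. 1000000]))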
For both, the biggest challenge is the overhead of communication
between the tasks relative to the useful work done on each task.
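
To make the overhead point concrete, here is an illustrative sketch
using the Strategies combinators from a recent version of the parallel
package (rseq and parListChunk assumed available): sparking every
element drowns the work in scheduling overhead, so one chunks instead.

    import Control.Parallel.Strategies (parList, parListChunk, rseq, using)

    expensive :: Int -> Int
    expensive n = sum [1 .. n]   -- stand-in for real per-element work

    -- One spark per element: overhead rivals the useful work.
    fineGrained :: [Int] -> [Int]
    fineGrained xs = map expensive xs `using` parList rseq

    -- One spark per 1000 elements: overhead amortised over real work.
    coarseGrained :: [Int] -> [Int]
    coarseGrained xs = map expensive xs `using` parListChunk 1000 rseq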
For parallelism (and concurrency), any immutable data can be stored
locally, reducing memory latency; and since memory prices keep
falling, more and more data can be kept local.
So, for problems with many readers and only one writer, things are not
so bad.
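
One illustrative shape for that case (again my own sketch, not a
library): readers share an immutable snapshot, and the lone writer
publishes a fresh snapshot by swapping a single reference.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (forever, replicateM_)
    import Data.IORef (newIORef, readIORef, writeIORef)
    import qualified Data.Map as Map

    main :: IO ()
    main = do
      ref <- newIORef (Map.empty :: Map.Map String Int)
      -- Many readers: each dereferences the current immutable
      -- snapshot; a reader may see a slightly stale one, which is
      -- acceptable for a read-mostly workload.
      replicateM_ 4 $ forkIO $ forever $ do
        snap <- readIORef ref
        print (Map.size snap)
        threadDelay 100000
      -- One writer: builds a new immutable map, then swaps the pointer.
      mapM_ (\n -> writeIORef ref (Map.fromList [("version", n)])
                     >> threadDelay 100000)
            [1 .. 10]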
For many writers and some readers, one wants as much immutable data as
possible, and also to localize the effects of mutable data. That would
seem to suit functional languages like Haskell, which are designed to
cleanly separate pure code from side-effecting code (code whose
effects are globally visible).
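
A tiny sketch of that separation (illustrative code of mine): the pure
core carries no IO in its type, so it can safely be mapped in
parallel, while effects stay in a thin shell.

    import Control.Parallel.Strategies (parMap, rseq)

    -- Pure core: no side effects, so parallel evaluation is safe.
    score :: Int -> Int
    score n = sum [1 .. n]

    -- Effectful shell: all IO confined here, and visible in the type.
    main :: IO ()
    main = print (parMap rseq score [100000, 200000, 300000])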
--
Regards,
Casey