Post by Adrian Hey
Post by Dirk Thierbach
And just for the record, IMO space usage is not a question of
"robustness". For me, "robustness" means "correct behaviour even in
unusual circumstances". Which is something Haskell is very good at.
Apart from the (un?)usual circumstances of running out of memory :-)
Nearly every program for a non-trivial problem will sooner or later run out
of memory, given a large enough input.
Post by Adrian Hey
I'm afraid your definition of robustness would be unacceptable even for
most ordinary industrial uses, let alone safety critical applications.
But "acceptable for industrial uses" or "acceptable for safety critical
applications" is not the same as "robust". "Robustness" is just one
criterion. There are other criteria like for example "real-time
capable" or "memory bounded".
Of course you're free to use "robust" in any way you like. I just
wanted to point out that at least I use the word differently, and if
you insist on using it your way, that can lead to confusion. Which is bad
for communication. And given the question of the original poster, it
looks like I'm not the only one.
Is it really too much to ask to use "bad at controlling space usage" or
something like that instead of "robust"?
Post by Adrian Hey
It's not really very satisfactory for "bucket loads of RAM + swap
space" server/workstation/desktop apps either IMO, but you can
probably get away with it here as occasional crashes seem to be
regarded as normal and acceptable program behaviour anyway.
I don't know what kind of Haskell programs you write, but my Haskell
programs don't have "occasional crashes". And it's certainly not
acceptable to have them. OTOH, there's no question that Haskell is not
suitable for, say, embedded systems. And no one claims it should be.
Post by Adrian Hey
But if you're designing a (hardware) product of any kind you need a
definite figure for the minimum memory needed by any embedded s/w in
order to make a reliable product.
Yes. So, for embedded systems, Haskell is out. As are probably all
garbage collected languages. But that doesn't mean that those languages
are not "robust".
Post by Adrian Hey
Alternatively the embedded s/w designers know the memory constraints
they're working with and proceed accordingly.
Yes, of course. But we're not talking about embedded systems.
Post by Adrian Hey
Post by Dirk Thierbach
There's plenty of imperative algorithms that have really bad
asymptotic space behaviour, so one could also claim using the same
reasoning that "C is not robust".
I don't think so, because you would either know their space behaviour
and budget accordingly or avoid the use of such algorithms altogether.
The point was not that one can avoid using a specific algorithm
(which is probably only possible if you're doing "trivial" stuff, like
embedded systems); the point was that it's not legitimate
to conclude from "there are programs that have bad space usage"
that "this language is not robust".
Post by Adrian Hey
The trouble with Haskell isn't so much that the space behaviour is
good/bad, the trouble is that it's unknown and unpredictable.
It's certainly not "unknown"; given a fixed compiler, it's actually
completely determined. It's also not completely unpredictable: with a
bit of experience, one can usually say in advance in many cases what
the space usage is going to be. Even across compilers.
What IS bad is that this doesn't always work. There can be surprises,
and cases where the algorithm is difficult enough to make predictions
about space usage hard, or cases where you have to jump through hoops
to get space usage down. But that's not really news, it has been like
this for a long time, and one can still write Haskell programs that
are quite useful in spite of that.
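One classic example of such a surprise, sketched here with hypothetical function names and not tied to any particular compiler, is the accumulator thunk built by a lazy left fold:

```haskell
import Data.List (foldl')

-- With the lazy foldl, the accumulator is never forced, so the fold
-- builds a chain of thunks (((0+1)+2)+...) whose size is proportional
-- to the input length before anything is evaluated.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at every step, so the same fold runs
-- in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```

Both compute the same result; only the space behaviour differs, and which one you get from `foldl` can depend on what the optimizer does with the strictness analysis.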
So it's not really the black-and-white picture that you're trying
to paint.
Post by Adrian Hey
C certainly sucks in many ways, but (as JH pointed out) the one good
thing about it is that the space behaviour of a given C program will be
uniform across all platforms/compilers/optimisation settings (at worst
differing only in known constant factors).
Yes. So C is a good language for embedded systems, while Haskell is not.
OTOH, Haskell is a good language for complicated algorithms, while C
is not.
Use the right tool for the job. Not all programming languages are
equally suited for all applications. That's the reason there are different
languages in the first place.
Post by Adrian Hey
There's a thread about this issue right now on the ghc-users mailing
list:
http://www.haskell.org/pipermail/glasgow-haskell-users/2009-June/017359.html
Interestingly Simon M states that "GHC will happily combine them with
CSE and possibly also lift them to the top-level; both transformations
might have a big impact on space behaviour." Yet more confusion and
ambiguity re. CSE and similar transformations :-)
Yes. And that's a direct consequence, again, of the fact that a Haskell
program specifies in the first place "what" to do, and only indirectly
(if at all) "how" to do it (i.e., which reduction strategy to use). On
the one hand, that is good, because it gives the compiler more freedom
to apply optimizations. On the other hand, it's bad, because there is no
*guarantee* about secondary characteristics like space and time complexity.
That's why I really like the idea of *additionally* specifying
those complexities. And then let the compiler figure out if it can
automatically meet these requirements by choosing the correct evaluation
strategy (maybe with a few more hints in the right places).
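The effect Simon M mentions can be sketched like this (hypothetical function names; whether a given compiler actually performs the sharing depends on its optimization settings):

```haskell
-- Written like this, each call can in principle rebuild [1..n] twice
-- and consume each copy incrementally in constant space:
meanTwoPass :: Int -> Double
meanTwoPass n = fromIntegral (sum [1..n]) / fromIntegral (length [1..n])

-- If CSE merges the two occurrences of [1..n] into one shared list
-- (equivalent to writing the sharing by hand, as below), the whole
-- list is retained between the `sum` traversal and the `length`
-- traversal, turning O(1) space into O(n).
meanShared :: Int -> Double
meanShared n = fromIntegral (sum xs) / fromIntegral (length xs)
  where xs = [1..n]
```

Both versions return the same value; the transformation only changes how much of the list is alive at once, which is exactly why it "might have a big impact on space behaviour".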
Post by Adrian Hey
Post by Dirk Thierbach
It would be more to the point to say that Haskell offers *less
direct control* over space usage. Which is certainly something that
could be improved.
It's difficult to see how this could ever be done with Haskell, for
cultural reasons mainly (not technical reasons).
I don't consider myself qualified to speculate about "cultural reasons"
and "the" Haskell community. Maybe you do, but I don't.
Post by Adrian Hey
I think we need to get back in touch with the imperative realities
of life and recognise that whatever benefits "pure functions" may
offer as mathematical abstractions, ultimately any real computation
on any real machine (even "pure" expression evaluation) is an *act*
that does have real world observable side effects that must be
understood and controlled.
And I think that this misses the point of this discussion completely.
The "observable side effects" (unless you mean by that space usage
only) are better dealt with using the appropriate abstractions, say,
monads.
Space usage is dependent on the evaluation strategy, and has little to
do with the "pure functional" vs. "imperative" approach. With a strict
evaluation strategy, space usage is much easier to figure out. OTOH,
sometimes lazy algorithms are much more elegant and a pleasure to
write (which is what I miss most when programming in, say, OCaml).
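A small illustration of that elegance, using the well-known lazy Fibonacci stream: the definition describes an infinite list, and the consumer decides after the fact how much of it is ever computed.

```haskell
-- An infinite list defined in terms of itself; only the prefix that
-- is actually demanded ever gets evaluated.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Laziness lets the consumer pick the stopping point:
smallFibs :: [Integer]
smallFibs = takeWhile (< 100) fibs
```

In a strict language the same idea needs explicit delay/force annotations or a generator; here the producer and the stopping condition stay cleanly separated.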
So, again, it's not black vs. white. Experience has shown that
sometimes (not always) strict evaluation is necessary, and also that
the current instruments Haskell has to deal with it are not simple
enough to use. So that's the problem that has to be fixed. And Haskell
is still enough of a research language to try out new ideas.
But there's no need to throw out the baby with the bathwater, and
complain that "all this lazy pure functional stuff is nonsense,
and we should go back to use imperative, simple programs". Nobody
forces you to use Haskell. Nobody says that Haskell should be the one
and only language. If I were to write a program for an embedded system,
I would use C (or maybe Forth), and not Haskell.
Post by Adrian Hey
I don't believe that most in the Haskell community would even accept
that a language that did address these issues properly was "purely
functional".
Huh? There is already research about purely functional languages
in combination with "type and effect systems", or, as I mentioned, the
type systems that give bounds for space usage. That doesn't make the core
language any less purely functional.
Post by Adrian Hey
e.g. given the controversy that surrounds the earlier
mentioned (and IMO very conservative) top level <- bindings proposal.
I don't really understand why you are so worked up about the top level
bindings either, but this posting is already long enough.
I think I'll EOT soon, as all this is more about philosophy and opinion
than about useful practical stuff, and discussing opinions isn't really
that interesting.
I mainly wanted to point out that it has been long known and
acknowledged that controlling space usage in Haskell can be hard, that
there are nevertheless ways to deal with that and in many cases it
isn't really a problem, and that I found the idea of additionally
specifying the complexity an interesting one. That's all.
- Dirk