Thursday, 5 June 2003
Hot Gingerbread and Dynamite
Schubert, string quintet in C, D 956. I seem to be timing my
entries to coincide with Schubert listening quite a bit these days.
I was away in Sydney at the end of last week for
a NICTA workshop on Formal Methods. The venue was the Scientia
building on the UNSW campus. I had a pleasant three days, gave a talk
that prompted some interested-sounding questions, and heard lots of
interesting talks by others.
As is often the case at these sorts of gatherings, there was a bit
of soul-searching about the utility of what we’re doing. Carroll
Morgan put it nicely when he said that we shouldn’t be too harsh on
ourselves, and that FM suffers a little from the same problem as
AI: the perception that
If it’s not boring, it can’t be Formal Methods.
The problem is that academic theory takes a long time to filter
out into industrial practice (and not all theory is going to make it
either, of course; some of it’s no use to man nor beast). Someone working
on theory now may well have to wait an awfully long time to
see their work gain broader acceptance. And given the failure rate,
how can they be sure of its real worth?
Finally, my last word on The matrix: this
article analyses the film from a religious and/or
philosophical point of view, and finds all sorts of interesting
stuff in it. This is all very well, though I can't help suspecting
that you’d find all sorts of interesting stuff in just about
anything if you looked hard enough, and if you knew what you
were going to find in advance. My problem with The
matrix is more that it’s poor science-fiction, and any
amount of religious imagery doesn’t alter that.
On the other hand, it is an entertaining film.
AI and Formal Methods aren't defined by boredom!
I don't think this is the first time I've said this, and I don't think I'm the first to say it, but...
AI and Formal Methods' "problem" is that they are working on problems that don't yet have useful enough solutions. When a useful enough solution is found for a class of problems, practitioners establish a new field of study to support the useful work they're doing. (And they don't call it AI/Formal Methods.) This has already happened for Expert Systems and Compilers, and it's happening now for static analysis and for assertion languages used in hardware verification.
"What is Philosophy?" has a similar answer. At the time of the Ancient Greeks, Philosophy covered every area of critical inquiry. Today, Philosophy is all the bits of critical inquiry leftover after demonstrably useful areas of study have established pedagogical, methodological and procedural norms and set themselves up as distinct disciplines. (This happened for Physics and all the hard sciences, and has most recently happened for Psychology.)
I'm now consumed by this vision of Formal Methods and AI as "primeval sludge disciplines", out of which useful stuff can emerge. :-) But this point would agree with the perception that Carroll characterised as "If it's not boring...": the 'interesting' fields may have spawned from FM, but once they have, they don't get called FM anymore.
I don't think the situation with philosophy is quite analogous, however. There the break between the ancestor and child disciplines seems more absolute. Perhaps this is because the AI/FM examples are still within a field recognisable as computer science, where "pedagogical, methodological and procedural norms" are held in common across fields from compilers to concurrency theory.
Somehow it doesn't feel right to claim that FM (AI) is to computer science what philosophy is to all fields of knowledge.
My point, I guess, was that "boring" was a poor word to use, at least without saying who was finding the topics boring, and why. If we replace "interesting" (to whom?) with "useful enough" (to non-academic practitioners), I'd be closer to agreeing with Carroll's view.
Anyway, it sounds like it would have been an (academically) "interesting" workshop. :-)
I wasn't *quite* saying that FM/AI is to compsci what philosophy is to all fields of knowledge... :-) More like, e.g. AI is to Expert Systems what Philosophy is to Psychology. Most child disciplines broke away from Philosophy quite a while ago (way before even computer science emerged as a discipline!) so it'd be natural that they would seem more distinct.
I realise that AI/Expert Systems are "smaller" fields than Philosophy/Psychology - I think my analogy can break down over this point. But all analogies break down at some point!
Compilers and concurrency theory both share some norms, but so do chemistry and biology. There's a lot of stuff that's really quite specific to compilers - of course there is the body of academic knowledge, but there is also a body of not-quite-academic knowledge about techniques, trade-offs, and experience in compiler implementation, optimisation, etc. that really doesn't have much direct bearing on concurrency theory.
Fair enough. I guess this comes back to the idea I expressed in the entry: how is the poor FM practitioner supposed to know what part of their field is about to calve off and become a useful new area of study?
I'm beginning to think that one way is to expose yourself to lots of industrial practice, and hope to gain an insight into what today's problems on the ground look like. Getting such opportunities is a problem in itself, and it might prompt rather short-termist thinking. Moreover, it might not be very interesting!