Don’t click it

My normal policy is not to link to Flash-based sites because I think it just encourages them. However, dontclick.it is snazzy enough to be an exception. Try it!

I’ve seen a few examples of gesture-based user interfaces and, while this is one of the best-looking, I’m still skeptical. Without primitives (such as buttons, sliders, and links) with familiar and predictable behaviours, it’s just too difficult to be confident about what a gesture will do to an object. And without predictable consequences to an interaction, I don’t know what to do, and I get buried under the massive visual distraction that occurs when I mouse around to try things.

So, as an example of user-interface design it falls somewhat short. But dammit, it’s pretty.

(Via David Weinberger)

Atom vs RSS

It seems to have been a long time coming, but Atom has finally reached 1.0 status as an IETF standard. Lots of smart people seem to have worked hard to produce a high-quality result. Judging by this comparison of features, it certainly seems better than RSS 2.0.

Unfortunately, I don’t see it gaining much traction any time soon.

The problem is that Atom doesn’t really solve any problems for most people. As a syndication format, Atom solves some problems to do with the identity of posts and optional fields. This is definitely A Good Thing. As I said, Atom is better than RSS. The problem, though, is that RSS is good enough. There isn’t much incentive for anyone to change. This is not because a lot of infrastructure needs to be ripped out: on the contrary, aggregator vendors will, I’m sure, quickly add Atom 1.0 support to their products, and many CMS vendors will provide Atom feed capabilities. Anyone who then wants Atom can use it. I don’t see that happening much, because no-one (feed consumers or producers) really gains much. It’s like IMAP vs POP3. IMAP is widely supported, has some great features, and is certainly technically superior; but everyone gets by with POP3 because it is good enough.
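
To make the “identity of posts” point concrete, here’s a rough sketch of my own (using Python’s standard library, with placeholder id and date values) of the bare minimum Atom 1.0 demands of an entry: every entry must carry an id, a title, and an updated timestamp, whereas the nearest RSS 2.0 equivalents (guid, pubDate) are optional.

    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    ATOM_NS = "http://www.w3.org/2005/Atom"
    ET.register_namespace("", ATOM_NS)

    # The three elements Atom 1.0 makes mandatory for every entry.
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    ET.SubElement(entry, f"{{{ATOM_NS}}}id").text = "tag:example.com,2005:post-42"  # placeholder id
    ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = "Atom vs RSS"
    ET.SubElement(entry, f"{{{ATOM_NS}}}updated").text = datetime(
        2005, 8, 1, tzinfo=timezone.utc
    ).isoformat()

    print(ET.tostring(entry, encoding="unicode"))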

As a publishing protocol, Atom is infinitely better than the alternative from the RSS world: the MetaWeblog API. Anyone who has ever had to write code to generate conformant MetaWeblog messages (and I have) knows that the spec is a joke: some things just plain can’t be done, and everything else can be done in multiple ways. But no-one except a small number of developers writing blogging clients really cares. Again, MetaWeblog is good enough because its shortcomings are not a problem for most people.
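
For anyone who hasn’t had the pleasure, this is roughly what driving MetaWeblog looks like; a minimal sketch using Python’s standard xmlrpc.client, with a made-up endpoint, blog id, and credentials. Everything beyond title and description is where the ambiguity starts.

    import xmlrpc.client

    # Hypothetical endpoint and credentials; substitute your blog's XML-RPC URL.
    server = xmlrpc.client.ServerProxy("https://example.com/xmlrpc")

    post = {
        "title": "Atom vs RSS",
        "description": "<p>Body of the post as HTML.</p>",
        # Beyond title/description, support varies: categories, dateCreated,
        # and assorted extension fields are handled differently by each server.
        "categories": ["Syndication"],
    }

    # metaWeblog.newPost(blogid, username, password, struct, publish)
    post_id = server.metaWeblog.newPost("1", "username", "password", post, True)
    print(post_id)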

It’s a shame. As a techie, I want the best solution to succeed. I may be wrong, but I don’t think that is going to happen in this case.

The lesson? Go read The Cathedral and the Bazaar again. Build something that does the job; release early and often; allow users to extend what you’ve built.

None of this is new.

I’m Back

Those not reading this in an aggregator will have noticed that I’ve made some fairly major changes. I’ve moved from the home-made CityDesk-based site to one based on Community Server. It certainly looks more professional.

All the old content is there, although some of it might be incorrectly titled. Photos that were part of posts are also missing. I’ll sort those issues out in the next few days.

The new RSS feed is here. The old feeds will hang around for a month or so, but now redirect to the new feeds so aggregators should update automatically.
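
For the curious, the switch-over relies on nothing more exotic than (presumably) a permanent redirect: the old feed URL answers with a 301 and a Location header pointing at the new feed, which is what a well-behaved aggregator follows. A quick sketch with placeholder host and path, since the real feed addresses aren’t spelled out here:

    import http.client

    # Placeholder host and path; not the actual old feed URL.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/old/rss.xml")
    resp = conn.getresponse()

    # A 301 plus a Location header is what tells an aggregator where to go next.
    print(resp.status, resp.getheader("Location"))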

I hope that this new infrastructure will make it easier for me to update this blog. More on this, and the reasons for the six-month hiatus, soon.

Neural Simulation

New Scientist has a report that IBM and the École Polytechnique Fédérale de Lausanne in Switzerland are getting together to produce a simulation of a whole human brain using a custom-built supercomputer. Apart from the sheer audacity of this project, what is interesting is that they intend to model individual neurons using a detailed bio-electrical model. Normally, neurons are modelled as simple idealised objects, but this simulation will be based on a model of how real neurons behave electrically.

For over a decade [they] have been building a database of the neural architecture of the neocortex, the largest and most complex part of mammalian brains.

Using pioneering techniques, they have studied precisely how individual neurons behave electrically and built up a set of rules for how different types of neurons connect to one another.

Very thin slices of mouse brain were kept alive under a microscope and probed electrically before being stained to reveal the synaptic, or nerve, connections.

I find this interesting because, back in the late ’80s, I worked for a year at the (now defunct) IBM UK Scientific Centre in Winchester. For a lot of that time I was involved with a project at Southampton University to model the electrical characteristics of hippocampal neurons taken from rats. The brain samples were sliced, probed, and stained just as described in the New Scientist article. The reason for staining them is so that each neuron’s shape can be mapped, which allows you to determine its volume and dendrite cross-sectional areas – which in turn determine electrical properties such as capacitance and firing latency (if I remember right). I wrote the software that semi-automatically built a 3D ball-and-cone model of the cell from a set of overlapping scans of the brain slices.
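
To give a flavour of the geometry-to-electricity link, here’s a rough back-of-the-envelope sketch with assumed textbook constants (emphatically not the software I actually wrote): for a cylindrical dendrite compartment, membrane capacitance scales with surface area, axial resistance with cross-sectional area, and their product per unit area sets the membrane time constant.

    import math

    # Ballpark textbook values, not figures from the project.
    C_M = 1.0e-6      # specific membrane capacitance, F/cm^2 (~1 uF/cm^2)
    R_M = 20000.0     # specific membrane resistance, ohm*cm^2 (assumed)
    RHO_A = 150.0     # axial resistivity of cytoplasm, ohm*cm (assumed)

    def compartment_properties(length_um: float, diameter_um: float):
        """Passive electrical properties of one cylindrical dendrite compartment."""
        length_cm = length_um * 1e-4
        radius_cm = (diameter_um / 2.0) * 1e-4
        surface_area = 2.0 * math.pi * radius_cm * length_cm      # lateral area, cm^2
        cross_section = math.pi * radius_cm ** 2                  # cm^2
        capacitance = C_M * surface_area                          # farads
        membrane_resistance = R_M / surface_area                  # ohms
        axial_resistance = RHO_A * length_cm / cross_section      # ohms
        time_constant = membrane_resistance * capacitance         # seconds (= R_M * C_M)
        return capacitance, axial_resistance, time_constant

    cap, r_axial, tau = compartment_properties(length_um=100.0, diameter_um=2.0)
    print(f"C = {cap:.3e} F, R_axial = {r_axial:.3e} ohm, tau = {tau * 1000:.1f} ms")
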
Back in 1989, the best we could do was simulate a single neuron: anything more was just computationally infeasible. Now, just fifteen years later, it makes sense to talk about working towards simulating an entire brain in ten years’ time. How things change.

Classic Games

If (as I did) you spent any time back in the eighties playing Scott Adams adventure games, you might want to surf over to here and download the games. I remember playing Voodoo Castle on an old Apple ][, and Secret Mission on my Vic 20. Ah… nostalgia…