Damn Small and Damn Cool

USB ‘thumb’ (or ‘pen’) drives are one of those low-profile technological developments that you mostly don’t give much thought to; but when you do, you realise just how amazing they are. Back in 1989 I bought my first 20MB hard-drive for my first PC. The drive was big (perhaps a bit bigger than a video-cassette box), heavy and relatively fragile (if you were going to move the PC you had to remember to manually park the drive heads before powering down). It cost over £200. Today I can pay £25 for a 500MB thumb drive that weighs essentially nothing and can live quite happily in the hostile environment of my work bag. Pretty amazing, when you think about it. Which I usually don’t.

A few days ago, while on the train home from work, I was reading an article about how to install Linux on a thumb drive: the idea being that you can carry around a fully-configured system in your pocket for those inconvenient times when you don’t have access to your usual PC. Just find any old PC, plug in the thumb drive, reboot, and (assuming the BIOS can handle it) the system boots the OS on the thumb drive rather than the one on the hard drive. I thought that this was moderately interesting but nothing that special. As it happened, I’d spent a large part of the day (the part I get paid for) running tests on a client-server system where the clients and server were each running on virtual machines using VMware. VMware is another piece of technology that I keep forgetting to remember is deeply impressive. It essentially allows you to run multiple copies of Windows (or Linux) on the same machine at the same time. Each ‘guest’ machine thinks it is a real PC, and can access network resources via the real host PC.

So I thought, booting from thumb drives is all very well, but what I really want is something like VMware that lets me run a guest OS from a thumb drive without rebooting. That way I don’t have to disturb the host PC (which, in say a cyber-cafe, may be locked down to prevent reboots). At first I thought one of the ‘live CD’ Linux distributions might do this, but it turned out they all require a reboot.

Then I found something remarkable: the Embedded version of Damn Small Linux. DSL is a small-footprint (50MB max) Linux distro that can boot from CD, and very good it is too. But the magic is in the Embedded version. This is essentially a pre-configured copy of DSL that runs in an open-source, VMware-like virtualisation system called QEMU. Unzip DSL Embedded onto a thumb drive (or a hard-drive folder), run a batch file (no installation needed), and Damn Small Linux boots up inside its own window. After a bit of grinding an X11-based desktop appears. If the host PC is on a DHCP-enabled network then the Linux machine acquires its own IP address and you can access the net using the pre-installed browser (Firefox), email and FTP apps. Clients for VNC and terminal services are included, and I easily logged into my home server.
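
For the curious, the batch file is really just a wrapper around a QEMU command line. I haven’t dissected the exact script that ships with DSL Embedded, so the file names and memory size below are illustrative, but the invocation boils down to something roughly like this:

@echo off
REM Illustrative sketch only; not DSL Embedded's actual script.
REM -L .      tells QEMU where to find its BIOS files (the current folder)
REM -m 128    gives the guest 128MB of RAM
REM -cdrom    points QEMU at the DSL ISO image
REM -boot d   boots from the virtual CD rather than a virtual hard disk
qemu.exe -L . -m 128 -cdrom dsl.iso -boot d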

This, my friends, is very, very cool. And since it is also very, very free, I urge you to try it. I couldn’t find the Embedded version on the DSL download page, but you can get it from ibiblio here (53MB).

I’m going to play with this and see what it can do. First up is to get OpenVPN working so that I can tunnel into my home network. Watch this space!

How to redirect an ASP.NET-generated RSS feed

One thing you’ll want to do if you change the location of your blog is to have your subscribers automatically pick up the new location of your RSS feed. This post just documents a trick I used recently to do exactly that.

If you are lucky enough to be using an ASP.NET-based blogging tool then it is fairly easy. You’ll probably have a .aspx file that emits the RSS XML. Most aggregators will interpret an HTTP 301 (Moved Permanently) response to a request for this file as an instruction to automatically change the feed subscription location. To generate this response, simply replace the contents of the feed-generating file with something like the following:

<%@ Page Language="c#" %>
<%@ Import Namespace="System.Web" %>

<%
// A 301 status plus a Location header tells aggregators the feed has moved permanently.
Response.StatusCode = 301;
Response.AddHeader("Location", "http://www.mynewdomain.com/mynewfeed.aspx");
Response.End();
%>

Replace the URL with the URL of your new feed, and that’s it. Then sit back and monitor the traffic on your old and new sites until nobody is hitting the old feed and everyone is seeing the new one. Then delete the old site.

New Orleans

The New Orleans Times-Picayune is publishing breaking news in weblog format. It makes for a pretty disturbing read: destruction and death and misery. A lot of people have had their lives wrecked, and are going to be living like refugees for years to come.

Why were so many people apparently too poor to be able to get transport out of the city, and/or uninformed about the likely consequences?

(Via Dave Winer)

Update (14.10): If you believe this, then it’s all too clear why so many people stayed in the city: the state government assumed that people would use private transport to evacuate. I’m not sure whether cities in the US have standing plans for total evacuation, but everyone knew that the hurricane was coming at least 24 hours before it hit the coast. Surely someone must have thought “what about the people who don’t have cars?” If the comment is correct, and even 10% of the city population was left without an escape option, then 48,000 people were just abandoned.

I wonder whether, in general, evacuation plans exist for cities in the UK. Manchester (where I live) has had an evacuation plan for the city-centre for some years, and Google knows about similar plans for other cities, but nothing that seems to cover an entire city. Is this something that can be adequately planned for?

Don’t click it

My normal policy is not to link to Flash-based sites because I think it just encourages them. However, dontclick.it is snazzy enough to be an exception. Try it!

I’ve seen a few examples of gesture-based user interfaces and, while this is one of the best looking, I’m still skeptical. Without primitives (such as buttons, sliders, and links) with familiar and predictable behaviours, it’s just too difficult to be confident about what a gesture will do to an object. And without predictable consequences to an interaction, I don’t know what to do and I get buried under the massive visual distraction that occurs when I mouse around to try things.

So, as an example of user-interface design it falls somewhat short. But dammit, it’s pretty.

(Via David Weinberger)

Atom vs RSS

It seems to have been a long time coming, but Atom has finally reached 1.0 status as an IETF standard. Lots of smart people seem to have worked hard to produce a high-quality result. Judging by this comparison of features, it certainly seems better than RSS 2.0.

Unfortunately, I don’t see it gaining much traction any time soon.

The problem is that Atom doesn’t really solve any problems for most people. As a syndication format, Atom solves some problems to do with the identity of posts and optional fields. This is definitely A Good Thing. As I said, Atom is better than RSS. The problem, though, is that RSS is good enough. There isn’t much incentive for anyone to change. This is not because a lot of infrastructure needs to be ripped out: on the contrary, aggregator vendors will, I’m sure, quickly add Atom 1.0 support to their products and many CMS vendors will provide Atom feed capabilities. Anyone who then wants Atom can use it. I don’t see that happening much, because no-one (feed consumers or producers) really gains much. It’s like IMAP vs POP3. IMAP is widely supported, has some great features, and is certainly technically superior; but everyone gets by with POP3 because it is good enough.
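
To make the identity point concrete: in RSS 2.0 almost everything on an item is optional, including the <guid> that identifies it, whereas an Atom 1.0 entry is required to carry an <id> and an <updated> timestamp. The element names below come from the two specs; the values are made up for illustration.

<!-- RSS 2.0: a valid item needs only a title or a description; guid and pubDate are optional -->
<item>
  <title>Damn Small and Damn Cool</title>
  <link>http://www.example.com/2005/09/dsl-embedded</link>
</item>

<!-- Atom 1.0: every entry must have exactly one id, title and updated element -->
<entry>
  <id>tag:example.com,2005:/2005/09/dsl-embedded</id>
  <title>Damn Small and Damn Cool</title>
  <updated>2005-09-11T10:30:00Z</updated>
  <link href="http://www.example.com/2005/09/dsl-embedded"/>
</entry>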

As a publishing protocol, Atom is infinitely better than the alternative from the RSS world: the MetaWeblog API. Anyone who has ever had to write code to generate conformant MetaWeblog messages (and I have) knows that the spec is a joke: some things just plain can’t be done, and everything else can be done in multiple ways. But no-one except a small number of developers writing blogging clients really cares. Again, MetaWeblog is good enough because its shortcomings are not a problem for most people.

It’s a shame. As a techie I want the best solution to succeed. I may be wrong, but I don’t think that is going to happen in this case.

The lesson? Go read The Cathedral and the Bazaar again. Build something that does the job; release early and often; allow users to extend what you’ve built.

None of this is new.

I’m Back

Those not reading this in an aggregator will have noticed that I’ve made some fairly major changes. I’ve moved from the home-made CityDesk-based site to one based on Community Server. It certainly looks more professional.

All the old content is there, although some of it might be incorrectly titled. Photos that were part of posts are also missing. I’ll sort those issues out in the next few days.

The new RSS feed is here. The old feeds will hang around for a month or so, but they now redirect to the new feed, so aggregators should update automatically.

I hope that this new infrastructure will make it easier for me to update this blog. More on this, and the reasons for the six-month hiatus, soon.

Neural Simulation

New Scientist has a report that IBM and the École Polytechnique Fédérale de Lausanne in Switzerland are getting together to produce a simulation of a whole human brain using a custom-built supercomputer. Apart from the sheer audacity of this project, what is interesting is that they intend to model individual neurons using a detailed bio-electrical model. Normally, neurons are modelled as simple idealised objects, but this simulation will be based on a model of how real neurons behave electrically.

For over a decade [they] have been building a database of the neural architecture of the neocortex, the largest and most complex part of mammalian brains.

Using pioneering techniques, they have studied precisely how individual neurons behave electrically and built up a set of rules for how different types of neurons connect to one another.

Very thin slices of mouse brain were kept alive under a microscope and probed electrically before being stained to reveal the synaptic, or nerve, connections

I find this interesting because, back in the late 80s, I worked for a year at the (now defunct) IBM UK Scientific Centre in Winchester. For a lot of that time I was involved with a project at Southampton University to model the electrical characteristics of hippocampal neurons taken from rats. The brain samples were sliced, probed, and stained just as described in the New Scientist article. The reason for staining them is so that the neuron’s shape can be mapped, which allows you to determine its volume and dendrite cross-sectional area – which in turn determine electrical properties such as capacitance and firing latency (if I remember right). I wrote the software that semi-automatically built a 3D ball-and-cone model of the cell from a set of overlapping scans of the brain slices.
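
For the curious, the link between geometry and electrical behaviour in these compartmental models comes from standard passive cable theory. The relationships below are my recollection of the textbook formulas, not the equations that particular project used:

C_m = c_m · A            (membrane capacitance scales with membrane surface area A; c_m is roughly 1 µF/cm²)
r_a = 4ρ_i / (π · d²)    (axial resistance per unit length falls as the dendrite diameter d, and hence its cross-sectional area, grows; ρ_i is the cytoplasmic resistivity)
τ_m = R_m · C_m          (the membrane time constant, which governs how quickly a compartment charges and so contributes to firing latency)
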
Back in 1989 the best we could do was simulate a single neuron: anything more was just computationally infeasible. Now, just fifteen years later, it makes sense to talk about working towards simulating an entire brain in ten years’ time. How things change.