Adventures in Yak Shaving with System.CommandLine

Over the last few months I’ve been wasting odd moments of free time tinkering with some code to extract pictures from a Google Takeout archive. The idea is to use the JSON metadata in the archive to restore the image timestamps (which Google strips from the embedded image metadata, for reasons best known to itself), rationalise the file naming, separate edits and originals, and so on. The ultimate aim is to be able to grab a Takeout, extract and locally archive all the images from some period of time (say, the last year) so that I can then manually remove those images from my Google Photos collection. And the aim of that is to reduce my exposure to Google arbitrarily closing my account and consequently deleting my pics. And because I’m old fashioned enough to distrust 100% reliance on “the cloud”.

So I wrote some code and got it working as a C# .NET 6 command-line app. It’s a bit rough, but it does what I need.

And then I had the genius idea of restructuring it as a set of providers that could be used to extract all the other stuff that you might find in a Takeout archive: contacts, emails, whatever. And of course this would need command-line options that apply to each provider, to allow the output to be customised. Which requires a way of grouping those options – basically I needed the idea of “commands” that delimit groups of options and correspond to the different types of media in the archive. I also needed some options that are global and not associated with a command – for input and output directories, for example. At this point my old CommandLineParser class, which I’ve been dropping into console apps for a decade or so, was not going to cut it.

So I did some reading and decided to try System.CommandLine – the shiny new way to parse command line parameters. This is still in beta but my initial impression was favourable. Basically, you create an object model of your command-line syntax, hook it up to handlers, and let the library do the grunt work of parsing the command-line into values, handling errors, automatically generating help text (particularly impressive), and lots of other stuff.

Here’s a little test app that I made:

    public static int Main(string[] args)
    {
        // audio command
        var thresholdOpt = new Option<int>("--threshold");
        var scaleOpt = new Option<double>("--scale");
        var audioCommand = new Command("audio") { thresholdOpt, scaleOpt };
        audioCommand.SetHandler(
            (int threshold, double scale) => { Console.WriteLine($"threshold={threshold}, scale={scale}"); },
            thresholdOpt, scaleOpt);

        // video command
        var monochromeOpt = new Option<bool>("--mono", description: "Monochrome");
        var colourOpt = new Option<bool>("--colour");
        var brightnessOpt = new Option<int>("--brightness");
        var videoCommand = new Command("video") { monochromeOpt, colourOpt, brightnessOpt };
        videoCommand.SetHandler(
            (bool mono, bool colour, int brightness) => { Console.WriteLine($"mono={mono}, colour={colour}, brightness={brightness}"); },
            monochromeOpt, colourOpt, brightnessOpt);

        // root command, with global options and the two subcommands
        var infileOpt = new Option<FileInfo>("--i");
        var outfileOpt = new Option<FileInfo>("--o");
        var rootCommand = new RootCommand("test") { infileOpt, outfileOpt, audioCommand, videoCommand };
        rootCommand.SetHandler(
            (FileInfo infile, FileInfo outfile) => { Console.WriteLine($"i={infile}, o={outfile}"); },
            infileOpt, outfileOpt);

        return rootCommand.Invoke(args);
    }

This implements the commands for an entirely fictitious test program that might be invoked with arguments like:

test --i "input.dat" --o "output.dat" audio --threshold 42 --scale 3.14 video --mono --brightness 60

Hopefully the similarity to my Takeout extractor is obvious.

I was initially a bit mystified by the use of lambdas as “handlers” that are passed the values of various options. This meant that there was no single place in the code where everything about the parse was “known”. I didn’t understand why it was designed that way, but I thought I could work around it.

The first difficulty I encountered was that, while it is possible to associate options both with the root command and with its subcommands (which have their own options), only the first command on the command line is ever parsed. So if I include the audio command, the video command is ignored. Also, if any command is included in the args array, then options associated with the root command itself (e.g. --i and --o) are not parsed. Clearly I was either not understanding something, or I wasn’t using the library the way it was designed to be used. I opened an issue on GitHub and fairly quickly got confirmation that it was the latter.

There was, however, cause for hope: I could split the command line at command-token boundaries and parse each subset of arguments separately. Since RootCommand.Invoke() is actually an extension method (more on this below) I wrote a new extension method to do this:

    public static int InvokeMultiCommand(
        this RootCommand command,
        string[] args)
    {
        // The root command plus its subcommands: these are the tokens
        // at which SegmentArgs() splits the argument list.
        var commands = new List<Command>() { command };
        commands.AddRange(command.Subcommands);
        foreach (var seg in SegmentArgs(args, commands.ToArray()))
        {
            var exitCode = command.Invoke(seg);
            if (exitCode != 0)
                return exitCode;
        }
        return 0;
    }

SegmentArgs() does the job of chopping up the string[] arguments array into a string[][].
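I haven’t shown SegmentArgs() here, but a minimal sketch of the idea might look like the following. This is an assumption about how it could work, not the actual implementation, and for simplicity it takes the command names as plain strings: walk the args, and start a new segment whenever a token matches a known command name.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ArgSegmenter
{
    // Hypothetical sketch: split args into one segment per command token,
    // keeping any leading root-level options as the first segment.
    public static string[][] SegmentArgs(string[] args, string[] commandNames)
    {
        var segments = new List<string[]>();
        var current = new List<string>();
        foreach (var arg in args)
        {
            // A command name starts a new segment (unless it is the very first token)
            if (commandNames.Contains(arg) && current.Count > 0)
            {
                segments.Add(current.ToArray());
                current = new List<string>();
            }
            current.Add(arg);
        }
        if (current.Count > 0)
            segments.Add(current.ToArray());
        return segments.ToArray();
    }
}
```

Given the example invocation earlier, this would yield three segments (the root options, the audio command with its options, and the video command with its options), each of which can then be handed to a separate Invoke() call.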

With that working, I looked at how to customise the help output to include all commands and their options. As it stood, invoking the app with the --help option listed the commands but little else: I needed descriptions for all the commands, and I needed their options to be listed too.

After reading the documentation for help customisation, and digging into how the help is generated, I realised that what I’d done so far was the easy bit. The library provides a CommandLineBuilder class, instances of which can be wired to lambdas that customise how it generates help text. But having done that, the CommandLineBuilder instance is responsible for doing the parse via its Invoke() method, not the root command. And there didn’t seem to be a way to make this compatible with the code I’d already written: I wanted to parse commands separately but have help generation that was aware of the syntax of all commands. There seemed to be a fundamental mismatch.

I tried extending CommandLineBuilder by the deeply unfashionable approach of sub-classing, but its Build() method (which generates a Parser object to actually do the parse) isn’t virtual so I couldn’t override it. And many of its key methods are implemented as extension methods, so I couldn’t override them either.

I tried wrapping CommandLineBuilder instead, but I found that I was having to wrap more and more of its functionality. And because CommandLineBuilder is injected as a dependency at various points, my non-overriding wrapper methods weren’t being called anyway.

So I gave up shaving the yak. At the top of my stack of requirements, I just wanted to archive photos. At the bottom of the stack I was hacking on a command-line parsing library to extend it in an unusual way. It was an interesting exercise, but I was wasting time. It’s always good to know when to give up and pop the stack.

Cleaning Up a Samsung Galaxy Tab S7+

I recently bought a Galaxy Tab S7+ tablet to replace an ancient and failing laptop, and on a hunch that it might stop me from being distracted by my phone. I’m very pleased with it, although I’m still not sure whether it’s had a positive impact on my tendency to stare at my phone.

But man does it come loaded down with a lot of crap. I don’t need a Samsung contacts app – the Google one is perfectly adequate. Same with calendar and photo gallery. And I don’t need text messaging and phone apps. And I certainly don’t need two personal assistant apps (yes, you, Bixby). And of course most of them can’t be uninstalled – because someone at Samsung thinks I want their particular form of value add.

So of course I decided to remove them anyway. This post describes the process. But before that:

Warning! Doing what is described here could cause data loss, device instability, and/or could brick your device completely. Don’t copy what I did unless you understand the consequences and accept responsibility. I can’t be held responsible for your choices.

The first thing I did was quickly remove components of Bixby that are integrated into the user interface, using the steps described in this article on Android Central.

It is possible to uninstall “uninstallable” apps using adb – the Android Debug Bridge that is part of the Android platform tools. So the first thing to do is to download and install the tools on a PC or Mac. Make sure the bin directory is in your path. Then enable developer mode on your Android device and connect it to your PC via USB. I’m not going to explain how to do any of this: if you don’t already know then I think it’s fair to say that you really shouldn’t be contemplating any of this.

The command to uninstall an app or package is adb -d shell pm uninstall --user 0 <packagename>

The --user 0 switch removes the package for the device’s primary user (user 0) only; the system copy of the app stays on the device, which is why this works for apps that can’t otherwise be uninstalled.

So all that we need to do now is determine the package names for the apps we want to uninstall. There are probably a few ways to do this, but I found that the easiest way was to install an app such as APK Extractor, which lists the installed apps and their package names. I didn’t extract any apks with it, and I uninstalled it later.
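Alternatively (assuming adb is on your path and the device is connected), you can get a list of candidate package names directly from adb; the grep filter shown is just an illustration:

```shell
# List all packages installed for the primary user
adb -d shell pm list packages --user 0

# Narrow the list to likely candidates (the filter term is illustrative)
adb -d shell pm list packages --user 0 | grep -i bixby
```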

Then it’s just a matter of running the above command using the names of the packages that you want to remove. Here are the ones I removed:

- Calendar
- Bixby Routines
- Bixby Vision
- Voice Service
- Notes (two packages)
- Phone
- Messages
- Noteshelf (com.fluidtouch.noteshelf2)
- Calculator
- Files (1)

(1) Uninstalling this package makes it possible to install Google Files. I have no idea how that works.

The above is very much a minimum set. There are still bits of Bixby and other packages installed but they don’t seem to intrude on my experience of using the device and it seems a lot cleaner now. As a device I really like it.

Once again: this is what worked for me. Your experience may differ. I hope it is useful.


Being Mobile

Yesterday I bought a T-Mobile G1 – the “Googlephone”.

I haven’t had much time to play with it yet, but it seems like a great piece of technology. The broadband access to email and maps, the user interface, and the general build quality are all very good. I hear it also does phone calls. I hope to have more to say about it later.

But there’s a story here.

Some time in early 1998 I read the following on David Bennahum’s (now long-defunct) Meme mailing list:

8:30 am, mid-April, standing on the platform of Track 3, waiting for the Times Square shuttle to take me to Grand Central Station. About six hundred people are queued up, clustered in blobs along memorized spots where we know the subway doors will open. Most are just standing. Some are reading the morning papers. I’m downloading email through a metal ventilation shaft in the ceiling. I point my wireless modem like a diving rod toward the breeze coming down from the street above. I can see people’s feet criss-crossing the grate. If wind can get down here this way, I figure packets of data can too.   (Link)

He was describing his experience of mobile, wireless internet connectivity using a Palm Pilot with an attached (bulky) Novatel Minstrel modem. This image stuck in my mind. I had had net access since the late eighties as a student, and limited access at work (I’m a developer) since about 1993, but always tethered to a desk. This mobile internet idea was cool. I decided that I had to get some of this.

In late 1998 I bought my first mobile computing device – a Philips Velo 500. This was pretty cutting-edge at the time: about as big as a thick paperback, it ran Windows CE 2, had a monochrome LCD display with a green backlight, and a “chiclet” keyboard. Crucially, it also had a built-in 19.2kb/s modem, and a built-in browser and email client. I had great fun plugging it into phone lines and showing people “look… email… web…!”. It wasn’t all that impressive, though, and it was too big and heavy to fit into a pocket. I didn’t yet have a mobile phone, and the Velo wouldn’t have connected to one anyway. All in all, not really what I’d imagined.

In late 1999 I bought a Palm Vx. This was a significant improvement. Even with its tiny 33.6kb/s modem clipped on, it would fit comfortably in a jacket pocket. I bought some third-party browser and email software. Then I got a mobile phone with an IrDA modem, and suddenly I could sit in Starbucks downloading my email like a proper alpha geek. For a couple of years that was my primary personal email system. It was slow, though: GSM data runs at about 9kb/s. Also, keeping the phone in line of sight with the Vx was awkward. But it worked.

By 2004 I had acquired an HP 4150 PDA and a GPRS phone. This was more like it! The 4150 had a colour screen with decent resolution, and the Bluetooth/GPRS connection was quite fast. It was annoying that I had to fiddle with both devices to turn Bluetooth on before accessing the net, the data charges were pretty steep, and I now had two devices to carry around. The main problem, though, was that Windows CE was just plain awful to use. Hmm. Still not right.

So now I have this G1. It has a high-resolution screen, an okay keyboard, always-on broadband, and it’s fairly small. It’s my fourth generation of personal mobile internet device, and it finally seems that it might be what I wanted back in 1998 – although I didn’t know what that was at the time. We’ll see.

(I still have the Velo and the Vx.)