What exactly was the point of [ “x$var” = “xval” ]?

In shell scripting you sometimes come across comparisons where each value is prefixed with "x". Here are some examples from GitHub:

if [ "x${JAVA}" = "x" ]; then
if [ "x${server_ip}" = "xlocalhost" ]; then
if test x$1 = 'x--help' ; then

I’ll call this the x-hack.

For any POSIX-compliant shell, the value of the x-hack is exactly zero: this comparison works without the x 100% of the time. But why was it a thing?

Online sources like this StackOverflow Q&A are a little handwavy, saying it’s an alternative to quoting (oof), pointing towards issues with "some versions" of certain shells, or generally cautioning against the mystic behaviors of especially ancient Unix systems without concrete examples.

To determine whether or not ShellCheck should warn about this, and if so, what its long form rationale should be, I decided to dig into the history of Unix with the help of The Unix Heritage Society’s archives. I was unfortunately unable to peer into the closely guarded world of the likes of HP-UX and AIX, so dinosaur herders beware.

These are the cases I found that can fail.

Left-hand side matches a unary operator

The AT&T Unix v6 shell from 1973, at least as found in PWB/UNIX from 1977, would fail to run test commands whose left-hand side matched a unary operator. This must have been immediately obvious to anyone who tried to check for command line parameters:

% arg="-f"
% test "$arg" = "-f"
syntax error: -f
% test "x$arg" = "x-f"
(true)

This was fixed in the AT&T Unix v7 Bourne shell builtin in 1979. However, test and [ were also available as separate executables, and appear to have retained a variant of the buggy behavior:

$ arg="-f"
$ [ "$arg" = "-f" ]
(false)
$ [ "x$arg" = "x-f" ]
(true)

This happened because the utility used a simple recursive descent parser without backtracking, which gave unary operators precedence over binary operators and ignored trailing arguments.

The "modern" Bourne shell behavior was copied by the Public Domain KornShell in 1988, and made part of POSIX.2 in 1992. GNU Bash 1.14 did the same thing for its builtin [, and the GNU shellutils package that provided the external test/[ binaries followed POSIX, so the early GNU/Linux distros like SLS were not affected, nor was FreeBSD 1.0.

The x-hack is effective because no unary operators can start with x.

Either side matches string length operator -l

A similar issue that survived longer was with the string length operator -l. Unlike the normal unary predicates, this one was only parsed as part of an operand to binary predicates:

var="helloworld"
[ -l "$var" -gt 8 ] && echo "String is longer than 8 chars"

It did not make it into POSIX because, as the rationale puts it, "it was undocumented in most implementations, has been removed from some implementations (including System V), and the functionality is provided by the shell", referring to [ ${#var} -gt 8 ].
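In shell terms, the replacement the rationale refers to looks like this:

var="helloworld"
[ "${#var}" -gt 8 ] && echo "String is longer than 8 chars"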

It was not a problem in UNIX v7 where = took precedence, but Bash 1.14 from 1996 would parse it greedily up front:

$ var="-l"
$ [ "$var" = "-l" ]
test: -l: binary operator expected
$ [ "x$var" = "x-l" ]
(true)

It was also a problem on the right-hand side, but only in nested expressions. The -l check made sure there was a second argument, so you would need an additional expression or parentheses to trigger it:

$ [ "$1" = "-l" -o 1 -eq 1 ]
[: too many arguments
$ [ "x$1" = "x-l" -o 1 -eq 1 ]
(true)

This operator was removed in Bash 2.0 later that year, eliminating the problem.

Left-hand side is !

Another issue in early shells was when the left-hand side was the negation operator !:

$ var="!"
$ [ "$var" = "!" ]
test: argument expected            (UNIX v7, 1979)
test: =: unary operator expected   (bash 1.14, 1996)
(false)                            (pd-ksh88, 1988)
$ [ "x$var" = "x!" ]
(true)

Again, the x-hack is effective because it prevents the ! from being recognized as a negation operator.

ksh treated this the same as [ ! "=" ], and ignored the rest of the arguments. This quietly returned false, as = is not a null string. Ksh continues to ignore trailing arguments to this day:

$ [ -e / random words/ops here ]
(true)                              (ksh93, 2021)
bash: [: too many arguments         (bash5, 2021)

Bash 2.0 and ksh93 both fixed this problem by letting = take precedence in the 3-argument case, in accordance with POSIX.

Left-hand side is "("

This is by far my favorite.

The UNIX v7 builtin failed when the left-hand side was a left-parenthesis:

$ left="(" right="("
$ [ "$left" = "$right" ]
test: argument expected
$ [ "x$left" = "x$right" ]
(true)

This happens because the ( takes precedence over the =, and becomes an invalid parenthesis group.

Why is this my favorite? Behold Dash 0.5.4 up until 2009:

$ left="(" right="("
$ [ "$left" = "$right" ]
[: 1: closing paren expected
$ [ "x$left" = "x$right" ]
(true)

That was an active bug when the StackOverflow Q&A was posted.

But wait, there’s more!

Here’s Zsh in late 2015, right before version 5.3:

% left="(" right=")"
% [ "$left" = "$right" ]
(true)
% [ "x$left" = "x$right" ]
(false)

Amazingly, the x-hack could be used to work around certain bugs all the way up until 2015, seven years after StackOverflow wrote it off as an archaic relic of the past!

The bugs are of course increasingly hard to come across. The Zsh one only triggers when comparing left-paren against right-paren, as otherwise the parser will backtrack and figure it out.

Another late holdout was Solaris, whose /bin/sh was the legacy Bourne shell as late as Solaris 10 in 2009. However, this was undoubtedly for compatibility, and not because they believed this was a viable shell. A "standards compliant" shell had been an option for a long time before Solaris 11 dragged it kicking and screaming into the 21st century — or at least into the 90s — by switching to ksh93 by default in 2011.

In all cases, the x-hack is effective because it prevents the operands from being recognized as parentheses.

Conclusion

The x-hack was indeed useful and effective against several real and practical problems in multiple shells.

However, the value was mostly gone by the mid-to-late 1990s, and the few remaining issues were cleaned up before 2010 — shockingly late, but still over a decade ago.

The last one managed to stay until 2015, but only in the very specific case of comparing an opening parenthesis to a closing parenthesis in one specific non-system shell.

I think it’s time to retire this idiom, and ShellCheck now offers a style suggestion by default.
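Concretely, the GitHub examples from the top can simply drop the x; the quotes (which the x-hack never replaced the need for) do all the work:

if [ "${JAVA}" = "" ]; then
if [ "${server_ip}" = "localhost" ]; then
if test "$1" = '--help' ; then

The first is more idiomatically written [ -z "${JAVA}" ].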

Epilogue

The Dash issue of [ "(" = ")" ] was originally reported in a form that affected both Bash 3.2.48 and Dash 0.5.4 in 2008. You can still see this on macOS bash today:

$ str="-e"
$ [ \( ! "$str" \) ]
[: 1: closing paren expected     # dash
bash: [: `)' expected, found ]   # bash

POSIX fixes all these ambiguities for up to 4 parameters, ensuring that shell conditions work the same way, everywhere, all the time.
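Paraphrasing the standard's algorithm from memory (consult the actual spec for the normative wording), the disambiguation goes roughly like this:

# 1 argument:  true if $1 is non-empty
# 2 arguments: ! negates the 1-argument case; otherwise $1 must be a
#              unary primary applied to $2
# 3 arguments: a binary primary in $2 takes precedence, which is what
#              makes [ "$var" = "!" ] and [ "$left" = "$right" ] safe;
#              otherwise ! negates the 2-argument case, and ( $2 ) means
#              the 1-argument case applied to $2
# 4 arguments: ! negates the 3-argument case, and ( ... ) wraps the
#              2-argument case; five or more arguments are unspecified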

Here’s how Dash maintainer Herbert Xu put it in the fix:

/*
 * POSIX prescriptions: he who wrote this deserves the Nobel
 * peace prize.
 */

Zsh and Fish’s simple but clever trick for highlighting missing linefeeds

tl;dr: We look at how Zsh and Fish are able to indicate a missing terminating linefeed in program output, when the Unix programming model precludes examining the output itself.

Most shells, including bash, ksh, dash, and ash, will show a prompt wherever the previous command left the cursor when it exited.

The fact that the prompt (almost) always shows up on the familiar left-most column of the next line is because Unix programs universally cooperate to park the cursor there when they exit.

This is done by always making sure to output a terminating linefeed \n (aka newline):

vidar@vidarholen-vm2 ~ $ whoami
vidar
vidar@vidarholen-vm2 ~ $ whoami | hexdump -c
0000000   v   i   d   a   r  \n  

If a program fails to follow this convention, the prompt will end up in the wrong place:

vidar@vidarholen-vm2 ~ $ echo -n "hello world"
hello worldvidar@vidarholen-vm2 ~ $

However, I recently noticed that zsh and fish will instead show a character indicating a missing linefeed, and still start the prompt where you’d expect to find it:

vidarholen-vm2% echo -n "hello zsh"
hello zsh% 
vidarholen-vm2%

vidar@vidarholen-vm2 ~> echo -n "hello fish"
hello fish⏎
vidar@vidarholen-vm2 ~> 

If you’re disappointed that this is what there’s an entire blog post about, you probably haven’t tried to write a shell. This is one of those problems where the more you know, the harder it seems (obligatory XKCD).

If you have a trivial solution in mind, maybe along the lines of if (!output.ends_with("\n")) printf("%%\n");, consider the following restrictions*:

  • Contrary to popular belief, the shell does not sit between programs and the terminal. The shell has no ability to intercept or examine the terminal output of programs.
  • The terminal programming model is based on teletypes (aka TTYs), electromechanical typewriters from the early 1900s. They printed letter by letter onto paper, so there is no memory or screen buffer that can be programmatically read back.

Given this, here are some flawed ways to make it happen:

  • The shell could use pipes to intercept all output, and relay it onto the terminal. While it works in trivial cases like whoami, some programs check whether stdout is a terminal and change their behavior, others go over your head and talk to the TTY directly (e.g. ssh‘s password prompt), and some use TTY specific ioctls that fail if the output is not a TTY, such as querying window size or disabling local echo for password input.

  • The shell can ptrace the process to see what it writes where. This has a huge overhead and breaks sudo, ping, and other commands that rely on suid.

  • The shell can create a pseudo-tty (pty), run commands in that, and relay information back and forth much like ssh or script does. This is an annoying and heavy-handed approach, which in its ultimate form would require re-implementing an entire terminal emulator.

  • The shell can use ECMA-48 cursor position reporting features: printf '\e[6n' on a supported terminal will cause the terminal to simulate user input of the form ^[[y;xR, where y and x are the row and column. The shell could then read this to figure out where the cursor is, as in the sketch after this list. These kinds of round trips are feasible, but somewhat slow and annoying to implement for such a simple feature.
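For the curious, here is a rough sketch of that last approach in bash, assuming an ECMA-48 terminal and that nothing else is writing to the TTY at the same time:

get_cursor_pos() {
  # -p emits the query; the terminal then "types" ESC[row;colR back at us
  IFS='[;' read -rs -d R -p $'\e[6n' junk row col
}
get_cursor_pos
[ "$col" -gt 1 ] && printf '\n'   # cursor isn't in column 1: add the linefeed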

Zsh and Fish instead have a much simpler and far more clever way of doing it:

  1. They always output the missing linefeed indicator, whether or not it’s needed.
  2. They then pad out the line with $COLUMNS-1 spaces.
  3. This is followed by a carriage return to move to the first column.
  4. Finally, they show the prompt.

This solution is very simple because it only requires printing a fixed string before every prompt, but it’s highly effective on all terminals§.

Why?

Let’s pretend our terminal is 10 columns wide and 3 rows tall, and a canonical program just wrote a short string with a trailing linefeed:

[vidar     ]
[|         ]
[          ]

The cursor, indicated by |, is at the start of the line. This is what would happen in step 1 and 2:

[vidar      ]
[%         |]
[           ]

The indicator is shown, and since we have written exactly $COLUMNS characters, the cursor is after the last column. Step 3, a carriage return, now moves it back to the start:

[vidar      ]
[|%         ]
[           ]

The prompt now draws over the indicator, and is shown on the same line:

[vidar      ]
[~ $ |      ]
[           ]

The final result is exactly the same as if we had simply written out the prompt wherever the cursor was.

Now, let’s look at what happens when a program does not output a terminating linefeed:

[vidar|     ]
[           ]
[           ]

The indicator is shown, but this time the spaces in step 2 cause the line to wrap all the way around to the next line:

[vidar%     ]
[     |     ]
[           ]

The carriage return moves the cursor back to the start of the next line:

[vidar%     ]
[|          ]
[           ]

The prompt is now shown on that line, and therefore doesn’t overwrite the indicator:

[vidar%     ]
[~ $ |      ]
[           ]

And there you have it. A seemingly simple problem turned out harder than expected, but a clever use of line wrapping made it easy again.

Now that we know the secret sauce, we can of course do the same thing in Bash:

PROMPT_COMMAND='printf "%%%$((COLUMNS-1))s\\r"'
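To unpack that one-liner, here is what each piece of the printf format does (after the shell's own expansions):

# %%                  prints a literal "%", the missing-linefeed indicator
# %$((COLUMNS-1))s    pads an empty field out to COLUMNS-1 spaces
# \r                  issues a carriage return back to column 1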

* These same restrictions are reflected in several other aspects of Unix:

  • While useful and often requested, there is no robust way to get the output of the previously executed command.
  • It’s surprisingly tricky to take screenshots/dumps of terminals, and it only works on specific terminals.
  • The phenomenon of background process output cosmetically trashing foreground processes is well known, and yet there’s no solution.

§ Fish developer and Hacker News reader ComputerGuru explains that there are many caveats related to various terminals’ line wrapping that make this trickier than shown here.

An ode to pack: gzip’s forgotten decompressor

The latest 4.13.9 source release of the Linux kernel is 780 MiB, but thanks to xz compression, the download is a much more manageable 96 MiB (an 88% reduction).

Before xz took over as the default compression format on kernel.org in 2013, following the "latest" link would have gotten you a bzip2 compressed file. The tar.bz2 would have been 115 MiB (-85%), but there was no defending the extra 20 MiB after xz caught up in popularity. bzip2 is all but displaced today.

bzip2 became the default in 2003, though it had long been an option over the less efficient gzip. However, since every OS, browser, language core library, phone and IoT lightswitch has built-in support for gzip, a 148 MiB (-81%) tar.gz remains an option even today.

gzip itself started taking over in 1994, before kernel.org, and before the World Wide Web went mainstream. It must have been a particularly easy sell for the fledgling Linux kernel: it was made, used and endorsed by the mighty GNU project, it was Free Software, free of patent restrictions, and it provided powerful .zip style DEFLATE compression in a Unix friendly package.

Another nice benefit was that gzip could decompress other contemporary formats, thereby replacing contested and proprietary software.

Among the tools it could replace was compress, the de-facto Unix standard at the time. Created in 1985 and based on LZW, it was hampered by the same patent woes that plagued GIF files. The then-ubiquitous .Z suffix graced the first public Linux releases, but is now recognized only by the most long-bearded enthusiasts. The current release would have been 302 MiB (-61%) with compress.

Another even more obscure tool it could replace was compress‘s own predecessor, pack. This rather loosely defined collection of only partially compatible formats is why compress had to use a capital Z in its extension. pack came first, and offered straight Huffman coding with a .z extension.

With pack, our Linux release would have been 548 MiB (-30%). Compared to xz‘s 96 MiB, it’s obvious why no one has used it for decades.

Well, guess what: gzip never ended its support! Quoth the man page,

gunzip can currently decompress files created by gzip, zip,
compress, compress -H or pack.

While multiple implementations existed, they shared some common peculiarities:

  • They could not be used in pipes.
  • They could not represent empty files.
  • They could not compress a file with only one byte value, e.g. "aaaaaa…"
  • They could fail on "large" files. "can’t occur unless [file size] >= [16MB]", a comment said dismissively, from the time when a 10MB hard drive was a luxury few could afford.

These issues stemmed directly from the Huffman coding used. Huffman coding, developed in 1952, is basically an improvement on Morse code, where common characters like "e" get a short code like "011", while uncommon "z" gets a longer one like "111010".

  • Since you have to count the characters to figure out which are common, you cannot compress in a single pass in a pipe. Now that memory is cheap, you could mostly get around that by keeping the data in RAM.

  • Empty files and single-valued files hit an edge case: if you only have a single value, the shortest code for it is the empty string. Decompressors that didn’t account for it would get stuck reading 0 bits forever. You can get around it by adding unused dummy symbols to ensure a minimum bit length of 1.

  • A file over 16MB could cause a single character to be so rare that its bit code was 25+ bits long. A decompressor storing the bits to be decoded in a 32-bit value (a trick even gzip uses) would be unable to append a new 8-bit byte to the buffer without displacing part of the current bit code. You can get around that by using "package merge" length-restricted prefix codes over naive Huffman codes.

I wrote a Haskell implementation with all these fixes in place: koalaman/pack is available on GitHub.

During development, I found that pack support in gzip had been buggy since 2012 (version 1.6), but no one had noticed in the five years since. I tracked down the problem, and I’m happy to say that version 1.9 will restore full pack support!

Anyways, what could possibly be the point of using pack today?

There is actually one modern use case: code golfing.

This post came about because I was trying to implement the shortest possible program that would output a piece of simple ASCII art. A common trick is variations of a self-extracting shell script:

sed 1d $0|gunzip;exit
<compressed binary data here>
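Building such a file is itself a one-liner. A sketch, with art.txt standing in for whatever you want to ship:

{ echo 'sed 1d $0|gunzip;exit'; gzip -9 < art.txt; } > art.sh
sh art.sh    # the stub strips itself off and decompresses the rest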

You can use any available compressor, including xz and bzip2, but these were meant for bigger files and have game ruining overheads. Here’s the result of compressing the ASCII art in question:

  • raw: 269 bytes
  • xz: 216 bytes
  • bzip2: 183 bytes
  • gzip: 163 bytes
  • compress: 165 bytes
  • and finally, pack: 148 bytes!

I was able to save 15 bytes by leveraging gzip‘s forgotten legacy support. This is huge in a sport where winning entries are bytes apart.

Let’s have a look at this simple file format. Here’s an example pack file header for the word "banana":

1f 1e        -- Two byte magic header
00 00 00 06  -- Original uncompressed length (6 bytes)

Next comes the Huffman tree. Building it is simple to do by hand, but too much for this post. It just needs to be complete, left-aligned, with eof on the right at the deepest level. Here’s the optimal tree for this string:

        /\
       /  a
      /\
     /  \
    /\   n
   b  eof

We start by encoding its depth (3), and the number of leaves on each level. The last level is encoded minus 2, because the lowest level will have between 2 and 257 leaves, while a byte can only store 0-255.

03  -- depth
01  -- level 1 only contains 'a'
01  -- level 2 only contains 'n'
00  -- level 3 contains 'b' and 'eof', -2 as mentioned

Next we encode the ASCII values of the leaves in the order from top to bottom, left to right. We can leave off the EOF (which is why it needs to be in the lower right):

61 6e 62  -- "a", "n", "b"

This is enough for the decompressor to rebuild the tree. Now we go on to encode the actual data.

Starting from the root, the Huffman codes are determined by adding a 0 for every left branch and 1 for every right branch you have to take to get to your value:

a   -> right = 1
n   -> left+right = 01
b   -> left+left+left = 000
eof -> left+left+right = 001

banana<eof> would therefore be 000 1 01 1 01 1 001, or when grouped as bytes:

16  -- 0001 0110
C8  -- 1100 1   (000 as padding)

And that’s all we need:

$ printf '\x1f\x1e\x00\x00\x00\x06'\
'\x03\x01\x01\x00\x61\x6e\x62\x16\xc8' | gzip -d
banana

Unfortunately, the mentioned gzip bug triggers due to a failure to account for leading zeroes in the bit codes. eof and a have values 001 and 1, so an oversimplified equality check confuses one for the other, causing gzip to terminate early:

b
gzip: stdin: invalid compressed data--length error

However, if you’re stuck with an affected version, there’s another party trick you can do: the Huffman tree has to be canonical, but it does not have to be optimal!

What would happen if we skipped the count and instead claimed that each ASCII character is equally likely? Why, we’d get a tree of depth 8 where all the leaf nodes are on the deepest level.

It then follows that each 8 bit character will be encoded as 8 bits in the output file, with the bit patterns we choose by ordering the leaves.

Let’s add a header with a dummy length to a file:

$ printf '\x1F\x1E____' > myfile.z

Now let’s append the aforementioned tree structure, 8 levels with all nodes in the last one:

$ printf '\x08\0\0\0\0\0\0\0\xFE' >> myfile.z

And let’s populate the leaf nodes with 255 bytes in an order of our choice:

$ printf "$(printf '\\%o' {0..254})" |
    tr 'A-Za-z' 'N-ZA-Mn-za-m' >> myfile.z

Now we can run the following command, enter some text, and hit Ctrl-D to "decompress" it:

$ cat myfile.z - | gzip -d 2> /dev/null
Jr unir whfg pbaivaprq TMvc gb hafpenzoyr EBG13!
<Ctrl+D>
We have just convinced GZip to unscramble ROT13!

Can you think of any other fun ways to use or abuse gzip‘s legacy support? Post a comment.

Why Bash is like that: Signal propagation

Bash can seem pretty random and weird at times, but most of what people see as quirks have very logical (if not very good) explanations behind them. This series of posts looks at some of them.

How do I simulate pressing Ctrl-C when running this in a script:

while true; do echo sleeping; sleep 30; done

Are you thinking “SIGINT, duh!”? Hold your horses!

I tried kill -INT pid, but it doesn’t work the same:

  • Ctrl-C kills the sleep and the loop
  • SIGINTing the shell does nothing (but only in scripts: see Errata)
  • SIGINTing sleep makes the loop continue with the next iteration

HOWEVER, if I run the script in the background and kill -INT %1 instead of kill -INT pid, THEN it works :O

Why does Ctrl-C terminate the loop, while SIGINT doesn’t?

Additionally, if I run the same loop with ping or top instead of sleep, Ctrl-C doesn’t terminate the loop either!

Yeah. Well… Yeah…

This behaviour is due to an often overlooked feature in UNIX: process groups. These are important for getting terminals and shells to work the way they do.

A process group is exactly what it sounds like: a group of processes. They have a leader, which is the process that created it using setpgrp(2). The leader’s pid is also the process group id. Child processes are in the same group as their parent by default.
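You can see these groups from an interactive shell, where each pipeline gets its own group:

sleep 30 | cat &
ps -o pid,pgid,comm   # sleep and cat share one PGID; the shell has its own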

Terminals keep track of the foreground process group (set by the shell using tcsetpgrp(3)). When receiving a Ctrl-C, they send the SIGINT to the entire foreground group. This means that all members of the group will receive SIGINT, not just the immediate process.

kill -INT %1 sends the signal to the job’s process group, not the backgrounded pid! This explains why it works like Ctrl-C.

You can do the same thing with kill -INT -pgrpid. Since the process group id is the same as the process group leader’s pid, you can kill the group by killing the pid with a minus in front.
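As a sketch, with $pid being the script’s pid as before:

pgid=$(ps -o pgid= -p "$pid" | tr -d ' ')   # look up the process group id
kill -s INT -- "-$pgid"                     # the leading dash signals the whole group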

But why do you have to kill both?

When the shell is interrupted, it will wait for the running command to exit. If this child’s status indicates it exited abnormally due to that signal, the shell cleans up, removes its signal handler, and kills itself again to trigger the OS default action (abnormal exit). Alternatively, it runs the script’s signal handler as set with trap, and continues.

If the shell is interrupted and the child’s status says it exited normally, then Bash assumes the child handled the signal and did something useful, so it continues executing. Ping and top both trap SIGINT and exit normally, which is why Ctrl-C doesn’t kill the loop when calling them.
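You can reproduce this with any child that catches SIGINT and exits normally. Here’s a minimal stand-in for ping or top:

while true
do
  echo sleeping
  # the child traps SIGINT and exits 0, so Ctrl-C no longer stops the loop
  bash -c 'trap "exit 0" INT; sleep 30'
done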

This also explains why interrupting just the shell does nothing: the child exits normally, so the shell thinks the child handled the signal, though in reality it was never received.

Finally, if the shell isn’t interrupted and a child exits, Bash just carries on regardless of whether the child died abnormally or not. This is why interrupting the sleep just continues with the loop.

If you’d like to handle such cases, Bash sets the exit code to 128+signal when the process exits abnormally, so interrupting sleep with SIGINT would give the exit code 130 (kill -l lists the signal values).
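For example, a sketch of the original loop that bails out when its sleep is SIGINTed directly:

while true
do
  echo sleeping
  sleep 30
  [ $? -eq 130 ] && exit 130   # 130 = 128 + SIGINT (signal number 2)
done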

Bonus problem:

I have this C app, testpg:
#include <unistd.h>   /* for setsid() and sleep() */

int main() {
    setsid();          /* try to become a new session/process group leader */
    return sleep(10);
}

I run bash -c './testpg' and press Ctrl-C. The app is killed. Shouldn't testpg be excluded from SIGINT, since it used setsid?

A quick strace unravels this mystery: with a single command to execute, bash execve’s it directly — a little optimization trick. Since the pid is the same and already had its own process group, creating a new one doesn’t have any effect.

This trick can’t be used if there are more commands, so bash -c './testpg; true' can’t be killed with Ctrl-C.

Errata:

Wait, I started a loop in one terminal and killed the shell in another. 
The loop exited!

Yes it did! This does not apply to interactive shells, which have different ways of handling signals. When job control is enabled (running interactively, or when running a script with bash -m), the shell will die when SIGINTed.

Here’s the description from the bash source code, jobs.c:2429:

  /* Ignore interrupts while waiting for a job run without job control
     to finish.  We don't want the shell to exit if an interrupt is
     received, only if one of the jobs run is killed via SIGINT. 
   ...