So what exactly is -ffunction-sections and how does it reduce binary size?

If you’d like a more up-to-date version of ShellCheck than what Raspbian provides, you can build your own on a Raspberry Pi Zero in a little over 21 hours.

Alternatively, as of last week, you can also download RPi compatible, statically linked armv6hf binaries of every new commit and stable release.

It’s statically linked — i.e. the executable has all its library dependencies built in — so you can expect it to be pretty big. However, I didn’t expect it to be 67MB:

build@d1044ff3bf67:/mnt/shellcheck# ls -l shellcheck
-rwxr-xr-x 1 build build 66658032 Jul 14 16:04 shellcheck

This is for a tool intended to run on devices with 512MiB RAM. strip helps shed a lot of that weight, and the post-stripped number is the one we’ll use from now on, but 36MB is still more than I expected, especially given that the x86_64 build is 23MB.

build@d1044ff3bf67:/mnt/shellcheck# strip --strip-all shellcheck
build@d1044ff3bf67:/mnt/shellcheck# ls -l shellcheck
-rwxr-xr-x 1 build build 35951068 Jul 14 16:22 shellcheck

So now what? Optimize for size? Here’s the result of ghc -optlo-Os, which enables LLVM opt size optimizations (and entailed a complete three hour Qemu emulated rebuild of all dependencies):

build@31ef6588fdf1:/mnt/shellcheck# ls -l shellcheck
-rwxr-xr-x 1 build build 32051676 Jul 14 22:38 shellcheck

Welp, that’s not nearly enough.

The real problem is that we’re linking in both C and Haskell dependencies, from the JSON formatters and regex libraries to bignum implementations and the Haskell runtime itself. These have tons of functionality that ShellCheck doesn’t use, but which is still included as part of the package.

Fortunately, GCC and GHC allow eliminating this kind of dead code through function sections. Let’s look at how that works, and why dead code can’t just be eliminated as a matter of course:

An ELF binary contains a lot of different things, each stored in a section. It can have any number of these sections, each of which has a pile of attributes including a name:

  • .text stores executable code
  • .data stores global variable values
  • .symtab stores the symbol table
  • Ever wondered where compilers embed debug info? Sections.
  • Exception unwinding data, compiler version or build IDs? Sections.

This is how strip is able to safely and efficiently drop so much data: if a section has been deemed unnecessary, it’s simple and straightforward to drop it without affecting the rest of the executable.

Let’s have a look at some real data. Here’s a simple foo.c:

int foo() { return 42; }
int bar() { return foo(); }

We can compile it with gcc -c foo.c -o foo.o and examine the sections:

$ readelf -a foo.o
ELF Header:
  Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
  Class:        ELF32
  Data:         2's complement, little endian
  Version:      1 (current)
  OS/ABI:       UNIX - System V
  ABI Version:  0
  Type:         REL (Relocatable file)
  Machine:      ARM
[..]

Section Headers:
  [Nr] Name       Type      Addr   Off    Size   ES Flg Lk Inf Al
  [ 0]            NULL      000000 000000 000000 00      0   0  0
  [ 1] .text      PROGBITS  000000 000034 000034 00  AX  0   0  4
  [ 2] .rel.text  REL       000000 000190 000008 08   I  8   1  4
  [ 3] .data      PROGBITS  000000 000068 000000 00  WA  0   0  1
  [ 4] .bss       NOBITS    000000 000068 000000 00  WA  0   0  1
  [..]

Symbol table '.symtab' contains 11 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
   [..]
     9: 00000000    28 FUNC    GLOBAL DEFAULT    1 foo
    10: 0000001c    24 FUNC    GLOBAL DEFAULT    1 bar

There’s tons more info not included here, and it’s an interesting read in its own right. Anyways, both our functions live in the .text section. We can see this from the symbol table’s Ndx column, which says section 1, corresponding to .text. We can also see it in the disassembly:

$ objdump -d foo.o
foo.o:     file format elf32-littlearm

Disassembly of section .text:
00000000 <foo>:
   0:   e52db004   push    {fp}
   4:   e28db000   add     fp, sp, #0
   8:   e3a0302a   mov     r3, #42 ; 0x2a
   c:   e1a00003   mov     r0, r3
  10:   e28bd000   add     sp, fp, #0
  14:   e49db004   pop     {fp}
  18:   e12fff1e   bx      lr

0000001c <bar>:
  1c:   e92d4800   push    {fp, lr}
  20:   e28db004   add     fp, sp, #4
  24:   ebfffffe   bl      0 <foo>
  28:   e1a03000   mov     r3, r0
  2c:   e1a00003   mov     r0, r3
  30:   e8bd8800   pop     {fp, pc}

Now let’s say that the only library function we use is foo, and we want bar removed from the final binary. This is tricky, because you can’t just modify the .text section by slicing things out of it. There are offsets, addresses and cross-dependencies compiled into the code, and any shift would mean trying to patch it all up. If only it were as easy as when strip removed whole sections…

This is where gcc -ffunction-sections and ghc -split-sections come in. Let’s recompile our file with gcc -ffunction-sections foo.c -c -o foo.o:

$ readelf -a foo.o
[..]
Section Headers:
  [Nr] Name          Type      Addr  Off  Size ES Flg Lk Inf Al
  [ 0]               NULL      00000 0000 0000 00      0   0  0
  [ 1] .text         PROGBITS  00000 0034 0000 00  AX  0   0  1
  [ 2] .data         PROGBITS  00000 0034 0000 00  WA  0   0  1
  [ 3] .bss          NOBITS    00000 0034 0000 00  WA  0   0  1
  [ 4] .text.foo     PROGBITS  00000 0034 001c 00  AX  0   0  4
  [ 5] .text.bar     PROGBITS  00000 0050 001c 00  AX  0   0  4
  [ 6] .rel.text.bar REL       00000 01c0 0008 08   I 10   5  4
  [..]

Symbol table '.symtab' contains 14 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
[..]
12: 00000000    28 FUNC    GLOBAL DEFAULT    4 foo
13: 00000000    28 FUNC    GLOBAL DEFAULT    5 bar

Look at that! Each function now has its very own section.

This means that a linker can go through and find all the sections that contain symbols we need, and drop the rest. We can enable this with the aptly named ld flag --gc-sections. You can pass that flag to ld via gcc using gcc -Wl,--gc-sections, and you can pass that whole thing to gcc via ghc using ghc -optc-Wl,--gc-sections.
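
To see the effect on our toy example, we can link foo.o against a main that only calls foo, and ask the linker to report what it throws away. This is a hypothetical follow-up rather than part of the original build, but the flags are standard GCC/binutils:

# main.c only references foo, so nothing keeps .text.bar alive
cat > main.c << 'EOF'
int foo(void);
int main(void) { return foo(); }
EOF
gcc -ffunction-sections -c foo.c main.c
gcc -Wl,--gc-sections,--print-gc-sections foo.o main.o -o demo

With --print-gc-sections, the linker should report removing .text.bar, and nm demo should no longer list bar at all.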

I enabled all of this in my builder’s .cabal/config:

program-default-options
  gcc-options: -Os -Wl,--gc-sections -ffunction-sections -fdata-sections
  ghc-options: -optc-Os -optlo-Os -split-sections

With this in place, the ShellCheck binary became a mere 14.5MB:

-rw-r--r-- 1 build build 14503356 Jul 15 10:01 shellcheck

That’s less than half the size we started out with. I’ve since applied the same flags to the x86_64 build, which brought it down from 23MB to 7MB. Snappier downloads and installs for all!


For anyone interested in compiling Haskell for armv6hf on x86_64: I spent weeks trying to get cross-compilation going, but even with many hacks I was only able to cross-compile for armv7. In the end I gave up and took the same approach as with the Windows build blog post: a Docker image runs the Raspbian armv6 userland in Qemu user emulation mode.

I didn’t even have to set up Qemu. There’s tooling from Resin.io for building ARM Docker containers for IoT purposes. ShellCheck (ab)uses this to run emulated GHC and cabal. Everything Just Works, if slowly.

The Dockerfile is available on GitHub as koalaman/armv6hf-builder.

Bash’s white collar eval: [[ $var -eq 42 ]] runs arbitrary code too

Did you know this bash snippet is open to arbitrary code execution from user input?

#!/bin/bash
read -rp "Enter guess: " num
if [[ $num -eq 42 ]]
then
  echo "Correct"
else
  echo "Wrong"
fi

Here’s an example:

$ ./myscript
Enter guess: 42
Correct

$ ./myscript
Enter guess: a[$(date >&2)]+42
Sun Feb  4 19:06:19 PST 2018
Correct

This is not a new discovery or recently introduced vulnerability, but it’s one of the lesser discussed issues in bash scripting. ksh behaves the same way, if you account for the lack of read -p.

The shell evaluates values in an arithmetic context in several syntactic constructs where it expects an integer. This includes: $((here)), ((here)), ${var:here:here}, ${var[here]}, var[here]=.. and on either side of any [[ numerical comparator like -eq, -gt, -le and friends.

In this context, you can use or pass literal strings like "1+1" and they will be evaluated as math expressions: var="1+1"; echo $((var)) outputs "2".

However, as demonstrated, it’s not just a bc style arithmetic expression evaluator, it’s more closely integrated with the shell: a="b"; b="c+1"; c=41; echo $((a)) will show 42.

And any value used as a scalar array subscript in an arithmetic context will be evaluated and interpreted as an integer — including command substitutions.
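
To make that concrete, here’s a sketch (using a harmless date in place of anything nasty) where the same value triggers command execution in several of the constructs listed above:

payload='a[$(date >&2)]'     # stand-in for untrusted input
: $(( payload + 1 ))         # arithmetic expansion
(( payload > 0 ))            # arithmetic command
echo "${PATH:payload:1}"     # substring expansion offset
arr[payload]=1               # array subscript in an assignment
[[ $payload -eq 42 ]]        # the comparison from the example above

Each line prints the date as a side effect of evaluating the subscript.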

This issue is generally trivialized or ignored. It’s not pointed out in code reviews or example code, even by those who are well aware of this issue. It would be pretty tedious to use ${var//[!0-9-]/} for every comparison, and it’s often not relevant anyways.

This isn’t necessarily unfair, because arbitrary code execution is only a security issue if it also provides escalation of privileges, like when you’re doing math using values from untrusted files, or with user input from CGI scripts.

The initial example rather involves being on the other side of this airtight hatchway, where you can do the same thing much more easily by just running date instead of ./myscript in the first place.

Contrast this to eval.

It also frequently allows arbitrary code execution, but any use of it — safe or unsafe, on either side of said hatchway — is likely to elicit a number of comments ranging from cautionary to condescending.

"eval is evil", people say, even though no one ever said that "-eq is evil" or "$((foo+1)) considered harmful".

Why such a double standard?

I would argue that eval is considered bad not (just) because it’s unsafe, but because it’s lowbrow. Safety is just the easiest way to argue that people should use better constructs.

eval is considered lowbrow because its primary use case is letting newbies do higher level operations without really knowing how the shell works. You don’t have to know the difference between literal and syntactic quotes, or how to work with the order of expansion, as long as you can figure out roughly what you want to run and generate such commands.

Meanwhile, Bash arithmetic contexts are a convenient and useful feature frequently employed and preferred by advanced scripters.

Here’s an example of a script where you can run ./myscript 1 or ./myscript 2 to get "Foo" or "Bar" respectively. You’ve probably seen (and at some point tried) the variable-with-numeric-suffix newbie pattern:

#!/bin/bash
var1="Foo"
var2="Bar"
eval echo "\$var$1"

This example would get a pile of comments about how this is unsafe, how ./myscript '1; rm -rf /' will give you a bad time, so think of the children and use arrays instead:

#!/bin/bash
var[1]="Foo"
var[2]="Bar"
echo "${var[$1]}"

As discussed, this isn’t fundamentally more secure: ./myscript '1+a[$(rm -rf /)]' will still give you a bad time. However, since it now uses the appropriate shell features in a proper way, most people will give it a pass.
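
If the argument genuinely is untrusted, the fix for either version is the same: validate it before it ever reaches an arithmetic context. A minimal sketch:

#!/bin/bash
var[1]="Foo"
var[2]="Bar"
# Only a plain positive integer is allowed to reach the subscript
if [[ $1 =~ ^[0-9]+$ ]]
then
  echo "${var[$1]}"
else
  echo "Usage: $0 <number>" >&2
  exit 1
fi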

Of course, this is NOT to say that anyone should feel free to use eval more liberally. There are better, easier, more readable and more correct ways to do basically anything you would ever want to do with it. If that’s not reason enough, the security argument is still valid.

It’s just worth being aware that there’s a double standard.

I have no particular goal or conclusion in mind, so what do you think? Should we complain about eval less and arithmetic contexts more? Should we ignore the issue as a lie-to-children until eval makes it clear a user is ready for The Talk about code injection? Should we all just switch to the fish shell?

Post your thoughts here or on whichever site you came from.

An ode to pack: gzip’s forgotten decompressor

The latest 4.13.9 source release of the Linux kernel is 780 MiB, but thanks to xz compression, the download is a much more manageable 96 MiB (an 88% reduction).

Before xz took over as the default compression format on kernel.org in 2013, following the "latest" link would have gotten you a bzip2 compressed file. The tar.bz2 would have been 115 MiB (-85%), but there was no defending the extra 20 MiB after xz caught up in popularity. bzip2 is all but displaced today.

bzip2 became the default in 2003, though it had long been an option over the less efficient gzip. However, since every OS, browser, language core library, phone and IoT lightswitch has built-in support for gzip, a 148 MiB (-81%) tar.gz remains an option even today.

gzip itself started taking over in 1994, before kernel.org, and before the World Wide Web went mainstream. It must have been a particularly easy sell for the fledgling Linux kernel: it was made, used and endorsed by the mighty GNU project, it was Free Software, free of patent restrictions, and it provided powerful .zip style DEFLATE compression in a Unix friendly package.

Another nice benefit was that gzip could decompress other contemporary formats, thereby replacing contested and proprietary software.

Among the tools it could replace was compress, the de-facto Unix standard at the time. Created based on LZW in 1985, it was hampered by the same patent woes that plagued GIF files. The then-ubiquitous .Z suffix graced the first public Linux releases, but is now recognized only by the most long-bearded enthusiasts. The current release would have been 302 MiB (-61%) with compress.

Another even more obscure tool it could replace was compress’s own predecessor, pack. This rather loosely defined collection of only partially compatible formats is why compress had to use a capital Z in its extension. pack came first, and offered straight Huffman coding with a .z extension.

With pack, our Linux release would have been 548 MiB (-30%). Compared to xz’s 96 MiB, it’s obvious why no one has used it for decades.

Well, guess what: gzip never ended its support! Quoth the man page,

gunzip can currently decompress files created by gzip, zip,
compress, compress -H or pack.

Multiple implementations existed, but they shared some common peculiarities:

  • They could not be used in pipes.
  • They could not represent empty files.
  • They could not compress a file with only one byte value, e.g. "aaaaaa…"
  • They could fail on "large" files. "can’t occur unless [file size] >= [16MB]", a comment said dismissively, from the time when a 10MB hard drive was a luxury few could afford.

These issues stemmed directly from the Huffman coding used. Huffman coding, developed in 1952, is basically an improvement on Morse code, where common characters like "e" get a short code like "011", while uncommon "z" gets a longer one like "111010".

  • Since you have to count the characters to figure out which are common, you can not compress in a single pass in a pipe. Now that memory is cheap, you could mostly get around that by keeping the data in RAM.

  • Empty files and single-valued files hit an edge case: if you only have a single value, the shortest code for it is the empty string. Decompressors that didn’t account for it would get stuck reading 0 bits forever. You can get around it by adding unused dummy symbols to ensure a minimum bit length of 1.

  • A file over 16MB could cause a single character to be so rare that its bit code was 25+ bits long. A decompressor storing the bits to be decoded in a 32bit value (a trick even gzip uses) would be unable to append a new 8bit byte to the buffer without displacing part of the current bit code. You can get around that by using "package merge" length restricted prefix codes over naive Huffman codes.

I wrote a Haskell implementation with all these fixes in place: koalaman/pack is available on GitHub.

During development, I found that pack support in gzip had been buggy since 2012 (version 1.6), but no one had noticed in the five years since. I tracked down the problem and I’m happy to say that version 1.9 will again restore full pack support!

Anyways, what could possibly be the point of using pack today?

There is actually one modern use case: code golfing.

This post came about because I was trying to implement the shortest possible program that would output a piece of simple ASCII art. A common trick is variations of a self-extracting shell script:

sed 1d $0|gunzip;exit
<compressed binary data here>
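
Such an entry can be assembled by printing the stub followed by the raw compressed stream, along these lines (file names are just for illustration):

{ printf 'sed 1d $0|gunzip;exit\n'; gzip -9 < art.txt; } > entry.sh
chmod +x entry.sh
./entry.sh    # prints the contents of art.txt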

You can use any available compressor, including xz and bzip2, but these were meant for bigger files and have game-ruining overheads. Here’s the result of compressing the ASCII art in question:

  • raw: 269 bytes
  • xz: 216 bytes
  • bzip2: 183 bytes
  • gzip: 163 bytes
  • compress: 165 bytes
  • and finally, pack: 148 bytes!

I was able to save 15 bytes by leveraging gzip’s forgotten legacy support. This is huge in a sport where winning entries are bytes apart.

Let’s have a look at this simple file format. Here’s an example pack file header for the word "banana":

1f 1e        -- Two byte magic header
00 00 00 06  -- Length of the original, uncompressed data (6 bytes)

Next comes the Huffman tree. Building it is simple to do by hand, but too much for this post. It just needs to be complete, left-aligned, with eof on the right at the deepest level. Here’s the optimal tree for this string:

        /\
       /  a
      /\
     /  \
    /\   n
   b  eof

We start by encoding its depth (3), and the number of leaves on each level. The last level is encoded minus 2, because the lowest level will have between 2 and 257 leaves, while a byte can only store 0-255.

03  -- depth
01  -- level 1 only contains 'a'
01  -- level 2 only contains 'n'
00  -- level 3 contains 'b' and 'eof', -2 as mentioned

Next we encode the ASCII values of the leaves in the order from top to bottom, left to right. We can leave off the EOF (which is why it needs to be in the lower right):

61 6e 62  -- "a", "n", "b"

This is enough for the decompressor to rebuild the tree. Now we go on to encode the actual data.

Starting from the root, the Huffman codes are determined by adding a 0 for every left branch and 1 for every right branch you have to take to get to your value:

a   -> right = 1
n   -> left+right = 01
b   -> left+left+left -> 000
eof -> left+left+right -> 001

banana<eof> would therefore be 000 1 01 1 01 1 001, or when grouped as bytes:

16  -- 0001 0110
C8  -- 1100 1   (000 as padding)

And that’s all we need:

$ printf '\x1f\x1e\x00\x00\x00\x06'\
'\x03\x01\x01\x00\x61\x6e\x62\x16\xc8' | gzip -d
banana

Unfortunately, the mentioned gzip bug triggers due to a failure to account for leading zeroes in bit codes. eof and a have the codes 001 and 1 respectively, so an oversimplified equality check confuses one for the other, causing gzip to terminate early:

b
gzip: stdin: invalid compressed data--length error

However, if you’re stuck with an affected version, there’s another party trick you can do: the Huffman tree has to be canonical, but it does not have to be optimal!

What would happen if we skipped the count and instead claimed that each ASCII character is equally likely? Why, we’d get a tree of depth 8 where all the leaf nodes are on the deepest level.

It then follows that each 8 bit character will be encoded as 8 bits in the output file, with the bit patterns we choose by ordering the leaves.

Let’s add a header with a dummy length to a file:

$ printf '\x1F\x1E____' > myfile.z

Now let’s append the aforementioned tree structure, 8 levels with all the leaves in the last one:

$ printf '\x08\0\0\0\0\0\0\0\xFE' >> myfile.z

And let’s populate the leaf nodes with 255 bytes in an order of our choice:

$ printf "$(printf '\\%o' {0..254})" |
    tr 'A-Za-z' 'N-ZA-Mn-za-m' >> myfile.z

Now we can run the following command, enter some text, and hit Ctrl-D to "decompress" it:

$ cat myfile.z - | gzip -d 2> /dev/null
Jr unir whfg pbaivaprq TMvc gb hafpenzoyr EBG13!
<Ctrl+D>
We have just convinced GZip to unscramble ROT13!

Can you think of any other fun ways to use or abuse gzip’s legacy support? Post a comment.

ShellCheck and shadowed case branches

As of the latest commit, ShellCheck will try to detect shadowed case branches.

Here’s an adaptation from an unnamed script on GitHub:

case $1 in
    -h|--help)
        help
        exit 0
        ;;
    -h|--hub)
        hub=$2
        shift
        ;;
    *)
        die "Unknown option: $1"
        ;;
esac

The original case statement was significantly longer, so you’d be excused for not noticing the problem: -h is used for two different branches. Because of this, -h as a short option for --hub will not work.

If you run ShellCheck on this example now, you will get a pair of helpful warnings:

Line 4:
    -h|--help)
    ^-- SC2221: This pattern always overrides a later one.
 
Line 8:
    -h|--hub)
    ^-- SC2222: This pattern never matches because of a previous pattern.

Very simple and probably somewhat useful in certain cases, right? Well, it gets slightly more interesting.

Here is another example adapted from the wild:

case $1 in
    -h|--help|-?)
        usage
        exit
        ;;
    -v|--verbose)
        verbose=1
        ;;
    *)
        die "Unknown option: $1"
        ;;
esac

Did you spot the same problem? ShellCheck did:

Line 4:
    -h|--help|-?)
              ^-- SC2221: This pattern always overrides a later one.

Since an unescaped ? matches any character, it will also match -v, so the short form of --verbose will not work.
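
A stripped-down version of the case statement makes it easy to verify:

case "-v" in
    -h|--help|-?) echo "took the help branch" ;;
    -v|--verbose) echo "took the verbose branch" ;;  # never reached
esac

Escaping it, as in -\?, would make it match a literal question mark instead.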

Similarly, it recognizes two separate issues in this example:

    -*|--*) die "Invalid option: $1" ;;
    --) shift; break ;;

The end-of-option -- marker will never be recognized, and -*|--* is redundant because the first already covers the second.

These are all very simple cases, but this also works more generally. Here’s a fabricated music sorting script where the bug would be exceedingly hard to spot in a longer list of bands:

case "${filename,,}" in
  *"abba"*.mp3 ) rm "$filename" ;;
  *"black"*"sabbath"*.mp3 ) mv "$filename" "Music/Metal" ;;
esac

So how does it work?

There are very clever ways of determining whether one regular language is a superset of another by intersecting it with the complement of the other, and checking the result for satisfiability.

ShellCheck uses none of them.

I’ve written a regex inverter before, and that level of complexity was not something I wanted to introduce.

Instead, ShellCheck’s pattern intersection and superset check supports only basic DOS style wildcard patterns: ?, * and literals. It just does a simple recursive match on the two patterns.

Let’s call the patterns A and B, and we wish to check if A is a superset of B, i.e. if A matches everything that B does.

We have two arbitrary shell patterns that we want to turn into a simplified form, while ensuring we don’t simplify away any details that will cause a false positive. ShellCheck does this in two ways:

It creates A in such a way that it’s guaranteed to match a (non-strict) subset of the actual glob. This just means giving up on any pattern that uses features we don’t explicitly recognize. $(cmd)foo@(ab|c) is rejected, while *foo* is allowed.

It then creates B to guarantee that it matches a (non-strict) superset of the actual glob. This is done by replacing anything we don’t support with a *. $(cmd)foo@(ab|c) just becomes *foo*.

Now we can just match the two patterns against each other with an inefficient but simple recursive matcher. Matching two patterns is slightly trickier than matching a pattern against a string, but it’s still a first year level CS exercise.

It just involves breaking down the patterns by prefix, and matching until you reach a trivial base case:

  • superset(“”, “”) = True
  • superset(“”, cY) = False
  • superset(cX, cY) = superset(X, Y)
  • superset(*X, *Y) = superset(*X, Y)

The actual code calls the simplified patterns “PseudoGlobs”, inhabited by PGAny ?, PGMany *, and PGChar c:

pseudoGlobIsSuperSetof :: [PseudoGlob] -> [PseudoGlob] -> Bool
pseudoGlobIsSuperSetof = matchable
  where
    matchable x@(xf:xs) y@(yf:ys) =
        case (xf, yf) of
            (PGMany, PGMany) -> matchable x ys
            (PGMany, _) -> matchable x ys || matchable xs y
            (_, PGMany) -> False
            (PGAny, _) -> matchable xs ys
            (_, PGAny) -> False
            (_, _) -> xf == yf && matchable xs ys

    matchable [] [] = True
    matchable (PGMany : rest) [] = matchable rest []
    matchable _ _ = False

That’s really all there is to it. ShellCheck just goes through each pattern, and flags the first pattern (if any) that it shadows. There’s also a pattern simplifier which rearranges c*?*?****d into c??*d to add some efficiency to obviously diseased patterns.

Future work could include supporting character sets/ranges, since [yY] is at least occasionally used, but extglob is rare enough that full regex support hardly seems warranted.

Of course, 99% of the time, there are no duplicates. 99.9% of the time, you’d get the same result with simple string matches.

However, that 0.1% of cases where you get delightful insights like -? shadowing -v or Linux-3.1* shadowing Linux-3.12* makes it all worthwhile.

Buffers and windows: The mystery of ‘ssh’ and ‘while read’ in excessive detail

If you’ve ever tried to use ssh, and similarly ffmpeg or mplayer, in a while read loop, you’ll have stumbled upon a surprising interaction: the loop mysteriously aborts after the first iteration!

The solution, using ssh -n or ssh < /dev/null, is quickly spoiled by ShellCheck (shellcheck this code online), but why stop there? Let’s take a deep dive into the technical details surrounding this issue.

Note that all numbers given here are highly tool and platform specific. They apply to GNU coreutils 8.26, Linux 4.9 and OpenSSH 7.4, as found on Debian in July, 2017. If you use a different platform, or even just sufficiently newer versions, and wish to repeat the experiments, you may have to follow the reasoning and update the numbers accordingly.

Anyways, to first demonstrate the problem, here’s a while read loop that runs a command for each line in a file:

while IFS= read -r host
do
  echo ssh "$host" uptime
done < hostlist.txt

It works perfectly and iterates over all lines in the file:

ssh localhost uptime
ssh 10.0.0.4 uptime
ssh 10.0.0.7 uptime

However, if we remove the echo and actually run ssh, it will stop after the first iteration with no warnings or errors:

 16:12:41 up 21 days,  4:24, 12 users,  load average: 0.00, 0.00, 0.00

Even uptime itself works fine, but ssh localhost uptime will stop after the first one, even though it runs the same command on the same machine.

Of course, applying the aforementioned fix, ssh -n or ssh < /dev/null solves the problem and gives the expected result:

16:14:11 up 21 days,  4:24, 12 users,  load average: 0.00, 0.00, 0.00
16:14:11 up 14 days,  6:59, 15 users,  load average: 0.00, 0.00, 0.00
01:14:13 up 73 days, 13:17,  8 users,  load average: 0.08, 0.15, 0.11

If we merely wanted to fix the problem though, we'd just have followed ShellCheck's advice from the start. Let's keep digging.

You see similar behavior if you try to use ffmpeg to convert media or mplayer to play it. However, now it's even worse: not only does it stop after one iteration, it may abort in the middle of the first one!

All other commands work fine -- even other converters, players and ssh-based commands like sox, vlc and scp. Why do certain commands fail?

The root of the problem is that when you pipe or redirect to a while read loop, you're not just redirecting to read but to the entire loop body. Everything in both condition and body will share the same file descriptor for standard input. Consider this loop:

while IFS= read -r line
do
  cat > rest
done < file.txt

First read will successfully read a line and start the first iteration. Then cat will read from the same input source, where read left off. It reads until EOF and exits, and the loop iterates. read again tries to read from the same input, which remains at EOF. This terminates the loop. In effect, the loop only iterated once.
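
Here's a self-contained version you can run to see it (the file names are just for illustration):

printf 'one\ntwo\nthree\n' > file.txt
while IFS= read -r line
do
  echo "Read $line"
  cat > rest
done < file.txt

This prints only "Read one": read consumes the first line, cat consumes the remaining two into rest, and the next read finds EOF.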

The question remains, though: why do our three commands in particular drain stdin?

ffmpeg and mplayer are simple cases. They both accept keyboard controls from stdin.

While ffmpeg encodes a video, you can use '+' to make the process more verbose or 'c' to input filter commands. While mplayer plays a video, you can use 'space' to pause or 'm' to mute. The commands drain stdin while processing these keystrokes.

They both also share a shortcut to quit: they will stop abruptly if any of the input they read is a "q".

But why ssh? Shouldn't it mirror the behavior of the remote command? If uptime doesn't read anything, why should ssh localhost uptime?

The Unix process model has no good way to detect when a process wants input. Instead, ssh has to preemptively read data, send it over the wire, and have sshd offer it on a pipe to the process. If the process doesn't want it, there's no way to return the data to the FD from whence it came.

We get a toy version of the same problem with cat | uptime. Output in this case is the same as when using ssh localhost uptime:

 16:25:51 up 21 days,  4:34, 12 users,  load average: 0.16, 0.03, 0.01

In this case, cat will read from stdin and write to the pipe until the pipe's buffer is full, at which time it'll block until something reads. Using strace, we can see that GNU cat from coreutils 8.26 uses a 128KiB buffer -- more than Linux's current 64KiB pipe buffer -- so one 128KiB buffer is the amount of data we can expect to lose.

This implies that the loop doesn't actually abort. It will continue if there is still data left after 128KiB has been read from it. Let's try that:

{
  echo first
  for ((i=0; i < 16384; i++)); do echo "garbage"; done
  echo "second"
} > file

while IFS= read -r line
do
  echo "Read $line"
  cat | uptime > /dev/null
done < file

Here, we write 16386 lines to the file. "first", 16384 lines of "garbage", followed by "second". "garbage" + linefeed is 8 bytes, so 16384 of them make up exactly 128KiB. The file prevents any race conditions between producer and consumer.

Here's what we get:

Read first
Read second

If we add a single additional line of "garbage", we'll see that line instead. If we write one less, "second" disappears. In other words, the expected 128KiB of data were lost between iterations.

ssh has to do the same thing, but more: it has to read input, encrypt it, and transmit it over the wire. On the other side, sshd receives it, decrypts it, and feeds it into the pipe. Both sides work asynchronously in duplex mode, and one side can shut down the channel at any time.

If we use ssh localhost uptime we're racing to see how much data we can push before sshd notifies us that the command has already exited. The faster the computer and slower the roundtrip time, the more we can write. To avoid this and ensure deterministic results, we'll use sleep 5 instead of uptime from now on.

Here's one way of measuring how much data we write:

$ tee >(wc -c >&2) < /dev/zero | { sleep 5; }
65536

Of course, this shows how much tee writes, not how much sleep reads: the 65536 bytes here is just the Linux pipe buffer size.

This is also not a general way to get exact measurements because it relies on buffers aligning perfectly. If nothing is reading from the pipe, you can successfully write two blocks of 32768 bytes, but only one block of 32769.

Fortunately, GNU tee currently uses a buffer size of 8192, so given 8 full reads, it will perfectly fill the 65536 byte pipe buffer. strace also reveals that ssh (OpenSSH 7.4) uses a buffer size of 16384, which is exactly 2x of tee and 1/4x of the pipe buffer, so they will all align nicely and give an accurate count.
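
If you want to check those buffer sizes on your own system (a sketch; it assumes passwordless ssh to localhost like the rest of the post), you can trace the read() calls directly:

head -c 100000 /dev/zero > blob
strace -e trace=read tee /dev/null < blob > /dev/null 2> tee.trace
grep -m1 'read(0' tee.trace    # GNU tee should show 8192-byte reads
strace -e trace=read ssh localhost sleep 1 < blob 2> ssh.trace
grep -m1 'read(0' ssh.trace    # OpenSSH should show 16384-byte reads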

Let's try it with ssh:

$ tee >(wc -c >&2) < /dev/zero | ssh localhost sleep 5
2228224 

As discussed, we'll subtract the pipe buffer, so we can surmise that 2162688 bytes have been read by ssh. We can verify this manually with strace if we want. But why 2162688?

On the other side, sshd has to feed this data into sleep through a pipe with no readers. That's another 65536. We're now left with 2097152 bytes. How can we account for these?

This number is in fact the OpenSSH transport layer's default window size for non-interactive channels!

Here's an excerpt from channels.h in the OpenSSH source code:

/* default window/packet sizes for tcp/x11-fwd-channel */
#define CHAN_SES_PACKET_DEFAULT	(32*1024)
#define CHAN_SES_WINDOW_DEFAULT	(64*CHAN_SES_PACKET_DEFAULT)

There it is: 64*32*1024 = 2097152.

If we adapt our previous example to use ssh anyhost sleep 5 and write "garbage"
(64*32*1024+65536)/8 = 270336 times, we can again game the buffers and cause our iterator to get exactly the lines we want:

{
  echo first
  for ((i=0; i < $(( (64*32*1024 + 65536) / 8)); i++)); do echo "garbage"; done
  echo "second"
} > file

while IFS= read -r line
do
  echo "Read $line"
  ssh localhost sleep 5
done < file

Again, this results in:

Read first
Read second

An entirely useless experiment of course, but pretty nifty!

Compiling Haskell for Windows on Travis CI

or: How I finally came around and started appreciating Docker.

tl;dr: ShellCheck is now automatically compiled for Windows using Wine+GHC in Docker, without any need for additional Windows CI.

I don’t know what initially surprised me more: that people were building ShellCheck on Windows before and without WSL, or that it actually worked.

Unless you’re a fan of the language, chances are that if you run any Haskell software at all, it’s one of pandoc, xmonad, or shellcheck. Unlike GCC, Haskell build tools are not something you ever just happen to have lying around.

While Haskell is an amazing language, it doesn’t come cheap. At 550 MB, GHC, the Haskell Compiler, is the single largest package on my Debian system. On Windows, the Haskell Platform — GHC plus standard tools and libraries — weighs in at 4,200 MB.

If you need to build your own software from source, 4 GB of build dependencies unique to a single application is enough to make you reconsider. This is especially true on Windows where you can’t just close your eyes and hit “yes” in your package manager.

Thanks to the awesome individuals who package ShellCheck for various distros, most people have no idea. To them, ShellCheck is just a 5 MB download without external dependencies.

Starting today, this includes Windows users!

This is obviously great for them, but the more interesting story is how this happens.

There’s no parallel integration with yet another CI system like Appveyor, eating the cost of Windows licenses in the hopes of future business. There’s not been a rise of a much-needed de facto standard package manager, with generous individuals donating their time.

It’s also not me booting Windows at home to manually compile executables on every release, nor a series of patches trying to convince GHC to target Windows from GNU/Linux.

It’s a Docker container with GHC and Cabal running in Wine.

Ugly? Yes. Does it matter? No. The gory details are all hidden away by Docker.

Anyone, including Travis CI, can now easily and automatically compile ShellCheck (or any other Haskell project for that matter) for Windows in two lines, without a Windows license.

If you want ShellCheck binaries for Windows, they’re linked to on the ShellCheck github repo. If you want to take a look at the Docker image, there’s a repo for that too.

ShellCheck has had an official Docker build for quite a while, but it was a contribution (thanks, Nikyle!). I never really had any feelings for Docker, one way or the other.

Consider me converted.