Compiling Haskell for Windows on Travis CI

or: How I finally came around and started appreciating Docker.

tl;dr: ShellCheck is now automatically compiled for Windows using Wine+GHC in Docker, without any need for additional Windows CI.

I don’t know what initially surprised me more: that people were building ShellCheck on Windows before and without WSL, or that it actually worked.

Unless you’re a fan of the language, chances are that if you run any Haskell software at all, it’s one of pandoc, xmonad, or shellcheck. Unlike GCC, Haskell build tools are not something you ever just happen to have lying around.

While Haskell is an amazing language, it doesn’t come cheap. At 550 MB, GHC, the Haskell Compiler, is the single largest package on my Debian system. On Windows, the Haskell Platform — GHC plus standard tools and libraries — weighs in at 4,200 MB.

If you need to build your own software from source, 4 GB of build dependencies unique to a single application is enough to make you reconsider. This is especially true on Windows where you can’t just close your eyes and hit “yes” in your package manager.

Thanks to the awesome individuals who package ShellCheck for various distros, most people have no idea. To them, ShellCheck is just a 5 MB download without external dependencies.

Starting today, this includes Windows users!

This is obviously great for them, but the more interesting story is how this happens.

There’s no parallel integration with yet another CI system like AppVeyor, eating the cost of Windows licenses in the hopes of future business. Nor has a much-needed de facto standard package manager arisen, backed by generous individuals donating their time.

It’s also not me booting Windows at home to manually compile executables on every release, nor a series of patches trying to convince GHC to target Windows from GNU/Linux.

It’s a Docker container with GHC and Cabal running in Wine.

Ugly? Yes. Does it matter? No. The gory details are all hidden away by Docker.

Anyone, including Travis CI, can now easily and automatically compile ShellCheck (or any other Haskell project for that matter) for Windows in two lines, without a Windows license.
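In spirit, those two lines look something like this. The image name, mount point, and build command below are just placeholders; the real ones live in the repo linked below:

$ docker pull example/wine-ghc
$ docker run --rm -v "$PWD:/src" example/wine-ghc cabal build   # hypothetical image that runs GHC+Cabal under Wine against the mounted source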

If you want ShellCheck binaries for Windows, they’re linked to on the ShellCheck GitHub repo. If you want to take a look at the Docker image, there’s a repo for that too.

ShellCheck has had an official Docker build for quite a while, but it was a contribution (thanks, Nikyle!). I never really had any feelings for Docker, one way or the other.

Consider me converted.

[ -z $var ] works unreasonably well

There is a subreddit /r/nononoyes for videos of things that look like they’ll go horribly wrong, but amazingly turn out ok.

[ -z $var ] would belong there.

It’s a bash statement that tries to check whether the variable is empty, but it’s missing quotes. Most of the time, when dealing with variables that can be empty, this is a disaster.

Consider its opposite, [ -n $var ], for checking whether the variable is non-empty. With the same quoting bug, it becomes completely unusable:

Input       Expected   [ -n $var ]
“”          False      True!
“foo”       True       True
“foo bar”   True       False!

These issues are due to a combination of word splitting and the fact that [ is not shell syntax but traditionally just an external binary with a funny name. See my previous post Why Bash is like that: Pseudo-syntax for more on that.
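You can see both incarnations of [ on a typical GNU/Linux box (the exact path may differ on your system):

$ type -a [
[ is a shell builtin
[ is /usr/bin/[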

The evaluation of [ is defined in terms of the number of arguments. The argument values have much less to do with it. Ignoring negation, here’s a simplified excerpt from POSIX test:

# Arguments   Action                                  Typical example
0             False                                   [ ]
1             True if $1 is non-empty                 [ "$var" ]
2             Apply unary operator $1 to $2           [ -x "/bin/ls" ]
3             Apply binary operator $2 to $1 and $3   [ 1 -lt 2 ]

Now we can see why [ -n $var ] fails in two cases:

When the variable is empty and unquoted, it’s removed, and we pass 1 argument: the literal string “-n”. Since “-n” is not an empty string, it evaluates to true when it should be false.

When the variable contains foo bar and is unquoted, it’s split into two arguments, and so we pass 3: “-n”, “foo” and “bar”. Since “foo” is not a binary operator, it evaluates to false (with an error message) when it should be true.
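A quick bash session makes both failure modes concrete (the exact error message text may vary with your shell version):

$ var=""
$ [ -n $var ]; echo $?      # expands to [ -n ]: one non-empty argument, so true
0
$ var="foo bar"
$ [ -n $var ]; echo $?      # expands to [ -n foo bar ]: "foo" is not a binary operator
bash: [: foo: binary operator expected
2
$ [ -n "$var" ]; echo $?    # quoted: two arguments, -n applied to "foo bar"
0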

Now let’s have a look at [ -z $var ]:

Input       Expected           [ -z $var ]     Actual test
“”          True: is empty     True            1 arg: is “-z” non-empty
“foo”       False: not empty   False           2 args: apply -z to “foo”
“foo bar”   False: not empty   False (error)   3 args: apply “foo” to “-z” and “bar”

It performs a completely wrong and unexpected action for both empty strings and strings with multiple words. However, both cases happen to fail in exactly the right way!

In other words, [ -z $var ] works way better than it has any possible business doing.

This is not to say you can skip quoting, of course. For “foo bar”, [ -z $var ] in bash will return the correct exit code, but prints an ugly error in the process. For “ ” (a string of only spaces), it returns true when it should be false, because the unquoted argument is removed as if it were empty. Bash will also incorrectly pass var="foo -o x", because it expands into a valid test through code injection.
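Here are the two genuinely wrong cases in a terminal:

$ var="   "
$ [ -z $var ] && echo "empty"      # word splitting removes $var entirely, leaving [ -z ]
empty
$ var="foo -o x"
$ [ -z $var ] && echo "empty"      # becomes [ -z foo -o x ], a valid test that happens to be true
empty
$ [ -z "$var" ] && echo "empty"    # quoted: correctly false, prints nothing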

The moral of the story? Same as always: quote, quote, quote. Even when things appear to work.

ShellCheck is aware of this difference, and you can check the code used here online. [ -n $var ] gets an angry red message, while [ -z $var ] merely gets a generic green quoting warning.

Technically correct: floating point calculations in bc

Whenever someone asks how to do floating point math in a shell script, the answer is typically bc:

$  echo "scale=9; 22/7" | bc
3.142857142

However, this is technically wrong: bc does not support floating point at all! What you see above is arbitrary precision FIXED point arithmetic.

The user’s intention is obviously to do math with fractional numbers, regardless of the low level implementation, so the above is a good and pragmatic answer. However, technically correct is the best kind of correct, so let’s stop being helpful and start pedantically splitting hairs instead!

Fixed vs floating point

There are many important things that every programmer should know about floating point, but in one sentence, the larger they get, the less precise they are.

In fixed point you have a certain number of digits, and a decimal point fixed in place like on a tax form: 001234.56. No matter how small or large the number, you can always write down increments of 0.01, whether it’s 000000.01 or 999999.99.
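bc works just like that tax form. With two decimals of scale, you can always add 0.01, no matter how large the number gets:

$ echo "scale=2; 999999.99 + 0.01" | bc
1000000.00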

Floating point, meanwhile, is basically scientific notation. If you have 1.23e-4 (0.000123), you can increment by a millionth to get 1.24e-4. However, if you have 1.23e4 (12300), you can’t add less than 100 unless you reserve more space for more digits.

We can see this effect in practice in any language that supports floating point, such as Haskell:

> truncate (16777216 - 1 :: Float)
16777215
> truncate (16777216 + 1 :: Float)
16777216

Subtracting 1 gives us the decremented number, but adding 1 had no effect with floating point math! bc, with its arbitrary precision fixed points, would instead correctly give us 16777217! This is clearly unacceptable!
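Sure enough:

$ echo "16777216 + 1" | bc
16777217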

Floating point in bc

The problem with the bc solution is, in other words, that the math is too correct. Floating point math always introduces and accumulates rounding errors in ways that are hard to predict. Fixed point doesn’t, and therefore we need to find a way to artificially introduce the same type of inaccuracies! We can do this by rounding a number to N significant bits, where N = 24 for float and 52 for double. Here is some bc code for that:

scale=30

define trunc(x) {
  auto old, tmp
  /* truncate toward zero by dividing at scale 0 */
  old=scale; scale=0; tmp=x/1; scale=old
  return tmp
}
define fp(bits, x) {
  auto i
  if (x < 0) return -fp(bits, -x);
  if (x == 0) return 0;
  /* scale x into [1,2) by powers of two, tracking the adjustment in i */
  i=bits
  while (x < 1) { x*=2; i+=1; }
  while (x >= 2) { x/=2; i-=1; }
  /* round the normalized value to the given number of fraction bits, then undo the scaling */
  return trunc(x * 2^bits + 0.5) / 2^(i)
}

define float(x) { return fp(24, x); }
define double(x) { return fp(52, x); }
define test(x) {
  print "Float:  ", float(x), "\n"
  print "Double: ", double(x), "\n"
}

With this file named fp, we can try it out:

$ bc -ql fp <<< "22/7"
3.142857142857142857142857142857

$ bc -ql fp <<< "float(22/7)"
3.142857193946838378906250000000

The first number is correct to 30 decimals. Yuck! However, with our floating point simulator applied, we get the desired floating point style errors after ~7 decimals!

Let's write a similar program for doing the same thing but with actual floating point, printing them out up to 30 decimals as well:

{-# LANGUAGE RankNTypes #-}
import Control.Monad
import Data.Number.CReal
import System.Environment

main = do
    input <- liftM head getArgs
    putStrLn . ("Float:  " ++) $ showNumber (read input :: Float)
    putStrLn . ("Double: " ++) $ showNumber (read input :: Double)
  where
    showNumber :: forall a. Real a => a -> String
    showNumber = showCReal 30 . realToFrac

Here's a comparison of the two:

$ bc -ql fp <<< "x=test(1000000001.3)"
Float:  1000000000.000000000000000000000000000000
Double: 1000000001.299999952316284179687500000000

$ ./fptest 1000000001.3
Float:  1000000000.0
Double: 1000000001.2999999523162841796875

Due to differences in rounding and/or off-by-one bugs, the two aren't always identical (they happen to be here), but the error bars are similar.

Now we can finally start doing floating point math in bc!

Parameterized Color Cell Compression

I came across a quaint and adorable paper from SIGGRAPH’86: Two bit/pixel Full Color Encoding. It describes Color Cell Compression, an early ancestor of Adaptive Scalable Texture Compression which is all the rage these days.

Like ASTC, it offers a fixed 2 bits/pixel encoding for color images. However, the first of many d’awwws in this paper comes as early as the second line of the abstract, when it suggests that a fixed rate is useful not for the random access we covet for rendering today, but simply for doing local image updates!

The algorithm can compress a 640×480 image in just 11 seconds on a 3MHz VAX 11/750, and decompress it basically in real time. This means that it may allow full color video, unlike these impractical, newfangled transform based algorithms people are researching.

CCC actually works astonishingly well. Here’s our politically correct Lenna substitute:

[Image: mandrill]

The left half of the image is 24bpp, while the right is 2bpp. Really the only way to tell is in the eyes, and I’m sure there’s an interesting, evolutionary explanation for that.

If we zoom in, we can get a sense of what’s going on:

[Image: mandrill_eye]

The image is divided into 4×4 cells, and each cell is composed of only two different colors. In other words, the image is composed of 4×4 bitmaps with a two color palette, each chosen from an image-wide 8bit palette. A 4×4 bitmap would take up 16 bits, and two 8bit palette indices would take up 16 bits, for a total of 32 bits per 16 pixels — or 2 bits per pixel.

The pixels in each cell are divided into two groups based on luminosity, and each group gets its own color based on the average color in the group. One of the reasons this works really well, the author says, is because video is always filmed so that a change in chromaticity has an associated change in luminosity — otherwise on-screen features would be invisible to the folks at home who still have black&white TVs!

We now know enough to implement this lovely algorithm: find an 8bit palette covering the image, then for each cell of 4×4 pixels, divide the pixels into two groups based on whether their luminosity is over or under the cell average. Find the average color of each part, and find its closest match in the palette.

However, let’s experiment! Why limit ourselves to 4×4 cells with 2 colors each from a palette of 256? What would happen if we used 8×8 cells with 3 colors each from a palette of 512? That also comes out to around 2 bpp: 64 pixels × log2 3 ≈ 102 bits of bitmap, plus three 9-bit palette indices, is about 129 bits per 64 pixels.

Parameterizing palette and cell size is easy, but how do we group pixels into k colors based on luminosity? Simple: instead of using the mean, we use k-means!

Here’s a colorful parrot in original truecolor on the left, CCC with 4×4 cells in the middle, and 32×32 cells (1.01 bpp) on the right. Popartsy!

[Image: ara3]

Here’s what we get if we only allow eight colors per horizontal line. The color averaging effect is really pronounced:

[Image: ara4]

And here’s 3 colors per 90×90 cell:
[Image: ara6]

The best part about this paper is the discussion of applications. For 24fps video interlaced at 320×480, they say, you would need a transfer rate of 470 kb/s. Current microcomputers have a transfer rate of 625 kb/s, so this is well within the realm of possibility. Today’s standard 30 megabyte hard drives could therefore store around 60 seconds of animation!

Apart from the obvious benefits of digital video like no copy degradation and multiple resolutions, you can save space when panning a scene by simply transmitting the edge in the direction of the pan!

You can even use CCC for electronic shopping. Since the images are so small and decoding so simple, you can make cheap terminals in great quantities, transmit images from a central location and provide accompanying audio commentary via cable!

In addition to one-to-many applications, you can have one-to-one electronic, image based communication. In just one minute on a 9600bps phone connection, a graphic arts shop can transmit a 640×480 image to clients for approval and comment.

You can even do many-to-many teleconferencing! Imagine the ability to show the speaker’s face, or a drawing they wish to present to the group on simple consumer hardware!

Truly, the future is amazing.


Here’s the JuicyPixels based Haskell implementation I used. It doesn’t actually encode the image, it just simulates the degradation. Ironically, this version is slower than the authors’ original, even though the hardware is five or six orders of magnitude faster!

Apart from the parameterization, I added two other improvements: First, instead of the naive RGB based average suggested in the paper, it uses the YCrCb average. Second, instead of choosing the palette from the original image, it chooses it from the averages. This doesn’t matter much for colorful photographs, but gives better results for images lacking gradients.

Basics of a Bash action game

If you want to write an action game in bash, you need the ability to check for user input without actually waiting for it. While bash doesn’t let you poll the keyboard in a great way, it does let you wait for input for a minuscule amount of time with read -t 0.0001.

Here’s a snippet that demonstrates this by bouncing some text back and forth, and letting the user control position and color. It also sets (and unsets) the necessary terminal settings for this to look good:

#!/usr/bin/env bash

# Reset terminal on exit
trap 'tput cnorm; tput sgr0; stty echo; clear' EXIT

# invisible cursor, no echo
tput civis
stty -echo

text="j/k to move, space to color"
max_x=$(($(tput cols) - ${#text}))
dir=1 x=1 y=$(($(tput lines)/2))
color=3

while sleep 0.05 # GNU specific!
do
    # move and change direction when hitting walls
    (( x == 0 || x == max_x )) && \
        ((dir *= -1))
    (( x += dir ))


    # read all the characters that have been buffered up
    while IFS= read -rs -t 0.0001 -n 1 key
    do
        [[ $key == j ]] && (( y++ ))
        [[ $key == k ]] && (( y-- ))
        [[ $key == " " ]] && color=$((color%7+1))
    done

    # batch up all terminal output for smoother action
    framebuffer=$(
        clear
        tput cup "$y" "$x"
        tput setaf "$color"
        printf "%s" "$text"
    )

    # dump to screen
    printf "%s" "$framebuffer"
done

Adventures in String Reversal

Oh, string reversal! The bread and butter of Programming 101 exams. If I ask you to prove your hacker worth by implementing it in your favorite language, how long would it take you, and how many tries would you need to get it right?

Five minutes with one or two tries? 30 seconds and nail it on the first try?

What if I say that this is 2013 and your software can’t just fail because a user inputs non-ASCII data?

Well… Java, C#, Python, Haskell and all other modern languages have native Unicode string types, so at most you’ll just need another minute to verify that it does indeed work, right?

No, you will in fact need several hours and hundreds of lines of code. Reversing a string is much harder than one would think.
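For concreteness, here’s roughly the kind of naive implementation we’re talking about — a bash sketch that walks the string backwards one character at a time:

reverse() {
    local str=$1 result="" i
    for (( i=${#str}-1; i >= 0; i-- )); do
        result+=${str:i:1}
    done
    printf '%s\n' "$result"
}

reverse "hello world"    # dlrow olleh -- fine for ASCII, wrong for the cases below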

The following are cases that a string reversal algorithm could reasonably be expected to handle, but which your initial, naive implementation most likely fails:

Byte order marks

Wikipedia says that “The byte order mark (BOM) is a Unicode character used to signal the endianness (byte order) of a text file or stream. It is encoded at U+FEFF byte order mark (BOM). BOM use is optional, and, if used, should appear at the start of the text stream.”

It’s obviously a bug if the BOM ends up at the end of the string when it’s reversed. At least that’s a simple fix, right?

Surrogate pairs

Environments based around 16-bit character types, like Java and C#’s char and some C/C++ compilers’ wchar_t, had an awkward time when Unicode 2.0 came along, which expanded the number of characters from 65536 to 1114112. Characters in so-called supplementary planes will not fit in a 16-bit char, and will be encoded as a surrogate pair – two chars next to each other.

If two chars form a single code point (see e.g. Java’s String.codePointAt(int)), reversing them produces an invalid character.

Trashing characters in the string is not a property of correct string reversers. Please fix.

Composing characters

While there is a separate character for “ñ”, n with tilde, it can also be written as two characters: regular “n” (U+006E) plus composing tilde (U+0303), which I’ll write as a regular tilde for illustration.

In this way, you can encode “pin~a colada”, and it will render as “piña colada”. If the string is trivially reversed, it becomes “adaloc a~nip” which will render as “adaloc ãnip”. The tilde is now on the wrong character.
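You can watch this happen in a terminal. This assumes bash 4.2+ for the \u escape and a multibyte-aware rev(1), and the exact rendering depends on your font:

$ s=$'pin\u0303a colada'    # "n" followed by the combining tilde U+0303
$ echo "$s"
piña colada
$ rev <<< "$s"              # reversing code points puts the tilde on the wrong letter
adaloc ãnip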

Please don’t shuffle diacritical marks in the input string. Just reverse it.

By the way, if you try to fix this by ensuring that composing characters stay behind their preceding character, you’ll introduce a regression. Double composing characters go between the characters they compose.

To put a ‘double macron below’ (U+035F) under the characters “ea” in “mean”, you’d encode it between them — “me_an”, writing the macron as an underscore for illustration — and it renders with the macron spanning “ea”. If you try to reverse it while keeping the macron after the “e”, you end up with “nae_m”, which puts the macron under “em”, rather than the original, correct “na_em”, with the macron under “ae”.

Directional overrides

What’s “hello world” backwards? It’s “hello world” if your implementation is to be believed.

It happens to be encoded with left-to-right and right-to-left overrides as “U+202D U+202E dlrow olleh”.

In this order, everything from the second character onward is shown right-to-left, so it displays as “hello world”. With trivial reversal, it becomes “hello world” followed by an RLO immediately cancelled by an LRO.

Your string reverser doesn’t actually reverse strings. Would you kindly sort that out?

Obviously, it also has to handle explicit directional embedding, U+202A and U+202B, which are similar but not identical to directional overrides.

RTL scripts

Reversal issues occur naturally in bidirectional text. A mix such as “hello עולם” will render “hello” LTR and “עולם” RTL (the “ם” is encoded last, but displays leftmost in that word). When the Latin script comes first, the string starts from the left margin, with the first encoded character to the left.

If we trivially reverse this string, we get “olleh םלוע” as it starts rendering from the right margin. The first encoded character appears rightmost in the rightmost word, while the last encoded character displays rightmost in the leftmost word, i.e. in the middle.

Obviously, “hello עולם” backwards is “םלוע olleh”. Please add this to your list.

Left-to-right and right-to-left markers

Similarly to the two cases above, the LRM (U+200E) and RLM (U+200F) codes allow changing the direction in which neutral characters (such as punctuation) are rendered.

“(RLM)O:” will be rendered as “:O” in the right margin. With trivial string reversal, it will still render as “:O”, starting at the left margin.

Didn’t we already file two bugs about this?

Pop directional formatting

Once your kludged and fragile directional string reversal support appears to work reasonably ok, along comes the U+202C Pop Directional Formatting character. It never ends!

This character undoes the previous explicit directional override, whatever it happened to be. You can no longer try to be clever by splitting the string up into linear sections based on directional markers; you have to do full, stack-based parsing.

Here’s the ten thousand word specification of the Unicode directionality algorithm. Have fun.

Interlinear annotations

Even if you give up and add a TODO to handle directionality, you still have some cases to go. In logographic languages like Chinese and Japanese, you can have pronunciation guides, so called ruby characters, alongside the text.

If your browser supports it, here’s an example: 漢字 (with “kanji” rendered as ruby text above it).

To support this in plain text, Unicode has U+FFF9 through U+FFFB, the Interlinear Annotation Anchor, Separator and Terminator characters respectively. The above could be encoded as “U+FFF9 漢字 U+FFFA kan U+FFFA ji U+FFFB”.

Reversing the anchor and terminating characters is obviously a bug.

Your string reverser produces garbled output instead of a reversed string… Is it going to be much longer?

Note that reversing just the contents is still wrong. Instead of correctly annotating “字漢” with “ij nak”, you’d be annotating “ij” with “nak 字漢”.

Once you’ve correctly handled this case, try it again when you have an excess of separators at the end of the ruby text. Normally, these would just be ignored, but if you reversed them and put them in front, they’ll push all ruby characters away from where they were supposed to be.

For “U+FFF9 漢字 U+FFFA kan U+FFFA ji U+FFFA U+FFFB”, instead of “字漢” annotated with “ijnak”, you’d get “字漢ijnak”.

(Update: Commenter Jim convincingly argues that you’d want to reverse the ruby logograph groups but not the characters themselves, resulting in “jikan”.)

Like with the composing characters, your string reversal shuffles ruby characters around. Please… oh, why bother.

Conclusion

Your implementation most likely had half a dozen bugs. Maybe string reversal is beyond your abilities? Join the club!

Hopefully you had fun anyways.