Swap on HDD: Does placement matter? (tl;dr: Yes)

Someone was partitioning an ancient laptop’s spinning rust, and asked where they should put their swap. At the start of the drive? At the end? As a swap file? Does it even matter?

While I’m on my 4th generation SSD at this point, I was interested enough to benchmark and find out. I would have used a swap file thinking it wouldn’t matter much, but boy was I wrong!

Test setup

My only remaining functional HDD — after putting two others in the electronics recycling bin — was a Western Digital 2.0TB WD20EARX SATA drive with 64MB cache.

I installed Ubuntu 21.04 Server on it, and partitioned it with 1G boot, 4G swap, 1.8T ext4 root, and another 4G swap. For easy testing, I did it all through kvm using the raw device without cache:

kvm -m 2048 -drive file=/dev/sda,format=raw,cache=none

I verified that the host did indeed not use its rather more generous RAM to cache the device.

The standard test, building Linux, did not swap as much as I hoped. With enough RAM to successfully boot, it didn’t really touch swap for as long as I bothered to watch it. I instead turned to my favorite RAM hog: building ShellCheck with the GHC Haskell compiler. This is a high RAM, low disk process.

I tried once with plenty of RAM, then each of the following with 2GB RAM and 4GB swap:

  • Swap at the start of the drive
  • Swap at the end of the drive
  • Swap in a file created with dd if=/dev/zero of=4g bs=1M count=4096 (which hdparm --fibmap revealed to be allocated across four contiguous areas in the first 14GB of the device)

For fun, I also included:

  • Swap on an old OCZ Vertex 4 SATA 256GB SSD
  • Swap on my current Samsung 970 Evo Plus 1TB SSD

Each test was preceded by a git clean -fdx, swapoff/swapon, dropping caches, and finally a time cabal v1-build shellcheck.
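
A rough sketch of one such run, assuming the swap area under test shows up as /dev/vda2 inside the VM and the current directory is the ShellCheck checkout (names are illustrative):

git clean -fdx                                # reset the ShellCheck working tree
sudo swapoff -a && sudo swapon /dev/vda2      # switch to the swap area under test
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page/dentry/inode caches
time cabal v1-build shellcheck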

I ran each test twice and picked the lowest number, though the variance was not large. For the swap file, I generated a second one to see if it was the luck of the allocator, but results were similar. dmesg showed no read errors after the tests finished.

Results

[Bar chart of the numbers below]

Or in numbers:

  • 165s (2:45) — RAM only
  • 451s (7:31) — NVMe SSD
  • 771s (12:51) — SATA SSD
  • 1701s (28:21) — Start of HDD
  • 2600s (43:20) — Swap file on HDD
  • 3161s (52:41) — End of HDD

In this test, using a swap file was surprisingly 50%+ slower than simply allocating a swap partition at the start of the drive, in spite of the low fragmentation and Linux’s bypass of the FS layer.

Drives and workloads obviously vary, but if nothing else we can safely say that yes, placement matters.

What exactly was the point of [ "x$var" = "xval" ]?

In shell scripting you sometimes come across comparisons where each value is prefixed with "x". Here are some examples from GitHub:

if [ "x${JAVA}" = "x" ]; then
if [ "x${server_ip}" = "xlocalhost" ]; then
if test x$1 = 'x--help' ; then

I’ll call this the x-hack.

For any POSIX compliant shell, the value of the x-hack is exactly zero: this comparison works without the x 100% of the time. But why was it a thing?
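
For example, in any modern POSIX shell (bash, dash, ksh, zsh), both forms behave identically even for awkward values:

$ var="-f"
$ [ "$var" = "-f" ] && echo equal
equal
$ [ "x$var" = "x-f" ] && echo equal
equal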

Online sources like this StackOverflow Q&A are a little handwavy, saying it’s an alternative to quoting (oof), pointing towards issues with "some versions" of certain shells, or generally cautioning against the mystic behaviors of especially ancient Unix systems without concrete examples.

To determine whether or not ShellCheck should warn about this, and if so, what its long form rationale should be, I decided to dig into the history of Unix with the help of The Unix Heritage Society’s archives. I was unfortunately unable to peer into the closely guarded world of the likes of HP-UX and AIX, so dinosaur herders beware.

These are the cases I found that can fail.

Left-hand side matches a unary operator

The AT&T Unix v6 shell from 1973, at least as found in PWB/UNIX from 1977, would fail to run test commands whose left-hand side matched a unary operator. This must have been immediately obvious to anyone who tried to check for command line parameters:

% arg="-f"
% test "$arg" = "-f"
syntax error: -f
% test "x$arg" = "x-f"
(true)

This was fixed in the AT&T Unix v7 Bourne shell builtin in 1979. However, test and [ were also available as separate executables, and appear to have retained a variant of the buggy behavior:

$ arg="-f"
$ [ "$arg" = "-f" ]
(false)
$ [ "x$arg" = "x-f" ]
(true)

This happened because the utility used a simple recursive descent parser without backtracking, which gave unary operators precedence over binary operators and ignored trailing arguments.

The "modern" Bourne shell behavior was copied by the Public Domain KornShell in 1988, and made part of POSIX.2 in 1992. GNU Bash 1.14 did the same thing for its builtin [, and the GNU shellutils package that provided the external test/[ binaries followed POSIX, so the early GNU/Linux distros like SLS were not affected, nor was FreeBSD 1.0.

The x-hack is effective because no unary operators can start with x.

Either side matches string length operator -l

A similar issue that survived longer was with the string length operator -l. Unlike the normal unary predicates, this one was only parsed as part of an operand to binary predicates:

var="helloworld"
[ -l "$var" -gt 8 ] && echo "String is longer than 8 chars"

It did not make it into POSIX because, as the rationale puts it, "it was undocumented in most implementations, has been removed from some implementations (including System V), and the functionality is provided by the shell", referring to [ ${#var} -gt 8 ].
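
The shell-provided equivalent of the example above works in any POSIX shell:

var="helloworld"
[ "${#var}" -gt 8 ] && echo "String is longer than 8 chars"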

It was not a problem in UNIX v7 where = took precedence, but Bash 1.14 from 1996 would parse it greedily up front:

$ var="-l"
$ [ "$var" = "-l" ]
test: -l: binary operator expected
$ [ "x$var" = "x-l" ]
(true)

It was also a problem on the right-hand side, but only in nested expressions. The -l check made sure there was a second argument, so you would need an additional expression or parentheses to trigger it:

$ [ "$1" = "-l" -o 1 -eq 1 ]
[: too many arguments
$ [ "x$1" = "x-l" -o 1 -eq 1 ]
(true)

This operator was removed in Bash 2.0 later that year, eliminating the problem.

Left-hand side is !

Another issue in early shells was when the left-hand side was the negation operator !:

$ var="!"
$ [ "$var" = "!" ]
test: argument expected            (UNIX v7, 1979)
test: =: unary operator expected   (bash 1.14, 1996)
(false)                            (pd-ksh88, 1988)
$ [ "x$var" = "x!" ]
(true)

Again, the x-hack is effective by preventing the ! from being recognized as a negation operator.

ksh treated this the same as [ ! "=" ], and ignored the rest of the arguments. This quietly returned false, as = is not a null string. Ksh continues to ignore trailing arguments to this day:

$ [ -e / random words/ops here ]
(true)                              (ksh93, 2021)
bash: [: too many arguments         (bash5, 2021)

Bash 2.0 and ksh93 both fixed this problem by letting = take precedence in the 3-argument case, in accordance with POSIX.

Left-hand side is "("

This is by far my favorite.

The UNIX v7 builtin failed when the left-hand side was a left-parenthesis:

$ left="(" right="("
$ [ "$left" = "$right" ]
test: argument expected
$ [ "x$left" = "x$right" ]
(true)

This happens because the ( takes precedence over the =, and becomes an invalid parenthesis group.

Why is this my favorite? Behold Dash 0.5.4 up until 2009:

$ left="(" right="("
$ [ "$left" = "$right" ]
[: 1: closing paren expected
$ [ "x$left" = "x$right" ]
(true)

That was an active bug when the StackOverflow Q&A was posted.

But wait, there’s more!

Here’s Zsh in late 2015, right before version 5.3:

% left="(" right=")"
% [ "$left" = "$right" ]
(true)
% [ "x$left" = "x$right" ]
(false)

Amazingly, the x-hack could be used to work around certain bugs all the way up until 2015, seven years after StackOverflow wrote it off as an archaic relic of the past!

The bugs are of course increasingly hard to come across. The Zsh one only triggers when comparing left-paren against right-paren, as otherwise the parser will backtrack and figure it out.

Another late holdout was Solaris, whose /bin/sh was the legacy Bourne shell as late as Solaris 10 in 2009. However, this was undoubtedly for compatibility, and not because they believed this was a viable shell. A "standards compliant" shell had been an option for a long time before Solaris 11 dragged it kicking and screaming into the 21st century — or at least into the 90s — by switching to ksh93 by default in 2011.

In all cases, the x-hack is effective because it prevents the operands from being recognized as parentheses.

Conclusion

The x-hack was indeed useful and effective against several real and practical problems in multiple shells.

However, the value was mostly gone by the mid-to-late 1990s, and the few remaining issues were cleaned up before 2010 — shockingly late, but still over a decade ago.

The last one managed to stay until 2015, but only in the very specific case of comparing an opening parenthesis to a closing parenthesis in one specific non-system shell.

I think it’s time to retire this idiom, and ShellCheck now offers a style suggestion by default.
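
The suggested fix is simply to drop the prefix from both sides (the warning code is SC2268):

[ "x$var" = "xval" ]    # flagged with a style suggestion
[ "$var" = "val" ]      # equivalent in any POSIX shell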

Epilogue

The Dash issue of [ "(" = ")" ] was originally reported in a form that affected both Bash 3.2.48 and Dash 0.5.4 in 2008. You can still see this on macOS bash today:

$ str="-e"
$ [ \( ! "$str" \) ]
[: 1: closing paren expected     # dash
bash: [: `)' expected, found ]   # bash

POSIX fixes all these ambiguities for up to 4 parameters, ensuring that shell conditions work the same way, everywhere, all the time.

Here’s how Dash maintainer Herbert Xu put it in the fix:

/*
 * POSIX prescriptions: he who wrote this deserves the Nobel
 * peace prize.
 */

Why is /usr/bin/test 4kiB smaller than /usr/bin/[ ?

Reddit user mathisweirdaf posted this interesting observation:

 $ ls -lh /usr/bin/{test,[}
-rwxr-xr-x 1 root root 59K  Sep  5  2019 '/usr/bin/['
-rwxr-xr-x 1 root root 55K  Sep  5  2019  /usr/bin/test

[ and test are supposed to be aliases for each other, and yet there is a 4kiB difference between their GNU coreutils binaries. Why?

First, for anyone surprised: yes, there is a /usr/bin/[. I have a previous post on this subject, but here’s a quick recap:

When you write if [ -e /etc/passwd ]; then .. that bracket is not shell syntax but just a regular command with a funny name. It’s serviced by /usr/bin/[, or (more likely) a shell builtin. This explains a lot of its surprising behavior, e.g. why it’s notoriously space sensitive: [1=2] is no more valid than ls-l/tmp.
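
You can see both on a typical Linux system (output from bash; the exact path may differ):

$ type -a [
[ is a shell builtin
[ is /usr/bin/[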

Anyways, why is there a size difference? We can compare objdump output to see where the data goes. Here’s an excerpt from objdump -h /usr/bin/[:

                 size                                          offset
15 .text         00006e82  0000000000002640  0000000000002640  00002640  2**4
16 .fini         0000000d  00000000000094c4  00000000000094c4  000094c4  2**2
17 .rodata       00001e4c  000000000000a000  000000000000a000  0000a000  2**5

and here’s objdump -h /usr/bin/test:

15 .text         000068a2  0000000000002640  0000000000002640  00002640  2**4
16 .fini         0000000d  0000000000008ee4  0000000000008ee4  00008ee4  2**2
17 .rodata       00001aec  0000000000009000  0000000000009000  00009000  2**5

We can see that the .text segment (compiled executable code) is 1504 bytes larger, while .rodata (constant values and strings) is 864 bytes larger.

Most crucially, the increased size of the .text segment causes it to go from the 8000s to the 9000s, crossing a 0x1000 (4096) page size boundary, and therefore shifting all other segments up by 4096 bytes. This is the size difference we’re seeing.
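
To double-check that arithmetic against the objdump output above (the end of [’s .text lands in the 0x9000s, while test’s stays in the 0x8000s):

$ echo $((0x6e82 - 0x68a2))    # .text size difference in bytes
1504
$ echo $((0x1e4c - 0x1aec))    # .rodata size difference in bytes
864
$ printf '0x%x vs 0x%x\n' $((0x2640 + 0x6e82)) $((0x2640 + 0x68a2))   # where .text ends
0x94c2 vs 0x8ee2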

The only nominal difference between [ and test is that [ requires a ] as a final argument. Checking for that would be a minuscule amount of code, so what are those ~1500 bytes used for?

Since it’s hard to inspect stripped binaries, I built my own copy of coreutils and compared the list of functions in each:

$ diff -u <(nm -S --defined-only src/[ | cut -d ' ' -f 2-) <(nm -S --defined-only src/test | cut -d ' ' -f 2-)
--- /dev/fd/63      2021-02-02 20:21:35.337942508 -0800
+++ /dev/fd/62      2021-02-02 20:21:35.341942491 -0800
@@ -37,7 +37,6 @@
 D __dso_handle
 d _DYNAMIC
 D _edata
-0000000000000099 T emit_bug_reporting_address
 B _end
 0000000000000004 D exit_failure
 0000000000000008 b file_name
@@ -63,7 +62,7 @@
 0000000000000022 T locale_charset
 0000000000000014 T __lstat
 0000000000000014 t lstat
-0000000000000188 T main
+00000000000000d1 T main
 000000000000000b T make_timespec
 0000000000000004 d nslots
 0000000000000022 t one_argument
@@ -142,16 +141,10 @@
 0000000000000032 T umaxtostr
 0000000000000013 t unary_advance
 00000000000004e5 t unary_operator
-00000000000003d2 T usage
+0000000000000428 T usage
 0000000000000d2d T vasnprintf
 0000000000000013 T verror
 00000000000000ae T verror_at_line
-0000000000000008 D Version
-00000000000000ab T version_etc
-0000000000000018 T version_etc_ar
-000000000000042b T version_etc_arn
-000000000000002f R version_etc_copyright
-000000000000007a T version_etc_va
 000000000000001c r wide_null_string.2840
 0000000000000078 T x2nrealloc
 000000000000000e T x2realloc

The major contributors are the version_etc* functions. What do they do?

Well, let’s have a look:

/* The three functions below display the --version information the
   standard way. [...]

These are 260 lines of rather elaborate, internationalized, conditional ways of formatting data that makes up --version output. Together they take about bc <<< "ibase=16; 7A+2F+42B+18+AB+8+99" = 1592 bytes.

What does this mean? Simple. This is what we’re paying an extra 4kiB for:

$ /usr/bin/[ --version
[ (GNU coreutils) 8.30
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by Kevin Braunsdorf and Matthew Bradburn.

[ --version is missing the final ], so the invocation is invalid and the result is therefore implementation defined. GNU is free to let it output version info.

Meanwhile, /usr/bin/test --version is a valid invocation, and POSIX mandates that it simply returns success when the first parameter (--version) is a non-empty string.
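
A quick check of that mandated behavior (exit status 0 means success):

$ /usr/bin/test --version; echo $?
0
$ /usr/bin/test ""; echo $?
1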

This difference is even mentioned in the documentation:

NOTE: [ honors the --help and --version options, but test does not.
test treats each of those as it treats any other nonempty STRING.

Mystery solved!

(Exercise: what would have been the implications of having test support --help and --version in spite of POSIX?)

${var?} and &&: Two simple tips for shell commands in tech docs

tl;dr: Use error-if-unset ${placeholders?} and join commands with && to make it easier and safer to copy-paste shell commands from technical documentation.

I frequently read documentation that includes shell commands, and copy-paste them into a shell. It might look something like this:

To install a JDK:

sudo apt-get update
sudo apt-get install openjdk-<VERSION>-jdk

Obviously this example is particularly simple and clear. A more realistic case might have several 160+ character commands invoking unfamiliar tools and utility scripts with a variety of fixed and variable parameters. To see far too many instances of this, check out your company’s internal wiki.

You, as the documentation author or editor, can improve such command listings in two simple ways.

Use ${NAME?} instead of ad-hoc placeholders like <NAME> or NAME

If I accidentally miss one of the placeholders where I was supposed to insert something, this error-if-unset POSIX compatible syntax will cause a command to fail fast with a simple and clear error message:

$ sudo apt-get install openjdk-${VERSION?}-jdk
bash: VERSION: parameter not set

(Assuming there is no existing environment variable by the same name, that is!)

Compare this to the original angle brackets, which in the best case fail somewhat obtusely:

$ sudo apt-get install openjdk-<VERSION>-jdk
bash: VERSION: No such file or directory

and otherwise, will create/truncate files while giving misleading errors (if I’m lucky enough to get one):

$ sudo apt-get install openjdk-<VERSION>-jdk
E: Unable to locate package openjdk
$ grep VERSION *
grep: invalid option -- 'j'

Here the redirection >-jdk was interpreted as a file redirection (just like in echo Hi > foo.txt), and created a file -jdk that causes otherwise fine commands with globs to fail in unexpected ways (and imagine what would happen with grep <alias name> ~/.bashrc!)

For just an uppercase word, it can be hard to tell whether something like ID is part of the command or whether it’s a placeholder. The command will try to execute whether or not I guess correctly.
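
For comparison, here's the intended flow once the reader has supplied a value (VERSION=17 is just an example):

$ VERSION=17
$ sudo apt-get install openjdk-${VERSION?}-jdk    # now expands to openjdk-17-jdk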

Use && between consecutive commands

I get not just one but two benefits when you use && to join multiple consecutive commands:

sudo apt-get update &&
sudo apt-get install openjdk-${VERSION?}-jdk

The first and well known one is that each command will only run if the previous ones succeeded. Instead of powering through errors and running commands from an increasingly unknown state, the shell will stop so I can get it back on track and continue correctly.

The second and more subtle benefit is that this will make the shell read all commands up front, before it starts executing any. This matters when any of the commands request input. Here’s what happens when I paste the original example:

$ sudo apt-get update
[sudo] password for vidar:
Sorry, try again.
[sudo] password for vidar:

I pasted two commands, but the first one requested a password. The second command was then read as that password, leading to the "Sorry, try again" error.

When I now enter my actual password, only the first command will run. I won’t have any further indication that one of the commands has been swallowed up.

Compare this to using &&, where the shell patiently reads all of the commands beforehand (with a continuation prompt >), and only then executes them:

$ sudo apt-get update &&
> sudo apt-get install openjdk-14-jdk
[sudo] password for vidar:

When I enter my password now, both commands will run as expected.

Conclusion

These simple tips, alone or together, can help make it easier for users to follow instructions involving shell commands, leading to fewer and more easily fixed mistakes. This makes their lives easier, and by extension, yours.

Use echo/printf to write images in 5 LoC with zero libraries or headers

tl;dr: With the Netpbm file formats, it’s trivial to output pixels using nothing but text-based IO.

To show that there’s nothing up my sleeves, here’s an image:

[A computer generated image of gently shaded, repeating squares]

And here’s the complete, dependency free bash script that generates it:

#!/bin/bash
exec > my_image.ppm    # All echo statements will write here
echo "P3 250 250 255"  # magic, width, height, max component value
for ((y=0; y<250; y++)) {
  for ((x=0; x<250; x++)) {
    echo "$((x^y)) $((x^y)) $((x|y))" # r, g, b
  }
}

That’s it. That’s all you need to generate an image that can be read by common tools like GIMP, ImageMagick and Netpbm.

To rewind for a second, it’s sometimes useful to output an image to do printf debugging of 2D algorithms, to visualize data, or simply because you have some procedural pixels you want to put on screen.

However — at least if you hadn’t seen the above example — the threshold to start outputting graphics could seem rather high. Even with a single file library, that’s one more thing to set up and figure out. This is especially annoying during debugging, when you know you’re going to delete it within the hour.

Fortunately, the Netpbm suite of tools has developed an amazingly flexible solution: a set of lowest common denominator file formats for full color Portable PixMaps (PPM), Portable GrayMaps (PGM), and monochrome Portable BitMaps (PBM), that can all be written as plain ASCII text using any language’s basic text IO.

Collectively, the formats are known as PNM: Portable aNyMaps.

The above bash script is more than enough to get started, but a detailed description of the file format with examples can be found in man ppm, man pgm, and man pbm on a system with Netpbm installed.

Each man page describes two versions of a simple file format: one binary and one ASCII. Either is completely trivial to implement, though the ASCII ones are my favorite for being so ridiculously barebones that you can write them by hand in Notepad.
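
For instance, this hand-typed plain PBM is a complete, valid 5x5 image (P1 is the ASCII bitmap magic number, and each 1 is a black pixel):

P1
5 5
1 1 1 1 1
1 0 0 0 1
1 0 1 0 1
1 0 0 0 1
1 1 1 1 1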

To convert them to more common file formats, either open and export in GIMP, use ImageMagick convert my_file.ppm my_file.png, or Netpbm pnmtopng < my_file.ppm > my_file.png.

Should you wish to input images using this trivial ASCII format, the Netpbm tool pnmtoplainpnm will convert a binary ppm/pgm/pbm (as produced by any tool including Netpbm’s anytopnm) into an ASCII ppm/pgm/pbm.

If your goal is to experiment with any kind of image processing algorithm, you can easily slot into Netpbm’s wonderfully Unix-y set of tools by reading/writing PPM on stdin/stdout:

curl http://example.com/input.png | 
    pngtopnm | 
    ppmbrighten -v +10 |
    yourtoolhere |
    pnmscale 2 |
    pnmtopng > output.png

Zsh and Fish’s simple but clever trick for highlighting missing linefeeds

tl;dr: We look at how Zsh and Fish are able to indicate a missing terminating linefeed in program output, even though the Unix programming model precludes examining the output itself.

Most shells, including bash, ksh, dash, and ash, will show a prompt wherever the previous command left the cursor when it exited.

The fact that the prompt (almost) always shows up on the familiar left-most column of the next line is because Unix programs universally cooperate to park the cursor there when they exit.

This is done by always making sure to output a terminating linefeed \n (aka newline):

vidar@vidarholen-vm2 ~ $ whoami
vidar
vidar@vidarholen-vm2 ~ $ whoami | hexdump -c
0000000   v   i   d   a   r  \n  

If a program fails to follow this convention, the prompt will end up in the wrong place:

vidar@vidarholen-vm2 ~ $ echo -n "hello world"
hello worldvidar@vidarholen-vm2 ~ $

However, I recently noticed that zsh and fish will instead show a character indicating a missing linefeed, and still start the prompt where you’d expect to find it:

vidarholen-vm2% echo -n "hello zsh"
hello zsh% 
vidarholen-vm2%

vidar@vidarholen-vm2 ~> echo -n "hello fish"
hello fish⏎
vidar@vidarholen-vm2 ~> 

If you’re disappointed that this is what there’s an entire blog post about, you probably haven’t tried to write a shell. This is one of those problems where the more you know, the harder it seems (obligatory XKCD).

If you have a trivial solution in mind, maybe along the lines of if (!output.ends_with("\n")) printf("%\n");, consider the following restrictions*:

  • Contrary to popular belief, the shell does not sit between programs and the terminal. The shell has no ability to intercept or examine the terminal output of programs.
  • The terminal programming model is based on teletypes (aka TTYs), electromechanical typewriters from the early 1900s. They printed letter by letter onto paper, so there is no memory or screen buffer that can be programmatically read back.

Given this, here are some flawed ways to make it happen:

  • The shell could use pipes to intercept all output, and relay it onto the terminal. While it works in trivial cases like whoami, some programs check whether stdout is a terminal and change their behavior, others go over your head and talk to the TTY directly (e.g. ssh's password prompt), and some use TTY specific ioctls that fail if the output is not a TTY, such as querying window size or disabling local echo for password input.

  • The shell can ptrace the process to see what it writes where. This has a huge overhead and breaks sudo, ping, and other commands that rely on suid.

  • The shell can create a pseudo-tty (pty), run commands in that, and relay information back and forth much like ssh or script does. This is an annoying and heavy-handed approach, which in its ultimate form would require re-implementing an entire terminal emulator.

  • The shell can use ECMA-48 cursor position reporting features: printf '\e[6n' on a supported terminal will cause the terminal to simulate user input of the form ^[[y;xR, where y and x are the row and column. The shell could then read this back to figure out where the cursor is, as sketched below. These kinds of round trips are feasible, but somewhat slow and annoying to implement for such a simple feature.
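
That round trip looks roughly like this in bash, assuming a terminal that actually answers the query:

# Ask for the cursor position (ESC [ 6 n) and read back "ESC [ row ; col R"
IFS='[;' read -srd R -p $'\e[6n' _ row col
printf 'cursor is at row %s, column %s\n' "$row" "$col"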

Zsh and Fish instead have a much simpler and far more clever way of doing it:

  1. They always output the missing linefeed indicator, whether or not it’s needed.
  2. They then pad out the line with $COLUMNS-1 spaces
  3. This is followed by a carriage return to move to the first column
  4. Finally, they show the prompt.

This solution is very simple because it only requires printing a fixed string before every prompt, but it’s highly effective on all terminals§.

Why?

Let’s pretend our terminal is 10 columns wide and 3 rows tall, and a canonical program just wrote a short string with a trailing linefeed:

[vidar     ]
[|         ]
[          ]

The cursor, indicated by |, is at the start of the line. This is what would happen in step 1 and 2:

[vidar      ]
[%         |]
[           ]

The indicator is shown, and since we have written exactly $COLUMNS characters, the cursor is after the last column. Step 3, a carriage return, now moves it back to the start:

[vidar      ]
[|%         ]
[           ]

The prompt now draws over the indicator, and is shown on the same line:

[vidar      ]
[~ $ |      ]
[           ]

The final result is exactly the same as if we had simply written out the prompt wherever the cursor was.

Now, let’s look at what happens when a program does not output a terminating linefeed:

[vidar|     ]
[           ]
[           ]

The indicator is shown, but this time the spaces in step 2 causes the line to wrap all the way around to the next line:

[vidar%     ]
[     |     ]
[           ]

The carriage return moves the cursor back to the start of the next line:

[vidar%     ]
[|          ]
[           ]

The prompt is now shown on that line, and therefore doesn’t overwrite the indicator:

[vidar%     ]
[~ $ |      ]
[           ]

And there you have it. A seemingly simple problem turned out harder than expected, but a clever use of line wrapping made it easy again.

Now that we know the secret sauce, we can of course do the same thing in Bash:

PROMPT_COMMAND='printf "%%%$((COLUMNS-1))s\\r"'
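
Or, spelled out step by step (equivalent, assuming bash keeps COLUMNS up to date):

PROMPT_COMMAND='
  printf "%%"                    # 1. always print the indicator
  printf "%$((COLUMNS-1))s" ""   # 2. pad the line with COLUMNS-1 spaces
  printf "\r"                    # 3. carriage return back to column 1
'                                # 4. bash then draws the prompt as usual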

* These same restrictions are reflected in several other aspects of Unix:

  • While useful and often requested, there is no robust way to get the output of the previously executed command.
  • It’s surprisingly tricky to take screenshots/dumps of terminals, and it only works on specific terminals.
  • The phenomenon of background process output cosmetically trashing foreground processes is well known, and yet there’s no solution.

§ Fish developer and Hacker News reader ComputerGuru explains that there are many caveats related to various terminals’ line wrapping that make this trickier than shown here.