In shell scripting you sometimes come across comparisons where each value is prefixed with "x". Here are some examples from GitHub:
if [ "x${JAVA}" = "x" ]; then
if [ "x${server_ip}" = "xlocalhost" ]; then
if test x$1 = 'x--help' ; then
I’ll call this the x-hack.
For any POSIX-compliant shell, the value of the x-hack is exactly zero: this comparison works without the x 100% of the time. But why was it a thing?
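To be concrete, these are the exact equivalents of the examples above without the prefix (assuming the expansions stay quoted, which they should anyway):
if [ "${JAVA}" = "" ]; then
if [ "${server_ip}" = "localhost" ]; then
if test "$1" = '--help' ; then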
Online sources like this StackOverflow Q&A are a little handwavy, saying it’s an alternative to quoting (most definitely NOT the case!), pointing towards issues with "some versions" of certain shells, or generally cautioning against the mystic behaviors of especially ancient Unix systems without concrete examples.
To determine whether or not ShellCheck should warn about this, and if so, what its long form rationale should be, I decided to dig into the history of Unix with the help of The Unix Heritage Society’s archives. I was unfortunately unable to peer into the closely guarded world of the likes of HP-UX and AIX, so dinosaur herders beware.
These are the cases I found that can fail.
Left-hand side matches a unary operator
The AT&T Unix v6 shell from 1973, at least as found in PWB/UNIX from 1977, would fail to run test commands whose left-hand side matched a unary operator. This must have been immediately obvious to anyone who tried to check for command line parameters:
% arg="-f"
% test "$arg" = "-f"
syntax error: -f
% test "x$arg" = "x-f"
(true)
This was fixed in the AT&T Unix v7 Bourne shell builtin in 1979. However, test and [ were also available as separate executables, and appear to have retained a variant of the buggy behavior:
$ arg="-f"
$ [ "$arg" = "-f" ]
(false)
$ [ "x$arg" = "x-f" ]
(true)
This happened because the utility used a simple recursive descent parser without backtracking, which gave unary operators precedence over binary operators and ignored trailing arguments.
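To make this concrete, here is my reading of that parse (a sketch based on the description above, not the original source): given the three arguments -f, = and -f, the unary operator won and the trailing argument was ignored, so the command effectively became a file test on the string =:
$ [ -f "=" ]
(false)
Since there is normally no file named =, this quietly returned false, which is the (false) result in the transcript above.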
The "modern" Bourne shell behavior was copied by the Public Domain KornShell in 1988, and made part of POSIX.2 in 1992. GNU Bash 1.14 did the same thing for its builtin [, and the GNU shellutils package that provided the external test/[ binaries followed POSIX, so the early GNU/Linux distros like SLS were not affected, nor was FreeBSD 1.0.
The x-hack is effective because no unary operators can start with x.
Either side matches string length operator -l
A similar issue that survived longer was with the string length operator -l. Unlike the normal unary predicates, this one was only parsed as part of an operand to binary predicates:
var="helloworld"
[ -l "$var" -gt 8 ] && echo "String is longer than 8 chars"
It did not make it into POSIX because, as the rationale puts it, "it was undocumented in most implementations, has been removed from some implementations (including System V), and the functionality is provided by the shell", referring to [ ${#var} -gt 8 ].
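For reference, here is that shell-based replacement written out in the style of the -l example above (this should behave the same on any POSIX shell):
var="helloworld"
[ "${#var}" -gt 8 ] && echo "String is longer than 8 chars"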
It was not a problem in UNIX v7 where = took precedence, but Bash 1.14 from 1996 would parse it greedily up front:
$ var="-l"
$ [ "$var" = "-l" ]
test: -l: binary operator expected
$ [ "x$var" = "x-l" ]
(true)
It was also a problem on the right-hand side, but only in nested expressions. The -l check made sure there was a second argument, so you would need an additional expression or parentheses to trigger it:
$ [ "$1" = "-l" -o 1 -eq 1 ]
[: too many arguments
$ [ "x$1" = "x-l" -o 1 -eq 1 ]
(true)
This operator was removed in Bash 2.0 later that year, eliminating the problem.
Left-hand side is !
Another issue in early shells was when the left-hand side was the negation operator !:
$ var="!"
$ [ "$var" = "!" ]
test: argument expected (UNIX v7, 1979)
test: =: unary operator expected (bash 1.14, 1996)
(false) (pd-ksh88, 1988)
$ [ "x$var" = "x!" ]
(true)
Again, the x-hack is effective because it prevents the ! from being recognized as a negation operator.
Ksh treated this the same as [ ! "=" ], and ignored the rest of the arguments. This quietly returned false, as = is not a null string.
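In other words, as I read the ksh88 behavior described above, the comparison degenerated into a plain negated string test:
$ [ ! "=" ]
(false)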
Ksh continues to ignore trailing arguments to this day:
$ [ -e / random words/ops here ]
(true) (ksh93, 2021)
bash: [: too many arguments (bash5, 2021)
Bash 2.0 and ksh93 both fixed this problem by letting = take precedence in the 3-argument case, in accordance with POSIX.
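As a quick illustration of that rule on a current shell: with exactly three arguments and = in the middle, it is always a string comparison, no matter how operator-like the operands look.
$ [ "!" = "!" ] && echo equal
equal
$ [ "-f" = "-f" ] && echo equal
equal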
Left-hand side is "("
This is by far my favorite.
The UNIX v7 builtin failed when the left-hand side was a left-parenthesis:
$ left="(" right="("
$ [ "$left" = "$right" ]
test: argument expected
$ [ "x$left" = "x$right" ]
(true)
This happens because the ( takes precedence over the =, and becomes an invalid parenthesis group.
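For contrast, this is what a well-formed parenthesis group looks like in test syntax (the parentheses have to be escaped so the shell passes them through as arguments); the old parser assumed the lone ( was opening one of these and then never found the closing half:
$ [ \( 1 -eq 1 \) ] && echo grouped
grouped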
Why is this my favorite? Behold Dash 0.5.4 up until 2009:
$ left="(" right="("
$ [ "$left" = "$right" ]
[: 1: closing paren expected
$ [ "x$left" = "x$right" ]
(true)
That was an active bug when the StackOverflow Q&A was posted.
But wait, there’s more!
Here’s Zsh in late 2015, right before version 5.3:
% left="(" right=")"
% [ "$left" = "$right" ]
(true)
% [ "x$left" = "x$right" ]
(false)
Amazingly, the x-hack could be used to work around certain bugs all the way up until 2015, seven years after StackOverflow wrote it off as an archaic relic of the past!
The bugs are of course increasingly hard to come across. The Zsh one only triggers when comparing left-paren against right-paren, as otherwise the parser will backtrack and figure it out.
Another late holdout was Solaris, whose /bin/sh was the legacy Bourne shell as late as Solaris 10 in 2009. However, this was undoubtedly for compatibility, and not because they believed this was a viable shell. A "standards compliant" shell had been an option for a long time before Solaris 11 dragged it kicking and screaming into the 21st century — or at least into the 90s — by switching to ksh93 by default in 2011.
In all cases, the x-hack is effective because it prevents the operands from being recognized as parentheses.
Conclusion
The x-hack was indeed useful and effective against several real and practical problems in multiple shells.
However, the value was mostly gone by the mid-to-late 1990s, and the few remaining issues were cleaned up before 2010 — shockingly late, but still over a decade ago.
The last one managed to stay until 2015, but only in the very specific case of comparing an opening parenthesis to a closing parenthesis in one specific non-system shell.
I think it’s time to retire this idiom, and ShellCheck now offers a style suggestion by default.
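For illustration, this is the direction the suggestion points in (my wording, not necessarily ShellCheck’s exact message): drop the prefix, and use -z or -n when the point is to check for emptiness.
if [ -z "${JAVA}" ]; then        # instead of [ "x${JAVA}" = "x" ]
if [ -n "${JAVA}" ]; then        # instead of [ "x${JAVA}" != "x" ]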
Epilogue
The Dash issue of [ "(" = ")" ] was originally reported in a form that affected both Bash 3.2.48 and Dash 0.5.4 in 2008. You can still see this on macOS bash today:
$ str="-e"
$ [ \( ! "$str" \) ]
[: 1: closing paren expected # dash
bash: [: `)' expected, found ] # bash
POSIX fixes all these ambiguities for up to 4 parameters, ensuring that shell conditions work the same way, everywhere, all the time.
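For reference, here is my rough paraphrase of those rules (a summary from memory of the POSIX test specification, not a quotation):
# 1 argument:  true if the argument is non-empty
# 2 arguments: "! a" negates the one-argument test of a; "-op a" applies the unary operator -op
# 3 arguments: "a -op b" applies the binary operator -op (so = wins here); "! a b" negates the two-argument case
# 4 arguments: "! a -op b" negates the three-argument case
# Anything longer is unspecified, which is why ksh and bash can legally disagree on
# [ -e / random words/ops here ] as shown earlier.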
Here’s how Dash maintainer Herbert Xu put it in the fix:
/*
* POSIX prescriptions: he who wrote this deserves the Nobel
* peace prize.
*/
Comments
Thanks for the investigation and detailed write-up! In the last paragraph, it’s Herbert Xu, not Howard.
Also, he is the dash maintainer, not just a contributor.
Oops, he definitely deserves better for his good work. I’ve updated it!
Real Bourne shells, like /bin/sh on Solaris 10 and earlier, lacked the -n and -z tests, so the "x hack" is also used to determine whether a value is set. That’s what the first example you found is doing.
On those shells, doing something like '[ "$JAVA" = "" ]' is nonsense regardless of whether JAVA is set or not; = requires a right-hand side. So '[ "x$VAR" = x ]' is the old long-form version of '[ -z "$VAR" ]' and '[ "x$VAR" != x ]' is the old long-form version of '[ -n "$VAR" ]'.
I don’t believe this is the case. I have Solaris 10 u8 (SunOS 5.10) in front of me and it supports `-n` and `-z`.
I additionally see the following in the UNSW backup disk labelled "UNIX Level 7 Source" suggesting it was supported in Bourne from the beginning:
IF eq(a, "-n") THEN return(!eq(nxtarg(0), "")) FI
IF eq(a, "-z") THEN return(eq(nxtarg(0), "")) FI
'[ "$JAVA" = "" ]' does have a right-hand side: an empty string. The external test tool is written with empty arguments as a possibility, so it doesn’t appear that empty arguments were expected to be a problem.
Anyway, the startxfce4 script (Copyright 1996-2003, looks like never touched since then) on an up-to-date Arch system has lots of `if test "x$DISPLAY" = "x"` and so on.
I think it’s exactly for this reason.
But an unquoted empty variable would disappear, and cause a parsing problem. That’s the one thing I thought you were going to mention, but didn’t.
Rhialto is right. My main use of the x idiom was to avoid the error when the variable is empty, so Bash doesn’t complain with "unary operator expected", and I hoped to find a solution here. Then I found that if the variable is quoted, i.e. [ "$var" = "value" ], Bash handles both the empty and non-empty cases.
However, I can confirm that this behaviour is fixed in the double bracket command '[[', even if the variable is not quoted, i.e. the construct
v=""
if [[ $v = hello ]] ; then echo "v equals hello"
else echo "v variable is empty"
fi
works as expected (as of 3.2 onwards, where I tested).
Using x to determine if a value is set is alive and well for Windows batch.
right="(" would be better left out. Then compare "$left" = "$left". The effect is the same, and the text is clearer. Your set of examples culminates in a case where you assign right=")", and in some code, having 2 different vars (not named left and right, though), even if you know they are equal, gives the code more symmetry and readability. But for the purpose of exposition, as here, it’s like writing two="1". It just introduces confusion.
> Left-hand side is !
Busybox ash had this problem until 1.22.1. This shell was used as a system shell in some embedded systems (e.g. OpenWrt). Of course, this is old history.
$ var="!"
$ [ "$var" = "!" ]
ash: !: unknown operand
$ echo $?
2
$ busybox ash --help
BusyBox v1.22.1 (2014-09-21 13:01:44 CEST) multi-call binary.
Usage: ash [-/+OPTIONS] [-/+o OPT]... [-c 'SCRIPT' [ARG0 [ARGS]] / FILE [ARGS]]
Unix shell interpreter
You’re forgetting this one, still live today in 2023, with bash 5.2.15:
$ bash --posix -c +H 'test ! -a !; echo $?'
0
$ /bin/test ! -a !; echo $?
/bin/test: ‘-a’: unary operator expected
2
Bash has both a unary and a binary operator -a, while coreutils only has a binary one. POSIX says that when there are 3 arguments, if the first is ! and the second is a unary operator, then the second is taken as a unary operator, not a binary one. But -a is not required to be an operator.
So in the first example of:
% arg="-f"
% test "$arg" = "-f"
The "match" that Unix was seeing was on that minus symbol, mistaking it for the unary negation operator?
Thank you for the hard work of investigating legacy concerns so that this shell wart doesn’t continue until 2050 (although it probably will).
It’s good that eventually *someone* asks the question: why? And actually answers it.
There’s a tiny detail in the article: “as part as part”.
I’m not disagreeing with you, but… you would be surprised at the corner cases that still exist where a conditional isn’t working as expected, and it just seems logical to use the x-hack.
I had to do this recently, because I had call to make changes to grub.cfg on a dual-boot computer. Secure Boot was on, and I was compounding expressions that included grub’s regexp command.
Because regexp is a grub module, and not embedded with the signed bootloader, the module load would fail and cause the if statement to go to the false clause.
A message is put out when secure-boot rejects the regexp.o load, but you’ll never see that in a million years when grub clears the screen and puts up a prompt or a menu.
What I’m trying to say is… you can *see* the "x"… you can’t see, e.g., if a variable substitution is actually happening, or if grub or something else has a bug where it’s not accepting double-quote enclosures, etc. But you can see an "x = x", and at least feel comfortable: hey, all things being equal, this equality test should work. I accept my description is a bit nebulous, but I hope you catch my meaning.
But yes… it’s hardly worth leaving in for "production" quality scripts.