If you've ever developed anything using pure JavaScript, you probably know the language has a few quirks.
Now, I don't want to jump on the hate-wagon: some stuff actually makes sense once you understand how ECMA-262 is specified and how certain things are processed by the syntactic and semantic analyzers. And let's be honest, most of the "gotcha" examples people come up with are deliberately made to look weird. That being said, some of the results you get from specific expressions are borderline insane.
Let's look into the easier stuff first. You've probably already seen that in JavaScript '3' - 1 equals 2, whereas '3' + 1 equals 31. There's nothing weird about this, though: + is an operator used for both string concatenation and numeric addition, and which one you get depends on the operands: if either of them is a string, you get concatenation. It works the same way in C#, Java and some other languages that convert the other operand to a string automatically.
What might be considered slightly unusual is that since - is used only for subtraction, it implicitly converts the string operand to a number. But that's just the language trying to be helpful. Maybe too helpful sometimes, but it's not that different from the example with the addition.
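If you want to see it for yourself, here's a quick sketch you can paste into a browser console or Node:

```javascript
// '+' concatenates when either operand is a string;
// '-' has no string meaning, so it converts the string to a number.
const plus = '3' + 1;     // '31'
const minus = '3' - 1;    // 2
const flipped = 1 + '3';  // '13' (operand order doesn't matter for '+')
```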
Let's get into something more interesting. What do you think is the result of this expression?
3 > 2 > 1
If you try this example in Python, you get True, but in JavaScript it is false. Why is that?
Well, the two languages interpret the expression differently. Python sees it as two separate conditions, testing whether 3 > 2 and 2 > 1. You can use this syntax for intervals and it can be quite handy. JavaScript, however, doesn't really know how to evaluate this condition other than from left to right. So it tests whether 3 > 2. That is true. Then it tries to test whether true > 1. To do this, it does the only thing it can and converts true to 1, so the final condition is actually 1 > 1, which is false.
If you wish to test the interval in JavaScript the same way it works in Python, you need to write it down as (3 > 2) && (2 > 1); there is no shorter way.
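Here's the evaluation spelled out step by step:

```javascript
// Left to right: (3 > 2) > 1 becomes true > 1, then 1 > 1, which is false.
const chained = 3 > 2 > 1;            // false
const interval = (3 > 2) && (2 > 1);  // true, the explicit interval test
```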
Alright, let's try something different. What do you think is the result of the next expression?
0.2 + 0.1 == 0.3
Any sane person would say the result is true. But it's not. And you know what? This is not just a JavaScript thing. Try it in C#, Java, Python or any other language. Most of them will tell you the result is false.
Why is that? Well, that's due to the way floating-point numbers are represented and stored in computer memory. Most programming languages use the IEEE 754 standard for representing floating-point numbers, which uses binary representation.
In binary representation, numbers that seem simple in our base-10 system (0.1, for instance) become infinitely repeating fractions when converted to binary. This leads to rounding errors when performing arithmetic operations, as not all numbers can be represented exactly in binary.
As a result, when you perform operations like addition, subtraction, multiplication, or division with floating-point numbers, you may encounter small inaccuracies in the result due to rounding errors. These inaccuracies can accumulate over multiple arithmetic operations, leading to unexpected behavior in some cases.
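A small demonstration, including the usual workaround of comparing within a tiny tolerance instead of exactly (Number.EPSILON here is just one common choice of tolerance, not the only one):

```javascript
// 0.1 and 0.2 have no exact binary representation, so the sum drifts:
const sum = 0.1 + 0.2;                               // 0.30000000000000004
const exact = sum === 0.3;                           // false
// Compare within a tiny tolerance instead of exactly:
const close = Math.abs(sum - 0.3) < Number.EPSILON;  // true
```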
So yeah, there's really no point in hating on JavaScript for this. This is just how computers work; it's not the fault of the language. But do you know what is the fault of the language? This thing.
parseInt(0.0000005);
In any other language you will get either 0 or some form of exception, depending on which parsing function or form of casting you use. Do you know what the result is in JavaScript? The result is 5. Do you know what's even wilder? parseInt(0.5) is 0. Even parseInt(0.000005) is 0. But parseInt(0.0000005), or let's say parseInt(0.00000000000005), is 5. Weird, huh?
The reason for this is that parseInt expects a string as its first argument. But what happens when you convert a really small number to a string in JavaScript? For numbers smaller than 1e-6, JavaScript switches to scientific notation. So 0.0000005 as a string is actually '5e-7'. And how does parseInt work? Well, it just reads digits and stops at the first non-numeric character. So parseInt("42omgWTF") returns 42. And you guessed it, parseInt("5e-7") is 5.
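You can watch the whole chain happen:

```javascript
// parseInt coerces a non-string argument to a string first.
const str = String(0.0000005);   // '5e-7', below 1e-6 JavaScript stringifies
                                 // numbers in scientific notation
const a = parseInt(0.0000005);   // 5, parses '5e-7' and stops at the 'e'
const b = parseInt(0.000005);    // 0, parses '0.000005' and stops at the '.'
const c = parseInt('42omgWTF');  // 42, same rule: stop at the first non-digit
```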
Since we're doing math, do you know what the result of this little addition is?
010 + 03
No, it's not 13, it's actually 11. But it's not that strange once you realize numbers with leading zeros are interpreted as octal (base-8) literals. Therefore, when you write 010, it's treated as the octal representation of the number 8, and 03 is treated as the octal representation of the number 3. Starting with ECMAScript 6 we now have the prefixes 0b (binary), 0o (octal) and 0x (hexadecimal), but a plain leading 0 still works for historical reasons, although it won't work in strict mode.
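The explicit prefixes in action (the legacy leading-zero form is left out of the runnable part, since it throws in strict mode and in modules):

```javascript
// The unambiguous ES6 numeric literal prefixes:
const oct = 0o10 + 0o3;  // 11, the same value as the legacy 010 + 03
const bin = 0b1000;      // 8
const hex = 0x8;         // 8
```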
Let's leave math behind and talk about types. As you probably know, types aren't really JavaScript's strongest suit. I mean, that's the reason TypeScript was made, after all. But anyway, do you know what typeof(null) is? In Java, the null literal has a special null type with no name (so you cannot declare a variable of that type). In C# it's pretty much the same (at least before C# 3.0; nowadays it doesn't officially have a type at all, although it's a bit more complicated). In C++11, there is nullptr, which is of type std::nullptr_t. And Python has None of type NoneType.
So what is the type of null in JavaScript? It is object.
Now, before we grab pitchforks, it's not as crazy as it sounds at first. Many languages treat null as a special singleton object under the hood. But it always has its own type (or "no type" at all, though that's just semantics). But yeah, only in JavaScript can you write typeof({'wtf': 'is_this'}) == typeof(null) and get true.
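A quick demonstration, including the usual way to actually check for null:

```javascript
const typeOfNull = typeof null;  // 'object', a bug frozen into the spec
const typeOfObj = typeof {};     // 'object', typeof can't tell them apart
// The reliable check is a direct strict comparison:
const value = null;
const isNull = value === null;   // true
```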
By the way, did you know that typeof(NaN) is number? I mean, it sort of makes sense from a programmer's point of view, but it's still quite funny that "something that's not a number is a number".
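And NaN comes with a bonus quirk worth knowing about:

```javascript
const t = typeof NaN;             // 'number', NaN is an IEEE 754 value
const selfEqual = NaN === NaN;    // false, NaN never equals anything, even itself
const check = Number.isNaN(NaN);  // true, the reliable way to test for NaN
```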
But let's get into the crazy territory now. One of my all-time favourites is this thing:
('b' + 'a' + + 'b' + 'a').toUpperCase();
Do you know what the resulting text of the above expression is? You might not believe me, but the result is BANANA. That's quite bananas, isn't it? (I'll let myself out.)
No, but seriously, I'm not joking. And the crazy thing is, it sort of makes sense (again). But for this trick to work you kind of need the upper-case conversion, otherwise it's way too obvious what's going on.
You see, the second plus in the expression is actually a unary operator applied to the second b, not a binary operator concatenating "nothing". So what we are doing is something like:
('b' + 'a' + (+'b') + 'a').toUpperCase();
Now do you see what's going on? The unary operator + is trying to convert the second b into a number, but it can't, so it returns NaN. That is then simply converted into the text 'NaN', so we get 'b' + 'a' + 'NaN' + 'a'. And now you see why we needed the uppercase; otherwise the result would be 'baNaNa'.
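The whole trick, decomposed:

```javascript
// The unary + fails to parse 'b' as a number and yields NaN,
// which then gets stringified during the concatenation:
const coerced = +'b';                    // NaN
const word = 'b' + 'a' + coerced + 'a';  // 'baNaNa'
const shouted = ('b' + 'a' + + 'b' + 'a').toUpperCase();  // 'BANANA'
```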
And for the grand finale let's go with this beauty I've recently discovered:
new RegExp({}).test('mom');
new RegExp({}).test('dad');
What do you think are the results of the above code? Whatever it is, surely it's the same value twice, right?
Surprise, the results are true and false respectively. So what's going on? Well, to put it simply, the RegExp constructor expects its first argument to be either a string or another RegExp. Since {} is neither, it tries to convert the argument into a string. And what is the string representation of the object {} (or any other plain object)?
If you've ever had to deal with JavaScript, you've probably seen a monstrosity in the form of [object Object]. That's the return value of the toString function baked into Object.prototype, which every plain object inherits. You can override the function in your own objects, of course, or use something like JSON.stringify. But if you use the purest form of toString conversion on a plain object, you will get the string '[object Object]'.
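For illustration (the custom toString object here is just a made-up example):

```javascript
const plain = String({});                             // '[object Object]'
const readable = JSON.stringify({ wtf: 'is_this' });  // '{"wtf":"is_this"}'
// You can override toString in your own objects:
const custom = { toString() { return 'custom'; } };
const result = String(custom);                        // 'custom'
```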
So the above code is quite literally just this:
new RegExp('[object Object]').test('mom');
new RegExp('[object Object]').test('dad');
What makes this work is that the string '[object Object]' happens to be a valid regular expression: the brackets form a character class, meaning "match any one of these characters". And as you can see, "mom" contains the letter "o", so that is a successful match resulting in true. "dad", on the other hand, does not contain any character from the class, and therefore the result is false.
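Spelled out:

```javascript
// RegExp stringifies its argument, so the pattern becomes a character class
// matching any one of: o, b, j, e, c, t, O, or a space.
const pattern = new RegExp({}).source;   // '[object Object]'
const mom = new RegExp({}).test('mom');  // true, 'mom' contains an 'o'
const dad = new RegExp({}).test('dad');  // false, no 'd' or 'a' in the class
```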
So yeah, it makes sense. Kind of. From a really skewed, clinically insane sort of view, it all makes sense. Now you can see why JavaScript has its reputation as a weird, quirky language, even though there are usually reasons for why we get what we get.
Anyway, if you want to test your knowledge of JavaScript's quirkiness, I recommend this simple quiz with 25 questions: https://jsisweird.com
I only got 14/25 on my first try but now I'm able to pretty much ace it. Can you?