I've always wondered why jQuery returns a truthy value when you search for elements with a selector that matches nothing in the DOM. Take this example using an id selector:

<div id="one">one</div>

console.log( !!$('#one') ) // prints true
console.log( !!$('#two') ) // is also true! (empty jQuery object)
console.log( !!document.getElementById('two') ) // prints false

You can use !!$('#two').length, since length === 0 when the object is empty, but it would seem more logical for a selector to return the element if found and null otherwise (as the native document.getElementById does).
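The underlying reason is plain JavaScript semantics rather than anything jQuery-specific: every object is truthy, and $() always returns an object. A quick sketch:

```javascript
// In JavaScript, every object is truthy -- only null, undefined,
// 0, NaN, '' and false are falsy.
console.log(!!{});              // true: an empty plain object
console.log(!![]);              // true: an empty array
console.log(!!{ length: 0 });   // true: the shape of an empty jQuery object
console.log(!!null);            // false: what getElementById returns on a miss
```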

Consequently, for example, this kind of fallback logic doesn't work in jQuery:

var div = $('#two') || $('<div id="two"></div>');
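The same fallback can still be written with an explicit .length check. A sketch of the pattern (getOrCreate is a hypothetical helper name for illustration, not a jQuery API):

```javascript
// Select an element, or create one from the given HTML if the
// selection came back empty.
function getOrCreate(selector, html) {
  var $el = $(selector);             // always truthy, even when empty...
  return $el.length ? $el : $(html); // ...so branch on .length instead
}

// var div = getOrCreate('#two', '<div id="two"></div>');
```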

Wouldn’t it be more logical if the id selector returned null when nothing is found?


Almost all jQuery functions return a jQuery object that wraps the matched DOM elements, so you can chain method calls on the result. This behaviour was chosen deliberately: if selectors returned null, jQuery code would regularly throw errors.


Now imagine $("#balloon") returned null. That would mean $("#balloon").css({"color":"red"}); throws an error rather than silently doing nothing, which is usually what you want.
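The empty set is safe to chain on because jQuery methods iterate over the matched elements, so zero matches means zero iterations. A rough model of that behaviour (a toy illustration, not jQuery's actual implementation):

```javascript
// A toy "jQuery object": an array-like wrapper whose methods visit
// each matched element and return the wrapper for chaining.
function FakeJQuery(elements) {
  this.length = elements.length;
  this.elements = elements;
}
FakeJQuery.prototype.css = function (styles) {
  for (var i = 0; i < this.length; i++) {
    Object.assign(this.elements[i].style, styles);
  }
  return this; // chainable even when this.length === 0
};

var empty = new FakeJQuery([]);
empty.css({ color: 'red' }).css({ border: 'none' }); // no error, no effect
```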

Hence, you just have to check .length (or the older .size(), which was deprecated in jQuery 1.8 and removed in 3.0).

  1. Stack Overflow: Why does $(‘#id’) return true if id doesn’t exist?