The window.onload Problem (Still)
The goal of unobtrusive JavaScript programming is to separate the JavaScript behavior from the HTML content, analogous to the goal of unobtrusive CSS design: separating the CSS presentation from the HTML content. Separation of presentation and content has been possible for years, but one wrinkle stands in the way of completely separating the behavior. This article reviews previously suggested techniques to enable this separation and the strengths and weaknesses of each. Some enhancements to the Yahoo! UI polling technique are introduced. A new technique using global listeners during the page loading phase is also presented.
The Problem
Suppose we have the following simple HTML document.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
</head>
<body>
<h1>Search Engines</h1>
<ul style="list-style-type:square;">
<li onclick="alert('Google');">Google</li>
<li onclick="alert('Yahoo!');">Yahoo!</li>
</ul>
<p><img src="hawaii.jpg" alt="hawaii"></p>
</body>
</html>
In the document above we can see that the HTML, CSS and JavaScript are mixed together. This means that the content author, presentation designer and behavior programmer have to work on the same document. The page lacks modularity and the three collaborators will be in each other's way constantly. And what socially capable designer wants to deal with the gruff JavaScript programmer more than absolutely necessary?
We would like to separate the content, presentation, and behavior. Separating the content and presentation is possible with an external CSS stylesheet. However, separating the behavior with an external JavaScript file is difficult. For this discussion our goal is to remove the onclick, onmouseover, etc. attributes from HTML elements. If we can do that then we have achieved our goal of unobtrusive JavaScript. (We will see that trying to do more complex page manipulation, like reordering elements, will cause greater problems.)
The user experience of the above page is what we want and that is important to remember. Based on how browsers have been implemented, from the moment the page is visible the user can click on one of the list items and the alert will be shown.
window.onload and The Wrinkle
The previous example shows that we don't need to separate the three concerns, but if we want to we can. We can create three files, one each for content, presentation and behavior.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
<link href="presentation.css" rel="stylesheet" type="text/css">
<script src="behavior.js" type="text/javascript"></script>
</head>
<body>
<h1>Search Engines</h1>
<ul>
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul>
<p><img src="hawaii.jpg" alt="hawaii"></p>
</body>
</html>
presentation.css
ul {
list-style-type:square;
}
behavior.js
window.onload = function() {
document.getElementById('google').onclick = function() {alert('Google');};
document.getElementById('yahoo').onclick = function() {alert('Yahoo!');};
}
That is certainly nice separation of concerns. As long as the content author provides enough markup hooks (e.g. id and class attributes) then the CSS designer and JavaScript programmer can hook into the page where needed. The CSS is applied automatically by the browser. The window.onload event allows the programmer to enliven the page by attaching event handlers to the necessary elements. (I will refer to this attaching of event handlers and enabling of the page's behavior as enlivening the elements or the page.) This enlivenment occurs when the page has finished loading, parsing and rendering and the window.onload event fires. And that last "when" is where the wrinkle enters.
The hawaii.jpg image happens to be very big and the window.onload event doesn't fire until after the image loads. Unfortunately the text for the two search engines will be visible to the user for a period of time before window.onload. On many real web pages this period could commonly be anywhere between one and ten seconds or even more. This is enough time for the user to evaluate the page, and if the user clicks one of the search engines during this period the alert will not show. In order to achieve the modularity of the three separate concerns and gain all the benefits of unobtrusive JavaScript, we have degraded the user experience too much. We need a technique so we can enliven elements before the user has a chance to interact with the page. If we can't find a satisfactory solution then we must revert to the mixed version with HTML event attributes for the sake of the user experience.
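To get a feel for the size of this window of dead time on a particular page, a rough measurement can be made by recording the time when a script in the head is parsed and again when window.onload fires. The following is only a sketch for experimentation and is not part of any technique discussed below.
// Sketch: measure how long the page sits visible but not yet enlivened.
// The time recorded here is when this head script is parsed, which is an
// upper bound on how early the content could have become visible.
var pageStartTime = new Date().getTime();
window.onload = function() {
  var delay = new Date().getTime() - pageStartTime;
  alert('window.onload fired ' + delay + ' ms after the head script was parsed.');
};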
Bottom Script
This is an old technique that is robust but is a compromise that doesn't satisfy the unobtrusive JavaScript zealots. Ignoring the presentation concerns, we can almost separate all the JavaScript from the HTML page by using a script element at the bottom of the document's body to initiate page enlivenment. The idea is that by the time the browser parses this final script element and runs the contained JavaScript, the elements that precede this final script element will have been parsed and will be available as part of the DOM. This assumption is based on de facto standard browser behavior and may not satisfy some developers following the letter of the specifications. Many developers do depend on this behavior working in many scripts.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
<script src="behavior.js" type="text/javascript"></script>
</head>
<body>
<h1>Search Engines</h1>
<ul>
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul>
<p><img src="hawaii.jpg" alt="hawaii"></p>
<script type="text/javascript">init();</script>
</body>
</html>
behavior.js
function init() {
document.getElementById('google').onclick = function() {alert('Google');};
document.getElementById('yahoo').onclick = function() {alert('Yahoo!');};
}
It is even difficult to see the seven characters of JavaScript at the bottom of the HTML page. This is relatively good separation, and developing for the web is frequently about accepting small compromises to big philosophical ideals. One problem is that even after developing your thousandth page like this you would still kick yourself every time you forget to include the bottom script to call init(). At least this omission would be very apparent during development testing and not go into production undetected.
If you really want to get around the potential to forget the bottom script then you could do the following. However, if you do forget, then the enlivenment will happen on the window.onload event, which may go undetected during testing.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
<script src="behavior.js" type="text/javascript"></script>
</head>
<body>
<h1>Search Engines</h1>
<ul>
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul>
<p><img src="hawaii.jpg" alt="hawaii"></p>
<script type="text/javascript">window.onload();</script>
</body>
</html>
behavior.js
var alreadyRun = false;
window.onload = function() {
if (alreadyRun) {return;}
alreadyRun = true;
document.getElementById('google').onclick = function() {alert('Google');}
document.getElementById('yahoo').onclick = function() {alert('Yahoo!');}
}
In the above example, we protect against the enlivenment being applied twice to the page. This is a recurrent theme we will see later in this discussion.
Note that it is not necessary that the bottom script be at the absolute bottom of the page. The script just has to come after the last element which the JavaScript will enliven. This could make a difference if only some elements at the top of a very long page need enlivenment. In such a long page the browser may parse and display the first chunk of HTML it receives while it waits for more chunks to arrive from the server. This leaves us back with the old wrinkle. In tests, even very long pages over reasonable connections don't seem to expose this flaw frequently.
You may be a satisfied reader at this point and this technique may serve you very well. But can we remove that little bit of JavaScript from the HTML page for completely pure separation? People have tried, and the remainder of this article looks at techniques to do just that and more. When we discuss document.readyState in the next section we will see that even this old bottom script technique is potentially flawed. It's worth your while to continue reading.
Dean Edwards and browser sniffing
On his blog Dean Edwards posted the following script to allow early page enlivenment (i.e. earlier than window.onload allows). This script and a detailed analysis are important because prominent libraries like jQuery, MooTools and Low Pro for Prototype include this code.
function init() {
// quit if this function has already been called
if (arguments.callee.done) return;
// flag this function so we don't do the same thing twice
arguments.callee.done = true;
// kill the timer
if (_timer) clearInterval(_timer);
// do stuff
};
/* for Mozilla/Opera9 */
if (document.addEventListener) {
document.addEventListener("DOMContentLoaded", init, false);
}
/* for Internet Explorer */
/*@cc_on @*/
/*@if (@_win32)
document.write("<script id=__ie_onload defer src=javascript:void(0)><\/script>");
var script = document.getElementById("__ie_onload");
script.onreadystatechange = function() {
if (this.readyState == "complete") {
init(); // call the onload handler
}
};
/*@end @*/
/* for Safari */
if (/WebKit/i.test(navigator.userAgent)) { // sniff
var _timer = setInterval(function() {
if (/loaded|complete/.test(document.readyState)) {
init(); // call the onload handler
}
}, 10);
}
/* for other browsers */
window.onload = init;
The init() function is to be run as soon as the DOM is completely constructed and available. The last line uses window.onload and is the most rock-solid fallback we could hope for; it will eventually run barring any syntax or runtime errors.
This script works in current (circa January 2007) versions of Mozilla (e.g. Firefox), Internet Explorer, Webkit (e.g. Safari) and Opera browsers. These are the big four browsers and the argument is that if you get all of these browsers on board, at least for now, then you've done well enough. The next few sections look at the tricks used in this script to enliven the page early and the problems with these tricks.
Mozilla, Opera and DOMContentLoaded
Since Netscape Navigator 7 and Firefox 1, the Mozilla-based browsers have provided the DOMContentLoaded event. Opera 9 has also added this event. This is exactly the event we are after: it tells us when the DOM is complete and ready for us. Because it is an event, we can attach multiple handlers with document.addEventListener() and easily maintain good modularity in our code. We will remember this event for the new solution at the end of this article.
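As a minimal sketch, assuming a browser that supports DOMContentLoaded, the two list items from the earlier example could be enlivened by two independently registered handlers rather than one monolithic window.onload function.
// Sketch: two separately registered DOMContentLoaded handlers. Each module
// can register its own handler without clobbering the others.
if (document.addEventListener) {
  document.addEventListener('DOMContentLoaded', function() {
    document.getElementById('google').onclick = function() {alert('Google');};
  }, false);
  document.addEventListener('DOMContentLoaded', function() {
    document.getElementById('yahoo').onclick = function() {alert('Yahoo!');};
  }, false);
}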
Internet Explorer and the defer attribute
Internet Explorer is one of the few browsers that recognizes the defer attribute of a script tag. The specification is very specific about what this attribute indicates.
When set, this boolean attribute provides a hint to the user agent that the script is not going to generate any document content (e.g., no "document.write" in javascript) and thus, the user agent can continue parsing and rendering.
The word "hint" is deliberately vague. It tells developers not to depend on a browser's reaction to defer. The specification does not say that the script must be deferred until after the DOM is complete and available. The hint is likely to let the browser optimize page display however the browser wishes to do so.
David Flanagan wisely points out that the specification says "All SCRIPT elements are evaluated in order as the document is loaded." This means that if a first script element with defer is followed by a second script element without defer, then the first script must execute before the second, and so it cannot wait until the DOM is complete. This further indicates that the defer hint is for optimization of page rendering and that later scripts should be able to use code in earlier scripts. This is a strong argument for why a deferred script may not wait until after the DOM is complete, and so is not an accurate indicator that the DOM is ready for us to enliven the page.
Most importantly, a particular behavior in response to defer in one version of one browser should not be assumed in another browser or even another version of the same browser. The implementation of the defer attribute has not been standardized and any reliance on defer to determine when the DOM is complete should be avoided.
Internet Explorer and document.readyState
Internet Explorer introduced the non-standard document.readyState property with the following states.
| State | Meaning |
| --- | --- |
| uninitialized | Object is not initialized with data. |
| loading | Object is loading its data. |
| loaded | Object has finished loading its data. |
| interactive | User can interact with the object even though it is not fully loaded. (read-only) |
| complete | Object is completely initialized. |
Internet Explorer 6 reports a state of loading while the document is parsing, constructing the DOM and loading any images. I made a small experiment polling document.readyState for either the loaded or complete states. Comparing the results with similar experiments for the bottom script and the defer technique makes it clear that document.readyState reports neither the loaded nor the complete state until after the images are loaded. This is very late and around the same time as the window.onload event.
Internet Explorer 7 reports a state of interactive while the document is parsing, constructing the DOM and loading any images. This leaves only the complete state as the first state that guarantees the DOM is ready. Unfortunately this state is only reported after images are loaded, which is very late; experiments show that the complete state is reported about the time of the window.onload event.
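An experiment of the kind described above can be sketched as follows. The exact reporting mechanism (the document title here) is incidental; the point is simply to compare when loaded or complete first appears with when window.onload fires.
// Sketch: compare when readyState reports loaded/complete with window.onload.
var experimentStart = new Date().getTime();
var readyStatePoller = setInterval(function() {
  if (/loaded|complete/.test(document.readyState)) {
    clearInterval(readyStatePoller);
    document.title = 'readyState ready after ' +
        (new Date().getTime() - experimentStart) + ' ms';
  }
}, 10);
window.onload = function() {
  document.title += ', onload after ' +
      (new Date().getTime() - experimentStart) + ' ms';
};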
In the comments below, Eric Gerds mentions that there are problems using the Edwards script and frames together.
So for more than one reason, using document.readyState in Internet Explorer doesn't help us achieve our goal. But we can learn something very important from document.readyState.
Internet Explorer 7 seems to be in the interactive state until the window.onload event. The interactive state is a read-only state. That means that if we try to do anything with the DOM or its objects before the window.onload event we are definitely taking a risk of failure. This is a sad realization because it means a robust technique for early enlivenment is not possible across the big four browsers.
The argument gets really messy here. In the comments below, Jack Slocum and The Doctor What have both reported that trying to make large DOM manipulations during this interactive state and before window.onload can cause problems. However, they have reported that somehow the Edwards script and its use of defer finds a time in the Internet Explorer 7 page loading process where these large manipulations will not cause an error. This is a convenient fluke and not something upon which I would rely. The goal of this article is to achieve unobtrusive JavaScript which, at a minimum, amounts to just removing the onclick, onmouseover, etc. attributes from the HTML. No one has reported problems adding these properties at any point during the interactive state. So we continue...
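To make that distinction concrete, the sketch below separates the two kinds of work. The enlivenEarly() function stands in for whichever early hook is in use (bottom script, DOMContentLoaded, defer or polling), and the inserted list item is purely hypothetical.
// Sketch: limit early code to handler assignment; save structural
// DOM manipulation for window.onload.
function enlivenEarly() {
  // Assigning handler properties has not been reported to fail during
  // Internet Explorer 7's read-only "interactive" state.
  document.getElementById('google').onclick = function() {alert('Google');};
  document.getElementById('yahoo').onclick = function() {alert('Yahoo!');};
}
window.onload = function() {
  // Inserting, moving or removing elements is only known to be safe
  // once window.onload has fired.
  var list = document.getElementsByTagName('ul')[0];
  var item = document.createElement('li');       // hypothetical new item
  item.appendChild(document.createTextNode('MSN'));
  list.appendChild(item);
};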
Webkit and DOMContentLoaded
Webkit does not implement the non-standard DOMContentLoaded event. Simon Willison created a ticket for DOMContentLoaded and a patch has been sitting on the Webkit trac for a long time. The ticket hasn't even received any votes! (People are probably saving their votes for FCKEditor, though both tickets are worthy.) If you are interested in Webkit implementing this, let them know by voting or visiting the #webkit IRC channel and talking with the developers.
One Webkit developer told me they are more likely to support the script tag's defer attribute, and that this could be used to help with the window.onload problem. This was disturbing news because of the intent of the two approaches. The defer attribute is intended to give the browser a hint about a possible page loading optimization opportunity. Using some apparent browser behavior that defers until after the DOM is ready is a hack at best. The intent of DOMContentLoaded is to provide developers a hook into exactly the correct time, and if a browser implements this event then it is very likely trustworthy.
Webkit and document.readyState
Webkit is an open source software project and produces the engine used by Safari and other browsers. Webkit has an implementation of Internet Explorer's document.readyState. Below is the Webkit document.readyState code (revision 19086; Jan 26, 2007).
String Document::readyState() const
{
    if (Frame* f = frame()) {
        if (f->loader()->isComplete())
            return "complete";
        if (parsing())
            return "loading";
        return "loaded";
        // FIXME: What does "interactive" mean?
        // FIXME: Missing support for "uninitialized".
    }
    return String();
}
Luckily the code is quite readable, and a few important details are clear.
As it stands now, the loaded state is reported when the DOM is completely parsed and available. Later, when all the images are loaded into the frame, the complete state is reported. So for now, for Webkit only, we can poll document.readyState and if either loaded or complete is reported we know the DOM is ready. For now. More about that a little later.
We see that not all possible readyState values are implemented. Relevant to this discussion, the interactive state is never returned. Internet Explorer 7 does return interactive at times, so Webkit and Internet Explorer are incompatible. This incompatibility will be important a couple of paragraphs further down.
The meanings of the implemented states are a bit confused. When the document is parsing and building the DOM it will return loading when we might expect interactive. The Webkit developers explained to me that Webkit parses the HTML as it arrives at the browser in chunks. That means Webkit isn't going sequentially through a loading state and then a parsing state. For documents that require multiple chunks, there is a period of time when Webkit is simultaneously loading and parsing. In this time, Webkit currently reports that it is in the loading state, which is legitimate; however, Internet Explorer 7 reports interactive during this parsing time.
Most importantly, the Webkit document.readyState code will likely change in the future. The Webkit developers tell me that since document.readyState is an Internet Explorer extension, Webkit should match Internet Explorer's behavior. Webkit may change to report interactive while the document is being parsed. This would mean only the complete state could indicate that the DOM is ready, and that is after images load and therefore too late. It is unclear what will happen with this code in the future, and so depending on the document.readyState value reported by Webkit is fragile and worrisome for the longevity of any JavaScript code.
Which browser features to use?
If we want to program in the unobtrusive JavaScript style then we know we can use window.onload as our fallback. We know we can enliven the page sooner in recent Mozilla and Opera browsers using DOMContentLoaded. The Internet Explorer defer technique must break if the browser is to comply with current specifications or if Microsoft exercises its prerogative and changes the implementation of defer. The document.readyState technique will likely break in future versions of Webkit to make it compatible with Internet Explorer 7. Unfortunately the tricks to get Internet Explorer and Webkit on board are too fragile for use, and there are still other browsers to consider.
We have also learned from the document.readyState property's read-only interactive state that early enlivenment is risky no matter what technique is used.
The script Dean Edwards posted does allow for complete separation of JavaScript from HTML. Unfortunately the potential negative tradeoffs are quite large in the long run for this tiny bit of separation purity in comparison to the bottom script technique.
DOM Polling
The Yahoo! UI event library has two functions to help implement unobtrusive JavaScript, and these functions cleverly avoid all the assumptions and problems of the bottom script and the Dean Edwards script. Instead of trying to determine when the entire DOM is ready, the YUI library polls the DOM with document.getElementById() until a particular element is found or the window.onload event fires. When document.getElementById() does finally return the element, then the element is clearly in the DOM and it should be relatively safe to add event listeners to it. This polling is a great idea. The concept can work in browsers as old as Internet Explorer 4, Netscape Navigator 4 and Opera 5, although it is not implemented in YUI to work in such old browsers (which is completely understandable). This polling technique seems to be robust when used only to add event listeners, but other DOM manipulations are questionable (as reported by Jack Slocum and The Doctor What).
onAvailable()
The following example is similar to YAHOO.util.Event.onAvailable() but without the extra features of the Yahoo! UI library.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
<script src="onAvailable.js" type="text/javascript"></script>
<script src="behavior.js" type="text/javascript"></script>
</head>
<body>
<h1>Search Engines</h1>
<div><ul id="engines">
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul></div>
</body>
</html>
onAvailable.js
var stack = [],
interval,
loaded; // has window.onload fired?
function doPoll() {
var notFound = [];
for (var i=0; i<stack.length; i++) {
if (document.getElementById(stack[i].id)) {
stack[i].callback();
} else {
notFound.push(stack[i]);
}
}
stack = notFound;
if (notFound.length < 1 || loaded) {
stopPolling();
}
}
function startPolling() {
if (interval) {return;}
interval = setInterval(doPoll, 10);
}
function stopPolling() {
if (!interval) {return;}
clearInterval(interval);
interval = null;
}
function onAvailable(id, callback) {
stack.push({id:id, callback:callback});
startPolling();
}
window.onload = function() {
loaded = true;
doPoll();
};
behavior.js
onAvailable('google', function(){
document.getElementById('google').onclick = function() {alert(this.id);};
});
onAvailable('yahoo', function(){
document.getElementById('yahoo').onclick = function() {alert(this.id);};
});
In the above example the JavaScript has been broken into two files to show the library-type code in onAvailable.js and the use of this library code for the particular page in behavior.js.
The code in behavior.js is repetitive, and if the list of search engines in the HTML page grows any longer than two, this repetition would definitely be intolerable. Another problem could be that the page may be created dynamically with a variable number of search engines in the list. Perhaps the li elements have a class attribute and only those elements should be enlivened. The library needs to be able to handle these situations cleanly, and the Yahoo! UI library does.
onContentAvailable()
The YUI YAHOO.util.Event.onContentReady() function solves the problems with onAvailable() in some situations. The onContentAvailable() function is similar to onAvailable() but onContentAvailable() declares an element available when its nextSibling is also found in the DOM. If a nextSibling element is not found then the element is declared available when the window.onload event fires.
Why wait for nextSibling? Suppose in the previous example that we poll the DOM for the unordered list engines element. When this element is found in the DOM it is not necessarily true that all of its child elements are also in the DOM. The HTML parser may have only parsed the first element in the list and not the rest of the list. If the engines element has a nextSibling existent in the DOM then it is safe to assume that the HTML parser has finished creating the entire list and that all of the list's elements are also available in the DOM. It would take an unreasonably huge amount of paranoia to suspect that browser parsing and DOM construction works any other way.
In the above example there unfortunately isn't an element after the engines element to act as nextSibling. The closing </ul> and </div> tags don't even have a space between them. In this case, onContentAvailable() must wait until the window.onload event because there is no way to know if the DOM is complete or if the parser just hasn't reached the next element. To help the onContentAvailable() function enliven the page early we must add an element after the list.
We might be able to solve the above problem by adding a space between the closing tags. This space will be a text node and serve as the necessary nextSibling. However, some browsers may not add a text node to the DOM for unnecessary whitespace between tags. Also, if we are using HTML minimization to remove unnecessary whitespace and save bandwidth, then we might need to add a dummy element after the list like this
<div><ul id="engines">
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul><div></div></div>
We don't want to pepper the page with these dummy elements. We can avoid this by adding one dummy element at the very bottom of the document. When polling, if the element of interest doesn't have a nextSibling we can walk up the element's ancestors to see if any of the ancestors has a nextSibling. With the one dummy element at the end of the page, we are assured by the time the DOM has finished parsing that at least one of the ancestors has a nextSibling and we can conclude that the content of all elements is available. With the walk up the tree it is highly likely that at least one of the ancestors naturally has a nextSibling, and it would be a rare case where this dummy element is genuinely needed. The Yahoo! UI event library (currently version 12.2) does not walk up the ancestors. The following example also adds the use of the DOMContentLoaded event as an earlier fallback than the window.onload event.
Here is an example of all these concepts put together for onContentAvailable().
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
<script src="onContentAvailable.js" type="text/javascript"></script>
<script src="behavior.js" type="text/javascript"></script>
</head>
<body>
<h1>Search Engines</h1>
<ul id="engines">
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul>
<div></div>
</body>
</html>
onContentAvailable.js
var stack = [],
interval,
loaded; // has DOMContentLoaded or window.onload fired
// does the element or one of its ancestors have a nextSibling?
function hasNextSibling(el) {
return el.nextSibling ||
(el.parentNode && hasNextSibling(el.parentNode));
}
function doPoll() {
var notFound = [];
for (var i=0; i<stack.length; i++) {
var el = document.getElementById(stack[i].id);
if (el && (hasNextSibling(el) || loaded)) {
stack[i].callback();
} else {
notFound.push(stack[i]);
}
}
stack = notFound;
if (notFound.length < 1 || loaded) {
stopPolling();
}
}
function startPolling() {
if (interval) {return;}
interval = setInterval(doPoll, 10);
}
function stopPolling() {
if (!interval) {return;}
clearInterval(interval);
interval = null;
}
function onContentAvailable(id, callback) {
stack.push({id:id, callback:callback});
startPolling();
}
function lastPoll() {
if (loaded) {return;}
loaded = true;
doPoll();
}
// Force one poll immediately when the document DOMContentLoaded event fires.
// This may be sooner than the next scheduled poll.
// Can't add this listener in at least Firefox 2 through DOM0 property assignment.
if (document.addEventListener) {
document.addEventListener('DOMContentLoaded', lastPoll, false);
} else if (document.attachEvent) {
// optimistic that one day Internet Explorer will support this event
document.attachEvent('onDOMContentLoaded', lastPoll);
}
// Force one poll immediately when window.onload fires. For some pages, if
// the browser doesn't support DOMContentLoaded, the window.onload event
// may be sooner than the next scheduled poll.
window.onload = lastPoll;
behavior.js
onContentAvailable('engines', function(){
var list = document.getElementById('engines').childNodes;
for (var i=0; i<list.length; i++) {
list[i].onclick = function() {alert(this.id);};
}
});
Compared with onAvailable(), using onContentAvailable() is much better for this list example. Both of these functions are useful, as onAvailable() is more efficient for finding just one element. The behavior.js code is not repetitive and can handle a variable number of list elements. Also, the polling only has to poll for one element instead of each element in the list. Some of this small time saving can be used for the walk up the ancestors for nextSibling if necessary. If we have a long document and are polling for many elements then the polling could bog down the rendering process. This is very unlikely, but extensive walks could be stopped with a few dummy elements in strategic locations. We can probably live with this solution for almost all pages without any dummy elements. To achieve the goals of this article this seems to be a darn good solution!
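Returning to the earlier remark about a class attribute, the same callback style can enliven only the marked list items. This is just a sketch: the class name searchengine is made up for illustration and the regular expression is the simplest test that would do.
// Sketch: enliven only the list items carrying a (hypothetical) class.
onContentAvailable('engines', function() {
  var items = document.getElementById('engines').getElementsByTagName('li');
  for (var i = 0; i < items.length; i++) {
    if (/\bsearchengine\b/.test(items[i].className)) {
      items[i].onclick = function() {alert(this.id);};
    }
  }
});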
As reported by Jack Slocum and The Doctor What in the comments, just because an element and its preceding elements are available in the DOM it doesn't mean you can do as you please with them. This polling technique may find the elements before they are able to be moved in the DOM or otherwise manipulated.
Global Delegation
In the comments Jesse Ruderman correctly points out that
All of these solutions leave open the possibility that someone could click an element and have nothing happen. onAvailable / onContentAvailable come close but use polling, which slows things down if there are many elements.
How about taking advantage of event bubbling instead of trying to attach an event hander to the element in time? That is, add a global onclick handler and look to see if event.target or event.originalTarget is one of the elements you're interested in. For hover effects you can do the same with onmouseover.
When I first read Jesse's comment, I was not thinking about the window.onload problem as simply a means of attaching event handlers to elements. I was also thinking about making DOM manipulations by reordering elements or inserting new elements. I know now that the DOM in Internet Explorer 7 is officially in a read-only state and there is no guarantee that DOM manipulations will be successful until the window.onload event. If we are minimal about the definition of "unobtrusive JavaScript" then we only have to attach event handlers to achieve the required separation of content and behavior. We don't have to worry about DOM manipulations. Taking Jesse's suggestion, where can we go? For this discussion I will focus on click events.
Testing browsers back to Internet Explorer 4 and Netscape 4, it seems all of them support the window.document.onclick event. In the following example I have used this as the "global" event.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Search Engines</title>
<script src="globalListener.js" type="text/javascript"></script>
<script src="behavior.js" type="text/javascript"></script>
</head>
<body>
<h1>Search Engines</h1>
<ul id="engines">
<li id="google">Google</li>
<li id="yahoo">Yahoo!</li>
</ul>
<p><img src="hawaii.jpg" alt="hawaii"></p>
</body>
</html>
globalListener.js
var clickHandlers = {};
// API for attaching click handlers by element id
function attachClickListener(id, handler) {
clickHandlers[id] = handler;
}
window.document.onclick = function(event) {
// perhaps not necessary according to David Flanagan's
// recent blog articles saying IE does send the event as
// the first argument to event handlers
event = event || window.event;
// get the event target: DOM vs IE
var targ = event.target || event.srcElement;
// walk up the DOM starting with the target element
while (targ) {
// if the element has a handler then call the handler
// with the correct scope for keyword "this" and send
// the handler the event.
if (targ.id && clickHandlers[targ.id]) {
clickHandlers[targ.id].call(targ, event);
}
targ = targ.parentNode;
}
};
// When window.onload fires we can attach the event handlers
// to the appropriate elements and let the browser's built-in
// event system handle events more efficiently.
// This is optional. See comment below about this.
window.onload = function() {
// attach the click event handlers to the appropriate elements
for (var p in clickHandlers) {
document.getElementById(p).onclick = clickHandlers[p];
}
// stop using the global event listener we needed during loading.
window.document.onclick = null;
};
behavior.js
attachClickListener('google', function(event) {alert(this.id);});
attachClickListener('yahoo', function(event) {alert(this.id);});
attachClickListener('engines', function(event) {alert(this.id);});
For clarity, this example is limited to a single click event handler per element. Also, elements can only be specified by their id. Both of these limitations could be removed. For the latter, the first argument to attachClickListener() could be a CSS selector string, which would make attaching click events as easy as styling a page with CSS.
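A sketch of that selector idea follows. It is not wired into globalListener.js above; only bare class selectors (e.g. ".engine") are handled so the lookup logic stays visible, and the matching uses a simple regular expression rather than a real selector engine.
// Sketch: a class-selector flavored variant of attachClickListener().
var selectorClickHandlers = [];
function attachClickListenerBySelector(selector, handler) {
  // Only ".className" selectors are supported in this sketch.
  selectorClickHandlers.push({className: selector.substring(1), handler: handler});
}
window.document.onclick = function(event) {
  event = event || window.event;
  var targ = event.target || event.srcElement;
  // Walk up from the target, calling any handler whose class matches.
  while (targ) {
    for (var i = 0; i < selectorClickHandlers.length; i++) {
      var entry = selectorClickHandlers[i];
      if (targ.className &&
          new RegExp('\\b' + entry.className + '\\b').test(targ.className)) {
        entry.handler.call(targ, event);
      }
    }
    targ = targ.parentNode;
  }
};
// Hypothetical usage, assuming class="engine" on each list item:
// attachClickListenerBySelector('.engine', function(event) {alert(this.id);});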
In the globalListener.js example above, when the window.onload event occurs the code moves the event handlers to the individual elements and uses the browser's built-in event handling system. In an email to me, Jesse pointed out that "for a large page, O(size of page) work onload might cause temporary unresponsiveness when the page finishes loading. Leaving it as O(depth of element) work upon clicking won't be noticeable at all." I think that this really depends on the particular situation: how many event handlers there are, how deeply nested the elements are in the DOM and how the DOM will be manipulated during the life of the page. It would also depend on how expensive the handler lookup is with the global listener system. If there are many handlers and CSS selectors are used as I described, then that is a lot of looping and analysis with each event handler lookup. This could be quite noticeable with the frequently active mousemove/mouseout/mouseover listeners. If the handlers are actually attached to elements at window.onload then I think the browser would do the handler lookup much more efficiently.
There is an advantage to rolling your own event system with the global listener and sticking with it. The CSS selectors could be used for the entire life of the page, which would be advantageous as elements in a list are added and removed. Elements added to the list will automatically have the event handlers. It is the highest level of event delegation.
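For example, with the global listener kept for the life of the page (and the optional window.onload hand-off above left out), a handler can be registered for an element that does not exist yet. The 'ask' id and list item below are made up for illustration.
// Register a handler before the element exists. The global document.onclick
// listener looks the handler up by id at click time, so nothing needs to be
// re-attached when the element appears.
attachClickListener('ask', function(event) {alert(this.id);});
// Called at some later point in the life of the page (after the list exists),
// for example from another click handler or a timer.
function addAskEngine() {
  var newItem = document.createElement('li');
  newItem.id = 'ask';
  newItem.appendChild(document.createTextNode('Ask'));
  document.getElementById('engines').appendChild(newItem);
}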
Extending this global listener system to other types of events could be difficult in some cases. The mousemove, mouseover and mouseout events could probably be handled using a global mousemove listener, recording the current target and comparing it to the previous target. Focus events could be tricky. If a user tabs to focus a certain form input, I'm not sure what could be done to catch the input's onfocus event. None of window.onfocus, window.document.onfocus and document.body.onfocus fire, and so they could not be used as global listeners. Perhaps a global keypress listener could be used, but then how would we determine which element in the page is focused?
Summary
Event handler attributes in the HTML are the most robust but do not allow separation of concerns.
The read-only status of the document.readyState property's interactive state makes early enlivenment officially impossible, but there have not been reports that adding an event listener causes a problem.
The bottom script technique works cross-browser based on de facto standard browser behavior but involves a compromise in separation of concerns. You must remember to put the script element at the bottom of each page.
The Dean Edwards script allows for complete separation but is brittle when looking towards the future. Old and exotic browsers will not enliven a page until window.onload. The Edwards script does enliven the page in Internet Explorer at some magic timing where complex DOM manipulations are possible.
DOM polling is cross-browser and allows for complete separation of concerns. In extremely rare cases a dummy element can help the code know that content is available. In Internet Explorer 7, DOM polling may find elements before they can be moved in the DOM; however, adding event listeners seems to be OK.
The global listener system is the best solution in certain situations as there is no opportunity for the user to click an element before the handler is attached.
What we really need is for browser makers to realize the importance of unobtrusive JavaScript programming and to give us an official way to enliven elements as they appear. This would be analogous to how CSS selectors are used to style elements in the page based on id and class attributes.
Update February 5, 2007: Simon Willison (not Dean Edwards) created the ticket on the Webkit trac for DOMContentLoaded.
Update February 9, 2007: Based on the comments below from Jack Slocum and The Doctor What the things you can do with the DOM when polling finds an element is limited in some cases in Internet Explorer 7. I will research this more and update the article accordingly.
Update February 13, 2007: Fixed a bug in the example code for onContentAvailable() thanks to Richard Davies' comments. Also updated the onAvailable() example code to a similar style of implementation.
Update March 5, 2007: Really emphasize the importance of the read-only status of the document.readyState property's interactive state in Internet Explorer 7.
Update March 21, 2007: Change comments about frames and readyState thanks to comments by Eric Gerds.
Update March 30, 2007: Added the section on using a global event listener based on comments from Jesse Ruderman and Diego Perini.
Update April 18, 2007: Extended the discussion of Jesse Ruderman's suggestion.
Update July 25, 2007: another article about the window.onload problem.
Update August 14, 2007: There is precedent in ECMA-262 (first line of page 2) for the term "enliven" as I've used it here.
Update August 23, 2007: I've written a followup article The window.onload Problem - Really Solved! with a new solution to the window.onload problem.
Update October 19, 2007: comp.lang.javascript thread and Ajaxian post. Both will require heavy scrutiny as this is a tricky issue.
Comments
Have something to write? Comment on this article.
Daniel,
Perhaps there can be incremental improvements in Dean's script, but the sniffing will never be perfect and is a generally failed technique. For this discussion, throwing away success in old, spoofed and unknown browsers is unnecessary, so why do it? The real beauty of the polling technique is that it has worked for a long time already and will work into the future with no modification.
Well, there are reliable techniques for sniffing IE and Opera. There are techniques that are usually reliable for sniffing Safari and Firefox. (Maybe there are better techniques, I don't actually use sniffing much). When used correctly and conservatively, sniffing works.
Why throw away success? You're not - window.onload is a good fallback. Also: you're not even supporting old browsers, unknown and spoofed browsers are rare.
There are advantages to both methods. An onDocumentLoad is a more general solution and better separated from the document - sometimes you just can't use ids.
By the way, I noticed a bug:
onAvailable('google', function(){
document.getElementById('yahoo').onclick = function() {alert(this.id);};
});
Bugs happen, especially in examples. But the biggest problem here is that this is a race condition - testing could easily miss it. This suggests you should pass the element to the callback, so you get:
onAvailable('yahoo', function(element){ element.onclick = function() {alert(this.id);}; });
That way, you're less likely to get a race condition.
It also shows a disadvantage of 'onAvailable' - it can only depend on the element in question (and its children) being available. Sometimes that's fine, sometimes not.
Even if you can sniff for particular versions of IE and Webkit reliably, what do you do when the defer or readyState technique breaks? Then the whole approach is relatively lost because it doesn't work in one of the big four browsers. Like the Webkit developers said, they will likely change the Webkit readyStates to match IE7 in which case the Dean Edwards script can't work for Webkit browsers even with a successful sniff. If the defer implementation changes in Internet Explorer it is even worse because the code will try to enliven the page too early before elements are available.
For situations where you cannot use ids for particular elements, you can poll for an element with an id near the end of the document. For example a footer element. When that element appears in the DOM then the previous elements are available also.
Typo corrected. Thanks.
Interesting thoughts as always!
A thought on timing - could you simply have a dummy "end" element just before the close of BODY, and test for the availability of that element? (Maybe you covered this approach and I missed it.)
Assuming the DOM becomes available from top to bottom, ie. parsing of nested elements is finished by the time the last child node of the body is reached, the availability of a "dummy" element should indicate that all other nodes (and their children) are available, correct?
One thing to keep in mind is that Safari and Opera load CSS files in parallel with JavaScript and the DOM. This means that until (the real) window.onload(), you cannot rely on the document being fully rendered.
There is also code which can't use polling, such as my sIFR project. DOMContentLoaded and the IE hack are the way to go here. I do like your point about locking it to current versions.
One big risk for code relying on style information is when Firefox and IE start loading CSS in parallel like Safari does, without blocking JavaScript. Let's hope they're smart enough not to do this.
Scott,
You are right about polling for a dummy element at the end. I had a long section written about this called "bottom polling" but it really wasn't much new information. Equally good as a dummy element would be just polling for a footer element or something that is in your pages after the enlivened elements.
I also had a section called "marker polling" which was basically the same idea as bottom polling. In marker polling there could be a few marker elements in the document at strategic locations. When each marker is found it is assumed all previous elements are available.
Mark,
Certainly there are many situations where the CSS information shouldn't be used until window.onload. For example, this is true as element size and position changes while images arrive and change the page flow.
I don't particularly like the idea of fixing the Dean Edwards script to whichever browser versions will work with that script. That would mean the script works in old versions but not a current version of a browser. That seems like a failed solution to me.
Of course, no script is good for all occasions; however, I imagine that the number of times polling would not be possible is low and many developers in the HTML/CSS/JavaScript world wouldn't encounter these situations. I don't know about sIFR. Why can't that project use polling?
sIFR lets the user specify a selector which is resolved later. Before onload it can't be resolved, so it's impossible to know what to poll for.
Mark,
If you mean a CSS-type selector that could apply to multiple elements in the page then you can still use polling. The bottom polling like I mentioned in the reply to Scott would work and continue to work into the future without the disadvantages of the browser sniffing.
Since BODY is HEAD's nextSibling, could you put an ID on HEAD and just get that byId?
Steve,
With what you are suggesting you would know that the entire head is finished parsing and the body has started parsing. You wouldn't know the body is finished parsing.
In the way you are thinking, you could get BODY by ID and look for its nextSibling to know that the entire body is parsed and ready. Unfortunately BODY doesn't have a nextSibling unless the whitespace after the BODY element works as a nextSibling. I don't know if this is safe in all browsers. It seems a little iffy to me.
Unfortunately, the onContentAvailable and onAvailable methods don't work reliably in IE7. :-(
I have some cases where IE7 has parts of the DOM available (ie, some of the methods work), but there isn't enough of the DOM object to work correctly. If you make changes to the DOM in the "not ready" state, then you get garbage on the display.
This is only likely to happen when the page is complex (ie, has 10 images and is doing some fancy rendering), but it still happens.
I have found a lot of stupid bugs in IE7. I'm not sure how it's an improvement for web developers.
Ciao!
Doc,
Please post links to these test cases. I haven't encountered any problems yet with adding event handlers to DOM elements with the onContentAvailable or onAvailable functions.
Hi Peter,
The Doctor is correct, here's my post from Ajaxian:
The final solution listed "onAvailable" and "onContentReady" are not the best solutions. I use YUI extensively, and I have abandoned both. Why?
The reason is that in IE, polling can catch the document in an interactive state (read-only), which causes IE to throw very strange errors when you try to modify the DOM. Unfortunately testing the readyState of the document won't solve the problem, because by the time its readyState is complete, onload has fired.
Dean Edwards script defer solution is still the best for IE as it gets the document at a point between interactive and complete, something that you can't reliably do without it.
It's a very difficult to debug problem because it happens here and there. The polling has to catch the document at just the right time. IE puts up some cryptic message (I forget what it is, something about aborting) and then it goes to the built in 404 page.
Webkit does not implement the non-standard DOMContentLoaded event. Dean Edwards created a ticket for DOMContentLoaded and a patch has been sitting on the Webkit trac for a long time. The ticket hasn't even received any votes! (People are probably saving their votes for FCKEditor though both tickets are worthy.)
This is rather misleading. Simon Willison, not Dean Edwards, filed the bug report against WebKit asking for DOMContentLoaded support. A month or so later Daniel Peebles submitted a patch which adds such support, but the patch was not accepted into the source tree due to various issues which are mentioned in the bug report. Since then the author has not updated or resubmitted the patch.
It's worth mentioning that voting isn't something we have used much in WebKit's bug database. We turned it on recently to see if it would be used by community members to suggest which bugs they would value being fixed, so a lack of votes on a bug doesn't really equate to a lack of interest.
The patch to add DOMContentLoaded support to WebKit looks relatively complete, and I doubt it would take much work to bring up to speed with the current state of development WebKit. It would be a good place to start for someone interested in getting involved with WebKit development.
Jack,
Thanks for the info. It is disturbing. The Webkit part of Dean's script is too shaky for long term use and the defer part is against the standards. And now apparently the polling doesn't work sometimes in IE7. Back to square one? The whole point of this investigation was to get to the bottom of the issue and determine if there is a robust solution.
Mark,
Thanks for the correction about Simon. I hope that someone does get the DOMContentLoaded event working in Webkit.
All of these solutions leave open the possibility that someone could click an element and have nothing happen. onAvailable / onContentAvailable come close but use polling, which slows things down if there are many elements.
How about taking advantage of event bubbling instead of trying to attach an event hander to the element in time? That is, add a global onclick handler and look to see if event.target or event.originalTarget is one of the elements you're interested in. For hover effects you can do the same with onmouseover.
Great article and it shows the complexity of the issue.
I still have concerns about these events firing before the DOM has fully loaded. You need more JavaScript (which slows down the page load anyway), and debugging across multiple browsers would be problematic.
Personally, I prefer to keep it much simpler and try to ensure that:
1. The page uses progressive enhancement, so it will still work without JavaScript.
2. Use concise, semantic XHTML to ensure the page is as small as possible.
3. If large images are necessary, it's possible to specify them as element backgrounds in CSS. The JS onload event will then fire before the images load.
I tried really hard to make a test case, but I haven't been able to. I have a client's page that I cannot release that causes the problem near perfectly in my office. The moment I touch any of the HTML to strip out identifying marks or try to localize the images, etc., then the page works again.
There are two failure modes for the onContentLoad and onAvailable methods in IE7: 1) The DOM is in some weird "not ready, but there" state. I'm guessing this is a readyState of interactive, but I'm not sure. The JavaScript (in my case, I am creating a histogram using divs as the bars) runs with no errors, but nothing is there. The new DOM I manipulated (the background) is somehow taken out of normal flow and overlaps other parts of the page.
2) Something goes very bad and IE7 says that the page is not available and I get a 404. This is the rarer failure mode. I didn't realize it was caused by this till I saw Jack Slocum's post. And sure enough the correlation seems to be very very high. Obviously, I can't be 100% sure since I didn't fire up a debugger and I don't have IE7 source code available.
This is just insanely irritating.
I'm willing to try any suggestions to this page you have in an attempt to make this work without the browser-quick detection. You have my email and site.
Ciao!
We, my co-worker and I, made our own contentOnLoad or dom.onload class. Maybe it is interesting to look into. We used the polling method with a little different approach.
We look if the 'tag' document.getElementsByTagName('body')[0] is available. If it is available, the DOM is available.
http://www.domnodes.org/onload.html
Peter,
In the way you are thinking, you could get BODY by ID and look for its nextSibling to know that the entire body is parsed and ready. Unfortunately BODY doesn't have a nextSibling unless the whitespace after the BODY element works as a nextSibling. I don't know if this is safe in all browsers. It seems a little iffy to me.
I think you can get BODY nextSibling element by putting a comment next to it:
<html>
<head>...</head>
<body>...</body>
<!-- body nextSibling -->
</html>
nextSibling returns the node immediately following the specified one in its parent's childNodes list, and comments are in this list. But I didn't test it, so testing is needed.
I believe I found another minor problem with your code. There doesn't seem to be anything in place to prevent doPoll() from executing the callbacks more than once. Since you've added the lastPoll() fallback method to both DOMContentLoaded and window.onload, it's possible that doPoll() could execute the callbacks up to three times!
This is pretty trivial to fix, but something important nonetheless.
Jesse,
You are right that there is still a possibility of a click and nothing happening. I ran some simple tests and I was never able to locate an element, move my mouse to that element and click it before the enlivenment. So although there is some window of opportunity for this problem it may never actually be possible to click during this time.
Using a global event listener is an interesting idea. I don't know if it is possible during the time period of interest during DOM creation. It seems like it would be an awkward way to program, and if a page were programmed this way then it would be programmed this way for its entire life and not just during page load.
Craig,
I think your three points about keeping the page simple are all consistent with the ideas of the unobtrusive JavaScript style. I don't use XHTML because Internet Explorer doesn't recognize this doctype and using XHTML can cause problems with innerHTML.
Doc,
From your post and Jack Slocum's, it sounds like even when an element is found in the DOM by polling, it doesn't mean the DOM is ready for mutation (adding and moving elements) in Internet Explorer 7. I think that it is still possible to add listeners to the DOM elements as soon as polling finds them. If adding event listeners is possible then polling does achieve the goal of enabling unobtrusive JavaScript. I'm going to try to get a reliable error in IE7 and then make some tests. Difficult problem.
Jan,
When document.getElementsByTagName('body') returns the body element, how can you be sure that all the child elements are also available and that the DOM is complete?
GreLI,
I tried the following document in Firefox 2, Safari 2 and Opera 9. In these browsers the alert was blank which I interpret to mean the trailing comment after the body close tag is not added to the DOM.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>trailing comment</title>
<script type="text/javascript">
window.onload = function() {
var node = document.getElementById('theBody');
var siblings = [];
while (node = node.nextSibling) {
siblings[siblings.length] = node.nodeType;
}
alert(siblings.join('; '));
};
</script>
</head>
<body id="theBody">
<p>asdf</p>
</body>
<!-- trailing comment -->
</html>
Richard,
When an onAvailable() handler is called it is not added to the notFound array and so is never run twice.
When I tried using the onContentAvailable.js version, my callback function was being executed multiple times (as verified using Firebug's JavaScript logging capabilities.)
lastPoll() is always called by both the DOMContentLoaded event handler (in Firefox) and the window.onload = lastPoll statement. Assuming that both of these "threads" started at about the same time, they would both enter doPoll() and proceed to execute the callback functions from stack[] because there is no lock preventing multiple threads from both executing the callbacks at the same time.
Firstly, a wonderful article Peter - it must have taken an awful lot of time and research.
But wow, all that just to appease those who push the "separation of content" philosophy?
The polling solution is probably better than others, but to me it doesn't achieve separation of content anyway, since you need to ensure an appropriate element is known by the polling function and has been included in the right place in the document. That is only semantically different to inserting a simple script element to call an init() function with a window.onload fall-back.
It would be just as easy to forget the "known element" as the footer-script. Both can be QC'd with a simple search that checks every file has them.
RobG,
Firstly, a wonderful article Peter - it must have taken an awful lot of time and research.
Almost every line of code that has gone into Fork took an equal amount of research. It seems that browser scripting is forensic investigation and the investigations are difficult with plenty of misinformation. The only way I could sort through this problem was by writing it down. The comments this post has generated have been really great.
But wow, all that just to appease those who push the "separation of content" philosophy?
If the end result of the investigation is a good one then it is all worthwhile when the technique is implemented.
The polling solution is probably better than others, but to me it doesn't achieve separation of content anyway, since you need to ensure an appropriate element is known by the polling function and has been included in the right place in the document. That is only semantically different to inserting a simple script element to call an init() function with a window.onload fall-back.
It would be just as easy to forget the "known element" as the footer-script. Both can be QC'd with a simple search that checks every file has them.
I agree with you in the situation where polling is used to find a footer or marker element that is after the element to be enlivened. However, if the polling is looking for the actual element to be enlivened then I think the separation is achieved successfully.
Peter:
You are right. DOM Snapshot showed that the browser puts the comment before the closing </body> tag:
<!DOCTYPE ...>
<HTML ...>
<HEAD>
...
</HEAD>
<BODY ID="theBody" >
<P>asdf</P>
<!-- trailing comment -->
</BODY>
</HTML>
GreLI,
I didn't look into what the standards say about this. It doesn't really matter if browsers are moving nodes around anyway. This may fall under the "be conservative in what you send and liberal in what you accept" mantra of the web.
Richard,
Can you post an example that shows the double or triple calls in Firebug?
To see the callback function being executed multiple times, simply replace the behavior.js in your example with:
onContentAvailable('engines', function() {
console.log("executing callback");
});
This uses the Firebug Firefox extension to log to the "Firebug Console" every time the callback runs. You will see "executing callback" appear in the console at least twice.
Richard,
I tried your suggestion and you are right. The code in the examples did not do what I wanted with regard to the stack and notFound arrays. When the DOMContentLoaded event fired I was not resetting the stack array to the notFound array. I have changed this in both the onContentAvailable() and onAvailable() functions (even though the latter didn't exhibit the problem).
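For concreteness, here is a rough sketch of that reset, assuming the stack, notFound and loaded variables and the doPoll() function from the article's polling code are in scope (the details in the updated onContentAvailable.js may differ):
if (document.addEventListener) {
  document.addEventListener('DOMContentLoaded', function() {
    loaded = true;
    // The reset that was missing: only callbacks for elements that have
    // not been found yet should remain queued, so nothing runs twice.
    stack = notFound;
    doPoll();
  }, false);
}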
What confused me from your original comment was the idea of multiple threads entering the doPoll() function. I put logging messages at the start and end of the doPoll() function to test this. One call to doPoll() must end before another one starts. This is consistent with the fact that JavaScript is single threaded, so there should be no problem with the doPoll() function being executed twice simultaneously.
Thanks for your feedback.
I've verified that the updated method fixes the problem with the callbacks being executed multiple times. Sorry about my confusing use of "threads"... Based on the behavior it was exhibiting, I'd assumed that JavaScript was multi-threaded. Well, we both learned something. Thanks!
Quote: The reported states can be buggy if a page is being loaded in an iframe element. And just to rub it in, Microsoft gives its all-too-common "the behavior is by design" excuse.
The article you refer to is http://support.microsoft.com/kb/188763
I believe that document.readyState is behaving correctly. They cite an example where the web page loads FULLY, and document.readyState then becomes "complete", as expected. There is a button on the page, which when clicked will insert an iframe into that fully loaded page. The document.readyState remains "complete" during that time. No surprises here.
Would you expect window.onload to fire a second time when you click on the button? I wouldn't.
It seems to me this was indeed all by design, though the need for different behavior or additional events may be reasonable.
There is another issue with iframes (and possibly frames) that I would like to mention, regarding IE. When a parent page (which loads your "OnDOMLoaded" script) contains an iframe, and that iframe contains a large image that loads very slowly, then guess what? The DOM for all the elements inside that iframe will not become available until AFTER that image fully loads. This appears to be the case for IE 5, 6, and 7.
The consequence of this is that Dean's defer trick will fire OnDOMLoaded only after that large image has fully downloaded.
The polling method, if it wishes to access the DOM elements in that iframe, must also wait that long. If you are only interested in polling elements that are outside of the iframes, then there is no need to wait for the iframes to fully download their content (whereas the Dean defer method would).
BTW, Mozilla/Firefox do not wait for the iframe to fully load when they fire the DOMContentLoaded event.
So there you have it. Another wrinkle.
In addition, for Mozilla/Firefox when the DOMContentLoaded event fires, the DOM is not necessarily available inside an iframe.
The lines:
if (notFound.length < 1 || loaded)
and
if (el && (hasNextSibling(el) || loaded))
would it not be better to put the 'loaded' variable first, so that boolean short-circuiting never evaluates 'length' or (even better) 'hasNextSibling'? I doubt this will make a dramatic performance improvement, but it is second nature to me to put the cheapest evaluations (in all languages) on the left and the expensive ones on the right.
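For illustration, a minimal sketch of the suggested reordering; the stub values and the hasNextSibling() placeholder below exist only so the snippet stands on its own, and the names mirror the article's polling code:
// Stand-ins so this snippet runs alone; in the real code these come from
// the polling machinery described in the article.
var loaded = true;                    // set once the load event has fired
var notFound = [];                    // elements still being polled for
var el = document.documentElement;    // stand-in for a polled element
function hasNextSibling(node) {       // placeholder for the article's helper
  return !!node.nextSibling;
}

// Cheapest test first: once 'loaded' is true the right-hand operands
// (the array length check and the DOM walk) are never evaluated.
if (loaded || notFound.length < 1) {
  // run the queued callbacks...
}
if (el && (loaded || hasNextSibling(el))) {
  // el is safe to enliven...
}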
Eric,
Thanks for the comments. You are right about the Microsoft site. I read that page wrong. The "by design" behavior does make sense. I removed that line from this article. I also directed readers down to your comment.
iWantToKeepAnon,
I wish I'd left a comment in the code, but at some point there was a reason to test loaded second. Maybe the hasNextSibling() function had a side effect that it no longer has and needed to run every time that conditional was tested. I remember I had it the other way around and had to switch it in one case to get the correct behavior. That may no longer matter, in which case your change would speed things up.
Peter,
First of all, please read carefully what Jesse Ruderman has said above. He is the right man; follow his advice and it will make your life easier...
I presume you are talking about a dynamic environment where pages are served through ASP, PHP or some other scripting language, so as a second piece of advice: if you really want to be unobtrusive you should act server-side by buffering the entire page before sending it to the browser. Many of these problems are related to the way the page is served to clients ("Transfer-Encoding: chunked").
If you can do this server-side you are going to save yourself a lot of time debugging client-side JavaScript code.
The last trick, if the above still does not work or cannot be applied to your environment, is to use "insertBefore" where you would normally use "appendChild". This works very well if, for example, you want to add your own widgets to the BODY before all elements are loaded. This way your DOM modifications are executed at safe points inside the BODY, for example before the firstChild like this:
document.body.insertBefore(element, document.body.firstChild);
This is the only way I found on IE to modify the DOM before the page is completely loaded. Obviously not all the problems mentioned above can be solved this way. Sometimes the entire node collection is needed.
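As a concrete illustration, a minimal sketch of inserting an element at the top of the BODY while the page may still be parsing; the widget id and text are made up for this example:
// Run this once document.body exists. Inserting before the first child,
// rather than appending, avoids disturbing IE's parser insertion point.
var widget = document.createElement('div');
widget.id = 'loadingNotice';    // hypothetical id
widget.appendChild(document.createTextNode('Loading...'));
document.body.insertBefore(widget, document.body.firstChild);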
In many situations it is also necessary to start as soon as the BODY is available (see BrotherCake), but to avoid another timer in IE the "onactivate" event can be used, checking that the target is the BODY element; in IE this roughly corresponds to the DOMNodeInserted event available in the latest Mozilla/Firefox/Opera versions.
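A rough, untested sketch of that idea; the onBodyAvailable name is made up here, and the behavior of onactivate during loading is as Diego describes rather than something I have verified:
// Run a callback as soon as the BODY element exists: DOMNodeInserted
// (checking the target) where supported, onactivate on IE.
function onBodyAvailable(callback) {
  var done = false;
  function maybeFire(e) {
    e = e || window.event;
    var target = e && (e.target || e.srcElement);
    if (!done && target && target.nodeName === 'BODY') {
      done = true;
      callback();
    }
  }
  if (document.addEventListener) {
    document.addEventListener('DOMNodeInserted', maybeFire, false);
  } else if (document.attachEvent) {
    document.attachEvent('onactivate', maybeFire);
  }
}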
In these newer browsers it seems there is also a hidden "DOMFrameContentLoaded" event that can be used for IFRAMEs.
Cheers,
Diego
Peter,
Again, my compliments for the good work. You have documented several approaches really well, explained the pros and cons of each method and some of the inner workings of the browsers. This is the most extensive collection of methods... well done Peter. It was something that had been missing.
On the method suggested by Jesse Ruderman, you are correct, it doesn't solve every situation, but it is the right approach for many of them where possible. As an example, I have my TOOLTIP widget built this way: in the old days I waited for the "onload" event so I could get a collection of all the elements and go through them all to fiddle with their title/alt attributes; nowadays I just attach a single event listener to the document and wait for mouse events on the elements, checking the target. If it matches, I work on the title/alt properties and display the tooltip.
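For readers unfamiliar with that delegation pattern, here is a minimal sketch; the showTooltip and hideTooltip stubs are hypothetical stand-ins for the real rendering code:
// Hypothetical stand-ins for the real tooltip rendering.
function showTooltip(el, text) { window.status = text; }
function hideTooltip() { window.status = ''; }

// One listener on the document handles every element, present or future,
// so nothing needs to be wired up per element at load time.
document.onmouseover = function(e) {
  e = e || window.event;
  var target = e.target || e.srcElement;
  var text = target.title || target.alt;
  if (text) {
    showTooltip(target, text);
  }
};
document.onmouseout = function() {
  hideTooltip();
};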
TO FIRE WHEN ALL THE NODES HAVE BEEN LOADED IN THE DOM: in IE I use the "onreadystatechange" event on the document and wait for "document.readyState" to be "complete"; in newer browsers like Mozilla/Firefox/Opera I use the "DOMContentLoaded" event on the document if available; in other browsers I use "onload" or a DOM_Eof "marker" element and an interval to check for it to exist.
TO FIRE EARLIER, BEFORE IMAGES ARE COMPLETELY LOADED: in IE I use the "onreadystatechange" event on the document and wait for "document.readyState" to be "interactive"; in newer browsers like Mozilla/Firefox/Opera I use the "DOMNodeInserted" event and check that the target is the BODY; in other browsers I use an interval and check for the BODY to exist.
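A rough sketch of those two stages; the onDomStage name is made up for this example and the fallbacks are simplified compared to what Diego describes:
// Fire callback at one of two stages: 'complete' (all nodes in the DOM)
// or 'early' (the BODY exists but parsing may still be in progress).
function onDomStage(stage, callback) {
  var fired = false;
  function fire() {
    if (!fired) { fired = true; callback(); }
  }
  if (stage === 'complete') {
    if (document.addEventListener) {
      document.addEventListener('DOMContentLoaded', fire, false);
    } else if (document.attachEvent) {
      document.attachEvent('onreadystatechange', function() {
        if (document.readyState === 'complete') { fire(); }
      });
    }
    window.onload = fire;   // last-resort fallback; clobbers other handlers
  } else {
    var timer = setInterval(function() {
      if (document.body) { clearInterval(timer); fire(); }
    }, 10);
  }
}
Code that only needs the BODY would use onDomStage('early', ...) while code that looks for elements by id or class would use onDomStage('complete', ...).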
If using the second method then one has to use "insertBefore" instead of "appendChild" to add nodes to the BODY. With this method it is not possible to append elements to the end of the document, since doing that will move/change the current "parser insertion point" and that will break most of the time in Internet Explorer (it seems that all other browsers can handle this situation easily).
In general, if the triggers of your DOM modifications are mouse events you can solve the problem with method #2, while if the triggers are class names or IDs you expect to find in the document then there is no alternative to using a DOM_Eof "marker", using "onAvailable" methods, or just waiting for "document.readyState" to be "complete".
You may find some notes on the "parser insertion point" in a nice article written by Sam Ruby titled "That's not write!".
I like the IDEA of having a DOM event manager that can fire at different stages; only in this way can we differentiate code that needs to start as soon as possible from code that needs all the elements with some class/id.
The problem seems to be that everybody thinks there is a CATCH-ALL solution that can adapt to any situation. Unfortunately it is not like that; several entry points are needed in the fire-up mechanism to really satisfy everybody's needs.
I would very much like to hear your thoughts on what I have written; I really would not put my hand in the fire over it...
Keep up the nice work and the investigation,
Cheers
Diego Perini
A couple of things I left out:
- On Internet Explorer, Dean Edwards' method of detecting that all nodes are available in the DOM is still one of the best approximations people may have as a CATCH-ALL solution for all needs, though you are right, it may not always work and is prone to breaking when IE versions change (every 5 years).
- On Internet Explorer, if the absolute objective is to precede the "onload" event, then document.readyState == "complete" is still unbeatable on both cached and uncached pages, the disadvantage being that the images, or most of them, will already have loaded by that time.
- As Eric Gerds pointed out on Dean Edwards' blog, combining the two events, one on the document and one on the script element, both checking for the respective "readyState" property to reach the "complete" state, may give still better results than the document.write SCRIPT trick alone; when the SCRIPT trick misses for some reason we fall back on a sure event that still fires before "onload" (one example of this being when no images are present in the page).
- For IE and static HTML pages, not served with "Transfer-Encoding: chunked", a check on the document.fileSize property will say when all the nodes are available through DOM queries. That is because the browser already knows the exact length of the HTML page from the "Content-Length: xxxx" response header. I am still unsure if this works exactly the same in "HTTP/1.0" and "HTTP/1.1"...
For IE the check on the "document.fileSize" is a little tricky:
if (typeof document.fileSize != 'undefined' && typeof document.fileSize != 'unknown') {
// the static HTML page was completely loaded
alert(document.fileSize);
}
Not that I am a fan of a loop continuously checking this property, but I believe this is the right place to report things that may be related to the "window onload" problem, or perhaps somebody cleverer than me can use this information in other, smarter ways...
In relation to the polling option, an alternative I use (which has its own issues) is to poll the total count of elements available using document.getElementsByTagName("*"), and if this count does not change for a defined period of time, assume the DOM has finished loading. This at least saves having to add a dummy marker element.
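A minimal sketch of that idea; the 50 millisecond interval and the three-tick stability requirement are arbitrary choices for illustration:
// Poll the total element count; when it stops changing for a few ticks,
// assume parsing has finished and run the callback once.
function onDomStable(callback) {
  var lastCount = -1;
  var stableTicks = 0;
  var timer = setInterval(function() {
    var count = document.getElementsByTagName('*').length;
    if (count === lastCount) {
      stableTicks++;
      if (stableTicks >= 3) {   // unchanged for roughly 150 ms
        clearInterval(timer);
        callback();
      }
    } else {
      lastCount = count;
      stableTicks = 0;
    }
  }, 50);
}
// usage: onDomStable(function() { /* enliven the page */ });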
hi,
I like the IDEA of having a DOM event manager that can fire at different stages; only in this way can we differentiate code that needs to start as soon as possible from code that needs all the elements with some class/id.
THANKS for the great information
Werbeagentur, I also needed to fire at different stages so I built my own event manager; if there is interest and somebody is willing to test it, I will post the code on my test site...
Peter, I will borrow the phrase "early page enlivenment" to explain what my code tries to achieve; I really believe I have made some progress on the onload problem. Do you have some spare time to look at it?
Cheers,
Diego
Very interesting thoughts. I work as part of a web design team, and we often run into issues with separating out the CSS, HTML, and JavaScript code. We usually end up applying one of your solutions, but we never use one consistently – sometimes we'll go with splitting up the sections, sometimes with bottom scripting; it all kind of depends on what we're doing and what mood we're in that day. We've also had some issues with Internet Explorer 7 when using both defer and window.onload. We've generally been able to get around them, but they've required some creative coding. The Dean Edwards script is interesting, but as you say, the negative tradeoffs just aren't worth the separation of coding. Unfortunately, that seems to be the case in many instances – getting that nice separation requires either too much extra work or just ends up causing issues later on.
I think Dean's script could be made more future-proof by more cautious sniffing, so that the more hacky techniques are only used when we are sure the current browser supports them. For example, for Internet Explorer the defer trick can reliably be limited to the range of tested versions using conditional comments. Personally, I'm quite happy to fall back to window.onload for old, spoofed and unknown browsers.
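A rough sketch of that combination, assuming Dean Edwards' defer trick wrapped in a conditional comment with a window.onload fallback; the script id, the src value and the init() body are placeholders:
// A guarded init so the fallback and the defer trick never both run it.
var initDone = false;
function init() {
  if (initDone) return;
  initDone = true;
  // enliven the page here...
}

// Only IE 5 through 7 reveal the conditionally commented script element,
// so the hacky defer trick is limited to versions it was tested against.
// Every other browser sees just an HTML comment and uses window.onload.
document.write(
  '<!--[if (gte IE 5)&(lte IE 7)]>' +
  '<script id="__ie_onload" defer src="javascript:void(0)"></scr' + 'ipt>' +
  '<![endif]-->'
);
var ieScript = document.getElementById('__ie_onload');
if (ieScript) {
  ieScript.onreadystatechange = function() {
    if (this.readyState === 'complete') { init(); }
  };
}
window.onload = init;   // fallback for old, spoofed and unknown browsers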
The main danger here is that the sniffing code might be fooled. Some vigilance is required.
And the more popular Dean Edwards' technique gets, the more pressure there will be on browsers not to break it. IE will do what it wants, but the other browsers have to be careful.