rowid,title,contents,year,author,author_slug,published,url,topic 314,Easy Ajax with Prototype,"There’s little more impressive on the web today than an appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of a task into a pleasurable activity. But it’s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it’s just so much damn effort. Well, the good news is that – ta-da – it doesn’t have to be a headache. But man does it still look impressive. Here’s how to amaze your friends. Introducing prototype.js Prototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it’s a JavaScript file which you link into your page that then enables you to do cool stuff. There’s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it’s good to go. What a nice man that Mr Stephenson is – friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp. First step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree. Cutting to the chase Before I go on and set up an example of how to use this, let’s just get to the crux. Here’s how Prototype enables you to make a simple Ajax call and dump the results back to the page: var url = 'myscript.php'; var pars = 'foo=bar'; var target = 'output-div'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); This snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page. Knocking up a basic example So to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call to. So, to that basic HTML page for the user to interact with. Here’s one I found whilst out carol singing: Easy Ajax
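By way of illustration, here is a minimal sketch of the sort of page being described – the field, button and output IDs are taken from the glue code later in the article, while the form action, labels and layout are assumptions:

```html
<!-- A sketch, not the original listing: the IDs match the script shown later;
     the form action and wording are assumptions. -->
<script type="text/javascript" src="prototype.js"></script>
<script type="text/javascript" src="ajax.js"></script>
<form action="greeting.php" method="get">
    <label for="greeting-name">Enter your name:</label>
    <input type="text" id="greeting-name" name="greeting-name" />
    <input type="submit" id="greeting-submit" value="Go" />
</form>
<div id="greeting"></div>
```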
As you can see, I’ve linked in prototype.js, and also a file called ajax.js, which is where we’ll be putting our glue. (Careful where you leave your glue, kids.) Our basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There’s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You’ll also notice that the form has a submit button – this is so that it can function as a regular form when no JavaScript is available. It’s important not to get carried away and forget the basics of accessibility. Meanwhile, back at the server So we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you’d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it. Here’s a quick PHP script – greeting.php – that Santa brought me early. Season's Greetings, $the_name!

""; ?> You’ll perhaps want to do something a little more complex within your own projects. Just sayin’. Gluing it all together Inside our ajax.js file, we need to hook this all together. We’re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. He’s how we attach an onload event to the window object and get it to call a function named init(): Event.observe(window, 'load', init, false); Now we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field. function init(){ $('greeting-submit').style.display = 'none'; Event.observe('greeting-name', 'keyup', greet, false); } As you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this: function greet(){ var url = 'greeting.php'; var pars = 'greeting-name='+escape($F('greeting-name')); var target = 'greeting'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); } The key points to note here are that any user input needs to be escaped before putting into the parameters so that it’s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call. That’s it No, seriously. That’s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.",2005,Drew McLellan,drewmclellan,2005-12-01T00:00:00+00:00,https://24ways.org/2005/easy-ajax-with-prototype/,code 315,Edit-in-Place with Ajax,"Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn’t go that far towards implementing something really practical. We dipped our toes in, but haven’t learned to swim yet. So here is swimming lesson number one. Anyone who’s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page. Prototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we’ll need here) and a whole bunch of manipulations to the user interface – all through simple library calls. Here’s what we’re building, so let’s do it. Getting Started There are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you’ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here’s what Santa dropped down my chimney: Edit-in-Place with Ajax

Edit-in-place

Dashing through the snow on a one horse open sleigh.
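Sketched out, the page amounts to something like the following – the heading, copy and the desc id come from the article itself, the rest is assumption:

```html
<!-- A sketch, not the original listing: prototype.js and editinplace.js are
     linked in, and the editable item is the div with the id of desc. -->
<script type="text/javascript" src="prototype.js"></script>
<script type="text/javascript" src="editinplace.js"></script>
<h1>Edit-in-place</h1>
<div id="desc">Dashing through the snow on a one horse open sleigh.</div>
```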

So that’s our page. The editable item is going to be the

called desc. The process goes something like this: Highlight the area onMouseOver Clear the highlight onMouseOut If the user clicks, hide the area and replace with a '; var button = ' OR '; new Insertion.After(obj, textarea+button); Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false); Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false); } The first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object. The last thing to do before we leave the user to edit is it attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel. In the event of a cancel, we can clean up behind ourselves like so: function cleanUp(obj, keepEditable){ Element.remove(obj.id+'_editor'); Element.show(obj); if (!keepEditable) showAsEditable(obj, true); } Saving the Changes This is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response. function saveChanges(obj){ var new_content = escape($F(obj.id+'_edit')); obj.innerHTML = ""Saving...""; cleanUp(obj, true); var success = function(t){editComplete(t, obj);} var failure = function(t){editFailed(t, obj);} var url = 'edit.php'; var pars = 'id=' + obj.id + '&content=' + new_content; var myAjax = new Ajax.Request(url, {method:'post', postBody:pars, onSuccess:success, onFailure:failure}); } function editComplete(t, obj){ obj.innerHTML = t.responseText; showAsEditable(obj, true); } function editFailed(t, obj){ obj.innerHTML = 'Sorry, the update failed.'; cleanUp(obj); } As you can see, we first grab in the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to “Saving…” to show that an update is occurring, and make the Ajax POST. If the Ajax fails, editFailed() sets the contents of the object to “Sorry, the update failed.” Admittedly, that’s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to “Saving…”. No one likes a failure – especially a messy one. If the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up. Meanwhile, back at the server The missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here’s what I have in PHP. Not exactly rocket science is it? I’m just catching the content item from the POST and echoing it back. For your application to be useful, however, you’ll need to know exactly which record you should be updating. I’m passing in the ID of my

, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update. You should also check the user’s credentials to make sure they have permission to edit whatever it is they’re editing. Basically the same rules apply as with any script in your application. Limitations There are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I’ve already mentioned. The second is that from an idealistic standpoint, I’d rather not be using innerHTML. However, the reality is that it’s presently the most efficient way of making large changes to the document. If you’re serving as XML, remember that you’ll need to replace these with proper DOM nodes. It’s also important to note that it’s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don’t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all – it fails silently. It is for this reason that this shouldn’t be used as a complete replacement for a traditional, universally accessible edit form. It’s a great time-saver for those with the ability to use it, but it’s no replacement. See it in action I’ve put together an example page using the inert PHP script above. That is to say, your edits aren’t committed to a database, so the example is reset when the page is reloaded.",2005,Drew McLellan,drewmclellan,2005-12-23T00:00:00+00:00,https://24ways.org/2005/edit-in-place-with-ajax/,code 320,DOM Scripting Your Way to Better Blockquotes,"Block quotes are great. I don’t mean they’re great for indenting content – that would be an abuse of the browser’s default styling. I mean they’re great for semantically marking up a chunk of text that is being quoted verbatim. They’re especially useful in blog posts.

Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.

Notice that you can’t just put the quoted text directly between the blockquote tags. In order for your markup to be valid, block quotes may only contain block-level elements such as paragraphs. There is an optional cite attribute that you can place in the opening blockquote tag. This should contain the URL of the page containing the original text you are quoting:

Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.

Great! Except… the default behavior in most browsers is to completely ignore the cite attribute. Even though it contains important and useful information, the URL in the cite attribute is hidden. You could simply duplicate the information with a hyperlink at the end of the quoted text:

Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.

source

But somehow it feels wrong to have to write out the same URL twice every time you want to quote something. It could also get very tedious if you have a lot of quotes. Well, “tedious” is no problem to a programming language, so why not use a sprinkling of DOM Scripting? Here’s a plan for generating an attribution link for every block quote with a cite attribute: Write a function called prepareBlockquotes. Begin by making sure the browser understands the methods you will be using. Get all the blockquote elements in the document. Start looping through each one. Get the value of the cite attribute. If the value is empty, continue on to the next iteration of the loop. Create a paragraph. Create a link. Give the paragraph a class of “attribution”. Give the link an href attribute with the value from the cite attribute. Place the text “source” inside the link. Place the link inside the paragraph. Place the paragraph inside the block quote. Close the for loop. Close the function. Here’s how that translates to JavaScript: function prepareBlockquotes() { if (!document.getElementsByTagName || !document.createElement || !document.appendChild) return; var quotes = document.getElementsByTagName(""blockquote""); for (var i=0; i<quotes.length; i++) { var source = quotes[i].getAttribute(""cite""); if (!source) continue; var para = document.createElement(""p""); para.className = ""attribution""; var link = document.createElement(""a""); link.setAttribute(""href"", source); link.appendChild(document.createTextNode(""source"")); para.appendChild(link); quotes[i].appendChild(para); } } The function builds a paragraph containing a link back to the source, and places it between the blockquote tags. You can style the attribution link using CSS. It might look good aligned to the right with a smaller font size. If you’re looking for something to do to keep you busy this Christmas, I’m sure that this function could be greatly improved. Here are a few ideas to get you started: Should the text inside the generated link be the URL itself? If the block quote has a title attribute, how would you take its value and use it as the text inside the generated link? Should the attribution paragraph be placed outside the block quote? If so, how would you do that (remember, there is an insertBefore method but no insertAfter)? Can you think of other instances of useful information that’s locked away inside attributes? Access keys? Abbreviations?",2005,Jeremy Keith,jeremykeith,2005-12-05T00:00:00+00:00,https://24ways.org/2005/dom-scripting-your-way-to-better-blockquotes/,code 326,Don't be eval(),"JavaScript is an interpreted language, and like so many of its peers it includes the all powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It’s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you’re using eval() there’s probably something wrong with your design. Common mistakes Here’s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it – but you don’t know the name of the property until runtime. Here’s how NOT to do it: var property = 'bar'; var value = eval('foo.' + property); Yes it will work, but every time that piece of code runs JavaScript will have to kick back in to interpreter mode, slowing down your app. It’s also dirt ugly. Here’s the right way of doing the above: var property = 'bar'; var value = foo[property]; In JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string. Security issues In any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript – you should be extremely cautious of running eval() against any code that may have been tampered with – for example, strings taken from the page query string.
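To make that risk concrete, here is a minimal sketch of my own (not from the article) showing the pattern to avoid – evaluating a value lifted straight from the query string – along with a safer lookup:

```js
// Don't do this: whatever name arrives in the query string gets executed.
var params = location.search.substring(1); // e.g. "callback=showGreeting"
var name = params.split('=')[1];
eval(name + '()'); // runs absolutely anything the URL asks for

// Safer: treat the value as a name, look it up on window, and only call it
// if it really is one of your own functions.
if (typeof window[name] === 'function') {
    window[name]();
}
```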
Executing untrusted code can leave you vulnerable to cross-site scripting attacks. What’s it good for? Some programmers say that eval() is B.A.D. – Broken As Designed – and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that – it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval(). function evalRequest(url) { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { eval(xmlhttp.responseText); } } xmlhttp.open(""GET"", url, true); xmlhttp.send(null); } If you want this to work with Internet Explorer you’ll need to include this compatibility patch.",2005,Simon Willison,simonwillison,2005-12-07T00:00:00+00:00,https://24ways.org/2005/dont-be-eval/,code 327,Improving Form Accessibility with DOM Scripting,"The form label element is an incredibly useful little element – it lets you link the form field unquestionably with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element. What happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements? The classic example is date of birth – ideally, you’ll ask for the date of birth once but you may have three inputs, one each for day, month and year, for which you also need to provide hints about the format required. The problem is that to be truly accessible you need to label each field. So you end up needing something to say “this is a date of birth”, “this is the day field”, “this is the month field” and “this is the year field”. Seems like overkill, doesn’t it? And it can uglify a form no end. There are various ways that you can approach it (and I think I’ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high, merging into the background so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this?
The technique I am suggesting as another alternative is as follows (here comes the pseudo-code): Start with a totally valid and accessible form Ensure that each form input has a label that is linked to its related form control Apply a class to any label that you don’t want to be visible (for example superfluous) Then, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded: Find all the label elements that are marked as superfluous and hide them Find out what input element each of these label elements is related to Then apply a hint about formatting required for input (gleaned from the original, now-hidden label text) – add it to the form input as default text Finally, add in a behaviour that clears or selects the default text (as you choose) So, here’s the theory put into practice – a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here’s the JavaScript that does the heavy lifting. But why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledge hammer to crack a nut? In all honesty, this is something of an experiment, but consider the following: Using the DOM, you can add extra levels of help, potentially across a whole form – or even range of forms – without necessarily increasing your markup (it goes beyond simply hiding labels) Screen readers today may identify a label that is set not to display, but they may not in the future – this might provide a way around By expanding this technique above, it might be possible to visually change the parent container that groups these items – in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers – while still retaining the underlying semantic/logical structure Well, it’s an idea to think about at least. How is it for you? How else might you use DOM scripting to improve the accessibility or usability of your forms?",2005,Ian Lloyd,ianlloyd,2005-12-03T00:00:00+00:00,https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/,code 328,Swooshy Curly Quotes Without Images,"The problem Take a quote and render it within blockquote tags, applying big, funky and stylish curly quotes both at the beginning and the end without using any images – at all. The traditional way Feint background images under the text, or an image in the markup housed in a little float. Often designers only use the opening curly quote as it’s just too difficult to float a closing one. Why is the traditional way bad? Well, for a start there are no actual curly quotes in the text (unless you’re doing some nifty image replacement). Thus with CSS disabled you’ll only have default blockquote styling to fall back on. Secondly, images don’t resize, so scaling text will have no effect on your graphic curlies. The solution Use really big text. Then it can be resized by the browser, resized using CSS, and even be restyled with a new font style if you fancy it. It’ll also make sense when CSS is unavailable. The problem Creating “Drop Caps” with CSS has been around for a while (Big Dan Cederholm discusses a neat solution in that first book of his), but drop caps are normal characters – the A to Z or 1 to 10 – and these can all be pulled into a set space and do not serve up a ton of whitespace, unlike punctuation characters. Curly quotes aren’t like traditional characters.
Like full stops, commas and hashes they float within the character space and leave lots of dead white space, making it bloody difficult to manipulate them with CSS. Styles generally fit around text, so cutting into that character is tricky indeed. Also, all that extra white space is going to push into the quote text and make it look pretty uneven. This grab highlights the actual character space: See how this is emphasized when we add a normal alphabetical character within the span. This is what we’re dealing with here: Then, there’s size. Call in a curly quote at less than 300% font-size and it ain’t gonna look very big. The white space it creates will be big enough, but the curlies will be way too small. We need more like 700% (as in this example) to make an impression, but that sure makes for a big character space. Prepare the curlies Firstly, remove the opening “ from the quote. Replace it with the opening curly quote character entity “. Then replace the closing “ with the entity reference for that, which is ”. Now at least the curlies will look nice and swooshy. Add the hooks Two reasons why we aren’t using :first-letter pseudo class to manipulate the curlies. Firstly, only CSS2-friendly browsers would get what we’re doing, and secondly we need to affect the last “letter” of our text also – the closing curly quote. So, add a span around the opening curly, and a second span around the closing curly, giving complete control of the characters:
Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.
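Marked up, the quote ends up looking something like this – a sketch that assumes a paragraph inside the blockquote, with the class names taken from the CSS that follows:

```html
<!-- A sketch: spans wrap the opening and closing curly quote entities,
     using the bqstart and bqend classes styled below. -->
<blockquote>
    <p>
        <span class="bqstart">&#8220;</span>Speech marks. Curly quotes. That annoying
        thing cool people do with their fingers to emphasize a buzzword, shortly
        before you hit them.<span class="bqend">&#8221;</span>
    </p>
</blockquote>
```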
So far nothing will look any different, aside form the curlies looking a bit nicer. I know we’ve just added extra markup, but the benefits as far as accessibility are concerned are good enough for me, and of course there are no images to download. The CSS OK, easy stuff first. Our first rule .bqstart floats the span left, changes the color, and whacks the font-size up to an exuberant 700%. Our second rule .bqend does the same tricks aside from floating the curly to the right. .bqstart { float: left; font-size: 700%; color: #FF0000; } .bqend { float: right; font-size: 700%; color: #FF0000; } That gives us this, which is rubbish. I’ve highlighted the actual span area with outlines: Note that the curlies don’t even fit inside the span! At this stage on IE 6 PC you won’t even see the quotes, as it only places focus on what it thinks is in the div. Also, the quote text is getting all spangled. Fiddle with margin and padding Think of that span outline box as a window, and that you need to position the curlies within that window in order to see them. By adding some small adjustments to the margin and padding it’s possible to position the curlies exactly where you want them, and remove the excess white space by defining a height: .bqstart { float: left; height: 45px; margin-top: -20px; padding-top: 45px; margin-bottom: -50px; font-size: 700%; color: #FF0000; } .bqend { float: right; height: 25px; margin-top: 0px; padding-top: 45px; font-size: 700%; color: #FF0000; } I wanted the blocks of my curlies to align with the quote text, whereas you may want them to dig in or stick out more. Be aware however that my positioning works for IE PC and Mac, Firefox and Safari. Too much tweaking seems to break the magic in various browsers at various times. Now things are fitting beautifully: I must admit that the heights, margins and spacing don’t make a lot of sense if you analyze them. This was a real trial and error job. Get it working on Safari, and IE would fail. Sort IE, and Firefox would go weird. Finished The final thing looks ace, can be resized, looks cool without styles, and can be edited with CSS at any time. Here’s a real example (note that I’m specifying Lucida Grande and then Verdana for my curlies): “Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.” Browsers happy As I said, too much tweaking of margins and padding can break the effect in some browsers. Even now, Firefox insists on dropping the closing curly by approximately 6 or 7 pixels, and if I adjust the padding for that, it’ll crush it into the text on Safari or IE. Weird. Still, as I close now it seems solid through resizing tests on Safari, Firefox, Camino, Opera and IE PC and Mac. Lovely. It’s probably not perfect, but together we can beat the evil typographic limitations of the web and walk together towards a brighter, more aligned world. Merry Christmas.",2005,Simon Collison,simoncollison,2005-12-21T00:00:00+00:00,https://24ways.org/2005/swooshy-curly-quotes-without-images/,business 335,Naughty or Nice? CSS Background Images,"Web Standards based development involves many things – using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable to as many people and user agents as we can. This we understand to be good. And it is good. 
Except when we don’t clearly think through the full implications of using those techniques. Which often happens when time is short and we need to get things done. Here are some naughty examples of CSS background images with their nicer, more accessible counterparts. Transaction related messages I’m as guilty of this as others (or, perhaps, I’m the only one that has done this, in which case this can serve as my holiday season confessional) We use lovely little icons to show status messages for a transaction to indicate if the action was successful, or was there a warning or error? For example: “Your postal/zip code was not in the correct format.” Notice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. Notice that all of this visual information is now contained in the CSS rules for that div:

Your postal/zip code was not in the correct format.
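The markup itself stays minimal – a sketch, with the inner paragraph being my assumption:

```html
<!-- A sketch of the transaction message: just a classed div, with the icon,
     colours and borders all coming from the CSS below. -->
<div class="error">
    <p>Your postal/zip code was not in the correct format.</p>
</div>
```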

div.error { background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; } Using this approach also makes it very easy to create a div.success and div.warning CSS rules meaning we have less to change in our HTML. Nice, right? No. Naughty. Visual design communicates The CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. With the icon as a background image – there is no way to specify any alt text for the icon, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an “error.” The solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently in the absence of CSS there will be no icon – which (I’m hoping you agree) is critical to communicating there was an error. The icon should be considered content and not simply presentational. The borders and background colour are certainly much less critical – they belong in the CSS. Lets change the code to place the image directly in the HTML and using appropriate alt text to better communicate the meaning of the icon to all users:

Your postal/zip code was not in the correct format.
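That gives markup along these lines – a sketch, with the image path and alt wording assumed:

```html
<!-- A sketch of the revised message: the icon is now a real image with alt
     text, so its meaning survives even when CSS is unavailable. -->
<div class="bettererror">
    <img src="images/error_small.png" alt="Error:" />
    <p>Your postal/zip code was not in the correct format.</p>
</div>
```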

div.bettererror { background-color: #ffcccc; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; position: relative; min-height: 1.25em; } div.bettererror img { display: block; position: absolute; left: 0.25em; top: 0.25em; padding: 0; margin: 0; } div.bettererror p { position: absolute; left: 2.5em; padding: 0; margin: 0; } Compare these two examples of transactional messages Status of a Record This example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three “states” for a listing: new, normal, and sold. Here’s how they look: Example of a New Listing Example of A Sold Listing If we (forgive the pun) blindly apply the “use a CSS background image” technique we clearly run into problems with the new and sold images – they actually contain content with no way to specify an alternative when placed in the CSS. In this case of the “new” image, we can use the same strategy as we used in the first example (the transaction result). The “new” image should be considered content and is placed in the HTML as part of the

...

that identifies the listing. However when considering the “sold” listing, there are less changes to be made to keep the same look by leaving the “SOLD” image as a background image and providing the equivalent information elsewhere in the listing – namely, right in the heading. For those that can’t see the background image, the status is communicated clearly and right away. A screen reader user that is navigating by heading or viewing a listing will know right away that a particular property is sold. Of note here is that in both cases (new and sold) placing the status near the beginning of the record helps with a zoom layout as well. Better Example of A Sold Listing Summary Remember: in the holiday season, its what you give that counts!! Using CSS background images is easy and saves time for you but think of the children. And everyone else for that matter… CSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere).",2005,Derek Featherstone,derekfeatherstone,2005-12-20T00:00:00+00:00,https://24ways.org/2005/naughty-or-nice-css-background-images/,code 121,Hide And Seek in The Head,"If you want your JavaScript-enhanced pages to remain accessible and understandable to scripted and noscript users alike, you have to think before you code. Which functionalities are required (ie. should work without JavaScript)? Which ones are merely nice-to-have (ie. can be scripted)? You should only start creating the site when you’ve taken these decisions. Special HTML elements Once you have a clear idea of what will work with and without JavaScript, you’ll likely find that you need a few HTML elements for the noscript version only. Take this example: A form has a nifty bit of Ajax that automatically and silently sends a request once the user enters something in a form field. However, in order to preserve accessibility, the user should also be able to submit the form normally. So the form should have a submit button in noscript browsers, but not when the browser supports sufficient JavaScript. Since the button is meant for noscript browsers, it must be hard-coded in the HTML: When JavaScript is supported, it should be removed: var checkJS = [check JavaScript support]; window.onload = function () { if (!checkJS) return; document.getElementById('noScriptButton').style.display = 'none'; } Problem: the load event Although this will likely work fine in your testing environment, it’s not completely correct. What if a user with a modern, JavaScript-capable browser visits your page, but has to wait for a huge graphic to load? The load event fires only after all assets, including images, have been loaded. So this user will first see a submit button, but then all of a sudden it’s removed. That’s potentially confusing. Fortunately there’s a simple solution: play a bit of hide and seek in the : var checkJS = [check JavaScript support]; if (checkJS) { document.write(''); } First, check if the browser supports enough JavaScript. If it does, document.write an extra So we end up with a nice simple to understand but also quick to write XSL which can be used on ATOM Flickr feeds and ATOM News feeds. With a little playing around with XSL, you can make XML beautiful again. All the files can be found in the zip file (14k)",2006,Ian Forrester,ianforrester,2006-12-07T00:00:00+00:00,https://24ways.org/2006/beautiful-xml-with-xsl/,code 141,Compose to a Vertical Rhythm,"“Space in typography is like time in music. 
It is infinitely divisible, but a few proportional intervals can be much more useful than a limitless choice of arbitrary quantities.” So says the typographer Robert Bringhurst, and just as regular use of time provides rhythm in music, so regular use of space provides rhythm in typography, and without rhythm the listener, or the reader, becomes disorientated and lost. On the Web, vertical rhythm – the spacing and arrangement of text as the reader descends the page – is contributed to by three factors: font size, line height and margin or padding. All of these factors must be calculated with care in order that the rhythm is maintained. The basic unit of vertical space is line height. Establishing a suitable line height that can be applied to all text on the page, be it heading, body copy or sidenote, is the key to a solid dependable vertical rhythm, which will engage and guide the reader down the page. To see this in action, I’ve created an example with headings, footnotes and sidenotes. Establishing a suitable line height The easiest place to begin determining a basic line height unit is with the font size of the body copy. For the example I’ve chosen 12px. To ensure readability the body text will almost certainly need some leading, that is to say spacing between the lines. A line-height of 1.5em would give 6px spacing between the lines of body copy. This will create a total line height of 18px, which becomes our basic unit. Here’s the CSS to get us to this point: body { font-size: 75%; } html>body { font-size: 12px; } p { line-height: 1.5em; } There are many ways to size text in CSS and the above approach provides an accessible method of achieving the pixel-precision solid typography requires. By way of explanation, the first font-size reduces the body text from the 16px default (common to most browsers and OS set-ups) down to the 12px we require. This rule is primarily there for Internet Explorer 6 and below on Windows: the percentage value means that the text will scale predictably should a user bump the text size up or down. The second font-size sets the text size specifically and is ignored by IE6, but used by Firefox, Safari, IE7, Opera and other modern browsers which allow users to resize text sized in pixels. Spacing between paragraphs With our rhythmic unit set at 18px we need to ensure that it is maintained throughout the body copy. A common place to lose the rhythm is the gaps set between margins. The default treatment by web browsers of paragraphs is to insert a top- and bottom-margin of 1em. In our case this would give a spacing between the paragraphs of 12px and hence throw the text out of rhythm. If the rhythm of the page is to be maintained, the spacing of paragraphs should be related to the basic line height unit. This is achieved simply by setting top- and bottom-margins equal to the line height. In order that typographic integrity is maintained when text is resized by the user we must use ems for all our vertical measurements, including line-height, padding and margins. p { font-size:1em; margin-top: 1.5em; margin-bottom: 1.5em; } Browsers set margins on all block-level elements (such as headings, lists and blockquotes) so a way of ensuring that typographic attention is paid to all such elements is to reset the margins at the beginning of your style sheet. You could use a rule such as: body,div,dl,dt,dd,ul,ol,li,h1,h2,h3,h4,h5,h6,pre,form,fieldset,p,blockquote,th,td { margin:0; padding:0; } Alternatively you could look into using the Yahoo!
UI Reset style sheet which removes most default styling, so providing a solid foundation upon which you can explicitly declare your design intentions. Variations in text size When there is a change in text size, perhaps with a heading or sidenotes, the differing text should also take up a multiple of the basic leading. This means that, in our example, every diversion from the basic text size should take up multiples of 18px. This can be accomplished by adjusting the line-height and margin accordingly, as described following. Headings Subheadings in the example page are set to 14px. In order that the height of each line is 18px, the line-height should be set to 18 ÷ 14 = 1.286. Similarly the margins above and below the heading must be adjusted to fit. The temptation is to set heading margins to a simple 1em, but in order to maintain the rhythm, the top and bottom margins should be set at 1.286em so that the spacing is equal to the full 18px unit. h2 { font-size:1.1667em; line-height: 1.286em; margin-top: 1.286em; margin-bottom: 1.286em; } One can also set asymmetrical margins for headings, provided the margins combine to be multiples of the basic line height. In our example, a top margin of 1½ lines is combined with a bottom margin of half a line as follows: h2 { font-size:1.1667em; line-height: 1.286em; margin-top: 1.929em; margin-bottom: 0.643em; } Also in our example, the main heading is given a text size of 18px, therefore the line-height has been set to 1em, as has the margin: h1 { font-size:1.5em; line-height: 1em; margin-top: 0; margin-bottom: 1em; } Sidenotes Sidenotes (and other supplementary material) are often set at a smaller size to the basic text. To keep the rhythm, this smaller text should still line up with body copy, so a calculation similar to that for headings is required. In our example, the sidenotes are set at 10px and so their line-height must be increased to 18 ÷ 10 = 1.8. .sidenote { font-size:0.8333em; line-height:1.8em; } Borders One additional point where vertical rhythm is often lost is with the introduction of horizontal borders. These effectively act as shims pushing the subsequent text downwards, so a two pixel horizontal border will throw out the vertical rhythm by two pixels. A way around this is to specify horizontal lines using background images or, as in our example, specify the width of the border in ems and adjust the padding to take up the slack. The design of the footnote in our example requires a 1px horizontal border. The footnote contains 12px text, so 1px in ems is 1 ÷ 12 = 0.0833. I have added a margin of 1½ lines above the border (1.5 × 18 ÷ 12 = 2.5ems), so to maintain the rhythm the border + padding must equal a ½ (9px). We know the border is set to 1px, so the padding must be set to 8px. To specify this in ems we use the familiar calculation: 8 ÷ 12 = 0.667. Hit me with your rhythm stick Composing to a vertical rhythm helps engage and guide the reader down the page, but it takes typographic discipline to do so. It may seem like a lot of fiddly maths is involved (a few divisions and multiplications never hurt anyone) but good type setting is all about numbers, and it is this attention to detail which is the key to success.",2006,Richard Rutter,richardrutter,2006-12-12T00:00:00+00:00,https://24ways.org/2006/compose-to-a-vertical-rhythm/,design 143,Marking Up a Tag Cloud,"Everyone’s doing it. The problem is, everyone’s doing it wrong. Harsh words, you might think. But the crimes against decent markup are legion in this area. 
You see, I’m something of a markup and semantics junkie. So I’m going to analyse some of the more well-known tag clouds on the internet, explain what’s wrong, and then show you one way to do it better. del.icio.us I think the first ever tag cloud I saw was on del.icio.us. Here’s how they mark it up.
.net advertising ajax ...
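The shape of that markup is roughly as follows – a sketch, with the URLs invented for illustration:

```html
<!-- A sketch of the pattern: plain anchors, one after another, inside a div. -->
<div>
    <a href="/tag/.net">.net</a>
    <a href="/tag/advertising">advertising</a>
    <a href="/tag/ajax">ajax</a> ...
</div>
```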
Unfortunately, that is one of the worst examples of tag cloud markup I have ever seen. The page states that a tag cloud is a list of tags where size reflects popularity. However, despite describing it in this way to the human readers, the page’s author hasn’t described it that way in the markup. It isn’t a list of tags, just a bunch of anchors in a div. This is also inaccessible because a screenreader will not pause between adjacent links, and in some configurations will not announce the individual links, but rather all of the tags will be read as just one link containing a whole bunch of words. Markup crime number one. Flickr Ah, Flickr. The darling photo sharing site of the internet, and the biggest blind spot in every standardista’s vision. Forgive it for having atrocious markup and sometimes confusing UI because it’s just so much damn fun to use. Let’s see what they do.

 06   africa   amsterdam  ...
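A sketch of the shape of that markup, with the pixel sizes and URLs invented for illustration:

```html
<!-- A sketch of the pattern: anchors in a paragraph, sized with inline
     pixel styles and spaced out with non-breaking spaces. -->
<p>
    <a href="/photos/tags/06/" style="font-size: 11px;">06</a>&nbsp;
    <a href="/photos/tags/africa/" style="font-size: 16px;">africa</a>&nbsp;
    <a href="/photos/tags/amsterdam/" style="font-size: 24px;">amsterdam</a> ...
</p>
```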

Again we have a simple collection of anchors like del.icio.us, only this time in a paragraph. But rather than using a class to represent the size of the tag they use an inline style. An inline style using a pixel-based font size. That’s so far away from the goal of separating style from content, they might as well use a font tag. You could theoretically parse that to extract the information, but you have more work to do to guess what the pixel sizes represent. Markup crime number two (and extra jail time for using non-breaking spaces purely for visual spacing purposes.) Technorati Ah, now. Here, you’d expect something decent. After all, the Overlord of microformats and King of Semantics Tantek Çelik works there. Surely we’ll see something decent here?
  1. Britney Spears
  2. Bush
  3. Christmas
  4. ...
  5. SEO
  6. Shopping
  7. ...
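A sketch of the shape of that markup, with the URLs and the exact depth of nesting invented for illustration:

```html
<!-- A sketch of the pattern: an ordered list of links, with popularity
     conveyed by wrapping each link in more and more em elements. -->
<ol>
    <li><em><em><em><a href="/tag/britney+spears">Britney Spears</a></em></em></em></li>
    <li><em><em><a href="/tag/bush">Bush</a></em></em></li>
    <li><em><em><em><em><a href="/tag/christmas">Christmas</a></em></em></em></em></li>
    <li>...</li>
</ol>
```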
Unfortunately it turns out not to be that decent, and stop calling me Shirley. It’s not exactly terrible code. It does recognise that a tag cloud is a list of links. And, since they’re in alphabetical order, that it’s an ordered list of links. That’s nice. However … fifteen nested tags? FIFTEEN? That’s emphasis for you. Yes, it is parse-able, but it’s also something of a strange way of looking at emphasis. The HTML spec states that is emphasis, and is for stronger emphasis. Nesting tags seems counter to the idea that different tags are used for different levels of emphasis. Plus, if you had a screen reader that stressed the voice for emphasis, what would it do? Shout at you? Markup crime number three. So what should it be? As del.icio.us tells us, a tag cloud is a list of tags where the size that they are rendered at contains extra information. However, by hiding the extra context purely within the CSS or the HTML tags used, you are denying that context to some users. The basic assumption being made is that all users will be able to see the difference between font sizes, and this is demonstrably false. A better way to code a tag cloud is to put the context of the cloud within the content, not the markup or CSS alone. As an example, I’m going to take some of my favourite flickr tags and put them into a cloud which communicates the relative frequency of each tag. To start with a tag cloud in its most basic form is just a list of links. I am going to present them in alphabetical order, so I’ll use an ordered list. Into each list item I add the number of photos I have with that particular tag. The tag itself is linked to the page on flickr which contains those photos. So we end up with this first example. To display this as a traditional tag cloud, we need to alter it in a few ways: The items need to be displayed next to each other, rather than one-per-line The context information should be hidden from display (but not from screen readers) The tag should link to the page of items with that tag Displaying the items next to each other simply means setting the display of the list elements to inline. The context can be hidden by wrapping it in a and then using the off-left method to hide it. And the link just means adding an anchor (with rel=""tag"" for some extra microformats bonus points). So, now we have a simple collection of links in our second example. The last stage is to add the sizes. Since we already have context in our content, the size is purely for visual rendering, so we can just use classes to define the different sizes. For my example, I’ll use a range of class names from not-popular through ultra-popular, in order of smallest to largest, and then use CSS to define different font sizes. If you preferred, you could always use less verbose class names such as size1 through size6. Anyway, adding some classes and CSS gives us our final example, a semantic and more accessible tag cloud.",2006,Mark Norman Francis,marknormanfrancis,2006-12-09T00:00:00+00:00,https://24ways.org/2006/marking-up-a-tag-cloud/,code 144,"The Mobile Web, Simplified","A note from the editors: although eye-opening in 2006, this article is no longer relevant to today’s mobile web. Considering a foray into mobile web development? Following are four things you need to know before making the leap. 1. 4 billion mobile subscribers expected by 2010 Fancy that. Coupled with the UN prediction of 6.8 billion humans by 2010, 4 billion mobile subscribers (source) is an astounding 59% of the planet. 
Just how many of those subscribers will have data plans and web-enabled phones is still in question, but inevitably this all means one thing for you and me: A ton of potential eyes to view our web content on a mobile device. 2. Context is king Your content is of little value to users if it ignores the context in which it is viewed. Consider how you access data on your mobile device. You might be holding a bottle of water or gripping a handle on the subway/tube. You’re probably seeking specific data such as directions or show times, rather than the plethora of data at your disposal via a desktop PC. The mobile web, a phrase often used to indicate “accessing the web on a mobile device”, is very much a context-, content-, and component-specific environment. Expressed in terms of your potential target audience, access to web content on a mobile device is largely influenced by surrounding circumstances and conditions, information relevant to being mobile, and the feature set of the device being used. Ask yourself, What is relevant to my users and the tasks, problems, and needs they may encounter while being mobile? Answer that question and you’ll be off to a great start. 3. WAP 2.0 is an XHTML environment In a nutshell, here are a few fundamental tenets of mobile internet technology: Wireless Application Protocol (WAP) is the protocol for enabling mobile access to internet content. Wireless Markup Language (WML) was the language of choice for WAP 1.0. Nearly all devices sold today are WAP 2.0 devices. With the introduction of WAP 2.0, XHTML Mobile Profile (XHTML-MP) became the preferred markup language. XHTML-MP will be familiar to anyone experienced with XHTML Transitional or Strict. Summary? The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it. With WML on the decline, the learning curve is much smaller today than it was several years ago. I’m generalizing things gratuitously, but the point remains: Get off yo’ lazy butt and begin to take mobile seriously. I’ll even pass you a few tips for getting started. First, the DOCTYPE for XHTML-MP is as follows: As for MIME type, Open Mobile Alliance (OMA) specifies using the MIME type application/vnd.wap.xhtml+xml, but ultimately you need to ensure the server delivering your mobile content is configured properly for the MIME type you choose to use, as there are other options (see Setting up WAP Servers). Once you’ve made it to the body, the XHTML-MP markup is not unlike what you’re already used to. A few resources worth skimming: Developers Home XHTML-MP Tutorial – An impressively replete resource for all things XHTML-MP XHTML-MP Tags List – A complete list of XHTML-MP elements and accompanying attributes And last but certainly not least, CSS. There exists WAP CSS, which is essentially a subset of CSS2 with WAP-specific extensions. For all intents and purposes, much of the CSS you’re already comfortable using will be transferrable to mobile. As for including CSS in your pages, your options are the same as for desktop sites: external, embedded, and inline. Some experts will argue embedded or inline over external in favor of reducing the number of HTTP connections per page request, yet many popular mobilized sites and apps employ external linking without issue. Stocking stuffers: Flickr Mobile, Fandango Mobile, and Popurls Mobile. A few sites with whom you can do the View Source song and dance for further study. 4. 
“Cell phone” is so DynaTAC If you’re a U.S. resident, listen up: You must rid your vocabulary of the term “cell phone”. We’re one of the few economies on the planet to refer to a mobile phone accordingly. If you care to find yourself in any of the worthwhile mobile development circles, begin using terms more widely accepted: “mobile” or “mobile phone” or “handset” or “handy”. If you’re not sure which, go for “mobile”. Such as, “Yo dog, check out my new mobile.” More importantly, however, is overcoming the mentality that access to the mobile web can be done only with a phone. Instead, “device” encourages us to think phone, handheld computer, watch, Nintendo DS, car, you name it. Simple enough?",2006,Cameron Moll,cameronmoll,2006-12-19T00:00:00+00:00,https://24ways.org/2006/the-mobile-web-simplified/,ux 147,Christmas Is In The AIR,"That’s right, Christmas is coming up fast and there’s plenty of things to do. Get the tree and lights up, get the turkey, buy presents and who know what else. And what about Santa? He’s got a list. I’m pretty sure he’s checking it twice. Sure, we could use an existing list making web site or even a desktop widget. But we’re geeks! What’s the fun in that? Let’s build our own to-do list application and do it with Adobe AIR! What’s Adobe AIR? Adobe AIR, formerly codenamed Apollo, is a runtime environment that runs on both Windows and OSX (with Linux support to follow). This runtime environment lets you build desktop applications using Adobe technologies like Flash and Flex. Oh, and HTML. That’s right, you web standards lovin’ maniac. You can build desktop applications that can run cross-platform using the trio of technologies, HTML, CSS and JavaScript. If you’ve tried developing with AIR before, you’ll need to get re-familiarized with the latest beta release as many things have changed since the last one (such as the API and restrictions within the sandbox.) To get started To get started in building an AIR application, you’ll need two basic things: The AIR runtime. The runtime is needed to run any AIR-based application. The SDK. The software development kit gives you all the pieces to test your application. Unzip the SDK into any folder you wish. You’ll also want to get your hands on the JavaScript API documentation which you’ll no doubt find yourself getting into before too long. (You can download it, too.) Also of interest, some development environments have support for AIR built right in. Aptana doesn’t have support for beta 3 yet but I suspect it’ll be available shortly. Within the SDK, there are two main tools that we’ll use: one to test the application (ADL) and another to build a distributable package of our application (ADT). I’ll get into this some more when we get to that stage of development. Building our To-do list application The first step to building an application within AIR is to create an XML file that defines our default application settings. I call mine application.xml, mostly because Aptana does that by default when creating a new AIR project. It makes sense though and I’ve stuck with it. Included in the templates folder of the SDK is an example XML file that you can use. The first key part to this after specifying things like the application ID, version, and filename, is to specify what the default content should be within the content tags. Enter in the name of the HTML file you wish to load. Within this HTML file will be our application. 
ui.html Create a new HTML document and name it ui.html and place it in the same directory as the application.xml file. The first thing you’ll want to do is copy over the AIRAliases.js file from the frameworks folder of the SDK and add a link to it within your HTML document. The aliases create shorthand links to all of the Flash-based APIs. Now is probably a good time to explain how to debug your application. Debugging our application So, with our XML file created and HTML file started, let’s try testing our ‘application’. We’ll need the ADL application located in BIN folder of the SDK and tell it to run the application.xml file. /path/to/adl /path/to/application.xml You can also just drag the XML file onto ADL and it’ll accomplish the same thing. If you just did that and noticed that your blank application didn’t load, you’d be correct. It’s running but isn’t visible. Which at this point means you’ll have to shut down the ADL process. Sorry about that! Changing the visibility You have two ways to make your application visible. You can do it automatically by setting the placing true in the visible tag within the application.xml file. true The other way is to do it programmatically from within your application. You’d want to do it this way if you had other startup tasks to perform before showing the interface. To turn the UI on programmatically, simple set the visible property of nativeWindow to true. Sandbox Security Now that we have an application that we can see when we start it, it’s time to build the to-do list application. In doing so, you’d probably think that using a JavaScript library is a really good idea — and it can be but there are some limitations within AIR that have to be considered. An HTML document, by default, runs within the application sandbox. You have full access to the AIR APIs but once the onload event of the window has fired, you’ll have a limited ability to make use of eval and other dynamic script injection approaches. This limits the ability of external sources from gaining access to everything the AIR API offers, such as database and local file system access. You’ll still be able to make use of eval for evaluating JSON responses, which is probably the most important if you wish to consume JSON-based services. If you wish to create a greater wall of security between AIR and your HTML document loading in external resources, you can create a child sandbox. We won’t need to worry about it for our application so I won’t go any further into it but definitely keep this in mind. Finally, our application Getting tired of all this preamble? Let’s actually build our to-do list application. I’ll use jQuery because it’s small and should suit our needs nicely. Let’s begin with some structure:
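As a stand-in for that structure, here is a minimal sketch – the IDs match the jQuery selectors used below, the wording is assumed:

```html
<!-- A sketch of the interface: a text field, an Add button and an empty
     list to hold the to-do items. -->
<input type="text" id="text" />
<input type="button" id="add" value="Add" />
<ul id="list"></ul>
```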
    Now we need to wire up that button to actually add a new item to our to-do list. And just like that, we’ve got a to-do list! That’s it! Just never close your application and you’ll remember everything. Okay, that’s not very practical. You need to have some way of storing your to-do items until the next time you open up the application. Storing Data You’ve essentially got 4 different ways that you can store data: Using the local database. AIR comes with SQLLite built in. That means you can create tables and insert, update and select data from that database just like on a web server. Using the file system. You can also create files on the local machine. You have access to a few folders on the local system such as the documents folder and the desktop. Using EcryptedLocalStore. I like using the EcryptedLocalStore because it allows you to easily save key/value pairs and have that information encrypted. All this within just a couple lines of code. Sending the data to a remote API. Our to-do list could sync up with Remember the Milk, for example. To demonstrate some persistence, we’ll use the file system to store our files. In addition, we’ll let the user specify where the file should be saved. This way, we can create multiple to-do lists, keeping them separate and organized. The application is now broken down into 4 basic tasks: Load data from the file system. Perform any interface bindings. Manage creating and deleting items from the list. Save any changes to the list back to the file system. Loading in data from the file system When the application starts up, we’ll prompt the user to select a file or specify a new to-do list. Within AIR, there are 3 main file objects: File, FileMode, and FileStream. File handles file and path names, FileMode is used as a parameter for the FileStream to specify whether the file should be read-only or for write access. The FileStream object handles all the read/write activity. The File object has a number of shortcuts to default paths like the documents folder, the desktop, or even the application store. In this case, we’ll specify the documents folder as the default location and then use the browseForSave method to prompt the user to specify a new or existing file. If the user specifies an existing file, they’ll be asked whether they want to overwrite it. var store = air.File.documentsDirectory; var fileStream = new air.FileStream(); store.browseForSave(""Choose To-do List""); Then we add an event listener for when the user has selected a file. When the file is selected, we check to see if the file exists and if it does, read in the contents, splitting the file on new lines and creating our list items within the interface. store.addEventListener(air.Event.SELECT, fileSelected); function fileSelected() { air.trace(store.nativePath); // load in any stored data var byteData = new air.ByteArray(); if(store.exists) { fileStream.open(store, air.FileMode.READ); fileStream.readBytes(byteData, 0, store.size); fileStream.close(); if(byteData.length > 0) { var s = byteData.readUTFBytes(byteData.length); oldlist = s.split(“\r\n”); // create todolist items for(var i=0; i < oldlist.length; i++) { createItem(oldlist[i], (new Date()).getTime() + i ); } } } } Perform Interface Bindings This is similar to before where we set the click event on the Add button but we’ve moved the code to save the list into a separate function. 
$('#add').click(function(){ var t = $('#text').val(); if(t){ // create an ID using the time createItem(t, (new Date()).getTime() ); } }) Manage creating and deleting items from the list The list management is now in its own function, similar to before but with some extra information to identify list items and with calls to save our list after each change. function createItem(t, id) { if(t.length == 0) return; // add it to the todo list todolist[id] = t; // use DOM methods to create the new list item var li = document.createElement('li'); // the extra space at the end creates a buffer between the text // and the delete link we're about to add li.appendChild(document.createTextNode(t + ' ')); // create the delete link var del = document.createElement('a'); // this makes it a true link. I feel dirty doing this. del.setAttribute('href', '#'); del.addEventListener('click', function(evt){ var id = this.id.substr(1); delete todolist[id]; // remove the item from the list this.parentNode.parentNode.removeChild(this.parentNode); saveList(); }); del.appendChild(document.createTextNode('[del]')); del.id = 'd' + id; li.appendChild(del); // append everything to the list $('#list').append(li); //reset the text box $('#text').val(''); saveList(); } Save changes to the file system Any time a change is made to the list, we update the file. The file will always reflect the current state of the list and we’ll never have to click a save button. It just iterates through the list, adding a new line to each one. function saveList(){ if(store.isDirectory) return; var packet = ''; for(var i in todolist) { packet += todolist[i] + '\r\n'; } var bytes = new air.ByteArray(); bytes.writeUTFBytes(packet); fileStream.open(store, air.FileMode.WRITE); fileStream.writeBytes(bytes, 0, bytes.length); fileStream.close(); } One important thing to mention here is that we check if the store is a directory first. The reason we do this goes back to our browseForSave call. If the user cancels the dialog without selecting a file first, then the store points to the documentsDirectory that we set it to initially. Since we haven’t specified a file, there’s no place to save the list. Hopefully by this point, you’ve been thinking of some cool ways to pimp out your list. Now we need to package this up so that we can let other people use it, too. Creating a Package Now that we’ve created our application, we need to package it up so that we can distribute it. This is a two step process. The first step is to create a code signing certificate (or you can pay for one from Thawte which will help authenticate you as an AIR application developer). To create a self-signed certificate, run the following command. This will create a PFX file that you’ll use to sign your application. adt -certificate -cn todo24ways 1024-RSA todo24ways.pfx mypassword After you’ve done that, you’ll need to create the package with the certificate adt -package -storetype pkcs12 -keystore todo24ways.pfx todo24ways.air application.xml . The important part to mention here is the period at the end of the command. We’re telling it to package up all files in the current directory. After that, just run the AIR file, which will install your application and run it. Important things to remember about AIR When developing an HTML application, the rendering engine is Webkit. You’ll thank your lucky stars that you aren’t struggling with cross-browser issues. (My personal favourites are multiple backgrounds and border radius!) Be mindful of memory leaks. 
Things like Ajax calls and event binding can cause applications to slowly leak memory over time. Web pages are normally short lived but desktop applications are often open for hours, if not days, and you may find your little desktop application taking up more memory than anything else on your machine! The WebKit runtime itself can also be a memory hog, usually taking about 15MB just for itself. If you create multiple HTML windows, it’ll add another 15MB to your memory footprint. Our little to-do list application shouldn’t be much of a concern, though. The other important thing to remember is that you’re still essentially running within a Flash environment. While you probably won’t notice this working in small applications, the moment you need to move to multiple windows or need to accomplish stuff beyond what HTML and JavaScript can give you, the need to understand some of the Flash-based elements will become more important. Lastly, the other thing to remember is that HTML links will load within the AIR application. If you want a link to open in the users web browser, you’ll need to capture that event and handle it on your own. The following code takes the HREF from a clicked link and opens it in the default web browser. air.navigateToURL(new air.URLRequest(this.href)); Only the beginning Of course, this is only the beginning of what you can do with Adobe AIR. You don’t have the same level of control as building a native desktop application, such as being able to launch other applications, but you do have more control than what you could have within a web application. Check out the Adobe AIR Developer Center for HTML and Ajax for tutorials and other resources. Now, go forth and create your desktop applications and hopefully you finish all your shopping before Christmas! Download the example files.",2007,Jonathan Snook,jonathansnook,2007-12-19T00:00:00+00:00,https://24ways.org/2007/christmas-is-in-the-air/,code 152,CSS for Accessibility,"CSS is magical stuff. In the right hands, it can transform the plainest of (well-structured) documents into a visual feast. But it’s not all fur coat and nae knickers (as my granny used to say). Here are some simple ways you can use CSS to improve the usability and accessibility of your site. Even better, no sexy visuals will be harmed by the use of these techniques. Promise. Nae knickers This is less of an accessibility tip, and more of a reminder to check that you’ve got your body background colour specified. If you’re sitting there wondering why I’m mentioning this, because it’s a really basic thing, then you might be as surprised as I was to discover that from a sample of over 200 sites checked last year, 35% of UK local authority websites were missing their body background colour. Forgetting to specify your body background colour can lead to embarrassing gaps in coverage, which are not only unsightly, but can prevent your users reading the text on your site if they use a different operating system colour scheme. All it needs is the following line to be added to your CSS file: body {background-color: #fff;} If you pair it with color: #000; … you’ll be assured of maintaining contrast for any areas you inadvertently forget to specify, no matter what colour scheme your user needs or prefers. Even better, if you’ve got standard reset CSS you use, make sure that default colours for background and text are specified in it, so you’ll never be caught with your pants down. 
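If you want a starting point, a fragment along these lines in your reset does the job. The colours here are just safe defaults, ready to be overridden by your actual design:

    /* pin down default colours in the reset, alongside the usual zeroing */
    body {
        margin: 0;
        padding: 0;
        background-color: #fff;
        color: #000;
    }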
At the very least, you’ll have a white background and black text that’ll prompt you to change them to your chosen colours. Elbow room Paying attention to your typography is important, but it’s not just about making it look nice. Careful use of the line-height property can make your text more readable, which helps everyone, but is particularly helpful for those with dyslexia, who use screen magnification or simply find it uncomfortable to read lots of text online. When lines of text are too close together, it can cause the eye to skip down lines when reading, making it difficult to keep track of what you’re reading across. So, a bit of room is good. That said, when lines of text are too far apart, it can be just as bad, because the eye has to jump to find the next line. That not only breaks up the reading rhythm, but can make it much more difficult for those using Screen Magnification (especially at high levels of magnification) to find the beginning of the next line which follows on from the end of the line they’ve just read. Using a line height of between 1.2 and 1.6 times normal can improve readability, and using unit-less line heights help take care of any pesky browser calculation problems. For example: p { font-family: ""Lucida Grande"", Lucida, Verdana, Helvetica, sans-serif; font-size: 1em; line-height: 1.3; } or if you want to use the shorthand version: p { font: 1em/1.3 ""Lucida Grande"", Lucida, Verdana, Helvetica, sans-serif; } View some examples of different line-heights, based on default text size of 100%/1em. Further reading on Unitless line-heights from Eric Meyer. Transformers: Initial case in disguise Nobody wants to shout at their users, but there are some occasions when you might legitimately want to use uppercase on your site. Avoid screen-reader pronunciation weirdness (where, for example, CONTACT US would be read out as Contact U S, which is not the same thing – unless you really are offering your users the chance to contact the United States) caused by using uppercase by using title case for your text and using the often neglected text-transform property to fake uppercase. For example: .uppercase { text-transform: uppercase } Don’t overdo it though, as uppercase text is harder to read than normal text, not to mention the whole SHOUTING thing. Linky love When it comes to accessibility, keyboard only users (which includes those who use voice recognition software) who can see just fine are often forgotten about in favour of screen reader users. This Christmas, share the accessibility love and light up those links so all of your users can easily find their way around your site. The link outline AKA: the focus ring, or that dotted box that goes around links to show users where they are on the site. The techniques below are intended to supplement this, not take the place of it. You may think it’s ugly and want to get rid of it, especially since you’re going to the effort of tarting up your links. Don’t. Just don’t. The non-underlined underline If you listen to Jacob Nielsen, every link on your site should be underlined so users know it’s a link. You might disagree with him on this (I know I do), but if you are choosing to go with underlined links, in whatever state, then remove the default underline and replacing it with a border that’s a couple of pixels away from the text. The underline is still there, but it’s no longer cutting off the bottom of letters with descenders (e.g., g and y) which makes it easier to read. This is illustrated in Examples 1 and 2. 
You can modify the three lines of code below to suit your own colour and border style preferences, and add it to whichever link state you like. text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; Standing out from the crowd Whatever way you choose to do it, you should be making sure your links stand out from the crowd of normal text which surrounds them when in their default state, and especially in their hover or focus states. A good way of doing this is to reverse the colours when on hover or focus. Well-focused Everyone knows that you can use the :hover pseudo class to change the look of a link when you mouse over it, but, somewhat ironically, people who can see and use a mouse are the group who least need this extra visual clue, since the cursor handily (sorry) changes from an arrow to a hand. So spare a thought for the non-pointing device users that visit your site and take the time to duplicate that hover look by using the :focus pseudo class. Of course, the internets being what they are, it’s not quite that simple, and predictably, Internet Explorer is the culprit once more with it’s frustrating lack of support for :focus. Instead it applies the :active pseudo class whenever an anchor has focus. What this means in practice is that if you want to make your links change on focus as well as on hover, you need to specify focus, hover and active. Even better, since the look and feel necessarily has to be the same for the last three states, you can combine them into one rule. So if you wanted to do a simple reverse of colours for a link, and put it together with the non-underline underlines from before, the code might look like this: a:link { background: #fff; color: #000; font-weight: bold; text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; } a:visited { background: #fff; color: #800080; font-weight: bold; text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; } a:focus, a:hover, a:active { background: #000; color: #fff; font-weight: bold; text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; } Example 3 shows what this looks like in practice. Location, Location, Location To take this example to it’s natural conclusion, you can add an id of current (or something similar) in appropriate places in your navigation, specify a full set of link styles for current, and have a navigation which, at a glance, lets users know which page or section they’re currently in. Example navigation using location indicators. and the source code Conclusion All the examples here are intended to illustrate the concepts, and should not be taken as the absolute best way to format links or style navigation bars – that’s up to you and whatever visual design you’re using at the time. They’re also not the only things you should be doing to make your site accessible. Above all, remember that accessibility is for life, not just for Christmas.",2007,Ann McMeekin,annmcmeekin,2007-12-13T00:00:00+00:00,https://24ways.org/2007/css-for-accessibility/,design 153,JavaScript Internationalisation,"or: Why Rudolph Is More Than Just a Shiny Nose Dunder sat, glumly staring at the computer screen. “What’s up, Dunder?” asked Rudolph, entering the stable and shaking off the snow from his antlers. “Well,” Dunder replied, “I’ve just finished coding the new reindeer intranet Santa Claus asked me to do. 
You know how he likes to appear to be at the cutting edge, talking incessantly about Web 2.0, AJAX, rounded corners; he even spooked Comet recently by talking about him as if he were some pushy web server. “I’ve managed to keep him happy, whilst also keeping it usable, accessible, and gleaming — and I’m still on the back row of the sleigh! But anyway, given the elves will be the ones using the site, and they come from all over the world, the site is in multiple languages. Which is great, except when it comes to the preview JavaScript I’ve written for the reindeer order form. Here, have a look…” As he said that, he brought up the textileRef:8234272265470b85d91702:linkStartMarker:“order form in French”:/examples/javascript-internationalisation/initial.fr.html on the screen. (Same in English). “Looks good,” said Rudolph. “But if I add some items,” said Dunder, “the preview appears in English, as it’s hard-coded in the JavaScript. I don’t want separate code for each language, as that’s just silly — I thought about just having if statements, but that doesn’t scale at all…” “And there’s more, you aren’t displaying large numbers in French properly, either,” added Rudolph, who had been playing and looking at part of the source code: function update_text() { var hay = getValue('hay'); var carrots = getValue('carrots'); var bells = getValue('bells'); var total = 50 * bells + 30 * hay + 10 * carrots; var out = 'You are ordering ' + pretty_num(hay) + ' bushel' + pluralise(hay) + ' of hay, ' + pretty_num(carrots) + ' carrot' + pluralise(carrots) + ', and ' + pretty_num(bells) + ' shiny bell' + pluralise(bells) + ', at a total cost of ' + pretty_num(total) + ' gold pieces. Thank you.'; document.getElementById('preview').innerHTML = out; } function pretty_num(n) { n += ''; var o = ''; for (i=n.length; i>3; i-=3) { o = ',' + n.slice(i-3, i) + o; } o = n.slice(0, i) + o; return o; } function pluralise(n) { if (n!=1) return 's'; return ''; } “Oh, botheration!” cried Dunder. “This is just so complicated.” “It doesn’t have to be,” said Rudolph, “you just have to think about things in a slightly different way from what you’re used to. As we’re only a simple example, we won’t be able to cover all possibilities, but for starters, we need some way of providing different information to the script dependent on the language. We’ll create a global i18n object, say, and fill it with the correct language information. The first variable we’ll need will be a thousands separator, and then we can change the pretty_num function to use that instead: function pretty_num(n) { n += ''; var o = ''; for (i=n.length; i>3; i-=3) { o = i18n.thousands_sep + n.slice(i-3, i) + o; } o = n.slice(0, i) + o; return o; } “The i18n object will also contain our translations, which we will access through a function called _() — that’s just an underscore. Other languages have a function of the same name doing the same thing. It’s very simple: function _(s) { if (typeof(i18n)!='undefined' && i18n[s]) { return i18n[s]; } return s; } “So if a translation is available and provided, we’ll use that; otherwise we’ll default to the string provided — which is helpful if the translation begins to lag behind the site’s text at all, as at least something will be output.” “Got it,” said Dunder. “ _('Hello Dunder') will print the translation of that string, if one exists, ‘Hello Dunder’ if not.” “Exactly. Moving on, your plural function breaks even in English if we have a word where the plural doesn’t add an s — like ‘children’.” “You’re right,” said Dunder. 
“How did I miss that?” “No harm done. Better to provide both singular and plural words to the function and let it decide which to use, performing any translation as well: function pluralise(s, p, n) { if (n != 1) return _(p); return _(s); } “We’d have to provide different functions for different languages as we employed more elves and got more complicated — for example, in Polish, the word ‘file’ pluralises like this: 1 plik, 2-4 pliki, 5-21 plików, 22-24 pliki, 25-31 plików, and so on.” (More information on plural forms) “Gosh!” “Next, as different languages have different word orders, we must stop using concatenation to construct sentences, as it would be impossible for other languages to fit in; we have to keep coherent strings together. Let’s rewrite your update function, and then go through it: function update_text() { var hay = getValue('hay'); var carrots = getValue('carrots'); var bells = getValue('bells'); var total = 50 * bells + 30 * hay + 10 * carrots; hay = sprintf(pluralise('%s bushel of hay', '%s bushels of hay', hay), pretty_num(hay)); carrots = sprintf(pluralise('%s carrot', '%s carrots', carrots), pretty_num(carrots)); bells = sprintf(pluralise('%s shiny bell', '%s shiny bells', bells), pretty_num(bells)); var list = sprintf(_('%s, %s, and %s'), hay, carrots, bells); var out = sprintf(_('You are ordering %s, at a total cost of %s gold pieces.'), list, pretty_num(total)); out += ' '; out += _('Thank you.'); document.getElementById('preview').innerHTML = out; } “ sprintf is a function in many other languages that, given a format string and some variables, slots the variables into place within the string. JavaScript doesn’t have such a function, so we’ll write our own. Again, keep it simple for now, only integers and strings; I’m sure more complete ones can be found on the internet. function sprintf(s) { var bits = s.split('%'); var out = bits[0]; var re = /^([ds])(.*)$/; for (var i=1; i%s
    gold pieces."": '', ""Thank you."": '' }; “If you implement this across the intranet, you’ll want to investigate the xgettext program, which can automatically extract all strings that need translating from all sorts of code files into a standard .po file (I think Python mode works best for JavaScript). You can then use a different program to take the translated .po file and automatically create the language-specific JavaScript files for us.” (e.g. German .po file for PledgeBank, mySociety’s .po-.js script, example output) With a flourish, Rudolph finished editing. “And there we go, localised JavaScript in English, French, or German, all using the same main code.” “Thanks so much, Rudolph!” said Dunder. “I’m not just a pretty nose!” Rudolph quipped. “Oh, and one last thing — please comment liberally explaining the context of strings you use. Your translator will thank you, probably at the same time as they point out the four hundred places you’ve done something in code that only works in your language and no-one else’s…” Thanks to Tim Morley and Edmund Grimley Evans for the French and German translations respectively.",2007,Matthew Somerville,matthewsomerville,2007-12-08T00:00:00+00:00,https://24ways.org/2007/javascript-internationalisation/,code 154,Diagnostic Styling,"We’re all used to using CSS to make our designs live and breathe, but there’s another way to use CSS: to find out where our markup might be choking on missing accessibility features, targetless links, and just plain missing content. Note: the techniques discussed here mostly work in Firefox, Safari, and Opera, but not Internet Explorer. I’ll explain why that’s not really a problem near the end of the article — and no, the reason is not “everyone should just ignore IE anyway”. Basic Diagnostics To pick a simple example, suppose you want to call out all holdover font and center elements in a site. Simple: you just add the following to your styles. font, center {outline: 5px solid red;} You could take it further and add in a nice lime background or some such, but big thick red outlines should suffice. Now you’ll be able to see the offenders wherever as you move through the site. (Of course, if you do this on your public server, everyone else will see the outlines too. So this is probably best done on a development server or local copy of the site.) Not everyone may be familiar with outlines, which were introduced in CSS2, so a word on those before we move on. Outlines are much like borders, except outlines don’t affect layout. Eh? Here’s a comparison. On the left, you have a border. On the right, an outline. The border takes up layout space, pushing other content around and generally being a nuisance. The outline, on the other hand, just draws into quietly into place. In most current browsers, it will overdraw any content already onscreen, and will be overdrawn by any content placed later — which is why it overlaps the images above it, and is overlapped by those below it. Okay, so we can outline deprecated elements like font and center. Is that all? Oh no. Attribution Let’s suppose you also want to find any instances of inline style — that is, use of the style attribute on elements in the markup. This is generally discouraged (outside of HTML e-mails, which I’m not going to get anywhere near), as it’s just another side of the same coin of using font: baking the presentation into the document structure instead of putting it somewhere more manageable. 
So: *[style], font, center {outline: 5px solid red;} Adding that attribute selector to the rule’s grouped selector means that we’ll now be outlining any element with a style attribute. There’s a lot more that attribute selectors will let use diagnose. For example, we can highlight any images that have empty alt or title text. img[alt=""""] {border: 3px dotted red;} img[title=""""] {outline: 3px dotted fuchsia;} Now, you may wonder why one of these rules calls for a border, and the other for an outline. That’s because I want them to “add together” — that is, if I have an image which possesses both alt and title, and the values of both are empty, then I want it to be doubly marked. See how the middle image there has both red and fuchsia dots running around it? (And am I the only one who sorely misses the actual circular dots drawn by IE5/Mac?) That’s due to its markup, which we can see here in a fragment showing the whole table row. empty title Right, that’s all well and good, but it misses a rather more serious situation: the selector img[alt=""""] won’t match an img element that doesn’t even have an alt attribute. How to tackle this problem? Not a Problem Well, if you want to select something based on a negative, you need a negative selector. img:not([alt]) {border: 5px solid red;} This is really quite a break from the rest of CSS selection, which is all positive: “select anything that has these characteristics”. With :not(), we have the ability to say (in supporting browsers) “select anything that hasn’t these characteristics”. In the above example, only img elements that do not have an alt attribute will be selected. So we expand our list of image-related rules to read: img[alt=""""] {border: 3px dotted red;} img[title=""""] {outline: 3px dotted fuchsia;} img:not([alt]) {border: 5px solid red;} img:not([title]) {outline: 5px solid fuchsia;} With the following results: We could expand this general idea to pick up tables who lack a summary, or have an empty summary attribute. table[summary=""""] {outline: 3px dotted red;} table:not([summary]) {outline: 5px solid red;} When it comes to selecting header cells that lack the proper scope, however, we have a trickier situation. Finding headers with no scope attribute is easy enough, but what about those that have a scope attribute with an incorrect value? In this case, we actually need to pull an on-off maneuver. This has us setting all th elements to have a highlight style, and then turn it off for the elements that meet our criteria. th {border: 2px solid red;} th[scope=""col""], th[scope=""row""] {border: none;} This was necessary because of the way CSS selectors work. For example, consider this: th:not([scope=""col""]), th:not([scope=""row""]) {border: 2px solid red;} That would select…all th elements, regardless of their attrributes. That’s because every th element doesn’t have a scope of col, doesn’t have a scope of row, or doesn’t have either. There’s no escaping this selector o’ doom! This limitation arises because :not() is limited to containing a single “thing” within its parentheses. You can’t, for example, say “select all elements except those that are images which descend from list items”. Reportedly, this limitation was imposed to make browser implementation of :not() easier. Still, we can make good use of :not() in the service of further diagnosing. 
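A quick aside that isn't in the article itself: browsers with fuller Selectors support also let you chain several :not() pseudo-classes onto one compound selector (each still containing a single simple selector), which sidesteps the grouped-selector trap without the on-off maneuver. Support for this was less dependable at the time, so treat it as an alternative rather than a replacement for the approach above:

    /* matches th elements whose scope is neither col nor row,
       including those that have no scope attribute at all */
    th:not([scope="col"]):not([scope="row"]) {border: 2px solid red;}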
Calling out links in trouble is a breeze: a[href]:not([title]) {border: 5px solid red;} a[title=""""] {outline: 3px dotted red;} a[href=""#""] {background: lime;} a[href=""""] {background: fuchsia;} Here we have a set that will call our attention to links missing title information, as well as links that have no valid target, whether through a missing URL or a JavaScript-driven page where there are no link fallbacks in the case of missing or disabled JavaScript (href=""#""). And What About IE? As I said at the beginning, much of what I covered here doesn’t work in Internet Explorer, most particularly :not() and outline. (Oh, so basically everything? -Ed.) I can’t do much about the latter. For the former, however, it’s possible to hack your way around the problem by doing some layered on-off stuff. For example, for images, you replace the previously-shown rules with the following: img {border: 5px solid red;} img[alt][title] {border-width: 0;} img[alt] {border-color: fuchsia;} img[alt], img[title] {border-style: double;} img[alt=""""][title], img[alt][title=""""] {border-width: 3px;} img[alt=""""][title=""""] {border-style: dotted;} It won’t have exactly the same set of effects, given the inability to use both borders and outlines, but will still highlight troublesome images. It’s also the case that IE6 and earlier lack support for even attribute selectors, whereas IE7 added pretty much all the attribute selector types there are, so the previous code block won’t have any effect previous to IE7. In a broader sense, though, these kinds of styles probably aren’t going to be used in the wild, as it were. Diagnostic styles are something only you see as you work on a site, so you can make sure to use a browser that supports outlines and :not() when you’re diagnosing. The fact that IE users won’t see these styles is irrelevant since users of any browser probably won’t be seeing these styles. Personally, I always develop in Firefox anyway, thanks to its ability to become a full-featured IDE through the addition of extensions like Firebug and the Web Developer Toolbar. Yeah, About That… It’s true that much of what I describe in this article is available in the WDT. I feel there are two advantages to writing your own set of diagnostic styles. When you write your own styles, you can define exactly what the visual results will be, and how they will interact. The WDT doesn’t let you make its outlines thicker or change their colors. You can combine a bunch of diagnostics into a single set of rules and add it to your site’s style sheet during the diagnostic portion, thus ensuring they persist as you surf around. This can be done in the WDT, but it isn’t as easy (and, at least for me, not as reliable). It’s also true that a markup validator will catch many of the errors I mentioned, such as missing alt and summary attributes. For some, that’s sufficient. But it won’t catch everything diagnostic styles can, like empty alt values or untargeted links, which are perfectly valid, syntactically speaking. Diagnosis Complete? I hope this has been a fun look at the concept of diagnostic styling as well as a quick introduction into possibly new concepts like :not() and outlines. This isn’t all there is to say, of course: there is plenty more that could be added to a diagnostic style sheet. And everyone’s diagnostics will be different, tuned to meet each person’s unique situation. 
Mostly, though, I hope this small exploration triggers some creative thinking about the use of CSS to do more than just lay out pages and colorize text. Given the familiarity we acquire with CSS, it only makes sense to use it wherever it might be useful, and setting up visible diagnostic flags is just one more place for it to help us.",2007,Eric Meyer,ericmeyer,2007-12-20T00:00:00+00:00,https://24ways.org/2007/diagnostic-styling/,process 156,Mobile 2.0,"Thinking 2.0 As web geeks, we have a thick skin towards jargon. We all know that “Web 2.0” has been done to death. At Blue Flavor we even have a jargon bucket to penalize those who utter such painfully overused jargon with a cash deposit. But Web 2.0 is a term that has lodged itself into the conscience of the masses. This is actually a good thing. The 2.0 suffix was able to succinctly summarize all that was wrong with the Web during the dot-com era as well as the next evolution of an evolving media. While the core technologies actually stayed basically the same, the principles, concepts, interactions and contexts were radically different. With that in mind, this Christmas I want to introduce to you the concept of Mobile 2.0. While not exactly a new concept in the mobile community, it is relatively unknown in the web community. And since the foundation of Mobile 2.0 is the web, I figured it was about time for you to get to know each other. It’s the Carriers’ world. We just live in it. Before getting into Mobile 2.0, I thought first I should introduce you to its older brother. You know the kind, the kid with emotional problems that likes to beat up on you and your friends for absolutely no reason. That is the mobile of today. The mobile ecosystem is a very complicated space often and incorrectly compared to the Web. If the Web was a freewheeling hippie — believing in freedom of information and the unity of man through communities — then Mobile is the cutthroat capitalist — out to pillage and plunder for the sake of the almighty dollar. Where the Web is relatively easy to publish to and ultimately make a buck, Mobile is wrought with layers of complexity, politics and obstacles. I can think of no better way to summarize these challenges than the testimony of Jason Devitt to the United States Congress in what is now being referred to as the “iPhone Hearing.” Jason is the co-founder and CEO of SkyDeck a new wireless startup and former CEO of Vindigo an early pioneer in mobile content. As Jason points out, the mobile ecosystem is a closed door environment controlled by the carriers, forcing the independent publisher to compete or succumb to the will of corporate behemoths. But that is all about to change. Introducing Mobile 2.0 Mobile 2.0 is term used by the mobile community to describe the current revolution happening in mobile. It describes the convergence of mobile and web services, adding portability, ubiquitous connectivity and location-aware services to add physical context to information found on the Web. It’s an important term that looks toward the future. Allowing us to imagine the possibilities that mobile technology has long promised but has yet to deliver. It imagines a world where developers can publish mobile content without the current constraints of the mobile ecosystem. Like the transition from Web 1.0 to 2.0, it signifies the shift away from corporate or brand-centered experiences to user-centered experiences. A focus on richer interactions, driven by user goals. 
Moving away from proprietary technologies to more open and standard ones, more akin to the Web. And most importantly (from our perspective as web geeks) a shift away from kludgy one-off mobile applications toward using the Web as a platform for content and services. This means the world of the Web and the world of Mobile are coming together faster than you can say ARPU (Average Revenue Per User, a staple mobile term to you webbies). And this couldn’t come at a better time. The importance of understanding and addressing user context is quickly becoming a crucial consideration to every interactive experience as the number of ways we access information on the Web increases. Mobile enables the power of the Web, the collective information of millions of people, inherit payment channels and access to just about every other mass media to literally be overlaid on top of the physical world, in context to the person viewing it. Anyone who can’t imagine how the influence of mobile technology can’t transform how we perform even the simplest of daily tasks needs to get away from their desktop and see the new evolution of information. The Instigators But what will make Mobile 2.0 move from idillic concept to a hardened market reality in 2008 will be four key technologies. Its my guess that you know each them already. 1. Opera Opera is like the little train that could. They have been a driving force on moving the Web as we know it on to mobile handsets. Opera technology has proven itself to be highly adaptable, finding itself preloaded on over 40 million handsets, available on televisions sets through Nintendo Wii or via the Nintendo DS. 2. WebKit Many were surprised when Apple chose to use KHTML instead of Gecko (the guts of Firefox) to power their Safari rendering engine. But WebKit has quickly evolved to be a powerful and flexible browser in the mobile context. WebKit has been in Nokia smartphones for a few years now, is the technology behind Mobile Safari in the iPhone and the iPod Touch and is the default web technology in Google’s open mobile platform effort, Android. 3. The iPhone The iPhone has finally brought the concepts and principles of Mobile 2.0 into the forefront of consumers minds and therefore developers’ minds as well. Over 500 web applications have been written specifically for the iPhone since its launch. It’s completely unheard of to see so many applications built for the mobile context in such a short period of time. 4. CSS & Javascript Web 2.0 could not exist without the rich interactions offered by CSS and Javascript, and Mobile 2.0 is no different. CSS and Javascript support across multiple phones historically has been, well… to put it positively… utter crap. Javascript finally allows developers to create interesting interactions that support user goals and the mobile context. Specially, AJAX allows us to finally shed the days of bloated Java applications and focus on portable and flexible web applications. While CSS — namely CSS3 — allows us to create designs that are as beautiful as they are economical with bandwidth and load times. With Leaflets, a collection of iPhone optimized web apps we created, we heavily relied on CSS3 to cache and reuse design elements over and over, minimizing download times while providing an elegant and user-centered design. In Conclusion It is the combination of all these instigators that is significantly decreasing the bar to mobile publishing. The market as Jason Devitt describes it, will begin to fade into the background. 
And maybe the world of mobile will finally start looking more like the Web that we all know and love. So after the merriment and celebration of the holiday is over and you look toward the new year to refresh and renew, I hope that you take a seriously consider the mobile medium. By this time next year, it is predicted that one-third of humanity will be using mobile devices to access the Web.",2007,Brian Fling,brianfling,2007-12-21T00:00:00+00:00,https://24ways.org/2007/mobile-2-0/,business 162,Conditional Love,"“Browser.” The four-letter word of web design. I mean, let’s face it: on the good days, when things just work in your target browsers, it’s marvelous. The air smells sweeter, birds’ songs sound more melodious, and both your design and your code are looking sharp. But on the less-than-good days (which is, frankly, most of them), you’re compelled to tie up all your browsers in a sack, heave them into the nearest river, and start designing all-imagemap websites. We all play favorites, after all: some will swear by Firefox, Opera fans are allegedly legion, and others still will frown upon anything less than the latest WebKit nightly. Thankfully, we do have an out for those little inconsistencies that crop up when dealing with cross-browser testing: CSS patches. Spare the Rod, Hack the Browser Before committing browsercide over some rendering bug, a designer will typically reach for a snippet of CSS fix the faulty browser. Historically referred to as “hacks,” I prefer Dan Cederholm’s more client-friendly alternative, “patches”. But whatever you call them, CSS patches all work along the same principle: supply the proper property value to the good browsers, while giving higher maintenance other browsers an incorrect value that their frustrating idiosyncratic rendering engine can understand. Traditionally, this has been done either by exploiting incomplete CSS support: #content { height: 1%; // Let's force hasLayout for old versions of IE. line-height: 1.6; padding: 1em; } html>body #content { height: auto; // Modern browsers get a proper height value. } or by exploiting bugs in their rendering engine to deliver alternate style rules: #content p { font-size: .8em; /* Hide from Mac IE5 \*/ font-size: .9em; /* End hiding from Mac IE5 */ } We’ve even used these exploits to serve up whole stylesheets altogether: @import url(""core.css""); @media tty { i{content:""\"";/*"" ""*/}} @import 'windows-ie5.css'; /*"";} }/* */ The list goes on, and on, and on. For every browser, for every bug, there’s a patch available to fix some rendering bug. But after some time working with standards-based layouts, I’ve found that CSS patches, as we’ve traditionally used them, become increasingly difficult to maintain. As stylesheets are modified over the course of a site’s lifetime, inline fixes we’ve written may become obsolete, making them difficult to find, update, or prune out of our CSS. A good patch requires a constant gardener to ensure that it adds more than just bloat to a stylesheet, and inline patches can be very hard to weed out of a decently sized CSS file. Giving the Kids Separate Rooms Since I joined Airbag Industries earlier this year, every project we’ve worked on has this in the head of its templates: The first element is, simply enough, a link element that points to the project’s main CSS file. No patches, no hacks: just pure, modern browser-friendly style rules. Which, nine times out of ten, will net you a design that looks like spilled eggnog in various versions of Internet Explorer. 
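The markup itself has been stripped from this copy of the article, but the head block being described would have looked roughly like this, with placeholder filenames standing in for the real ones:

    <link rel="stylesheet" type="text/css" href="core.css" media="screen" />

    <!--[if lte IE 6]>
        <link rel="stylesheet" type="text/css" href="ie.css" media="screen" />
    <![endif]-->

    <!--[if gte IE 7]>
        <link rel="stylesheet" type="text/css" href="ie7.css" media="screen" />
    <![endif]-->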
But don’t reach for the mulled wine quite yet. Immediately after, we’ve got a brace of conditional comments wrapped around two other link elements. These odd-looking comments allow us to selectively serve up additional stylesheets just to the version of IE that needs them. We’ve got one for IE 6 and below: And another for IE7 and above: Microsoft’s conditional comments aren’t exactly new, but they can be a valuable alternative to cooking CSS patches directly into a master stylesheet. And though they’re not a W3C-approved markup structure, I think they’re just brilliant because they innovate within the spec: non-IE devices will assume that the comments are just that, and ignore the markup altogether. This does, of course, mean that there’s a little extra markup in the head of our documents. But this approach can seriously cut down on the unnecessary patches served up to the browsers that don’t need them. Namely, we no longer have to write rules like this in our main stylesheet: #content { height: 1%; // Let's force hasLayout for old versions of IE. line-height: 1.6; padding: 1em; } html>body #content { height: auto; // Modern browsers get a proper height value. } Rather, we can simply write an un-patched rule in our core stylesheet: #content { line-height: 1.6; padding: 1em; } And now, our patch for older versions of IE goes in—you guessed it—the stylesheet for older versions of IE: #content { height: 1%; } The hasLayout patch is applied, our design’s repaired, and—most importantly—the patch is only seen by the browser that needs it. The “good” browsers don’t have to incur any added stylesheet weight from our IE patches, and Internet Explorer gets the conditional love it deserves. Most importantly, this “compartmentalized” approach to CSS patching makes it much easier for me to patch and maintain the fixes applied to a particular browser. If I need to track down a bug for IE7, I don’t need to scroll through dozens or hundreds of rules in my core stylesheet: instead, I just open the considerably slimmer IE7-specific patch file, make my edits, and move right along. Even Good Children Misbehave While IE may occupy the bulk of our debugging time, there’s no denying that other popular, modern browsers will occasionally disagree on how certain bits of CSS should be rendered. But without something as, well, pimp as conditional comments at our disposal, how do we bring the so-called “good browsers” back in line with our design? Assuming you’re loving the “one patch file per browser” model as much as I do, there’s just one alternative: JavaScript. function isSaf() { var isSaf = (document.childNodes && !document.all && !navigator.taintEnabled && !navigator.accentColorName) ? true : false; return isSaf; } function isOp() { var isOp = (window.opera) ? true : false; return isOp; } Instead of relying on dotcom-era tactics of parsing the browser’s user-agent string, we’re testing here for support for various DOM objects, whose presence or absence we can use to reasonably infer the browser we’re looking at. So running the isOp() function, for example, will test for Opera’s proprietary window.opera object, and thereby accurately tell you if your user’s running Norway’s finest browser. With scripts such as isOp() and isSaf() in place, you can then reasonably test which browser’s viewing your content, and insert additional link elements as needed. 
function loadPatches(dir) { if (document.getElementsByTagName() && document.createElement()) { var head = document.getElementsByTagName(""head"")[0]; if (head) { var css = new Array(); if (isSaf()) { css.push(""saf.css""); } else if (isOp()) { css.push(""opera.css""); } if (css.length) { var link = document.createElement(""link""); link.setAttribute(""rel"", ""stylesheet""); link.setAttribute(""type"", ""text/css""); link.setAttribute(""media"", ""screen, projection""); for (var i = 0; i < css.length; i++) { var tag = link.cloneNode(true); tag.setAttribute(""href"", dir + css[0]); head.appendChild(tag); } } } } } Here, we’re testing the results of isSaf() and isOp(), one after the other. For each function that returns true, then the name of a new stylesheet is added to the oh-so-cleverly named css array. Then, for each entry in css, we create a new link element, point it at our patch file, and insert it into the head of our template. Fire it up using your favorite onload or DOMContentLoaded function, and you’re good to go. Scripteat Emptor At this point, some of the audience’s more conscientious ‘scripters may be preparing to lob figgy pudding at this author’s head. And that’s perfectly understandable; relying on JavaScript to patch CSS chafes a bit against the normally clean separation we have between our pages’ content, presentation, and behavior layers. And beyond the philosophical concerns, this approach comes with a few technical caveats attached: Browser detection? So un-133t. Browser detection is not something I’d typically recommend. Whenever possible, a proper DOM script should check for the support of a given object or method, rather than the device with which your users view your content. It’s JavaScript, so don’t count on it being available. According to one site, roughly four percent of Internet users don’t have JavaScript enabled. Your site’s stats might be higher or lower than this number, but still: don’t expect that every member of your audience will see these additional stylesheets, and ensure that your content’s still accessible with JS turned off. Be a constant gardener. The sample isSaf() and isOp() functions I’ve written will tell you if the user’s browser is Safari or Opera. As a result, stylesheets written to patch issues in an old browser may break when later releases repair the relevant CSS bugs. You can, of course, add logic to these simple little scripts to serve up version-specific stylesheets, but that way madness may lie. In any event, test your work vigorously, and keep testing it when new versions of the targeted browsers come out. Make sure that a patch written today doesn’t become a bug tomorrow. Patching Firefox, Opera, and Safari isn’t something I’ve had to do frequently: still, there have been occasions where the above script’s come in handy. Between conditional comments, careful CSS auditing, and some judicious JavaScript, browser-based bugs can be handled with near-surgical precision. So pass the ‘nog. It’s patchin’ time.",2007,Ethan Marcotte,ethanmarcotte,2007-12-15T00:00:00+00:00,https://24ways.org/2007/conditional-love/,code 166,Performance On A Shoe String,"Back in the summer, I happened to notice the official Wimbledon All England Tennis Club site had jumped to the top of Alexa’s Movers & Shakers list — a list that tracks sites that have had the biggest upturn or downturn in traffic. The lawn tennis championships were underway, and so traffic had leapt from almost nothing to crazy-busy in a no time at all. 
Many sites have similar peaks in traffic, especially when they’re based around scheduled events. No one cares about the site for most of the year, and then all of a sudden – wham! – things start getting warm in the data centre. Whilst the thought of chestnuts roasting on an open server has a certain appeal, it’s less attractive if you care about your site being available to visitors. Take a look at this Alexa traffic graph showing traffic patterns for superbowl.com at the beginning of each year, and wimbledon.org in the month of July. Traffic graph from Alexa.com Whilst not on the same scale or with such dramatic peaks, we have a similar pattern of traffic here at 24ways.org. Over the last three years we’ve seen a dramatic pick up in traffic over the month of December (as would be expected) and then a much lower, although steady load throughout the year. What we do have, however, is the luxury of knowing when the peaks will be. For a normal site, be that a blog, small scale web app, or even a small corporate site, you often just cannot predict when you might get slashdotted, end up on the front page of Digg or linked to from a similarly high-profile site. You just don’t know when the peaks will be. If you’re a big commercial enterprise like the Super Bowl, scaling up for that traffic is simply a cost of doing business. But for most of us, we can’t afford to have massive capacity sat there unused for 90% of the year. What you have to do instead is work out how to deal with as much traffic as possible with the modest resources you have. In this article I’m going to talk about some of the things we’ve learned about keeping 24 ways running throughout December, whilst not spending a fortune on hosting we don’t need for 11 months of each year. We’ve not always got it right, but we’ve learned a lot along the way. The Problem To know how to deal with high traffic, you need to have a basic idea of what happens when a request comes into a web server. 24 ways is hosted on a single small virtual dedicated server with a great little hosting company in the UK. We run Apache with PHP and MySQL all on that one server. When a request comes in a new Apache process is started to deal with the request (or assigned if there’s one available not doing anything). Each process takes a bunch of memory, so there’s a finite number of processes that you can run, and therefore a finite number of pages you can serve at once before your server runs out of memory. With our budget based on whatever is left over after beer, we need to get best performance we can out of the resources available. As the goal is to serve as many pages as quickly as possible, there are several approaches we can take: Reducing the amount of memory needed by each Apache process Reducing the amount of time each process is needed Reducing the number of requests made to the server Yahoo! have published some information on what they call Exceptional Performance, which is well worth reading, and compliments many of my examples here. The Yahoo! guidelines very much look at things from a user perspective, which is always important. Server tweaking If you’re in the position of being able to change your server configuration (our set-up gives us root access to what is effectively a virtual machine) there are some basic steps you can take to maximise the available memory and reduce the memory footprint. Without getting too boring and technical (whole books have been written on this) there are a couple of things to watch out for. 
Firstly, check what processes you have running that you might not need. Every megabyte of memory that you free up might equate to several thousand extra requests being served each day, so take a look at top and see what’s using up your resources. Quite often a machine configured as a web server will have some kind of mail server running by default. If your site doesn’t use mail (ours doesn’t) make sure it’s shut down and not using resources. Secondly, have a look at your Apache configuration and particularly what modules are loaded. The method for doing this varies between versions of Apache, but again, every module loaded increases the amount of memory that each Apache process requires and therefore limits the number of simultaneous requests you can deal with. The final thing to check is that Apache isn’t configured to start more servers than you have memory for. This is usually done by setting the MaxClients directive. When that limit is reached, your site is going to stop responding to further requests. However, if all else goes well that threshold won’t be reached, and if it does it will at least stop the weight of the traffic taking the entire server down to a point where you can’t even log in to sort it out. Those are the main tidbits I’ve found useful for this site, although it’s worth repeating that entire books have been written on this subject alone. Caching Although the site is generated with PHP and MySQL, the majority of pages served don’t come from the database. The process of compiling a page on-the-fly involves quite a few trips to the database for content, templates, configuration settings and so on, and so can be slow and require a lot of CPU. Unless a new article or comment is published, the site doesn’t actually change between requests and so it makes sense to generate each page once, save it to a file and then just serve all following requests from that file. We use QuickCache (or rather a plugin based on it) for this. The plugin integrates with our publishing system (Textpattern) to make sure the cache is cleared when something on the site changes. A similar plugin called WP-Cache is available for WordPress, but of course this could be done any number of ways, and with any back-end technology. The important principal here is to reduce the time it takes to serve a page by compiling the page once and serving that cached result to subsequent visitors. Keep away from your database if you can. Outsource your feeds We get around 36,000 requests for our feed each day. That really only works out at about 7,000 subscribers averaging five-and-a-bit requests a day, but it’s still 36,000 requests we could easily do without. Each request uses resources and particularly during December, all those requests can add up. The simple solution here was to switch our feed over to using FeedBurner. We publish the address of the FeedBurner version of our feed here, so those 36,000 requests a day hit FeedBurner’s servers rather than ours. In addition, we get pretty graphs showing how the subscriber-base is building. Off-load big files Larger files like images or downloads pose a problem not in bandwidth, but in the time it takes them to transfer. A typical page request is very quick, a few seconds at the most, resulting in the connection being freed up promptly. Anything that keeps a connection open for a long time is going to start killing performance very quickly. This year, we started serving most of the images for articles from a subdomain – media.24ways.org. 
Rather than pointing to our own server, this subdomain points to an Amazon S3 account where the files are held. It’s easy to pigeon-hole S3 as merely an online backup solution, and whilst not a fully fledged CDN, S3 works very nicely for serving larger media files. The roughly 20GB of files served this month have cost around $5 in Amazon S3 charges. That’s so affordable it may not be worth even taking the files back off S3 once December has passed. I found this article on Scalable Media Hosting with Amazon S3 to be really useful in getting started. I upload the files via a Firefox plugin (mentioned in the article) and then edit the ACL to allow public access to the files. The way S3 enables you to point DNS directly at it means that you’re not tied to always using the service, and that it can be transparent to your users. If your site uses photographs, consider uploading them to a service like Flickr and serving them directly from there. Many photo sharing sites are happy for you to link to images in this way, but do check the acceptable use policies in case you need to provide a credit or link back. Off-load small files You’ll have noticed the pattern by now – get rid of as much traffic as possible. When an article has a lot of comments and each of those comments has an avatar along with it, a great many requests are needed to fetch each of those images. In 2006 we started using Gravatar for avatars, but their servers were slow and were holding up page loads. To get around this we started caching the images on our server, but along with that came the burden of furnishing all the image requests. Earlier this year Gravatar changed hands and is now run by the same team behind WordPress.com. Those guys clearly know what they’re doing when it comes to high performance, so this year we went back to serving avatars directly from them. If your site uses avatars, it really makes sense to use a service like Gravatar where your users probably already have an account, and where the image requests are going to be dealt with for you. Know what you’re paying for The server account we use for 24 ways was opened in November 2005. When we first hit the front page of Digg in December of that year, we upgraded the server with a bit more memory, but other than that we were still running on that 2005 spec for two years. Of course, the world of technology has moved on in those years, prices have dropped and specs have improved. For the same amount we were paying for that 2005 spec server, we could have an account with twice the CPU, memory and disk space. So in November of this year I took out a new account and transferred the site from the old server to the new. In that single step we were prepared for dealing with twice the amount of traffic, and because of a special offer at the time I didn’t even have to pay the setup cost on the new server. So it really pays to know what you’re paying for and keep an eye out of ways you can make improvements without needing to spend more money. Further steps There’s nearly always more that can be done. For example, there are some media files (particularly for older articles) that are not on S3. We also serve our CSS directly and it’s not minified or compressed. But by tackling the big problems first we’ve managed to reduce load on the server and at the same time make sure that the load being placed on the server can be dealt with in the most frugal way. Over the last 24 days we’ve served up articles to more than 350,000 visitors without breaking a sweat. 
On a busy day, that’s been nearly 20,000 visitors in just 24 hours. While in the grand scheme of things that’s not a huge amount of traffic, it can be a lot if you’re not prepared for it. However, with a little planning for the peaks you can help ensure that when the traffic arrives you’re ready to capitalise on it. Of course, people only visit 24 ways for the wealth of knowledge and experience that’s tied up in the articles here. Therefore I’d like to take the opportunity to thank all our authors this year who have given their time as a gift to the community, and to wish you all a very happy Christmas.",2007,Drew McLellan,drewmclellan,2007-12-24T00:00:00+00:00,https://24ways.org/2007/performance-on-a-shoe-string/,ux 167,Back To The Future of Print,"By now we have weathered the storm that was the early days of web development, a dangerous time when we used tables, inline CSS and separate print-only pages. We can reflect in a haggard old sea-dog manner (“yarrr… I remember back in the browser wars…”) on the bad practices of the time. We no longer need convincing that print stylesheets are the way to go1, though some of the documentation for them is a little outdated now. I am going to briefly cover 8 tips and 4 main gotchas when creating print stylesheets in our more enlightened era. Getting started As with regular stylesheets, print CSS can be included in a number of ways2; for our purposes we are going to use the link element. This is still my favourite way of linking to CSS files: it’s easy to see which files are being included and which media they apply to. Without the media attribute specified, the link element defaults to the media type ‘all’, which means the styles within it apply to print and screen alike. The media type ‘screen’ applies only to the screen and won’t be picked up by print — this is the best way of hiding styles from print. Make sure you include your print styles after all your other CSS, because you will need to override certain rules and this is a lot easier if you are flowing with the cascade than against it! Another thing you should be asking is ‘does it need to be printed?’. Consider the context3: if it is not a page that is likely to be printed, such as a landing page or a section index, then the print styles should simply resemble the way the page looks on screen. Context is really important for the design of your print stylesheet; all the tips and tricks that follow should be taken in the context of the page. If, for example, you are designing a print stylesheet for an item in a shopping cart, it is irrelevant for the user to know the exact URL of the link that takes them to your checkout. Tips and tricks Throughout these tips we are going to build up print styles for a simple example (/examples/back-to-the-future-of-print/demo-1.html). 1. Remove the cruft First things first: navigation, headers and most page furniture are pretty much useless, dead space in print, so they will need to be removed using display:none;. 2. Linearise your content Content doesn’t work so well in columns in print, especially if the columns are long and intended to run over multiple pages (as mentioned in the gotcha section below). You might want to consider linearising the content to flow down the page. If you have your source order in correct priority this will make things a lot easier4. 3.
Improve your type Once you have removed all the useless cruft and jiggled things about a bit, you can concentrate more on the typography of the page. Typography is a complex topic5, but simply put: serif fonts such as Georgia work better in print, while sans-serif fonts such as Verdana are more appropriate for screen use. You will probably want to increase the font size and line height, and change from px to pt (which is specifically a print measurement). 4. Go wild on links There are some incredibly fun things you can do with links in print using CSS. There are two schools of thought: one considers it best to disguise inline links as body text, because they are not clickable on paper; personally, I believe it is useful to know for reference that the document did link to somewhere originally. When deciding which approach to take, consider the context of your document: do people need to know where they would have gone? Will it help or hinder them to know this information? For an alternative to the approach below, take a look at Aaron Gustafson’s article on generating footnotes for print6. Using some clever selector trickery and CSS generated content, you can have the location of the link generated after the link itself. HTML:
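The markup sample itself hasn’t survived the formatting of this copy, but a minimal sketch of the kind of link being styled might look like this (the href is purely illustrative), giving the sentence you see below:

<p>I wish <a href="http://www.google.com/search?q=where+are+my+keys">Google</a> could find my keys</p>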

    I wish Google could find my keys

    CSS: a:link:after, a:visited:after, a:hover:after, a:active:after { content: "" <"" attr(href) ""> ""; } But this is not perfect, in the above example the content of the href is just naively plonked after the link text: I wish Google would find my keys As looking back over this printout the user is not immediately aware of the location of the link, a better solution is to use even more crazy selectors to deal with relative links. We can also add a style to the generated content so it is distinguishable from the link text itself. CSS: a:link:after, a:visited:after, a:hover:after, a:active:after { content: "" <"" attr(href) ""> ""; color: grey; font-style: italic; font-weight: normal; } a[href^=""/""]:after { content: "" ""; } The output is now what we were looking for (you will need to replace example.com with your own root URL): I wish Google would find my keys Using regular expressions on the attribute selectors, one final thing you can do is to suppress the generated content on mailto: links, if for example you know the link text always reflects the email address. Eg: HTML: me@example.com CSS: a[href^=""mailto""]:after { content: """"; } This example shows the above in action. Of course with this clever technique, there are the usual browser support issues. While it won’t look as intended in browsers such as Internet Explorer 6 and 7 (IE6 and IE7) it will not break either and will just degrade gracefully because IE cannot do generated content. To the best of my knowledge Safari 2+ and Opera 9.X support a colour set on generated content whereas Firefox 2 & Camino display this in black regardless of the link or inherited text colour. 5. Jazz your headers for print This is more of a design consideration, don’t go too nuts though; there are a lot more limitations in print media than on screen. For this example we are going to go for is having a bottom border on h2’s and styling other headings with graduating colors or font sizes. And here is the example complete with jazzy headers. 6. Build in general hooks If you are building a large site with many different types of page, you may find it useful to build into your CSS structure, classes that control what is printed (e.g. noprint and printonly). This may not be semantically ideal, but in the past I have found it really useful for maintainability of code across large and varied sites 7. For that extra touch of class When printing pages from a long URL, even if the option is turned on to show the location of the page in the header, browsers may still display a truncated (and thus useless) version. Using the above tip (or just simply setting to display:none in screen and display:block in print) you can insert the URL of the page you are currently on for print only, using JavaScript’s window.location.href variable. function addPrintFooter() { var p = document.createElement('p'); p.className = 'print-footer'; p.innerHTML = window.location.href; document.body.appendChild(p); } You can then call this function using whichever onload or ondomready handler suits your fancy. Here is our familiar demo to show all the above in action 8. Tabular data across pages If you are using tabled data in your document there are a number of things you can do to increase usability of long tables over several pages. If you use the element this should repeat your table headers on the next page should your table be split. You will need to set thead {display: table-header-group;} explicitly for IE even though this should be the default value. 
Also if you use tr {page-break-inside: avoid;} this should (browser support depending) stop your table row from breaking across two pages. For more information on styling tables for print please see the CSS discuss wiki7. Gotchas 1. Where did all my content go? Absolutely the most common mistake I see with print styles is the truncated content bug. The symptom of this is that only the first page of a div’s content will be printed, the rest will look truncated after this. Floating long columns may still have this affect, as mentioned in Eric Meyer’s article on ‘A List Apart’ article from 20028; though in testing I am no longer able to replicate this. Using overflow:hidden on long content in Firefox however still causes this truncation. Overflow hidden is commonly used to clear floats9. A simple fix can be applied to resolve this, if the column is floated you can override this with float:none similarly overflow:hidden can be overridden with overflow:visible or the offending rules can be banished to a screen only stylesheet. Using position:absolute on long columns also has a very similar effect in truncating the content, but can be overridden in print with position:static; Whilst I only recommend having a print stylesheet for content pages on your site; do at least check other landing pages, section indexes and your homepage. If these are inaccessible in print possibly due to the above gotcha, it might be wise to provide a light dusting of print styles or move the offending overflow / float rules to a screen only stylesheet to fix the issues. 2. Damn those background browser settings One of the factors of life you now need to accept is that you can’t control the user’s browser settings, no more than you can control whether or not they use IE6. Most browsers by default will not print background colours or images unless explicitly told to by the user. Naturally this causes a number of problems, for starters you will need to rethink things like branding. At this point it helps if you are doing the print styles early in the project so that you can control the logo not being a background image for example. Where colour is important to the meaning of the document, for example a status on an invoice, bear in mind that a textural representation will also need to be supplied as the user may be printing in black and white. Borders will print however regardless of setting, so assuming the user is printing in colour you can always use borders to indicate colour. Check the colour contrast of the text against white, this may need to be altered without backgrounds. You should check how your page looks with backgrounds turned on too, for consistency with the default browser settings you may want to override your background anyway. One final issue with backgrounds being off is list items. It is relatively common practice to suppress the list-style-type and replace with a background image to finely control the bullet positioning. This technique doesn’t translate to print, you will need to disable this background bullet and re-instate your trusty friend the list-style-type. 3. Using JavaScript in your CSS? … beware IE6 Internet explorer has an issue that when Javascript is used in a stylesheet it applies this to all media types even if only applied to screen. For example, if you happen to be using expressions to set a width for IE, perhaps to mimic min-width, a simple width:100% !important rule can overcome the effects the expression has on your print styles10. 4. 
De-enhance your Progressive enhancements Quite a classic “doh” moment is when you realise that, of course paper doesn’t support Javascript. If you have any dynamic elements on the page, for example a document collapsed per section, you really should have been using Progressive enhancement techniques11 and building for browsers without Javascript first, adding in the fancy stuff later. If this is the case it should be trivial to override your wizzy JS styles in your print stylesheet, to display all your content and make it accessible for print, which is by far the best method of achieving this affect. And Finally… I refer you back to the nature of the document in hand, consider the context of your site and the page. Use the tips here to help you add that extra bit of flair to your printed media. Be careful you don’t get caught out by the common gotchas, keep the design simple, test cross browser and relish in the medium of print. Further Reading 1 For more information constantly updated, please see the CSS discuss wiki on print stylesheets 2 For more information on media types and ways of including CSS please refer to the CSS discuss wiki on Media Stylesheets 3 Eric Meyer talks to ThinkVitamin about the importance of context when designing your print strategy. 4 Mark Boulton describes how he applies a newspaper like print stylesheet to an article in the Guardian website. Mark also has some persuasive arguments that print should not be left to last 5 Richard Rutter Has a fantastic resource on typography which also applies to print. 6 Aaron Gustafson has a great solution to link problem by creating footnotes at the end of the page. 7 The CSS discuss wiki has more detailed information on printing tables and detailed browser support 8 This ‘A List Apart’ article is dated May 10th 2002 though is still mostly relevant 9 Float clearing technique using ‘overflow:hidden’ 10 Autistic Cuckoo describes the interesting insight with regards to expressions specified for screen in IE6 remaining in print 11 Wikipedia has a good article on the definition of progressive enhancement 12 For a really neat trick involving a dynamically generated column to displaying and meanings (as well as somewhere for the user to write notes), try print previewing on Brian Suda’s site",2007,Natalie Downe,nataliedowne,2007-12-09T00:00:00+00:00,https://24ways.org/2007/back-to-the-future-of-print/,design 104,Sitewide Search On A Shoe String,"One of the questions I got a lot when I was building web sites for smaller businesses was if I could create a search engine for their site. Visitors should be able to search only this site and find things without the maintainer having to put “related articles” or “featured content” links on every page by hand. Back when this was all fields this wasn’t easy as you either had to write your own scraping tool, use ht://dig or a paid service from providers like Yahoo, Altavista or later on Google. In the former case you had to swallow the bitter pill of computing and indexing all your content and storing it in a database for quick access and in the latter it hurt your wallet. Times have moved on and nowadays you can have the same functionality for free using Yahoo’s “Build your own search service” – BOSS. The cool thing about BOSS is that it allows for a massive amount of hits a day and you can mash up the returned data in any format you want. Another good feature of it is that it comes with JSON-P as an output format which makes it possible to use it without any server-side component! 
Starting with a working HTML form In order to add a search to your site, you start with a simple HTML form which you can use without JavaScript. Most search engines will allow you to filter results by domain. In this case we will search “bbc.co.uk”. If you use Yahoo as your standard search, this could be:
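The form markup has been lost in this copy. A sketch of a Yahoo-based version might look like the following — the action URL and field names here are assumptions from memory, and what really matters (as explained below) are the customsearch, term and site IDs:

<form id="customsearch" action="http://search.yahoo.com/search" method="get">
  <div>
    <label for="term">Search this site:</label>
    <input type="text" name="p" id="term">
    <input type="hidden" name="vs" id="site" value="bbc.co.uk">
    <input type="submit" value="Search">
  </div>
</form>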
    The Google equivalent is:
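Again, the original markup is missing; a hedged Google equivalent could look like this (Google’s site-restriction parameter name is an assumption):

<form id="customsearch" action="http://www.google.com/search" method="get">
  <div>
    <label for="term">Search this site:</label>
    <input type="text" name="q" id="term">
    <input type="hidden" name="as_sitesearch" id="site" value="bbc.co.uk">
    <input type="submit" value="Search">
  </div>
</form>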
    In any case make sure to use the ID term for the search term and site for the site, as this is what we are going to use for the script. To make things easier, also have an ID called customsearch on the form. To use BOSS, you should get your own developer API for BOSS and replace the one in the demo code. There is click tracking on the search results to see how successful your app is, so you should make it your own. Adding the BOSS magic BOSS is a REST API, meaning you can use it in any HTTP request or in a browser by simply adding the right parameters to a URL. Say for example you want to search “bbc.co.uk” for “christmas” all you need to do is open the following URL: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&format=xml&appid=YOUR-APPLICATION-ID Try it out and click it to see the results in XML. We don’t want XML though, which is why we get rid of the format=xml parameter which gives us the same information in JSON: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&appid=YOUR-APPLICATION-ID JSON makes most sense when you can send the output to a function and immediately use it. For this to happen all you need is to add a callback parameter and the JSON will be wrapped in a function call. Say for example we want to call SITESEARCH.found() when the data was retrieved we can do it this way: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=YOUR-APPLICATION-ID You can use this immediately in a script node if you want to. The following code would display the total amount of search results for the term christmas on bbc.co.uk as an alert: However, for our example, we need to be a bit more clever with this. Enhancing the search form Here’s the script that enhances a search form to show results below it. SITESEARCH = function(){ var config = { IDs:{ searchForm:'customsearch', term:'term', site:'site' }, loading:'Loading results...', noresults:'No results found.', appID:'YOUR-APP-ID', results:20 }; var form; var out; function init(){ if(config.appID === 'YOUR-APP-ID'){ alert('Please get a real application ID!'); } else { form = document.getElementById(config.IDs.searchForm); if(form){ form.onsubmit = function(){ var site = document.getElementById(config.IDs.site).value; var term = document.getElementById(config.IDs.term).value; if(typeof site === 'string' && typeof term === 'string'){ if(typeof out !== 'undefined'){ out.parentNode.removeChild(out); } out = document.createElement('p'); out.appendChild(document.createTextNode(config.loading)); form.appendChild(out); var APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + term + '?callback=SITESEARCH.found&sites=' + site + '&count=' + config.results + '&appid=' + config.appID; var s = document.createElement('script'); s.setAttribute('src',APIurl); s.setAttribute('type','text/javascript'); document.getElementsByTagName('head')[0].appendChild(s); return false; } }; } } }; function found(o){ var list = document.createElement('ul'); var results = o.ysearchresponse.resultset_web; if(results){ var item,link,description; for(var i=0,j=results.length;i
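The rest of the script is cut off at this point in this copy. A hedged reconstruction of the missing part — the complete found function, plus the closing lines of the module — might run along these lines (the title, abstract and clickurl field names follow the BOSS web result format; how init was originally hooked up is a guess):

function found(o){
    var list = document.createElement('ul');
    var results = o.ysearchresponse.resultset_web;
    if(results){
        var item, link, description;
        for(var i = 0, j = results.length; i < j; i++){
            item = document.createElement('li');
            link = document.createElement('a');
            // clickurl keeps Yahoo's click tracking intact
            link.setAttribute('href', results[i].clickurl);
            link.innerHTML = results[i].title;
            description = document.createElement('p');
            // bracket notation: 'abstract' is a reserved word in some older engines
            description.innerHTML = results[i]['abstract'];
            item.appendChild(link);
            item.appendChild(description);
            list.appendChild(item);
        }
        // swap the 'Loading results...' paragraph for the finished list
        out.parentNode.replaceChild(list, out);
        out = list;
    } else {
        out.innerHTML = config.noresults;
    }
}
// at the end of the module, expose init and found, and invoke the
// surrounding function so SITESEARCH becomes the returned object
return {
    init: init,
    found: found
};
}();
// assumption: wire up the form enhancement once the page has loaded
window.onload = SITESEARCH.init;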
    Where to go from here This is just a very simple example of what you can do with BOSS. You can define languages and regions, retrieve and display images and news and mix the results with other data sources before displaying them. One very cool feature is that by adding a view=keyterms parameter to the URL you can get the keywords of each of the results to drill deeper into the search. An example for this written in PHP is available on the YDN blog. For JavaScript solutions there is a handy wrapper called yboss available to help you go nuts.",2008,Christian Heilmann,chrisheilmann,2008-12-04T00:00:00+00:00,https://24ways.org/2008/sitewide-search-on-a-shoestring/,code 106,Checking Out: Progress Meters,"It’s the holiday season, so you know what that means: online shopping! When I started developing Web sites back in the 90s, many of my first clients were small local shops wanting to sell their goods online, so I developed many a checkout system. And because of slow dial-up speeds back then, informing the user about where they were in the checkout process was pretty important. Even though we’re (mostly) beyond the dial-up days, informing users about where they are in a flow is still important. In usability tests at the companies I’ve worked at, I’ve seen time and time again how not adequately informing the user about their state can cause real frustration. This is especially true for two sets of users: mobile users and users of assistive devices, in particular, screen readers. The progress meter is a very common design solution used to indicate to the user’s state within a flow. On the design side, much effort may go in to crafting a solution that is as visually informative as possible. On the development side, however, solutions range widely. I’ve checked out the checkouts at a number of sites and here’s what I’ve found when it comes to progress meters: they’re sometimes inaccessible and often confusing or unhelpful — all because of the way in which they’re coded. For those who use assistive devices or text-only browsers, there must be a better way to code the progress meter — and there is. (Note: All code samples are from live sites but have been tweaked to hide the culprits’ identities.) How not to make progress A number of sites assemble their progress meters using non- or semi-semantic markup and images with no alternate text. On text-only browsers (like my mobile phone) and to screen readers, this looks and reads like chunks of content with no context given.
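The markup in question doesn’t survive the formatting here, but it was something along these lines — a pile of divs, spans and images with empty (or missing) alt attributes (the class names and filenames are invented for illustration). With the tags and images stripped away, all that’s left is the run of text you see below:

<div id="progress">
  <span class="step"><img src="step-1.gif" alt=""> Shipping information</span>
  <span class="step"><img src="step-2.gif" alt=""> Payment information</span>
  <span class="step current"><img src="step-3.gif" alt=""> Place your order</span>
</div>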
    Shipping information Payment information Place your order
    In the above example, the third state, “Place your order”, is the current state. But a screen reader may not know that, and my cell phone only displays ""Shipping informationPayment informationPlace your order"". Not good. Is this progress? Other sites present the entire progress meter as a graphic, like the following: Now, I have no problem with using a graphic to render a very stylish progress meter (my sample above is probably not the most stylish example, of course, but you understand my point). What becomes important in this case is the use of appropriate alternate text to describe the image. Disappointingly, sites today have a wide range of solutions, including using no alternate text. Check out these code samples which call progress meter images. I think we can all agree that the above is bad, unless you really don’t care whether or not users know where they are in a flow. The alt text in the example above just copies all of the text found in the graphic, but it doesn’t represent the status at all. So for every page in the checkout, the user sees or hears the same text. Sure, by the second or third page in the flow, the user has figured out what’s going on, but she or he had to think about it. I don’t think that’s good. The above probably has the best alternate text out of these examples, because the user at least understands that they’re in the Checkout process, on the Place your order page. But going through the flow with alt text like this, the user doesn’t know how many steps are in the flow. Semantic progress Of course, there are some sites that use an ordered list when marking up the progress meter. Hooray! Unfortunately, no text-only browser or screen reader would be able to describe the user’s current state given this markup.
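The list markup itself has been stripped out of this copy; it would have been a plain ordered list along these lines, with the current step identified only by a class or styling that assistive technology never announces (the class name is an assumption). Shorn of its tags it reads as the bare numbered list below:

<ol class="progress-meter">
  <li>shipping information</li>
  <li>payment information</li>
  <li class="current">place your order</li>
</ol>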
    1. shipping information
    2. payment information
    3. place your order
Without CSS enabled, the above renders as nothing more than a plain numbered list of the three steps, with nothing to indicate which step is current. Progress at last We all know that semantic markup makes for the best foundation, so we’ll start with the markup found above. In order to make the state information accessible, let’s add some additional text in paragraph and span elements.
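The enhanced markup has also been stripped from this copy. A sketch of what’s being described — an introductory paragraph plus extra lead-in text in a span inside each list item, all of which will later be hidden visually — might look like this (the class names are assumptions); its text content is the list you see below:

<p class="accessibility">There are three steps in this checkout process.</p>
<ol class="progress-meter">
  <li><span class="accessibility">Enter your</span> shipping information</li>
  <li><span class="accessibility">Enter your</span> payment information</li>
  <li><span class="accessibility">Review details and</span> place your order</li>
</ol>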

    There are three steps in this checkout process.

    1. Enter your shipping information
    2. Enter your payment information
    3. Review details and place your order
Add on some simple CSS to hide the paragraph and spans, and arrange the list items on a single line with a background image to represent each large number, and this is what you’ll get — visually a horizontal meter, and as plain text: “There are three steps in this checkout process. Enter your shipping information. Enter your payment information. Review details and place your order.” To display and describe a state as active, add the class “current” to one of the list items. Then change the hidden content such that it better describes the state to the user.
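Again the sample markup is missing here; with the first step active it would look something like this (using the same assumed class names as before), giving the statements you see listed below:

<p class="accessibility">There are three steps in this checkout process.</p>
<ol class="progress-meter">
  <li class="current"><span class="accessibility">You are currently entering your</span> shipping information</li>
  <li><span class="accessibility">In the next step, you will enter your</span> payment information</li>
  <li><span class="accessibility">In the last step, you will review the details and</span> place your order</li>
</ol>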

    There are three steps in this checkout process.

    1. You are currently entering your shipping information
    2. In the next step, you will enter your payment information
    3. In the last step, you will review the details and place your order
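For completeness, the “simple CSS” referred to above might look roughly like the following. Everything here is an assumption rather than the author’s original stylesheet: the extra text is moved off-screen rather than set to display:none, so that screen readers still announce it, and each step gets a background image in place of the list numbering:

.accessibility {
    /* hide visually, but keep available to screen readers */
    position: absolute;
    left: -9999px;
}
.progress-meter {
    list-style: none;
    margin: 0;
    padding: 0;
}
.progress-meter li {
    float: left;
    width: 11em;
    /* leave room on the left for the large number graphic;
       in practice each li would get its own numbered image */
    padding: 0.5em 0.5em 0.5em 40px;
    background: url(step.png) no-repeat 0 50%;
}
.progress-meter li.current {
    font-weight: bold;
    background-image: url(step-current.png);
}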
    The end result is an attractive progress meter that gives much greater semantic and contextual information. There are three steps in this checkout process. You are currently entering your shipping information In the next step, you will enter your payment information In the last step, you will review the details and place your order For example, the above example renders in a text-only browser as follows: There are three steps in this checkout process. You are currently entering your shipping information In the next step, you will enter your payment information In the last step, you will review the details and place your order And the screen reader I use for testing announces the following: There are three steps in this checkout process. List of three items. You are currently entering your shipping information. In the next step, you will enter your payment information. In the last step, you will review the details and place your order. List end. Here’s a sample code page that summarises this approach. Happy frustration-free online shopping with this improved progress meter!",2008,Kimberly Blessing,kimberlyblessing,2008-12-12T00:00:00+00:00,https://24ways.org/2008/checking-out-progress-meters/,ux 107,Using Google App Engine as Your Own Content Delivery Network,"Do you remember, years ago, when hosting was expensive, domain names were the province of the rich, and you hosted your web pages on Geocities? It seems odd to me now that there was a time when each and every geek didn’t have his own top-level domain and super hosting setup. But as the parts became more and more affordable a man could become an outcast if he didn’t have his own slightly surreal-sounding TLD. And so it will be in the future when people realise with surprise there was a time before affordable content delivery networks. A content delivery network, or CDN, is a system of servers spread around the world, serving files from the nearest physical location. Instead of waiting for a file to find its way from a server farm in Silicon Valley 8,000 kilometres away, I can receive it from London, Dublin, or Paris, cutting down the time I wait. The big names — Google, Yahoo, Amazon, et al — use CDNs for their sites, but they’ve always been far too expensive for us mere mortals. Until now. There’s a service out there ready for you to use as your very own CDN. You have the company’s blessing, you won’t need to write a line of code, and — best of all — it’s free. The name? Google App Engine. In this article you’ll find out how to set up a CDN on Google App Engine. You’ll get the development software running on your own computer, tell App Engine what files to serve, upload them to a web site, and give everyone round the world access to them. Creating your first Google App Engine project Before we do anything else, you’ll need to download the Google App Engine software development kit (SDK). You’ll need Python 2.5 too — you won’t be writing any Python code but the App Engine SDK will need it to run on your computer. If you don’t have Python, App Engine will install it for you (if you use Mac OS X 10.5 or a Linux-based OS you’ll have Python; if you use Windows you won’t). Done that? Excellent, because that’s the hardest step. The rest is plain sailing. You’ll need to choose a unique ‘application id’ — nothing more than a name — for your project. Make sure it consists only of lowercase letters and numbers. For this article I’ll use 24ways2008, but you can choose anything you like. 
On your computer, create a folder named after your application id. This folder can be anywhere you want: your desktop, your documents folder, or wherever you usually keep your web files. Within your new folder, create a folder called assets, and within that folder create three folders called images, css, and javascript. These three folders are the ones you’ll fill with files and serve from your content delivery network. You can have other folders too, if you like. That will leave you with a folder structure like this: 24ways2008/ assets/ css/ images/ javascript/ Now you need to put a few files in these folders, so we can later see our CDN in action. You can put anything you want in these folders, but for this example we’ll include an HTML file, a style sheet, an image, and a Javascript library. In the top-level folder (the one I’ve called 24ways2008), create a file called index.html. Fill this with any content you want. In the assets/css folder, create a file named core.css and throw in a couple of CSS rules for good measure. In the assets/images directory save any image that takes your fancy — I’ve used the silver badge from the App Engine download page. Finally, to fill the JavaScript folder, add in this jQuery library file. If you’ve got the time and the inclination, you can build a page that uses all these elements. So now we should have a set of files and folders that look something like this: 24ways2008/ assets/ index.html css/ core.css images/ appengine-silver-120x30.gif javascript/ jquery-1.2.6.min.js Which leaves us with one last file to create. This is the important one: it tells App Engine what to do with your files. It’s named app.yaml, it sits at the top-level (inside the folder I’ve named 24ways2008), and it needs to include these lines: application: 24ways2008 version: 1 runtime: python api_version: 1 handlers: - url: / static_files: assets/index.html upload: assets/index.html - url: / static_dir: assets You need to make sure you change 24ways2008 on the first line to whatever you chose as your application id, but otherwise the content of your app.yaml file should be identical. And with that, you’ve created your first App Engine project. If you want it, you can download a zip file containing my project. Testing your project As it stands, your project is ready to be uploaded to App Engine. But we couldn’t call ourselves professionals if we didn’t test it, could we? So, let’s put that downloaded SDK to good use and run the project from your own computer. One of the files you’ll find App Engine installed is named dev_appserver.py, a Python script used to simulate App Engine on your computer. You’ll find lots of information on how to do this in the documentation on the development web server, but it boils down to running the script like so (the space and the dot at the end are important): dev_appserver.py . You’ll need to run this from the command-line: Mac users can run the Terminal application, Linux users can run their favourite shell, and Windows users will need to run it via the Command Prompt (open the Start menu, choose ‘Run…’, type ‘cmd‘, and click ‘OK’). 
Before you run the script you’ll need to make sure you’re in the project folder — in my case, as I saved it to my desktop I can go there by typing cd ~/Desktop/24ways2008 in my Mac’s Terminal app; if you’re using Windows you can type cd ""C:\Documents and Settings\username\Desktop\24ways2008"" If that’s successful, you’ll see a few lines of output, the last looking something like this: INFO 2008-11-22 14:35:00,830 dev_appserver_main.py] Running application 24ways2008 on port 8080: http://localhost:8080 Now you can power up your favourite browser, point it to http://localhost:8080/, and you’ll see the page you saved as index.html. You’ll also find your CSS file at http://localhost:8080/css/core.css. In fact, anything you put inside the assets folder in the project will be accessible from this domain. You’re running our own App Engine web server! Note that no-one else will be able to see your files: localhost is a special domain that you can only see from your computer — and once you stop the development server (by pressing Control–C) you’ll not be able to see the files in your browser until you start it again. You might notice a new file has turned up in your project: index.yaml. App Engine creates this file when you run the development server, and it’s for internal App Engine use only. If you delete it there are no ill effects, but it will reappear when you next run the development server. If you’re using version control (e.g. Subversion) there’s no need to keep a copy in your repository. So you’ve tested your project and you’ve seen it working on your own machine; now all you need to do is upload your project and the world will be able to see your files too. Uploading your project If you don’t have a Google account, create one and then sign in to App Engine. Tell Google about your new project by clicking on the ‘Create an Application’ button. Enter your application id, give the application a name, and agree to the terms and conditions. That’s it. All we need do now is upload the files. Open your Mac OS X Terminal, Windows Command Prompt, or Linux shell window again, move to the project folder, and type (again, the space and the dot at the end are important): appcfg.py update . Enter your email address and password when prompted, and let App Engine do it’s thing. It’ll take no more than a few seconds, but in that time App Engine will have done the equivalent of logging in to an FTP server and copying files across. It’s fairly understated, but you now have your own project up and running. You can see mine at http://24ways2008.appspot.com/, and everyone can see yours at http://your-application-id.appspot.com/. Your files are being served up over Google’s content delivery network, at no cost to you! Benefits of using Google App Engine The benefits of App Engine as a CDN are obvious: your own server doesn’t suck up the bandwidth, while your visitors will appreciate a faster site. But there are also less obvious benefits. First, once you’ve set up your site, updating it is an absolute breeze. Each time you update a file (or a batch of files) you need only run appcfg.py to see the changes appear on your site. To paraphrase Joel Spolsky, a good web site must be able to be updated in a single step. Many designers and developers can’t make that claim, but with App Engine, you can. App Engine also allows multiple people to work on one application. 
If you want a friend to be able to upload files to your site you can let him do so without giving him usernames and passwords — all he needs is his own Google account. App Engine also gives you a log of all actions taken by collaborators, so you can see who’s made updates, and when. Another bonus is the simple version control App Engine offers. Do you remember the file named app.yaml you created a while back? The second line looked like this: version: 1 If you change the version number to 2 (or 3, or 4, etc), App Engine will keep a copy of the last version you uploaded. If anything goes wrong with your latest version, you can tell App Engine to revert back to that last saved version. It’s no proper version control system, but it could get you out of a sticky situation. One last thing to note: if you’re not happy using your-application-id.appspot.com as your domain, App Engine will quite happily use any domain you own. The weak points of Google App Engine In the right circumstances, App Engine can be a real boon. I run my own site using the method I’ve discussed above, and I’m very happy with it. But App Engine does have its disadvantages, most notably those discussed by Aral Balkan in his post ‘Why Google App Engine is broken and what Google must do to fix it‘. Aral found the biggest problems while using App Engine as a web application platform; I wouldn’t recommend using it as such either (at least for now) but for our purposes as a CDN for static files, it’s much more worthy. Still, App Engine has two shortcomings you should be aware of. The first is that you can’t host a file larger than one megabyte. If you want to use App Engine to host that 4.3MB download for your latest-and-greatest desktop software, you’re out of luck. The only solution is to stick to smaller files. The second problem is the quota system. Google’s own documentation says you’re allowed 650,000 requests a day and 10,000 megabytes of bandwidth in and out (20,000 megabytes in total), which should be plenty for most sites. But people have seen sites shut down temporarily for breaching quotas — in some cases after inexplicable jumps in Google’s server CPU usage. Aral, who’s seen it happen to his own sites, seemed genuinely frustrated by this, and if you measure your hits in the hundreds of thousands and don’t want to worry about uptime, App Engine isn’t for you. That said, for most of us, App Engine offers a fantastic resource: the ability to host files on Google’s own content delivery network, at no charge. Conclusion If you’ve come this far, you’ve seen how to create a Google App Engine project and host your own files on Google’s CDN. You’ve seen the great advantages App Engine offers — an excellent content delivery network, the ability to update your site with a single command, multiple authors, simple version control, and the use of your own domain — and you’ve come across some of its weaknesses — most importantly the limit on file sizes and the quota system. All that’s left to do is upload those applications — but not before you’ve finished your Christmas shopping.",2008,Matt Riggott,mattriggott,2008-12-06T00:00:00+00:00,https://24ways.org/2008/using-google-app-engine-as-your-own-cdn/,process 109,Geotag Everywhere with Fire Eagle,"A note from the editors: Since this article was written Yahoo! has retired the Fire Eagle service. Location, they say, is everywhere. Everyone has one, all of the time. But on the web, it’s taken until this year to see the emergence of location in the applications we use and build. 
The possibilities are broad. Increasingly, mobile phones provide SDKs to approximate your location wherever you are, browser extensions such as Loki and Mozilla’s Geode provide browser-level APIs to establish your location from the proximity of wireless networks to your laptop. Yahoo’s Brickhouse group launched Fire Eagle, an ambitious location broker enabling people to take their location from any of these devices or sources, and provide it to a plethora of web services. It enables you to take the location information that only your iPhone knows about and use it anywhere on the web. That said, this is still a time of location as an emerging technology. Fire Eagle stores your location on the web (protected by application-specific access controls), but to try and give an idea of how useful and powerful your location can be — regardless of the services you use now — today’s 24ways is going to build a bookmarklet to call up your location on demand, in any web application. Location Support on the Web Over the past year, the number of applications implementing location features has increased dramatically. Plazes and Brightkite are both full featured social networks based around where you are, whilst Pownce rolled in Fire Eagle support to allow geotagging of all the content you post to their microblogging service. Dipity’s beautiful timeline shows for you moving from place to place and Six Apart’s activity stream for Movable Type started exposing your movements. The number of services that hook into Fire Eagle will increase as location awareness spreads through the developer community, but you can use your location on other sites indirectly too. Consider Flickr. Now world renowned for their incredible mapping and places features, geotagging on Flickr started out as a grassroots extension of regular tagging. That same technique can be used to start rolling geotagging in any publishing platform you come across, for any kind of content. Machine-tags (geo:lat= and geo:lon=) and the adr and geo microformats can be used to enhance anything you write with location information. A crash course in avian inflammability Fire Eagle is a location store. A broker between services and devices which provide location and those which consume it. It’s a switchboard that controls which pieces of your location different applications can see and use, and keeps hidden anything you want kept private. A blog widget that displays your current location in public can be restricted to display just your current city, whilst a service that provides you with a list of the nearest ATMs will operate better with a precise street address. Even if your iPhone tells Fire Eagle exactly where you are, consuming applications only see what you want them to see. That’s important for users to realise that they’re in control, but also important for application developers to remember that you cannot rely on having super-accurate information available all the time. You need to build location aware applications which degrade gracefully, because users will provide fuzzier information — either through choice, or through less accurate sources. Application specific permissions are controlled through an OAuth API. Each application has a unique key, used to request a second, user-specific key that permits access to that user’s information. You store that user key and it remains valid until such a time as the user revokes your application’s access. 
Unlike with passwords, these keys are unique per application, so revoking the access rights of one application doesn’t break all the others. Building your first Fire Eagle app; Geomarklet Fire Eagle’s developer documentation can take you through examples of writing simple applications using server side technologies (PHP, Python). Here, we’re going to write a client-side bookmarklet to make your location available in every site you use. It’s designed to fast-track the experience of having location available everywhere on web, and show you how that can be really handy. Hopefully, this will set you thinking about how location can enhance the new applications you build in 2009. An oddity of bookmarklets Bookmarklets (or ‘favlets’, for those of an MSIE persuasion) are a strange environment to program in. Critically, you have no persistent storage available. As such, using token-auth APIs in a static environment requires you to build you application in a slightly strange way; authing yourself in advance and then hardcoding the keys into your script. Get started Before you do anything else, go to http://fireeagle.com and log in, get set up if you need to and by all means take a look around. Take a look at the mobile updaters section of the application gallery and perhaps pick out an app that will update Fire Eagle from your phone or laptop. Once that’s done, you need to register for an application key in the developer section. Head straight to /developer/create and complete the form. Since you’re building a standalone application, choose ‘Auth for desktop applications’ (rather than web applications), and select that you’ll be ‘accessing location’, not updating. At the end of this process, you’ll have two application keys, a ‘Consumer Key’ and a ‘Consumer Secret’, which look like these: Consumer Key luKrM9U1pMnu Consumer Secret ZZl9YXXoJX5KLiKyVrMZffNEaBnxnd6M These keys combined allow your application to make requests to Fire Eagle. Next up, you need to auth yourself; granting your new application permission to use your location. Because bookmarklets don’t have local storage, you can’t integrate the auth process into the bookmarklet itself — it would have no way of storing the returned key. Instead, I’ve put together a simple web frontend through which you can auth with your application. Head to Auth me, Amadeus!, enter the application keys you just generated and hit ‘Authorize with Fire Eagle’. You’ll be taken to the Fire Eagle website, just as in regular Fire Eagle applications, and after granting access to your app, be redirected back to Amadeus which will provide you your user tokens. These tokens are used in subsequent requests to read your location. And, skip to the end… The process of building the bookmarklet, making requests to Fire Eagle, rendering it to the page and so forth follows, but if you’re the impatient type, you might like to try this out right now. Take your four API keys from above, and drag the following link to your Bookmarks Toolbar; it contains all the code described below. Before you can use it, you need to edit in your own API keys. Open your browser’s bookmark editor and where you find text like ‘YOUR_CONSUMER_KEY_HERE’, swap in the corresponding key you just generated. 
Get Location Bookmarklet Basics To start on the bookmarklet code, set out a basic JavaScript module-pattern structure: var Geomarklet = function() { return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); Next we’ll add the keys obtained in the setup step, and also some basic Fire Eagle support objects: var Geomarklet = function() { var Keys = { consumer_key: 'IuKrJUHU1pMnu', consumer_secret: 'ZZl9YXXoJX5KLiKyVEERTfNEaBnxnd6M', user_token: 'xxxxxxxxxxxx', user_secret: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' }; var LocationDetail = { EXACT: 0, POSTAL: 1, NEIGHBORHOOD: 2, CITY: 3, REGION: 4, STATE: 5, COUNTRY: 6 }; var index_offset; return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); The Location Hierarchy A successful Fire Eagle query returns an object called the ‘location hierarchy’. Depending on the level of detail shared, the index of a particular piece of information in the array will vary. The LocationDetail object maps the array indices of each level in the hierarchy to something comprehensible, whilst the index_offset variable is an adjustment based on the detail of the result returned. The location hierarchy object looks like this, providing a granular breakdown of a location, in human consumable and machine-friendly forms. ""user"": { ""location_hierarchy"": [{ ""level"": 0, ""level_name"": ""exact"", ""name"": ""707 19th St, San Francisco, CA"", ""normal_name"": ""94123"", ""geometry"": { ""type"": ""Point"", ""coordinates"": [ - 0.2347530752, 67.232323] }, ""label"": null, ""best_guess"": true, ""id"": , ""located_at"": ""2008-12-18T00:49:58-08:00"", ""query"": ""q=707%2019th%20Street,%20Sf"" }, { ""level"": 1, ""level_name"": ""postal"", ""name"": ""San Francisco, CA 94114"", ""normal_name"": ""12345"", ""woeid"": , ""place_id"": """", ""geometry"": { ""type"": ""Polygon"", ""coordinates"": [], ""bbox"": [] }, ""label"": null, ""best_guess"": false, ""id"": 59358791, ""located_at"": ""2008-12-18T00:49:58-08:00"" }, { ""level"": 2, ""level_name"": ""neighborhood"", ""name"": ""The Mission, San Francisco, CA"", ""normal_name"": ""The Mission"", ""woeid"": 23512048, ""place_id"": ""Y12JWsKbApmnSQpbQg"", ""geometry"": { ""type"": ""Polygon"", ""coordinates"": [], ""bbox"": [] }, ""label"": null, ""best_guess"": false, ""id"": 59358801, ""located_at"": ""2008-12-18T00:49:58-08:00"" }, } In this case the first object has a level of 0, so the index_offset is also 0. Prerequisites To query Fire Eagle we call in some existing libraries to handle the OAuth layer and the Fire Eagle API call. Your bookmarklet will need to add the following scripts into the page: The SHA1 encryption algorithm The OAuth wrapper An extension for the OAuth wrapper The Fire Eagle wrapper itself When the bookmarklet is first run, we’ll insert these scripts into the document. We’re also inserting a stylesheet to dress up the UI that will be generated. If you want to follow along any of the more mundane parts of the bookmarklet, you can download the full source code. Rendering This bookmarklet can be extended to support any formatting of your location you like, but for sake of example I’m going to build three common formatters that you’ll find useful for common location scenarios: Sites which already ask for your location; and in publishing systems that accept tags or HTML mark-up. 
All the rendering functions are items in a renderers object, so they can be iterated through easily, making it trivial to add new formatting functions as your find new use cases (just add another function to the object). var renderers = { geotag: function(user) { if(LocationDetail.EXACT !== index_offset) { return false; } else { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; return ""geo:lat="" + coords[0] + "", geo:lon="" + coords[1]; } }, city: function(user) { if(LocationDetail.CITY < index_offset) { return false; } else { return user.location_hierarchy[LocationDetail.CITY - index_offset].name; } } You should always fail gracefully, and in line with catering to users who choose not to share their location precisely, always check that the location has been returned at the level you require. Geotags are expected to be precise, so if an exact location is unavailable, returning false will tell the rendering aspect of the bookmarklet to ignore the function altogether. These first two are quite simple, geotag returns geo:lat=-0.2347530752, geo:lon=67.232323 and city returns San Francisco, CA. This final renderer creates a chunk of HTML using the adr and geo microformats, using all available aspects of the location hierarchy, and can be used to geotag any content you write on your blog or in comments: html: function(user) { var geostring = ''; var adrstring = ''; var adr = []; adr.push('

    '); // city if(LocationDetail.CITY >= index_offset) { adr.push( '\n ' + user.location_hierarchy[LocationDetail.CITY-index_offset].normal_name + ',' ); } // county if(LocationDetail.REGION >= index_offset) { adr.push( '\n ' + user.location_hierarchy[LocationDetail.REGION-index_offset].normal_name + ',' ); } // locality if(LocationDetail.STATE >= index_offset) { adr.push( '\n ' + user.location_hierarchy[LocationDetail.STATE-index_offset].normal_name + ',' ); } // country if(LocationDetail.COUNTRY >= index_offset) { adr.push( '\n ' + user.location_hierarchy[LocationDetail.COUNTRY-index_offset].normal_name + '' ); } // postal if(LocationDetail.POSTAL >= index_offset) { adr.push( '\n ' + user.location_hierarchy[LocationDetail.POSTAL-index_offset].normal_name + ',' ); } adr.push('\n

    \n'); adrstring = adr.join(''); if(LocationDetail.EXACT === index_offset) { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; geostring = '

    ' +'\n ' + coords[0] + ';' + '\n ' + coords[1] + '\n

    \n'; } return (adrstring + geostring); } Here we check the availability of every level of location and build it into the adr and geo patterns as appropriate. Just as for the geotag function, if there’s no exact location the geo markup won’t be returned. Finally, there’s a rendering method which creates a container for all this data, renders all the applicable location formats and then displays them in the page for a user to copy and paste. You can throw this together with DOM methods and some simple styling, or roll in some components from YUI or JQuery to handle drawing full featured overlays. You can see this simple implementation for rendering in the full source code. Make the call With a framework in place to render Fire Eagle’s location hierarchy, the only thing that remains is to actually request your location. Having already authed through Amadeus earlier, that’s as simple as instantiating the Fire Eagle JavaScript wrapper and making a single function call. It’s a big deal that whilst a lot of new technologies like OAuth add some complexity and require new knowledge to work with, APIs like Fire Eagle are really very simple indeed. return { run: function() { insert_prerequisites(); setTimeout( function() { var fe = new FireEagle( Keys.consumer_key, Keys.consumer_secret, Keys.user_token, Keys.user_secret ); var script = document.createElement('script'); script.type = 'text/javascript'; script.src = fe.getUserUrl( FireEagle.RESPONSE_FORMAT.json, 'Geomarklet.callback' ); document.body.appendChild(script); }, 2000 ); }, callback: function(json) { if(json.rsp && 'fail' == json.rsp.stat) { alert('Error ' + json.rsp.code + "": "" + json.rsp.message); } else { index_offset = json.user.location_hierarchy[0].level; draw_selector(json); } } }; We first insert the prerequisite scripts required for the Fire Eagle request to function, and to prevent trying to instantiate the FireEagle object before it’s been loaded over the wire, the remaining instantiation and request is wrapped inside a setTimeout delay. We then create the request URL, referencing the Geomarklet.callback callback function and then append the script to the document body — allowing a cross-domain request. The callback itself is quite simple. Check for the presence and value of rsp.status to test for errors, and display them as required. If the request is successful set the index_offset — to adjust for the granularity of the location hierarchy — and then pass the object to the renderer. The result? When Geomarklet.run() is called, your location from Fire Eagle is read, and each renderer displayed on the page in an easily copy and pasteable form, ready to be used however you need. Deploy The final step is to convert this code into a long string for use as a bookmarklet. Easiest for Mac users is the JavaScript bundle in TextMate — choose Bundles: JavaScript: Copy as Bookmarklet to Clipboard. Then create a new ‘Get Location’ bookmark in your browser of choice and paste in. Those without TextMate can shrink their code down into a single line by first running their code through the JSLint tool (to ensure the code is free from errors and has all the required semi-colons) and then use a find-and-replace tool to remove line breaks from your code (or even run your code through JSMin to shrink it down). With the bookmarklet created and added to your bookmarks bar, you can now call up your location on any page at all. Get a feel for a web where your location is just another reliable part of the browsing experience. Where next? 
So, the Geomarklet you’ve been guided through is a pretty simple premise and pretty simple output. But from this base you can start to extend: Add code that will insert each of the location renderings directly into form fields, perhaps, or how about site-specific handlers to add your location tags into the correct form field in Wordpress or Tumblr? Paste in your current location to Google Maps? Or Flickr? Geomarklet gives you a base to start experimenting with location on your own pages and the sites you browse daily. The introduction of consumer accessible geo to the web is an adventure of discovery; not so much discovering new locations, but discovering location itself.",2008,Ben Ward,benward,2008-12-21T00:00:00+00:00,https://24ways.org/2008/geotag-everywhere-with-fire-eagle/,code 171,Rock Solid HTML Emails,"At some stage in your career, it’s likely you’ll be asked by a client to design a HTML email. Before you rush to explain that all the cool kids are using social media, keep in mind that when done correctly, email is still one of the best ways to promote you and your clients online. In fact, a recent survey showed that every dollar spent on email marketing this year generated more than $40 in return. That’s more than any other marketing channel, including the cool ones. There are a whole host of ingredients that contribute to a good email marketing campaign. Permission, relevance, timeliness and engaging content are all important. Even so, the biggest challenge for designers still remains building an email that renders well across all the popular email clients. Same same, but different Before getting into the details, there are some uncomfortable facts that those new to HTML email should be aware of. Building an email is not like building for the web. While web browsers continue their onward march towards standards, many email clients have stubbornly stayed put. Some have even gone backwards. In 2007, Microsoft switched the Outlook rendering engine from Internet Explorer to Word. Yes, as in the word processor. Add to this the quirks of the major web-based email clients like Gmail and Hotmail, sprinkle in a little Lotus Notes and you’ll soon realize how different the email game is. While it’s not without its challenges, rest assured it can be done. In my experience the key is to focus on three things. First, you should keep it simple. The more complex your email design, the more likely is it to choke on one of the popular clients with poor standards support. Second, you need to take your coding skills back a good decade. That often means nesting tables, bringing CSS inline and following the coding guidelines I’ll outline below. Finally, you need to test your designs regularly. Just because a template looks nice in Hotmail now, doesn’t mean it will next week. Setting your lowest common denominator To maintain your sanity, it’s a good idea to decide exactly which email clients you plan on supporting when building a HTML email. While general research is helpful, the email clients your subscribers are using can vary significantly from list to list. If you have the time there are a number of tools that can tell you specifically which email clients your subscribers are using. Trust me, if the testing shows almost none of them are using a client like Lotus Notes, save yourself some frustration and ignore it altogether. Knowing which email clients you’re targeting not only makes the building process easier, it can save you lots of time in the testing phase too. 
For the purpose of this article, I’ll be sharing techniques that give the best results across all of the popular clients, including the notorious ones like Gmail, Lotus Notes 6 and Outlook 2007. Just remember that pixel perfection in all email clients is a pipe dream. Let’s get started. Use tables for layout Because clients like Gmail and Outlook 2007 have poor support for float, margin and padding, you’ll need to use tables as the framework of your email. While nested tables are widely supported, consistent treatment of width, margin and padding within table cells is not. For the best results, keep the following in mind when coding your table structure. Set the width in each cell, not the table When you combine table widths, td widths, td padding and CSS padding into an email, the final result is different in almost every email client. The most reliable way to set the width of your table is to set a width for each cell, not for the table itself.
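As a rough sketch (the values and contents here are illustrative rather than taken from a real campaign), a two-column row with its width set cell by cell might look like this:

<!-- width set on each td, not on the table; values are only an example -->
<table cellspacing="0" cellpadding="10" border="0">
  <tr>
    <td width="80">Sidebar</td>
    <td width="280">Main content</td>
  </tr>
</table>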
    Never assume that if you don’t specify a cell width the email client will figure it out. It won’t. Also avoid using percentage based widths. Clients like Outlook 2007 don’t respect them, especially for nested tables. Stick to pixels. If you want to add padding to each cell, use either the cellpadding attribute of the table or CSS padding for each cell, but never combine the two. Err toward nesting Table nesting is far more reliable than setting left and right margins or padding for table cells. If you can achieve the same effect by table nesting, that will always give you the best result across the buggier email clients. Use a container table for body background colors Many email clients ignore background colors specified in your CSS or the tag. To work around this, wrap your entire email with a 100% width table and give that a background color.
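In practice that wrapper is just a single-cell table around everything else, along the lines of this sketch (the colour value is only an example):

<!-- 100% width container table provides the body background colour (colour is illustrative) -->
<table cellspacing="0" cellpadding="0" border="0" width="100%" bgcolor="#336699">
  <tr>
    <td>
      Your email code goes here.
    </td>
  </tr>
</table>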
    You can use the same approach for background images too. Just remember that some email clients don’t support them, so always provide a fallback color. Avoid unnecessary whitespace in table cells Where possible, avoid whitespace between your tags. Some email clients (ahem, Yahoo! and Hotmail) can add additional padding above or below the cell contents in some scenarios, breaking your design for no apparent reason. CSS and general font formatting While some email designers do their best to avoid CSS altogether and rely on the dreaded tag, the truth is many CSS properties are well supported by most email clients. See this comprehensive list of CSS support across the major clients for a good idea of the safe properties and those that should be avoided. Always move your CSS inline Gmail is the culprit for this one. By stripping the CSS from the and of any email, we’re left with no choice but to move all CSS inline. The good news is this is something you can almost completely automate. Free services like Premailer will move all CSS inline with the click of a button. I recommend leaving this step to the end of your build process so you can utilize all the benefits of CSS. Avoid shorthand for fonts and hex notation A number of email clients reject CSS shorthand for the font property. For example, never set your font styles like this. p { font:bold 1em/1.2em georgia,times,serif; } Instead, declare the properties individually like this. p { font-weight: bold; font-size: 1em; line-height: 1.2em; font-family: georgia,times,serif; } While we’re on the topic of fonts, I recently tested every conceivable variation of @font-face across the major email clients. The results were dismal, so unfortunately it’s web-safe fonts in email for the foreseeable future. When declaring the color property in your CSS, some email clients don’t support shorthand hexadecimal colors like color:#f60; instead of color:#ff6600;. Stick to the longhand approach for the best results. Paragraphs Just like table cell spacing, paragraph spacing can be tricky to get a consistent result across the board. I’ve seen many designers revert to using double
    or DIVs with inline CSS margins to work around these shortfalls, but recent testing showed that paragraph support is now reliable enough to use in most cases (there was a time when Yahoo! didn’t support the paragraph tag at all). The best approach is to set the margin inline via CSS for every paragraph in your email, like so: p { margin: 0 0 1.6em 0; } Again, do this via CSS in the head when building your email, then use Premailer to bring it inline for each paragraph later. If part of your design is height-sensitive and calls for pixel perfection, I recommend avoiding paragraphs altogether and setting the text formatting inline in the table cell. You might need to use table nesting or cellpadding / CSS to get the desired result. Here’s an example: your height sensitive text Links Some email clients will overwrite your link colors with their defaults, and you can avoid this by taking two steps. First, set a default color for each link inline like so: this is a link Next, add a redundant span inside the a tag. this is a link To some this may be overkill, but if link color is important to your design then a superfluous span is the best way to achieve consistency. Images in HTML emails The most important thing to remember about images in email is that they won’t be visible by default for many subscribers. If you start your design with that assumption, it forces you to keep things simple and ensure no important content is suppressed by image blocking. With this in mind, here are the essentials to remember when using images in HTML email: Avoid spacer images While the combination of spacer images and nested tables was popular on the web ten years ago, image blocking in many email clients has ruled it out as a reliable technique today. Most clients replace images with an empty placeholder in the same dimensions, others strip the image altogether. Given image blocking is on by default in most email clients, this can lead to a poor first impression for many of your subscribers. Stick to fixed cell widths to keep your formatting in place with or without images. Always include the dimensions of your image If you forget to set the dimensions for each image, a number of clients will invent their own sizes when images are blocked and break your layout. Also, ensure that any images are correctly sized before adding them to your email. Some email clients will ignore the dimensions specified in code and rely on the true dimensions of your image. Avoid PNGs Lotus Notes 6 and 7 don’t support 8-bit or 24-bit PNG images, so stick with the GIF or JPG formats for all images, even if it means some additional file size. Provide fallback colors for background images Outlook 2007 has no support for background images (aside from this hack to get full page background images working). If you want to use a background image in your design, always provide a background color the email client can fall back on. This solves both the image blocking and Outlook 2007 problem simultaneously. Don’t forget alt text Lack of standards support means email clients have long destroyed the chances of a semantic and accessible HTML email. Even still, providing alt text is important from an image blocking perspective. Even with images suppressed by default, many email clients will display the provided alt text instead. Just remember that some email clients like Outlook 2007, Hotmail and Apple Mail don’t support alt text at all when images are blocked. 
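Putting those image rules together, a typical image in an email ends up looking something like this sketch (the file name, dimensions and alt text are all assumptions):

<!-- explicit dimensions, meaningful alt text, and a GIF rather than a PNG -->
<img src="header.gif" width="600" height="150" alt="Acme Widgets December newsletter" />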
Use the display hack for Hotmail For some inexplicable reason, Windows Live Hotmail adds a few pixels of additional padding below images. A workaround is to set the display property like so. img {display:block;} This removes the padding in Hotmail and still gives you the predicable result in other email clients. Don’t use floats Both Outlook 2007 and earlier versions of Notes offer no support for the float property. Instead, use the align attribute of the img tag to float images in your email. If you’re seeing strange image behavior in Yahoo! Mail, adding align=“top” to your images can often solve this problem. Video in email With no support for JavaScript or the object tag, video in email (if you can call it that) has long been limited to animated gifs. However, some recent research I did into the HTML5 video tag in email showed some promising results. Turns out HTML5 video does work in many email clients right now, including Apple Mail, Entourage 2008, MobileMe and the iPhone. The real benefit of this approach is that if the video isn’t supported, you can provide reliable fallback content such as an animated GIF or a clickable image linking to the video in the browser. Of course, the question of whether you should add video to email is another issue altogether. If you lean toward the “yes” side check out the technique with code samples. What about mobile email? The mobile email landscape was a huge mess until recently. With the advent of the iPhone, Android and big improvements from Palm and RIM, it’s becoming less important to think of mobile as a different email platform altogether. That said, there are a few key pointers to keep in mind when coding your emails to get a decent result for your more mobile subscribers. Keep the width less than 600 pixels Because of email client preview panes, this rule was important long before mobile email clients came of age. In truth, the iPhone and Pre have a viewport of 320 pixels, the Droid 480 pixels and the Blackberry models hover around 360 pixels. Sticking to a maximum of 600 pixels wide ensures your design should still be readable when scaled down for each device. This width also gives good results in desktop and web-based preview panes. Be aware of automatic text resizing In what is almost always a good feature, email clients using webkit (such as the iPhone, Pre and Android) can automatically adjust font sizes to increase readability. If testing shows this feature is doing more harm than good to your design, you can always disable it with the following CSS rule: -webkit-text-size-adjust: none; Don’t forget to test While standards support in email clients hasn’t made much progress in the last few years, there has been continual change (for better or worse) in some email clients. Web-based providers like Yahoo!, Hotmail and Gmail are notorious for this. On countless occasions I’ve seen a proven design suddenly stop working without explanation. For this reason alone it’s important to retest your email designs on a regular basis. I find a quick test every month or so does the trick, especially in the web-based clients. The good news is that after designing and testing a few HTML email campaigns, you will find that order will emerge from the chaos. Many of these pitfalls will become quite predictable and your inbox-friendly designs will take shape with them in mind. Looking ahead Designing HTML email can be a tough pill for new designers and standardistas to swallow, especially given the fickle and retrospective nature of email clients today. 
With HTML5 just around the corner we are entering a new, uncertain phase. Will email client developers take the opportunity to repent on past mistakes and bring email clients into the present? The aim of groups such as the Email Standards Project is to make much of the above advice as redundant as the long-forgotten and tags, however, only time will tell if this is to become a reality. Although not the most compliant (or fashionable) medium, the results speak for themselves – email is, and will continue to be one of the most successful and targeted marketing channels available to you. As a designer with HTML email design skills in your arsenal, you have the opportunity to not only broaden your service offering, but gain a unique appreciation of how vital standards are. Next steps Ready to get started? There are a number of HTML email design galleries to provide ideas and inspiration for your own designs. http://www.campaignmonitor.com/gallery/ http://htmlemailgallery.com/ http://inboxaward.com/ Enjoy!",2009,David Greiner,davidgreiner,2009-12-13T00:00:00+00:00,https://24ways.org/2009/rock-solid-html-emails/,code 177,"HTML5: Tool of Satan, or Yule of Santa?","It would lead to unseasonal arguments to discuss the title of this piece here, and the arguments are as indigestible as the fourth turkey curry of the season, so we’ll restrict our article to the practical rather than the philosophical: what HTML5 can you reasonably expect to be able to use reliably cross-browser in the early months of 2010? The answer is that you can use more than you might think, due to the seasonal tinsel of feature-detection and using the sparkly pixie-dust of IE-only VML (but used in a way that won’t damage your Elf). Canvas canvas is a 2D drawing API that defines a blank area of the screen of arbitrary size, and allows you to draw on it using JavaScript. The pictures can be animated, such as in this canvas mashup of Wolfenstein 3D and Flickr. (The difference between canvas and SVG is that SVG uses vector graphics, so is infinitely scalable. It also keeps a DOM, whereas canvas is just pixels so you have to do all your own book-keeping yourself in JavaScript if you want to know where aliens are on screen, or do collision detection.) Previously, you needed to do this using Adobe Flash or Java applets, requiring plugins and potentially compromising keyboard accessibility. Canvas drawing is supported now in Opera, Safari, Chrome and Firefox. The reindeer in the corner is, of course, Internet Explorer, which currently has zero support for canvas (or SVG, come to that). Now, don’t pull a face like all you’ve found in your Yuletide stocking is a mouldy satsuma and a couple of nuts—that’s not the end of the story. Canvas was originally an Apple proprietary technology, and Internet Explorer had a similar one called Vector Markup Language which was submitted to the W3C for standardisation in 1998 but which, unlike canvas, was not blessed with retrospective standardisation. What you need, then, is some way for Internet Explorer to translate canvas to VML on-the-fly, while leaving the other, more standards-compliant browsers to use the HTML5. And such a way exists—it’s a JavaScript library called excanvas. 
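In use it amounts to a single script include wrapped in a conditional comment, something like this sketch (the filename is an assumption, so use whatever the project download provides):

<!-- IE-only include for excanvas (filename assumed) -->
<!--[if IE]>
  <script type="text/javascript" src="excanvas.js"></script>
<![endif]-->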
It’s downloadable from http://code.google.com/p/explorercanvas/ and it’s simple to include it via a conditional comment in the head for IE: Simply include this, and your canvas will be natively supported in the modern browsers (and the library won’t even be downloaded) whereas IE will suddenly render your canvas using its own VML engine. Be sure, however, to check it carefully, as the IE JavaScript engine isn’t so fast and you’ll need to be sure that performance isn’t too degraded to use. Forms Since the beginning of the Web, developers have been coding forms, and then writing JavaScript to check whether an input is a correctly formed email address, URL, credit card number or conforms to some other pattern. The cumulative labour of the world’s developers over the last 15 years makes whizzing round in a sleigh and delivering presents seem like popping to the corner shop in comparison. With HTML5, that’s all about to change. As Yaili began to explore on Day 3, a host of new attributes to the input element provide built-in validation for email address formats (input type=email), URLs (input type=url), any pattern that can be expressed with a JavaScript-syntax regex (pattern=""[0-9][A-Z]{3}"") and the like. New attributes such as required, autofocus, input type=number min=3 max=50 remove much of the tedious JavaScript from form validation. Other, really exciting input types are available (see all input types). The datalist is reminiscent of a select box, but allows the user to enter their own text if they don’t want to choose one of the pre-defined options. input type=range is rendered as a slider, while input type=date pops up a date picker, all natively in the browser with no JavaScript required at all. Currently, support is most complete in an experimental implementation in Opera and a number of the new attributes in Webkit-based browsers. But don’t let that stop you! The clever thing about the specification of the new Web Forms is that all the new input types are attributes (rather than elements). input defaults to input type=text, so if a browser doesn’t understand a new HTML5 type, it gracefully degrades to a plain text input. So where does that leave validation in those browsers that don’t support Web Forms? The answer is that you don’t retire your pre-existing JavaScript validation just yet, but you leave it as a fallback after doing some feature detection. To detect whether (say) input type=email is supported, you make a new input type=email with JavaScript but don’t add it to the page. Then, you interrogate your new element to find out what its type attribute is. If it’s reported back as “email”, then the browser supports the new feature, so let it do its work and don’t bring in any JavaScript validation. If it’s reported back as “text”, it’s fallen back to the default, indicating that it’s not supported, so your code should branch to your old validation routines. Alternatively, use the small (7K) Modernizr library which will do this work for you and give you JavaScript booleans like Modernizr.inputtypes[email] set to true or false. So what does this buy you? Well, first and foremost, you’re future-proofing your code for that time when all browsers support these hugely useful additions to forms. Secondly, you buy a usability and accessibility win. Although it’s tempting to style the stuffing out of your form fields (which can, incidentally, lead to madness), whatever your branding people say, it’s better to leave forms as close to the browser defaults as possible. 
A browser’s slider and date pickers will be the same across different sites, making it much more comprehensible to users. And, by using native controls rather than faking sliders and date pickers with JavaScript, your forms are much more likely to be accessible to users of assistive technology. HTML5 DOCTYPE You can use the new DOCTYPE !doctype html now and – hey presto – you’re writing HTML5, as it’s pretty much a superset of HTML4. There are some useful advantages to doing this. The first is that the HTML5 validator (I use http://html5.validator.nu) also validates ARIA information, whereas the HTML4 validator doesn’t, as ARIA is a new spec developed after HTML4. (Actually, it’s more accurate to say that it doesn’t validate your ARIA attributes, but it doesn’t automatically report them as an error.) Another advantage is that HTML5 allows tabindex as a global attribute (that is, on any element). Although originally designed as an accessibility bolt-on, I ordinarily advise you don’t use it; a well-structured page should provide a logical tab order through links and form fields already. However, tabindex=""-1"" is a legal value in HTML5 as it allows for the element to be programmatically focussable by JavaScript. It’s also very useful for correcting a bug in Internet Explorer when used with a keyboard; in-page links go nowhere if the destination doesn’t have a proprietary property called hasLayout set or a tabindex of -1. So, whether it is the tool of Satan or yule of Santa, HTML5 is just around the corner. Some you can use now, and by the end of 2010 I predict you’ll be able to use a whole lot more as new browser versions are released.",2009,Bruce Lawson,brucelawson,2009-12-05T00:00:00+00:00,https://24ways.org/2009/html5-tool-of-satan-or-yule-of-santa/,code 182,Breaking Out The Edges of The Browser,"HTML5 contains more than just the new entities for a more meaningful document, it also contains an arsenal of JavaScript APIs. So many in fact, that some APIs have outgrown the HTML5 spec’s backyard and have been sent away to grow up all on their own and been given the prestigious honour of being specs in their own right. So when I refer to (bendy finger quote) “HTML5”, I mean the HTML5 specification and a handful of other specifications that help us authors build web applications. Examples of those specs I would include in the umbrella term would be: geolocation, web storage, web databases, web sockets and web workers, to name a few. For all you guys and gals, on this special 2009 series of 24 ways, I’m just going to focus on data storage and offline applications: boldly taking your browser where no browser has gone before! Web Storage The Web Storage API is basically cookies on steroids, a unhealthy dosage of steroids. Cookies are always a pain to work with. First of all you have the problem of setting, changing and deleting them. Typically solved by Googling and blindly relying on PPK’s solution. If that wasn’t enough, there’s the 4Kb limit that some of you have hit when you really don’t want to. The Web Storage API gets around all of the hoops you have to jump through with cookies. 
Storage supports around 5Mb of data per domain (the spec’s recommendation, but it’s open to the browsers to implement anything they like) and splits in to two types of storage objects: sessionStorage – available to all pages on that domain while the window remains open localStorage – available on the domain until manually removed Support Ignoring beta browsers for our support list, below is a list of the major browsers and their support for the Web Storage API: Latest: Internet Explorer, Firefox, Safari (desktop & mobile/iPhone) Partial: Google Chrome (only supports localStorage) Not supported: Opera (as of 10.10) Usage Both sessionStorage and localStorage support the same interface for accessing their contents, so for these examples I’ll use localStorage. The storage interface includes the following methods: setItem(key, value) getItem(key) key(index) removeItem(key) clear() In the simple example below, we’ll use setItem and getItem to store and retrieve data: localStorage.setItem('name', 'Remy'); alert( localStorage.getItem('name') ); Using alert boxes can be a pretty lame way of debugging. Conveniently Safari (and Chrome) include database tab in their debugging tools (cmd+alt+i), so you can get a visual handle on the state of your data: Viewing localStorage As far as I know only Safari has this view on stored data natively in the browser. There may be a Firefox plugin (but I’ve not found it yet!) and IE… well that’s just IE. Even though we’ve used setItem and getItem, there’s also a few other ways you can set and access the data. In the example below, we’re accessing the stored value directly using an expando and equally, you can also set values this way: localStorage.name = ""Remy""; alert( localStorage.name ); // shows ""Remy"" The Web Storage API also has a key method, which is zero based, and returns the key in which data has been stored. This should also be in the same order that you set the keys, for example: alert( localStorage.getItem(localStorage.key(0)) ); // shows ""Remy"" I mention the key() method because it’s not an unlikely name for a stored value. This can cause serious problems though. When selecting the names for your keys, you need to be sure you don’t take one of the method names that are already on the storage object, like key, clear, etc. As there are no warnings when you try to overwrite the methods, it means when you come to access the key() method, the call breaks as key is a string value and not a function. You can try this yourself by creating a new stored value using localStorage.key = ""foo"" and you’ll see that the Safari debugger breaks because it relies on the key() method to enumerate each of the stored values. Usage Notes Currently all browsers only support storing strings. This also means if you store a numeric, it will get converted to a string: localStorage.setItem('count', 31); alert(typeof localStorage.getItem('count')); // shows ""string"" This also means you can’t store more complicated objects natively with the storage objects. To get around this, you can use Douglas Crockford’s JSON parser (though Firefox 3.5 has JSON parsing support baked in to the browser – yay!) 
json2.js to convert the object to a stringified JSON object: var person = { name: 'Remy', height: 'short', location: 'Brighton, UK' }; localStorage.setItem('person', JSON.stringify(person)); alert( JSON.parse(localStorage.getItem('person')).name ); // shows ""Remy"" Alternatives There are a few solutions out there that provide storage solutions that detect the Web Storage API, and if it’s not available, fall back to different technologies (for instance, using a flash object to store data). One comprehensive version of this is Dojo’s storage library. I’m personally more of a fan of libraries that plug missing functionality under the same namespace, just as Crockford’s JSON parser does (above). For those interested it what that might look like, I’ve mocked together a simple implementation of sessionStorage. Note that it’s incomplete (because it’s missing the key method), and it could be refactored to not using the JSON stringify (but you would need to ensure that the values were properly and safely encoded): // requires json2.js for all browsers other than Firefox 3.5 if (!window.sessionStorage && JSON) { window.sessionStorage = (function () { // window.top.name ensures top level, and supports around 2Mb var data = window.top.name ? JSON.parse(window.top.name) : {}; return { setItem: function (key, value) { data[key] = value+""""; // force to string window.top.name = JSON.stringify(data); }, removeItem: function (key) { delete data[key]; window.top.name = JSON.stringify(data); }, getItem: function (key) { return data[key] || null; }, clear: function () { data = {}; window.top.name = ''; } }; })(); } Now that we’ve cracked the cookie jar with our oversized Web Storage API, let’s have a look at how we take our applications offline entirely. Offline Applications Offline applications is (still) part of the HTML5 specification. It allows developers to build a web app and have it still function without an internet connection. The app is access via the same URL as it would be if the user were online, but the contents (or what the developer specifies) is served up to the browser from a local cache. From there it’s just an everyday stroll through open web technologies, i.e. you still have access to the Web Storage API and anything else you can do without a web connection. For this section, I’ll refer you to a prototype demo I wrote recently of a contrived Rubik’s cube (contrived because it doesn’t work and it only works in Safari because I’m using 3D transforms). Offline Rubik’s cube Support Support for offline applications is still fairly limited, but the possibilities of offline applications is pretty exciting, particularly as we’re seeing mobile support and support in applications such as Fluid (and I would expect other render engine wrapping apps). Support currently, is as follows: Latest: Safari (desktop & mobile/iPhone) Sort of: Firefox‡ Not supported: Internet Explorer, Opera, Google Chrome ‡ Firefox 3.5 was released to include offline support, but in fact has bugs where it doesn’t work properly (certainly on the Mac), Minefield (Firefox beta) has resolved the bug. Usage The status of the application’s cache can be tested from the window.applicationCache object. However, we’ll first look at how to enable your app for offline access. 
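The page side of the hook-up is a single attribute on the html element, along these lines (the path is an assumption):

<!-- the manifest attribute points the page at its cache manifest; path is illustrative -->
<html manifest="/demo/rubiks/offline.manifest">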
You need to create a manifest file, which will tell the browser what to cache, and then we point our web page to that cache: For the manifest to be properly read by the browser, your server needs to serve the .manifest files as text/manifest by adding the following to your mime.types: text/cache-manifest manifest Next we need to populate our manifest file so the browser can read it: CACHE MANIFEST /demo/rubiks/index.html /demo/rubiks/style.css /demo/rubiks/jquery.min.js /demo/rubiks/rubiks.js # version 15 The first line of the manifest must read CACHE MANIFEST. Then subsequent lines tell the browser what to cache. The HTML5 spec recommends that you include the calling web page (in my case index.html), but it’s not required. If I didn’t include index.html, the browser would cache it as part of the offline resources. These resources are implicitly under the CACHE namespace (which you can specify any number of times if you want to). In addition, there are two further namespaces: NETWORK and FALLBACK. NETWORK is a whitelist namespace that tells the browser not to cache this resource and always try to request it through the network. FALLBACK tells the browser that whilst in offline mode, if the resource isn’t available, it should return the fallback resource. Finally, in my example I’ve included a comment with a version number. This is because once you include a manifest, the only way you can tell the browser to reload the resources is if the manifest contents changes. So I’ve included a version number in the manifest which I can change forcing the browser to reload all of the assets. How it works If you’re building an app that makes use of the offline cache, I would strongly recommend that you add the manifest last. The browser implementations are very new, so can sometimes get a bit tricky to debug since once the resources are cached, they really stick in the browser. These are the steps that happen during a request for an app with a manifest: Browser: sends request for your app.html Server: serves all associated resources with app.html – as normal Browser: notices that app.html has a manifest, it re-request the assets in the manifest Server: serves the requested manifest assets (again) Browser: window.applicationCache has a status of UPDATEREADY Browser: reloads Browser: only request manifest file (which doesn’t show on the net requests panel) Server: responds with 304 Not Modified on the manifest file Browser: serves all the cached resources locally What might also add confusion to this process, is that the way the browsers work (currently) is if there is a cache already in place, it will use this first over updated resources. So if your manifest has changed, the browser will have already loaded the offline cache, so the user will only see the updated on the next reload. This may seem a bit convoluted, but you can also trigger some of this manually through the applicationCache methods which can ease some of this pain. If you bind to the online event you can manually try to update the offline cache. If the cache has then updated, swap the updated resources in to the cache and the next time the app loads it will be up to date. You could also prompt your user to reload the app (which is just a refresh) if there’s an update available. 
For example (though this is just pseudo code): addEvent(applicationCache, 'updateready', function () { applicationCache.swapCache(); tellUserToRefresh(); }); addEvent(window, 'online', function () { applicationCache.update(); }); Breaking out of the Browser So that’s two different technologies that you can use to break out of the traditional browser/web page model and get your apps working in a more application-ny way. There’s loads more in the HTML5 and non-HTML5 APIs to play with, so take your Christmas break to check them out!",2009,Remy Sharp,remysharp,2009-12-02T00:00:00+00:00,https://24ways.org/2009/breaking-out-the-edges-of-the-browser/,code 185,Make Your Mockup in Markup,"We aren’t designing copies of web pages, we’re designing web pages. Andy Clarke, via Quotes on Design The old way I used to think the best place to design a website was in an image editor. I’d create a pixel-perfect PSD filled with generic content, send it off to the client, go through several rounds of revisions, and eventually create the markup. Does this process sound familiar? You’re not alone. In a very scientific and official survey I conducted, close to 90% of respondents said they design in Photoshop before the browser. That process is whack, yo! Recently, thanks in large part to the influence of design hero Dan Cederholm, I’ve come to the conclusion that a website’s design should begin where it’s going to live: in the browser. Die Photoshop, die Some of you may be wondering, “what’s so bad about using Photoshop for the bulk of my design?” Well, any seasoned designer will tell you that working in Photoshop is akin to working in a minefield: you never know when it’s going to blow up in your face. The application Adobe Photoshop CS4 has unexpectedly ruined your day. Photoshop’s propensity to crash at crucial moments is a running joke in the industry, as is its barely usable interface. And don’t even get me started on the hot, steaming pile of crap that is text rendering. Text rendered in Photoshop (left) versus Safari (right). Crashing and text rendering issues suck, but we’ve learned to live with them. The real issue with using Photoshop for mockups is the expectations you’re setting for a client. When you send the client a static image of the design, you’re not giving them the whole picture — they can’t see how a fluid grid would function, how the design will look in a variety of browsers, basic interactions like :hover effects, or JavaScript behaviors. For more on the disadvantages to showing clients designs as images rather than websites, check out Andy Clarke’s Time to stop showing clients static design visuals. A necessary evil? In the past we’ve put up with Photoshop because it was vital to achieving our beloved rounded corners, drop shadows, outer glows, and gradients. However, with the recent adaptation of CSS3 in major browsers, and the slow, joyous death of IE6, browsers can render mockups that are just as beautiful as those created in an image editor. With the power of RGBA, text-shadow, box-shadow, border-radius, transparent PNGs, and @font-face combined, you can create a prototype that radiates shiny awesomeness right in the browser. If you can see this epic article through to the end, I’ll show you step by step how to create a gorgeous mockup using mostly markup. Get started by getting naked Content precedes design. Design in the absence of content is not design, it’s decoration. Jeffrey Zeldman In the beginning, don’t even think about style. Instead, start with the foundation: the content. 
Lay the groundwork for your markup order, and ensure that your design will be useable with styles and images turned off. This is great for prioritizing the content, and puts you on the right path for accessibility and search engine optimization. Not a bad place to start, amirite? An example of unstyled content, in all its naked glory. View it large. Flush out the layout The next step is to structure the content in a usable way. With CSS, making basic layout changes is as easy as switching up a float, so experiment to see what structure suits the content best. The mockup with basic layout work done. Got your grids covered There are a variety of tools that allow you to layer a grid over your browser window. For Mac users I recommend using Slammer, and PC users can check out one of the bookmarklets that are available, such as 960 Gridder. The mockup with a grid applied using Slammer. Once you’ve found a layout that works well for the content, pass it along to the client for review. This keeps them involved in the design process, and gives them an idea of how the site will be structured when it’s live. Start your styling Now for the fun part: begin applying the presentation layer. Let usability considerations drive your decisions about color and typography; use highlighted colors and contrasting typefaces on elements you wish to emphasize. RGBA? More like RGByay! Introducing color is easy with RGBA. I like to start with the page’s main color, then use white at varying opacities to empasize content sections. In the example mockup the body background is set to rgba(203,111,21), the content containers are set to rgba(255,255,255,0.7), and a few elements are highlighted with rgba(255,255,255,0.1) If you’re not sure how RGBA works, check out Drew McLellan’s super helpful 24ways article. Laying down type Just like with color, you can use typography to evoke a feeling and direct a user’s attention. Have contrasting typefaces (like serif headlines and sans-serif body text) to group the content into meaningful sections. In a recent A List Apart article, the Master of Web Typography™ Jason Santa Maria offers excellent advice on how to choose your typefaces: Write down a general description of the qualities of the message you are trying to convey, and then look for typefaces that embody those qualities. Sounds pretty straightforward. I wanted to give my design a classic feel with a hint of nostalgia, so I used Georgia for the headlines, and incorporated the ornate ampersand from Baskerville into the header. A closeup on the site’s header. Let’s get sexy The design doesn’t look too bad as it is, but it’s still pretty flat. We can do better, and after mixing in some CSS3 and a couple of PNGs, it’s going to get downright steamy in here. Give it some glow Objects in the natural world reflect light, so to make your design feel tangible and organic, give it some glow. In the example design I achieved this by creating two white to transparent gradients of varying opacities. Both have a solid white border across their top, which gives edges a double border effect and makes them look sharper. Using CSS3’s text-shadow on headlines and box-shadow on content modules is another quick way to add depth. A wide and closeup view of the design with gradients, text-shadow and box-shadow added. For information on how to implement text-shadow and box-shadow using RGBA, check out the article I wrote on it last week. 
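To give a feel for the kind of rules involved, a sketch might look like this (the selectors and values are illustrative, not the ones from the example design):

/* illustrative values only */
.content-box {
  background: rgba(255, 255, 255, 0.7);
  box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3);
}
h2 {
  text-shadow: 0 1px 0 rgba(255, 255, 255, 0.5);
}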
37 pieces of flair Okay, maybe you don’t need that much flair, but it couldn’t hurt to add a little; it’s the details that will set your design apart. Work in imagery and texture, using PNGs with an alpha channel so you can layer images and still tweak the color later on. The design with grungy textures, a noisy diagonal stripe pattern, and some old transportation images layered behind the text. Because the colors are rendered using RGBA, these images bleed through the content, giving the design a layered feel. Best viewed large. Send it off Hey, look at that. You’ve got a detailed, well structured mockup for the client to review. Best of all, your markup is complete too. If the client approves the design at this stage, your template is practically finished. Bust out the party hats! Not so fast, Buster! So I don’t know about you, but I’ve never gotten a design past the client’s keen eye for criticism on the first go. Let’s review some hypothetical feedback (none of which is too outlandish, in my experience), and see how we’d make the requested changes in the browser. Updating the typography My ex-girlfriend loved Georgia, so I never want to see it again. Can we get rid of it? I want to use a font that’s chunky and loud, just like my stupid ex-girlfriend. Fakey McClient Yikes! Thankfully with CSS, removing Georgia is as easy as running a find and replace on the stylesheet. In my revised mockup, I used @font-face and League Gothic on the headlines to give the typography the, um, unique feel the client is looking for. The same mockup, using @font-face on the headlines. If you’re unfamiliar with implementing @font-face, check out Nice Web Type‘s helpful article. Adding rounded corners I’m not sure if I’ll like it, but I want to see what it’d look like with rounded corners. My cousin, a Web 2.0 marketing guru, says they’re trendy right now. Fakey McClient Switching to rounded corners is a nightmare if you’re doing your mockup in Photoshop, since it means recreating most of the shapes and UI elements in the design. Thankfully, with CSS border-radius comes to our rescue! By applying this gem of a style to a handful of classes, you’ll be rounded out in no time. The mockup with rounded corners, created using border-radius. If you’re not sure how to implement border-radius, check out CSS3.info‘s quick how-to. Making changes to the color The design is too dark, it’s depressing! They call it ‘the blues’ for a reason, dummy. Can you try using a brighter color? I want orange, like Zeldman uses. Fakey McClient Making color changes is another groan-inducing task when working in Photoshop. Finding and updating every background layer, every drop shadow, and every link can take forever in a complex PSD. However, if you’ve done your mockup in markup with RGBA and semi-transparent PNGs, making changes to your color is as easy as updating the body background and a few font colors. The mockup with an orange color scheme. Best viewed large. Ahem, what about Internet Explorer? Gee, thanks for reminding me, buzzkill. Several of the CSS features I’ve suggested you use, such as RGBA, text-shadow and box-shadow, and border-radius, are not supported in Internet Explorer. I know, it makes me sad too. However, this doesn’t mean you can’t try these techniques out in your markup based mockups. The point here is to get your mockups done as efficiently as possible, and to keep the emphasis on markup from the very beginning. 
Once the design is approved, you and the client have to decide if you can live with the design looking different in different browsers. Is it so bad if some users get to see drop shadows and some don’t? Or if the rounded corners are missing for a portion of your audience? The design won’t be broken for IE people, they’re just missing out on a few visual treats that other users will see. The idea of rewarding users who choose modern browsers is not a new concept; Dan covers it thoroughly in Handcrafted CSS, and it’s been written about in the past by Aaron Gustafson and Andy Clarke on several occasions. I believe we shouldn’t have to design for the lowest common denominator (cough, IE6 users, cough); instead we should create designs that are beautiful in modern browsers, but still degrade nicely for the other guy. However, some clients just aren’t that progressive, and in that case you can always use background images for drop shadows and rounded corners, as you have in the past. Closing thoughts With the advent of CSS3, browsers are just as capable of giving us beautiful, detailed mockups as Photoshop, and in half the time. I’m not the only one to make an argument for this revised process; in his article Time to stop showing clients static design visuals, and in his presentation Walls Come Tumbling Down, Andy Clarke makes a fantastic case for creating your mockups in markup. So I guess my challenge to you for 2010 is to get out of Photoshop and into the code. Even if the arguments for designing in the browser aren’t enough to make you change your process permanently, it’s worthwhile to give it a try. Look at the New Year as a time to experiment; applying constraints and evaluating old processes can do wonders for improving your efficiency and creativity.",2009,Meagan Fisher,meaganfisher,2009-12-24T00:00:00+00:00,https://24ways.org/2009/make-your-mockup-in-markup/,process 186,The Web Is Your CMS,"It is amazing what you can do these days with the services offered on the web. Flickr stores terabytes of photos for us and converts them automatically to all kind of sizes, finds people in them and even allows us to edit them online. YouTube does almost the same complete job with videos, LinkedIn allows us to maintain our CV, Delicious our bookmarks and so on. We don’t have to do these tasks ourselves any more, as all of these systems also come with ways to use the data in the form of Application Programming Interfaces, or APIs for short. APIs give us raw data when we send requests telling the system what we want to get back. The problem is that every API has a different idea of what is a simple way of accessing this data and in which format to give it back. Making it easier to access APIs What we need is a way to abstract the pains of different data formats and authentication formats away from the developer — and this is the purpose of the Yahoo Query Language, or YQL for short. Libraries like jQuery and YUI make it easy and reliable to use JavaScript in browsers (yes, even IE6) and YQL allows us to access web services and even the data embedded in web documents in a simple fashion – SQL style. 
Select * from the web and filter it the way I want YQL is a web service that takes a few inputs itself: A query that tells it what to get, update or access An output format – XML, JSON, JSON-P or JSON-P-X A callback function (if you defined JSON-P or JSON-P-X) You can try it out yourself – check out this link to get back Flickr photos for the search term ‘santa’*%20from%20flickr.photos.search%20where%20text%3D%22santa%22&format=xml in XML format. The YQL query for this is select * from flickr.photos.search where text=""santa"" The easiest way to take your first steps with YQL is to look at the console. There you get sample queries, access to all the data sources available to you and you can easily put together complex queries. In this article, however, let’s use PHP to put together a web page that pulls in Flickr photos, blog posts, Videos from YouTube and latest bookmarks from Delicious. Check out the demo and get the source code on GitHub. query->results->results; /* YouTube output */ $youtube = '
<ul>';
foreach ($results[0]->item as $r) {
    $cleanHTML = undoYouTubeMarkupCrimes($r->description);
    $youtube .= '<li>' . $cleanHTML . '</li>';
}
$youtube .= '</ul>
    '; /* Flickr output */ $flickr = ''; /* Delicious output */ $delicious = ''; /* Blog output */ $blog = ''; function undoYouTubeMarkupCrimes($str){ $cleaner = preg_replace('/555px/','100%',$str); $cleaner = preg_replace('/width=""[^""]+""/','',$cleaner); $cleaner = preg_replace('//','',$cleaner); return $cleaner; } ?> What we are doing here is create a few different YQL statements and queue them together with the query.multi table. Each of these can be run inside YQL itself. Check out the YouTube, Flickr, Delicious and Blog example in the console if you don’t believe me. The benefit of using this table is that we don’t make individual requests for each query but we get all the data in one single request – which means a much better performing solution as the YQL server farm is faster on the web than our servers. We point the query to the YQL web service end point and get the resulting data using cURL. All that we need to do then is to convert the returned data to HTML lists that can be printed out inside an HTML template. Mixing, matching and using HTML as a data source This was a simple example of what YQL can do for you. Where it gets really powerful however is by mixing and matching different APIs. YQL is also a good tool to get information from HTML documents. By using the html table you can load the content of an HTML document (which gets fixed automatically by HTMLTidy) and use XPATH to filter down results to what you need. Take the following example which takes headlines from the news.bbc.co.uk homepage and runs the results through Yahoo’s Term Extractor API to give you a list of currently hot topics. select * from search.termextract where context in ( select content from html where url=""http://news.bbc.co.uk"" and xpath=""//table[@width=800]//a"" ) Try it out in the console or see the results here. In English, this means: Go to http://news.bbc.co.uk and get me the HTML Run it through HTML Tidy to clean it up. Get me only the links inside the table with an attribute of width and the value 800 Get only the content of the link and for each of the links Take the content and send it as context to the Yahoo Term Extractor API If we choose JSON-P as the output format we can use the outcome directly in JavaScript (see this demo or see its source):
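A minimal sketch of that JavaScript approach (the callback name, element id and query string are assumptions rather than the demo's actual code) could look like this:

<script type="text/javascript">
  // callback named in the YQL request URL via &callback=hottopics (name assumed)
  function hottopics(data) {
    var topics = data.query.results.Result;
    document.getElementById('topics').innerHTML =
      '<ul><li>' + topics.join('</li><li>') + '</li></ul>';
  }
</script>
<script type="text/javascript"
  src="http://query.yahooapis.com/v1/public/yql?q=...&format=json&callback=hottopics"></script>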
      Using JSON, we can also use PHP which means the demo works for everybody – not only those with JavaScript enabled (see this demo or see its source):
<ul>
  <li><?php // $data holds the decoded YQL response (variable name assumed)
  $topics = ($data->query->results->Result); echo join('</li><li>', $topics); ?></li>
</ul>
      Summary This article could only scratch the surface of YQL. You have not only read access to the web but you can also write to web services. For example you can update Twitter, post to your WordPress blog or shorten a URL with bit.ly. Using Open Tables you can add any web service to the YQL interface and you can even run server-side JavaScript which is for example useful to return Flickr photos as HTML or get the HTML content from a document that needs POST data. The web of data is already here, and using YQL you don’t have to be a web services expert to use it and be part of it.",2009,Christian Heilmann,chrisheilmann,2009-12-17T00:00:00+00:00,https://24ways.org/2009/the-web-is-your-cms/,code 188,Don't Lose Your :focus,"For many web designers, accessibility conjures up images of blind users with screenreaders, and the difficulties in making sites accessible to this particular audience. Of course, accessibility covers a wide range of situations that go beyond the extreme example of screenreader users. And while it’s true that making a complex site accessible can often be a daunting prospect, there are also many small things that don’t take anything more than a bit of judicious planning, are very easy to test (without having to buy expensive assistive technology), and can make all the difference to certain user groups. In this short article we’ll focus on keyboard accessibility and how careless use of CSS can potentially make your sites completely unusable. Keyboard Access Users who for whatever reason can’t use a mouse will employ a keyboard (or keyboard-like custom interface) to navigate around web pages. By default, they will use TAB and SHIFT + TAB to move from one focusable element (links, form controls and area) of a page to the next. Note: in OS X, you’ll first need to turn on full keyboard access under System Preferences > Keyboard and Mouse > Keyboard Shortcuts. Safari under Windows needs to have the option Press Tab to highlight each item on a webpage in Preferences > Advanced enabled. Opera is the odd one out, as it has a variety of keyboard navigation options – the most relevant here being spatial navigation via Shift+Down, Shift+Up, Shift+Left, and Shift+Right). But I Don’t Like Your Dotted Lines… To show users where they are within a page, browsers place an outline around the element that currently has focus. The “problem” with these default outlines is that some browsers (Internet Explorer and Firefox) also display them when a user clicks on a focusable element with the mouse. Particularly on sites that make extensive use of image replacement on links with “off left” techniques this can create very unsightly outlines that stretch from the replaced element all the way to the left edge of the browser. Outline bleeding off to the left (image-replacement example from carsonified.com) There is a trivial workaround to prevent outlines from “spilling over” by adding a simple overflow:hidden, which keeps the outline in check around the clickable portion of the image-replaced element itself. Outline tamed with overflow:hidden But for many designers, even this is not enough. As a final solution, many actively suppress outlines altogether in their stylesheets. Controversially, even Eric Meyer’s popular reset.css – an otherwise excellent set of styles that levels the playing field of varying browser defaults – suppresses outlines. html, body, div, span, applet, object, iframe ... { ... outline: 0; ... } /* remember to define focus styles! 
*/ :focus { outline: 0; } Yes, in his explanation (and in the CSS itself) Eric does remind designers to define relevant styles for :focus… but judging by the number of sites that seem to ignore this (and often remove the related comment from the stylesheet altogether), the message doesn’t seem to have sunk in. Anyway… hurrah! No more unsightly dotted lines on our lovely design. But what about keyboard users? Although technically they can still TAB from one element to the next, they now get no default cue as to where they are within the page (one notable exception here is Opera, where the outline is displayed regardless of stylesheets)… and if they’re Safari users, they won’t even get an indication of a link’s target in the status bar, like they would if they hovered over it with the mouse. Only Suppress outline For Mouse Users Is there a way to allow users navigating with the keyboard to retain the standard outline behaviour they’ve come to expect from their browser, while also ensuring that it doesn’t show display for mouse users? Testing some convoluted style combinations After playing with various approaches (see Better CSS outline suppression for more details), the most elegant solution also seemed to be the simplest: don’t remove the outline on :focus, do it on :active instead – after all, :active is the dynamic pseudo-class that deals explicitly with the styles that should be applied when a focusable element is clicked or otherwise activated. a:active { outline: none; } The only minor issues with this method: if a user activates a link and then uses the browser’s back button, the outline becomes visible. Oh, and old versions of Internet Explorer notoriously get confused by the exact meaning of :focus, :hover and :active, so this method fails in IE6 and below. Personally, I can live with both of these. Note: at the last minute before submitting this article, I discovered a fatal flaw in my test. It appears that outline still manages to appear in the time between activating a link and the link target loading (which in hindsight is logical – after activation, the link does indeed receive focus). As my test page only used in-page links, this issue never came up before. The slightly less elegant solution is to also suppress the outline on :hover. a:hover, a:active { outline: none; } In Conclusion Of course, many web designers may argue that they know what’s best, even for their keyboard-using audience. Maybe they’ve removed the default outline and are instead providing some carefully designed :focus styles. If they know for sure that these custom styles are indeed a reliable alternative for their users, more power to them… but, at the risk of sounding like Jakob “blue underlined links” Nielsen, I’d still argue that sometimes the default browser behaviours are best left alone. Complemented, yes (and if you’re already defining some fancy styles for :hover, by all means feel free to also make them display on :focus)… but not suppressed.",2009,Patrick Lauke,patricklauke,2009-12-09T00:00:00+00:00,https://24ways.org/2009/dont-lose-your-focus/,code 192,Cleaner Code with CSS3 Selectors,"The parts of CSS3 that seem to grab the most column inches on blogs and in articles are the shiny bits. Rounded corners, text shadow and new ways to achieve CSS layouts are all exciting and bring with them all kinds of possibilities for web design. However what really gets me, as a developer, excited is a bit more mundane. 
In this article I’m going to take a look at some of the ways our front and back-end code will be simplified by CSS3, by looking at the ways we achieve certain visual effects now in comparison to how we will achieve them in a glorious, CSS3-supported future. I’m also going to demonstrate how we can use these selectors now with a little help from JavaScript – which can work out very useful if you find yourself in a situation where you can’t change markup that is being output by some server-side code. The wonder of nth-child So why does nth-child get me so excited? Here is a really common situation, the designer would like the tables in the application to look like this: Setting every other table row to a different colour is a common way to enhance readability of long rows. The tried and tested way to implement this is by adding a class to every other row. If you are writing the markup for your table by hand this is a bit of a nuisance, and if you stick a row in the middle you have to change the rows the class is applied to. If your markup is generated by your content management system then you need to get the server-side code to add that class – if you have access to that code. Striping every other row - using classes
      Name   | Cards sent | Cards received | Cards written but not sent
      Ann    | 40         | 28             | 4
      Joe    | 2          | 27             | 29
      Paul   | 5          | 35             | 2
      Louise | 65         | 65             | 0
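      As a rough sketch, the class-based approach the table above illustrates might look something like the markup below – the odd class name and the trimmed-down set of rows are assumptions for illustration, and the background colour is borrowed from the nth-child rule quoted a little later:

      <style>
        /* The purely presentational hook the extra class exists for */
        tr.odd td { background-color: #86B486; }
      </style>

      <table>
        <tr>
          <th>Name</th><th>Cards sent</th><th>Cards received</th><th>Cards written but not sent</th>
        </tr>
        <tr class=""odd"">
          <td>Ann</td><td>40</td><td>28</td><td>4</td>
        </tr>
        <tr>
          <td>Joe</td><td>2</td><td>27</td><td>29</td>
        </tr>
        <tr class=""odd"">
          <td>Paul</td><td>5</td><td>35</td><td>2</td>
        </tr>
        <!-- …and so on, adding class=""odd"" to every other row by hand or from the server-side code -->
      </table>

      Stick a new row into the middle of that and you have to re-jig which rows carry the class – exactly the nuisance described above.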
      View Example 1 This situation is something I deal with on almost every project, and apart from being an extra thing to do, it just isn’t ideal having the server-side code squirt classes into the markup for purely presentational reasons. This is where the nth-child pseudo-class selector comes in. The server-side code creates a valid HTML table for the data, and the CSS then selects the odd rows with the following selector: tr:nth-child(odd) td { background-color: #86B486; } View Example 2 The odd and even keywords are very handy in this situation – however, you can also use a multiplier here. 2n would be equivalent to the keyword ‘even’ (with 2n+1 selecting the odd rows), 3n would select every third row, and so on. Browser support Sadly, nth-child has pretty poor browser support. It is not supported in Internet Explorer 8 and has somewhat buggy support in some other browsers. Firefox 3.5 does have support. In some situations, however, you might want to consider using JavaScript to add this support to browsers that don’t have it. This can be very useful if you are dealing with a Content Management System where you have no ability to change the server-side code to add classes into the markup. I’m going to use jQuery in these examples, as it is very simple to use the same selector that appears in the CSS to target elements with jQuery – however, you could use any library or write your own function to do the same job. In the CSS I have added the original class selector to the nth-child selector: tr:nth-child(odd) td, tr.odd td { background-color: #86B486; } Then I am adding some jQuery to add a class to the markup once the document has loaded – using the very same nth-child selector that works for browsers that support it. View Example 3 We could just add a background colour to the element using jQuery; however, I prefer not to mix that information into the JavaScript – if we change the colour of our table rows, I would need to remember to change it in both the CSS and the JavaScript. Doing something different with the last element So here’s another thing that we often deal with. You have a list of items, all floated left with a right-hand margin on each element, constrained within a fixed-width layout. If each element has the right margin applied, the margin on the final element will cause the set to become too wide, forcing that last item down to the next row – as shown in the example below, where I have used a grey border to indicate the fixed width. Currently we have two ways to deal with this. We can put a negative right margin on the list, the same width as the space between the elements. This means that the extra margin on the final element fills that space and the item doesn’t drop down. The last item is different
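      A minimal sketch of that first, negative-margin solution, purely for illustration – the gallery class name comes from the CSS later in the article, while the wrapper class, the widths and the 10px gutter are assumed values:

      <style>
        /* Fixed-width container – the grey border mentioned in the example */
        .wrapper { width: 590px; border: 1px solid #999; }
        /* A negative right margin the same size as the gutter means the final
           item's margin no longer pushes the set of floats too wide */
        ul.gallery { margin: 0 -10px 0 0; padding: 0; list-style: none; overflow: hidden; }
        ul.gallery li { float: left; width: 110px; margin-right: 10px; }
      </style>

      <div class=""wrapper"">
        <ul class=""gallery"">
          <li><img src=""photo-1.jpg"" alt=""Photo one"" /></li>
          <li><img src=""photo-2.jpg"" alt=""Photo two"" /></li>
          <li><img src=""photo-3.jpg"" alt=""Photo three"" /></li>
          <li><img src=""photo-4.jpg"" alt=""Photo four"" /></li>
          <li><img src=""photo-5.jpg"" alt=""Photo five"" /></li>
        </ul>
      </div>

      Without the -10px, the fifth item’s right margin would take the row to 600px inside a 590px box and push it down onto a new line.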
      View Example 4 The other solution would be to put a class on the final element and, in the CSS, remove the margin for this class. ul.gallery li.last { margin-right: 0; } This second solution may not be easy if the content is generated from server-side code that you don’t have access to change. It could all be so different. In CSS3 we have marvellously common-sense selectors such as last-child, meaning that we can simply add rules for the last list item. ul.gallery li:last-child { margin-right: 0; } View Example 5 This removes the margin on the li which is the last-child of the ul with a class of gallery. No messing about sticking classes on the last item, or pushing the width of the item out with a negative margin. If this list of items repeated ad infinitum, then you could also use nth-child for this task, creating a rule that makes every third element margin-less. ul.gallery li:nth-child(3n) { margin-right: 0; } View Example 6 A similar example is where the designer has added borders to the bottom of each element – but the last item does not have a border or is in some other way different. Again, only a class added to the last element will save you here if you cannot rely on using the last-child selector. Browser support for last-child The situation for last-child is similar to that of nth-child, in that there is no support in Internet Explorer 8. However, once again it is very simple to replicate the functionality using jQuery. Adding our .last class to the last list item: $(""ul.gallery li:last-child"").addClass(""last""); We could also use the nth-child selector to add the .last class to every third list item. $(""ul.gallery li:nth-child(3n)"").addClass(""last""); View Example 7 Fun with forms Styling forms can be a bit of a trial, made difficult by the fact that any CSS applied to the input element will affect text fields, submit buttons, checkboxes and radio buttons. As developers we are left adding classes to our form fields to differentiate them. In most builds, all of my text fields have a simple class of text, whereas I wouldn’t dream of adding a class of para to every paragraph element in a document. Styling form fields

      Send your Christmas list to Santa
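      A rough sketch of the kind of form markup the class-based styling describes – the text and button classes, the fOptIn id and the legend are taken from the article, while the field names, labels and action are assumptions:

      <form method=""post"" action=""#"">
        <fieldset>
          <legend>Send your Christmas list to Santa</legend>
          <div>
            <label for=""fName"">Your name</label>
            <!-- Every text field carries a class of ""text"" purely so it can be styled -->
            <input type=""text"" name=""fName"" id=""fName"" class=""text"" />
          </div>
          <div>
            <label for=""fList"">Your list</label>
            <input type=""text"" name=""fList"" id=""fList"" class=""text"" />
          </div>
          <div>
            <!-- The checkbox label is the one we don't want floated -->
            <label for=""fOptIn"">Send me Santa's newsletter</label>
            <input type=""checkbox"" name=""fOptIn"" id=""fOptIn"" value=""1"" />
          </div>
          <div>
            <input type=""submit"" class=""button"" value=""Send to Santa"" />
          </div>
        </fieldset>
      </form>

      The attribute selector rules that follow target exactly the same controls without needing the text and button classes at all.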

      View Example 8 Attribute selectors provide a way of targeting elements by looking at the attributes of those elements. Unlike the other examples in this article, which are CSS3 selectors, the attribute selector is actually a CSS2.1 selector – it just doesn’t get much use because of the lack of support in Internet Explorer 6. Using attribute selectors we can write rules for text inputs and form buttons without needing to add any classes to the markup. For example, after removing the text and button classes from my text and submit button input elements, I can use the following rules to target them: form input[type=""text""] { border: 1px solid #333; padding: 0.2em; width: 400px; } form input[type=""submit""] { border: 1px solid #333; background-color: #eee; color: #000; padding: 0.1em; } View Example 9 Another problem that I encounter with forms is where I am using CSS to position my labels and form elements by floating the labels. This works fine as long as I want all of my labels to be floated; however, sometimes we get a set of radio buttons or a checkbox, and I don’t want the label to be floated. As you can see in the example below, the label for the checkbox is squashed up into the space used for the other labels, yet it makes more sense for the checkbox to display after the text. I could use a class on this label element; however, CSS3 lets me target the label element directly by looking at the value of its for attribute. label[for=""fOptIn""] { float: none; width: auto; } Being able to precisely target attributes in this way is incredibly useful, and once IE6 is no longer an issue, this will really help to clean up our markup and save us from having to create all kinds of special cases when generating this markup on the server-side. Browser support The news for attribute selectors is actually pretty good, with Internet Explorer 7+, Firefox 2+ and all other modern browsers having support. As I have already mentioned, this is a CSS2.1 selector, so we really should expect to be able to use it as we head into 2010! Internet Explorer 7 has slightly buggy support and will fail on the label example shown above; however, I discovered a workaround in the Sitepoint CSS reference comments. Adding the selector label[htmlFor=""fOptIn""] alongside the correct selector will create a match for IE7. IE6 does not support these selectors but, once again, you can use jQuery to plug the holes in IE6 support. The following jQuery will add the text and button classes to your fields and also add a checks class to the label for the checkbox, which you can use to remove the float and width for this element. $('form input[type=""submit""]').addClass(""button""); $('form input[type=""text""]').addClass(""text""); $('label[for=""fOptIn""]').addClass(""checks""); View Example 10 The selectors I’ve used in this article are easy to overlook, as we do have ways to achieve these things currently. As developers – especially when we have frameworks and existing code that cope with these situations – it is easy to carry on as we always have done. I think that the time has come to start to clean up our front and back-end code and replace our reliance on classes with these more advanced selectors. With the help of a little JavaScript almost all users will still get the full effect and, where we are dealing with purely visual effects, there is definitely a case to be made for not worrying about the very small percentage of people with old browsers and no JavaScript. 
They will still receive a readable website, it may just be missing some of the finesse offered to the modern browsing experience.",2009,Rachel Andrew,rachelandrew,2009-12-20T00:00:00+00:00,https://24ways.org/2009/cleaner-code-with-css3-selectors/,code 217,Beyond Web Mechanics – Creating Meaningful Web Design,"It was just over three years ago when I embarked on becoming a web designer, and the first opinion piece about the state of web design I came across was a conference talk by Elliot Jay Stocks called ‘Destroy the Web 2.0 Look’. Elliot’s presentation was a call to arms, a plea to web designers the world over to stop the endless reproductions of the so called ‘Web 2.0 look’. Three and a half years on from Elliot’s talk, what has changed? Well, from an aesthetic standpoint, not a whole lot. The Web 2.0 look has evolved, but it’s still with us and much of the web remains filled with cookie cutter websites that bear a striking resemblance to one another. This wouldn’t matter so much if these websites were selling comparable services or products, but they’re not. They look similar, they follow the same web design trends; their aesthetic style sends out a very similar message, yet they’re selling completely different services or products. How can you be communicating effectively with your users when your online book store is visually indistinguishable from an online cosmetic store? This just doesn’t make sense. I don’t want to belittle the current version of the Web 2.0 look for the sake of it. I want to talk about the opportunity we have as web designers to create more meaningful experiences for the people using our websites. Using design wisely gives us the ability to communicate messages, ideas and attitudes that our users will understand and connect with. Being human As human beings we respond emotionally to everything around us – people, objects, posters, packaging or websites. We also respond in different ways to different kinds of aesthetic design and style. We care about style and aesthetics deeply, whether we realise it or not. Aesthetic design has the power to attract or repel. We often make decisions based purely on aesthetics and style – and don’t retailers the world over know it! We connect attitudes and strongly held beliefs to style. Individuals will proudly associate themselves with a certain style or aesthetic because it’s an expression of who they are. You know that old phrase, ‘Don’t judge a book by its cover’? Well, the problem is that people do, so it’s important we get the cover right. Much is made of how to structure web pages, how to create a logical information hierarchy, how to use layout and typography to clearly communicate with your users. It’s important, however, not to mistake clarity of information or legibility with getting your message across. Few users actually read websites word by word: it’s far more likely they’ll just scan the page. If the page is copy-heavy and nothing grabs their attention, they may well just move on. This is why it’s so important to create a visual experience that actually means something to the user. Meaningful design When we view a poster or website, we make split-second assessments and judgements of what is in front of us. Our first impressions of what a website does or who it is aimed at are provoked by the style and aesthetic of the website. 
For example, with clever use of colour, typography, graphic design and imagery we can communicate to users that an organisation is friendly, edgy, compassionate, fun or environmentally conscious. Using a certain aesthetic we can convey the personality of that organisation, target age ranges, different sexes or cultural groups, communicate brand attributes, and more. We can make our users feel like they’re part of something and, perhaps even more importantly, we can make new users want to be a part of something. And we can achieve all this before the user has read a single word. By establishing a website’s aesthetic and creating a meaningful visual language, a design is no longer just a random collection of pretty gradients that have been plucked out of thin air. There can be a logic behind the design decisions we make. So, before you slap another generic piece of ribbon or an ultra shiny icon into the top-left corner of your website, think about why you are doing it. If you can’t come up with a reason better than “I saw it on another website”, it’s probably a poor application of style. Design and style There are a number of reasons why the web suffers from a lack of meaningful design. Firstly, there are too many preconceptions of what a website should look like. It’s too easy for designers to borrow styles from other websites, thereby limiting the range of website designs we see on the web. Secondly, many web designers think of aesthetic design as being of secondary importance, which shouldn’t be the case. Designing websites that are accessible and easy to use is, of course, very important, but this is the very least a web designer should be delivering. Easy-to-use websites should come as standard – it’s equally important to create meaningful, compelling and beautiful experiences for everyone who uses our websites. The aesthetics of your site are part of the design, and to ignore this and play down the role of aesthetic design is just a wasted opportunity. No compromise necessary Easy-to-use, accessible websites and beautiful, meaningful aesthetics are not mutually exclusive. The key is to apply style and aesthetic design appropriately. We need to think about who and what we’re designing for and ask ourselves why we’re applying a certain kind of aesthetic style to our design. If you do this, there’s no reason why effective, functional design should come at the expense of jaw-dropping, meaningful aesthetics. Web designers need to understand the differences between functional design and aesthetic design but, even more importantly, they need to know how to make them work together. It’s combining these elements of design successfully that makes for the best web design in the world.",2010,Mike Kus,mikekus,2010-12-05T00:00:00+00:00,https://24ways.org/2010/beyond-web-mechanics-creating-meaningful-web-design/,design 218,Put Yourself in a Corner,"Some backstory, and a shameful confession For the first couple of years of high school I was one of those jerks who made only the minimal required effort in school. Strangely enough, how badly I behaved in a class was always in direct proportion to how skilled I was in the subject matter. In the subjects where I was confident that I could pass without trying too hard, I would give myself added freedom to goof off in class. Because I was a closeted lit-nerd, I was most skilled in English class. 
I’d devour and annotate required reading over the weekend, I knew my biblical and mythological allusions up and down, and I could give you a postmodern interpretation of a text like nobody’s business. But in class, I’d sit in the back and gossip with my friends, nap, or scribble patterns in the margins of my textbooks. I was nonchalant during discussion, I pretended not to listen during lectures. I secretly knew my stuff, so I did well enough on tests, quizzes, and essays. But I acted like an ass, and wasn’t getting the most I could out of my education. The day of humiliation, but also epiphany One day in Ms. Kaney’s AP English Lit class, I was sitting in the back doodling. An earbud was dangling under my sweater hood, attached to the CD player (remember those?) sitting in my desk. Because of this auditory distraction, the first time Ms. Kaney called my name, I barely noticed. I definitely heard her the second time, when she didn’t call my name so much as roar it. I can still remember her five feet frame stomping across the room and grabbing an empty desk. It screamed across the worn tile as she slammed it next to hers. She said, “This is where you sit now.” My face gets hot just thinking about it. I gathered my things, including the CD player (which was now impossible to conceal), and made my way up to the newly appointed Seat of Shame. There I sat, with my back to the class, eye-to-eye with Ms. Kaney. From my new vantage point I couldn’t see my friends, or the clock, or the window. All I saw were Ms. Kaney’s eyes, peering at me over her reading glasses while I worked. In addition to this punishment, I was told that from now on, not only would I participate in class discussions, but I would serve detention with her once a week until an undetermined point in the future. During these detentions, Ms. Kaney would give me new books to read, outside the curriculum, and added on to my normal homework. They ranged from classics to modern novels, and she read over my notes on each book. We’d discuss them at length after class, and I grew to value not only our private discussions, but the ones in class as well. After a few weeks, there wasn’t even a question of this being punishment. It was heaven, and I was more productive than ever. To the point Please excuse this sentimental story. It’s not just about honoring a teacher who cared enough to change my life, it’s really about sharing a lesson. The most valuable education Ms. Kaney gave me had nothing to do with literature. She taught me that I (and perhaps other people who share my special brand of crazy) need to be put in a corner to flourish. When we have physical and mental constraints applied, we accomplish our best work. For those of you still reading, now seems like a good time to insert a pre-emptive word of mediation. Many of you, maybe all of you, are self-disciplined enough that you don’t require the rigorous restrictions I use to maximize productivity. Also, I know many people who operate best in a stimulating and open environment. I would advise everyone to seek and execute techniques that work best for them. But, for those of you who share my inclination towards daydreams and digressions, perhaps you’ll find something useful in the advice to follow. In which I pretend to be Special Agent Olivia Dunham Now that I’m an adult, and no longer have Ms. Kaney to reign me in, I have to find ways to put myself in the corner. 
By rejecting distraction and shaping an environment designed for intense focus, I’m able to achieve improved productivity. Lately I’ve been obsessed with the TV show Fringe, a sci-fi series about an FBI agent and her team of genius scientists who save the world (no, YOU’RE a nerd). There’s a scene in the show where the primary character has to delve into her subconscious to do extraordinary things, and she accomplishes this by immersing herself in a sensory deprivation tank. The premise is this: when enclosed in a space devoid of sound, smell, or light, she will enter a new plane of consciousness wherein she can tap into new levels of perception. This might sound a little nuts, but to me this premise has some real-world application. When I am isolated from distraction, and limited to only the task at hand, I’m able to be productive on a whole new level. Since I can’t actually work in an airtight iron enclosure devoid of input, I find practical ways to create an interruption-free environment. Since I work from home, many of my methods for coping with distractions wouldn’t be necessary for my office-bound counterpart. However for some of you 9-to-5-ers, the principles will still apply. Consider your visual input First, I have to limit my scope to the world I can (and need to) affect. In the largest sense, this means closing my curtains to the chaotic scene of traffic, birds, the post office, a convenience store, and generally lovely weather that waits outside my window. When the curtains are drawn and I’m no longer surrounded by this view, my sphere is reduced to my desk, my TV, and my cat. Sometimes this step alone is enough to allow me to focus. But, my visual input can be whittled down further still. For example, the desk where I usually keep my laptop is littered with twelve owl figurines, a globe, four books, a three-pound weight, and various nerdy paraphernalia (hard drives, Wacom tablets, unnecessary bluetooth accessories, and so on). It’s not so much a desk as a dumping ground for wacky flea market finds and impulse technology buys. Therefore, in addition to this Official Desk, I have an adult version of Ms. Kaney’s Seat of Shame. It’s a rusty old student’s desk I picked up at the Salvation Army, almost an exact replica of the model Ms. Kaney dragged across the classroom all those years ago. This tiny reproduction Seat of Shame is literally in a corner, where my only view is a blank wall. When I truly need to focus, this is where I take refuge, with only a notebook and a pencil (and occasionally an iPad). Find out what works for your ears Even from my limited sample size of two people, I know there are lots of different ways to cope with auditory distraction. I prefer silence when focused on independent work, and usually employ some form of a white noise generator. I’ve yet to opt for the fancy ‘real’ white noise machines; instead, I use a desktop fan or our allergy filter machine. This is usually sufficient to block out the sounds of the dishwasher and the cat, which allows me to think only about the task of hand. My boyfriend, the other half of my extensive survey, swears by another method. He calls it The Wall of Sound, and it’s basically an intense blast of raucous music streamed directly into his head. The outcome of his technique is really the same as mine; he’s blocking out unexpected auditory input. If you can handle the grating sounds of noisy music while working, I suggest you give The Wall of Sound a try. 
Don’t count the minutes When I sat in the original Seat of Shame in lit class, I could no longer see the big classroom clock slowly ticking away the seconds until lunch. Without the marker of time, the class period often flew by. The same is true now when I work; the less aware of time I am, the less it feels like time is passing too quickly or slowly, and the more I can focus on the task (not how long it takes). Nowadays, to assist in my effort to forget the passing of time, I sometimes put a sticky note over the clock on my monitor. If I’m writing, I’ll use an app like WriteRoom, which blocks out everything but a simple text editor. There are situations when it’s not advisable to completely lose track of time. If I’m working on a project with an hourly rate and a tight scope, or if I need to be on time to a meeting or call, I don’t want to lose myself in the expanse of the day. In these cases, I’ll set an alarm that lets me know it’s time to reign myself back in (or on some days, take a shower). Put yourself in a mental corner, too When Ms. Kaney took action and forced me to step up my game, she had the insight to not just change things physically, but to challenge me mentally as well. She assigned me reading material outside the normal coursework, then upped the pressure by requiring detailed reports of the material. While this additional stress was sometimes uncomfortable, it pushed me to work harder than I would have had there been less of a demand. Just as there can be freedom in the limitations of a distraction-free environment, I’d argue there is liberty in added mental constraints as well. Deadlines as a constraint Much has been written about the role of deadlines in the creative process, and they seem to serve different functions in different cases. I find that deadlines usually act as an important constraint and, without them, it would be nearly impossible for me to ever consider a project finished. There are usually limitless ways to improve upon the work I do and, if there’s no imperative for me to be done at a certain point, I will revise ad infinitum. (Hence, the personal site redesign that will never end – Coming Soon, Forever!). But if I have a clear deadline in mind, there’s a point when the obsessive tweaking has to stop. I reach a stage where I have to gather up the nerve to launch the thing. Putting the pro in procrastination Sometimes I’ve found that my tendency to procrastinate can help my productivity. (Ducks, as half the internet throws things at her.) I understand the reasons why procrastination can be harmful, and why it’s usually a good idea to work diligently and evenly towards a goal. I try to divide my projects up in a practical way, and sometimes I even pull it off. But for those tasks where you work aimlessly and no focus comes, or you find that every other to-do item is more appealing, sometimes you’re forced to bring it together at the last moment. And sometimes, this environment of stress is a formula for magic. Often when I’m down to the wire and have no choice but to produce, my mind shifts towards a new level of clarity. There’s no time to endlessly browse for inspiration, or experiment with convoluted solutions that lead nowhere. Obviously a life lived perpetually on the edge of a deadline would be a rather stressful one, so it’s not a state of being I’d advocate for everyone, all the time. But every now and then, the work done when I’m down to the wire is my best. 
Keep one toe outside your comfort zone When I’m choosing new projects to take on, I often seek out work that involves an element of challenge. Whether it’s a design problem that will require some creative thinking, or a coding project that lends itself to using new technology like HTML5, I find a manageable level of difficulty to be an added bonus. The tension that comes from learning a new skill or rethinking an old standby is a useful constraint, as it keeps the work interesting, and ensures that I continue learning. There you have it Well, I think I’ve spilled most of my crazy secrets for forcing my easily distracted brain to focus. As with everything we web workers do, there are an infinite number of ways to encourage productivity. I hope you’ve found a few of these to be helpful, and please share your personal techniques in the comments. Have a happy and productive new year!",2010,Meagan Fisher,meaganfisher,2010-12-20T00:00:00+00:00,https://24ways.org/2010/put-yourself-in-a-corner/,process 221,"“Probably, Maybe, No”: The State of HTML5 Audio","With the hype around HTML5 and CSS3 exceeding levels not seen since 2005’s Ajax era, it’s worth noting that the excitement comes with good reason: the two specifications render many years of feature hacks redundant by replacing them with native features. For fun, consider how many CSS2-based rounded corners hacks you’ve probably glossed over, looking for a magic solution. These days, with CSS3, the magic is border-radius (and perhaps some vendor prefixes) followed by a coffee break. CSS3’s border-radius, box-shadow, text-shadow and gradients, and HTML5’s ,