rowid,title,contents,year,author,author_slug,published,url,topic
8,Coding Towards Accessibility,"“Can we make it AAA-compliant?” – does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means?
I’m not here to talk about that.
The Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we’re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility.
This Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow.
But first, let’s clear up a couple of misconceptions.
Dreary, flat experiences
I recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client’s stringent accessibility requirements I was concerned that I’d have to scale back what was quite a complex set of visual designs.
Sites like Jakob Nielsen’s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Nielsen, his new site nngroup.com is really quite a snazzy affair, comparatively.)
Of course, there are other reasons for these sites’ aesthetics — and it’s not because of the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It’s always our own ingenuity and attention to detail that are going to be the limiting factors.
Synecdoche
We must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered.
There’s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider.
Planning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn’t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. Screen reader emulators are readily available as browser extensions and can be activated in seconds. Chromevox and Fangs are both excellent and you should download one or the other right now.
But the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice.
The mouse trap
The simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse.
Keyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily — just by ditching a peripheral.
Learning this was a lightbulb moment for me. When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made. Now, instead of checking a new element once, I check it twice: once with my mouse and then again without.
Don’t just :hover
The reality is that when you first start doing this you can find your site becomes unusable straightaway. It’s easy to lose track of which element is in focus as you hit the tab key repeatedly.
One of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I’m still amazed at how many sites fail to provide a decent focus state for links (despite previous 24 ways authors writing about this very issue in 2007 and 2009!).
You may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It’s a tiny change and there is no downside. So instead of this:
.my-cool-link:hover {
background-color: MistyRose;
}
…try writing this:
.my-cool-link:hover,
.my-cool-link:focus,
.my-cool-link:active {
background-color: MistyRose;
}
I’ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven’t yet. I worry that people reading my code won’t see that I’m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand.
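If you did go down the mixin route, a minimal sketch might look something like this (the mixin name is invented for the example, and the readability trade-off mentioned above still applies):
@mixin interactive($background) {
  &:hover,
  &:focus,
  &:active {
    background-color: $background;
  }
}
.my-cool-link {
  @include interactive(MistyRose);
}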
JavaScript can play, too
This was another revelation for me. Keyboard-only navigation doesn’t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. So we’re able to create complex JavaScript-driven interfaces which all users can interact with.
Some of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention — these are established keyboard conventions which we can apply to our interfaces to improve their accessibility.
Of course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them.
Second, this guy wants to help you out. Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It’s a fantastic reference which I revisit all the time.
I’m not a huge fan of the jQuery UI library. It’s a pain to style and the code is a bit bloated. So I’ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below).
Coding for accessibility promotes good habits
This is another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements.
Your code has to become more modular and event-driven, because any number of events could trigger the same interaction. A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don’t you?)
If you aren’t already in the habit of separating out your interface functionality into discrete functions, you will be soon.
var doSomethingCool = function(){
// Do something cool here.
};
// Bind function to a button click - pretty vanilla
$('.myCoolButton').on('click', function(){
doSomethingCool();
return false;
});
// Bind the same function to a range of keypresses
$(document).keyup(function(e){
switch(e.keyCode) {
case 13: // enter
case 32: // spacebar
doSomethingCool();
break;
case 27: // escape
doSomethingElse();
break;
}
});
To be honest, if you’re doing complex UI stuff with JavaScript these days, or if you’ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstracted and event-driven application structure will be familiar to you. It should be super easy for you to start helping out your keyboard-only users if you aren’t already — just add a few more event bindings into your UI layer!
Manipulating the tab order
So, you’ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse. You’ve applied all your hover states to :focus and :active so you can see where you’re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right?
There’s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element ‘tabbable’?
A good example would be in an expandable accordion component. Each section of the accordion should be separated by a heading, and there’s no reason to make that heading into a link simply because it’s interactive.
<div class=""accordion-widget"">
    <h3>Tyrannosaurus</h3>
    <p>Tyrannosaurus; meaning ""tyrant lizard""...</p>
    <h3>Utahraptor</h3>
    <p>Utahraptor is a genus of theropod dinosaurs...</p>
    <h3>Dromiceiomimus</h3>
    <p>Ornithomimus is a genus of ornithomimid dinosaurs...</p>
</div>
Adding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process.
$('.accordion-widget h3').attr('tabindex', '0');
You can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to −1, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern — at least on my team!
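For completeness, taking something out of the tab order looks just like the snippet above, only with a negative value (the selector here is invented for the example):
$('.accordion-widget .decorative-link').attr('tabindex', '-1');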
A farewell ARIA
This is where things can get complex, and I’m no expert on the ARIA specification: I feel like I’ve only dipped my toe into this aspect of coding for accessibility. But, as with WCAG, I’d like to demystify things a little bit to encourage you to look into this area further yourself.
ARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements.
Let’s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can’t handle JavaScript will announce all the content within the accordion, no problem.
But modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that’s as far as you have gone, then you’ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using Chromevox:
> Tyrannosaurus. Heading Three.
> Utahraptor. Heading Three.
> Dromiceiomimus. Heading Three.
We can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. The markup could end up looking something like this.
<div class=""accordion-widget"" role=""tablist"">
    ...
    <h3 role=""tab"" tabindex=""0"" aria-selected=""false"">Utahraptor</h3>
    <p role=""tabpanel"" aria-hidden=""true"" aria-expanded=""false"">Utahraptor is a genus of theropod dinosaurs...</p>
    ...
</div>
Now, if a screen reader user encounters this markup they will hear the following:
> Tyrannosaurus. Tab not selected; one of three.
> Utahraptor. Tab not selected; two of three.
> Dromiceiomimus. Tab not selected; three of three.
You could add arrow key events to help the user browse up and down the tab list items until they find one they like.
Your accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this:
> Utahraptor. Selected; two of three.
The paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader.
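Pulling those pieces together, here is a rough sketch of what such an open() function might look like in jQuery. It is not a drop-in component: the is-open class and the exact placement of the ARIA attributes are assumptions for the sake of illustration.
var open = function ($tab) {
    var $widget = $tab.closest('.accordion-widget');
    var $panel = $tab.next('[role=tabpanel]');
    // Reset every tab and panel in the widget first
    $widget.find('[role=tab]').attr('aria-selected', 'false');
    $widget.find('[role=tabpanel]').attr({ 'aria-hidden': 'true', 'aria-expanded': 'false' }).removeClass('is-open');
    // Then mark the chosen tab as selected and reveal its panel
    $tab.attr('aria-selected', 'true');
    $panel.attr({ 'aria-hidden': 'false', 'aria-expanded': 'true' }).addClass('is-open');
};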
This kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based.
Conclusion
Getting complex interfaces right for all of your users can be difficult — there’s no point pretending otherwise. And there’s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That’s a huge area in itself, and it’s one which I have not yet had a chance to research properly.
So, there’s lots to learn, and there’s lots to do to get it right. But don’t be disheartened. If you have read this far then I’ll leave you with one final piece of advice: don’t wait.
Don’t wait until you’re building a site which mandates AAA-compliance to try this stuff out. Don’t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You’ll be surprised at the things that you learn and the issues you uncover.
And the next time a true accessibility project comes along, you will be way ahead of the game.",2013,Charlie Perrins,charlieperrins,2013-12-03T00:00:00+00:00,https://24ways.org/2013/coding-towards-accessibility/,code
11,JavaScript: Taking Off the Training Wheels,"JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it’s understandable that when 24 ways asked, “What one thing do you wish you had more time to learn about?”, a number of you answered “JavaScript!”
This article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can’t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources.
Why learn JavaScript?
So what’s in it for you? Why take the next step and learn the fundamentals?
Confidence with jQuery
If nothing else, learning JavaScript will improve your jQuery code; you’ll be comfortable writing jQuery from scratch and feel happy bending others’ code to your own purposes. Writing efficient, fast and bug-free jQuery is also much easier when you have a good appreciation of JavaScript, because you can understand what jQuery is really doing behind the scenes. When you need to leave the beaten track, you can do so with confidence.
In fact, you could say that jQuery’s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That’s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn’t needed.
An example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here’s a comparison of some jQuery code and the equivalent without.
$('.counter').each(function (index) {
$(this).text(index + 1);
});
var counters = document.querySelectorAll('.counter');
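// Convert the NodeList into a true array so we can use forEach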
[].slice.call(counters).forEach(function (elem, index) {
elem.textContent = index + 1;
});
Solving problems no one else has!
When you have to go to the internet to solve a problem, you’re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has.
Node.js
Node.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don’t worry: the Node community is thriving, very friendly and willing to help.
I think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting!
Here’s an example web server written with Node. http is a module that allows you to create servers and, like jQuery’s $.ajax, make requests. It’s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it’s certainly not out of your reach.
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World');
}).listen(1337);
console.log('Server running at http://localhost:1337/');
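Assuming you saved that code in a file called server.js (the filename is just an example), you would start it from the terminal and then visit http://localhost:1337/ in your browser:
node server.js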
Grunt and other website tools
Node has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. Both of these rely heavily on Node, and I’ll talk a little bit about Grunt here.
Grunt is a task runner, and many people use it for compiling Sass or compressing their site’s JavaScript and images. It’s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer.
Ways to improve your skills
So you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas.
Rebuild a jQuery app
Converting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you’re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way’s jQuery to JavaScript article.
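To give you a flavour of what such a conversion involves, here is a common task, toggling a class when a button is clicked, written first with jQuery and then without it. The class names are invented for the example, and the non-jQuery version assumes a reasonably modern browser:
// jQuery
$('.menu-toggle').on('click', function () {
    $('.menu').toggleClass('open');
});
// Plain JavaScript
document.querySelector('.menu-toggle').addEventListener('click', function () {
    document.querySelector('.menu').classList.toggle('open');
});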
Find a mentor
If you think you’d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I’d look out for someone who’s active and friendly on Twitter, and does the kind of work you’d like to do. Introduce yourself over Twitter or send them an email. I wouldn’t expect a full tutoring course (although that is another option!) but they’ll be very glad to answer a question and any follow-ups every now and then.
Go to a workshop
Many conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there’s one in your area. Workshops are great because you can ask direct questions, and you’re in an environment where others are learning just like you are — no need to learn alone!
Set yourself challenges
This is one way I like to learn new things. I have a new thing that I’m not very good at, so I pick something that I think is just out of my reach and I try to build it. It’s learning by doing and, even if you fail, it can be enormously valuable.
Where to start?
If you’ve decided learning JavaScript is an important step for you, your next question may well be where to go from here.
I’ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want.
Beginner
If you’re just getting started with JavaScript, I’d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They’re all reputable sources (although, I’ve included something I wrote — you can decide about that one!) and will not lead you astray.
jQuery’s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro.
Codecademy’s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you.
HTMLDog’s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!]
The Tuts+ jQuery to JavaScript article mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.
Getting in-depth
For more comprehensive documentation and help I’d recommend adding these places to your list of go-tos.
MDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it’s a great place to just go and browse.
Axel Rauschmayer’s 2ality is a stunning collection of articles that will take you deep into JavaScript. It’s certainly worth looking at.
Addy Osmani’s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications.
And finally…
I think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you’ll get there. I bet you’ll learn a whole lot along the way. Good luck!
Many thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.",2013,Tom Ashworth,tomashworth,2013-12-05T00:00:00+00:00,https://24ways.org/2013/javascript-taking-off-the-training-wheels/,code
15,Git for Grown-ups,"You are a clever and talented person. You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven’t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work.
It’s not you. It’s Git. Promise.
Yes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I’m not going to give you the top five commands that you need to memorize; and I’m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I’ve come to a grand realization: when we teach Git, we’re doing it wrong.
Let me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education:
Adults prefer to know why they are learning something.
The foundation of the learning activities should include experience.
Adults prefer to be able to plan and evaluate their own instruction.
Adults are more interested in learning things which directly impact their daily activities.
Adults prefer learning to be oriented not towards content, but towards problems.
Adults relate more to their own motivators than to external ones.
Nowhere in this list does it include “memorize the five most popular Git commands”. And yet this is how we teach version control: init, add, commit, branch, push. You’re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge.
“Fine,” I can hear you saying to yourself. “But I’m here to learn about version control.” Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it’s yours to control. Git was written by kernel developers for kernel development. The web world has adopted Git, but it is not a tool designed for us and by us. It’s no Sass, y’know? Git wasn’t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We’re in this together, and it’s going to be OK.
We’re going to do a little activity. We’re going to create your perfect Git cheatsheet.
I want you to start by writing down a list of all the people on your code team. This list may include:
developers
designers
project managers
clients
Next, I want you to write down a list of all the ways you interact with your team. Maybe you’re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you’re actually responsible for. For example, my list looks like this:
writing code
reviewing code
publishing tested code to your server(s)
troubleshooting broken code
The next list will end up being a series of boxes in a diagram. But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items:
code hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?)
server ecosystem (dev/staging/live)
automated testing systems or review gates
automated build systems (that Jenkins dude people keep referring to)
Brilliant! Now you’ve got your actors and your actions, it’s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish.
Centralized workflow
Everyone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time.
Branching workflow
Everyone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they’re along for the ride and probably cannot exist in isolation of the main project for long without serious danger coming to either the driver or the sidecar passenger. Branches are a fundamental concept in version control — they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch.
Forking workflow
Everyone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another’s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. This workflow is popular for open source projects as it enforces a review process.
Gitflow workflow
A specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010.
But these workflows aren’t about you yet, are they? So let’s make the connections.
From the list of people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this.
Chances are high that you don’t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there? I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows.
But wait!
The code that’s on your development machine isn’t the same as the live code. This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation to the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed.
On your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch.
Finally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task.
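Written out as plain commands, a typical set of arrows might translate into something like the following. The repository URL and branch name here are placeholders; use whatever your own diagram says:
git clone git@example.com:team/project.git   # the first time you bring the code to your workstation
git checkout development                     # switch to the branch you need
git pull origin development                  # fetch and merge your team's latest work
git push origin development                  # share your own commits back to the origin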
From here, it’s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git’s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. Insist on being a selfish adult learner — demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today.",2013,Emma Jane Westby,emmajanewestby,2013-12-04T00:00:00+00:00,https://24ways.org/2013/git-for-grownups/,code
16,URL Rewriting for the Fearful,"I think it was Marilyn Monroe who said, “If you can’t handle me at my worst, please just fix these rewrite rules, I’m getting an internal server error.” Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from.
The majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable — I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we’re going to go back to basics to try to make the whole rigmarole more understandable.
When we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that’s the most common case, that’s what I’ll be sticking to here. If you work with a different server, there’s often documentation specifically for translating from Apache’s mod_rewrite rules. I even found an automatic converter for nginx.
This isn’t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you’re doing. If you’ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don’t worry. As Michael Jackson once insipidly whined, you are not alone.
The basics
Rewrite rules form part of the Apache web server’s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:
RewriteEngine on
The general formula for a rewrite rule is:
RewriteRule URL/to/match URL/to/use/if/it/matches [options]
When we talk about URL rewriting, we’re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We’ll look at those in turn.
Redirects
Redirects match an incoming URL, and then redirect the user’s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work.
In 1998, Sir Tim Berners-Lee wrote that Cool URIs don’t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it’s fine to move things around — especially to correct bad URL design choices of the past — provided that you can do so while keeping those old URLs working. That’s where redirects can help.
A redirect might look like this
RewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]
Rewriting
By default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file.
A rewrite rule changes that process by breaking the direct relationship between the URL and the file system. “When there’s a request for /about/history.html” a rewrite rule might say, “use the file /about_section.php instead.”
This opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page.
RewriteRule ^for/this/url/$ /use/this/file.php [L]
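As an illustration (this is not lifted from any particular framework, just a common shape for such rules), a catch-all front controller rule often looks like the one below. The two RewriteCond lines, explained later in this article, simply skip requests for real files and folders so that assets are still served directly:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /index.php [L]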
Matching patterns
By now you’ll have noted the weird ^ and $ characters wrapped around the URL we’re trying to match. That’s because what we’re actually using here is a pattern. Technically, it is what’s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We’ll call it a pattern because we’re not animals.
What are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you’d wonder what I wanted your credit card details for, but you’d know that I wanted a two-digit month, a slash, and a two-digit year. That’s not a regular expression, but it’s the same idea: using some placeholder characters to define the pattern of the input you’re trying to match.
We’ve already met two regexp characters.
^
Matches the beginning of a string
$
Matches the end of a string
When a pattern starts with ^ and ends with $ it’s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too:
[0-9]
Matches a number, 0–9. [2-4] would match numbers 2 to 4 inclusive.
[a-z]
Matches lowercase letters a–z
[A-Z]
Matches uppercase letters A–Z
[a-z0-9]
Combining some of these, this matches letters a–z and numbers 0–9
These are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you’re looking for within the brackets, as well as the ranges shown above.
However, all these just match one single character. [0-9] would match 8 but not 84 — to match 84 we’d need to use [0-9] twice.
[0-9][0-9]
So, if we wanted to match 1984 we could to do this:
[0-9][0-9][0-9][0-9]
…but that’s getting silly. Instead, we can do this:
[0-9]{4}
That means any character between 0 and 9, four times. If we wanted to match a number, but didn’t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more.
[0-9]+
This now matches 1, 123 and 1234567.
Putting it into practice
Let’s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. The articles all have URLs like this:
2013/article-title/
They start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We’d match it like this:
^[0-9]{4}/[a-z0-9-]+/$
If that looks frightening, don’t worry. Breaking it down, from the start of the URL (^) we’re looking for four numbers ([0-9]{4}). Then a slash — that’s just literal — and then anything lowercase a–z or 0–9 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$).
Putting that into a rewrite rule, we end up with this:
RewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php
We’re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it’s supposed to display.
Capturing groups, and replacements
When rewriting URLs you’ll often want to take important parts of the URL you’re matching and pass them along to the script that handles the request. That’s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we’re looking for. That means we need to call it like this:
/article.php?year=2013&slug=article-title
To do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what’s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses.
Our pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes:
^([0-9]{4})/([a-z0-9-]+)/$
To use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.)
RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2
The value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern.
Options
Several brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are:
R=301
Perform an HTTP 301 redirect to send the user’s browser to the new URL. A status of 301 means a resource has moved permanently and so it’s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes.
L
Last. If this rule matches, don’t bother processing the following rules.
Options are set in square brackets at the end of the rule. You can set multiple options by separating them with commas:
RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L]
or
RewriteRule ^about/([a-z0-9-]+)\.jsp/$ /about/$1/ [R=301,L]
Common pitfalls
Once you’ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag.
L for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does — the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes.
One way to avoid this problem is to keep your ‘real’ pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules.
Useful snippets
I find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to.
Excluding a directory
As mentioned above, if you’re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won’t be applied.
RewriteRule ^_source - [L]
This is also useful for excluding things like CMS folders from your website’s rewrite rules:
RewriteRule ^perch - [L]
Adding or removing www from the domain
Some folk like to use a www and others don’t. Usually, it’s best to pick one and go with it, and redirect the one you don’t want. On this site, we don’t use www.24ways.org so we redirect those requests to 24ways.org.
This uses a RewriteCond which is like an if for a rewrite rule: “If this condition matches, then apply the following rule.” In this case, it’s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:
RewriteCond %{HTTP_HOST} ^www\.24ways\.org$ [NC]
RewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]
The [NC] flag means ‘no case’ — the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance.
Removing file extensions
Sometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+)$ $1.php [L,QSA]
This says if the file being asked for isn’t a file (!-f) and if it isn’t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means ‘query string append’: append the existing query string onto the rewritten URL.
It’s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL.
Logging for when it all goes wrong
Although not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. This can be useful to track down where a rule is going wrong, if it’s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.
RewriteEngine On
RewriteLog ""/full/system/path/to/rewrite.log""
RewriteLogLevel 5
To be doubly clear: this will not work from an .htaccess file — it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.)
The white screen of death
One of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn’t an error message, of course. It’s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log.
If you have access to your server logs, check the Apache error log and you’ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.)
In conclusion
Rewriting URLs can be a bear, but the advantages are clear. Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future.
If you’re redesigning a site, remember that cool URIs don’t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working.
Further reading
To find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources.
From the horse’s mouth, the Apache mod_rewrite documentation
Particularly useful with that documentation is the RewriteRule Flags listing
You may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial
Friend of 24 ways, Neil Crosby has a mod_rewrite Beginner’s Guide which I’ve found handy over the years.
As noted at the start, this isn’t a fully comprehensive guide, but I hope it’s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? Feel free to share them in the comments.",2013,Drew McLellan,drewmclellan,2013-12-01T00:00:00+00:00,https://24ways.org/2013/url-rewriting-for-the-fearful/,code
18,Grunt for People Who Think Things Like Grunt are Weird and Hard,"Front-end developers are often told to do certain things:
Work in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website.
Compress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website.
Optimize your images to reduce their file size without affecting quality.
Use Sass for CSS authoring because of all the useful abstraction it allows.
That’s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks.
I bet you’ve heard of Grunt. Well, Grunt is a task runner. Grunt can do all of those things for you. Once you’ve got it set up, which isn’t particularly difficult, those things can happen automatically without you having to think about them again.
But let’s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you.
Let’s nip some misconceptions in the bud right away
Perhaps you’ve heard of Grunt, but haven’t done anything with it. I’m sure that applies to many of you. Maybe one of the following hang-ups applies to you.
I don’t need the things Grunt does
You probably do, actually. Check out that list up top. Those things aren’t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that’s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don’t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job.
Grunt runs on Node.js — I don’t know Node
You don’t have to know Node. Just like you don’t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word.
I have other ways to do the things Grunt could do for me
Are they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I’d venture.
Grunt is a command line tool — I’m just a designer
I’m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don’t think that’s going to happen with Grunt.
The extent to which you need to use the command line is:
Navigate to your project’s directory.
Type grunt and press Return.
After set-up, that is, which again isn’t particularly difficult.
OK. Let’s get Grunt installed
Node is indeed a prerequisite for Grunt. If you don’t have Node installed, don’t worry, it’s very easy. You literally download an installer and run it. Click the big Install button on the Node website.
You install Grunt on a per-project basis. Go to your project’s folder. It needs a file there named package.json at the root level. You can just create one and put it there.
package.json at root
The contents of that file should be this:
{
""name"": ""example-project"",
""version"": ""0.1.0"",
""devDependencies"": {
""grunt"": ""~0.4.1""
}
}
Feel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that.
This is how Node does dependencies. Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you’re familiar with that). You could even think of it a bit like a plug-in for WordPress.
Once that package.json file is in place, go to the terminal and navigate to your folder. Terminal rubes like me do it like this:
Terminal rube changing directories
Then run the command:
npm install
After you’ve run that command, a new folder called node_modules will show up in your project.
Example of node_modules folder
The other files you see there, README.md and LICENSE are there because I’m going to put this project on GitHub and that’s just standard fare there.
The last installation step is to install the Grunt CLI (command line interface). That’s what makes the grunt command in the terminal work. Without it, typing grunt will net you a “Command Not Found”-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you’d have ten copies of Grunt CLI.
This is a one-liner again. Just run this command in the terminal:
npm install -g grunt-cli
You should close and reopen the terminal as well. That’s a generic good practice to make sure things are working right. Kinda like restarting your computer after you install a new application was in the olden days.
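If you would like to reassure yourself that everything installed correctly (purely optional), you can ask each tool to print its version number from the terminal:
node --version
npm --version
grunt --version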
Let’s make Grunt concatenate some files
Perhaps in our project there are three separate JavaScript files:
jquery.js – The library we are using.
carousel.js – A jQuery plug-in we are using.
global.js – Our authored JavaScript file where we configure and call the plug-in.
In production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us.
But wait. Grunt actually doesn’t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven’t set up Grunt to do anything yet, so let’s do that.
The official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project’s root folder):
npm install grunt-contrib-concat --save-dev
A neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You’ll see a new line:
""grunt-contrib-concat"": ""~0.3.0""
Now we’re ready to use it. To use it we need to start configuring Grunt and telling it what to do.
You tell Grunt what to do via a configuration file named Gruntfile.js.
Just like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn’t worry about what every word of this means. Just check out the format:
module.exports = function(grunt) {
// 1. All configuration goes here
grunt.initConfig({
pkg: grunt.file.readJSON('package.json'),
concat: {
// 2. Configuration for concatenating files goes here.
}
});
// 3. Where we tell Grunt we plan to use this plug-in.
grunt.loadNpmTasks('grunt-contrib-concat');
// 4. Where we tell Grunt what to do when we type ""grunt"" into the terminal.
grunt.registerTask('default', ['concat']);
};
Now we need to create that configuration. The documentation can be overwhelming. Let’s focus just on the very simple usage example.
Remember, we have three JavaScript files we’re trying to concatenate. We’ll list file paths to them under src in an array of file paths (as quoted strings) and then we’ll list a destination file as dest. The destination file doesn’t have to exist yet. It will be created when this task runs and squishes all the files together.
Both our jquery.js and carousel.js files are libraries. We most likely won’t be touching them. So, for organization, we’ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let’s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website.
concat: {
dist: {
src: [
'js/libs/*.js', // All JS in the libs folder
'js/global.js' // This specific file
],
dest: 'js/build/production.js',
}
}
Note: throughout this article there will be little chunks of configuration code like above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file.
With that concat configuration in place, head over to the terminal, run the command:
grunt
and watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let’s do more things!
Let’s make Grunt minify that JavaScript
We have so much prep work done now, adding new tasks for Grunt to run is relatively easy. We just need to:
Find a Grunt plug-in to do what we want
Learn the configuration style of that plug-in
Write that configuration to work with our project
The official plug-in for minifying code is grunt-contrib-uglify. Just like we did last time, we just run an NPM command to install it:
npm install grunt-contrib-uglify --save-dev
Then we alter our Gruntfile.js to load the plug-in:
grunt.loadNpmTasks('grunt-contrib-uglify');
Then we configure it:
uglify: {
build: {
src: 'js/build/production.js',
dest: 'js/build/production.min.js'
}
}
Let’s update that default task to also run minification:
grunt.registerTask('default', ['concat', 'uglify']);
Super-similar to the concatenation set-up, right?
Run grunt at the terminal and you’ll get some deliciously minified JavaScript:
Minified JavaScript
That production.min.js file is what we would load up for use in our index.html file.
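In the HTML itself, that is just an ordinary script tag pointing at the build file; the path below simply mirrors the dest value configured above:
<script src=""js/build/production.min.js""></script>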
Let’s make Grunt optimize our images
We’ve got this down pat now. Let’s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin. Install it:
npm install grunt-contrib-imagemin --save-dev
Register it in the Gruntfile.js:
grunt.loadNpmTasks('grunt-contrib-imagemin');
Configure it:
imagemin: {
dynamic: {
files: [{
expand: true,
cwd: 'images/',
src: ['**/*.{png,jpg,gif}'],
dest: 'images/build/'
}]
}
}
Make sure it runs:
grunt.registerTask('default', ['concat', 'uglify', 'imagemin']);
Run grunt and watch that gorgeous squishification happen:
Squished images
Gotta love performance increases for nearly zero effort.
Let’s get a little bit smarter and automate
What we’ve done so far is awesome and incredibly useful. But there are a couple of ways we can get smarter and make things easier on ourselves, as well as on Grunt:
Run these tasks automatically when they should
Run only the tasks needed at the time
For instance:
Concatenate and minify JavaScript when JavaScript changes
Optimize images when a new image is added or an existing one changes
We can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks. Watching happens through the official grunt-contrib-watch plugin.
I’ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change.
We want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So:
watch: {
scripts: {
files: ['js/*.js'],
tasks: ['concat', 'uglify'],
options: {
spawn: false,
},
}
}
Feels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don’t even really know what that does. From what I understand from the documentation it is the smart default. That’s real-world development. Just leave it alone if it’s working and if it’s not, learn more.
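One detail worth spelling out: with grunt-contrib-watch, the watching only happens while the watch task is actually running, so kick it off at the terminal and leave it going:
grunt watch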
Note: Isn’t it frustrating when something that looks so easy in a tutorial doesn’t seem to work for you? If you can’t get Grunt to run after making a change, it’s very likely to be a syntax error in your Gruntfile.js. That might look like this in the terminal:
Errors running Grunt
Usually Grunt is pretty good about letting you know what happened, so be sure to read the error message. In this case, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run.
Let’s make Grunt do our preprocessing
The last thing on our list from the top of the article is using Sass — yet another task Grunt is well suited to run for us. But wait: isn’t Sass technically written in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it’s not quite up to snuff with the main Ruby project. So, we’ll use the official grunt-contrib-sass plug-in, which just assumes you have Sass installed on your machine. If you don’t, follow the command line instructions.
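The configuration below assumes grunt-contrib-sass has been installed and registered just like the other plug-ins:
npm install grunt-contrib-sass --save-dev
And in Gruntfile.js:
grunt.loadNpmTasks('grunt-contrib-sass');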
What’s neat about Sass is that it can do concatenation and minification all by itself. So for our little project we can just have it compile our main global.scss file:
sass: {
dist: {
options: {
style: 'compressed'
},
files: {
'css/build/global.css': 'css/global.scss'
}
}
}
We wouldn’t want to manually run this task. We already have the watch plug-in installed, so let’s use it! Within the watch configuration, we’ll add another subtask:
css: {
files: ['css/*.scss'],
tasks: ['sass'],
options: {
spawn: false,
}
}
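If it helps to see how the pieces fit, the whole watch block now looks something like this sketch, with both subtasks side by side:
watch: {
  scripts: {
    files: ['js/*.js'],
    tasks: ['concat', 'uglify'],
    options: {
      spawn: false,
    }
  },
  css: {
    files: ['css/*.scss'],
    tasks: ['sass'],
    options: {
      spawn: false,
    }
  }
}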
That’ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated.
Let’s take this one step further (it’s absolutely worth it) and add LiveReload. With LiveReload, you won’t have to go back to your browser and refresh the page. Page refreshes happen automatically and in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites).
It’s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to:
Install the browser plug-in
Add to the top of the watch configuration:
watch: {
options: {
livereload: true,
},
scripts: {
/* etc */
Restart the browser and click the LiveReload icon to activate it.
Update some Sass and watch it change the page automatically.
Live reloading browser
Yum.
Prefer a video?
If you’re the type that likes to learn by watching, I’ve made a screencast to accompany this article that I’ve published over on CSS-Tricks: First Moments with Grunt
Leveling up
As you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full time job in some organizations.
Some hardcore devops nerds might scoff at the simplistic setup we have going here. But I’d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don’t forget this is all free and open source, which is amazing.
You might level up by adding more useful tasks:
Running your CSS through Autoprefixer (A+, would recommend) instead of preprocessor add-ons.
Writing and running JavaScript unit tests (example: Jasmine).
Building your image sprites and SVG icons automatically (example: Grunticon).
Starting a server, so you can link to assets with proper file paths and use services that require a real URL like TypeKit and such, as well as removing the need for other tools that do this, like MAMP.
Checking for code problems with HTML-Inspector, CSS Lint, or JSHint.
Having new CSS automatically injected into the browser whenever it changes.
Helping you commit or push to a version control repository like GitHub.
Adding version numbers to your assets (cache busting).
Helping you deploy to a staging or production environment (example: DPLOY).
You might level up by simply understanding more about Grunt itself:
Read Grunt Boilerplate by Mark McDonnell.
Read Grunt Tips and Tricks by Nicolas Bevacqua.
Organize your Gruntfile.js by splitting it up into smaller files.
Check out other people’s and projects’ Gruntfile.js.
Learn more about Grunt by digging into its source and learning about its API.
Let’s share
I think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here. That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We’re all in this thing together!
1 Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I’m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot.
Perhaps a decent middle ground is this Grunt DevTools Chrome add-on.
2 Gruntfile.js is often referred to as Gruntfile in documentation and examples. Don’t literally name it Gruntfile — it won’t work.",2013,Chris Coyier,chriscoyier,2013-12-11T00:00:00+00:00,https://24ways.org/2013/grunt-is-not-weird-and-hard/,code
20,Make Your Browser Dance,"It was a crisp winter’s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue.
This was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these… and working out ways to mix them as the music played.
This was it. This was me VJing.
This was all a lifetime (well a decade!) ago.
When I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframes directive, as well as new APIs such as getUserMedia, indexedDB and the Web Audio API.
But wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser?
Well, there’s only one thing to do: let’s try it!
Let’s take to the dance floor
Over the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let’s take the same approach here.
The main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right?
So, let’s start at the beginning: creating a simple visual. For this I’m going to create a CSS animation. It’s just a funky i element with the opacity being changed to make it flash.
See the Pen Creating a light by Rumyra (@Rumyra) on CodePen
A note about prefixes: I’ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. A great resource for finding out whether you do is caniuse.com. You can also check out all the code for the examples in this article.
Start the music
Well, that’s pretty easy so far. Next up: loading in some sound. For this we’ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device’s speakers; and any number of processing nodes in between. All this processing that goes on with the audio is sandboxed within the AudioContext.
So, let’s start by initialising our audio context.
var contextClass = window.AudioContext;
if (contextClass) {
//web audio api available.
var audioContext = new contextClass();
} else {
//web audio api unavailable
//warn user to upgrade/change browser
}
Now let’s use an XMLHttpRequest to load our sound file into the new context we created.
function loadSound() {
//set audio file url
var audioFileUrl = '/octave.ogg';
//create new request
var request = new XMLHttpRequest();
request.open("GET", audioFileUrl, true);
request.responseType = "arraybuffer";
request.onload = function() {
//take from http request and decode into buffer
audioContext.decodeAudioData(request.response, function(buffer) {
audioBuffer = buffer;
});
}
request.send();
}
Phew! Now we’ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; spatialisation. If you want to dig deeper, the O’Reilly Web Audio API book by Boris Smus is available to read online free.
All we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have.
Learning the steps
Let’s take a minute to step back and remember our school days and science class. I’m sure if I drew a picture of a sound wave, we would all start nodding our heads.
The sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size.
This is great when everything is analogue, but the waveform varies continuously and it’s not suitable for digital processing: there’s an infinite set of values. For digital processing, we need discrete numbers.
We have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we’re doing in the code above is piping that data into the audio context. All we need to do now is access it.
We can do this with the Web Audio API’s analysing functionality. Just pop in an analysing node before we connect the source to its destination node.
function createAnalyser(source) {
//create analyser node
analyser = audioContext.createAnalyser();
//connect to source
source.connect(analyser);
//pipe to speakers
analyser.connect(audioContext.destination);
}
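For completeness, here’s a sketch of how the decoded audioBuffer from loadSound might be turned into a source node and fed through that function before playback starts (the immediate start(0) is an assumption for illustration):
//turn the decoded buffer into a playable source node
var source = audioContext.createBufferSource();
source.buffer = audioBuffer;
//wire it through the analyser on its way to the speakers
createAnalyser(source);
//start playback straight away
source.start(0);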
The data I’m really interested in here is frequency. Later we could look into amplitude or time, but for now I’m going to stick with frequency.
The analyser node gives us frequency data via the getByteFrequencyData method.
Don’t forget to count!
To collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it?
This is really up to us, and depends on how fine a resolution of frequencies we want to analyse. Remember we talked about sampling the waveform; this happens at a certain rate (the sample rate), which you can find out via the audio context’s sampleRate attribute. This is good to bear in mind when you’re thinking about your resolution of frequencies.
var sampleRate = audioContext.sampleRate;
Let’s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we’re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0–20,000Hz.
So, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform). In this case the FFT size is 2,048, which is the default; you can set it via the fftSize property.
//set our FFT size
analyser.fftSize = 2048;
//create an empty array with 1024 items
var frequencyData = new Uint8Array(1024);
So, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 ÷ 1,024 = 23.44Hz apart.
The thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame.
function update() {
//constantly getting feedback from data
requestAnimationFrame(update);
analyser.getByteFrequencyData(frequencyData);
}
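One small assumption to make this tick over: nothing calls update() yet, so the loop needs kicking off once, for example straight after playback starts:
//start the analysis loop
update();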
Putting it all together
Sweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier.
We can easily pause and run our CSS animation from JavaScript:
element.style.webkitAnimationPlayState = "paused";
element.style.webkitAnimationPlayState = "running";
Unfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.
There is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms.
This seems a bit much for our proof of concept, so let’s backtrack a little. We know by the animation we’ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript.
element.style.opacity = "1";
element.style.opacity = "0.2";
So let’s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I’ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold.
//get light elements
var lights = document.getElementsByTagName('i');
var totalLights = lights.length;
for (var i = 0; i < totalLights; i++) {
//if the gain for this frequency is over our threshold
if (frequencyData[i] > 160){
//start animation on element
lights[i].style.opacity = "1";
} else {
lights[i].style.opacity = "0.2";
}
}
See all the code in action here. I suggest viewing in a modern browser :)
Awesome! It is true — we can VJ in our browser!
Let’s dance!
So, let’s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. Also, maybe we should try a sound file more suited to gigs or clubs.
Check it out!
I don’t know about you, but I’m pretty excited — that’s just a bit of HTML, CSS and JavaScript!
The other thing to think about, of course, is the sound that you would get at a venue. We don’t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I’ve found, is to capture what my laptop’s mic is picking up and pipe that back into the audio context. We can do this by using getUserMedia.
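A minimal sketch of that wiring, sticking with the unprefixed name as before and assuming the analyser node created earlier, might look like this:
navigator.getUserMedia({ audio: true }, function(stream) {
  //create a source node from the microphone stream
  var micSource = audioContext.createMediaStreamSource(stream);
  //pipe it into the analyser; no need to route it back to the speakers
  micSource.connect(analyser);
}, function(error) {
  //let the user know we couldn't access the microphone
});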
Let’s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash.
And relax :)
There you have it. Sit back, play some music and enjoy the Winamp-like experience in front of you.
So, where do we go from here? I already have a wealth of ideas. We haven’t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it’s questionable whether the browser is the best environment for this. For one, I’m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it’s fun, and it looks cool and sometimes I think it’s OK to just dance.",2013,Ruth John,ruthjohn,2013-12-02T00:00:00+00:00,https://24ways.org/2013/make-your-browser-dance/,code
21,Keeping Parts of Your Codebase Private on GitHub,"Open source is brilliant, there’s no denying that, and GitHub has been instrumental in open source’s recent success. I’m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories which act like any other Git repository, only, well, private!
A slightly less common issue, and one I’ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes.
Before we begin, it’s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and exposing the rest as either an open source project or a built GitHub Pages site.
N.B. This post requires some basic Git knowledge.
Adding your public remote
Let’s assume you’re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the “Adding your private remote” section.)
So, we have a clean slate: nothing has been set up yet, we’re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account).
On your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff — whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this:
$ git remote add origin git@github.com:[user]/site.com.git
Here we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you’ve explicitly changed it) to that remote:
$ git push -u origin master
Here we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like.
This is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote.
Adding your private remote
We’ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one.
In your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don’t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository.
$ git checkout -b dev
We have now made a new branch called dev off the branch we were on last (master, unless you renamed it).
Now we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote:
$ git remote add private git@github.com:[user]/private.site.com.git
Like before, we are just telling Git to add a new remote to this repo, only this time we’ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it.
Now we need to tell our dev branch to push to our private remote:
$ git push -u private dev
Here, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we’ve set up upstream tracking. This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise).
Now you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively.
Any work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote.
Adding more branches
So far we’ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you’d like. Of course, you’d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let’s imagine we want to create a branch to try something out real quickly:
$ git checkout -b test
Now, when we come to push this branch, we can choose which remote we send it to:
$ git push -u private test
This pushes the new test branch to our private remote (again, setting the persistent tracking with -u).
You can have as many or as few remotes or branches as you like.
Combining the two
Let’s say you’ve been working on a new feature in private for a few days, and you’ve kept that on the private remote. You’ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch:
$ git checkout master
Then merge in the branch that contained the feature:
$ git merge dev
Now master contains the commits that were made on dev and, once you’ve pushed master to its remote, those commits will be viewable publicly on GitHub:
$ git push
Note that we can just run $ git push on the master branch as we’d previously set up our upstream tracking (-u).
Multiple machines
So far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let’s say you’ve got yourself a new Mac (yay!) and you want to clone an existing project:
$ git clone git@github.com:[user]/site.com.git
This will not clone any information about the remotes you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above.
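Assuming the same repository and branch names as before, that’s a repeat of the earlier commands from inside the fresh clone, something like:
$ git remote add private git@github.com:[user]/private.site.com.git
$ git fetch private
$ git checkout -b dev private/dev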
Done!
If you’d like to see me blitz through all that in one go, check the showterm recording.
The beauty of this is that we can still share our code, but we don’t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it’s ready for merge. Have a blog post in a Jekyll site that you’re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it’s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain.
All this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. What you do with them is entirely up to you!",2013,Harry Roberts,harryroberts,2013-12-09T00:00:00+00:00,https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/,code
30,"Making Sites More Responsive, Responsibly","With digital projects we’re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations. A product’s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities.
The popularisation of responsive web design is a great example of how we are able to shape the web’s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn’t know or care about the difference in these approaches, but we did. Responsive design was similar in this respect – momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.
We tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text.
The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens.
When we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more responsive to people’s needs, enhancing their usability and accessibility along the way.
Being responsibly responsive
Of course, when we think about being more responsive, there’s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that:
A responsible responsive design equally considers the following throughout a project:
Usability: The way a website’s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions.
Access: The ability for users of all devices, browsers, and assistive technologies to access and understand a site’s features and content.
Sustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future.
Performance: The speed at which a site’s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface.
Scott’s book covers these ideas in a lot more detail than I’ll be able to here (put it on your Christmas list if it’s not there already), but for now let’s think a bit more about our roles as digital creators and the power this gives us.
Our choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies.
The power of the web
We all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions.
You’ll have seen articles with flashy titles such as “Top 5 JavaScript APIs You’ve Never Heard Of!”, which you’ll probably read, think “That’s quite cool”, yet never use in any real work.
There is great potential for technologies like these to be misused, but there are also great prospects for them to be used well to enhance experiences. Let’s have a look at a few examples you may not have considered.
Offline first
When we make websites, many of us follow a process which involves user stories – standardised snippets of context explaining who needs what, and why.
“As a student I want to pay online for my course so I don’t have to visit the college in person.”
“As a retailer I want to generate unique product codes so I can manage my stock.”
We very often focus heavily on what needs doing, but may not consider carefully how it will be done. As in Scott’s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense – including under different conditions.
Offline first is yet another ‘first’ methodology (my personal favourite being ‘tea first’), which encourages us to develop so that connectivity itself is an enhancement – letting users continue with tasks even when they’re offline. Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK’s signal-barren East Anglian wilderness as I do), then you’ll realise that connectivity isn’t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I’m sure we’re all familiar with – the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone’s likely to be offline after all. Wouldn’t it be better if we could do something like this instead?
Someone visits our conference website.
On this initial run, some assets may be cached for future use: the conference schedule, the site’s CSS, photos of the speakers.
When the attendee revisits the site on the day, the page shell loads up from the cache.
If we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use.
If we don’t have something cached already, then we can try grabbing it online.
If for any reason our requests for new content fail (we’re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is cached.
There are a number of ways we can make something like this, including using the application cache (AppCache) if you’re that way inclined. However, you may want to look into service workers instead. There are also some great resources on Offline First! if you’d like to find out more about this.
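To give a flavour of the service worker route, here’s a deliberately stripped-back sketch. The cache name and file paths are invented for illustration, and a real implementation would also need registration, cache versioning and error handling:
// sw.js: cache a few key assets on install, then serve from the cache first
self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open('conference-site-v1').then(function(cache) {
      return cache.addAll(['/', '/schedule.html', '/css/site.css']);
    })
  );
});

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      // fall back to the network for anything we haven't cached
      return cached || fetch(event.request);
    })
  );
});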
Building in offline functionality isn’t necessarily about starting offline first, and it’s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we’d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott’s criteria.
As I mentioned, this isn’t necessarily the kind of development choice that our clients will ask us for, but it’s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation.
Even more accessible
We’ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices?
Again using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen.
Input
Ambient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It’s not new – many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels.
If our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in units using ambient light events, in JavaScript. We may then change our presentation based on different bandings, perhaps like this:
window.addEventListener('devicelight', function(e) {
var lux = e.value;
if (lux < 50) {
//Change things for dim light
}
if (lux >= 50 && lux <= 10000) {
//Change things for normal light
}
if (lux > 10000) {
//Change things for bright light
}
});
Live demo (requires light sensor and supported browser).
Soon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it’ll probably look something like this:
@media (light-level: dim) {
/*Change things for dim light*/
}
@media (light-level: normal) {
/*Change things for normal light*/
}
@media (light-level: washed) {
/*Change things for bright light*/
}
While we may be quick to dismiss this kind of detection as being a gimmick, it’s important to consider that apps such as Light Detector, listed on Apple’s accessibility page, provide important context around exactly this functionality.
“If you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. You can find out whether the shades are drawn by moving the device up and down.”
everywaretechnologies.com/apps/lightdetector
Input can be about so much more than what we enter through keyboards. Both an ever increasing amount of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft.
Output
Web technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users.
When we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we’re seeing and hearing.
Haptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here’s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds.
navigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]);
Live demo (requires supported browser)
With great power…
What you’ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. This idea isn’t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kind of approaches within our projects – if we don’t root for them, they probably won’t happen. This is where our professional responsibility comes in.
We won’t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we’ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let’s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations.
When you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible.",2014,Sally Jenkinson,sallyjenkinson,2014-12-10T00:00:00+00:00,https://24ways.org/2014/making-sites-more-responsive-responsibly/,code
31,Dealing with Emergencies in Git,"The stockings were hung by the chimney with care,
In hopes that version control soon would be there.
This summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I’ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information.
Let’s start by painting an updated version of Clement Clarke Moore’s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen.
Perhaps a few of these images are familiar, or maybe they’re just settings you’ve seen in a movie. It doesn’t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: ‘What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?’ It’s okay if the map isn’t perfect. As you refine your understanding of a new topic, you’ll outgrow the initial metaphors, analogies, and comparisons.
With apologies to Mr. Moore, let’s give it a try.
Getting Interrupted in Git
When on the roof there arose such a clatter!
You’re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you’ve been working on is going to need to wait while you investigate the commotion.
If you’ve got even a little bit of experience working with Git, you know that you cannot simply change what you’re working on in times of emergency. If you’ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state.
Up to this point, you’ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of ‘switching branches for a sec’. This isn’t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren’t finished with what you were working on.
You don’t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren’t putting your book away though, you’re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down.
Let’s say you’ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:
$ git stash
After running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work.
With the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it’s not reindeer after all, but just your boss who thought they’d help out by writing some code on the project you’ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time.
You fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy:
$ git fetch
$ git branch -r
$ git checkout -b helpful-boss-branch origin/helpful-boss-branch
You are now in a local copy of the branch where you are free to look around, and figure out exactly what’s going on.
You sigh audibly and say, ‘Okay. Tell me what was happening when you first realised you’d gotten into a mess’ as you look through the log messages for the branch.
$ git log --oneline
$ git log
By using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof.
You may also want to compare the work your boss has done to the main branch for your project. For this article, we’ll assume the main branch is named master.
$ git diff master
Looking through the commits, you may be able to see that things started out okay but then took a turn for the worse.
Checking out a single commit
Using commands you’re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch.
Using the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let’s say the unique identifier you want to checkout is 25f6d7f.
$ git checkout 25f6d7f
Note: checking out '25f6d7f'.
You are in 'detached HEAD' state. You can look around,
make experimental changes and commit them, and you can
discard any commits you make in this state without
impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:
$ git checkout -b new_branch_name
HEAD is now at 25f6d7f... Removed first paragraph.
This is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic.
Take a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you’ve temporarily disconnected from a known chain of events. In other words, you’re currently looking at the middle of a story (or branch) about what happened – and you’re not at the endpoint for this particular story.
Git allows you to view the history of your repository as a timeline (technically it’s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you’re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.
$ git checkout master
Warning: you are leaving 1 commit behind, not connected to
any of your branches:
7a85788 Your witty holiday commit message.
If you want to keep them by creating a new branch, this may be a good time to do so with:
$ git branch new_branch_name 7a85788
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
So, if you want to save the commits you’ve made while in a detached HEAD state, you simply need to put them on a new branch.
$ git branch saved-headless-commits 7a85788
With this trick under your belt, you can jingle around in history as much as you’d like. It’s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you’ll need to move back to the branch tip by checking out the branch again.
$ git checkout helpful-boss-branch
You’re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you’ve followed the instructions Git gave you. That wasn’t so scary after all, now, was it?
Back to our reindeer problem.
If your boss is anything like the bosses I’ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you’ll want to capture the good work using one of several different methods.
Back in the living room, we’ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work.
Saving just one commit
About that bowl of nuts. If you’re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we’re just going to pick out one nut from the bowl. In Git terms, we’re going to cherry-pick a commit and save it to another branch.
First, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into.
$ git checkout master
$ git checkout -b rescue-the-boss
From your boss’s branch, helpful-boss-branch locate the commit you want to keep.
$ git log --oneline helpful-boss-branch
Let’s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.
$ git cherry-pick e08740b
If you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss’s branch.
At this point you might need to make a few additional fixes to help your boss out. (You’re angling for a bonus out of all this. Go the extra mile.) Once you’ve made your additional changes, you’ll need to add that work to the branch as well.
$ git add [filename(s)]
$ git commit -m "Building on boss's work to improve feature X."
Go ahead and test everything, and make sure it’s perfect. You don’t want to introduce your own mistakes during the rescue mission!
Uploading the fixed branch
The next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping them fix their branch.
$ git push -u origin rescue-the-boss
Cleaning up and getting back to work
With your boss rescued, and your bonus secured, you can now delete the local temporary branches.
$ git branch --delete rescue-the-boss
$ git branch --delete helpful-boss-branch
And settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.
$ git checkout waiting-for-st-nicholas
$ git stash pop
Your working directory has been returned to exactly the same state you were in at the beginning of the article.
Having fun with analogies
I’ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it’s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it’s like trying to find that one broken light on the string of Christmas lights). It doesn’t matter if the analogy isn’t perfect. It’s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner’s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works.
Or, if you’re like me, you can choose to never grow old and just keep mucking about in the analogies. I’d argue it’s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.",2014,Emma Jane Westby,emmajanewestby,2014-12-02T00:00:00+00:00,https://24ways.org/2014/dealing-with-emergencies-in-git/,code
36,Naming Things,"There are only two hard things in computer science: cache invalidation and naming things.
Phil Karlton
Being a professional web developer means taking responsibility for the code you write and ensuring it is comprehensible to others. Having a documented code style is one means of achieving this, although the size and type of project you’re working on will dictate the conventions used and how rigorously they are enforced.
Working in-house may mean working with multiple developers, perhaps in distributed teams, who are all committing changes – possibly to a significant codebase – at the same time. Left unchecked, this codebase can become unwieldy. Coding conventions ensure everyone can contribute, and help build a product that works as a coherent whole.
Even on smaller projects, perhaps working within an agency or by yourself, at some point the resulting product will need to be handed over to a third party. It’s sensible, therefore, to ensure that your code can be understood by those who’ll eventually take ownership of it.
Put simply, code is read more often than it is written or changed. A consistent and predictable naming scheme can make code easier for other developers to understand, improve and maintain, presumably leaving them free to worry about cache invalidation.
Let’s talk about semantics
Names not only allow us to identify objects, but they can also help us describe the objects being identified.
Semantics (the meaning or interpretation of words) is the cornerstone of standards-based web development. Using appropriate HTML elements allows us to create documents and applications that have implicit structural meaning. Thanks to HTML5, the vocabulary we can choose from has grown even larger.
HTML elements provide one level of meaning: a widely accepted description of a document’s underlying structure. It’s only with the mutual agreement of browser vendors and developers that <p> indicates a paragraph.
Yet (with the exception of widely accepted microdata and microformat schemas) only HTML elements convey any meaning that can be parsed consistently by user agents. While using semantic values for class names is a noble endeavour, they provide no additional information to the visitor of a website; take them away and a document will have exactly the same semantic value.
I didn’t always think this was the case, but the real world has a habit of changing your opinion. Much of my thinking around semantics has been informed by the writing of my peers. In “About HTML semantics and front-end architecture”, Nicolas Gallagher wrote:
The important thing for class name semantics in non-trivial applications is that they be driven by pragmatism and best serve their primary purpose – providing meaningful, flexible, and reusable presentational/behavioural hooks for developers to use.
These thoughts are echoed by Harry Roberts in his CSS Guidelines:
The debate surrounding semantics has raged for years, but it is important that we adopt a more pragmatic, sensible approach to naming things in order to work more efficiently and effectively. Instead of focussing on ‘semantics’, look more closely at sensibility and longevity – choose names based on ease of maintenance, not for their perceived meaning.
Naming methodologies
Front-end development has undergone a revolution in recent years. As the projects we’ve worked on have grown larger and more important, our development practices have matured. The pros and cons of object-orientated approaches to CSS can be endlessly debated, yet their introduction has highlighted the usefulness of having documented naming schemes.
Jonathan Snook’s SMACSS (Scalable and Modular Architecture for CSS) collects style rules into five categories: base, layout, module, state and theme. This grouping makes it clear what each rule does, and is aided by a naming convention:
By separating rules into the five categories, naming convention is beneficial for immediately understanding which category a particular style belongs to and its role within the overall scope of the page. On large projects, it is more likely to have styles broken up across multiple files. In these cases, naming convention also makes it easier to find which file a style belongs to.
I like to use a prefix to differentiate between layout, state and module rules. For layout, I use l- but layout- would work just as well. Using prefixes like grid- also provide enough clarity to separate layout styles from other styles. For state rules, I like is- as in is-hidden or is-collapsed. This helps describe things in a very readable way.
SMACSS is more a set of suggestions than a rigid framework, so its ideas can be incorporated into your own practice. Nicolas Gallagher’s SUIT CSS project is far more strict in its naming conventions:
SUIT CSS relies on structured class names and meaningful hyphens (i.e., not using hyphens merely to separate words). This helps to work around the current limits of applying CSS to the DOM (i.e., the lack of style encapsulation), and to better communicate the relationships between classes.
Over the last year, I’ve favoured a BEM-inspired approach to CSS. BEM stands for block, element, modifier, which describes the three types of rule that contribute to the style of a single component. This means that, given the following markup:
<ul class="sleigh">
<li class="sleigh__reindeer sleigh__reindeer--famous">Rudolph</li>
<li class="sleigh__reindeer">Dasher</li>
<li class="sleigh__reindeer">Dancer</li>
<li class="sleigh__reindeer">Prancer</li>
<li class="sleigh__reindeer">Vixen</li>
<li class="sleigh__reindeer">Comet</li>
<li class="sleigh__reindeer">Cupid</li>
<li class="sleigh__reindeer">Dunder</li>
<li class="sleigh__reindeer">Blixem</li>
</ul>
I know that:
.sleigh is a containing block or component.
.sleigh__reindeer is used only as a descendent element of .sleigh.
.sleigh__reindeer--famous is used only as a modifier of .sleigh__reindeer.
With this naming scheme in place, I know which styles relate to a particular component, and which are shared. Beyond reducing specificity-related head-scratching, this approach has given me a framework within which I can consistently label items, and has sped up my workflow considerably.
Each of these methodologies shows that any robust CSS naming convention will have clear rules around case (lowercase, camelCase, PascalCase) and the use of special (allowed) characters like hyphens and underscores.
What makes for a good name?
Regardless of higher-level conventions, there’s no getting away from the fact that, at some point, we’re still going to have to name things. Recognising that classes should be named with other developers in mind, what makes for a good name?
Understandable
The most important aspect is for a name to be understandable. Words used in your project may come from a variety of sources: some may be widely understood, while others may be recognised only by people working within a particular environment.
Culture
Most words you’ll choose will have common currency outside the world of web development, although they may have a particular interpretation among developers (think menu, list, input). However, words may have a narrower cultural significance; for example, in Germany and other German-speaking countries, impressum is the term used for legally mandated statements of ownership.
Industry
Industries often use specific terms to describe common business practices and concepts. Publishing has a number of these (headline, standfirst, masthead, colophon…), all of which have well understood meanings – and not all of them are relevant to online usage.
Organisation
Companies may have internal names (or nicknames) for their products and services. The Guardian is rife with such names: bisons (and buffalos), pixies (and super-pixies), bentos (and mini-bentos)… all of which mean something very different outside the organisation. Although such names can be useful inside smaller teams, in larger organisations they can become a barrier to entry, a sort of secret code used among employees who have been around long enough to know what they mean.
Product
Your team will undoubtedly have created names for specific features or interface components used in your product. For example, at Clearleft we coined the term gravigation for a navigation bar that was pinned to the bottom of the viewport. Elements of a visual design language may have names, too. Transport for London’s bar and circle logo is known internally as the roundel, while Nike’s logo is called the swoosh. Branding agencies often christen colours within a brand palette, too, either to evoke aspects of the identity or to indicate intended usage.
Once you recognise the origin of the words you use, you’ll be better able to judge their appropriateness. Using Latin words for class names may satisfy a need to use semantic-sounding terms but, unless you work in a company whose employees have a basic grasp of Latin, a degree of translation will be required. Military ranks might be a clever way of declaring sizes without implying actual values, but I’d venture most people outside the armed forces don’t know how they’re ordered.
Obvious
Quite often, the first name that comes into your head will be the best option. Names that obliquely reference the function of a class (e.g. receptacle instead of container, kevlar instead of no-bullets) only serve to add an additional layer of abstraction. Don’t overthink it!
One way of knowing if the names you use are well understood is to look at what similar concepts are called in existing vocabularies. schema.org, Dublin Core and the BBC’s ontologies are all useful sources for object names.
Functional
While we’ve learned to avoid using presentational classes, there remains a tension between naming things based on their content, and naming them for their intended presentation or behaviour (which may change at different breakpoints). Rather than think about a component’s appearance or behaviour, instead look to its function, its purpose. To clarify, ask what a component’s function is, and not how the component functions.
For example, the Guardian’s internal content system uses the following names for different types of image placement: supporting, showcase and thumbnail, with inline being the default. These options make no promise of the resulting position on a webpage (or smartphone app, or television screen…), but do suggest intended use, and therefore imply the likely presentation.
Consistent
Being consistent in your approach to names will allow for easier naming of successive components, and extending the vocabulary when necessary. For example, a predictably named hierarchy might use names like primary and secondary. Should another level need to be added, tertiary is clearly preferred over third.
Appropriate
Your project will feature a mix of style rules. Some will perform utility functions (clearing floats, removing bullets from a list, resetting margins), while others will perform specific functions used only once or twice in a project. Names should reflect this. For commonly used classes, be generic; for unique components be more specific.
It’s also worth remembering that you can use multiple classes on an element, so combining both generic and specific can give you a powerful modular design system:
Generic: list
Specific: naughty-children
Combined: naughty-children list
If following the BEM methodology, you might use the following classes:
Generic: list
Specific: list--nice-children
Combined: list list--nice-children
Extensible
Good naming schemes can be extended. One way of achieving this is to use namespaces, which are basically a way of grouping related names under a higher-level term.
Microformats are a good example of a well-designed naming scheme, with many of their vocabularies taking property names from existing and related specifications (e.g. hCard is a 1:1 representation of vCard). Microformats 2 goes one step further by grouping properties under several namespaces:
h-* for root class names (e.g. h-card)
p-* for simple (text) properties (e.g. p-name)
u-* for URL properties (e.g. u-photo)
dt-* for date/time properties (e.g. dt-bday)
e-* for embedded markup properties (e.g. e-note)
The inclusion of namespaces is a massive improvement over the earlier specification, but the downside is that microformats now occupy five separate namespaces. This might be problematic if you are using u-* for your utility classes. While nothing will break, your naming system won’t be as robust, so plan accordingly.
(Note: Microformats perform a very specific function, separate from any presentational concerns. It’s therefore considered best practice to not use microformat classes as styling hooks, but instead use additional classes that relate to the function of the component and adhere to your own naming conventions.)
Short
Names should be as long as required, but no longer. When looking for words to describe a particular function, I try to look for single words where possible. Avoid abbreviations unless they are understood within the contexts described above. rrp is fine if labelling a recommended retail price in an online shop, but not very helpful if used to mean ragged-right paragraph, for example.
Fun!
Finally, names can be an opportunity to have some fun! Names can give character to a project, be it by providing an outlet for in-jokes or adding little easter eggs for those inclined to look.
The copyright statement on Apple’s website has long been named sosumi, a word that has a nice little history inside Apple. Until recently, the hamburger menu icon on the Guardian website was labelled honest-burger, after the developer’s favourite burger restaurant.
A few thoughts on preprocessors
CSS preprocessors have solved a lot of problems, but they have an unfortunate downside: they require you to name yet more things! Whereas we needed to worry only about style rules, now we need names for variables, mixins, functions… oh my!
A second article could be written about naming these, so for now I’ll offer just a few thoughts. The first is to note that preprocessors make it easier to change things, as they allow for DRYer code. So while the names of variables are important (and the advice in this article still very much applies), you can afford to relax a little.
Looking to name colour variables? If possible, find out if colours have been assigned names in a brand palette. If not, use obvious names (based on appearance or function, depending on your preference) and adapt as the palette grows. If it becomes difficult to name colours that are too similar, I’d venture that the problem lies with the design rather than the naming scheme.
The same is true for responsive breakpoints. Preprocessors allow you to move awkward naming conventions out of the markup and into the CSS. Although terms like mobile, tablet and desktop are not desirable given the need to think about device-agnostic design, if these terms are widely understood within a product team and among stakeholders, using them will ensure everyone is using the same language (they can always be changed later).
It still feels like we’re at the very beginning of understanding how preprocessors fit into a development workflow, if at all! I suspect over the next few years, best practices will emerge for all of these considerations. In the meantime, use your brain!
Even with sensible rules and conventions in place, naming things can remain difficult, but hopefully I’ve made this exercise a little less painful. Christmas is a time of giving, so to the developer reading your code in a year’s time, why not make your gift one of clearer class names.",2014,Paul Lloyd,paulrobertlloyd,2014-12-21T00:00:00+00:00,https://24ways.org/2014/naming-things/,code
37,JavaScript Modules the ES6 Way,"JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly.
This is something nearly all other languages come with out of the box, whether it be Ruby’s require, Python’s import, or any other language you’re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let’s be clear: it doesn’t mean the new module system in the upcoming version of JavaScript won’t be useful to you if you’re building smaller websites rather than the next Instagram.
Thankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won’t be landing in browsers for a while yet – but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I’d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It’s much simpler than you might think.
What is ES6?
ECMAScript is a scripting language that is standardised by a standards organisation called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It’s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn’t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet.
The ES6 module spec
The ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I’ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it’s what I’ll focus on more.
Module syntax
The key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we’ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it.
Another important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There’s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous.
Named exports
Modules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword:
export function double(x) {
  return x + x;
};
You can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces.
var double = function(x) {
  return x + x;
}
export { double };
A module can then import the double function like so:
import { double } from 'mymodule';
double(2); // 4
Again, curly braces are required around the variable you would like to import. It’s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension.
The reason for those extra braces is that this syntax lets you export multiple variables:
var double = function(x) {
  return x + x;
}
var square = function(x) {
  return x * x;
}
export { double, square }
I personally prefer this syntax over the export function …, but only because it makes it much clearer to me what the module exports. Typically I will have my export {…} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting.
A file importing both double and square can do so in just the way you’d expect:
import { double, square } from 'mymodule';
double(2); // 4
square(3); // 9
With this approach you can’t easily import an entire module and all its methods. This is by design – it’s much better and you’re encouraged to import just the functions you need to use.
Default exports
Along with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so:
export default function(x) {
  return x + x;
}
And that can be imported:
import double from 'mymodule';
double(2); // 4
This time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you’d like. Default exports are not named, so you can import them as anything you like:
import christmas from 'mymodule';
christmas(2); // 4
The above is entirely valid.
Although it’s not something that is used too often, a module can have both named exports and a default export, if you wish.
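As a brief sketch of what that might look like (the VERSION name below is my own invention, not something from the spec or the earlier examples), a module can declare both kinds of export and a consumer can pull them in with a single import statement:

// mymodule.js – a default export plus a named export
export default function(x) {
  return x + x;
}
export var VERSION = '1.0.0';

// elsewhere
import double, { VERSION } from 'mymodule';
double(2); // 4
VERSION;   // '1.0.0'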
One of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that’s fine, and you shouldn’t change that to meet the preferences of those designing the spec.
Programmatic API
Along with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It’s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user’s browser didn’t support a feature your app relied on. An example of doing this is:
if(someFeatureNotSupported) {
  System.import('my-polyfill').then(function(myPolyFill) {
    // use the module from here
  });
}
System.import will return a promise, which, if you’re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete.
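Because the return value is a promise, you can also attach a rejection handler. Here’s a small sketch, reusing the hypothetical module name from above and assuming the promise implementation in use supports the usual catch method:

System.import('my-polyfill').then(function(myPolyFill) {
  // the module loaded successfully; use it from here
}).catch(function(error) {
  // the module failed to load or compile, so degrade gracefully
  console.error(error);
});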
This programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task.
How to use it today
It’s all well and good having this new syntax, but right now it won’t work in any browser – and it’s not likely to for a long time. Maybe in next year’s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we’re stuck with a bit of extra work.
ES6 module transpiler
One solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it – not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp.
The advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that part set up already. If you don’t have any system in place for handling modules in the browser, using the transpiler doesn’t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn’t do anything to help you get that code running in the browser, but if you have that part sorted it’s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler.
SystemJS
Another solution is SystemJS. It’s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser).
To load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn’t have an easy-to-find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 compiler. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you.
All you need to do to get SystemJS running is to add a script element that loads system.js to your page, and then call System.import, passing it the name of your application’s main module.
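A rough sketch of that call (the module name app assumes your entry point lives in app.js next to the page; adjust the path to suit your setup):

// run once system.js and its dependencies are on the page
System.import('app').then(function() {
  // the application has loaded and run
});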
When you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn’t make a lot of requests on the live site. In development though, it makes for a really nice workflow.
When working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application’s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file.
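As a sketch of that structure (the navigation and carousel module names below are hypothetical), app.js might look something like this:

// app.js – the single entry point loaded via System.import('app')
import { setupNavigation } from 'navigation';
import { setupCarousel } from 'carousel';

setupNavigation();
setupCarousel();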
SystemJS also provides a workflow for bundling your application together into one file.
Conclusion
ES6 modules may be at least six months to a year away (if not more) but that doesn’t mean they can’t be used today. Although there is an overhead to using them now – with the work required to set up SystemJS, the module transpiler, or another solution – that doesn’t mean it’s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You’ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution.
If you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road.
Further reading
If you’d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources:
ECMAScript 6 modules: the final syntax by Axel Rauschmayer
Practical Workflows for ES6 Modules by Guy Bedford
ECMAScript 6 resources for the curious JavaScripter by Addy Osmani
Tracking ES6 support by Addy Osmani
ES6 Tools List by Addy Osmani
Using Grunt and the ES6 Module Transpiler by Thomas Boyt
JavaScript Modules and Dependencies with jspm by myself
Using ES6 Modules Today by Guy Bedford",2014,Jack Franklin,jackfranklin,2014-12-03T00:00:00+00:00,https://24ways.org/2014/javascript-modules-the-es6-way/,code
38,"Websites of Christmas Past, Present and Future","The websites of Christmas past
The first website was created at CERN. It was launched on 20 December 1990 (just in time for Christmas!), and it still works today, after twenty-four years. Isn’t that incredible?!
Why does this website still work after all this time? I can think of a few reasons.
First, the authors of this document chose HTML. Of course they couldn’t have known back then the extent to which we would be creating documents in HTML, but HTML always had a lot going for it. It’s built on top of plain text, which means it can be opened in any text editor, and it’s pretty readable, even without any parsing.
Despite the fact that HTML has changed quite a lot over the past twenty-four years, extensions to the specification have always been implemented in a backwards-compatible manner. Reading through the 1992 W3C document HTML Tags, you’ll see just how it has evolved. We still have h1 – h6 elements, but I’d not heard of the element before. Despite being deprecated since HTML2, it still works in several browsers. You can see it in action on my website.
As well as being written in HTML, there is no run-time compilation of code; the first website simply consists of HTML files transmitted over the web. Due to its lack of complexity, it stood a good chance of surviving in the turbulent World Wide Web.
That’s all well and good for a simple, static website. But websites created today are increasingly interactive. Many require a login and provide experiences that are tailored to the individual user. This type of dynamic website requires code to be executed somewhere.
Traditionally, dynamic websites would execute such code on the server, and transmit a simple HTML file to the user. As far as the browser was concerned, this wasn’t much different from the first website, as the additional complexity all happened before the document was sent to the browser.
Doing it all in the browser
In 2003, the first single page interface was created at slashdotslash.com. A single page interface or single page app is a website where the page is created in the browser via JavaScript. The benefit of this technique is that, after the initial page load, subsequent interactions can happen instantly, or very quickly, as they all happen in the browser.
When software runs on the client rather than the server, it is often referred to as a fat client. This means that the bulk of the processing happens on the client rather than the server (which can now be thin).
A fat client is preferred over a thin client because:
It takes some processing requirements away from the server, thereby reducing the cost of servers (a thin server requires cheaper, or fewer servers).
It can often continue working offline, provided no server communication is required to complete tasks after initial load.
The latency of internet communications is bypassed after initial load, as interactions can appear near instantaneous when compared to waiting for a response from the server.
But there are also some big downsides, and these are often overlooked:
They can’t work without JavaScript. Obviously JavaScript is a requirement for any client-side code execution. And as the UK Government Digital Service discovered, 1.1% of their visitors did not receive JavaScript enhancements. Of that 1.1%, 81% had JavaScript enabled, but their browsers failed to execute it (possibly due to dropping the internet connection). If you care about 1.1% of your visitors, you should care about the non-JavaScript experience for your website.
The browser needs to do all the processing. This means that the hardware it runs on needs to be fast. It also means that we require all clients to have largely the same capabilities and browser APIs.
The initial payload is often much larger, and nothing will be rendered for the user until this payload has been fully downloaded and executed. If the connection drops at any point, or the code fails to execute owing to a bug, we’re left with the non-JavaScript experience.
They are not easily indexed as every crawler now needs to run JavaScript just to receive the content of the website.
These are not merely edge case issues to shirk off. The first three issues will affect some of your visitors; the fourth affects everyone, including you.
What problem are we trying to solve?
So what can be done to address these issues? Whereas fat clients solve some inherent issues with the web, they seem to create as many problems. When attempting to resolve any issue, it’s always good to try to uncover the original problem and work forwards from there. One of the best ways to frame a problem is as a user story. A user story considers the who, what and why of a need. Here’s a template:
As a {who} I want {what} so that {why}
I haven’t got a specific project in mind, so let’s refer to the who as user. Here’s one that could explain the use of thick clients.
As a user I want the site to respond to my actions quickly so that I get immediate feedback when I do something.
This user story could probably apply to a great number of websites, but so could this:
As a user I want to get to the content quickly, so that I don’t have to wait too long to find out what the site is all about or get the content I need.
A better solution
How can we balance both these user needs? How can we have a website that loads fast, and also reacts fast? The solution is to have a thick server, that serves the complete document, and then a thick client, that manages subsequent actions and replaces parts of the page. What we’re talking about here is simply progressive enhancement, but from the user’s perspective.
The initial payload contains the entire document. At this point, all interactions would happen in a traditional way using links or form elements. Then, once we’ve downloaded the JavaScript (asynchronously, after load) we can enhance the experience with JavaScript interactions. If for whatever reason our JavaScript fails to download or execute, it’s no biggie – we’ve already got a fully functioning website. If an API that we need isn’t available in this browser, it’s not a problem. We just fall back to the basic experience.
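As a rough illustration (the element ID here is hypothetical), you might enhance in-page navigation only when the History API is available, and let the plain link do its job everywhere else:

var link = document.querySelector('#next-page');

if (link && 'pushState' in window.history) {
  link.addEventListener('click', function (event) {
    event.preventDefault(); // stop the full page reload
    history.pushState(null, '', link.href);
    // fetch the new content and update the page here
  });
}
// Without pushState – or without JavaScript at all – the link still works.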
This second point, of having some minimum requirement for an enhanced experience, is often referred to as cutting the mustard, first used in this sense by the BBC News team. Essentially it’s an if statement like this:
if('querySelector' in document
   && 'localStorage' in window
   && 'addEventListener' in window) {
  // bootstrap the JavaScript application
}
This code states that the browser must support the following methods before downloading and executing the JavaScript:
document.querySelector (can it find elements by CSS selectors)
window.localStorage (can it store strings)
window.addEventListener (can it bind to events in a standards-compliant way)
These three properties are what the BBC News team decided to test for, as they are present in their website’s JavaScript. Each website will have its own requirements. The last method, window.addEventListener, is an interesting one. Although it’s simple to bind to events on IE8 and earlier, these browsers have very inconsistent support for standards. Making any JavaScript-heavy website work on IE8 and earlier is a painful exercise, and comes at a cost to all users on other browsers, as they’ll download unnecessary code to patch support for IE.
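By way of illustration – the loadScript helper and the enhanced.js file name below are my own invention, not the BBC’s code – the bootstrap step inside that if statement could simply append a script element, so only browsers that pass the test ever download the enhanced code:

function loadScript(src) {
  // inject a script element so the enhancements download asynchronously
  var script = document.createElement('script');
  script.src = src;
  script.async = true;
  (document.head || document.getElementsByTagName('head')[0]).appendChild(script);
}

if ('querySelector' in document
    && 'localStorage' in window
    && 'addEventListener' in window) {
  loadScript('/js/enhanced.js'); // hypothetical bundle of enhancements
}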
JavaScript API support by browser.
I discovered that IE8 supports 12% of the current JavaScript APIs, while IE9 supports 16%, and IE10 51%. It seems, then, that IE10 could be the earliest version of IE that I’d like to develop JavaScript for. That doesn’t mean that users on browsers earlier than 10 can’t use the website. On the contrary, they get the core experience, and because it’s just HTML and CSS, it’s much more likely to be bug-free, and could even provide a better experience than trying to run JavaScript in their browser. They receive the thin client experience.
By reducing the number of platforms that our enhanced JavaScript version supports, we can better focus our efforts on those platforms and offer an even greater experience to those users. But we can only do that if we use progressive enhancement. Otherwise our website would be completely broken for all other users.
So what we have is a thick server, capable of serving the entire website to our users, complete with all core functionality needed for our users to complete their tasks; and we have a thick client on supported browsers, which can bring an even greater experience to those users.
This is all transparent to users. They may notice that the website seems snappier on the new iPhone they received for Christmas than on the Windows 7 machine they got five years ago, but then they probably expected it to be faster on their iPhone anyway.
Isn’t this just more work?
It’s true that making a thick server and a thick client is more work than just making one or the other. But there are some big advantages:
The website works for everyone.
You can decide when users get the enhanced experience.
You can enhance features in an iterative (or agile) manner.
When the website breaks, it doesn’t break down.
The more you practise this approach, the quicker you will become.
The websites of Christmas present
The best way to discover websites using this technique of progressive enhancement is to disable JavaScript and see if the website breaks. I use the Web Developer extension, which is available for Chrome and Firefox. It lets me quickly disable JavaScript.
Web Developer extension.
24 ways works with and without JavaScript. Try using the menu icon to view the navigation. Without JavaScript, it’s a jump link to the bottom of the page, but with JavaScript, the menu slides in from the right.
24 ways navigation with JavaScript disabled.
24 ways navigation with working JavaScript.
Google search will also work without JavaScript. You won’t get instant search results or any prerendering, because those are enhancements.
For a more app-like example, try using Twitter. Without JavaScript, it still works, and looks nearly identical. But when you load JavaScript, links open in modal windows and all pages are navigated much quicker, as only the content that has changed is loaded. You can read about how they achieved this in Twitter’s blog posts Improving performance on twitter.com and Implementing pushState for twitter.com.
Unfortunately Facebook doesn’t use progressive enhancement, which not only means that the website doesn’t work without JavaScript, but it takes longer to load. I tested it on WebPagetest and if you compare the load times of Twitter and Facebook, you’ll notice that, despite putting similar content on the page, Facebook takes two and a half times longer to render the core content on the page.
Facebook takes two and a half times longer to load than Twitter.
Websites of Christmas yet to come
Every project is different, and making a website that enjoys a long life, or serves a larger number of users may or may not be a high priority. But I hope I’ve convinced you that it certainly is possible to look to the past and future simultaneously, and that there can be significant advantages to doing so.",2014,Josh Emerson,joshemerson,2014-12-08T00:00:00+00:00,https://24ways.org/2014/websites-of-christmas-past-present-and-future/,code
42,An Overview of SVG Sprite Creation Techniques,"SVG can be used as an icon system to replace icon fonts. The reasons why SVG makes for a superior icon system are numerous, but we won’t be going over them in this article. If you don’t use SVG icons and are interested in knowing why you may want to use them, I recommend you check out “Inline SVG vs Icon Fonts” by Chris Coyier – it covers the most important aspects of both systems and compares them with each other to help you make a better decision about which system to choose.
Once you’ve made the decision to use SVG instead of icon fonts, you’ll need to think of the best way to optimise the delivery of your icons, and ways to make the creation and use of icons faster.
Just like bitmaps, we can create image sprites with SVG – they don’t look or work exactly alike, but the basic concept is pretty much the same.
There are several ways to create SVG sprites, and this article will give you an overview of three of them. While we’re at it, we’re going to take a look at some of the available tools used to automate sprite creation and fallback for us.
Prerequisites
The content of this article assumes you are familiar with SVG. If you’ve never worked with SVG before, you may want to look at some of the introductory tutorials covering SVG syntax, structure and embedding techniques. I recommend the following:
SVG basics: Using SVG.
Structure: Structuring, Grouping, and Referencing in SVG — The ,