rowid,title,contents,year,author,author_slug,published,url,topic 1,Why Bother with Accessibility?,"Web accessibility (known in other fields as inclusive design or universal design) is the degree to which a website is available to as many people as possible. Accessibility is most often used to describe how people with disabilities can access the web. How we approach accessibility In the web community, there’s a surprisingly inconsistent approach to accessibility. There are some who are endlessly dedicated to accessible web design, and there are some who believe it so intrinsic to the web that it shouldn’t be considered a separate topic. Still, of those who are familiar with accessibility, there’s an overwhelming number of designers, developers, clients and bosses who just aren’t that bothered. Over the last few months I’ve spoken to a lot of people about accessibility, and I’ve heard the same reasons to ignore it over and over again. Let’s take a look at the most common excuses. Excuse 1: “People with disabilities don’t really use the web” Accessibility will make your site available to more people — the inclusion case In the same way that the accessibility of a building isn’t just about access for wheelchair users, web accessibility isn’t just about blind users and screen readers. We can affect positively the lives of many people by making their access to the web easier. There are four main types of disability that affect use of the web: Visual Blindness, low vision and colour-blindness Auditory Profoundly deaf and hard of hearing Motor The inability to use a mouse, slow response time, limited fine motor control Cognitive Learning difficulties, distractibility, the inability to focus on large amounts of information None of these disabilities are completely black and white Examining deafness, it’s clear from the medical scale that there are many grey areas between full hearing and total deafness: mild moderate moderately severe severe profound totally deaf For eyesight, and brain conditions that affect what users see, there is a huge range of conditions and challenges: astigmatism colour blindness akinetopsia (motion blindness) scotopic visual sensitivity (visual stress related to light) visual agnosia (impaired recognition or identification of objects) While we might have medical and government-recognised definitions that tell us what makes a disability, day-to-day life is not so straightforward. People experience varying degrees of different conditions, and often one or more conditions at a time, creating a false divide when you view disability in terms of us and them. Impairments aren’t always permanent As we age, we’re more likely to experience different levels of visual, auditory, motor and cognitive impairments. We might have an accident or illness that affects us temporarily. We might struggle more earlier or later in the day. There are so many little physiological factors that affect the way people interact with the web that we can’t afford to make any assumptions based on our own limited experiences. Impairments might be somewhere between the user and the website There are also impairments that aren’t directly related to the user. Environmental factors have a huge effect on the way people interact with the web. 
These could be: Low bandwidth, or intermittent internet connection Bright light, rain, or other weather-based conditions Noisy environments, or a location where the user doesn’t want to disturb their neighbours with sound Browsing with mobile devices, games consoles and other non-desktop devices Browsing with legacy browsers or operating systems Such environmental factors show that it’s not just those with physical impairments who benefit from more accessible websites. We started designing responsive websites so we could be more future-friendly, and with a shared goal of better optimised experiences, accessibility should be at the core of responsive web design. Excuse 2: “We don’t want to affect the experience for the majority of our users” Accessibility will improve your site for all your users — the usability case On a basic level, the different disability groups, as shown in the inclusion case, equate to simple usability goals: Visual – make it easy to read Auditory – make it easy to hear Motor – make it easy to interact Cognitive – make it easy to understand and focus Taking care to ensure good usability in these areas will also have an impact on accessibility. Unless your site is catering specifically to a particular disability, where extreme optimisation is most beneficial, taking care to design with accessibility in mind will rarely negatively affect the experience of your wider audience. Excuse 3: “We don’t have the budget for accessibility” Accessibility will make you money — the business case By reducing your audience through ignoring accessibility, you’re potentially excluding the income from those users. Designing with accessibility in mind from the beginning of a project makes it easier to make small inexpensive optimisations as part of the design and development process, rather than bolting on costly updates to increase your potential audience later on. The following are excerpts from a white paper about companies that increased the accessibility of their websites to comply with government regulation. Improvements in accessibility doubled Legal and General’s life insurance sales online. Improvements in accessibility increased Tesco’s grocery home delivery sales by £13 million in 2005… To their surprise they found that many normal visitors preferred the ease of navigation and improved simplicity of the [parallel] accessible site and switched to use it. Tesco have replaced their ‘normal’ site with their accessible version and expect a further increase in revenues. Improvements in accessibility increased Virgin.net sales by 68%. Statistics all from WSI white paper: Improve your website’s usability and accessibility to increase sales (PDF). Excuse 4: “Accessible websites are ugly” Accessibility won’t stop your site from being beautiful — the beauty case Many people use ugly accessible websites as proof that all accessible websites are ugly. This just isn’t the case. I’ve compiled some examples of beautiful and accessible websites with screenshots of how they look through the Color Oracle simulator and how they perform when run through Webaim’s Wave accessibility checker tool. While automated tools are no substitute for real users, they can help you learn more about good practices, and give you guidance on where your site needs improvements to make it more accessible. Amazon.co.uk It may not be a decorated beauty, but Amazon is often first in functional design. 
It’s a huge website with a lot of interactive content, but it generates just five errors on the Wave test, and is easy to read under a Color Oracle filter. Screenshot of Amazon website Screenshot of Amazon’s Wave results – five errors Screenshot of Amazon through a Color Oracle filter 24 ways When Tim Van Damme redesigned 24 ways back in 2007, it was a striking and unusual design that showed what could be achieved with CSS and some imagination. Despite the complexity of the design, it gets an outstanding zero errors on the Wave test, and is still readable under a Color Oracle filter. Screenshot of pre-2013 24 ways website design Screenshot of 24 ways Wave results – zero errors Screenshot of 24ways through a Color Oracle filter Opera’s Shiny Demos Demos and prototypes are notorious for ignoring accessibility, but Opera’s Shiny Demos site shows how exploring new technologies doesn’t have to exclude anyone. It only gets one error on the Wave test, and looks fine under a Color Oracle filter. Screenshot of Opera’s Shiny Demos website Screenshot of Opera’s Shiny Demos Wave results – 1 error Screenshot of Opera’s Shiny Demos through a Color Oracle filter SoundCloud When a site is more app-like, relying on more interaction from the user, accessibility can be more challenging. However, SoundCloud only gets one error on the Wave test, and the colour contrast holds up well under a Color Oracle filter. Screenshot of SoundCloud website Screenshot of SoundCloud’s Wave results – one error Screenshot of SoundCloud through a Color Oracle filter Education and balance As with most web design, doing accessibility well is about combining your knowledge of accessibility with your project’s context to create a balance that serves your users’ needs. Your types of content and interactions will dictate one set of constraints. Your users’ needs and goals will dictate another. In broad terms, web design as a practice is finding the equilibrium between these constraints. And then there’s just caring. The web as a platform is open, affordable and available to many. Accessibility is our way to ensure that nobody gets shut out.",2013,Laura Kalbag,laurakalbag,2013-12-10T00:00:00+00:00,https://24ways.org/2013/why-bother-with-accessibility/,design 3,Project Hubs: A Home Base for Design Projects,"SCENE: A design review meeting. Laptop screens. Coffee cups. Project manager: Hey, did you get my email with the assets we’ll be discussing? Client: I got an email from you, but it looks like there’s no attachment. PM: Whoops! OK. I’m resending the files with the attachments. Check again? Client: OK, I see them. It’s homepage_v3_brian-edits_FINAL_for-review.pdf, right? PM: Yeah, that’s the one. Client: OK, hang on, Bill’s going to print them out. (3-minute pause. Small talk ensues.) Client: Alright, Bill’s back. We’re good to start. Brian: Oh, actually those homepage edits we talked about last time are in the homepage_v4_brian_FINAL_v2.pdf document that I posted to Basecamp earlier today. Client: Oh, OK. What message thread was that in? Brian: Uh, I’m pretty sure it’s in “Homepage Edits and Holiday Schedule.” Client: Alright, I see them. Bill’s going back to the printer. Hang on a sec… This is only a slightly exaggerated version of my experience in design review meetings. The design project dance is a sloppy one. It involves a slew of email attachments, PDFs, PSDs, revisions, GitHub repos, staging environments, and more. 
And while tools like Basecamp can help manage all these moving parts, it can still be incredibly challenging to extract only the important bits, juggle deliverables, and see how your project is progressing. Enter project hubs. Project hubs A project hub consolidates all the key design and development materials onto a single webpage presented in reverse chronological order. The timeline lives online (either publicly available or password protected), so that everyone involved in the team has easy access to it. A project hub. I was introduced to project hubs after seeing Dan Mall’s open redesign of Reading Is Fundamental. Thankfully, I had a chance to work with Dan on two projects where I got to see firsthand how beneficial a project hub can be. Here’s what makes a project hub great: Serves as a centralized home base for the project Trains clients and teams to decide in the browser Easily and visually view project’s progress Provides an archive for project artifacts A home base Your clients and colleagues can expect to get the latest and greatest updates to your project when visiting the project hub, the same way you’d expect to get the latest information on a requested topic when you visit a Wikipedia page. That’s the beauty of URIs that don’t change. Creating a project hub reduces a ton of email volley nonsense, and eliminates the need to produce files and directories with staggeringly ridiculous names like design/12.13.13/team/brian/for_review/_FINAL/styletile_121313_brian-edits-final_v2_FINAL.pdf. The team can simply visit the project hub’s URL and click the link to whatever artifact they need. Need to make an update? Simply update the link on the project hub. No more email tango and silly file names. Deciding in the browser Let’s change the phrase “designing in the browser” to “deciding in the browser.” Dan Mall We make websites, but all too often we find ourselves looking at web design artifacts in abstractions. We email PDFs to each other, glance at mockup JPGs on our desktops, and of course kill trees in order to print out designs so that we can scribble in the margins. All of these practices subtly take everyone further and further away from the design’s eventual final resting place: the browser. Because a project hub is just a simple webpage, reviewing designs is as easy as clicking some links, which keep your clients and teams in the browser. You can keep people in the browser with yet another clever trick from the wily Dan Mall: instead of sending clients PDFs or JPGs, he created a simple webpage and tossed his static visuals into the template (you can view an example here). This forces clients to review web design work in the browser rather than launching a PDF viewer or Preview. Now this all might sound trivial to you (“Of course my client knows that we’re designing a website!”), but keeping the design artifacts in the browser subconsciously helps remind everyone of the medium for which you’re designing, which helps everyone focus on the right aspects of the design and have the right conversations. Progress over time When you’re in the trenches, it’s often hard to visualize how a project is progressing. Tools like Basecamp include discussions, files, to-dos, and more, which are all great tools but also make things a bit noisy. Project hubs provide you and your clients a quick and easy way to see at a glance how things are coming along. 
Teams can rest assured they’re viewing the most current versions of designs, and managers can share progress with stakeholders simply by providing a link to the project hub. Over time, a project hub becomes an easily accessible archive of all the design decisions, which makes it easy to compare and contrast different versions of designs and prototypes. Setting up a project hub Setting up your own project hub is pretty simple. Simply create a webpage with some basic styles and branding. I’ve created a project hub template that’s available on GitHub if you want a jump-start. Publish the webpage to a URL somewhere that makes sense (we’ve found that a subdomain of your site works quite well) and share it with everyone involved in the project. Bookmark it. Let everyone know that this is where design updates will be shared, and that they can always come back to the project hub to track the project’s progress. When it comes time to share new updates, simply add a new node to the timeline and republish the webpage. Simple FTPing works just fine, but it might make sense to keep track of changes using version control. Our project hub for our open redesign of the Pittsburgh Food Bank is managed on GitHub, which means that I can make edits to the hub right from GitHub. Thanks to the magical wizardry of webhooks, I can automatically deploy the project hub so that everything stays in sync. That’s the fancy-pants way to do it, and is certainly not a requirement. As long as you’re able to easily make edits and keep your project hub up to date, you’re good to go. So that’s the hubbub Project hubs can help tame the chaos of the design process by providing a home base for all key design and development materials. Keep the design artifacts in the browser and give clients and colleagues quick insight into your project’s progress. Happy hubbing!",2013,Brad Frost,bradfrost,2013-12-17T00:00:00+00:00,https://24ways.org/2013/project-hubs/,process 8,Coding Towards Accessibility,"“Can we make it AAA-compliant?” – does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means? I’m not here to talk about that. The Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we’re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility. This Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow. But first, let’s clear up a couple of misconceptions. Dreary, flat experiences I recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client’s stringent accessibility requirements I was concerned that I’d have to scale back what was quite a complex set of visual designs. Sites like Jakob Neilsen’s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Neilsen, his new site nngroup.com is really quite a snazzy affair, comparatively.) 
Of course, there are other reasons for these sites’ aesthetics — and it’s not because of the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It’s always our own ingenuity and attention to detail that are going to be the limiting factors. Synecdoche We must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered. There’s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider. Planning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn’t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. Screen reader emulators are readily available as browser extensions and can be activated in seconds. Chromevox and Fangs are both excellent and you should download one or the other right now. But the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice. The mouse trap The simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse. Keyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily — just by ditching a peripheral. Learning this was a lightbulb moment for me. When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made. Now, instead of checking a new element once, I check it twice: once with my mouse and then again without. Don’t just :hover The reality is that when you first start doing this you can find your site becomes unusable straightaway. It’s easy to lose track of which element is in focus as you hit the tab key repeatedly. One of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I’m still amazed at how many sites fail to provide a decent focus state for links (and despite previous 24 ways authors in 2007 and 2009 writing on this same issue!). You may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It’s a tiny change and there is no downside. So instead of this: .my-cool-link:hover { background-color: MistyRose ; } …try writing this: .my-cool-link:hover, .my-cool-link:focus, .my-cool-link:active { background-color: MistyRose ; } I’ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven’t yet. I worry that people reading my code won’t see that I’m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand. JavaScript can play, too This was another revelation for me. Keyboard-only navigation doesn’t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. 
So we’re able to create complex JavaScript-driven interfaces which all users can interact with. Some of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention — these are established keyboard conventions which we can apply to our interfaces to improve their accessibility. Of course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them. Second, this guy wants to help you out. Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It’s a fantastic reference which I revisit all the time. I’m not a huge fan of the jQuery UI library. It’s a pain to style and the code is a bit bloated. So I’ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below). Coding for accessibility promotes good habits This is another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements. Your code has to become more modular and event-driven, because any number of events could trigger the same interaction. A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don’t you?) If you aren’t already in the habit of separating out your interface functionality into discrete functions, you will be soon.
var doSomethingCool = function(){
    // Do something cool here.
}

// Bind function to a button click - pretty vanilla
$('.myCoolButton').on('click', function(){
    doSomethingCool();
    return false;
});

// Bind the same function to a range of keypresses
$(document).keyup(function(e){
    switch(e.keyCode) {
        case 13: // enter
        case 32: // spacebar
            doSomethingCool();
            break;
        case 27: // escape
            doSomethingElse();
            break;
    }
});
To be honest, if you’re doing complex UI stuff with JavaScript these days, or if you’ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstracted and event-driven application structure will be familiar to you. It should be super easy for you to start helping out your keyboard-only users if you aren’t already — just add a few more event bindings into your UI layer! Manipulating the tab order So, you’ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse.
You’ve applied all your hover states to :focus and :active so you can see where you’re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right? There’s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element ‘tabbable’? A good example would be in an expandable accordion component. Each section of the accordion should be separated by a heading, and there’s no reason to make that heading into a link simply because it’s interactive.
<div class=""accordion-widget"">
    <h3>Tyrannosaurus</h3>
    <p>Tyrannosaurus; meaning ""tyrant lizard""...</p>
    <h3>Utahraptor</h3>
    <p>Utahraptor is a genus of theropod dinosaurs...</p>
    <h3>Dromiceiomimus</h3>
    <p>Ornithomimus is a genus of ornithomimid dinosaurs...</p>
</div>
Adding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process. $('.accordion-widget h3').attr('tabindex', '0'); You can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to −1, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern — at least on my team! A farewell ARIA This is where things can get complex, and I’m no expert on the ARIA specification: I feel like I’ve only dipped my toe into this aspect of coding for accessibility. But, as with WCAG, I’d like to demystify things a little bit to encourage you to look into this area further yourself. ARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements. Let’s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can’t handle JavaScript will announce all the content within the accordion, no problem. But modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that’s as far as you have gone, then you’ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using Chromevox: > Tyrannosaurus. Heading Three. > Utahraptor. Heading Three. > Dromiceiomimus. Heading Three. We can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. The markup could end up looking something like this.
<div class=""accordion-widget"" role=""tablist"">
    <h3 role=""tab"" tabindex=""0"" aria-selected=""false"">Utahraptor</h3>
    <p role=""tabpanel"" aria-hidden=""true"" aria-expanded=""false"">Utahraptor is a genus of theropod dinosaurs...</p>
    <!-- ...and so on for the other headings and panels -->
</div>
Now, if a screen reader user encounters this markup they will hear the following: > Tyrannosaurus. Tab not selected; one of three. > Utahraptor. Tab not selected; two of three. > Dromiceiomimus. Tab not selected; three of three. You could add arrow key events to help the user browse up and down the tab list items until they find one they like. Your accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this: > Utahraptor. Selected; two of three. The paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader. This kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based. Conclusion Getting complex interfaces right for all of your users can be difficult — there’s no point pretending otherwise. And there’s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That’s a huge area in itself, and it’s one which I have not yet had a chance to research properly. So, there’s lots to learn, and there’s lots to do to get it right. But don’t be disheartened. If you have read this far then I’ll leave you with one final piece of advice: don’t wait. Don’t wait until you’re building a site which mandates AAA-compliance to try this stuff out. Don’t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You’ll be surprised at the things that you learn and the issues you uncover. And the next time an true accessibility project comes along, you will be way ahead of the game.",2013,Charlie Perrins,charlieperrins,2013-12-03T00:00:00+00:00,https://24ways.org/2013/coding-towards-accessibility/,code 13,Data-driven Design with an Annual Survey,"Too often, we base designs on assumptions that don’t match customer perspectives. Why? Because the data we need to make informed decisions isn’t available. Imagine starting off the year with a treasure trove of user data that can be filtered, sliced, and diced to inform new UI designs, help you discover where users struggle the most, and expose emerging trends in your customers’ needs that could lead to new features. Why, that would be useful indeed. And it’s easy to obtain by conducting an annual survey. Annual surveys may seem as exciting as receiving socks and undies for Christmas, but they’re the gift that keeps on giving all year long (just like fresh socks and undies). I’m not ashamed to admit it: I love surveys! Each time my design research team runs a survey, we learn so much about customer motivations, interests, and behaviors. Surveys provide an aggregate snapshot of your users that can’t easily be obtained by other research methods, and they can be conducted quickly too. You can build a survey in a few hours, run a pilot test in a day, and have real results streaming in the following day. 
Speed is essential if design research is going to keep pace with a busy product release schedule. Surveys are also an invaluable springboard for customer interviews, which provide deep perspectives on user behavior. If you play your cards right as you construct your survey, you can capture a user ID and an email address for each respondent, making it easy to get in touch with customers whose feedback is particularly intriguing. No more recruiting customers for your research via Twitter or through a recruiting company charging a small fortune. You can filter survey responses and isolate the exact customers to talk with in moments, not months. I love this connected process of sending targeted surveys, filtering the results, and then — with surgical precision — selecting just the right customers to interview. Not only is it fast and cheap, but it lets design researchers do quantitative and qualitative research in a coordinated way. Aggregate survey responses help you quantify the perspectives of different user segments, and interviews help you get into the heads of your customers. An annual survey can give your team the data needed to make more informed designs in the new year. It all starts with a plan. Planning your survey Before you start jotting down questions to ask users, spend some time thinking about the work your team will be doing in the coming year. Are you planning new mobile apps or a responsive redesign? Then questions about devices used and behaviors around mobile devices might be in order. Rethinking your content strategy? Then you might want to ask a few questions about how your customers consume content. You can’t predict all of the projects you’ll be working on in the coming year, but tuck a couple of sections in your survey about the projects you’re certain about. This will give you the research you need to start new projects with solid foundational data. Google Drive is a great place to start collaboratively building survey questions with colleagues. Questions that seem crystal clear in your head get challenged, refined, or even expanded quickly when the entire team can chime in. As you craft your survey, try to consider how you’ll filter it once all of the data is compiled. Do you need to see responses by industry, by age of an account, by devices used, or by size of company? Adding the right filter questions can help you discover fascinating patterns in user segments. Filtering on responses to a few questions can surface insights like: customers in non-profit companies with more than 100 employees are 17% more likely to use an Android phone and are most attracted to features A, D, and F. A designer working on the landing page for a non-profit would love to have concrete information like this. Filter questions are key, so consider them carefully. But don’t go overboard — too many of them and you’ll start to hurt your survey response rate. Multiple choice questions are the heart of most surveys because respondents can complete them quickly, which increases response rate, and researchers can analyze them without a lot of manual categorization. Open text field questions are valuable too, but be careful not to add too many to your survey. You’ll hate yourself after the survey’s done and you have to sort through and tag thousands of open responses so patterns become visible. Oy vey! An open-ended question works well towards the end of the survey. 
At this point respondents have a lot of topics swirling around in their head and tend to say weird things that will pique your interest. This is where you’ll find the outliers who are using your product. They’ll be fascinating to interview, and on occasion will help you see your work in a brand new way. Conclude your survey with a question asking permission to get in touch for a followup interview so you don’t pester people who want to be left alone. With your questions nailed down, it’s time to build out that survey and get it ready for sending! Building your survey There are dozens of apps you could use to build your survey, but SurveyMonkey is the one that I prefer. It lets you pass in variables for each respondent such as user ID and email address. Metadata about respondents is essential if you’re going to do any follow-up interviews with your customers in the coming year. SurveyMonkey also makes it easy to set up question logic, showing questions to customers only if they responded in a certain way to a prior question. This helps you avoid asking irrelevant questions to some respondents. Determining survey recipients Once you’ve chosen a survey tool and entered all of your questions, you need to gather a list of recipients. Your first instinct will be to send it to everyone. You might say, “I need maximum response and metric shit tons of data!” But this is rarely the best approach — broad distribution almost always leads to lower response rates, increased noise, and decreased signal in your data. Are there subsets of customers you could send to, like only those who are active, those who are paying, or have been with you for a certain length of time? Talk to the keepers of your customer database and see how they can segment it so you can be certain you’re talking to just the people who will have the most relevant responses for your needs. If you want to get super nerdy when finding the right customer sample to survey, use a [sample size calculator]. Sampling is a deep subject best explored in other articles. Crafting your survey email After focusing your energies on writing and building your survey, the email asking your customers to respond seems almost trivial, but it will greatly influence your response rate. Take great care when writing your subject line and the body of the email. If you can pull it off, A/B testing subject lines can greatly improve the open rate of your email and click-through to your survey. My design research team has seen a ~10% increase in open and click rates when we A/B tested. We’ve found that personalizing subject lines and greetings with the recipients name (ie. “Hey, Aarron. How can we make our app work better for you?”) gave us the best response rates. Your mileage may vary. The tone of your email is important — be friendly, honest, and to the point. Those that are passionate about your product will be happy to share their perspective. Writing a survey email that people will actually respond to ain’t easy — in fact, they’re almost always annoying. But Ben Chestnut found a non-annoying way to send a survey email and improve response rates. The email sent for the 2013 MailChimp survey let customers know what we’d been up to in the previous year, and invited feedback on what we should work on in the coming year. The link to your survey should be a clear call to action. A big button with a label like “Answer a few questions” generally does the trick. The URL linking to the survey will need to include some variables like user ID and email. 
It might look something like this if you’re using SurveyMonkey: http://surveymonkey.com/s/somesurveyid/?uid=*|UID|*&email=*|email|* As each email is sent, the proper data will be populated in the variables, passing it on to the survey app for inclusion in each response. This is the magic that will help you pinpoint customers to interview down the road, so take special care to test that all is working before sending to all recipients. How you construct the survey link will vary depending on what survey tool and email service provider you use, so don’t take my example as gospel. You’ll need to read the documentation for your survey and email apps to set things up properly. Pilot before sending By now, you’ve whipped yourself into a fever pitch over your brilliant survey and the data you hope to collect. Your finger is on the send button, poised for action, but there’s one very important thing to do before you send to the entire list of customers: send a pilot email. How do you know if your questions are clear, your form logic is sound, and you’re passing variables from the email to the survey properly? You won’t, unless you send to a small segment of your recipients first. The data collected in your pilot will make plain where your survey needs refinement. This data won’t be used in your final analysis, as you’re probably going to make a few changes to your questions. Send the pilot survey to enough people that you can really stress test the clarity of the questions and data you’re gathering, while considering how much data can you comfortably throw out. If you’re sending your final survey to a few thousand people, you might find a couple of hundred recipients for your pilot will give you enough insight into what to improve while leaving the vast majority of the recipients for your final survey. After you’ve sent your pilot, made your survey adjustments, and ensured the variables are being passed from your email into the survey app, you’re ready to send to the remainder of your customers. This is your moment of glory! Analyzing your results After a couple of weeks you can probably safely close the survey so no other responses come in as you transition from data gathering to data analysis. Any survey app worth its salt will chart responses to your multiple choice questions. Reviewing these charts is a great place to start your analysis. Is there anything particularly interesting that stands out? Jot down some of your observations. I like to print screenshots of the charts for each question, highlighting areas of interest. These prints become a particularly handy reference point for the next step in your analysis. Printing results from a survey makes comparing different customers easy. Viewing aggregate data about all responses is interesting, but the deltas between different types of customers are where the real revelations happen. Remember those filter questions you added to your survey? They’re the tool that’ll help you compare customer segments. Most survey apps will let you filter the data based on response to a question. If the one you’re using doesn’t, you can always export your data and create pivot tables in Excel. Try filtering your data based on one of your filter questions, such as industry, company size, or devices used. Now compare those printed screenshots of baseline responses to the filtered data. 
Chances are you’ll see some significant differences in how each group responded to your questions, giving you clues about the variance in interests and motivations in customer segments and a leg up as you work on future design projects. Open-ended responses are equally interesting, but much more time-consuming to analyze. Yes, you need to read through thousands of responses, some of which are constructive and some of which are not. Taking the time to tag each open response will help you see trends and filter out the responses that are unhelpful. Unlike questions with predefined answers, open-ended responses let users express unique ideas and use cases you may not be looking for. The tedium of reading thousands of response is always cut by eureka moments when users tell you something fascinating that changes your perspective on your app. These are the folks you want to pull out for follow-up interviews. Because you’ve already captured their email addresses when you set up your survey and your email, getting in touch will be a piece of cake. Filter, compare, interview, and summarize; then share your findings with your colleagues. Reports are great for head honchos, but if you want to really inform and inspire, create a video, a poster series, or even a comic to communicate what you’ve learned. Want to get really fancy? Store your survey results in a centrally accessible location so anyone in your company can research and discover the insights they need to make more informed designs. Good design researchers discover valuable insights. Great design researchers turn those insights into stories. Conclusion As we enter the new year, it’s a great time to reflect on the work we’ve done in the past and how we can do better in the future. Without a doubt, designers working with a foundation of insights about customers can make more effective UIs. But designers aren’t the only ones who stand to gain from the data collected in an annual survey—anyone who makes things for or communicates with customers will find themselves empowered to do better work when they know more about the people they serve. The data you collect with your survey is a fantastic holiday gift to your colleagues, one that they’ll appreciate throughout the year.",2013,Aarron Walter,aarronwalter,2013-12-13T00:00:00+00:00,https://24ways.org/2013/data-driven-design-with-an-annual-survey/,design 16,URL Rewriting for the Fearful,"I think it was Marilyn Monroe who said, “If you can’t handle me at my worst, please just fix these rewrite rules, I’m getting an internal server error.” Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from. The majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable — I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we’re going to go back to basics to try to make the whole rigmarole more understandable. When we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that’s the most common case, that’s what I’ll be sticking to here. If you work with a different server, there’s often documentation specifically for translating from Apache’s mod_rewrite rules. I even found an automatic converter for nginx. This isn’t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. 
If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you’re doing. If you’ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don’t worry. As Michael Jackson once insipidly whined, you are not alone. The basics Rewrite rules form part of the Apache web server’s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:
RewriteEngine on
The general formula for a rewrite rule is:
RewriteRule URL/to/match URL/to/use/if/it/matches [options]
When we talk about URL rewriting, we’re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We’ll look at those in turn. Redirects Redirects match an incoming URL, and then redirect the user’s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. In 1998, Sir Tim Berners-Lee wrote that Cool URIs don’t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it’s fine to move things around — especially to correct bad URL design choices of the past — provided that you can do so while keeping those old URLs working. That’s where redirects can help. A redirect might look like this
RewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]
Rewriting By default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file. A rewrite rule changes that process by breaking the direct relationship between the URL and the file system. “When there’s a request for /about/history.html” a rewrite rule might say, “use the file /about_section.php instead.” This opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page.
RewriteRule ^for/this/url/$ /use/this/file.php [L]
Matching patterns By now you’ll have noted the weird ^ and $ characters wrapped around the URL we’re trying to match. That’s because what we’re actually using here is a pattern. Technically, it is what’s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We’ll call it a pattern because we’re not animals. What are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you’d wonder what I wanted your credit card details for, but you’d know that I wanted a two-digit month, a slash, and a two-digit year. That’s not a regular expression, but it’s the same idea: using some placeholder characters to define the pattern of the input you’re trying to match.
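As a quick taster of where we’re heading, that expiry date hint could be written as a pattern something like this (we’ll unpack each piece of the syntax below):
[0-9]{2}/[0-9]{2}
That is: any digit, twice; a literal slash; then any digit, twice again.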
We’ve already met two regexp characters.
^
Matches the beginning of a string
$
Matches the end of a string
When a pattern starts with ^ and ends with $ it’s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too:
[0-9]
Matches a number, 0–9. [2-4] would match numbers 2 to 4 inclusive.
[a-z]
Matches lowercase letters a–z
[A-Z]
Matches uppercase letters A–Z
[a-z0-9]
Combining some of these, this matches letters a–z and numbers 0–9
These are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you’re looking for within the brackets, as well as the ranges shown above. However, all these just match one single character. [0-9] would match 8 but not 84 — to match 84 we’d need to use [0-9] twice.
[0-9][0-9]
So, if we wanted to match 1984 we could do this:
[0-9][0-9][0-9][0-9]
…but that’s getting silly. Instead, we can do this:
[0-9]{4}
That means any character between 0 and 9, four times. If we wanted to match a number, but didn’t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more.
[0-9]+
This now matches 1, 123 and 1234567. Putting it into practice Let’s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. The articles all have URLs like this:
2013/article-title/
They start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We’d match it like this:
^[0-9]{4}/[a-z0-9-]+/$
If that looks frightening, don’t worry. Breaking it down, from the start of the URL (^) we’re looking for four numbers ([0-9]{4}). Then a slash — that’s just literal — and then anything lowercase a–z or 0–9 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$). Putting that into a rewrite rule, we end up with this:
RewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php
We’re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it’s supposed to display. Capturing groups, and replacements When rewriting URLs you’ll often want to take important parts of the URL you’re matching and pass them along to the script that handles the request. That’s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we’re looking for. That means we need to call it like this:
/article.php?year=2013&slug=article-title
To do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what’s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses. Our pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes:
^([0-9]{4})/([a-z0-9-]+)/$
To use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.)
RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2
The value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern. Options Several brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are:
R=301
Perform an HTTP 301 redirect to send the user’s browser to the new URL. A status of 301 means a resource has moved permanently and so it’s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes.
L
Last. If this rule matches, don’t bother processing the following rules.
Options are set in square brackets at the end of the rule. You can set multiple options by separating them with commas:
RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L]
or
RewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L]
Common pitfalls Once you’ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag. L for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does — the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. One way to avoid this problem is to keep your ‘real’ pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules. Useful snippets I find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to. Excluding a directory As mentioned above, if you’re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won’t be applied.
RewriteRule ^_source - [L]
This is also useful for excluding things like CMS folders from your website’s rewrite rules:
RewriteRule ^perch - [L]
Adding or removing www from the domain Some folk like to use a www and others don’t. Usually, it’s best to pick one and go with it, and redirect the one you don’t want. On this site, we don’t use www.24ways.org so we redirect those requests to 24ways.org. This uses a RewriteCond which is like an if for a rewrite rule: “If this condition matches, then apply the following rule.” In this case, it’s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:
RewriteCond %{HTTP_HOST} ^www\.24ways\.org$ [NC]
RewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]
The [NC] flag means ‘no case’ — the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance.
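Before moving on, it can help to see how the pieces covered so far might sit together in a single file. What follows is only an illustrative sketch, not a drop-in configuration: it reuses the _source folder and the article.php script from the examples above, and assumes the script has been placed inside that excluded folder so the rewritten URL can never be matched by the rules a second time.
RewriteEngine on

# Real files live under _source; leave them alone and stop processing
RewriteRule ^_source - [L]

# Friendly article URLs are handed to the script that renders them
RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /_source/article.php?year=$1&slug=$2 [L]
A request for /2013/article-title/ is rewritten to the script, and when the rules are run again on the new URL it begins with _source, so the first rule catches it and processing stops there.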
Removing file extensions Sometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+)$ $1.php [L,QSA]
This says if the file being asked for isn’t a file (!-f) and if it isn’t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means ‘query string append’: append the existing query string onto the rewritten URL. It’s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL. Logging for when it all goes wrong Although not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. This can be useful to track down where a rule is going wrong, if it’s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.
RewriteEngine On
RewriteLog ""/full/system/path/to/rewrite.log""
RewriteLogLevel 5
To be doubly clear: this will not work from an .htaccess file — it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.) The white screen of death One of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn’t an error message, of course. It’s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log. If you have access to your server logs, check the Apache error log and you’ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.) In conclusion Rewriting URLs can be a bear, but the advantages are clear. Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future. If you’re redesigning a site, remember that cool URIs don’t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working. Further reading To find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources.
From the horse’s mouth, the Apache mod_rewrite documentation
Particularly useful with that documentation is the RewriteRule Flags listing
You may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial
Friend of 24 ways, Neil Crosby has a mod_rewrite Beginner’s Guide which I’ve found handy over the years.
As noted at the start, this isn’t a fully comprehensive guide, but I hope it’s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects?
Feel free to share them in the comments.",2013,Drew McLellan,drewmclellan,2013-12-01T00:00:00+00:00,https://24ways.org/2013/url-rewriting-for-the-fearful/,code 17,Bringing Design and Research Closer Together,"The ‘should designers be able to code’ debate has raged for some time, but I’m interested in another debate: should designers be able to research? Are you a designer who can do research? Good research and the insights you uncover inspire fresh ways of thinking and get your creative juices flowing. Good research brings clarity to a woolly brief. Audience insight helps sharpen your focus on what’s really important. Experimentation through research and design brings a sense of playfulness and curiosity to your work. Good research helps you do good design. Being a web designer today is pretty tough, particularly if you’re a freelancer and work on your own. There are so many new ideas, approaches to workflow and trends and tools to keep up with. How do you decide which things to do and which to ignore? A modern web designer needs to be able to consider the needs of the audience, design appropriate IAs and layouts, choose colour palettes, pick appropriate typefaces and type layouts, wrangle with content, style, code, dabble in SEO, and the list goes on and on. Not only that, but today’s web designer also has to keep up with the latest talking points in the industry: responsive design, Agile, accessibility, Sass, Git, lean UX, content first, mobile first, blah blah blah. Any good web designer doesn’t need to be persuaded about the merits of including research in their toolkit, but do you really have time to include research too? Who is responsible for research? Generally, research in the web industry forms part of other disciplines and isn’t so much a discipline in its own right. It’s very often thought of as part of UX, or activities that make up a process such as IA or content strategy. Research is often undertaken by UX designers, information architects or content strategists and isn’t something designers or developers get that involved in. Some people lump all of these activities together and label it design research and have design researchers to do it. Some companies, such as the one I run with my husband Mark, are lucky enough to have someone with specialist research knowledge (yup, that would be me folks) who can lead all or most of the research work undertaken by the company. See also Mule Design, GOV.UK, the BBC, Mailchimp, Facebook and Twitter. What if you’re not lucky enough to have your own researcher or team of researchers? Often research is the kind of thing that’s nice to have, or it can be cut from scope when doing the budget dance with a client. It often forms part of the discovery phase of a project and sometimes just becomes a tick-box exercise. But research isn’t just user testing and it shouldn’t just live in a report on Basecamp that no one reads. I would argue that research and experimentation is a way of working or an approach to how you design. Research can be used during the whole design process and must be a vital part of a designer’s workflow on every project. Even if you work in a small studio, you can still create a culture of audience insight. Even if you work on your own, you can still absorb yourself in as much audience data as you can throughout the project life cycle. Here’s how. 
Research is everyone’s job There is a subtle difference between writing a research report and delivering it to a client, and them actually using it and applying the insights to their thought process. In my experience of working in the audiences team at the BBC, research was most effective when the role was embedded in the production team and insights were used as part of the editorial process. In this section I’ll talk through some common problems you might encounter in a typical project life cycle and show you ways you can use research to help you. For the sake of this article, let’s imagine that we’re talking about a particular project here and not ongoing product development. The same principles can of course be applied then, but even if you work in-house rather than on the agency side, you’re probably used to working on distinct projects or phases of work. 1. Problem: I want to come up with a new product idea. Solution: Inspiration through insights. Before you begin a new project, a good way of quickly absorbing all the existing knowledge that there may be about a theme, product type or website is to literally surround yourself with it. This is especially relevant for new ideas or product development. Create an incident room if you can: fill the walls of your meeting room, the walls near your desk, or even just use a pinboard or online pinboard if space is tight or you’re working with a dispersed team. The same process can be used throughout a project’s or product’s life cycle — read about how MailChimp has applied this idea. Let’s take a new product idea as an example. Say you wanted to develop a responsive tool for web designers but you weren’t sure what aspect of responsive design to focus on. First of all, you should pose a hypothesis or problem statement to gather ideas around. For example: “How to speed up a designer’s responsive workflow.” You would then need to gather insights around this topic. You could run some interviews with freelance designers about how they work responsively. You could shadow a development team for the day to understand their processes. You could observe conversations on Twitter or IRC or wherever your target audience interact to see what people talk about. You could search out industry data and articles currently available. The next stage is to comb through this data and extract insights from it. You can use good old Post-it notes and a Sharpie: capture one insight or thought per Post-it. If one insight leads into another, use two Post-its. The objective is volume. Try to ensure clarity in each Post-it so you don’t have to go back and reference material again (maybe you could use a key if you think it’ll get confusing). After this, stick them all up and synthesise the same way you would for any kind of cluster or affinity sort. Organise into broad themes. These themes then become springboards for further exploration and idea generation. You might see a gap or opportunity in one particular area, both from a workflow perspective and from a business perspective. Bingo. Your insights then become the fuel for idea generation. This method doesn’t just have to be used for new products — it works particularly well in a discovery phase for new projects or for new features in an existing product. We’re doing something similar for our own responsive tool, Gridset, at the moment. Resources: Sticky Wisdom by Dave Allan, Matt Kingdon, Kris Murrin, Daz Rudkin The Science of Serendipity by Matt Kingdon The Art of Innovation by Tom Kelley 2.
Problem: You’re starting a new project and need to know the basics before you get headlong into designing or building. Solution: Quantitative survey. Common questions might be: Who are the users? How many are there? What are they like? Why do they use the site? What do they need from the site? What are their goals? Print out and stick up what you already know and have in your project space or ‘incident room’: any reports you have found or been given, analytics graphs, personas, pen portraits, as well as screengrabs of the current website, product or branding. Spend time looking through it all and identify the gaps. If you have very little existing audience data, a quick and easy way to get some baseline information is to run a short user survey on the current website. You can establish basic demographic information, appreciation and views of the website as it stands, as well as delve a little deeper into needs and wants. This is also vital if you want some kind of trackable measures to go back to once you have designed and built your shiny new website for your client — read more in my article for 24 ways last year. We use surveys a lot at Mark Boulton Design for our client work. Here’s a screen grab of one we ran in March on http://info.cern.ch before we redesigned the site and did the work on the First Website Project. We repeated the survey after the new website went live and were able to compare the results. Both surveys were a great source of insight for the project team as well as for the project stakeholders who needed to pitch the idea of the hack days and fundraise for them. Once you’ve run your survey, you should always write up a short summary for yourself and your client to refer to. If you’re not a trained researcher, you should try to read up on analysis techniques or data visualisation. It can be easy to misinterpret data and make it bend to the story you are trying to tell. You should be looking for the story in the data and present it without bias. If you’re using the ‘incident room’ method I mentioned earlier on, you can also extract the insights onto Post-it notes and add them to your growing body of knowledge. Resources: Using Questionnaires for Design Research by Emma Boulton Data-driven Design with an Annual Survey by Aarron Walter Research Methods for Product Design by Alex Milton and Paul Rodgers A Practical Guide to Designing with Data by Brian Suda 3. Problem: You have a prototype of a new design and you need some feedback from real users. Solution: User interviews and task-based testing. Interviewing is a staple research method that every designer should master as it can be used throughout a project life cycle. Erika Hall recently wrote a great article on the basics for A List Apart. From stakeholder interviews in a discovery phase, to initial user research, right through to task-based testing and iteration, interviews can be enormously helpful. They are very time-consuming, however, and although speaking to someone is better than speaking to no one, it’s always better to plan to do a few interviews at once, rather than one or two. I generally find that patterns only start to emerge after I’ve spoken to four or five people. Interviews are another thing we do a lot of at Mark Boulton Design. Most of the interviews we do are remote due to the location of our clients and their users. Rigour is an important consideration in all research activities, especially if you’re a non-researcher.
Interviews particularly can be easily skewed by an inexperienced facilitator, which is why pairing can be a good approach. Building rapport, questioning, time keeping, note taking and thinking on your feet can be difficult to do all at once, so having a colleague take notes while you concentrate on leading the conversation can work really well. It’s important for the note taker to sit in on more than one interview so that they get a more rounded view of the feedback. The same person should also be involved in the analysis of the data. Interviews can be analysed and written up in a report or summary as with other types of research. I often use the same kind of collaborative process detailed earlier for deciding on themes, particularly if multiple members of the team have been involved in interviewing. Interviews are particularly useful for our incident room and can provide much colour and insight to an exploratory process. I often find verbatim quotes to be the most insightful type of data. You might find that an inexperienced researcher (or designer who is used to solving problems) will jump to interpretation too soon and forget to just listen to what the interviewee is saying. Capturing the exact form of words a person uses can help get away from this. Resources: Interviewing Humans by Erika Hall A Pocket Guide to Interviewing for Research by Andrew Travers Interviewing Users by Steve Portigal 4. Problem: How successful have I been with this new design? Solution: Key performance indicators Once your new design has been realised, it’s important to evaluate it. What works, what doesn’t work so well? As well as a straightforward design crit, don’t forget to introduce audience insights into a review meeting or project wash up. Work out what your KPIs — your key performance indicators — will be beforehand and then you can start to track them over time. For example, number of visits, appreciation of the site, willingness to recommend the site to a friend, number of sales, and number of conversions are all sensible measures to track. Interviews can again be helpful but cold, hard numbers are often better here. Read Corey Vilhauer’s take on this on A List Apart. Consistency is key here. If you have looked at your analytics and done a survey beforehand, you will have a baseline to start from. Don’t keep changing your measures and questions, or your data will not be comparable. Pick a few key questions or a set of measures, create a survey and then run it once a month, once a quarter, every six months or annually. You’ll start to see changes over time as the design beds in. You may see seasonal trends and spot patterns in the data related to other activities like marketing, promotion and so on. Keeping a record of all of this will increase your understanding of your audience. We’ve created a satisfaction survey for Gridset with a number of measures that we track on an ongoing basis. MailChimp has also created an annual survey with the aim of tracking their audience measures over time Resources: Search Analytics by Louis Rosenfeld A Primer on A/B Testing by Lara Swanson Lean UX by Jeff Gothelf Anyone can do research Research can be brought into the project life cycle at any stage. And of course, anyone can do research — you don’t need to be a researcher. Some of the main skills most designers possess are also key research skills: inquisitive nature, problem solving, playfulness, empathy, and so on. We have a small team at Mark Boulton Design. 
Most of the team are designers and the rest of us focus on supporting the team and clients both in terms of billable work (research, content strategy, project management) as well as the non-billable things like finance and studio management. Despite my best intentions, in the past I’ve undertaken research for clients in isolation — first being briefed by the design lead, carrying out the research and then delivering the findings back, trusting the design team to take the findings on board. This was often due to time and availability of resources. We’ve been trying hard to join up our processes and collaborate even more across the team. Undertaking heuristic or design reviews collaboratively; taking part in frequent critiques of our work and the work of others together; pairing a researcher and a designer to run interviews; workshopping results from interviews to come up with recommendations; working closely together on questionnaire design; shadowing each other on tasks that don’t fall within our core skills. A little thing like moving our desks around has also helped us have more conversations that we can all be a part of. I’ve come to the conclusion that my role as the research director at Mark Boulton Design is actually a facilitator of research. As well as carrying out research, I am responsible for ensuring that research happens consistently across the team. I am responsible for empowering and training our designers so they feel confident in carrying out their own user, audience or design research for clients. So they know what to look for, when to listen, when to probe and when to take note of something. So they know how to look for themes, how to synthesise insights from research and how to apply them to their work. Better research leads to better design So, are you a designer who can do research? Are you a researcher who can design? The best designers are a lucky combination of researcher and designer. If you’re not one of those, look at ways of enhancing the skills you lack. Because there’s no doubt in my mind, that becoming a better researcher will make you a better designer. General resources: Seeing the Elephant by Louis Rosenfeld Connected UX by Aarron Walter Beyond Usability Testing by Devan Goldstein Just Enough Research by Erika Hall The User Experience Team of One by Leah Buley Undercover User Experience Design by Cennydd Bowles and James Box A Pocket Guide to Psychology for Designers by Joe Leech A Pocket Guide to International User Research by Chui Chui Tan Remote Research by Nate Bolt and Tony Tulathimutte A Pocket Guide to Experiments for Designers by Colin McFarland",2013,Emma Boulton,emmaboulton,2013-12-22T00:00:00+00:00,https://24ways.org/2013/bringing-design-and-research-closer-together/,ux 19,In Their Own Write: Web Books and their Authors,"The currency of written communication — words on the page, words on the screen — comprises many denominations. To further our ends in web design and development, we freely spend and receive several: tweets aphoristic and trenchant, banal and perfunctory; blog posts and articles that call us to action or reflection; anecdotes, asides, comments, essays, guides, how-tos, manuals, musings, notes, opinions, stories, thoughts, tips pro and not-so-pro. So many, many words. Our industry (so much more than this, but what on earth are we, collectively?), our community thrives on writing and sharing knowledge and experience. 24 ways is a case in point. 
Everyone can learn and contribute through reading and writing — it’s what we’ve always done. To web authors and readers seeking greater returns, though, broader culture has vouchsafed an enduring and singular artefact: the book. Last month I asked a small sample of web book authors if they would be prepared to answer a few questions; most of them kindly agreed. In spirit, the survey was informal: I had neither hypothesis nor unground axe. I work closely with writers — and yes, I’ve edited or copy-edited books by several of the authors I surveyed — and wanted to share their thoughts about what it was like to write a book (“…it was challenging to find a coherent narrative”), why they did it (“Who wouldn’t want to?”) and what they learned from the experience (“That I could!”). Reasons for writing a book In web development the connection between authors and readers is unusually close and immediate. Working in our medium precipitates a unity that’s rare elsewhere. Yet writing and publishing a book, even during the current books revolution, is something only a few of us attempt and it remains daunting and a little remote. What spurs an author to try it? For some, it’s a deeply held resistance to prevailing trends: I felt that designers and developers needed to be shaken out of what seemed to me had been years of stagnation. —Andrew Clarke Or even a desire to protect us from ourselves: I felt that without a book that clearly defined progressive enhancement in a very approachable and succinct fashion, the web was at risk. I was seeing Tim Berners-Lee’s vision of universal availability slip away… —Aaron Gustafson Sometimes, there’s a knowledge gap to be filled by an author with the requisite excitement and need to communicate. Jon Hicks took his “pet subject” and was “enthused enough to want to spend all that time writing”, particularly because: …there was a gap in the market for it. No one had done it before, and it’s still on its own out there, with no competition. It felt like I was able to contribute something. Cennydd Bowles felt a professional itch at a particular point in his career, understanding that [a]s a designer becomes more senior, they start looking for ways to scale the effects of their work. For some, that leads into management. For others, into writing. Often, though, it’s also simply a personal challenge and ambition to explore a subject at length and create something substantial. Anna Debenham describes a motivation shared by several authors: To be able to point to something more tangible than an article and be able to say “I did that.” That sense of a book’s significance, its heft and gravity even, stems partly from the cultural esteem which honours books and their authors. Books have a long history as sources of wisdom, truth and power. Even with more books being published each year than ever before, writing one is still commonly considered a laudable achievement, including in our field. Challenges of writing a book Received wisdom has it that writing online should be brief and chunky and approachable: get to the point; divide it all up; subheadings and lists are our friends; write like you’re talking; no one has time to read. Much of such advice is true. Followed well, it lends our writing punch and pith, vigour and vim. The web is nimble, the web keeps up, and it suits what we write about developing for it. It’s perfect for delivering our observations, queries and investigations into all the various aspects of the work, professional and personal. 
Yet even for digital natives like web authors, books printed and electronic retain an attractive glister. Ideas can be developed more fully, their consequences explored to greater depth and extended with more varied examples, and the whole conveyed with more eloquence, more style. Why shouldn’t authors delay their conclusions if the intervening text is apposite, rich with value and helps to flesh out the skeleton of an argument? Conclusions might or might not be reached, of course, but a writer is at greater liberty in a book to digress in tangential and interesting ways. Writing a book involves committing time, energy, thought and money. As Brian Suda found, it can be tough “getting the ideas out of my head into a cohesive blob of text.” Some authors end up talking to themselves… It helps me to keep a real person in mind, someone who I’m talking to as I write. Sometimes I have the same conversations over and over in my head. —Andrew Clarke …while others are thinking ahead, concerned with how their book will be received: Would anyone want to read it? Would they care? Would it be respected by my peers? —Joe Leech Challenges that arose time and again included “starting” and “getting words on the page” as well as “knowing when to stop” or “letting go”. Personal organization problems and those caused by publishers were also widely mentioned. Time loomed large. Making time, finding time. Giving up “sleep and some sanity” and realizing “it will take you far, far, far longer than you naively assumed”. Importantly, writing time is time away from gainful employment: Aaron Gustafson found the hardest thing about writing a book to be “the loss of income while I was writing.” Perils and pleasures of editing Editing, be it structural, technical or copy editing, is founded on reciprocity. Without openness and a shared belief that the book is worthwhile, work can founder in acrimony and mistrust. Editors are a book’s first and most critical (in every sense) readers. Effective and perceptive editing makes a book as good as it can be, finding the book within the draft like sculpture reveals the statue in the stone. A good editor calls you out on poor assumptions and challenges you to really clarify your thinking. Whilst it can be difficult during the process to have your thinking challenged, it’s always been worth it — for me personally — in the long run. A good editor also reins you in when you’ve perhaps wandered off track or taken a little too long to make a point. —Christopher Murphy Andy Croll found editing “all positive” and Aaron Gustafson loves “working with a strong editor […] I want someone to tell it to me straight.” But it can be a rollercoaster, “both terrifying and the real moment of elation”. Mixed emotions during the editing process are common: It was very uncomfortable! I knew it was making the work stronger, but it was awkward having my inconsistencies and waffle picked apart. —Jon Hicks It can be distressing to have written work looked over by a professional, particularly for first-time book authors whose expertise lies elsewhere: I was a little nervous because I don’t consider myself a skilled writer — I never dreamed of becoming an author. I’m a designer, after all. —Geri Coady Communication is key, particularly when it comes to checking or changing the author’s words. I like a good banter between me and the tech editor — if we can have a proper argument in Word comments, that’s great. —Rachel Andrew But if handled poorly, small battles can break out. 
Rachel Andrew again: However, having had plenty of times where the technical editor has done nothing more than give a cursory glance, I started to leave little issues in for them to spot. If they picked them up I knew they were actually testing the code and I could be sure the work was being properly tech edited. If they didn’t spot them, I’d find someone myself to read through and check it! A major concern for writers is that their voices will be altered, filtered, mangled or otherwise obscured by the editing process. Good copy editing must remain unnoticed while enhancing the author’s voice in print. Donna Spencer appreciated the way her editor “tidied up my work and made it a million times better, but left it sounding exactly like me.” Similarly, Andrew Travers “was incredibly impressed at how well my editor tightened up my own writing without it feeling like another’s voice” and Val Head sums up the consensus that: the editor was able to help me express what I was trying to say in a better way […] I want to have editors for everything now. At the keyboard, keep your friends close, but your editors closer. Publishing and publishers Conditions ought to militate against the allure of writing a book about web design and development. More books are published each year than ever before, so readerships elude new authors and readers can struggle to find authors to trust in their fields of interest. New spaces for more expansive online writing about working on and with the web are opening up (sites like Contents Magazine and STET), and seminal online web development texts are emerging. Publishing online is simple, far-reaching and immediate. Much more so than articles and blog posts, books take time to research, write and read; add the complexity of commissioning, editing, designing, proofreading, printing, marketing and distribution processes, and it can take many months, even years to publish. The ceaseless headlong momentum of the web can leave articles more than a few weeks old whimpering in its wake, but updating them at least is straightforward; printed books about web development can depreciate as rapidly as the technology and techniques they describe, while retaining the “terrifying permanence that print bestows: your opinions will follow you forever”. So much moves on, and becomes out of date. Companies featured get bought by larger companies and die, techniques improve and solutions featured become terribly out of date. Unlike a website, which could be updated continuously, a book represents the thinking ‘at that time’. —Jon Hicks Publishers work hard to mitigate these issues, promoting new books and new authors, bringing authors and readers together under a trusted banner. When a publisher packages up and releases a writer’s words, it confers a seal of approval and “badge of quality”, very important to new authors. Publishers have other benefits to offer, from expert knowledge: My publisher was extraordinarily supportive (and patient). Her expertise in my chosen subject was both a pressure (I didn’t want to let her down) and a reassurance (if she liked it, I knew it was going to be fine). —Andrew Travers …to systems and support mechanisms set up specifically to encourage writers and publish books: Working as a team means you’re bringing in everyone’s expertise. —Chui Chui Tan As a writer, the best part about writing for a publisher was the writing infrastructure offered. 
—Christopher Murphy There can be drawbacks, however, and the occasional horror story: We were just one small package on a huge conveyor belt. The publisher’s process ruled all. —Cennydd Bowles It’s only looking back I realise how poorly some publishers treat writers — especially when the work is so poorly remunerated. My worst experience was when a publisher decided, after I had completed the book, that they wanted to push a different take on the subject than the brief I had been given. Instead of talking to me, they rewrote chunks of my words, turning my advice into something that I would never have encouraged. Ultimately, I refused to let the book go out under my name alone, and I also didn’t really promote the book as I would have had to point out the things I did not agree with that had been inserted! —Rachel Andrew Self-publishing is now a realistic option for web authors, and can offer “complete control over the end product” as well as the possibility of earning more than a “pathetic author revenue percentage”. There can be substantial barriers, of course, as self-publishing authors must face for themselves the risks and challenges conventional publishers usually bear. Ideally, creating a book is a collaboration between author and publisher. Geri Coady found that “working with my publisher felt more like working with a partner or co-worker, rather than working for a boss.” Wise words So, after meeting the personal costs of writing and publishing a web book — fear, uncertainty, doubt, typing (so much typing) — and then smelling the roses of success, what’s left for an author to say? Some words, perhaps, to people thinking of writing a book. Donna Spencer identifies a stumbling block common to many writers with an insight into the writing process: Having talked to a lot of potential authors, I think most have the problem that they haven’t actually figured out the ‘answer’ to their premise yet. They feel like they are stuck in the writing, but they are actually stuck in the thinking. For some no-nonsense, straightforward advice to cut through any anxiety or inadequacy, Rachel Andrew encourages authors to “treat it like any other work. There is no mystery to writing, you just have to write. Schedule the time, sit down, write words.” Tim Brown notes the importance of the editing process to refine a book and help authors reach their readers: Hire good editors. Editors are amazing thinkers who can vastly improve the quality and clarity of a piece of writing. We are too much beholden to the practical demands and challenges of technology, so Aaron Gustafson suggests a writer should “favor philosophies over techniques and your book will have a longer shelf life.” Most intimations of renown and recognition are nipped in the bud by Joe Leech’s warning: “Don’t expect fame and fortune.” Although Cennydd Bowles’ bitter experience can be discouraging: The sacrifices required are immense. You probably won’t make it. …he would do things differently for a future book: I would approach the book with […] far more concern about conveying the damn joy of what I do for a living. The pleasure of writing, not just of having written, is captured by James Chudley when he recalls: How much I enjoy writing and also how much I enjoy the discipline of having a side project like this. It’s a really good supplement to working life. And Jon Hicks has words that any author will find comforting: It will be fine. Everything will be fine. Just get on with it!
As the web expands effortlessly and ceaselessly to make room for all our words, yet it can also discourage the accumulation of any particular theme in one space, dividing rich seams and scattering knowledge across the web’s surface and into its deepest reaches. How many words become weightless and insubstantial, signals lost in the constant white noise of indistinguishable voices, unloved, unlinked? The web forgets constantly, despite the (somewhat empty) promise of digital preservation: articles and data are sacrificed to expediency, profit and apathy; online attention, acknowledgement and interest wax and wane in days, hours even. Books can encourage deeper engagement in readers, and foster faith in an author, particularly if released under the imprint of a recognized publisher within the field. And books are changing. Although still not widely adopted, EPUB3 is the new standard in ebooks, bringing with it new possibilities for interaction and connection: readers with the text; readers with readers; and readers with authors. EPUB3 is built on HTML, CSS and JavaScript — sound familiar? In the past, we took what we could from the printed page to make the web; now books are rubbing up against what we’ve made. So: a book. Ever thought you could write one? Should write one? Would? I’d like to thank all the authors who wrote their books and answered my questions. Rachel Andrew · CSS3 Layout Modules, The CSS3 Anthology and more Cennydd Bowles · Undercover User Experience Design, with James Box Tim Brown · Combining Typefaces James Chudley · Usability of Web Photos Andrew Clarke · Hardboiled Web Design Geri Coady · Colour Accessibility Andy Croll · HTML Email Anna Debenham · Front-end Style Guides Aaron Gustafson · Adaptive Web Design Val Head · CSS Animations Jon Hicks · The Icon Handbook Joe Leech · Psychology for Designers Christopher Murphy · The Craft of Words, with Niklas Persson Donna Spencer · Information Architecture, Card Sorting and How to Write Great Copy for the Web Brian Suda · Designing with Data Chui Chui Tan · International User Research Andrew Travers · Interviewing for Research",2013,Owen Gregory,owengregory,2013-12-15T00:00:00+00:00,https://24ways.org/2013/web-books/,content 20,Make Your Browser Dance,"It was a crisp winter’s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue. This was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these… and working out ways to mix them as the music played. This was it. This was me VJing. This was all a lifetime (well a decade!) ago. When I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframe directive, as well as new APIs such as getUserMedia, indexedDB, the Web Audio API But wait, have I just come full circle? 
Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser? Well, there’s only one thing to do: let’s try it! Let’s take to the dance floor Over the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let’s take the same approach here. The main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right? So, let’s start at the beginning: creating a simple visual. For this I’m going to create a CSS animation. It’s just a funky i element with the opacity being changed to make it flash. See the Pen Creating a light by Rumyra (@Rumyra) on CodePen A note about prefixes: I’ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. A great resource for finding out if you do is caniuse.com. You can also check out all the code for the examples in this article. Start the music Well, that’s pretty easy so far. Next up: loading in some sound. For this we’ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device’s speakers; and any number of processing nodes in between. All this processing that goes on with the audio is sandboxed within the AudioContext. So, let’s start by initialising our audio context. var contextClass = window.AudioContext; if (contextClass) { //web audio api available. var audioContext = new contextClass(); } else { //web audio api unavailable //warn user to upgrade/change browser } Now let’s load our sound file into the new context we created with an XMLHttpRequest. function loadSound() { //set audio file url var audioFileUrl = '/octave.ogg'; //create new request var request = new XMLHttpRequest(); request.open(""GET"", audioFileUrl, true); request.responseType = ""arraybuffer""; request.onload = function() { //take from http request and decode into buffer audioContext.decodeAudioData(request.response, function(buffer) { audioBuffer = buffer; }); }; request.send(); } Phew! Now we’ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; spatialisation. If you want to dig deeper, the O’Reilly Web Audio API book by Boris Smus is available to read online free. All we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have. Learning the steps Let’s take a minute to step back and remember our school days and science class. I’m sure if I drew a picture of a sound wave, we would all start nodding our heads. The sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically the strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size.
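As a quick aside before we carry on with the theory, here is the loading code so far gathered into one listing. Treat it as a sketch: the webkitAudioContext fallback is my addition for older WebKit browsers, and everything else matches the snippets above.

var contextClass = window.AudioContext || window.webkitAudioContext;
var audioContext;
var audioBuffer;

if (contextClass) {
  //web audio api available
  audioContext = new contextClass();
  loadSound();
} else {
  //web audio api unavailable: warn the user to upgrade or change browser
}

function loadSound() {
  var audioFileUrl = '/octave.ogg';
  var request = new XMLHttpRequest();
  request.open('GET', audioFileUrl, true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    //decode the raw response into an AudioBuffer we can analyse and play
    audioContext.decodeAudioData(request.response, function(buffer) {
      audioBuffer = buffer;
    });
  };
  request.send();
}

With the loading sorted, back to the science lesson.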
Describing sound as a wave is great when everything is analogue, but the waveform varies continuously and it’s not suitable for digital processing: there’s an infinite set of values. For digital processing, we need discrete numbers. We have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we’re doing in the code above is piping that data into the audio context. All we need to do now is access it. We can do this with the Web Audio API’s analysing functionality. Just pop in an analysing node before we connect the source to its destination node. function createAnalyser(source) { //create analyser node analyser = audioContext.createAnalyser(); //connect to source source.connect(analyser); //pipe to speakers analyser.connect(audioContext.destination); } The data I’m really interested in here is frequency. Later we could look into amplitude or time, but for now I’m going to stick with frequency. The analyser node gives us frequency data via the getByteFrequencyData method. Don’t forget to count! To collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it? This is really up to us and how high the resolution of frequencies we want to analyse is. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context’s sampleRate attribute. This is good to bear in mind when you’re thinking about your resolution of frequencies. var sampleRate = audioContext.sampleRate; Let’s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we’re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0–20,000Hz. So, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform), which in this case is 2,048, the default. You can set it via the fftSize property. //set our FFT size analyser.fftSize = 2048; //create an empty array with 1024 items var frequencyData = new Uint8Array(1024); So, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 ÷ 1,024 = 23.44Hz apart. The thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame. function update() { //constantly getting feedback from data requestAnimationFrame(update); analyser.getByteFrequencyData(frequencyData); } Putting it all together Sweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier. We can easily pause and run our CSS animation from JavaScript: element.style.webkitAnimationPlayState = ""paused""; element.style.webkitAnimationPlayState = ""running""; Unfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.
There is no really easy way to do this at the moment, as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms. This seems a bit much for our proof of concept, so let’s backtrack a little. We know by the animation we’ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript. element.style.opacity = ""1""; element.style.opacity = ""0.2""; So let’s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I’ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold. //get light elements var lights = document.getElementsByTagName('i'); var totalLights = lights.length; for (var i=0; i < totalLights; i++) { if (frequencyData[i] > 160) { //start animation on element lights[i].style.opacity = ""1""; } else { lights[i].style.opacity = ""0.2""; } } See all the code in action here. I suggest viewing in a modern browser :) Awesome! It is true — we can VJ in our browser! Let’s dance! So, let’s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. Also, maybe we should try a sound file more suited to gigs or clubs. Check it out! I don’t know about you, but I’m pretty excited — that’s just a bit of HTML, CSS and JavaScript! The other thing to think about, of course, is the sound that you would get at a venue. We don’t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I’ve found, is to capture what my laptop’s mic is picking up and pipe that back into the audio context. We can do this by using getUserMedia. Let’s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash. And relax :) There you have it. Sit back, play some music and enjoy the Winamp-like experience in front of you. So, where do we go from here? I already have a wealth of ideas. We haven’t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it’s questionable whether the browser is the best environment for this. For one, I’m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it’s fun, and it looks cool and sometimes I think it’s OK to just dance.",2013,Ruth John,ruthjohn,2013-12-02T00:00:00+00:00,https://24ways.org/2013/make-your-browser-dance/,code 22,The Responsive Hover Paradigm,"CSS transitions and animations provide web designers with a whole slew of tools to spruce up our designs. Move over ActionScript tweens! The techniques we can now implement with CSS are reminiscent of Flash-based adventures from the pages of web history. Pairing CSS enhancements with our :hover pseudo-class allows us to add interesting events to our websites. We have a ton of power at our fingertips. However, with this power, we each have to ask ourselves: just because I can do something, should I? Why bother? We hear a lot of mantras in the web community. Some proclaim the importance of content; some encourage methods like mobile first to support content; and others warn of the overhead and speed impact of decorative flourishes and visual images. I agree, one hundred percent.
At the same time, I believe that content can reign king and still provide a beautiful design with compelling interactions and acceptable performance impacts. Maybe, just maybe, we can even have a little bit of fun when crafting these systems! Yes, a site with pure HTML content and no CSS will load very fast on your mobile phone, but it leaves a lot to be desired. If you went to your local library and every book looked the same, how would you know which one to borrow? Imagine if every book was printed on the same paper stock with the same cover page in the same type size set at a legible point value… how would you know if you were going to purchase a cookbook about wild game or a young adult story about teens fighting to the death? For certain audiences, seeing a site with hip, lively hovers sure beats a stale website concept. I’ve worked on many higher education sites, and setting the interactive options is often a very important factor in engaging potential students, alumni, and donors. The same can go for e-commerce sites: enticing your audience with surprise and delight factors can be the difference between a successful and a lost sale. Knowing your content and audience can help you decide if an intriguing experience is appropriate for your site; if it is, then hover responses can be a real asset. Why hover? We have all these capabilities with CSS properties to create the aforementioned fun interactions, and it would be quite easy to fall back into some old patterns and animation abuse. The world of Flash intros and skip links could be recreated with CSS keyframes. However, I don’t think any of us want to go the route of forcing users into unwanted exchanges and road blocking content. What’s great about utilizing hover to pair with CSS powered actions is that it’s user initiated. It’s a well-established expectation that when a user mouses over an object, something changes. If we can identify that something as a link, then we will expect something to change as we move our mouse over it. By waiting to trigger a CSS-based response until a user chooses to engage with a target makes for a more polished experience (as opposed to barraging our screens with animations all willy-nilly). This makes it the perfect opportunity to add some unique spunk. What about mobile, touch, and responsive? So, you’re on board with this so far, but what about mobile and touch devices? Sure, some devices like the Samsung Galaxy S4 have some hovering capabilities, but certainly most do not. Beyond mobile devices, we also have to worry about desktops with touch capabilities. It’s super difficult to detect if a user is currently using touch or hover. One option we have is to design strictly for touch only and send hover enhancements to the graveyard. However, being that I’m all “fuck yeah hovers!,” I like to explore all options. So, let’s examine four different types of hover patterns and see how they can translate to our touch devices. 1. The essential text hover Changing text color on hover is something we’ve done for a while and it has helped aid in identifying links. To maintain the best accessibility we can achieve, it helps to have a different visual indicator on the default :link state, such as an underline. By making sure all text links have an underline, we won’t have to rely on visual changes during hover to make sure touch device users know that it is a link. For hover-enabled devices, we can add a basic color transition. Doing so creates a nice fade, which makes the change on hover less jarring. 
Kinda like smooth jazz. The code* to achieve this is quite simple: a { color: #6dd4b1; transition: color 0.25s linear; } a:hover, a:focus { color: #357099; } Browser prefixes are omitted You can see in the final result that, for both touch and hover, everyone wins: See the Pen Most Basic Link Transition by Jenn Lukas (@Jenn) on CodePen 2. Visual background wizardry and animated hovers We can take this a step further by again making changes to our aesthetic on hover, but not making any content changes. Altering image hovers for fun and personality can separate your site from others; that personality is important and can enhance our content. Let’s look at a few sites that do this really well. Scroll down to the judges section of CSS Off and check out the illustrations of the judges. On hover, the illustration fades into a photo of the judge. This provides a realistic alternative to the drawing. Users without the hover can click into the detail page, where they can see the full color picture and learn more about the judges; the information is still available through a different pathway. Going back to the higher education field, let’s visit Delaware Valley College. The school had recently gone through a rebranding that included loop icons as a symbol to connect ideas. These icons are brought into the website on hover of the slideshow arrows (WebKit browsers). The hover reveals a loop animation, tying in overall themes and adding some extra pizzazz that makes me think, “This is a hip place that feels current.” For visitors who can’t access the hover effect, the default arrow state clearly represents a clickable link, and there is swipe functionality on mobile devices to boot. DIY.org’s Frontend Dev page has a bunch of enjoyable hover actions happening, featuring scaling transforms and looping animations. Nothing new is revealed on hover, so touch devices won’t miss anything, but it intrigues the user who is visiting a site about front-end dev doing cool front-end things. It backs up its claim of front-end knowledge by adding this enhancement. The old Cowork Chicago (now redirecting) had a great example, captured here: Coop: Chicago Coworking from Jenn Lukas on Vimeo. The code for the Join areas is quite simple: .join-buttons .daily, .join-buttons .monthly { height: 260px; z-index: 0; margin-top: 30px; transition: height .2s linear,margin .2s linear; } .join-buttons .daily:hover, .join-buttons .monthly:hover { height: 280px; margin-top: 20px; } li.button:hover { z-index: 20; } The slight rotation on the photos, and the change of color and size of the rate options on hover, add to the fun factor. The site attempts to advertise the co-working space by letting bits of their charisma show through with these transitions. They don’t hit the user over the head with animations, but provide a nice addition to make sure visitors know it’s a welcoming place to work. Some text is added on the hover, but the text isn’t essential to determine where the link goes. 3. Image block hovers There have been more designs popping up with large image blocks acting as extensive hit area links to subsequent pages. On hover of these links, text is revealed, letting the user know where the link destination goes. See the Pen Transitioning Max Height by Jenn Lukas (@Jenn) on CodePen This type of link is tough for users on touch as the image might not provide enough context to reveal its target. 
If you weren’t aware of what my illustrated avatar from 2007 looked like (or even if you did), then how would you know that this is a link to my Twitter page? Instead, if we provide enough context — such as the @jennlukas handle — you could assume the destination. Users who receive the hover can also see the Twitter bio. It won’t break the experience for users that can’t hover, but it will provide a nice interaction and some more information for those that can. See the Pen Transitioning Max Height by Jenn Lukas (@Jenn) on CodePen The Esquire site follows this same pattern, in which the title of the story is shown and the subheading is revealed on hover. Dining at Altitude took the opposite approach, where all text is shown by default and, on hover, you can see more of the image that the text sits atop. This is a nice technique to follow. For touch users, following the link will allow them to see more of the image detail that was revealed on hover. 4. Drop-down navigation menu hovers Main navigation options that rely on hover have come up as a problem for touch. One way to address this is to be sure your top level items are all functional links to somewhere, and not blank anchors to trigger a submenu drop-down. This ensures that, even without the hover-triggered menu, users can still navigate to those top-level pages. From there, they should be able to access the tertiary pages shown in the drop-down. Following this arrangement, drop-down menus act as a quick shortcut and aren’t necessary to the navigational structure. If the top navigation items are your most visited pages, this execution won’t hinder your visitors. If the information within the menu is vital, such as a lone account menu, another option is to show drop-down menus on click instead of hover. This pattern will allow both mouse and touch users to access the drop-downs. Why can’t we just detect hover? This is a really tricky thing to do. Internet Explorer 10 on Windows 8 uses the aria-haspopup attribute to simulate hover on touch devices, but usually our audience stretches beyond that group. There’s been discussion around using Modernizr, but false positives have come with that. A W3C draft for Media Queries Level 4 includes a hover feature, but it’s not supported yet. Since some devices can hover and touch, should you rely on hover effects for those? Arguments have come up that users can be browsing your site with a mouse and then decide to switch to touch, or vice versa. That might be a large concern for you, or it might be an edge case that isn’t vital to your site’s success. For one site, I used mousemove and touchstart JavaScript events in order to detect if a visitor starts to browse the site with a mouse. The design initiates for touch users, showing all text on load, but as soon as a mouse movement occurs, the text becomes hidden and is then revealed on hover. See the Pen Detect Touch devices with mousemove and touchstart by Jenn Lukas (@Jenn) on CodePen One downside to this approach is that the text is viewable until a mouse enters the document, but if the elements are further down the page it might not be noticed. A second downside is if a user on a touch- and hover-enabled device starts browsing with the mouse and then switches back to touch, the hover-centric styles will remain until a new page load. These were acceptable scenarios in the project I worked on, but might not be for every project. Can we give our visitors a choice? 
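Before exploring that question, here is roughly what the mousemove and touchstart detection described above might look like. It’s a minimal, hypothetical sketch rather than Jenn’s actual Pen, and the has-hover and has-touch class names are my own; the idea is that your CSS only hides the extra text when has-hover is present.

//start in a touch-friendly state: all text visible by default
var docEl = document.documentElement;

function useHoverStyles() {
  //a mouse is in play: hide the extra text and reveal it on hover instead
  docEl.classList.add('has-hover');
  docEl.classList.remove('has-touch');
  document.removeEventListener('mousemove', useHoverStyles);
}

function useTouchStyles() {
  docEl.classList.add('has-touch');
  document.removeEventListener('touchstart', useTouchStyles);
}

document.addEventListener('touchstart', useTouchStyles, false);
document.addEventListener('mousemove', useHoverStyles, false);

The same caveats apply as above: nothing changes until a mouse actually moves, and once the hover-centric styles are on they stay on until the next page load.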
I’ve been thinking about how we can combat the concern of not knowing if our customers are using touch or a mouse, not to mention keyboard or Wacom tablets or Minority Report screens. We can cover keyboards with our friend :focus, but that still doesn’t solve our other dilemmas. Remember when we couldn’t rely on browsers to zoom text and we had to use those small A, medium A, big A [AAA] buttons? On selection of one of those options, a different style sheet would load with small, medium, or large text sizes to satisfy our user’s request. We could even set cookies to remember their font choices. What if we offered a similar solution, a hover/touch switcher, for our new predicament? See the Pen cwuJf by Jenn Lukas (@Jenn) on CodePen We could add this switcher to our design. Maybe add it to the header on smaller screens and the footer on larger screens to play the odds. Then be sure to deliver the appropriate touch- or hover-optimized adventure for our guests. How about adding View options in the areas where we’re hiding content until hover? Looking at Delta Cycle, there’s logic in place to switch layouts on some mobile devices. On desktops we can see the layout shows the product and price by default, and the name of the item and an Add to cart button on hover. If you want to keep this hover, but also worry that touch users can’t access it — or even if you are concerned that people might want to view it with more details up front — we could add another view switcher. See the Pen List/Grid Views for Hover or Touch by Jenn Lukas (@Jenn) on CodePen Similar to the list versus grid view we often see in operating systems, a choice here could cover all of our bases. Conclusion There is no one-size-fits-all solution when it comes to hover patterns. Design for your content. If you are providing important information about driving directions or healthcare, you might want to err on the side of designing for touch only. If you are behind an educational site and trying to entice more traffic and sign-ups, or a more immersive e-commerce site selling pies, then hover activity can help support your content and engage your visitors without being a detriment. While content can be our top priority, let’s not forget that our designs and interactions, hovers included, can have a great positive impact on how visitors experience our site. Hover wisely, friends.",2013,Jenn Lukas,jennlukas,2013-12-12T00:00:00+00:00,https://24ways.org/2013/the-responsive-hover-paradigm/, 24,Kill It With Fire! What To Do With Those Dreaded FAQs,"In the mid-1640s, a man named Matthew Hopkins attempted to rid England of the devil’s influence, primarily by demanding payment for the service of tying women to chairs and tossing them into lakes. Unsurprisingly, his methods garnered criticism. Hopkins defended himself in The Discovery of Witches in 1647, subtitled “Certaine Queries answered, which have been and are likely to be objected against MATTHEW HOPKINS, in his way of finding out Witches.” Each “querie” was written in the voice of an imagined detractor, and answered in the voice of an imagined defender (always referring to himself as “the discoverer,” or “him”): Quer. 14. All that the witch-finder doth is to fleece the country of their money, and therefore rides and goes to townes to have imployment, and promiseth them faire promises, and it may be doth nothing for it, and possesseth many men that they have so many wizzards and so many witches in their towne, and so hartens them on to entertaine him. Ans. 
You doe him a great deale of wrong in every of these particulars. Hopkins’ self-defense was an early modern English FAQ. Digital beginnings Question and answer formatting certainly isn’t new, and stretches back much further than witch-hunt days. But its most modern, most notorious, most reviled incarnation is the internet’s frequently asked questions page. FAQs began showing up on pre-internet mailing lists as a way for list members to answer and pre-empt newcomers’ repetitive questions: The presumption was that new users would download archived past messages through ftp. In practice, this rarely happened and the users tended to post questions to the mailing list instead of searching its archives. Repeating the “right” answers becomes tedious… When all the users of a system can hear all the other users, FAQs make a lot of sense: the conversation needs to be managed and manageable. FAQs were a stopgap for the technological limitations of the time. But the internet moved past mailing lists. Online information can be stored, searched, filtered, and muted; we choose and control our conversations. New users no longer rely on the established community to answer their questions for them. And yet, FAQs are still around. They’re a content anti-pattern, replicated from site to site to solve a problem we no longer have. What we hate when we hate FAQs As someone who creates and structures online content – always with the goal of making that content as useful as possible to people – FAQs drive me absolutely batty. Almost universally, FAQs represent the opposite of useful. A brief list of their sins: Double trouble Duplicated content is practically a given with FAQs. They’re written as though they’ll be accessed in a vacuum – but search results, navigation patterns, and curiosity ensure that users will seek answers throughout the site. Is our goal to split their focus? To make them uncertain of where to look? To divert them to an isolated microcosm of the website? Duplicated content means user confusion (to say nothing of the duplicated workload for maintaining content). Leaving the job unfinished Many FAQs fail before they’re even out of the gate, presenting a list of questions that’s incomplete (too short and careless to be helpful) or irrelevant (avoiding users’ real concerns in favor of soundbites). Alternately, if the right questions are there, the answers may be convoluted, jargon-heavy, or otherwise difficult to understand. Long lists of not-my-question Getting a single answer often means sifting through a haystack of questions. For each potential question, the user must read, comprehend, assess, move on, rinse, repeat. That’s a lot of legwork for little reward – and a lot of opportunity for mistakes. Users may miss their question, or they may fail to recognize a differently worded version of their question, or they may not notice when their sought-after answer appears somewhere they didn’t expect. The ventriloquist act FAQs shift the point of view. While websites speak on behalf of the organization (“our products,” “our services,” “you can call us for assistance,” etc.), FAQs speak as the user – “I can’t find my password” or “How do I sign up?” Both voices are written from the first-person perspective, but speak for different entities, which is disorienting: it breaks the tone and messaging across the website. It’s also presumptuous: why do you get to speak for the user? 
These all underscore FAQs’ fatal flaw: they are content without context, delivered without regard for the larger experience of the website. You can hear the absurdity in the name itself: if users are asking the same questions so frequently, then there is an obvious gulf between their needs and the site content. (And if not, then we have a labeling problem.) Instead of sending users to a jumble of maybe-it’s-here-maybe-it’s-not questions, the answers to FAQs should be found naturally throughout a website. They are not separated, not isolated, not other. They are the content. To present it otherwise is to create a runaround, and users know it. Jay Martel’s parody, “F.A.Q.s about F.A.Q.s” captures the silliness and frustration of such a system: Q: Why are you so rude? A: For that answer, you would have to consult an F.A.Q.s about F.A.Q.s about F.A.Q.s. But your time might be better served by simply abandoning your search for a magic answer and taking responsibility for your own profound ignorance. FAQs aren’t magic answers. They don’t resolve a content dilemma or even help users. Yet they keep cropping up, defiant, weedy, impossible to eradicate. Where are they all coming from? Blame it on this: writing is hard. When generating content, most of us do whatever it takes to get some words on the screen. And the format of question and answer makes it easy: a reactionary first stab at content development. After all, the point of website content is to answer users’ questions. So this – to give everyone credit – is a really good move. Content creators who think in terms of questions and answers are actually thinking of their users, particularly first-time users, trying to anticipate their needs and write towards them. It’s a good start. But it’s scaffolding: writing that helps you get to the writing you’re supposed to be doing. It supports you while you write your way to the heart of your content. And once you get there, you have to look back and take the scaffolding down. Leaving content in the Q&A format that helped you develop it is missing the point. You’re not there to build scaffolding. You have to see your content in its naked purpose and determine the best method for communicating that purpose – and it usually won’t be what got you there. The goal (to borrow a lesson from content management systems) is to separate the content from its presentation, to let the meaning of the content inform its display. This is, of course, a nice theory. An occasionally necessary evil I have a lot of clients who adore FAQs. They’ve developed their content over a long period of time. They’ve listened to the questions their users are asking. And they’ve answered them all on a page that I simply cannot get them to part with. Which means I’ve had to consider that there may be occasions where an FAQ page is appropriate. As an example: one of my clients is a financial office in a large institution. Because this office manages several third-party systems that serve a range of niche audiences, they had developed FAQs that addressed hyper-specific instances of dysfunction within systems for different users – à la “I’m a financial director and my employee submitted an expense report in such-and-such system and it returned such-and-such error. What do I do?” Yes, this content could be removed from the question format and rewritten. But I’m not sure it would be an improvement. It won’t necessarily resolve concerns about length and searchability, and the different audiences may complicate the delivery. 
And since the work of rewriting it didn’t fit into the client workflow (small team, no writers, pressed for time), I didn’t recommend the change. I’ve had to make peace with not being able to torch all the FAQs on the internet. Some content, like troubleshooting information or complex procedures, may be better in that format. It may be the smartest way for a particular client to handle that particular information. Of course, this has to be determined on a case-by-case basis, taking into account the amount of content, the subject matter, the skill levels of the content creators, the publishing workflow, and the search habits of the users. If you determine that an FAQ page is the only way to go, ask yourself: Is there a better label or more specific term for the page (support, troubleshooting, product concerns, etc.)? Is there a way to structure the page, categorize the questions, or otherwise make it easier for users to navigate quickly to the answer they need? Is a question and answer format absolutely the best way to communicate this information? Form follows function Just as a question and answer format isn’t necessarily required to deliver the content, neither is it an inappropriate method in and of itself. Content professionals have developed a knee-jerk reaction: It’s an FAQ page! Quick, burn it! Buuuuurn it! But there’s no inherent evil in questions and answers. Framing content in an interrogatory construct is no more a deal with the devil than subheads and paragraphs, or narrative arcs, or bullet points. Yes, FAQs are riddled with communication snafus. They deserve, more often than not, to be tied to a chair and thrown into a lake. But that wouldn’t fix our content problems. FAQs are a shiny and obvious target for our frustration, but they’re not unique in their flaws. In any format, in any display, in any kind of page, weak content can rear its ugly, poorly written head. It’s not the Q&A that’s to blame, it’s bad content. Content without context will always fail users. That’s the real witch in our midst.",2013,Lisa Maria Martin,lisamariamartin,2013-12-08T00:00:00+00:00,https://24ways.org/2013/what-to-do-with-faqs/,content 26,Integrating Contrast Checks in Your Web Workflow,"It’s nearly Christmas, which means you’ll be sure to find an overload of festive red and green decorating everything in sight—often in the ugliest ways possible. While I’m not here to battle holiday tackiness in today’s 24 ways, it might just be the perfect reminder to step back and consider how we can implement colour schemes in our websites and apps that are not only attractive, but also legible and accessible for folks with various types of visual disabilities. This simulated photo demonstrates how red and green Christmas baubles could appear to a person affected by protanopia-type colour blindness—not as festive as you might think. Source: Derek Bruff I’ve been fortunate to work with Simply Accessible to redesign not just their website, but their entire brand. Although the new site won’t be launching until the new year, we’re excited to let you peek under the tree and share a few treats as a case study into how we tackled colour accessibility in our project workflow. Don’t worry—we won’t tell Santa! Create a colour game plan A common misconception about accessibility is that meeting compliance requirements hinders creativity and beautiful design—but we beg to differ.
Unfortunately, like many company websites and internal projects, Simply Accessible has spent so much time helping others that they had not spent enough time helping themselves to show the world who they really are. This was the perfect opportunity for them to practise what they preached. After plenty of research and brainstorming, we decided to evolve the existing Simply Accessible brand. Or, rather, salvage what we could. There was no established logo to carry into the new design (it was a stretch to even call it a wordmark), and the Helvetica typography across the site lacked any character. The only recognizable feature left to work with was colour. It was a challenge, for sure: the oranges looked murky and brown, and the blues looked way too corporate for a company like Simply Accessible. We knew we needed to inject a lot of personality. The old Simply Accessible website and colour palette. After an audit to round up every colour used throughout the site, we dug in deep and played around with some ideas to bring some new life to this palette. Choose effective colours Whether you’re starting from scratch or evolving an existing brand, the first step to having an effective and legible palette begins with your colour choices. While we aren’t going to cover colour message and meaning in this article, it’s important to understand how to choose colours that can be used to create strong contrast—one of the most important ways to create hierarchy, focus, and legibility in your design. There are a few methods of creating effective contrast. Light and dark colours The contrast that exists between light and dark colours is the most important attribute when creating effective contrast. Try not to use colours that have a similar lightness next to each other in a design. The red and green colours on the left share a similar lightness and don’t provide enough contrast on their own without making some adjustments. Removing colour and showing the relationship in greyscale reveals that the version on the right is much more effective. It’s important to remember that red and green colour pairs cause difficulty for the majority of colour-blind people, so they should be avoided wherever possible, especially when placed next to each other. Complementary contrast Effective contrast can also be achieved by choosing complementary colours (other than red and green), that are opposite each other on a colour wheel. These colour pairs generally work better than choosing adjacent hues on the wheel. Cool and warm contrast Contrast also exists between cool and warm colours on the colour wheel. Imagine a colour wheel divided into cool colours like blues, purples, and greens, and compare them to warm colours like reds, oranges and yellows. Choosing a dark shade of a cool colour, paired with a light tint of a warm colour will provide better contrast than two warm colours or two cool colours. Develop colour concepts After much experimentation, we settled on a simple, two-colour palette of blue and orange, a cool-warm contrast colour scheme. We added swatches for call-to-action messaging in green, error messaging in red, and body copy and form fields in black and grey. Shades and tints of blue and orange were added to illustrations and other design elements for extra detail and interest. First stab at a new palette. We introduced the new palette for the first time on an internal project to test the waters before going full steam ahead with the website. 
It gave us plenty of time to get a feel for the new design before sharing it with the public. Putting the test palette into practice with an internal report It’s important to be open to changes in your palette as it might need to evolve throughout the design process. Don’t tell your client up front that this palette is set in stone. If you need to tweak the colour of a button later because of legibility issues, the last thing you want is your client pushing back because it’s different from what you promised. As it happened, we did tweak the colours after the test run, and we even adjusted the logo—what looked great printed on paper looked a little too light on screens. Consider how colours might be used Don’t worry if you haven’t had the opportunity to test your palette in advance. As long as you have some well-considered options, you’ll be ready to think about how the colour might be used on the site or app. Obviously, in such early stages it’s unlikely that you’re going to know every element or feature that will appear on the site at launch time, or even which design elements could be introduced to the site later down the road. There are, of course, plenty of safe places to start. For Simply Accessible, I quickly mocked up these examples in Illustrator to get a handle on the elements of a website where contrast and legibility matter the most: text colours and background colours. While it’s less important to consider the contrast of decorative elements that don’t convey essential information, it’s important for a reader to be able to discern elements like button shapes and empty form fields. A basic list of possible colour combinations that I had in mind for the Simply Accessible website Run initial tests Once these elements were laid out, I manually plugged in the HTML colour code of each foreground colour and background colour on Lea Verou’s Contrast Checker. I added the results from each colour pair test to my document so we could see at a glance which colours needed adjustment or which colours wouldn’t work at all. Note: Read more about colour accessibility and contrast requirements As you can see, a few problems were revealed in this test. To meet the minimum AA compliance, we needed to slightly darken the green, blue, and orange background colours for text—an easy fix. A more complicated problem was apparent with the button colours. I had envisioned some buttons appearing over a blue background, but the contrast ratios were well under 3:1. Although there isn’t a guide in WCAG for contrast requirements of two non-text elements, the ISO and ANSI standard for visible contrast is 3:1, which is what we decided to aim for. We also checked our colour combinations in Color Oracle, an app that simulates the most extreme forms of colour blindness. It confirmed that coloured buttons over blue backgrounds was simply not going to work. The contrast was much too low, especially for the more common deuteranopia and protanopia-type deficiencies. How our proposed colour pairs could look to people with three types of colour blindness Make adjustments if necessary As a solution, we opted to change all buttons to white when used over dark coloured backgrounds. In addition to increasing contrast, it also gave more consistency to the button design across the site instead of introducing a lot of unnecessary colour variants. Putting more work into getting compliant contrast ratios at this stage will make the rest of implementation and testing a breeze. 
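If you would rather script these checks than paste values into a form one pair at a time, the maths behind contrast checkers is straightforward. Here is a sketch of the WCAG relative luminance and contrast ratio formulas in JavaScript; the colours at the end are hypothetical examples, and the function expects six-digit hex values only.

    // Relative luminance of a six-digit hex colour, per the WCAG 2.0 definition.
    function relativeLuminance(hex) {
      var channels = hex.replace('#', '').match(/.{2}/g).map(function (channel) {
        var c = parseInt(channel, 16) / 255;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
      });
      return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
    }

    // Contrast ratio between two colours: (L1 + 0.05) / (L2 + 0.05), lighter over darker.
    function contrastRatio(foreground, background) {
      var l1 = relativeLuminance(foreground);
      var l2 = relativeLuminance(background);
      return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
    }

    console.log(contrastRatio('#ffffff', '#1d5f8a').toFixed(2) + ':1');

A few lines like this can loop over every foreground and background pair in a palette and flag anything below 4.5:1 for body text, or below 3:1 for large text and the non-text elements discussed above.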
When you’ve got those ratios looking good, it’s time to move on to implementation. Implement colours in style guide and prototype Once I was happy with my contrast checks, I created a basic style guide and added all the colour values from my colour exploration files, introduced more tints and shades, and added patterned backgrounds. I created examples of every panel style we were planning to use on the site, with sample text, links, and buttons—all with working hover states. Not only does this make it easier for the developer, it allows you to check in the browser for any further contrast issues. Run a final contrast check During the final stages of testing and before launch, it’s a good idea to do one more check for colour accessibility to ensure nothing’s been lost in translation from design to code. Unless you’ve introduced massive changes to the design in the prototype, it should be fairly easy to fix any issues that arise, particularly if you’ve stayed on top of updating any revisions in the style guide. One of the more well-known evaluation tools, WAVE, is web-based and will work in any browser, but I love using Chrome’s Accessibility Tools. Not only are they built right in to the Inspector, but they’ll work if your site is password-protected or private, too. Chrome’s Accessibility Tools audit feature shows that there are no immediate issues with colour contrast in our prototype The human touch Finally, nothing beats a good round of user testing. Even evaluation tools have their flaws. Although they’re great at catching contrast errors for text and backgrounds, they aren’t going to be able to find errors in non-text elements, infographics, or objects placed next to each other where discernible contrast is important. Our final palette, compared with our initial ideas, was quite different, but we’re proud to say it’s not just compliant, but shows Simply Accessible’s true personality. Who knows, it may not be final at all—there are so many opportunities down the road to explore and expand it further. Accessibility should never be an afterthought in a project. It’s not as simple as adding alt text to images, or running your site through a compliance checker at the last minute and assuming that a pass means everything is okay. Considering how colour will be used during every stage of your project will help avoid massive problems before launch, or worse, launching with serious issues. If you find yourself working on a personal project over the Christmas break, try integrating these checks into your workflow and make colour accessibility a part of your New Year’s resolutions.",2014,Geri Coady,gericoady,2014-12-22T00:00:00+00:00,https://24ways.org/2014/integrating-contrast-checks-in-your-web-workflow/,design 27,Putting Design on the Map,"The web can leave us feeling quite detached from the real world. Every site we make is really just a set of abstract concepts manifested as tools for communication and expression. At any minute, websites can disappear, overwritten by a newfangled version or simply gone. I think this is why so many of us have desires to create a product, write a book, or play with the internet of things. We need to keep in touch with the physical world and to prove (if only to ourselves) that we do make real things. I could go on and on about preserving the web, the challenges of writing a book, or thoughts about how we can deal with the need to make real things. 
Instead, I’m going to explore something that gives us a direct relationship between a website and the physical world – maps. A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected. Reif Larsen, The Selected Works of T.S. Spivet The simplest form of map on a website tends to be used for showing where a place is and often directions on how to get to it. That’s an incredibly powerful tool. So why is it, then, that so many sites just plonk in a default Google Map and leave it as that? You wouldn’t just use dark grey Helvetica on every site, would you? Where’s the personality? Where’s the tailored experience? Where is the design? Jumping into design Let’s keep this simple – we all want to be better web folk, not cartographers. We don’t need to go into the history, mathematics or technology of map making (although all of those areas are really interesting to research). For the sake of our sanity, I’m going to gloss over some of the technical areas and focus on the practical concepts. Tiles If you’ve ever noticed a map loading in sections, it’s because it uses tiles that are downloaded individually instead of requiring the user to download everything that they might need. These tiles come in many styles and can be used for anything that covers large areas, such as base maps and data. You’ve seen examples of alternative base maps when you use Google Maps as Google provides both satellite imagery and road maps, both of which are forms of base maps. They are used to provide context for the real world, or any other world for that matter. A marker on a blank page is useless. The tiles are representations of the physical; they do not have to be photographic imagery to provide context. This means you can design the map itself. The easiest way to conceive this is by comparing Google’s road maps with Ordnance Survey road maps. Everything about the two maps is different: the colours, the label fonts and the symbols used. Yet they still provide the exact same context (other maps may provide different context such as terrain contours). Comparison of Google Maps (top) and the Ordnance Survey (bottom). Carefully designing the base map tiles is as important as any other part of the website. The most obvious, yet often overlooked, aspect are aesthetics and branding. Maps could fit in with the rest of the site; for example, by matching the colours and line weights, they can enhance the full design rather than inhibiting it. You’re also able to define the exact purpose of the map, so instead of showing everything you could specify which symbols or labels to show and hide. I’ve not done any real research on the accessibility of base maps but, having looked at some of the available options, I think a focus on the typography of labels and the colour of the various elements is crucial. While you can choose to hide labels, quite often they provide the data required to make sense of the map. Therefore, make sure each zoom level is not too cluttered and shows enough to give context. Also be as careful when choosing the typeface as you are in any other design work. As for colour, you need to pay closer attention to issues like colour-blindness when using colour to convey information. Quite often a spectrum of colour will be used to show data, or to show the topography, so you need to be aware that some people struggle to see colour differences within a spectrum. 
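If you are wondering what swapping in your own designed tiles looks like in practice, here is a hedged sketch using Leaflet, one of the tools covered later in this article. It assumes the Leaflet script and a container element with the id of map are already on the page; the tile URL is a placeholder for wherever your custom-styled tiles are served from, and the attribution depends on your data source.

    // Point the map at your own styled tiles rather than a default base map.
    var map = L.map('map').setView([50.3755, -4.1427], 13); // Plymouth, for example

    L.tileLayer('https://tiles.example.com/my-house-style/{z}/{x}/{y}.png', {
      maxZoom: 18,
      attribution: '&copy; OpenStreetMap contributors'
    }).addTo(map);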
A nice example of a customised base map can be found on Michael K Owens’ check-in pages: One of Michael K Owens’ check-in pages. As I’ve already mentioned, tiles are not just for base maps: they are also for data. In the screenshot below you can see how Plymouth Marine Laboratory uses tiles to show data with a spectrum of colour. A map from the Marine Operational Ecology data portal, showing data of adult cod in the North Sea. Technical You’re probably wondering how to design the base layers. I will briefly explain the concepts here and give you tools to use at the end of the article. If you’re worried about the time it takes to design the maps, don’t be – you can automate most of it. You don’t need to manually draw each tile for the entire world! We’ve learned the importance of web standards the hard way, so you’ll be glad (and I won’t have to explain the advantages) of the standard for web mapping from the Open Geospatial Consortium (OGC) called the Web Map Service (WMS). You can use conventional file formats for the imagery but you need a way to query for the particular tiles to show for the area and zoom level, that is what WMS does. Features Tiles are great for covering large areas but sometimes you need specific smaller areas. We call these features and they usually consist of polygons, lines or points. Examples include postcode boundaries and routes between places, or even something more dynamic such as borders of nations changing over time. Showing features on a map presents interesting design challenges. If the colour or shape conveys some kind of data beyond geographical boundaries then it needs to be made obvious. This is actually really hard, without building complicated user interfaces. For example, in the image below, is it obvious that there is a relationship between the colours? Does it need a way of showing what the colours represent? Choropleth map showing ranked postcode areas, using ViziCities. Features are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject – it requires artistic judgement. Erwin Josephus Raisz, cartographer (1893–1968) Where lots of boundaries are small and close together (such as a high street or shopping centre) will it be obvious where the boundaries are and what they represent? When designing maps, the hardest challenge is dealing with how the data is represented and how it is understood by the user. Technical As you probably gathered, we use WMS for tiles and another standard called the web feature service (WFS) for specific features. I need to stress that the difference between the two is that WMS is for tiling, whereas WFS is for specific features. Both can use similar file formats but should be used for their particular use cases. You may be wondering why you can’t just use a vector format such as KML, GeoJSON (or even SVG) – and you can – but the issue is the same as for WMS: you need a way to query the data to get the correct area and zoom level. User interface There is of course never a correct way to design an interface as there are so many different factors to take into consideration for each individual project. Maps can be used in a variety of ways, to provide simple information about directions or for complex visualisations to explain large amounts of data. I would like to just touch on matters that need to be taken into account when working with maps. 
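Before we get to the interface, here is a rough sketch of how those two standards can sit side by side in code, again assuming Leaflet and a map container as before. The endpoints, layer names and styling are all invented for illustration: WMS supplies the tiled data, and the features arrive as GeoJSON from something like a WFS GetFeature request.

    var map = L.map('map').setView([54.5, -3.0], 6);

    // Tiled data requested over WMS; the endpoint and layer name are placeholders.
    L.tileLayer.wms('https://maps.example.org/wms', {
      layers: 'sea-surface-temperature',
      format: 'image/png',
      transparent: true
    }).addTo(map);

    // Specific features, fetched here as GeoJSON from a hypothetical WFS endpoint.
    fetch('https://maps.example.org/wfs?service=WFS&request=GetFeature&typeName=survey-areas&outputFormat=application/json')
      .then(function (response) { return response.json(); })
      .then(function (geojson) {
        L.geoJson(geojson, {
          style: { color: '#1d5f8a', weight: 2 } // choose colours that fit the rest of the site
        }).addTo(map);
      });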
As I mentioned at the beginning, there are so many Google Maps on the web that people seem to think that its UI is the only way you can use a map. To some degree we don’t want to change that, as people know how to use them; but does every map require a zoom slider or base map toggle? In fact, does the user need to zoom at all? The answer to that one is generally yes, zooming does provide more context to where the map is zoomed in on. In some cases you will need to let users choose what goes on the map (such as data layers or directions), so how do they show and hide the data? Does a simple drop-down box work, or do you need search? Google’s base map toggle is quite nice since it doesn’t offer many options yet provides very different contexts and styling. It isn’t until we get to this point that we realise just plonking a quick Google map is really quite ridiculous, especially when compared to the amount of effort we make in other areas such as colour, typography or how the CSS is written. Each of these is important but we need to make sure the whole site is designed, and that includes the maps as much as any other content. Putting it into practice I could ramble on for ages about what we can do to customise maps to fit a site’s personality and correctly represent the data. I wanted to focus on concepts and standards because tools constantly change and it is never good to just rely on a tool to do the work. That said, there are a large variety of tools that will help you turn these concepts into reality. This is not a comparison; I just want to show you a few of the many options you have for maps on the web. Google OK, I’ve been quite critical so far about Google Maps but that is only because there is such a large amount of the default maps across the web. You can style them almost as much as anything else. They may not allow you to use custom WMS layers but Google Maps does have its own version, called styled maps. Using an array of map features (in the sense of roads and lakes and landmarks rather than the kind WFS is used for), you can style the base map with JavaScript. It even lets you toggle visibility, which helps to avoid the issue of too much clutter on the map. As well as lacking WMS, it doesn’t support WFS, but it does support GeoJSON and KML so you can still show the features on the map. You should also check out Google Maps Engine (the new version of My Maps), which provides an interface for creating more advanced maps with a selection of different base maps. A premium version is available, essentially for creating map-based visualisations, and it provides a step up from the main Google Maps offering. A useful feature in some cases is that it gives you access to many datasets. Leaflet You have probably seen Leaflet before. It isn’t quite as popular as Google Maps but it is definitely used often and for good reason. Leaflet is a lightweight open source JavaScript library. It is not a service so you don’t have to worry about API throttling and longevity. It gives you two options for tiling, the ability to use WMS, or to directly get the file using variables in the filename such as /{z}/{x}/{y}.png. I would recommend using WMS over dynamic file names because it is a standard, but the ability to use variables in a file name could be useful in some situations. Leaflet has a strong community and a well-documented API. Mapbox As a freemium service, Mapbox may not be perfect for every use case but it’s definitely worth looking into. 
The service offers incredible customisation tools as well as lots of data sources and hosting for the maps. It also provides plenty of libraries for the various platforms, so you don’t have to only use the maps on the web. Mapbox is a service, though its map design tool is open source. Mapbox Studio is a vector-only version of their previous tool called Tilemill. Earlier I wrote about how typography and colour are as important to maps as they are to the rest of a website; if you thought, “Yes, but how on earth can I design those parts of a map?” then this is the tool for you. It is incredibly easy to use. Essentially each map has a stylesheet. If you do not want to open a paid-for Mapbox account, then you can export the tiles (as PNG, SVG etc.) to use with other map tools. OpenLayers After a long wait, OpenLayers 3 has been released. It is similar to Leaflet in that it is a library not a service, but it has a much broader scope. During the last year I worked on the GIS portal at Plymouth Marine Laboratory (which I used to show the data tiles earlier), it essentially used OpenLayers 2 to create a web-based geographic information system, taking a large amount of data and permitting analysis (such as graphs) without downloading entire datasets and complicated software. OpenLayers 3 has improved greatly on the previous version in both performance and accessibility. It is the ideal tool for complex map-based web apps, though it can be used for the simple use cases too. OpenStreetMap I couldn’t write an article about maps on the web without at least mentioning OpenStreetMap. It is the place to go for crowd-sourced data about any location, with complete road maps and a strong API. ViziCities The newest project on this list is ViziCities by Robin Hawkes and Peter Smart. It is a open source 3-D visualisation tool, currently in the very early stages of development. The basic example shows 3-D buildings around the world using OpenStreetMap data. Robin has used it to create some incredible demos such as real-time London underground trains, and planes landing at an airport. Edward Greer and I are currently working on using ViziCities to show ideal housing areas based on particular personas. We chose it because the 3-D aspect gives us interesting possibilities for the data we are able to visualise (such as bar charts on the actual map instead of in the UI). Despite not being a completely stable, fully featured system, ViziCities is worth taking a look at for some use cases and is definitely going to go from strength to strength. So there you have it – a whistle-stop tour of how maps can be customised. Now please stop plonking in maps without thinking about it and design them as you design the rest of your content.",2014,Shane Hudson,shanehudson,2014-12-11T00:00:00+00:00,https://24ways.org/2014/putting-design-on-the-map/,design 28,Why You Should Design for Open Source,"Let’s be honest. Most designers don’t like working for nothing. We rally against spec work and make a stand for contracts and getting paid. That’s totally what you should do as a professional designer in the industry. It’s your job. It’s your hard-working skill. It’s your bread and butter. Get paid. However, I’m going to make a case for why you could also consider designing for open source. First, I should mention that not all open source work is free work. Some companies hire open source contributors to work on their projects full-time, usually because that project is used by said company. 
There are other companies that encourage open source contribution and even offer 20%-time for these projects (where you can spend one day a week contributing to open source). These are super rad situations to be in. However, whether you’re able to land a gig doing this type of work, or you’ve decided to volunteer your time and energy, designing for open source can be rewarding in many other ways. Portfolio building New designers often find themselves in a catch-22 situation: they don’t have enough work experience showcased in their portfolio, which leads to them not getting much work because their portfolio is bare. These new designers often turn to unsolicited redesigns to fill their portfolio. An unsolicited redesign is a proof of concept in which a designer attempts to redesign a popular website. You can see many of these concepts on sites like Dribbble and Behance and there are even websites dedicated to showcasing these designs, such as Uninvited Designs. There’s even a subreddit for them. There are quite a few negative opinions on unsolicited redesigns, though some people see things from both sides. If you feel like doing one or two of these to fill your portfolio, that’s of course up to you. But here’s a better suggestion. Why not contribute design for an open source project instead? You can easily find many projects in great need of design work, from branding to information design, documentation, and website or application design. The benefits to doing this are far better than an unsolicited redesign. You get a great portfolio piece that actually has greater potential to get used (especially if the core team is on board with it). It’s a win-win situation. Not all designers are in need of portfolio filler, but there are other benefits to contributing design. Giving back to the community My first experience with voluntary work was when I collaborated with my friend, Vineet Thapar, on a pro bono project for the W3C’s Web Accessibility Initiative redesign project back in 2004. I was very excited to contribute CSS to a website that would get used by the W3C! Unfortunately, it decided to go a different direction and my work did not get used. However, it was still pretty exciting to have the opportunity, and I don’t regret a moment of that work. I learned a lot about accessibility from this experience and it helped me land some of the jobs I’ve had since. Almost a decade later, I got super into Sass. One of the core maintainers, Chris Eppstein, lamented on Twitter one day that the Sass website and brand was in dire need of design help. That led to the creation of an open source task force, Team Sass Design, and we revived the brand and the website, which launched at SassConf in 2013. It helped me in my current job. I showed it during my portfolio review when I interviewed for the role. Then I was able to use inspiration from a technique I’d tried on the Sass website to help create the more feature-rich design system that my team at work is building. But most importantly, I soon learned that it is exhilarating to be a part of the Sass community. This is the biggest benefit of all. It feels really good to give back to the technology I love and use for getting my work done. Ben Werdmuller writes about the need for design in open source. It’s great to see designers contributing to open source in awesome ways. When A List Apart’s website went open source, Anna Debenham contributed by helping build its pattern library. Bevan Stephens worked with FontForge on the design of its website. 
There are also designers who have created their own open source projects. There’s Dan Cederholm’s Pears, which shares common patterns in markup and style. There’s also Brad Frost’s Pattern Lab, which shares his famous method of atomic design and applies it to a design system. These systems and patterns have been used in real-world projects, such as RetailMeNot, so designers have contributed to the web in an even larger way simply by putting their work out there for others to use. That’s kind of fun to think about. How to get started So are you stoked about getting into the open source community? That’s great! Initially, you might get worried or uncomfortable in getting involved. That’s okay. But first consider that the project is open source for a reason. Your contribution (no matter how large or small) can help in a big way. If you find a project you’re interested in helping, make sure you do your research. Sometimes project team members will be attached to their current design. Is there already a designer on the core team? Reach out to that designer first. Don’t be too aggressive with why you think your design is better than theirs. Rather, offer some constructive feedback and a proposal of what would make the design better. Chances are, if the designer cares about the project, and you make a strong case, they’ll be up for it. Are there contribution guidelines? It’s proper etiquette to read these and follow the community’s rules. You’ll have a better chance of getting your work accepted, and it shows that you take the time to care and add to the overall quality of the project. Does the project lack guidelines? Consider starting a draft for that before getting started in the design. When contributing to open source, use your initiative to solve problems in a manageable way. Huge pull requests are hard to review and will often either get neglected or rejected. Work in small, modular, and iterative contributions. So this is my personal take on what I’ve learned from my experience and why I love open source. I’d love to hear from you if you have your own experience in doing this and what you’ve learned along the way as well. Please share in the comments! Thanks Drew McLellan, Eric Suzanne, Kyle Neath for sharing their thoughts with me on this!",2014,Jina Anne,jina,2014-12-19T00:00:00+00:00,https://24ways.org/2014/why-you-should-design-for-open-source/,design 29,What It Takes to Build a Website,"In 1994 we lost Kurt Cobain and got the world wide web as a weird consolation prize. In the years that followed, if you’d asked me if I knew how to build a website I’d have said yes, I know HTML, so I know how to build a website. If you’d then asked me what it takes to build a website, I’d have had to admit that HTML would hardly feature. Among the design nerdery and dev geekery it’s easy to think that the nuts and bolts of building a page just need to be multiplied up and Ta-da! There’s your website. That can certainly be true with weekend projects and hackery for fun. It works for throwing something together on GitHub or experimenting with ideas on your personal site. But what about working professionally on client projects? The web is important, so we need to build it right. It’s 2015 – your job involves people paying you money for building websites. What does it take to build a website and to do it right? What practices should we adopt to make really great, successful and professional web projects in 2015? I put that question to some friends and 24 ways authors to see what they thought. 
Getting the tech right Inevitably, it all starts with the technology. We work in a technical medium, after all. From Notepad and WinFTP through to continuous integration and deployment – how do you build sites? Create a stable development environment There’s little more likely to send a web developer into a wild panic and a client into a wild rage than making a new site live and things just not working. That’s why it’s important to have realistic development and staging environments that mimic the live server as closely as possible. Are you in the habit of developing new sites right on the client’s server? Or maybe in a subfolder on your local machine? It’s time to reconsider. Charlie Perrins writes: Don’t work on a live server – this feels like one of those gear-changing moments for a developer’s growth. Build something that works just as well locally on your own machine as it does on a live server, and capture the differences in the code between the local and live version in a single config file. Ultimately, if you can get all the differences between environments down to a config level then you’ll be in a really good position to automate the deployment process at some point in the future. Anything that creates a significant difference between the development and the live environments has the potential to cause problems you won’t know about until the site goes live – and at that point the problems are very public and very embarrassing, not to mention unprofessional. A reasonable solution is to use a tool like MAMP PRO which enables you to set up an individual local website for each project you work on. Importantly, individual sites give you both consistency of paths between development and live, but also the ability to configure server options (like PHP versions and configuration, for example) to match the live site. Better yet is to use a virtual machine, managed with a tool such as Vagrant. If you’re interested in learning more about that, we have an article on that subject later in the series. Use source control Trent Walton writes: We use source control, and it’s become the centerpiece for how we handle collaboration, enhancements, and issues. It drives our process. I’m hoping by now that you’re either using source control for all your work, or feeling a nagging guilt that you should be. Be it Git, Mercurial, Subversion (name your poison), a revision control system enables you to keep track of changes, revert anything that breaks, and keep rolling backups of your project. The benefits only start there, and Charlie Perrins recommends using source control “not just as a personal backup of your code, but as a way to play nicely with other developers.“ Noting the benefits when collaborating with other developers, he adds: Graduating from being the sole architect of your codebase to contributing to a shared codebase is a huge leap for a developer. Perhaps a practical way for people who tend to work on their own to do that would be to submit a pull request or a patch to an open source project or plugin.” Richard Rutter of Clearleft sees clear advantages for the client, too. He recommends using source control “preferably in some sort of collaborative environment that you can open up or hand over to the client” – a feature found with hosted services such as GitHub. If you’d like to hone your Git skills, Emma Jane Westby wrote Git for Grown-ups in last year’s 24 ways. Don’t repeat, automate! 
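To make that heading concrete before we hear the arguments for it, here is the kind of small build configuration being talked about, sketched with Grunt (which comes up below). The plugin names are common community plugins rather than anything prescribed here, and the paths are invented; treat it as a starting point, not a recommended setup.

    // Gruntfile.js: compress images and minify CSS and JavaScript on every build.
    module.exports = function (grunt) {
      grunt.initConfig({
        uglify: {
          build: { files: { 'dist/js/site.min.js': ['src/js/**/*.js'] } }
        },
        cssmin: {
          build: { files: { 'dist/css/site.min.css': ['src/css/**/*.css'] } }
        },
        imagemin: {
          build: {
            files: [{ expand: true, cwd: 'src/img', src: ['**/*.{png,jpg,gif}'], dest: 'dist/img' }]
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.loadNpmTasks('grunt-contrib-cssmin');
      grunt.loadNpmTasks('grunt-contrib-imagemin');

      grunt.registerTask('default', ['imagemin', 'cssmin', 'uglify']);
    };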
Tim Kadlec is a big proponent of automating your build process: I’ve been hammering that home to every client I’ve had this year. It’s amazing how many companies don’t really have a formal build/deployment process in place. So many issues on the web (performance, accessibility, etc.) can be greatly improved just by having a layer of automation involved. For example, graphic editing software spits out ridiculously bloated images. Very frequently, that’s what ends up getting put on a site. If you have a build process, you can have the compression automated and start seeing immediate gains for no effort. On a recent project, they were able to shave around 1.5MB from their site weight simply by automating compression. Once you have your code in source control, some of that automation can be made easier. Brian Suda writes: We have a few bash scripts that run on git commit: they compile the less, jslint and remove white-space, basically the 3 Cs, Compress, Concatenate, Combine. This is now part of our workflow without even realising it. One great way to get started with a build process is to use a tool like Grunt, and a great way to get started with Grunt is to read Chris Coyier’s Grunt for People Who Think Things Like Grunt are Weird and Hard. Tim reinforces: Issues like [image compression] — or simple accessibility issues like alt tags on images — should never be able to hit a live server. If you can detect it, you can automate it. And if you can automate it, you can free up time for designers and developers to focus on more challenging — and interesting — problems. A clear call to arms to tighten up and formalise development and deployment practices. The less that has to be done manually or is susceptible to change, the less that can go wrong when a site is built and deployed. Any procedures that are automated are no longer dependant on a single person’s knowledge, making it easier to build your team or just cope when someone important is out of the office or leaves. If you’re interested in kicking the FTP habit and automating your site deployments, we have an article later in the series just for you. Build systems, not sites One big theme arising this year was that of building websites as systems, not as individual pages. Brad Frost: For me, teams making websites in 2015 shouldn’t be working on just-another-redesign redesign. People are realizing that in order to make stable, future-friendly, scalable, extensible web experiences they’re going to need to think more systematically. That means crafting deliberate and thoughtful design systems. That means establishing front-end style guides. That means killing the out-dated, siloed, assembly-line waterfall process and getting cross-disciplinary teams working together in meaningful ways. That means treating development as design. That means treating performance as design. That means taking the time out of the day to establish the big picture, rather than aimlessly crawling along quarter by quarter. Designer and developer Jina Bolton also advocates the use of style guides, and recommends making the guide a project deliverable: Consider adding on a style guide/UI library to your project as a deliverable for maintainability and thinking through all UI elements and components. Val Head agrees: “build and maintain a style guide for each project” she wrote. On the subject of approaching a redesign, she added: A UI inventory goes a long way to helping get your head around what a design system needs in the early stages of a redesign project. 
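Before moving on to responsive design, here is a tiny illustration of Tim's earlier point that anything you can detect, you can automate. This hypothetical Node script scans an HTML file for images without alt attributes and exits with an error, so it could sit in a build step or pre-commit hook; a real project would want a proper HTML parser rather than a regular expression.

    // check-alt.js: run with "node check-alt.js path/to/page.html"
    var fs = require('fs');

    var html = fs.readFileSync(process.argv[2] || 'index.html', 'utf8');
    var images = html.match(/<img\b[^>]*>/gi) || [];
    var missingAlt = images.filter(function (tag) {
      return !/\balt\s*=/i.test(tag);
    });

    if (missingAlt.length > 0) {
      console.error(missingAlt.length + ' image(s) missing alt attributes:');
      missingAlt.forEach(function (tag) { console.error('  ' + tag); });
      process.exit(1); // fail the build so the problem never reaches the live server
    }

    console.log('All images have alt attributes.');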
So what about that old chestnut, responsive web design? Should we be making sites responsive by default? How about mobile first? Richard Rutter: Think mobile first unless you have a very good reason not to. Remember to take the client with you on this principle, otherwise it won’t work as a convincing piece of design. Trent Walton adds: The more you can test and sort of skew your perception for what is typical on the web, the better. 4k displays hooked up to 100Mbps connections can make one extremely unsympathetic. The value of testing with real devices is something Ruth John appreciates. She wrote: I still have my own small device lab at home, even though I work permanently for a well-established company (which has a LOT of devices at its disposal) – it just means I can get a good overview of how things are looking during development. And speaking of systems, Mark Norman Francis recommends the use of measuring tools to aid the design process; “[U]se analytics and make decisions from actual data” he suggests, rather than relying totally on intuition. Tim Kadlec adds a word on performance planning: I think having a performance budget in place should now be a given on any project. We’ve proven pretty conclusively through a hundred and one case studies that performance matters. And over the last year or so, we’ve really seen a lot of great tools emerge to help track and enforce performance budgets. There’s not really a good excuse for not using one any more. It’s clear that in the four years since Ethan Marcotte’s Responsive Web Design article the diversity of screen sizes, network connection speeds and input methods has only increased. New web projects should presume visitors will be using anything from a watch up to a big screen desktop display, and from being offline, through to GPRS, 3G and fast broadband. Will it take more time to design and build for those constraints? Yes, it most likely will. If Internet Explorer is brave enough to ask to be your default browser, you can be brave enough to tell your client they need to build responsively. Working collaboratively A big part of delivering a successful website project is how we work together, both as a design team and a wider project team with the client. Val Head recommends an open line of communication: Keep conversations going. With clients, with teammates. Talking is so important with the way we work now. A good team conversation place, like Slack, is slowly becoming invaluable for me too. Ruth John agrees: We’ve recently opened up our lines of communication by using Slack. It has transformed the way we work. We’re easily more productive and collaborative on projects, as well as making it a lot easier for us all to work remotely (including freelancers). She goes on to point out how tools can be combined to ease team communication without adding further complications: We have a private GitHub organisation (which everyone who works with us is granted access to), which not only holds all our project code but also a team wiki. This has lots of information to get you set up within the team, as well as coding guidelines and best practices and other admin info, like contact numbers/emails for the team. Small-A agile is also the theme of the day, with Mark Norman Francis suggesting an approach of “small iterations with constant feedback around individual features, not spec-it-all-first”. 
He also encourages you to review as you go, at each stage of the project: Always reflect on what went well and what went badly, and how you can learn from that, even if not Doing Agile™. Ultimately “best practices” should come from learning lessons (both good and bad). Richard Rutter echoes this, warning against working in isolation from the client for too long: Avoid big reveals. Your engagement with the client should be participatory. In business no one likes surprises. This experience rings true for Ruth John who recommends involving real users in the feedback loop, not just the client: We also try and get feedback on what we’re building as soon and as often as we can with our stakeholders/clients and real users. We should also remember that our role is to serve the client’s needs, not just bill them for whatever we can. Brian Suda adds: Don’t sell clients on things they don’t need. We can spout a lot of jargon and scare clients into thinking you are a god. We can do things few can now, but you can’t rip people off because they are unknowledgeable. But do clients know what they’re getting, even when they see it? Trent Walton has an interesting take: We focus on prototypes over image-based comps at all costs, especially when meetings are involved. It’s much easier to assess a prototype, and too often with image-based comps, discussions devolve into how something might feel when actually live, or how a layout could change to fit a given viewport. Val Head also likes to get work into the browser: Sketch design ideas with any software you like, but get to the browser as soon as possible. Beyond your immediate team, Emma Jane Westby has advice for looking further afield: Invest time into building relationships within your (technical) community. You never know when you might be able to lend a hand; or benefit from someone who’s able to lend theirs. And when things don’t go according to plan, Brian Suda has the following advice: If something doesn’t work out, be professional and don’t burn bridges. It will always come back to you. The best work comes from working collaboratively, not just as a team within an agency or department, but with the client and stakeholders too. If doing your job and chucking it over the fence ever worked, it certainly doesn’t fly any more. You can work in isolation, but doing really great work requires collaboration. The business end When you’re building sites professionally, every team member has to think about the business aspects. Estimating time, setting billing rates, and establishing deliverables are all part of the job. In 2008, Andrew Clarke gave us the Contract Killer sample contract we could use to establish a working agreement for a web design project. Richard Rutter agrees that contracts are still an essential part of business: They are there for both parties’ protection. Make sure you know what will happen if you decide you don’t want to work with the client any more (it happens) and, of course, what circumstances mean they can stop taking your services. Having a contract is one thing, but does it adequately protect both you and the client? Emma Jane Westby adds: Find a good IP lawyer/legal counsel. I routinely had an IP lawyer read all of my contracts to find loopholes I wouldn’t have noticed. I didn’t always change the contract, but at least I knew what might come back to bite me. So, you have a contract in place, and know what the project is. 
Brian Suda recommends keeping track of time and making sure you bill fairly for the hours the project costs you: If I go to a meeting and they are 15 minutes late, the billing clock has already started. They can’t expect me to be in the 1h meeting and not bill for the extra 15–30 minutes they wasted. It goes both ways too. You need to do your best to respect their deadlines and time frame – this is always hard to get right. As ever, it’s good business to do good business. Perhaps we can at last shed the old image of web designers being snowboarding layabouts and demonstrate to clients that we care as much about conducting professional business as they do. Time to review It’s a lot to take in. Some of these ideas and practices will be familiar, others new and yet to be evaluated. The web moves at a fast pace, and we need to be constantly reexamining our tools, techniques and working practices. The most important thing is not to blindly adopt any and all suggestions, but to carefully look at what the benefits might be and decide how they apply to your work. Could you benefit from more formalised development and deployment procedures? Would your design projects run more smoothly and have a longer maintainable life if you approached the solution as a componentised system rather than a series of pages? Are your teams structured in a way that enables the most fluid communication, or are there changes you could make? Are your billing procedures and business agreements serving you and your clients in the best way possible? The new year is a good time to look at your working practices and see what can be improved, and maybe this time next year you’ll look back and think “thank goodness we don’t work like that any more”.",2014,Drew McLellan,drewmclellan,2014-12-01T00:00:00+00:00,https://24ways.org/2014/what-it-takes-to-build-a-website/,business 30,"Making Sites More Responsive, Responsibly","With digital projects we’re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations. A product’s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities. The popularisation of responsive web design is a great example of how we are able to shape the web’s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn’t know or care about the difference in these approaches, but we did. Responsive design was similar in this respect – momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.  We tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text. The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens. 
When we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more responsive to people’s needs, enhancing their usability and accessibility along the way. Being responsibly responsive Of course, when we think about being more responsive, there’s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that: A responsible responsive design equally considers the following throughout a project: Usability: The way a website’s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions. Access: The ability for users of all devices, browsers, and assistive technologies to access and understand a site’s features and content. Sustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future. Performance: The speed at which a site’s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface. Scott’s book covers these ideas in a lot more detail than I’ll be able to here (put it on your Christmas list if it’s not there already), but for now let’s think a bit more about our roles as digital creators and the power this gives us. Our choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies. The power of the web We all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions. You’ll have seen articles with flashy titles such as “Top 5 JavaScript APIs You’ve Never Heard Of!”, which you’ll probably read, think “That’s quite cool”, yet never use in any real work. There is great potential for technologies like these to be misused, but there are also great prospects for them to be used well to enhance experiences. Let’s have a look at a few examples you may not have considered. Offline first When we make websites, many of us follow a process which involves user stories – standardised snippets of context explaining who needs what, and why. “As a student I want to pay online for my course so I don’t have to visit the college in person.” “As a retailer I want to generate unique product codes so I can manage my stock.” We very often focus heavily on what needs doing, but may not consider carefully how it will be done. As in Scott’s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense – including under different conditions. Offline first is yet another ‘first’ methodology (my personal favourite being ‘tea first’), which encourages us to develop so that connectivity itself is an enhancement – letting users continue with tasks even when they’re offline. 
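As a rough illustration of what treating connectivity as an enhancement can look like in practice – and of the conference-site scenario described below – here is a minimal service worker sketch. It isn't production code, and the cache name and file paths are invented for the example; swap in whatever your site actually needs:

// sw.js – a rough sketch of a cache-first service worker
var CACHE = 'conference-site-v1';

self.addEventListener('install', function(event) {
  // On the first visit, cache the shell: the schedule, the CSS,
  // and a fallback page for when requests fail
  event.waitUntil(
    caches.open(CACHE).then(function(cache) {
      return cache.addAll([
        '/',
        '/schedule.html',
        '/css/site.css',
        '/offline.html'
      ]);
    })
  );
});

self.addEventListener('fetch', function(event) {
  // Serve from the cache when we can, fall back to the network,
  // and show the pre-cached fallback page if both let us down
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      return cached || fetch(event.request).catch(function() {
        return caches.match('/offline.html');
      });
    })
  );
});

The page itself would register this with navigator.serviceWorker.register('/sw.js'), in browsers that support it.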
Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK’s signal-barren East Anglian wilderness as I do), then you’ll realise that connectivity isn’t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I’m sure we’re all familiar with – the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone’s likely to be offline after all. Wouldn’t it be better if we could do something like this instead? Someone visits our conference website. On this initial run, some assets may be cached for future use: the conference schedule, the site’s CSS, photos of the speakers. When the attendee revisits the site on the day, the page shell loads up from the cache. If we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use. If we don’t have something cached already, then we can try grabbing it online. If for any reason our requests for new content fail (we’re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is cached. There are a number of ways we can make something like this, including using the application cache (AppCache) if you’re that way inclined. However, you may want to look into service workers instead. There are also some great resources on Offline First! if you’d like to find out more about this. Building in offline functionality isn’t necessarily about starting offline first, and it’s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we’d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott’s criteria. As I mentioned, this isn’t necessarily the kind of development choice that our clients will ask us for, but it’s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation. Even more accessible We’ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices? Again using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen. Input Ambient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. 
It’s not new – many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels. If our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in units using ambient light events, in JavaScript. We may then change our presentation based on different bandings, perhaps like this: window.addEventListener('devicelight', function(e) { var lux = e.value; if (lux < 50) { //Change things for dim light } if (lux >= 50 && lux <= 10000) { //Change things for normal light } if (lux > 10000) { //Change things for bright light } }); Live demo (requires light sensor and supported browser). Soon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it’ll probably look something like this: @media (light-level: dim) { /*Change things for dim light*/ } @media (light-level: normal) { /*Change things for normal light*/ } @media (light-level: washed) { /*Change things for bright light*/ } While we may be quick to dismiss this kind of detection as being a gimmick, it’s important to consider that apps such as Light Detector, listed on Apple’s accessibility page, provide important context around exactly this functionality. “If you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. You can find out whether the shades are drawn by moving the device up and down.” everywaretechnologies.com/apps/lightdetector Input can be about so much more than what we enter through keyboards. Both an ever increasing amount of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft. Output Web technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users. When we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we’re seeing and hearing. Haptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here’s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds. 
navigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]); Live demo (requires supported browser) With great power… What you’ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. This idea isn’t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kind of approaches within our projects – if we don’t root for them, they probably won’t happen. This is where our professional responsibility comes in. We won’t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we’ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let’s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations. When you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible.",2014,Sally Jenkinson,sallyjenkinson,2014-12-10T00:00:00+00:00,https://24ways.org/2014/making-sites-more-responsive-responsibly/,code 31,Dealing with Emergencies in Git,"The stockings were hung by the chimney with care, In hopes that version control soon would be there. This summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I’ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information. Let’s start by painting an updated version of Clement Clarke Moore’s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen. Perhaps a few of these images are familiar, or maybe they’re just settings you’ve seen in a movie. It doesn’t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: ‘What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?’ It’s okay if the map isn’t perfect. As you refine your understanding of a new topic, you’ll outgrow the initial metaphors, analogies, and comparisons. With apologies to Mr. Moore, let’s give it a try. Getting Interrupted in Git When on the roof there arose such a clatter! You’re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you’ve been working on is going to need to wait while you investigate the commotion. 
If you’ve got even a little bit of experience working with Git, you know that you cannot simply change what you’re working on in times of emergency. If you’ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state. Up to this point, you’ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of ‘switching branches for a sec’. This isn’t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren’t finished with what you were working on. You don’t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren’t putting your book away though, you’re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down. Let’s say you’ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof: $ git stash After running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work. With the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it’s not reindeer after all, but just your boss who thought they’d help out by writing some code on the project you’ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time. You fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy: $ git fetch $ git branch -r $ git checkout -b helpful-boss-branch origin/helpful-boss-branch You are now in a local copy of the branch where you are free to look around, and figure out exactly what’s going on. You sigh audibly and say, ‘Okay. Tell me what was happening when you first realised you’d gotten into a mess’ as you look through the log messages for the branch. $ git log --oneline $ git log By using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof. You may also want to compare the work your boss has done to the main branch for your project. For this article, we’ll assume the main branch is named master. $ git diff master Looking through the commits, you may be able to see that things started out okay but then took a turn for the worse. Checking out a single commit Using commands you’re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch. Using the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let’s say the unique identifier you want to checkout is 25f6d7f. $ git checkout 25f6d7f Note: checking out '25f6d7f'. You are in 'detached HEAD' state. 
You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: $ git checkout -b new_branch_name HEAD is now at 25f6d7f... Removed first paragraph. This is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic. Take a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you’ve temporarily disconnected from a known chain of events. In other words, you’re currently looking at the middle of a story (or branch) about what happened – and you’re not at the endpoint for this particular story. Git allows you to view the history of your repository as a timeline (technically it’s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you’re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work. $ git checkout master Warning: you are leaving 1 commit behind, not connected to any of your branches: 7a85788 Your witty holiday commit message. If you want to keep them by creating a new branch, this may be a good time to do so with: $ git branch new_branch_name 7a85788 Switched to branch 'master' Your branch is up-to-date with 'origin/master'. So, if you want to save the commits you’ve made while in a detached HEAD state, you simply need to put them on a new branch. $ git branch saved-headless-commits 7a85788 With this trick under your belt, you can jingle around in history as much as you’d like. It’s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you’ll need to move back to the branch tip by checking out the branch again. $ git checkout helpful-boss-branch You’re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you’ve followed the instructions Git gave you. That wasn’t so scary after all, now, was it? Back to our reindeer problem. If your boss is anything like the bosses I’ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you’ll want to capture the good work using one of several different methods. Back in the living room, we’ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work. Saving just one commit About that bowl of nuts. If you’re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we’re just going to pick out one nut from the bowl. In Git terms, we’re going to cherry-pick a commit and save it to another branch. First, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into.
$ git checkout master $ git checkout -b rescue-the-boss From your boss’s branch, helpful-boss-branch locate the commit you want to keep. $ git log --oneline helpful-boss-branch Let’s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch. $ git cherry-pick e08740b If you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss’s branch. At this point you might need to make a few additional fixes to help your boss out. (You’re angling for a bonus out of all this. Go the extra mile.) Once you’ve made your additional changes, you’ll need to add that work to the branch as well. $ git add [filename(s)] $ git commit -m ""Building on boss's work to improve feature X."" Go ahead and test everything, and make sure it’s perfect. You don’t want to introduce your own mistakes during the rescue mission! Uploading the fixed branch The next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping you fix their branch. $ git push -u origin rescue-the-boss Cleaning up and getting back to work With your boss rescued, and your bonus secured, you can now delete the local temporary branches. $ git branch --delete rescue-the-boss $ git branch --delete helpful-boss-branch And settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat. $ git checkout waiting-for-st-nicholas $ git stash pop Your working directory has been returned to exactly the same state you were in at the beginning of the article. Having fun with analogies I’ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it’s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it’s like trying to find that one broken light on the string of Christmas lights). It doesn’t matter if the analogy isn’t perfect. It’s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner’s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works. Or, if you’re like me, you can choose to never grow old and just keep mucking about in the analogies. I’d argue it’s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.",2014,Emma Jane Westby,emmajanewestby,2014-12-02T00:00:00+00:00,https://24ways.org/2014/dealing-with-emergencies-in-git/,code 32,Cohesive UX,"With Yosemite, Apple users can answer iPhone calls on their MacBooks. This is weird. And yet it’s representative of a greater trend toward cohesion. Shortly after upgrading to Yosemite, a call came in on my iPhone and my MacBook “rang” in parallel. And I was all, like, “Wut?” This was a new feature in Yosemite, and honestly it was a little bizarre at first. Apple promotional image showing a phone call ringing simultaneously on multiple devices. However, I had just spoken at a conference on the very topic you’re reading about now, and therefore I appreciated the underlying concept: the cohesion of user experience, the cohesion of screens. 
This is just one of many examples I’ve encountered since beginning to speak about this topic months ago. But before we get ahead of ourselves, let’s look back at the past few years, specifically the role of responsive web design. RWD != cohesive experience I needn’t expound on the virtues of responsive web design (RWD). You’ve likely already encountered more than a career’s worth on the topic. This is a good thing. Count me in as one of its biggest fans. However, if we are to sing the praises of RWD, we must also acknowledge its shortcomings. One of these is that RWD ends where the browser ends. For all its goodness, RWD really has no bearing on native apps or any other experiences that take place outside the browser. This makes it challenging, therefore, to create cohesion for multi-screen users if RWD is the only response to “let’s make it work everywhere.” We need something that incorporates the spirit of RWD while unifying all touchpoints for the entire user experience—single device or several devices, in browser or sans browser, native app or otherwise. I call this cohesive UX, and I believe it’s the next era of successful user experiences. Toward a unified whole Simply put, the goal of cohesive UX is to deliver a consistent, unified user experience regardless of where the experience begins, continues, and ends. Two facets are vital to cohesive UX: Function and form Data symmetry Let’s examine each of these. Function AND form Function over form, of course. Right? Not so fast, kiddo. Consider Bruce Lawson’s dad. After receiving an Android phone for Christmas and thumbing through his favorite sites, he was puzzled why some looked different from their counterparts on the desktop. “When a site looked radically different,” Bruce observed, “he’d check the URL bar to ensure that he’d typed in the right address. In short, he found RWD to be confusing and it meant he didn’t trust the site.” A lack of cohesive form led to a jarring experience for Bruce’s dad. Now, if I appear to be suggesting websites must look the same in every browser—you already learned they needn’t—know that I recognize the importance of context, especially in regards to mobile. I made a case for this more than seven years ago. Rather, cohesive UX suggests that form deserves the same respect as function when crafting user experiences that span multiple screens or devices. And users are increasingly comfortable traversing media. For example, more than 40% of adults in the U.S. owning more than one device start an activity on one screen and finish it on another, according to a study commissioned by Facebook. I suspect that percentage will only increase in 2015, and I suspect the tech-affluent readers of 24 ways are among the 40%. There are countless examples of cohesive form and function. Consider Gmail, which displays email conversations visually as a stack that can be expanded and collapsed like the bellows of an accordion. This visual metaphor has been consistent in virtually any instance of Gmail—website or app—since at least 2007 when I captured this screenshot on my Nokia 6680: Screenshot captured while authoring Mobile Web Design (2007). Back then we didn’t call this an app, but rather a ‘smart client’. When the holistic experience is cohesive as it is with Gmail, users’ mental models and even muscle memory are preserved.1 Functionality and aesthetics align with the expectations users have for how things should function and what they should look like. In other words, the experience is roughly the same across screens. 
But don’t be ridiculous, peoples. Note that I said “roughly.” It’s important to avoid mindless replication of aesthetics and functionality for the sake of cohesion. Again, the goal is a unified whole, not a carbon copy. Affordances and concessions should be made as context and intuition require. For example, while Facebook users are accustomed to top-aligned navigation in the browser, they encounter bottom-aligned navigation in the iOS app as justified by user testing: The iOS app model has held up despite many attempts to better it: http://t.co/rSMSAqeh9m pic.twitter.com/mBp36lAEgc— Luke Wroblewski (@lukew) December 10, 2014 Despite the (rather minor) lack of consistency in navigation placement, other elements such as icons, labels, and color theme work in tandem to produce a unified, holistic whole. Data symmetry Data symmetry involves the repetition, continuity, or synchronicity of data across screens, devices, and platforms. As regards cohesive UX, data includes not just the material (such as an article you’re writing on Medium) but also the actions that can be performed on or with that material (such as Medium’s authoring tools). That is to say, “sync verbs, not just nouns” (Josh Clark). In my estimation, Amazon is an archetype of data symmetry, as is Rdio. When logged in, data is shared across virtually any device of any kind, irrespective of using a browser or native app. Add a product to your Amazon cart from your phone during the morning commute, and finish the transaction at work on your laptop. Easy peasy. Amazon’s aesthetics are crazy cohesive, to boot: Amazon web (left) and native app (right). With Rdio, not only are playlists and listening history synced across screens as you would expect, but the cohesion goes even further. Rdio’s remote control feature allows you to control music playing on one device using another device, all in real time. Rdio’s remote control feature, as viewed on my MacBook while music plays on my iMac. At my office I often work from my couch using my MacBook, but my speakers are connected to my iMac. When signed in to Rdio on both devices, my MacBook serves as proxy for controlling Rdio on my iMac, much the same as any Yosemite-enabled device can serve as proxy for an incoming iPhone call. Me, in my office. Note the iMac and speakers at far right. This is a brilliant example of cohesive design, and it’s executed entirely via the cloud. Things to consider Consider the following when crafting cohesive experiences: Inventory the elements that comprise your product experience, and cohesify them.2 Consider things such as copy, tone, typography, iconography, imagery, flow, placement, brand identification, account data, session data, user preferences, and so on. Then, create cohesion among these elements to the greatest extent possible, while adapting to context as needed. Store session data in the cloud rather than locally. For example, avoid using browser cookies to store shopping cart data, as cookies are specific to a single browser on a single device. Instead, store this data in the cloud so it can be accessed from other devices, as well as beyond the browser. Consider using web views when developing your native app. “You’re already using web apps in native wrappers without even noticing it,” Lukas Mathis contends. “The fact that nobody even notices, the fact that this isn’t a story, shows that, when it comes to user experience, web vs. native doesn’t matter anymore.” Web views essentially allow you to display HTML content inside a native wrapper. 
This can reduce the time and effort needed to make the overall experience cohesive. So whereas the navigation bar may be rendered by the app, for example, the remaining page display may be rendered via the web. There’s readily accessible documentation for using web views in C++, iOS, Android, and so forth. Nature is calling Returning to the example of Yosemite and sychronized phone calls, is it really that bizarre in light of cohesive UX? Perhaps at first. But I suspect that, over time, Yosemite’s cohesiveness — and the cohesiveness of other examples like the ones we’ve discussed here — will become not only more natural but more commonplace, too. 1 I browse Flipboard on my iPad nearly every morning as part of my breakfast routine. Swiping horizontally advances to the next page. Countless times I’ve done the same gesture in Flipboard for iPhone only to have it do nothing. This is because the gesture for advancing is vertical on phones. I’m so conditioned to the horizontal swipe that I often fail to make the switch to vertical swipe, and apparently others suffer from the same muscle memory, too. 2 Cohesify isn’t a thing. But chances are you understood what I meant. Yay neologism!",2014,Cameron Moll,cameronmoll,2014-12-24T00:00:00+00:00,https://24ways.org/2014/cohesive-ux/,ux 34,Collaborative Responsive Design Workflows,"Much has been written about workflow and designer-developer collaboration in web design, but many teams still struggle with this issue; either with how to adapt their internal workflow, or how to communicate the need for best practices like mobile first and progressive enhancement to their teams and clients. Christmas seems like a good time to have another look at what doesn’t work between us and how we can improve matters. Why is it so difficult? We’re still beginning to understand responsive design workflows, acknowledging the need to move away from static design tools and towards best practices in development. It’s not that we don’t want to change – so why is it so difficult? Changing the way we do something that has become routine is always problematic, even with small things, and the changes today’s web environment requires from web design and development teams are anything but small. Although developers also have a host of new skills to learn and things to consider, designers are probably the ones pushed furthest out of their comfort zones: as well as graphic design, a web designer today also needs an understanding of interaction design and ergonomics, because more and more websites are becoming tools rather than pages meant to be read like a book or magazine. In addition to that there are thousands of different devices and screen sizes on the market today that layout and interactions need to work on. These aspects make it impossible to design in a static design tool, so beyond having to learn about new aspects of design, the designer has to either learn how to code or learn to work with a responsive design tool. Why do it That alone is enough to leave anyone overwhelmed, as learning a new skill takes time and slows you down in a project – and on most projects time is in short supply. Yet we have to make time or fall behind in the industry as others pitch better, interactive designs. For an efficient workflow, both designers and developers must familiarise themselves with new tools and techniques. A designer has to be able to play with ideas, make small adjustments here and there, look at the result, go back to the settings and make further adjustments, and so on. 
You can only realistically do that if you are able to play with all the elements of a design, including interactivity, accessibility and responsiveness. Figuring out the right breakpoints in a layout is one of the foremost reasons for designing in a responsive design tool. Even if you create layouts for three viewport sizes (i.e. smartphone, tablet and the most common desktop size), you’d only cover around 30% of visitors and you might miss problems like line breaks and padding at other viewport sizes. Another advantage is consistency. In static design tools changes will not be applied across all your other layouts. A developer referring back to last week’s comps might work with outdated metrics. Furthermore, you cannot easily test what impact changes might have on previously designed areas. In a dynamic design tool such changes will be applied to the entire design and allow you to test things in site areas you had already finished. No static design tool allows you to do this, and having somebody else produce a mockup from your static designs or wireframes will duplicate work and is inefficient. How to do it When working in a responsive design tool rather than in the browser, there is still the question of how and when to communicate with the developer. I have found that working with Sass in combination with a visual style guide is very efficient, but it does need careful planning: fundamental metrics for padding, margins and font sizes, but also design elements like sliders, forms, tabs, buttons and navigational elements, should be defined at the beginning of a project and used consistently across the site. Working with a grid can help you develop a consistent design language across your site. Create a visual style guide that shows what the elements look like and how they behave across different screen sizes – and when interacted with. Put all metrics on paddings, margins, breakpoints, widths, colours and so on in a text document, ideally with names that your developer can use as Sass variables in the CSS. For example: $padding-default-vertical: 1.5em; Developers, too, need an efficient workflow to keep code maintainable and speed up the time needed for more complex interactions with an eye on accessibility and performance. CSS preprocessors like Sass allow you to work with variables and mixins for default rules, as well as style sheet partials for different site areas or design elements. Create your own boilerplate to use for your projects and then update your variables with the information from your designer for each individual project. How to get buy-in One obstacle when implementing responsive design, accessibility and content strategy is the logistics of learning new skills and iterating on your workflow. Another is how to sell it. You might expect everyone on a project (including the client) to want to design and develop the best website possible: ultimately, a great site will lead to more conversions. However, we often hear that people find it difficult to convince their teammates, bosses or clients to implement best practices. Why is that? Well, I believe a lot of it is down to how we sell it. You will have experienced this yourself: some people you trust to know what they are talking about, and others you don’t. Think about why you trust that first person but don’t buy what the other one is telling you. It is likely because person A has a self-assured, calm and assertive demeanour, while person B seems insecure and apologetic. To sell our ideas, we need to become person A! 
For a timid designer or developer suffering from imposter syndrome (like many of us do in this industry) that is a difficult task. So how can we become more confident in selling our expertise? Write We need to become experts. And I mean not just in writing great code or coming up with beautiful designs but at explaining why we’re doing what we’re doing. Why do you code this way or that? Why is this the best layout? Why does a website have to be accessible and responsive? Write about it. Putting your thoughts down on paper or screen is a really efficient way of getting your head around a topic and learning to make a case for something. You may even find that you come up with new ideas as you are writing, so you’ll become a better designer or developer along the way. Talk Then, talk about it. Start out in front of your team, then do a lightning talk at a web event near you, then a longer talk or workshop. Having to talk about a topic is going to help you put into spoken words the argument that you’ve previously put together in writing. Writing comes more easily when you’re starting out but we use a different register when writing than talking and you need to learn how to speak your case. Do the talk a couple of times and after each talk make adjustments where you found it didn’t work well. By this time, you are more than ready to make your case to the client. In fact, you’ve been ready since that first talk in front of your colleagues ;) Pitch Pitches used to be based on a presentation of static layouts for for three to five typical pages and three different designs. But if we want to sell interactivity, structure, usability, accessibility and responsiveness, we need to demonstrate these things and I believe that it can only do us good. I have seen a few pitches sitting in the client’s chair and static layouts are always sort of dull. What makes a website a website is the fact that I can interact with it and smooth interactions or animations add that extra sparkle. I can’t claim personal experience for this one but I’d be bold and go for only one design. One demo page matching the client’s corporate design but not any specific page for the final site. Include design elements like navigation, photography, typefaces, article layout (with real content), sliders, tabs, accordions, buttons, forms, tables (yes, tables) – everything you would include in a style tiles document, only interactive. Demonstrate how the elements behave when clicked, hovered and touched, and how they change across different screen sizes. You may even want to demonstrate accessibility features like tabbed navigation and screen reader use. Obviously, there are many approaches that will work in different situations but don’t give up on finding a process that works for you and that ultimately allows you to build delightful, accessible, responsive user experiences for the web. Make time to try new tools and techniques and don’t just work on them on the side – start using them on an actual project. It is only when we use a tool or process in the real world that we become true experts. 
Remember your driving lessons: once the instructor had explained how to operate the car, you were sent to practise driving on the road in actual traffic!",2014,Sibylle Weber,sibylleweber,2014-12-07T00:00:00+00:00,https://24ways.org/2014/collaborative-responsive-design-workflows/,process 37,"JavaScript Modules the ES6 Way","JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. This is something nearly all other languages come with out of the box, whether it be Ruby’s require, Python’s import, or any other language you’re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let’s be clear: it doesn’t mean the new module system in the upcoming version of JavaScript won’t be useful to you if you’re building smaller websites rather than the next Instagram. Thankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won’t be landing in browsers for a while yet – but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I’d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It’s much simpler than you might think. What is ES6? ECMAScript is a scripting language that is standardised by a standards organisation called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It’s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn’t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet. The ES6 module spec The ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I’ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it’s what I’ll focus on more. Module syntax The key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we’ll see how to do that in a minute. Second, modules have exports.
These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it. Another important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There’s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous. Named exports Modules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword: export function double(x) { return x + x; }; You can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces. var double = function(x) { return x + x; } export { double }; A module can then import the double function like so: import { double } from 'mymodule'; double(2); // 4 Again, curly braces are required around the variable you would like to import. It’s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension. The reason for those extra braces is that this syntax lets you export multiple variables: var double = function(x) { return x + x; } var square = function(x) { return x * x; } export { double, square } I personally prefer this syntax over the export function …, but only because it makes it much clearer to me what the module exports. Typically I will have my export {…} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting. A file importing both double and square can do so in just the way you’d expect: import { double, square } from 'mymodule'; double(2); // 4 square(3); // 9 With this approach you can’t easily import an entire module and all its methods. This is by design – it’s much better and you’re encouraged to import just the functions you need to use. Default exports Along with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so: export default function(x) { return x + x; } And that can be imported: import double from 'mymodule'; double(2); // 4 This time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you’d like. Default exports are not named, so you can import them as anything you like: import christmas from 'mymodule'; christmas(2); // 4 The above is entirely valid. Although it’s not something that is used too often, a module can have both named exports and a default export, if you wish. One of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that’s fine, and you shouldn’t change that to meet the preferences of those designing the spec. Programmatic API Along with the syntax above, there is also a new API being added to the language so you can programmatically import modules. 
It’s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user’s browser didn’t support a feature your app relied on. An example of doing this is: if(someFeatureNotSupported) { System.import('my-polyfill').then(function(myPolyFill) { // use the module from here }); } System.import will return a promise, which, if you’re not familiar, you can read about in this excellent article on HTMl5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import), is complete. This programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task. How to use it today It’s all well and good having this new syntax, but right now it won’t work in any browser – and it’s not likely to for a long time. Maybe in next year’s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we’re stuck with a bit of extra work. ES6 module transpiler One solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it – not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp. The advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should already have that set up already. If you don’t have any system in place for handling modules in the browser, using the transpiler doesn’t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn’t do anything to help you get that code running in the browser, but if you have that part sorted it’s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler. SystemJS Another solution is SystemJS. It’s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser). To load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn’t have an easy to find downloadable version. 
The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 transpiler. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you. All you need to do to get SystemJS running is to add a script element that loads SystemJS and its dependencies, and another that tells it to import your main application file – in our case, app.js. When you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn’t make a lot of requests on the live site. In development though, it makes for a really nice workflow. When working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application’s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file. SystemJS also provides a workflow for bundling your application together into one file. Conclusion ES6 modules may be at least six months to a year away (if not more) but that doesn’t mean they can’t be used today. Although there is an overhead to using them now – with the work required to set up SystemJS, the module transpiler, or another solution – that doesn’t mean it’s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You’ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution. If you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road. Further reading If you’d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources: ECMAScript 6 modules: the final syntax by Axel Rauschmayer Practical Workflows for ES6 Modules by Guy Bedford ECMAScript 6 resources for the curious JavaScripter by Addy Osmani Tracking ES6 support by Addy Osmani ES6 Tools List by Addy Osmani Using Grunt and the ES6 Module Transpiler by Thomas Boyt JavaScript Modules and Dependencies with jspm by myself Using ES6 Modules Today by Guy Bedford",2014,Jack Franklin,jackfranklin,2014-12-03T00:00:00+00:00,https://24ways.org/2014/javascript-modules-the-es6-way/,code 40,"Don’t Push Through the Pain","In 2004, I lost my web career. In a single day, it was gone. I was in too much pain to use a keyboard, a Wacom tablet (I couldn’t even click the pen), or a trackball.
Switching my mouse to use my left (non-dominant) hand only helped a bit; then that hand went, too. I tried all the easy-to-find equipment out there, except for expensive gizmos with foot pedals. I had tingling in my fingers—which, when I was away from the computer, would rhythmically move as if some other being controlled them. I worried about Parkinson’s because the movements were so dramatic. Pen on paper was painful. Finally, I discovered one day that I couldn’t even turn a doorknob. The only highlight was that I couldn’t dust, scrub, or vacuum. We were forced to hire someone to come in once a week for an hour to whip through the house. You can imagine my disappointment. My injuries had gradually slithered into my life without notice. I’d occasionally have sore elbows, or my wrist might ache for a day, or my shoulders feel tight. But nothing to keyboard home about. That’s the critical bit of news. One day, you’re pretty fine. The next day, you don’t have your job—or any job that requires the use of your hands and wrists. I had to walk away from the computer for over four months—and partially for several months more. That’s right: no income. If I hadn’t found a gifted massage therapist, the right book of stretches, the equipment I should have been using all along, and learned how to pay attention to my body—even just a little bit more—I quite possibly wouldn’t be writing this article today. I wouldn’t be writing anything, anywhere. Most of us have heard of (and even claimed to have read all of) Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience, who describes the state of flow—the place our minds go when we are fully engaged and in our element. This lovely state of highly focused activity is deeply satisfying, often creative, and quite familiar to many of us on the web who just can’t quit until the copy sings or the code is untangled or we get our highest score yet in Angry Birds. Our minds may enter that flow, but too often as our brains take flight, all else recedes. And we leave something very important behind. Our bodies. My body wasn’t made to make the same minute movements thousands of times a day, most days of the year, for decades, and neither was yours. The wear and tear sneaks up on you, especially if you’re the obsessive perfectionist that we all pretend not to be. Oh? You’re not obsessed? I wasn’t like this all the time, but I remember sitting across from my husband, eating dinner, and I didn’t hear a word he said. I’d left my brain upstairs in my office, where it was wrestling in a death match with the box model or, God help us all, IE 5.2. I was a writer, too, and I was having my first inkling that I was a content strategist. Work was exciting. I could sit up late, in the flow, fingers flying at warp speed. I could sit until those wretched birds outside mocked me with their damn, cheerful “Hurray, it’s morning!” songs. Suddenly, while, say, washing dishes, the one magical phrase that captured the essence of a voice or idea would pop up, and I would have mowed down small animals and toddlers to get to my computer and hammer out that website or article, to capture that thought before it escaped. Note my use of the word hammer. Sound at all familiar? But where was my body during my work? Jaw jutting forward to see the screen, feet oddly positioned—and then left in place like chunks of marble—back unsupported, fingers pounding the keys, wrists and arms permanently twisted in unnatural angles that we thought were natural. And clicking. 
Clicking, clicking, clicking that mouse. Thumbing tiny keyboards on phones. A lethal little gesture for tiny little tendons. Though I was fine from, say, 1997 to 2004, by the end of 2004 this behavior culminated in disaster. I had repetitive stress injuries, aka repetitive motion injuries. As the Apple site says, “A brief exposure to these conditions would not cause harm. But a prolonged exposure may, in some people, result in reduced ability to function.” I’ll say. I frantically turned to people on lists and forums. “Try a track ball.” Already did that. “Try a tablet.” Worse. One person wrote, “I still come here once in a while and can type a couple sentences, but I’ve permanently got thoracic outlet syndrome and I’ll never work again.” Oh, beauteous web, oh, long-distance friends, farewell. The Wrist Bone’s Connected to the Brain Bone That variation on the old song tells part of the story. Most people (and many of their physicians) believe that tingling fingers and aching wrists MUST be carpal tunnel syndrome. Nope. If your neck juts forward, it tenses and stays tense the entire time you work in that position. Remember how your muscles felt after holding a landline phone with your neck tilted to one side for a long client meeting? Regrettable. Tensing your shoulders because your chair’s not designed properly puts you at risk for thoracic outlet syndrome, a career-killer if ever there was one. The nerves and tendons in your neck and shoulder refer down your arms, and muscles swell around nerves, causing pain and dysfunction. Your elbows have a tendon that is especially vulnerable to repetitive movements (think tennis elbow). Your wrists are performing something akin to a circus act with one thousand shows a day. So, all the fine tendons and ligaments in your fingers have problems that may not start at your wrists at all. Though some people truly do have carpal tunnel syndrome, my finger and wrist problems weren’t solved by heavily massaging my fingers (though, that was helpful, too) or my wrists. They were fixed by work on my neck, upper back, shoulders, arms, and elbows. This explains why many people have surgery for carpal tunnel syndrome and just months later say, “What?! How can I possibly have it again? I had an operation!” Well, fellow buckaroo, you may never have had carpal tunnel syndrome. You may have had—or perhaps will have—one long disaster area from your neck to your fingertips. How to Crawl Back Before trying extreme measures, you may be able to function again even if you feel hopeless. I managed to heal, and so have others, but I’ll always be at risk. As Jen Simmons, of The Web Ahead podcast and other projects, told me, “It took a long time to injure myself. It took a long time to get back to where I was. My right arm between my elbow and wrist would start aching intermittently. Eventually, my arm even ached at night. I started each day with yesterday’s pain.” Simple measures, used consistently, helped her back. 1. Massage therapy I don’t remember what the rest of the world is like, but in Portland, Oregon, we have more than one massage therapy college. (Of course we do.) I saw a former teacher at the most respected school. This is not your “It was all so soothing. Why, I fell asleep!” massage. This is “Holy crap, he’s grinding his elbow into my armpit!” massage therapy, with the emphasis on therapy. I owe him everything. Make sure you have someone who really knows what they’re doing. Get many referrals.
Try a question, “Does my psoas muscle affect my back?” If they can’t answer it, flee. Regularly see the one you choose and after a while, depending on how injured you are, you may be able to taper off. 2. Change your equipment You may need to be hands-on with several pieces of equipment before you find the ones that don’t cause more pain. Many companies have restocking fees, charges to ship the equipment you want to return, and other retail atrocities. Always be sure to ask what the return policies are at any company before purchasing. Mice You may have more success than I did with equipment such as the Wacom tablet. Mine came with a pen, and it hurt to repetitively click it. Trackballs are another option but, for many, they are better at prevention than recovery. But let’s get to the really effective stuff. One of the biggest sources of pain is using your mouse. One major reason is that your hand and wrist are in a perpetually unnatural position and you’re also moving your arm quite a bit. Each time you move the mouse, it is placing stress on your neck, shoulders and arms, because you need to lift them slightly in order to move the mouse and you need to angle your wrist. You may also be too injured to use the trackpad all the time, and this mouse, the vertical mouse is a dandy preventative measure, too. Shaking up your patterns is a wise move. I have long fingers, not especially thin, yet the small size works best for me. (They have larger choices available.) What?! A sideways mouse? Yep. All the weight of your hand will be resting on it in the handshake position. Your forearms aren’t constantly twisting over hill and dale. You aren’t using any muscles in your wrist or hand. They are relaxing. You’ll adapt in a day, and oh, oh, what a relief it is. Keyboards I really liked doing business with the people at Kinesis-Ergo. (I’m not affiliated with them in any way.) They have the vertical mouse and a number of keyboards. The one that felt the most natural to me, and, once again, it only takes a day to adapt, is the Freestyle2 for the Mac. They have several options. I kept the keyboard halves attached to each other at first, and then spread them apart a little more. I recommend choosing one that slants and can separate. You can adjust the angle. For a little extra, they’ll make sure it’s all set up and ready to go for you. I’m guessing that some Googling will find you similar equipment, wherever you live. Warning: if you use the ergonomic keyboards, you may have fewer USB ports. The laptop will be too far away to see unless you find a satisfactory setup using a stand. This is the perfect excuse for purchasing a humongous display. You may not look cool while jetting coast to coast in your skinny jeans and what appears to be the old-time orthopedic shoe version of computing gear. But once you have rested and used many of these suggestions consistently, you may be able to use your laptop or other device in all its lovely sleekness during the trip. Other doohickies The Kinesis site and The Human Solution have a wide selection of ergonomic products: standing desks, ergonomically correct chairs, and, yes, even things with foot pedals. Explore! 3. Stop clicking, at least for a while Use keyboard shortcuts, but use them slowly. This is not the time to show off your skillz. You’ll be sort of like a recovering alcoholic, in that you’ll be a recovering repetitive stress survivor for the rest of your life, once you really injure yourself. Always be vigilant. 
There’s also a bit of software sold by The Human Solution and other places, and it was my salvation. It’s called the McNib for Macs, and the Nib for PCs. (I’ve only used the McNib.) It’s for click-free mousing. I found it tricky to use when writing markup and code, but you may become quite adept at it. A little rectangle pops up on your screen, you mouse over it and choose, let’s say, “Double-click.” Until you change that choice, if you mouse over a link or anything else, it will double-click it for you. All you do is glide your mouse around. Awkward for a day or two, but you’ll pick it up quickly. Though you can use it all day for work, even if you just use this for browsing LOLcats or Gary Vaynerchuk’s YouTube videos, it will help you by giving your fingers a sweet break. But here’s the sad news. The developer who invented this died a few years ago. (Yes, I used to speak to him on the phone.) While it is for sale, it isn’t compatible with Mac OS X Lion or anything subsequent. PowerPC strikes again. His site is still up. Demos for use with older software can be downloaded free at his old site, or at The Human Solution. Perhaps an enterprising developer can invent something that would provide this help, without interfering with patents. Rumor has it among ergonomic retailers (yes, I’m like a police dog sniffing my way to a criminal once I head down a trail) that his company was purchased by a company in China, with no update in sight. 4. Use built-in features That little microphone icon that comes up alongside the keyboard on your iPhone allows you to speak your message instead of incessantly thumbing it. I believe it works in any program that uses the keyboard. It’s not Siri. She’s for other things, like having a personal relationship with an inanimate object. Apple even has a good section on ergonomics. You think I’m intense about this subject? To improve your repetitive stress, Apple doesn’t want you to use oral contraceptives, alcohol, or tobacco, to which I say, “Have as much sex, bacon, and chocolate as possible to make up for it.” Apple’s info even has illustrations of things like a faucet dripping into what is labeled a bucket full of “TRAUMA.” Sounds like upgrading to Yosemite, but I digress. 5. Take breaks If it’s a game or other non-essential activity, take a break for a month. Fine, now that I’ve called games non-essential, I suppose you’ll all unfollow me on Twitter. 6. Whether you are sore or not, do stretches throughout the day This is a big one. Really big. The best book on the subject of repetitive stress injuries is Conquering Carpal Tunnel Syndrome and Other Repetitive Strain Injuries: A Self-Care Program by Sharon J. Butler. Don’t worry, most of it is illustrations. Pretend it’s a graphic novel. I’m notorious for never reading instructions, and who on earth reads the introduction of a book, unless they wrote it? I wrote a book a long time ago, and I bet my house, husband, and life savings that my own parents never read the intro. Well, I did read the intro to this book, and you should, too. Stretching correctly, in a way that doesn’t further hurt you, that keeps you flexible if you aren’t injured, that actually heals you, calls for precision. Read and you’ll see. The key is to stretch just until you start to feel the stretch, even if that’s merely a tiny movement. Don’t force anything past that point. Kindly nurse yourself back to health, or nurture your still-healthy body by stretching. 
Over the following days, weeks, months, you’ll be moving well past that initial stretch point. The book is brimming with examples. You only have to pick a few stretches, if this is too much to handle. Do it every single day. I can tell you some of the best ones for me, but it depends on the person. You’ll also discover in Butler’s book that areas that you think are the problem are sometimes actually adjacent to the muscle or tendon that is the source of the problem. Add a few stretches or two for that area, too. But please follow the instructions in the introduction. If you overdo it, or perform some other crazy-ass hijinks, as I would be tempted to do, I am not responsible for your outcome. I give you fair warning that I am not a healthcare provider. I’m just telling you as a friend, an untrained one, at that, who has been through this experience. 7. Follow good habits Develop habits like drinking lots of water (which helps with lactic acid buildup in muscles), looking away from the computer for twenty seconds every twenty to thirty minutes, eating right, and probably doing everything else your mother told you to do. Maybe this is a good time to bring up flossing your teeth, and going outside to play instead of watching TV. As your mom would say, “It’s a beautiful day outside, what are you kids doing in here?” 8. Speak instead of writing, if you can Amber Simmons, who is very smart and funny, once tweeted in front of the whole world that, “@carywood is a Skype whore.” I was always asking people on Twitter if we could Skype instead of using iChat or exchanging emails. (I prefer the audio version so I don’t have to, you know, do something drastic like comb my hair.) Keyboarding is tough on hands, whether you notice it or not at the time, and when doing rapid-fire back-and-forthing with people, you tend to speed up your typing and not take any breaks. This is a hand-killer. Voice chats have made such a difference for me that I am still a rabid Skype whore. Wait, did I say that out loud? Speak your text or emails, using Dragon Dictate or other software. In about 2005, accessibility and user experience design expert, Derek Featherstone, in Canada, and I, at home, chatted over the internet, each of us using a different voice-to-text program. The programs made so many mistakes communicating with each other that we began that sort of endless, tearful laughing that makes you think someone may need to call an ambulance. This type of software has improved quite a bit over the years, thank goodness. Lack of accessibility of any kind isn’t funny to Derek or me or to anyone who can’t use the web without pain. 9. Watch your position For example, if you lift up your arms to use the computer, or stare down at your laptop, you’ll need to rearrange your equipment. The internet has a lot of information about ideal ergonomic work areas. Please use a keyboard drawer. Be sure to measure the height carefully so that even a tented keyboard, like the one I recommend, will fit. I also recommend getting the version of the Freestyle with palm supports. Just these two measures did much to help both Jen Simmons and me. 10. If you need to take anti-inflammatories, stop working If you are all drugged up on ibuprofen, and pounding and clicking like mad, your body will not know when you are tired or injuring yourself. I don’t recommend taking these while using your computing devices. Perhaps just take it at night, though I’m not a fan of that category of medications. Check with your healthcare provider. 
At least ibuprofen is an anti-inflammatory, which may help you. In contrast, acetaminophen (paracetamol) only makes your body think it’s not in pain. Ice is great, as is switching back and forth between ice and heat. But again, if you need ice and ibuprofen you really need to take a major break. 11. Don’t forget the rest of your body I’ve zeroed in on my personal area of knowledge and experience, but you may be setting yourself up for problems in other areas of your body. There’s what is known to bad writers as “a veritable cornucopia” of information on the web about how to help the rest of your body. A wee bit of research on the web and you’ll discover simple exercises and stretches for the rest of your potential catastrophic areas: your upper back, your lower back, your legs, ankles, and eyes. Do gentle stretches, three or four times a day, rather than powering your way through. Ease into new equipment such as standing desks. Stretch those newly challenged areas until your body adapts. Pay attention to your body, even though I too often forget mine. 12. Remember the children Kids are using equipment to play highly addictive games or to explore amazing software, and if these call for repetitive motions, children are being set up for future injuries. They’ll grab hold of something, as parents out there know, and play it 3,742 times. That afternoon. Perhaps by the time they are adults, everything will just be holograms and mind-reading, but adult fingers and hands are used for most things in life, not just computing devices and phones with keyboards sized for baby chipmunks. I’ll be watching you Quickly now, while I (possibly) have your attention. Don’t move a muscle. Is your neck tense? Are you unconsciously lifting your shoulders up? How long since you stopped staring at the screen? How bright is your screen? Are you slumping (c’mon now, ‘fess up) and inviting sciatica problems? Do you have to turn your hands at an angle relative to your wrist in order to type? Uh-oh. That’s a bad one. Your hands, wrists, and forearms should be one straight line while keyboarding. Future you is begging you to change your ways. Don’t let your #ThrowbackThursday in 2020 say, “Here’s a photo from when I used to be able to do so many wonderful things that I can’t do now.” And, whatever you do, don’t try for even a nanosecond to push through the pain, or the next thing you know, you’ll be an unpaid extra in The Expendables 7.",2014,Carolyn Wood,carolynwood,2014-12-06T00:00:00+00:00,https://24ways.org/2014/dont-push-through-the-pain/,business 41,What Is Vagrant and Why Should I Care?,"If you run a web server, a database server and your scripting language(s) of choice on your main machine and you have not yet switched to using virtualisation in your workflow then this essay may be of some value to you. I know you exist because I bump into you daily: freelancers coming in to work on our projects; internet friends complaining about reinstalling a development environment because of an operating system upgrade; fellow agency owners who struggle to brief external help when getting a particular project up and running; or even hardcore back-end developers who “don’t do ops” and prefer to run their development stack of choice locally. There are many perfectly reasonable arguments as to why you may not have already made the switch, from being simply too busy, all the way through to a distrust of the new. 
I’ll admit that there are many new technologies or workflows that I hear of daily and instantly disregard because I have tool overload, that feeling I get when I hear about a new shiny thing and think “Well, what I do now works – I’ll leave it for others to play with.” If that’s you when it comes to Vagrant then I hope you’ll hear me out. The business case is compelling enough for you to make that switch; as a bonus it’s also really easy to get going. In this article we’ll start off by going through the high level, the tools available and how it all fits together. Then we’ll touch on the justification for making the switch, providing a few use cases that might resonate with you. Finally, I’ll provide a very simple example that you can follow to get yourself up and running. What? You already know what virtualisation is. You use the ability to run an operating system within another operating system every day. Whether that’s Parallels or VMware on your laptop or similar server-based tools that drive the ‘cloud’, squeezing lots of machines on to physical hardware and making it really easy to copy servers and even clusters of servers from one place to another. It’s an amazing technology which has changed the face of the internet over the past fifteen years. Simply put, Vagrant makes it really easy to work with virtual machines. According to the Vagrant docs: If you’re a designer, Vagrant will automatically set everything up that is required for that web app in order for you to focus on doing what you do best: design. Once a developer configures Vagrant, you don’t need to worry about how to get that app running ever again. No more bothering other developers to help you fix your environment so you can test designs. Just check out the code, vagrant up, and start designing. While I’m not sure I agree with the implication that all designers would get others to do the configuring, I think you’ll agree that the “Just check out the code… and start designing” premise is very compelling. You don’t need Vagrant to develop your web applications on virtual machines. All you need is a virtualisation software package, something like VMware Workstation or VirtualBox, and some code. Download the half-gigabyte operating system image that you want and install it. Then download and configure the stack you’ll be working with: let’s say Apache, MySQL, PHP. Then install some libraries, CuRL and ImageMagick maybe, and finally configure the ability to easily copy files from your machine to the new virtual one, something like Samba, or install an FTP server. Once this is all done, copy the code over, import the database, configure Apache’s virtual host, restart and cross your fingers. If you’re a bit weird like me then the above is pretty easy to do and secretly quite fun. Indeed, the amount of traffic to one of my more popular blog posts proves that a lot of people have been building themselves development servers from scratch for some time (or at least trying to anyway), whether that’s on virtual or physical hardware. Or you could use Vagrant. It allows you, or someone else, to specify in plain text how the machine’s virtual hardware should be configured and what should be installed on it. It also makes it insanely easy to get the code on the server. You check out your project, type vagrant up and start work. Why? It’s worth labouring the point that Vagrant makes it really easy; I mean look-no-tangle-of-wires-or-using-vim-and-loads-of-annoying-command-line-stuff easy to run a development environment. 
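To put that in concrete terms, the day-to-day loop we’re talking about is just checking out the project and bringing the machine up – something like this, where the repository URL and folder name are placeholders for your own:
$ git clone https://example.com/acme/client-site.git
$ cd client-site
$ vagrant up
A few minutes later you have a running server with the site on it, without having installed or configured a single package by hand.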
That’s all well and good, I hear you say, but there’s a steep learning curve, an overhead to switch. You’re busy and this all sounds great but you need to get on; you’ve got a career to build or a business to run and you don’t have time to learn new stuff right now. In short, what’s the business case? The business case involves saved time, a very low barrier to entry and the ability to give the exact same environment to somebody else. Getting your first development virtual machine running will take minutes, not counting download time. Seriously, use pre-built Vagrant files and provisioners (we’ll touch on this below) and you can start developing immediately. Once you’ve finished developing you can check in your changes, ask a colleague or freelancer to check them out, and then they run the code on the exact same machine – even if they are on the other side of the world and regardless of whether they are on Windows, Linux or Apple OS X. The configuration to build the machine isn’t a huge binary disk image that’ll take ages to download from Git; it’s two small text files that can be version controlled too, so you can see any changes made to the config and roll back if needed. No more ‘It works for me’ reports; no ‘Oh, I was using PHP 5.3.3, not PHP 5.3.11’ – you’re both working on exact same copies of the development environment. With a tested and verified provisioning file you’ll have the confidence that when you brief your next freelancer in to your team there won’t be that painful to and fro of getting the system up and running, where you’re on a Skype call and they are uttering the immortal words, ‘It still doesn’t work’. You know it works because you can run it too. This portability becomes even more important when you’re working on larger sites and systems. Need a load balancer? Multiple front-end servers and a clustered database back-end? No problem. Add each server into the same Vagrant file and a single command will build all of them. As you’ll know if you work on larger, business critical systems, keeping the operating systems in sync is a real problem: one server with a slightly different library causing sporadic and hard to trace issues is a genuine time black hole. Well, the good news is that you can use the same provisioning files to keep test and production machines in sync using your current build workflow. Let’s also not forget the most simple use case: a single developer with multiple websites running on a single machine. If that’s you and you switch to using Vagrant-managed virtual machines then the next time you upgrade your operating system or do a fresh install there’s no chance that things will all stop working. The server config is all tucked away in version control with your code. Just pull it down and carry on coding. OK, got it. Show me already If you want to try this out you’ll need to install the latest VirtualBox and Vagrant for your platform. If you already have VMware Workstation or another supported virtualisation package installed you can use that instead but you may need to tweak my Vagrant file below. Depending on your operating system, a reboot might also be wise. Note: the commands below were executed on my MacBook, but should also work on Windows and Linux. If you’re using Windows make sure to run the command prompt as Administrator or it’ll fall over when trying to update the hosts file. 
As a quick sanity check let’s just make sure that we have the vagrant command in our path, so fire up a terminal and check the version number: $ vagrant -v Vagrant 1.6.5 We’ve one final thing to install and that’s the vagrant-hostsupdater plugin. Once again, in your terminal: $ vagrant plugin install vagrant-hostsupdater Installing the 'vagrant-hostsupdater' plugin. This can take a few minutes... Installed the plugin 'vagrant-hostsupdater (0.0.11)'! Hopefully that wasn’t too painful for you. There are two things that you need to manage a virtual machine with Vagrant: a Vagrant file: this tells Vagrant what hardware to spin up a provisioning file: this tells Vagrant what to do on the machine To save you copying and pasting I’ve supplied you with a simple example (ZIP) containing both of these. Unzip it somewhere sensible and in your terminal make sure you are inside the Vagrant folder: $ cd where/you/placed/it/24ways $ ls -l -rw-r--r--@ 1 bealers staff 11055 9 Nov 09:16 bealers-24ways.md -rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png drwxr-xr-x 5 bealers staff 170 8 Nov 22:54 vagrant $ cd vagrant/ $ ls -l -rw-r--r--@ 1 bealers staff 1661 8 Nov 21:50 Vagrantfile -rwxr-xr-x@ 1 bealers staff 3841 9 Nov 08:00 provision.sh The Vagrant file tells Vagrant how to configure the virtual hardware of your development machine. Skipping over some of the finer details, here’s what’s in that Vagrant file: www.vm.box = ""ubuntu/trusty64"" Use Ubuntu 14.04 for the VM’s OS. Vagrant will only download this once. If another project uses the same OS, Vagrant will use a cached version. www.vm.hostname = ""bealers-24ways.dev"" Set the machine’s hostname. If, like us, you’re using the vagrant-hostsupdater plugin, this will also get added to your hosts file, pointing to the virtual machine’s IP address. www.vm.provider :virtualbox do |vb| vb.customize [""modifyvm"", :id, ""--cpus"", ""2"" ] end Here’s an example of configuring the virtual machine’s hardware on the fly. In this case we want two virtual processors. Note: this is specific for the VirtualBox provider, but you could also have a section for VMware or other supported virtualisation software. www.vm.network ""private_network"", ip: ""192.168.13.37"" This specifies that we want a private networking link between your computer and the virtual machine. It’s probably best to use a reserved private subnet like 192.168.0.0/16 or 10.0.0.0/8 www.vm.synced_folder ""../"", ""/var/www/24ways"", owner: ""www-data"", group: ""www-data"" A particularly handy bit of Vagrant magic. This maps your local 24ways parent folder to /var/www/24ways on the virtual machine. This means the virtual machine already has direct access to your code and so do you. There’s no messy copying or synchronisation – just edit your files and immediately run them on the server. www.vm.provision :shell, :path => ""provision.sh"" This is where we specify the provisioner, the script that will be executed on the machine. If you open up the provisioner you’ll see it’s a bash script that does things like: install Apache, PHP, MySQL and related libraries configure the libraries: set permissions, enable logging create a database and grant some access rights set up some code for us to develop on; in this case, fire up a vanilla WordPress installation To get this all up and running you simply need to run Vagrant from within the vagrant folder: $ vagrant up You should now get a Matrix-like stream of stuff shooting up the screen. 
If this is the first time Vagrant has used this particular operating system image – remember we’ve specified the latest version of Ubuntu – it’ll download the disc image and cache it for future reuse. Then all the packages are downloaded and installed and finally all our configuration steps occur, including the download and configuration of WordPress. Halfway through proceedings it’s likely that the process will halt at a prompt something like this: ==> www: adding to (/etc/hosts) : 192.168.13.37 bealers-24ways.dev # VAGRANT: 2dbfbced1b1e79d2a0942728a0a57ece (www) / 899bd80d-4251-4f6f-91a0-d30f2d9918cc Password: You need to enter your password to give vagrant sudo rights to add the IP address and hostname mapping to your local hosts file. Once finished, fire up your browser and go to http://bealers-24ways.dev. You should see a default WordPress installation. The username for wp-admin is admin and the password is 24ways. If you take a look at your local filesystem the 24ways folder should now look like: $ cd ../ $ ls -l -rw-r--r--@ 1 bealers staff 13074 9 Nov 10:14 bealers-24ways.md drwxr-xr-x 21 bealers staff 714 9 Nov 10:06 code drwxr-xr-x 3 bealers staff 102 9 Nov 10:06 etc -rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png drwxr-xr-x 5 bealers staff 170 9 Nov 10:03 vagrant -rwxr-xr-x 1 bealers staff 1315849 9 Nov 10:06 wp-cli $ cd vagrant/ $ ls -l -rw-r--r--@ 1 bealers staff 1661 9 Nov 09:41 Vagrantfile -rwxr-xr-x@ 1 bealers staff 3836 9 Nov 10:06 provision.sh The code folder contains all the WordPress files. You can edit these directly and refresh that page to see your changes instantly. Staying in the vagrant folder, we’ll now SSH to the machine and have a quick poke around. $ vagrant ssh Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-39-generic x86_64) * Documentation: https://help.ubuntu.com/ System information as of Sun Nov 9 10:03:38 UTC 2014 System load: 1.35 Processes: 102 Usage of /: 2.7% of 39.34GB Users logged in: 0 Memory usage: 16% IP address for eth0: 10.0.2.15 Swap usage: 0% Graph this data and manage this system at: https://landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest: http://www.ubuntu.com/business/services/cloud 0 packages can be updated. 0 updates are security updates.
vagrant@bealers-24ways:~$ You’re now logged in as the Vagrant user; if you want to become root this is easy: vagrant@bealers-24ways:~$ sudo su - root@bealers-24ways:~# Or you could become the webserver user, which is a good idea if you’re editing the web files directly on the server: root@bealers-24ways:~# su - www-data www-data@bealers-24ways:~$ www-data’s home directory is /var/www so we should be able to see our magically mapped files: www-data@bealers-24ways:~$ ls -l total 4 drwxr-xr-x 1 www-data www-data 306 Nov 9 10:09 24ways drwxr-xr-x 2 root root 4096 Nov 9 10:05 html www-data@bealers-24ways:~$ cd 24ways/ www-data@bealers-24ways:~/24ways$ ls -l total 1420 -rw-r--r-- 1 www-data www-data 13682 Nov 9 10:19 bealers-24ways.md drwxr-xr-x 1 www-data www-data 714 Nov 9 10:06 code drwxr-xr-x 1 www-data www-data 102 Nov 9 10:06 etc -rw-r--r-- 1 www-data www-data 118152 Nov 9 10:08 it-works.png drwxr-xr-x 1 www-data www-data 170 Nov 9 10:03 vagrant -rwxr-xr-x 1 www-data www-data 1315849 Nov 9 10:06 wp-cli We can also see some of our bespoke configurations: www-data@bealers-24ways:~/24ways$ cat /etc/php5/mods-available/siftware.ini upload_max_filesize = 15M log_errors = On display_errors = On display_startup_errors = On error_log = /var/log/apache2/php.log memory_limit = 1024M date.timezone = Europe/London www-data@bealers-24ways:~/24ways$ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 43 Nov 9 10:06 bealers-24ways.dev.conf -> /var/www/24ways/etc/bealers-24ways.dev.conf If you want to leave the server, simply type Ctrl+D a few times and you’ll be back where you started. www-data@bealers-24ways:~/24ways$ logout root@bealers-24ways:~# logout vagrant@bealers-24ways:~$ logout Connection to 127.0.0.1 closed. $ You can now halt the machine: $ vagrant halt ==> www: Attempting graceful shutdown of VM... ==> www: Removing hosts Bonus level The example I’ve provided isn’t very realistic. In the real world I’d expect the Vagrant file and provisioner to be included with the project and for it not to create the directory structure, which should already exist in your project. The same goes for the Apache VirtualHost file. You’ll also probably have a default SQL script to populate the database. As you work with Vagrant you might start to find bash provisioning to be quite limiting, especially if you are working on larger projects which use more than one server. In that case I would suggest you take a look at Ansible, Puppet or Chef. We use Ansible because we like YAML but they all do the same sort of thing. The main benefit is being able to use the same Vagrant provisioning scripts to also provision test, staging and production environments using your build workflows. Having to supply a password so the hosts file can be updated gets annoying very quickly, so you can give Vagrant sudo rights: $ sudo visudo Add these lines to the bottom (Shift+G then i then Ctrl+V then Esc then :wq) Cmnd_Alias VAGRANT_HOSTS_ADD = /bin/sh -c echo ""*"" >> /etc/hosts Cmnd_Alias VAGRANT_HOSTS_REMOVE = /usr/bin/sed -i -e /*/ d /etc/hosts %staff ALL=(root) NOPASSWD: VAGRANT_HOSTS_ADD, VAGRANT_HOSTS_REMOVE Vagrant caches the operating system images that you download but it’ll download the installed software packages every time. You can get around this by using a plugin like vagrant-cachier or, if you’re really keen, maintain local Apt repositories (or whatever the equivalent is for your server architecture).
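Installing vagrant-cachier works the same way as the hosts updater plugin we added earlier:
$ vagrant plugin install vagrant-cachier
You then switch it on in your Vagrantfile – from memory the line is config.cache.scope = :box – but check the plugin’s own README for the current syntax before relying on it.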
At some point you might start getting a large number of virtual machines running on your poor hardware all at the same time, especially if you’re switching between projects a lot and each of those projects uses lots of servers. We’re just getting to that stage now, so are considering a medium-term move to a containerised option like Docker, which seems to be maturing now. If you are keen not to use any command line tools whatsoever and you’re on OS X then you could check out Vagrant Manager as it looks quite shiny. Finally, there are a huge number of resources to give you pre-built Vagrant machines from the likes of VVV for WordPress, something similar for Perch, PuPHPet for generating various configurations, and a long list of pre-built operating systems at VagrantBox.es. Wrapping up Hopefully you can now see why it might be worthwhile to add Vagrant to your development workflow, whether you’re an agency drafting in freelancers or a one-person band running lots of sites on your laptop using MAMP or something similar. Vagrant makes it easy to launch exact copies of the same machine in a repeatable and version controlled way. The learning curve isn’t too steep and, once configured, you can forget about it and focus on getting your work done.",2014,Darren Beale,darrenbeale,2014-12-05T00:00:00+00:00,https://24ways.org/2014/what-is-vagrant-and-why-should-i-care/,process 47,Developing Robust Deployment Procedures,"Once you have developed your site, how do you make it live on your web hosting? For many years the answer was to log on to your server and upload the files via FTP. Over time most hosts and FTP clients began to support SFTP, ensuring your files were transmitted over a secure connection. The process of deploying a site however remained the same. There are issues with deploying a site in this way. You are essentially transferring files one by one to the server without any real management of that transfer. If the transfer fails for some reason, you may end up with a site that is only half updated. It can then be really difficult to work out what hasn’t been replaced or added, especially where you are updating an existing site. If you are updating some third-party software your update may include files that should be removed, but that may not be obvious to you and you risk leaving outdated files littering your file system. Updating using (S)FTP is a fragile process that leaves you open to problems caused by both connectivity and human error. Is there a better way to do this? You’ll be glad to know that there is. A modern professional deployment workflow should have you moving away from fragile manual file transfers to deployments linked to code committed into source control. The benefits of good practice You may never have experienced any major issues while uploading files over FTP, and good FTP clients can help. However, there are other benefits to moving to modern deployment practices. No surprises when you launch If you are deploying in the way I suggest in this article you should have no surprises when you launch because the code you committed from your local environment should be the same code you deploy – and to staging if you have a staging server. A missing vital file won’t cause things to start throwing errors on updating the live site. Being able to work collaboratively Source control and good deployment practice make working with your clients and other developers easy. Deploying first to a staging server means you can show your client updates and then push them live.
If you subcontract some part of the work, you can give your subcontractor the ability to deploy to staging, leaving you with the final push to launch, once you know you are happy with the work. Having a proper backup of site files with access to them from anywhere The process I will outline requires the use of hosted, external source control. This gives you a backup of your latest commit and the ability to clone those files and start working on them from any machine, wherever you are. Being able to jump back into a site quickly when the client wants a few changes When doing client work it is common for some work to be handed over, then several months might go by without you needing to update the site. If you don’t have a good process in place, just getting back to work on it may take several hours for what could be only a few hours of work in itself. A solid method for getting your local copy up to date and deploying your changes live can cut that set-up time down to a few minutes. The tool chain In the rest of this article I assume that your current practice is to deploy your files over (S)FTP, using an FTP client. You would like to move to a more robust method of deployment, but without blowing apart your workflow and spending all Christmas trying to put it back together again. Therefore I’m selecting the most straightforward tools to get you from A to B. Source control Perhaps you already use some kind of source control for your sites. Today that is likely to be Git but you might also use Subversion or Mercurial. If you are not using any source control at all then I would suggest you choose Git, and that is what I will be working with in this article. When you work with Git, you always have a local repository. This is where your changes are committed. You also have the option to push those changes to a remote repository; for example, GitHub. You may well have come across GitHub as somewhere you can go to download open source code. However, you can also set up private repositories for sites whose code you don’t want to make publicly accessible. A hosted Git repository gives you somewhere to push your commits to and deploy from, so it’s a crucial part of our tool chain. A deployment service Once you have your files pushed to a remote repository, you then need a way to deploy them to your staging environment and live server. This is the job of a deployment service. This service will connect securely to your hosting, and either automatically (or on the click of a button) transfer files from your Git commit to the hosting server. If files need removing, the service should also do this too, so you can be absolutely sure that your various environments are the same. Tools to choose from What follows are not exhaustive lists, but any of these should allow you to deploy your sites without FTP. Hosted Git repositories GitHub Beanstalk Bitbucket Standalone deployment tools Deploy dploy.io FTPloy I’ve listed Beanstalk as a hosted Git repository, though it also includes a bundled deployment tool. Dploy.io is a standalone version of that tool just for deployment. In this tutorial I have chosen two separate services to show how everything fits together, and because you may already be using source control. If you are setting up all of this for the first time then using Beanstalk saves having two accounts – and I can personally recommend them. 
Putting it all together The steps we are going to work through are: Getting your local site into a local Git repository Pushing the files to a hosted repository Connecting a deployment tool to your web hosting Setting up a deployment Get your local site into a local Git repository Download and install Git for your operating system. Open up a Terminal window and tell Git your name using the following command (use the name you will set up on your hosted repository). > git config --global user.name ""YOUR NAME"" Use the next command to give Git your email address. This should be the address that you will use to sign up for your remote repository. > git config --global user.email ""YOUR EMAIL ADDRESS"" Staying in the command line, change to the directory where you keep your site files. If your files are in /Users/rachel/Sites/mynicewebsite you would type: > cd /Users/rachel/Sites/mynicewebsite The next command tells Git that we want to create a new Git repository here. > git init We then add our files: > git add . Then commit the files: > git commit -m “Adding initial files” The bit in quotes after -m is a message describing what you are doing with this commit. It’s important to add something useful here to remind yourself later why you made the changes included in the commit. Your local files are now in a Git repository! However, everything should be just the same as before in terms of working on the files or viewing them in a local web server. The only difference is that you can add and commit changes to this local repository. Want to know more about Git? There are some excellent resources in a range of formats here. Setting up a hosted Git repository I’m going to use Atlassian Bitbucket for my first example as they offer a free hosted and private repository. Create an account on Bitbucket. Then create a new empty repository and give it a name that will identify the repository easily. Click Getting Started and under Command Line select “I have an existing project”. This will give you a set of instructions to run on the command line. The first instruction is just to change into your working directory as we did before. We then add a remote repository, and run two commands to push everything up to Bitbucket. cd /path/to/my/repo git remote add origin https://myuser@bitbucket.org/myname/24ways-tutorial.git git push -u origin --all git push -u origin --tags When you run the push command you will be asked for the password that you set for Bitbucket. Having entered that, you should be able to view the files of your site on Bitbucket by selecting the navigation option Source in the sidebar. You will also be able to see commits. When we initially committed our files locally we added the message “Adding initial files”. If you select Commits from the sidebar you’ll see we have one commit, with the message we set locally. You can imagine how useful this becomes when you can look back and see why you made certain changes to a project that perhaps you haven’t worked on for six months. Before working on your site locally you should run: > git pull in your working directory to make sure you have all of the most up-to-date files. This is especially important if someone else might work on them, or you just use multiple machines.
You then make your changes and add any changed or modified files, for example: > git add index.php Commit the change locally: > git commit -m “updated the homepage” Then push it to Bitbucket: > git push origin master If you want to work on your files on a different computer you clone them using the following command: > git clone https://myuser@bitbucket.org/myname/24ways-tutorial.git You then have a copy of your files that is already a Git repository with the Bitbucket repository set up as a remote, so you are all ready to start work. Connecting a deployment tool to your repository and web hosting The next step is deploying files. I have chosen to use a deployment tool called Deploy as it has support for Bitbucket. It does have a monthly charge – but offers a free account for open source projects. Sign up for your account then log in and create your first project. Select Create an empty project. Under Configure Repository Details choose Bitbucket and enter your username and password. If Deploy can connect, it will show you your list of projects. Select the one you want. The next screen is Add New Server and here you need to configure the server that you want to deploy to. You might set up more than one server per project. In an ideal world you would deploy to a staging server for your client preview changes and then deploy once everything is signed off. For now I’ll assume you just want to set up your live site. Give the server a name; I usually use Production for the live web server. Then choose the protocol to connect with. Unless your host really does not support SFTP (which is pretty rare) I would choose that instead of FTP. You now add the same details your host gave you to log in with your SFTP client, including the username and password. The Path on server should be where your files are on the server. When you log in with an SFTP client and you get put in the directory above public_html then you should just be able to add public_html here. Once your server is configured you can deploy. Click Deploy now and choose the server you just set up. Then choose the last commit (which will probably be selected for you) and click Preview deployment. You will then get a preview of which files will change if you run the deployment: the files that will be added and any that will be removed. At the very top of that screen you should see the commit message you entered right back when you initially committed your files locally. If all looks good, run the deployment. You have taken the first steps to a more consistent and robust way of deploying your websites. It might seem like quite a few steps at first, but you will very soon come to realise how much easier deploying a live site is through this process. Your new procedure step by step Edit your files locally as before, testing them through a web server on your own computer. Commit your changes to your local Git repository. Push changes to the remote repository. Log into the deployment service. Hit the Deploy now button. Preview the changes. Run the deployment and then check your live site. Taking it further I have tried to keep things simple in this article because so often, once you start to improve processes, it is easy to get bogged down in all the possible complexities. If you move from deploying with an FTP client to working in the way I have outlined above, you’ve taken a great step forward in creating more robust processes. You can continue to improve your procedures from this point. 
Staging servers for client preview When we added our server we could have added an additional server to use as a staging server for clients to preview their site on. This is a great use of a cheap VPS server, for example. You can set each client up with a subdomain – clientname.yourcompany.com – and this becomes the place where they can view changes before you deploy them. In that case you might deploy to the staging server, let the client check it out and then go back and deploy the same commit to the live server. Using Git branches As you become more familiar with using Git, and especially if you start working with other people, you might need to start developing using branches. You can then have a staging branch that deploys to staging and a production branch that is always a snapshot of what has been pushed to production. This guide from Beanstalk explains how this works. Automatic deployment to staging I wouldn’t suggest doing automatic deployment to the live site. It’s worth having someone on hand hitting the button and checking that everything worked nicely. If you have configured a staging server, however, you can set it up to deploy the changes each time a commit is pushed to it. If you use Bitbucket and Deploy you would create a deployment hook on Bitbucket to post to a URL on Deploy when a push happens to deploy the code. This can save you a few steps when you are just testing out changes. Even if you have made lots of changes to the staging deployment, the commit that you push live will include them all, so you can do that manually once you are happy with how things look in staging. Further Reading The tutorials from Git Client Tower, already mentioned in this article, are a great place to start if you are new to Git. A presentation from Liam Dempsey showing how to use the GitHub App to connect to Bitbucket Try Git from Code School The Git Workbook a self study guide to Git from Lorna Mitchell Get set up for the new year I love to start the New Year with a clean slate and improved processes. If you are still wrangling files with FTP then this is one thing you could tick off your list to save you time and energy in 2015. Post to the comments if you have suggestions of tools or ideas for ways to enhance this type of set-up for those who have already taken the first steps.",2014,Rachel Andrew,rachelandrew,2014-12-04T00:00:00+00:00,https://24ways.org/2014/developing-robust-deployment-procedures/,process 48,A Holiday Wish,"A friend and I were talking the other day about why clients spend more on toilet cleaning than design, and how the industry has changed since the mid-1990s, when we got our starts. Early in his career, my friend wrote a fine CSS book, but for years he has called himself a UX designer. And our conversation got me thinking about how I reacted to that title back when I first started hearing it. “Just what this business needs,” I said to myself, “another phony expert.” Okay, so I was wrong about UX, but my touchiness was not altogether unfounded. In the beginning, our industry was divided between freelance jack-of-all-trade punks, who designed and built and coded and hosted and Photoshopped and even wrote the copy when the client couldn’t come up with any, and snot-slick dot-com mega-agencies that blew up like Alice and handed out titles like impoverished nobles in the years between the world wars. 
I was the former kind of designer, a guy who, having failed or just coasted along at a cluster of other careers, had suddenly, out of nowhere, blossomed into a web designer—an immensely curious designer slash coder slash writer with a near-insatiable lust to shave just one more byte from every image. We had modems back then, and I dreamed in sixteen colors. My source code was as pretty as my layouts (arguably prettier) and I hoovered up facts and opinions from newsgroups and bulletin boards as fast as any loudmouth geek could throw them. It was a beautiful life. But soon, too soon, the professional digital agencies arose, buying loft buildings downtown, jacking in at T1 speeds, charging a hundred times what I did, and communicating with their clients in person, in large artfully bedecked rooms, wearing hand-tailored Barney’s suits and bringing back the big city bullshit I thought I’d left behind when I quit advertising to become a web designer. Just like the big bad ad agencies of my early career, the new digital agencies stocked every meeting with a totem pole worth of ranks and titles. If the client brought five upper middle managers to the meeting, the agency did likewise. If fifteen stakeholders got to ask for a bigger logo, fifteen agency personnel showed up to take notes on the percentage of enlargement required. But my biggest gripe was with the titles. The bigger and more expensive the agency, the lousier it ran with newly invented titles. Nobody was a designer any more. Oh, no. Designer, apparently, wasn’t good enough. Designer was not what you called someone you threw that much money at. Instead of designers, there were user interaction leads and consulting middleware integrators and bilabial experience park rangers and you name it. At an AIGA Miami event where I was asked to speak in the 1990s, I once watched the executive creative director of the biggest dot-com agency of the day make a presentation where he spent half his time bragging that the agency had recently shaved down the number of titles for people who basically did design stuff from forty-six to just twenty-three—he presented this as though it were an Einsteinian coup—and the other half of his time showing a film about the agency’s newly opened branch in Oslo. The Oslo footage was shot in December. I kept wondering which designer in the audience who lived in the constant breezy balminess of Miami they hoped to entice to move to dark, wintry Norway. But I digress. Shortly after I viewed this presentation, the dot-com world imploded, brought about largely by the euphoric excess of the agencies and their clients. But people still needed websites, and my practice flourished—to the point where, in 1999, I made the terrifying transition from guy in his underwear working freelance out of his apartment to head of a fledgling design studio. (Note: you never stop working on that change.) I had heard about experience design in the 1990s, but assumed it was a gig for people who only knew one font. But sometime around 2004 or 2005, among my freelance and small-studio colleagues, like a hobbit in the Shire, I began hearing whispers in the trees of a new evil stirring. The fires of Mordor were burning. Web designers were turning in their HTML editing tools and calling themselves UXers. I wasn’t sure if they pronounced it “uck-sir,” or “you-ex-er,” but I trusted their claims to authenticity about as far as I trusted the actors in a Doctor Pepper commercial when they claimed to be Peppers. 
I’m an UXer, you’re an UXer, wouldn’t you like to be an UXer too? No thanks, said I. I still make things. With my hands. Such was my thinking. I may have earned an MFA at the end of some long-past period of soul confusion, but I have working-class roots and am profoundly suspicious of, well, everything, but especially of anything that smacks of pretense. I got exporting GIFs. I didn’t get how white papers and bullet points helped anybody do anything. I was wrong. And gradually I came to know I was wrong. And before other members of my tribe embraced UX, and research, and content strategy, and the other airier consultant services, I was on board. It helped that my wife of the time was a librarian from Michigan, so I’d already bought into the cult of information architecture. And if I wasn’t exactly the seer who first understood how borderline academic practices related to UX could become as important to our medium and industry as our craft skills, at least I was down a lot faster than Judd Apatow got with feminism. But I digress. I love the web and all the people in it. Today I understand design as a strategic practice above all. The promise of the web, to make all knowledge accessible to all people, won’t be won by HTML5, WCAG 2, and responsive web design alone. We are all designers. You may call yourself a front-end developer, but if you spend hours shaving half-seconds off an interaction, that’s user experience and you, my friend, are a designer. If the client asks, “Can you migrate all my old content to the new CMS?” and you answer, “Of course we can, but should we?”, you are a designer. Even our users are designers. Think about it. Once again, as in the dim dumb dot-com past, we seem to be divided by our titles. But, O, my friends, our varied titles are only differing facets of the same bright gem. Sisters, brothers, we are all designers. Love on! Love on! And may all your web pages, cards, clusters, clumps, asides, articles, and relational databases be bright.",2014,Jeffrey Zeldman,jeffreyzeldman,2014-12-18T00:00:00+00:00,https://24ways.org/2014/a-holiday-wish/,ux 49,Universal React,"One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications. More generally we’ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google’s search bots). Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed. The good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows: The user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page. In the background, the client-side JavaScript is executed and takes over the duty of rendering the page. The next time a user clicks, rather than being sent to the server, the client-side app is in control. If the user doesn’t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again. 
This means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that’s clever enough handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem. It’s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you’d like a refresher, the ReactJS docs are a good place to start.  Getting started We’re going to create a tiny ReactJS application that will work on the server and the client. First we’ll need to create a new project and install some dependencies. In a new, blank directory, run: npm init -y npm install --save ejs express react react-router react-dom That will create a new project and install our dependencies: ejs is a templating engine that we’ll use to render our HTML on the server. express is a small web framework we’ll run our server on. react-router is a popular routing solution for React so our app can fully support and respect URLs. react-dom is a small React library used for rendering React components. We’re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too. npm install --save-dev babel-cli babel-preset-es2015 babel-preset-react Then, create a .babelrc file that contains the following: { ""presets"": [""es2015"", ""react""] } What we’ve done here is install Babel’s command line interface (CLI) tool and configured it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We’ll need the React transforms when we start writing JSX when working with React. Creating a server For now, our ExpressJS server is pretty straightforward. All we’ll do is render a view that says ‘Hello World’. Here’s our server code: import express from 'express'; import http from 'http'; const app = express(); app.use(express.static('public')); app.set('view engine', 'ejs'); app.get('*', (req, res) => { res.render('index'); }); const server = http.createServer(app); server.listen(3003); server.on('listening', () => { console.log('Listening on 3003'); }); Here we’re using ES6 modules, which I wrote about on 24 ways last year, if you’d like a reminder. We tell the app to render the index view on any GET request (that’s what app.get('*') means, the wildcard matches any route). We now need to create the index view file, which Express expects to be defined in views/index.ejs: My App Hello World Finally, we’re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command: ./node_modules/.bin/babel-node server.js And you should now be able to visit http://localhost:3003 and see ‘Hello World’ right there: Building the React app Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs. Firstly, let’s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. 
We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so: <%- markup %> Next, we’ll define the routes we want our app to have using React Router. For now we’ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it’s clearer to define them as an object. Here’s what we’re starting with: const routes = { path: '', component: AppComponent, childRoutes: [ { path: '/', component: IndexComponent } ] } These are just placed at the top of server.js, after the import statements. Later we’ll move these into a separate file, but for now they are fine where they are. Notice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let’s quickly define components/app.js and components/index.js. app.js looks like so: import React from 'react'; export default class AppComponent extends React.Component { render() { return (

<div>
  <h2>Welcome to my App</h2>
  { this.props.children }
</div>
); } } When a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland: import React from 'react'; export default class IndexComponent extends React.Component { render() { return (

<div>
  <p>This is the index page</p>
</div>

); } } Server-side routing with React Router Head back into server.js, and firstly we’ll need to add some new imports: import React from 'react'; import { renderToString } from 'react-dom/server'; import { match, RoutingContext } from 'react-router'; import AppComponent from './components/app'; import IndexComponent from './components/index'; The ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It’s this method that we’ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we’ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don’t need to concern yourself about how this component works, so don’t worry too much. Now for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes: app.get('*', (req, res) => { // routes is our object of React routes defined above match({ routes, location: req.url }, (err, redirectLocation, props) => { if (err) { // something went badly wrong, so 500 with a message res.status(500).send(err.message); } else if (redirectLocation) { // we matched a ReactRouter redirect, so redirect from the server res.redirect(302, redirectLocation.pathname + redirectLocation.search); } else if (props) { // if we got props, that means we found a valid component to render // for the given route const markup = renderToString(); // render `index.ejs`, but pass in the markup we want it to display res.render('index', { markup }) } else { // no route match, so 404. In a real app you might render a custom // 404 view here res.sendStatus(404); } }); }); We call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occuring or a redirect (React Router has built in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we’ve made it this far it means we found a matching component to render and we can use this code to render it: ... } else if (props) { // if we got props, that means we found a valid component to render // for the given route const markup = renderToString(); // render `index.ejs`, but pass in the markup we want it to display res.render('index', { markup }) } else { ... } The renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components. Note the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. To see this better, note the two pieces of JSX code below, both of which are equivalent: // OR: const props = { a: ""foo"", b: ""bar"" }; Running the server again I know that felt like a lot of work, but the good news is that once you’ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. 
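Pieced together, the route handler ends up looking something like this (a sketch assuming the routes object and imports shown above, with the RoutingContext element written out in full):

app.get('*', (req, res) => {
  match({ routes, location: req.url }, (err, redirectLocation, props) => {
    if (err) {
      // something went badly wrong, so 500 with a message
      res.status(500).send(err.message);
    } else if (redirectLocation) {
      // we matched a React Router redirect, so redirect from the server
      res.redirect(302, redirectLocation.pathname + redirectLocation.search);
    } else if (props) {
      // a matching component was found, so render the RoutingContext
      // (with the matched props spread onto it) to an HTML string
      const markup = renderToString(<RoutingContext {...props} />);
      // render index.ejs, passing in the markup we want it to display
      res.render('index', { markup });
    } else {
      // no route match, so 404
      res.sendStatus(404);
    }
  });
});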
To check, restart the server and head to http://localhost:3003 once more. You should see it all working! Refactoring and one more route Before we move on to getting this code running on the client, let’s add one more route and do some tidying up. First, move our routes object out into routes.js: import AppComponent from './components/app'; import IndexComponent from './components/index'; const routes = { path: '', component: AppComponent, childRoutes: [ { path: '/', component: IndexComponent } ] } export { routes }; And then update server.js. You can remove the two component imports and replace them with: import { routes } from './routes'; Finally, let’s add one more route for /about and links between them. Create components/about.js: import React from 'react'; export default class AboutComponent extends React.Component { render() { return (

<div>
  <p>A little bit about me.</p>
</div>

); } } And then you can add it to routes.js too: import AppComponent from './components/app'; import IndexComponent from './components/index'; import AboutComponent from './components/about'; const routes = { path: '', component: AppComponent, childRoutes: [ { path: '/', component: IndexComponent }, { path: '/about', component: AboutComponent } ] } export { routes }; If you now restart the server and head to http://localhost:3003/about you’ll see the about page! For the finishing touch we’ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so: import React from 'react'; import { Link } from 'react-router'; export default class AppComponent extends React.Component { render() { return (

<div>
  <h2>Welcome to my App</h2>
  <ul>
    <li><Link to='/'>Home</Link></li>
    <li><Link to='/about'>About</Link></li>
  </ul>
  { this.props.children }
</div>
); } } You can now click between the pages to navigate. However, every time we do so the requests hit the server. Now we’re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience. Client-side rendering First, we’re going to make a small change to views/index.ejs. React doesn’t like rendering directly into the body and will give a warning when you do so. To prevent this we’ll wrap our app in a div:
<div id='app'><%- markup %></div>
<script src='build.js'></script>
I’ve also added in a script tag to build.js, which is the file we’ll generate containing all our client-side code. Next, create client-render.js. This is going to be the only bit of JavaScript that’s exclusive to the client side. In it we need to pull in our routes and render them to the DOM. import React from 'react'; import ReactDOM from 'react-dom'; import { Router } from 'react-router'; import { routes } from './routes'; import createBrowserHistory from 'history/lib/createBrowserHistory'; ReactDOM.render( , document.getElementById('app') ) The first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser’s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we’ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation. Finally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element. Generating build.js We’re actually almost there! The final thing we need to do is generate our client side bundle. For this we’re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We’ll install it and babel-loader, a webpack plugin for transforming code through Babel. npm install --save-dev webpack babel-loader To run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code: var path = require('path'); module.exports = { entry: path.join(process.cwd(), 'client-render.js'), output: { path: './public/', filename: 'build.js' }, module: { loaders: [ { test: /.js$/, loader: 'babel' } ] } } Note first that this file can’t be written in ES6 as it doesn’t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location – if we just gave it the string ‘client-render.js’, webpack wouldn’t be able to find it. Next, we tell webpack where to output our file, and here I’m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first. Now we’re ready to generate the bundle! ./node_modules/.bin/webpack This will take a fair few seconds to run (on my machine it’s about seven or eight), but once it has it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you’ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect! The first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you’re developing. 
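For reference, here is a sketch of client-render.js in one piece, with the Router element that ReactDOM.render receives written out (assuming the routes module and the createBrowserHistory import described above):

import React from 'react';
import ReactDOM from 'react-dom';
import { Router } from 'react-router';
import { routes } from './routes';
import createBrowserHistory from 'history/lib/createBrowserHistory';

// render the client-side app into the #app element, using the
// HTML5 history API so URLs match the server-rendered ones
ReactDOM.render(
  <Router routes={routes} history={createBrowserHistory()} />,
  document.getElementById('app')
);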
Conclusions First, if you’d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions. Next, I want to stress that you shouldn’t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth its complexity, and you’d be right. I used it as it’s an easy example to work with but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it’s a suitable infrastructure for you. With that, all that’s left for me to do is wish you a very merry Christmas and best of luck with your React applications!",2015,Jack Franklin,jackfranklin,2015-12-05T00:00:00+00:00,https://24ways.org/2015/universal-react/,code 54,Putting My Patterns through Their Paces,"Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work. The thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me. Meet the Teaser Here’s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.) In its simplest form, a teaser contains a headline, which links to an article: Fairly straightforward, sure. But it’s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types – some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile. Nearly every element visible on this page is built out of our generic “teaser” pattern. But the teaser variation I’d like to call out is the one that appears on The Toast’s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline. The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts. And this is, as it happens, the teaser variation that gave me pause. Back in the old days – you know, like six months ago – I probably would’ve marked this module up to match the design. In other words, I would’ve looked at the module’s visual hierarchy (metadata up top, headline and content below) and written the following HTML:

<div class='teaser'>
  <p>By Author Name</p>
  <a class='comment-count' href='#'>126 comments</a>
  <h2><a href='#'>Article Title</a></h2>
  <p>Lorem ipsum dolor sit amet, consectetur…</p>
</div>

But then I caught myself, and realized this wasn’t the best approach. Moving Beyond Layout Since I’ve started working responsively, there’s a question I work into every step of my design process. Whether I’m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself: What if someone doesn’t browse the web like I do? …Okay, that doesn’t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it’s been invaluable to so many aspects of my practice. If I’m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I’m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It’s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web. And that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it’d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they’re associated. That’s why I find it’s helpful to begin designing a pattern’s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I’m trying to support. In other words, if someone’s encountering my design without the CSS I’ve written, what should their experience be? So I took a step back, and came up with a different approach:

<div class='teaser'>
  <h2><a href='#'>Article Title</a></h2>
  <p>By Author Name</p>
  <p>Lorem ipsum dolor sit amet, consectetur… <a class='comment-count' href='#'>126 comments</a></p>
</div>

Much, much better. This felt like a better match for the content I was designing: the headline – easily most important element – was at the top, followed by the author’s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that’s why it’s at the very end of the excerpt, the last element within our teaser. And with some light styling, we’ve got a respectable-looking hierarchy in place: Yeah, you’re right – it’s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we’ll bolster the markup with an extra element around our title and byline: With that in place, we can use flexbox to tweak our layout, like so: .teaser-hed { display: flex; flex-direction: column-reverse; } flex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children. Getting closer! But as great as flexbox is, it doesn’t do anything for elements outside our container, like our little comment count, which is, as you’ve probably noticed, still stranded at the very bottom of our teaser. Flexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser’s using a sufficiently modern version of flexbox. Here’s the one we used: var doc = document.body || document.documentElement; var style = doc.style; if ( style.webkitFlexWrap == '' || style.msFlexWrap == '' || style.flexWrap == '' ) { doc.className += "" supports-flex""; } Eagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser’s design: .supports-flex .teaser-hed { display: flex; flex-direction: column-reverse; } .supports-flex .teaser .comment-count { position: absolute; right: 0; top: 1.1em; } If the supports-flex class is present, we can apply our flexbox layout to the title area, sure – but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don’t meet our threshold for our advanced styles are left with an attractive design that matches our HTML’s content hierarchy; but the ones that pass our test receive the finished, final design. And with that, our teaser’s complete. Diving Into Device-Agnostic Design This is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I’d recommend Heydon Pickering’s “Flexbox Grid Finesse”, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you’d be right! In fact, it’s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. 
But I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web. And what’s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well. Open video Even a simple search form can be conditionally enhanced, given a little layered thinking. This more layered approach to interface design isn’t a new one, mind you: it’s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I’ve found to practice responsive design. As Trent Walton once wrote, Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability. We have a weird job, working on the web. We’re designing for the latest mobile devices, sure, but we’re increasingly aware that our definition of “smartphone” is much too narrow. Browsers have started appearing on our wrists and in our cars’ dashboards, but much of the world’s mobile data flows over sub-3G networks. After all, the web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don’t yet know about, a more device-agnostic, more layered design process can better prepare our patterns – and ourselves – for the future. (It won’t help you get enough to eat at holiday parties, though.)",2015,Ethan Marcotte,ethanmarcotte,2015-12-10T00:00:00+00:00,https://24ways.org/2015/putting-my-patterns-through-their-paces/,code 55,How Tabs Should Work,"Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They’ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented. So this post is my definition of how a tabbing system should work, and one approach of implementing that. But… tabs are easy, right? I’ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system: var tabs = $('.tab').click(function () { tabs.hide().filter(this.hash).show(); }).map(function () { return $(this.hash)[0]; }); $('.tab:first').click(); Simple, right? Nearly fits in a tweet (ignoring the whole jQuery library…). Still, it’s riddled with problems that make it a far from perfect solution. Requirements: what makes the perfect tab? All content is navigable and available without JavaScript (crawler-compatible and low JS-compatible). ARIA roles. The tabs are anchor links that: are clickable have block layout have their href pointing to the id of the panel element use the correct cursor (i.e. cursor: pointer). Since tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open. Right-clicking (and Shift-clicking) doesn’t cause the tab to be selected. Native browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place). The first three points are all to do with the semantics of the markup and how the markup has been styled. I think it’s easy to do a good job by thinking of tabs as links, and not as some part of an application. 
Links are navigable, and they should work the same way other links on the page work. The last three points are JavaScript problems. Let’s investigate that. The shitmus test Like a litmus test, here’s a couple of quick ways you can tell if a tabbing system is poorly implemented: Change tab, then use the Back button (or keyboard shortcut) and it breaks The tab isn’t a link, so you can’t open it in a new tab These two basic things are, to me, the bare minimum that a tabbing system should have. Why is this important? The people who push their so-called native apps on users can’t have more reasons why the web sucks. If something as basic as a tab doesn’t work, obviously there’s more ammo to push a closed native app or platform on your users. If you’re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn’t mean don’t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. :breath: URI fragment, absolute URL or query string? A URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content. This decision really depends on the context of your tabbing system. For something like GitHub’s tabs to view a pull request, it makes sense that the full URL changes. For our problem though, I want to solve the issue when the page doesn’t do a full URL update; that is, your regular run-of-the-mill tabbing system. I used to be from the school of using the hash to show the correct tab, but I’ve recently been exploring whether the query string can be used. The biggest reason is that multiple hashes don’t work, and comma-separated hash fragments don’t make any sense to control multiple tabs (since it doesn’t actually link to anything). For this article, I’ll keep focused on using a single tabbing system and a hash on the URL to control the tabs. Markup I’m going to assume subcontent, so my markup would look like this (yes, this is a cat demo…):
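Something along these lines, with tab links whose href hashes point at the ids of their panels (the #dizzy id matches the one the JavaScript uses later; the other names are placeholders):

<a href='#dizzy' class='tab'>Dizzy</a>
<a href='#ninja' class='tab'>Ninja</a>
<a href='#missy' class='tab'>Missy</a>

<div id='dizzy' class='panel'>…</div>
<div id='ninja' class='panel'>…</div>
<div id='missy' class='panel'>…</div>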
It’s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we’ll see once we’re on to writing the code. URL-driven tabbing systems Instead of making the code responsive to the user’s input, we’re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free. With that in mind, let’s start building up our code. I’ll assume we have the jQuery library, but I’ve also provided the full code working without a library (vanilla, if you will), but it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support). Note that I’ll start with the simplest solution, and I’ll refactor the code as I go along, like in places where I keep calling jQuery selectors. function show(id) { // remove the selected class from the tabs, // and add it back to the one the user selected $('.tab').removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it $('.panel').hide().filter(id).show(); } $(window).on('hashchange', function () { show(location.hash); }); // initialise by showing the first panel show('#dizzy'); This works pretty well for such little code. Notice that we don’t have any click handlers for the user and the Back button works right out of the box. However, there’s a number of problems we need to fix: The initialised tab is hard-coded to the first panel, rather than what’s on the URL. If there’s no hash on the URL, all the panels are hidden (and thus broken). If you scroll to the bottom of the example, you’ll find a “top” link; clicking that will break our tabbing system. I’ve purposely made the page long, so that when you click on a tab, you’ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying. From our criteria at the start of this post, we’ve already solved items 4 and 5. Not a terrible start. Let’s solve items 1 through 3 next. Using the URL to initialise correctly and protect from breakage Instead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it’s available. The problem is: what if the hash on the URL isn’t actually for a tab? The solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won’t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached. So now the code will collect all the tabs, then find the related panels from those tabs, and we’ll use that list to double the values we give the show function (during initialisation, for instance). 
// collect all the tabs var tabs = $('.tab'); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')); function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', function () { var hash = location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } }); // initialise show(targets.indexOf(location.hash) !== -1 ? location.hash : ''); The core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab. The second breakage we saw in the original demo was that clicking the “top” link would break our tabs. This was due to the hashchange event firing and the code didn’t validate the hash that was passed. Now this happens, the panels don’t break. So far we’ve got a tabbing system that: Works without JavaScript. Supports right-click and Shift-click (and doesn’t select in these cases). Loads the correct panel if you start with a hash. Supports native browser navigation. Supports the keyboard. The only annoying problem we have now is that the page jumps when a tab is selected. That’s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it’s all for a good cause. Removing the jump to tab You’d be forgiven for thinking you just need to hook a click handler and return false. It’s what I started with. Only that’s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support. There may be another way to solve this, but what follows is the way I found – and it works. It’s just a bit… hairy, as I said. We’re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won’t jump the page. The change involves the following: Add a click handle that removes the id from the target panel, and cache this in a target variable that we’ll use later in hashchange (see point 4). In the same click handler, set the location.hash to the current link’s hash. This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line). For each panel, put a backup copy of the id attribute in a data property (I’ve called it old-id). When the hashchange event fires, if we have a target value, let’s put the id back on the panel. 
These changes result in this final code: /*global $*/ // a temp value to cache *what* we're about to show var target = null; // collect all the tabs var tabs = $('.tab').on('click', function () { target = $(this.hash).removeAttr('id'); // if the URL isn't going to change, then hashchange // event doesn't fire, so we trigger the update manually if (location.hash === this.hash) { // but this has to happen after the DOM update has // completed, so we wrap it in a setTimeout 0 setTimeout(update, 0); } }); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')).each(function () { // keep a copy of what the original el.id was $(this).data('old-id', this.id); }); function update() { if (target) { target.attr('id', target.data('old-id')); target = null; } var hash = window.location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } } function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', update); // initialise if (targets.indexOf(window.location.hash) !== -1) { update(); } else { show(); } This version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add. ARIA roles This article on ARIA tabs made it very easy to get the tabbing system working as I wanted. The tasks were simple: Add aria-role set to tab for the tabs, and tabpanel for the panels. Set aria-controls on the tabs to point to their related panel (by id). I use JavaScript to add tabindex=0 to all the tab elements. When I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false. When I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false. And that’s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible. Check out the final version (and the non-jQuery version as promised). In conclusion There’s a lot of tab implementations out there, but there’s an equal amount that break the browsing paradigm and the simple linkability of content. Clearly there’s a special hell for those tab systems that don’t even use links, but I think it’s clear that even in something that’s relatively simple, it’s the small details that make or break the user experience. Obviously there are corners I’ve not explored, like when there’s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that’s for another year!",2015,Remy Sharp,remysharp,2015-12-22T00:00:00+00:00,https://24ways.org/2015/how-tabs-should-work/,code 57,Cooking Up Effective Technical Writing,"Merry Christmas! May your preparations for this festive season of gluttony be shaping up beautifully. 
By the time you read this I hope you will have ordered your turkey, eaten twice your weight in Roses/Quality Street (let’s not get into that argument), and your Christmas cake has been baked and is now quietly absorbing regular doses of alcohol. Some of you may be reading this and scoffing Of course! I’ve also made three batches of mince pies, a seasonal chutney and enough gingerbread men to feed the whole street! while others may be laughing Bake? Oh no, I can’t cook to save my life. For beginners, recipes are the step-by-step instructions that hand-hold us through the cooking process, but even as a seasoned expert you’re likely to refer to a recipe at some point. Recipes tell us what we need, what to do with it, in what order, and what the outcome will be. It’s the documentation behind our ideas, and allows us to take the blueprint for a tasty morsel and to share it with others so they can recreate it. In fact, this is a little like the open source documentation and tutorials that we put out there, similarly aiming to guide other developers through our creations. The ‘just’ification of documentation Lately it feels like we’re starting to consider the importance of our words, and the impact they can have on others. Brad Frost warned us of the dangers of “Just” when it comes to offering up solutions to queries: “Just use this software/platform/toolkit/methodology…” “Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word. “Just” by Brad Frost I can really empathise with these sentiments. My relationship with code started out as many good web tales do, with good old HTML, CSS and JavaScript. University years involved some time with Perl, PHP, Java and C. In my first job I worked primarily with ColdFusion, a bit of ActionScript, some classic  ASP and pinch of Java. I’d do a bit of PHP outside work every now and again. .NET came in, but we never really got on, and eventually I started learning some Ruby, Python and Node. It was a broad set of learnings, and I enjoyed the similarities and differences that came with new languages. I don’t develop day in, day out any more, and my interests and work have evolved over the years, away from full-time development and more into architecture and strategy. But I still make things, and I still enjoy learning. I have often found myself bemoaning the lack of tutorials or courses that cater for the middle level – someone who may be learning a new language, but who has enough programming experience under their belt to not need to revise the concepts of how loops or objects work, and is perfectly adept at googling the syntax for getting a substring. I don’t want snippets out of context; I want an understanding of architectural principles, of the strengths and weaknesses, of the type of applications that work well with the language. I’m caught in the place between snoozing off when ‘Using the Instagram API with Ruby’ hand-holds me through what REST is, and feeling like I’m stupid and need to go back to dev school when I can’t get my environment and dependencies set up, let alone work out how I’m meant to get any code to run. It’s seems I’m not alone with this – Erin McKean seems to have been here too: “Some tutorials (especially coding tutorials) like to begin things in media res. Great for a sense of dramatic action, bad for getting to “Step 1” without tears. 
It can be really discouraging to fire up a fresh terminal window only to be confronted by error message after error message because there were obligatory steps 0.1.0 through 0.9.9 that you didn’t even know about.” “Tips for Learning What You Don’t Know You Don’t Know” by Erin McKean I’m sure you’ve been here too. Many tutorials suffer badly from the fabled ‘how to draw an owl’-itis. It’s the kind of feeling you can easily get when sifting through recipes as well as with code. Far from being the simple instructions that let us just follow along, they too can be a minefield. Fall in too low and you may be skipping over an explanation of what simmering is, or set your sights too high and you may get stuck at the point where you’re trying to sous vide a steak using your bathtub and a Ziploc bag. Don’t be a turkey, use your loaf! My mum is a great cook in my eyes (aren’t all mums?). I love her handcrafted collection of gathered recipes from over the years, including the one below, which is a great example of how something may make complete sense to the writer, but could be impermeable to a reader. Depending on your level of baking knowledge, you may ask: What’s SR flour? What’s a tsp? Should I use salted or unsalted butter? Do I use sticks of cinnamon or ground? Why is chopped chocolate better? How do I cream things? How big should the balls be? How well is “well spaced”? How much leeway do I have for “(ish!!)”? Does the “20” on the other cookie note mean I’ll end up with twenty? At any point, making a wrong call could lead to rubbish cookies, and lead to someone heading down the path of an I can’t cook mentality. You may be able to cook (or follow recipes), but you may not understand the local terms for ingredients, may not be able to acquire something and need to know what kind of substitutes you can use, or may need to actually do some prep before you jump into the main bit. However, if we look at good examples of recipes, I think there’s a lot we can apply when it comes to technical writing on the web. I’ve written before about the benefit of breaking documentation into small, reusable parts, and this will help us, but we can also take it a bit further. Here are my five top tips for better technical writing. 1. Structure and standardise your information Think of the structure of a recipe. We very often have some common elements and they usually follow roughly the same format. We have standards and conventions that allow us to understand very quickly what a recipe is and how it should be used.  Great recipes help their chefs know what they need to get ready in advance, both in terms of buying ingredients and putting together their kit. They then talk through the process, using appropriate language, and without making assumptions that the person can fill in any gaps for themselves; they explain why things are done the way they are. The best recipes may also suggest how you can take what you’ve done and put your own spin on it. For instance, a good recipe for the simple act of boiling an egg will explain cooking time in relation to your preference for yolk gooiness. There are also different flavour combinations to try, accompaniments, or presentation suggestions.  By breaking down your technical writing into similar sections, you can help your audience understand the elements they’ll be working with, what they need to do once they have these, and how they can move on from your self-contained illustration. Title Ensure your title is suitably descriptive and representative of the result. 
Getting Started with Python perhaps isn’t as helpful as Learn Python: General Syntax and Basics. Result Many recipes include a couple of lines as an overview of what you’ll end up with, and many include a photo of the finished dish. With our technical writing we can do the same: In this tutorial we’re going to learn how to set up our development environment, and we’ll then undertake some exercises to explore the general syntax, finishing by building a mini calculator. Ingredients What are the components we’ll be working with, whether in terms of versions, environment, languages or the software packages and libraries you’ll need along the way? Listing these up front gives the reader a great summary of the things they’ll be using, and any gotchas. Being able to provide a small amount of supporting information will also help less experienced users. Ideally, explain briefly what things are and why we’re using it. Prep As we heard from Erin above, not fully understanding the prep needed can be a huge source of frustration. Attempting to run a code snippet without context will often lead to failure when the prerequisites and process aren’t clear. Be sure to include information around any environment set-up, installation or config you’ll need to have done before you start. Stu Robson’s Simple Sass documentation aims to do this before getting into specifics, although ideally this would also include setting up Sass itself. Instructions The body of the tutorial itself is the whole point of our writing. The next four tips will hopefully make your tutorial much more successful. Variations Like our ingredients section, as important as explaining why we’re using something in this context is, it’s also great to explain alternatives that could be used instead, and the impact of doing so. Perhaps go a step further, explaining ways that people can change what you have done in your tutorial/readme for use in different situations, or to provide further reading around next steps. What happens if they want to change your static array of demo data to use JSON, for instance? By giving some thought to follow-up questions, you can better support your readers. While not in a separate section, the source code for GreenSock’s GSAP JS basics explains: We’ll use a window.onload for simplicity, but typically it is best to use either jQuery’s $(document).ready() or $(window).load() or cross-browser event listeners so that you’re not limited to one. Keep in mind to both: Explain what variations are possible. Explain why certain options may be more desirable than others in different situations. 2. Small, reusable components Reusable components are for life, not just for Christmas, and they’re certainly not just for development. If you start to apply the structure above to your writing, you’re probably going to keep coming across the same elements: Do I really have to explain how to install Sass and Node.js again, Sally? The danger with more clarity is that our writing becomes bloated and overly convoluted for advanced readers, those who don’t need to be told how to beat an egg for the hundredth time.  Instead, by making our writing reusable and modular, and by creating smaller, central resources, we can provide context and extra detail where needed without diluting our core message. These could be references we create, or those already created well by others. This recipe for katsudon makes use of this concept. Rather than explaining how to make tonkatsu or dashi stock, these each have their own page. 
Once familiar, more advanced readers will likely skip over the instructions for the component parts. 3. Provide context to aid accessibility Here I’m talking about accessibility in the broadest sense. Small, isolated snippets can be frustrating to those who don’t fully understand the wider context of how our examples work. Showing an exciting standalone JavaScript function is great, but giving someone the full picture of how and when this is called, and how it should be included in relation to other HTML and CSS is even better. Giving your readers the ability to view a big picture version, and ideally the ability to download a full version of the source, will help to reduce some of the frustrations of trying to get your component to work in their set-up.  4. Be your own tech editor A good editor can be invaluable to your work, and wherever possible I’d recommend that you try to get a neutral party to read over your writing. This may not always be possible, though, and you may need to rely on yourself to cast a critical eye over your work. There are many tips out there around general editing, including printing out your work onto paper, or changing the font size: both will force your eyes to review it in a new light. Beyond this, I’d like to encourage you to think about the following: Explain what things are. For example, instead of referencing Grunt, in the first instance perhaps reference “Grunt (a JavaScript task runner that minimises repetitive activities through automation).” Explain how you get things, even if this is a link to official installers and documentation. Don’t leave your readers having to search. Why are you using this approach/technology over other options? What happens if I use something else? What depends on this? Avoid exclusionary lingo or acronyms. Airbnb’s JavaScript Style Guide includes useful pointers around their reasoning: Use computed property names when creating objects with dynamic property names. Why? They allow you to define all the properties of an object in one place. The language we use often makes assumptions, as we saw with “just”. An article titled “ES6 for Beginners” is hugely ambiguous: is this truly for beginner coders, or actually for people who have a good pre-existing understanding of JavaScript but are new to these features? Review your writing with different types of readers in mind. How might you confuse or mislead them? How can you better answer their questions? This doesn’t necessarily mean supporting everyone – your audience may need to have advanced skills – but even if you’re providing low-level, deep-dive, reference material, trying not to make assumptions or take shortcuts will hopefully lead to better, clearer writing. 5. A picture is worth a thousand words… …or even better: use a thousand pictures, stitched together into a quick video or animated GIF. People learn in different ways. Just as recipes often provide visual references or a video to work along with, providing your technical information with alternative demonstrations can really help get your point across. Your audience will be able to see exactly what you’re doing, what they should expect as interaction responses, and what the process looks like at different points. There are many, many options for recording your screen, including QuickTime Player on Mac OS X (File → New Screen Recording), GifGrabber, or Giffing Tool on Windows. 
Paul Swain, a UX designer, uses GIFs to provide additional context within his documentation, improving communication: “My colleagues (from across the organisation) love animated GIFs. Any time an interaction is referenced, it’s accompanied by a GIF and a shared understanding of what’s being designed. The humble GIF is worth so much more than a thousand words; and it’s great for cats.” Paul Swain Next time you’re cooking up some instructions for readers, think back to what we can learn from recipes to help make your writing as accessible as possible. Use structure, provide reusable bitesize morsels, give some context, edit wisely, and don’t scrimp on the GIFs. And above all, have a great Christmas!",2015,Sally Jenkinson,sallyjenkinson,2015-12-18T00:00:00+00:00,https://24ways.org/2015/cooking-up-effective-technical-writing/,content 58,Beyond the Style Guide,"Much like baking a Christmas cake, designing for the web involves creating an experience in layers. Starting with a solid base that provides the core experience (the fruit cake), we can add further layers, each adding refinement (the marzipan) and delight (the icing). Don’t worry, this isn’t a misplaced cake recipe, but an evaluation of modular design and the role style guides can play in acknowledging these different concerns, be they presentational or programmatic. The auteur’s style guide Although trained as a graphic designer, it was only when I encountered the immediacy of the web that I felt truly empowered as a designer. Given a desire to control every aspect of the resulting experience, I slowly adopted the role of an auteur, exploring every part of the web stack: front-end to back-end, and everything in between. A few years ago, I dreaded using the command line. Today, the terminal is a permanent feature in my Dock. In straddling the realms of graphic design and programming, it’s the point at which they meet that I find most fascinating, with each dicipline valuing the creation of effective systems, be they for communication or code efficiency. Front-end style guides live at this intersection, demonstrating both the modularity of code and the application of visual design. Painting by numbers In our rush to build modular systems, design frameworks have grown in popularity. While enabling quick assembly, these come at the cost of originality and creative expression – perhaps one reason why we’re seeing the homogenisation of web design. In editorial design, layouts should accentuate content and present it in an engaging manner. Yet on the web we see a practice that seeks templated predictability. In ‘Design Machines’ Travis Gertz argued that (emphasis added): Design systems still feel like a novelty in screen-based design. We nerd out over grid systems and modular scales and obsess over style guides and pattern libraries. We’re pretty good at using them to build repeatable components and site-wide standards, but that’s sort of where it ends. […] But to stop there is to ignore the true purpose and potential of a design system. Unless we consider how interface patterns fully embrace the design systems they should be built upon, style guides may exacerbate this paint-by-numbers approach, encouraging conformance and suppressing creativity. Anatomy of a button Let’s take a look at that most canonical of components, the button, and consider what we might wish to document and demonstrate in a style guide. The different layers of our button component. Content The most variable aspect of any component. 
Content guidelines will exert the most influence here, dictating things like tone of voice (whether we should we use stiff, formal language like ‘Submit form’, or adopt a more friendly tone, perhaps ‘Send us your message’) and appropriate language. For an internationalised interface, this may also impact word length and text direction or orientation. Structure HTML provides a limited vocabulary which we can use to structure content and add meaning. For interactive elements, the choice of element can also affect its behaviour, such as whether a button submits form data or links to another page: Button text Note: One of the reasons I prefer to use An icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users: Open video A screen reader announcing a button with a title. However, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn’t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts: The mobile GitHub view with and without external fonts. Even if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn’t load. Providing always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons. This also reads just fine in screen readers: Open video A screen reader announcing the revised button. Clever usability enhancements don’t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the “captioned videos” or “audio description” categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don’t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study. More information W3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out “How People with Disabilities Use the Web”, there are “Tips for Getting Started” for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way. Conclusion You can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. 
Users often don’t notice when those fundamental aspects of good website design and development are done right. But they’ll always know when they are implemented poorly. If you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn’t know they’d need it: Open video In this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks “What did she say?” the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.",2015,Eric Eggert,ericeggert,2015-12-17T00:00:00+00:00,https://24ways.org/2015/the-accessibility-mindset/,code 67,What I Learned about Product Design This Year,"2015 was a humbling year for me. In September of 2014, I joined a tiny but established startup called SproutVideo as their third employee and first designer. The role interests me because it affords the opportunity to see how design can grow a solid product with a loyal user-base into something even better. The work I do now could also have a real impact on the brand and user experience of our product for years to come, which is a thrilling prospect in an industry where much of what I do feels small and temporary. I got in on the ground floor of something special: a small, dedicated, useful company that cares deeply about making video hosting effortless and rewarding for our users. I had (and still have) grand ideas for what thoughtful design can do for a product, and the smaller-scale product design work I’ve done or helped manage over the past few years gave me enough eager confidence to dive in head first. Readers who have experience redesigning complex existing products probably have a knowing smirk on their face right now. As I said, it’s been humbling. A year of focused product design, especially on the scale we are trying to achieve with our small team at SproutVideo, has taught me more than any projects in recent memory. I’d like to share a few of those lessons. Product design is very different from marketing design The majority of my recent work leading up to SproutVideo has been in marketing design. These projects are so fun because their aim is to communicate the value of the product in a compelling and memorable way. In order to achieve this goal, I spent a lot of time thinking about content strategy, responsive design, and how to create striking visuals that tell a story. These are all pursuits I love. Product design is a different beast. When designing a homepage, I can employ powerful imagery, wild gradients, and somewhat-quirky fonts. When I began redesigning the SproutVideo product, I wanted to draw on all the beautiful assets I’ve created for our marketing materials, but big gradients, textures, and display fonts made no sense in this new context. That’s because the product isn’t about us, and it isn’t about telling our story. Product design is about getting out of the way so people can do their job. The visual design is there to create a pleasant atmosphere for people to work in, and to help support the user experience. Learning to take “us” out of the equation took some work after years of creating gorgeous imagery and content for the sales-driven side of businesses. I’ve learned it’s very valuable to design both sides of the experience, because marketing and product design flex different muscles. 
If you’re currently in an environment where the two are separate, consider switching teams in 2016. Designing for product when you’ve mostly done marketing, or vice versa, will deepen your knowledge as a designer overall. You’ll face new unexpected challenges, which is the only way to grow. Product design can not start with what looks good on Dribbble I have an embarrassing confession: when I began the redesign, I had a secret goal of making something that would look gorgeous in my portfolio. I have a collection of product shots that I admire on Dribbble; examples of beautiful dashboards and widgets and UI elements that look good enough to frame. I wanted people to feel the same way about the final outcome of our redesign. Mistakenly, this was a factor in my initial work. I opened Photoshop and crafted pixel-perfect static buttons and form elements and color palettes that — when applied to our actual product — looked like a toddler beauty pageant. It added up to a lot of unusable shininess, noise, and silliness. I was disappointed; these elements seemed so lovely in isolation, but in context, they felt tacky and overblown. I realized: I’m not here to design the world’s most beautiful drop down menu. Good design has nothing to do with ego, but in my experience designers are, at least a little bit, secret divas. I’m no exception. I had to remind myself that I am not working in service of a bigger Dribbble following or to create the most Pinterest-ing work. My function is solely to serve the users — to make life a little better for the good people who keep my company in business. This meant letting go of pixel-level beauty to create something bigger and harder: a system of elements that work together in harmony in many contexts. The visual style exists to guide the users. When done well, it becomes a language that users understand, so when they encounter a new feature or have a new goal, they already feel comfortable navigating it. This meant stripping back my gorgeous animated menu into something that didn’t detract from important neighboring content, and could easily fit in other parts of the app. In order to know what visual style would support the users, I had to take a wider view of the product as a whole. Just accept that designing a great product – like many worthwhile pursuits – is initially laborious and messy Once I realized I couldn’t start by creating the most Dribbble-worthy thing, I knew I’d have to begin with the unglamorous, frustrating, but weirdly wonderful work of mapping out how the product’s content could better be structured. Since we’re redesigning an existing product, I assumed this would be fairly straightforward: the functionality was already in place, and my job was just to structure it in a more easily navigable way. I started by handing off a few wireframes of the key screens to the developer, and that’s when the questions began rolling in: “If we move this content into a modal, how will it affect this similar action here?” “What happens if they don’t add video tags, but they do add a description?” “What if the user has a title that is 500 characters long?” “What if they want their video to be private to some users, but accessible to others?”. How annoying (but really, fantastic) that people use our product in so many ways. Turns out, product design isn’t about laying out elements in the most ideal scenario for the user that’s most convenient for you. As product designers, we have to foresee every outcome, and anticipate every potential user need. 
Which brings me to another annoying epiphany: if you want to do it well, and account for every user, product design is so much more snarly and tangled than you’d expect going in. I began with a simple goal: to improve the experience on just one of our key product pages. However, every small change impacts every part of the product to some degree, and that impact has to be accounted for. Every decision is based on assumptions that have to be tested; I test my assumptions by observing users, talking to the team, wireframing, and prototyping. Many of my assumptions are wrong. There are days when it’s incredibly frustrating, because an elegant solution for users with one goal will complicate life for users with another goal. It’s vital to solve as many scenarios as possible, even though this is slow, sometimes mind-bending work. As a side bonus, wireframing and prototyping every potential state in a product is tedious, but your developers will thank you for it. It’s not their job to solve what happens when there’s an empty state, error, or edge case. Showing you’ve accounted for these scenarios will win a developer’s respect; failing to do so will frustrate them. When you’ve created and tested a system that supports user needs, it will be beautiful Remember what I said in the beginning about wanting to create a Dribbble-worthy product? When I stopped focusing on the visual details of the design (color, spacing, light and shadow, font choices) and focused instead on structuring the content to maximize usability and delight, a beautiful design began to emerge naturally. I began with grayscale, flat wireframes as a strategy to keep me from getting pulled into the visual style before the user experience was established. As I created a system of elements that worked in harmony, the visual style choices became obvious. Some buttons would need to be brighter and sit off the page to help the user spot important actions. Some elements would need line separators to create a hierarchy, where others could stand on their own as an emphasized piece of content. As the user experience took shape, the visual style emerged naturally to support it. The result is a product that feels beautiful to use, because I was thoughtful about the experience first. A big takeaway from this process has been that my assumptions will often be proven wrong. My assumptions about how to design a great product, and how users will interact with that product, have been tested and revised repeatedly. At SproutVideo we’re about to undertake the biggest test of our work; we’re going to launch a small part of the product redesign to our users. If I’ve learned anything, it’s that I will continue to be humbled by the ongoing effort of making the best product I can, which is a wonderful thing. Next year, I hope you all get to do work that takes you out of our comfort zone. Be regularly confounded and embarrassed by your wrong assumptions, learn from them, and come back and tell us what you learned in 2016.",2015,Meagan Fisher,meaganfisher,2015-12-14T00:00:00+00:00,https://24ways.org/2015/what-i-learned-about-product-design-this-year/,design 68,"Grid, Flexbox, Box Alignment: Our New System for Layout","Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. 
However, in 2016 we should be seeing it in a new improved form ready for our use in browsers. Grid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work. Another new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, The flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported. Owing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks. If there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years time we’ll be easily able to date a website to circa 2015 by seeing
<div class="row"> or <div class="col-md-4">
in the markup. In this article, I’m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them. To see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here. Relationship Items only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It’s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout. At a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox: See the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen. And in grid layout (requires a CSS grid-supporting browser): See the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen. Alignment Full-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you’d like to have under your tree this Christmas, then this is the box you’ll want to unwrap first. The box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods. That includes grid layout, but also other layout methods. Once implemented in browsers, this specification will give us true vertical centring of all the things. Our examples above achieved full-height columns because the default value of align-items is stretch. The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property. The examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. The other three images are aligned using different values of align-self. Take a look at an example in flexbox: See the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen. And also in grid layout (requires a CSS grid-supporting browser): See the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen. The alignment properties used with CSS grid layout. Fluid grids A cornerstone of responsive design is the concept of fluid grids. “[…]every aspect of the grid—and the elements laid upon it—can be expressed as a proportion relative to its container.” —Ethan Marcotte, “Fluid Grids” The method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element. 
h1 { margin-left: 14.575%; /* 144px / 988px = 0.14575 */ width: 70.85%; /* 700px / 988px = 0.7085 */ } In more recent years, we’ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple. The most basic of flexbox demos shows this fluidity in action. The justify-content property – another property defined in the box alignment module – can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion. In this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between. See the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen. For true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion. The flexbox flex property is a shorthand for: flex-grow flex-shrink flex-basis The flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value. flex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis. flex: 0 0 200px: a box that will be 200px and cannot grow or shrink. flex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller. In this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis. See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen. What I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out. If I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo: See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen. Once you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes. In this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr. grid-template-columns: 1fr 2fr 2fr 2fr; See the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen. The four-track grid. Separation of concerns My younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup. 
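Before moving on, here is a compact sketch pulling together the proportional sizing described above, first with flex-grow and then with the fraction unit. The selectors are illustrative rather than lifted from the linked demos, and the grid rules still need a grid-supporting browser.

/* Every box can grow and shrink from a 100 pixel basis, but the later
   boxes receive three shares of any spare space for every one share
   given to the first box, keeping the portrait image narrower. */
.boxes {
  display: flex;
}
.boxes > * {
  flex: 3 1 100px;
}
.boxes > :first-child {
  flex: 1 1 100px;
}

/* The grid equivalent: four column tracks sized with the fraction
   unit, the first receiving one part of the available space and the
   others two parts each. */
.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 2fr 2fr;
}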
Browsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another. Here is a snippet from the Bootstrap grid examples – two columns with two nested columns:
<div class="row">
  <div class="col-md-8">
    .col-md-8
    <div class="row">
      <div class="col-md-6">.col-md-6</div>
      <div class="col-md-6">.col-md-6</div>
    </div>
  </div>
  <div class="col-md-4">.col-md-4</div>
</div>
Not a million miles away from something I might have written in 1999.
<table>
  <tr>
    <td>
      .col-md-8
      <table>
        <tr>
          <td>.col-md-6</td>
          <td>.col-md-6</td>
        </tr>
      </table>
    </td>
    <td>.col-md-4</td>
  </tr>
</table>
Grid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer. Flexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him. See the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen. Grid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser): See the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen. Laying out our items in two dimensions using grid layout. As these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout. A note on accessibility and reordering The issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor’s draft states “Authors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.” —CSS Flexible Box Layout Module Level 1, Editor’s Draft (3 December 2015) This is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning. Automatic content placement with rules Having control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities. Something that is useful to be able to do when dealing with content coming out of a CMS or being pulled from some other source, is to define a bunch of rules and then say, “Display this content, using these rules.” As an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever. The poem is displayed first in the source as a set of paragraphs. I’ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I’ve added a class of landscape to the landscape ones. The mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid. 
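As a rough sketch of the mobile-first rules just described, the poem paragraphs can be pinned to alternate rows of a single-column grid, leaving gaps for auto-placement to fill with the images. The class names are illustrative and not copied from the demo, which you should consult for the real thing.

/* Single column, mobile first: explicit row lines for the verses,
   auto-placement fills the rows left empty in between. */
.poem {
  display: grid;
  grid-template-columns: 1fr;
}
.verse-1 { grid-row: 1; }
.verse-2 { grid-row: 3; }
.verse-3 { grid-row: 5; }
.verse-4 { grid-row: 7; }

The wider-screen rules described next extend the same idea across four tracks, with the landscape images spanning two of them.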
At wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order. I also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. It does this because I have set the property grid-auto-flow to dense. The default is sparse meaning that grid will not attempt this backfilling behaviour. Take a look and play around with the full demo (requires a CSS grid layout-supporting browser): See the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen. The final automatic placement example. My wish for 2016 I really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production — that we can start to move away from using the wrong tools for the job. However, I also hope that we’ll see developers fully embracing these tools as the new system that they are. I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web. Some further reading I maintain a site of grid layout examples and resources at Grid by Example. The three CSS specifications I’ve discussed can be found as editor’s drafts: CSS grid, flexbox, box alignment. I wrote about the last three years of my interest in CSS grid layout, which gives something of a history of the specification. More examples of box alignment and grid layout. My presentation at Fronteers earlier this year, in which I explain more about these concepts.",2015,Rachel Andrew,rachelandrew,2015-12-15T00:00:00+00:00,https://24ways.org/2015/grid-flexbox-box-alignment-our-new-system-for-layout/,code 71,Upping Your Web Security Game,"When I started working in web security fifteen years ago, web development looked very different. The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in. If the web security game was hard to win before, it’s doomed to fail now. In today’s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them. Security teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application’s pace of change. 
For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than that we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well. Where To Start? Embracing security isn’t something you do overnight. A good place to start is by reviewing things you’re already doing – and trying to make them more secure. Here are three concrete steps you can take to get going. HTTPS Threats begin when your system interacts with the outside world, which often means HTTP. As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they’re talking to, and that the information exchanged can be neither modified nor sniffed. HTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video. Using HTTPS is becoming easier and cheaper. Here are a few free tools that can help: Get free and easy HTTPS delivery from Cloudflare (be sure to use “Full SSL”!) Get a free and automation-friendly certificate from Let’s Encrypt (now in open beta). Test how well your HTTPS is set up using SSLTest. Other vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows. Two-Factor Authentication The most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more. All of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents. The typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn’t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don’t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others. If implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. 
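Going back to the do-it-yourself TOTP route for a moment, here is a minimal sketch of how the speakeasy module mentioned above might be wired up. Treat the exact calls as an approximation to check against speakeasy's own documentation rather than a drop-in implementation.

// When a user turns on 2FA, generate a secret and store it against
// their account; the otpauth_url can be rendered as a QR code for
// apps like Google Authenticator.
var speakeasy = require('speakeasy');

var secret = speakeasy.generateSecret({ length: 20 });
// save secret.base32 with the user record
// show secret.otpauth_url to the user as a QR code

// At login, after the password checks out, verify the one-time code
// they typed in; a small window tolerates clock drift between devices.
function verifyToken(storedBase32Secret, submittedToken) {
  return speakeasy.totp.verify({
    secret: storedBase32Secret,
    encoding: 'base32',
    token: submittedToken,
    window: 1
  });
}

If that still feels like too much plumbing, handing the whole login over to one of the OAuth providers just listed remains the simpler option.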
These bigger players tend to do authentication well and support 2FA, but you should consider what data you’re sharing with them in the process. Tracking Known Vulnerabilities Most of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack. This is a real problem and happens on pretty much every platform. Do you develop in Java? In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, the typical Java applications containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps. To find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk’s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP’s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies. This process is still way too painful, which means most teams don’t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk’s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk’s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves. Note that newly disclosed vulnerabilities usually impact old code – the one you’re running in production. This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk’s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now – you can register to get updated when we expand. Securing Yourself In addition to making your application secure, you should make the contributors to that application secure – including you. Earlier this year we’ve seen attackers target mobile app developers with a malicious Xcode. The real target, however, wasn’t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised. There’s no single step that will make you fully secure, but here are a few steps that can make a big impact: Use 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I’d recommend using 2FA on all your personal services too. Use a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. 
When that happens, don’t let the attackers access your other systems too. Secure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications. Be very wary of phishing. Smart attackers use ‘spear phishing’ techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. Be alert to phishy emails. Don’t install things through curl | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don’t do it on your machines, and definitely don’t do it in your CI/CD systems. Seriously. Staying secure should be important to you personally, but it’s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors. A Culture of Security Using HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey. The end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves – and our users – safe.",2015,Guy Podjarny,guypodjarny,2015-12-11T00:00:00+00:00,https://24ways.org/2015/upping-your-web-security-game/,code 72,Designing with Contrast,"When an appetite for aesthetics over usability becomes the bellwether of user interface design, it’s time to reconsider who we’re designing for. Over the last few years, we have questioned the signifiers that gave obvious meaning to the function of interface elements. Strong textures, deep shadows, gradients — imitations of physical objects — were discarded. And many, rightfully so. Our audiences are now more comfortable with an experience that feels native to the technology, so we should respond in kind. Yet not all of the changes have benefitted users. Our efforts to simplify brought with them a trend of ultra-minimalism where aesthetics have taken priority over legibility, accessibility and discoverability. The trend shows no sign of losing popularity — and it is harming our experience of digital content. A thin veneer We are in a race to create the most subdued, understated interface. Visual contrast is out. In its place: the thinnest weights of a typeface and white text on bright color backgrounds. Headlines, text, borders, backgrounds, icons, form controls and inputs: all grey. While we can look back over the last decade and see minimalist trends emerging on the web, I think we can place a fair share of the responsibility for the recent shift in priorities on Apple. The release of iOS 7 ushered in a radical change to its user interface. It paired mobile interaction design to the simplicity and eloquence of Apple’s marketing and product design. It was a catalyst. We took what we saw, copied and consumed the aesthetics like pick-and-mix. New technology compounds this trend. Computer monitors and mobile devices are available with screens of unprecedented resolutions. Ultra-light type and subtle hues, difficult to view on older screens, are more legible on these devices. It would be disingenuous to say that designers have always worked on machines representative of their audience’s circumstances, but the gap has never been as large as it is now. 
We are running the risk of designing VIP lounges where the cost of entry is a Mac with a Retina display. Minimalist expectations Like progressive enhancement in an age of JavaScript, many good and sensible accessibility practices are being overlooked or ignored. We’re driving unilateral design decisions that threaten accessibility. We’ve approached every problem with the same solution, grasping on to the integrity of beauty, focusing on expression over users’ needs and content. Someone once suggested to me that a client’s website should include two states. The first state would be the ideal experience, with low color contrast, light font weights and no differentiation between links and text. It would be the default. The second state would be presented in whatever way was necessary to meet accessibility standards. Users would have to opt out of the default state via a toggle if it wasn’t meeting their needs. A sort of first-class, upper deck cabin equivalent of graceful degradation. That this would divide the user base was irrelevant, as the aesthetics of the brand were absolute. It may seem like an unusual anecdote, but it isn’t uncommon to see this thinking in our industry. Again and again, we place the burden of responsibility to participate in a usable experience on others. We view accessibility and good design as mutually exclusive. Taking for granted what users will tolerate is usually the forte of monopolistic services, but increasingly we apply the same arrogance to our new products and services. Imitation without representation All of us are influenced in one way or another by one another’s work. We are consciously and unconsciously affected by the visual and audible activity around us. This is important and unavoidable. We do not produce work in a vacuum. We respond to technology and culture. We channel language and geography. We absorb the sights and sounds of film, television, news. To mimic and copy is part and parcel of creating something an audience of many can comprehend and respond to. Our clients often look first to their competitors’ products to understand their success. However, problems arise when we focus on style without context; form without function; mimicry as method. Copied and reused without any of the ethos of the original, stripped of deliberate and informed decision-making, the so-called look and feel becomes nothing more than paint on an empty facade. The typographic and color choices so in vogue today with our popular digital products and services have little in common with the brands they are meant to represent. For want of good design, the message was lost The question to ask is: does the interface truly reflect the product? Is it an accurate characterization of the brand and organizational values? Does the delivery of the content match the tone of voice? The answer is: probably not. Because every organization, every app or service, is unique. Each with its own personality, its own values and wonderful quirks. Design is communication. We should do everything in our role as professionals to use design to give voice to the message. Our job is to clearly communicate the benefits of a service and unreservedly allow access to information and content. To do otherwise, by obscuring with fashionable styles and elusive information architecture, does a great disservice to the people who chose to engage with and trust our products. We can achieve hierarchy and visual rhythm without resorting to extreme reduction. 
We can craft a beautiful experience with fine detail and curiosity while meeting fundamental standards of accessibility (and strive to meet many more). Standards of excellence It isn’t always comfortable to step back and objectively question our design choices. We get lost in the flow of our work, using patterns and preferences we’ve tried and tested before. That our decisions often seem like second nature is a gift of experience, but sometimes it prevents us from finding our blind spots. I was first caught out by my own biases a few years ago, when designing an interface for the Bank of England. After deciding on the colors for the typography and interactive elements, I learned that the site had to meet AAA accessibility standards. My choices quickly fell apart. It was eye-opening. I had to start again with restrictions and use size, weight and placement instead to construct the visual hierarchy. Even now, I make mistakes. On a recent project, I used large photographs on an organization’s website to promote their products. Knowing that our team had control over the art direction, I felt confident that we could compose the photographs to work with text overlays. Despite our best effort, the cropped images weren’t always consistent, undermining the text’s legibility. If I had the chance to do it again, I would separate the text and image. So, what practical things can we consider to give our users the experience they deserve? Put guidelines in place Think about your brand values. Write down keywords and use them as a framework when choosing a typeface. Explore colors that convey the organization’s personality and emotional appeal. Define a color palette that is web-ready and meets minimum accessibility standards. Note which colors are suitable for use with text. Only very dark hues of grey are consistently legible so keep them for non-essential text (for example, as placeholders in form inputs). Find which background colors you can safely use with white text, and consider integrating contrast checks into your workflow. Use roman and medium weights for body copy. Reserve lighter weights of a typeface for very large text. Thin fonts are usually the first to break down because of aliasing differences across platforms and screens. Check that the size, leading and length of your type is always legible and readable. Define lower and upper limits. Small text is best left for captions and words in uppercase. Avoid overlaying text on images unless it’s guaranteed to be legible. If it’s necessary to optimize space in the layout, give the text a container. Scrims aren’t always reliable: the text will inevitably overlap a part of the photograph without a contrasting ground. Test your work Review legibility and contrast on different devices. It’s just as important as testing the layout of a responsive website. If you have a local device lab, pay it a visit. Find a computer monitor near a window when the sun is shining. Step outside the studio and try to read your content on a mobile device with different brightness levels. Ask your friends and family what they use at home and at work. It’s one way of making sure your feedback isn’t always coming from a closed loop. Push your limits You define what the user sees. If you’ve inherited brand guidelines, question them. If you don’t agree with the choices, make the case for why they should change. Experiment with size, weight and color to find contrast. Objects with low contrast appear similar to one another and undermine the visual hierarchy. 
Weak relationships between figure and ground diminish visual interest. A balanced level of contrast removes ambiguity and creates focal points. It captures and holds our attention. If you’re lost for inspiration, look to graphic design in print. We have a wealth of history, full of examples that excel in using contrast to establish visual hierarchy. Embrace limitations. Use boundaries as an opportunity to explore possibilities. More than just a facade Designing with standards encourages legibility and helps to define a strong visual hierarchy. Design without exclusion (through neither negligence or intent) gets around discussions of demographics, speaks to a larger audience and makes good business sense. Following the latest trends not only weakens usability but also hinders a cohesive and distinctive brand. Users will make means when they need to, by increasing browser font sizes or enabling system features for accessibility. But we can do our part to take as much of that burden off of the user and ask less of those who need it most. In architecture, it isn’t buildings that mimic what is fashionable that stand the test of time. Nor do we admire buildings that tack on separate, poorly constructed extensions to meet a bare minimum of safety regulations. We admire architecture that offers well-considered, remarkable, usable spaces with universal access. Perhaps we can take inspiration from these spaces. Let’s give our buildings a bold voice and make sure the doors are open to everyone.",2015,Mark Mitchell,markmitchell,2015-12-13T00:00:00+00:00,https://24ways.org/2015/designing-with-contrast/,design 73,How to Make Your Site Look Half-Decent in Half an Hour,"Programmers like me are often intimidated by design – but a little effort can give a huge return on investment. Here are one coder’s tips for making any site quickly look more professional. I am a programmer. I am not a designer. I have a degree in computer science, and I don’t mind Comic Sans. (It looks cheerful. Move on.) But although I am a programmer, I want to make my sites look attractive. This is partly out of vanity, and partly realism. Vanity because I want people to think my work is good, and realism because the research shows that people won’t think a site is credible unless it also looks attractive. For a very long time after I became a programmer, I was scared of design. Design seemed to consist of complicated rules that weren’t written down anywhere, plus an unlearnable sense of taste, possessed only by a black-clad elite. But a little while ago, I decided to do my best to hack what it took to make my own projects look vaguely attractive. And although this doesn’t come close to the effect a professional designer could achieve, gathering these resources for improving a site’s look and feel has been really helpful. If I hadn’t figured out some basic design shortcuts, it’s unlikely that a weekend hack of mine would have ended up on page three of the Daily Mail. And too often now, I see excellent programming projects that don’t reach the audience they deserve, simply because their design doesn’t match their execution. So, if you are a developer, my Christmas present to you is this: my own collection of hacks that, used rightly, can make your personal programming projects look professional, quickly. None are hard to learn, most are free, and they let you focus on writing code. One thing to note about these tips, though. They are a personal, pragmatic compilation. They are suggestions, not a definitive guide. 
You will definitely get better results by working with a professional designer, and by studying design more deeply. If you are a designer, I would love to hear your suggestions for the best tools that I’ve missed, and I’d love to know how we programmers can do things better. With that, on to the tools… 1. Use Bootstrap If you’re not already using Bootstrap, start now. I really think that Bootstrap is one of the most significant technical achievements of the last few years: it democratizes the whole process of web design. Essentially, Bootstrap is a grid system, with a bunch of common elements. So you can lay your site out how you want, drop in simple elements like forms and tables, and get a good-looking, consistent result, without spending hours fiddling with CSS. You just need HTML. Another massive upside is that it makes it easy to make any site responsive, so you don’t have to worry about writing media queries. Go, get Bootstrap and check out the examples. To keep your site lightweight, you can customize your download to include only the elements you want. If you have more time, then Mark Otto’s article on why and how Bootstrap was created at Twitter is well worth a read. 2. Pimp Bootstrap Using Bootstrap is already a significant advance on not using Bootstrap, and massively reduces the tedium of front-end development. But you also run the risk of creating Yet Another Bootstrap Site, or Hack Day Design, as it’s known. If you’re really pressed for time, you could buy a theme from Wrap Bootstrap. These are usually created by professional designers, and will give a polish that we can’t achieve ourselves. Your site won’t be unique, but it will look good quickly. Luckily, it’s pretty easy to make Bootstrap not look too much like Bootstrap – using fonts, CSS effects, background images, colour schemes and so on. Most of the rest of this article covers different ways to achieve this. We are going to customize this Bootstrap example page. This already has some custom CSS in the <head>. We’ll pull it all out, and create a new CSS file, custom.css. Then we add a reference to it in the header. Now we’re ready to start customizing things. 3. Fonts Web fonts are one of the quickest ways to make your site look distinctive, modern, and less Bootstrappy, so we’ll start there. First, we can add some sweet fonts, from Google Web Fonts. The intimidating bit is choosing fonts that look nice together. Luckily, there are plenty of suggestions around the web: we’re going to use one of DesignShack’s suggested free Google Fonts pairings. Our fonts are Corben (for headings) and Nobile (for body copy). Then we add these files to our <head>. …and this to custom.css: h1, h2, h3, h4, h5, h6 { font-family: 'Corben', Georgia, Times, serif; } p, div { font-family: 'Nobile', Helvetica, Arial, sans-serif; } Now our example looks like this. It’s not going to win any design awards, but it’s immediately better: I also recommend the web font services Fontdeck, or Typekit – these have a wider selection of fonts, and are worth the investment if you regularly need to make sites look good. For more font combinations, Just My Type suggests appealing pairings from Typekit. Finally, you can experiment with type pairing ideas at Type Connection. For the design background on pairing fonts, Typekit’s post is worth a read. 4. Textures An instant way to make a site look classy is to use textures. You know the grey, stripy, indefinably elegant background on 24ways.org? That.
If only there was a superb resource listing attractive, free, ready-to-use textures… Oh wait, there’s Atle Mo’s Subtle Patterns. We’re going to use Cream Dust, for an effect that can only be described as subtle. We download the file to a new /img/ directory, then add this to the CSS file: body { background: url(/img/cream_dust.png) repeat 0 0; } Bang: For some design background on patterns, I recommend reading through Smashing Magazine’s guidelines on textures. (TL;DR version: use textures to enhance beauty, and clarify the information architecture of your site; but don’t overdo it, or inadvertently obscure your text.) Still more to do, though. Onwards. 5. Icons Last year’s 24 ways taught us to use icon fonts for our site icons. This is great for the time-pressed coder, because icon fonts don’t just cut down on HTTP requests – they’re a lot quicker to set up than image-based icons, too. Bootstrap ships with an extensive, free for commercial use icon set in the shape of Font Awesome. Its icons are safe for screen readers, and can even be made to work in IE7 if needed (we’re not going to bother here). To start using these icons, just download Font Awesome, and add the /fonts/ directory to your site and the font-awesome.css file into your /css/ directory. Then add a reference to the CSS file in your header: Finally, we’ll add a truck icon to the main action button, as follows. Why a truck? Why not? Sign up today We’ll also tweak our CSS file to stop the icon nudging up against the button text: .jumbotron .btn i { margin-right: 8px; } And this is how it looks: Not the most exciting change ever, but it livens up the page a bit. The licence is CC-BY-3.0, so we also include a mention of Font Awesome and its URL in the source code. If you’d like something a little more distinctive, Shifticons lets you pay a few cents for individual icons, with the bonus that you only have to serve the icons you actually use, which is more efficient. Its icons are also friendly to screen readers, but won’t work in IE7. 6. CSS3 The next thing you could do is add some CSS3 goodness. It can really help the key elements of the site stand out. If you are pressed for time, just adding box-shadow and text-shadow to emphasize headings and standouts can be useful: h1 { text-shadow: 1px 1px 1px #ccc; } .div-that-you-want-to-stand-out { box-shadow: 0 0 1em 1em #ccc; } We have a little more time though, so we’re going to do something more subtle. We’ll add a radial gradient behind the main heading, using an online gradient editor. The output is hefty, but you can see it in the CSS. Note that we also need to add the following to our HTML, for IE9 support: And the effect – I don’t know what a designer would think, but I like the way it makes the heading pop. For a crash course in useful modern CSS effects, I highly recommend CodeSchool’s online course in Functional HTML5 and CSS3. It costs money ($25 a month to subscribe), but it’s worth it for the time you’ll save. As a bonus, you also get access to some excellent JavaScript, Ruby and GitHub courses. (Incidentally, if you find yourself fighting with basic float and display attributes in CSS – and there’s no shame in it, CSS layout is not intuitive – I recommend the CSS Cross-Country course at CodeSchool.) 7. Add a twist We could leave it there, but we’re going to add a background image, and give the site some personality. This is the area of design that I think many programmers find most intimidating. How do we create the graphics and photographs that a designer would use?
The answer is iStockPhoto and its competitors – online image libraries where you can find and pay for images. They won’t be unique, but for our purposes, that’s fine. We’re going to use a Christmassy image. For a twist, we’re going to use Backstretch to make it responsive. We must pay for the image, then download it to our /img/ directory. Then, we set it as our <body>’s background-image, by including a JavaScript file with just the following line: $.backstretch(""/img/winter.jpg""); We also reset the subtle pattern to become the background for our container image. It would look much better transparent, so we can use this technique in GIMP to make it see-through: .container-narrow { background: url(/img/cream_dust_transparent.png) repeat 0 0; } We also fiddle with the padding on body and .container-narrow a bit, and this is the result: (Aside: If this were a real site, I’d want to buy images in multiple sizes and ensure that Backstretch chose the most appropriately sized image for our screen, perhaps using responsive images.) How to find the effects that make a site interesting? I keep a set of bookmarks for interesting JavaScript and CSS effects I might want to use someday, from realistic shadows to animating grids. The JavaScript Weekly newsletter is a great source of ideas. 8. Colour schemes We’re just about getting there – though we’re probably past half an hour now – but that button and that menu still both look awfully Bootstrappy. Real sites, with real designers, have a colour palette, carefully chosen to harmonize and match the brand profile. For our purposes, we’re just going to borrow some colours from the image. We use Gimp’s colour picker tool to identify the hex values of the blue of the snow. Then we can use Color Scheme Designer to find contrasting, but complementary, colours. Finally, we use those colours for our central button. There are lots of tools to help us do this, such as Bootstrap Buttons. The new HTML is quite long, so I won’t paste it all here, but you can find it in the CSS file. We also reset the colour of the pills in the navigation menu, which is a bit easier: .nav-pills > .active > a, .nav-pills > .active > a:hover { background-color: #FF9473; } I’m not sure if the result is great to be honest, but at least we’ve lost those Bootstrap-blue buttons: Another way to do it, if you didn’t have an image to match, would be to borrow an attractive colour scheme. Colourlovers is a community where people create and share ready-made colour palettes. The key thing is to find a palette with an open licence, so you can legitimately use it. Unfortunately, you can’t search for palettes by licence type, but many do have open licences. Here’s a popular palette with a CC-BY-SA licence that allows reuse with attribution. As above, you can use the hex values from the palette in your custom CSS, and bask in the newly colourful results. 9. Read on With the above techniques, you can make a site that is starting to look slightly more professional, pretty quickly. If you have the time to invest, it’s well worth learning some design principles, if only so that design seems less intimidating and more like fun. As part of my design learning, I read a few introductory design books aimed at coders. The best I found was David Kadavy’s Design for Hackers: Reverse-Engineering Beauty, which explains the basic principles behind choosing colours, fonts, typefaces and layout.
In the introduction to his book, David writes: No group stands to gain more from design literacy than hackers do… The one subject that is exceedingly frustrating for hackers to try to learn is design. Hackers know that in order to compete against corporate behemoths with just a few lines of code, they need to have good, clear design, but the resources with which to learn design are simply hard to find. Well said. If you have half a day to invest, rather than half an hour, I recommend getting hold of David’s book. And the journey is over. Perhaps that took slightly more than half an hour, but with practice, using the above techniques can become second nature. What useful tools have I missed? Designers, how would you improve on the above? I would love to know, so please give me your views in the comments.",2012,Anna Powell-Smith,annapowellsmith,2012-12-16T00:00:00+00:00,https://24ways.org/2012/how-to-make-your-site-look-half-decent/,design 74,Should We Be Reactive?,"Evolution Looking at the evolution of the web and the devices we use should help remind us that the times we’re adjusting to are just another step on a journey. These times seem to be telling us that we need to embrace flexibility. Imagine an HTML file containing nothing but text. It’s viewable on any web-capable device and reasonably readable: the notion of the universality of the web was very much a founding principle. Right from the beginning, browser vendors understood that we’d want text to reflow (why wouldn’t we?), so I consider the first websites to have been fluid. As we attempted to exert more control through our designs in the early days of the web, debates about whether we should produce fixed or fluid sites raged. We could create fluid designs using tables, but what we didn’t have then was a wide range of web capable devices or the ability to control this fluidity. The biggest changes occurred when stats showed enough people using a different screen resolution we could cater for. To me, the techniques of responsive web design provide the control we were missing. Combining new approaches to layout and images with media queries empowered us to learn how to embrace the inherent flexibility of the web in ways to suit our work and the devices used by our audience. Perhaps another kind of flexibility might be found in how we use context to affect how we present our content; to consider how we might use the information we can access from people, browsers and devices to provide web experiences – effectively creating sites that react to initial or changing circumstances in the relationship between people and our content. Embracing flexibility So what is context? Put simply, you could think of it as a secondary piece of information that helps clarify the meaning of the first. It helps set a scene or describe circumstances. I think that Cennydd Bowles has summed it up really well through talks he’s given recently, in which he’s arrived at the acronym DETAILS (Device, Environment, Time, Activity, Individual, Location, Social) – I encourage you to keep an eye out for his next book due in the new year where he’ll explore this idea much further. This clarity over what context could mean in terms of what we do on the web is fundamental, directing us towards ways we might use it. 
When you stop to think about it, we’ve been using some basic pieces of this information right from the beginning, like bits of JavaScript or Java applets that serve an appropriate greeting to your site’s visitors, or show their location, or even local weather. But what if we think of this from the beginning of our projects? We should think about our content first. Once we know this and have a direction, perhaps then we can think about what context, or even multiple contexts, might help us to communicate more effectively. The real world There’s always been a disconnect between the real world and the web, which is to be expected. But the world around us is a sea of data; every fundamental building block: people, places, events and things have information waiting to be explored. For sites based around physical objects or locations, this divide is really apparent. We don’t ordinarily take the time to describe in code the properties of a place, or consider whether your relationship to the place in the real world should have any impact on your relationship with a site about it. When I think about local businesses, they have such rich properties to draw on and yet we don’t really explore them in any meaningful way, even through something as simple as opening hours. Now we have data… We’ve long had access to the current time both on server- and client-sides. The use of geolocation is easier than ever, but when we look at the range of information we could glean to help us make some choices, maybe there’s some help on the horizon from projects like the W3C Device APIs Working Group. This might prove useful to help make us aware of network and battery conditions of a device, along with the potential to gain data from other sensors, which could tell us about lighting conditions, ambient noise levels and temperature depending on the capabilities of the device. It may be that our sites have some form of login or access to your profile from another site. Along with data from our devices and browsers, this should give us a sense of how best to talk to our audience in certain situations. We don’t necessarily need to know any personal details, just enough to make decisions about how to present our sites. The reactive web? So why reactive web design? I’m hoping that a name might help us to have a common vocabulary not only about what we mean when we talk about context, but how it could be considered through our projects, right from the early stages. How could this manifest itself? A simple example might be a location-aware panel on your site. Perhaps the space is a little down in your content hierarchy but serves a perfectly valid purpose by default. To visitors outside the country perhaps this works fine, but within your country maybe this panel could be used to communicate more effectively. Further still, if we knew the visitors were in the vicinity, we could talk to them more directly. What if both time and location were relevant? This space could work as before but you could consider how time could intersect with your local audience. If you know they’re local and it’s a certain time of day, you could communicate directly with them. This example isn’t beyond what banner ads often do and uses easily accessible information. There are more unusual combinations we may be able to find, such as movement and presence. Perhaps a site that tells a story, which changes design and content based on whether you’re moving, how long you’ve been on the site and how far you’ve travelled. 
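To make the simplest of these ideas concrete – the location-aware panel – here is a minimal client-side sketch. The .local-panel selector, the venue co-ordinates and the distance threshold are all illustrative assumptions rather than anything from a real project; they are only there to show the shape of the approach.

// Enhance a low-priority panel when the visitor appears to be nearby.
// The selector, co-ordinates and threshold below are illustrative only.
if ('geolocation' in navigator && 'querySelector' in document) {
  navigator.geolocation.getCurrentPosition(function (position) {
    var panel = document.querySelector('.local-panel');
    if (!panel) { return; }
    var venue = { lat: 51.5072, lon: -0.1276 }; // assumed venue location
    var dLat = position.coords.latitude - venue.lat;
    var dLon = position.coords.longitude - venue.lon;
    // Very rough distance check, just enough to decide whether the visitor is nearby
    if (Math.sqrt(dLat * dLat + dLon * dLon) < 0.5) {
      panel.className += ' is-local';
    }
  });
}

The default panel still works for everyone; the extra signal only ever adds to it.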
This isn’t what we typically expect from websites, but we should bear in mind that what websites are now will not be what they become in the future. You could do much of this contextual presentation through native apps, of course. The Silent History, an app novel written and designed for iPad and iPhone, uses an exploration element, providing ""hundreds of location-based stories across the U.S. and around the world. These can be read only when your device’s GPS matches the coordinates of the specified location."" But considering the universality of the web, we could redefine what web-based experiences should be like. Not all methods would work well on the web, but that’s a decision that has to be made for a specific project. By thinking more broadly about any web-capable device, we can use what we know to provide relevant experiences for our site’s visitors. We need to be sure what we mean by relevant, of course! Reality bites While there are incredible possibilities, from a simple panel on a site to something bordering on living sites that evolve and change with our circumstances, we need to act with a degree of pragmatism and understand how much of what we could do is based on assumptions and the bias of our own experiences. We could go wild with changing the way our content is presented based on contextual information, but if we’re not careful, what we end up with will confuse people and could provide a very fractured experience. As much as possible we need to think more ethnographically, observe and question people in the situations we think may be relevant, and test our assumptions as early as we can. Even on small projects, there may be ways we can validate our assumptions and test with our audience. The key to applying contextual content or cues is not to break the experience between contextual views (just as, these days, we wouldn’t break it by hiding content in a mobile view). It’s another instance of progressive enhancement – as we know certain pieces of information, we can enhance the experience. Also, if you do change content, how do you avoid creating a more cumbersome experience for your visitors? It’s all about communication Content is at the core of what we do, but if we consider context we need to understand the impact on that. The effect could be as subtle as an altered hierarchy, involve swapping out panels of content, or in extreme instances perhaps all of your content might change. In some ways, this extends the notion of adaptive content that Karen McGrane has been talking about, to how we write and store the content we create. Thinking about the impact of context may require us to re-evaluate our site structure, too. Whatever we decide, we have to be clear what will happen and manage the expectations of our users. The bottom line What I’m proposing isn’t that we go crazy and end up with a confused, disjointed set of experiences across the web. What I hope is that starting right from the beginning of a project, we think about what context is and could be, and see what relevance it might have to what we’re trying to communicate. This strategic process leads us to think about design. We are slowly adapting to what it means to be flexible through responsive and adaptive processes. What does thinking about contextual states mean to us (or designing for state in general)? Does this highlight again how difficult it’ll be for our tools to keep up with our processes and output? In terms of code, the vast majority of this data comes from the client-side through JavaScript.
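As a rough illustration of that client-side reliance, the kind of feature detection and conditional loading involved might look something like this. The file name context-panel.js is hypothetical, used only to show the pattern:

// Only fetch the context-aware enhancements if the browser can supply
// the data they depend on.
if ('geolocation' in navigator && 'querySelector' in document) {
  var script = document.createElement('script');
  script.src = '/js/context-panel.js'; // hypothetical enhancement script
  script.async = true;
  document.getElementsByTagName('head')[0].appendChild(script);
}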
While we can progressively enhance, this could lead to a lot of code bloat through feature or capability detection, and potentially a lot of conditional loading of scripts. It’s a real shame we don’t get much we can rely on from the server-side – we know how unreliable user agents are! We need to understand why we’d do this. Are we trying to communicate well and be useful, or doing it to show off? Underneath it all, what do we base our decisions on? Do we have actual insight or are we proceeding from our assumptions and the bias of our own experiences? Scott Jenson summed it up best for me: (to paraphrase) the pain we put people through has to be greatly outweighed by the value we offer. I see that this could be another potential step in our evolution on the web; continuing this exploration of the flexibility the web allows us. It’s amazing we can do such incredible things from what is essentially a set of disparate, linked documents.",2012,Dan Donald,dandonald,2012-12-09T00:00:00+00:00,https://24ways.org/2012/should-we-be-reactive/,design 77,Colour Accessibility,"Here’s a quote from Josef Albers: In visual perception a colour is almost never seen as it really is[…] This fact makes colour the most relative medium in art.Josef Albers, Interaction of Color, 1963 Albers was a German abstract painter and teacher, and published a very famous course on colour theory in 1963. Colour is very relative — not just in the way that it appears differently across different devices due to screen quality and colour management, but it can also be seen differently by different people — something we really need to be more mindful of when designing. What is colour blindness? Colour blindness very rarely means that you can’t see any colour at all, or that people see things in greyscale. It’s actually a decreased ability to see colour, or a decreased ability to tell colours apart from one another. How does it happen? Inside the typical human retina, there are two types of receptor cells — rods and cones. Rods are the cells that allow us to see dark and light, and shape and movement. Cones are the cells that allow us to perceive colour. There are three types of cones, each responsible for absorbing blue, red, and green wavelengths in the spectrum. Problems with colour vision occur when one or more of these types of cones are defective or absent entirely, and these problems can either be inherited through genetics, or acquired through trauma, exposure to ultraviolet light, degeneration with age, an effect of diabetes, or other factors. Colour blindness is a sex-linked trait and it’s much more common in men than in women. The most common type of colour blindness is called deuteranomaly which occurs in 7% of males, but only 0.5% of females. That’s a pretty significant portion of the population if you really stop and think about it — we can’t ignore this demographic. What does it look like? People with the most common types of colour blindness, like protanopia and deuteranopia, have difficulty discriminating between red and green hues. There are also forms of colour blindness like tritanopia, which affects perception of blue and yellow hues. Below, you can see what a colour wheel might look like to these different people. What can we do? Here are some things you can do to make your websites and apps more accessible to people with all types of colour blindness. 
Include colour names and show examples One of the most common annoyances I’ve heard from people who are colour-blind is that they often have difficulty purchasing clothing and they will sometimes need to ask another person for a second opinion on what the colour of the clothing might actually be. While it’s easier to shop online than in a physical store, there are still accessibility issues to consider on shopping websites. Let’s say you’ve got a website that sells T-shirts. If you only show a photo of the shirt, it may be impossible for a person to tell what colour the shirt really is. For clarification, be sure to reference the name of the colour in the description of the product. United Pixelworkers does a great job of following this rule. The St. John’s T-shirt has a quirky palette inspired by the unofficial pink, white and green Newfoundland flag, and I can imagine many people not liking it. Another common problem occurs when a colour filter has been added to a product search. Here’s an example from a clothing website with unlabelled colour swatches, and how that might look to someone with deuteranopia-type colour blindness. The colour search filter below, from the H&M website, is much better since it uses names instead. At first glance, Urban Outfitters also uses unlabelled colour swatches on product pages (below), but on closer inspection, the colour name is displayed on hover. This isn’t an ideal solution, because although it’ll work on a desktop browser, it won’t work on a touchscreen device where hovering isn’t an option. Using overly fancy colour names, like the ones you might find labelling high-end interior paint can be just as confusing as not using a colour name at all. Names like grape instead of purple don’t really give the viewer any useful information about what the colour actually is on a colour wheel. Is grape supposed to be purple, or could it refer to red grapes or even green? Stick with hue names as much as possible. Avoid colour-specific instructions When designing forms, avoid labelling required fields only with coloured text. It’s safer to use a symbol cue like the asterisk which is colour-independent. A similar example would be directing a user to click a green button to purchase a product. Label your buttons clearly and reference them in the site copy by function, not colour, to avoid confusion. Don’t rely on colour coding Designing accessible maps and infographics can be much more challenging. Don’t rely on colour coding alone — try to use a combination of colour and texture or pattern, along with precise labels, and reflect this in the key or legend. Combine a blue background with a crosshatched pattern, or a pink background with a stippled dot — your users will always have two pieces of information to work with. The map of the London subway system is an iconic image not just in London, but around the world. Unfortunately, it contains some colours that are indistinguishable from each other to a person with a vision problem. This is true not only for the London underground, but also for any other wayfinding system that relies on colour coding as the only key in a legend. There are printable versions of the map available online in black and white, using patterns and shades of black and grey that are distinguishable, but the point is that there would be no need for such a map if it were designed with accessibility in mind from the beginning. 
And, if you’re a person who has a physical disability as well as a vision problem, the “Step-Free” guide map which shows stations is based on the original coloured map. Provide alternatives and customization While it’s best to consider these issues and design your app to be accessible by default, sometimes this might not be possible. Providing alternative styles or allowing users to edit their own colours is a feature to keep in mind. The developers of the game Faster Than Light created an alternate colour-blind mode and asked for public feedback to make sure that it passed the test. Not much needed to be done, but you can see they added stripes to the red zones and changed some outlines to blue. iChat is also a good example. Although by default it uses coloured bubbles to indicate a user’s status (available for chat, away or idle, or busy), included in the preferences is a “User Shapes to Indicate Status” option, which changes the shape of the standard circles to green circles, yellow triangles and red squares. Pay attention to contrast Colours that are similar in value but different in hue may be easy to distinguish between for a user with good vision, but a person who suffers from colour blindness may not be able to tell them apart at all. Proofing your work in greyscale is a quick way to tell if there’s enough contrast between the most important information in your design. Check with a simulator There are many tools out there for simulating different types of colour blindness, and it’s worth checking your design to catch any potential problems up front. One is called Sim Daltonism and it’s available for Mac OS X. It’ll show a pop-up preview next to your cursor and you can choose which type of colour blindness you want to test from a drop-down menu. You can also proof for the two most common types of colour blindness right in Photoshop or Illustrator (CS4 and later) while you’re designing. The colour contrast check tool from designer and developer Jonathan Snook gives you the option to enter a colour code for a background, and a colour code for text, and it’ll tell you if the colour contrast ratio meets the Web Content Accessibility Guidelines 2.0. You can use the built-in sliders to adjust your colours until they meet the compliant contrast ratios. This is a great tool to test your palette before going live. For live websites, you can use the accessibility tool called WAVE, which also has a contrast checker. It’s important to keep in mind, though, that while WAVE can identify contrast errors in text, other things can slip through, so a site that passes the test does not automatically mean it’s accessible in reality. For example, the contrast checker here doesn’t notice that our red link in the introduction isn’t underlined, and therefore could blend into the surrounding paragraph text. I know that once I started getting into the habit of checking my work in a simulator, I became more mindful of any potential problem areas and it was easier to avoid them up front. It’s also made me question everything I see around me and it sends red flags off in my head if I think it’s a serious colour blindness fail. 
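If you are curious what those contrast tools are actually computing, the WCAG 2.0 contrast ratio takes only a few lines to work out. This is a small sketch following the formula in the guidelines; the two example colours at the end are arbitrary:

// Relative luminance and contrast ratio as defined by WCAG 2.0
function luminance(hex) {
  var channels = [0, 1, 2].map(function (i) {
    var c = parseInt(hex.slice(1 + i * 2, 3 + i * 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

function contrastRatio(foreground, background) {
  var lighter = Math.max(luminance(foreground), luminance(background));
  var darker = Math.min(luminance(foreground), luminance(background));
  return (lighter + 0.05) / (darker + 0.05);
}

// Dark grey text on white comes out at roughly 12.6:1, comfortably above
// the 4.5:1 minimum that WCAG 2.0 asks for in normal body text.
contrastRatio('#333333', '#ffffff');

A quick check like this is no substitute for the full tools, but it helps demystify the numbers they report.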
Understanding that colour is relative in the planning stages and following these tips will help us make more accessible design for all.",2012,Geri Coady,gericoady,2012-12-04T00:00:00+00:00,https://24ways.org/2012/colour-accessibility/,design 79,Responsive Images: What We Thought We Needed,"If you were to read a web designer’s Christmas wish list, it would likely include a solution for displaying images responsively. For those concerned about users downloading unnecessary image data, or serving images that look blurry on high resolution displays, finding a solution has become a frustrating quest. After much experimentation with complex and sometimes devilish hacks, consensus is forming around defining new standards that could solve this problem. Two approaches have emerged. The picture element markup pattern was proposed by Mat Marquis and is now being developed by the Responsive Images Community Group. By providing a means of declaring multiple sources, authors could use media queries to control which version of an image is displayed and under what conditions:
<!-- Reconstructed example of the proposed picture pattern; the filenames and breakpoints are illustrative -->
<picture>
  <source media='(min-width: 60em)' src='large.jpg'>
  <source media='(min-width: 40em)' src='medium.jpg'>
  <source src='small.jpg'>
  <img src='small.jpg' alt='Accessible text'>
</picture>
A second proposal put forward by Apple, the srcset attribute, uses a more concise syntax intended for use with the img element, although it could be compatible with the picture element too. This would allow authors to provide a set of images, but with the decision on which to use left to the browser: <!-- illustrative srcset markup; the filenames and descriptors are examples --> <img src='fallback.jpg' srcset='small.jpg 640w 1x, small-hd.jpg 640w 2x, large.jpg 1x, large-hd.jpg 2x' alt='Accessible text'> Enter Scrooge Men’s courses will foreshadow certain ends, to which, if persevered in, they must lead. Ebenezer Scrooge Given the complexity of this issue, there’s a heated debate about which is the best option. Yet code belies a certain truth. Given that both feature verbose and opaque syntax, I’m not sure either should find its way into the browser – especially as alternative approaches have yet to be fully explored. So, as if to dampen the festive cheer, here are five reasons why I believe both proposals are largely redundant. 1. We need better formats, not more markup As we move away from designs defined with fixed pixel values, bitmap images look increasingly unsuitable. While simple images and iconography can use scalable vector formats like SVG, for detailed photographic imagery, raster formats like GIF, PNG and JPEG remain the only suitable option. There is scope within current formats to account for varying bandwidth but this requires cooperation from browser vendors. Newer formats like JPEG2000 and WebP generate higher quality images with smaller file sizes, but aren’t widely supported. While it’s tempting to try to solve this issue by inventing new markup, the crux of it remains at the file level. Daan Jobsis’s experimentation with image compression strengthens this argument. He discovered that by increasing the dimensions of a JPEG image while simultaneously reducing its quality, smaller files could be produced, with the resulting image looking just as good on both standard and high-resolution displays. This may be a hack in lieu of a more permanent solution, but it’s applied in the right place. Easy to accomplish with existing tools and without compatibility issues, it has few downsides. Further experimentation in this area should be encouraged, with standardisation efforts more helpful if focused on developing new image formats or, preferably, extending existing ones. 2. Art direction doesn’t belong in markup A desired benefit of the picture markup pattern is to allow for greater art direction. For example, rather than scaling down images on smaller displays to the point that their content is hard to discern, we could present closer crops instead. This can be achieved with CSS of course, although with a download penalty for those parts of an image not shown. This point may be negligible, however, since in the context of adaptable layouts, these hidden areas may end up being revealed anyway. Art direction concerns design, not content. If we wish to maintain a separation of concerns, including presentation within our markup seems misguided. 3. The size of a display has little relation to the size of an image By using media queries, the picture element allows authors to choose which characteristics of the screen or viewport to query for different images to be displayed. In developing sites at Clearleft, we have noticed that the viewport is essentially arbitrary, with the size of an image’s containing element more important. For example, look at how this grid of images may adapt at different viewport widths. As we build more modular systems, components need to be adaptable in and of themselves.
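One way to see the difference is to query the component rather than the screen. The sketch below measures an image’s container and chooses a source to suit it; the class name, breakpoints and filenames are all assumptions made for the sake of illustration:

// Choose an image based on how much room its container actually has,
// rather than on the width of the viewport. Values are illustrative.
var blocks = document.querySelectorAll('.media-block');
Array.prototype.forEach.call(blocks, function (block) {
  var width = block.offsetWidth;
  var img = block.querySelector('img');
  if (!img) { return; }
  if (width > 600) {
    img.src = '/img/photo-large.jpg';
  } else if (width > 300) {
    img.src = '/img/photo-medium.jpg';
  } else {
    img.src = '/img/photo-small.jpg';
  }
});

Swapping src after the fact isn’t free, since the browser may fetch an image twice, which only underlines how much nicer a genuinely container-aware solution would be.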
There is a case to be made for developing more contextual methods of querying, rather than those based on attributes of the display. 4. We haven’t lived with the problem long enough A key strength of the web is that the underlying platform can be continually iterated. This can also be problematic if snap judgements are made about what constitutes an improvement. The early history of the web is littered with such examples, be it the perceived need for blinking text or inline typographic styling. To build a platform for the future, additions to it should be carefully considered. And if we want more consistent support across browsers, burdening vendors with an ever increasing list of features seems counterproductive. Only once the need for a new feature is sufficiently proven should we look to standardise it. Before we could declare hover effects, rounded corners and typographic styling in CSS, we used JavaScript as a polyfill. Sure, doing so was painful, but use cases were fully explored, and the CSS specification better reflected the needs of authors. 5. Images and the web aesthetic The srcset proposal has emerged from a company that markets its phones as being able to browse the real – yet squashed down, tapped and zoomable – web. Perhaps Apple should make its own website responsive before suggesting how the rest of us should do so. Conversely, while the picture proposal has the backing of a few respected developers and designers, it was born out of the work Mat Marquis and Filament Group did for the Boston Globe. As the first large-scale responsive design, this was a landmark project that ignited the responsive web design movement and proved its worth. But it was the first. Its design shares a vernacular with that of contemporary newspaper websites, with a columnar, image-laden and densely packed layout. Compared to more recent examples – Quartz, The Next Web and the New York Times Skimmer – it feels out of step with the future direction of news sites. In seeking out a truer aesthetic for the web in which software interfaces have greater influence, we might discover that the need for responsive images isn’t as great as originally thought. Building for the future With responsive design, we’ve accepted the idea that a fully fluid layout, rather than a set of fixed layouts, is best suited to the web’s unpredictable nature. Current responsive image proposals are antithetical to this approach. We need solutions that lack complexity, are device-agnostic and work within existing workflows. Any proposal that requires different versions of the same image to be created is likely to have to acquiesce under the pressure of reality. While it’s easy to get distracted by the size and quality of an image, and how we might choose to serve it, often the simplest solution is not to include it at all. After years of gluttonous design practice, in which fast connections and expansive display sizes were an accepted norm, we have got used to filling pages with needless images and countless items of page furniture. To design more adaptable experiences, the presence of every element needs to be questioned, for its existence requires additional data to be downloaded or further complexity within a design system. Conditional loading techniques mean that the inclusion of images is no longer a binary choice; images can instead appear in a progressively enhanced manner. So here is my proposal.
Instead of spending the next year worrying about responsive images, let’s embrace the constraints of the medium, and seek out new solutions that can work within them.",2012,Paul Lloyd,paulrobertlloyd,2012-12-11T00:00:00+00:00,https://24ways.org/2012/responsive-images-what-we-thought-we-needed/,code 80,HTML5 Video Bumpers,"Video is a bigger part of the web experience than ever before. With native browser support for HTML5 video elements freeing us from the tyranny of plugins, and the availability of faster internet connections to the workplace, home and mobile networks, it’s now pretty straightforward to publish video in a way that can be consumed in all sorts of ways on all sorts of different web devices. I recently worked on a project where the client had shot some dedicated video shorts to publish on their site. They also had some five-second motion graphics produced to top and tail the videos with context and branding. This pretty common requirement is a great idea on the web, where a user might land at your video having followed a link and be viewing a page without much context. Known as bumpers, these short introduction clips help brand a video and make it look a lot more professional. Adding bumpers to a video The simplest way to add bumpers to a video would be to edit them on to the start and end of the video file itself. Cooking the bumpers into the video file is easy, but should you ever want to update them it can become a real headache. If the branding needs updating, for example, you’d need to re-edit and re-encode all your videos. Not a fun task. What if the bumpers could be added dynamically? That would enable you to use the same bumper for multiple videos (decreasing download time for users who might watch more than one) and to update the bumpers whenever you wanted. You could change them seasonally, update them for special promotions, run different advertising slots, perform multivariate testing, or even target different bumpers to different users. The trade-off, of course, is that if you dynamically add your bumpers, there’s a chance that a user in a given circumstance might not see the bumper. For example, if the main video feature was uploaded to YouTube, you’d have no way to control the playback. As always, you need to weigh up the pros and cons and make your choice. HTML5 bumpers If you wanted to dynamically add bumpers to your HTML5 video, how would you go about it? That was the question I found myself needing to answer for this particular client project. My initial thought was to treat it just like an image slideshow. If I were building a slideshow that moved between images, I’d use CSS absolute positioning with z-index to stack the images up on top of each other in a pile, with the first image on top. To transition to the second image, I’d use JavaScript to fade the top image out, revealing the second image beneath it. Now that video is just a native object in the DOM, just like an image, why not do the same? Stack the videos up with the opening bumper on top, listen for the video’s onended event, and fade it out to reveal the main feature behind. Good idea, right? Wrong Remember that this is the web. It’s never going to be that easy. The problem here is that many non-desktop devices use native, dedicated video players. Think about watching a video on a mobile phone – when you play the video, the phone often goes full-screen in its native player, leaving the web page behind. 
There’s no opportunity to fade or switch z-index, as the video isn’t being viewed in the page. Your page is left powerless. Powerless! So what can we do? What can we control? Those of us with particularly long memories might recall a time before CSS, when we’d have to use JavaScript to perform image rollovers. As CSS background images weren’t a practical reality, we would use lots of img elements, and perform a rollover by modifying the src attribute of the image. Turns out, this old trick of modifying the source can help us out with video, too. In most cases, modifying the src attribute of a