rowid,title,contents,year,author,author_slug,published,url,topic
290,Creating a Weekly Research Cadence,"Working on a product team, it’s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development.
As my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers—always busy and often difficult to reach. Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development.
I wanted to find a way for our team to test ideas and validate assumptions sooner and more often—but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate.
Setting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred—pushing us out of our comfort zone into a process of more rapid experimentation.
Best of all, frequent conversations with users helped us become more customer-focused. After just a few weeks of a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like “Remember what Jenna said last week, about not being able to customize her lists?” would pop up as frequent reference points to guide our decisions. As discussions became less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions.
Establishing an efficient recruitment process
The key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process—along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users.
The process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. Almost any team could use this as a starting point and adapt it to their own needs.
Pick a dedicated time each week for research
In order to make research a priority, we started by choosing a time each week when everyone on the product team was available. Between stand-ups, grooming sessions, and roadmap reviews, it wasn’t easy to do! Nevertheless, it’s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn’t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team.
I blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours.
Divide your time into several research slots
After my weekly cadence was set, I divided the time into four 20-minute time slots. Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users’ busy schedules. Depending on your work, you may need to schedule longer sessions—but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own.
I used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall.
Invite users to sign up for a time that’s convenient for them
With a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product, or occasionally promoting the opportunity to get a sneak peek at new features we’re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule.
Setting up appointments the first go around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block.
It may take a bit of experimentation to fine-tune your process, but it’s worth the effort to get it right. (The worst thing that’s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) I can now fill most of an afternoon with back-to-back user research sessions by sending just one or two emails each week, increasing our research pace while leaving plenty of time to focus on discovery and design.
Getting the most out of your research sessions
As you get comfortable with the rhythm of talking to users each week, you’ll find more and more ways to get value out of your conversations. At first, you may prefer to just show work in progress—such as mockups or a simple prototype—and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features—either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that’s the best way to validate the assumptions your team is working from.
Whatever you do, plan some time a day or two later to come back together and review what you’ve learned each week. Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation.
Over time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don’t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you’re ready.",2016,Wren Lanier,wrenlanier,2016-12-02T00:00:00+00:00,https://24ways.org/2016/creating-a-weekly-research-cadence/,ux
111,Geometric Background Patterns,"When the design is finished and you’re about to start the coding process, you have to prepare your graphics. If you’re working with a pattern background you need to export only the repeating fragment. It can be a bit tricky to isolate a fragment to achieve a seamless pattern background. For geometric patterns there is a method I always follow and that I want to share with you. Take for example a perfect 45° diagonal line pattern.
How do you define this pattern fragment so it will be rendered seamlessly?
Here is the method I usually follow to avoid a mismatch. First, zoom in so you see enough detail and you can distinguish the pixels. Select the Rectangular Marquee Selection tool and start your selection at the intersection of 2 different colors of a diagonal line. Hold down the Shift key while dragging so you drag a perfect square.
Release the mouse when you reach the exact same intersection (as your starting point) at the top right.
Copy this fragment (using Copy Merged: Cmd/Ctrl + Shift + C) and paste the fragment in a new layer. Give this layer the name ‘pattern’. Now hold down the Command Key (Control Key on Windows) and click on the ‘pattern’ layer in the Layers Palette to select the fragment. Now go to Edit > Define Pattern, enter a name for your pattern and click OK. Test your pattern in a new document. Create a new document of 600px by 400px, hit Cmd/Ctrl + A and go to Edit > Fill… and choose your pattern. If the result is OK, you have created a perfect pattern fragment.
Below you see this pattern enlarged. The guides show the boundaries of the pattern fragment and the red pixels are the reference points. The red pixels at the top right, bottom right and bottom left should match the red pixel at the top left.
This technique should work for every geometric pattern. Some patterns are easier than others, but this, and the Photoshop pattern fill test, has always been my guideline.
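Once the fragment is exported for the web, putting it to use takes very little CSS. Here is a minimal sketch, assuming the fragment was saved as diagonal-pattern.png (the file name and selector are placeholders for your own exported image and markup):
body {
background: #f2f2f2 url(images/diagonal-pattern.png) repeat 0 0;
}
Because the fragment tiles seamlessly in both directions, the default repeat fills the whole element; repeat-x or repeat-y can be used instead for patterns that should only tile in one direction.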
Other geometric pattern examples
Example 1
Not all geometric pattern fragments are squares. Some patterns look easy at first sight, because they look very repetitive, but they can be a bit tricky.
Zoomed in pattern fragment with point of reference shown:
Example 2
Some patterns have a clear repeating point that can guide you, such as the small blue circle of this pattern, as you can see from this zoomed in screenshot:
Zoomed in pattern fragment with point of reference shown:
Example 3
The different diagonal colors make it a bit more tricky to extract the correct pattern fragment.
The orange dot, which is the starting point of the selection, is captured a few times inside the fragment selection:",2008,Veerle Pieters,veerlepieters,2008-12-02T00:00:00+00:00,https://24ways.org/2008/geometric-background-patterns/,design
131,Random Lines Made With Mesh,"I know that Adobe Illustrator can be a bit daunting for people who aren’t really advanced users of the program, but you would be amazed by how easily you can create cool effects or backgrounds. In this short tutorial I show you how to create a cool-looking background in only five steps.
Step 1 – Create Lines
Create lines using random widths and harmonious, suitable colors. If you get stuck on finding the right colors, check out Adobe’s Kuler and start experimenting.
Step 2 – Convert Strokes to Fills
Select all lines and convert them to fills. Go to the Object menu, select Path > Outline Stroke. Select the Rectangle tool and draw one big rectangle on top of the lines. Give the rectangle a suitable color. With the rectangle still selected, go to the Object menu, select Arrange > Send to Back.
Step 3 – Convert to Mesh
Select all objects by pressing Cmd + A (Mac) or Ctrl + A (Windows). Go to the Object menu and select the Envelope Distort > Make with Mesh option. Enter 2 rows and 2 columns. Check the preview box to see what happens and click the OK button.
Step 4 – Play Around with The Mesh Points
Play around with the points of the mesh using the Direct Selection tool (the white arrow in the Toolbox). Click on the top right point of the mesh. Once you start to drag, hold down the Shift key and move the point upwards.
Now start dragging the bezier handles on the mesh to achieve the effect as shown in the above picture. Of course you can try out all kinds of different effects here.
The Final Result
This is an example of how the final result can look. You can try out all kinds of different shapes by dragging the handles of the mesh points. This is just one of the many results you can get. So next time you haven’t got inspiration for the background of a header, a banner or whatever, just experiment with a few basic shapes such as lines and try out the ‘Envelope Distort’ > ‘Make with Mesh’ option in Illustrator; you’ll be amazed by the unexpected creative results.",2006,Veerle Pieters,veerlepieters,2006-12-08T00:00:00+00:00,https://24ways.org/2006/random-lines-made-with-mesh/,design
232,Optimize Your Web Design Workflow,"I’m not sure about you, but I still favour using Photoshop to create my designs for the web. I agree that this application, even with its never-ending feature set, is not the perfect environment to design websites in. The ideal application doesn’t exist yet, however, so until it does it’s maybe not such a bad idea to investigate ways to optimize our workflow.
Why use Photoshop?
It will probably not come as a surprise if I say that Photoshop and Illustrator are the applications that I know best and feel most comfortable and creative in. Some people prefer Fireworks for web design. Even though I understand people’s motivations, I still prefer Photoshop personally. On the occasions that I gave Fireworks a try, I ended up just using the application to export my images as slices, or to prepare a dummy for the client. For some reason, I’ve never been able to find my way in that app. There were always certain things missing that could only be done in either Photoshop or Illustrator, which bothered me.
Why not start in the browser?
These days, with CSS3 styling emerging, there are people who find it more efficient to design in the browser. I agree that at a certain point, once the basic design is all set and defined, you can jump right into the code and go from there. But the actual creative part, at least for me, needs to be done in an application such as Photoshop.
As a designer I need to be able to create and experiment with shapes on the fly, draw things, move them around, change colours, gradients, effects, and so on. I can’t see me doing this with code. I’m sure if I switch to markup too quickly, I might end up with a rather boxy and less interesting design. Once I start playing with markup, I leave my typical ‘design zone’. My brain starts thinking differently – more rational and practical, if you know what I mean; I start to structure and analyse how to mark up my design in the most efficient semantic way. When I design, I tend to let that go for a bit. I think more freely and not so much about the limitations, as it might hinder my creativity. Now that you know my motivations to stick with Photoshop for the time being, let’s see how we can optimize this beast.
Optimize your Photoshop workspace
In Photoshop CS5 you have a few default workspace options to choose from which can be found at the top right in the Application Bar (Window > Application Bar).
You can set up your panels and palettes the way you want, starting from the ‘Design’ workspace option, and save this workspace for future web work. Here is how I have set up things for when I work on a website design:
I have the layers palette open, and I keep the other palettes collapsed. Sometimes, when space permits, I open them all. For designers who work both on print and web, I think it’s worthwhile to save a workspace for both, or for when you’re doing photo retouching.
Set up a grid
When you work a lot with Shape Layers like I do, it’s really helpful to enable the Grid (View > Show > Grid) in combination with Snap to Grid (View > Snap To > Grid). This way, your vector-based work will be pixel-sharp, as it will always snap to the grid, and so you don’t end up with blurry borders.
To set up your preferred grid, go to Preferences > Guides, Grids and Slices. A good setting is to use ‘Gridline Every 10 pixels’ and ‘Subdivision 10’. You can switch it on and off at any time using the shortcut Cmd/Ctrl + ’.
It might also help to turn on Smart Guides (View > Show > Smart Guides).
Another important tip for making sure your Shape Layer boxes and other shapes are perfectly aligned to the pixel grid when you draw them is to enable Snap to Pixels. This option can be enabled in the Application bar in the Geometry options dropdown menu when you select one of the shape tools from the toolbox.
Use Shape Layers
To keep your design as flexible as possible, it’s a good thing to use Shape Layers wherever you can as they are scalable. I use them when I design for the iPhone. All my icons, buttons, backgrounds, illustrative graphics – they are all either Smart Objects placed from Illustrator, or Shape Layers. This way, the design is scalable for the retina display.
Use Smart Objects
Among the things I like a lot in Photoshop are Smart Objects. Smart Objects preserve an image’s source content with all its original characteristics, enabling you to perform non-destructive editing to the layer. For me, this is the ideal way of making my design flexible.
For example, a lot of elements are created in Illustrator and are purely vector-based. Placing these elements in Photoshop as Smart Objects (via copy and paste, or dragging from Illustrator into Photoshop) will keep them vector-based and scalable at all times without loss of quality.
Another way you could use Smart Objects is whenever you have repeating elements; for example, if you have a stream or list of repeating items. You could, for instance, create one, two or three different items (for the sake of randomness), make each one a Smart Object, and repeat them to create the list. Then, when you have to update, you need only change the Smart Object, and the update will be automatically applied in all its linked instances.
Turning photos into Smart Objects before you resize them is also worth considering – you never know when you’ll need that same photo just a bit bigger. It keeps things more flexible, as you leave room to resize the image at a later stage. I use this in combination with the Smart Filters a lot, as it gives me such great flexibility.
I usually use Smart Objects as well for the main sections of a web page, which are repeated across different pages of a site. So, for elements such as the header, footer and sidebar, it can be handy for bigger projects that are constantly evolving, where you have to create a lot of different pages in Photoshop.
You could save a template page that has the main sections set up as Smart Objects, always in their latest version. Each time you need to create a new page, you can start from that template file. If you need to update an existing page because the footer (or sidebar, or header) has been updated, you can drag the updated Smart Object into this page. Although, I do wish Photoshop made it possible to have Smart Objects live as separate files, which are then linked to my different pages. Then, whenever I update the Smart Object, the pages would be automatically updated the next time I open the file. This is how linked files work in InDesign and Illustrator when you place an external image.
Use Layer Comps
In some situations, using Layer Comps can come in handy. I try to use them when the design consists of different states; for example, if there are hidden and shown states of certain content, such as when content is shown after clicking a certain button. It can be useful to create a Layer Comp for each state. So, when you switch between the two Layer Comps, you’re switching between the two states.
It’s OK to move or hide content in each of these states, as well as apply different layer styles. I find this particularly useful when I need to save separate JPEG versions of each state to show to the client, instead of going over all the eye icons in the layers palette to turn the layers’ visibility on or off.
Create a set of custom colour swatches
I tend to use a distinct colour Swatches palette for each project I work on, by saving a separate Swatches palette in the project’s folder (as an .ase file). You can do this through the palette’s dropdown menu, choosing Save Swatches for Exchange.
Selecting this option gives you the flexibility to load this palette in other Adobe applications like Illustrator, InDesign or Fireworks. This way, you have the colours of any particular project at hand. I name each colour, using the hexadecimal values.
Loading, saving or changing the view of the Swatches palette can be done via the palette’s dropdown menu. My preferred view is ‘Small List’ so I can see the hexadecimal values or other info I have added in the description.
I do wish Photoshop had the option of loading several different Swatches palettes, so I could have two or more of them open at the same time, but each as a separate palette. This would be handy whenever I switch to another project, as I’m usually working on more than one project in a day. At the moment, you can only add a set of colours to the palette that is already open, which is frustrating and inefficient if you need to update the palette of a project separately.
Create a set of custom Styles
Just like saving a Swatches palette, I also always save the styles I apply in the Styles palette as a separate Styles file in the project’s folder when I work on a website design or design for iPhone/iPad. During the design process, I can save it each time styles are added. Again, though, it would be great if we could have different Styles palettes open at the same time.
Use a scratch file
What I also find particularly timesaving, when working on a large project, is using some kind of scratch file. By that, I mean a file that has elements in place that you reuse a lot in the general design. Think of buttons, icons and so on, that you need in every page or screen design. This is great for both web design work and iPad/iPhone work.
Use the slice tool
This might not be something you think of at first, because you probably associate this way of working with ‘old-school’ table-based techniques. Still, you can apply your slice any way you want, keeping your way of working in mind. Just think about it for a second. If you use the slice tool, and you give each slice its proper filename, you don’t have to worry about it when you need to do updates on the slice or image. Photoshop will remember what the image of that slice is called and which ‘Save for Web’ export settings you’ve used for it. You can also export multiple slices all at once, or export only the ones you need using ‘Save selected slices’.
I hope this list of optimization tips was useful, and that they will help you improve and enjoy your time in Photoshop. That is, until the ultimate web design application makes its appearance. Somebody is building this as we speak, right?",2010,Veerle Pieters,veerlepieters,2010-12-10T00:00:00+00:00,https://24ways.org/2010/optimize-your-web-design-workflow/,process
61,Animation in Responsive Design,"Animation and responsive design can sometimes feel like they’re at odds with each other. Animation often needs space to do its thing, but RWD tells us that the amount of space we’ll have available is going to change a lot. Balancing that can lead to some tricky animation situations.
Embracing the squishiness of responsive design doesn’t have to mean giving up on your creative animation ideas. There are three general techniques that can help you balance your web animation creativity with your responsive design needs. One or all of these approaches might help you sneak a little something extra into your next project.
Focused art direction
Smaller viewports mean a smaller stage for your motion to play out on, and this tends to amplify any motion in your animation. Suddenly 100 pixels is really far and multiple moving parts can start looking like they’re battling for space. An effect that looked great on big viewports can become muddled and confusing when it’s reframed in a smaller space.
Making animated movements smaller will do the trick for simple motion like a basic move across the screen. But for more complex animation on smaller viewports, you’ll need to simplify and reduce the number of moving parts. The key to this is determining what the vital parts of the animation are, to zone in on the parts that are most important to its message. Then remove the less necessary bits to distill the motion’s message down to the essentials.
For example, Rally Interactive’s navigation folds down into place with two triangle shapes unfolding each corner on larger viewports. If this exact motion was just scaled down for narrower spaces the two corners would overlap as they unfolded. It would look unnatural and wouldn’t make much sense.
Open video
The main purpose of this animation is to show an unfolding action. To simplify the animation, Rally unfolds only one side for narrower viewports, with a slightly different animation. The action is still easily interpreted as unfolding and it’s done in a way that is a better fit for the available space. The message the motion was meant to convey has been preserved while the amount of motion was simplified.
Open video
Si Digital does something similar. The main concept of the design is to portray the studio as a creative lab. On large viewports, this is accomplished primarily through an animated illustration that runs the full length of the site and triggers its animations based on your scroll position. The illustration is there to support the laboratory concept visually, but it doesn’t contain critical content.
Open video
At first, it looks like Si Digital just turned off the animation of the illustration for smaller viewports. But they’ve actually been a little cleverer than that. They’ve also reduced the complexity of the illustration itself. Both the amount of motion (reduced down to no motion) and the illustration were simplified to create a result that is much easier to glean the concept from.
Open video
The most interesting thing about these two examples is that they’re solved more with thoughtful art direction than complex code. Keeping the main concept of the animations at the forefront allowed each to adapt creative design solutions to viewports of varying size without losing the integrity of their design.
Responsive choreography
Static content gets moved around all the time in responsive design. A three-column layout might line up from left to right on wide viewports, then stack top to bottom on narrower viewports. The same approach can be used to arrange animated content for narrower views, but the animation’s choreography also needs to be adjusted for the new layout. Even with static content, just scaling it down or zooming out to fit it into the available space is rarely an ideal solution. Rearranging your animations’ choreography to change which animation starts when, or even which animations play at all, keeps your animated content readable on smaller viewports.
In a recent project I had three small animations that played one after the other, left to right, on wider viewports but needed to be stacked on narrower viewports to be large enough to see. On wide viewports, all three animations could play one right after the other in sequence because all three were in the viewable area at the same time. But once these were stacked for the narrower viewport layouts, that sequence had to change.
Open video
What was essentially one animation on wider viewports became three separate animations when stacked on narrower viewports. The layout change meant the choreography had to change as well. Each animation starts independently when it comes into view in the stacked layout instead of playing automatically in sequence. (I’ve put the animated parts in this demo if you want to peek under the hood.)
Open video
I chose to use the GreenSock library, with the choreography defined in two different timelines for this particular project. But the same goals could be accomplished with other JavaScript options or even CSS keyframe animations and media queries.
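As a rough sketch of that CSS-only route (the class name, timings and breakpoint below are invented for illustration, not taken from the project), a delay staggers the pieces into a sequence on wide viewports, and a media query removes the delay for the stacked layout where each piece plays on its own:
.step-two {
animation: fade-up 0.5s ease-out both;
animation-delay: 0.5s; /* wide layout: wait for the previous step to finish */
}
@media (max-width: 37.5em) {
.step-two {
animation-delay: 0s; /* stacked layout: no shared sequence */
}
}
@keyframes fade-up {
from { opacity: 0; transform: translateY(20px); }
to { opacity: 1; transform: translateY(0); }
}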
Even more complex responsive choreography can be pulled off with SVG. Media queries can be used to change CSS animations applied to SVG elements at specific breakpoints for starters. For even more responsive power, SVG’s viewBox property, and the positioning of the objects within it, can be adjusted at JavaScript-defined breakpoints. This lets you set rules to crop the viewable area and arrange your animating elements to fit any space.
Sarah Drasner has some great examples of how to use this technique with style in this responsive infographic and this responsive interactive illustration. On the other hand, if smart scalability is what you’re after, it’s also possible to make all of an SVG’s shapes and motion scale with the SVG canvas itself. Sarah covers both these clever responsive SVG techniques in detail. Creative and complex animation can easily become responsive thanks to the power of SVG!
Open video
Bake performance into your design decisions
It’s hard to get very far into a responsive design discussion before performance comes up. Performance goes hand in hand with responsive design and your animation decisions can have a big impact on the overall performance of your site.
The translate3D “hack”, backface-visibility:hidden, and the will-change property are the heavy hitters of animation performance. But decisions made earlier in your animation design process can have a big impact on rendering performance and your performance budget too.
Pick a technology that matches your needs
One of the biggest advantages of the current web animation landscape is the range of tools we have available to us. We can use CSS animations and transitions to add just a dash of interface animation to our work, go all out with webGL to create a 3D experience, or anywhere in between. All within our browsers! Having this huge range of options is amazing and wonderful but it also means you need to be cognizant of what you’re using to get the job done.
Loading in the full weight of a robust JavaScript animation library is going to be overkill if you’re only animating a few small elements here and there. That extra overhead will have an impact on performance. Performance budgets will not be pleased.
Always match the complexity of the technology you choose to the complexity of your animation needs to avoid unnecessary performance strain. For small amounts of animation, stick to CSS solutions since it’s the most lightweight option. As your animations grow in complexity, or start to require more robust logic, move to a JavaScript solution that can accomplish what you need.
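As a rough illustration of the lightweight end of that range, a small interface touch like the hover effect below needs nothing more than a transition (the selector and values are made up for the example); pulling in a JavaScript animation library for this would be pure overhead:
.button {
background-color: #0077cc;
transition: background-color 0.2s ease-out, transform 0.2s ease-out;
}
.button:hover,
.button:focus {
background-color: #005fa3;
transform: translateY(-2px);
}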
Animate the most performant properties
Whether you’re animating in CSS or JavaScript, you’re affecting specific properties of the animated element. Browsers can animate some properties more efficiently than others based on how many steps need to happen behind the scenes to visually update those properties.
Browsers are particularly efficient at animating opacity, scale, rotation, and position (when the latter three are done with transforms). This article from Paul Irish and Paul Lewis gives the full scoop on why. Conveniently, those are also the most common properties used in motion design. There aren’t many animated effects that can’t be pulled off with this list. Stick to these properties to set your animations up for the best performance results from the start. If you find yourself needing to animate a property outside of this list, check CSS Triggers… to find out how much of an additional impact it might have.
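To make that concrete, here is a hedged before-and-after sketch (the values are illustrative): both keyframe rules aim for the same slide-and-fade, but the second animates only transform and opacity, so the browser can usually skip layout and paint work on each frame.
/* animates left (assumes a positioned element): likely to force layout on every frame */
@keyframes slide-in-layout {
from { left: -100px; opacity: 0; }
to { left: 0; opacity: 1; }
}
/* animates transform and opacity: usually handled by the compositor alone */
@keyframes slide-in-transform {
from { transform: translateX(-100px); opacity: 0; }
to { transform: translateX(0); opacity: 1; }
}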
Offset animation start times
Offsets (the concept of having a series of similar movements execute one slightly after the other, creating a wave-like pattern) are a long-held motion graphics trick for creating more interesting and organic looking motion. Employing this trick of the trade can also be smart for performance. Animating a large number of objects all at the same time can put a strain on the browser’s rendering abilities even in the best cases. Adding short delays to offset these animations in time, so they don’t all start at once, can improve rendering performance.
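A hedged CSS sketch of that idea (the class name and timings are invented): each item plays the same animation, but each successive item is delayed by a small amount so the browser isn’t asked to start everything on the same frame.
.card { animation: pop-in 0.3s ease-out both; }
.card:nth-child(2) { animation-delay: 0.05s; }
.card:nth-child(3) { animation-delay: 0.1s; }
.card:nth-child(4) { animation-delay: 0.15s; }
@keyframes pop-in {
from { opacity: 0; transform: scale(0.95); }
to { opacity: 1; transform: scale(1); }
}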
Go explore the responsive animation possibilities for yourself!
With smart art direction, responsive choreography, and an eye on performance you can create just about any creative web animation you can think up while still being responsive. Keep these in mind for your next project and you’ll pull off your animations with style at any viewport size!",2015,Val Head,valhead,2015-12-09T00:00:00+00:00,https://24ways.org/2015/animation-in-responsive-design/,design
76,Giving CSS Animations and Transitions Their Place,"CSS animations and transitions may not sit squarely in the realm of the behaviour layer, but they’re stepping up into this area that used to be pure JavaScript territory. Heck, CSS might even perform better than its JavaScript equivalents in some cases. That’s pretty serious! With CSS’s new tricks blurring the lines between presentation and behaviour, it can start to feel bloated and messy in our CSS files. It’s an uncomfortable feeling.
Here is a pair of methods I’ve found to be pretty helpful in keeping the potential bloat and wire-crossing under control when CSS has its hands in both presentation and behaviour.
Same eggs, more baskets
Structuring your CSS to have separate files for layout, typography, grids, and so on is a fairly common approach these days. But which one do you put your transitions and animations in? The initial answer, as always, is “it depends”.
Small effects here and there will likely sit just fine with your other styles. When you move into more involved effects that require multiple animations and some logic support from JavaScript, it’s probably time to choose none of the above, and create a separate CSS file just for them.
Putting all your animations in one file is a huge help for code organization. Even if you opt for a name less literal than animations.css, you’ll know exactly where to go for anything CSS animation related. That saves time and effort when it comes to editing and maintenance. Keeping track of which animations are still currently used is easier when they’re all grouped together as well. And as an added bonus, you won’t have to look at all those horribly unattractive and repetitive prefixed @-keyframe rules unless you actually need to.
An animations.css file might look something like the snippet below. It defines each animation’s keyframes and defines a class for each variation of that animation you’ll be using. Depending on the situation, you may also want to include transitions here in a similar way. (I’ve found defining transitions as their own class, or mixin, to be a huge help in past projects for me.)
/* defining the animation */
@keyframes catFall {
from { background-position: center 0;}
to {background-position: center 1000px;}
}
@-webkit-keyframes catFall {
from { background-position: center 0;}
to {background-position: center 1000px;}
}
@-moz-keyframes catFall {
from { background-position: center 0;}
to {background-position: center 1000px;}
}
@-ms-keyframes catFall {
from { background-position: center 0;}
to {background-position: center 1000px;}
}
…
/* class that assigns the animation */
.catsBackground {
height: 100%;
background: transparent url(../endlessKittens.png) 0 0 repeat-y;
animation: catFall 1s linear infinite;
-webkit-animation: catFall 1s linear infinite;
-moz-animation: catFall 1s linear infinite;
-ms-animation: catFall 1s linear infinite;
}
If we don’t need it, why load it?
Having all those CSS animations and transitions in one file gives us the added flexibility to load them only when we want to. Loading a whole lot of things that will never be used might seem like a bit of a waste.
While CSS has us impressed with its motion chops, it falls flat when it comes to the logic and fine-grained control. JavaScript, on the other hand, is pretty good at both those things. Chances are the content of your animations.css file isn’t acting alone. You’ll likely be adding and removing classes via JavaScript to manage your CSS animations at the very least. If your CSS animations are so entwined with JavaScript, why not let them hang out with the rest of the behaviour layer and only come out to play when JavaScript is supported?
Dynamically linking your animations.css file like this means it will be completely ignored if JavaScript is off or not supported. No JavaScript? No additional behaviour, not even the parts handled by CSS.
This technique comes up in progressive enhancement techniques as well, but it can help here to keep your presentation and behaviour nicely separated when more than one language is involved. The aim in both cases is to avoid loading files we won’t be using.
If you happen to be doing something a bit fancier – like 3-D transforms or critical animations that require more nuanced fallbacks – you might need something like Modernizr to step in to determine support more specifically. But the general idea is the same.
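For instance, assuming your Modernizr build includes the csstransforms3d test, it adds either a csstransforms3d or a no-csstransforms3d class to the html element, and a fallback can hang off those classes; the selectors and effect below are hypothetical:
.csstransforms3d .card {
transition: transform 0.5s;
transform-style: preserve-3d;
}
.csstransforms3d .card.is-flipped {
transform: rotateY(180deg);
}
/* no 3-D transform support: fall back to a simple cross-fade */
.no-csstransforms3d .card-front {
transition: opacity 0.5s;
}
.no-csstransforms3d .is-flipped .card-front {
opacity: 0;
}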
Summing it all up
Using a couple of simple techniques like these, we get to pick where to best draw the line between behaviour and presentation based on the situation at hand, not just on what language we’re using. The power of when to separate and how to reassemble the individual pieces can be even greater if you use preprocessors as part of your process. We’ve got a lot of options! The important part is to make forward-thinking choices to save your future self, and even your current self, unnecessary headaches.",2012,Val Head,valhead,2012-12-08T00:00:00+00:00,https://24ways.org/2012/giving-css-animations-and-transitions-their-place/,code
210,Stop Leaving Animation to the Last Minute,"Our design process relies heavily on static mockups as deliverables and this makes it harder than it needs to be to incorporate UI animation in our designs. Talking through animation ideas and dancing out the details of those ideas can be fun; but it’s not always enough to really evaluate or invest in animated design solutions.
By including deliverables that encourage discussing animation throughout your design process, you can set yourself (and your team) up for creating meaningful UI animations that feel just as much a part of the design as your colour palette and typeface. You can get out of that “running out of time to add in the animation” trap by deliberately including animation in the early phases of your design process. This will give you both the space to treat animation as a design tool, and the room to iterate on UI animation ideas to come up with higher quality solutions. Two deliverables that can be especially useful for this are motion comps and animated interactive prototypes.
Motion comps - an animation deliverable
Motion comps (also called animatics or motion mock-ups) are usually video representations of UI animations. They are used to explore the details of how a particular animation might play out. And they’re most often made with timeline-based tools like Adobe After Effects, Adobe Animate, or Tumult Hype.
The most useful thing about motion comps is how they allow designers and developers to share the work of creating animations. (Instead of pushing all the responsibility of animation on one group or the other.) For example, imagine you’re working on a design that has a content panel that can either be open or closed. You might create a mockup like the one below including the two different views: the closed state and the open state. If you’re working with only static deliverables, these two artboards might be exactly what you hand off to developers along with the instruction to animate between the two.
On the surface that seems pretty straightforward, but even with this relatively simple transition there’s a lot that those two artboards don’t address. There are seven things that change between the closed state and the open state. That’s seven things the developer building this out has to figure out how to move in and out of view, when, and in what order. And all of that is even before starting to write the code to make it work.
By providing only static comps, all the logic of the animation falls on the developer. This might go ok if she has the bandwidth and animation knowledge, but that’s making an awful lot of assumptions.
Instead, if you included a motion mock-up like this with your static mock-ups, you could share the work of figuring out the logic of the animation between design and development. Designers could work out the logic of the animation in the motion comp, exploring which items move at which times and in which order to create the opening and closing transitions.
The motion comp can also be used to iterate on different possible animation approaches before any production code has to be committed to. Sharing the work and giving yourself time to explore animation ideas before you’re backed up against the deadline will lead to happier teammates and better design solutions.
When to use motion comps
I’m not a fan of making more deliverables just for the sake of having more things to make, so I find it helps to narrow down what question I’m trying to answer before choosing which sort of deliverable to make to investigate.
Motion comps can be most helpful for answering questions like:
Exactly how should this animation look?
Which items should move? Where? And when?
Do the animation qualities reflect our brand or our voice and tone?
One of the added bonuses of creating motion comps to answer these questions is that you’ll have a concrete thing to bring to design critiques or reviews to get others’ input on them as well.
Using motion comps as handoff
Motion comps are often used to handoff animation ideas from design to development. They can be super useful for this, but they’re even more useful when you include the details of the motion specs with them. (It’s difficult, if not impossible, to glean these details from playing back a video.)
More specifically, you’ll want to include:
Durations and the properties animated for each animation
Easing curve values or spring values used
Delay values and repeat counts
In many cases you’ll have to collect these details up manually. But this isn’t necessarily something that will take a lot of time. If you take note of them as you’re creating the motion comp, chances are most of these details will already be top of mind. (Also, if you use After Effects for your motion comps, the Inspector Spacetime plugin might be helpful for this task.)
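To show what those details look like once they reach code, here is a hedged sketch of a single spec line translated into CSS (every value is an invented example rather than a recommendation):
/* Spec: panel fades in and rises 40px over 300ms, easing cubic-bezier(0.215, 0.61, 0.355, 1), after a 100ms delay */
.panel-enter {
animation: panel-in 0.3s cubic-bezier(0.215, 0.61, 0.355, 1) 0.1s both;
}
@keyframes panel-in {
from { opacity: 0; transform: translateY(40px); }
to { opacity: 1; transform: translateY(0); }
}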
Animated prototypes - an interactive deliverable
Making prototypes isn’t a new idea for web work by any stretch, but creating prototypes that include animation – or even creating prototypes specifically to investigate potential animation solutions – can go a long way towards having higher quality animations in your final product.
Interactive prototypes are web or app-based, or displayed in a particular tool’s preview window to create a useable version of interactions that might end up in the end product. They’re often made with prototyping apps like Principle, Framer, or coded up in HTML, CSS and JS directly like the example below.
See the Pen Prototype example by Val Head (@valhead) on CodePen.
The biggest difference between motion comps and animated prototypes is the interactivity. Prototypes can respond to taps, drags or gestures, while motion comps can only play back in a linear fashion. Generally speaking, this makes prototypes a bit more of an effort to create, but they can also help you solve different problems. The interactive nature of prototypes can also make them useful for user testing to further evaluate potential solutions.
When to use prototypes
When it comes to testing out animation ideas, animated prototypes can be especially helpful in answering questions like these:
How will this interaction feel to use? (Interactive animations often have different timing needs than animations that are passively viewed.)
What will the animation be like with real data or real content?
Does this animation fit the context of the task at hand?
Prototypes can be used to investigate the same questions that motion comps do if you’re comfortable working in code or your prototyping tool of choice has capabilities to address high fidelity animation details. There are so many different prototyping tools out there at the moment, you’re sure to be able to find one that fits your needs.
As a quick side note: If you’re worried that your coding skills might not be up to par to prototype in code, know that prototype code doesn’t have to be production quality code. Animated prototypes’ main concern is working out the animation details. Once you’ve arrived at a combination of animations that works, the animation specifics can be extracted or the prototype can be refactored for production.
Motion comp or prototype?
Both motion comps and prototypes can be extremely useful in the design process and you can use whichever one (or ones) that best fits your team’s style. The key thing that both offer is a way to make animation ideas visible and sharable. When you and your teammate are both looking at the same deliverable, you can be confident you’re talking about the same thing and discuss its pros and cons more easily than just describing the idea verbally.
Motion comps tend to be more useful earlier in the design process when you want to focus on the motion without worrying about the underlying structure or code yet. Motion comps can also be great when you want to try something completely new. Some folks prefer motion comps because the tools for making them feel more familiar to them, which means they can work faster.
Prototypes are most useful for animations that rely heavily on interaction. (Getting the timing right for interactions can be tough without the interaction part sometimes.) Prototypes can also be helpful to investigate and optimize performance if that’s a specific concern.
Give them a try
Whichever deliverables you choose to highlight your animation decisions, including them in your design reviews, critiques, or other design discussions will help you make better UI animation choices. More discussion around UI animation ideas during the design phase means greater buy-in, more room for iteration, and higher quality UI animations in your designs. Why not give them a try for your next project?",2017,Val Head,valhead,2017-12-08T00:00:00+00:00,https://24ways.org/2017/stop-leaving-animation-to-the-last-minute/,design
205,Why Design Systems Fail,"Design systems are so hot right now, and for good reason. They promote a modular approach to building a product, and ensure organizational unity and stability via reusable code snippets and utility styles. They make prototyping a breeze, and provide a common language for both designers and developers.
A design system is a culmination of several individual components, which can include any or all of the following (and more):
Style guide or visual pattern library
Design tooling (e.g. Sketch Library)
Component library (where the components live in code)
Code usage guidelines and documentation
Design usage documentation
Voice and tone guideline
Animation language guideline
Design systems are standalone (internal or external) products, and have proven to be very effective means of design-driven development. However, in order for a design system to succeed, everyone needs to get on board.
I’d like to go over a few considerations to ensure design system success and what could hinder that success.
Organizational Support
Put simply, any product, including internal products, needs support. Something as cross-functional as a design system, which spans every vertical project team, needs support from the top and bottom levels of your organization.
What I mean by that is that there needs to be top-level support from project managers up through VPs to see the value of a design system, to provide resources for its implementation, and advocate for its use company-wide. This is especially important in companies where such systems are being put in place on top of existing, crufty codebases, because it may mean there needs to be some time and effort put in the calendar for refactoring work.
Support from the bottom-up means that designers and engineers of all levels also need to support this system and feel responsibility for it. A design system is an organization’s product, and everyone should feel confident contributing to it. If your design system supports external clients as well (such as contractors), they too can become valuable teammates.
A design system needs support and love to be nurtured and to grow. It also needs investment.
Investment
To have a successful design system, you need to make a continuous effort to invest resources into it. I like to compare this to working out.
You can work out intensely for 3 months and see some gains, but once you stop working out, those will slowly fade away. If you continue to work out, even if it’s less often than the initial investment, you’ll see yourself maintaining your fitness level at a much higher rate than if you stopped completely.
If you invest once in a design system (say, 3 months of overhauling it) but neglect to keep it up, you’ll face the same situation. You’ll see immediate impact, but that impact will fade as it gets out of sync with new designs and you’ll end up with strange, floating bits of code that nobody is using. Your engineers will stop using it as the patterns become outdated, and then you’ll find yourself in for another round of large investment (while dreading going through the process since it’s fallen so far out of shape).
With design systems, small incremental investments over time lead to big gains overall.
With this point, I also want to note that because of how they scale, design systems can really make a large impact across the platform, making it extremely important to really invest in things like accessibility and solid architecture from the start. You don’t want to scale a brittle system that’s not easy to use.
Take care of your design systems, and keep working on them to ensure their effectiveness. One way to ensure this is to have a dedicated team working on this design system, managing tickets and styling updates that trickle out to the rest of your company.
Responsibility
With some kind of team to act as an owner of a design system, whether it be the design team, engineering team, or a new team made of both designers and engineers (the best option), your company is more likely to keep a relevant, up-to-date system that doesn’t break.
This team is responsible for a few things:
Helping others get set up on the system (support)
Designing and building components (development)
Advocating for overall UI consistency and adherence (evangelism)
Creating a rollout plan and update system (product management)
As you can see, these are a lot of roles, so it helps to have multiple people on this team, at least part of the time, if you can. One thing I’ve found to be effective in the past is to hold office hours with slots that coworkers can book, to help them get set up and to answer any questions about using the system. Having an open Slack channel also helps for this sort of thing, as well as for bringing up bugs/issues/ideas and being a channel for announcements like new releases.
Communication
Once you have resources and a plan to invest in a design system, it’s really important that this person or team acts as a bridge between design and engineering. Continuous communication is really important here, and the way you communicate is even more important.
Remember that nobody wants to be told what to do or prescribed a solution, especially developers, who are used to a lot of autonomy (usually they get to choose their own tools at work). However much control the other engineers have over the process, they need to feel like they have input, and feel heard.
This can be challenging, especially since ultimately, some party needs to be making a final decision on direction and execution. Because it’s a hard balance to strike, having open communication channels and being as transparent as possible as early as possible is a good start.
Buy-in
For all of the reasons we’ve just looked over, good communication is really important for getting buy-in from your users (the engineers and designers), as well as from product management.
Building and maintaining a design system is surprisingly a lot of people-ops work.
To get buy-in where you don’t have a previous consensus that this is the right direction to take, you need to make people want to use your design system. A really good way to get someone to want to use a product is to make it the path of least resistance, to show its value.
Gather examples and usage wins, because showing is much more powerful than telling.
If you can, have developers use your product in a low-stakes situation where it provides clear benefits. Hackathons are a great place to debut your design system. Having a hackathon internally at DigitalOcean was a perfect opportunity to:
Evangelize for the design system
See what people were using the component library for and what they were struggling with (excellent user testing there)
Get user feedback afterward on how to improve it in future iterations
Let people experience the benefits of using it themselves
These kinds of moments, where people explore on their own, are where you can really get people on your side and using the design system, because they can get their hands on it and draw their own conclusions (and if they don’t love it — listen to them on how to improve it so that they do). We don’t always get so lucky as to have this sort of instantaneous user feedback from our direct users.
Architecture
I briefly mentioned the scalable nature of design systems. This is exactly why it’s important to develop a solid architecture early on in the process. Build your design system with growth and scalability in mind. What happens if your company acquires a new product? What happens when it develops a new market segment? How can you make sure there’s room for customization and growth?
A few things we’ve found helpful include:
Namespacing
Use namespacing to ensure that the system doesn’t collide with existing styles if applying it to an existing codebase. This means prefixing every element in the system to indicate that this class is a part of the design system. To ensure that you don’t break parts of the existing build (which may have styled base elements), you can namespace the entire system inside of a parent class. Sass makes this easy with its nested structure.
This kind of namespacing wouldn’t be necessary per se on new projects, but it is definitely useful when integrating new and old styles.
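As a rough sketch in plain CSS of what both forms of namespacing can look like (the ds- prefix and class names are invented for illustration), every class in the system carries a prefix, and base-element styles only apply inside a wrapper class so they can’t leak into the legacy build:
/* prefixed component class */
.ds-button {
padding: 8px 16px;
border-radius: 3px;
}
/* base elements styled only inside the design system wrapper */
.ds-scope h1 {
font-size: 2rem;
margin: 0 0 1rem;
}
.ds-scope a {
color: #0069ff;
}
With Sass, that wrapper can simply be a parent selector the system’s partials are nested inside, which is the nested structure mentioned above.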
Semantic Versioning
I’ve used Semantic Versioning on all of the design systems I’ve ever worked on. Semantic versioning uses a system of Major.Minor.Patch for any updates. You can then tag releases on GitHub with versioned updates and ensure that someone’s app won’t break unintentionally when there is an update, if they are anchored to a specific version (which they should be).
We also use this semantic versioning as a link with our design system assets at DigitalOcean (i.e. Sketch library) to keep them in sync, with the same version number corresponding to both Sketch and code.
Our design system is served as a node module, but is also provided as a series of built assets using our CDN for quick prototyping and one-off projects. For these built assets, we run a deploy script that automatically creates folders for each release, as well as a latest folder if someone wanted the always-up-to-date version of the design system.
So, semantic versioning for the system I’m currently building is what links our design system node module assets, sketch library assets, and statically built file assets.
The reason we have so many ways of consuming our design system is to make adoption easier and to reduce friction.
Friction
A while ago, I posed the question of why design systems become outdated and unused, and a major conclusion I drew from the conversation was:
“If it’s harder for people to use than their current system, people just won’t use it”
You have to make your design system the path of least resistance, lowering the cognitive overhead of development, not adding to it. This is vital. A design system is intended to make development much more efficient, enforce a consistent style across sites, and allow the developer to not worry as much about small decisions like naming and HTML semantics. These are already sorted out for them, meaning they can focus on building product.
But if your design system is complicated and over-engineered, they may find it frustrating to use and go back to what they know, even if it’s not the best solution. If you’re a Sass expert, and base your system on complex mixins and functions, you’d better hope your user (the developer) is also a Sass expert, or wants to learn. This is often not the case, however. You need to talk to your audience.
With the DigitalOcean design system, we provide a few options:
Option 1
Users can pull the component library into a development environment and use Sass, select just the components they want to include, and extend the system using a hook-based system. This is the most performant and extensible output. Only the components that are called upon are included, and they can be easily extended using mixins.
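A minimal sketch of how consuming Option 1 might look, assuming made-up partial paths and a button-hook mixin rather than the real DigitalOcean structure:
// define a hook before the import so the component can pick it up,
// for example via @if mixin-exists(button-hook) { @include button-hook; }
@mixin button-hook {
  text-transform: uppercase;
}

// pull in only the components this project needs
@import 'design-system/settings';
@import 'design-system/components/button';
@import 'design-system/components/card';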
But as noted earlier, not everyone wants to work this way (including Sass as a dependency, potentially needing to set up a build system for it, and learning a new syntax). There is also the user who just wants to throw a link onto their page and have it look nice, and that’s where our versioned built assets come in.
Option 2
With Option 2, users pull in links that are served via a CDN that contain JS, CSS, and our SVG icon library. The code is a bit bigger than the completely customized version, but often this isn’t the aim when people are using Option 2.
Reducing friction for adoption should be a major goal of your design system rollout.
Conclusion
Having a design system is really beneficial to any product, especially as it grows. In order to have an effective system, it’s important, above all, to keep your user in mind and to garner support from your entire company. Once you have support and acceptance, this system will flourish and grow. Make sure someone is responsible for it, and make sure it’s built with a solid foundation from the start and carefully maintained into the future. Good luck, and happy holidays!",2017,Una Kravets,unakravets,2017-12-14T00:00:00+00:00,https://24ways.org/2017/why-design-systems-fail/,process
175,Front-End Code Reusability with CSS and JavaScript,"Most web standards-based developers are more than familiar with creating their sites with semantic HTML and lots and lots of CSS. With each new page in a design, the CSS tends to grow and grow and more elements and styles are added. But CSS can be used to better effect.
The idea of object-oriented CSS isn’t new. Nicole Sullivan has written a presentation on the subject and outlines two main concepts: separate structure and visual design; and separate container and content. Jeff Croft talks about Applying OOP Concepts to CSS:
I can make a class of .box that defines some basic layout structure, and another class of .rounded that provides rounded corners, and classes of .wide and .narrow that define some widths, and then easily create boxes of varying widths and styles by assigning multiple classes to an element, without having to duplicate code in my CSS.
This concept helps reduce CSS file size, allows for great flexibility, rapid building of similar content areas and means greater consistency throughout the entire design. You can also take this concept one step further and apply it to site behaviour with JavaScript.
Build a versatile slideshow
I will show you how to build multiple slideshows using jQuery, allowing varying levels of functionality which you may find on one site design. The code will be flexible enough to allow you to add previous/next links, image pagination and the ability to change the animation type. More importantly, it will allow you to apply any combination of these features.
Image galleries are simply a list of images, so the obvious choice for marking up the content is an unordered list. Many designs, however, do not cater to non-JavaScript versions of the website, and thus don’t take into account multiple large images. You could also simply hide all the other images in the list, apart from the first image. This method can waste bandwidth because the other images might be downloaded when they are never going to be seen.
Taking this second concept — only showing one image — the only code you need to start your slideshow is an <img> tag. The other images can be loaded dynamically via either a per-page JavaScript array or via AJAX.
The slideshow concept is built upon the very versatile Cycle jQuery Plugin and is structured into another reusable jQuery plugin. Below is the HTML and JavaScript snippet needed to run every different type of slideshow I have mentioned above.
Slideshow plugin
If you’re not familiar with jQuery or how to write and author your own plugin there are plenty of articles to help you out.
jQuery has a chainable interface and this is something your plugin must implement. This is easy to achieve, so your plugin simply returns the collection it is using:
return this.each(function () {
// the plugin code runs here once for each element in the collection
});
Local Variables
To keep the JavaScript clean and avoid any conflicts, you must set up any variables which are local to the plugin and should be used on each collection item. Defining all your variables at the top under one statement makes adding more and finding which variables are used easier. For other tips, conventions and improvements check out JSLint, the “JavaScript Code Quality Tool”.
var $$, $div, $images, $arrows, $pager,
id, selector, path, o, options,
height, width,
list = [], li = 0,
parts = [], pi = 0,
arrows = ['Previous', 'Next'];
Cache jQuery Objects
It is good practice to cache any calls made to jQuery. This reduces wasted DOM calls, can improve the speed of your JavaScript code and makes code more reusable.
The following code snippet caches the currently selected DOM element as a jQuery object using the variable name $$. Secondly, the plugin makes its settings available to the Metadata plugin‡, which is best practice within jQuery plugins.
For each slideshow the plugin generates a <div> with a class of slideshow and a unique id. This is used to wrap the slideshow images, pagination and controls.
The base path which is used for all the images in the slideshow is calculated based on the existing image which appears on the page. For example, if the path to the image on the page was /img/flowers/1.jpg the plugin would use the path /img/flowers/ to load the other images.
$$ = $(this);
o = $.metadata ? $.extend({}, settings, $$.metadata()) : settings;
id = 'slideshow-' + (i++ + 1);
$div = $('<div/>').addClass('slideshow').attr('id', id);
selector = '#' + id + ' ';
path = $$.attr('src').replace(/[0-9]\.jpg/g, '');
options = {};
height = $$.height();
width = $$.width();
Note: the plugin uses conventions such as folder structure and numeric filenames. These conventions help with the reusable aspect of plugins and best practices.
Build the Images
The cycle plugin uses a list of images to create the slideshow. Because we chose to start with one image we must now build the list programmatically. This is a case of looping through the images which were added via the plugin options, building the appropriate HTML and appending the resulting <ul> to the DOM.
$.each(o.images, function (i, image) {
// list markup reconstructed from the description above; o.images is assumed to hold the image filenames
list[li++] = '<li>';
list[li++] = '<img src=""' + path + image + '"" alt="""" />';
list[li++] = '</li>';
});
$images = $('<ul/>').addClass('cycle-images');
$images.append(list.join('')).appendTo($div);
Although jQuery provides the append method it is much faster to create one really long string and append it to the DOM at the end.
Update the Options
Here are some of the options we’re making available by simply adding classes to the image. You can change the slideshow effect from the default fade to the sliding effect. By adding the class of stopped the slideshow will not auto-play and must be controlled via pagination or previous and next links.
// different effect
if ($$.is('.slide')) {
options.fx = 'scrollHorz';
}
// don't move by default
if ($$.is('.stopped')) {
options.timeout = 0;
}
If you are using the same set of images throughout a website you may wish to start on a different image on each page or section. This can be easily achieved by simply adding the appropriate starting class to the image.
// based on the class name on the image
if ($$.is('[class*=start-]')) {
options.startingSlide = parseInt($$.attr('class').replace(/.*start-([0-9]+).*/g, ""$1""), 10) - 1;
}
For example, an image with a class of start-3: by default, and without JavaScript, the third image in this slideshow is shown. When the JavaScript is applied to the page the slideshow must know to start from the correct place, which is why the start class is required.
You could capture the default image name and parse it to get the position, but only the default image needs to be numeric to work with this plugin (and this could easily be changed in future). Therefore, this explicitly defined option makes the plugin more tolerant.
Previous/Next Links
A common feature of slideshows is previous and next links enabling the user to manually progress the images. The Cycle plugin supports this functionality, but you must generate the markup yourself. Most people add these directly in the HTML but normally only support their behaviour when JavaScript is enabled. This goes against progressive enhancement. To keep with the best practice progressive enhancement method, the previous/next links should be generated with JavaScript.
The following snippet checks whether the slideshow requires the previous/next links, via the arrows class. It restricts the Cycle plugin to the specific slideshow using the selector we created at the top of the plugin. This means multiple slideshows can run on one page without conflicting with each other.
The code creates a list of links using the arrows array we defined at the top of the plugin. It also adds a class to the slideshow container, meaning you can style different combinations of options in your CSS.
// create the arrows
if ($$.is('.arrows') && list.length > 1) {
options.next = selector + '.next';
options.prev = selector + '.previous';
$arrows = $('<ul/>').addClass('cycle-arrows');
$.each(arrows, function (i, val) {
// markup reconstructed: each arrow is a list item containing a link,
// classed to match the .next/.previous selectors passed to Cycle above
parts[pi++] = '<li><a href=""#"" class=""' + val.toLowerCase() + '"">' + val + '</a></li>';
});
$arrows.append(parts.join('')).appendTo($div);
$div.addClass('has-cycle-arrows');
}
The arrow array could be placed inside the plugin settings to allow for localisation.
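That has-cycle-arrows class means the CSS can target each combination of options; for example (the selectors come from the plugin code above, the declarations are only illustrative):
/* only style the arrows when they have been generated */
.has-cycle-arrows .cycle-images {
  margin-bottom: 2.5em;
}
.has-cycle-arrows .cycle-arrows li {
  display: inline;
  margin-right: .5em;
}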
Pagination
The Cycle plugin creates its own HTML for the pagination of the slideshow. All our plugin needs to do is create the list and selector to use. This snippet creates the pagination container and appends it to our specific slideshow container. It sets the Cycle plugin pager option, restricting it to the specific slideshow using the selector we created at the top of the plugin. Like the previous/next links, a class is added to the slideshow container allowing you to style the slideshow itself differently.
// create the clickable pagination
if ($$.is('.pagination') && list.length > 1) {
options.pager = selector + '.cycle-pagination';
$pager = $('<ul/>').addClass('cycle-pagination');
$pager.appendTo($div);
$div.addClass('has-cycle-pagination');
}
Note: the Cycle plugin creates the pagination anchors directly inside the <ul>, without the surrounding <li> elements. Unfortunately this is invalid markup but the code still works.
Demos
Well, that describes all the ins-and-outs of the plugin, but demos make it easier to understand! Viewing the source on the demo page shows some of the combinations you can create with a simple image, a few classes and some well-thought-out JavaScript.
View the demos →
Decide on defaults
The slideshow plugin uses the exact same settings as the Cycle plugin, but some are explicitly set within the slideshow plugin when using the classes you have set.
When deciding on what functionality is going to be controlled via this class method, be careful to choose your defaults wisely. If all slideshows should auto-play, don’t make this an option — make the option to stop the auto-play. Similarly, if every slideshow should have previous/next functionality make this the default and expose the ability to remove them with a class such as “no-pagination”.
In the examples presented in this article I have used a class on each image. You can easily change this to anything you want and simply apply the plugin based on the jQuery selector required.
Grab your images
If you are using AJAX to load in your images, you can speed up development by deciding on and keeping to a folder structure and naming convention. There are two methods: basing the image path on the current URL, or basing it on the src of the image. The first allows a different slideshow on each page, but in many instances a site will have a couple of sets of images and therefore the second method is probably preferred.
Metadata ‡
A method already exists which allows you to directly modify settings in certain plugins using the classes in your HTML: a jQuery plugin called Metadata. This method allows for finer control over the plugin settings themselves. Some people, however, may dislike the syntax and prefer using normal classes, as above, which when sprinkled with a bit more JavaScript allow you to control what you need to control.
The takeaway
Hopefully you have understood not only what goes into a basic jQuery plugin but also learnt a new and powerful idea which you can apply to other areas of your website.
The idea can also be applied to other common interfaces such as lightboxes or mapping services such as Google Maps — for example creating markers based on a list of places, each with different pin icons based on the anchor class.",2009,Trevor Morris,trevormorris,2009-12-06T00:00:00+00:00,https://24ways.org/2009/front-end-code-reusability-with-css-and-javascript/,code
82,Being Prepared To Contribute,"“You’ll figure it out.” The advice my dad gives has always been the same, whether addressing my grade school homework or paying bills after college. If I was looking for a shortcut, my dad wasn’t going to be the one to provide it.
When I was a kid it infuriated the hell out of me, but what I then perceived to be a lack of understanding turned out to be a keystone in my upbringing. As an adult, I realize the value in not receiving outright solutions, but being forced to figure things out.
Even today, when presented with a roadblock while building for the web, I am tempted to get by with the help of the latest grid system, framework, polyfill, or plugin. In and of themselves these resources are harmless, but before I can drop them in, those damn words still echo in the back of my mind: “You’ll figure it out.”
I know that if I blindly implement these tools as drag and drop solutions I fail to understand the intricacies behind how and why they were built; repeatedly using them as shortcuts handicaps my skill set. When I solely rely on the tools of others, my work is at their mercy, leaving me less creative and resourceful, and, thus, less able to contribute to the advancement of our industry and community.
One of my favorite things about this community is how generous and collaborative it can be. I’ve loved seeing FitVids used all over the web and regularly improved upon at Github. I bet we can all think of a time where implementing a shared resource has benefitted our own work and sanity. Because these resources are so valuable, it’s important that we continue to be a part of the conversation in order to further develop solutions and ideas. It’s easy to assume there’s someone smarter or more up-to-date in any one area, but with a degree of understanding and perspective, we can all participate.
This open form of collaboration is in our web DNA. After all, its primary purpose was to promote the exchange and development of new ideas.
Tim Berners-Lee proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier “Enquire” work, it was designed to allow people to work together by combining their knowledge in a web of hypertext documents.
I’m delighted to find that this spirit of collaborative ingenuity is alive and well on the web today. Take the story of Off Canvas as an example. I was at an ATX Dribbble meet up where I met Jason Weaver and chatted to him about his recent work on the responsive layout prototype, Off Canvas. Jason said he came across a post by Luke Wroblewski outlining the idea and saw this:
If anyone is interested in building a complete example of this approach using responsive Web design techniques, let me know!
From there Luke recounts:
We went back and forth on email, with me laying out ideas and Jason doing all the hard work to see if they can be done and improving them bit by bit! Once we got to something we both liked, I wrote up an article explaining things and he hosted the examples.
Luke took the time to clearly outline and diagram his ideas, and Jason responded with a solid proof of concept that has evolved into a tool we all have at our disposal. Victory!
I have also benefitted from comrades who have taken an idea of mine into development. After blogging about some concerns in regards to maintaining hierarchy as media queries are used to shift layouts, Jordan Moore rebounded with some responsive demos where he used flexbox to (re)order content as viewport sizing changes.
Similar stories can be found behind the development of things like FitVids, FitText, and Molten Leading. I love this pattern of collaboration because it involves a fairly specific process:
Initial idea or prototype is outlined or built, then shared
Discuss
Someone develops or improves it, then shares it
Discuss
Someone else develops or improves it, then shares it.
Infinity.
This is what the web looks like when we build it together, and I’d argue that steps 2+ are absolutely crucial. A web where everyone develops their own ideas and tools independent of one another is like a room full of people talking and no one listening.
The pattern itself mimics a literal web structure, and ideally we’d be able to follow a strand from one idea to the next and so on.
Blessed are the curators
Sometimes those lines aren’t easy to find or follow. Thankfully, there are people who painstakingly log each experiment and index much of what’s out there. Chris Coyier does this with CSS in general, and Brad Frost is doing this for responsive and multi-device design with his Pattern Library. Seriously, take a look at this page and imagine what it would take to find, track and organize the progression of each of these resources yourself. I’d argue that ongoing collections like these are more valuable than the sum of their parts when they are updated regularly as opposed to a top ten tips blog post format.
Here’s my soapbox
Here are a few things I appreciate about how things are shared and contributed online. And yes, I could do way better at all of them myself.
Concise write-ups: honor others’ time by getting to the point. Not every idea or solution needs two thousand words to convey fully. I love long-form posts, but there’s a time and a place for them.
Visual aids: if a quick illustration, screenshot, or graphic helps illustrate your point or problem, yes please.
By the way, Luke Wroblewski rules the school on both of these.
Demo it: host it yourself, or put it on CodePen or JS Bin for others to see.
Put it on Github: share and improve with the rest of the community. Consider, however, that because someone puts something on Github doesn’t mean they’re forever bound to provide support or instruction.
This isn’t a call for everyone to learn everything all the time, but if you’re curious or interested in something, skip the shortcut and get your hands dirty: sketch, prototype, question, debate, fork, and share. Figuring these things out on our own makes us valuable contributors to the web – the thing that ultimately we’re all trying to figure out together.",2012,Trent Walton,trentwalton,2012-12-03T00:00:00+00:00,https://24ways.org/2012/being-prepared-to-contribute/,process
11,JavaScript: Taking Off the Training Wheels,"JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it’s understandable that when 24 ways asked, “What one thing do you wish you had more time to learn about?”, a number of you answered “JavaScript!”
This article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can’t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources.
Why learn JavaScript?
So what’s in it for you? Why take the next step and learn the fundamentals?
Confidence with jQuery
If nothing else, learning JavaScript will improve your jQuery code; you’ll be comfortable writing jQuery from scratch and feel happy bending others’ code to your own purposes. Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. Understanding how JavaScript works lets you write better jQuery because you know what it’s doing behind the scenes. When you need to leave the beaten track, you can do so with confidence.
In fact, you could say that jQuery’s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That’s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn’t needed.
An example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here’s a comparison of some jQuery code and the equivalent without.
$('.counter').each(function (index) {
$(this).text(index + 1);
});
var counters = document.querySelectorAll('.counter');
[].slice.call(counters).forEach(function (elem, index) {
elem.textContent = index + 1;
});
Solving problems no one else has!
When you have to go to the internet to solve a problem, you’re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has.
Node.js
Node.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don’t worry: the Node community is thriving, very friendly and willing to help.
I think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting!
Here’s an example web server written with Node. http is a module that allows you to create servers and, like jQuery’s $.ajax, make requests. It’s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it’s certainly not out of your reach.
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World');
}).listen(1337);
console.log('Server running at http://localhost:1337/');
Grunt and other website tools
Node has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. Both of these rely heavily on Node, and I’ll talk a little bit about Grunt here.
Grunt is a task runner, and many people use it for compiling Sass or compressing their site’s JavaScript and images. It’s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer.
Ways to improve your skills
So you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas.
Rebuild a jQuery app
Converting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you’re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way’s jQuery to JavaScript article.
Find a mentor
If you think you’d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I’d look out for someone who’s active and friendly on Twitter, and does the kind of work you’d like to do. Introduce yourself over Twitter or send them an email. I wouldn’t expect a full tutoring course (although that is another option!) but they’ll be very glad to answer a question and any follow-ups every now and then.
Go to a workshop
Many conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there’s one in your area. Workshops are great because you can ask direct questions, and you’re in an environment where others are learning just like you are — no need to learn alone!
Set yourself challenges
This is one way I like to learn new things. I have a new thing that I’m not very good at, so I pick something that I think is just out of my reach and I try to build it. It’s learning by doing and, even if you fail, it can be enormously valuable.
Where to start?
If you’ve decided learning JavaScript is an important step for you, your next question may well be where to go from here.
I’ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want.
Beginner
If you’re just getting started with JavaScript, I’d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They’re all reputable sources (although, I’ve included something I wrote — you can decide about that one!) and will not lead you astray.
jQuery’s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro.
Codecademy’s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you.
HTMLDog’s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!]
The tuts+ jQuery to JavaScript article mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.
Getting in-depth
For more comprehensive documentation and help I’d recommend adding these places to your list of go-tos.
MDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it’s a great place to just go and browse.
Axel Rauschmayer’s 2ality is a stunning collection of articles that will take you deep into JavaScript. It’s certainly worth looking at.
Addy Osmani’s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications.
And finally…
I think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you’ll get there. I bet you’ll learn a whole lot along the way. Good luck!
Many thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.",2013,Tom Ashworth,tomashworth,2013-12-05T00:00:00+00:00,https://24ways.org/2013/javascript-taking-off-the-training-wheels/,code
294,New Tricks for an Old Dog,"Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy.
I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on.
In this article, I hope to share some experiences and techniques that will prove useful in your own situation and that you can impress your friends in some new and exciting ways!
How do you do, fellow kids?
To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React.
Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us.
Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built-in to Flight, and it made reusing components a real challenge.
There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it.
Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on.
To find fixes for these problems, we looked around at other frameworks. We like React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone.
But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort!
Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we already were using.
Boiled down, what we liked seemed quite simple:
Component nesting & composition
Easy, predictable state management
Normal functions for data manipulation
Making these ideas applicable to Flight took some time, but we’re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app.
While the specifics of our situation and Flight aren’t really important, this experience taught me something:
Distill good tech into great ideas. You can apply great ideas anywhere.
You don’t have to use the cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already?
Times, they are a changin’
Apart from stealing ideas from the new and shiny, how can we keep making the most of improved tooling and techniques? Times change and so should the way we write code.
Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate.
In fact, we found ourselves with a mix of both AMD syntaxes:
define([""lodash""], function (_) {
// . . .
});
define(function (require) {
var _ = require(""lodash"");
// . . .
});
This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax:
import _ from ""lodash"";
These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there?
To explain, I want to introduce you to codemods and jscodeshift.
A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL(""..."") to new URL(""..."").
jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree – a data-structure representation of the code — finding and modifying in place as it goes.
Here’s an example that renames all instances of the variable foo to bar:
module.exports = function (fileInfo, api) {
return api
.jscodeshift(fileInfo.source)
.findVariableDeclarators('foo')
.renameTo('bar')
.toSource();
};
It’s a seriously powerful tool, and we’ve used it to write a series of codemods that:
rename modules,
unify our use of AMD to a single syntax,
transition from one testing framework to another, and
switch from AMD to CommonJS.
These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS:
commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc
Date: Tue Jul 12 2016
Run AMD -> CommonJS codemod
418 files changed, 47550 insertions(+), 48468 deletions(-)
Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone!
From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes:
Find all the existing patterns
Choose the two most similar
Unify with a codemod
Repeat.
For example:
For module loading, we had 2 competing AMD patterns plus some use of CommonJS
The two AMD syntaxes were the most similar
We used a codemod to unify the AMD patterns
Later we returned to AMD to convert it to CommonJS
It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer.
Welcome aboard!
As TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad of microservices that manage our data and their layers of authentication, security and business logic around them make for an overwhelming amount of information to hand to a newbie.
Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding that each of our new hires goes through, to make the most of their first few weeks.
Joining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding!
And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help.
This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies.
After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help!
For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts:
What are our dependencies?
Where are the potential points of failure?
Where does authentication live? Storage? Caching?
Who owns “X”?
Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track.
To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review.
In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started:
Every pull request gets a +1 from someone else.
That’s all — it’s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master.
At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things:
Don’t review for more than an hour 1
Keep reviews smaller than ~400 lines 2
Code review your own code first 2
After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for a review, that review is blocking them doing other work. It’s your job to unblock them.
On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration.
But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool.
It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours!
And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace.
For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results.
If you’d like to run a seminar, one thing you could try to get started: run a ‘point at the thing you least understand in our architecture’ session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this!
More, please!
There’s a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations.
And finally, thanks to Amy for proof reading this and to Passy for feedback on the original talk.
Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE 2000: 467-476. ↩
Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software. ↩ ↩",2016,Tom Ashworth,tomashworth,2016-12-18T00:00:00+00:00,https://24ways.org/2016/new-tricks-for-an-old-dog/,code
119,Rocking Restrictions,"I love my job. I live my job. For every project I do, I try to make it look special. I’ll be honest: I have a fetish for comments like “I never saw anything like that!” or, “I wish I thought of that!”. I know, I have an ego-problem. (Eleven I’s already)
But sometimes, you run out of inspiration. Happens to everybody, and everybody hates it. “I’m the worst designer in the world.” “Everything I designed before this was just pure luck!” No it wasn’t.
Countless articles about finding inspiration have already been written. Great, but they’re not the magic potion you’d expect them to be when you need it. Here’s a list of small tips that can have immediate effect when applying them/using them. Main theme: Liberate yourself from the designers’ block by restricting yourself.
Do’s
Grids
If you aren’t already using grids, you’re doing something wrong. Not only are they a great help for aligning your design, they also restrict you to certain widths and heights. (For more information about grids, I suggest you read Mark Boulton’s series on designing grid systems. Oh, he’s also publishing a book I think.)
So what’s the link between grids and restrictions? Instead of having the option to style a piece of layout with a width of 1 to 960 pixels, you have to choose from values like 60 pixels, 140, 220, 300, …
Start small
Having a hard time finding a style for the layout, why don’t you start with one small object? No, not that small object, I meant a piece of a form, or a link, or try styling your headers (h1 – h6).
Let’s take a submit button of a form: it’s small, but needs much attention. People will click it. People will hover it. Maybe sometimes it’s disabled? Also: a button needs to look like a button, so typically it requires more styling than a regular link. Once you’ve got the button, move on, following the button’s style.
Color palettes
There are lots of resources on the web for finding inspiration for color palettes. Some of the most famous are COLOURlovers, wear palettes and Adobe’s Kuler. Browse through them (or create your own from a picture), pick a color palette you like and which works with the subject you’re handling, and stick with it. 4-5 colors, maybe with some tonal variations, but that’s it.
Fonts
There aren’t many fonts available for the web (Richard Rutter has a great article on this subject), but you’d be surprised how far they can go. A simple text-transform: uppercase; or font-style: italic; can change a dull looking font into something entirely fresh.
Play around with the fonts you want to use and the variations you’ll be using, and make a list. Pick five combinations of fonts and their variations, and stick with them throughout the layout.
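For example, a list of five combinations could be as simple as this (the typefaces and values here are placeholders, not a recommendation):
body { font-family: Georgia, serif; }
h1 { font-family: Helvetica, Arial, sans-serif; font-weight: bold; }
h2 { font-family: Helvetica, Arial, sans-serif; font-weight: bold; font-style: italic; }
h3 { font-family: Georgia, serif; text-transform: uppercase; letter-spacing: .1em; }
blockquote { font-family: Georgia, serif; font-style: italic; }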
Single-task
Most of us use multiple monitors. They’re great to increase productivity, but make it harder to focus on a single task. Here’s what you do: try using only your smallest monitor. Maybe it’s the one from your laptop, maybe it’s an old 1024×768 you found in the attic. Having Photoshop (or Fireworks or…) taking over your entire workspace blocks out all the other distractions on your screen, and works quite liberating.
Mute everything…
…but not entirely. I noticed I was way more focused when I set NetNewsWire to refresh its feeds only once every two hours. After two hours, I need a break anyway. Turning off Twitterrific was a mistake, as it’s my window to the world, and it’s the place where the people I like to call colleagues live. You can’t exactly ask them to bring you a cup of coffee when they go to the vending machine, but they do keep you fresh, and it stops you from going human-shy. Instead I changed the settings to not play a notification sound when new Tweets arrive so it doesn’t disturb me when I’m zoning.
Don’ts
CSS galleries
Don’t start browsing all kinds of CSS galleries. Either you’ll feel bad, or you just start using elements in a way you can’t call “inspired” anymore. Instead gather your own collection of inspiration. Example: I use LittleSnapper in which I dump everything I find inspiring. This goes from a smart layout idea, to a failed picture someone posted on Flickr. Everything is inspiring.
Panicking
Don’t panic. It’s the worst thing you could do. Instead, get away from the computer, and go to bed early. A good night of sleep combined with a hot/cold shower can give you a totally new perspective on a design. Got a deadline by tomorrow? Well, you should’ve started earlier. Got a good excuse to start on this design this late? Tell your client it was either that or a bad design.
120-hour work-week
Don’t work all day long, including evenings and early mornings. Write off that first hour, you don’t really think you’ll get anything productive done before 9AM?! I don’t even think you should work on one and the same design all day long. If you’re stuck, try working in blocks of 1 or 2 hours on a certain design. Mixing projects isn’t for everyone, but it might just do the trick for you.
Summary
Use grids, not only for layout purposes.
Pick a specific element to start with.
Use a colour palette.
Limit the amount of fonts and variations you’ll use.
Search for the smallest monitor around, and restrict yourself to that one.
Reduce the amount of noise.
Don’t start looking on the internet for inspiration. Build your own little inspirarchive.
Work in blocks.",2008,Tim Van Damme,timvandamme,2008-12-14T00:00:00+00:00,https://24ways.org/2008/rocking-restrictions/,process
191,CSS Animations,"Friend: You should learn how to write CSS!
Me: …
Friend: CSS; Cascading Style Sheets. If you’re serious about web design, that’s the next thing you should learn.
Me: What’s wrong with tags?
That was 8 years ago. Thanks to the hard work of Jeffrey, Andy, Andy, Cameron, Colly, Dan and many others, learning how to decently mark up a website and write lightweight stylesheets was surprisingly easy. They made it so easy even a complete idiot (OH HAI) was able to quickly master it.
And then… nothing. For a long time, it seemed like nothing was happening in the land of CSS; time stood still. Once you knew the basics, there wasn’t anything new to keep up with. It looked like a great band split, but people just kept re-releasing their music in various “Best Of!” or “Remastered!” albums.
Fast forward a couple of years to late 2006. On the official WebKit blog Surfin’ Safari, there’s an article about something called CSS animations. Great new stuff to play with, but only supported by nightly builds (read: very, very beta) of WebKit. In the following months, they release other goodies, like CSS gradients, CSS reflections, CSS masks, and even more CSS animation sexiness. Whoa, looks like the band got back together, found their second youth, and went into overdrive! The problem was that if you wanted to listen to their new albums, you had to own some kind of new high-tech player no one on earth (besides some early adopters) owned.
Back in the time machine. It is now late 2009, close to Christmas. Things have changed. Browsers supporting these new toys are widely available left and right. Even non-techies are using these advanced browsers to surf the web on a daily basis!
Epic win? Almost, but at least this gives us enough reason to start learning how we could use all this new CSS voodoo. On Monday, Natalie Downe showed you a good tutorial on Going Nuts with CSS Transitions. Today, I’m taking it one step further…
Howto: A basic spinner
No matter how fast internet tubes or servers are, we’ll always need spinners to indicate something’s happening behind the scenes. Up until now, people would go to some site, pick one of the available templates, customize their foreground and background colors, and download a beautiful GIF image.
There are some downsides to this though:
It’s only semi-transparent: If you change your mind and pick a slightly different background color, you need to go back to the site, set all the parameters again, and replace your current image. There isn’t even a way to pick an image or gradient as background.
Limited number of frames, probably to keep the file size as small as possible (don’t forget this thing needs to be loaded before whatever process is finished in the background), and you don’t have that 24 frames per second smoothness.
This is just too fucking easy. As a front-end code geek, there must be a “cooler” way to do this!
What do we need to make a spinner with CSS animations? One image, and one element on our webpage we can hook on to. Yes, that’s it. I created a simple transparent PNG that looks like it might be a spinner, and for the element on the page, I wrote this piece of genius HTML:
Using a grid system similar to this can easily create quite the tag soup. It could fill the HTML full of divs that may become complex to understand and difficult to edit.
Although there is this reliance on several <div>s to lay out the components on a page, it does allow a tidy way to place the component code within that page. It abstracts the layout of the page to its own code, its own system, so the components can ‘fit’ where needed.
The requirements of the new grid system
Moving forward I set myself some goals for what I’d like to have achieved in this new grid system:
It needs to behave like the existing grid systems
We are not ripping up the existing grid system, it would be too much work, for now, to retrofit all of the existing components to work in a grid that has a different amount of columns, and spacing at various viewport widths.
Allow full-width components
Currently the grid system is a 14 column grid that becomes centred on the page when the viewport is wide enough. We have, in the past, written some CSS that would allow for a full-width component, but this had always felt like a hack. We want the option to have a full-width element as part of the new grid system, not something that needs CSS to fight against.
Less of a tag soup
Ideally we want to end up writing less HTML to layout the page. Although the existing system can be quite clear as to what each element is doing, it can also become a little laborious in working out what each grid row or block is doing where.
I would like to move the layout logic to CSS as much as is possible, potentially creating some utility classes or additional ‘layout classes’ for the components.
Easier for people to use and author
With many people using the existing design system’s codebase we need to create a new grid system that is as easy or easier to use than the existing one. I think and hope this would be helped by removing as many <div>s as possible, and it would require new documentation and examples, and potentially some initial training.
Separating layout from style
There still needs to be a separation of layout from the styles for the component. To allow for the component itself to be placed wherever needed in the page we need to make sure that the CSS for the layout is a separate entity to the CSS for that styling.
With these base requirements I took to CodePen and started working on some throwaway code to get started.
Making the new grid(s)
The Full-Width Grid
To start with I created a grid that had three columns, one for the left, one for the middle, and one for the right. This would give the full-width option to components.
Thankfully, one of Rachel Andrew’s many articles on Grid discussed this exact requirement of the new grid system to break out with Grid.
I took some of the code in the examples and edited it to make the grid we needed.
.container {
display: grid;
grid-template-columns:
[full-start]
minmax(.75em, 1fr)
[main-start]
minmax(0, 1008px)
[main-end]
minmax(.75em, 1fr)
[full-end];
}
We are declaring a grid with four named grid column lines, and we define how the three columns they create react to the viewport width. We have a left and a right column with a minimum width of 12px (.75em), and a central column with a maximum width of 1008px.
Both the left and right columns fill up any additional space if the viewport is wider than 1032px. We are also not declaring any gutters on this grid; the left and right columns act as gutters at smaller viewports.
At this point I noticed that older versions of Sass cannot parse the brackets in this code. To combat this I used Sass’ unquote method to wrap around the grid-template-columns value.
.container {
display: grid;
grid-template-columns:
unquote(""
[full-start]
minmax(.75em, 1fr)
[main-start]
minmax(0, 1008px)
[main-end]
minmax(.75em, 1fr)
[full-end]
"");
}
The existing codebase makes use of Sass variables, mixins and functions so to remove that would be a problem, but luckily the version of Sass used is up-to-date (note: example CodePens will be using CSS).
The initial full-width grid displays on a webpage as below:
The 14 column grid
I decided to work out the 14 column grid as a separate prototype before working out how it would fit within the full-width grid. This grid is very similar to the 12 column grids that have been used in web design. Here we need 14 columns with a gutter between each one.
Along with the many other resources on Grid, Mozilla’s MDN site had a page on common layouts using CSS Grid. This gave me the perfect CSS I needed to create my grid and I edited it as required:
.inner {
display: grid;
grid-template-columns: repeat(14, [col-start] 1fr);
grid-gap: .75em;
}
We, again, are declaring a grid, and we are splitting up the available space by creating 14 columns of 1fr each and giving each one a starting line named col-start.
This grid would display on a web page as below:
Bringing the grids together
Now that we have got the two grids we need to help fulfil our requirements, we need to put them together so that they are actually what we need.
The subgrid
There is no subgrid in CSS, yet. To work around this for the new grid system we could nest the 14 column grid inside the full-width grid.
In the HTML we nest the 14 column inner grid inside the full-width container.
So that the inner grid knows where to be laid out within the container, we tell it which columns to start and end with; with this code it is the start and end of the main column.
.inner {
display: grid;
grid-column: main-start / main-end;
grid-template-columns: repeat(14, [col-start] 1fr);
grid-gap: .75em;
}
The CSS for the container remains unchanged.
This works, but we have added another div to our HTML. One of our requirements is to try and remove the potential for tag soup.
The faux subgrid subgrid
I wanted to see if it would be possible to place the CSS for the 14 column grid within the CSS for the full-width grid. I replaced the CSS for the main grid and added the grid-gap to the .container.
.container {
display: grid;
grid-gap: .75em;
grid-template-columns:
[full-start]
minmax(.75em, 1fr)
[main-start]
repeat(14, [col-start] 1fr)
[main-end]
minmax(.75em, 1fr)
[full-end];
}
What this gave me was a 16 column grid. I was unable to find a way to tell the main grid, the grid betwixt main-start and main-end, to be a maximum of 1008px as required.
I trawled the internet to find if it was possible to create our main requirement, a 14 column grid which also allows for full-width components. I found that we could not reverse minmax to minmax(1fr, 72px) as 1fr is not allowed as a minimum if there is a maximum. I tried working out if we could make the min larger than its max but in minmax it would be ignored.
I was struggling; I was hoping for a cleaner version of the grid system we currently use, but I was getting to the point where needing that extra <div> would be a necessity.
At 3 in the morning, when I was failing to get to sleep, my mind happened upon a question: "Could you use calc?"
At some point I drifted back to sleep so the next day I set upon seeing if this was possible. I knew that the maximum width of the central grid needed to be 1008px. The left and right columns needed to be however many pixels were left in the viewport, divided by 2. In CSS it looked like I would need to use calc twice: the first time to take away 1008px from 100% of the viewport width and the second to divide that result by 2.
calc(calc(100% - 1008px) / 2)
The CSS above was part of the value that I would need to include in the declaration for the grid.
.container {
display: grid;
grid-gap: .75em;
grid-template-columns:
[full-start]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[main-start]
repeat(14, [col-start] 1fr)
[main-end]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[full-end];
}
We have created the grid required: a full-width grid, with a central 14 column grid, using fewer <div> elements.
See the Pen Design Systems and CSS Grid, 6 by Stuart Robson (@sturobson) on CodePen.
Success!
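With the named lines in place, placing a component onto the grid is just a case of saying which lines it should span. The class names below are only an illustration, not part of the system:
/* a regular component sits on the central 14 columns */
.component {
  grid-column: main-start / main-end;
}
/* a breakout component spans the full width of the viewport */
.component--full {
  grid-column: full-start / full-end;
}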
Progressive enhancement
Now that we have created the grid system required we need to back-track a little.
Not all browsers support Grid; over the last 9 months or so this has gotten a lot better. However, there will still be visiting browsers that potentially won’t have support. The effort required to make the grid system fall back for these browsers depends on your product or site’s browser support.
To determine if we will be using Grid or not for a browser we will make use of feature queries. This would mean that any version of Internet Explorer will not get Grid, as well as some mobile browsers and older versions of other browsers.
@supports (display: grid) {
/* Styles for browsers that support Grid */
}
If a browser does not pass the requirements for @supports we will fall back to using flexbox where possible, and if that is not supported we are happy for the page to be laid out in one column.
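As a rough sketch of that approach (this isn’t from the original styles, and the exact fallback will depend on your components), it could look something like:
/* simple flexbox fallback for browsers without Grid support */
.container {
  display: flex;
  flex-wrap: wrap;
}
@supports (display: grid) {
  .container {
    /* the Grid declarations from earlier go here */
    display: grid;
  }
}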
A website doesn’t have to look the same in every browser after all.
A responsive grid
We started with the big picture: how the grid would work at a large viewport. The grid system we have created gets a little silly when the viewport gets smaller.
At smaller viewports we have a single column layout where every item of content, every component, stacks atop the other. We don’t start to define a grid until the viewport gets to 700px wide. At this point we have an 8 column grid, and if the viewport gets to 1100px or wider we have our 14 column grid.
/*
* to start with there is no 'grid' just a single column
*/
.container {
padding: 0 .75em;
}
/*
* when we get to 700px we create an 8 column grid with
* a left and right area to breakout of the grid.
*/
@media (min-width: 700px) {
.container {
display: grid;
grid-gap: .75em;
grid-template-columns:
[full-start]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[main-start]
repeat(8, [col-start] 1fr)
[main-end]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[full-end];
padding: 0;
}
}
/*
* when we get to 1100px we create a 14 column grid with
* a left and right area to breakout of the grid.
*/
@media (min-width: 1100px) {
.container {
grid-template-columns:
[full-start]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[main-start]
repeat(14, [col-start] 1fr)
[main-end]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[full-end];
}
}
Being explicit in creating this grid means there is some repetition that we could avoid. We will define the number of columns for the inner grid by using a Sass variable or CSS custom properties (more commonly termed CSS variables).
Let’s use CSS custom properties. We need to declare the variable first by adding it to our stylesheet.
:root {
--inner-grid-columns: 8;
}
We then need to edit a few more lines. First make use of the variable for this line.
repeat(8, [col-start] 1fr)
/* replace with */
repeat(var(--inner-grid-columns), [col-start] 1fr)
Then at the 1100px breakpoint we would only need to change the value of the --inner-grid-columns custom property.
@media (min-width: 1100px) {
.container {
grid-template-columns:
[full-start]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[main-start]
repeat(14, [col-start] 1fr)
[main-end]
minmax(calc(calc(100% - 1008px) / 2), 1fr)
[full-end];
}
}
/* replace with */
@media (min-width: 1100px) {
.container {
--inner-grid-columns: 14;
}
}
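Putting those replacements together, the full set of container rules would now read like this (nothing new here, just the earlier snippets combined):
:root {
  --inner-grid-columns: 8;
}
.container {
  padding: 0 .75em;
}
@media (min-width: 700px) {
  .container {
    display: grid;
    grid-gap: .75em;
    grid-template-columns:
      [full-start]
      minmax(calc(calc(100% - 1008px) / 2), 1fr)
      [main-start]
      repeat(var(--inner-grid-columns), [col-start] 1fr)
      [main-end]
      minmax(calc(calc(100% - 1008px) / 2), 1fr)
      [full-end];
    padding: 0;
  }
}
@media (min-width: 1100px) {
  .container {
    --inner-grid-columns: 14;
  }
}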
See the Pen Design Systems and CSS Grid, 8 by Stuart Robson (@sturobson) on CodePen.
The final grid system
We have finally created our new grid for the design system. It stays true to the existing grid in place, adds the ability to break out of the grid, and removes a div that could have been needed for the nested 14 column grid.
We can move on to the new component.
Creating a new component
Back to the new components we need to create.
To me there are two components, one of which is a slight variant of the first. This component contains a title, subtitle, a paragraph (potentially paragraphs) of content, a list, and a call to action.
To start with we should write the HTML for the component, something like this:
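The original markup isn’t reproduced here, but based on the class names used in the CSS that follows it would be something along these lines (the element choices are assumptions):
<section class=""features"">
  <h2 class=""features__title"">Title</h2>
  <h3 class=""features__subtitle"">Subtitle</h3>
  <div class=""features__content"">
    <p>A paragraph (or paragraphs) of content.</p>
  </div>
  <ul class=""features__list"">
    <li>List item</li>
    <li>List item</li>
  </ul>
  <a class=""features__link"" href=""#"">Call to action</a>
</section>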
To place the component on the existing grid is fine, but as child elements are not affected by the container grid we need to define another grid for the features component.
As the container grid doesn’t get invoked until 700px, it is possible to negate the need for a media query for this initial placement.
.features {
grid-column: col-start 1 / span 6;
}
@supports (display: grid) {
@media (min-width: 1100px) {
.features {
grid-column-end: 9;
}
}
}
We can also avoid duplication of declarations by making use of the grid-column shorthand for the initial placement and the longhand properties to change individual values at a breakpoint. We need to write a little more CSS for the variant component, the one that will sit on the right side of the page, too.
.features:nth-of-type(even) {
grid-column-start: 4;
grid-row: 2;
}
@supports (display: grid) {
@media (min-width: 1100px) {
.features:nth-of-type(even) {
grid-column-start: 9;
grid-column-end: 16;
}
}
}
We cannot place the items within features on the container grid as they are not direct children. To make this work we have to define a grid for the features component.
We can do this by defining the grid at the first breakpoint of 700px making use of CSS custom properties again to define how many columns there will need to be.
.features {
grid-column: col-start 1 / span 6;
--features-grid-columns: 5;
}
@supports (display: grid) {
@media (min-width: 700px) {
.features {
display: grid;
grid-gap: .75em;
grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr);
}
}
}
@supports (display: grid) {
@media (min-width: 1100px) {
.features {
grid-column-end: 9;
--features-grid-columns: 7;
}
}
}
See the Pen Design Systems and CSS Grid, 10 by Stuart Robson (@sturobson) on CodePen.
Laying out the parts
Looking at the spec and reading several articles, I feel there are two ways that I could lay out the text of this component on the grid.
We could use the grid-column shorthand that incorporates grid-column-start and grid-column-end or we can make use of grid-template-areas.
grid-template-areas allow for a nice visual way of representing how the parts of the component would be laid out. We can take the mock of the features on the grid and represent them in text in our CSS.
Within the .features rule we can add the relevant grid-template-areas value to represent the above.
.features {
display: grid;
grid-template-columns: repeat(var(--features-grid-columns), [col-start] 1fr);
grid-template-areas:
"". title title title title title title""
"". subtitle subtitle subtitle subtitle subtitle . ""
"". content content content content . . ""
"". list list list . . . ""
"". . . . link link link "";
}
In order to make the variant of the component we would have to create the grid-template-areas for that component too.
We then need to tell each element of the component in what grid-area it should be placed within the grid.
.features__title { grid-area: title; }
.features__subtitle { grid-area: subtitle; }
.features__content { grid-area: content; }
.features__list { grid-area: list; }
.features__link { grid-area: link; }
See the Pen Design Systems and CSS Grid, 12 by Stuart Robson (@sturobson) on CodePen.
The other way would be to use the grid-column shorthand and the grid-column-start and grid-column-end we have used previously.
.features .features__title {
grid-column: col-start 2 / span 6;
}
.features .features__subtitle {
grid-column: col-start 2 / span 5;
}
.features .features__content {
grid-column: col-start 2 / span 4;
}
.features .features__list {
grid-column: col-start 2 / span 4;
}
.features .features__link {
grid-column: col-start 5 / span 3;
}
For the variant of the component we can change just the grid-column-start property, as the span set by the grid-column shorthand still applies.
.features:nth-of-type(even) .features__title {
grid-column-start: col-start 1;
}
.features:nth-of-type(even) .features__subtitle {
grid-column-start: col-start 1;
}
.features:nth-of-type(even) .features__content {
grid-column-start: col-start 3;
}
.features:nth-of-type(even) .features__list {
grid-column-start: col-start 3;
}
.features:nth-of-type(even) .features__link {
grid-column-start: col-start 1;
}
See the Pen Design Systems and CSS Grid, 14 by Stuart Robson (@sturobson) on CodePen.
I think, for now, we will go with using the grid-column properties rather than grid-template-areas. The repetition needed for creating the variant feels too much when we can change the grid-column-start instead, keeping the component’s elements’ layout properties tied a little closer to the elements rather than the grid.
Some additional decisions
The current component library has existing styles for titles, subtitles, lists, paragraphs of text and calls to action. These are name-spaced so that they shouldn’t clash with any other components. Looking forward there will be a chance that other products adopt the component library, but they may bring their own styles for titles, subtitles, etc.
One way that we could write our code now for that near-future possibility is to make sure our classes are working hard. Using class-attribute selectors with *=, we can target the part of the class attribute that we know the elements in the component will have.
.features [class*=""title""] {
grid-column: col-start 2 / span 6;
}
.features [class*=""subtitle""] {
grid-column: col-start 2 / span 5;
}
.features [class*=""content""] {
grid-column: col-start 2 / span 4;
}
.features [class*=""list""] {
grid-column: col-start 2 / span 4;
}
.features [class*=""link""] {
grid-column: col-start 5 / span 3;
}
See the Pen Design Systems and CSS Grid, 15 by Stuart Robson (@sturobson) on CodePen.
Although the component we have created has a title, subtitle, paragraphs, a list, and a call to action, there may be a time when one or more of these is not required or available. One thing I found out is that if the element doesn’t exist then grid will not create space for it. This may be obvious, but it can be really helpful in making a nice malleable component.
We have only looked at columns so far. As existing components have their own spacing for the vertical rhythm of the page, we don’t really want rows to take up equal space in the component; they should just take up the space needed. We can do this by adding grid-auto-rows: min-content; to our .features. This is also useful if you need your component to take up a height that is more than the component itself.
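In CSS that is a single extra declaration on the component (sketched here on the .features rule from earlier):
.features {
  /* rows only take the height their content needs */
  grid-auto-rows: min-content;
}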
The grid of the future
From prototyping this new grid and components in CSS Grid, I’ve found it a fantastic way to reimagine how we can create a layout or grid system for our sites. It gives us options to create the same layouts in differing ways that could suit a project and its needs.
It allows us to carry on – if we choose to – using a div-based grid but swapping out floats for CSS Grid, or to tie it to our components so they have specific places to go depending on what component is being used. Or we could have several ‘grid components’ in our design system that we could use to lay out various components throughout a page.
If you find yourself tasked with creating some new components for your design system try it. If you are starting from scratch I believe you really should start with CSS Grid for your layout.
It really feels like the possibilities are endless in terms of layout for the web.
Resources
Here are just a few resources I have pored over these last few weeks whilst getting acquainted with CSS Grid.
A collection of CodePens from this article
Grid by Example from Rachel Andrew
A Complete Guide to CSS Grid on Codrops from Hui Jing Chen
Rachel Andrew’s Blog Archive tagged: cssgrid
CSS Grid Layout Examples
MDN’s CSS Grid Layout
A Complete Guide to Grid from CSS-Tricks
CSS Grid Layout Module Level 1 Specification",2017,Stuart Robson,stuartrobson,2017-12-12T00:00:00+00:00,https://24ways.org/2017/design-systems-and-css-grid/,code
157,Capturing Caps Lock,"One of the more annoying aspects of having to remember passwords (along with having to remember loads of them) is that if you’ve got Caps Lock turned on accidentally when you type one in, it won’t work, and you won’t know why. Most desktop computers alert you in some way if you’re trying to enter your password to log on and you’ve enabled Caps Lock; there’s no reason why the web can’t do the same. What we want is a warning – maybe the user wants Caps Lock on, because maybe their password is in capitals – rather than something that interrupts what they’re doing. Something subtle.
But that doesn’t answer the question of how to do it. Sadly, there’s no way of actually detecting whether Caps Lock is on directly. However, there’s a simple work-around; if the user presses a key, and it’s a capital letter, and they don’t have the Shift key depressed, why then they must have Caps Lock on! Simple.
DOM scripting allows your code to be notified when a key is pressed in an element; when the key is pressed, you get the ASCII code for that key. Capital letters, A to Z, have ASCII codes 65 to 90. So, the code would look something like:
on a key press
if the ASCII code for the key is between 65 and 90 *and* if shift is not pressed
warn the user that they have Caps Lock on, but let them carry on
end if
end keypress
The actual JavaScript for this is more complicated, because both event handling and keypress information differ across browsers. Your event handling functions are passed an event object, except in Internet Explorer where you use the global event object; the event object has a which parameter containing the ASCII code for the key pressed, except in Internet Explorer where the event object has a keyCode parameter; some browsers store whether the shift key is pressed in a shiftKey parameter and some in a modifiers parameter. All this boils down to code that looks something like this:
keypress: function(e) {
var ev = e ? e : window.event;
if (!ev) {
return;
}
var targ = ev.target ? ev.target : ev.srcElement;
// get key pressed
var which = -1;
if (ev.which) {
which = ev.which;
} else if (ev.keyCode) {
which = ev.keyCode;
}
// get shift status
var shift_status = false;
if (ev.shiftKey) {
shift_status = ev.shiftKey;
} else if (ev.modifiers) {
shift_status = !!(ev.modifiers & 4);
}
// At this point, you have the ASCII code in “which”,
// and shift_status is true if the shift key is pressed
}
Then it’s just a check to see if the ASCII code is between 65 and 90 and the shift key is not pressed. (You also need to do the same check if the ASCII code is between 97 (a) and 122 (z) and the shift key is pressed, because shifted letters are lower-case if Caps Lock is on.)
if (((which >= 65 && which <= 90) && !shift_status) ||
((which >= 97 && which <= 122) && shift_status)) {
// uppercase, no shift key
/* SHOW THE WARNING HERE */
} else {
/* HIDE THE WARNING HERE */
}
The warning can be implemented in many different ways: highlight the password field that the user is typing into, show a tooltip, display text next to the field. For simplicity, this code shows the warning as a previously created image, with appropriate alt text. Showing the warning means creating a new img tag with DOM scripting, dropping it into the page, and positioning it so that it’s next to the appropriate field. The image looks like this:
You know the position of the field the user is typing into (from its offsetTop and offsetLeft properties) and how wide it is (from its offsetWidth property), so use createElement to make the new img element, and then absolutely position it with style properties so that it appears in the appropriate place (near to the text field).
The image is a transparent PNG with an alpha channel, so that the drop shadow appears nicely over whatever else is on the page. Because Internet Explorer version 6 and below doesn’t handle transparent PNGs correctly, you need to use the AlphaImageLoader technique to make the image appear correctly.
newimage = document.createElement('img');
newimage.src = ""http://farm3.static.flickr.com/2145/2067574980_3ddd405905_o_d.png"";
newimage.style.position = ""absolute"";
newimage.style.top = (targ.offsetTop - 73) + ""px"";
newimage.style.left = (targ.offsetLeft + targ.offsetWidth - 5) + ""px"";
newimage.style.zIndex = ""999"";
newimage.setAttribute(""alt"", ""Warning: Caps Lock is on"");
if (newimage.runtimeStyle) {
// PNG transparency for IE
newimage.runtimeStyle.filter += ""progid:DXImageTransform.Microsoft.AlphaImageLoader(src='http://farm3.static.flickr.com/2145/2067574980_3ddd405905_o_d.png',sizingMethod='scale')"";
}
document.body.appendChild(newimage);
Note that the alt text on the image is also correctly set. Next, all these parts need to be pulled together. On page load, identify all the password fields on the page, and attach a keypress handler to each. (This only needs to be done for password fields because the user can see if Caps Lock is on in ordinary text fields.)
var inps = document.getElementsByTagName(""input"");
for (var i = 0, l = inps.length; i < l; i++) {
  if (inps[i].type == ""password"") { inps[i].onkeypress = keypress; } // attach the handler to password fields only
}
The “create an image” code from above should only be run if the image is not already showing, so instead of creating a newimage object, create the image and attach it to the password field so that it can be checked for later (and not shown if it’s already showing). For safety, all the code should be wrapped up in its own object, so that its functions don’t collide with anyone else’s functions. So, create a single object called capslock and make all the functions be named methods of the object:
var capslock = {
...
keypress: function(e) {
}
...
}
Also, the “create an image” code is saved into its own named function, show_warning(), and the converse “remove the image” code into hide_warning(). This has the advantage that developers can include the JavaScript library that has been written here, but override what actually happens with their own code, using something like:
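The example snippet is missing here, but the idea is a sketch along these lines (the capslock-msg element and the field argument are invented for illustration, not part of the original script):
capslock.show_warning = function (field) {
  // your own warning UI instead of the default image
  document.getElementById(""capslock-msg"").style.display = ""block"";
};
capslock.hide_warning = function (field) {
  document.getElementById(""capslock-msg"").style.display = ""none"";
};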
And that’s all. Simply include the JavaScript library in your pages, override what happens on a warning if that’s more appropriate for what you’re doing, and that’s all you need.
See the script in action.",2007,Stuart Langridge,stuartlangridge,2007-12-04T00:00:00+00:00,https://24ways.org/2007/capturing-caps-lock/,code
203,Jobs-to-Be-Done in Your UX Toolbox,"Part 1: What is JTBD?
The concept of a “job” in “Jobs-To-Be-Done” is neatly encapsulated by an oft-quoted line from Theodore Levitt:
“People want a quarter-inch hole, not a quarter inch drill”.
Even so, Don Norman pointed out that perhaps Levitt “stopped too soon” at what the real customer goal might be. In “The Design of Everyday Things”, he wrote:
“Levitt’s example of the drill implying that the goal is really a hole is only partially correct, however. When people go to a store to buy a drill, that is not their real goal. But why would anyone want a quarter-inch hole? Clearly that is an intermediate goal. Perhaps they wanted to hang shelves on the wall. Levitt stopped too soon. Once you realize that they don’t really want the drill, you realize that perhaps they don’t really want the hole, either: they want to install their bookshelves. Why not develop methods that don’t require holes? Or perhaps books that don’t require bookshelves.”
In other words, a “job” in JTBD lingo is a way to express a user need or provide a customer-centric problem frame that’s independent of a solution. As Tony Ulwick says:
“A job is stable, it doesn’t change over time.”
An example of a job is “tiding you over from breakfast to lunch.” You could hire a donut, a flapjack or a banana for that mid-morning snack—whatever does the job. If you can arrive at a clearly identified primary job (and likely some secondary ones too), you can be more creative in how you come up with an effective solution while keeping the customer problem in focus.
The team at Intercom wrote a book on their application of JTBD. In it, Des Traynor cleverly characterised how JTBD provides a different way to think about solutions that compete for the same job:
“Economy travel and business travel are both capable candidates applying for [the job: Get me face-to-face with my colleague in San Francisco], though they’re looking for significantly different salaries. Video conferencing isn’t as capable, but is willing to work for a far smaller salary. I’ve a hiring choice to make.”
So far so good: it’s relatively simple to understand what a job is, once you understand how it’s different from a “task”. Business consultant and Harvard professor Clay Christensen talks about the concept of “hiring” a product to do a “job”, and firing it when something better comes along. If you’re a company that focuses solutions on the customer job, you’re more likely to succeed. You’ll find these concepts often referred to as “Jobs-to-be-Done theory”. But the application of Jobs-to-Be-Done theory is a little more complicated; it comprises several related approaches.
I particularly like Jim Kalbach’s description of how JTBD is a “lens through which to understand value creation”. But it is also more. In my view, it’s a family of frameworks and methods—and perhaps even a philosophy.
Different facets in a family of frameworks
JTBD has its roots in market research and business strategy, and so it comes to the research table from a slightly different place compared to traditional UX or design research—we have our roots in human-computer interaction and ergonomics. I’ve found it helpful to keep in mind that the application of JTBD theory is an evolving beast, so it’s common to find contradictions across different resources. My own use of it has varied from project to project. In speaking to others who have adopted it in different measures, it seems that we have all applied it in somewhat multifarious ways. As we often like to say in interviews: there are no wrong answers.
Outcome Driven Innovation
Tony Ulwick’s version of the JTBD history began with Outcome Driven Innovation (ODI), and this approach is best outlined in his seminal article published in the Harvard Business Review in 2002. To understand his more current JTBD approach in his new book “Jobs to Be Done: Theory to Practice”, I actually found it beneficial to read his approach in the original 2002 article for a clearer reference point.
In the earlier article, Ulwick presented a rigorous approach that combines interviews, surveys and an “opportunity” algorithm—a sequence of steps to determine the business opportunity. ODI centres around working with “desired outcome statements” that you unearth through interviews, followed by a means to quantify the gap between importance and satisfaction in a survey to different types of customers.
Since 2008, Ulwick has written about using job maps to make sense of what the customer may be trying to achieve. In a recent article, he describes the aim of the activity is “to discover what the customer is trying to get done at different points in executing a job and what must happen at each juncture in order for the job to be carried out successfully.”
A job map is not strictly a journey map, however tempting it is to see it that way. From a UX perspective, it is one of many models we can use—and as our research team at Clearleft have found, how we use a model can depend on the nature of the jobs we’ve uncovered in interviews and the characteristics of the problem we’re attempting to solve.
Figure 1. Universal job map
Ulwick’s current methodology is outlined in his new book, where he describes a complete end-to-end process: from customer and competitor research to framing market and product strategy.
The Jobs-To-Be-Done Interview
Back in 2013, I attended a workshop by Chris Spiek and Bob Moesta from the ReWired Group on JTBD at the behest of a then-MailChimp colleague, and I came away excited about their approach to product research. It felt different from anything I’d done before and for the first time in years, I felt that I was genuinely adding something new to my research toolbox.
A key idea is that if you focus on the stories of those who switched to you, and those who switch away from you, you can uncover the core jobs through looking at these opposite ends of engagement.
This framework centres around the JTBD interview method, which harnesses the power of a narrative framework to elicit the real reasons why someone “hired” something to do a job—be it something physical like a new coffee maker, or a digital service, such as a to-do list app. As you interview, you are trying to unearth the context around the key moments on the JTBD timeline (Figure 2). A common approach is to begin from the point the customer might have purchased something, back to the point where the thought of buying this thing first occurred to them.
Figure 2. JTBD Timeline
Figure 3. The Four Forces
The Forces Diagram (Figure 3) is a post-interview analysis tool where you can map out what causes customers to switch to something new and what holds them back.
The JTBD interview is effective at identifying core and secondary jobs, as well as some context around the user need. Because this method is designed to extract the story from the interviewee, it’s a powerful way to facilitate recall. Having done many such interviews, I’ve noticed one interesting side effect: participants often remember more details later on after the conversation has formally ended. It is worth scheduling a follow-up phone call or keeping the channels open.
Strengths aside, it’s good to keep in mind that the JTBD interview is still primarily an interview technique, so you are relying on the context from the interviewee’s self-reported perspective. For example, a stronger research methodology combines JTBD interviews with contextual research and quantitative methods.
Job Stories
Alan Klement is credited for coming up with the term “job story” to describe the framing of jobs for product design by the team at Intercom:
“When … I want to … so I can ….”
Figure 4. Anatomy of a Job Story
Unlike a user story that traditionally frames a requirement around personas, job stories frame the user need based on the situation and context. Paul Adams, the VP of Product at Intercom, wrote:
“We frame every design problem in a Job, focusing on the triggering event or situation, the motivation and goal, and the intended outcome. […] We can map this Job to the mission and prioritise it appropriately. It ensures that we are constantly thinking about all four layers of design. We can see what components in our system are part of this Job and the necessary relationships and interactions required to facilitate it. We can design from the top down, moving through outcome, system, interactions, before getting to visual design.”
Systems of Progress
Apart from advocating using job stories, Klement believes that a core tenet of applying JTBD revolves around our desire for “self-betterment”—and that focusing on everyone’s desire for self-betterment is core to a successful strategy.
In his book, Klement takes JTBD further to being a tool for change through applying systems thinking. There, he introduces the systems of progress and how it can help focus product strategy approach to be more innovative.
Coincidentally, I applied similar thinking on mapping systemic change when we were looking to improve users’ trust with a local government forum earlier this year. It’s not just about capturing and satisfying the immediate job-to-be-done, it’s about framing the job so that you have a clear vision forward on how you can help your users improve their lives in the ways they want to.
This is really the point where JTBD becomes a philosophy of practice.
Part 2: Mixing It Up
There has been some misunderstanding about how adopting JTBD means ditching personas or some of our existing design tools or research techniques. This couldn’t have been more wrong.
Figure 5: Jim Kalbach’s JTBD model
Jim Kalbach has used Outcome-Driven Innovation for around 10 years. In a 2016 article, he presents a synthesised model of how to think about JTBD, with key elements from ODI, Christensen’s theories and the structure of the job story.
More interestingly, Kalbach has also combined the use of mental models with JTBD.
Claire Menke of UDemy has written a comprehensive article about using personas, JTBD and customer journey maps together in order to communicate a more complete story from the users’ perspective. Claire highlights an especially interesting point in her article as she describes her challenges:
“After much trial and error, I arrived at a foundational research framework to suit every team’s needs — allowing everyone to share the same holistic understanding, but extract the type of information most applicable to their work.”
In other words, the organisational context you are in likely can dictate what works best—after all the goal is to arrive at the best user experience for your audiences. Intercom can afford to go full-on on applying JTBD theory as a dominant approach because they are a start-up, but a large company or organisation with multiple business units may require a mix of tools, outputs and outcomes.
JTBD is an immensely powerful approach on many fronts—you’ll find many different references that list the ways you can apply JTBD. However, in the context of this discussion, it might also be useful to examine where it lies in our models of how we think about our UX and product processes.
JTBD in the UX ecosystem
Figure 6. The Elements of User Experience (source)
There are many ways we have tried to explain the UX discipline but I think Jesse James Garrett’s Elements of User Experience is a good place to begin.
I sometimes also use a little diagram to help me describe the different levels you might work at when you work through the complexity of designing and developing a product. A holistic UX strategy needs to address all the different levels for a comprehensive experience: your individual product UI, product features, product propositions and brand need to have a cohesive definition.
Figure 7. Which level of product focus?
We could, of course, also think about where it fits best within the double diamond.
Again, bearing in mind that JTBD has its roots in business strategy and market research, it is excellent at clarifying user needs, defining high-level specifications and content requirements. It is excellent for validating brand perception and value proposition—all the way down to your feature set. In other words, it can be extremely powerful all the way through to the halfway point of the second diamond. You could quite readily combine the different JTBD approaches because they have differences as much as overlaps. However, JTBD generally starts getting a little difficult to apply once we get to the details of UI design.
The clue lies in JTBD’s raison d’être: a job statement is solution independent. Hence, once we get to designing solutions, we potentially fall into an existential black hole.
That said, Jim Kalbach has a quick case study on applying JTBD to content design tucked inside the main article on a synthesised JTBD model. Alan Klement has a great example of how you could use UI to resolve job stories. You’ll notice that the available language of “jobs” drops off at around that point.
Job statements and outcome statements provide excellent “mini north-stars” as customer-oriented focal points, but purely satisfying these statements would not necessarily guarantee that you have created a seamless and painless user experience.
Playing well with others
You will find that JTBD plays well with Lean, and other strategy tools like the Value Proposition Canvas. With every new project, there is potential to harness the power of JTBD alongside our established toolbox.
When we need to understand complex contexts where cultural or socioeconomic considerations have to be taken into account, we are better placed with combining JTBD with more anthropological approaches. And while we might be able to evaluate if our product, website or app satisfies the customer jobs through interviews or surveys, without good old-fashioned usability testing we are unlikely to be able to truly validate why the job isn’t being represented as it should. In this case, individual jobs solved on the UI can be set up as hypotheses to be proven right or wrong.
The application of Jobs-to-be-Done is still evolving. I’ve found it to be very powerful and I struggle to remember what my UX professional life was like before I encountered it—it has completely changed my approach to research and design.
The fact JTBD is still evolving as a practice means we need to be watchful of dogma—there’s no right way to get a UX job done after all, it nearly always depends. At the end of the day, isn’t it about having the right tool for the right job?",2017,Steph Troeth,stephtroeth,2017-12-04T00:00:00+00:00,https://24ways.org/2017/jobs-to-be-done-in-your-ux-toolbox/,ux
88,"Think First, Code Later","This is a story that’s best told from the end, and it’s probably one you’re all familiar with.
You, or someone just like you, have been building a website, probably as part of a skilled and capable team. You’re a front-end developer, focusing on JavaScript – it’s either your sole responsibility or shared around. It’s quite a big job, been going on for months, and at last it feels like you’re reaching the end of it.
But, in a brief moment of downtime, you step back and take a look at the code as a whole. You notice that the folder called “jQuery plugins” suddenly looks rather full, and maybe there’s evidence of several methods of doing the same thing; there are loads of little niggly fixes in the bug tracker; and every place you use Ajax the structure of the data is slightly different. You sigh, and your shoulders droop slightly, and you think “Yeah, we’ll do that more cleanly next time.”
The thing is, you probably already know how to rewrite the start of this story to make the ending work better. This situation is not really anyone’s fault – it’s just an accumulation of all the things you decided along the way, all the things you agreed you’d fix later that have disappeared into the black hole of technical debt, and accommodating all the “can we just…?” requests from around the team and the client.
So, the solution to this is easy, right? More interminable planning meetings, more tightly controlled and documented specifications, less freedom to innovate, to try out new ideas and enjoy what you’re doing.
Wait, that sounds even less fun than the old way.
Minimum viable planning
Actually, planning and specifications are exactly what you need, but the way you go about them can make a real difference, both to the quality of your code, and the quality of your life as a developer. It can be as simple as being a little more thoughtful before starting on any new piece of functionality. Involve your whole team if possible, or at least those working on what you’re doing. Canvass opinions and work out what the solution to the problem might look like first, rather than coding speculatively to find out.
There are easy ways you can get into this habit of putting the thought and design up front, and it doesn’t have to mean spending more time on the project as a whole. It also doesn’t have to result in reams of functional specifications. Instead, let the code itself form the specification.
As JavaScript applications become more complex, unit testing is becoming ever more important. So embrace it, whether you prefer QUnit, or Mocha, or any of the other JavaScript testing frameworks out there. The TDD (or test-driven development) pattern is all about writing the tests first and then writing functional code to pass those tests; or, if you prefer, code that meets the specification given by the tests.
Sounds like a hassle at first, but once you get into the rhythm of it you should find that the time spent writing tests up front is no greater, and often significantly less, than the time you would have spent fixing bugs afterwards.
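As a rough sketch of what that looks like in practice (the formatPrice module and its behaviour are invented purely for illustration), a Mocha-style test written before the code might be:
var assert = require(""assert"");
// formatPrice doesn't exist yet; this test is its specification
var formatPrice = require(""../lib/format-price"");
describe(""formatPrice"", function () {
  it(""formats pence as pounds with a currency symbol"", function () {
    assert.equal(formatPrice(1250), ""£12.50"");
  });
});
Only once the test is in place do you write formatPrice itself, iterating until the test passes.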
If what you’re working on requires an API between client and server (usually Ajax but this can apply to any method of sending or receiving data) then spend a bit of time with the back-end developer to design the data contracts, before either of you cut any code. Work out what the API endpoints are going to be, and what the data structure you’ll get back from a certain endpoint looks like. A mock JSON object documented on a wiki is enough and it can be atomic. Don’t worry about planning the entire project at once, just plan enough to get on with your current tasks.
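For example, a contract for a hypothetical endpoint could be documented as simply as this (the endpoint and fields are invented for illustration):
GET /api/articles/42
{
  ""id"": 42,
  ""title"": ""Think First, Code Later"",
  ""published"": ""2012-12-07"",
  ""tags"": [""process"", ""javascript""],
  ""author"": { ""name"": ""Stephen Fulljames"" }
}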
Definition in this way doesn’t have to make your API immutable – change is still fine – but if you know roughly where you’re heading, then not only can your team’s efforts become more parallel, but you’re far more likely to have an easier time making it all work. And again, you have a specification – the shape of the data – to write your JavaScript against.
Putting everything together, you end up with a logical flow of development, from the specification agreed with the client (your backlog), to the specification agreed with your team (the API contract design), to the specification agreed with your code (your unit tests). Hopefully, there will be ample clues in all of this to inform your front-end library choices, because by then you should have a better picture of what you’re going to need.
What the framework?
As a JavaScript developer predominantly, these are the choices I’m particularly interested in – how and why you use JavaScript libraries and frameworks, both what you expect from them and what you actually get.
If we look back at how web development, and specifically JavaScript development has progressed – from the earliest days of using lines and lines of Dreamweaver code-barf to make an image rollover effect, to today’s large frameworks that handle working with the DOM, Ajax communication and visual effects all in one hit – the purpose of it is clear: to smooth over the inconsistent bumps between browsers and give a solid, reliable, predictable base on which to put our desired functionality.
Understanding what we expect the language as a specification to do, and matching that to what we observe browsers actually doing, and then smoothing out the differences, is a big job. Since the language and the implementations are also changing as we go along, it also feels like a never-ending job. So make full use of this valuable effort. Use jQuery or YUI or anything else you’re comfortable with, but it still pays to think early on about what you need your library to do and what the best choice is to meet that need.
I’ve come in to projects as a fixer and found, to take a recent example, that jQuery UI was being used just to provide a date picker and a modal effect. That’s a lot of code weight to provide two fairly simple pieces of functionality that could easily be covered by smaller plugins. Which isn’t to say that jQuery UI itself is a bad choice, but I could see that it had been included late on just to do those things, whereas a more considered approach would have been to put the library in early and use it more universally.
There are other choices, too. If you automatically throw in jQuery (or whatever your favourite main library is) to a small site with limited functionality, you might only touch a tiny fraction of its scope. In my own development I started looking at what I actually needed from a JavaScript library. For a simple project like What the Framework?, all jQuery needed to do was listen for .ready() and then perform some light DOM selection before handing over to a client-side MVC framework. So perhaps there was another way to go about this while still avoiding the cross-browser headaches.
Deleting jQuery
But the jQuery pattern is compelling and familiar. And once you’re comfortable with something, it’s a bit of an effort to force yourself out of that comfort zone and learn. But looking back at my whole career, I realised that I’ve relearned pretty much everything I do, probably several times, since I started out. So it’s worth keeping in mind that learning and trying new things is how development has advanced to where it is now, and how it will keep advancing in the future.
In the end this led me to Ender, which is billed as an NPM-style package manager for the browser, letting you search for and manage small, loosely coupled modules and their dependencies, and compile them to one file with a common API.
For What the Framework I ended up with a set of DOM tools, Underscore and Knockout, all minified into 25kb of JavaScript. This compares really well with 32kb minified for jQuery on its own, and Ender’s use of the dollar variable and the jQuery-like syntax in many modules makes switching over a low-friction experience.
On more complex projects, where you’re really going to use all the features of something like jQuery, but want to minimise the loading of other dependencies when you don’t need them, I’ve recently started looking at Jam. This uses the RequireJS pattern to compile commonly used code into a library file and then manage dependencies and bring in others on a per-page basis depending on how you need it. Again, it all comes down to thinking about what you need and using it only when you need it. And the configurability of tools like Ender or Jam allow you to be responsive to changing requirements as your project grows.
There is no right answer
That’s not to say this way of working automatically makes things easier. It doesn’t. On a large, long-running project or one where future functionality is unknown, it’s still hard to predict and plan for everything – at least until crystal balls as a service come about. But by including strong engineering practices in your front-end, and trying to minimise technical debt, you’re at least giving yourself a decent safety net to guard against the “can we just…?” tendencies that are a fact of life.
So, really, this is not an advocation of using a particular technology or framework, because I can’t tell you what works for you or your team. But what I can tell you is that working this way round has done wonders for my productivity and enthusiasm, both for code quality and for trying out new libraries. Give it a go, you might like it!",2012,Stephen Fulljames,stephenfulljames,2012-12-07T00:00:00+00:00,https://24ways.org/2012/think-first-code-later/,process
201,Lint the Web Forward With Sonarwhal,"Years ago, when I was in a senior in college, much of my web development courses focused on two things: the basics like HTML and CSS (and boy, do I mean basic), and Adobe Flash. I spent many nights writing ActionScript 3.0 to build interactions for the websites that I would add to my portfolio. A few months after graduating, I built one website in Flash for a client, then never again. Flash was dying, and it became obsolete in my résumé and portfolio.
That was my first lesson in the speed at which things change in technology, and what a daunting realization that was as a new graduate looking to enter the professional world. Now, seven years later, I work on the Microsoft Edge team where I help design and build a tool that would have lessened my early career anxieties: sonarwhal.
Sonarwhal is a linting tool, built by and for the web community. The code is open source and lives under the JS Foundation. It helps web developers and designers like me keep up with the constant change in technology while simultaneously teaching how to code better websites.
Introducing sonarwhal’s mascot Nellie
Good web development is hard. It is more than HTML, CSS, and JavaScript: developers are expected to have a grasp of accessibility, performance, security, emerging standards, and more, all while refreshing this knowledge every few months as the web evolves. It’s a lot to keep track of.
Web development is hard
Staying up-to-date on all this knowledge is one of the driving forces for developing this scanning tool. Whether you are just starting out, are a student, or you have over a decade of experience, the sonarwhal team wants to help you build better websites for all browsers.
Currently sonarwhal checks for best practices in five categories: Accessibility, Interoperability, Performance, PWAs, and Security. Each check is called a “rule”. You can configure them and even create your own rules if you need to follow some specific guidelines for your project (e.g. validate analytics attributes, title format of pages, etc.).
You can use sonarwhal in two ways:
An online version, that provides a quick and easy way to scan any public website.
A command line tool, if you want more control over the configuration, or want to integrate it into your development flow.
The Online Scanner
The online version offers a streamlined way to scan a website; just enter a URL and you will get a web page of scan results with a permalink that you can share and revisit at any time.
The online version of sonarwhal
When my team works on a new rule, we spend the bulk of our time carefully researching each subject, finding sources, and documenting it rather than writing the rule’s code. Not only is it important that we get you the right results, but we also want you to understand why something is failing. Next to each failing rule you’ll find a link to its detailed documentation, explaining why you should care about it, what exactly we are testing, examples that pass and examples that don’t, and useful links to even more in-depth documentation if you are interested in the subject.
We hope that between reading the documentation and continued use of sonarwhal, developers can stay on top of best practices. As devs continue to build sites and identify recurring issues that appear in their results, they will hopefully start to automatically include those missing elements or fix those pieces of code that are producing errors. This also isn’t a one-way communication: the documentation is not only available on the sonarwhal site, but also on GitHub for editing so you can help us make it even better!
A results report
The current configuration for the online scanner is very strict, so it might hurt your feelings (it did when I first tested it on my personal website). But you can configure sonarwhal to any level of strictness as well as customize the command line tool to your needs!
Sonarwhal’s CLI
The CLI gives you full control of sonarwhal: what rules to use, tweaks to them, domains that are out of your control, and so on. You will need the latest node LTS (v8) or Stable (v9) and your favorite package manager, such as npm:
npm install -g sonarwhal
You can now run sonarwhal from anywhere via:
sonarwhal https://example.com
Using the CLI
The configuration is done via a .sonarwhalrc file. When analyzing a site, if no file is available, you will be prompted to answer a series of questions:
What connector do you want to use? Connectors are what sonarwhal uses to access a website and gather all the information about the requests, resources, HTML, etc. Currently it supports jsdom, Microsoft Edge, and Google Chrome.
What formatter? This is how you want to see the results: summary, stylish, etc. Make sure to look at the full list. Some are concise, perfect for a quick build assessment, while others are more verbose and informative.
Do you want to use the recommended rules configuration? Rules are the things we are validating. Unless you’ve read the documentation and know what you are doing, first timers should probably use the recommended configuration.
What browsers are you targeting? One of the best features of sonarwhal is that rules can adapt their feedback depending on your targeted browsers, suggesting to add or remove things. For example, the rule “Highest Document Mode” will tell you to add the “X-UA-Compatible” header if IE10 or lower is targeted or remove if the opposite is true.
sonarwhal configuration generator questions
Once you answer all these questions the scan will start and you will have a .sonarwhalrc file similar to the following:
{
""connector"": {
""name"": ""jsdom"",
""options"": {
""waitFor"": 1000
}
},
""formatters"": ""stylish"",
""rulesTimeout"": 120000,
""rules"": {
""apple-touch-icons"": ""error"",
""axe"": ""error"",
""content-type"": ""error"",
""disown-opener"": ""error"",
""highest-available-document-mode"": ""error"",
""validate-set-cookie-header"": ""warning"",
// ...
}
}
You should see the scan initiate in the command line and within a few seconds the results should start to appear. Remember, the scan results will look different depending on which formatter you selected so try each one out to see which one you like best.
sonarwhal results on my website and hurting my feelings 💔
Now that you have a list of errors, you can get to work improving the site! Note though, that when you scan your website, it scans all the resources on that page and if you’ve added something like analytics or fonts hosted elsewhere, you are unable to change those files. You can configure the CLI to ignore files from certain domains so that you are only getting results for files you are in control of.
The documentation should give enough guidance on how to fix the errors, but if it’s insufficient, please help us and suggest edits or contribute back to it. This is a community effort and chances are someone else will have the same question as you.
When I scanned both my websites, sonarwhal alerted me to not having an Apple Touch Icon. If I search on the web as opposed to using the sonarwhal documentation, the top 3 results give me outdated information: they say I need to include many different icon sizes. In fact, I don’t need to include all the different size icons that target different devices. Declaring one icon sized 180px x 180px will provide a large enough icon for devices and it will scale down as appropriate for people on older devices.
The information at the top of the search results isn’t always the correct answer to an issue and we don’t want you to have to search through outdated documentation. As sonarwhal’s capabilities expand, the goal is for it to be the one stop shop for helping preflight your website.
The journey up until now and looking forward
On the Microsoft Edge team, we’re passionate about empowering developers to build great websites. Every day we see so many sites come through our issue tracker. (Thanks for filing those bugs, they help us make Microsoft Edge better and better!) Some issues we see over and over are honest mistakes or outdated ‘best practices’ that could be avoided, so we built this tool to help everyone help make the web a better place.
When we decided to create sonarwhal, we wanted to create a tool that would help developers write better and more up-to-date code for their websites. We want sonarwhal to be useful to anyone so, early on, we defined three guiding principles we’ve used along the way:
Community Driven. We build for the community’s best interests. The web belongs to everyone and this project should too. Not only is it open source, we’ve also donated it to the JS Foundation and have an inclusive governance model that welcomes the collaboration of anyone, individual or company.
User Centric. We want to put the user at the center, making sonarwhal configurable for your needs and easy to use no matter what your skill level is.
Collaborative. We didn’t want to reinvent the wheel, so we collaborated with existing tools and services that help developers build for the web. Some examples are aXe, snyk.io, Cloudinary, etc.
This is just the beginning and we still have lots to do. We’re hard at work on a backlog of exciting features for future releases, such as:
New rules for a variety of areas like performance, accessibility, security, progressive web apps, and more.
A plug-in for Visual Studio Code: we want sonarwhal to help you write better websites, and what better moment than when you are in your editor.
Configuration options for the online service: as we fine tune the infrastructure, the rule configuration for our scanner is locked, but we look forward to adding CLI customization options here in the near future.
This is a tool for the web community by the web community so if you are excited about sonarwhal, making a better web, and want to contribute, we have a few issues where you might be able to help. Also, don’t forget to check the rest of the sonarwhal GitHub organization. PRs are always welcome and appreciated!
Let us know what you think about the scanner at @NarwhalNellie on Twitter and we hope you’ll help us lint the web forward!",2017,Stephanie Drescher,stephaniedrescher,2017-12-02T00:00:00+00:00,https://24ways.org/2017/lint-the-web-forward-with-sonarwhal/,code
43,Content Production Planning,"While everyone agrees that getting the content of a website right is vital to its success, unless you’re lucky enough to have an experienced editor or content strategist on board, planning content production often seems to fall through the cracks. One reason is that, for most of the team, it feels like someone else’s problem. Not necessarily a specific person’s problem. Just someone else’s. It’s only when everyone starts urgently asking when the content is going to be ready, that it becomes clear the answer is, “Not as soon as we’d like it”.
The good news is that there are some quick and simple things you can do, even if you’re not the official content person on a project, to get everyone on the same content planning page.
Content production planning boils down to answering three deceptively simple questions:
What content do you need?
How much of it do you need?
Who’s going to make it?
Even if it’s not your job to come up with the answers, by asking these questions early enough and agreeing who is going to come up with the answers, you’ll be a long way towards avoiding the last-minute content problems which so often plague projects.
How much content do we need?
People tend to underestimate two crucial things about content: how much content they need, and how long that content takes to produce.
When I ask someone how big their website is – how many pages it contains – I usually double or triple the answer I get. That’s because almost everyone’s mental model of their website greatly underestimates its true size. You can see the problem for yourself if you look at a site map. Site maps are great at representing a mental model of a website. But because they’re a deliberate simplification they naturally lead us to underestimate how much content is involved in populating them.
Several years ago I was asked to help a client create a new microsite (their word) which they wanted ready in two weeks for a conference they were attending. Here’s the site map they had in mind. At first glance it looks like a pretty small website. Maybe twenty to thirty pages?
That’s what the client thought.
But see those boxes which are multiple boxes stacked on top of one another, for product categories, descriptions and supporting material? They’re known as page stacks, and page stacks are the content strategy equivalent of Here Be Dragons.
Say we have:
five product categories
each with five products
which all have two or three supporting documents
Those are still fairly small numbers. But small numbers multiplied by other small numbers tend to lead to big numbers.
5 categories = 5 category descriptions
plus
5 categories × 5 products each = 25 product descriptions
plus
25 products × 2.5 (average) supporting documents = 63 supporting documents
equals
93 pages
Suddenly our twenty- or thirty-page website is running towards one hundred.
That’s probably enough to get most project teams to sit up and take notice. But there’s still the danger of underestimating how long it’s going to take to create the content. After all, assuming the supporting documents already exist in some form, there are only about twenty-five to thirty pages of new copy to write.
How much work is it?
Again, we have the problem that small numbers when multiplied by other small numbers tend to lead to big numbers. Let’s make a rough guess that it’ll take four hours to write each product category and description page we need. That feels a little conservative if we’re writing stuff from scratch, but assuming the person doing it already knows the products fairly well it’s not unreasonable.
30 pages × 4 hours each = 120 hours
120 hours ÷ 7.5 working hours a day = 16 days
Ouch.
At this point it’s pretty clear we’re not getting this site launched in two weeks.
The goal is the conversation
By breaking down the site into its content components, and putting some rough estimates on how long each might take to produce, the client instantly realised that there was no way they would be ready to launch it in two weeks. Although we still didn’t know exactly when it would be ready, getting to that realisation right at the start of the project was a major win for everybody. Without it, the design agency would have bust a gut to get the design, front-end and CMS all done in double-quick time, only to find it was all for nothing as barely half the content was ready. As it was, an early discussion about content, albeit a brief one, bought everyone time to tackle the project properly, without pulling any long nights or working weekends.
If you haven’t been able to get people to discuss content plans for the project, these kinds of rough estimates should give you enough evidence to get everyone to start taking it seriously. Your goal is to get everyone on the project to a place where they are ready to talk in detail about who is going to create this content, and how long it’s really going to take them, and to get to those conversations before lack of content becomes a problem.
Be careful though. It’s best to talk in ranges and round numbers when your estimates are this uncertain. And watch those multipliers. Given small numbers multiplied by other small numbers lead to big numbers, changing just one number can greatly change the overall estimate. I like to run a couple of different scenarios to check what things look like if I’ve under- or overestimated either how many pages we’re going to need, or how long they’re going to take to create. For example:
Top end: 30 pages × 5 hours = 150 hours, or 20 days
Bottom end: 25 pages × 4 hours = 100 hours, or 13.3 days
So rather than say, “I estimate the content will take around sixteen days to produce”, I’m going to say, “I think the content will take about three to four weeks to produce”. Even with qualifiers like estimate and around, sixteen days sounds too precise. Whereas three to four weeks instantly conveys that this is just a rough figure.
Who’s going to make it?
So, people tend to underestimate two crucial things about content: how much content they need, and how long content takes to write. At this stage, you’re still in danger of the latter, because it’s tempting to simply estimate how much time content takes to write (or record, if we’re talking audio or visual content), and overlook all the other work that needs to go on around it.
Take 24 ways as an example. In terms of our three deceptively simple questions: the “what” is practical articles about web design; the “how many” is twenty-four, one for each day of Advent; and the “who” is experts working on the web, one to write each article.
But there’s another who you might not have considered.
Someone needs to select those authors in the first place, make sure they deliver their articles on time (and find someone to replace them if they don’t), review drafts, copy-edit and proofread final versions, upload them to the site, promote them, keep an eye on the comments and make sure there are still presents under the tree on Christmas morning.
Even if each of those tasks only takes an hour or so, it then needs multiplying by twenty-four (except the presents, obviously). And as we’ve already seen, small numbers multiplied by small numbers quickly turn into much bigger numbers. Just a few hours per article, when multiplied by twenty-four articles, easily multiplies up to days or even weeks of effort.
To get a more accurate estimate of how long the different kinds of content are going to take, you need to break down the content production work into its constituent stages, starting with planning, moving on through the main work of creation, to reviewing, approvals and finally publishing. You need to think about who needs to be involved at each step, and how much time they’ll need to do their bit.
Taken together, these things make up your content workflow. The workflow will be different for each organisation, but might look something like this:
Eddie the web editor will work out the key messages and objectives for each page, and agree them with Mo the marketing director.
Eddie will then get Cal, the copywriter, to write the first draft.
As part of that, Cal will interview Sam the subject expert to understand the intricacies of the subject and get all the facts straight.
Once Cal’s done the first draft, it’ll go to Sam to check for accuracy, while Eddie reviews it for style and message.
Once Cal has incorporated their feedback it’s time to get Mo to have a look at the final draft.
If Mo’s happy, it’ll get a final proofread, be uploaded to the CMS, and Mo will give the final sign-off and release it for publishing.
You can plot this on a table, with the stages of the content production process down the side, and the key roles or personnel along the top. Then the team can estimate how much time they think each of them needs at each stage.
Outline: define key messages and objectives – 30 min (Eddie)
Review outline – 15 min (Mo)
First draft – 30 min (Sam, interview) and 3 hours (Cal, writing)
Review 1st draft – 30 min (Sam, accuracy) and 30 min (Eddie, style and message)
2nd draft – 1 hour (Cal)
Review 2nd draft – 15 min, 15 min and 15 min
Final amendments – 30 min
Proofread – 15 min
Upload – 15 min
Sign-off – 10 min (Mo)
TOTAL – Mo (marketing director): 40 min; Sam (subject expert): 1 hour 15 min; Eddie (web editor): 1 hour 30 min; Cal (copywriter): 4 hours 45 min
You can then bring out your calculator again, and come up with some more big scary numbers showing how much time it’s going to take for the whole team to get all the content needed not just written, but also planned, reviewed, approved and published.
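If you like, you can even script that calculator work. Here’s a tiny sketch (in JavaScript, purely illustrative, using the per-person totals from the example table above and the thirty-page estimate from earlier):
// A rough sketch, not part of the original article: multiply each person's
// per-page workflow time by the number of pages to see the team-wide effort.
const perPageMinutes = {
  Mo: 40,     // marketing director
  Sam: 75,    // subject expert
  Eddie: 90,  // web editor
  Cal: 285    // copywriter (4 hours 45 minutes)
};
const pages = 30; // illustrative page count
for (const [person, minutes] of Object.entries(perPageMinutes)) {
  const days = (minutes * pages) / 60 / 7.5; // 7.5 working hours per day
  console.log(`${person}: roughly ${days.toFixed(1)} days`);
}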
With an experienced team you can run this exercise as a group workshop and get some fairly accurate estimates pretty quickly. If this is all a bit new to you, check out Gather Content’s Content Production Planning for Agencies ebook for a useful guide to common content roles, ballpark estimates for how much time each one needs on a typical piece of content, and how to run a process and estimating workshop to dig into them in more detail.
On a small team, one person might play many roles, but you should still sanity-check your estimates by breaking down the process and putting a rough estimate on each stage. With only a couple of people involved, it’s even easier to only include the core activity like writing or recording in your estimates, and forget to allow time for the planning, reviewing, proofreading, publishing and promoting you’ll still need to do. And even in a team of one, if at all possible you should find at least one other person to act as a second pair of eyes, and give anything you produce a quick once-over and proofread before it’s published.
Depending on the kind of content you’re making, you should also consider what will happen after it’s published. The full content life cycle should include promotion, monitoring and regular reviews to make sure content stays accurate and up to date. Making sure you have the time and resources available to do all those things for each piece of content is essential for creating a sustainable content programme.
The proof of the pudding
Even after digging into workflow and getting the whole team involved in estimating, you’re still largely in the realm of the guesstimate. The good news, though, is that you can quite quickly start finding out if your guesstimates are right or not. As soon as you can, pilot the production process with some real content. This is a double-win: you start finding out how long it really takes to produce all this fab new content, and you get real content to work with in designs and prototypes.
Once you’ve run a few things through your process, you’ll be able to refine your estimates, confirm your workflow, and give everyone involved a clear idea of when it will all be ready, and what you need from them.
Keeping it all on track
At this point I like to pull everything together into the content strategist’s favourite tool: the spreadsheet.
A simple content production checklist is a bit like a content inventory or audit, but for the content you don’t yet have, not the stuff already done. You can grab an example here.
Each piece of content gets its own row, with columns for basic information like page title, ID (which should match the site map), and who’s responsible for making it. You can capture simple details like target audience and key messages here too, though for more complex content, page description tables like those described by Relly Annett-Baker in “Extracting the Content” may be a better tool to use. Just adapt these columns to whatever makes sense for your content.
I then have columns to track where each piece is in the production process. I usually keep this simple, with a column each to mark whether it’s draft, final or uploaded. The status column on the left automatically shows the item’s status, using a simple traffic light colour scheme for whether the item is still to do (red), in draft (amber), or done (green). Seeing the whole thing slowly turn from red to green is a nice motivator.
If you want to track the workflow in more detail, a kanban board in a tool like Trello is a great way for a team to collaborate on content production, track each item’s progress, and keep an eye out for bottlenecks and delays.
Getting to the content strategy conversation
It’s a relatively simple exercise, then, to decide not just what kinds of pages you need, but also how many of them: put some rough estimates of effort on the tasks needed to create those pages – not just the writing, but all the other stages of planning, reviewing, approving, publishing and promoting – and then multiply all those things together. This will quickly bring some reality to grand visions and overambitious plans. Do it early enough, and even when the final big scary number is a lot bigger and scarier than everyone thought, you’ll still have time to do something about it.
As well as getting everyone on board for some proper content planning activities, that big scary number is your opportunity to get to the real core questions of content strategy: do we really need all this content? Where can existing content be reused and repurposed? How do we prioritise our efforts? What really matters to our readers and users?
Time and again, case studies show that less content delivers more: more leads, more sales, more self-service support and savings in the call centre. Although that argument is primarily one you should make from a good-for-the-users perspective, it doesn’t hurt to be able to make it from the cheaper-for-the-business perspective as well, and to have some big scary numbers to back that up.",2014,Sophie Dennis,sophiedennis,2014-12-17T00:00:00+00:00,https://24ways.org/2014/content-production-planning/,content
168,Unobtrusively Mapping Microformats with jQuery,"Microformats are everywhere. You can’t shake an electronic stick these days without accidentally poking a microformat-enabled site, and many developers use microformats as a matter of course. And why not? After all, why invent your own class names when you can re-use pre-defined ones that give your site extra functionality for free?
Nevertheless, while it’s good to know that users of tools such as Tails and Operator will derive added value from your shiny semantics, it’s nice to be able to reuse that effort in your own code.
We’re going to build a map of some of my favourite restaurants in Brighton. Fitting with the principles of unobtrusive JavaScript, we’ll start with a semantically marked up list of restaurants, then use JavaScript to add the map, look up the restaurant locations and plot them as markers.
We’ll be using a couple of powerful tools. The first is jQuery, a JavaScript library that is ideally suited for unobtrusive scripting. jQuery allows us to manipulate elements on the page based on their CSS selector, which makes it easy to extract information from microformats.
The second is Mapstraction, introduced here by Andrew Turner a few days ago. We’ll be using Google Maps in the background, but Mapstraction makes it easy to change to a different provider if we want to later.
Getting Started
We’ll start off with a simple collection of microformatted restaurant details, representing my seven favourite restaurants in Brighton. The full, unstyled list can be seen in restaurants-plain.html. Each restaurant listing looks like this:
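Roughly speaking, each listing is a list item along these lines. This is only a sketch using one of the Brighton restaurants that appears later on; the heading level and some structural details are assumptions rather than the exact original markup:
<li class=""vcard"">
  <h3 class=""fn org"">E-Kagen</h3>
  <div class=""adr"">
    <p class=""street-address"">22-23 Sydney Street</p>
    <p>Brighton, UK</p>
    <p class=""postal-code"">BN1 4EN</p>
  </div>
  <p>Telephone: +44 (0)1273 687 068</p>
</li>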
Since we’re dealing with a list of restaurants, each hCard is marked up inside a list item. Each restaurant is an organisation; we signify this by placing the classes fn and org on the element surrounding the restaurant’s name (according to the hCard spec, setting both fn and org to the same value signifies that the hCard represents an organisation rather than a person).
The address information itself is contained within a div of class adr. Note that the HTML address element is not suitable here for two reasons: firstly, it is intended to mark up contact details for the current document rather than generic addresses; secondly, address is an inline element and as such cannot contain the paragraph elements used here for the address information.
A nice thing about microformats is that they provide us with automatic hooks for our styling. For the moment we’ll just tidy up the whitespace a bit; for more advanced style tips consult John Allsop’s guide from 24 ways 2006.
.vcard p {
margin: 0;
}
.adr {
margin-bottom: 0.5em;
}
To plot the restaurants on a map we’ll need latitude and longitude for each one. We can find this out from their address using geocoding. Most mapping APIs include support for geocoding, which means we can pass the API an address and get back a latitude/longitude point. Mapstraction provides an abstraction layer around these APIs which can be included using the following script tag:
While we’re at it, let’s pull in the other external scripts we’ll be using:
That’s everything set up: let’s write some JavaScript!
In jQuery, almost every operation starts with a call to the jQuery function. The function simulates method overloading to behave in different ways depending on the arguments passed to it. When writing unobtrusive JavaScript it’s important to set up code to execute when the page has loaded to the point that the DOM is available to be manipulated. To do this with jQuery, pass a callback function to the jQuery function itself:
jQuery(function() {
// This code will be executed when the DOM is ready
});
Initialising the map
The first thing we need to do is initialise our map. Mapstraction needs a div with an explicit width, height and ID to show it where to put the map. Our document doesn’t currently include this markup, but we can insert it with a single line of jQuery code:
jQuery(function() {
// First create a div to host the map
var themap = jQuery('<div id=""themap""></div>').css({
'width': '90%',
'height': '400px'
}).insertBefore('ul.restaurants');
});
While this is technically just a single line of JavaScript (with line-breaks added for readability) it’s actually doing quite a lot of work. Let’s break it down in to steps:
var themap = jQuery('<div id=""themap""></div>')
Here’s jQuery’s method overloading in action: if you pass it a string that starts with a < it assumes that you wish to create a new HTML element. This provides us with a handy shortcut for the more verbose DOM equivalent:
var themap = document.createElement('div');
themap.id = 'themap';
Next we want to apply some CSS rules to the element. jQuery supports chaining, which means we can continue to call methods on the object returned by jQuery or any of its methods:
var themap = jQuery('<div id=""themap""></div>').css({
'width': '90%',
'height': '400px'
})
Finally, we need to insert our new HTML element into the page. jQuery provides a number of methods for element insertion, but in this case we want to position it directly before the ul element we are using to contain our restaurants. jQuery’s insertBefore() method takes a CSS selector indicating an element already on the page and places the current jQuery selection directly before that element in the DOM.
var themap = jQuery('<div id=""themap""></div>').css({
'width': '90%',
'height': '400px'
}).insertBefore('ul.restaurants');
Finally, we need to initialise the map itself using Mapstraction. The Mapstraction constructor takes two arguments: the first is the ID of the element used to position the map; the second is the mapping provider to use (in this case google ):
// Initialise the map
var mapstraction = new Mapstraction('themap','google');
We want the map to appear centred on Brighton, so we’ll need to know the correct co-ordinates. We can use www.getlatlon.com to find both the co-ordinates and the initial map zoom level.
// Show map centred on Brighton
mapstraction.setCenterAndZoom(
new LatLonPoint(50.82423734980143, -0.14007568359375),
15 // Zoom level appropriate for Brighton city centre
);
We also want controls on the map to allow the user to zoom in and out and toggle between map and satellite view.
mapstraction.addControls({
zoom: 'large',
map_type: true
});
Adding the markers
It’s finally time to parse some microformats. Since we’re using hCard, the information we want is wrapped in elements with the class vcard. We can use jQuery’s CSS selector support to find them:
var vcards = jQuery('.vcard');
Now that we’ve found them, we need to create a marker for each one in turn. Rather than using a regular JavaScript for loop, we can instead use jQuery’s each() method to execute a function against each of the hCards.
jQuery('.vcard').each(function() {
// Do something with the hCard
});
Within the callback function, this is set to the current DOM element (in our case, the list item). If we want to call the magic jQuery methods on it we’ll need to wrap it in another call to jQuery:
jQuery('.vcard').each(function() {
var hcard = jQuery(this);
});
The Google maps geocoder seems to work best if you pass it the street address and a postcode. We can extract these using CSS selectors: this time, we’ll use jQuery’s find() method which searches within the current jQuery selection:
var streetaddress = hcard.find('.street-address').text();
var postcode = hcard.find('.postal-code').text();
The text() method extracts the text contents of the selected node, minus any HTML markup.
We’ve got the address; now we need to geocode it. Mapstraction’s geocoding API requires us to first construct a MapstractionGeocoder, then use the geocode() method to pass it an address. Here’s the code outline:
var geocoder = new MapstractionGeocoder(onComplete, 'google');
geocoder.geocode({'address': 'the address goes here'});
The onComplete function is executed when the geocoding operation has been completed, and will be passed an object with the resulting point on the map. We just want to create a marker for the point:
var geocoder = new MapstractionGeocoder(function(result) {
var marker = new Marker(result.point);
mapstraction.addMarker(marker);
}, 'google');
For our purposes, joining the street address and postcode with a comma to create the address should suffice:
geocoder.geocode({'address': streetaddress + ', ' + postcode});
There’s one last step: when the marker is clicked, we want to display details of the restaurant. We can do this with an info bubble, which can be configured by passing in a string of HTML. We’ll construct that HTML using jQuery’s html() method on our hcard object, which extracts the HTML contained within that DOM node as a string.
var marker = new Marker(result.point);
marker.setInfoBubble(
'<div class=""bubble"">' + hcard.html() + '</div>'
);
mapstraction.addMarker(marker);
We’ve wrapped the bubble in a div with class bubble to make it easier to style. Google Maps can behave strangely if you don’t provide an explicit width for your info bubbles, so we’ll add that to our CSS now:
.bubble {
width: 300px;
}
That’s everything we need: let’s combine our code together:
jQuery(function() {
// First create a div to host the map
var themap = jQuery('<div id=""themap""></div>').css({
'width': '90%',
'height': '400px'
}).insertBefore('ul.restaurants');
// Now initialise the map
var mapstraction = new Mapstraction('themap','google');
mapstraction.addControls({
zoom: 'large',
map_type: true
});
// Show map centred on Brighton
mapstraction.setCenterAndZoom(
new LatLonPoint(50.82423734980143, -0.14007568359375),
15 // Zoom level appropriate for Brighton city centre
);
// Geocode each hcard and add a marker
jQuery('.vcard').each(function() {
var hcard = jQuery(this);
var streetaddress = hcard.find('.street-address').text();
var postcode = hcard.find('.postal-code').text();
var geocoder = new MapstractionGeocoder(function(result) {
var marker = new Marker(result.point);
marker.setInfoBubble(
'<div class=""bubble"">' + hcard.html() + '</div>'
);
mapstraction.addMarker(marker);
}, 'google');
geocoder.geocode({'address': streetaddress + ', ' + postcode});
});
});
Here’s the finished code.
There’s one last shortcut we can add: jQuery provides the $ symbol as an alias for jQuery. We could just go through our code and replace every call to jQuery() with a call to $(), but this would cause incompatibilities if we ever attempted to use our script on a page that also includes the Prototype library. A more robust approach is to start our code with the following:
jQuery(function($) {
// Within this function, $ now refers to jQuery
// ...
});
jQuery cleverly passes itself as the first argument to any function registered to the DOM ready event, which means we can assign a local $ variable shortcut without affecting the $ symbol in the global scope. This makes it easy to use jQuery with other libraries.
Limitations of Geocoding
You may have noticed a discrepancy creep in to the last example: whereas my original list included seven restaurants, the geocoding example only shows five. This is because the Google Maps geocoder incorporates a rate limit: more than five lookups in a second and it starts returning error messages instead of regular results.
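One crude workaround, if you did want to stay with geocoding (this is just a sketch, not part of the original example), is to queue the lookups so they’re spaced out over time rather than fired all at once, though as we’ll see in a moment there’s a better answer:
// A sketch only: space the geocoding requests out to stay under the
// roughly five-per-second limit described above. Assumes the mapstraction
// object from earlier and an array of address strings.
function geocodeSlowly(addresses, delayMs) {
  jQuery.each(addresses, function(i, address) {
    setTimeout(function() {
      var geocoder = new MapstractionGeocoder(function(result) {
        mapstraction.addMarker(new Marker(result.point));
      }, 'google');
      geocoder.geocode({'address': address});
    }, i * delayMs);
  });
}
// e.g. geocodeSlowly(allTheAddresses, 500); // one lookup every half a second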
In addition to this problem, geocoding itself is an inexact science: while UK postcodes generally get you down to the correct street, figuring out the exact point on the street from the provided address usually isn’t too accurate (although Google do a pretty good job).
Finally, there’s the performance overhead. We’re making five geocoding requests to Google for every page served, even though the restaurants themselves aren’t likely to change location any time soon. Surely there’s a better way of doing this?
Microformats to the rescue (again)! The geo microformat suggests simple classes for including latitude and longitude information in a page. We can add specific points for each restaurant using the following markup:
E-Kagen
22-23 Sydney Street
Brighton, UK
BN1 4EN
Telephone: +44 (0)1273 687 068
Lat/Lon:
50.827917,
-0.137764
As before, I used www.getlatlon.com to find the exact locations – I find satellite view is particularly useful for locating individual buildings.
Latitudes and longitudes are great for machines but not so useful for human beings. We could hide them entirely with display: none, but I prefer to merely de-emphasise them (someone might want them for their GPS unit):
.vcard .geo {
margin-top: 0.5em;
font-size: 0.85em;
color: #ccc;
}
It’s probably a good idea to hide them completely when they’re displayed inside an info bubble:
.bubble .geo {
display: none;
}
We can extract the co-ordinates in the same way we extracted the address. Since we’re no longer geocoding anything our code becomes a lot simpler:
$('.vcard').each(function() {
var hcard = $(this);
var latitude = hcard.find('.geo .latitude').text();
var longitude = hcard.find('.geo .longitude').text();
var marker = new Marker(new LatLonPoint(latitude, longitude));
marker.setInfoBubble(
'<div class=""bubble"">' + hcard.html() + '</div>'
);
mapstraction.addMarker(marker);
});
And here’s the finished geo example.
Further reading
We’ve only scratched the surface of what’s possible with microformats, jQuery (or just regular JavaScript) and a bit of imagination. If this example has piqued your interest, the following links should give you some more food for thought.
The hCard specification
Notes on parsing hCards
jQuery for JavaScript programmers – my extended tutorial on jQuery.
Dann Webb’s Sumo – a full JavaScript library for parsing microformats, based around some clever metaprogramming techniques.
Jeremy Keith’s Adactio Austin – the first place I saw using microformats to unobtrusively plot locations on a map. Makes clever use of hEvent as well.",2007,Simon Willison,simonwillison,2007-12-12T00:00:00+00:00,https://24ways.org/2007/unobtrusively-mapping-microformats-with-jquery/,code
249,Fast Autocomplete Search for Your Website,"Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.
I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.
Try it out here, then read on to see how I built it.
First step: crawling the data
The first step in building a search engine is to grab a copy of the data that you plan to make searchable.
There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.
I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites.
We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.
Then we’ll tell wget to recursively crawl the website, using the --recursive flag.
We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2
The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X ""/*/*/comments"".
Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.
Tie all of those options together and we get this command:
wget --recursive --wait 2 --no-clobber
-I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
-X ""/*/*/comments""
https://24ways.org/archives/
If you leave this running for a few minutes, you’ll end up with a folder structure something like this:
$ find 24ways.org
24ways.org
24ways.org/2013
24ways.org/2013/why-bother-with-accessibility
24ways.org/2013/why-bother-with-accessibility/index.html
24ways.org/2013/levelling-up
24ways.org/2013/levelling-up/index.html
24ways.org/2013/project-hubs
24ways.org/2013/project-hubs/index.html
24ways.org/2013/credits-and-recognition
24ways.org/2013/credits-and-recognition/index.html
...
As a quick sanity check, let’s count the number of HTML pages we have retrieved:
$ find 24ways.org | grep index.html | wc -l
328
There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead:
wget --recursive --wait 2 --no-clobber
-I /2018
-X ""/*/*/comments""
https://24ways.org/
Thanks to --no-clobber, this is safe to run every day in December to pick up any new content.
We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index.
Building a search index using SQLite
There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.
I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.
SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!
SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.
This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.
It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.
I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday.
If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages:
$ python3 -m venv ./jupyter-venv
$ ./jupyter-venv/bin/pip install jupyter
# ... lots of installer output
# Now lets install some extra packages we will need later
$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib
# And start the notebook web application
$ ./jupyter-venv/bin/jupyter-notebook
# This will open your browser to Jupyter at http://localhost:8888/
You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.
A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account.
Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on.
from pathlib import Path
from bs4 import BeautifulSoup as Soup
base = Path(""/Users/simonw/Dropbox/Development/24ways-search"")
articles = list(base.glob(""*/*/*/*.html""))
# articles is now a list of paths that look like this:
# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')
docs = []
for path in articles:
year = str(path.relative_to(base)).split(""/"")[1]
url = 'https://' + str(path.relative_to(base).parent) + '/'
soup = Soup(path.open().read(), ""html5lib"")
author = soup.select_one("".c-continue"")[""title""].split(
""More information about""
)[1].strip()
author_slug = soup.select_one("".c-continue"")[""href""].split(
""/authors/""
)[1].split(""/"")[0]
published = soup.select_one("".c-meta time"")[""datetime""]
contents = soup.select_one("".e-content"").text.strip()
title = soup.find(""title"").text.split("" ◆"")[0]
try:
topic = soup.select_one(
'.c-meta a[href^=""/topics/""]'
)[""href""].split(""/topics/"")[1].split(""/"")[0]
except TypeError:
topic = None
docs.append({
""title"": title,
""contents"": contents,
""year"": year,
""author"": author,
""author_slug"": author_slug,
""published"": published,
""url"": url,
""topic"": topic,
})
After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:
[
{
""title"": ""Why Bother with Accessibility?"",
""contents"": ""Web accessibility (known in other fields as inclus..."",
""year"": ""2013"",
""author"": ""Laura Kalbag"",
""author_slug"": ""laurakalbag"",
""published"": ""2013-12-10T00:00:00+00:00"",
""url"": ""https://24ways.org/2013/why-bother-with-accessibility/"",
""topic"": ""design""
},
{
""title"": ""Levelling Up"",
""contents"": ""Hello, 24 ways. Iu2019m Ashley and I sell property ins..."",
""year"": ""2013"",
""author"": ""Ashley Baxter"",
""author_slug"": ""ashleybaxter"",
""published"": ""2013-12-06T00:00:00+00:00"",
""url"": ""https://24ways.org/2013/levelling-up/"",
""topic"": ""business""
},
...
My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries.
import sqlite_utils
db = sqlite_utils.Database(""/tmp/24ways.db"")
db[""articles""].insert_all(docs)
That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.
(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)
You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:
$ sqlite3 /tmp/24ways.db
sqlite> .headers on
sqlite> .mode column
sqlite> select title, author, year from articles;
title author year
------------------------------ ------------ ----------
Why Bother with Accessibility? Laura Kalbag 2013
Levelling Up Ashley Baxte 2013
Project Hubs: A Home Base for Brad Frost 2013
Credits and Recognition Geri Coady 2013
Managing a Mind Christopher 2013
Run Ragged Mark Boulton 2013
Get Started With GitHub Pages Anna Debenha 2013
Coding Towards Accessibility Charlie Perr 2013
...
There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:
db[""articles""].enable_fts([""title"", ""author"", ""contents""])
Introducing Datasette
Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet.
We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that?
If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article.
If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:
./jupyter-venv/bin/pip install datasette
This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.
Now you can run Datasette against the 24ways.db file we created earlier like so:
./jupyter-venv/bin/datasette /tmp/24ways.db
This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.
If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db
Publishing the database to the internet
One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish:
$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways
-----> Python app detected
-----> Installing requirements with pip
-----> Running post-compile hook
-----> Discovering process types
Procfile declares types -> web
-----> Compressing...
Done: 47.1M
-----> Launching...
Released v8
https://search-24ways.herokuapp.com/ deployed to Heroku
If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways.
You can run this command as many times as you like to deploy updated versions of the underlying database.
Searching and faceting
Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.
SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A
A neat feature of Datasette is the ability to calculate facets against your data. Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic
Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:
http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A
Better search using custom SQL
The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.
This exposes the underlying SQL query, which looks like this:
select rowid, * from articles where rowid in (
select rowid from articles_fts where articles_fts match :search
) order by rowid limit 101
We can do better than this by constructing a custom SQL query. Here’s the query we will use instead:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || ""*""
order by rank limit 10;
You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.
There’s a lot going on here! Let’s break the SQL down line-by-line:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on.
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.
from articles
join articles_fts on articles.rowid = articles_fts.rowid
articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.
where articles_fts match :search || ""*""
order by rank limit 10;
:search || ""*"" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.
How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects:
JSON API call to search for articles matching SVG
The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
A simple JavaScript autocomplete search interface
I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.
We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:
function debounce(func, wait, immediate) {
let timeout;
return function() {
let context = this, args = arguments;
let later = () => {
timeout = null;
if (!immediate) func.apply(context, args);
};
let callNow = immediate && !timeout;
clearTimeout(timeout);
timeout = setTimeout(later, wait);
if (callNow) func.apply(context, args);
};
};
We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.
Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these:
const htmlEscape = (s) => s.replace(
/>/g, '&gt;'
).replace(
/</g, '&lt;'
);
We also need a little markup for the interface itself: a heading reading “Autocomplete search”, a text input with the ID searchbox for the user to type into, and an empty div with the ID results for the search results to be rendered into.
And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:
// Embed the SQL query in a multi-line backtick string:
const sql = `select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || ""*""
order by rank limit 10`;
// Grab a reference to the search box input element
const searchbox = document.getElementById(""searchbox"");
// Used to avoid race-conditions:
let requestInFlight = null;
searchbox.onkeyup = debounce(() => {
const q = searchbox.value;
// Construct the API URL, using encodeURIComponent() for the parameters
const url = (
""https://search-24ways.herokuapp.com/24ways-866073b.json?sql="" +
encodeURIComponent(sql) +
`&search=${encodeURIComponent(q)}&_shape=array`
);
// Unique object used just for race-condition comparison
let currentRequest = {};
requestInFlight = currentRequest;
fetch(url).then(r => r.json()).then(d => {
if (requestInFlight !== currentRequest) {
// Avoid race conditions where a slow request returns
// after a faster one.
return;
}
let results = d.map(r => `
`).join("""");
document.getElementById(""results"").innerHTML = results;
});
}, 100); // debounce every 100ms
There’s just one more utility function, used to help construct the HTML results:
const highlight = (s) => htmlEscape(s).replace(
/b4de2a49c8/g, '<b>'
).replace(
/8c94a2ed4b/g, '</b>'
);
This is what those unique strings passed to the snippet() function were for.
Avoiding race conditions in autocomplete
One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:
User types acces
Browser sends request A - querying documents matching acces*
User continues to type accessibility
Browser sends request B - querying documents matching accessibility*
Request B returns. It was fast, because there are fewer documents matching the full term
The results interface updates with the documents from request B, matching accessibility*
Request A returns results (this was the slower of the two requests)
The results interface updates with the documents from request A - results matching acces*
This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.
Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.
Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.
When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.
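This identity check works everywhere, which is a big part of its appeal. In browsers that support AbortController (an assumption on my part; the code above doesn’t rely on it), you could instead cancel the stale request outright. Here’s a rough sketch, where buildSearchUrl() and renderResults() are made-up stand-ins for the URL construction and rendering shown earlier:
// A sketch assuming AbortController support; buildSearchUrl() and
// renderResults() are hypothetical helpers wrapping the code above.
let controller = null;
searchbox.onkeyup = debounce(() => {
  if (controller) {
    controller.abort(); // cancel the previous, now-stale request
  }
  controller = new AbortController();
  fetch(buildSearchUrl(searchbox.value), { signal: controller.signal })
    .then(r => r.json())
    .then(d => renderResults(d))
    .catch(e => {
      if (e.name !== 'AbortError') throw e; // aborted requests are expected here
    });
}, 100);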
It’s not a lot of code, really
And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like.
How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with.
A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.
More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.
For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.",2018,Simon Willison,simonwillison,2018-12-19T00:00:00+00:00,https://24ways.org/2018/fast-autocomplete-search-for-your-website/,code
326,Don't be eval(),"JavaScript is an interpreted language, and like so many of its peers it includes the all powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It’s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you’re using eval() there’s probably something wrong with your design.
Common mistakes
Here’s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it – but you don’t know the name of the property until runtime. Here’s how NOT to do it:
var property = 'bar';
var value = eval('foo.' + property);
Yes it will work, but every time that piece of code runs JavaScript will have to kick back in to interpreter mode, slowing down your app. It’s also dirt ugly.
Here’s the right way of doing the above:
var property = 'bar';
var value = foo[property];
In JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string.
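The same idea extends to nested properties. Rather than building up a string for eval(), you can split the path and walk down the object one key at a time. Here’s a quick sketch (getPath is a made-up helper for illustration, not part of the language):
// Walk down an object one property at a time: getPath(obj, 'bar.baz') -> obj['bar']['baz']
function getPath(obj, path) {
  var parts = path.split('.');
  var current = obj;
  for (var i = 0; i < parts.length; i++) {
    current = current[parts[i]];
  }
  return current;
}
var value = getPath(foo, 'bar.baz'); // no eval() required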
Security issues
In any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript – you should be extremely cautious of running eval() against any code that may have been tampered with – for example, strings taken from the page query string. Executing untrusted code can leave you vulnerable to cross-site scripting attacks.
What’s it good for?
Some programmers say that eval() is B.A.D. – Broken As Designed – and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that – it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval().
function evalRequest(url) {
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
if (xmlhttp.readyState==4 && xmlhttp.status==200) {
eval(xmlhttp.responseText);
}
}
xmlhttp.open(""GET"", url, true);
xmlhttp.send(null);
}
If you want this to work with Internet Explorer you’ll need to include this compatibility patch.",2005,Simon Willison,simonwillison,2005-12-07T00:00:00+00:00,https://24ways.org/2005/dont-be-eval/,code
230,The Articulate Web Designer of Tomorrow,"You could say that we design to communicate, and that we seek emotive responses. It sounds straightforward, and it can be, but leaving it to chance isn’t wise. Many wander into web design without formal training, and whilst that certainly isn’t essential, we owe it to ourselves to draw on wider influences, learn from the past, and think smarter.
What knowledge can we ourselves explore in order to become better designers? In addition, how can we take this knowledge, investigate it through our unique discipline, and in turn speak more eloquently about what we do on the web? Below, I outline a number of things that I personally believe all designers should be using and exploring collectively.
Taking stock
Where we’re at is good. Finding clarity through web standards, we’ve ended up quite modernist in our approach, pursuing function, elegance and reduction. However, we’re not great at articulating our own design processes and principles to outsiders. Equally, we rely heavily on our instincts when deciding if something is or isn’t good. That’s fine, but we can better understand why things are the way they are by looking a little deeper, thereby helping us articulate what goes on in our design brains to our peers, our clients and to normal humans.
As designers we use ideas, concepts, text and images. We apply our ideas and experience, imposing order and structure to content, hoping to ease the communication of an idea to the largest possible audience or to a specific audience. We consciously manipulate most of what is available to us, but not all. There is something else we can use. I often think that brilliant work demands a keen understanding of the magical visual language that informs design.
Embracing an established visual language
This is a language whose alphabet is shapes, structures, colours, lines and rhythms. When effective, it is somewhat invisible, subliminally enforcing messages and evoking meaning, using methods solidly rooted in a grammar perceptible in virtually all extraordinary creative work. The syntax for art, architecture, film, and furniture, industrial and graphic design (think Bauhaus and the Swiss style perhaps), this language urges us to become fluent if we aim for a more powerful dialogue with our audience.
Figure 1: Structures (clockwise from top-left): Informal; Formal; Active; Visible.
The greatest creative minds our world has produced could understand some or all of this language. Line and point, form and shape. Abstract objects. Formal and informal structures. Visual distribution. Balance, composition and the multitudinous approaches to symmetry. Patterns and texture. Movement and paths. Repetition, rhythm and frequency. Colour theory. Whitespace and the pause. The list goes on.
The genius we perceive in our creative heroes is often a composite of experience, trial and error, conviction, intuition – even accident – but rarely does great work arise without an initial understanding of the nuts and bolts that help communicate an idea or emotion.
Our world of interactivity
As web designers, our connection with this language is most evident in graphic design. With more technological ease and power comes the responsibility to understand, wisely use, and be able to justify many of our decisions. We have moved beyond the scope of print into a world of interactivity, but we shouldn’t let go of any established principles without good reason.
Figure 2: Understanding movement of objects in any direction along a defined path.
For example, immersion in this visual language can improve our implementation of CSS3 and JavaScript behaviour. With CSS3, we’ve seen a resurgence in CSS experimentation, some of which has been wonderful, but much of it has appeared clumsy. In the race to make something spin, twist, flip or fly from one corner to another, the designer sometimes fails to think about the true movement they seek to emulate. What forces are supposedly affecting this movement? What is the expected path of this transition and is it being respected?
Stopping to think about what is really supposed to be happening on the page compels us to use complex animations, diagrams and rotations more carefully. It helps us to better understand paths and movement.
Figure 3: Repetition can occur through variations in colour, shape, direction, and so on.
It can only be of greater benefit to be mindful of symmetries, depth, affordance, juxtaposition, balance, economy and reduction. A deeper understanding of basic structures can help us to say more with sketches, wireframes, layouts and composition. We’ve all experimented with grids and rhythm but, to truly benefit from these long-established principles, we are duty-bound to understand their possibilities more than we will by simply leveraging a free framework or borrowing some CSS.
Design is not a science, but…
Threading through all of this is what we have learned from science, and what it teaches us of the human brain. This visual language matters because technology changes but, for the most part, people don’t. For centuries, we humans have received and interpreted information in much the same way. Understanding more of how we perceive meaning can help designers make smarter decisions, and call on visual language to underpin these decisions. It is our responsibility as designers to be aware of mental models, mapping, semiotics, sensory experience and human emotion.
Design itself is not a science, but the appropriate use of visual language and scientific understanding exposes the line between effective and awkward, between communicative and mute. By strengthening our mental and analytical approach to what is often done arbitrarily or “because it feels right”, we simply become better designers.
A visual language for the web
So, I’ve outlined numerous starting points and areas worthy of deeper investigation, and hopefully you’re eager to do some research. However, I’ve mostly discussed established ideas and principles that we as web designers can learn from. It’s my belief that our community has a shared responsibility to expand this visual language as it applies to the ebb and flow of the web. Indulge me as I conclude with a related tangent.
In defining a visual language specifically for the web, we must continue to mature. The old powerfully influences the new, but we must intelligently expand the visual language of masterful work and articulate what is uniquely ours.
For example, phrases like Ethan Marcotte’s Responsive Web Design aren’t merely elegant, they describe a new way of thinking and working, of communicating about designs and interaction patterns. These phrases broaden our vocabulary and are immediately adopted by designers worldwide, in both conversation and execution.
Our legacy
Our new definitions should flex and not be tied to specific devices or methods which fade away or morph with time. Our legacy is perhaps more about robust and flexible patterns and systems than it is about specific devices or programming languages.
Figure 4: As web designers, we should think about systems, not pages.
The established principles we adopt and whatever new ways of thinking we define should slip neatly into a wider philosophy about our approach to web design. We’re called, as a community, to define what is distinctive about the visual language of the web, create this vocabulary, this dialect that resonates with us and moves us forward as we tackle each day’s work. Let’s give it some thought.
Further reading
This is my immediate “go-to” list of books that I bullishly believe all web designers should own, but there is so much more out there to read. Sadly, many great texts relating to this stuff are often out of print. Feel free to share your recommendations.
Don Norman, The Design of Everyday Things
Christian Leborg, Visual Grammar
Scott McCloud, Understanding Comics
David Crow, Visible Signs
William Lidwell and Kritina Holden, Universal Principles of Design",2010,Simon Collison,simoncollison,2010-12-16T00:00:00+00:00,https://24ways.org/2010/the-articulate-web-designer-of-tomorrow/,process
267,Taming Complexity,"I’m going to step into my UX trousers for this one. I wouldn’t usually wear them in public, but it’s Christmas, so there’s nothing wrong with looking silly.
Anyway, to business. Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right?
Well, I’ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it’s OK to sometimes reflect that complexity in the products we build.
Myths and legends
Less is more, we’ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I’ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion. Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff.
To paraphrase Einstein, simple doesn’t have to be simpler. In other words, simple doesn’t dictate that we remove the complexity. Complex doesn’t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site.
In our decision-making process, principles such as Occam’s razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they’ll be the judge.
As a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you’re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there’s a truckload of suggested features and content because it all needs to be there. Don’t be afraid of that weight.
In the right hands, less can indeed mean more, but it’s just as likely that less can very often lead to, well… less.
Complexity is powerful
Simple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important.
It’s useful to ask throughout a site’s lifespan, “does the user have everything they need?” It’s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. That benefits us in the short term, but compromises the audience long-term.
The trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they’ll crash the plane or jump out. Give them everything they need and more, but make it feel simple, and you’re building a relationship, empowering people.
This can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community.
The design aesthetic
Here’s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette’s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be “clean”. Of course, we all like our websites to be clean as that’s more hygienic.
But what do these words even mean, really? Early in a project they’re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we’re building is complex, and many of the components perhaps necessary.
Simple is not a formula. It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience.
Recognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user’s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they’re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic — the façade — is only a part of that.
Divide and conquer
I’m currently working on an app that’s complex in architecture, and complex in ambition. We’ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it’s out of beta and beyond.
I’ve noticed that I’m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically.
I’m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. It’s really OK to have a lot of stuff, so long as we make each component work smartly.
It’s a divide and conquer approach. By finding simplicity and logic in each content bucket, I’ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places.
I’m also making sure I don’t reduce the app’s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. I know it’s the minority who will be actively building the content, but the power is in providing those opportunities for all.
Much of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity.
Alas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating — even if they’re mind-meltingly complex behind that façade.
Embrace, empathize and tame
So, there are loads of ways to exploit complexity, and make it seem simple. I’ve hinted at some methods above, and we’ve already looked at gradual engagement as a way to make sense of complexity, so that’s a big thumbs-up for a release cycle that increases audience power.
Prior to each and every release, it’s also useful to rest on the finished thing for a while and use it yourself, even if you’re itching to release. ‘Ready’ often isn’t, and ‘finished’ never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It’s definitely worth building in some contingency time for sitting on your work, so to speak.
One thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall.
I mentioned the need to put our design egos to one side and not throw out anything useful, and I think that’s vital. I’m a big believer in economy, reduction, and removing the extraneous, but I’m usually referring to decoration, bells and whistles, and fluff. I wouldn’t ever advocate the complete removal of powerful content from a project roadmap.
Above all, don’t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things.",2011,Simon Collison,simoncollison,2011-12-21T00:00:00+00:00,https://24ways.org/2011/taming-complexity/,ux
328,Swooshy Curly Quotes Without Images,"The problem
Take a quote and render it within blockquote tags, applying big, funky and stylish curly quotes both at the beginning and the end without using any images – at all.
The traditional way
Faint background images under the text, or an image in the markup housed in a little float. Often designers only use the opening curly quote as it’s just too difficult to float a closing one.
Why is the traditional way bad?
Well, for a start there are no actual curly quotes in the text (unless you’re doing some nifty image replacement). Thus with CSS disabled you’ll only have default blockquote styling to fall back on. Secondly, images don’t resize, so scaling text will have no effect on your graphic curlies.
The solution
Use really big text. Then it can be resized by the browser, resized using CSS, and even be restyled with a new font style if you fancy it. It’ll also make sense when CSS is unavailable.
The problem
Creating “Drop Caps” with CSS has been around for a while (Big Dan Cederholm discusses a neat solution in that first book of his), but drop caps are normal characters – the A to Z or 1 to 10 – and these can all be pulled into a set space and do not serve up a ton of whitespace, unlike punctuation characters.
Curly quotes aren’t like traditional characters. Like full stops, commas and hashes they float within the character space and leave lots of dead white space, making it bloody difficult to manipulate them with CSS. Styles generally fit around text, so cutting into that character is tricky indeed. Also, all that extra white space is going to push into the quote text and make it look pretty uneven. This grab highlights the actual character space:
See how this is emphasized when we add a normal alphabetical character within the span. This is what we’re dealing with here:
Then, there’s size. Call in a curly quote at less than 300% font-size and it ain’t gonna look very big. The white space it creates will be big enough, but the curlies will be way too small. We need more like 700% (as in this example) to make an impression, but that sure makes for a big character space.
Prepare the curlies
Firstly, remove the opening straight quote from the quote and replace it with the opening curly quote character entity, &#8220;. Then replace the closing straight quote with its entity reference, &#8221;. Now at least the curlies will look nice and swooshy.
Add the hooks
Two reasons why we aren’t using :first-letter pseudo class to manipulate the curlies. Firstly, only CSS2-friendly browsers would get what we’re doing, and secondly we need to affect the last “letter” of our text also – the closing curly quote.
So, add a span around the opening curly, and a second span around the closing curly, giving complete control of the characters:
“Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.”
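The markup behind that example isn’t preserved here, but it amounts to something like this (the class names match the CSS rules below; the blockquote and paragraph wrappers are assumed):
<blockquote>
  <p><span class=""bqstart"">&#8220;</span>Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.<span class=""bqend"">&#8221;</span></p>
</blockquote>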
So far nothing will look any different, aside from the curlies looking a bit nicer. I know we’ve just added extra markup, but the benefits as far as accessibility are concerned are good enough for me, and of course there are no images to download.
The CSS
OK, easy stuff first. Our first rule .bqstart floats the span left, changes the color, and whacks the font-size up to an exuberant 700%. Our second rule .bqend does the same tricks aside from floating the curly to the right.
.bqstart {
float: left;
font-size: 700%;
color: #FF0000;
}
.bqend {
float: right;
font-size: 700%;
color: #FF0000;
}
That gives us this, which is rubbish. I’ve highlighted the actual span area with outlines:
Note that the curlies don’t even fit inside the span! At this stage on IE 6 PC you won’t even see the quotes, as it only places focus on what it thinks is in the div. Also, the quote text is getting all spangled.
Fiddle with margin and padding
Think of that span outline box as a window, and that you need to position the curlies within that window in order to see them. By adding some small adjustments to the margin and padding it’s possible to position the curlies exactly where you want them, and remove the excess white space by defining a height:
.bqstart {
float: left;
height: 45px;
margin-top: -20px;
padding-top: 45px;
margin-bottom: -50px;
font-size: 700%;
color: #FF0000;
}
.bqend {
float: right;
height: 25px;
margin-top: 0px;
padding-top: 45px;
font-size: 700%;
color: #FF0000;
}
I wanted the blocks of my curlies to align with the quote text, whereas you may want them to dig in or stick out more. Be aware however that my positioning works for IE PC and Mac, Firefox and Safari. Too much tweaking seems to break the magic in various browsers at various times. Now things are fitting beautifully:
I must admit that the heights, margins and spacing don’t make a lot of sense if you analyze them. This was a real trial and error job. Get it working on Safari, and IE would fail. Sort IE, and Firefox would go weird.
Finished
The final thing looks ace, can be resized, looks cool without styles, and can be edited with CSS at any time. Here’s a real example (note that I’m specifying Lucida Grande and then Verdana for my curlies):
“Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.”
Browsers happy
As I said, too much tweaking of margins and padding can break the effect in some browsers. Even now, Firefox insists on dropping the closing curly by approximately 6 or 7 pixels, and if I adjust the padding for that, it’ll crush it into the text on Safari or IE. Weird. Still, as I close now it seems solid through resizing tests on Safari, Firefox, Camino, Opera and IE PC and Mac. Lovely.
It’s probably not perfect, but together we can beat the evil typographic limitations of the web and walk together towards a brighter, more aligned world. Merry Christmas.",2005,Simon Collison,simoncollison,2005-12-21T00:00:00+00:00,https://24ways.org/2005/swooshy-curly-quotes-without-images/,business
34,Collaborative Responsive Design Workflows,"Much has been written about workflow and designer-developer collaboration in web design, but many teams still struggle with this issue; either with how to adapt their internal workflow, or how to communicate the need for best practices like mobile first and progressive enhancement to their teams and clients. Christmas seems like a good time to have another look at what doesn’t work between us and how we can improve matters.
Why is it so difficult?
We’re still beginning to understand responsive design workflows, acknowledging the need to move away from static design tools and towards best practices in development. It’s not that we don’t want to change – so why is it so difficult?
Changing the way we do something that has become routine is always problematic, even with small things, and the changes today’s web environment requires from web design and development teams are anything but small.
Although developers also have a host of new skills to learn and things to consider, designers are probably the ones pushed furthest out of their comfort zones: as well as graphic design, a web designer today also needs an understanding of interaction design and ergonomics, because more and more websites are becoming tools rather than pages meant to be read like a book or magazine. In addition to that there are thousands of different devices and screen sizes on the market today that layout and interactions need to work on.
These aspects make it impossible to design in a static design tool, so beyond having to learn about new aspects of design, the designer has to either learn how to code or learn to work with a responsive design tool.
Why do it
That alone is enough to leave anyone overwhelmed, as learning a new skill takes time and slows you down in a project – and on most projects time is in short supply. Yet we have to make time or fall behind in the industry as others pitch better, interactive designs. For an efficient workflow, both designers and developers must familiarise themselves with new tools and techniques.
A designer has to be able to play with ideas, make small adjustments here and there, look at the result, go back to the settings and make further adjustments, and so on. You can only realistically do that if you are able to play with all the elements of a design, including interactivity, accessibility and responsiveness.
Figuring out the right breakpoints in a layout is one of the foremost reasons for designing in a responsive design tool. Even if you create layouts for three viewport sizes (i.e. smartphone, tablet and the most common desktop size), you’d only cover around 30% of visitors and you might miss problems like line breaks and padding at other viewport sizes.
Another advantage is consistency. In static design tools changes will not be applied across all your other layouts. A developer referring back to last week’s comps might work with outdated metrics. Furthermore, you cannot easily test what impact changes might have on previously designed areas. In a dynamic design tool such changes will be applied to the entire design and allow you to test things in site areas you had already finished.
No static design tool allows you to do this, and having somebody else produce a mockup from your static designs or wireframes will duplicate work and is inefficient.
How to do it
When working in a responsive design tool rather than in the browser, there is still the question of how and when to communicate with the developer. I have found that working with Sass in combination with a visual style guide is very efficient, but it does need careful planning: fundamental metrics for padding, margins and font sizes, but also design elements like sliders, forms, tabs, buttons and navigational elements, should be defined at the beginning of a project and used consistently across the site. Working with a grid can help you develop a consistent design language across your site.
Create a visual style guide that shows what the elements look like and how they behave across different screen sizes – and when interacted with. Put all metrics on paddings, margins, breakpoints, widths, colours and so on in a text document, ideally with names that your developer can use as Sass variables in the CSS. For example:
$padding-default-vertical: 1.5em;
Developers, too, need an efficient workflow to keep code maintainable and speed up the time needed for more complex interactions with an eye on accessibility and performance. CSS preprocessors like Sass allow you to work with variables and mixins for default rules, as well as style sheet partials for different site areas or design elements. Create your own boilerplate to use for your projects and then update your variables with the information from your designer for each individual project.
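As a rough sketch of that setup (only $padding-default-vertical comes from the example above; the other names are made up for illustration):
// _variables.scss – metrics agreed with the designer
$padding-default-vertical: 1.5em;
$font-size-base: 1em;
$breakpoint-medium: 40em;

// _mixins.scss – default rules reused across components
@mixin panel-padding {
  padding: $padding-default-vertical 1em;
}

// components/_tabs.scss – a style sheet partial for one design element
.tabs {
  @include panel-padding;
  font-size: $font-size-base;

  @media (min-width: $breakpoint-medium) {
    padding: ($padding-default-vertical * 2) 2em;
  }
}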
How to get buy-in
One obstacle when implementing responsive design, accessibility and content strategy is the logistics of learning new skills and iterating on your workflow. Another is how to sell it. You might expect everyone on a project (including the client) to want to design and develop the best website possible: ultimately, a great site will lead to more conversions. However, we often hear that people find it difficult to convince their teammates, bosses or clients to implement best practices.
Why is that? Well, I believe a lot of it is down to how we sell it. You will have experienced this yourself: some people you trust to know what they are talking about, and others you don’t. Think about why you trust that first person but don’t buy what the other one is telling you. It is likely because person A has a self-assured, calm and assertive demeanour, while person B seems insecure and apologetic. To sell our ideas, we need to become person A! For a timid designer or developer suffering from imposter syndrome (like many of us do in this industry) that is a difficult task. So how can we become more confident in selling our expertise?
Write
We need to become experts. And I mean not just in writing great code or coming up with beautiful designs but at explaining why we’re doing what we’re doing. Why do you code this way or that? Why is this the best layout? Why does a website have to be accessible and responsive? Write about it. Putting your thoughts down on paper or screen is a really efficient way of getting your head around a topic and learning to make a case for something. You may even find that you come up with new ideas as you are writing, so you’ll become a better designer or developer along the way.
Talk
Then, talk about it. Start out in front of your team, then do a lightning talk at a web event near you, then a longer talk or workshop. Having to talk about a topic is going to help you put into spoken words the argument that you’ve previously put together in writing. Writing comes more easily when you’re starting out but we use a different register when writing than talking and you need to learn how to speak your case. Do the talk a couple of times and after each talk make adjustments where you found it didn’t work well. By this time, you are more than ready to make your case to the client. In fact, you’ve been ready since that first talk in front of your colleagues ;)
Pitch
Pitches used to be based on a presentation of static layouts for three to five typical pages and three different designs. But if we want to sell interactivity, structure, usability, accessibility and responsiveness, we need to demonstrate these things and I believe that it can only do us good. I have seen a few pitches from the client’s chair, and static layouts are always sort of dull. What makes a website a website is the fact that I can interact with it, and smooth interactions or animations add that extra sparkle.
I can’t claim personal experience for this one but I’d be bold and go for only one design. One demo page matching the client’s corporate design but not any specific page for the final site. Include design elements like navigation, photography, typefaces, article layout (with real content), sliders, tabs, accordions, buttons, forms, tables (yes, tables) – everything you would include in a style tiles document, only interactive. Demonstrate how the elements behave when clicked, hovered and touched, and how they change across different screen sizes. You may even want to demonstrate accessibility features like tabbed navigation and screen reader use.
Obviously, there are many approaches that will work in different situations but don’t give up on finding a process that works for you and that ultimately allows you to build delightful, accessible, responsive user experiences for the web. Make time to try new tools and techniques and don’t just work on them on the side – start using them on an actual project. It is only when we use a tool or process in the real world that we become true experts. Remember your driving lessons: once the instructor had explained how to operate the car, you were sent to practise driving on the road in actual traffic!",2014,Sibylle Weber,sibylleweber,2014-12-07T00:00:00+00:00,https://24ways.org/2014/collaborative-responsive-design-workflows/,process
129,Knockout Type - Thin Is Always In,"OS X has gorgeous native anti-aliasing (although I will admit to missing 10px aliased Geneva — *sigh*). This is especially true for dark text on a light background. However, things can go awry when you start using light text on a dark background. Strokes thicken. Counters constrict. Letterforms fill out like seasonal snackers.
So how do we combat the fat? In Safari and other Webkit-based browsers we can use the CSS ‘text-shadow’ property. While trying to add a touch more contrast to the navigation on haveamint.com I noticed an interesting side-effect on the weight of the type.
The second line in the example image above has the following style applied to it:
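(The original snippet isn’t preserved here; judging by the explanation that follows, it was an invisible text-shadow along these lines, with a placeholder selector.)
.nav a {
  text-shadow: 0 0 0 #000;
}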
This creates an invisible drop-shadow. (Why is it invisible? The shadow is positioned directly behind the type (the first two zeros) and has no spread (the third zero). So the color, black, is completely eclipsed by the type it is supposed to be shadowing.)
Why applying an invisible drop-shadow effectively lightens the weight of the type is unclear. What is clear is that our light-on-dark text is now of a comparable weight to its dark-on-light counterpart.
You can see this trick in effect all over ShaunInman.com and in the navigation on haveamint.com and Subtraction.com. The HTML and CSS source code used to create the example images used in this article can be found here.",2006,Shaun Inman,shauninman,2006-12-17T00:00:00+00:00,https://24ways.org/2006/knockout-type/,code
316,Have Your DOM and Script It Too,"When working with the XMLHttpRequest object it appears you can only go one of three ways:
You can stay true to the colorful moniker du jour and stick strictly to the responseXML property
You can play with proprietary – yet widely supported – fire and inject the value of responseText property into the innerHTML of an element of your choosing
Or you can be eval() and parse JSON or arbitrary JavaScript delivered via responseText
But did you know that there’s a fourth option giving you the best of the latter two worlds? Mint uses this unmentioned approach to grab fresh HTML and run arbitrary JavaScript simultaneously. Without relying on eval(). “But wait-”, you might say, “when would I need to do this?” Besides the example below this technique is handy for things like tab groups that need initialization onload but miss the main onload event handler by a mile thanks to asynchronous scripting.
Consider the problem
Originally Mint used option 2 to refresh or load new tabs into individual Pepper panes without requiring a full roundtrip to the server. This was all well and good until I introduced the new Client Mode which when enabled allows anyone to view a Mint installation without being logged in. If voyeurs are afoot as Client Mode is disabled, the next time they refresh a pane the entire login page is inserted into the current document. That’s not very helpful so I needed a way to redirect the current document to the login page.
Enter the solution
Wouldn’t it be cool if browsers interpreted the contents of script tags crammed into innerHTML? Sure, but unfortunately, that just wasn’t meant to be. However like the body element, image elements have an onload event handler. When the image has fully loaded the handler runs the code applied to it. See where I’m going with this?
By tacking a tiny image (think single pixel, transparent spacer gif – shudder) onto the end of the HTML returned by our Ajax call, we can smuggle our arbitrary JavaScript into the existing document. The image is added to the DOM, and our stowaway can go to town.
The response carries both the fresh markup and our stowaway image.
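Something along these lines (a sketch: the paragraph text is taken from the original example, while the image path and the redirect are placeholders):
<p>This is the results of our Ajax call.</p>
<!-- single-pixel spacer gif; its onload handler is the smuggled JavaScript -->
<img src=""/images/spacer.gif"" alt="""" onload=""window.location = '/login/';"">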
Please be neat
So we’ve just jammed some meaningless cruft into our DOM. If our script does anything with images this addition could have some unexpected side effects. (Remember The Fly?) So in order to save that poor, unsuspecting element whose innerHTML we just swapped out from sharing Jeff Goldblum’s terrible fate we should tidy up after ourselves. And by using the removeChild method we do just that.
So the stowaway’s closing act is to remove itself once its payload has run.
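A sketch of that tidy-up (initTabs() is a hypothetical initialisation function, in the spirit of the tab groups mentioned earlier; the image path is a placeholder):
<p>This is the results of our Ajax call.</p>
<!-- the stowaway runs its script, then removes itself from the DOM -->
<img src=""/images/spacer.gif"" alt="""" onload=""initTabs(); this.parentNode.removeChild(this);"">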
",2005,Shaun Inman,shauninman,2005-12-24T00:00:00+00:00,https://24ways.org/2005/have-your-dom-and-script-it-too/,code
27,Putting Design on the Map,"The web can leave us feeling quite detached from the real world. Every site we make is really just a set of abstract concepts manifested as tools for communication and expression. At any minute, websites can disappear, overwritten by a newfangled version or simply gone. I think this is why so many of us have desires to create a product, write a book, or play with the internet of things. We need to keep in touch with the physical world and to prove (if only to ourselves) that we do make real things.
I could go on and on about preserving the web, the challenges of writing a book, or thoughts about how we can deal with the need to make real things. Instead, I’m going to explore something that gives us a direct relationship between a website and the physical world – maps.
A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected.
Reif Larsen, The Selected Works of T.S. Spivet
The simplest form of map on a website tends to be used for showing where a place is and often directions on how to get to it. That’s an incredibly powerful tool. So why is it, then, that so many sites just plonk in a default Google Map and leave it as that? You wouldn’t just use dark grey Helvetica on every site, would you? Where’s the personality? Where’s the tailored experience? Where is the design?
Jumping into design
Let’s keep this simple – we all want to be better web folk, not cartographers. We don’t need to go into the history, mathematics or technology of map making (although all of those areas are really interesting to research). For the sake of our sanity, I’m going to gloss over some of the technical areas and focus on the practical concepts.
Tiles
If you’ve ever noticed a map loading in sections, it’s because it uses tiles that are downloaded individually instead of requiring the user to download everything that they might need. These tiles come in many styles and can be used for anything that covers large areas, such as base maps and data. You’ve seen examples of alternative base maps when you use Google Maps as Google provides both satellite imagery and road maps, both of which are forms of base maps. They are used to provide context for the real world, or any other world for that matter. A marker on a blank page is useless.
The tiles are representations of the physical; they do not have to be photographic imagery to provide context. This means you can design the map itself. The easiest way to conceive this is by comparing Google’s road maps with Ordnance Survey road maps. Everything about the two maps is different: the colours, the label fonts and the symbols used. Yet they still provide the exact same context (other maps may provide different context such as terrain contours).
Comparison of Google Maps (top) and the Ordnance Survey (bottom).
Carefully designing the base map tiles is as important as any other part of the website. The most obvious, yet often overlooked, aspect are aesthetics and branding. Maps could fit in with the rest of the site; for example, by matching the colours and line weights, they can enhance the full design rather than inhibiting it. You’re also able to define the exact purpose of the map, so instead of showing everything you could specify which symbols or labels to show and hide.
I’ve not done any real research on the accessibility of base maps but, having looked at some of the available options, I think a focus on the typography of labels and the colour of the various elements is crucial. While you can choose to hide labels, quite often they provide the data required to make sense of the map. Therefore, make sure each zoom level is not too cluttered and shows enough to give context. Also be as careful when choosing the typeface as you are in any other design work. As for colour, you need to pay closer attention to issues like colour-blindness when using colour to convey information. Quite often a spectrum of colour will be used to show data, or to show the topography, so you need to be aware that some people struggle to see colour differences within a spectrum.
A nice example of a customised base map can be found on Michael K Owens’ check-in pages:
One of Michael K Owens’ check-in pages.
As I’ve already mentioned, tiles are not just for base maps: they are also for data. In the screenshot below you can see how Plymouth Marine Laboratory uses tiles to show data with a spectrum of colour.
A map from the Marine Operational Ecology data portal, showing data of adult cod in the North Sea.
Technical
You’re probably wondering how to design the base layers. I will briefly explain the concepts here and give you tools to use at the end of the article. If you’re worried about the time it takes to design the maps, don’t be – you can automate most of it. You don’t need to manually draw each tile for the entire world!
We’ve learned the importance of web standards the hard way, so you’ll be glad (and I won’t have to explain the advantages) of the standard for web mapping from the Open Geospatial Consortium (OGC) called the Web Map Service (WMS). You can use conventional file formats for the imagery but you need a way to query for the particular tiles to show for the area and zoom level, that is what WMS does.
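As a rough illustration (the server, layer name and bounding box are made up), a WMS GetMap request is just a URL whose parameters describe the area, size and format of the image you want back:
https://maps.example.org/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=base_map&STYLES=&CRS=EPSG:4326&BBOX=50.3,-4.2,50.4,-4.1&WIDTH=256&HEIGHT=256&FORMAT=image/png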
Features
Tiles are great for covering large areas but sometimes you need specific smaller areas. We call these features and they usually consist of polygons, lines or points. Examples include postcode boundaries and routes between places, or even something more dynamic such as borders of nations changing over time.
Showing features on a map presents interesting design challenges. If the colour or shape conveys some kind of data beyond geographical boundaries then it needs to be made obvious. This is actually really hard, without building complicated user interfaces. For example, in the image below, is it obvious that there is a relationship between the colours? Does it need a way of showing what the colours represent?
Choropleth map showing ranked postcode areas, using ViziCities.
Features are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject – it requires artistic judgement.
Erwin Josephus Raisz, cartographer (1893–1968)
Where lots of boundaries are small and close together (such as a high street or shopping centre) will it be obvious where the boundaries are and what they represent? When designing maps, the hardest challenge is dealing with how the data is represented and how it is understood by the user.
Technical
As you probably gathered, we use WMS for tiles and another standard called the web feature service (WFS) for specific features. I need to stress that the difference between the two is that WMS is for tiling, whereas WFS is for specific features. Both can use similar file formats but should be used for their particular use cases. You may be wondering why you can’t just use a vector format such as KML, GeoJSON (or even SVG) – and you can – but the issue is the same as for WMS: you need a way to query the data to get the correct area and zoom level.
User interface
There is of course never a correct way to design an interface as there are so many different factors to take into consideration for each individual project. Maps can be used in a variety of ways, to provide simple information about directions or for complex visualisations to explain large amounts of data. I would like to just touch on matters that need to be taken into account when working with maps.
As I mentioned at the beginning, there are so many Google Maps on the web that people seem to think that its UI is the only way you can use a map. To some degree we don’t want to change that, as people know how to use them; but does every map require a zoom slider or base map toggle? In fact, does the user need to zoom at all? The answer to that one is generally yes, zooming does provide more context to where the map is zoomed in on.
In some cases you will need to let users choose what goes on the map (such as data layers or directions), so how do they show and hide the data? Does a simple drop-down box work, or do you need search? Google’s base map toggle is quite nice since it doesn’t offer many options yet provides very different contexts and styling.
It isn’t until we get to this point that we realise just plonking a quick Google map is really quite ridiculous, especially when compared to the amount of effort we make in other areas such as colour, typography or how the CSS is written. Each of these is important but we need to make sure the whole site is designed, and that includes the maps as much as any other content.
Putting it into practice
I could ramble on for ages about what we can do to customise maps to fit a site’s personality and correctly represent the data. I wanted to focus on concepts and standards because tools constantly change and it is never good to just rely on a tool to do the work. That said, there are a large variety of tools that will help you turn these concepts into reality. This is not a comparison; I just want to show you a few of the many options you have for maps on the web.
Google
OK, I’ve been quite critical so far about Google Maps but that is only because there is such a large amount of the default maps across the web. You can style them almost as much as anything else. They may not allow you to use custom WMS layers but Google Maps does have its own version, called styled maps. Using an array of map features (in the sense of roads and lakes and landmarks rather than the kind WFS is used for), you can style the base map with JavaScript. It even lets you toggle visibility, which helps to avoid the issue of too much clutter on the map. As well as lacking WMS, it doesn’t support WFS, but it does support GeoJSON and KML so you can still show the features on the map. You should also check out Google Maps Engine (the new version of My Maps), which provides an interface for creating more advanced maps with a selection of different base maps. A premium version is available, essentially for creating map-based visualisations, and it provides a step up from the main Google Maps offering. A useful feature in some cases is that it gives you access to many datasets.
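To give a feel for styled maps (the colours and feature selections here are arbitrary), each rule targets a feature type and element, then applies stylers such as colour or visibility:
var map = new google.maps.Map(document.getElementById('map'), {
  center: { lat: 50.37, lng: -4.14 },
  zoom: 13,
  styles: [
    // recolour water to suit the site's palette
    { featureType: 'water', elementType: 'geometry', stylers: [{ color: '#223344' }] },
    // hide points of interest to reduce clutter
    { featureType: 'poi', stylers: [{ visibility: 'off' }] }
  ]
});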
Leaflet
You have probably seen Leaflet before. It isn’t quite as popular as Google Maps but it is definitely used often and for good reason. Leaflet is a lightweight open source JavaScript library. It is not a service so you don’t have to worry about API throttling and longevity. It gives you two options for tiling, the ability to use WMS, or to directly get the file using variables in the filename such as /{z}/{x}/{y}.png. I would recommend using WMS over dynamic file names because it is a standard, but the ability to use variables in a file name could be useful in some situations. Leaflet has a strong community and a well-documented API.
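Both options are only a few lines of code (the URLs here are placeholders):
var map = L.map('map').setView([50.37, -4.14], 13);

// direct tile URLs, using the {z}/{x}/{y} variables mentioned above
L.tileLayer('https://tiles.example.org/{z}/{x}/{y}.png', {
  attribution: 'Map data © example.org'
}).addTo(map);

// or the same thing through a WMS endpoint
L.tileLayer.wms('https://maps.example.org/wms', {
  layers: 'base_map',
  format: 'image/png'
}).addTo(map);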
Mapbox
As a freemium service, Mapbox may not be perfect for every use case but it’s definitely worth looking into. The service offers incredible customisation tools as well as lots of data sources and hosting for the maps. It also provides plenty of libraries for the various platforms, so you don’t have to only use the maps on the web.
Mapbox is a service, though its map design tool is open source. Mapbox Studio is a vector-only version of their previous tool called Tilemill. Earlier I wrote about how typography and colour are as important to maps as they are to the rest of a website; if you thought, “Yes, but how on earth can I design those parts of a map?” then this is the tool for you. It is incredibly easy to use. Essentially each map has a stylesheet.
If you do not want to open a paid-for Mapbox account, then you can export the tiles (as PNG, SVG etc.) to use with other map tools.
OpenLayers
After a long wait, OpenLayers 3 has been released. It is similar to Leaflet in that it is a library not a service, but it has a much broader scope. During the last year I worked on the GIS portal at Plymouth Marine Laboratory (which I used to show the data tiles earlier), it essentially used OpenLayers 2 to create a web-based geographic information system, taking a large amount of data and permitting analysis (such as graphs) without downloading entire datasets and complicated software. OpenLayers 3 has improved greatly on the previous version in both performance and accessibility. It is the ideal tool for complex map-based web apps, though it can be used for the simple use cases too.
OpenStreetMap
I couldn’t write an article about maps on the web without at least mentioning OpenStreetMap. It is the place to go for crowd-sourced data about any location, with complete road maps and a strong API.
ViziCities
The newest project on this list is ViziCities by Robin Hawkes and Peter Smart. It is an open source 3-D visualisation tool, currently in the very early stages of development. The basic example shows 3-D buildings around the world using OpenStreetMap data. Robin has used it to create some incredible demos such as real-time London underground trains, and planes landing at an airport. Edward Greer and I are currently working on using ViziCities to show ideal housing areas based on particular personas. We chose it because the 3-D aspect gives us interesting possibilities for the data we are able to visualise (such as bar charts on the actual map instead of in the UI). Despite not being a completely stable, fully featured system, ViziCities is worth taking a look at for some use cases and is definitely going to go from strength to strength.
So there you have it – a whistle-stop tour of how maps can be customised. Now please stop plonking in maps without thinking about it and design them as you design the rest of your content.",2014,Shane Hudson,shanehudson,2014-12-11T00:00:00+00:00,https://24ways.org/2014/putting-design-on-the-map/,design
298,First Steps in VR,"The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not.
I think of these two mindsets as production and prototyping, though of course there are lots of overlap and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people.
Every year we see articles saying it will be the “year of virtual reality”, that was especially prevalent this year. 2016 has been a year of progress, VR isn’t quite mainstream but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web.
WebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make it easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that help to make the heavy lifting a bit easier.
Getting Started with A-Frame
I like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such, I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components, you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That’s not to say you can’t build complex things, you have complete access to write JavaScript and shaders.
I’m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There’s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It’s fun.
For this project I wanted to keep things as simple as possible, so I can easily explain it without the classic “draw a circle then draw an owl”. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:
Must work on Google Cardboard at a minimum, because of price
Therefore, it must not rely on having a controller
Auto-moving around a maze would be a good example
Move in direction you look
Stretch goal: Scoring, time until you hit a wall or get stuck in maze
Stretch goal: Levels, so the map doesn’t need to be random
Stretch goal: Snow!
I decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground.
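A stripped-down sketch of the resulting scene (my reduced version of the idea, not Don’s code verbatim; the ground element and the sizes are assumptions):
<a-scene>
  <!-- the player: a camera that moves and collides with objects -->
  <a-entity id=""player"" camera universal-controls kinematic-body position=""0 1.8 0""></a-entity>
  <!-- container that the randomly placed blocks get added to -->
  <a-entity id=""walls""></a-entity>
  <!-- the ground -->
  <a-plane rotation=""-90 0 0"" width=""100"" height=""100"" static-body></a-plane>
</a-scene>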
As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an a-scene, which is the container for everything that is going to show up on the screen. There are a few a-entity elements, which can be compared to divs as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.
Styling
Now we’ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave.
Note: you will probably need a server running for images to work. You can do this by running python -m ""SimpleHTTPServer"" in the folder where the code is, then going to localhost:8000 in your browser.
Textures
Unless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor.
The a-assets element is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for applying the textures.
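A sketch of that asset block (the filenames and the texture-floor id are assumptions; texture-wall matches the id used in the JavaScript below):
<a-assets>
  <img id=""texture-wall"" src=""images/wall.jpg"">
  <img id=""texture-floor"" src=""images/floor.jpg"">
</a-assets>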
To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image.
The original before-and-after snippets aren’t preserved here, but the change amounts to swapping the plain floor element for one whose material points at the image’s id:
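(A reconstruction; the a-plane element and the #texture-floor id are assumptions based on the pattern used for the walls.)
<!-- before: an untextured floor -->
<a-plane rotation=""-90 0 0"" width=""100"" height=""100"" static-body></a-plane>

<!-- after: the material's src points at the image's id, and repeat tiles it across the plane -->
<a-plane rotation=""-90 0 0"" width=""100"" height=""100"" material=""src: #texture-floor; repeat: 50 50"" static-body></a-plane>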
This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.
box.setAttribute('material', 'src: #texture-wall');
That’s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object.
Lighting
The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished.
There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the a-light entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit.
To start with I wanted to light up the scene with a general light, type=""ambient"", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4; it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (a-sky), as it looked a bit odd with a white sky.
I felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity, this could have been as an attribute but I set it as a child a-light instead. By using type=""point"" with a high intensity and low distance, it showed close walls as being lighter. It also added a sort-of object to the player, it isn’t a walking human or anything but by moving light where the player is it feels a bit more physical.
By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the a-scene with type=exponential so it looks thicker the further away it is, and a mid intensity, so you feel a bit lost but can still see.
I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.
Movement
One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.
AFRAME.registerComponent('automove-controls', {
init: function () {
this.speed = 0.1;
this.isMoving = true;
this.velocityDelta = new THREE.Vector3();
},
isVelocityActive: function () {
return this.isMoving;
},
getVelocityDelta: function () {
this.velocityDelta.z = this.isMoving ? -this.speed : 0;
return this.velocityDelta.clone();
}
});
Replace:
universal-controls
With:
universal-controls=""movementControls: automove, gamepad, keyboard""
This works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction, it just checks if isMoving is true then moves the player by the set speed. Components can be creating for adding all kinds of functionality with relative ease. It makes it very powerful for people of all difficulty levels.
Building a map
Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels.
I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images), I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future.
var map =
{
""data"":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
""height"":10,
""width"":10
}
As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.
document.querySelector('a-scene').addEventListener('render-target-loaded', function () {
var WALL_SIZE = 5,
WALL_HEIGHT = 3;
var el = document.querySelector('#walls');
var wall;
for (var x = 0; x < map.height; x++) {
for (var y = 0; y < map.width; y++) {
var i = y*map.width + x;
var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE;
if (map.data[i] === 1) {
// Create wall
wall = document.createElement('a-box');
el.appendChild(wall);
wall.setAttribute('color', '#fff');
wall.setAttribute('material', 'src: #texture-wall;');
wall.setAttribute('width', WALL_SIZE);
wall.setAttribute('height', WALL_HEIGHT);
wall.setAttribute('depth', WALL_SIZE);
wall.setAttribute('position', position);
wall.setAttribute('static-body', '');
}
if (map.data[i] === 2) {
// Set player position
document.querySelector('#player').setAttribute('position', position);
}
}
}
console.info('Walls added.');
});
With this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.
It’s Not All Fun And Games
A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including story telling, data visualisation and even meditation.
There have been a number of cases where it has been shown virtual reality can help as a tool for therapies:
Oxford study finds virtual reality can help treat severe paranoia
Virtual Reality Therapy for Phobias at the Duke Faculty Practice
Bravemind: Virtual Reality Exposure Therapy at the University of Southern California
These are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool.
Wrapping Up
Ten years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, and that WAP 2.0 included the XHTML Mobile Profile, meaning it would be familiar to web folk. “The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it.”
We can look at that and laugh a little, we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the “desktop web” Cameron was referring to.
So while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble, we don’t need to learn any new languages. If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it’s fun… have a go anyway.",2016,Shane Hudson,shanehudson,2016-12-11T00:00:00+00:00,https://24ways.org/2016/first-steps-in-vr/,code
211,Automating Your Accessibility Tests,"Accessibility is one of those things we all wish we were better at. It can lead to a bunch of questions like: how do we make our site better? How do we test what we have done? Should we spend time each day going through our site to check everything by hand? Or just hope that everyone on our team has remembered to check their changes are accessible?
This is where automated accessibility tests can come in. We can set up automated tests and have them run whenever someone makes a pull request, and even alongside end-to-end tests, too.
Automated tests can’t cover everything however; only 20 to 50% of accessibility issues can be detected automatically. For example, we can’t yet automate the comparison of an alt attribute with an image’s content, and there are some screen reader tests that need to be carried out by hand too. To ensure our site is as accessible as possible, we will still need to carry out manual tests, and I will cover these later.
First, I’m going to explain how I implemented automated accessibility tests on Elsevier’s ecommerce pages, and share some of the lessons I learnt along the way.
Picking the right tool
One of the hardest, but most important parts of creating our automated accessibility tests was choosing the right tool.
We began by investigating aXe CLI, but soon realised it wouldn’t fit our requirements. It couldn’t check pages that required a visitor to log in, so while we could test our product pages, we couldn’t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history.
The example below shows how the beforeScript step completes a login form and then waits for the login to complete before testing the page:
beforeScript: function(page, options, next) {
  // An example function that can be used to make sure changes have been confirmed before continuing to run Pa11y
  function waitUntil(condition, retries, waitOver) {
    page.evaluate(condition, function(err, result) {
      if (result || retries < 1) {
        // Once the changes have taken place continue with Pa11y testing
        waitOver();
      } else {
        retries -= 1;
        setTimeout(function() {
          waitUntil(condition, retries, waitOver);
        }, 200);
      }
    });
  }

  // The script to manipulate the page must be run with page.evaluate to be run within the context of the page
  page.evaluate(function() {
    const user = document.querySelector('#login-form input[name=""email""]');
    const password = document.querySelector('#login-form input[name=""password""]');
    const submit = document.querySelector('#login-form input[name=""submit""]');
    user.value = 'user@example.com';
    password.value = 'password';
    submit.click();
  }, function() {
    // Use the waitUntil function to set the condition, number of retries and the callback
    waitUntil(function() {
      return window.location.href === 'https://example.com';
    }, 20, next);
  });
}
The waitUntil callback allows the test to be delayed until our test user is successfully logged in.
Another thing to consider when picking a tool is the type of error messages it produces. aXe groups all elements with the same error together, so the list of issues is a lot easier to read, and it’s easier to identify the most common problems. For example, here are some elements that have insufficient colour contrast:
Violation of ""color-contrast"" with 8 occurrences!
Ensures the contrast between foreground and background colors meets
WCAG 2 AA contrast ratio thresholds. Correct invalid elements at:
- #maincontent > .make_your_mark > div:nth-child(2) > p > span > span
- #maincontent > .make_your_mark > div:nth-child(4) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(2) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(4) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(6) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(8) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(10) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(12) > p > span > span
For details, see: https://dequeuniversity.com/rules/axe/2.5/color-contrast
aXe also provides links to their site where they discuss the best way to fix the problem.
In comparison, Pa11y lists each individual error which can lead to a very verbose list. However, it does provide helpful suggestions of how to fix problems, such as suggesting an alternative shade of a colour to use:
• Error: This element has insufficient contrast at this conformance level.
Expected a contrast ratio of at least 4.5:1, but text in this element has a contrast ratio of 2.96:1.
Recommendation: change text colour to #767676.
⎣ WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail
⎣ #maincontent > div:nth-child(10) > div:nth-child(8) > p > span > span
⎣ Featured products:
Integrating into our build pipeline
We decided the perfect time to run our accessibility tests would be alongside our end-to-end tests. We have a Jenkins job that detects changes to our staging site and then triggers the end-to-end tests, and in turn our accessibility tests. Our Jenkins job retrieves the contents of a GitHub repository containing our Pa11y script file and npm package manifest.
Once Jenkins has cloned the repository, it installs any dependencies and executes the tests via:
npm install && npm test
Bundling the URLs to be tested into our test script means we don’t have to list each URL we wish to test in the Jenkins CLI. That approach works, but it quickly becomes cluttered and obscures which URLs are being tested.
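As a rough sketch of the shape of that script (the file name, URLs and options here are placeholders rather than our real configuration, and the exact API differs between Pa11y versions), the list of pages lives in the script itself and npm test simply runs it:
// accessibility-tests.js (sketch): run via npm test / node accessibility-tests.js
const pa11y = require('pa11y');

// the pages to check live in the script rather than on the Jenkins command line
const urls = [
  'https://staging.example.com/',
  'https://staging.example.com/products',
  'https://staging.example.com/order-history'
];

const test = pa11y({ standard: 'WCAG2AA' });

urls.forEach(url => {
  test.run(url, (error, results) => {
    if (error) {
      console.error(error);
      process.exit(1);
    }
    results.forEach(result => console.log(result));
  });
});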
In the middle of the office we have a monitor displaying a Jenkins dashboard and from this we can see if the accessibility tests are passing or failing. Everyone in the team has access to the Jenkins logs and when the build fails they can see why and fix the issue.
Fixing the issues
As mentioned earlier, Pa11y can generate a long list of areas for improvement which can be very verbose and quite overwhelming. I recommend going through the list to see which issues occur most frequently and fix those first. For example, we initially had a lot of errors around colour contrast, and one shade of grey in particular. By making this colour darker, the number of errors decreased, and we could focus on the remaining issues.
Another thing I like to do is to tackle the quick fixes, such as adding alt text to images. These are small things that allow us to make an impact instantly, giving us time to fix more detailed concerns such as addressing tabindex issues, or speaking to our designers about changing the contrast of elements on the site.
Manual testing
Adding automated tests to check our site for accessibility is great, but as I mentioned earlier, this can only cover 20-50% of potential issues. To improve on this, we need to test by hand too, either by ourselves or by asking others.
One way we can test our site is to throw our mouse or trackpad away and interact with the site using only a keyboard. This allows us to check items such as tab order, and ensure menu items, buttons etc. can be used without a mouse. The commands may be different on different operating systems, but there are some great guides online for learning more about these.
It’s tempting to add alt text and aria-labels to make errors go away, but if they don’t make any sense, what use are they really? Using a screenreader we can check that alt text accurately represents the image. This is also a great way to double check that our ARIA roles make sense, and that they correctly identify elements and how to interact with them. When testing our site with screen readers, it’s important to remember that not all screen readers are the same and some may interact with our site differently to others.
Consider asking a range of people with different needs and abilities to test your site, too. People experience the web in numerous ways, be they permanent, temporary or even situational. They may interact with your site in ways you hadn’t even thought about, so this is a good way to broaden your knowledge and awareness.
Tips and tricks
One of our main issues with Pa11y is that it may find issues we don’t have the power to solve. A perfect example of this is the one pixel image Facebook injects into our site. So, we wrote a small function to go through such errors and ignore the ones that we cannot fix.
const test = pa11y({
  ....
  hideElements: '#ratings, #js-bigsearch',
  ...
});

const ignoreErrors: string[] = [
  '',
  '',
  ''
];

const filterResult = result => {
  if (ignoreErrors.indexOf(result.context) > -1) {
    return false;
  }
  return true;
};
Initially we wanted to focus on fixing the major problems, so we added a rule to ignore notices and warnings. This made the list of errors much smaller and allowed us to focus on fixing major issues such as colour contrast and missing alt text. The ignored notices and warnings can be added back in later, after these larger issues have been resolved.
const test = pa11y({
  ignore: [
    'notice',
    'warning'
  ],
  ...
});
Jenkins gotchas
While using Jenkins we encountered a few problems. Sometimes Jenkins would indicate a build had passed when in reality it had failed. This was because Pa11y had timed out due to PhantomJS throwing an error, or the test didn’t go past the first URL. Pa11y has recently released a new beta version that uses headless Chrome instead of PhantomJS, so hopefully these issues will occur less often.
We tried a few approaches to solve these issues. First we added error handling, iterating over the array of test URLs so that if an unexpected error happened, we could catch it and exit the process with an error indicating that the job had failed (using process.exit(1)).
for (const url of urls) {
  try {
    console.log(url);
    let urlResult = await run(url);
    urlResult = urlResult.filter(filterResult);
    urlResult.forEach(result => console.log(result));
  }
  catch (e) {
    console.log('Error:', e);
    process.exit(1);
  }
}
We also had issues with unhandled rejections sometimes caused by a session disconnecting or similar errors. To avoid Jenkins indicating our site was passing with 100% accessibility, when in reality it had not executed any tests, we instructed Jenkins to fail the job when an unhandled rejection or uncaught exception occurred:
process.on('unhandledRejection', (reason, p) => {
  console.log('Unhandled Rejection at:', p, 'reason:', reason);
  process.exit(1);
});

process.on('uncaughtException', (err) => {
  console.log(`Caught exception: ${err}\n`);
  process.exit(1);
});
Now it’s your turn
That’s it! That’s how we automated accessibility testing for Elsevier ecommerce pages, allowing us to improve our site and make it more accessible for everyone. I hope our experience can help you automate accessibility tests on your own site, and bring the web a step closer to being accessible to all.",2017,Seren Davies,serendavies,2017-12-07T00:00:00+00:00,https://24ways.org/2017/automating-your-accessibility-tests/,code
295,Internet of Stranger Things,"This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension[1].
What you’ll see
You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the “Send a message” button and wait your turn.)
It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi.
What you’ll need
Raspberry Pi any of the following models: Zero (will need straight male header pins soldered[2] and Micro USB OTG adaptor), A+, B+, 2, or 3
Micro SD card at least 4Gb Class 10 speed[3]
Micro USB power supply at least 2A
USB Wifi dongle (unless you have a Pi 3 - that has wifi built in).
Addressable fairy lights
Logic level shifter (with pins soldered unless you want to do it!)
Breadboard
Jumper wires (3x male to male and 4x female to male)
Optional but recommended
Base board to hold the Pi and Breadboard (often comes with a breadboard!)
You can find links for where to buy all of these items in the support document that goes along with this tutorial. The total price should be around $100[4].
Setting up the Raspberry Pi
You’ll need to install the SD card for the Raspberry Pi. You’ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher[5].
Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details[6].
network={
ssid=""mywifi""
psk=""mypassword""
}
Save the file, eject the card from your computer and plug it into the Raspberry Pi.
If you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle[7] and power supply, but don’t plug it in yet!
Wiring!
Time to wire everything up!
First of all, push the Logic Level Converter into the middle of the breadboard:
Logic Level Converter
The logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top.
Raspberry Pi pins only output 3.3v but the lights need 5v. That’s why we need the logic level converter in there to boost up the signal.
Connect the first two wires between the Raspberry Pi pins and the breadboard:
Note that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don’t have to match but it’s easier to follow (and check) if you use the same ones as in the diagram.
Then the next two:
This is what you should have so far:
Lights
Now to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard.
Make sure that you connect the right end of the lights; mine has a male connector at the wrong end, so it’s impossible to connect the wrong end by mistake, but double check yours.
Also make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN for data in).
You can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out!
That’s it! Everything’s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check!
The Moment of Truth!
Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down!
However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it’ll come back up.
Rig up the lights!
Fix the lights up on the wall however you want, pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50 light string so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi.
Check the photo here to see how the lights line up, note that there are spare unused lights in-between each row:
Now visit lights.seb.ly and you’ll see this:
If you’re the only one online you’ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you’ll need to click the button to join the queue and wait your turn.
How it works - the geeky details!
Electronics:
The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer).
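As a tiny sketch of that idea, assuming the onoff npm package and an LED wired to GPIO pin 17 (both are illustrative choices, not part of this project’s actual code):
const Gpio = require('onoff').Gpio;

// set up GPIO 17 as an output pin
const led = new Gpio(17, 'out');

// turn the pin on, then off again a second later
led.writeSync(1);
setTimeout(() => {
  led.writeSync(0);
  led.unexport(); // free the pin when we're done
}, 1000);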
Addressable LEDs or “Neopixels”
We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string.
The chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels[8] and I used them on my Laser Light Synths project.
Neopixels with the chip and the LED all in one - it’s the white square shaped component and the darker square inside is the chip. These are only 5mm wide!
A Laser Light Synth! Covered with around 800 super bright neopixels!
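To give a feel for what driving a string like this looks like from Node.js, here is a minimal sketch assuming the rpi-ws281x-native package (not necessarily the exact library or version my code uses): every light is one entry in an array of colour values.
const ws281x = require('rpi-ws281x-native');

const NUM_LEDS = 50;
ws281x.init(NUM_LEDS);

// one colour value per light (byte order can vary between strips)
const pixelData = new Uint32Array(NUM_LEDS);
pixelData[0] = 0xff0000; // first light
pixelData[1] = 0x00ff00; // second light

// push the colours down the single data pin to the whole string
ws281x.render(pixelData);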
Logic Level Converter
The logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip.
Power
Neopixels can often draw a lot of current so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit’s Neopixel Uberguide.
Node.js
There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server.
The server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time.
WebSockets
I’m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connect to my WebSocket server.
When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers[9].
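As a rough sketch of that relay on the server side (the event name and port are illustrative, not the actual stranger-lights-server code):
const server = require('http').createServer();
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // a browser clicked a letter: rebroadcast it to every connected client,
  // which includes the Raspberry Pi as well as all the other browsers
  socket.on('letter', (data) => {
    io.emit('letter', data);
  });
});

server.listen(8080);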
The Raspberry Pi code
The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file:
node /home/pi/strangerthings/client.js > /dev/null &
Anything in the rc.local file gets executed when the Pi boots up; this line runs the Node.js app and routes its output to nowhere (i.e. /dev/null). The & means it runs in the background and doesn’t hold up the boot process.
Working with the Raspberry Pi headless
You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh[10]. If you’re on a Mac you can find your Pi on the network through the name raspberrypi.local[11]; otherwise you’ll need to find its IP address. There’s more in the Remote Access guide on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial.
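With the default Raspbian username and hostname, connecting typically looks like this (substitute the Pi’s IP address if the .local name doesn’t resolve):
ssh pi@raspberrypi.local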
Improvements
This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style.
Where next?
Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that’s something you’d be interested in, or sign up to my mailing list at st4i.com.
There are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice!
Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources.
I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly.
[1] Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables ↩︎
[2] Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit ↩︎
[3] Slower cards will work but performance may suffer ↩︎
[4] Or £5,000 in UK money. Sorry, Brexit joke :) ↩︎
[5] You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. ↩︎
[6] SSID and password should be all that you need but you can see all the config options on this wpa supplicant guide ↩︎
[7] Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle ↩︎
[8] Thanks to Adafruit who invented the term neopixels so we don’t have to refer to them as WS281x any more! ↩︎
[9] So you can see other people sending messages in the browser ↩︎
[10] ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. ↩︎
[11] You can change this default hostname using raspi-config ↩︎",2016,Seb Lee-Delisle,sebleedelisle,2016-12-01T00:00:00+00:00,https://24ways.org/2016/internet-of-stranger-things/,code
221,"“Probably, Maybe, No”: The State of HTML5 Audio","With the hype around HTML5 and CSS3 exceeding levels not seen since 2005’s Ajax era, it’s worth noting that the excitement comes with good reason: the two specifications render many years of feature hacks redundant by replacing them with native features. For fun, consider how many CSS2-based rounded corners hacks you’ve probably glossed over, looking for a magic solution. These days, with CSS3, the magic is border-radius (and perhaps some vendor prefixes) followed by a coffee break.
CSS3’s border-radius, box-shadow, text-shadow and gradients, and HTML5’s