Employing specific JavaScript animation frameworks
Making use of all the powerful properties of CSS Transforms and CSS Animation
Trying out emerging CSS3 animation tools like Sencha Animator
If it wasn’t already apparent, these demos are exaggerated examples and probably aren’t practical in a lot of environments. However, there are a handful of great sites out there that honor animation techniques—metaphor, physics, and misdirection, among others—like Benjamin De Cock’s vCard, 20 Things I Learned About Browsers and the Web by Fantasy Interactive, and the Nike Snowboarding site by Ian Coyle and HEGA. They’re wonderful testaments to what you can do to aid interaction for users.
My goal was to show you the ‘why’ and the ‘how.’ Your charge is to discern the ‘where’ and the ‘when.’ Happy animating!",2010,Dan Mall,danmall,2010-12-15T00:00:00+00:00,https://24ways.org/2010/real-animation-using-javascript-css3-and-html5-video/,code
238,Everything You Wanted To Know About Gradients (And a Few Things You Didn’t),"Hello. I am here to discuss CSS3 gradients. Because, let’s face it, what the web really needed was more gradients.
Still, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer’s vocabulary. There’s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work.
Of course, that whole ‘proper application’ thing is the tricky bit.
But given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They’re part of the draft images module, and implemented in two of the major rendering engines.
Still, I’ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you’ll indulge me, let’s take a quick look at how to create CSS gradients—hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser.
Gradient theory 101 (I hope that’s not really a thing)
Right. So before we dive into the code, let’s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here’s a straightforward one:
I spent seconds (er, hours) designing this gradient. I hope you like it.
At either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different:
(Don’t ever really do this. Please. I beg you.)
It’s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way.
Now, color stops alone do not a gradient make. Between each pair of stops is a transition point, the point at which one color blends into the next. By default the transition point falls exactly halfway between stops, but it can be brought closer to one stop or the other, influencing the overall shape of the gradient.
A tale of two syntaxes
Armed with our new vocabulary, let’s look at a CSS gradient in the wild. Behold, the simple input button:
There’s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here’s the CSS that makes it happen:
input[type=submit] {
background-color: #F47A20;
background-image: -moz-linear-gradient(
#FAA51A,
#F47A20
);
background-image: -webkit-gradient(linear, 0 0, 0 100%,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);
}
I’ve borrowed David DeSandro’s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that’s perfectly understandable—heck, it sort of turned mine. But let’s step through the CSS slowly, and see if we can’t make it a little less terrifying.
Verbose WebKit is verbose
Here’s the syntax for our little gradient on WebKit:
background-image: -webkit-gradient(linear, 0 0, 0 100%,
color-stop(0, #FAA51A),
color-stop(1, #F47A20)
);
Woof. Quite a mouthful, no? Well, here’s what we’re looking at:
WebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients.
The next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points.
Afterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop’s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself.
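For instance, to run the same gradient horizontally instead, we only need to move the end point. A quick sketch (not part of the original button styles):
background-image: -webkit-gradient(linear, 0 0, 100% 0,
	color-stop(0, #FAA51A),
	color-stop(1, #F47A20)
);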
For a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us:
background-image: -webkit-gradient(linear, 0 0, 0 100%,
from(#FAA51A),
to(#F47A20)
);
from(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#F47A20) is the same as color-stop(1, #F47A20) or color-stop(100%, #F47A20)—in both cases, we’re simply declaring the first and last color stops in our gradient.
Terse Gecko is terse
WebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3.
Naturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented.
Here’s how we get gradient-y in Gecko:
background-image: -moz-linear-gradient(
#FAA51A,
#F47A20
);
Wait, what? Done already? That’s right. By default, -moz-linear-gradient assumes you’re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that’s the case, then you simply need to specify your color stops, delimited with a few commas.
I know: that was almost… painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them.
We can specify an origin point for our gradient:
background-image: -moz-linear-gradient(50% 100%,
#FAA51A,
#F47A20
);
As well as an angle, to give it a direction:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#F47A20
);
And we can specify multiple stops, simply by adding to our comma-delimited list:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#FCC,
#F47A20
);
By adding a percentage after a given color value, we can determine its position along the gradient path:
background-image: -moz-linear-gradient(50% 100%, 45deg,
#FAA51A,
#FCC 20%,
#F47A20
);
So that’s some of the flexibility implicit in the W3C/Mozilla-style syntax.
Now, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit’s more verbose approach to the, well, looseness behind the -moz syntax. À chacun son gradient syntax.
Still, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same.
Reusing color stops for fun and profit
But CSS gradients aren’t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.
Tim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it’s the feature image that really catches the eye.
You see, there’s nothing that says you can’t reuse color stops. And Tim’s exploited that perfectly.
He’s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he’s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo.
But then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss.
And his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so.
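If you’d like to experiment with the technique yourself, a rough sketch in the Mozilla syntax might look something like this (the class name, angle, and opacity values here are my own stand-ins, not Tim’s actual code):
.feature {
	background-image: -moz-linear-gradient(0 0, 45deg,
		rgba(255, 255, 255, 0),
		rgba(255, 255, 255, 0.1) 50%,
		rgba(255, 255, 255, 0) 50%,
		rgba(255, 255, 255, 0)
	);
}
The two stops sharing the 50% position are what create that hard diagonal edge.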
Rocking the radials
We’ve been looking at linear gradients pretty exclusively. But I’d be remiss if I didn’t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar:
And here’s the relevant CSS:
background: -moz-radial-gradient(50% 100%, farthest-side,
rgb(204, 255, 255) 1%,
rgb(85, 85, 85) 15%,
rgba(85, 85, 85, 0)
);
background: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15,
from(rgb(204, 255, 255)),
to(rgba(85, 85, 85, 0))
);
Now, the syntax builds on what we’ve already learned about linear gradients, so much of it should be familiar: we’re still picking out color stops and transition points, and each syntax still relies on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, …)) to shift into circular mode.
Mozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There’s also a size constant defined (farthest-side), which determines the reach and shape of our gradient.
WebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle.
Again, this is a fairly modest little radial gradient. Time and article length (and, let’s be honest, your author’s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can’t recommend the references at Mozilla and Apple strongly enough.
Leave no browser behind
But no matter the kind of gradients you’re working with, there is a large swathe of browsers that simply don’t support gradients. Thankfully, it’s fairly easy to declare a sensible fallback—it just depends on the kind of fallback you’d like. Essentially, gradient-blind browsers will disregard any properties containing references to -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties.
For example: if you’d like to fall back to a flat color, simply declare a separate background-color:
.nav {
background-color: #000;
background-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
Or perhaps just create three separate background properties.
.nav {
background: #000;
background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
We can even build on this to fall back to a non-gradient image:
.nav {
background: #000 url("faux-gradient-lol.png") repeat-x;
background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));
background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));
}
No matter the approach you feel most appropriate to your design, it’s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings.
(If you’re feeling especially masochistic, there’s even a way to get simple linear gradients working in IE via Microsoft’s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I’d recommend avoiding those.
And don’t tell Andy Clarke I told you, or he’ll probably unload his Derringer at me. Or something.)
Go forth and, um, gradientify!
It’s entirely possible your head’s spinning. Heck, mine is, but that might be the effects of the ’nog. But maybe you’re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine.
Well, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they’re easily modifiable by tweaking a few CSS properties. Because, let’s face it, less time spent yelling at Photoshop is a very, very good thing.
Of course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it’s still under development at the W3C. As we’ve seen, browser support is still very much in flux. And it’s possible that gradients themselves have some real performance drawbacks—so test thoroughly, and gradient carefully.
But still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. The tasteful design is, of course, entirely up to you.
So have fun, and get gradientin’.",2010,Ethan Marcotte,ethanmarcotte,2010-12-22T00:00:00+00:00,https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/,code
239,Using the WebFont Loader to Make Browsers Behave the Same,"Web fonts give us designers a whole new typographic palette with which to work. However, browsers handle the loading of web fonts in different ways, and this can lead to inconsistent user experiences.
Safari, Chrome and Internet Explorer leave a blank space in place of the styled text while the web font is loading. Opera and Firefox show text with the default font which switches over when the web font has loaded, resulting in the so-called Flash of Unstyled Text (aka FOUT). Some people prefer Safari’s approach as it eliminates FOUT, others think the Firefox way is more appropriate as content can be read whilst fonts download. Whatever your preference, the WebFont Loader can make all browsers behave the same way.
The WebFont Loader is a JavaScript library that gives you extra control over font loading. It was co-developed by Google and Typekit, and released as open source. The WebFont Loader works with most web font services as well as with self-hosted fonts.
The WebFont Loader tells you when the following events happen as a browser downloads web fonts (or loads them from cache):
when fonts start to download (‘loading’)
when fonts finish loading (‘active’)
if fonts fail to load (‘inactive’)
If your web page requires more than one font, the WebFont Loader will trigger events for individual fonts, and for all the fonts as a whole. This means you can find out when any single font has loaded, and when all the fonts have loaded (or failed to do so).
The WebFont Loader notifies you of these events in two ways: by applying special CSS classes when each event happens; and by firing JavaScript events. For our purposes, we’ll be using just the CSS classes.
Implementing the WebFont Loader
As stated above, the WebFont Loader works with most web font services as well as with self-hosted fonts.
Self-hosted fonts
To use the WebFont Loader when you are hosting the font files on your own server, paste the following code into your web page:
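Something along these lines, using the loader’s custom module (the Google-hosted webfont.js URL below is the conventional one; adjust as needed):
<script src="http://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js"></script>
<script>
WebFont.load({
	custom: {
		families: ['Font Family Name', 'Another Font Family'],
		urls: ['http://yourwebsite.com/styles.css']
	}
});
</script>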
Replace Font Family Name and Another Font Family with a comma-separated list of the font families you want to check against, and replace http://yourwebsite.com/styles.css with the URL of the style sheet where your @font-face rules reside.
Fontdeck
Assuming you have added some fonts to a website project in Fontdeck, use the afore-mentioned code for self-hosted solutions and replace http://yourwebsite.com/styles.css with the URL of the link tag in your Fontdeck website settings page. It will look something like http://f.fontdeck.com/s/css/xxxx/domain/nnnn.css.
Typekit
Typekit’s JavaScript-based implementation incorporates the WebFont Loader events by default, so you won’t need to include any WebFont Loader code.
Making all browsers behave like Safari
To make Firefox and Opera work in the same way as WebKit browsers (Safari, Chrome, etc.) and Internet Explorer, and thus minimise FOUT, you need to hide the text while the fonts are loading.
While fonts are loading, the WebFont Loader adds a class of wf-loading to the html element. Once the fonts have loaded, the wf-loading class is removed and replaced with a class of wf-active (or wf-inactive if all of the fonts failed to load). This means you can style elements on the page while the fonts are loading and then style them differently when the fonts have finished loading.
So, let’s say the text you need to hide while fonts are loading is contained in all paragraphs and top-level headings. By writing the following style rule into your CSS, you can hide the text while the fonts are loading:
.wf-loading h1, .wf-loading p {
visibility:hidden;
}
Because the wf-loading class is removed once the fonts have loaded, the visibility:hidden rule will stop being applied, and the text revealed. You can see this in action on this simple example page.
That works nicely across the board, but the situation is slightly more complicated. WebKit doesn’t wait for all fonts to load before displaying text: it displays text elements as soon as the relevant font is loaded.
To emulate WebKit more accurately, we need to know when individual fonts have loaded, and apply styles accordingly. Fortunately, as mentioned earlier, the WebFont Loader has events for individual fonts too.
When a specific font is loading, a class of the form wf-fontfamilyname-n4-loading is applied. Assuming headings and paragraphs are styled in different fonts, we can make our CSS more specific as follows:
.wf-fontfamilyname-n4-loading h1,
.wf-anotherfontfamily-n4-loading p {
visibility:hidden;
}
Note that the font family name is transformed to lower case, with all spaces removed. The n4 is a shorthand for the weight and style of the font family. In most circumstances you’ll use n4 but refer to the WebFont Loader documentation for exceptions.
You can see it in action on this Safari example page (you’ll probably need to disable your cache to see any change occur).
Making all browsers behave like Firefox
To make WebKit browsers and Internet Explorer work like Firefox and Opera, you need to explicitly show text while the fonts are loading. In order to make this happen, you need to specify a font family which is not a web font while the fonts load, like this:
.wf-fontfamilyname-n4-loading h1 {
font-family: 'arial narrow', sans-serif;
}
.wf-anotherfontfamily-n4-loading p {
font-family: arial, sans-serif;
}
You can see this in action on the Firefox example page (again you’ll probably need to disable your cache to see any change occur).
And there’s more
That’s just the start of what can be done with the WebFont Loader. More areas to explore would be tweaking font sizes to reduce the impact of reflowing text and to better cater for very narrow fonts. By using the JavaScript events much more can be achieved too, such as fading in text as the fonts load.",2010,Richard Rutter,richardrutter,2010-12-02T00:00:00+00:00,https://24ways.org/2010/using-the-webfont-loader-to-make-browsers-behave-the-same/,code
240,My CSS Wish List,"I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs.
I’m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list.
The years have gone by, but I still enjoy making wish lists. And I’ll tell you a little secret: my mum still asks me to send her my Christmas list every year.
This time I’ve made my CSS wish list. As before, I’d be happy with just one present.
Before I begin…
… this list includes:
things that don’t exist in the CSS specification (if they do, please let me know in the comments – I may have missed them);
others that are in the spec, but it’s incomplete or lacks use cases and examples (which usually means that properties haven’t been implemented by even the most recent browsers).
Like with any other wish list, the further down I go, the more unrealistic my expectations – but that doesn’t mean I can’t wish. Some of the things we wouldn’t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example).
The list
Cross-browser implementation of font-size-adjust
When one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished.
What font-size-adjust does is scale the font-size that the fall-back fonts render at, so that their x-height matches that of the preferred font; the value you supply is the preferred font’s aspect ratio (its x-height divided by its font-size). This preserves the x-height of the preferred font in the fall-back fonts. Here’s a simple example:
p {
font-family: Calibri, "Lucida Sans", Verdana, sans-serif;
font-size-adjust: 0.47;
}
In this case, if the user doesn’t have Calibri installed, both Lucida Sans and Verdana will keep Calibri’s aspect ratio, based on the font’s x-height. This property is a personal favourite and one I keep pointing to.
Firefox supported this property from version three. So far, it’s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it.
More control over overflowing text
The text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap:
div {
white-space: nowrap;
width: 100%;
overflow: hidden;
text-overflow: ellipsis;
}
This, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. Opera has taken the first step and added support for the -o-ellipsis-lastline property, which can be used instead of ellipsis. This property is not part of the CSS3 spec, but we could certainly make good use of it if it were…
WebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency.
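For reference, the WebKit route looks something like this sketch (note that it only works together with the old-style box display value):
p {
	display: -webkit-box;
	-webkit-box-orient: vertical;
	-webkit-line-clamp: 3;
	overflow: hidden;
}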
Indentation and hanging punctuation properties
You might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category.
The CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation.
The proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases.
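If the draft settles as currently written, usage might eventually look something like this sketch:
p {
	text-indent: 1em hanging each-line;
	hanging-punctuation: first last;
}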
Text alignment and hyphenation properties
Following the typographic trend of this list, I’d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples.
In the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this.
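A quick sketch of how those properties could combine:
p {
	text-align: justify;
	text-align-last: center;
	text-justify: inter-word;
}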
calc()
This is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now you need to use the Mozilla vendor code, -moz-calc().
Imagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc():
#main {
width: -moz-calc(100% - 240px);
}
Can you imagine how many hacks and headaches we could avoid were this function available in more browsers? Transitions and animations are really nice and lovely but, for me, it’s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation.
Selector grouping with -moz-any()
The -moz-any() selector grouping has been introduced by Mozilla but it’s not part of any CSS specification (yet?); it’s currently only available on Firefox 4.
This would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections…).
Here is a quick example (copied from the Mozilla blog post about the feature) of how -moz-any() works. Instead of writing:
section section h1, section article h1, section aside h1,
section nav h1, article section h1, article article h1,
article aside h1, article nav h1, aside section h1,
aside article h1, aside aside h1, aside nav h1, nav section h1,
nav article h1, nav aside h1, nav nav h1 {
font-size: 24px;
}
You could simply write:
-moz-any(section, article, aside, nav)
-moz-any(section, article, aside, nav) h1 {
font-size: 24px;
}
Nice, huh?
More control over styling form elements
Some are of the opinion that form elements shouldn’t be styled at all, since a user might not recognise them as such if they don’t match the operating system’s controls. I partially agree: I’d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability.
I would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana.
There will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker:
or Safari’s slider control (think star movie ratings, for example):
Parent selector
I don’t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular:
article < h1 {
…
}
One can dream…
Flexible box layout
The Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared.
Two of my favourite features of this new box model are:
the ability to redistribute boxes in a different order from the markup
the ability to create flexible layouts, where boxes shrink (or expand) to fill the available space
Let’s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two:
With the flexible box model, you could specify it like this:
body {
display: box;
box-orient: horizontal;
}
#main {
box-flex: 2;
}
#links {
box-flex: 1;
}
aside {
box-flex: 1;
}
If you decide to add a fourth column to this layout, there is no need to recalculate units or percentages, it’s as easy as that.
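Assuming the new column were a #news region (a made-up hook for illustration), it would just be:
#news {
	box-flex: 1;
}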
Browser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I’m looking at you…). You can read a more comprehensive write-up about this property on the Mozilla developer blog.
It’s easy to understand why it’s harder to start playing with this module than with things like animations or other more decorative properties, which don’t really break your layouts when users don’t see them. But it’s important that we do, even if only in very experimental projects.
Nested selectors
Anyone who has never wished they could do something like the following in CSS, cast the first stone:
article {
h1 { font-size: 1.2em; }
ul { margin-bottom: 1.2em; }
}
Even though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it’s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects.
Every wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first – it’s the least useful, and also the one that could cause more maintenance problems. But it could be nice.
Implementation of the ::marker pseudo-element
The CSS Lists module introduces the ::marker pseudo-element, that allows you to create custom list item markers. When an element’s display property is set to list-item, this pseudo-element is created.
Using the ::marker pseudo-element you could create something like the following:
Footnote 1: Both John Locke and his father, Anthony Cooper, are
named after 17th- and 18th-century English philosophers; the real
Anthony Cooper was educated as a boy by the real John Locke.
Footnote 2: Parts of the plane were used as percussion instruments
and can be heard in the soundtrack.
where the footnote marker is generated by the following CSS:
li::marker {
content: "Footnote " counter(notes) ":";
text-align: left;
width: 12em;
}
li {
counter-increment: notes;
}
You can read more about how to use counters in CSS in my article from last year.
Bear in mind that the CSS Lists module is still a Working Draft and is listed as “Low priority”. I did say this wish list would start to grow more unrealistic closer to the end…
Variables
The sight of the word ‘variables’ may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it’s easy to see how having variables available in CSS could be useful.
Think of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. In a particularly large website, where the colour is repeated countless times in the CSS and where it’s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology).
Again, Less and Sass allow you to use variables in your CSS but, again, not everyone can (or wants to) use these.
If you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so:
@fontFamily: Calibri, "Lucida Grande", "Lucida Sans Unicode", Helvetica, Arial, sans-serif;
body {
font-family: @fontFamily;
}
Other features of these CSS compilers might also be useful, like the ability to ‘call’ a property value from another selector (accessors):
header {
background: #000000;
}
footer {
background: header['background'];
}
or the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient:
.gradient (@start:"", @end:"") {
background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end));
background: -moz-linear-gradient(-90deg,@start,@end);
}
button {
.gradient(#D0D0D0,#9F9F9F);
}
Standardised comments
Each CSS author has his or her own style for commenting their style sheets. While this isn’t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting.
One attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses ‘DocBlocks’, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure:
/**
* Short description
*
* Long description (this can have multiple lines and contain tags
*
* @tags (optional)
*/
CSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data.
I know this isn’t a CSS feature request per se; rather, it’s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers.
Final notes
I understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don’t talk about them and experiment with what’s available, then it will definitely never happen.
Why haven’t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list – you don’t need to ask, everyone knows you want it.
The list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I’ve wished for all of them at some point.
Part of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I’m sure your list would be different too. So tell me, what’s on your CSS wish list?",2010,Inayaili de León Persson,inayailideleon,2010-12-03T00:00:00+00:00,https://24ways.org/2010/my-css-wish-list/,code
241,Jank-Free Image Loads,"There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then —
(Video: an image loads in and shoves the surrounding text down the page.)
— oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place.
By default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author img elements that load like this:
(Video: a jank-free image load, with space reserved on the layout before the image arrives.)
Jank is a very old problem, and there is a very old solution to it: the width and height attributes on img. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.
width
Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.
—The HTML 3.2 Specification, published on January 14 1997
Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:
See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.
width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.
I have good news, bad news, and great news.
The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.
The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.
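The best known of these is probably the padding hack, which abuses the fact that percentage padding resolves against the container’s width. A sketch for a 4:3 box (class names mine):
.aspect-ratio-box {
	position: relative;
	height: 0;
	padding-top: 75%; /* 3 divided by 4, so a 4:3 ratio */
}
.aspect-ratio-box img {
	position: absolute;
	top: 0;
	left: 0;
	width: 100%;
}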
So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.
aspect-ratio in CSS
The first proposed feature? An aspect-ratio property in CSS!
This would allow us to write CSS like this:
img {
width: 100%;
}
.thumb {
aspect-ratio: 1/1;
}
.hero {
aspect-ratio: 16/9;
}
This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.
Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:
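Something along these lines, with placeholder dimensions:
<img src="image.jpg" style="aspect-ratio: 800/600" />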
And inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I’m cutting off my (or anyone else’s) ability to change those aspect ratios, for whatever reason, later.
How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it?
I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!
A brief note on intrinsic and extrinsic sizing
What do I mean by “intrinsic” and “extrinsic?”
The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px.
The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.
It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).
CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.
The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML.
intrinsicsize in HTML
The proposed intrinsicsize attribute will let you do this:
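That is, a sketch like the following (image.jpg is a stand-in):
<img src="image.jpg" intrinsicsize="800x600" />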
That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size.
You may ask (I did!): wait, what if my img references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then is specify an intrinsic aspect ratio:
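A sketch, with placeholder file names:
<img intrinsicsize="3x2"
	sizes="75vw"
	srcset="small.jpg 300w, medium.jpg 600w, large.jpg 900w" />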
In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.
Can’t wait
I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!
Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can’t wait!",2018,Eric Portis,ericportis,2018-12-21T00:00:00+00:00,https://24ways.org/2018/jank-free-image-loads/,code
242,Creating My First Chrome Extension,"Writing a Chrome Extension isn’t as scary as it seems!
Not too long ago, I used a Chrome extension called 20 Cubed. I’m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.
Unfortunately, the developer stopped updating the extension and removed it from Chrome’s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?
Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the “old-fashioned” way. Sky’s the limit!
Setup
But first, some things you’ll need to know about before getting started:
Callbacks
Timeouts
Chrome Dev Tools
Developing with Chrome extension methods requires a lot of callbacks. If you’ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I’d highly recommend brushing up on that subject before getting started.
(Image from Hyperbole and a Half.)
Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn’t consider the fact that I’d be juggling three timers. And I probably would’ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.
On the note of organization, abstraction is important! You might have any combination of the following:
The Chrome extension options page
The popup from the Chrome Menu
The windows or tabs you create
The background scripts
And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you’ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.
Alright, now that you know all that up front, let’s get going!
Documentation
TL;DR READ THE DOCS.
A few things to get started:
Read Google’s primer on browser extensions
Have a look at their Getting started tutorial
Check out their overview on Chrome Extensions
This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.
The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here’s what my manifest.json looked like, for example:
https://github.com/jennz0r/eye-rest/blob/master/manifest.json
Because I’m a visual learner, I found the images that describe the extension’s architecture most helpful.
To clarify this diagram, the background.js file is the extension’s event handler. It’s constantly listening for browser events, which you’ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.
The Popup is the little window that appears when you click on an extension’s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }.
The Options page is exactly as it says. This displays customizable options only visible to the user when they either right-click on the Chrome menu and choose “Options” under an extension. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.
Content scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.
Debugging
A quick note: don’t forget the debugging tutorial!
Just like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you’re likely to have several inspector windows open – one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.
For example, I kept seeing the error “This request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.” Well, it turns out there are limitations on how often you can sync stored information.
Another error I kept seeing was “Alarm delay is less than minimum of 1 minutes. In released .crx, alarm “ALARM_NAME_HERE” will fire in approximately 1 minutes”. Well, it turns out there are minimum interval times for alarms.
Chrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!
Me adding a ton of `console.log`s while trying to debug my alarms.
Eye Rest Functionality
Ok, so what is the extension I created? Again, it’s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:
If the extension is running AND
If the user has not clicked Pause in the Popup HTML AND
If the counter in the Popup HTML is down to 00:00 THEN
Open a new window with Timer HTML AND
Start a 20 sec countdown in Timer HTML AND
Reset the Popup HTML counter to 20:00
If the Timer HTML is down to 0 sec THEN
Close that window. Rinse. Repeat.
Sounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I could have created, I decided to make one that’s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn’t realize until I started coding.
For visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it’s a pun.)
API
Now let’s discuss the APIs that I used to build this extension.
Alarms
What even are alarms? I didn’t know either.
Alarms are basically Chrome’s setTimeout and setInterval. They exist because, as Google says…
DOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.
For more information, check out this background migration doc.
One interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn’t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.
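A sketch of that workaround (the naming scheme is illustrative, not the exact Eye Rest code):
// Give this load's alarm a unique name…
var alarmName = 'eyeRest-' + Date.now();
// …then sweep away any alarms left over from previous loads.
chrome.alarms.getAll(function (alarms) {
	alarms.forEach(function (alarm) {
		if (alarm.name !== alarmName) {
			chrome.alarms.clear(alarm.name);
		}
	});
});
chrome.alarms.create(alarmName, { periodInMinutes: 20 });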
Background Scripts
For Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.
I wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.
I also wanted my Popup script to do the same things when the user has unpaused the extension’s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.
Other APIs
Windows
I use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.
One day, while coding late into the evening, I found it very confusing that the windows.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window’s HTML. Until then, it hadn’t dawned on me that the url could be relative. Duh. I was tired!
I pass the timer.html as the url option, as well as type, size, position, and other visual options.
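In sketch form (the size and position values are stand-ins):
chrome.windows.create({
	url: 'timer.html', // a relative URL resolves to a file inside the extension
	type: 'popup',
	width: 320,
	height: 240,
	top: 0,
	left: 0
});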
Storage
Maybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome’s storage is avoiding quotas and write operation maximums.
I wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it’s quite complicated and requires lots of writes. That’s why I went with the user’s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.
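The gist of it, reduced to a sketch:
// Background or helper script: record the timer state.
localStorage.setItem('date', Date.now());
localStorage.setItem('nextAlarmTime', Date.now() + 20 * 60 * 1000);
localStorage.setItem('isPaused', 'false');

// Popup script: read it back each second to drive the countdown.
var remaining = Number(localStorage.getItem('nextAlarmTime')) - Date.now();
var isPaused = localStorage.getItem('isPaused') === 'true';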
Declarative Content
The Declarative Content API allows you to show your extension’s page action based on several types of matches, without needing to take a host permission or inject a content script. So you’ll need this to get your extension to work in the browser!
You can see me set this in my Background script. Because I want my extension’s popup to appear on every page one is browsing, I leave the page matchers empty.
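In sketch form, that amounts to the boilerplate from Google’s Getting started tutorial, with an empty matcher:
chrome.declarativeContent.onPageChanged.removeRules(undefined, function () {
	chrome.declarativeContent.onPageChanged.addRules([{
		// An empty PageStateMatcher matches every page.
		conditions: [new chrome.declarativeContent.PageStateMatcher({})],
		actions: [new chrome.declarativeContent.ShowPageAction()]
	}]);
});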
There are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!
The Extension
Here’s what my original Popup looked like before I added styles.
And here’s what it looks like with new styles. I guess I’m going for a Nickelodeon feel.
And here’s the Timer window and Popup together!
Publishing
Publishing is a cinch. You just zip up your files, create a new or use an existing Google Developer account, upload the files, add some details, and pay a one time $5 fee. That’s all! Then your extension will be available on the Chrome extension store! Neato :D
My extension is now available for you to install.
Conclusion
I thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it’s definitely achievable with some time, persistence, and good ole Google searches.
Eventually, I’d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!
Either way, with Eye Rest’s framework built out, I’m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.",2018,Jennifer Wong,jenniferwong,2018-12-05T00:00:00+00:00,https://24ways.org/2018/my-first-chrome-extension/,code
243,Researching a Property in the CSS Specifications,"I frequently joke that I’m “reading the specs so you don’t have to”, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial.
What if you could just look it up yourself? That’s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs.
Where are the CSS Specifications?
The easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4.
Who are the specifications for?
CSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected.
Which version of a spec should I look at?
There are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I’m referring to a particular Working Draft in an article I’ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing.
If you want the very latest additions and changes to the spec, then the Editor’s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor’s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor’s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed.
If you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated.
How to approach a spec
The first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won’t need to read all the way to the bottom, or read every section in detail.
The second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. There are some key specifications that many other specifications draw on, such as:
Values and Units
Intrinsic and Extrinsic Sizing
Box Alignment
When something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore.
Researching your property
As an example, we will take a look at the property grid-auto-rows, which defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property.
You might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties.
Clicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns.
Value
For Value, we can see that the property accepts a value of <track-size>+. The next thing to do is to find out what that actually means; clicking <track-size> will take you to where it is defined in the Grid spec.
The <track-size> value is defined as accepting various values:
<track-breadth>
minmax( <inflexible-breadth> , <track-breadth> )
fit-content( <length-percentage> )
We need to head down the rabbit hole to find out what each of these means. From here we essentially go down line by line until we have unpacked the value of <track-size>.
<track-breadth> is defined just below as:
<length-percentage>
<flex>
min-content
max-content
auto
So these are all things that would be valid to use as a value for grid-auto-rows.
The first value, <length-percentage>, is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length>, in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values.
grid-auto-rows: 100px;
grid-auto-rows: 1em;
grid-auto-rows: 30%;
When using percentages, it is important to know what they are a percentage of, as a percentage has to resolve from something. There is text in the spec which explains how column and row percentages work.
“<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.”
This means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid.
The second value, <flex>, is also defined here, as “A non-negative dimension with the unit fr specifying the track’s flex factor.” This is the fr unit, and the spec links to a fuller definition of fr; as this unit is only used in Grid Layout, it is defined in the grid spec. We now know that a valid value would be:
grid-auto-rows: 1fr;
There is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum, which means that 1fr is really minmax(auto, 1fr). This is why giving a number of tracks 1fr each does not guarantee that they are all equal in size: a larger item in any one of those tracks gives that track a larger auto minimum, so it ends up bigger once the spare space has been distributed.
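That note also hints at the fix when you really do want equal tracks: set the minimum yourself. A quick sketch - in a grid with a definite height, zeroing the minimum keeps implicit rows equal regardless of their content:
.grid {
  display: grid;
  height: 100vh;
  /* 1fr alone behaves like minmax(auto, 1fr), so a tall item can enlarge its row */
  grid-auto-rows: minmax(0, 1fr);
}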
We then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out.
For example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary.
We can now add min-content and max-content to our available values.
grid-auto-rows: min-content;
grid-auto-rows: max-content;
The final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content placed in it. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, these tracks stretch by default; tracks using other types of length will not behave like this.
grid-auto-rows: auto;
So, this was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). This tells us that we can use minmax() as a value: the final (max) value will be a <track-breadth>, and we have already unpacked all of the allowable values there. The first (min) value is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value:
min-content
max-content
auto
We already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>.
grid-auto-rows: minmax(100px, 200px);
grid-auto-rows: minmax(20%, 1fr);
grid-auto-rows: minmax(1em, auto);
grid-auto-rows: minmax(min-content, max-content);
Finally, we can use fit-content( <length-percentage> ). We can see that fit-content() takes a value of <length-percentage>, which we already know to be either a length unit or a percentage. The spec details how fit-content() is worked out: it essentially gives you a track which acts as if you had used the max-content keyword, except that the track stops growing once it hits the length passed to it.
grid-auto-rows: fit-content(200px);
grid-auto-rows: fit-content(20%);
Those are all of our possible values. To round things off, look back at the value definition at the top of the panel: <track-size>+ has a little + sign next to it. Click that and you will be taken to the CSS Values and Units module to find that, “A plus (+) indicates that the preceding type, word, or group occurs one or more times.” This means that we can pass a single track size to grid-auto-rows, or multiple track sizes as a space-separated list. Below the box is an explanation of what happens if you pass in more than one track size:
“If multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.”
Therefore with the following CSS, if five implicit rows were needed they would be as follows:
100px
1fr
auto
100px
1fr
.grid {
display: grid;
grid-auto-rows: 100px 1fr auto;
}
Initial
We can now move to the next line in the box, and you’ll be glad to know that it isn’t going to require as much unpacking! This simply defines the initial value for grid-auto-rows: if you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that is used if the property applies to an element but you have not set a value for it. In the case of grid-auto-rows, that value is used whenever rows are created in the implicit grid, so it needs a value to fall back on even if you never set one.
Applies to
This line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers.
Inherited
This tells us whether the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows, it is not inherited. A property such as color is inherited, so you do not need to set it on each element.
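A quick illustration of the difference, with made-up values:
.parent {
  color: #663399; /* inherited: descendant text is purple unless overridden */
  grid-auto-rows: 8rem; /* not inherited: a grid nested inside .parent won't pick this up */
}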
Percentages
This tells us whether percentages are allowed for this property and, if so, how they are calculated. In this case we are referred to the definition for grid-template-columns and grid-template-rows, where we discover that the percentage refers to the corresponding dimension of the content area.
Media
This defines the media group that the property belongs to. In this case visual.
Computed Value
This details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.
Canonical Order
If you have a property - generally a shorthand property - which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don’t need to worry about this value in the table.
Animation Type
This details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet!
That’s (mostly) it!
Sometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at, as they highlight usage of the property that the spec editor felt could use an example. There may also be some additional text explaining anything specific to this property.
In selecting grid-auto-rows I chose a fairly complex property in terms of the work needed to unpack the value; many properties are far simpler than this. Ultimately, though, even when you come across a complex value, it really is just a case of stepping through the definitions until you reach the bottom of the rabbit hole.
Being able to work out what is valid for each property is incredibly useful. It means you don’t waste time trying to use a value that doesn’t work for that property. You also may find that there are values you weren’t aware of, that solve problems for you.
Further reading
Specifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need.
You may also find useful:
Value Definition Syntax on MDN.
The MDN Glossary defines many common terms.
Understanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax.
How to read W3C Specs - from 2001 but still relevant.
I hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else, can be very useful indeed.",2018,Rachel Andrew,rachelandrew,2018-12-14T00:00:00+00:00,https://24ways.org/2018/researching-a-property-in-the-css-specifications/,code
244,It’s Beginning to Look a Lot Like XSSmas,"I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional.
It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it.
With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it’s also risky.
Vulnerabilities and Open Source
Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps.
Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017
In the first 24 ways article this year, Katie referenced a few different types of vulnerability:
Cross-site Request Forgery (also known as CSRF)
SQL Injection
Cross-site Scripting (also known as XSS)
There are many more types of vulnerability, and those that live under the same category share similarities.
For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues.
Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library.
Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved on to other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward three months, and it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity.
Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env).
This is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas.
Santa’s on his sleigh
If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too.
Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies.
Fixing vulnerabilities – the ultimate christmas gift
If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy.
When you find out about a new vulnerability, you have some options:
Fix the vulnerability via an upgrade
You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version.
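For Node.js projects, for example, npm’s built-in tooling covers the basics of this workflow (npm audit shipped with npm 6):
# list the known vulnerabilities in your dependency tree
npm audit
# upgrade to compatible fixed versions automatically, where they exist
npm audit fix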
Patch the vulnerable code
Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else.
Switch to a different library
If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer being maintained.
Responsibly disclosing vulnerabilities
Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options:
A malicious developer will exploit that vulnerability for their own gain.
A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited.
An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too.
It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code.
So what does responsible disclosure look like? I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There’s a separate process for bugs found in third-party npm packages.) Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org.
Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed.
Encourage responsible disclosure
Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to be notified privately about a vulnerability - 73% of maintainers who had published a policy had been notified, vs 21% of maintainers who hadn’t.
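The file doesn’t need to be elaborate. A minimal sketch, with a placeholder contact address:
# Security Policy
If you find a vulnerability in this project, please report it privately
by emailing security@example.com rather than opening a public issue. We
will acknowledge your report and keep you updated while we work on a fix.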
Stats from the State of Open Source Security 2017
Bug bounties
Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, they also reduce the enticement of exploiting a vulnerability over reporting it for a quick cash reward. HackerOne is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. WordPress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program.
If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about the requests he gets offering a report about a critical vulnerability in exchange for money.
On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question.
A gift worth giving
It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities.
Stats from the State of Open Source Security 2017
Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too.",2018,Anna Debenham,annadebenham,2018-12-17T00:00:00+00:00,https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/,code
246,Designing Your Site Like It’s 1998,"It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways— I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.
My design for Planes, Trains and Automobiles is fixed at 800px wide.
Developing a framework
I’ll start by using frames to set up the framework for this new website. Frames are individual pages—one for navigation, the other for my content—pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a <frameset> element.
I add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don’t want frame borders or any space between my frames, I set the frameborder and framespacing attributes to 0:
[…]
Next I add the source of my two frame documents. I don’t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame. I do want links from my navigation to open in the content frame, so I give each frame a name, which lets me specify where links should open.
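Put together, the frameset document looks something like this sketch (the file names are stand-ins):
<frameset rows=""50,*"" frameborder=""0"" framespacing=""0"">
  <frame src=""navigation.html"" noresize scrolling=""no"">
  <frame src=""content.html"" name=""content"">
</frameset>
A link in the navigation can then use target=""content"" to open its destination in the content frame.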
The framework for this website is simple, as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets—and as many individual documents—as I need. Letterbox framesets were a common way to deal with multiple screen sizes: in a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.
Handling older browsers
Sadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:
<noframes>
  <p>This page uses frames, but your browser doesn’t support them.
  Please upgrade your browser.</p>
</noframes>
Forcing someone back into a frame
Sometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a frameset, people could easily miss out on other parts of the design. A short script will prevent this happening, and because it’s vanilla JavaScript, it doesn’t require a library such as jQuery.
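Something along these lines does the job, assuming index.html is the frameset document:
<script>
// If this page has been opened outside its frameset,
// replace it with the frameset document.
if (window.top === window.self) {
  window.top.location.href = ""index.html"";
}
</script>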
Laying out my page
Before starting my layout, I add a few basic background and colour styles. I must include these attributes on every page of my website.
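A sketch of those attributes on the body element (the colour values are illustrative):
<body bgcolor=""#FFFFFF"" text=""#333333"" link=""#8B1212"" vlink=""#8B1212"" alink=""#DD3A3C"">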
I want absolute control over how people experience my design and don’t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this table in the centre of someone’s screen.
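Something like this sketch:
<table width=""800"" align=""center"" border=""0"" cellpadding=""0"" cellspacing=""0"">
  <tr>
    <td><!-- the rest of the design nests in here --></td>
  </tr>
</table>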
Although they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design, including tables for a banner and three rows of content.
The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them.
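For example (the banner’s file name and dimensions here are stand-ins):
<table width=""600"" border=""0"" cellpadding=""0"" cellspacing=""0"">
  <tr>
    <td><img src=""banner.gif"" width=""600"" height=""80"" alt=""Planes, Trains and Automobiles""></td>
  </tr>
</table>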
The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images. The height and width of these “shims” or “spacers” is only 1px, but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.
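That extra row might look like this; shim.gif stands in for the transparent image, and the widths (illustrative here) add up to the 800px layout:
<tr>
  <td><img src=""shim.gif"" width=""250"" height=""1"" alt=""""></td>
  <td><img src=""shim.gif"" width=""300"" height=""1"" alt=""""></td>
  <td><img src=""shim.gif"" width=""250"" height=""1"" alt=""""></td>
</tr>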
For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:
[…]
I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it.
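A sketch of that nesting (colour values are illustrative):
<table width=""380"" bgcolor=""#003399"" border=""0"" cellpadding=""0"" cellspacing=""1"">
  <tr>
    <td>
      <table width=""100%"" bgcolor=""#FFFFFF"" border=""0"" cellpadding=""10"" cellspacing=""0"">
        <tr>
          <td><!-- column content --></td>
        </tr>
      </table>
    </td>
  </tr>
</table>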
Adding details
Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices: two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button.
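Roughly, with stand-in image names and sizes:
<table border=""0"" cellpadding=""0"" cellspacing=""0"">
  <tr>
    <td background=""btn-left.gif""><img src=""shim.gif"" width=""12"" height=""30"" alt=""""></td>
    <td background=""btn-middle.gif"" align=""center""><a href=""dvd.html"">Buy the DVD</a></td>
    <td background=""btn-right.gif""><img src=""shim.gif"" width=""12"" height=""30"" alt=""""></td>
  </tr>
</table>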
I use those same elements to add details to headlines and lists too. Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer:
Directed by John Hughes
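In markup, again with stand-in file names, each list item becomes a row:
<tr>
  <td valign=""top""><img src=""bullet.gif"" width=""10"" height=""10"" alt=""""></td>
  <td><img src=""shim.gif"" width=""10"" height=""1"" alt=""""></td>
  <td>Directed by John Hughes</td>
</tr>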
Implementing a typographic hierarchy
So far I’ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser’s default to any font installed on someone’s device:
Planes, Trains and Automobiles is a comedy film […]
To adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser’s default. I use a size of 4 for this headline and 2 for the text which follows:
Steve Martin
An American actor, comedian, writer, producer, and musician.
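Marked up with <font> elements, that hierarchy looks something like this (the face value is illustrative):
<font face=""Arial, Helvetica"" size=""4"">Steve Martin</font><br>
<font face=""Arial, Helvetica"" size=""2"">An American actor, comedian, writer, producer, and musician.</font>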
When I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every <font> element on every page of my website.
NB: I use as many <br> elements as needed to create space between headlines and paragraphs.
View the final result (and especially the source.)
My modern day design for Planes, Trains and Automobiles.
I can imagine many people reading this and thinking “This is terrible advice because we don’t develop websites like this in 2018.” That’s true.
We have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:
font-variant-caps: titling-caps;
font-variant-ligatures: common-ligatures;
font-variant-numeric: oldstyle-nums;
Grid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:
body {
display: grid;
grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;
grid-template-rows: auto;
grid-column-gap: 2vw;
grid-row-gap: 1vh;
}
Flexbox has made it easy to develop flexible components such as navigation links:
nav ul { display: flex; }
nav li { flex: 1; }
Just one line of CSS can create multiple columns of fluid type:
main { column-width: 12em; }
CSS Shapes enable text to flow around irregular shapes including polygons:
[src*=""main-img""] {
float: left;
shape-outside: polygon(…);
}
Today, we wouldn’t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:
.btn {
background: linear-gradient(#8B1212, #DD3A3C);
border-radius: 1em;
box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);
}
CSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we’re certain we know how to design and develop products and websites better than we did at the end of 1998.
Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today.
I have no prediction for what the web will be like twenty years from now. However, I want to believe we’ll build on what we’ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won’t be repeated.
Head over to my website if you’d like to read about how I’d implement my design for ‘Planes, Trains and Automobiles’ today.",2018,Andy Clarke,andyclarke,2018-12-23T00:00:00+00:00,https://24ways.org/2018/designing-your-site-like-its-1998/,code
247,Managing Flow and Rhythm with CSS Custom Properties,"An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better; it can also improve their scannability.
Browsers ship with default CSS, and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often wipe those styles out with a CSS reset, and elements such as <div> have no default margin or padding associated with them in the first place.
I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS and levelling the playing field for component-driven front-ends, with very little success. That experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in!
The Flow utility
With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid.
That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy.
The code
.flow {
--flow-space: 1em;
}
.flow > * + * {
margin-top: var(--flow-space);
}
What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them.
The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty about setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let’s look at some examples.
A basic example
See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/LXqerj
What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility.
Because our --flow-space custom property is set to 1em, the space between elements is 1× the font size of the element in question. This means that an element with an assigned font size of 1.8rem gets a calculated margin-top value of 28.8px (1.8 × the 16px root font size). If we were to globally change the --flow-space value to 1.1em, for example, we’d affect everything, because margin values would be calculated as 1.1× the font size.
This example looks great, because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though?
See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/qQgxaY
I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances:
h2 {
--flow-space: 3rem;
}
Notice also how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured.
A more advanced example
Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space.
See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/ZmPGyL
In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself:
...
Something to remember, though: custom properties cascade and inherit in the same way that other CSS values do, so you’ve got to keep that in mind. There’s a good example of that here: because the flow utility on our .features component has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element.
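In CSS, that pair of overrides looks something like this (the values are illustrative; the class names are the ones from the demo):
.features {
  --flow-space: 3rem; /* generous space around the features block */
}
.features__list {
  --flow-space: 1.5rem; /* re-set for the items inside, which would otherwise inherit 3rem */
}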
“But what about old browsers?”, I hear you cry
We’re using CSS Custom Properties which, at the time of writing, have about 88% browser support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size:
.flow {
--flow-space: 1em;
}
.flow > * + * {
margin-top: 1em;
margin-top: var(--flow-space);
}
Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement!
Wrapping up
This tiny little utility can bring great power for when you want to consistently space elements, vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS.
If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on!",2018,Andy Bell,andybell,2018-12-07T00:00:00+00:00,https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/,code
249,Fast Autocomplete Search for Your Website,"Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.
I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.
Try it out here, then read on to see how I built it.
First step: crawling the data
The first step in building a search engine is to grab a copy of the data that you plan to make searchable.
There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.
I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites.
We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.
Then we’ll tell wget to recursively crawl the website, using the --recursive flag.
We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2
The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X ""/*/*/comments"".
Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.
Tie all of those options together and we get this command:
wget --recursive --wait 2 --no-clobber
-I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
-X ""/*/*/comments""
https://24ways.org/archives/
If you leave this running for a few minutes, you’ll end up with a folder structure something like this:
$ find 24ways.org
24ways.org
24ways.org/2013
24ways.org/2013/why-bother-with-accessibility
24ways.org/2013/why-bother-with-accessibility/index.html
24ways.org/2013/levelling-up
24ways.org/2013/levelling-up/index.html
24ways.org/2013/project-hubs
24ways.org/2013/project-hubs/index.html
24ways.org/2013/credits-and-recognition
24ways.org/2013/credits-and-recognition/index.html
...
As a quick sanity check, let’s count the number of HTML pages we have retrieved:
$ find 24ways.org | grep index.html | wc -l
328
There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead:
wget --recursive --wait 2 --no-clobber
-I /2018
-X ""/*/*/comments""
https://24ways.org/
Thanks to --no-clobber, this is safe to run every day in December to pick up any new content.
We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index.
Building a search index using SQLite
There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.
I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.
SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!
SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.
This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.
It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.
I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday.
If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages:
$ python3 -m venv ./jupyter-venv
$ ./jupyter-venv/bin/pip install jupyter
# ... lots of installer output
# Now lets install some extra packages we will need later
$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib
# And start the notebook web application
$ ./jupyter-venv/bin/jupyter-notebook
# This will open your browser to Jupyter at http://localhost:8888/
You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.
A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account.
Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on.
from pathlib import Path
from bs4 import BeautifulSoup as Soup
base = Path(""/Users/simonw/Dropbox/Development/24ways-search"")
articles = list(base.glob(""*/*/*/*.html""))
# articles is now a list of paths that look like this:
# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')
docs = []
for path in articles:
year = str(path.relative_to(base)).split(""/"")[1]
url = 'https://' + str(path.relative_to(base).parent) + '/'
soup = Soup(path.open().read(), ""html5lib"")
author = soup.select_one("".c-continue"")[""title""].split(
""More information about""
)[1].strip()
author_slug = soup.select_one("".c-continue"")[""href""].split(
""/authors/""
)[1].split(""/"")[0]
published = soup.select_one("".c-meta time"")[""datetime""]
contents = soup.select_one("".e-content"").text.strip()
title = soup.find(""title"").text.split("" ◆"")[0]
try:
topic = soup.select_one(
'.c-meta a[href^=""/topics/""]'
)[""href""].split(""/topics/"")[1].split(""/"")[0]
except TypeError:
topic = None
docs.append({
""title"": title,
""contents"": contents,
""year"": year,
""author"": author,
""author_slug"": author_slug,
""published"": published,
""url"": url,
""topic"": topic,
})
After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:
[
{
""title"": ""Why Bother with Accessibility?"",
""contents"": ""Web accessibility (known in other fields as inclus..."",
""year"": ""2013"",
""author"": ""Laura Kalbag"",
""author_slug"": ""laurakalbag"",
""published"": ""2013-12-10T00:00:00+00:00"",
""url"": ""https://24ways.org/2013/why-bother-with-accessibility/"",
""topic"": ""design""
},
{
""title"": ""Levelling Up"",
""contents"": ""Hello, 24 ways. Iu2019m Ashley and I sell property ins..."",
""year"": ""2013"",
""author"": ""Ashley Baxter"",
""author_slug"": ""ashleybaxter"",
""published"": ""2013-12-06T00:00:00+00:00"",
""url"": ""https://24ways.org/2013/levelling-up/"",
""topic"": ""business""
},
...
My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries.
import sqlite_utils
db = sqlite_utils.Database(""/tmp/24ways.db"")
db[""articles""].insert_all(docs)
That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.
(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)
You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:
$ sqlite3 /tmp/24ways.db
sqlite> .headers on
sqlite> .mode column
sqlite> select title, author, year from articles;
title author year
------------------------------ ------------ ----------
Why Bother with Accessibility? Laura Kalbag 2013
Levelling Up Ashley Baxte 2013
Project Hubs: A Home Base for Brad Frost 2013
Credits and Recognition Geri Coady 2013
Managing a Mind Christopher 2013
Run Ragged Mark Boulton 2013
Get Started With GitHub Pages Anna Debenha 2013
Coding Towards Accessibility Charlie Perr 2013
...
There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:
db[""articles""].enable_fts([""title"", ""author"", ""contents""])
Introducing Datasette
Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet.
We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that?
If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article.
If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:
./jupyter-venv/bin/pip install datasette
This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.
Now you can run Datasette against the 24ways.db file we created earlier like so:
./jupyter-venv/bin/datasette /tmp/24ways.db
This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.
If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db
Publishing the database to the internet
One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish:
$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways
-----> Python app detected
-----> Installing requirements with pip
-----> Running post-compile hook
-----> Discovering process types
Procfile declares types -> web
-----> Compressing...
Done: 47.1M
-----> Launching...
Released v8
https://search-24ways.herokuapp.com/ deployed to Heroku
If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways.
You can run this command as many times as you like to deploy updated versions of the underlying database.
Searching and faceting
Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.
SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A
A neat feature of Datasette is the ability to calculate facets against your data. Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic
Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:
http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A
Better search using custom SQL
The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.
This exposes the underlying SQL query, which looks like this:
select rowid, * from articles where rowid in (
select rowid from articles_fts where articles_fts match :search
) order by rowid limit 101
We can do better than this by constructing a custom SQL query. Here’s the query we will use instead:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || ""*""
order by rank limit 10;
You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.
There’s a lot going on here! Let’s break the SQL down line-by-line:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on.
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.
from articles
join articles_fts on articles.rowid = articles_fts.rowid
articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.
where articles_fts match :search || ""*""
order by rank limit 10;
:search || ""*"" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.
How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects:
JSON API call to search for articles matching SVG
The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
A simple JavaScript autocomplete search interface
I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.
We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:
function debounce(func, wait, immediate) {
let timeout;
return function() {
let context = this, args = arguments;
let later = () => {
timeout = null;
if (!immediate) func.apply(context, args);
};
let callNow = immediate && !timeout;
clearTimeout(timeout);
timeout = setTimeout(later, wait);
if (callNow) func.apply(context, args);
};
};
We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.
Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these:
const htmlEscape = (s) => s.replace(
  /&/g, '&amp;'
).replace(
  /</g, '&lt;'
).replace(
  />/g, '&gt;'
);
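The page the script runs on needs only a couple of elements; a sketch of that markup, using the same ids the script below expects:
<h1>Autocomplete search</h1>
<input id=""searchbox"" type=""search"">
<div id=""results""></div>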
And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:
// Embed the SQL query in a multi-line backtick string:
const sql = `select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || ""*""
order by rank limit 10`;
// Grab a reference to the searchbox input element
const searchbox = document.getElementById(""searchbox"");
// Used to avoid race-conditions:
let requestInFlight = null;
searchbox.onkeyup = debounce(() => {
  const q = searchbox.value;
  // Construct the API URL, using encodeURIComponent() for the parameters
  const url = (
    ""https://search-24ways.herokuapp.com/24ways-866073b.json?sql="" +
    encodeURIComponent(sql) +
    `&search=${encodeURIComponent(q)}&_shape=array`
  );
  // Unique object used just for race-condition comparison
  let currentRequest = {};
  requestInFlight = currentRequest;
  fetch(url).then(r => r.json()).then(d => {
    if (requestInFlight !== currentRequest) {
      // Avoid race conditions where a slow request returns
      // after a faster one.
      return;
    }
    let results = d.map(r => `
      <div class=""result"">
        <h3><a href=""${r.url}"">${htmlEscape(r.title)}</a></h3>
        <p><small>${htmlEscape(r.author)} - ${r.year}</small></p>
        <p>${highlight(r.snippet)}</p>
      </div>
    `).join("""");
    document.getElementById(""results"").innerHTML = results;
  });
}, 100); // debounce every 100ms
There’s just one more utility function, used to help construct the HTML results:
const highlight = (s) => htmlEscape(s).replace(
  /b4de2a49c8/g, '<b>'
).replace(
  /8c94a2ed4b/g, '</b>'
);
This is what those unique strings passed to the snippet() function were for.
Avoiding race conditions in autocomplete
One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:
User types acces
Browser sends request A - querying documents matching acces*
User continues to type accessibility
Browser sends request B - querying documents matching accessibility*
Request B returns. It was fast, because there are fewer documents matching the full term
The results interface updates with the documents from request B, matching accessibility*
Request A returns results (this was the slower of the two requests)
The results interface updates with the documents from request A - results matching acces*
This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with results from an earlier, staler query.
Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.
Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.
When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.
It’s not a lot of code, really
And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like.
How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with.
A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.
More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.
For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.",2018,Simon Willison,simonwillison,2018-12-19T00:00:00+00:00,https://24ways.org/2018/fast-autocomplete-search-for-your-website/,code
253,Clip Paths Know No Bounds,"CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.
The basics of a clip path
Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everywhere in the element that is not within the bounds of our shape will be visually removed.
Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions.
See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.
So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.
clip-path: polygon(
30% 0%,
70% 0%,
100% 30%,
100% 70%,
70% 100%,
30% 100%,
0% 70%,
0% 30%
);
A shape with fewer vertices than the eye can see
It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%.
Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.
See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.
By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.
Our earlier octagon can similarly be made with only four points.
See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.
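To sketch the idea (these aren’t necessarily the numbers the Pen uses), a single four-point diamond whose corners all sit outside the element box clips to the very same octagon, because the box edges slice the diamond’s four corners off for us:
clip-path: polygon(
  50% -20%,
  120% 50%,
  50% 120%,
  -20% 50%
);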
Multiple shapes, one clip path
We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().
See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.
Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next.
It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles.
See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.
Different shapes with fill rules
A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.
See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.
As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path’s lines, the point is considered outside, and if it crosses an odd number the point is inside.
Order of operations
It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.
These compositing effects are applied in the order:
CSS Filters (e.g. filter: blur(2px))
Clipping (e.g. what this article is about)
Masking (Clipping’s cousin)
Blend Modes (e.g. mix-blend-mode: multiply)
Opacity
This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element’s box edges.
See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.
If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.
Revealing content with animation
CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values.
See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.
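A minimal sketch of such a reveal (not necessarily how the Pen does it): both states are a polygon() with four points and the default nonzero fill rule, so the browser can interpolate between them:
.reveal {
  /* all four points collapsed to the center: fully clipped */
  clip-path: polygon(50% 50%, 50% 50%, 50% 50%, 50% 50%);
  transition: clip-path 0.5s ease-out;
}
.reveal.open {
  /* the four corners of the box: fully revealed */
  clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%);
}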
Don’t keep CSS Shapes in a box
Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible which makes this property fairly powerful.",2018,Dan Wilson,danwilson,2018-12-20T00:00:00+00:00,https://24ways.org/2018/clip-paths-know-no-bounds/,code
255,Inclusive Considerations When Restyling Form Controls,"I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility.
I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year.
Returning to reality, styling form controls these days is more tricky and time-consuming than flat out “hard”. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match.
However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve.
Quick level setting
Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following:
Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels.
It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics.
Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing.
Visually resetting and restyling form controls
Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors.
As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standards-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation.
See the Pen form control styling comparisons by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/gZOrZm/
Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated.
If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browsers respect without you even realizing.
You ever try playing with the accessibility settings of your display on macOS, or a similar Windows setting?
It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements:
Non-standard
This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future.
While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all.
Can’t make it? Fake it.
Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended.
Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling.
Getting clever with markup and CSS
The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox.
See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/RqEayN/
Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen?
As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others.
Limitations to cleverness
Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to.
However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach.
Using ARIA when appropriate
Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA).
ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA:
If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible.
Exceptions to the rule
One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control.
Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized.
While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain, which communicates a different way of interacting with the control than what you might expect from a native switch control.
See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/XyvoeE/
By adding a role=""switch"" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, it’s inherent ability to be focused by Tab key, and Space key to toggle state.
But while this is a valid approach to take in building a switch, how does this actually match up to reality?
Does it pass the test(s)?
Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better.
What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:
Is the control implemented in all supported browsers?
If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field?
If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline?
Test with multiple input devices.
Where the control is implemented, what is the quality of the user experience when using different input devices, such as a mouse, touchscreen, keyboard, speech recognition, or switch device, to name a few?
You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well.
How well does the control take to custom styles?
If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse effects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling.
Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled.
Do specifications match reality?
If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations?
For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order?
Wrapping up
While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented.
Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option.
2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs.
And hey. There’s always next year, right?",2018,Scott O'Hara,scottohara,2018-12-13T00:00:00+00:00,https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/,code
256,Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist,"We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.
*(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!)
Looking for critters in rocky intertidal habitats
One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’.
A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.)
They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons.
My favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, with varied types of water-coverage habitat zones, as well as being relatively accessible.
I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist.
Using technology and the crowdsourced species observations of the iNaturalist community we can shortcut our way to this superpower!
Finding nearby animals with iNaturalist
We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.
I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting.
This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.
iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button.
You’ll then get a URL that looks a bit like
https://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at
which you can call and interrogate using a programming language of your choice.
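For example, here’s a quick sketch of calling that same URL with fetch() in JavaScript, logging total_results (one of the fields the API returns):
fetch(
  ""https://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at""
).then(response => response.json())
.then(data => {
  // how many observations matched our query
  console.log(data.total_results);
});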
If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).
Rapid development using Observable Notebooks
We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn’t require any setup at all.
You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example.
Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock.
Developing your Naturalist superpowers
If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you.
For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.
We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com
The tool lets you export the bounding box in several formats using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811
A quick map primer:
The higher the latitude the more north it is
The lower the latitude the more south it is
Latitude 0 = the equator
The higher the longitude the more east it is of Greenwich
The lower the longitude the more west it is of Greenwich
Longitude 0 = Greenwich
In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:
nelat = highest latitude = north limit = 37.499811
nelng = highest longitude = east limit = -122.492738
swlat = smallest latitude = south limit = 37.492805
swlng = smallest longitude = west limit = -122.50542
As API parameters these look like this:
?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542
These parameters in this format can be used for most of the iNaturalist API methods.
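Since every call below sends these same four parameters, one handy trick (a small sketch; the cells that follow simply inline the values) is to build the query-string fragment once in its own notebook cell:
bounds_query = Object.entries({
  nelat: 37.499811,
  nelng: -122.492738,
  swlat: 37.492805,
  swlng: -122.50542
}).map(([key, value]) => `${key}=${value}`).join(""&"")
// => ""nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542""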
Nudibranch seasonality in Pillar Point
We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.
In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist’s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’.
Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words eg ‘Glaucus atlanticus’ the first Latin word is the ‘Genus’ like a family name ‘Glaucus’, and the second word identifies that particular species, like a given name ‘atlanticus’.
The two Latin words together indicate a specific species; the term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead.
The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:
pillar_point_counts_per_week = fetch(
  ""https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true""
).then(response => {
  return response.json();
})
Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks.
(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)
The iNaturalist API returns data that looks like this:
{
  ""total_results"": 53,
  ""page"": 1,
  ""per_page"": 53,
  ""results"": {
    ""week_of_year"": {
      ""1"": 136,
      ""2"": 20,
      ""3"": 150,
      ""4"": 65,
      ""5"": 186,
      ""6"": 74,
      ""7"": 47,
      ""8"": 87,
      ""9"": 64,
      ""10"": 56,
      …
But for our Vega-Lite graph we need data that looks like this:
[{
  ""week"": ""Wk 1"",
  ""observations"": 136
}, {
  ""week"": ""Wk 2"",
  ""observations"": 20
}, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys:
objects_to_plot = {
  let objects = [];
  Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) {
    objects.push({
      week: `Wk ${week_index.toString()}`,
      observations: pillar_point_counts_per_week.results.week_of_year[week_index]
    });
  })
  return objects;
}
We can then plug this into Vega-Lite to draw us a graph:
vegalite({
  data: {values: objects_to_plot},
  mark: ""bar"",
  encoding: {
    x: {field: ""week"", type: ""nominal"", sort: null},
    y: {field: ""observations"", type: ""quantitative""}
  },
  width: width * 0.9
})
It’s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences.
So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.
pillar_point_nudibranches = {
  let api_results = await fetch(
    ""https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true""
  ).then(r => r.json())
  let species_list = api_results.results.map(i => ({
    value: i.taxon.id,
    label: `${i.taxon.preferred_common_name} (${i.taxon.name})`
  }));
  return species_list
}
We can create an interactive select box by importing code from Jeremy Ashkanas’ Observable Notebook: add import {select} from ""@jashkenas/inputs"" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another!
viewof select_species = select({
  title: ""Which Nudibranch do you want to see seasonality for?"",
  options: [{value: """", label: ""All the Nudibranchs!""}, ...pillar_point_nudibranches],
  value: """"
})
Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.
pillar_point_counts_per_month_per_species = fetch(
  `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`
).then(r => r.json())
Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite:
objects_to_plot_species_month = {
  let objects = [];
  Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) {
    objects.push({
      month: (new Date(2018, (month_index - 1), 1)).toLocaleString(""en"", {month: ""long""}),
      observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]
    });
  })
  return objects;
}
(Note that in the above code we are creating a date object with our specific month in, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month.)
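A quick one-liner shows the idea:
(new Date(2018, 0, 1)).toLocaleString(""en"", {month: ""long""})
// => ""January"" (month 0 is January, month 11 is December)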
And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!
vegalite({
  data: {values: objects_to_plot_species_month},
  mark: ""bar"",
  encoding: {
    x: {field: ""month"", type: ""nominal"", sort: null},
    y: {field: ""observations"", type: ""quantitative""}
  },
  width: width * 0.9
})
Now we can see when is the best time of year to plan to go tidepooling in Pillar Point if we want to find a specific species of Nudibranch.
This tool is great for planning when to go rockpooling at Pillar Point, but what about if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs?
Well… we can create ourselves a dynamic guide, with a list of the species, their photo, name and how many times they have been observed in that month of the year!
Our select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month.
viewof selected_month = select({
  title: ""When do you want to see Nudibranchs?"",
  options: [
    { label: ""Whenever"", value: """" },
    { label: ""January"", value: ""1"" },
    { label: ""February"", value: ""2"" },
    { label: ""March"", value: ""3"" },
    { label: ""April"", value: ""4"" },
    { label: ""May"", value: ""5"" },
    { label: ""June"", value: ""6"" },
    { label: ""July"", value: ""7"" },
    { label: ""August"", value: ""8"" },
    { label: ""September"", value: ""9"" },
    { label: ""October"", value: ""10"" },
    { label: ""November"", value: ""11"" },
    { label: ""December"", value: ""12"" },
  ],
  value: """"
})
We then can use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.
all_species_data = fetch(
  `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`
).then(r => r.json())
You can render HTML directly in a notebook cell using Observable’s html tagged template literal:
// A sketch: the markup here is minimal, and default_photo.medium_url is an
// assumption based on the taxon objects the iNaturalist API returns
html`<p>If you go to Pillar Point ${
  {"""": """",
  ""1"": ""in January"",
  ""2"": ""in February"",
  ""3"": ""in March"",
  ""4"": ""in April"",
  ""5"": ""in May"",
  ""6"": ""in June"",
  ""7"": ""in July"",
  ""8"": ""in August"",
  ""9"": ""in September"",
  ""10"": ""in October"",
  ""11"": ""in November"",
  ""12"": ""in December"",
  }[selected_month]
} you might see…</p>
<ul>${all_species_data.results.map(s => `
  <li>
    <img src=""${s.taxon.default_photo.medium_url}"" alt="""">
    <strong>${s.taxon.name}</strong><br>
    Seen ${s.count} times
  </li>
`)}</ul>`
These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!
Play with it yourself in this Observable Notebook.
Conclusion
I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower.
Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.
Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.",2018,Natalie Downe,nataliedowne,2018-12-18T00:00:00+00:00,https://24ways.org/2018/observable-notebooks-and-inaturalist/,code
257,The (Switch)-Case for State Machines in User Interfaces,"You’re tasked with creating a login form. Email, password, submit button, done.
“This will be easy,” you think to yourself.
Login form by Selecto
You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems.
As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.
What if login fails? Where do those errors show up?
Should we show errors differently if the user forgot to enter their email, or password, or both?
Or should the submit button be disabled?
Should we validate the email field?
When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)
When should the errors disappear?
What do we show during the login process? Some loading spinner?
What if loading takes too long, or a server error occurs?
Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.
Modeling Behavior
Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information.
However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.
The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:
start - not submitted yet
loading - submitted and logging in
success - successfully logged in
error - login failed
Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity:
SUBMIT - pressing the submit button
RESOLVE - the server responds, indicating that login is successful
REJECT - the server responds, indicating that login failed
Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state and an event occurs, we can specify what the next state should be:
From the start state, when the SUBMIT event occurs, the app should be in the loading state.
From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.
If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.
From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.
Otherwise, if any other event occurs, don’t do anything and stay in the same state.
That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:
Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.
From Visuals to Code
Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.
With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:
function loginMachine(state, event) {
  switch (state) {
    case 'start':
      if (event === 'SUBMIT') {
        return 'loading';
      }
      break;
    case 'loading':
      if (event === 'RESOLVE') {
        return 'success';
      } else if (event === 'REJECT') {
        return 'error';
      }
      break;
    case 'success':
      // Accept no further events
      break;
    case 'error':
      if (event === 'SUBMIT') {
        return 'loading';
      }
      break;
    default:
      // This should never occur
      return undefined;
  }
  // No transition matched: stay in the current state
  return state;
}

console.log(loginMachine('start', 'SUBMIT'));
// => 'loading'
This is fine (I suppose) but personally, I find it much easier to use objects:
const loginMachine = {
  initial: ""start"",
  states: {
    start: {
      on: { SUBMIT: 'loading' }
    },
    loading: {
      on: {
        REJECT: 'error',
        RESOLVE: 'success'
      }
    },
    error: {
      on: {
        SUBMIT: 'loading'
      }
    },
    success: {}
  }
};
function transition(state, event) {
  return (
    loginMachine
    .states[state] // Look up the state
    .on || {}      // success defines no transitions, so fall back to an empty object
  )[event]         // Look up the next state based on the event
  || state;        // If not found, return the current state
}

console.log(transition('start', 'SUBMIT'));
// => 'loading'
As you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:
A Common Language Between Designers and Developers
Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can:
identify all possible states, and potentially missing states
describe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)
eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition)
add features with full confidence of knowing what other states it might affect
simplify redundant states or complex user flows
create test paths for almost every possible user flow, and easily identify edge cases
collaborate better by understanding the entire application model equally.
Not a New Idea
I’m not the first to suggest that state machines can help bridge the gap between design and development.
Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines
User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.
In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.
More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart - an extended hierarchical state machine model - is born.
State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:
Visualized modeling
Precise diagrams
Automatic code generation
Comprehensive test coverage
Accommodation of late-breaking requirements changes
Moving Forward
It’s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:
The World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications
The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling
I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.
I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).
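As a taste, here is a minimal sketch of the same login machine in XState (assuming the 2018-era API; check the library’s documentation for current specifics):
import { Machine } from 'xstate';

const loginMachine = Machine({
  id: 'login',
  initial: 'start',
  states: {
    start: { on: { SUBMIT: 'loading' } },
    loading: { on: { RESOLVE: 'success', REJECT: 'error' } },
    error: { on: { SUBMIT: 'loading' } },
    success: {}
  }
});

console.log(loginMachine.transition('start', 'SUBMIT').value);
// => 'loading'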
I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.",2018,David Khourshid,davidkhourshid,2018-12-12T00:00:00+00:00,https://24ways.org/2018/state-machines-in-user-interfaces/,code
258,Mistletoe Offline,"It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.
I am, of course, talking about Murphy, and the golden rule he gave unto us:
Anything that can go wrong will go wrong.
So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit?
But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong.
I guess there’s nothing we can do about those particular situations, right?
Wrong!
A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.
If you’ve got a custom 404 page, why not make a custom offline page too?
Get your server in order
Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.
If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.
Make an offline page
Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference.
When you’re done, publish the offline page at a suitably imaginative URL, like, say /offline.html.
Pre-cache your offline page
Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:
addEventListener('install', installEvent => {
  // put your instructions here.
}); // end addEventListener
In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      JohnnyCache.addAll([
        '/offline.html'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener
I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open('Johnny')
    .then( JohnnyCache => {
      JohnnyCache.addAll([
        '/offline.html',
        '/path/to/stylesheet.css',
        '/path/to/javascript.js',
        '/path/to/image.jpg'
      ]); // end addAll
    }) // end open.then
  ); // end waitUntil
}); // end addEventListener
Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.
Intercept requests
The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:
addEventListener('fetch', fetchEvent => {
  // What happens next is up to you!
}); // end addEventListener
Let’s write a fairly conservative script with the following logic:
Whenever a file is requested,
First, try to fetch it from the network,
But if that doesn’t work, try to find it in the cache,
But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead.
Here’s how that translates into JavaScript:
// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    // First, try to fetch it from the network
    fetch(request)
    .then( responseFromFetch => {
      return responseFromFetch;
    }) // end fetch.then
    // But if that doesn't work
    .catch( fetchError => {
      // try to find it in the cache
      return caches.match(request)
      .then( responseFromCache => {
        if (responseFromCache) {
          return responseFromCache;
        // But if that doesn't work
        } else {
          // and it's a request for a web page
          if (request.headers.get('Accept').includes('text/html')) {
            // show the custom offline page instead
            return caches.match('/offline.html');
          } // end if
        } // end if/else
      }); // end match.then
    }) // end fetch.catch
  ); // end respondWith
}); // end addEventListener
I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas.
Hook up your service worker script
You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in to every page on your site, or add this in a script element at the end of every page’s HTML:
if (navigator.serviceWorker) {
navigator.serviceWorker.register('/serviceworker.js');
}
That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.
You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted.
Go further!
Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!
This is just the beginning. You can do more with service workers.
What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version.
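Here’s a rough sketch of that first idea, building on the fetch handler above. The cache name pages is my own invention, and the clone() step is there because a response can only be read once:
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
fetchEvent.respondWith(
// Try the network first
fetch(request)
.then( responseFromFetch => {
// Stash a copy of the response for later
const copy = responseFromFetch.clone();
caches.open('pages')
.then( pagesCache => {
pagesCache.put(request, copy);
}); // end open.then
return responseFromFetch;
}) // end fetch.then
// Otherwise, look in the cache
.catch( fetchError => {
return caches.match(request);
}) // end fetch.catch
); // end respondWith
}); // end addEventListener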
Or, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
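A sketch of that cache-first approach might look something like this; the cache name and the Accept-header check for images are my own assumptions, not gospel:
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
// Only apply this strategy to image requests
if (request.headers.get('Accept').includes('image')) {
fetchEvent.respondWith(
caches.match(request)
.then( responseFromCache => {
// Fetch a fresh copy in the background either way
const fetchPromise = fetch(request)
.then( responseFromFetch => {
const copy = responseFromFetch.clone();
caches.open('images')
.then( imagesCache => {
imagesCache.put(request, copy);
}); // end open.then
return responseFromFetch;
}); // end fetch.then
// Prefer the cached copy; fall back to the network
return responseFromCache || fetchPromise;
}) // end match.then
); // end respondWith
} // end if
}); // end addEventListener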
So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript.
Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action.",2018,Jeremy Keith,jeremykeith,2018-12-04T00:00:00+00:00,https://24ways.org/2018/mistletoe-offline/,code
260,The Art of Mathematics: A Mandala Maker Tutorial,"In front-end development, there’s often a great deal of focus on tools that aim to make our work more efficient. But what if you’re new to web development? When you’re just starting out, the amount of new material can be overwhelming, particularly if you don’t have a solid background in Computer Science. But the truth is, once you’ve learned a little bit of JavaScript, you can already make some pretty impressive things.
A couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days:
Mandala Maker user interface
The coolest part about it is the fact that it’s a tool: anyone can use it to create something original and brand new.
In this tutorial, we’ll build a smaller version of this app – a symmetrical drawing tool in ES5 JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we’re done, you’re on your own and can tweak it as you please. Be creative!
Preparations: a blank canvas
The first thing you’ll need for this project is a designated drawing space. We’ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like).
Files
Create 3 files: index.html, styles.css, main.js. Don’t forget to include your JS and CSS files in your HTML.
<canvas width="600" height="600">
Your browser doesn't support canvas.
</canvas>
I’ll ask you to update your HTML file at a later point, but the CSS file we’ll start with will stay the same throughout the project. This is the full CSS we are going to use:
body {
background-color: #ccc;
text-align: center;
}
canvas {
touch-action: none;
background-color: #fff;
}
button {
font-size: 110%;
}
Next steps
We are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts:
Building a simple drawing app with one line and one color
Adding a Clear button and a color picker
Adding more functionality: 2 line drawing (add the first reflection)
Adding more functionality: 8 line drawing (add 6 more reflections!)
Interactive demos
This tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet.
Part 1: A simple drawing app
Let’s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables:
var canvas, context, w, h,
prevX = 0, currX = 0, prevY = 0, currY = 0,
draw = false;
The variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) over and over again as we move the pointer across the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving, and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true.
1. init
Responsible for setting up the canvas, this listens for pointer events and the location of their coordinates, and sets everything in motion by calling other functions, which in turn handle touch and movement events.
function init() {
canvas = document.querySelector("canvas");
context = canvas.getContext("2d");
w = canvas.width;
h = canvas.height;
canvas.onpointermove = handlePointerMove;
canvas.onpointerdown = handlePointerDown;
canvas.onpointerup = stopDrawing;
canvas.onpointerout = stopDrawing;
}
2. drawLine
This is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial.
lineWidth and lineCap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY).
function drawLine() {
var a = prevX,
b = prevY,
c = currX,
d = currY;
context.lineWidth = 4;
context.lineCap = "round";
context.beginPath();
context.moveTo(a, b);
context.lineTo(c, d);
context.stroke();
context.closePath();
}
3. stopDrawing
This is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout).
function stopDrawing() {
draw = false;
}
4. recordPointerLocation
This tracks the pointer’s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.
function recordPointerLocation(e) {
prevX = currX;
prevY = currY;
currX = e.clientX - canvas.offsetLeft;
currY = e.clientY - canvas.offsetTop;
}
5. handlePointerMove
This is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it.
function handlePointerMove(e) {
if (draw) {
recordPointerLocation(e);
drawLine();
}
}
6. handlePointerDown
This is set by init to run when the pointer is down (finger is on the touchscreen or mouse is clicked). If it is, it calls recordPointerLocation to get the path and sets draw to true. That’s because we only want movement events from handlePointerMove to cause drawing if the pointer is down.
function handlePointerDown(e) {
recordPointerLocation(e);
draw = true;
}
Finally, we have a working drawing app. But that’s just the beginning!
See the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen.
Part 2: Add a Clear button and a color picker
Now we’ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear.
<canvas width="600" height="600">
Your browser doesn't support canvas.
</canvas>
<div class="menu">
<input type="color" class="color">
<button class="clear">Clear</button>
</div>
Color picker
This is our new color picker function. It targets the input element by its class and gets its value.
function getColor() {
return document.querySelector(".color").value;
}
Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We’ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.
function drawLine() {
//...code...
context.strokeStyle = getColor();
context.lineWidth = 4;
context.lineCap = "round";
//...code...
}
Clear button
This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.
function clearCanvas() {
if (confirm("Want to clear?")) {
context.clearRect(0, 0, w, h);
}
}
The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner.
If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.
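As a quick sketch (the function name is hypothetical), clearing just that left half would look like this:
// Hypothetical variation: erase only the left half of the canvas
function clearLeftHalf() {
context.clearRect(0, 0, w/2, h);
}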
Let’s add this event handler to init:
function init() {
//...code...
canvas.onpointermove = handlePointerMove;
canvas.onpointerdown = handlePointerDown;
canvas.onpointerup = stopDrawing;
canvas.onpointerout = stopDrawing;
document.querySelector(".clear").onclick = clearCanvas;
}
See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.
Part 3: Draw with 2 lines
It’s time to make a line appear where no pointer has gone before. A ghost line!
For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it’s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn’t matter which one we choose. Let’s go with the x-axis.
Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!).
Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.
A sketch illustrating the mathematics of reflecting a point.
What happens to the x coordinates?
The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them “the x coordinates”. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c.
What happens to the y coordinates?
What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height.
This is the new code for the app’s variables and the two lines: the one that fills the pointer’s path and the one mirroring it across the x-axis.
function drawLine() {
var a = prevX, a_ = a,
b = prevY, b_ = h-b,
c = currX, c_ = c,
d = currY, d_ = h-d;
//... code ...
// Draw line #1, at the pointer's location
context.moveTo(a, b);
context.lineTo(c, d);
// Draw line #2, mirroring the line #1
context.moveTo(a_, b_);
context.lineTo(c_, d_);
//... code ...
}
In case this was too abstract for you, let’s look at some actual numbers to see how this works.
Let’s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.
So b' = 10 - 2 = 8 and d' = 10 - 3 = 7.
We use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That’s it, really. This is how a single point is reflected, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like points.
If you are still confused, I don’t blame you.
Here is the result. Draw something and see what happens.
See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.
Part 4: Draw with 8 lines
I have made yet another confusing sketch, with points C and D, so you understand what we’re trying to do. Later on we’ll look at points E, F, G and H as well. The circled point is the one we’re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas.
A sketch illustrating points C and D.
This is the part where the math gets a bit mathier, as our drawLine function evolves further. We’ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let’s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).
function drawLine() {
//... code ...
// Reassign values
a_ = w-a; b_ = b;
c_ = w-c; d_ = d;
// Draw the 3rd line
context.moveTo(a_, b_);
context.lineTo(c_, d_);
// Reassign values
a_ = w-a; b_ = h-b;
c_ = w-c; d_ = h-d;
// Draw the 4th line
context.moveTo(a_, b_);
context.lineTo(c_, d_);
//... code ...
}
What is happening?
You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That’s because we want the symmetry to hold for a rectangular canvas as well, and this way it will.
Also, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It’s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous).
What happens to the x coordinates?
As you recall, our x coordinates are a (prevX) and c (currX).
For the third line we are adding, a' = w - a and c' = w - c, which means…
For the fourth line, the same thing happens to our x coordinates a and c.
What happens to the y coordinates?
As you recall, our y coordinates are b (prevY) and d (currY).
For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis.
For the fourth line, b' = h - b and d' = h - d, which we’ve seen before: that’s a reflection across the x-axis.
We have four more lines, or locations, to define. Note: the part of the code that’s responsible for drawing a micro-line between the newly calculated coordinates is always the same:
context.moveTo(a_, b_);
context.lineTo(c_, d_);
We can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments.
Once again, we need some concrete examples to see where we’re going, so here’s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code.
A sketch illustrating points E and F.
This is the code for E and F:
// Reassign for 5
a_ = w/2+h/2-b; b_ = w/2+h/2-a;
c_ = w/2+h/2-d; d_ = w/2+h/2-c;
// Reassign for 6
a_ = w/2+h/2-b; b_ = h/2-w/2+a;
c_ = w/2+h/2-d; d_ = h/2-w/2+c;
Their x coordinates are identical and their y coordinates are reversed to one another.
This one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).
A sketch illustrating points G and H.
This is the code:
// Reassign for 7
a_ = w/2-h/2+b; b_ = w/2+h/2-a;
c_ = w/2-h/2+d; d_ = w/2+h/2-c;
// Reassign for 8
a_ = w/2-h/2+b; b_ = h/2-w/2+a;
c_ = w/2-h/2+d; d_ = h/2-w/2+c;
//...code...
}
Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won’t go into the full details, since this has been a long enough journey as it is, and I think we’ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.
I hope you had fun learning! This is our final app:
See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.",2018,Hagar Shilo,hagarshilo,2018-12-02T00:00:00+00:00,https://24ways.org/2018/the-art-of-mathematics/,code
263,Securing Your Site like It’s 1999,"Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.
As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.
What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.
Bad input validation: Trusting anything the user sends you
Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.
One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.
One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile.
Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened?
There aren’t any official rules for developing software on the web. But if there were, my golden rule would be:
Never trust user input. Ever.
Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.
Make sure you validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.
Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed to access. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner.
Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.
SQL injection: Allowing the user to run their own database queries
A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.
One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.
It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?
' OR '1'='1
SQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Alice'
AND PASSWORD='hunter2'
This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead!
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Admin'
AND PASSWORD='' OR '1'='1'
Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!
So how can you protect yourself from SQL injection?
Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.JS has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize.
Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!
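To make that concrete, here’s a rough sketch using the knex package mentioned above (the variable names and the configured knex instance are my assumptions, not code from a real forum):
// The submitted values are passed as bound parameters;
// knex builds the SQL safely instead of concatenating strings
knex('users')
.where({
username: submittedUsername,
password: submittedPassword
})
.count('id as matches')
.then( rows => {
const isValidLogin = rows[0].matches > 0;
// grant or deny access based on isValidLogin
});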
Cross site request forgery: Getting other users to do your dirty work for you
Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.
Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky…
Let’s just say there are older and fouler things than Rick Astley in the dark places of the web.
What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it, along these lines:
<img src="http://www.netflix.com/AddToQueue?movieid=12345">
Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge!
This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from.
The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set.
Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies.
You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.
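For example, a session cookie set like this (the name and value are illustrative) won’t be attached to cross-site requests:
Set-Cookie: sessionid=38afes7a8; SameSite=Strict; Secure; HttpOnly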
Cross site scripting: Someone else’s code running on your website
In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.
Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then…
Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <a> and <div> tags into his headline, but <script> was filtered. MySpace wasn’t about to let someone else run their own code on MySpace.
Intrigued, Samy set about finding out exactly what he could do with <a> and <div> tags. He found that you could add style properties to <div> tags to style them with CSS.
<div style="background:url('javascript:alert(1)')">
This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div> tags.
Samy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript:
<div style="background:url('java
script:alert(1)')">
The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.
alert(document.body.innerHTML)
Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.
alert(eval('document.body.inne' + 'rHTML'))
This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest’s onreadystatechange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages.
One final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain interacting with sites hosted on another domain.
if (location.hostname == 'profile.myspace.com') {
document.location = 'http://www.myspace.com'
+ location.pathname + location.search
}
Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited.
Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to do 90 days of community service.
This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.
So how can you help protect yourself from cross-site scripting?
Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.
You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.
You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.
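For example, a policy like this one (the CDN hostname is a placeholder) limits scripts to your own domain and a single trusted host:
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com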
For content not under your control, consider setting up sub-resource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.
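In the markup, that looks something like this; note the integrity value below is a stand-in, not a real digest:
<script src="https://cdn.example.com/library.js" integrity="sha384-(base64-encoded-hash-of-the-file)" crossorigin="anonymous"></script>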
npm audit: Protecting yourself from code you don’t own
JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit.
npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm:
npm install npm -g
npm audit
When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.
If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!
Securing your site like it’s 2019
The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.
However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.
As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years.",2018,Katie Fenn,katiefenn,2018-12-01T00:00:00+00:00,https://24ways.org/2018/securing-your-site-like-its-1999/,code
264,Dynamic Social Sharing Images,"Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days!
With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content.
One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures from anything from 35% to 150% improvement from just having image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.
So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.
Adding sharing images
The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.
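At its simplest, that meta tag looks something like this (the image URL is a placeholder):
<meta property="og:image" content="https://example.com/images/my-sharing-image.png">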
There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.
This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation.
This was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article.
Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.
One of the hand-made sharing images from 2017
Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.
Hatching a new plan
One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.
The more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!
In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS?
Breaking it down, I figured the steps needed would be something like:
Create the CSS to lay out a component that could be turned into an image
Add a new field to articles in the CMS to hold a handpicked quote
Build a new article template in the CMS to output the author name and quote dynamically for any article
… um … screenshot?
I thought I’d get cracking and see if I could figure out the final steps later.
Building the page
The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.
Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser.
With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup.
I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process.
I added a new field to the article template to capture the quote.
With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.
Our automatically generated layout direct from the CMS
It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot?
Enter Puppeteer
Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.
Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.
Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example.
First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line:
npm i puppeteer
Then save a new file as example.js with this code:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
await browser.close();
})();
and then run it using Node:
node example.js
This will output an image file example.png to disk, which contains a screenshot of, in this case https://example.com. The logic of the code is reasonably easy to follow:
Launch a browser
Open up a new page
Goto a URL
Take a screenshot
Close the browser
The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested.
That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?
Our content is up in the corner of the image with lots of empty space.
That’s not great. It’s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst and I was able to find the page.setViewport() method pretty easily.
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');
await page.setViewport({
width: 600,
height: 315
});
await page.screenshot({path: 'example.png'});
await browser.close();
})();
This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.
await page.setViewport({
width: 600,
height: 315,
deviceScaleFactor: 2
});
Perfect! We now have a programmatic way of turning a URL into an image.
Improving the script
Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image.
Our goal is to call the script like this:
node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/
And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.
// Get the URL and the slug segment from it
const url = process.argv[2];
const segments = url.split('/');
// Get the second-to-last segment (the slug)
const slug = segments[segments.length-2];
We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url + 'sharing');
await page.setViewport({
width: 600,
height: 315,
deviceScaleFactor: 2
});
await page.screenshot({path: slug + '.png'});
await browser.close();
})();
Once you’re generating the image with Node, there’s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.
You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little.
const imagemin = require('imagemin');
const imageminPngquant = require('imagemin-pngquant');
await imagemin([slug + '.png'], 'build', {
plugins: [
imageminPngquant({quality: '75-90'})
]
});
You can see the completed example as a gist.
Integrating it with your CMS
So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process.
For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further.
We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!
By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images.
I hope you’ll give it a try!",2018,Drew McLellan,drewmclellan,2018-12-24T00:00:00+00:00,https://24ways.org/2018/dynamic-social-sharing-images/,code
271,Creating Custom Font Stacks with Unicode-Range,"Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We’ve all used it — either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.
If you’re like me, however, you’ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.
One such property — the unicode-range descriptor — sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.
Unicode-range
The unicode-range descriptor is designed to help when using fonts that don’t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers.
@font-face {
font-family: BBCBengali;
src: url(fonts/BBCBengali.ttf) format("opentype");
unicode-range: U+00-FF;
}
In this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to ÿ at U+FF – the extent of the Basic Latin and Latin-1 Supplement character ranges.
By adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.
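For example, a pair of rules along these lines (the file names are placeholders, and U+0980-09FF is the Bengali block) would serve one font for Basic Latin characters and another for Bengali:
@font-face {
font-family: BBCBengali;
src: url(fonts/BBCLatin.ttf) format("opentype");
unicode-range: U+00-FF;
}
@font-face {
font-family: BBCBengali;
src: url(fonts/BBCBengali.ttf) format("opentype");
unicode-range: U+0980-09FF;
}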
When I say that it’s possible to specify the range of characters the font covers, that’s true, but what you’re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.
The best available ampersand
A few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied:
<span class="amp">&amp;</span>
A CSS rule can then be written to select the <span> and apply a different font:
span.amp {
font-family: Baskerville, Palatino, "Book Antiqua", serif;
}
That’s a perfectly serviceable technique, but the drawbacks are clear — you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn’t always possible when working with a CMS.
Perhaps we could do this with unicode-range.
A better best available ampersand
The Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:
@font-face {
font-family: 'Ampersand';
src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
unicode-range: U+26;
}
What we’ve done here is specify a new family called Ampersand and created a font stack for it with the user’s locally installed copies of Baskerville, Palatino or Book Antiqua. We’ve then limited it to a single character range — the ampersand. Of course, those don’t need to be local fonts — they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life.
We can then use that new family in a regular font stack.
h1 {
font-family: Ampersand, Arial, sans-serif;
}
With this in place, any h1 elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn’t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.
You didn’t think it was that easy, did you?
Oh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.
If you’re familiar with how CSS works when it comes to unsupported properties, you’ll know that if a browser encounters a property it doesn’t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius — if the browser can’t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.
Less perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters — the whole range. If you’re using a fancy font for flamboyant ampersands, you probably don’t want that applied to all your text if unicode-range isn’t supported. That would be bad. Really bad.
Ensuring good fallbacks
As ever, the trick is to make sure that there’s a sensible fallback in place if a browser doesn’t have support for whatever technology you’re trying to use. This is where being a super nerd about understanding the spec you’re working with really pays off.
We can make use of the rules of the CSS cascade to make sure that if unicode-range isn’t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren’t implemented.
@font-face {
font-family: 'Ampersand';
src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
unicode-range: U+26;
}
@font-face {
font-family: 'Ampersand';
src: local('Arial');
}
In theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don’t have support, the second rule should just supersede the first, setting the font to Arial.
Unfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That’s both unexpected and unfortunate. Bad Firefox. On your rug.
If that doesn’t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That’s really what we’re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we’re never going to use in our page? Then it wouldn’t affect the outcome for browsers that do support ranges.
@font-face {
font-family: 'Ampersand';
src: local('Baskerville'), local('Palatino'), local('Book Antiqua');
unicode-range: U+26;
}
@font-face {
/* Ampersand fallback font */
font-family: 'Ampersand';
src: local('Arial');
unicode-range: U+270C;
}
By specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we’re not using in any of our pages — U+270C.
So we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!
That obscure character, my friends, is what Unicode defines as the VICTORY HAND.
✌
So, how can we use this?
Ampersands are a neat trick, and it works well in browsers that support ranges, but that’s not really the point of all this. Styling ampersands is fun, but they’re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.
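For example, since the digits zero to nine sit at U+30 to U+39, you could give numerals their own stack in just the same way (Georgia here is just an arbitrary local choice):
@font-face {
font-family: 'Numerals';
src: local('Georgia');
unicode-range: U+30-39;
}
h1 {
font-family: Numerals, Arial, sans-serif;
}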
How do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.
Of course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you’re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you’ll have to use your own best judgement based on your needs and audience.
One thing to keep in mind is that if you’re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn’t be downloaded if none of the characters within the Unicode range are present in a given page.
As ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.",2011,Drew McLellan,drewmclellan,2011-12-01T00:00:00+00:00,https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/,code
276,Your jQuery: Now With 67% Less Suck,"Fun fact: more websites are now using jQuery than Flash.
jQuery is an amazing tool that’s made JavaScript accessible to developers and designers of all levels of experience. However, as Spiderman taught us, “with great power comes great responsibility.” The unfortunate downside to jQuery is that while it makes it easy to write JavaScript, it makes it easy to write really really f*ing bad JavaScript. Scripts that slow down page load, unresponsive user interfaces, and spaghetti code knotted so deep that it should come with a bottle of whiskey for the next sucker developer that has to work on it.
This becomes more important for those of us who have yet to move into the magical fairy wonderland where none of our clients or users view our pages in Internet Explorer. The IE JavaScript engine moves at the speed of an advancing glacier compared to more modern browsers, so optimizing our code for performance takes on an even higher level of urgency.
Thankfully, there are a few very simple things anyone can add into their jQuery workflow that can clear up a lot of basic problems. When undertaking code reviews, three of the areas where I consistently see the biggest problems are: inefficient selectors; poor event delegation; and clunky DOM manipulation. We’ll tackle all three of these and hopefully you’ll walk away with some new jQuery batarangs to toss around in your next project.
Selector optimization
Selector speed: fast or slow?
Saying that the power behind jQuery comes from its ability to select DOM elements and act on them is like saying that Photoshop is a really good tool for selecting pixels on screen and making them change color – it’s a bit of a gross oversimplification, but the fact remains that jQuery gives us a ton of ways to choose which element or elements in a page we want to work with. However, a surprising number of web developers are unaware that all selectors are not created equal; in fact, it’s incredible just how drastic the performance difference can be between two selectors that, at first glance, appear nearly identical. For instance, consider these two ways of selecting all paragraph tags inside a div with an ID.
$(""#id p"");
$(""#id"").find(""p"");
Would it surprise you to learn that the second way can be more than twice as fast as the first? Knowing which selectors outperform others (and why) is a pretty key building block in making sure your code runs well and doesn’t frustrate your users waiting for things to happen.
There are many different ways to select elements using jQuery, but the most common ways can be basically broken down into five different methods. In order, roughly, from fastest to slowest, these are:
$(""#id"");
This is without a doubt the fastest selector jQuery provides because it maps directly to the native document.getElementById() JavaScript method. If possible, the selectors listed below should be prefaced with an ID selector in conjunction with jQuery’s .find() method to limit the scope of the page that has to be searched (as in the $("#id").find("p") example shown above).
$(""p"");, $(""input"");, $(""form""); and so on
Selecting elements by tag name is also fast, since it maps directly to the native document.getElementsByTagName() method.
$("".class"");
Selecting by class name is a little trickier. While still performing very well in modern browsers, it can cause some pretty significant slowdowns in IE8 and below. Why? IE9 was the first IE version to support the native document.getElementsByClassName() JavaScript method. Older browsers have to resort to using much slower DOM-scraping methods that can really impact performance.
$(""[attribute=value]"");
There is no native JavaScript method for this selector to use, so the only way that jQuery can perform the search is by crawling the entire DOM looking for matches. Modern browsers that support the querySelectorAll() method will perform better in certain cases (Opera, especially, runs these searches much faster than any other browser) but, generally speaking, this type of selector is Slowey McSlowersons.
$("":hidden"");
Like attribute selectors, there is no native JavaScript method for this one to use. Pseudo-selectors can be painfully slow since the selector has to be run against every element in your search space. Again, modern browsers with querySelectorAll() will perform slightly better here, but try to avoid these if at all possible. If you must use one, try to limit the search space to a specific portion of the page: $("#list").find(":hidden");
But, hey, proof is in the performance testing, right? It just so happens that said proof is sitting right here. Be sure to notice the class selector numbers beside IE7 and 8 compared to other browsers and then wonder how the people on the IE team at Microsoft manage to sleep at night. Yikes.
Chaining
Almost all jQuery methods return a jQuery object. This means that when a method is run, its results are returned and you can continue executing more methods on them. Rather than writing out the same selector multiple times over, just making a selection once allows multiple actions to be run on it.
Without chaining
$(""#object"").addClass(""active"");
$(""#object"").css(""color"",""#f0f"");
$(""#object"").height(300);
With chaining
$(""#object"").addClass(""active"").css(""color"", ""#f0f"").height(300);
This has the dual effect of making your code shorter and faster. Chained methods will be slightly faster than multiple methods made on a cached selector, and both ways will be much faster than multiple methods made on non-cached selectors. Wait… “cached selector”? What is this new devilry?
Caching
Another easy way to speed up your code that seems to be a mystery to developers is the idea of caching your selectors. Think of how many times you end up writing the same selector over and over again in any project. Every $(".element") selector has to search the entire DOM each time, regardless of whether or not that selector had been previously run. Running the selection once and then storing the results in a variable means that the DOM only has to be searched once. Once the results of a selector have been cached, you can do anything with them.
First, run your search (here we’re selecting all of the li elements inside ul#blocks):
var blocks = $(""#blocks"").find(""li"");
Now, you can use the blocks variable wherever you want without having to search the DOM every time.
$(""#hideBlocks"").click(function() {
blocks.fadeOut();
});
$(""#showBlocks"").click(function() {
blocks.fadeIn();
});
My advice? Any selector that gets run more than once should be cached. This jsperf test shows just how much faster a cached selector runs compared to a non-cached one (and even throws some chaining love in to boot).
Event delegation
Event listeners cost memory. In complex websites and apps it’s not uncommon to have a lot of event listeners floating around, and thankfully jQuery provides some really easy methods for handling event listeners efficiently through delegation.
In a bit of an extreme example, imagine a situation where a 10×10 cell table needs to have an event listener on each cell; let’s say that clicking on a cell adds or removes a class that defines the cell’s background color. A typical way that this might be written (and something I’ve often seen during code reviews) is like so:
$('table').find('td').click(function() {
$(this).toggleClass('active');
});
jQuery 1.7 has provided us with a new event listener method, .on(). It acts as a utility that wraps all of jQuery’s previous event listeners into one convenient method, and the way you write it determines how it behaves. To rewrite the above .click() example using .on(), we’d simply do the following:
$('table').find('td').on('click',function() {
$(this).toggleClass('active');
});
Simple enough, right? Sure, but the problem here is that we’re still binding one hundred event listeners to our page, one to each individual table cell. A far better way to do things is to create one event listener on the table itself that listens for events inside it. Since the majority of events bubble up the DOM tree, we can bind a single event listener to one element (in this case, the table) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above:
$('table').on('click','td',function() {
$(this).toggleClass('active');
});
All we’ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we’ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you’d be completely right. The difference between the two examples above is staggering.
(Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you’ve never used it before, it’s worth checking the API docs for that page to see how it works.)
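For reference, a rough sketch of the same delegation using .delegate() might look like this; note that the descendant selector comes first in its argument list:
$('table').delegate('td', 'click', function() {
$(this).toggleClass('active');
});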
DOM manipulation
jQuery makes it very easy to manipulate the DOM. It’s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content which can be extremely costly. This is no more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop.
In this case, let’s say we’ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you’ll see code like this to pull this off:
var arr = [reallyLongArrayOfImageURLs];
$.each(arr, function(count, item) {
var newImg = '<li><img src="' + item + '" /></li>';
$('#imgList').append(newImg);
});
There are a couple of problems with this. For one (which you should have already noticed if you’ve read the earlier part of this article), we’re making the $("#imgList") selection once for each iteration of our loop. The other problem here is that each time the loop iterates, it’s adding a new li to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded ‘A script is causing this page to run slowly’ warning.
var arr = [reallyLongArrayOfImageURLs],
tmp = '';
$.each(arr, function(count, item) {
tmp += '<li><img src="' + item + '" /></li>';
});
$('#imgList').append(tmp);
All we’ve done here is create a tmp variable that each li is added to as it’s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our ul all in one go. Browsers work much faster when working with objects in memory rather than on screen, so this is a much faster, more CPU-cycle-friendly method of building a list.
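As an aside, a variant you may also come across (not from the example above, just a common pattern) collects the strings in an array and joins them once, which sidesteps repeated string concatenation in older JavaScript engines:
var arr = [reallyLongArrayOfImageURLs],
parts = [];
$.each(arr, function(count, item) {
// Build each list item in memory
parts.push('<li><img src="' + item + '" /></li>');
});
// One join, one append, one reflow
$('#imgList').append(parts.join(''));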
Wrapping up
These are far from being the only ways to make your jQuery code run better, but they are among the simplest ones to implement. Though each individual change may only make a few milliseconds of difference, it doesn’t take long for those milliseconds to add up. Studies have shown that the human eye can discern delays of as little as 100ms, so simply making a few changes sprinkled throughout your code can very easily have a noticeable effect on how well your website or app performs. Do you have other jQuery optimization tips to share? Leave them in the comments and help make us all better.
Now go forth and make awesome!",2011,Scott Kosman,scottkosman,2011-12-13T00:00:00+00:00,https://24ways.org/2011/your-jquery-now-with-less-suck/,code
283,"CSS3 Patterns, Explained","Many of you have probably seen my CSS3 patterns gallery. It became very popular throughout the year and it showed many web developers how powerful CSS3 gradients really are. But how many really understand how these patterns are created? The biggest benefit of CSS-generated backgrounds is that they can be modified directly within the style sheet. This benefit is void if we are just copying and pasting CSS code we don’t understand. We may as well use a data URI instead.
Important note
In all the examples that follow, I’ll be using gradients without a vendor prefix, for readability and brevity. However, you should keep in mind that in reality you need to use all the vendor prefixes (-moz-, -ms-, -o-, -webkit-) as no browser currently implements them without a prefix. Alternatively, you could use -prefix-free and have the current vendor prefix prepended at runtime, only when needed.
The syntax described here is the one that browsers currently implement. The specification has since changed, but no browser implements the changes yet. If you are interested in what is coming, I suggest you take a look at the dev version of the spec.
If you are not yet familiar with CSS gradients, you can read these excellent tutorials by John Allsopp and return here later, as in the rest of the article I assume you already know the CSS gradient basics:
CSS3 Linear Gradients
CSS3 Radial Gradients
The main idea
I’m sure most of you can imagine the background this code generates:
background: linear-gradient(left, white 20%, #8b0 80%);
It’s a simple gradient from one color to another that looks like this:
See this example live
As you probably know, in this case the first 20% of the container’s width is solid white and the last 20% is solid green. The other 60% is a smooth gradient between these colors. Let’s try moving these color stops closer to each other:
background: linear-gradient(left, white 30%, #8b0 70%);
See this example live
background: linear-gradient(left, white 40%, #8b0 60%);
See this example live
background: linear-gradient(left, white 50%, #8b0 50%);
See this example live
Notice how the gradient keeps shrinking and the solid color areas expanding, until there is no gradient any more in the last example. We can even adjust the position of these two color stops to control where each color abruptly changes into another:
background: linear-gradient(left, white 30%, #8b0 30%);
See this example live
background: linear-gradient(left, white 90%, #8b0 90%);
See this example live
What you need to take away from these examples is that when two color stops are at the same position, there is no gradient, only solid colors. Even without going any further, this trick is useful for a number of different use cases like faux columns or the effect I wanted to achieve in my homepage or the -prefix-free page where the background is only shown on one side and hidden on the other.
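For instance, a faux column could be as simple as this sketch (the width and colours are placeholders); the hard stop paints a permanent sidebar tint down the full height of the element, no image required:
.page {
background: linear-gradient(left, #eee 25%, white 25%);
}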
Combining with background-size
We can do wonders, however, if we combine this with the CSS3 background-size property:
background: linear-gradient(left, white 50%, #8b0 50%);
background-size: 100px 100px;
See this example live
And there it is. We just created the simplest of patterns: (vertical) stripes. We can remove the first parameter (left) or replace it with top and we’ll get horizontal stripes. However, let’s face it: Horizontal and vertical stripes are kinda boring. Most stripey backgrounds we see on the web are diagonal. So, let’s try doing that.
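(For reference, the horizontal variant just mentioned looks like this:)
background: linear-gradient(top, white 50%, #8b0 50%);
background-size: 100px 100px;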
Our first attempt would be to change the angle of the gradient to something like 45deg. However, this results in an ugly pattern like this:
See this example live
Before reading on, think for a second: why didn’t this produce the desired result? Can you figure it out?
The reason is that the gradient angle rotates the gradient inside each tile, not the tiled background as a whole. However, didn’t we have the same problem the first time we tried to create diagonal stripes with an image? And then we learned that every stripe has to be included twice, like so:
So, let’s try to create that effect with CSS gradients. It’s essentially what we tried before, but with more color stops:
background: linear-gradient(45deg, white 25%,
#8b0 25%, #8b0 50%,
white 50%, white 75%,
#8b0 75%);
background-size:100px 100px;
See this example live
And there we have our stripes! An easy way to remember the order of the percentages and colors is that you always have two of the same in succession, except the first and last color.
Note: Firefox for Mac also needs an additional 100% color stop at the end of any pattern with more than two stops, like so: ..., white 75%, #8b0 75%, #8b0 100%). The bug was reported in February 2011 and you can vote for it and track its progress at Bugzilla.
Unfortunately, this is essentially a hack and we will realize that if we try to change the gradient angle to 60deg:
See this example live
Not that maintainable after all, eh? Luckily, CSS3 offers us another way of declaring such backgrounds, which not only helps this case but also results in much more concise code:
background: repeating-linear-gradient(60deg, white, white 35px, #8b0 35px, #8b0 70px);
See this example live
In this case, however, the size has to be declared in the color stop positions and not through background-size, since the gradient is supposed to cover the entire container. You might notice that the declared size is different from the one specified the previous way. This is because the size of the stripes is measured differently: in the first example we specify the dimensions of the tile itself; in the second, the width of the stripes (35px), which is measured diagonally.
Multiple backgrounds
Using only one gradient you can create stripes and that’s about it. There are a few more patterns you can create with just one gradient (linear or radial) but they are more or less boring and ugly. Almost every pattern in my gallery contains a number of different backgrounds. For example, let’s create a polka dot pattern:
background: radial-gradient(circle, white 10%, transparent 10%),
radial-gradient(circle, white 10%, black 10%) 50px 50px;
background-size:100px 100px;
See this example live
Notice that the two gradients are almost the same image, but positioned differently to create the polka dot effect. The only difference between them is that the first (topmost) gradient has transparent instead of black. If it didn’t have transparent regions, it would effectively be the same as having a single gradient, as the topmost gradient would obscure everything beneath it.
There is an issue with this background. Can you spot it?
This background will be fine for browsers that support CSS gradients but, for browsers that don’t, it will be transparent as the whole declaration is ignored. We have two ways to provide a fallback, each for different use cases. We have to either declare another background before the gradient, like so:
background: black;
background: radial-gradient(circle, white 10%, transparent 10%),
radial-gradient(circle, white 10%, black 10%) 50px 50px;
background-size:100px 100px;
or declare each background property separately:
background-color: black;
background-image: radial-gradient(circle, white 10%, transparent 10%),
radial-gradient(circle, white 10%, transparent 10%);
background-size:100px 100px;
background-position: 0 0, 50px 50px;
The vigilant among you will have noticed another change we made to our code in the last example: we altered the second gradient to have transparent regions as well. This way background-color serves a dual purpose: it sets both the fallback color and the background color of the polka dot pattern, so that we can change it with just one edit. Always strive to make code that can be modified with the least number of edits. You might think that it will never be changed in that way but, almost always, given enough time, you’ll be proved wrong.
We can apply the exact same technique with linear gradients, in order to create checkerboard patterns out of right triangles:
background-color: white;
background-image: linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%),
linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%);
background-size:100px 100px;
background-position: 0 0, 50px 50px;
See this example live
Using the right units
Don’t use pixels for the sizes without any thought. In some cases, ems make much more sense. For example, when you want to make a lined paper background, you want the lines to actually follow the text. If you use pixels, you have to change the size every time you change font-size. If you set the background-size in ems, it will naturally follow the text and you will only have to update it if you change line-height.
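A rough sketch of that lined paper idea (the colour and measurements are placeholders) keeps the tile height tied to the text:
.lined-paper {
line-height: 1.5em;
/* a thin line at the bottom of each tile */
background: linear-gradient(top, transparent 95%, #9cf 95%);
/* one tile per line of text: height matches line-height */
background-size: 100% 1.5em;
}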
Is it possible?
The shapes that can be achieved with only one gradient are:
stripes
right triangles
circles and ellipses
semicircles and other shapes formed from slicing ellipses horizontally or vertically
You can combine several of them to create squares and rectangles (two right triangles put together), rhombi and other parallelograms (four right triangles), curves formed from parts of ellipses, and other shapes.
Just because you can doesn’t mean you should
Technically, anything can be crafted with these techniques. However, not every pattern is suitable for it. The main advantages of this technique are:
no extra HTTP requests
short code
human-readable code (unlike data URIs) that can be changed without even leaving the CSS file.
Complex patterns that require a large number of gradients are probably better left to SVG or bitmap images, since they negate almost every advantage of this technique:
they are not shorter
they are not really comprehensible – changing them requires much more effort than using an image editor
they still save an HTTP request, but so does a data URI.
I have included some very complex patterns in my gallery, because even though I think they shouldn’t be used in production (except under very exceptional conditions), understanding how they work and coding them helps somebody understand the technology in much more depth.
Another rule of thumb is that if your pattern needs shapes to obscure parts of other shapes, like in the star pattern or the yin yang pattern, then you probably shouldn’t use it. In these patterns, changing the background color requires you to also change the color of these shapes, making edits very tedious.
If a certain pattern is not practicable with a reasonable amount of CSS, that doesn’t mean you should resort to bitmap images. SVG is a very good alternative and is supported by all modern browsers.
Browser support
CSS gradients are supported by Firefox 3.6+, Chrome 10+, Safari 5.1+ and Opera 11.60+ (linear gradients since Opera 11.10). Support is also coming in Internet Explorer when IE10 is released. You can get gradients in older WebKit versions (including most mobile browsers) by using the proprietary -webkit-gradient(), if you really need them.
Epilogue
I hope you find these techniques useful for your own designs. If you come up with a pattern that’s very different from the ones already included, especially if it demonstrates a cool new technique, feel free to send a pull request to the github repo of the patterns gallery. Also, I’m always fascinated to see my techniques put in practice, so if you made something cool and used CSS patterns, I’d love to know about it!
Happy holidays!",2011,Lea Verou,leaverou,2011-12-16T00:00:00+00:00,https://24ways.org/2011/css3-patterns-explained/,code
288,Displaying Icons with Fonts and Data- Attributes,"Traditionally, bitmap formats such as PNG have been the standard way of delivering iconography on websites. They’re quick and easy, and it also ensures they’re as pixel crisp as possible. Bitmaps have two drawbacks, however: multiple HTTP requests, affecting the page’s loading performance; and a lack of scalability, noticeable when the page is zoomed or viewed on a screen with a high pixel density, such as the iPhone 4 and 4S.
The requests problem is normally solved by using CSS sprites, combining the icon set into one (physically) large image file and showing the relevant portion via background-position. While this works well, it can get a bit fiddly to specify all the positions. In particular, scalability is still an issue. A vector-based format such as SVG sounds ideal to solve this, but browser support is still patchy.
The rise and adoption of web fonts have given us another alternative. By their very nature, they’re not only scalable, but resolution-independent too. No need to specify higher resolution graphics for high resolution screens!
That’s not all though:
Browser support: Unlike a lot of new shiny techniques, they have been supported by Internet Explorer since version 4, and, of course, by all modern browsers. We do need several different formats, however!
Design on the fly: The font contains the basic graphic, which can then be coloured easily with CSS – changing colours for themes or :hover and :focus styles is done with one line of CSS, rather than requiring a new graphic. You can also use CSS3 properties such as text-shadow to add further effects. Using -webkit-background-clip: text;, it’s possible to use gradient and inset shadow effects, although this creates a bitmap mask which spoils the scalability.
Small file size: specially designed icon fonts, such as Drew Wilson’s Pictos font, can be as little as 12Kb for the .woff font. This is because they contain fewer characters than a fully fledged font. You can see Pictos being used in the wild on sites like Garrett Murray’s Maniacal Rage.
As with all formats though, it’s not without its disadvantages:
Icons can only be rendered in monochrome or with a gradient fill in browsers that are capable of rendering CSS3 gradients. Specific parts of the icon can’t be a different colour.
It’s only appropriate when there is an accompanying text to provide meaning. This can be alleviated by wrapping the text label in a tag (I like to use b rather than span, due to the fact that it’s smaller and isn’t being used elsewhere) and then hiding it from view with text-indent: -999em, as sketched just after this list.
Creating an icon font can be a complex and time-consuming process. While font editors can carry out hinting automatically, the best results are achieved manually.
Unless you’re adept at creating your own fonts, you’re restricted to what is available in the font. However, fonts like Pictos will cover the most common needs, and icons are most effective when they’re using familiar conventions.
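Here’s a minimal sketch of the label-hiding technique mentioned in that list, assuming the label is wrapped in a b element inside the icon link:
.icon b {
/* shove the wrapped label out of view while leaving it in the DOM */
display: inline-block;
overflow: hidden;
text-indent: -999em;
}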
The main complaint about using fonts for icons is that it can mean adding a meaningless character to our markup. The good news is that we can overcome this by using one of two methods – CSS generated content or the data-icon attribute – in combination with the :before and :after pseudo-selectors, to keep our markup minimal and meaningful.
Our simple markup looks like this:
<a href="#" class="icon basket">View Basket</a>
Note the multiple class attributes. Next, we’ll import the Pictos font using the @font-face web fonts property in CSS:
@font-face {
font-family: 'Pictos';
src: url('pictos-web.eot');
src: local('☺'),
url('pictos-web.woff') format('woff'),
url('pictos-web.ttf') format('truetype'),
url('pictos-web.svg#webfontIyfZbseF') format('svg');
}
This rather complicated looking set of rules is (at the time of writing) the most bulletproof way of ensuring as many browsers as possible load the font we want. We’ll now use the content property applied to the :before pseudo-class selector to generate our icon. Once again, we’ll use those multiple class attribute values to set common icon styles, then specific styles for .basket. This helps us avoid repeating styles:
.icon {
font-family: 'Pictos';
font-size: 22px;
}
.basket:before {
content: ""$"";
}
What does the :before pseudo-class do? It generates the dollar character in a browser, even when it’s not present in the markup. Using the generated content approach means our markup stays simple, but we’ll need a new line of CSS, defining what letter to apply to each class attribute for every icon we add.
data-icon is a new alternative approach that uses the HTML5 data- attribute in combination with CSS attribute selectors. This new attribute lets us add our own metadata to elements, as long as it’s prefixed by data- and doesn’t contain any uppercase letters. In this case, we want to use it to provide the letter value for the icon. Look closely at this markup and you’ll see the data-icon attribute.
<a href="#" class="icon" data-icon="$">View Basket</a>
We could add others, in fact as many as we like.
Favourites
History
Location
Then, we need just one CSS attribute selector to style all our icons in one go:
.icon:before {
content: attr(data-icon);
/* Insert your fancy colours here */
}
By placing our custom attribute data-icon in the selector in this way, we can enable CSS to read the value of that attribute and display it before the element (in this case, the anchor tag). It saves writing a lot of CSS rules. I can imagine that some may not like the extra attribute, but it does keep it out of the actual content – generated or not.
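To give a rough sketch of the kind of restyling mentioned earlier (the colour and shadow values here are placeholders), a hover and focus state needs only a couple of lines:
.icon:hover:before,
.icon:focus:before {
color: #c00;
text-shadow: 0 1px 1px rgba(0,0,0,0.25);
}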
This could be used for all manner of tasks, including a media player and large simple illustrations. See the demo for live examples. Go ahead and zoom the page, and the icons will be crisp, with the exception of the examples that use -webkit-background-clip: text as mentioned earlier.
Finally, it’s worth pointing out that with both generated content and the data-icon method, the letter will be announced to people using screen readers. For example, with the shopping basket icon above, the reader will say “dollar sign view basket”. As accessibility issues go, it’s not exactly the worst, but could be confusing. You would need to decide whether this method is appropriate for the audience. Despite the disadvantages, icon fonts have huge potential.",2011,Jon Hicks,jonhicks,2011-12-12T00:00:00+00:00,https://24ways.org/2011/displaying-icons-with-fonts-and-data-attributes/,code
289,Front-End Developers Are Information Architects Too,"The theme of this year’s World IA Day was “Information Everywhere, Architects Everywhere”. This article isn’t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People’s ability to be successful online is unequivocally connected to the quality of the code that is written.
Places made of information
In information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated:
People live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.
25 Theses
We’re creating structures which people rely on for significant parts of their lives, so it’s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.
Semantics
The word “semantic” has its origin in Greek words meaning “significant”, “signify”, and “sign”. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building’s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively, consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don’t need an “entrance” sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signifies their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building’s purpose happens, but the effect is similar: the building is signifying something about its purpose.
HTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating:
Elements, attributes, and attribute values in HTML are defined … to have certain meanings (semantics). For example, the ol element represents an ordered list, and the lang attribute represents the language of the content.
HTML 5.1 Semantics, structure, and APIs of HTML documents
HTML’s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they’re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It’s also what a front-end developer does, whether they realise it or not.
A brief introduction to information architecture
We’re going to start by looking at what an information architect is. There are many definitions, and I’m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is:
the individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.
Of Patterns And Structures
To me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.
Just as there are many definitions of what an information architect is, there are for information architecture itself. I’m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:
The structural design of shared information environments.
The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.
The art and science of shaping information products and experiences to support usability, findability, and understanding.
Information Architecture For The World Wide Web, 4th Edition
To me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015’s State Of The Browser conference, Edd Sowden talked about the accessibility of tables. He discovered that by simply not using the semantically-correct th element to mark up headings, in some situations browsers will decide that a table is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by Léonie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.
Our definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we’re going to use. Dan defines these terms as:
Ontology
The definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate.
What we mean when we say what we say.
Taxonomy
The arrangements of the parts. Developing systems and structures for what everything’s called, where everything’s sorted, and the relationships between labels and categories
Choreography
Rules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.
We now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:
… data is not information. Information is data endowed with relevance and purpose.
Managing For The Future
If we use the correct semantic element to mark up content then we’re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we’re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an h2 should be relevant to its parent, which should be the h1. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you’ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:
by creating a list of headings so users can quickly scan the page for information
by using a keyboard command to cycle through one heading at a time
If we had a document for Christmas Day TV we might structure it something like this:
<h1>Christmas Day TV schedule</h1>
 <h2>BBC1</h2>
  <h3>Morning</h3>
  <h3>Evening</h3>
 <h2>BBC2</h2>
  <h3>Morning</h3>
  <h3>Evening</h3>
 <h2>ITV</h2>
  <h3>Morning</h3>
  <h3>Evening</h3>
 <h2>Channel 4</h2>
  <h3>Morning</h3>
  <h3>Evening</h3>
If I use VoiceOver to generate a list of headings, I get this:
Once I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the h2s:
If we hadn’t used headings, or if we’d nested them incorrectly, our users would be frustrated.
Putting this together
Let’s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a button element. Every browser understands what a button is, how it works, and what keyboard shortcuts should be used with it. The HTML specification for the button element says:
The button element represents a button labeled by its contents.
The contents that a button can have include the type attribute, any relevant ARIA attributes, and the actual text label that the user sees. This information is more important than the visual design: it doesn’t matter how beautiful or obtuse the design is, if the underlying code is non-semantic and poorly labelled, people are going to struggle to use it. Here are three buttons, each created with the same HTML but with different designs:
Regardless of what they look like, because we’ve used semantic HTML instead of a bunch of meaningless divs or spans, people who use assistive technology are going to benefit. Out of the box, without any extra development effort, a button is accessible and usable with a keyboard. We don’t have to write event handlers to listen for people pressing the Enter key or the space bar, which we would have to do if we’d faked a button with non-semantic elements. Our button is also quickly findable: for example, in the same way it’s possible to create a list of headings with a screen reader, I can also create a list of form elements and then quickly jump to the one I want.
Now we have our button, let’s add the panel we’re toggling the appearance of. Here’s the shape of the code (a sketch: the id values are placeholders, but the attributes are the ones discussed below):
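<!-- the id values, link targets, and hidden attribute are placeholders for illustration -->
<button type="button" id="settings-button" aria-controls="settings-panel" aria-expanded="false">Settings</button>
<ul id="settings-panel" aria-labelledby="settings-button" hidden>
 <li><a href="#">Option one</a></li>
 <li><a href="#">Option two</a></li>
 <li><a href="#">Option three</a></li>
</ul>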
There’s quite a bit going on here. We’re using the:
aria-controls attribute to architect a connection between the button element and the panel whose appearance it controls. When some assistive technology, for example the JAWS screen reader, encounters an element with aria-controls it audibly tells a user about the controlled expanded element and gives them the ability to move focus to it.
aria-expanded attribute to denote whether the panel is visible or not. We toggle this value using JavaScript to true when the panel is visible and false when it’s not. This important attribute tells people who use screen readers about the state of the elements they’re interacting with. For example, VoiceOver announces Settings expanded button when the panel is visible and Settings collapsed button when it’s hidden.
aria-labelledby attribute to give the list a title of “Settings”. This can benefit some users of assistive technology. For example, screen readers can cycle through all the lists on a page, so being able to title them can improve findability. Being able to hear list Settings three items is, I’d argue, more useful than list three items. By doing this we’re supporting usability and findability.
ul element to contain our list of links in our panel.
Let’s look at the choice of a ul to contain our settings choices. Firstly, our settings are related items, so they belong in a structure that semantically groups things. This is something that a list can do that other elements or patterns can’t. This pattern, for example, isn’t semantic and has no structure:
All we have there is three elements next to each other on the screen and in the DOM. That is not robust code that signifies anything.
Why are we using an unordered list as opposed to an ordered list or a definition list? A quick look at the HTML specification tells us why:
The ul element represents a list of items, where the order of the items is not important — that is, where changing the order would not materially change the meaning of the document.
The HTML 5.1 specification’s description of the ul element
Will the meaning of our document materially change if we moved the order of our links around? Nope. Therefore, I’d argue, we’ve used the correct element to structure our content.
These coding decisions are information architecture
I believe that what we’ve done here is pure information architecture. Going back to Dan Klyn’s model, we’ve practiced ontology by looking at the meaning of what we’re intending to communicate:
we want to communicate there is an interactive element that toggles the appearance of an element on a page so we’ve used one, a button, with those semantics.
programmatically we’ve used the type='button' attribute to signify that the button isn’t a menu, reset, or submit element.
visually we’ve designed our button to look like something that can be interacted with and, importantly, we haven’t removed the focus ring.
we’ve labelled the button with the word “Settings” so that our users will hopefully understand what the button is for.
we’ve used a ul element to structure and communicate our list of related items.
We’ve also practiced taxonomy by developing systems and structures and creating relationships between our elements:
by connecting the button to the panel using the aria-controls attribute we’ve programmatically created a relationship between two elements.
we’ve developed a structure in our elements by labelling our ul with the same name as the button that controls its appearance.
And finally we’ve practiced choreography by creating elements that foster movement and interaction. We’ve anticipated the way users and information want to flow:
we’ve used a button element that is interactive and accessible out of the box.
our aria-controls attribute can help some people who use screen readers move easily from the button to the panel it controls.
by toggling the value of the aria-expanded attribute we’ve developed a system that tells assistive technology about the status of the relationship between our elements: the panel is visible or the panel is hidden.
we’ve made sure our information is more usable and findable no matter how our users want or need to interact with it. Regardless of how someone “sees” our work they’re going to be able to use it because we’ve architected multiple ways to access our information.
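To make the choreography concrete, here’s a minimal sketch of the toggle in JavaScript, reusing the placeholder ids from the markup sketch above:
var button = document.getElementById('settings-button');
var panel = document.getElementById('settings-panel');
button.addEventListener('click', function () {
// Read the current state from the attribute
var expanded = button.getAttribute('aria-expanded') === 'true';
// Flip the announced state...
button.setAttribute('aria-expanded', String(!expanded));
// ...and show or hide the panel to match
panel.hidden = expanded;
});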
Information architecture, robust code, and accessibility
The United Nations estimates that around 10% of the world’s population has some form of disability which, at the time of writing, is around 740,000,000 people. That’s a lot of people who rely on well-architected semantic code that can be interpreted by whatever assistive technology they may need to use.
If everyone involved in the creation of our places made of information practiced information architecture it would make satisfying the WCAG 2.0 POUR principles so much easier. Our digital construction practices directly affect the quality of life of millions of people, and we have a responsibility to make technology available to them. In her book How To Make Sense Of Any Mess, Abby Covert states:
If we’re going to be successful in this new world, we need to see information as a workable material and learn to architect it in a way that gets us to our goals.
How To Make Sense Of Any Mess
I believe that the world will be a better place if we start treating front-end development as information architecture.",2016,Francis Storr,francisstorr,2016-12-17T00:00:00+00:00,https://24ways.org/2016/front-end-developers-are-information-architects-too/,code
292,Watch Your Language!,"I’m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can’t be expressed in some languages, while other languages express these concepts with ease.
It also helped me understand the way we label languages. English: business. French: romance. Here’s an example of how words, or the absence thereof, can affect the way we think:
In French we love everything. There’s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It’s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn’t grasped the difference. Needless to say, I’ve scared away plenty of first dates!
Learning another language made me realize the limitations of my native language and revealed concepts I didn’t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas.
When I lived in Montréal, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient.
I’m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job.
When I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn’t consider accessibility. All this made editing views a laborious process.
Grep. Replace all. Test. Ship it. Repeat.
This wasn’t sustainable at Shopify’s scale, so the newly-formed front end team was given two missions:
Make the app responsive (AKA Let’s Make This Thing Responsive ASAP)
Make the view layer scalable and maintainable (AKA Let’s Build a Pattern Library… in Ruby)
Let’s make this thing responsive ASAP
The year was 2015. The Shopify admin wasn’t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries.
It seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn’t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool.
Writing the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive.
But with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn’t performant at all. Components would jarringly jump around the page before settling in on first paint.
It was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn’t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout.
In other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly.
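The pattern looked roughly like this (the class names and the 300px basis are my assumptions, not Shopify’s actual values):
.component-list {
display: flex;
flex-wrap: wrap;
}
.component-list .component {
/* grow and shrink freely, wrapping to a new row below ~300px */
flex: 1 1 300px;
}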
Let’s build a pattern library… in Ruby
In order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping.
When I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buy-in from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn’t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan.
We put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still do)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks.
Since the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn’t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn’t going to cut it.
I don’t know the end of this story, because we haven’t written it yet. We’ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven’t determined the winner yet. Only time (and data gathering) will tell.
Ruby is great but it doesn’t speak the browser’s language efficiently. It was not the right language for the job.
Learning a new spoken language has had an impact on how I write code. It has taught me that you don’t know what you don’t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you’re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language.",2016,Annie-Claude Côté,annieclaudecote,2016-12-10T00:00:00+00:00,https://24ways.org/2016/watch-your-language/,code
293,A Favor for Your Future Self,"We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens.
We comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later.
I want to talk about a situation that Past Alicia and Team couldn’t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn’t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it’s the ultimate “I really hope this doesn’t break something else” situation. It was a stressful and tedious effort of triple checking that the things we were removing weren’t dependencies elsewhere. To be honest, we wouldn’t have been able to do this with any amount of success or confidence without our test suite.
Writing tests for code is one of those things that developers really, really don’t want to do. It’s one of the easiest things to cut in the development process, and there’s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can’t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren’t in a tough spot later on because we only thought of the best case scenarios.
JavaScript
Regardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you’re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests.
Unit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations.
/*
* This example uses a test framework called Jasmine
*/
describe(""Calculator Operations"", function () {
it(""Should add two numbers"", function () {
// Say we have a calculator
Calculator.init();
// We can run the function that does our addition calculation...
var result = Calculator.addNumbers(7,3);
// ...and ensure we're getting the right output
expect(result).toBe(10);
});
});
Even though these teeny bits work in isolation, we should ensure that connecting the large pieces work, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let’s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn’t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).
it("Should remember the last calculation", function () {
// Spy on the method so toHaveBeenCalled() can track it, while still running the real code
spyOn(Calculator, "updateCurrentValue").and.callThrough();
// Run an operation
Calculator.addNumbers(7,10);
// Expect something else to have happened as a result
expect(Calculator.updateCurrentValue).toHaveBeenCalled();
expect(Calculator.currentValue).toBe(17);
});
Unit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although failures still might happen, you’ll be able to catch problems much sooner than you would without a test suite, and hopefully never push those failures to your production environment.
Interfaces
Regardless of how you’re building something, it most definitely has some kind of interface. Whether you’re using a very barebones structure, or you’re leveraging a whole design system, these things can be tested as well.
Acceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable.
For example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress.
/*
* This example uses Jasmine along with an add-on called jasmine-integration
*/
describe(""Acceptance tests"", function () {
// Go to our signup page
var page = visit(""/signup"");
// Fill our signup form with invalid information
page.fill_in(""input[name='email']"", ""Not An Email"");
page.fill_in(""input[name='name']"", ""Alicia"");
page.click(""button[type=submit]"");
// Check that we get an expected error message
it(""Shouldn't allow signup with invalid information"", function () {
expect(page.find(""#signupError"").hasClass(""hidden"")).toBeFalsy();
});
// Now, fill our signup form with valid information
page.fill_in(""input[name='email']"", ""thisismyemail@gmail.com"");
page.fill_in(""input[name='name']"", ""Gerry"");
page.click(""button[type=submit]"");
// Check that we get an expected success message and the error message is hidden
it(""Should allow signup with valid information"", function () {
expect(page.find(""#signupError"").hasClass(""hidden"")).toBeTruthy();
expect(page.find(""#thankYouMessage"").hasClass(""hidden"")).toBeFalsy();
});
});
In terms of visual design, we’re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga.
Tools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images. The code would look something like this:
/*
 * This example uses PhantomCSS
 */
casper.start("/home").then(function () {
  // Initial state of form
  phantomcss.screenshot("#signUpForm", "sign up form");

  // Hit the sign up button (should trigger error)
  casper.click("button#signUp");

  // Take a screenshot of the UI component
  phantomcss.screenshot("#signUpForm", "sign up form error");

  // Fill in form by name attributes & submit
  casper.fill("#signUpForm", {
    name: "Alicia Sedlock",
    email: "alicia@example.com"
  }, true);

  // Take a second screenshot of success state
  phantomcss.screenshot("#signUpForm", "sign up form success");
});
You run this code before starting any development, to create your baseline set of screen captures. After you’ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. Say you changed the margins on your form elements – your image diff would look something like this:
This is a great tool for ensuring not just that your site retains its expected styling, but also that nothing accidentally changes in the living style guide or modular components you may have developed. It’s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored.
Conclusion
The shape and size of what you’re testing for your site or app will vary. You may not need lots of unit or integration tests if you don’t write a lot of JavaScript. You may not need visual regression testing for a one page site. It’s important to assess your codebase to see which tests would provide the most benefit for you and your team.
Writing tests isn’t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that’s broken breaks trust with our users, and it’s our responsibility as developers to make sure that trust isn’t broken. Testing shouldn’t be considered a “nice to have” - it should be an integral piece of our workflow and our day-to-day job.",2016,Alicia Sedlock,aliciasedlock,2016-12-03T00:00:00+00:00,https://24ways.org/2016/a-favor-for-your-future-self/,code
294,New Tricks for an Old Dog,"Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy.
I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on.
In this article, I hope to share some experiences and techniques that will prove useful in your own situation and that you can impress your friends in some new and exciting ways!
How do you do, fellow kids?
To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React.
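If you haven’t met Flight before, a component looks roughly like the sketch below. This is based on Flight’s documented API rather than TweetDeck code, and the component and event names are invented.
// A minimal Flight component: behaviour attached to a DOM node,
// communicating with the rest of the app via events.
// 'tweetBox' and 'uiSendTweet' are made-up names for illustration.
var defineComponent = require('flight/lib/component');

function tweetBox() {
  this.after('initialize', function () {
    // Listen for DOM events on this component's node...
    this.on('submit', function (e) {
      e.preventDefault();
      // ...and broadcast an application-level event for others to pick up
      this.trigger('uiSendTweet', {
        text: this.$node.find('textarea').val()
      });
    });
  });
}

module.exports = defineComponent(tweetBox);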
Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us.
Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built-in to Flight, and it made reusing components a real challenge.
There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it.
Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on.
To find fixes for these problems, we looked around at other frameworks. We like React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone.
But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort!
Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we already were using.
Boiled down, what we liked seemed quite simple:
Component nesting & composition
Easy, predictable state management
Normal functions for data manipulation
Making these ideas applicable to Flight took some time, but we’re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app.
While the specifics of our situation and Flight aren’t really important, this experience taught me something:
Distill good tech into great ideas. You can apply great ideas anywhere.
You don’t have to use cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already?
Times, they are a changin’
Apart from stealing ideas from the new and shiny, how can we make the most of improved tooling and techniques? Times change and so should the way we write code.
Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate.
In fact, we found ourselves with a mix of both AMD syntaxes:
define([""lodash""], function (_) {
// . . .
});
define(function (require) {
var _ = require(""lodash"");
// . . .
});
This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax:
import _ from "lodash";
These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there?
To explain, I want to introduce you to codemods and jscodeshift.
A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL("...") to new URL("...").
jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree — a data-structure representation of the code — finding and modifying it in place as it goes.
Here’s an example that renames all instances of the variable foo to bar:
module.exports = function (fileInfo, api) {
  return api
    .jscodeshift(fileInfo.source)
    .findVariableDeclarators('foo')
    .renameTo('bar')
    .toSource();
};
It’s a seriously powerful tool, and we’ve used it to write a series of codemods that:
rename modules,
unify our use of AMD to a single syntax,
transition from one testing framework to another, and
switch from AMD to CommonJS (sketched below).
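As an illustration, the heart of that last transform could look roughly like this. It’s a simplified sketch, not our production codemod: it ignores the factory’s return value and plenty of edge cases.
// Rewrites define(['dep'], function (dep) { ... }) statements into
// var dep = require('dep'); followed by the factory's body.
module.exports = function (fileInfo, api) {
  var j = api.jscodeshift;

  return j(fileInfo.source)
    .find(j.CallExpression, { callee: { name: 'define' } })
    .forEach(function (path) {
      var deps = path.node.arguments[0];
      var factory = path.node.arguments[1];
      if (!deps || deps.type !== 'ArrayExpression') return;

      // Pair each dependency string with its factory parameter and
      // build a `var param = require('dep');` declaration for it
      var requires = deps.elements.map(function (dep, i) {
        return j.variableDeclaration('var', [
          j.variableDeclarator(
            factory.params[i],
            j.callExpression(j.identifier('require'), [dep])
          )
        ]);
      });

      // Swap the whole define(...) statement for the requires
      // plus the statements from the factory body
      path.parent.replace.apply(
        path.parent,
        requires.concat(factory.body.body)
      );
    })
    .toSource();
};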
These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS:
commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc
Date: Tue Jul 12 2016
Run AMD -> CommonJS codemod
418 files changed, 47550 insertions(+), 48468 deletions(-)
Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone!
From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes:
Find all the existing patterns
Choose the two most similar
Unify with a codemod
Repeat.
For example:
For module loading, we had 2 competing AMD patterns plus some use of CommonJS
The two AMD syntaxes were the most similar
We used a codemod to unify the AMD patterns
Later we returned to AMD to convert it to CommonJS
It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer.
Welcome aboard!
As TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad microservices that manage our data, and the layers of authentication, security and business logic around them, make for an overwhelming amount of information to hand to a newbie.
Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding that each of our new hires goes through, to help them make the most of their first few weeks.
Joining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding!
And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help.
This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies.
After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help!
For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts:
What are our dependencies?
Where are the potential points of failure?
Where does authentication live? Storage? Caching?
Who owns “X”?
Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track.
To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review.
In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started:
Every pull request gets a +1 from someone else.
That’s all — it’s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master.
At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things:
Don’t review for more than an hour 1
Keep reviews smaller than ~400 lines 2
Code review your own code first 2
After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for review, that review is blocking them from doing other work. It’s your job to unblock them.
On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration.
But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool.
It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours!
And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace.
For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results.
If you’d like to run a seminar, one thing you could try to get started: run a “point at the thing you least understand in our architecture” session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this!
More, please!
There’s a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations.
And finally, thanks to Amy for proof reading this and to Passy for feedback on the original talk.
Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE 2000: 467-476. ↩
Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software. ↩ ↩",2016,Tom Ashworth,tomashworth,2016-12-18T00:00:00+00:00,https://24ways.org/2016/new-tricks-for-an-old-dog/,code
295,Internet of Stranger Things,"This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi-based, internet-connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension 1.
What you’ll see
You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the “Send a message” button and wait your turn.)
It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi.
What you’ll need
Raspberry Pi any of the following models: Zero (will need straight male header pins soldered 2 and Micro USB OTG adaptor), A+, B+, 2, or 3
Micro SD card at least 4Gb Class 10 speed 3
Micro USB power supply at least 2A
USB Wifi dongle (unless you have a Pi 3 - that has wifi built in).
Addressable fairy lights
Logic level shifter (with pins soldered unless you want to do it!)
Breadboard
Jumper wires (3x male to male and 4x female to male)
Optional but recommended
Base board to hold the Pi and Breadboard (often comes with a breadboard!)
You can find links for where to buy all of these items in the support document that goes along with this tutorial. The total price should be around $100 4.
Setting up the Raspberry Pi
You’ll need to install the SD card for the Raspberry Pi. You’ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher 5.
Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details 6.
network={
  ssid="mywifi"
  psk="mypassword"
}
Save the file, eject the card from your computer and plug it into the Raspberry Pi.
If you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle 7 and power supply, but don’t plug it in yet!
Wiring!
Time to wire everything up!
First of all, push the Logic Level Converter into the middle of the breadboard:
Logic Level Converter
The logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top.
Raspberry Pi pins only output 3.3v but the lights need 5v. That’s why we need the logic level converter in there to boost up the signal.
Connect the first two wires between the Raspberry Pi pins and the breadboard:
Note that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don’t have to match but it’s easier to follow (and check) if you use the same ones as in the diagram.
Then the next two:
This is what you should have so far:
Lights
Now to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard.
Make sure that you connect the right end of the lights, mine has a male connector at the wrong end so it’s impossible to do this, but double check.
Also make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN for data in).
You can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out!
That’s it! Everything’s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check!
The Moment of Truth!
Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down!
However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it’ll come back up.
Rig up the lights!
Fix the lights up on the wall however you want, pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50 light string so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi.
Check the photo here to see how the lights line up, note that there are spare unused lights in-between each row:
Now visit lights.seb.ly and you’ll see this:
If you’re the only one online you’ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you’ll need to click the button to join the queue and wait your turn.
How it works - the geeky details!
Electronics:
The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer).
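For a single LED or a button, that can be as simple as the sketch below, which uses the popular onoff module. The module choice and pin number are assumptions for illustration, not necessarily what this project uses.
// A sketch of basic GPIO control with the onoff module.
// GPIO 17 and the blink behaviour are purely illustrative.
var Gpio = require('onoff').Gpio;
var led = new Gpio(17, 'out');

var on = 0;
setInterval(function () {
  // Flip the pin between high and low once a second
  on = on ? 0 : 1;
  led.writeSync(on);
}, 1000);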
Addressable LEDs or “Neopixels”
We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string.
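In practice, a library handles all of that strict timing for you. Driving the string from Node.js can look something like the sketch below, which assumes the rpi-ws281x-native module; it’s a common choice for WS281x strings, though not necessarily the exact one used here.
// A sketch of lighting the first LED in a 50-light string.
var ws281x = require('rpi-ws281x-native');

var NUM_LEDS = 50;
var pixels = new Uint32Array(NUM_LEDS);

ws281x.init(NUM_LEDS);

// Colours are packed as 24-bit values; note that some strings
// expect GRB rather than RGB ordering
pixels[0] = 0xff0000;

// Push the whole frame out to the string in one go
ws281x.render(pixels);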
The chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels 8 and I used them on my Laser Light Synths project.
Neopixels with the chip and the LED all in one - it’s the white square shaped component and the darker square inside is the chip. These are only 5mm wide!
A Laser Light Synth! Covered with around 800 super bright neopixels!
Logic Level Converter
The logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip.
Power
Neopixels can often draw a lot of current so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit’s Neopixel Uberguide.
Node.js
There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server.
The server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time.
WebSockets
I’m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connects to my WebSocket server.
When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers 9.
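Boiled down, the relay on the server can be as small as the sketch below. The 'letter' event name and payload are assumptions for illustration; the real protocol lives in the repositories above.
// A sketch of the server-side relay with Socket.io.
var http = require('http');
var server = http.createServer();
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  socket.on('letter', function (data) {
    // Forward the clicked letter to every connected client: the
    // Raspberry Pis light the real string, the browsers mirror it
    io.emit('letter', data);
  });
});

server.listen(8080);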
The Raspberry Pi code
The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file:
node /home/pi/strangerthings/client.js > /dev/null &
Anything in the rc.local file gets executed when the Pi boots up and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs it in the background and doesn’t hold up the boot process.
Working with the Raspberry Pi headless
You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh 10. If you’re on a mac you can find your Pi on the network through the name raspberrypi.local 11, otherwise you’ll need to find its IP address. There’s more on the guide to Remote Access instructions on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial.
Improvements
This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style.
Where next?
Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that’s something you’d be interested in, or sign up to my mailing list at st4i.com.
There are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice!
Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources.
I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly.
Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables ↩︎
Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit ↩︎
Slower cards will work but performance may suffer ↩︎
Or £5,000 in UK money. Sorry, Brexit joke :) ↩︎
You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. ↩︎
SSID and password should be all that you need but you can see all the config options on this wpa supplicant guide ↩︎
Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle ↩︎
Thanks to Adafruit who invented the term neopixels so we don’t have to refer to them as WS281x any more! ↩︎
So you can see other people sending messages in the browser ↩︎
ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. ↩︎
You can change this default hostname using raspi-config ↩︎",2016,Seb Lee-Delisle,sebleedelisle,2016-12-01T00:00:00+00:00,https://24ways.org/2016/internet-of-stranger-things/,code
296,Animation in Design Systems,"Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts.
But while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let’s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences.
Understand the importance of animation
Part of the reason we treat animation like a second-class citizen is that we don’t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. During this process, nothing quite commands attention like something in motion.
We are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation when done well can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place.
“Where did that menu go? Oh it’s in there.”
For a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks.
An animation flow on mobile.
Animation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn’t anything very fancy or crazy. Just by showing their users that they cared about them, they stuck around, and the bounce rates dropped.
14 second generic loading screen.
22 second custom loading screen.
This also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn’t as long.
Eli Fitch gave a talk at CSS Dev Conf called: “Perceived Performance: The Only Kind That Really Matters”, which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable–and therefore easier to measure–but that measuring how a user feels when visiting the site is more important and worth the time and attention.
In his talk, he states “Humans over-estimate passive waits by 36%, per Richard Larson of MIT”. This means that if you’re not using animation to shorten the perceived wait time of a form submission, users will perceive it to be much slower than the dev tools timeline is recording.
Rein it in
Unlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. Not knowing the building blocks they’re working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way.
The first step in reining in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren’t using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.)
Not sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animations, which has buckets of research and great ideas.
Even some beautiful component libraries that have animation in the docs make this mistake. You don’t need every kind of animation, just like you don’t need every kind of font. This bloats our code. Ask yourself questions like: do you really need a flip 180 degree animation? I can’t even conceive of a place on a typical UI where that would be useful, yet most component libraries that I’ve seen have a mixin that does just this.
Which leads to…
Have an opinion
Many people are confused about Material Design. They think that Material Design is Motion Design, mostly because they’ve never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that’s good branding.
By using Google’s motion design language and not your own, you’re losing out on a chance to be memorable on your own website.
What does having an opinion on motion look like in practice? It could mean you’ve decided that you never flip things. It could mean that your eases are always going to glide. In that instance, you would put your efforts towards finding an ease that looks “gliding” and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they’re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don’t have to communicate again and again to make things feel cohesive.
Create good developer resources
Sometimes people don’t incorporate animation into a design system because they aren’t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow):
Create timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. T1 would be reserved for longer pieces, down to t5 which is a bit like h5 in that it’s the default (usually around .25 seconds or thereabouts).
Keep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. This, and the animation-fill-mode, are likely to be the only two properties that can be reused for the entrance and exit of the animation.
Use the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just like your color palette having too many can unnecessarily bulk up your codebase, and keep it from feeling cohesive. Think critically about what you need here.
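The Pen below shows this modular approach in practice. Just to sketch the same idea in JavaScript terms, the interchangeable pieces could live in one shared object and be applied with the Web Animations API; the token names and values here are invented for illustration.
// Hypothetical shared motion tokens, mirroring the t1-t5 timing
// units and the named eases described above.
var motion = {
  timing: { t1: 700, t3: 400, t5: 250 }, // milliseconds
  ease: {
    entrance: 'cubic-bezier(0, 0, 0.2, 1)',
    exit: 'cubic-bezier(0.4, 0, 1, 1)'
  }
};

// Compose the pieces per use rather than hand-rolling each animation
function fadeIn(el, timing) {
  el.animate([{ opacity: 0 }, { opacity: 1 }], {
    duration: motion.timing[timing || 't5'],
    easing: motion.ease.entrance,
    fill: 'both' // equivalent to animation-fill-mode: both
  });
}

fadeIn(document.querySelector('.dialog'), 't3');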
See the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen.
The example above is pared-down, but you can see how in a robust system, having pieces that are interchangeable cached across the whole system would save time for iterations and prototyping, not to mention make it easy to make adjustments for different feeling movement on the same animation easily.
One low hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. You avoid throwing something together at the last minute, or using a GIF, which is really heavy and mushy on retina displays. You can make singular pieces that look really refined and are reusable.
React and Vue Implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments like we have in the previous example!
Responsive
At the very least we should ensure that interaction also works well on mobile, but if we’d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multiple finger detection. With a bit of work, these can all be created through native detection as well.
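Wiring up a swipe with Hammer, for example, only takes a few lines. This is a sketch, with a made-up carousel as the target:
// A sketch of swipe detection with Hammer.js; the '.carousel'
// element and slide behaviour are hypothetical.
var el = document.querySelector('.carousel');
var mc = new Hammer(el);
var currentSlide = 0;

mc.on('swipeleft swiperight', function (e) {
  // Advance or rewind depending on swipe direction
  currentSlide += (e.type === 'swipeleft') ? 1 : -1;
  el.style.transform = 'translateX(' + (-currentSlide * 100) + '%)';
});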
Responsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the required 300ms on the secondary tap before firing the action. Interaction for touch events must either start from a larger touch-target (40px × 40px or greater) or use @media(pointer:coarse) as support allows.
Buy-in
Sometimes people don’t create animation resources simply because it gets deprioritized. But design systems were also something we once had to fight for, too. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them:
This helps people you’re working with figure out the relative necessity and workload of the addition of these animations and think more critically about it. You’re also more likely to get something through if you’re proving that what you’re making is needed and can be reused.
Good compromises can be made this way: “we’re not going to go all out and create an animated ‘About Us’ page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.”
Successfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can’t be overstressed that good communication is key.
Get started!
With these tools and good communication, we can make our codebases more efficient, performant, and feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully.",2016,Sarah Drasner,sarahdrasner,2016-12-16T00:00:00+00:00,https://24ways.org/2016/animation-in-design-systems/,code
298,First Steps in VR,"The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not.
I think of these two mindsets as production and prototyping, though of course there is plenty of overlap and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people.
Every year we see articles saying it will be the “year of virtual reality”, that was especially prevalent this year. 2016 has been a year of progress, VR isn’t quite mainstream but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web.
WebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make it easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that help to make the heavy lifting a bit easier.
Getting Started with A-Frame
I like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such, I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components, you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That’s not to say you can’t build complex things, you have complete access to write JavaScript and shaders.
I’m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There’s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It’s fun.
For this project I wanted to keep things as simple as possible, so I can easily explain it without the classic “draw a circle then draw an owl”. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:
Must work on Google Cardboard at a minimum, because of price
Therefore, it must not rely on having a controller
Auto-moving around a maze would be a good example
Move in direction you look
Stretch goal: Scoring, time until you hit a wall or get stuck in maze
Stretch goal: Levels, so the map doesn’t need to be random
Stretch goal: Snow!
I decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground.
24 ways
As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene>, which is the container for everything that is going to show up on the screen. There are a few <a-entity> elements, which can be compared to <div>s as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.
Styling
Now we’ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave.
Note, you will probably need a server running for images to work. You can do this by running python -m "SimpleHTTPServer" in the folder where the code is, then go to localhost:8000 in your browser.
Textures
Unless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor.
The <a-assets> element is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures.
To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image.
Replace:
With:
This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.
box.setAttribute('material', 'src: #texture-wall');
That’s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object.
Lighting
The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished.
There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit.
To start with I wanted to light up the scene with a general light, type="ambient", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4, it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky.
I felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity, this could have been as an attribute but I set it as a child a-light instead. By using type=""point"" with a high intensity and low distance, it showed close walls as being lighter. It also added a sort-of object to the player, it isn’t a walking human or anything but by moving light where the player is it feels a bit more physical.
By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the with type=exponential so it looks thicker the further away it is and a mid intensity, so you feel a bit lost but can still see.
I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.
Movement
One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.
AFRAME.registerComponent('automove-controls', {
  init: function () {
    this.speed = 0.1;
    this.isMoving = true;
    this.velocityDelta = new THREE.Vector3();
  },
  isVelocityActive: function () {
    return this.isMoving;
  },
  getVelocityDelta: function () {
    // Move forward along z at a constant speed while isMoving is true
    this.velocityDelta.z = this.isMoving ? -this.speed : 0;
    return this.velocityDelta.clone();
  }
});
Replace:
universal-controls
With:
universal-controls=""movementControls: automove, gamepad, keyboard""
This works by creating a component, automove-controls, that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction, it just checks if isMoving is true and then moves the player by the set speed. Components can be created to add all kinds of functionality with relative ease. It makes it very powerful for people of all skill levels.
Building a map
Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels.
I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images), I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future.
var map = {
  "data": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
  "height": 10,
  "width": 10
};
As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.
document.querySelector('a-scene').addEventListener('render-target-loaded', function () {
  var WALL_SIZE = 5,
      WALL_HEIGHT = 3;
  var el = document.querySelector('#walls');
  var wall;

  for (var x = 0; x < map.height; x++) {
    for (var y = 0; y < map.width; y++) {
      var i = y * map.width + x;
      var position = (x - map.width / 2) * WALL_SIZE + ' ' + 1.5 + ' ' + (y - map.height / 2) * WALL_SIZE;
      if (map.data[i] === 1) {
        // Create wall
        wall = document.createElement('a-box');
        el.appendChild(wall);
        wall.setAttribute('color', '#fff');
        wall.setAttribute('material', 'src: #texture-wall;');
        wall.setAttribute('width', WALL_SIZE);
        wall.setAttribute('height', WALL_HEIGHT);
        wall.setAttribute('depth', WALL_SIZE);
        wall.setAttribute('position', position);
        wall.setAttribute('static-body', '');
      }
      if (map.data[i] === 2) {
        // Set player position
        document.querySelector('#player').setAttribute('position', position);
      }
    }
  }
  console.info('Walls added.');
});
With this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.
It’s Not All Fun And Games
A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including story telling, data visualisation and even meditation.
There have been a number of cases where it has been shown virtual reality can help as a tool for therapies:
Oxford study finds virtual reality can help treat severe paranoia
Virtual Reality Therapy for Phobias at the Duke Faculty Practice
Bravemind: Virtual Reality Exposure Therapy at the University of Southern California
These are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool.
Wrapping Up
Ten years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, that WAP 2.0 included the XHTML Mobile Profile meaning it would be familiar with web folk. “The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it.”
We can look at that and laugh a little, we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the “desktop web” Cameron was referring to.
So while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble, we don’t need to learn any new languages. If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it’s fun… have a go anyway.",2016,Shane Hudson,shanehudson,2016-12-11T00:00:00+00:00,https://24ways.org/2016/first-steps-in-vr/,code
300,Taking Device Orientation for a Spin,"When The Police sang “Don’t Stand So Close To Me” they weren’t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we’re there, we’re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot.
But if you can’t beat them, join them.
A brave new world
A couple of years back all sorts of specs for new HTML5 APIs sprang up much to our collective glee. Emboldened, we ran a few tests and found they basically didn’t work in anything and went off disheartened into the corner for a bit of a sob.
Turns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable—we’re still talking about browsers, after all.
Now, of course they’re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from GitHub that will fall out of fashion while we’re still trying to locate the documentation, right? Well, no!
So what if we actually wanted to use one of these APIs, say to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed), how could we do something like that? Let’s find out.
The Device Orientation API
The phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn’t work, but also gives us three values that represent orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat.
The main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too.
The API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away:
window.addEventListener('deviceorientation', function(e) {
  console.log(e.alpha);
  console.log(e.beta);
  console.log(e.gamma);
});
If you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended.
One important note
Like many of these newfangled APIs, Device Orientation is only available over HTTPS. We’re not allowed to have too much fun without protection, so make sure that you’re working on a secure line. I’ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel.
ngrok http -host-header=rewrite mylocaldevsite.dev:80
ngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option.
Right, where were we?
Make something to look at
It’s all well and good having a bunch of numbers, but they’re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we’re not trying to be too clever here). Yeah, let’s just build one of those.
Our basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, and CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the ‘window’ so that the part we want to show is visible.
Here it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.)
The details of the slider aren’t important (we’re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or ‘alt-nazi’ or whatever).
var position_image = function(percent) {
var pos = (img_W / 100)*percent;
img.style.transform = 'translate(-'+pos+'px)';
};
All this does is figure out what that percentage means in terms of the image width, and set the transform: translate(…); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.)
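The function assumes img and img_W already exist. A minimal sketch of that setup might look like this (the selectors are hypothetical; subtracting the visible window’s width stops 100% from panning the image entirely out of view):
var container = document.querySelector('.window'); // the overflow: hidden wrapper
var img = container.querySelector('img'); // the wide panoramic image
var img_W = img.offsetWidth - container.offsetWidth; // the distance available to pan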
Ok. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we’re done. (We’re so not done.)
The maths bit
If we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you’ll note that the main value that is changing as we swing back and forth is the ‘alpha’ value. That’s the one we want to track.
As our goal here is hey, these APIs are interesting and fun and not let’s build the world’s best panoramic image viewer, we’ll start by making a few assumptions and simplifications:
When the image loads, we’ll centre the image and take the current nose-forward orientation reading as the middle.
Moving left, we’ll track to the left of the image (lower percentage).
Moving right, we’ll track to the right (higher percentage).
If the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they’re on their own.
Nose-forward
When the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn’t really matter to us, as we don’t know which direction the user might be facing in anyway — we just need to record that initial state and then use it to compare any new readings.
var initial_position = null;
window.addEventListener('deviceorientation', function(e) {
if (initial_position === null) {
initial_position = Math.floor(e.alpha);
}
var current_position = initial_position - Math.floor(e.alpha);
});
(I’m rounding down the values with Math.floor() to make debugging easier - we’ll take out the rounding later.)
We get our initial position if it’s not yet been set, and then calculate the current position as a difference between the new value and the stored one.
These values are weird
One thing you need to know about these values is that they range from 0 to 360, but you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero, as you’d expect, if you do a forward roll.
What I’m interested in is working out my rotation. If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can’t see it.
var rotation = current_position;
if (current_position > 180) rotation = current_position-360;
Which way up?
Since we’re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That’s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn’t well supported. The best I can come up with is:
var screen_portrait = false;
if (window.innerWidth < window.innerHeight) {
screen_portrait = true;
}
It works. Then we can use screen_portrait to branch our code:
if (screen_portrait) {
if (current_position > 180) rotation = current_position-360;
} else {
if (current_position < -180) rotation = 360+current_position;
}
Here’s the code in action so you can see the values for yourself. If you change screen orientation you’ll need to refresh the page (it’s a demo!).
Limiting rotation
Now, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360° to view a photo. Instead, we need to limit the range of movement to something like 60°-from-nose in either direction and normalise our values to pan the entire image across that 120° range. -60 would be full-left (0%) and 60 would be full-right (100%).
If we set max_rotation = 60, that code ends up looking like this:
if (rotation > max_rotation) rotation = max_rotation;
if (rotation < (0-max_rotation)) rotation = 0-max_rotation;
var percent = Math.floor(((rotation + max_rotation)/(max_rotation*2))*100);
We should now be able to get a rotation from -60° to +60° expressed as a percentage. Try it for yourself.
The big reveal
All that’s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work.
position_image(percent);
You can see the final result and take it for a spin. Literally.
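For reference, here’s a condensed sketch of the whole thing wired together, using the same variable names as the snippets above (the rounding is left in, and position_image is the function we wrote earlier):
var initial_position = null;
var max_rotation = 60;
var screen_portrait = (window.innerWidth < window.innerHeight);
window.addEventListener('deviceorientation', function(e) {
  // record the nose-forward reading from the first event
  if (initial_position === null) {
    initial_position = Math.floor(e.alpha);
  }
  var current_position = initial_position - Math.floor(e.alpha);
  // put the awkward 360-tipping point behind the user
  var rotation = current_position;
  if (screen_portrait) {
    if (current_position > 180) rotation = current_position - 360;
  } else {
    if (current_position < -180) rotation = 360 + current_position;
  }
  // clamp to ±max_rotation and normalise to a 0-100 percentage
  if (rotation > max_rotation) rotation = max_rotation;
  if (rotation < (0 - max_rotation)) rotation = 0 - max_rotation;
  var percent = Math.floor(((rotation + max_rotation) / (max_rotation * 2)) * 100);
  position_image(percent);
});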
So what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it.
What we have made is progress. We’ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn’t have this morning. Something we probably didn’t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we’ve learned that our browsers are just a little bit more capable than we thought.
The web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy things that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock.
The web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let’s create things.",2016,Drew McLellan,drewmclellan,2016-12-24T00:00:00+00:00,https://24ways.org/2016/taking-device-orientation-for-a-spin/,code
303,We Need to Talk About Technical Debt,"In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt.
One thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this:
Not all technical debt is bad code, and not all bad code is technical debt.
Sometimes your bad code is just that: bad code. Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work.
It is an oft-misunderstood phrase, and when mistaken to mean ‘anything legacy, old, hacky, nasty, or bad’, technical debt is swept under the carpet along with all of the other parts of the codebase we’d rather not talk about, and therein lies the problem.
We need to talk about technical debt.
What We Talk About When We Talk About Technical Debt
The thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn’t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on.
An Example
You’re a front-end developer working on a SaaS product, and your sales team is courting a large customer – a customer so large that you can’t really afford to lose them. The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line… the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn’t currently a nice, clean way to incorporate a theme into the codebase.
You and the business make the decision that you will hack a theme into the product in two days. It’s going to be messy, it’s going to be ugly, but you can’t afford to lose a huge customer just because your CSS isn’t quite right, right now. This is technical debt.
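To make that concrete, the hacked theme might be little more than a pile of overrides loaded after your main stylesheet. A contrived sketch, with entirely hypothetical selectors:
/* theme-bigcorp.css: hacked together in two days, loaded after the main CSS */
.masthead { background-color: #1a3c6e; }
.btn--primary { background-color: #1a3c6e; border-color: #122b50; }
.logo { background-image: url(/themes/bigcorp/logo.png); }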
You deliver the theme, the customer signs up, and everyone is happy. Except you (and the business, because you are one and the same) have a decision to make:
Do we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework?
Do we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted?
Option 1 is choosing to pay off your debts; Option 2 is ignoring your repayments.
With Option 1, you’re acknowledging that you did what you could given the constraints, but, free of constraints, you’d have done something different. Now, you are choosing to implement that something different.
With Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that…
your SaaS product now offers theming to one of your customers;
another potential customer might also demand the ability to theme their instance of your product;
you can’t refuse them that request, nor can you quickly fulfil it;
you hack in another theme, thus adding to the balance of your existing debt;
and so on (plus interest) for every subsequent theme you need to implement.
Here you have increased entropy whilst making little to no attempt to address what you already knew to be problems.
Your second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you’re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You’ve made a loss; your strategic debt ultimately became a loss-making exercise.
The important thing to note here is that you didn’t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment.
Good Debt and Bad Debt
Technical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn’t, however, mean that all debt is born equal. Just as in real life, there is good debt and there is bad debt.
Good debt might be…
a mortgage;
a student loan, or;
a business loan.
These are types of debt that will secure you the means of repaying them. These are well considered debts whose very reason for being will allow you to make the money to pay them off—they have real, tangible benefit.
A business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back.
These kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back.
Conversely, bad debt might be…
borrowing $1,000 from a loan shark so you can go to Vegas, or;
taking out a payday loan in order to buy a new television.
Both of these kinds of debt will leave you paying for things that didn’t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually result in a net loss. You gain only in the short term, in exchange for a long-term financial responsibility: in other words, was it worth it?
A good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long term commitment.
The earlier example of theming a site is a good debt, provided we are keeping up our repayments (all debt is bad debt if you don’t). A calculated decision to do something ‘wrong’ in the short term with the promise of better payoffs later on.
Bad Technical Debt
The majority of my work is with front-end development teams—CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply:
!important
All front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why?
It’s not necessarily because we’re bad developers, but because we see a shortcut. !important is usually implemented as a quick way out of a sticky specificity situation. We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we can spend mere seconds typing the word !important and patch over the symptoms.
This is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution will, so it seems to make better business sense.
However, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left.
So many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write.
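As a contrived sketch of how that shortcut usually plays out (the selectors are made up):
/* the existing, overly specific rule we don’t have time to refactor */
#sidebar .widget ul li a { color: green; }
/* the two-second ‘fix’ that patches the symptom rather than the cause */
#sidebar .promo a { color: red !important; }
The honest repayment would be to flatten the specificity of the original selector; the !important merely papers over it, and invites the next developer to reach for another one.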
Bad Code
Now we’ve got a good idea of what constitutes technical debt, let’s take a look at what constitutes bad code. Something I hear time and time again in my client work goes a little like this:
We’ve amassed a lot of technical debt and we’d like to get a strategy in place to begin dealing with it.
Whilst I genuinely admire their willingness to identify and desire to fix problems in their code, sometimes they’re not looking at technical debt at all—sometimes they’re just looking at bad code, plain and simple.
Where technical debt is knowing that there’s a better way, but the quicker way makes more sense right now, bad code is not caring if there’s a better way at all.
Again, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that…
the developers knew they were implementing a sub-par solution, but…
the developers also knew that a better solution was out there, which…
implies that it can be tidied up relatively simply.
Developers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully—and usually—bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn’t happen for a good enough reason, and is therefore much harder to justify.
Technical debt often represents ability in judgement, whereas bad code often represents a gap in skills.
Takeaway
Take time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality.
Understanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team.
Sometimes it’s okay to cut corners if there is a tangible gain to be had in the immediate term.
Technical debt is okay provided it is a sensible debt and you have intentions to pay it off.
Technical debt is not necessarily synonymous with bad code, and bad code isn’t necessarily technical debt. Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future.
Technical debt is not inherently bad—failure to make repayments is. Periodically, it is justifiable—encouraged, even—to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge.
Bad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix.",2016,Harry Roberts,harryroberts,2016-12-05T00:00:00+00:00,https://24ways.org/2016/we-need-to-talk-about-technical-debt/,code
305,CSS Writing Modes,"Since you may not have a lot of time, I’m going to start at the end, with the dessert.
You can use a little-known, yet important and powerful CSS property to make text run vertically. Like this.
Or instead of running text vertically, you can layout a set of icons or interface buttons in this way. Or, of course, with anything on your page.
The CSS I’ve applied makes the browser rethink the orientation of the world, and flow the layout of this element at a 90° angle to “normal”. Check out the live demo, highlight the headline, and see how the cursor is now sideways.
See the Pen Writing Mode Demo — Headline by Jen Simmons (@jensimmons) on CodePen.
The code for accomplishing this is pretty simple.
h1 {
writing-mode: vertical-rl;
}
That’s all it takes to switch the writing mode from the web’s default horizontal top-to-bottom mode to a vertical right-to-left mode. If you apply such code to the html element, the entire page is switched, affecting the scroll direction, too.
In my example above, I’m telling the browser that only the h1 will be in this vertical-rl mode, while the rest of my page stays in the default of horizontal-tb.
So now the dessert course is over. Let me serve up this whole meal, and explain the CSS Writing Modes Specification.
Why learn about writing modes?
There are three reasons I’m teaching writing modes to everyone—including western audiences—and explaining the whole system, instead of quickly showing you a simple trick.
We live in a big, diverse world, and learning about other languages is fascinating. Many of you lay out pages in languages like Chinese, Japanese and Korean. Or you might be inspired to in the future.
Using writing-mode to turn bits sideways is cool. This CSS can be used in all kinds of creative ways, even if you are working only in English.
Most importantly, I’ve found understanding Writing Modes incredibly helpful when understanding Flexbox and CSS Grid. Before I learned Writing Mode, I felt like there was still a big hole in my knowledge, something I just didn’t get about why Grid and Flexbox work the way they do. Once I wrapped my head around Writing Modes, Grid and Flexbox got a lot easier. Suddenly the Alignment properties, align-* and justify-*, made sense.
Whether you know about it or not, the writing mode is the first building block of every layout we create. You can do what we’ve been doing for 25 years – and leave your page set to the default left-to-right direction, horizontal top-to-bottom writing mode. Or you can enter a world of new possibilities where content flows in other directions.
CSS properties
I’m going to focus on the CSS writing-mode property in this article. It has five possible options:
writing-mode: horizontal-tb;
writing-mode: vertical-rl;
writing-mode: vertical-lr;
writing-mode: sideways-rl;
writing-mode: sideways-lr;
The CSS Writing Modes Specification is designed to support a wide range of written languages in all our human and linguistic complexity. Which—spoiler alert—is pretty insanely complex. The global evolution of written languages has been anything but simple.
So I’ve got to start with explaining some basic concepts of web page layout and writing systems. Then I can show you what these CSS properties do.
Inline Direction, Block Direction, and Character Direction
In the world of the web, there’s a concept of ‘block’ and ‘inline’ layout. If you’ve ever written display: block or display: inline, you’ve leaned on these concepts.
In the default writing mode, blocks stack vertically starting at the top of the page and working their way down. Think of how a bunch of block-level elements stack—like a bunch of paragraphs—that’s the block direction.
Inline is how each line of text flows. The default on the web is from left to right, in horizontal lines. Imagine this text that you are reading right now, being typed out one character at a time on a typewriter. That’s the inline direction.
The character direction is which way the characters point. If you type a capital “A” for instance, on which side is the top of the letter? Different languages can point in different directions. Most languages have their characters pointing towards the top of the page, but not all.
Put all three together, and you start to see how they work as a system.
The default settings for the web work like this.
Now that we know what block, inline, and character directions mean, let’s see how they are used in different writing systems from around the world.
The four writing systems of CSS Writing Modes
The CSS Writing Modes Specification handles all the use cases for four major writing systems; Latin, Arabic, Han and Mongolian.
Latin-based systems
One writing system dominates the world more than any other, reportedly covering about 70% of the world’s population.
The text is horizontal, running from left to right, or LTR. The block direction runs from top to bottom.
It’s called the Latin-based system because it includes all languages that use the Latin alphabet, including English, Spanish, German, French, and many others. But there are many non-Latin-alphabet languages that also use this system, including Greek, Cyrillic (Russian, Ukrainian, Bulgarian, Serbian, etc.), Brahmic scripts (Devanagari, Thai, Tibetan), and many more.
You don’t need to do anything in your CSS to trigger this mode. This is the default.
Best practices, however, dictate that you declare in your opening element which language and which direction (LTR or RTL) you are using. This website, for instance, uses <html lang="en-gb" dir="ltr"> to let the browser know this content is published in Great Britain’s version of English, in a left to right direction.
Arabic-based systems
Arabic, Hebrew and a few other languages run the inline direction from right to left. This is commonly known as RTL.
Note that the inline direction still runs horizontally. The block direction runs from top to bottom. And the characters are upright.
It’s not just the flow of text that runs from right to left, but everything about the layout of the website. The upper right-hand corner is the starting position. Important things are on the right. The eyes travel from right to left. So, typically RTL websites use layouts that are just like LTR websites, only flipped.
On websites that support both LTR and RTL, like the United Nations’ site at un.org, the two layouts are mirror images of each other.
For many web developers, our experiences with internationalization have focused solely on supporting Arabic and Hebrew script.
CSS layout hacks for internationalization & RTL
To prepare an LTR project to support RTL, developers have had to create all sorts of hacks. For example, the Drupal community started a convention of marking every margin-left and -right, every padding-left and -right, every float: left and float: right with the comment /* LTR */. Then later developers could search for each instance of that exact comment, and create stylesheets to override each left with right, and vice versa. It’s a tedious and error prone way to work. CSS itself needed a better way to let web developers write their layout code once, and easily switch language directions with a single command.
Our new CSS layout system does exactly that. Flexbox, Grid and Alignment use start and end instead of left and right. This lets us define everything in relationship to the writing system, and switch directions easily. By writing justify-content: flex-start, justify-items: end, and eventually margin-inline-start: 1rem we have code that doesn’t need to be changed.
This is a much better way to work. I know it can be confusing to think through start and end as replacements for left and right. But it’s better for any multilingual project, and it’s better for the web as a whole.
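A minimal sketch of what that looks like in practice (the selector is made up, and as noted above, margin-inline-start was still on its way into browsers at the time of writing):
/* one rule serves both LTR and RTL layouts */
.nav {
  display: flex;
  justify-content: flex-start; /* the start edge: left in LTR, right in RTL */
}
.nav a {
  margin-inline-start: 1rem; /* a left margin in LTR, a right margin in RTL */
}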
Sadly, I’ve seen CSS preprocessor tools that claim to “fix” the new CSS layout system by getting rid of start and end and bringing back left and right. They want you to use their tool, write justify-content: left, and feel self-righteous. It seems some folks think the new way of working is broken and should be discarded. It was created, however, to fulfill real needs. And to reflect a global internet. As Bruce Lawson says, WWW stands for the World Wide Web, not the Wealthy Western Web. Please don’t try to convince the industry that there’s something wrong with no longer being biased towards western culture. Instead, spread the word about why this new system is here.
Spend a bit of time drilling the concept of inline and block into your head, and getting used to start and end. It will be second nature soon enough.
I’ve also seen CSS preprocessors that let us use this new way of thinking today, even as all the parts aren’t fully supported by browsers yet. Some tools let you write text-align: start instead of text-align: left, and let the preprocessor handle things for you. That is terrific, in my opinion. A great use of the power of a preprocessor to help us switch over now.
But let’s get back to RTL.
How to declare your direction
You don’t want to use CSS to tell the browser to switch from an LTR language to RTL. You want to do this in your HTML. That way the browser has the information it needs to display the document even if the CSS doesn’t load.
This is accomplished mainly on the html element. You should also declare your main language. As I mentioned above, the 24 ways website is using <html lang="en-gb" dir="ltr"> to declare the LTR direction and the use of British English. The UN Arabic website uses <html lang="ar" dir="rtl"> to declare the site as an Arabic site, using a RTL layout.
Things get more complicated when you’ve got a page with a mix of languages. But I’m not going to get into all of that, since this article is focused on CSS and layouts, not explaining everything about internationalization.
Let me just leave direction here by noting that much of the heavy work of laying out the characters which make up each word is handled by Unicode. If you are interested in learning more about LTR, RTL and bidirectional text, watch this video: Introduction to Bidirectional Text, a presentation by Elika Etemad.
Meanwhile, let’s get back to CSS.
The writing mode CSS for Latin-based and Arabic-based systems
For both of these systems—Latin-based and Arabic-based, whether LTR or RTL—the same CSS property applies for specifying the writing mode: writing-mode: horizontal-tb. That’s because in both systems, the inline text flow is horizontal, while the block direction is top-to-bottom. This is expressed as horizontal-tb.
horizontal-tb is the default writing mode for the web, so you don’t need to specify it unless you are overriding something else higher up in the cascade. You can just imagine that every site you’ve ever built came with:
html {
writing-mode: horizontal-tb;
}
Now let’s turn our attention to the vertical writing systems.
Han-based systems
This is where things start to get interesting.
Han-based writing systems include CJK languages, Chinese, Japanese, Korean and others. There are two options for laying out a page, and sometimes both are used at the same time.
Much of CJK text is laid out like Latin-based languages, with a horizontal top-to-bottom block direction, and a left-to-right inline direction. This is the more modern way of doing things, started in the 20th century in many places, and further pushed into domination by the computer and later the web.
The CSS to do this bit of the layouts is the same as above:
section {
writing-mode: horizontal-tb;
}
Or, you know, do nothing, and get that result as a default.
Alternatively Han-based languages can be laid out in a vertical writing mode, where the inline direction runs vertically, and the block direction goes from right to left.
See both options in this diagram:
Note that the horizontal text flows from left to right, while the vertical text flows from right to left. Wild, eh?
This Japanese issue of Vogue magazine is using a mix of writing modes. The cover opens on the left spine, opposite of what an English magazine does.
This page mixes English and Japanese, and typesets the Japanese text in both horizontal and vertical modes. Under the title “Richard Stark” in red, you can see a passage that’s horizontal-tb and LTR, while the longer passage of text at the bottom of the page is typeset vertical-rl. The red enlarged cap marks the beginning of that passage. The long headline above the vertical text is typeset LTR, horizontal-tb.
The details of how to set the default of the whole page will depend on your use case. But each element, each headline, each section, each article can be marked to flow the opposite of the default however you’d like.
For example, perhaps you leave the default as horizontal-tb, and specify your vertical elements like this:
div.articletext {
writing-mode: vertical-rl;
}
Or alternatively you could change the default for the page to a vertical orientation, and then set specific elements to horizontal-tb, like this:
html {
writing-mode: vertical-rl;
}
h2, .photocaptions, section {
writing-mode: horizontal-tb;
}
If your page has a sideways scroll, then the writing mode will determine whether the page loads with upper left corner as the starting point, and scroll to the right (horizontal-tb as we are used to), or if the page loads with the upper right corner as the starting point, scrolling to the left to display overflow. Here’s an example of that change in scrolling direction, in a CSS Writing Mode demo by Chen Hui Jing. Check out her demo — you can switch from horizontal to vertical writing modes with a checkbox and see the difference.
Mongolian-based systems
Now, hopefully so far all of this kind of makes sense. It might be a bit more complicated than expected, but it’s not so hard. Well, enter the Mongolian-based systems.
Mongolian is also a vertical script language. Text runs vertically down the page. Just like Han-based systems. There are two major differences. First, the block direction runs the other way. In Mongolian, block-level elements stack from left to right.
Here’s a drawing of how Wikipedia would look in Mongolian if it were laid out correctly.
Perhaps the Mongolian version of Wikipedia will be redone with this layout.
Now you might think, that doesn’t look so weird. Tilt your head to the left, and it’s very familiar. The block direction starts on the left side of the screen and goes to the right. The inline direction starts on the top of the page and moves to the bottom (similar to RTL text, just turned 90° counter-clockwise). But here comes the other huge difference. The character direction is “upside down”. The top of the Mongolian characters are not pointing to the left, towards the start edge of the block direction. They point to the right. Like this:
Now you might be tempted to ignore all this. Perhaps you don’t expect to be typesetting Mongolian content anytime soon. But here’s why this is important for everyone — the way Mongolian works defines the results of writing-mode: vertical-lr. And it means we cannot use vertical-lr for typesetting content in other languages in the way we might otherwise expect.
If we took what we know about vertical-rl and guessed how vertical-lr works, we might imagine this:
But that’s wrong. Here’s how they actually compare:
See the unexpected situation? In both writing-mode: vertical-rl and writing-mode: vertical-lr, latin text is rotated clockwise. Neither writing mode lets us rotate text counter-clockwise.
If you are typesetting Mongolian content, apply this CSS in the same way you would apply writing-mode to Han-based writing systems. To the whole page on the html element, or to specific parts of the page like this:
section {
writing-mode: vertical-lr;
}
Now, if you are using writing-mode for a graphic design effect on a language that is otherwise typeset horizontally, I don’t think writing-mode: vertical-lr is useful. If the text wraps onto two lines, it stacks in a very unexpected way. So I’ve sort of obliterated it from my toolkit. I find myself using writing-mode: vertical-rl a lot. And never using -lr. Hm.
Writing modes for graphic design
So how do we use writing-mode to turn English headlines sideways? We could rely on transform: rotate(), but a transform only rotates the rendering of the text; the element still occupies its original, horizontal space in the layout. Writing modes rotate the text and reflow the space it takes up.
Here are two examples, one for each direction. (By the way, each of these demos uses CSS Grid for its overall layout, so be sure to test them in a browser that supports CSS Grid, like Firefox Nightly.)
In this demo 4A, the text is rotated clockwise using this code:
h1 {
writing-mode: vertical-rl;
}
In this demo 4B, the text is rotated counter-clockwise using this code:
h1 {
writing-mode: vertical-rl;
transform: rotate(180deg);
text-align: right;
}
I use vertical-rl to rotate the text so that it takes up the proper amount of space in the overall flow of the layout. Then I rotate it 180° to spin it around to the other direction. And then I use text-align: right to get it to rise up to the top of its container. This feels like a hack, but it’s a hack that works.
Now what I would like to do instead is use another CSS value that was designed for this use case — one of the two other options for writing mode.
If I could, I would lay out example 4A with:
h1 {
writing-mode: sideways-rl;
}
And layout example 4B with:
h1 {
writing-mode: sideways-lr;
}
The problem is that these two values are only supported in Firefox. None of the other browsers recognize sideways-*. Which means we can’t really use it yet.
In general, the writing-mode property is very well supported across browsers. So I’ll use writing-mode: vertical-rl for now, with the transform: rotate(180deg); hack to fake the other direction.
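If you want to be ready for the day the sideways-* values are widely supported, one approach (my own sketch, not from the demos) is to layer them on with a feature query:
h1 {
  writing-mode: vertical-rl;
  transform: rotate(180deg);
  text-align: right;
}
@supports (writing-mode: sideways-lr) {
  h1 {
    writing-mode: sideways-lr;
    transform: none; /* the rotation hack is no longer needed */
    text-align: start; /* undo the alignment nudge */
  }
}
Browsers that understand sideways-lr get the real thing; everyone else falls back to the vertical-rl hack.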
There’s much more to what we can do with the CSS designed to support multiple languages, but I’m going to stop with this intermediate introduction.
If you do want a bit more of a taste, look at this example that adds text-orientation: upright; to the mix — turning the individual letters of the latin font to be upright instead of sideways.
It’s this demo 4C, with this CSS applied:
h1 {
writing-mode: vertical-rl;
text-orientation: upright;
text-transform: uppercase;
letter-spacing: -25px;
}
You can check out all my Writing Modes demos at labs.jensimmons.com/#writing-modes.
I’ll leave you with this last demo. One that applies a vertical writing mode to the sub headlines of a long article. I like how small details like this can really bring a fresh feeling to the content.
See the Pen Writing Mode Demo — Article Subheadlines by Jen Simmons (@jensimmons) on CodePen.",2016,Jen Simmons,jensimmons,2016-12-23T00:00:00+00:00,https://24ways.org/2016/css-writing-modes/,code
306,What next for CSS Grid Layout?,"In 2012 I wrote an article for 24 ways detailing a new CSS Specification that had caught my eye, at the time with an implementation only in Internet Explorer. What I didn’t realise at the time was that CSS Grid Layout was to become a theme on which I would base the next four years of research, experimentation, writing and speaking.
As I write this article in December 2016, we are looking forward to CSS Grid Layout being shipped in Chrome and Firefox. What will ship early next year in those browsers is expanded and improved from the early implementation I explored in 2012. Over the last four years the spec has been developed as part of the CSS Working Group process, and has had input from browser engineers, specification writers and web developers. Use cases have been discussed, and features added.
The CSS Grid Layout specification is now a Candidate Recommendation. This status means the spec is, to all intents and purposes, finished. The discussions now happening are on fine implementation details, and not new feature ideas. It makes sense to draw a line under a specification in order that browser vendors can ship complete, interoperable implementations. That approach is good for all of us: it makes development far easier if we know that a browser supports all of the features of a specification, rather than working out which bits are supported. However it doesn’t mean that work stops here, or that new use cases and features can’t be proposed for future levels of Grid Layout. Therefore, in this article I’m going to take a look at some of the things I think grid layout could do in the future. I would love for these thoughts to prompt you to think about how Grid - or any CSS specification - could better suit the use cases you have.
Subgrid - the missing feature of Level 1
The implementation of CSS Grid Layout in Chrome, Firefox and Webkit is comparable and very feature complete. There is however one standout feature that has not been implemented in any browser as yet - subgrid. Once you set the value of the display property to grid, any direct children of that element become grid items. This is similar to the way that flexbox behaves, set display: flex and all direct children become flex items. The behaviour does not apply to children of those items. You can nest grids, just as you can nest flex containers, but the child grids have no relationship to the parent.
Nesting Grids by Rachel Andrew (@rachelandrew) on CodePen.
The subgrid behaviour would enable the grid defined on the parent to be used by the children. I feel this would be most useful when working with a multiple column flexible grid - for example a typical 12 column grid. I could define a grid on a wrapper, then position UI elements on that grid - from the major structural elements of my page down through the child elements to a form where I wanted the field to line up with items above.
The specification contained an initial description of subgrid, with a value of subgrid for grid-template-columns and grid-template-rows, you can read about this in the August 2015 Working Draft. This version of the specification would have meant you could declare a subgrid in one dimension only, and create a different set of tracks in the other.
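Based on that draft, a sketch of the one-dimensional syntax might have looked like this (remember, no browser ever implemented it):
.wrapper {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
}
.wrapper form {
  grid-column: 1 / -1;
  display: grid;
  grid-template-columns: subgrid; /* share the parent’s twelve column tracks */
  grid-template-rows: auto auto; /* while defining our own rows */
}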
In an attempt to get some implementation of subgrid, a revised specification was proposed earlier this year. This gives a single subgrid value of the display property. As we can no longer specify a subgrid on rows or columns independently, this limits us to a subgrid that works in both dimensions. At this point neither version has been implemented by anyone, and subgrids are marked as “at risk” in the Level 1 Candidate Recommendation. With regard to ‘at-risk’ this is explained as follows:
“‘At-risk’ is a W3C Process term-of-art, and does not necessarily imply that the feature is in danger of being dropped or delayed. It means that the WG believes the feature may have difficulty being interoperably implemented in a timely manner, and marking it as such allows the WG to drop the feature if necessary when transitioning to the Proposed Rec stage, without having to publish a new Candidate Rec without the feature first.”
If we lose subgrid from Level 1, as it looks likely that we will, this does give us a chance to further discuss and iterate on that feature. My current thoughts are that I’m not completely happy about subgrids being tied to both dimensions and feel that a return to the earlier version, or something like it, would be preferable.
Further reading about subgrid
My post from 2015 detailing why I feel subgrid is important
My post based on the revised specification
Eric Meyer’s thoughts on subgrid
Write-up of a discussion from Igalia who work on the Blink and Webkit browser implementations
Styling cells, tracks and areas
Having defined a grid with CSS Grid Layout you can place child elements into that grid, however what you can’t do is style the grid tracks or cells. Grid doesn’t even go as far as multiple column layout, which has the column-rule properties.
In order to set a background colour on a grid cell at the moment you would have to add an empty HTML element or insert some generated content as in the below example. I’m using a 1 pixel grid gap to fake lines between grid cells, plus empty div elements and some generated content to colour those cells.
Faked backgrounds and borders by Rachel Andrew (@rachelandrew) on CodePen.
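The gist of that workaround, as a minimal sketch:
.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 1px;
  background-color: #999; /* shows through the 1px gaps as fake grid lines */
}
.grid > div {
  background-color: #fff; /* empty filler divs give each cell a paintable background */
}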
I think it would be a nice addition to Grid Layout to be able to directly add backgrounds and borders to cells, tracks and areas. There is an Issue raised in the CSS WG Drafts repository for Decorative Grid Cell pseudo-elements, if you want to add thoughts to that.
More control over auto placement
If you haven’t explicitly placed the direct children of your grid element they will be laid out according to the grid auto placement rules. You can see in this example how we have created a grid and the items are placing themselves into cells on that grid.
Items auto-place on a defined grid by Rachel Andrew (@rachelandrew) on CodePen.
The auto-placement algorithm is very cool. We can position some items, leaving others to auto-place; we can set items to span more than one track; we can use the grid-auto-flow property with a value of dense to backfill gaps in our grid.
Websafe colors meet CSS Grid (auto-placement demo) by Rachel Andrew (@rachelandrew) on CodePen.
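As a quick sketch of that last trick:
.gallery {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-auto-flow: dense; /* backfill earlier gaps with later items */
}
.gallery .feature {
  grid-column: span 2; /* the spanning items are what create the gaps */
}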
I think however this could be taken further. In this issue posted to my CSS Grid AMA on GitHub, the question is raised as to whether it would be possible to ask grid to place items on the next available line of a certain name. This would allow you to skip tracks in the grid when using auto-placement, an issue that has also been raised by Emil Björklund in this post to the www-style list prior to spec discussion moving to Github. I think there are probably similar issues, if you can think of one add a comment here.
Creating non-rectangular grid areas
A grid area is a collection of grid cells, defined by setting the start and end lines for columns and rows or by creating the area in the value of the grid-template-areas property as shown below. Those areas however must be rectangular - you can’t create an L-shaped or otherwise non-regular shape.
Grid Areas by Rachel Andrew (@rachelandrew) on CodePen.
Perhaps in the future we could define an L-shape or other non-rectangular area into which content could flow, as in the below currently invalid code where a quote is embedded into an L-shaped content area.
.wrapper {
display: grid;
grid-template-areas:
""sidebar header header""
""sidebar content quote""
""sidebar content content"";
}
Flowing content through grid cells or areas
Some use cases I have seen perhaps are not best solved by grid layout at all, but would involve grid working alongside other CSS specifications. As I detail in this post, there are a class of problems that I believe could be solved with the CSS Regions specification, or a revised version of that spec.
Being able to create a grid layout, then flow content through the areas could be very useful. Jen Simmons presented to the CSS Working Group at the Lisbon meeting a suggestion as to how this might work.
In a post from earlier this year I looked at a collection of ideas from specifications that include Grid, Regions and Exclusions. These working notes from my own explorations might prompt ideas of your own.
Solving the keyboard/layout disconnect
One issue that grid, and flexbox to a lesser extent, raises is that it is very easy to end up with a layout that is disconnected from the underlying markup. This raises problems for people navigating using the keyboard as when tabbing around the document you find yourself jumping to unexpected places. The problem is explained by Léonie Watson with reference to flexbox in Flexbox and the keyboard navigation disconnect.
The grid layout specification currently warns against creating such a disconnect, however I think it will take careful work by web developers in order to prevent this. It’s also not always as straightforward as it seems. In some cases you want the logical order to follow the source, and others it would make more sense to follow the visual. People are thinking about this issue, as you can read in this mailing list discussion.
Bringing your ideas to the future of Grid Layout
When I’m not getting excited about new CSS features, my day job involves working on a software product - the CMS that is serving this very website, Perch. When we launched Perch there were many use cases that we had never thought of, despite having a good idea of what might be needed in a CMS and thinking through lots of use cases. The additional use cases brought to our attention by our customers and potential customers informed the development of the product from launch. The same will be true for Grid Layout.
As a “product” grid has been well thought through by many people. Yet however hard we try there will be use cases we just didn’t think of. You may well have one in mind right now. That’s ok, because as with any CSS specification, once Level One of grid is complete, work can begin on Level Two. The feature set of Level Two will be informed by the use cases that emerge as people get to grips with what we have now.
This is where you get to contribute to the future of layout on the web. When you hit up against the things you cannot do, don’t just mutter about how the CSS Working Group don’t listen to regular developers and code around the problem. Instead, take a few minutes and write up your use case. Post it to your blog, to Medium, create a CodePen and go to the CSS Working Group GitHub specs repository and post an issue there. Write some pseudo-code, draw a picture, just make sure that the use case is described in enough detail that someone can see what problem you want grid to solve. It may be that - as with any software development - your use case can’t be solved in exactly the way you suggest. However once we have a use case, collected with other use cases, methods of addressing that class of problems can be investigated.
I opened this article by explaining I’d written about grid layout four years ago, and how we’re only now at a point where we will have Grid Layout available in the majority of browsers. Specification development, and implementation into browsers takes time. This is actually a good thing, as it’s impossible to take back CSS once it is out there and being used by production websites. We want CSS in the wild to be well thought through and that takes time. So don’t feel that because you don’t see your use case added to a spec immediately it has been ignored. Do your future self a favour and write down your frustrations or thoughts, and we can all make sure that the web platform serves the use cases we’re dealing with now and in the future.",2016,Rachel Andrew,rachelandrew,2016-12-12T00:00:00+00:00,https://24ways.org/2016/what-next-for-css-grid-layout/,code
307,Get the Balance Right: Responsive Display Text,"Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers’ attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout.
When setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader’s preferences. You can take the same approach with display text by choosing larger sizes from the same scale.
However, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window.
Introducing viewport units
With CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference pixels1:
vw - viewport width,
vh - viewport height,
vmin - viewport height or width, whichever is smaller
vmax - viewport height or width, whichever is larger
In one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width:
h1 {
font-size: 13vw;
}
So for a selection of widths, the rendered font size would be:
Viewport width (px)    Rendered font size at 13vw (px)
320                    42
768                    100
1024                   133
1280                   166
1920                   250
A problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation.
Landscape text is much bigger than portrait text when using vw units.
The proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the consistency and considered design you would want when designing to make an impression.
However if the text was the same size in both orientations, the visual effect would be much more consistent. This is where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering.
h1 {
font-size: 13vmin;
}
Landscape text is consistent with portrait text when using vmin units.
Comparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude:
Viewport (px)    13vw (px)    13vmin (px)
320 × 480        42           42
414 × 736        54           54
768 × 1024       100          100
1024 × 768       133          100
1280 × 720       166          94
1366 × 768       178          100
1440 × 900       187          117
1680 × 1050      218          137
1920 × 1080      250          140
2560 × 1440      333          187
Hybrid font sizing
Using viewport units to set text in direct proportion to screen dimensions works well when sizing display text. It can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading:
h1 {
font-size: 13vmin;
}
h2 {
font-size: 5vmin;
}
Using vmin alone for supporting text can scale it too quickly
The balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens.
A solution to this is to use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. For example:
h2 {
font-size: calc(0.5rem + 2.5vmin);
}
For a 320 px wide screen, the font size will be 16 px, calculated as follows:
(0.5 × 16) + (320 × 0.025) = 8 + 8 = 16px
For a 768 px wide screen, the font size will be 27 px:
(0.5 × 16) + (768 × 0.025) = 8 + 19 = 27px
This results in a more balanced subheading that doesn’t take emphasis away from the main heading:
To give you an idea of the effect of using a hybrid approach, here’s a side-by-side comparison of hybrid and viewport text sizing:
Hybrid calc() sizing versus vmin-only sizing at various viewport widths:
Viewport width (px)    320    480    768    960    1080    1280    1440
calc() hybrid (px)     16     20     27     32     35      40      44
vmin only (px)         16     24     38     48     54      64      72
Over this festive period, try experimenting with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting.
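If you want a convenient starting point for that experimentation, CSS custom properties make the two parts easy to tweak (a sketch; the 0.5rem and 2.5vmin weights are simply the values used above):
:root {
  --fixed: 0.5rem; /* the part that respects the reader’s font size */
  --fluid: 2.5vmin; /* the part that scales with the viewport */
}
h2 {
  font-size: calc(var(--fixed) + var(--fluid));
}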
A reference pixel is based on the logical resolution of a device which takes into account double density screens such as Retina displays. ↩︎
For even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller. ↩︎",2016,Richard Rutter,richardrutter,2016-12-09T00:00:00+00:00,https://24ways.org/2016/responsive-display-text/,code
308,How to Make a Chrome Extension to Delight (or Troll) Your Friends,"If you’re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius.
But today is different. You’re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack… right?
Nope.
If there’s one thing 2016 has taught me, it’s that we—the self-serious, world-changing tech movers and shakers of the universe—haven’t changed one bit from our younger, more delightable selves.
How do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like “Stinky Dinosaur” or “Tiny Potato”. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there’s still plenty of room in our big, grown-up hearts for fun.
Today, we’re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension.
Here’s a sneak peek at my final result featuring my partner, my cat, and I in cheerfully weird accessories. Your result will look however you want it to.
Along the way, we’ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful.
What you’ll need
Adobe Illustrator (or a similar illustration program to export PNG)
Some images of your friends
A text editor
Note: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you’ll find ways to do it.
Getting started
Download a local copy of the boilerplate for today’s tutorial here, and open it in a text editor. Inside, you’ll find a simple web app that you can run in Chrome.
Open index.html in Chrome. You should see a grey page that says “Noname”.
Open template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template.
Note: We’re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn’t totally cross-browser compatible, but that’s okay.
Step 1: Gather your friends
The first thing to do is choose who your muses are. Since the holidays are upon us, I’d suggest finding inspiration in your family.
Create your artwork
For each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here’s my crew:
As you can see, my use of the word “family” extends to cats. There’s no judgement here.
Notice that some of my photos don’t completely fill the artboard–that’s fine. The images will be clipped into ovals when they’re rendered in the application.
Now, export your images by following these steps:
Turn the Template layer off and export the images as PNGs.
In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your faces.
Export at 72ppi to keep things running fast.
Save your images into the images/ folder in your project.
Add your images to config.js
Open scripts/config.js. This is where you configure your extension.
Add key value pairs to the faces object. The key should be the person’s name, and the value should be the filepath to the image.
faces: {
leslie: 'images/face_leslie.png',
kyle: 'images/face_kyle.png',
beep: 'images/face_beep.png'
}
The application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that’s special about the faces object is that the person’s name will also be displayed when their face is chosen.
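Under the hood, that random choice amounts to very little code. Here’s a sketch of the idea; pickRandom is a hypothetical helper for illustration, not something taken from the boilerplate itself:
// Pick one entry at random from a group of choices.
function pickRandom( group ) {
  var keys = Object.keys( group ); // works for arrays and objects alike
  var key = keys[ Math.floor( Math.random() * keys.length ) ];
  return { key: key, value: group[ key ] };
}
// pickRandom( config.faces ) might return
// { key: 'leslie', value: 'images/face_leslie.png' }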
Now, when you refresh the project in Chrome, you should see one of your friends along with their name, like this:
Congrats, you’re off and running!
Step 2: Add adjectives
Now that you’ve loaded your friends into the application, it’s time to call them names. This step definitely yields the most laughs for the least amount of effort.
Add a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering…
prefixes: [
'Loving',
'Drunk',
'Chatty',
'Merry',
'Creepy',
'Introspective',
'Cheerful',
'Awkward',
'Unrelatable',
'Hungry',
...
]
When you refresh Chrome, you should see one of these words prefixed before your friend’s name. Voila!
Step 3: Choose your color palette
Real talk: I’m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you’ve been blessed with the gift of color aptitude, skip ahead.
How to choose colors
To create a color palette, I start by going to Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like.
Copy these colors into your swatches in Adobe Illustrator. They’ll be the base for any illustrations you create later.
Now you need a set of background colors. Here’s my trick to making these consistent with your illustration palette without completely blending in. Use the “Adjust Palette” tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors.
Add your background colors to config.js
Copy your hex codes into the bgColors array in config.js.
bgColors: [
'#FFDD77',
'#FF8E72',
'#ED5E84',
'#4CE0B3',
'#9893DA',
...
]
Now when you go back to Chrome and refresh the page, you’ll see your new palette!
Step 4: Accessorize
This is the fun part. We’re going to illustrate objects, accessories, lizards—whatever you want—and layer them on top of your friends.
Your objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like “hats” or “glasses”. This will allow combinations of accessories to show at once, without showing two of the same type on the same person.
Create a group of accessories
To get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don’t feel like illustrating, you can use cut-out images instead.
Next, follow the same steps as you did when you exported the faces. Here they are again:
Turn the Template layer off and export the images as PNGs.
In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your hats.
Export at 72ppi to keep things running fast.
Save your images into the images/ folder in your project.
Add your accessories to config.js
In config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array:
customProps: {
hats: [
'images/hat_crown.png',
'images/hat_santa.png',
'images/hat_tophat.png',
'images/hat_antlers.png'
]
}
Refresh Chrome and behold, accessories!
Create as many more accessories as you want
Repeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final illustrator file looks like this:
The last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses:
customProps: {
hair: [
'images/hair_bowl.png',
'images/hair_bob.png'
],
hats: [
'images/hat_crown.png',
'images/hat_santa.png',
'images/hat_tophat.png',
'images/hat_antlers.png'
],
glasses: [
'images/glasses_aviators.png',
'images/glasses_monacle.png'
]
}
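To see why the order matters, here’s a rough sketch of how the stacking might be done: one randomly chosen image per group, appended in the order the groups are listed (using the hypothetical pickRandom helper from earlier):
// Later groups land later in the DOM, so they render on top.
Object.keys( config.customProps ).forEach(function( groupName ) {
  var img = document.createElement( 'img' );
  img.src = pickRandom( config.customProps[ groupName ] ).value;
  document.body.appendChild( img ); // hair, then hats, then glasses
});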
And, there you have it! Randomly generated friends with random accessories.
Feel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress…
Step 5: Publish it
It’s time to put this in your new tabs! You have two options:
Publish it as a Chrome extension in the Chrome Web Store.
Host it as a website and point to it with the New Tab Redirect extension.
Today, we’re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I’d suggest sticking to Option #2.
How to make a simple Chrome extension to replace the new tab page
All you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement:
{
  "manifest_version": 2,
  "name": "Your extension name",
  "version": "1.0",
  "chrome_url_overrides": {
    "newtab": "index.html"
  }
}
To test your extension, you’ll need to run it in Developer Mode. Here’s how to do that:
Go to the Extensions page in Chrome by navigating to chrome://extensions/.
Tick the checkbox in the upper-right corner labelled “Developer Mode”.
Click “Load unpacked extension…” and select this project.
If everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page.
Voila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you.
Share the love
Now that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they’ve become a part of everyday life–just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats.
At the end of the day, let’s make something that makes us happy. Cheers!",2016,Leslie Zacharkow,lesliezacharkow,2016-12-08T00:00:00+00:00,https://24ways.org/2016/how-to-make-a-chrome-extension/,code
309,HTTP/2 Server Push and Service Workers: The Perfect Partnership,"Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push.
During this year’s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest: the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination.
If you’ve never heard of HTTP/2 Server Push before, fear not - it’s not as scary as it sounds. HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users.
What is HTTP/2 Server Push?
When a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make multiple HTTP requests for each resource that it needs and begin downloading one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower.
Before we go any further, let’s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this:
As you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs.
This is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and “pushes” them to the browser. This happens when the first HTTP request for the web page takes place, and it eliminates an extra round trip, making your site faster.
Using the same example above, let’s “push” the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like.
Whoa, that looks different - let’s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn’t need to make an extra HTTP request to the server, instead it receives the critical files it needs all at once. Much better!
There are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I’ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js. In this post, I’m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings.
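To give you a flavour of rolling your own, here’s a minimal sketch using Node’s built-in http2 module. The certificate paths and file names are placeholders, and real code would need error handling and proper content types for every asset:
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // placeholder paths
  cert: fs.readFileSync('server.crt')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet before the browser asks for it.
    stream.pushStream({ ':path': '/styles.css' }, (err, pushStream) => {
      if (err) return;
      pushStream.respondWithFile('styles.css', { 'content-type': 'text/css' });
    });
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);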
HTTP/2 Server Push is awesome, but it isn’t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn’t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache “aware”. This means that the server isn’t aware of the state of your client. If you’ve visited a web page before, the server isn’t aware of this and will push the resource again anyway, regardless of whether or not you need it. HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory, browsers can cancel HTTP/2 Server Push requests if they’ve already got something in cache, but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs.
HTTP/2 Server Push & Service Workers
So where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you’ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the browser and the network, and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand.
Using Service Workers, you can easily cache assets on a user’s device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the users device. If it does, then it can simply return and serve them directly from the device instead of ever hitting the server.
Let’s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load!
Let’s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.
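A minimal sketch of such a page might look like the following; the image and script file names are placeholders, but service-worker.js matches the code coming up:
<!DOCTYPE html>
<html>
<head>
  <title>HTTP2 Push Demo</title>
</head>
<body>
  <h1>HTTP2 Push</h1>
  <img src="images/one.jpg" alt="">
  <img src="images/two.jpg" alt="">
  <script src="js/first.js"></script>
  <script src="js/second.js"></script>
  <script>
    // Register the Service Worker, where supported.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('service-worker.js');
    }
  </script>
</body>
</html>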
In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push.
The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns.
I’ve added the following code to the service-worker.js file.
(global => {
'use strict';
// Load the sw-toolbox library.
importScripts('/js/sw-toolbox/sw-toolbox.js');
// The route for any requests
toolbox.router.get('/push', global.toolbox.fastest);
toolbox.router.get('/images/(.*)', global.toolbox.fastest);
toolbox.router.get('/js/(.*)', global.toolbox.fastest);
// Ensure that our service worker takes control of the page as soon as possible.
global.addEventListener('install', event => event.waitUntil(global.skipWaiting()));
global.addEventListener('activate', event => event.waitUntil(global.clients.claim()));
})(self);
Let’s break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I’ve told the toolbox to listen to these routes too.
The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn’t exist in cache, the code above will add it into cache so that we can retrieve it when it’s needed again.
You may also notice the code global.toolbox.fastest - this is important because it gives you the compromise of fulfilling from the cache immediately, while firing off an additional HTTP request that updates the cache for the next visit.
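If you’re curious what that pattern looks like without the library, a hand-rolled fetch handler in the same spirit (answer from cache when we can, refresh the cache in the background) might be the following; the cache name push-demo is made up:
// A rough sketch of a cache-first, update-in-the-background fetch handler.
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.open('push-demo').then(function(cache) {
      return cache.match(event.request).then(function(cached) {
        var network = fetch(event.request).then(function(response) {
          cache.put(event.request, response.clone()); // refresh for next time
          return response;
        });
        return cached || network; // serve from cache immediately if we have it
      });
    })
  );
});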
But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to “push” everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the users device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing!
Using this technique, the waterfall chart for a repeat visit should look like the image below.
If you look closely at the image above, you’ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache.
Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user’s browser doesn’t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance.
Summary
When used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site’s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it’s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don’t just simply “push” everything; it could actually lead to having slower page load times. If you’d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information.
All of the code in this example is available on my Github repo - if you have any questions, please submit an issue and I’ll get back to you as soon as possible.
If you’d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone’s talk at this years Chrome Developer Summit.
I’d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live!",2016,Dean Hume,deanhume,2016-12-15T00:00:00+00:00,https://24ways.org/2016/http2-server-push-and-service-workers/,code
310,Fairytale of new Promise,"There are only four good Christmas songs.
I know, yeah, JavaScript or whatever. We’ll get to that in a minute, I promise.
First—and I cannot stress this enough— there are four good Christmas songs. You’re free to disagree with me here, of course, but please try to understand that you will be wrong.
They don’t all have the most safe-for-work titles; I can’t list all of them here, but if you choose to let your fingers do the walkin’ to your nearest search engine, I will say that one was released by the band FEAR way back in 1982 and one was on Run the Jewels’ self-titled debut album. The lyrics are a hell of a lot worse than the titles, so maybe wait until you get home from work before you queue them up. Wear headphones, if you’ve got thin walls.
For my money, though, the two I can reference by name are the top of that small heap: Tom Waits’ Christmas Card from a Hooker in Minneapolis, and The Pogues’ Fairytale of New York. The former once held the honor of being the only good Christmas song—about which I was also unequivocally correct, right up until I changed my mind. It’s not the song up for discussion today, but feel free to familiarize yourself just the same—I’ll wait.
Fairytale of New York—the top of the list—starts out by hinting at some pretty standard holiday fare; dreams and cheer and whatnot. Typical seasonal stuff, so long as you ignore that the story seems to be recounted as a drunken flashback in a jail cell. You can probably make a few guesses at the underlying spirit of the song based on that framing: following a lucky break, our bright-eyed protagonists move to New York in search of fame and fortune, only to quickly descend into bad decisions, name-calling, and vaguely festive chaos.
This song speaks to me on a couple of levels, not the least of which is as a retelling of my day-to-day interactions with JavaScript. Each day’s melody might vary a little bit, granted, but the lyrics almost always follow a pretty clear arc toward “PARENTAL ADVISORY: EXPLICIT CONTENT.” You might have heard a similar tune yourself; it goes a little somethin’ like setTimeout(function() { console.log( "this should be happening last" ); }, 1000);. Callbacks are calling callbacks calling callbacks and something is happening somewhere, as the JavaScript interpreter plods through our code start-to-finish, line-by-line, step-by-step. If we need to take actions based on the results of something that could take its sweet time resolving, well, we’d better fiddle with the order of things to make sure those actions don’t happen too soon.
“But I can see a better time,” as the song says, “when all our dreams come true.” So, with that Pogues brand of holiday spirit squarely in mind—by which I mean that your humble narrator is almost certainly drunk, and may be incarcerated at the time of publication—gather ’round for a story of hope, of hardships, of semi-asynchronous JavaScript programming, and ultimately: of Promise unfulfilled.
The Main Thread
JavaScript is single-minded, in a manner of speaking. Anything we tell the JavaScript runtime to do goes into a single-file queue; you’ll see it referred to as the “main thread,” or “UI thread.” That thread can be shared by a number of critical browser processes, like rendering and re-rendering parts of the page, and user interactions ranging from the simple—say, highlighting text—to the more complex—interacting with form elements.
If that sounds a little scary to you, well, that’s because it is. The more complex our scripts, the more we’re cramming into that single-file main thread, to be processed along with—say—some of our CSS animations. Too much JavaScript clogging up the main thread means a lot of user-facing performance jankiness. Getting away from that single thread is a big part of all the excitement around Web Workers, which allow us to offload entire scripts into their own dedicated background threads—though not without limitations of their own. Outside of Web Workers, that everything-thread is the only game in town: scripts executed one thing at a time, functions calling functions calling functions, taking numbers and crowding up the same deli counter as a user’s interactions—which, in this already strained metaphor, would be ham, I guess?
Asynchronous JavaScript
Now, those queued actions may include asynchronous things. For example: AJAX callbacks, setTimeout/setInterval, and addEventListener won’t block the main thread while we’re waiting for a request to come back, a timer to tick away, or an event to trigger. Once those things do kick in, though, the actions they’re meant to perform will get shuffled right back into that single-thread queue.
There are a couple of places you might have written asynchronously-fired JavaScript, even if you’re not super familiar with the overarching concept: XMLHttpRequest—“AJAX,” if ya nasty—or just kicking off a function once a user triggers a click or mouseenter event. Event-driven development is writ a little larger, with the overall flow of the script dictated by events, both internal and external. Writing event-driven JavaScript applications is a step in the right direction for sure—it won’t cure what ails the main thread, but it does work with the medium in a reasonable way. Event-driven development allows us to manage our use of the main thread in a way that makes sense. If any of this rings a bell for you, the motivation for Promises should feel familiar.
For example, a custom init event might kick things off, and fire a create event that applies our classes and restructures our markup which, on completion, fires a bindEvents event to handle all the event listeners for user interaction. There might not sound like much difference between that and one big function that kicks off, manipulates the DOM, and binds our events line-by-line—but in a script of sufficient size and complexity we’re not only provided with a decoupled flow through the script, but obvious touchpoints for future updates and a predictable structure for ongoing maintenance.
This pattern falls apart a little where we were still creating, binding, and listening for events in the same top-to-bottom, one-item-at-a-time way—we had to set a listener on a given object before the event fires, or nothing would happen:
// Create the event:
var event = document.createEvent( "Event" );
// Name the event:
event.initEvent( "doTheStuff", true, true );
// Listen for the custom `doTheStuff` event on `window`:
window.addEventListener( "doTheStuff", initializeEverything );
// Fire the custom event
window.dispatchEvent( event );
This example is a little contrived, and this stuff is a lot more manageable for sure with the addition of a framework, but that’s the basic gist: create and name the event, add a listener for the event, and—after setting our listener—dispatch the event.
Events and callbacks aren’t the only game in town for weaving our way in and out of the main thread, though—at least, not anymore.
Promises
A Promise is, at the risk of sounding sentimental, pure potential—an empty container into which a value eventually results. A Promise can exist in several states: “pending,” while the computation they contain is being performed or “resolved” once that computation is complete. Once resolved, a Promise is “fulfilled” if it gave us back something we expect, or “rejected” if it didn’t.
The Promise constructor accepts a callback with two arguments: resolve and reject. We perform an action—asynchronous or otherwise—within that callback. If everything in there has gone according to plan, we call resolve. If something has gone awry, we call reject—with an error, conventionally. To illustrate, let’s tack something together with a pretty decent chance of doing what we don’t want: a promise meant only to give us the number 1, but has a chance of giving us back a 2. No reasonable person would ever do this, of course, but I wouldn’t necessarily put it past me.
var promisedOne = new Promise( function( resolve, reject ) {
var coinToss = Math.floor( Math.random() * 2 ) + 1;
if( coinToss === 1 ) {
resolve( coinToss );
} else {
reject( new Error( "That ain’t a one." ) );
}
});
There’s nothing too surprising in there, after you boil it all down. It’s a little return-y, with the exception that we’re flagging results as “as expected” or “something went wrong.”
Tapping into that Promise uses another new keyword: then—and as someone who attempts to make sense of JavaScript by breaking it down to plain ol’ human-language, I’m a big fan of this syntax. then is tacked onto our Promise identifier, and does just what it says on the tin: once the Promise is resolved, then do one of two things, both supplied as callbacks: the first in the case of a fulfilled promise, and the second in the case of a rejected one. Those two callbacks will have, as arguments, the results we specified with resolve or reject, respectively. It sounds like a lot in prose, but in code it’s a pretty simple pattern:
promisedOne.then( function( result ) {
console.log( result );
}, function( error ) {
console.error( error );
});
If you’ve spent any time working with AJAX—jQuery-wise, in particular—you’ve seen something like this pattern before: a success callback and an error callback. The state of a promise, once fulfilled or rejected, cannot be changed—any reference we make to promisedOne will have a single, fixed result.
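That fixed result is easy to prove to yourself. Attach then to the same Promise twice, even at very different times, and both callbacks see the same settled value (assuming our coin toss fulfilled with a 1):
promisedOne.then( function( result ) {
  console.log( result ); // 1
});
setTimeout( function() {
  promisedOne.then( function( result ) {
    console.log( result ); // still 1; the coin isn’t tossed again
  });
}, 5000 );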
It may not look like too much the way I’m using it here, but it’s powerful stuff—a pattern for asynchronously resolving anything. I’ve recently used Promises alongside a script that emulates Font Load Events, to apply webfonts asynchronously and avoid a potential performance hit. Font Face Observer allows us to, as the name implies, determine when the files referenced by our @font-face rules have finished loading.
var fontObserver = new FontFaceObserver( "Fancy Font" );
fontObserver.check().then(function() {
document.documentElement.className += " fonts-loaded";
}, function( error ) {
console.error( error );
});
fontObserver.check() gives us back a Promise, allowing us to chain on a then containing our callbacks for success and failure. We use the fulfilled callback to bolt a class onto the page once the font file has been fully transferred. We don’t bother including an argument in the first function, since we don’t care about the result itself so much as we care that the promise resolved without error—we’re not doing anything with the resolved value, just adding a class to the page. We do include the error argument, since we’ll want to know what happened should something go wrong.
Now, this isn’t the tidiest syntax around—at least to my eyes—with those two functions just kinda floating in a then. Luckily there’s a similar alternative syntax; one that I find a bit easier to parse at-a-glance:
fontObserver.check()
.then(function() {
document.documentElement.className += " fonts-loaded";
})
.catch(function( error ) {
console.log( error );
});
The first callback inside then provides us with our success state, while the catch provides us with a single, explicit “something went wrong” callback. The two syntaxes aren’t completely identical in all situations, but for a simple case like this, I find it a little neater.
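The most notable difference: a catch at the end of the chain will also pick up an error thrown inside the success callback, where the two-callback form won’t. A quick sketch of the distinction:
fontObserver.check()
  .then(function() {
    throw new Error( "Thrown in the success handler." );
  })
  .catch(function( error ) {
    console.log( error ); // caught here
  });

fontObserver.check().then(function() {
  throw new Error( "Thrown in the success handler." );
}, function( error ) {
  // never called for the error above; this only handles check() rejecting
});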
The Common Thread
I guess I still owe you an explanation, huh. Not about the JavaScript-whatever; I think I’ve explained that plenty. No, I mean Fairytale of New York, and why it’s perched up there at the top of the four (4) song heap.
Fairytale is a sad song, ostensibly. If you follow the main thread—start to finish, line-by-line, step by step— Fairytale is a sad song. And I can see you out there, visions of Die Hard dancing in your heads: “but is it a Christmas song?”
Well, for my money, nothing says “holidays” quite like unreliable narration.
Shane MacGowan, the song’s author, has placed the first verse about “Christmas Eve in the drunk tank” as happening right after the “lucky one, came in eighteen-to-one”—not at the chronological end of the story. That means the song might not be mostly drunken flashback, but all of it a single, overarching flashback including a Christmas Eve in protective custody. It could be that the man and woman are, together, recounting times long past—good times and bad times—maybe not even in chronological order. Hell, the “NYPD Choir” mentioned in the chorus? There’s no such thing.
We’re not big Christmas folks, my family and I. But just the same, every year, the handful of us get together, and every year—like clockwork—there’s a lull in conversation, there’s a sharp exhale, and Ma says “we all made it.” Not to a house, not to a dinner, but through another year, to another Christmas. At this point, without fail, someone starts telling a story—and one begets another, and so on. Sometimes the stories are happy, sometimes they’re sad, more often than not they’re both. Some are about things we were lucky to walk away from, some are about a time when another one of us didn’t.
Start-to-finish, line-by-line, step-by-step, the main thread through the year doesn’t change, and maybe there isn’t a whole lot we can do to change it. But by carefully weaving our way in and out of that thread—stories all out of sync and resolving one way or the other, with the results determined by questionably reliable narrators—we can change the way we interact with it and, little by little, we can start making sense of it.",2016,Mat Marquis,matmarquis,2016-12-19T00:00:00+00:00,https://24ways.org/2016/fairytale-of-new-promise/,code
313,Centered Tabs with CSS,"Doug Bowman’s Sliding Doors is pretty much the de facto way to build tabbed navigation with CSS, and rightfully so – it is, as they say, rockin’ like Dokken. But since it relies heavily on floats for the positioning of its tabs, we’re constrained to either left- or right-hand navigation. But what if we need a bit more flexibility? What if we need to place our navigation in the center?
Styling the li as a floated block does give us a great deal of control over margin, padding, and other presentational styles. However, we should learn to love the inline box – with it, we can create a flexible, centered alternative to floated navigation lists.
Humble Beginnings
Do an extra shot of ‘nog, because you know what’s coming next. That’s right, a simple unordered list:
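Something like this, with a span nested inside each link ready for the Sliding Doors treatment later on (the labels and URLs here are stand-ins):
<div id="navigation">
  <ul>
    <li><a href="/"><span>Home</span></a></li>
    <li><a href="/about/"><span>About</span></a></li>
    <li class="last"><a href="/contact/"><span>Contact</span></a></li>
  </ul>
</div>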
If we were wedded to using floats to style our list, we could easily fix the width of our ul, and trick it out with some margin: 0 auto; love to center it accordingly. But this wouldn’t net us much flexibility: if we ever changed the number of navigation items, or if the user increased her browser’s font size, our design could easily break.
Instead of worrying about floats, let’s take the most basic approach possible: let’s turn our list items into inline elements, and simply use text-align to center them within the ul:
#navigation ul, #navigation ul li {
list-style: none;
margin: 0;
padding: 0;
}
#navigation ul {
text-align: center;
}
#navigation ul li {
display: inline;
margin-right: .75em;
}
#navigation ul li.last {
margin-right: 0;
}
Our first step is sexy, no? Well, okay, not really – but it gives us a good starting point. We’ve tamed our list by removing its default styles, set the list items to display: inline, and centered the lot. Adding a background color to the links shows us exactly how the different elements are positioned.
Now the fun stuff.
Inline Elements, Padding, and You
So how do we give our links some dimensions? Well, as the CSS specification tells us, the height property isn’t an option for inline elements such as our anchors. However, what if we add some padding to them?
#navigation li a {
padding: 5px 1em;
}
I just love leading questions. Things are looking good, but something’s amiss: as you can see, the padded anchors seem to be escaping their containing list.
Thankfully, it’s easy to get things back in line. Our anchors have 5 pixels of padding on their top and bottom edges, right? Well, by applying the same vertical padding to the list, our list will finally “contain” its child elements once again.
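That amounts to one small addition to the earlier rule, assuming the same 5 pixels of vertical padding as the anchors:
#navigation ul {
  text-align: center;
  padding: 5px 0;
}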
’Tis the Season for Tabbing
Now, we’re finally able to follow the “Sliding Doors” model, and tack on some graphics:
#navigation ul li a {
background: url(""tab-right.gif"") no-repeat 100% 0;
color: #06C;
padding: 5px 0;
text-decoration: none;
}
#navigation ul li a span {
background: url(""tab-left.gif"") no-repeat;
padding: 5px 1em;
}
#navigation ul li a:hover span {
color: #69C;
text-decoration: underline;
}
Finally, our navigation’s looking appropriately sexy. By placing an equal amount of padding on the top and bottom of the ul, our tabs are properly “contained”, and we can subsequently style the links within them.
But what if we want them to bleed over the bottom-most border? Easy: we can simply decrease the bottom padding on the list by one pixel, like so.
A Special Note for Special Browsers
The Mac IE5 users in the audience are likely hopping up and down by now: as they’ve discovered, our centered navigation behaves rather annoyingly in their browser. As Philippe Wittenbergh has reported, Mac IE5 is known to create “phantom links” in a block-level element when text-align is set to any value but the default value of left. Thankfully, Philippe has documented a workaround that gets that [censored] venerable browser to behave. Simply place the following code into your CSS, and the links will be restored to their appropriate width:
/**//*/
#navigation ul li a {
display: inline-block;
white-space: nowrap;
width: 1px;
}
/**/
IE for Windows, however, displays an extra kind of crazy. The padding I’ve placed on my anchors is offsetting the spans that contain the left curve of my tabs; thankfully, these shenanigans are easily straightened out:
/**/
* html #navigation ul li a {
padding: 0;
}
/**/
And with that, we’re finally finished.
All set.
And that’s it. With your centered navigation in hand, you can finally enjoy those holiday toddies and uncomfortable conversations with your skeevy Uncle Eustace.",2005,Ethan Marcotte,ethanmarcotte,2005-12-08T00:00:00+00:00,https://24ways.org/2005/centered-tabs-with-css/,code
314,Easy Ajax with Prototype,"There’s little more impressive on the web today than an appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of a task into a pleasurable activity.
But it’s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it’s just so much damn effort. Well, the good news is that – ta-da – it doesn’t have to be a headache. But man does it still look impressive. Here’s how to amaze your friends.
Introducing prototype.js
Prototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it’s a JavaScript file which you link into your page that then enables you to do cool stuff.
There’s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it’s good to go. What a nice man that Mr Stephenson is – friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp.
First step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree.
Cutting to the chase
Before I go on and set up an example of how to use this, let’s just get to the crux. Here’s how Prototype enables you to make a simple Ajax call and dump the results back to the page:
var url = 'myscript.php';
var pars = 'foo=bar';
var target = 'output-div';
var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});
This snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page.
Knocking up a basic example
So to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call too.
So, to that basic HTML page for the user to interact with. Here’s one I found whilst out carol singing:
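The bones of that page, reconstructed here from the IDs the scripts below expect (the labels and remaining file names are stand-ins), look something like this:
<html>
<head>
  <title>Easy Ajax</title>
  <script src="prototype.js" type="text/javascript"></script>
  <script src="ajax.js" type="text/javascript"></script>
</head>
<body>
  <form action="greeting.php" method="get">
    <label for="greeting-name">Enter your name:</label>
    <input id="greeting-name" name="greeting-name" type="text" />
    <input id="greeting-submit" type="submit" value="Greet me" />
  </form>
  <div id="greeting"></div>
</body>
</html>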
As you can see, I’ve linked in prototype.js, and also a file called ajax.js, which is where we’ll be putting our glue. (Careful where you leave your glue, kids.)
Our basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There’s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You’ll also notice that the form has a submit button – this is so that it can function as a regular form when no JavaScript is available. It’s important not to get carried away and forget the basics of accessibility.
Meanwhile, back at the server
So we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you’d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it.
Here’s a quick PHP script – greeting.php – that Santa brought me early.
Season's Greetings, $the_name!"";
?>
You’ll perhaps want to do something a little more complex within your own projects. Just sayin’.
Gluing it all together
Inside our ajax.js file, we need to hook this all together. We’re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. Here’s how we attach an onload event to the window object and get it to call a function named init():
Event.observe(window, 'load', init, false);
Now we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field.
function init(){
$('greeting-submit').style.display = 'none';
Event.observe('greeting-name', 'keyup', greet, false);
}
As you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this:
function greet(){
var url = 'greeting.php';
var pars = 'greeting-name='+escape($F('greeting-name'));
var target = 'greeting';
var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});
}
The key points to note here are that any user input needs to be escaped before putting into the parameters so that it’s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call.
That’s it
No, seriously. That’s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.",2005,Drew McLellan,drewmclellan,2005-12-01T00:00:00+00:00,https://24ways.org/2005/easy-ajax-with-prototype/,code
315,Edit-in-Place with Ajax,"Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn’t go that far towards implementing something really practical. We dipped our toes in, but haven’t learned to swim yet.
So here is swimming lesson number one. Anyone who’s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page.
Prototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we’ll need here) and a whole bunch of manipulations to the user interface – all through simple library calls. Here’s what we’re building, so let’s do it.
Getting Started
There are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you’ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here’s what Santa dropped down my chimney:
Edit-in-Place with Ajax
Edit-in-place
Dashing through the snow on a one horse open sleigh.
So that’s our page. The editable item is going to be the paragraph called desc. The process goes something like this:
Highlight the area onMouseOver
Clear the highlight onMouseOut
If the user clicks, hide the area and replace with a