245 |
Web Content Accessibility Guidelines 2.1—for People Who Haven’t Read the Update |
Happy United Nations International Day of Persons with Disabilities 2018! The United Nations chose “Empowering persons with disabilities and ensuring inclusiveness and equality” as this year’s theme. We’ve seen great examples of that in 2018; for example, Paul Robert Lloyd has detailed how he improved the accessibility of this very website.
On social media, US Congressmember-Elect Alexandria Ocasio-Cortez started using the Clipomatic app to add live captions to her Instagram live stories, conforming to success criterion 1.2.4, “Captions (Live)” of the Web Content Accessibility Guidelines (figure 1). British Vogue Contributing Editor Sinéad Burke has used the split-screen feature of Instagram live stories to invite an interpreter to provide live Sign Language interpretation, going above and beyond success criterion 1.2.6, “Sign Language (Prerecorded)” of the Web Content Accessibility Guidelines (figure 2).
Figure 1: Screenshot of Alexandria Ocasio-Cortez’s Instagram story with live captions
Figure 2: Screenshot of Sinéad Burke’s Instagram story with Sign Language interpretation
That theme chimes with this year’s publication of the World Wide Web Consortium (W3C)’s Web Content Accessibility Guidelines (WCAG) 2.1. In last year’s “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I mentioned the scale of the project to produce this update during 2018: “the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible”.
The WCAG working group have added 17 success criteria to the 61 that they released way back in 2008—for context, that was 1½ years before Apple released their first iPad! These new criteria make it easier than ever for us web geeks to produce work that is more accessible to people using mobile devices and touchscreens, people with low vision, and people with cognitive and learning disabilities.
Once again, let’s rip off all the legalese and ambiguous terminology like wrapping paper, and get up to date.
Can your users perceive the information on your website?
The first guideline has criteria that help you prevent your users from asking, “What the **** is this thing here supposed to be?” We’ve seven new criteria for this guideline.
1.3.4 Some people can’t easily change the orientation of the device that they use to browse the web, and so you should make sure that your users can use your website in portrait orientation and in landscape orientation. Consider how people slowly twirl presents that they have plucked from under the Christmas tree, to find the appropriate orientation—and expect your users to do likewise with your websites and apps. We’ve had 18½ years since John Allsopp’s revelatory Dao of Web Design enlightened us to “embrace the fact that the web doesn’t have the same constraints” as printed pages, and to “design for this flexibility”. So, even though this guideline doesn’t apply to websites where “a specific display orientation is essential,” such as a piano tutorial, always ask yourself, “What would John Allsopp do?”
1.3.5 You should help the user’s browser to automatically complete–or not complete–form fields, to save the user some time and effort. The surprisingly powerful and flexible autocomplete attribute for input elements should prove most useful here. If you’ve used microformats or microdata to mark up information about a person, the autocomplete attribute’s range of values should seem familiar. I like how the W3C’s “Using HTML 5.2 autocomplete attributes” says that autocompleted values in forms help “those with dexterity disabilities who have trouble typing, those who may need more time, and anyone who wishes to reduce effort to fill out a form” (emphasis mine). Um…🙋‍♂️
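As a rough sketch (the field names here are invented, but the autocomplete values are standard HTML tokens), a checkout form might include:
<label for="full-name">Full name</label>
<input type="text" id="full-name" name="full-name" autocomplete="name">

<label for="email">Email address</label>
<input type="email" id="email" name="email" autocomplete="email">

<label for="postcode">Postcode</label>
<input type="text" id="postcode" name="postcode" autocomplete="postal-code">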
1.3.6 I like this one a lot, because it can help a huge audience to overcome difficulties that might prevent them from ever using the web. Some people have cognitive difficulties that affect their memory, focus, attention, language processing, and/or decision-making. Those users often rely on assistive technologies that present information through proprietary symbols, summaries of content, and keyboard shortcuts. You could use ARIA landmarks to identify the regions of each webpage. You could also keep an eye on the W3C’s ongoing work on Personalisation Semantics.
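For instance, here’s a minimal sketch of landmarks using plain HTML sectioning elements, with ARIA roles made explicit for older assistive technologies (the page structure is only an example):
<header role="banner">…</header>
<nav role="navigation" aria-label="Main">…</nav>
<main role="main">
  <section aria-label="Search results">…</section>
</main>
<footer role="contentinfo">…</footer>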
1.4.10 If you were to find a Nintendo Switch and “Super Mario Odyssey” under your Christmas tree, you would have many hours of enjoyably scrolling horizontally and vertically to play the game. On the other hand, if you had to zoom a webpage to 400% so that you could read the content, you might have many hours of frustratedly scrolling horizontally and vertically to read the content. Learned reader, I assume you understand the purpose and the core techniques of Responsive Web Design. I also assume you’re getting up to speed with the new Grid, Flexbox, and Box Alignment techniques for layout, and overflow-wrap. Using those skills, you should make sure that all content and functionality remain available when the browser is 320px wide, without your user needing to scroll horizontally. (For vertical text, you should make sure that all content and functionality remain available when the browser is 256px high, without your user needing to scroll vertically.) You don’t have to do this for anything that would lose meaning if you restructured it into one narrow column. That includes some images, maps, diagrams, video, games, presentations, and data tables. Remember to check how your media queries affect font size: your user might find that text becomes smaller as they zoom into the webpage. So, test this one on real devices, or—better yet—test it with real users.
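Here’s a small sketch of that approach, with invented class names: one flexible column by default, a second column only when there’s room, and long words allowed to wrap so nothing forces horizontal scrolling at 320px:
.related-articles {
  display: grid;
  grid-template-columns: 1fr;
  grid-gap: 1rem;
}

@media (min-width: 40em) {
  .related-articles {
    grid-template-columns: 1fr 1fr;
  }
}

.related-articles h2 {
  overflow-wrap: break-word;
}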
1.4.11 In “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I recommended bookmarking Lea Verou’s Contrast Ratio calculator for checking that text contrasts enough with its background (for success criteria 1.4.3 and 1.4.6), so that more people can read it more easily. For this update, you should make sure that form elements and their focus states have a 3:1 contrast ratio with the colour around them. This doesn’t apply to controls that use the browser’s default styling. Also, you should make sure that graphics that convey information have a 3:1 contrast ratio with the colour around them.
1.4.12 Some people, due to low vision or dyslexia, might need to modify the typography that you agonised over. Research indicates that you should make sure that all content and functionality would remain available if a user were to set:
line height to at least 1½ × the font size;
space below paragraphs to at least 2 × the font size;
letter spacing to at least 0.12 × the font size;
word spacing to at least 0.16 × the font size.
To test this, check for text overlapping, text hiding behind other elements, or text disappearing.
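One quick, hedged way to check is to paste overrides like these into your browser’s developer tools and watch what breaks (the universal selector is deliberately heavy-handed; this is a test, not production CSS):
* {
  line-height: 1.5 !important;
  letter-spacing: 0.12em !important;
  word-spacing: 0.16em !important;
}

p {
  margin-bottom: 2em !important;
}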
1.4.13 Sometimes when visiting a website, you hover over—or tab on to—something that unleashes a newsletter subscription pop-up, some suggested “related content”, and/or a GDPR-related pop-up. On a well-designed website, you can press the Esc key on your keyboard or click a prominent “Close” button or “X” button to vanquish such intrusions. If the Esc key fails you, or if you either can’t see or can’t click the “Close” button…well, you’ll probably just close that browser tab. This situation can prove even more infuriating for users with low vision or cognitive disabilities. So, if new content appears when your user hovers over or tabs on to some element, you should make sure that:
your user can dismiss that content without needing to move their pointer or tab on to some other element (this doesn’t apply to error warnings, or well-behaved content that doesn’t obscure or replace other content);
the new content remains visible while your user moves their cursor over it;
the new content remains visible until your user stops hovering over that element, dismisses that content, or the new content is no longer valid.
This doesn’t apply to situations such as hovering over an element’s title attribute, where the user’s browser controls the display of the content that appears.
Can users operate the controls and links on your website?
The second guideline has criteria that help you prevent your users from asking, “How the **** does this thing work?” We’ve nine new criteria for this guideline.
2.1.4 Some websites offer keyboard shortcuts for users. For example, the keyboard shortcuts for Gmail allow the user to press the ⇧ key and u to mark a message as unread. Usually, shortcuts on websites include modifier keys, such as Ctrl, along with a letter, number, or punctuation symbol. Unfortunately, users who have dexterity challenges sometimes trigger those shortcuts by accident, and that can make a website impossible to use. Also, speech input technology can sometimes trigger those shortcuts. If your website offers single-character keyboard shortcuts, you must allow your user to turn off or remap those shortcuts. This doesn’t apply to single-character keyboard shortcuts that only work when a control, such as a drop-down list, has focus.
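Here’s a sketch of one way to honour that (the markAsUnread function and the stored “shortcuts” preference are invented for illustration):
// Only act on the single-character shortcut if the user
// hasn't turned shortcuts off in their settings
const shortcutsEnabled = localStorage.getItem('shortcuts') !== 'off';

document.addEventListener('keydown', (event) => {
  if (!shortcutsEnabled) return;
  // Ignore keystrokes typed into form fields
  if (event.target.matches('input, textarea, select, [contenteditable]')) return;
  if (event.key === 'u') {
    markAsUnread();
  }
});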
2.2.6 If your website uses a timeout for some process, you could store the user’s data for at least 20 hours, so that users with cognitive disabilities can take a break or take longer than usual to complete the process without losing their place or losing their data. Alternatively, you could warn the user, at the start of the process, that the website will time out after whatever amount of time you have chosen.
2.3.3 If your website has some non-essential animation (such as parallax scrolling) that starts when the user does some particular action, you could allow the user to turn off that animation so that you avoid harming users with vestibular disorders. The prefers-reduced-motion media query currently has limited browser support, but you can start using it now to avoid showing animations to users who select the “Reduce Motion” setting (or equivalent) in their device’s operating system:
@media (prefers-reduced-motion: reduce) {
.MrFancyPants {
animation: none;
}
}
2.5.1 Some websites let users use multi-touch gestures on touchscreen devices. For example, Google Maps allows users to pinch with two fingers to zoom out and “unpinch” with two fingers to zoom in. Also, some websites allow users to drag a finger to do some action, such as changing the value on an input element with type="range", or swiping sideways to the next photograph in a gallery. Some users with dexterity challenges, and some users who use a head pointer, an eye-gaze system, or speech-controlled mouse emulation, might find multi-touch gestures or dragging impossible. You must make sure that your website supports single-tap alternatives to any multi-touch gestures or dragging actions that it provides. For example, if your website lets someone pinch and unpinch a map to zoom in and out, you must also provide buttons that a user can tap to zoom in and out.
2.5.2 This might be my favourite accessibility criterion ever! Did you ever touch or press a “Send” button but then immediately realise that you really didn’t want to send the message, and so move your finger or cursor away from the “Send” button before lifting your finger?! Imagine how many arguments that functionality has prevented. 😌 You must make sure that touching or pressing does not cause anything to happen before the user raises their finger or cursor, or make sure that the user can move their finger or cursor away to prevent the action. In JavaScript, prefer onclick to onmousedown, unless your website has actions that need onmousedown. Also, this doesn’t apply to actions that need to happen as soon as the user clicks or touches. For example, a user playing a “Whac-A-Mole” game or a piano emulator needs the action to happen as soon as they click or touch the screen.
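In code, that usually just means binding the action to the click event, which fires on the up-event, rather than to mousedown or pointerdown (sendButton and sendMessage are placeholder names):
// click fires when the pointer is released over the same element,
// so the user can still slide away to cancel
sendButton.addEventListener('click', sendMessage);

// avoid triggering irreversible actions on the down-event
// sendButton.addEventListener('mousedown', sendMessage);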
2.5.3 Recently, entrepreneur and social media guru Gary Vaynerchuk has emphasised the rise of audio and voice as output and input. He quotes a Google statistic that says one in five search queries use voice input. Once again, users with disabilities have been ahead of the curve here, having used screen readers and/or dictation software for many years. You must make sure that the text that appears on a form control or image matches how your HTML identifies that form control or image. Use proper semantic HTML to achieve this (a short sketch follows this list):
use the label element to pair text with the corresponding input element;
use an alt attribute value that exactly matches any text that appears in an image;
use an aria-labelledby attribute value that exactly matches the text that appears in any complex component.
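Here’s a short sketch of those three patterns (the content and ids are made up):
<label for="q">Search</label>
<input type="search" id="q" name="q">

<img src="buy-the-dvd.png" alt="Buy the DVD">

<span id="volume-label">Volume</span>
<div role="slider" aria-labelledby="volume-label"
  aria-valuemin="0" aria-valuemax="10" aria-valuenow="5" tabindex="0"></div>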
2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn’t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. John Gruber describes the iPhone’s “Shake to Undo” gesture as “dreadful — impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand”. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device.
2.5.5 Homer Simpson’s telephone famously complained, “The fingers you have used to dial are too fat.” I think we’ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. You could make interactive elements at least 44px wide × 44px high. Apple’s “Human Interface Guidelines” agree: “Provide ample touch targets for interactive elements. Try to maintain a minimum tappable area of 44pt x 44pt for all controls.” This doesn’t apply to links within inline text, or to unstyled elements.
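In CSS, a hedged sketch of that minimum might look like this (the selector and padding value are illustrative):
.toolbar button {
  /* keep tap targets at least 44px × 44px */
  min-width: 44px;
  min-height: 44px;
  padding: 10px;
}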
2.5.6 Expect your users to use whichever input devices they want, and to change from one to another whenever they please. For example, a user with a tablet and keyboard might jab icons on the screen while typing on the keyboard, or a user might dictate text while alone and then type on a keyboard when a colleague arrives. You could make sure that your website allows your users to use whichever available input modality they choose. Once again, this doesn’t apply to websites that require a specific modality; this includes typing tutors and music programmes.
Can users understand your content?
The third guideline has criteria that help you prevent your users from asking, “What the **** does this mean?” We’ve no new criteria for this guideline.
Have you made your website robust enough to work on your users’ browsers and assistive technologies?
The fourth and final guideline has criteria that help you prevent your users from asking, “Why the **** doesn’t this work on my device?” We’ve one new criterion for this guideline.
4.1.3 Sometimes you need to let your user know the status of something: “Did it work OK? What was the error? How far through it are we?” However, you should avoid making your user lose their place on the webpage, and so you should let them know the status without opening a new window, focusing on another element, or submitting a form. To do this properly for assistive technology users, choose the appropriate ARIA role for the new content; for example (a small sketch follows this list):
if your user needs to know, “Did it work OK?”, add role="status";
if your user needs to know, “What was the error?”, add role="alert";
if your user needs to know, “How far through it are we?”, add role="log" (for a chat window) or role="progressbar" (for, well, a progress bar).
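Here’s a small sketch of the first case (the id and wording are made up): keep an empty live region in the page, then inject the message into it so that assistive technology announces it in place:
<p role="status" id="status-message"></p>

<script>
// announce the result without opening a new window,
// moving focus, or submitting a form
document.querySelector('#status-message').textContent =
  'Your message has been sent.';
</script>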
Better design for humans
My favourite of Luke Wroblewski’s collection of Design Quotes is, “Design is the art of gradually applying constraints until only one solution remains,” from that most prolific author, “Unknown”. I’ve always viewed the Web Content Accessibility Guidelines as people-based constraints, and liked how they help the design process. With these 17 new web content accessibility criteria, go forth and create solutions that more people than ever before can use.
Spending those book vouchers you got for Christmas
What next? If you’re looking for something to do to keep you busy this Christmas, I thoroughly recommend these four books for increasing your accessibility expertise:
“Pro HTML5 Accessibility” by Joshue O Connor (Head of Accessibility (Interim) at the UK Government Digital Service, Director of InterAccess, and one of the editors of the Web Content Accessibility Guidelines 2.1): Although this book is six years old—a long time in web design—I find it an excellent go-to resource. It begins by explaining how people with disabilities use the web, and then expertly explains modern HTML in that context.
“A Web for Everyone—Designing Accessible User Experiences” by Sarah Horton (the Paciello Group’s UX Strategy Lead) and Whitney Quesenbery (the Center for Civic Design’s co-director): This book covers the Web Content Accessibility Guidelines 2.0, the principles of Universal Design, and design thinking. Its personas for Accessible UX and its profiles of well-known industry figures—including some 24ways authors—keep its content practical and relevant throughout.
“Accessibility For Everyone” by Laura Kalbag (Ind.ie’s co-founder and designer, and 24ways author): This book is just over a year old, and so serves as a great resource for up-to-date coverage of guidelines, laws, and accessibility features of operating systems—as well as content, design, coding, and testing. The audiobook, which Laura narrates, can help you and your colleagues go from having little or no understanding of web accessibility, to becoming familiar with all aspects of web accessibility—in less than four hours.
“Just Ask: Integrating Accessibility Throughout Design” by Shawn Lawton Henry (the World Wide Web Consortium (W3C)’s Web Accessibility Initiative (WAI)’s Outreach Coordinator): Although this book is 11½ years old, the way it presents accessibility as part of the User-Centered Design process is timeless. I found its section on Usability Testing with people with disabilities particularly useful. |
2018 |
Alan Dalton |
alandalton |
2018-12-03T00:00:00+00:00 |
https://24ways.org/2018/wcag-for-people-who-havent-read-the-update/ |
ux |
247 |
Managing Flow and Rhythm with CSS Custom Properties |
An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better: it can also improve their scannability.
Browsers ship with default CSS, and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often strip those defaults out with a CSS reset. Elements such as <div> and <section> also have no default margin or padding associated with them.
I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS and levelling the playing field for component-driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in!
The Flow utility
With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid.
That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy.
The code
.flow {
--flow-space: 1em;
}
.flow > * + * {
margin-top: var(--flow-space);
}
What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them.
The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty of setting this as a custom property is that custom properties participate in the cascade, so we can utilise specificity to change it if we need to. Pretty cool, right? Let’s look at some examples.
A basic example
See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/LXqerj
What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility.
Because our --flow-space custom property is set to 1em, the space between elements is 1× the font size of the element in question. This means that a <h2> in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem (1.8 × the 16px root font size). If we were to globally change the --flow-space value to 1.1em for example, we’d affect everything, because margin values would be calculated as 1.1× the font size.
This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though?
See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/qQgxaY
I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances:
h2 {
--flow-space: 3rem;
}
Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured.
A more advanced example
Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space.
See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen.
https://codepen.io/hankchizljaw/pen/ZmPGyL
In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself:
<article class="media__content flow">
...
</article>
Something to remember, though: custom properties cascade in the same way that other CSS values do. We’ve got a good example of that here: because the flow utility sits on our .features component, which has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element.
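That pair of overrides ends up looking something like this (the class names match the demo, but treat the values here as illustrative):
.features {
  --flow-space: 3rem; /* generous space between the main sections */
}

.features__list {
  --flow-space: 1rem; /* tighter rhythm for the items inside the list */
}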
“But what about old browsers?”, I hear you cry
We’re using CSS Custom Properties that at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size:
.flow {
--flow-space: 1em;
}
.flow > * + * {
margin-top: 1em;
margin-top: var(--flow-space);
}
Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement!
Wrapping up
This tiny little utility can bring great power when you want to consistently space elements vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS.
If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on! |
2018 |
Andy Bell |
andybell |
2018-12-07T00:00:00+00:00 |
https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/ |
code |
246 |
Designing Your Site Like It’s 1998 |
It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways— I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.
My design for Planes, Trains and Automobiles is fixed at 800px wide.
Developing a <frameset> framework
I’ll start by using frames to set up the framework for this new website. Frames are individual pages—one for navigation, the other for my content—pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a <frameset> element.
I add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don’t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0:
<frameset frameborder="0" framespacing="0" rows="50,*">
[…]
</frameset>
Next I add the source of my two frame documents. I don’t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame:
<frameset frameborder="0" framespacing="0" rows="50,*">
<frame noresize scrolling="no" src="nav.html">
<frame src="content.html">
</frameset>
I do want links from my navigation to open in the content frame, so I give each <frame> a name so I can specify where I want links to open:
<frameset frameborder="0" framespacing="0" rows="50,*">
<frame name="navigation" noresize scrolling="no" src="nav.html">
<frame name="content" src="content.html">
</frameset>
The framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets—and as many individual documents—as I need:
<frameset rows="50,*">
<frame name="navigation">
<frameset cols="25%,*">
<frame name="sidebar">
<frame name="content">
</frameset>
</frameset>
Letterbox framesets were a common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.
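A letterbox might have been built with nested framesets something like this (the 800×600 centre, the frame names, and the source files are only an example):
<frameset rows="*,600,*" frameborder="0" framespacing="0">
  <frame name="top" scrolling="no" src="blank.html">
  <frameset cols="*,800,*" frameborder="0" framespacing="0">
    <frame name="left" scrolling="no" src="blank.html">
    <frame name="content" src="content.html">
    <frame name="right" scrolling="no" src="blank.html">
  </frameset>
  <frame name="bottom" scrolling="no" src="blank.html">
</frameset>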
Handling older browsers
Sadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:
<noframes>
<body>
<p>This page uses frames, but your browser doesn’t support them.
Please upgrade your browser.</p>
</body>
</noframes>
Forcing someone back into a frame
Sometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a <frameset>, people could easily miss out on other parts of a design. This short script will prevent that from happening, and because it’s vanilla JavaScript, it doesn’t require a library such as jQuery:
<script type="text/javascript">
if (top == self) {
location = 'frameset.html';
}
</script>
Laying out my page
Before starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:
<body background="img/container.jpg" bgcolor="#fef7fb"
link="#245eab" alink="#245eab" vlink="#3c146e" text="#000000">
I want absolute control over how people experience my design and don’t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this <table> in the centre of someone’s screen:
<table width="800" align="center">
<tr>
<td>[…]</td>
</tr>
</table>
Although they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design. These include tables for a banner and three rows of content:
<table width="800" align="center">
<table>[…]</table>
<table>
<table>
<table>[…]</table>
</table>
</table>
<table>[…]</table>
<table>[…]</table>
</table>
The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them:
<table border="0" cellpadding="0" cellspacing="0" width="587" align="center">
<tr>
<td><img src="logo.gif" border="0" width="587" alt="Logo"></td>
</tr>
</table>
The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:
<tr>
<td><img src="spacer.gif" width="593" height="1"></td>
<td><img src="spacer.gif" width="207" height="1"></td>
</tr>
The height and width of these “shims” or “spacers” are only 1px, but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.
For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:
<tr>
<td background="slice-1.jpg" height="473"> </td>
<td background="slice-2.jpg" height="473">[…]</td>
</tr>
<tr>
<td background="slice-3.jpg" height="388"> </td>
</tr>
I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it:
<table border="0" cellpadding="1" cellspacing="0">
<tr>
<td>
<table bgcolor="#ffffff" border="0" cellpadding="0" cellspacing="0">
[…]
</table>
</td>
</tr>
</table>
Adding details
Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:
<table border="0" cellpadding="0" cellspacing="0">
<tr>
<td background="btn-1.jpg" border="0" height="48" width="30"><img src="spacer.gif" width="30" height="1"></td>
<td background="btn-2.jpg" border="0" height="48">
<center>
<a href="" target="_blank"><b>Buy the DVD</b></a>
</center>
</td>
<td background="btn-3.jpg" border="0" height="48" width="30"><img src="spacer.gif" width="30" height="1"></td>
</tr>
</table>
I use those same elements to add details to headlines and lists too. Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer:
<table border="0" cellpadding="0" cellspacing="0">
<tr>
<td width="10"><img src="li.gif" border="0" width="8" height="8"> </td>
<td><img src="spacer.gif" width="10" height="1"> </td>
<td>Directed by John Hughes</td>
</tr>
</table>
Implementing a typographic hierarchy
So far I’ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser’s default to any font installed on someone’s device:
<font face="Arial">Planes, Trains and Automobiles is a comedy film […]</font>
To adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser’s default. I use a size of 4 for this headline and 2 for the text which follows:
<font face="Arial" size="4"><b>Steve Martin</b></font>
<font face="Arial" size="2">An American actor, comedian, writer, producer, and musician.</font>
When I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every <font> element on every page of my website.
NB: I use as many <br> elements as needed to create space between headlines and paragraphs.
View the final result (and especially the source).
My modern day design for Planes, Trains and Automobiles.
I can imagine many people reading this and thinking “This is terrible advice because we don’t develop websites like this in 2018.” That’s true.
We have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:
font-variant-caps: titling-caps;
font-variant-ligatures: common-ligatures;
font-variant-numeric: oldstyle-nums;
Grid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:
body {
display: grid;
grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;
grid-template-rows: auto;
grid-column-gap: 2vw;
grid-row-gap: 1vh;
}
Flexbox has made it easy to develop flexible components such as navigation links:
nav ul { display: flex; }
nav li { flex: 1; }
Just one line of CSS can create multiple columns of fluid type:
main { column-width: 12em; }
CSS Shapes enable text to flow around irregular shapes including polygons:
[src*="main-img"] {
float: left;
shape-outside: polygon(…);
}
Today, we wouldn’t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:
.btn {
background: linear-gradient(#8B1212, #DD3A3C);
border-radius: 1em;
box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);
}
CSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we’re certain we know how to design and develop products and websites better than we did at the end of 1998.
Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today.
I have no prediction for what the web will be like twenty years from now. However, I want to believe we’ll build on what we’ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won’t be repeated.
Head over to my website if you’d like to read about how I’d implement my design for ‘Planes, Trains and Automobiles’ today. |
2018 |
Andy Clarke |
andyclarke |
2018-12-23T00:00:00+00:00 |
https://24ways.org/2018/designing-your-site-like-its-1998/ |
code |
244 |
It’s Beginning to Look a Lot Like XSSmas |
I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional.
It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it.
With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it’s also risky.
Vulnerabilities and Open Source
Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps.
Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017
In the first 24 ways article this year, Katie referenced a few different types of vulnerability:
Cross-site Request Forgery (also known as CSRF)
SQL Injection
Cross-site Scripting (also known as XSS)
There are many more types of vulnerability, and those that live under the same category share similarities.
For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues.
Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library.
Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity.
Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env).
This is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas.
Santa’s on his sleigh
If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too.
Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies.
Fixing vulnerabilities – the ultimate Christmas gift
If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy.
When you find out about a new vulnerability, you have some options:
Fix the vulnerability via an upgrade
You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version.
Patch the vulnerable code
Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else.
Switch to a different library
If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer being maintained.
Responsibly disclosing vulnerabilities
Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options:
A malicious developer will exploit that vulnerability for their own gain.
A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited.
An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too.
It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code.
So what does responsible disclosure look like? I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There’s a separate process for bugs found in third-party npm packages). Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org.
Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed.
Encourage responsible disclosure
Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability – 73% of maintainers who had a policy had been notified, vs 21% of maintainers who hadn’t published one.
Stats from the State of Open Source Security 2017
Bug bounties
Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, it also reduces the enticement of exploiting a vulnerability over reporting it and getting a quick cash reward. HackerOne is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. WordPress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program.
If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money.
On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question.
A gift worth giving
It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities.
Stats from the State of Open Source Security 2017
Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too. |
2018 |
Anna Debenham |
annadebenham |
2018-12-17T00:00:00+00:00 |
https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/ |
code |
254 |
What I Learned in Six Years at GDS |
When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public.
Lots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned.
1. What is the user need?
The main discipline I learned from my time at GDS was to always ask ‘what is the user need?’ It’s very easy to build something that seems like a good idea, but until you’ve identified what problem you are solving for the user, you can’t be sure that you are building something that is going to help solve an actual problem.
A really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a “where’s my stuff” for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what’s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres.
The project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users’ needs. At the end of this discovery, the team realised that a status tracker wasn’t the way to address the problem. As they wrote in this blog post:
Status tracking tools are often just ‘channel shift’ for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place.
What would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it’s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes.
Making sure you know your user
At GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you’d observe users working with things you’d built, but even if they weren’t, it was an incredibly valuable experience, and something you should seek out if you are able to.
Even if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn’t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged.
User needs is not just about building things
Asking the question “what is the user need?” really helps focus on why you are doing what you are doing. It keeps things on track, and helps the team think about what the actual desired end goal is (and should be).
Thinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What’s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas you need particular help on with the review.
Or you are writing an email to a colleague. What’s the user need? What are you hoping the reader will learn, understand or do as a result of your email?
2. Make things open: it makes things better
The second important thing I learned at GDS was ‘make things open: it makes things better’. This works on many levels: being open about your strategy, blogging about what you are doing and what you’ve learned (including mistakes), and – the part that I got most involved in – coding in the open.
Talking about your work helps clarify it
One thing we did really well at GDS was blogging – a lot – about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you’re doing it; and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions.
It’s also really valuable to blog about what you’ve learned, especially if you’ve made a mistake. It makes sure you’ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning; which helps make things better.
Coding in the open has a lot of benefits
In my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed).
When I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people’s minds at rest about these things is why it’s crucial to have good standards around communication and positive behaviour – even a critical code review should be considerately given).
But I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won’t be looking) makes everyone raise their game slightly. The very fact that you know it’s open makes you make it a bit better.
It helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. And now that I’ve left GDS, it’s so useful to be able to look back at code I worked on to remember how things worked.
Share what you learn
It’s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you’ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well.
3. Do the hard work to make it simple (tech edition)
‘Start with user needs’ and ‘Make things open: it makes things better’ are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: ‘Do the hard work to make it simple’, and specifically, how this manifests itself in the way we build technology.
At GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously. There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make a commit message clearer.
We worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the user how best to review it.
Reviewing others’ pull requests is the highest priority so that no-one is blocked, and teams have screens showing the status of open pull requests (using fourth wall). We even had a ‘pull request seal’, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days.
Making it easier for developers to support the site
Another example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to to be open and inclusive to developers.
The team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explaining things whenever other devs in similar positions came with questions.
The main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it.
This was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they’d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day.
Developers are users too
Doing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work.
These three principles help us make great things
I learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. I learned about the importance of good comms, working late responsibly, and the value of content design.
And the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I’ve talked about here: think about the user need, make things open, and do the hard work to make it simple. |
2018 |
Anna Shipman |
annashipman |
2018-12-08T00:00:00+00:00 |
https://24ways.org/2018/what-i-learned-in-six-years-at-gds/ |
business |
259 |
Designing Your Future |
I’ve had the pleasure of working for a variety of clients – both large and small – over the last 25 years. In addition to my work as a design consultant, I’ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years.
In July 2018 – frustrated with formal education, not least the ever-present hand of ‘austerity’ that has ravaged universities in the UK for almost a decade – I formally reduced my teaching commitment, moving from a full-time role to a half-time role.
Making the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary’s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary’s been drastically reduced. That can be a shock to the system.
In this article, I’ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront ‘the fear’ – the nervousness, the sleepless nights and the ever-present worry about paying the bills – I’ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you.
In short: I’ll bare my soul and share everything I’m currently working on to – once and for all – make a final bid for freedom.
This isn’t easy. I’m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith.
The power of visualisation
As designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves.
In this article I’ll explore three tools that you can use to design your future:
Product DNA
Artefacts From the Future
Tomorrow Clients
Each of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality.
Brian Eno – the noted musician, producer and thinker – states, “Humans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.” Eno helpfully provides a powerful example:
When Martin Luther King said, “I have a dream,” he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it.
The dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real.
When you imagine your future – designing an alternate, imagined reality in your mind – you begin the process of making that future real.
Product DNA
The first tool, which I use regularly – for myself and for client work – is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future.
We all have heroes – individuals or organisations – that we look up to. Ask yourself, “Who are your heroes?” If you had to pick three, who would they be and what could you learn from them? (You probably have more than three, but distilling down to three is an exercise in itself.)
Earlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me:
Alan Moore: the author of ‘Do Design: Why Beauty is Key to Everything’;
Mark Shayler: the founder of Ape, a strategic consultancy; and
Seth Godin: a writer and educator I’ve admired and followed for many years.
Looking at each of these individuals, I ‘borrowed’ a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel.
Moore’s book – ‘Do Design’ – had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore’s mission is an important one and he conveys that with an appropriate weight of expression.
Shayler’s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: “I believe that you can do the things that you do better.” That sense – of helping others to be their best selves – appealed to me.
Finally, the words Godin uses to describe himself – “An Author, Entrepreneur and Most of All, a Teacher” – resonated with me. The way he positions himself, as, “most of all, a teacher,” gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia.
I’ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don’t all know it, but they are all ‘mentors from afar’.
In a moment of serendipity – and largely, I believe, because I’d used this tool to explore his work – I was recently invited by Alan Moore to help him develop a leadership programme built around his book.
The key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it’s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly.
Artefacts From the Future
The second tool, which I also use regularly, is a tool called ‘Artefacts From the Future’. These artefacts – especially when designed as ‘finished’ pieces – are useful for creating provocations to help you see the future more clearly.
‘Artefacts From the Future’ can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for.
Earlier this year I revisited this tool to create a provocation for myself. I’d just finished Alla Kholmatova’s excellent book on ‘Design Systems’, which I would recommend highly. The book wasn’t just filled with valuable insights, it was also beautifully designed.
Once I’d finished reading Kholmatova’s book, I started thinking: “Perhaps it’s time for me to write a new book?” Using the magic of ‘Inspect Element’, I created a fictitious page for a new book I wanted to write: ‘Designing Delightful Experiences’.
I wrote a description for the book, considering how I’d pitch it.
This imagined page was just what I needed to paint a picture in my mind of a possible new book. I contacted the team at Smashing Magazine and pitched the idea to them. I’m happy to say that I’m now working on that book, which is due to be published in 2019.
Without this fictional promotional page from the future, the book would have remained as an idea – loosely defined – rolling around my mind. By spending some time, turning that idea into something ‘real’, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine.
Of course, they could have politely informed me that they weren’t interested, but I’d have lost nothing – truly – in the process.
As designers, creating these imaginary ‘Artefacts From the Future’ is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander.
In my experience – working with clients and, to a lesser extent, students – it’s the ‘letting go’ that’s the hard part. It can be difficult to let down your guard and share a weighty goal, but I’d encourage you to do so. At the end of the day, you have nothing to lose.
The key lesson here is that your ‘Artefacts From the Future’ will focus your mind. They’ll transform your unformed ideas into ‘tangible evidence’ of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality.
Tomorrow Clients
The third tool, which I developed more recently, is a tool called ‘Tomorrow Clients’. This tool is designed to help you identify a list of clients that you aspire to work with.
The goal is to pinpoint who you would like to work with – in an ideal world – and define how you’d position yourself to win them over. Again, this involves ‘letting go’ and allowing your mind to imagine the possibilities, asking, “What if…?”
Before I embarked upon the design of my new website, I put together a ‘soul searching’ document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound.
One of my graduates – Chris Armstrong, the founder of Niice – replied with the following: “Might it be useful to consider five to ten companies you’d love to work for, and consider how you’d pitch yourself to them?”
This was just the provocation I needed. To add a little focus, I reduced the list to three, asking: “Who would my top three clients be?”
By distilling the list down I focused on who I’d like to work for and how I’d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. All are companies I admire and I believed each would be interesting to work for.
This exercise might – on the surface – appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, put that into words.
For each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their already existing skillset.
Focusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe’s creative tools to get the most out of them.
A few weeks ago, I signed a contract with the team working on Adobe XD to create a series of ‘capsule courses’, focused on UX design. The first of these courses – exploring UI design – will be out in 2019.
I believe that Armstrong’s provocation – asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future – made all the difference.
The key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future.
In closing…
I hope these three tools will prove a welcome addition to your toolset. I use them when working with clients, and I also use them when working with myself.
I passionately believe that you can design your future. I also firmly believe that you’re more likely to make that future a reality if you put some thought into defining what it looks like.
As I say to my students and the clients I work with: it’s not enough to want to be a success; the word ‘success’ is too vague to be a destination. A far better approach is to define exactly what success looks like.
The secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality. |
2018 |
Christopher Murphy |
christophermurphy |
2018-12-15T00:00:00+00:00 |
https://24ways.org/2018/designing-your-future/ |
process |
253 |
Clip Paths Know No Bounds |
CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.
The basics of a clip path
Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everything in the element that falls outside the bounds of our shape will be visually removed.
Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions.
See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.
So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.
clip-path: polygon(
30% 0%,
70% 0%,
100% 30%,
100% 70%,
70% 100%,
30% 100%,
0% 70%,
0% 30%
);
A shape with fewer vertices than the eye can see
It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%.
Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.
See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.
By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.
Our earlier octagon can similarly be made with only four points.
See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.
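To sketch how that works (the values here are my own, not necessarily the ones used in the pen): a square rotated 45 degrees and enlarged so that its corners sit well outside the element box gets clipped by that box into the same octagon we built earlier with eight points.

clip-path: polygon(
  50% -20%,
  120% 50%,
  50% 120%,
  -20% 50%
);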
Multiple shapes, one clip path
We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().
See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.
Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next as needed.
It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles.
See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.
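As a minimal sketch of the strips idea (without the custom properties used in the pen), a single polygon() can zig-zag down the element, leaving three visible bands with clipped-away gaps between them:

/* Each group of four points traces one visible strip; the jumps
   between groups happen along the element's left edge. */
.striped {
  clip-path: polygon(
    0% 0%, 100% 0%, 100% 20%, 0% 20%,
    0% 40%, 100% 40%, 100% 60%, 0% 60%,
    0% 80%, 100% 80%, 100% 100%, 0% 100%
  );
}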
Different shapes with fill rules
A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.
See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.
As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path’s lines, the point is considered outside, and if it crosses an odd number the point is inside.
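As a rough sketch (these coordinates are my own, not the pen’s), the classic five-pointed star is drawn with five crossing lines, and the fill rule goes in as the first argument to polygon():

/* With the default nonzero rule the central pentagon stays filled;
   with evenodd it is cut out, leaving a hollow star. */
.star {
  clip-path: polygon(evenodd,
    50% 0%, 79% 90%, 2% 35%, 98% 35%, 21% 90%
  );
}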
Order of operations
It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.
These compositing effects are applied in the order:
CSS Filters (e.g. filter: blur(2px))
Clipping (e.g. what this article is about)
Masking (Clipping’s cousin)
Blend Modes (e.g. mix-blend-mode: multiply)
Opacity
This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element’s box edges.
See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.
If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.
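A minimal sketch of that wrapper approach (the class names are invented):

<div class="blurred-parent">
  <div class="clipped-star">…</div>
</div>

/* The child is clipped first, then the parent blurs the composited
   result, so the star's edges pick up the blur. */
.clipped-star {
  clip-path: polygon(50% 0%, 79% 90%, 2% 35%, 98% 35%, 21% 90%);
}
.blurred-parent {
  filter: blur(4px);
}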
Revealing content with animation
CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as does the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values.
See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.
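As a sketch of the idea (not the pen’s code), both values below use four points and the same default fill rule, so the browser has everything it needs to interpolate:

/* Start fully clipped: a zero-height polygon along the top edge */
.reveal {
  clip-path: polygon(0% 0%, 100% 0%, 100% 0%, 0% 0%);
  transition: clip-path 0.5s ease-in-out;
}

/* Grow to the full element box, wiping the content into view */
.reveal.is-open {
  clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%);
}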
Don’t keep CSS Shapes in a box
Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible which makes this property fairly powerful. |
2018 |
Dan Wilson |
danwilson |
2018-12-20T00:00:00+00:00 |
https://24ways.org/2018/clip-paths-know-no-bounds/ |
code |
257 |
The (Switch)-Case for State Machines in User Interfaces |
You’re tasked with creating a login form. Email, password, submit button, done.
“This will be easy,” you think to yourself.
Login form by Selecto
You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems.
As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.
What if login fails? Where do those errors show up?
Should we show errors differently if the user forgot to enter their email, or password, or both?
Or should the submit button be disabled?
Should we validate the email field?
When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)
When should the errors disappear?
What do we show during the login process? Some loading spinner?
What if loading takes too long, or a server error occurs?
Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.
Modeling Behavior
Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information.
However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.
The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:
start - not submitted yet
loading - submitted and logging in
success - successfully logged in
error - login failed
Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity:
SUBMIT - pressing the submit button
RESOLVE - the server responds, indicating that login is successful
REJECT - the server responds, indicating that login failed
Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state and an event occurs, we can specify what the next state should be:
From the start state, when the SUBMIT event occurs, the app should be in the loading state.
From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.
If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.
From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.
Otherwise, if any other event occurs, don’t do anything and stay in the same state.
That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:
Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.
From Visuals to Code
Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.
With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:
function loginMachine(state, event) {
switch (state) {
case 'start':
if (event === 'SUBMIT') {
return 'loading';
}
break;
case 'loading':
if (event === 'RESOLVE') {
return 'success';
} else if (event === 'REJECT') {
return 'error';
}
break;
case 'success':
// Accept no further events
break;
case 'error':
if (event === 'SUBMIT') {
return 'loading';
}
break;
default:
// Unknown state: this should never occur
return undefined;
}
// No transition matched: stay in the current state
return state;
}
console.log(loginMachine('start', 'SUBMIT'));
// => 'loading'
This is fine (I suppose) but personally, I find it much easier to use objects:
const loginMachine = {
initial: "start",
states: {
start: {
on: { SUBMIT: 'loading' }
},
loading: {
on: {
REJECT: 'error',
RESOLVE: 'success'
}
},
error: {
on: {
SUBMIT: 'loading'
}
},
success: {}
}
};
function transition(state, event) {
const stateConfig = loginMachine.states[state]; // Look up the current state
return (stateConfig.on && stateConfig.on[event]) // Look up the next state for this event
|| state; // If no transition is defined, stay in the current state
}
console.log(transition('start', 'SUBMIT'));
As you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:
A Common Language Between Designers and Developers
Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can:
identify all possible states, and potentially missing states
describe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)
eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition)
add features with full confidence of knowing what other states they might affect
simplify redundant states or complex user flows
create test paths for almost every possible user flow, and easily identify edge cases
collaborate better by understanding the entire application model equally.
Not a New Idea
I’m not the first to suggest that state machines can help bridge the gap between design and development.
Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines
User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.
In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.
More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart - an extended hierarchical state machine model - is born.
State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:
Visualized modeling
Precise diagrams
Automatic code generation
Comprehensive test coverage
Accommodation of late-breaking requirements changes
Moving Forward
It’s time that we improve how we communicate between designers and developers, and how we develop UIs, in order to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:
The World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications
The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling
I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.
I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).
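As a rough sketch of what the login machine might look like in XState – treat the exact API as an assumption on my part, since it can differ between versions:

import { Machine } from 'xstate';

const loginMachine = Machine({
  id: 'login',
  initial: 'start',
  states: {
    start: { on: { SUBMIT: 'loading' } },
    loading: { on: { RESOLVE: 'success', REJECT: 'error' } },
    error: { on: { SUBMIT: 'loading' } },
    success: { type: 'final' }
  }
});

// Pure function: given a state and an event, compute the next state
const nextState = loginMachine.transition('start', 'SUBMIT');
console.log(nextState.value); // => 'loading'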
I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience. |
2018 |
David Khourshid |
davidkhourshid |
2018-12-12T00:00:00+00:00 |
https://24ways.org/2018/state-machines-in-user-interfaces/ |
code |
264 |
Dynamic Social Sharing Images |
Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days!
With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content.
One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures of anywhere from a 35% to a 150% improvement from just having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.
So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.
Adding sharing images
The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.
<meta property="og:image" content="https://example.com/my_image.jpg">
There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.
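Twitter has its own equivalent tags; a typical pairing (treat this as an example rather than a complete recipe) looks like this:

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:image" content="https://example.com/my_image.jpg">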
This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation.
This was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article.
Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.
One of the hand-made sharing images from 2017
Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.
Hatching a new plan
One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.
The more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!
In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS?
Breaking it down, I figured the steps needed would be something like:
Create the CSS to lay out a component that could be turned into an image
Add a new field to articles in the CMS to hold a handpicked quote
Build a new article template in the CMS to output the author name and quote dynamically for any article
… um … screenshot?
I thought I’d get cracking and see if I could figure out the final steps later.
Building the page
The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.
Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser.
With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup.
I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process.
I added a new field to the article template to capture the quote.
With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.
Our automatically generated layout direct from the CMS
It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot?
Enter Puppeteer
Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.
Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.
Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example.
First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line:
npm i puppeteer
Then save a new file as example.js with this code:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
await browser.close();
})();
and then run it using Node:
node example.js
This will output an image file example.png to disk, which contains a screenshot of, in this case https://example.com. The logic of the code is reasonably easy to follow:
Launch a browser
Open up a new page
Goto a URL
Take a screenshot
Close the browser
The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested.
That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?
Our content is up in the corner of the image with lots of empty space.
That’s not great. It’s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst and I was able to find the page.setViewport() method pretty easily.
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');
await page.setViewport({
width: 600,
height: 315
});
await page.screenshot({path: 'example.png'});
await browser.close();
})();
This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.
await page.setViewport({
width: 600,
height: 315,
deviceScaleFactor: 2
});
Perfect! We now have a programmatic way of turning a URL into an image.
Improving the script
Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image.
Our goal is to call the script like this:
node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/
And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.
// Get the URL and the slug segment from it
const url = process.argv[2];
const segments = url.split('/');
// Get the second-to-last segment (the slug)
const slug = segments[segments.length-2];
We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url + 'sharing');
await page.setViewport({
width: 600,
height: 315,
deviceScaleFactor: 2
});
await page.screenshot({path: slug + '.png'});
await browser.close();
})();
Once you’re generating the image with Node, there’s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.
You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little.
const imagemin = require('imagemin');
const imageminPngquant = require('imagemin-pngquant');
await imagemin([slug + '.png'], 'build', {
plugins: [
imageminPngquant({quality: '75-90'})
]
});
You can see the completed example as a gist.
Integrating it with your CMS
So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process.
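For example – and this is only a sketch, with an invented variable name – a Node-based publish hook could shell out to the script directly:

const { execFileSync } = require('child_process');

// articleUrl is whatever your CMS hands you when an article is published
execFileSync('node', ['shoot-sharing-image.js', articleUrl], {
  stdio: 'inherit'
});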
For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further.
We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!
By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images.
I hope you’ll give it a try! |
2018 |
Drew McLellan |
drewmclellan |
2018-12-24T00:00:00+00:00 |
https://24ways.org/2018/dynamic-social-sharing-images/ |
code |
262 |
Be the Villain |
Inclusive Design is the practice of making products and services accessible to, and usable by as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing then I’ll consider it a success.
As a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good.
When designing, you take a user flow and make sure it can’t be broken. Ensuring that if something is removed, it can be restored. Or that something editable can also be updated at a later date—you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario.
Lately, I’ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it’s all too easy to miss the forest for the trees when you’re a product designer focused on cranking out features.
What I’m proposing is that while trying to envision how a user flow could be broken, you also think about how it could be subverted. In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end.
If you pay attention, you’ll start to notice this subversion with increasing frequency:
Domestic abusers using internet-controlled devices to spy on and control their partner.
Zealots tanking a business’ rating on Google because its owners spoke out against unchecked gun violence.
Forcing people to choose between TV or stalking because the messaging center portion of a cable provider’s entertainment package lacks muting or blocking features.
White supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories.
Facebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion.
White supremacists also using a video game chat service as a recruiting tool.
The unchecked harassment of minors on Instagram.
Swatting.
If I were to guess why we haven’t heard more about this problem, I’d say that optimistically, people have settled out of court. Pessimistically, it’s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention.
Subverted design isn’t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests. User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you’re interested in this kind of thing.
Subverted design also isn’t beholden design, or simple lack of attention. This phenomenon isn’t even necessarily premeditated. I think it arises from naïve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game.
This is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting.
To achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all cost within the identified demographic.
The thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted.
Don’t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be.
They’re not going to be fun conversations. It’s not going to be easy convincing others that these aren’t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career.
But consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office’s resident villain than to have to live with something like this:
If the past few years have taught us anything, it’s that the choices we make—or avoid making—have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn’t neutral.
You’re going to have to start thinking the way a monster does—if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start:
Providing a comparable experience becomes forcing a single path.
Considering situation becomes ignoring circumstance.
Being consistent becomes acting capriciously.
Giving control becomes removing autonomy.
Offering choice becomes limiting options.
Prioritizing content becomes obfuscating purpose.
Adding value becomes filling with gibberish.
Combined, these inverted principles start to paint a picture of something we’re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors.
These kinds of services limit you in the ways you can interact with them. They kick you out or lock you in if you don’t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws.
So, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good—perhaps the most important takeaway being admitting you have a problem in the first place.
Another exercise is asking the question, “What is the evil version of this feature?” Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don’t care when, so long as the question is actually raised.
In keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, “Whose benefits? Whose risks?” instead of “What benefits? What risks?” in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven.
Few things in this world are intrinsically altruistic or good—it’s just the nature of the beast. However, that doesn’t mean we have to stand idly by when harm is created. If we can add terms like “anti-pattern” to our professional vocabulary, we can certainly also incorporate phrases like “abuser flow.”
Design finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged.
Subverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact. |
2018 |
Eric Bailey |
ericbailey |
2018-12-06T00:00:00+00:00 |
https://24ways.org/2018/be-the-villain/ |
ux |
241 |
Jank-Free Image Loads |
There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then —
Your browser doesn’t support HTML5 video. Here is a link to the video instead.
— oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place.
By default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author <img> elements that load like this:
Your browser doesn’t support HTML5 video. Here is a link to the video instead.
Jank is a very old problem, and there is a very old solution to it: the width and height attributes on <img>. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.
width
Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.
—The HTML 3.2 Specification, published on January 14 1997
Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:
See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.
width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.
I have good news, bad news, and great news.
The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.
The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.
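The best known of these is probably the padding hack. Percentage padding resolves against the element’s width, so a wrapper can reserve a fixed aspect ratio before the image arrives – roughly like this:

<div class="ratio-4x3">
  <img src="image.jpg" alt="…" />
</div>

/* padding-top of 75% reserves a 4:3 box (3 ÷ 4 = 0.75) */
.ratio-4x3 {
  position: relative;
  padding-top: 75%;
}
.ratio-4x3 > img {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}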
So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.
aspect-ratio in CSS
The first proposed feature? An aspect-ratio property in CSS!
This would allow us to write CSS like this:
img {
width: 100%;
}
.thumb {
aspect-ratio: 1/1;
}
.hero {
aspect-ratio: 16/9;
}
This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.
Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:
<img src="image.jpg" style="aspect-ratio: 5/4" />
And inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I’m cutting off my, (or anyone else’s) ability to change those aspect ratios, for whatever reason, later.
How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it?
I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!
A brief note on intrinsic and extrinsic sizing
What do I mean by “intrinsic” and “extrinsic?”
The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px.
The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.
It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).
CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.
The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML.
intrinsicsize in HTML
The proposed intrinsicsize attribute will let you do this:
<img src="image.jpg" intrinsicsize="800x600" />
That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size.
You may ask (I did!): wait, what if my <img> references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then, is specify an intrinsic aspect ratio:
<img
srcset="300x200.jpg 300w,
600x400.jpg 600w,
900x600.jpg 900w,
1200x800.jpg 1200w"
sizes="75vw"
intrinsicsize="3x2" />
In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.
Can’t wait
I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!
Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can’t wait! |
2018 |
Eric Portis |
ericportis |
2018-12-21T00:00:00+00:00 |
https://24ways.org/2018/jank-free-image-loads/ |
code |
260 |
The Art of Mathematics: A Mandala Maker Tutorial |
In front-end development, there’s often a great deal of focus on tools that aim to make our work more efficient. But what if you’re new to web development? When you’re just starting out, the amount of new material can be overwhelming, particularly if you don’t have a solid background in Computer Science. But the truth is, once you’ve learned a little bit of JavaScript, you can already make some pretty impressive things.
A couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days:
Mandala Maker user interface
The coolest part about it is the fact that it’s a tool: anyone can use it to create something original and brand new.
In this tutorial, we’ll build a smaller version of this app – a symmetrical drawing tool in ES5 JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we’re done, you’re on your own and can tweak it as you please. Be creative!
Preparations: a blank canvas
The first thing you’ll need for this project is a designated drawing space. We’ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like).
Files
Create 3 files: index.html, styles.css, main.js. Don’t forget to include your JS and CSS files in your HTML.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="styles.css">
<script src="main.js"></script>
</head>
<body onload="init()">
<canvas width="600" height="600">
<p>Your browser doesn't support canvas.</p>
</canvas>
</body>
</html>
I’ll ask you to update your HTML file at a later point, but the CSS file we’ll start with will stay the same throughout the project. This is the full CSS we are going to use:
body {
background-color: #ccc;
text-align: center;
}
canvas {
touch-action: none;
background-color: #fff;
}
button {
font-size: 110%;
}
Next steps
We are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts:
Building a simple drawing app with one line and one color
Adding a Clear button and a color picker
Adding more functionality: 2 line drawing (add the first reflection)
Adding more functionality: 8 line drawing (add 6 more reflections!)
Interactive demos
This tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet.
Part 1: A simple drawing app
Let’s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables:
var canvas, context, w, h,
prevX = 0, currX = 0, prevY = 0, currY = 0,
draw = false;
The variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY), over and over again, as we move the pointer across the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true.
1. init
Responsible for setting up the canvas, this listens for pointer events and their coordinates, and sets everything in motion by wiring up the other functions as event handlers, which in turn handle the pointer going down and moving.
function init() {
canvas = document.querySelector("canvas");
context = canvas.getContext("2d");
w = canvas.width;
h = canvas.height;
canvas.onpointermove = handlePointerMove;
canvas.onpointerdown = handlePointerDown;
canvas.onpointerup = stopDrawing;
canvas.onpointerout = stopDrawing;
}
2. drawLine
This is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial.
lineWidth and lineCap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY).
function drawLine() {
var a = prevX,
b = prevY,
c = currX,
d = currY;
context.lineWidth = 4;
context.lineCap = "round";
context.beginPath();
context.moveTo(a, b);
context.lineTo(c, d);
context.stroke();
context.closePath();
}
3. stopDrawing
This is used by init when the pointer is lifted (onpointerup) or moves out of bounds (onpointerout).
function stopDrawing() {
draw = false;
}
4. recordPointerLocation
This tracks the pointer’s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.
function recordPointerLocation(e) {
prevX = currX;
prevY = currY;
currX = e.clientX - canvas.offsetLeft;
currY = e.clientY - canvas.offsetTop;
}
5. handlePointerMove
This is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it.
function handlePointerMove(e) {
if (draw) {
recordPointerLocation(e);
drawLine();
}
}
6. handlePointerDown
This is set by init to run when the pointer goes down (a finger touches the screen or the mouse is clicked). When that happens, it calls recordPointerLocation to get the path and sets draw to true. That’s because we only want movement events from handlePointerMove to cause drawing if the pointer is down.
function handlePointerDown(e) {
recordPointerLocation(e);
draw = true;
}
Finally, we have a working drawing app. But that’s just the beginning!
See the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen.
Part 2: Add a Clear button and a color picker
Now we’ll update our HTML file, adding a menu div that contains an input with both its type and class set to color, and a button with the class clear.
<body onload="init()">
<canvas width="600" height="600">
<p>Your browser doesn't support canvas.</p>
</canvas>
<div class="menu">
<input type="color" class="color" />
<button type="button" class="clear">Clear</button>
</div>
</body>
Color picker
This is our new color picker function. It targets the input element by its class and gets its value.
function getColor() {
return document.querySelector(".color").value;
}
Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We’ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.
function drawLine() {
//...code...
context.strokeStyle = getColor();
context.lineWidth = 4;
context.lineCap = "round";
//...code...
}
Clear button
This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.
function clearCanvas() {
if (confirm("Want to clear?")) {
context.clearRect(0, 0, w, h);
}
}
The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner.
If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.
Let’s add this event handler to init:
function init() {
//...code...
canvas.onpointermove = handlePointerMove;
canvas.onpointerdown = handlePointerDown;
canvas.onpointerup = stopDrawing;
canvas.onpointerout = stopDrawing;
document.querySelector(".clear").onclick = clearCanvas;
}
See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.
Part 3: Draw with 2 lines
It’s time to make a line appear where no pointer has gone before. A ghost line!
For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it’s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn’t matter which one we choose. Let’s go with the x-axis.
Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!).
Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.
A sketch illustrating the mathematics of reflecting a point.
What happens to the x coordinates?
The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them “the x coordinates”. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c.
What happens to the y coordinates?
What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height.
This is the new code for the app’s variables and the two lines: the one that fills the pointer’s path and the one mirroring it across the x-axis.
function drawLine() {
var a = prevX, a_ = a,
b = prevY, b_ = h-b,
c = currX, c_ = c,
d = currY, d_ = h-d;
//... code ...
// Draw line #1, at the pointer's location
context.moveTo(a, b);
context.lineTo(c, d);
// Draw line #2, mirroring the line #1
context.moveTo(a_, b_);
context.lineTo(c_, d_);
//... code ...
}
In case this was too abstract for you, let’s look at some actual numbers to see how this works.
Let’s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.
So b' = 10 - 2 = 8 and d' = 10 - 3 = 7.
We use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That’s it, really: that’s how a single point is mirrored. And a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like points.
If you are still confused, I don’t blame you.
Here is the result. Draw something and see what happens.
See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.
Part 4: Draw with 8 lines
I have made yet another confusing sketch, with points C and D, so you understand what we’re trying to do. Later on we’ll look at points E, F, G and H as well. The circled point is the one we’re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas.
A sketch illustrating points C and D.
This is the part where the math gets a bit mathier, as our drawLine function evolves further. We’ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let’s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).
function drawLine() {
//... code ...
// Reassign values
a_ = w-a; b_ = b;
c_ = w-c; d_ = d;
// Draw the 3rd line
context.moveTo(a_, b_);
context.lineTo(c_, d_);
// Reassign values
a_ = w-a; b_ = h-b;
c_ = w-c; d_ = h-d;
// Draw the 4th line
context.moveTo(a_, b_);
context.lineTo(c_, d_);
//... code ...
What is happening?
You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That’s because we want the symmetry to hold for a rectangular canvas as well, and this way it will.
Also, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It’s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous).
What happens to the x coordinates?
As you recall, our x coordinates are a (prevX) and c (currX).
For the third line we are adding, a' = w - a and c' = w - c, which means the x coordinates are the ones that change this time.
For the fourth line, the same thing happens to our x coordinates a and c.
What happens to the y coordinates?
As you recall, our y coordinates are b (prevY) and d (currY).
For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis.
For the fourth line, b' = h - b and d' = h - d, which we’ve seen before: that’s a reflection across the x-axis.
We have four more lines, or locations, to define. Note: the part of the code that’s responsible for drawing a micro-line between the newly calculated coordinates is always the same:
context.moveTo(a_, b_);
context.lineTo(c_, d_);
We can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments.
Once again, we need some concrete examples to see where we’re going, so here’s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code.
A sketch illustrating points E and F.
This is the code for E and F:
// Reassign for 5
a_ = w/2+h/2-b; b_ = w/2+h/2-a;
c_ = w/2+h/2-d; d_ = w/2+h/2-c;
// Reassign for 6
a_ = w/2+h/2-b; b_ = h/2-w/2+a;
c_ = w/2+h/2-d; d_ = h/2-w/2+c;
Their x coordinates are identical and their y coordinates are the opposite of one another.
This one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).
A sketch illustrating points G and H.
This is the code:
// Reassign for 7
a_ = w/2-h/2+b; b_ = w/2+h/2-a;
c_ = w/2-h/2+d; d_ = w/2+h/2-c;
// Reassign for 8
a_ = w/2-h/2+b; b_ = h/2-w/2+a;
c_ = w/2-h/2+d; d_ = h/2-w/2+c;
//...code...
}
Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won’t go into the full details, since this has been a long enough journey as it is, and I think we’ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.
I hope you had fun learning! This is our final app:
See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen. |
2018 |
Hagar Shilo |
hagarshilo |
2018-12-02T00:00:00+00:00 |
https://24ways.org/2018/the-art-of-mathematics/ |
code |
242 |
Creating My First Chrome Extension |
Writing a Chrome Extension isn’t as scary as it seems!
Not too long ago, I used a Chrome extension called 20 Cubed. I’m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.
Unfortunately, the developer stopped updating the extension and removed it from Chrome’s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?
Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the “old-fashioned” way. Sky’s the limit!
Setup
But first, some things you’ll need to know about before getting started:
Callbacks
Timeouts
Chrome Dev Tools
Developing with Chrome extension methods requires a lot of callbacks. If you’ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I’d highly recommend brushing up on that subject before getting started.
(Image: Hyperbole and a Half)
Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn’t consider the fact that I’d be juggling three timers. And I probably would’ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.
On the note of organization, abstraction is important! You might have any combination of the following:
The Chrome extension options page
The popup from the Chrome Menu
The windows or tabs you create
The background scripts
And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you’ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.
Alright, now that you know all that up front, let’s get going!
Documentation
TL;DR READ THE DOCS.
A few things to get started:
Read Google’s primer on browser extensions
Have a look at their Getting started tutorial
Check out their overview on Chrome Extensions
This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.
The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here’s what my manifest.json looked like, for example:
https://github.com/jennz0r/eye-rest/blob/master/manifest.json
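That link shows the real thing. As a rough, hypothetical sketch (not her actual file), a 2018-era manifest version 2 manifest.json for an extension like this might look something like:
{
  "manifest_version": 2,
  "name": "Eye Rest",
  "version": "1.0",
  "description": "Reminds you to rest your eyes every twenty minutes.",
  "background": {
    "scripts": ["helpers.js", "background.js"],
    "persistent": false
  },
  "page_action": {
    "default_popup": "popup.html"
  },
  "options_page": "options.html",
  "permissions": ["alarms", "declarativeContent"]
}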
Because I’m a visual learner, I found the images that describe the extension’s architecture most helpful.
To clarify this diagram, the background.js file is the extension’s event handler. It’s constantly listening for browser events, which you’ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.
The Popup is the little window that appears when you click on an extension’s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }.
The Options page is exactly as it says. This displays customizable options that are only visible to the user when they right-click on the extension’s icon in the Chrome menu and choose “Options”. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.
Content scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.
Debugging
A quick note: don’t forget the debugging tutorial!
Just like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you’re likely to have several inspector windows open – one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.
For example, I kept seeing the error “This request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.” Well, it turns out there are limitations on how often you can sync stored information.
Another error I kept seeing was “Alarm delay is less than minimum of 1 minutes. In released .crx, alarm “ALARM_NAME_HERE” will fire in approximately 1 minutes”. Well, it turns out there are minimum interval times for alarms.
Chrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!
Me adding a ton of `console.log`s while trying to debug my alarms.
Eye Rest Functionality
Ok, so what is the extension I created? Again, it’s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:
If the extension is running AND
If the user has not clicked Pause in the Popup HTML AND
If the counter in the Popup HTML is down to 00:00 THEN
Open a new window with Timer HTML AND
Start a 20 sec countdown in Timer HTML AND
Reset the Popup HTML counter to 20:00
If the Timer HTML is down to 0 sec THEN
Close that window. Rinse. Repeat.
Sounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that’s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn’t realize until I started coding.
For visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it’s a pun.)
API
Now let’s discuss the APIs that I used to build this extension.
Alarms
What even are alarms? I didn’t know either.
Alarms are basically Chrome’s setTimeout and setInterval. They exist because, as Google says…
DOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.
For more information, check out this background migration doc.
One interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn’t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.
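Here’s a rough sketch of that workaround — the alarm name prefix and the 20-minute period are just assumptions for illustration, not her actual code:
var alarmName = 'eyeRest-' + Date.now(); // unique name for this particular load

chrome.alarms.getAll(function (alarms) {
  // clear any leftover alarms from previous loads or installs
  alarms.forEach(function (alarm) {
    if (alarm.name !== alarmName) {
      chrome.alarms.clear(alarm.name);
    }
  });
  // then create the one alarm we actually want
  chrome.alarms.create(alarmName, { periodInMinutes: 20 });
});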
Background Scripts
For Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.
I wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.
I also wanted my Popup script to do the same things when the user has unpaused the extension’s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.
Other APIs
Windows
I use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.
One day, while coding late into the evening, I found it very confusing that the window.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window’s HTML. Until then, it hadn’t dawned on me that the url could be relative. Duh. I was tired!
I pass the timer.html as the url option, as well as type, size, position, and other visual options.
Storage
Maybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome’s storage is avoiding quotas and write operation maximums.
I wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it’s quite complicated and requires lots of writes. That’s why I went with the user’s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.
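A minimal sketch of that hand-off — the variable names mirror the ones mentioned above, but the values and timings are placeholders:
// in the Background (or Helper) script, when a new alarm is created:
localStorage.setItem('nextAlarmTime', Date.now() + 20 * 60 * 1000);
localStorage.setItem('isPaused', 'false');

// in the Popup script, when updating the countdown:
var nextAlarmTime = Number(localStorage.getItem('nextAlarmTime'));
var isPaused = localStorage.getItem('isPaused') === 'true';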
Declarative Content
The Declarative Content API allows you to show your extension’s page action based on several types of matches, without needing to take a host permission or inject a content script. So you’ll need this to get your extension to work in the browser!
You can see me set this in my Background script. Because I want my extension’s popup to appear on every page one is browsing, I leave the page matchers empty.
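In rough terms — this sketch is based on Google’s getting-started example rather than her exact code — leaving the matcher empty looks something like this:
chrome.runtime.onInstalled.addListener(function () {
  chrome.declarativeContent.onPageChanged.removeRules(undefined, function () {
    chrome.declarativeContent.onPageChanged.addRules([{
      // an empty PageStateMatcher matches every page the user visits
      conditions: [new chrome.declarativeContent.PageStateMatcher({})],
      actions: [new chrome.declarativeContent.ShowPageAction()]
    }]);
  });
});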
There are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!
The Extension
Here’s what my original Popup looked like before I added styles.
And here’s what it looks like with new styles. I guess I’m going for a Nickelodeon feel.
And here’s the Timer window and Popup together!
Publishing
Publishing is a cinch. You just zip up your files, create a new or use an existing Google Developer account, upload the files, add some details, and pay a one time $5 fee. That’s all! Then your extension will be available on the Chrome extension store! Neato :D
My extension is now available for you to install.
Conclusion
I thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it’s definitely achievable with some time, persistence, and good ole Google searches.
Eventually, I’d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!
Either way, with Eye Rest’s framework built out, I’m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes. |
2018 |
Jennifer Wong |
jenniferwong |
2018-12-05T00:00:00+00:00 |
https://24ways.org/2018/my-first-chrome-extension/ |
code |
258 |
Mistletoe Offline |
It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.
I am, of course, talking about Murphy, and the golden rule he gave unto us:
Anything that can go wrong will go wrong.
So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit?
But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong.
I guess there’s nothing we can do about those particular situations, right?
Wrong!
A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.
If you’ve got a custom 404 page, why not make a custom offline page too?
Get your server in order
Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.
If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.
Make an offline page
Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference.
When you’re done, publish the offline page at a suitably imaginative URL, like, say, /offline.html.
Pre-cache your offline page
Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:
addEventListener('install', installEvent => {
// put your instructions here.
}); // end addEventListener
In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:
addEventListener('install', installEvent => {
installEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
JohnnyCache.addAll([
'/offline.html'
]); // end addAll
}) // end open.then
); // end waitUntil
}); // end addEventListener
I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:
addEventListener('install', installEvent => {
installEvent.waitUntil(
caches.open('Johnny')
.then( JohnnyCache => {
JohnnyCache.addAll([
'/offline.html',
'/path/to/stylesheet.css',
'/path/to/javascript.js',
'/path/to/image.jpg'
]); // end addAll
}) // end open.then
); // end waitUntil
}); // end addEventListener
Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.
Intercept requests
The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:
addEventListener('fetch', fetchEvent => {
// What happens next is up to you!
}); // end addEventListener
Let’s write a fairly conservative script with the following logic:
Whenever a file is requested,
First, try to fetch it from the network,
But if that doesn’t work, try to find it in the cache,
But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead.
Here’s how that translates into JavaScript:
// Whenever a file is requested
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
fetchEvent.respondWith(
// First, try to fetch it from the network
fetch(request)
.then( responseFromFetch => {
return responseFromFetch;
}) // end fetch.then
// But if that doesn't work
.catch( fetchError => {
// try to find it in the cache
return caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
// But if that doesn't work
} else {
// and it's a request for a web page
if (request.headers.get('Accept').includes('text/html')) {
// show the custom offline page instead
return caches.match('/offline.html');
} // end if
} // end if/else
}) // end match.then
}) // end fetch.catch
); // end respondWith
}); // end addEventListener
I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas.
Hook up your service worker script
You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in to every page on your site, or add this in a script element at the end of every page’s HTML:
if (navigator.serviceWorker) {
navigator.serviceWorker.register('/serviceworker.js');
}
That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.
You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted.
Go further!
Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!
This is just the beginning. You can do more with service workers.
What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version.
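Here’s a rough sketch of that idea, reusing the Johnny cache from earlier (the clone() call is there because a response can only be read once):
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
      // stash a copy of this response in the cache for later
      const copy = responseFromFetch.clone();
      caches.open('Johnny')
      .then( JohnnyCache => {
        JohnnyCache.put(request, copy);
      }); // end open.then
      return responseFromFetch;
    }) // end fetch.then
    .catch( fetchError => {
      // offline? serve the stashed copy if we have one
      return caches.match(request);
    }) // end fetch.catch
  ); // end respondWith
}); // end addEventListener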
Or, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.
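And here’s a sketch of that cache-first idea: serve from the cache when we can, but quietly fetch a fresh copy in the background for next time.
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    caches.match(request)
    .then( responseFromCache => {
      // kick off a network request in the background either way
      const fetchPromise = fetch(request)
      .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open('Johnny')
        .then( JohnnyCache => {
          JohnnyCache.put(request, copy);
        }); // end open.then
        return responseFromFetch;
      }); // end fetch.then
      // serve the cached version if it exists; otherwise wait for the network
      return responseFromCache || fetchPromise;
    }) // end match.then
  ); // end respondWith
}); // end addEventListener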
So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript.
Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action. |
2018 |
Jeremy Keith |
jeremykeith |
2018-12-04T00:00:00+00:00 |
https://24ways.org/2018/mistletoe-offline/ |
code |
263 |
Securing Your Site like It’s 1999 |
Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.
As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.
What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.
Bad input validation: Trusting anything the user sends you
Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.
One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.
One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile.
Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened?
There aren’t any official rules for developing software on the web. But if there were, my golden rule would be:
Never trust user input. Ever.
Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.
Make sure you validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.
Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed to access. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner.
Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.
SQL injection: Allowing the user to run their own database queries
A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.
One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.
It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?
' OR '1'='1
SQL is a computer language that is used to query databases. When you fill out a login form, your username and your password are usually inserted into an SQL query like this:
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Alice'
AND PASSWORD='hunter2'
This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead!
SELECT COUNT(*)
FROM USERS
WHERE USERNAME='Admin'
AND PASSWORD='' OR '1'='1'
Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!
So how can you protect yourself from SQL injection?
Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.JS has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize.
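As a rough sketch — using ?-style placeholders, with the db connection object assumed rather than shown — the login query from earlier becomes:
db.query(
  'SELECT COUNT(*) FROM users WHERE username = ? AND password = ?',
  [suppliedUsername, suppliedPassword],
  function (error, results) {
    // the driver escapes the supplied values for us,
    // so a password of ' OR '1'='1 is treated as a literal string,
    // not as part of the query
  }
);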
Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!
Cross site request forgery: Getting other users to do your dirty work for you
Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.
Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky…
Let’s just say there are older and fouler things than Rick Astley in the dark places of the web.
What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:
<img src="http://www.netflix.com/JSON/AddToQueue?movieid=70110672" />
Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge!
This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from.
The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set.
Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies.
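Here’s a rough sketch of that idea in Node.js, assuming an Express-style app with session and body-parsing middleware already in place (the route and field names are illustrative):
const crypto = require('crypto');

// when rendering the form: generate a token, remember it, and embed it in the form
app.get('/profile', (request, response) => {
  const token = crypto.randomBytes(32).toString('hex');
  request.session.csrfToken = token;
  // the template outputs: <input type="hidden" name="csrfToken" value=token>
  response.render('profile-form', { csrfToken: token });
});

// when the form is submitted: reject the request unless the tokens match
app.post('/profile', (request, response) => {
  if (request.body.csrfToken !== request.session.csrfToken) {
    return response.status(403).send('Invalid CSRF token');
  }
  // safe to process the profile change here
});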
You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.
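For example, a session cookie set with a header like this one (the cookie name and value are made up) won’t be sent along with cross-site requests:
Set-Cookie: sessionid=38afes7a8; Secure; HttpOnly; SameSite=Strict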
Cross site scripting: Someone else’s code running on your website
In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.
Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then…
Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <img /> and <div /> tags into his headline, but <script /> was filtered. MySpace wasn’t about to let someone else run their own code on MySpace.
Intrigued, Samy set about finding out exactly what he could do with <img /> and <div /> tags. He found that you could add style properties to <div /> tags to style them with CSS.
<div style="background:url('javascript:alert(1)')">
This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div />.
<div style="background:url('java
script:alert(1)')">
Samy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.
alert(document.body.innerHTML)
Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.
alert(eval('document.body.inne' + 'rHTML'))
This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onReadyStateChange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages.
One final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain from interacting with sites hosted on another domain.
if (location.hostname == 'profile.myspace.com') {
document.location = 'http://www.myspace.com'
+ location.pathname + location.search
}
Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited.
Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to do 90 days of community service.
This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.
So how can you help protect yourself from cross-site scripting?
Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.
You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.
You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.
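A policy can be as simple as one HTTP header. This example — the CDN hostname is just a placeholder — only allows scripts and styles from your own origin and one trusted CDN, and leaves eval() blocked by default:
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self' https://cdn.example.com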
For content not under your control, consider setting up sub-resource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.
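In practice, that means adding integrity and crossorigin attributes to the tags that pull in third-party content — the URL and hash here are placeholders:
<script src="https://cdn.example.com/library.js"
        integrity="sha384-EXAMPLE_HASH_GOES_HERE"
        crossorigin="anonymous"></script>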
npm audit: Protecting yourself from code you don’t own
JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit.
npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm:
npm install npm -g
npm audit
When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.
If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!
Securing your site like it’s 2019
The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.
However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.
As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years. |
2018 |
Katie Fenn |
katiefenn |
2018-12-01T00:00:00+00:00 |
https://24ways.org/2018/securing-your-site-like-its-1999/ |
code |
251 |
The System, the Search, and the Food Bank |
Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center.
On the belt are plastic bins filled with assortments of shelf-stable food—one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like “Bottled Water” or “Cereal” or “Candy.”
Such was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins.
I could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there’s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into.
Look, I can’t quite explain it: I just know that I love sorting, organizing, and classifying things—groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask?
The opportunity to create meaning from nothing is at the core of my excitement, which is why I’ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. “I can’t believe they’re letting me do this,” I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey.
The jumble of donated items coming into the center need to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It’s not just a nice-to-have that we spent our morning separating cookies from carrots—it’s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we’re talking about donated groceries or—there it is—web content.
This happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.)
Is X a Y? was the question at the heart of our food sorting—but it’s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves:
What is the criteria that defines Y?
Does X meet that criteria?
We don’t usually articulate it so concretely because it’s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn’t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn’t allowed, causing a small cognitive hiccup—this X is NOT a Y—that sometimes meant having to re-sort our boxes.
On the web, we’re interested—I would hope—in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer—Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y?
We have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means—well, this means a lot of things, and I’ve got limited space here, so let’s focus on these three lessons from the food bank:
Use plain, familiar language. This advice seems to be given constantly, but that’s because it’s solid and it’s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it’s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I’ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows—not only will they be more likely to spot what they’re looking for on your site or app, but you’ll turn up more often in search results.
Create consistency in all things. Missteps in consistency look like my earlier chicken broth example—changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts—these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall.
Make the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it.
This idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn’t know every category label around the conveyor belt, nor what criteria each category warranted.
The system wasn’t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We’d ask staff members. We’d ask more seasoned volunteers. We’d ask each other. We’d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn.
The more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it.
The same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away—or, worse, misinform or hurt them.
This is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y—well-sorted content can give them the answer. |
2018 |
Lisa Maria Martin |
lisamariamartin |
2018-12-16T00:00:00+00:00 |
https://24ways.org/2018/the-system-the-search-and-the-food-bank/ |
content |
250 |
Build up Your Leadership Toolbox |
Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won’t wake up one day and magically be imbued with all you need to do a good job at leading. If we don’t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title?
What even is it?
I got very frustrated way back in my days as a senior developer when I was given “advice” about my leadership style; at the time I didn’t have the words to describe the styles and ways in which I was leading to be able to push back. I heard these phrases a lot:
you need to step up
you need to take charge
you need to grab the bull by its horns
you need to have thicker skin
you need to just be more confident in your leading
you need to just make it happen
I appreciate some people’s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean? How exactly do you “step up”, and how are you evaluating what step I’m on? I am confident; what would being even more confident achieve in my leading? Does that not lead you down the path of becoming an arrogant door knob? >___<
While there is no One True Way to Lead, there is an overwhelming pattern of positions of leadership within the tech industry being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine-associated qualities (#notallmen). I’ll leave the gendered nature of leadership qualities as an exercise in googling for the reader.
I’ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books.
I <3 books
I refused to buy into that style of leadership as being the only accepted way to be. There had to be room for different kinds of people to be leaders and have different leadership styles.
There are three books that changed me forever in how I approach and think about leadership.
Primal Leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee
Quiet, by Susan Cain
Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent and Lead, by Brené Brown
I recommend you read them. Ignore the slightly cheesy titles and trust me, just read them.
Primal Leadership helped to give me the vocabulary and understanding I needed about the different styles of leadership there are, and how and when to apply them.
Quiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I’d had managers or support from someone who valued introverts’ strengths, things would’ve been very different. I would’ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It’s OK to be different and to need different things, like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get.
Brené Brown’s book I am forever in debt to. Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.
It takes great courage to be vulnerable and open about what you can and can’t do. To be open about your mistakes. To vocalise what you don’t know and ask for help. In some lights, these are seen as weaknesses, and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work; they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and as often as I can.
Just like gender, leadership is not binary
If you google for what a leader is, you’ll get many different answers. I personally think Brené’s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function.
I define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential.
Brené Brown
Being a leader isn’t about being the loudest in a room, having veto power, talking over people or ignoring everyone else’s ideas. It’s not about “telling people what to do”. It’s not about an elevated status that you’re better than others. Nor is it about creating a hand wavey far away vision and forgetting to help support people in how to get there.
Being a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. There is no Done.
Leaders are Made, they are not Born.
Be flexible in your leadership style
Typically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation.
The book Primal Leadership gives a summary of six leadership styles, which are:
Visionary
Coaching
Affiliative
Democratic
Pacesetting
Commanding
Visionary, moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they’re doing what they’re doing.
Coaching, is about connecting what a person wants and helping to align that with organisation’s goals. It’s a balance of helping someone improve their performance to fulfil their role and their potential beyond.
Affiliative, creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or to help strengthen connections within and across teams. During stressful times, having a positive and supportive connection to those around us really helps see us through.
Democratic, values people’s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. The tricky part about this style, I find, is that when I gather and listen to everyone’s input, that doesn’t mean the end result is that I have to please everyone.
The next two, sadly, are the ones wielded far too often and have the greatest negative impact. It’s where the “telling people what to do” comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.
Pacesetting, when used well, is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well-performing team, this can be great to help achieve real focus and drive. Sadly it is so often overused and poorly executed that it becomes the “just make it happen” driver of unrealistic workloads, which contributes to burnout.
Commanding, when used appropriately soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour.
Commanding style seems to be what a lot of people think being a leader is, taking control and commanding a situation. It should be used sparingly and only when absolutely necessary.
Be responsible for the power you wield
If, reading through those, you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven’t got all of these down and ready to use in your toolbox…
Take a breath. Take responsibility. Take action.
No one is perfect, and it’s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you’re going to try some different styles. You can be vulnerable and own up to mistakes you might’ve made followed with an apology. You can order those books and read them. Those books will give you more examples on those leadership styles and help you to find your own voice.
The impact you can have on the lives of those around you when you’re a leader is huge. You can help be that positive impact, and help discover and develop potential in someone.
Time spent understanding people is never wasted.
Cate Huston.
I believe in you. <3 Mazz. |
2018 |
Mazz Mosley |
mazzmosley |
2018-12-10T00:00:00+00:00 |
https://24ways.org/2018/build-up-your-leadership-toolbox/ |
business |
261 |
Surviving—and Thriving—as a Remote Worker |
Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there’s an entire world of talent out there?
I’ve been working remotely, full-time, for five and a half years. I’ve reached the point where I can’t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I’ve grown attached to my current level of freedom and flexibility.
However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn’t for everyone, but most people, with enough effort, can make it work — or even thrive. Here’s what I’ve learned in over five years of working remotely.
Experiment with your environment
As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight.
Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls.
Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.)
Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls.
Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you.
Set boundaries
If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh?
Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with).
For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I needed to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why I suddenly hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot.
I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time.
I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality.
Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon.
You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out.
Create intentional transition time
I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time.
The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort.
I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch.
This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer.
Embrace async
If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now.
For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers.
Asynchronous communication is great. Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.”
Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it!
(That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.)
Seek out social opportunities
Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide.
I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.)
Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh.
If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work.
If you don’t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own.
These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment!
Hope to see you all online soon 🙌 |
2018 |
Mel Choyce |
melchoyce |
2018-12-09T00:00:00+00:00 |
https://24ways.org/2018/thriving-as-a-remote-worker/ |
process |
256 |
Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist |
We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.
*(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!)
Looking for critters in rocky intertidal habitats
One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’.
A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.)
They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons.
My favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, with varied types of water-coverage habitat zones, as well as being relatively accessible.
I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist.
Using technology and the crowdsourced species observations of the iNaturalist community we can shortcut our way to this superpower!
Finding nearby animals with iNaturalist
We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.
I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting.
This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.
iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button.
You’ll then get a URL that looks a bit like
https://api.inaturalist.org/v1/observations?captive=false &geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584 &radius=5&order=desc&order_by=created_at
which you can call and interrogate using a programming language of your choice.
If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).
Rapid development using Observable Notebooks
We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable; they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python, which are similar but take a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript, and since it is a hosted product it doesn’t require any setup at all.
You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example.
Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks; I like this code introduction one from Observable (and D3) creator Mike Bostock.
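To give a feel for the format, here is a purely illustrative pair of cells (not from the notebook we’re about to build): a JavaScript cell whose result appears above its code, and a Markdown cell written with Observable’s built-in md tagged template, which re-renders whenever the value it references changes.

// A JavaScript cell: its evaluated value (3) is displayed above the code
total = 1 + 2

// A Markdown cell, using Observable's built-in md tagged template;
// it updates automatically whenever total changes
md`The running total is **${total}**.`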
Developing your Naturalist superpowers
If you have an idea of what plants and critters you might see in a place at the time you visit, you can home in on what you want to study and train your Naturalist eye to better identify the life around you.
For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.
We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com
The tool lets you export the bounding box in several formats using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811
A quick map primer:
The higher the latitude the more north it is
The lower the latitude the more south it is
Latitude 0 = the equator
The higher the longitude the more east it is of Greenwich
The lower the longitude the more west it is of Greenwich
Longitude 0 = Greenwich
In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:
nelat = highest latitude = north limit = 37.499811
nelng = highest longitude = east limit = -122.492738
swlat = smallest latitude = south limit = 37.492805
swlng = smallest longitude = west limit = -122.50542
As API parameters these look like this:
?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542
These parameters in this format can be used for most of the iNaturalist API methods.
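If you find yourself repeating these four values a lot, one illustrative option (not part of the original walkthrough) is to build the query string once in its own Observable cell and reuse it from any cell that constructs an API URL:

// Illustrative cell: build the bounding box query string once so other cells
// can reuse it when constructing iNaturalist API URLs
bounding_box_params = new URLSearchParams({
  nelat: 37.499811,
  nelng: -122.492738,
  swlat: 37.492805,
  swlng: -122.50542
}).toString()

Any other cell could then interpolate it into a request, for example .../observations?${bounding_box_params}.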
Nudibranch seasonality in Pillar Point
We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.
In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist’s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’.
Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words eg ‘Glaucus Atlanticus’, the first Latin word is the ‘Genus’, like a family name ‘Glaucus’, and the second word identifies that particular species, like a given name ‘Atlanticus’.
The two Latin words together indicate a specific species. The term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus Atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead.
The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:
pillar_point_counts_per_week = fetch(
"https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true"
).then(response => {
return response.json();
})
Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks.
(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)
The iNaturalist API returns data that looks like this:
{
"total_results": 53,
"page": 1,
"per_page": 53,
"results": {
"week_of_year": {
"1": 136,
"2": 20,
"3": 150,
"4": 65,
"5": 186,
"6": 74,
"7": 47,
"8": 87,
"9": 64,
"10": 56,
But for our Vega-Lite graph we need data that looks like this:
[{
"week": "01",
"value": 136
}, {
"week": "02",
"value": 20
}, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys:
objects_to_plot = {
let objects = [];
Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) {
objects.push({
week: `Wk ${week_index.toString()}`,
observations: pillar_point_counts_per_week.results.week_of_year[week_index]
});
})
return objects;
}
We can then plug this into Vega-Lite to draw us a graph:
vegalite({
data: {values: objects_to_plot},
mark: "bar",
encoding: {
x: {field: "week", type: "nominal", sort: null},
y: {field: "observations", type: "quantitative"}
},
width: width * 0.9
})
It’s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences.
So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.
pillar_point_nudibranches = {
let api_results = await fetch(
"https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"
).then(r => r.json())
let species_list = api_results.results.map(i => ({
value: i.taxon.id,
label: `${i.taxon.preferred_common_name} (${i.taxon.name})`
}));
return species_list
}
We can create an interactive select box by importing code from Jeremy Ashkenas’ Observable Notebook: add import {select} from "@jashkenas/inputs" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter - if one cell is referenced by any other cell, then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another!
viewof select_species = select({
title: "Which Nudibranch do you want to see seasonality for?",
options: [{value: "", label: "All the Nudibranchs!"}, ...pillar_point_nudibranches],
value: ""
})
Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.
pillar_point_counts_per_month_per_species = fetch(
`https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`
).then(r => r.json())
Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite:
objects_to_plot_species_month = {
let objects = [];
Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) {
objects.push({
month: (new Date(2018, (month_index - 1), 1)).toLocaleString("en", {month: "long"}),
observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]
});
})
return objects;
}
(Note that in the above code we are creating a date object with our specific month in it, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month.)
And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!
vegalite({
data: {values: objects_to_plot_species_month},
mark: "bar",
encoding: {
x: {field: "month", type: "nominal", sort:null},
y: {field: "observations", type: "quantitative"}
},
width: width * 0.9
})
Now we can see when the best time of year is to plan a tidepooling trip to Pillar Point if we want to find a specific species of Nudibranch.
This tool is great for planning when to go rockpooling at Pillar Point, but what if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs?
Well… we can create ourselves a dynamic guide with a list of the species, their photo, name and how many times they have been observed in that month of the year!
Our select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month.
viewof selected_month = select({
title: "When do you want to see Nudibranchs?",
options: [
{ label: "Whenever", value: "" },
{ label: "January", value: "1" },
{ label: "February", value: "2" },
{ label: "March", value: "3" },
{ label: "April", value: "4" },
{ label: "May", value: "5" },
{ label: "June", value: "6" },
{ label: "July", value: "7" },
{ label: "August", value: "8" },
{ label: "September", value: "9" },
{ label: "October", value: "10" },
{ label: "November", value: "11" },
{ label: "December", value: "12" },
],
value: ""
})
We then can use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.
all_species_data = fetch(
`https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`
).then(r => r.json())
You can render HTML directly in a notebook cell using Observable’s html tagged template literal:
<style>
.collection { margin-top: 2em }
.collection .species { display: inline-block; width: 9em; margin-bottom: 2em; }
.collection .species-name { font-size: 1em; margin: 0; padding: 0 }
.collection .species-count { margin: 0 0 0.3em 0; padding: 0; font-size: 0.75em; color: #999; font-style: italic; }
.collection img { display: block; width: 100% }
.collection select { font-size: 1.5em; }
</style>
<h2>If you go to Pillar Point ${
{"": "",
"1":"in January",
"2":"in Febrary",
"3":"in March",
"4":"in April",
"5":"in May",
"6":"in June",
"7":"in July",
"8":"in August",
"9":"in September",
"10":"in October",
"11":"in November",
"12":"in December",
}[selected_month]
} you might see…</h2>
<div class="collection">
${all_species_data.results.map(s => `<div class="species"><h3 class="species-name">${s.taxon.name}</h3>
<p class="species-count">Seen ${s.count} times</p>
<img src="${s.taxon.default_photo.medium_url}"></div>
`)}
</div>
These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!
Play with it yourself in this Observable Notebook.
Conclusion
I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower.
Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.
Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences. |
2018 |
Natalie Downe |
nataliedowne |
2018-12-18T00:00:00+00:00 |
https://24ways.org/2018/observable-notebooks-and-inaturalist/ |
code |
252 |
Turn Jekyll up to Eleventy |
Sometimes it pays not to over complicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option.
The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.
By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.
Introducing Eleventy
Eleventy is a more flexible alternative to Jekyll. Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).
As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way1.
Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:
git clone https://github.com/mattcone/markdown-guide.git
cd markdown-guide
Before you start
If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js but, if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM:
Install Node.js:
Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.
Windows: Download the Windows installer from the Node.js website and follow the instructions.
Initiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems’s Gemfile, this file contains a list of your project’s third-party dependencies.
If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.
Installing Eleventy
With Node.js installed and your project setup to work with NPM, we can now install Eleventy as a dependency:
npm install --save-dev @11ty/eleventy
If you open package.json you should see the following:
…
"devDependencies": {
"@11ty/eleventy": "^0.6.0"
}
…
We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following:
npx eleventy --input=README.md --formats=md
This command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.
Configuration
Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on.
We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:
module.exports = function(eleventyConfig) {
return {
dir: {
input: "./", // Equivalent to Jekyll's source property
output: "./_site" // Equivalent to Jekyll's destination property
}
};
};
A few other things to bear in mind:
Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).
By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins, as sketched below.
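Here is a minimal sketch of what that might look like, assuming you want footnote support via the markdown-it-footnote plugin. Treat the plugin name and the setLibrary method as assumptions on my part; check the Eleventy and markdown-it documentation for the versions you’re using.

// .eleventy.js: a sketch, assuming Eleventy's setLibrary method and the
// markdown-it-footnote plugin; verify both against the current docs
const markdownIt = require("markdown-it");
const markdownItFootnote = require("markdown-it-footnote");

module.exports = function(eleventyConfig) {
  // Build a markdown-it instance with the options and plugins your content
  // needs, then tell Eleventy to use it for all Markdown files
  const md = markdownIt({ html: true }).use(markdownItFootnote);
  eleventyConfig.setLibrary("md", md);

  return {
    dir: {
      input: "./",
      output: "./_site"
    }
  };
};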
Layouts
One area Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).
Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config:
module.exports = function(eleventyConfig) {
// Aliases are in relation to the _includes folder
eleventyConfig.addLayoutAlias('about', 'layouts/about.html');
eleventyConfig.addLayoutAlias('book', 'layouts/book.html');
eleventyConfig.addLayoutAlias('default', 'layouts/default.html');
return {
dir: {
input: "./",
output: "./_site"
}
};
}
Determining which template language to use
Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.
Variables
On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.
Site variables
Alongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:
title: "Markdown Guide"
url: https://www.markdownguide.org
baseurl: ""
repo: http://github.com/mattcone/markdown-guide
comments: false
author:
name: "Matt Cone"
og_locale: "en_US"
Eleventy’s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.
{
"title": "Markdown Guide",
"url": "https://www.markdownguide.org",
"baseurl": "",
"repo": "http://github.com/mattcone/markdown-guide",
"comments": false,
"author": {
"name": "Matt Cone"
},
"og_locale": "en_US"
}
Page variables
The table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:
Jekyll → Eleventy
page.url → page.url
page.date → page.date
page.path → page.inputPath
page.id → page.outputPath
page.name → page.fileSlug
page.content → content
page.title → title
page.foobar → foobar
When iterating through pages, frontmatter values are available via the data object while content is available via templateContent:
Jekyll → Eleventy
item.url → item.url
item.date → item.date
item.path → item.inputPath
item.name → item.fileSlug
item.id → item.outputPath
item.content → item.templateContent
item.title → item.data.title
item.foobar → item.data.foobar
Ideally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.
Pagination variables
Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:
Jekyll → Eleventy
paginator.page → pagination.pageNumber
paginator.per_page → pagination.size
paginator.posts → pagination.items
paginator.previous_page_path → pagination.previousPageHref
paginator.next_page_path → pagination.nextPageHref
Filters
Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where.
The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:
// {{ variable | jsonify }}
eleventyConfig.addFilter('jsonify', function (variable) {
return JSON.stringify(variable);
});
Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:
{{ site.members | where: "graduation_year","2014" }}
To account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:
// {{ array | where: key,value }}
eleventyConfig.addFilter('where', function (array, key, value) {
return array.filter(item => {
const keys = key.split('.');
const reducedKey = keys.reduce((object, key) => {
return object[key];
}, item);
return (reducedKey === value ? item : false);
});
});
There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew!
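If that reduce step is hard to picture, here is a small standalone sketch using made-up data (the members array below is hypothetical, loosely echoing Jekyll’s site.members example) that shows how the dot-notation key gets reduced to a nested property before the comparison:

// Hypothetical data, loosely echoing Jekyll's site.members example
const members = [
  { name: "Ada", graduation: { year: "2014" } },
  { name: "Grace", graduation: { year: "2015" } }
];

// "graduation.year" is split into ["graduation", "year"], then reduced so
// that item becomes item["graduation"], which becomes item["graduation"]["year"]
const key = "graduation.year";
const value = "2014";

const matches = members.filter(item => {
  const reducedKey = key.split(".").reduce((object, k) => object[k], item);
  return reducedKey === value;
});

console.log(matches); // [ { name: "Ada", graduation: { year: "2014" } } ]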
Includes
As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:
<!-- page.html -->
{% include include.html value="key" %}
<!-- include.html -->
{{ include.value }}
in Eleventy, you would do this:
<!-- page.html -->
{% include "include.html", value: "key" %}
<!-- include.html -->
{{ value }}
A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.
Tweaking Liquid
You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option.
Thankfully, Eleventy allows us to pass options to LiquidJS:
eleventyConfig.setLiquidOptions({
dynamicPartials: false,
root: [
'_includes',
'.'
]
});
Collections
Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.
Collections in Jekyll
In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections:
collections:
- basic-syntax
- extended-syntax
These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:
{% for syntax in site.extended-syntax %}
{{ syntax.title }}
{% endfor %}
Collections in Eleventy
There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:
---
title: Strikethrough
syntax-id: strikethrough
syntax-summary: "~~The world is flat.~~"
tag: extended-syntax
---
We can then iterate over tagged content like this:
{% for syntax in collections.extended-syntax %}
{{ syntax.data.title }}
{% endfor %}
Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):
eleventyConfig.addCollection('basic-syntax', collection => {
return collection.getFilteredByGlob('_basic-syntax/*.md');
});
eleventyConfig.addCollection('extended-syntax', collection => {
return collection.getFilteredByGlob('_extended-syntax/*.md');
});
We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result:
eleventyConfig.addCollection('example', collection => {
return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {
return a.data.display_order - b.data.display_order;
});
});
Hopefully, this gives you just a hint of what’s possible using this approach.
Using directory data to manage defaults
By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:
---
title: Lists
syntax-id: lists
api: "no"
permalink: /basic-syntax/lists.html
---
This is probably not something we want to manage on a file-by-file basis, but again, Eleventy has features that can help: directory data and permalink variables.
For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:
{
"layout": "syntax",
"tag": "basic-syntax",
"permalink": "basic-syntax/{{ title | slug }}.html"
}
However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs, it merely uses content files to store data. So let’s change things around a little. No longer tied to Jekyll’s rules about where collection folders should be saved and how they should be labelled, we’ll move them into a folder called _content:
markdown-guide
└── _content
├── basic-syntax
├── extended-syntax
├── getting-started
└── _content.json
We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:
{
"permalink": false
}
Static files
Eleventy only transforms files whose template language it’s familiar with. But often we may have static assets that don’t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:
module.exports = function(eleventyConfig) {
…
// Copy the `assets` directory to the compiled site folder
eleventyConfig.addPassthroughCopy('assets');
return {
dir: {
input: "./",
output: "./_site"
},
passthroughFileCopy: true
};
}
Final considerations
Assets
Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts — we have plenty of choices in that department already. If you’ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options; sadly, those are beyond the scope of this article.
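That said, if all you need is a straightforward replacement for Jekyll’s Sass pipeline, a couple of npm scripts can get you a long way. Here’s one rough sketch, assuming the Dart Sass CLI is installed and your stylesheets live in an assets folder (both of which are assumptions, not part of Markdown Guide):
{
  "scripts": {
    "build:site": "eleventy",
    "build:css": "sass assets/styles.scss _site/assets/styles.css",
    "build": "npm run build:site && npm run build:css"
  }
}
Running npm run build then generates the site and compiles the CSS in sequence; swap in whichever bundler or task runner your project already uses.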
Publishing to GitHub Pages
One of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy-generated site — or any site not built with Jekyll — to GitHub Pages can be quite involved; it typically means copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It’s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged, such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere!
Going one louder
If you’ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder of why it can be prudent to avoid jumping aboard bandwagons.
While it’s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy’s appeal, it’s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.
I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that’s perfectly fine!
But these go to 11.
Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5 ↩ |
2018 |
Paul Lloyd |
paulrobertlloyd |
2018-12-11T00:00:00+00:00 |
https://24ways.org/2018/turn-jekyll-up-to-eleventy/ |
content |
243 |
Researching a Property in the CSS Specifications |
I frequently joke that I’m “reading the specs so you don’t have to”, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However, waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial.
What if you could just look it up yourself? That’s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs.
Where are the CSS Specifications?
The easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4.
Who are the specifications for?
CSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected.
Which version of a spec should I look at?
There are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (in which case you always get the newest version) or with a date, which will be the date of that publication. If I’m referring to a particular Working Draft in an article I’ll typically link to the dated version. That way, if the information changes, it is possible for someone to see where I got the information from at the time of writing.
If you want the very latest additions and changes to the spec, then the Editor’s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor’s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor’s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed.
If you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated.
How to approach a spec
The first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won’t need to read all the way to the bottom, or read every section in detail.
The second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. There are some key specifications that many other specifications draw on, such as:
Values and Units
Intrinsic and Extrinsic Sizing
Box Alignment
When something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore.
Researching your property
As an example we will take a look at the property grid-auto-rows, which defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property.
You might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties.
Clicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns.
Value
For value, we can see that the property accepts a value of <track-size>. The next thing to do is to find out what that actually means; clicking it will take you to where it is defined in the Grid spec.
The <track-size> value is defined as accepting various values:
<track-breadth>
minmax( <inflexible-breadth> , <track-breadth> )
fit-content( <length-percentage> )
We need to head down the rabbit hole to find out what each of these means. From here we essentially go down line by line until we have unpacked the value of <track-size>.
<track-breadth> is defined just below <track-size> as:
<length-percentage>
<flex>
min-content
max-content
auto
So these are all things that would be valid to use as a value for grid-auto-rows.
The first value of <length-percentage> is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length> in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values.
grid-auto-rows: 100px;
grid-auto-rows: 1em;
grid-auto-rows: 30%;
When using percentages, it is important to know what it is a percentage of, as a percentage has to resolve from something. There is text in the spec which explains how column and row percentages work.
“<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.”
This means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid.
The second value of <flex> is also defined here, as “A non-negative dimension with the unit fr specifying the track’s flex factor.” This is the fr unit, and the spec links to a fuller definition of fr; as this unit is only used in Grid Layout, it is defined in the grid spec. We now know that a valid value would be:
grid-auto-rows: 1fr;
There is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a large auto size and therefore would be larger after spare space had been distributed.
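In other words, as far as track sizing is concerned, these two declarations behave the same way:
grid-auto-rows: 1fr;
grid-auto-rows: minmax(auto, 1fr);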
We then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out.
For example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary.
We can now add min-content and max-content to our available values.
grid-auto-rows: min-content;
grid-auto-rows: max-content;
The final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this.
grid-auto-rows: auto;
So that was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). This is telling us that we can use minmax() as a value: the final (max) value will be <track-breadth>, and we have already unpacked all of the allowable values there. The first value (min) is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value:
<length-percentage>
min-content
max-content
auto
We already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>.
grid-auto-rows: minmax(100px, 200px);
grid-auto-rows: minmax(20%, 1fr);
grid-auto-rows: minmax(1em, auto);
grid-auto-rows: minmax(min-content, max-content);
Finally, we can use fit-content( <length-percentage> ). We can see that fit-content takes a value of <length-percentage>, which we already know to be either a length unit or a percentage. The spec details how fit-content is worked out; it essentially gives you a track which acts as if you had used the max-content keyword, except that the track stops growing when it hits the length passed to it.
grid-auto-rows: fit-content(200px);
grid-auto-rows: fit-content(20%);
Those are all of our possible values. To round things off, look again at the initial <track-size> value: you can see it has a little + sign next to it. Click that and you will be taken to the CSS Values and Units module to find that, “A plus (+) indicates that the preceding type, word, or group occurs one or more times.” This means that we can pass a single track size to grid-auto-rows, or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size:
“If multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.”
Therefore, with the following CSS, if five implicit rows were needed they would be sized as follows: 100px, 1fr, auto, 100px, 1fr.
.grid {
display: grid;
grid-auto-rows: 100px 1fr auto;
}
Initial
We can now move to the next line in the box, and you’ll be glad to know that it isn’t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one.
Applies to
This line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers.
Inherited
This tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element.
Percentages
Are percentages allowed for this property, and if so, how are they calculated? In this case we are referred to the definition for grid-template-columns and grid-template-rows, where we discover that the percentage is from the corresponding dimension of the content area.
Media
This defines the media group that the property belongs to. In this case visual.
Computed Value
This details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.
Canonical Order
If you have a property–generally a shorthand property–which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don’t need to worry about this value in the table.
Animation Type
This details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet!
That’s (mostly) it!
Sometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anything specific to this property.
In selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. However, ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole.
Being able to work out what is valid for each property is incredibly useful. It means you don’t waste time trying to use a value that doesn’t work for that property. You also may find that there are values you weren’t aware of, that solve problems for you.
Further reading
Specifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need.
You may also find useful:
Value Definition Syntax on MDN.
The MDN Glossary defines many common terms.
Understanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax.
How to read W3C Specs - from 2001 but still relevant.
I hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else can be very useful indeed. |
2018 |
Rachel Andrew |
rachelandrew |
2018-12-14T00:00:00+00:00 |
https://24ways.org/2018/researching-a-property-in-the-css-specifications/ |
code |
248 |
How to Use Audio on the Web |
I know what you’re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! 🙉
You’re having flashbacks, flashbacks to the days of yore, when we had a <bgsound> element and yup did everyone think that was the most rad thing since <blink>. I mean put those two together with a <marquee>, only use CSS colour names, make sure your borders were all set to ridge and you’ve got yourself the neatest website since 1998.
The sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then.
Yes, it is 2018 (the end of it, in fact), soon to be 2019. We are certainly living in the future. Hoverboards? Self-driving cars. Holodecks? VR headsets. Rocket boots? Drone racing. Sound on websites? Get real, Ruth.
We can’t help but be jaded, even though the <bgsound> element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy “controls when video and audio is allowed to autoplay”, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future.
But then of course comes the question, having lived in a muted present for so long, where and why would you use audio?
✨ Showcase Time ✨
There are some incredible uses of audio on websites today. This is my personal favourite: futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience.
futurelibrary.no
Another site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.
pottermore.com
Eighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive; it adds to the experience.
Eighty-six and a half years
Sound can be powerful and in some cases useful. Last year I wrote about using sounds to help validate forms. Audiochart is a library which “allows the user to explore charts on web pages using sound and the keyboard”. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here.
Then there’s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the power we have with the Web Audio API today.
lightnote.co
learningmusic.ableton.com
Considerations
Yes, ’tis the season, let’s be more thoughtful about our audios. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to ‘enter’ the site and that headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) sets the user’s expectations by stating that headphones are recommended: they will expect sound, and if in a public setting they can enlist the use of a common electronic device to cause less embarrassment.
Eighty-six and a half years
Allowing mute and/or volume control clearly within the user interface is a good idea. It won’t draw the user out of the experience, it’ll give more control to the user over what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a clearly visible volume control than to fumble with device settings.
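As a rough sketch of what that might look like (the element IDs and audio file here are made up for illustration), the muted and volume properties of a media element are all you really need:
<audio id="ambience" src="forest-loop.mp3" loop></audio>
<button id="mute">Mute</button>
<input id="volume" type="range" min="0" max="1" step="0.1" value="0.8">

<script>
  const audio = document.getElementById('ambience');
  const muteButton = document.getElementById('mute');
  const volumeSlider = document.getElementById('volume');

  // Toggle the muted state without stopping playback
  muteButton.addEventListener('click', () => {
    audio.muted = !audio.muted;
    muteButton.textContent = audio.muted ? 'Unmute' : 'Mute';
  });

  // HTMLMediaElement.volume expects a number between 0 and 1
  volumeSlider.addEventListener('input', () => {
    audio.volume = Number(volumeSlider.value);
  });
</script>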
Indicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn’t always the first place to look for everyone.
To The Future
So let’s go!
We see amazing demos built with Web Audio, and I’m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that.
But audio doesn’t actually need to be all bells and whistles (hey, it’s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to start introducing some good sound design into your web design.
Isn’t it great then that there’s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).
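To give a flavour of what that tutorial covers, here’s a minimal sketch (the audio element, button and values below are placeholders, not a drop-in solution): a gain node for volume and a stereo panner, created inside a click handler so the autoplay policy is satisfied.
const audioElement = document.querySelector('audio');
const playButton = document.querySelector('#play');

let audioContext;

playButton.addEventListener('click', async () => {
  // Create the context on first interaction, inside a user gesture
  if (!audioContext) {
    audioContext = new AudioContext();

    const source = audioContext.createMediaElementSource(audioElement);
    const gain = audioContext.createGain();           // volume control
    const panner = audioContext.createStereoPanner(); // left/right position

    gain.gain.value = 0.5;   // half volume
    panner.pan.value = -0.3; // a little to the left

    source.connect(gain).connect(panner).connect(audioContext.destination);
  }

  await audioContext.resume();
  audioElement.play();
});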
This year I believe we have all experienced the web as a shopping mall more than ever. Its shining storefronts, flashing adverts, fast food, loud noises.
Let’s use 2019 to create more forests to explore, oceans to dive and mountains to climb. |
2018 |
Ruth John |
ruthjohn |
2018-12-22T00:00:00+00:00 |
https://24ways.org/2018/how-to-use-audio-on-the-web/ |
design |
255 |
Inclusive Considerations When Restyling Form Controls |
I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility.
I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year.
Returning to reality, styling form controls these days is less flat out “hard” and more tricky and time consuming. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match.
However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve.
Quick level setting
Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following:
Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels.
It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics.
Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing.
Visually resetting and restyling form controls
Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors.
As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation.
See the Pen form control styling comparisons by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/gZOrZm/
Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated.
If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browsers respect without you even realizing.
You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting?
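Going back to those states for a moment: if you strip a control back to nothing, you are also signing up to provide something like the following yourself. This is only a rough sketch (the class name is made up, and it deliberately ignores high contrast and dark mode, which need their own testing), but it shows the kind of state styling browsers normally give you for free:
.custom-control {
  border: 2px solid #767676;
  background: #fff;
}

.custom-control:hover {
  border-color: #1a1a1a;
}

/* Never remove focus styling without replacing it with something visible */
.custom-control:focus {
  outline: 3px solid #4d90fe;
  outline-offset: 2px;
}

.custom-control:active {
  background: #e6e6e6;
}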
It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements:
Non-standard
This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future.
While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all.
Can’t make it? Fake it.
Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended.
Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling.
Getting clever with markup and CSS
The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox.
See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/RqEayN/
Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen?
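The gist of such a pattern looks something like the following. This is a simplified sketch rather than Scott’s exact demo (the IDs and sizes are placeholders): the native checkbox stays in the document, invisible but full size and stacked over the styled label, so it remains clickable, focusable and discoverable by touch, while the visible box is drawn with a sibling selector.
<span class="checkbox">
  <input type="checkbox" id="terms">
  <label for="terms">I agree to the terms</label>
</span>

.checkbox {
  position: relative;
}

/* Keep the real checkbox in place, full size and transparent,
   rather than shrinking it to a pixel or moving it off screen */
.checkbox input {
  position: absolute;
  opacity: 0;
  width: 1.5em;
  height: 1.5em;
  margin: 0;
}

/* Draw the custom box on the label instead */
.checkbox label::before {
  content: "";
  display: inline-block;
  width: 1em;
  height: 1em;
  margin-right: 0.5em;
  border: 2px solid #767676;
  vertical-align: middle;
}

.checkbox input:checked + label::before {
  background: #1a73e8;
}

.checkbox input:focus + label::before {
  outline: 3px solid #4d90fe;
  outline-offset: 2px;
}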
As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others.
Limitations to cleverness
Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to.
However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach.
Using ARIA when appropriate
Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA).
ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA:
If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible.
Exceptions to the rule
One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control.
Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized.
While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control.
See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/XyvoeE/
By adding a role="switch" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, its inherent ability to be focused by the Tab key, and the Space key to toggle its state.
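In markup, that can be as small as the following sketch (the ID and label text are just examples). No extra JavaScript is needed for the toggling itself, since the native checkbox still handles that; ATs that support the switch role will report the state as on or off rather than checked or unchecked.
<input type="checkbox" role="switch" id="email-alerts">
<label for="email-alerts">Email alerts</label>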
But while this is a valid approach to take in building a switch, how does this actually match up to reality?
Does it pass the test(s)?
Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better.
What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:
Is the control implemented in all supported browsers?
If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field?
If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline?
Test with multiple input devices.
Where the control is implemented, what is the quality of the user experience when using different input devices, such as mouse, touchscreen, keyboard, speech recognition or switch device, to name a few?
You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well.
How well does the control take to custom styles?
If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse effects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling.
Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled.
Do specifications match reality?
If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations?
For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order?
Wrapping up
While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented.
Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option.
2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs.
And hey. There’s always next year, right? |
2018 |
Scott O'Hara |
scottohara |
2018-12-13T00:00:00+00:00 |
https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/ |
code |
249 |
Fast Autocomplete Search for Your Website |
Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.
I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.
Try it out here, then read on to see how I built it.
First step: crawling the data
The first step in building a search engine is to grab a copy of the data that you plan to make searchable.
There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.
I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites.
We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.
Then we’ll tell wget to recursively crawl the website, using the --recursive flag.
We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2
The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X "/*/*/comments".
Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.
Tie all of those options together and we get this command:
wget --recursive --wait 2 --no-clobber \
  -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 \
  -X "/*/*/comments" \
  https://24ways.org/archives/
If you leave this running for a few minutes, you’ll end up with a folder structure something like this:
$ find 24ways.org
24ways.org
24ways.org/2013
24ways.org/2013/why-bother-with-accessibility
24ways.org/2013/why-bother-with-accessibility/index.html
24ways.org/2013/levelling-up
24ways.org/2013/levelling-up/index.html
24ways.org/2013/project-hubs
24ways.org/2013/project-hubs/index.html
24ways.org/2013/credits-and-recognition
24ways.org/2013/credits-and-recognition/index.html
...
As a quick sanity check, let’s count the number of HTML pages we have retrieved:
$ find 24ways.org | grep index.html | wc -l
328
There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead:
wget --recursive --wait 2 --no-clobber \
  -I /2018 \
  -X "/*/*/comments" \
  https://24ways.org/
Thanks to --no-clobber, this is safe to run every day in December to pick up any new content.
We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index.
Building a search index using SQLite
There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.
I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.
SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!
SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.
This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.
It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.
I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday.
If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages:
$ python3 -m venv ./jupyter-venv
$ ./jupyter-venv/bin/pip install jupyter
# ... lots of installer output
# Now let's install some extra packages we will need later
$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib
# And start the notebook web application
$ ./jupyter-venv/bin/jupyter-notebook
# This will open your browser to Jupyter at http://localhost:8888/
You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.
A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account.
Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on.
from pathlib import Path
from bs4 import BeautifulSoup as Soup
base = Path("/Users/simonw/Dropbox/Development/24ways-search")
articles = list(base.glob("*/*/*/*.html"))
# articles is now a list of paths that look like this:
# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')
docs = []
for path in articles:
year = str(path.relative_to(base)).split("/")[1]
url = 'https://' + str(path.relative_to(base).parent) + '/'
soup = Soup(path.open().read(), "html5lib")
author = soup.select_one(".c-continue")["title"].split(
"More information about"
)[1].strip()
author_slug = soup.select_one(".c-continue")["href"].split(
"/authors/"
)[1].split("/")[0]
published = soup.select_one(".c-meta time")["datetime"]
contents = soup.select_one(".e-content").text.strip()
title = soup.find("title").text.split(" ◆")[0]
try:
topic = soup.select_one(
'.c-meta a[href^="/topics/"]'
)["href"].split("/topics/")[1].split("/")[0]
except TypeError:
topic = None
docs.append({
"title": title,
"contents": contents,
"year": year,
"author": author,
"author_slug": author_slug,
"published": published,
"url": url,
"topic": topic,
})
After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:
[
{
"title": "Why Bother with Accessibility?",
"contents": "Web accessibility (known in other fields as inclus...",
"year": "2013",
"author": "Laura Kalbag",
"author_slug": "laurakalbag",
"published": "2013-12-10T00:00:00+00:00",
"url": "https://24ways.org/2013/why-bother-with-accessibility/",
"topic": "design"
},
{
"title": "Levelling Up",
"contents": "Hello, 24 ways. Iu2019m Ashley and I sell property ins...",
"year": "2013",
"author": "Ashley Baxter",
"author_slug": "ashleybaxter",
"published": "2013-12-06T00:00:00+00:00",
"url": "https://24ways.org/2013/levelling-up/",
"topic": "business"
},
...
My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries.
import sqlite_utils
db = sqlite_utils.Database("/tmp/24ways.db")
db["articles"].insert_all(docs)
That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.
(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)
You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:
$ sqlite3 /tmp/24ways.db
sqlite> .headers on
sqlite> .mode column
sqlite> select title, author, year from articles;
title author year
------------------------------ ------------ ----------
Why Bother with Accessibility? Laura Kalbag 2013
Levelling Up Ashley Baxte 2013
Project Hubs: A Home Base for Brad Frost 2013
Credits and Recognition Geri Coady 2013
Managing a Mind Christopher 2013
Run Ragged Mark Boulton 2013
Get Started With GitHub Pages Anna Debenha 2013
Coding Towards Accessibility Charlie Perr 2013
...
<Ctrl+D to quit>
There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:
db["articles"].enable_fts(["title", "author", "contents"])
Introducing Datasette
Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet.
We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that?
If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article.
If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:
./jupyter-venv/bin/pip install datasette
This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.
Now you can run Datasette against the 24ways.db file we created earlier like so:
./jupyter-venv/bin/datasette /tmp/24ways.db
This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.
If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db
Publishing the database to the internet
One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish:
$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways
-----> Python app detected
-----> Installing requirements with pip
-----> Running post-compile hook
-----> Discovering process types
Procfile declares types -> web
-----> Compressing...
Done: 47.1M
-----> Launching...
Released v8
https://search-24ways.herokuapp.com/ deployed to Heroku
If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways.
You can run this command as many times as you like to deploy updated versions of the underlying database.
Searching and faceting
Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.
SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A
A neat feature of Datasette is the ability to calculate facets against your data. Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic
Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:
http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A
Better search using custom SQL
The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.
This exposes the underlying SQL query, which looks like this:
select rowid, * from articles where rowid in (
select rowid from articles_fts where articles_fts match :search
) order by rowid limit 101
We can do better than this by constructing a custom SQL query. Here’s the query we will use instead:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || "*"
order by rank limit 10;
You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.
There’s a lot going on here! Let’s break the SQL down line-by-line:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on.
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.
from articles
join articles_fts on articles.rowid = articles_fts.rowid
articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.
where articles_fts match :search || "*"
order by rank limit 10;
:search || "*" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.
How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects:
JSON API call to search for articles matching SVG
The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
A simple JavaScript autocomplete search interface
I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.
We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:
function debounce(func, wait, immediate) {
let timeout;
return function() {
let context = this, args = arguments;
let later = () => {
timeout = null;
if (!immediate) func.apply(context, args);
};
let callNow = immediate && !timeout;
clearTimeout(timeout);
timeout = setTimeout(later, wait);
if (callNow) func.apply(context, args);
};
};
We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.
Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these:
const htmlEscape = (s) => s.replace(
/&/g, '&amp;' // escape ampersands first so the entities below aren't double-escaped
).replace(
/>/g, '&gt;'
).replace(
/</g, '&lt;'
).replace(
/"/g, '&quot;'
).replace(
/'/g, '&#039;'
);
We need some HTML for the search form, and a div in which to render the results:
<h1>Autocomplete search</h1>
<form>
<p><input id="searchbox" type="search" placeholder="Search 24ways" style="width: 60%"></p>
</form>
<div id="results"></div>
And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:
// Embed the SQL query in a multi-line backtick string:
const sql = `select
  snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
  articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || "*"
order by rank limit 10`;
// Grab a reference to the <input type="search">
const searchbox = document.getElementById("searchbox");
// Used to avoid race-conditions:
let requestInFlight = null;
searchbox.onkeyup = debounce(() => {
  const q = searchbox.value;
  // Construct the API URL, using encodeURIComponent() for the parameters
  const url = (
    "https://search-24ways.herokuapp.com/24ways-866073b.json?sql=" +
    encodeURIComponent(sql) +
    `&search=${encodeURIComponent(q)}&_shape=array`
  );
  // Unique object used just for race-condition comparison
  let currentRequest = {};
  requestInFlight = currentRequest;
  fetch(url).then(r => r.json()).then(d => {
    if (requestInFlight !== currentRequest) {
      // Avoid race conditions where a slow request returns
      // after a faster one.
      return;
    }
    let results = d.map(r => `
      <div class="result">
        <h3><a href="${r.url}">${htmlEscape(r.title)}</a></h3>
        <p><small>${htmlEscape(r.author)} - ${r.year}</small></p>
        <p>${highlight(r.snippet)}</p>
      </div>
    `).join("");
    document.getElementById("results").innerHTML = results;
  });
}, 100); // debounce every 100ms
There’s just one more utility function, used to help construct the HTML results:
const highlight = (s) => htmlEscape(s).replace(
  /b4de2a49c8/g, '<b>'
).replace(
  /8c94a2ed4b/g, '</b>'
);
This is what those unique strings passed to the snippet() function were for.
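For instance (the sentence here is made up, but the markers are the ones passed to snippet() in the query):
highlight("Fast b4de2a49c8autocomplete8c94a2ed4b search for your site");
// => "Fast <b>autocomplete</b> search for your site"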
Avoiding race conditions in autocomplete
One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:
User types acces
Browser sends request A - querying documents matching acces*
User continues to type accessibility
Browser sends request B - querying documents matching accessibility*
Request B returns. It was fast, because there are fewer documents matching the full term
The results interface updates with the documents from request B, matching accessibility*
Request A returns results (this was the slower of the two requests)
The results interface updates with the documents from request A - results matching acces*
This is a terrible user experience: the user saw the results they wanted for a brief moment, only to have them snatched away and replaced with the results of the earlier, slower request.
Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.
Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.
When the fetch() completes, I use requestInFlight !== currentRequest to check whether the currentRequest object is still the one that was most recently put in flight. If a new request has been triggered since we started the current one, we can detect that and avoid updating the results.
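Distilled down to just that pattern, the idea looks something like this sketch (renderResults() is a hypothetical stand-in for the DOM update):
let requestInFlight = null;

function startSearch(url) {
  // A fresh object is only ever === to itself, so it works as a token:
  const currentRequest = {};
  requestInFlight = currentRequest;
  fetch(url).then(r => r.json()).then(data => {
    if (requestInFlight !== currentRequest) {
      // A newer request has started since this one; discard these results.
      return;
    }
    renderResults(data); // hypothetical stand-in for updating the DOM
  });
}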
It’s not a lot of code, really
And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like.
How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with.
A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.
More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.
For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter. |
2018 |
Simon Willison |
simonwillison |
2018-12-19T00:00:00+00:00 |
https://24ways.org/2018/fast-autocomplete-search-for-your-website/ |
code |