rowid,title,contents,year,author,author_slug,published,url,topic 247,Managing Flow and Rhythm with CSS Custom Properties,"An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better, but it can also improve their scannability. Browsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often reset these styles with a reset. Elements such as
<div> and <section>
also have no default margin or padding associated with them. I’ve tried all sorts of weird and wonderful techniques to strike a balance between using inherited CSS and levelling the playing field for component driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in! The Flow utility With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy. The code .flow { --flow-space: 1em; } .flow > * + * { margin-top: var(--flow-space); } What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them. The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty of setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need to. Pretty cool, right? Let’s look at some examples. A basic example See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/LXqerj What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. Because our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a
<h2>
in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we’d affect everything because margin values would be calculated as 1.1X the font size. This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though? See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/qQgxaY I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances: h2 { --flow-space: 3rem; } Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured. A more advanced example Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space. See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/ZmPGyL In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself:
...
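As a sketch, the markup pattern being described might look something like this (the heading and paragraph content here are my own illustration; only the flow and media__content class names come from the demo): <div class=""media__content flow""> <h2>A heading</h2> <p>The first paragraph of the article.</p> <p>Another paragraph, automatically spaced by --flow-space.</p> </div> Every direct child dropped into that container picks up the same vertical rhythm, with no extra layout CSS required.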
Something to remember though: the custom properties cascade in the same way that other CSS values do, so you’ve got to keep that in mind. We’ve got a great example of that here: because we’ve got the flow utility on our .features component, which has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element. “But what about old browsers?”, I hear you cry We’re using CSS Custom Properties that, at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size: .flow { --flow-space: 1em; } .flow > * + * { margin-top: 1em; margin-top: var(--flow-space); } Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement! Wrapping up This tiny little utility can bring great power when you want to consistently space elements vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS. If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on!",2018,Andy Bell,andybell,2018-12-07T00:00:00+00:00,https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/,code 248,How to Use Audio on the Web,"I know what you’re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! 🙉 You’re having flashbacks, flashbacks to the days of yore, when we had a <bgsound> element and yup did everyone think that was the most rad thing since <blink>. I mean, put those two together with a <marquee>, only use CSS colour names, make sure your borders were all set to ridge and you’ve got yourself the neatest website since 1998. The sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then. Yes, it is 2018, the end of it in fact, soon to be 2019. We are certainly living in the future. Hoverboards? Self-driving cars. Holodecks? VR headsets. Rocket boots? Drone racing. Sound on websites? Get real, Ruth. We can’t help but be jaded, even though the <bgsound> element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy “controls when video and audio is allowed to autoplay”, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future. But then of course comes the question, having lived in a muted present for so long, where and why would you use audio? ✨ Showcase Time ✨ There are some incredible uses of audio on websites today. This is my personal favourite futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience. futurelibrary.no Another site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.
pottermore.com Eighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive; it adds to the experience. Eighty-six and a half years Sound can be powerful and in some cases useful. Last year I wrote about using sound to help validate forms. Audiochart is a library which “allows the user to explore charts on web pages using sound and the keyboard”. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here. Then there’s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers we have with the Web Audio API today. lightnote.co learningmusic.ableton.com Considerations Yes, ’tis the season, let’s be more thoughtful about our audios. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to ‘enter’ the site and headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) by stating headphones are recommended you are setting the user’s expectations; they will expect sound, and if in a public setting can enlist the use of a common electronic device to cause less embarrassment. Eighty-six and a half years Allowing mute and/or volume control clearly within the user interface is a good idea. It won’t draw the user out of the experience, it’ll give the user more control over what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a very visible volume control than to fumble with device settings. Indicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn’t the first place everyone will look. To The Future So let’s go! We see amazing demos built with Web Audio, and I’m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that. But audio doesn’t actually need to be all bells and whistles (hey, it’s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design. Isn’t it great then that there’s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers). This year I believe we have all experienced the web as a shopping mall more than ever. It’s shining store fronts, flashing adverts, fast food, loud noises.
Let’s use 2019 to create more forests to explore, oceans to dive and mountains to climb.",2018,Ruth John,ruthjohn,2018-12-22T00:00:00+00:00,https://24ways.org/2018/how-to-use-audio-on-the-web/,design 249,Fast Autocomplete Search for Your Website,"Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive. I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface. Try it out here, then read on to see how I built it. First step: crawling the data The first step in building a search engine is to grab a copy of the data that you plan to make searchable. There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need. I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites. We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running. Then we’ll tell wget to recursively crawl the website, using the --recursive flag. We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2 The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X ""/*/*/comments"". Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this. Tie all of those options together and we get this command: wget --recursive --wait 2 --no-clobber -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 -X ""/*/*/comments"" https://24ways.org/archives/ If you leave this running for a few minutes, you’ll end up with a folder structure something like this: $ find 24ways.org 24ways.org 24ways.org/2013 24ways.org/2013/why-bother-with-accessibility 24ways.org/2013/why-bother-with-accessibility/index.html 24ways.org/2013/levelling-up 24ways.org/2013/levelling-up/index.html 24ways.org/2013/project-hubs 24ways.org/2013/project-hubs/index.html 24ways.org/2013/credits-and-recognition 24ways.org/2013/credits-and-recognition/index.html ... As a quick sanity check, let’s count the number of HTML pages we have retrieved: $ find 24ways.org | grep index.html | wc -l 328 There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. 
They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead: wget --recursive --wait 2 --no-clobber -I /2018 -X ""/*/*/comments"" https://24ways.org/ Thanks to --no-clobber, this is safe to run every day in December to pick up any new content. We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index. Building a search index using SQLite There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL. I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite. SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch! SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications. This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year. It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension. I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday. If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages: $ python3 -m venv ./jupyter-venv $ ./jupyter-venv/bin/pip install jupyter # ... lots of installer output # Now lets install some extra packages we will need later $ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib # And start the notebook web application $ ./jupyter-venv/bin/jupyter-notebook # This will open your browser to Jupyter at http://localhost:8888/ You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook. A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account. ​ Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on. 
from pathlib import Path from bs4 import BeautifulSoup as Soup base = Path(""/Users/simonw/Dropbox/Development/24ways-search"") articles = list(base.glob(""*/*/*/*.html"")) # articles is now a list of paths that look like this: # PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html') docs = [] for path in articles: year = str(path.relative_to(base)).split(""/"")[1] url = 'https://' + str(path.relative_to(base).parent) + '/' soup = Soup(path.open().read(), ""html5lib"") author = soup.select_one("".c-continue"")[""title""].split( ""More information about"" )[1].strip() author_slug = soup.select_one("".c-continue"")[""href""].split( ""/authors/"" )[1].split(""/"")[0] published = soup.select_one("".c-meta time"")[""datetime""] contents = soup.select_one("".e-content"").text.strip() title = soup.find(""title"").text.split("" ◆"")[0] try: topic = soup.select_one( '.c-meta a[href^=""/topics/""]' )[""href""].split(""/topics/"")[1].split(""/"")[0] except TypeError: topic = None docs.append({ ""title"": title, ""contents"": contents, ""year"": year, ""author"": author, ""author_slug"": author_slug, ""published"": published, ""url"": url, ""topic"": topic, }) After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this: [ { ""title"": ""Why Bother with Accessibility?"", ""contents"": ""Web accessibility (known in other fields as inclus..."", ""year"": ""2013"", ""author"": ""Laura Kalbag"", ""author_slug"": ""laurakalbag"", ""published"": ""2013-12-10T00:00:00+00:00"", ""url"": ""https://24ways.org/2013/why-bother-with-accessibility/"", ""topic"": ""design"" }, { ""title"": ""Levelling Up"", ""contents"": ""Hello, 24 ways. Iu2019m Ashley and I sell property ins..."", ""year"": ""2013"", ""author"": ""Ashley Baxter"", ""author_slug"": ""ashleybaxter"", ""published"": ""2013-12-06T00:00:00+00:00"", ""url"": ""https://24ways.org/2013/levelling-up/"", ""topic"": ""business"" }, ... My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries. import sqlite_utils db = sqlite_utils.Database(""/tmp/24ways.db"") db[""articles""].insert_all(docs) That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table. (I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.) You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this: $ sqlite3 /tmp/24ways.db sqlite> .headers on sqlite> .mode column sqlite> select title, author, year from articles; title author year ------------------------------ ------------ ---------- Why Bother with Accessibility? Laura Kalbag 2013 Levelling Up Ashley Baxte 2013 Project Hubs: A Home Base for Brad Frost 2013 Credits and Recognition Geri Coady 2013 Managing a Mind Christopher 2013 Run Ragged Mark Boulton 2013 Get Started With GitHub Pages Anna Debenha 2013 Coding Towards Accessibility Charlie Perr 2013 ... There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. 
We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this: db[""articles""].enable_fts([""title"", ""author"", ""contents""]) Introducing Datasette Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet. We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that? If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article. If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter: ./jupyter-venv/bin/pip install datasette This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette. Now you can run Datasette against the 24ways.db file we created earlier like so: ./jupyter-venv/bin/datasette /tmp/24ways.db This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application. If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db Publishing the database to the internet One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish: $ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways -----> Python app detected -----> Installing requirements with pip -----> Running post-compile hook -----> Discovering process types Procfile declares types -> web -----> Compressing... Done: 47.1M -----> Launching... Released v8 https://search-24ways.herokuapp.com/ deployed to Heroku If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways. You can run this command as many times as you like to deploy updated versions of the underlying database. Searching and faceting Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action. ​ SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A A neat feature of Datasette is the ability to calculate facets against your data. 
Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL: http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A Better search using custom SQL The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page. This exposes the underlying SQL query, which looks like this: select rowid, * from articles where rowid in ( select rowid from articles_fts where articles_fts match :search ) order by rowid limit 101 We can do better than this by constructing a custom SQL query. Here’s the query we will use instead: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || ""*"" order by rank limit 10; You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute. There’s a lot going on here! Let’s break the SQL down line-by-line: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on. articles_fts.rank, articles.title, articles.url, articles.author, articles.year These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table. from articles join articles_fts on articles.rowid = articles_fts.rowid articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it. where articles_fts match :search || ""*"" order by rank limit 10; :search || ""*"" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results. How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects: JSON API call to search for articles matching SVG The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
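Before writing any JavaScript, it can be worth sanity-checking the JSON API from the command line. Here is a quick sketch using curl (the database hash in the URL is copied from the links above, and will differ for your own deployment): curl ""https://search-24ways.herokuapp.com/24ways-ae60295.json?sql=select+title,+url+from+articles+limit+3&_shape=array"" This should return a plain JSON array of three objects, each with a title and a url key, confirming that arbitrary select queries can be passed straight to the database endpoint.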
A simple JavaScript autocomplete search interface I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own. We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh: function debounce(func, wait, immediate) { let timeout; return function() { let context = this, args = arguments; let later = () => { timeout = null; if (!immediate) func.apply(context, args); }; let callNow = immediate && !timeout; clearTimeout(timeout); timeout = setTimeout(later, wait); if (callNow) func.apply(context, args); }; }; We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing. Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these: const htmlEscape = (s) => s.replace( /&/g, '&amp;' ).replace( />/g, '&gt;' ).replace( /</g, '&lt;' ); We also need a little HTML for the interface itself: a heading, the search box and an element to hold the results: <h1>Autocomplete search</h1> <p><input id=""searchbox""></p> <div id=""results""></div> And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript: // Embed the SQL query in a multi-line backtick string: const sql = `select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || ""*"" order by rank limit 10`; // Grab a reference to the <input> const searchbox = document.getElementById(""searchbox""); // Used to avoid race-conditions: let requestInFlight = null; searchbox.onkeyup = debounce(() => { const q = searchbox.value; // Construct the API URL, using encodeURIComponent() for the parameters const url = ( ""https://search-24ways.herokuapp.com/24ways-866073b.json?sql="" + encodeURIComponent(sql) + `&search=${encodeURIComponent(q)}&_shape=array` ); // Unique object used just for race-condition comparison let currentRequest = {}; requestInFlight = currentRequest; fetch(url).then(r => r.json()).then(d => { if (requestInFlight !== currentRequest) { // Avoid race conditions where a slow request returns // after a faster one. return; } let results = d.map(r => `
<div class=""result"">
<h3><a href=""${r.url}"">${htmlEscape(r.title)}</a></h3>
<p><small>${htmlEscape(r.author)} - ${r.year}</small></p>
<p>${highlight(r.snippet)}</p>
</div>
`).join(""""); document.getElementById(""results"").innerHTML = results; }); }, 100); // debounce every 100ms There’s just one more utility function, used to help construct the HTML results: const highlight = (s) => htmlEscape(s).replace( /b4de2a49c8/g, '<b>' ).replace( /8c94a2ed4b/g, '</b>' ); This is what those unique strings passed to the snippet() function were for. Avoiding race conditions in autocomplete One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case: User types acces Browser sends request A - querying documents matching acces* User continues to type accessibility Browser sends request B - querying documents matching accessibility* Request B returns. It was fast, because there are fewer documents matching the full term The results interface updates with the documents from request B, matching accessibility* Request A returns results (this was the slower of the two requests) The results interface updates with the documents from request A - results matching access* This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on. Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null. Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well. When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results. It’s not a lot of code, really And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like. How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with. A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite. More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now. For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.",2018,Simon Willison,simonwillison,2018-12-19T00:00:00+00:00,https://24ways.org/2018/fast-autocomplete-search-for-your-website/,code 251,"The System, the Search, and the Food Bank","Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center.
On the belt are plastic bins filled with assortments of shelf-stable food—one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like “Bottled Water” or “Cereal” or “Candy.” Such was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins. I could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there’s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into. Look, I can’t quite explain it: I just know that I love sorting, organizing, and classifying things—groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask? The opportunity to create meaning from nothing is at the core of my excitement, which is why I’ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. “I can’t believe they’re letting me do this,” I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey. The jumble of donated items coming into the center need to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It’s not just a nice-to-have that we spent our morning separating cookies from carrots—it’s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we’re talking about donated groceries or—there it is—web content. This happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.) Is X a Y? was the question at the heart of our food sorting—but it’s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves: What is the criteria that defines Y? Does X meet that criteria? We don’t usually articulate it so concretely because it’s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn’t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn’t allowed, causing a small cognitive hiccup—this X is NOT a Y—that sometimes meant having to re-sort our boxes. On the web, we’re interested—I would hope—in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer—Is this article from a trustworthy source? 
Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y? We have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means—well, this means a lot of things, and I’ve got limited space here, so let’s focus on these three lessons from the food bank: Use plain, familiar language. This advice seems to be given constantly, but that’s because it’s solid and it’s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it’s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I’ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows—not only will they be more likely to spot what they’re looking for on your site or app, but you’ll turn up more often in search results. Create consistency in all things. Missteps in consistency look like my earlier chicken broth example—changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts—these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall. Make the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it. This idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn’t know every category label around the conveyer belt, nor what criteria each category warranted. The system wasn’t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We’d ask staff members. We’d ask more seasoned volunteers. We’d ask each other. We’d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn. The more we learned, the easier the sorting became. 
That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it. The same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away—or, worse, misinform or hurt them. This is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y—well-sorted content can give them the answer.",2018,Lisa Maria Martin,lisamariamartin,2018-12-16T00:00:00+00:00,https://24ways.org/2018/the-system-the-search-and-the-food-bank/,content 252,Turn Jekyll up to Eleventy,"Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option. The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013. By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree. Introducing Eleventy Eleventy is a more flexible alternative to Jekyll. Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains). As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way1. Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example.
If you want to follow along, start by cloning the git repository, and then change into the project directory: git clone https://github.com/mattcone/markdown-guide.git cd markdown-guide Before you start If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js but, if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM: Install Node.js: Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node. Windows: Download the Windows installer from the Node.js website and follow the instructions. Initiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems’s Gemfile, this file contains a list of your project’s third-party dependencies. If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too. Installing Eleventy With Node.js installed and your project set up to work with NPM, we can now install Eleventy as a dependency: npm install --save-dev @11ty/eleventy If you open package.json you should see the following: … ""devDependencies"": { ""@11ty/eleventy"": ""^0.6.0"" } … We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following: npx eleventy --input=README.md --formats=md This command will generate a rendered HTML file at _site/README/index.html. Eleventy shares Jekyll’s default name for its output directory (_site), a pattern we will see repeatedly during the transition. Configuration Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on. We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options: module.exports = function(eleventyConfig) { return { dir: { input: ""./"", // Equivalent to Jekyll's source property output: ""./_site"" // Equivalent to Jekyll's destination property } }; }; A few other things to bear in mind: Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore). By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins. Layouts One area where Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub). Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder.
We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config: module.exports = function(eleventyConfig) { // Aliases are in relation to the _includes folder eleventyConfig.addLayoutAlias('about', 'layouts/about.html'); eleventyConfig.addLayoutAlias('book', 'layouts/book.html'); eleventyConfig.addLayoutAlias('default', 'layouts/default.html'); return { dir: { input: ""./"", output: ""./_site"" } }; } Determining which template language to use Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do. Variables On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating. Site variables Alongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values: title: ""Markdown Guide"" url: https://www.markdownguide.org baseurl: """" repo: http://github.com/mattcone/markdown-guide comments: false author: name: ""Matt Cone"" og_locale: ""en_US"" Eleventy’s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged. { ""title"": ""Markdown Guide"", ""url"": ""https://www.markdownguide.org"", ""baseurl"": """", ""repo"": ""http://github.com/mattcone/markdown-guide"", ""comments"": false, ""author"": { ""name"": ""Matt Cone"" }, ""og_locale"": ""en_US"" } Page variables The table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:
Jekyll → Eleventy
page.url → page.url
page.date → page.date
page.path → page.inputPath
page.id → page.outputPath
page.name → page.fileSlug
page.content → content
page.title → title
page.foobar → foobar
When iterating through pages, frontmatter values are available via the data object while content is available via templateContent:
Jekyll → Eleventy
item.url → item.url
item.date → item.date
item.path → item.inputPath
item.name → item.fileSlug
item.id → item.outputPath
item.content → item.templateContent
item.title → item.data.title
item.foobar → item.data.foobar
Ideally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data. Pagination variables Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data.
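For example, here is a minimal sketch of Eleventy’s pagination front matter, where testdata is an invented key holding the items to paginate:
---
pagination:
  data: testdata
  size: 2
testdata:
  - item1
  - item2
  - item3
  - item4
---
Eleventy will generate one output page for every two items, exposing the current chunk to the template as pagination.items.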
Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:
Jekyll → Eleventy
paginator.page → pagination.pageNumber
paginator.per_page → pagination.size
paginator.posts → pagination.items
paginator.previous_page_path → pagination.previousPageHref
paginator.next_page_path → pagination.nextPageHref
Filters Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where. The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform: // {{ variable | jsonify }} eleventyConfig.addFilter('jsonify', function (variable) { return JSON.stringify(variable); }); Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match: {{ site.members | where: ""graduation_year"",""2014"" }} To account for this, instead of passing one value to the second argument of addFilter, we can pass three: the array we want to examine, the key we want to look for and the value it should match: // {{ array | where: key,value }} eleventyConfig.addFilter('where', function (array, key, value) { return array.filter(item => { const keys = key.split('.'); const reducedKey = keys.reduce((object, key) => { return object[key]; }, item); return (reducedKey === value ? item : false); }); }); There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew! Includes As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this: {% include include.html value=""key"" %} {{ include.value }} in Eleventy, you would do this: {% include ""include.html"", value: ""key"" %} {{ value }} A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments. Tweaking Liquid You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option.
Thankfully, Eleventy allows us to pass options to LiquidJS: eleventyConfig.setLiquidOptions({ dynamicPartials: false, root: [ '_includes', '.' ] }); Collections Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way. Collections in Jekyll In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections: collections: - basic-syntax - extended-syntax These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so: {% for syntax in site.extended-syntax %} {{ syntax.title }} {% endfor %} Collections in Eleventy There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files: --- title: Strikethrough syntax-id: strikethrough syntax-summary: ""~~The world is flat.~~"" tag: extended-syntax --- We can then iterate over tagged content like this: {% for syntax in collections.extended-syntax %} {{ syntax.data.title }} {% endfor %} Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters): eleventyConfig.addCollection('basic-syntax', collection => { return collection.getFilteredByGlob('_basic-syntax/*.md'); }); eleventyConfig.addCollection('extended-syntax', collection => { return collection.getFilteredByGlob('_extended-syntax/*.md'); }); We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result: eleventyConfig.addCollection('example', collection => { return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => { return a.data.display_order - b.data.display_order; }); }); Hopefully, this gives you just a hint of what’s possible using this approach. Using directory data to manage defaults By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following: --- title: Lists syntax-id: lists api: ""no"" permalink: /basic-syntax/lists.html --- Again, this is probably not something we want to manage on a file-by-file basis but again, Eleventy has features that can help: directory data and permalink variables. For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path: { ""layout"": ""syntax"", ""tag"": ""basic-syntax"", ""permalink"": ""basic-syntax/{{ title | slug }}.html"" } However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs, it merely uses content files to store data. So let’s change things around a little. 
No longer tied to Jekyll's rules about where collection folders should be saved and how they should be labelled, we'll move them into a folder called _content:

markdown-guide
└── _content
    ├── basic-syntax
    ├── extended-syntax
    ├── getting-started
    └── _content.json

We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published: { ""permalink"": false } Static files Eleventy only transforms files whose template language it's familiar with. But often we may have static assets that don't need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true: module.exports = function(eleventyConfig) { … // Copy the `assets` directory to the compiled site folder eleventyConfig.addPassthroughCopy('assets'); return { dir: { input: ""./"", output: ""./_site"" }, passthroughFileCopy: true }; } Final considerations Assets Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts — we have plenty of choices in that department already. If you've been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, which are sadly beyond the scope of this article. Publishing to GitHub Pages One of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy-generated site — or any site not built with Jekyll — to GitHub Pages can be quite involved; it typically means copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It's enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere! Going one louder If you've been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder of why it can be prudent to avoid jumping aboard bandwagons. While it's fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy's appeal, it's only a year old, so it has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it. I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that's perfectly fine! But these go to 11.
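For easy reference, here's roughly how the configuration pieces covered in this article might sit together in a single file. This is a hedged sketch that simply gathers the examples above; it isn't the Markdown Guide's actual .eleventy.js:

module.exports = function (eleventyConfig) {
  // A filter ported from Jekyll
  eleventyConfig.addFilter('jsonify', function (variable) {
    return JSON.stringify(variable);
  });

  // Liquid tweaks: allow unquoted include names, plus extra include paths
  eleventyConfig.setLiquidOptions({
    dynamicPartials: false,
    root: ['_includes', '.']
  });

  // A collection defined with a glob pattern
  eleventyConfig.addCollection('basic-syntax', collection => {
    return collection.getFilteredByGlob('_basic-syntax/*.md');
  });

  // Static assets copied straight through
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: { input: './', output: './_site' },
    passthroughFileCopy: true
  };
};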
Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5 ↩",2018,Paul Lloyd,paulrobertlloyd,2018-12-11T00:00:00+00:00,https://24ways.org/2018/turn-jekyll-up-to-eleventy/,content 253,Clip Paths Know No Bounds,"CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance. The basics of a clip path Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everywhere in the element that is not within the bounds of our shape will be visually removed. Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions. See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen. So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines. clip-path: polygon( 30% 0%, 70% 0%, 100% 30%, 100% 70%, 70% 100%, 30% 100%, 0% 70%, 0% 30% ); A shape with less vertices than the eye can see It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%. Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element. See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen. By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons. Our earlier octagon can similarly be made with only four points. See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen. Multiple shapes, one clip path We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon(). See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen. Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next as needed. It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles. See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen. Different shapes with fill rules A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. 
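To see where the fill rule lives in the syntax before we unpack what it does, here's a hedged sketch of a five-point star whose edges cross. The coordinates are my own, not taken from the Pens above; they are listed in pentagram order, with the optional fill rule passed as the first argument to polygon():

.star {
  /* the five points are drawn in pentagram order, so the edges cross */
  clip-path: polygon(
    evenodd,
    50% 0%,
    21% 90%,
    98% 35%,
    2% 35%,
    79% 90%
  );
}

With edges that cross like this, the choice of fill rule decides whether the small pentagon at the centre of the star stays filled or becomes a hole.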
The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape. See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen. As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path's lines, the point is considered outside, and if it crosses an odd number the point is inside. Order of operations It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more. These compositing effects are applied in this order:

1. CSS Filters (e.g. filter: blur(2px))
2. Clipping (e.g. what this article is about)
3. Masking (Clipping's cousin)
4. Blend Modes (e.g. mix-blend-mode: multiply)
5. Opacity

This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element's box edges. See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen. If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally. Revealing content with animation CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as does the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen. Don't keep CSS Shapes in a box Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible, which makes this property fairly powerful.",2018,Dan Wilson,danwilson,2018-12-20T00:00:00+00:00,https://24ways.org/2018/clip-paths-know-no-bounds/,code 255,Inclusive Considerations When Restyling Form Controls,"I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility. I would like to begin by saying all these things. However, they're not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I'm thinking of next year. Returning to reality, styling form controls is more tricky and time-consuming these days than flat-out “hard”.
In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match. However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve. Quick level setting Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following: Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels. It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics. Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing. Visually resetting and restyling form controls Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors. As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation. See the Pen form control styling comparisons by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/gZOrZm/ Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated. If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browser respect without you even realizing. You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting? It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements: Non-standard This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. 
There may also be large incompatibilities between implementations and the behavior may change in the future. While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all. Can’t make it? Fake it. Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended. Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling. Getting clever with markup and CSS The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox. See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/RqEayN/ Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen? As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others. Limitations to cleverness Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to. However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider? Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach. Using ARIA when appropriate Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. 
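Before we do, here's roughly what the sibling-selector checkbox pattern from the previous section boils down to. This is a hedged sketch of the general approach; the class names and exact styles are my own invention, not the code from the CodePen above:

<label class=""checkbox"">
  <input type=""checkbox"">
  <span class=""checkbox__control""></span>
  Send me updates
</label>

.checkbox {
  position: relative;
}

.checkbox input {
  /* hidden from view, but kept at the size and position of the drawn
     control so touch exploration and clicks still land on the real input */
  position: absolute;
  width: 1.5em;
  height: 1.5em;
  opacity: 0;
}

/* the drawn control mirrors the real input's state via sibling selectors */
.checkbox input:checked + .checkbox__control {
  background: rebeccapurple;
}

.checkbox input:focus + .checkbox__control {
  outline: 2px solid currentColor;
}

When even a pattern like this can't bend a native control far enough, building from scratch is the remaining option.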
Fair warning though: just because we're not leveraging a native HTML control as our foundation, it doesn't mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA). ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren't native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA: If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so. By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible. Exceptions to the rule One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control. Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized. While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain, which communicate a different way of interacting with the control than what you might expect from a native switch control. See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/XyvoeE/ By adding a role=""switch"" to these checkboxes, we can repurpose the native control's inherent checked/unchecked states, its ability to be focused with the Tab key, and the Space key's power to toggle its state. But while this is a valid approach to take in building a switch, how does this actually match up to reality? Does it pass the test(s)? Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we've restyled or rebuilt works the way people expect it to, if not better. What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:

- Is the control implemented in all supported browsers? If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field? If so: is each browser's implementation a good user experience? Is there room for improvement that can be tested against the native baseline?
- Test with multiple input devices.
Where the control is implemented, what is the quality of the user experience when using different input devices, such as mouse, touchscreen, keyboard, speech recognition or switch device, to name a few? You'll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well.
- How well does the control take to custom styles? If a control can be styled enough to not need to be rebuilt from scratch, that's great! But make sure that there are no adverse effects on its accessibility. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling. Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled.
- Do specifications match reality? If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations? For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don't match people's expectations, then maybe another solution is in order?
Wrapping up While styling form controls is definitely easier than it's ever been, that doesn't mean that it's at all simple, nor will it likely ever be. The level of difficulty you're going to face is going to depend entirely on what it is you're hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you'll still be reliant on browsers and assistive technologies being able to fully understand the component they've been presented. Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option. 2018 didn't end up being the year we got full customization of form controls sorted out. But that's OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs. And hey. There's always next year, right?",2018,Scott O'Hara,scottohara,2018-12-13T00:00:00+00:00,https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/,code 256,Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist,"We're going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology!
Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month. *(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!) Looking for critters in rocky intertidal habitats One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’ A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.) ​ They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons. My favourite place to go tidepooling here is Pillar Point in Half Moon bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones as well as being relatively accessible. ​ I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist. Using technology and the croudsourced species observations of the iNaturalist community we can shortcut our way to this superpower! Finding nearby animals with iNaturalist We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society. I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting. This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF. ​ iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. 
For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the 'Try it out' button. You'll then get a URL that looks a bit like https://api.inaturalist.org/v1/observations?captive=false &geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584 &radius=5&order=desc&order_by=created_at which you can call and interrogate using a programming language of your choice. If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season). Rapid development using Observable Notebooks We're going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable; they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python, which are similar but take a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn't require any setup at all. You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example. Each 'notebook' consists of a page with a column of 'cells', similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks; I like this code introduction one from Observable (and D3) creator Mike Bostock. Developing your Naturalist superpowers If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you. For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in. We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com The tool lets you export the bounding box in several forms using the dropdown at the bottom left under the map. We are going to use the 'DublinCore' format as it's closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811

A quick map primer:

- The higher the latitude, the more north it is
- The lower the latitude, the more south it is
- Latitude 0 = the equator
- The higher the longitude, the more east it is of Greenwich
- The lower the longitude, the more west it is of Greenwich
- Longitude 0 = Greenwich

In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:

nelat = highest latitude = north limit = 37.499811
nelng = highest longitude = east limit = -122.492738
swlat = smallest latitude = south limit = 37.492805
swlng = smallest longitude = west limit = -122.50542

As API parameters these look like this: ?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542 These parameters in this format can be used for most of the iNaturalist API methods. Nudibranch seasonality in Pillar Point We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box. In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist's internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent 'Order Nudibranchia'. Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words eg 'Glaucus Atlanticus' the first Latin word is the 'Genus' like a family name 'Glaucus', and the second word identifies that particular species, like a given name 'Atlanticus'. The two Latin words together indicate a specific species. The term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus Atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead. The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box: pillar_point_counts_per_week = fetch( ""https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true"" ).then(response => { return response.json(); }) Our next step is to take this data and draw a graph! We'll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks. (Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite) The iNaturalist API returns data that looks like this: { ""total_results"": 53, ""page"": 1, ""per_page"": 53, ""results"": { ""week_of_year"": { ""1"": 136, ""2"": 20, ""3"": 150, ""4"": 65, ""5"": 186, ""6"": 74, ""7"": 47, ""8"": 87, ""9"": 64, ""10"": 56, But for our Vega-Lite graph we need data that looks like this: [{ ""week"": ""01"", ""value"": 136 }, { ""week"": ""02"", ""value"": 20 }, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys: objects_to_plot = { let objects = []; Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) { objects.push({ week: `Wk ${week_index.toString()}`, observations: pillar_point_counts_per_week.results.week_of_year[week_index] }); }) return objects; } We can then plug this into Vega-Lite to draw us a graph: vegalite({ data: {values: objects_to_plot}, mark: ""bar"", encoding: { x: {field: ""week"", type: ""nominal"", sort: null}, y: {field: ""observations"", type: ""quantitative""} }, width: width * 0.9 }) It's worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences. So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create an object with the iNaturalist species ID and common & Latin names. pillar_point_nudibranches = { let api_results = await fetch( ""https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"" ).then(r => r.json()) let species_list = api_results.results.map(i => ({ value: i.taxon.id, label: `${i.taxon.preferred_common_name} (${i.taxon.name})` })); return species_list } We can create an interactive select box by importing code from Jeremy Ashkenas' Observable Notebook: add import {select} from ""@jashkenas/inputs"" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn't matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another! viewof select_species = select({ title: ""Which Nudibranch do you want to see seasonality for?"", options: [{value: """", label: ""All the Nudibranchs!""}, ...pillar_point_nudibranches], value: """" }) Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113. pillar_point_counts_per_month_per_species = fetch( `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true` ).then(r => r.json()) Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite: objects_to_plot_species_month = { let objects = []; Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) { objects.push({ month: (new Date(2018, (month_index - 1), 1)).toLocaleString(""en"", {month: ""long""}), observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index] }); }) return objects; } (Note that in the above code we are creating a date object for our specific month, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month.)
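If you haven't used toLocaleString() this way before, here's a quick sketch of what those month lookups return:

new Date(2018, 0, 1).toLocaleString('en', { month: 'long' });  // 'January'
new Date(2018, 11, 1).toLocaleString('en', { month: 'long' }); // 'December'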
And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update! vegalite({ data: {values: objects_to_plot_species_month}, mark: ""bar"", encoding: { x: {field: ""month"", type: ""nominal"", sort:null}, y: {field: ""observations"", type: ""quantitative""} }, width: width * 0.9 }) Now we can see when the best time of year is to plan a tidepooling trip to Pillar Point if we want to find a specific species of Nudibranch. This tool is great for planning when to go rockpooling at Pillar Point, but what if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs? Well… we can create ourselves a dynamic guide with a list of the species, their photo, name and how many times they have been observed in that month of the year! Our select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month. viewof selected_month = select({ title: ""When do you want to see Nudibranchs?"", options: [ { label: ""Whenever"", value: """" }, { label: ""January"", value: ""1"" }, { label: ""February"", value: ""2"" }, { label: ""March"", value: ""3"" }, { label: ""April"", value: ""4"" }, { label: ""May"", value: ""5"" }, { label: ""June"", value: ""6"" }, { label: ""July"", value: ""7"" }, { label: ""August"", value: ""8"" }, { label: ""September"", value: ""9"" }, { label: ""October"", value: ""10"" }, { label: ""November"", value: ""11"" }, { label: ""December"", value: ""12"" }, ], value: """" }) We can then use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We'll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name. all_species_data = fetch( `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true` ).then(r => r.json()) You can render HTML directly in a notebook cell using Observable's html tagged template literal:

// (the markup below is a reconstruction; the photo field taxon.default_photo is assumed)
html`
<h2>If you go to Pillar Point ${ {"""": """", ""1"":""in January"", ""2"":""in February"", ""3"":""in March"", ""4"":""in April"", ""5"":""in May"", ""6"":""in June"", ""7"":""in July"", ""8"":""in August"", ""9"":""in September"", ""10"":""in October"", ""11"":""in November"", ""12"":""in December"", }[selected_month] } you might see…</h2>
<ul>${all_species_data.results.map(s => `
  <li>
    <img src=""${s.taxon.default_photo.medium_url}"" alt=""${s.taxon.name}"">
    <h3>${s.taxon.name}</h3>
    <p>Seen ${s.count} times</p>
  </li>
`)}</ul>
`
These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month! ​ Play with it yourself in this Observable Notebook. Conclusion I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower. Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there. Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.",2018,Natalie Downe,nataliedowne,2018-12-18T00:00:00+00:00,https://24ways.org/2018/observable-notebooks-and-inaturalist/,code 260,The Art of Mathematics: A Mandala Maker Tutorial,"In front-end development, there’s often a great deal of focus on tools that aim to make our work more efficient. But what if you’re new to web development? When you’re just starting out, the amount of new material can be overwhelming, particularly if you don’t have a solid background in Computer Science. But the truth is, once you’ve learned a little bit of JavaScript, you can already make some pretty impressive things. A couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days: Mandala Maker user interface The coolest part about it is the fact that it’s a tool: anyone can use it to create something original and brand new. In this tutorial, we’ll build a smaller version of this app – a symmetrical drawing tool in ES5, JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we’re done, you’re on your own and can tweak it as you please. Be creative! Preparations: a blank canvas The first thing you’ll need for this project is a designated drawing space. We’ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like). Files Create 3 files: index.html, styles.css, main.js. Don’t forget to include your JS and CSS files in your HTML.

<!-- a minimal sketch of the page markup; the tutorial gives the canvas a width and height of 600px -->
<canvas width=""600"" height=""600"">
  Your browser doesn't support canvas.
</canvas>
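In case it saves you a lookup, wiring up the files is the usual link and script pair. A hedged sketch, with paths that follow the file names we just created:

<link rel=""stylesheet"" href=""styles.css"">
<script src=""main.js""></script>

Loading the script at the end of the body (or with the defer attribute) helps ensure the canvas element exists before main.js runs.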

I’ll ask you to update your HTML file at a later point, but the CSS file we’ll start with will stay the same throughout the project. This is the full CSS we are going to use: body { background-color: #ccc; text-align: center; } canvas { touch-action: none; background-color: #fff; } button { font-size: 110%; } Next steps We are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts: Building a simple drawing app with one line and one color Adding a Clear button and a color picker Adding more functionality: 2 line drawing (add the first reflection) Adding more functionality: 8 line drawing (add 6 more reflections!) Interactive demos This tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet. Part 1: A simple drawing app Let’s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables: var canvas, context, w, h, prevX = 0, currX = 0, prevY = 0, currY = 0, draw = false; The variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly many times while we move the pointer upon the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true. 1. init Responsible for canvas set up, this listens to pointer events and the location of their coordinates and sets everything in motion by calling other functions, which in turn handle touch and movement events. function init() { canvas = document.querySelector(""canvas""); context = canvas.getContext(""2d""); w = canvas.width; h = canvas.height; canvas.onpointermove = handlePointerMove; canvas.onpointerdown = handlePointerDown; canvas.onpointerup = stopDrawing; canvas.onpointerout = stopDrawing; } 2. drawLine This is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial. lineWidth and linecap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY). function drawLine() { var a = prevX, b = prevY, c = currX, d = currY; context.lineWidth = 4; context.lineCap = ""round""; context.beginPath(); context.moveTo(a, b); context.lineTo(c, d); context.stroke(); context.closePath(); } 3. stopDrawing This is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout). 
function stopDrawing() { draw = false; } 4. recordPointerLocation This tracks the pointer's location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it. function recordPointerLocation(e) { prevX = currX; prevY = currY; currX = e.clientX - canvas.offsetLeft; currY = e.clientY - canvas.offsetTop; } 5. handlePointerMove This is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it. function handlePointerMove(e) { if (draw) { recordPointerLocation(e); drawLine(); } } 6. handlePointerDown This is set by init to run when the pointer is down (finger is on touchscreen or mouse is clicked). If it is, it calls recordPointerLocation to get the path and sets draw to true. That's because we only want movement events from handlePointerMove to cause drawing if the pointer is down. function handlePointerDown(e) { recordPointerLocation(e); draw = true; } Finally, we have a working drawing app. But that's just the beginning! See the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen. Part 2: Add a Clear button and a color picker Now we'll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear.

<!-- a sketch of the updated markup; the button label is assumed -->
<div class=""menu"">
  <input type=""color"" class=""color"">
  <button class=""clear"">Clear</button>
</div>
<canvas width=""600"" height=""600"">
  Your browser doesn't support canvas.
</canvas>

Color picker This is our new color picker function. It targets the input element by its class and gets its value. function getColor() { return document.querySelector("".color"").value; } Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We'll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor. function drawLine() { //...code... context.strokeStyle = getColor(); context.lineWidth = 4; context.lineCap = ""round""; //...code... } Clear button This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing. function clearCanvas() { if (confirm(""Want to clear?"")) { context.clearRect(0, 0, w, h); } } The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up. Let's add this event handler to init: function init() { //...code... canvas.onpointermove = handlePointerMove; canvas.onpointerdown = handlePointerDown; canvas.onpointerup = stopDrawing; canvas.onpointerout = stopDrawing; document.querySelector("".clear"").onclick = clearCanvas; } See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen. Part 3: Draw with 2 lines It's time to make a line appear where no pointer has gone before. A ghost line! For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it's going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn't matter which one we choose. Let's go with the x-axis. Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal. A sketch illustrating the mathematics of reflecting a point. What happens to the x coordinates? The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them “the x coordinates”. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. What happens to the y coordinates? What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height. This is the new code for the app's variables and the two lines: the one that fills the pointer's path and the one mirroring it across the x-axis.
function drawLine() { var a = prevX, a_ = a, b = prevY, b_ = h-b, c = currX, c_ = c, d = currY, d_ = h-d; //... code ... // Draw line #1, at the pointer's location context.moveTo(a, b); context.lineTo(c, d); // Draw line #2, mirroring the line #1 context.moveTo(a_, b_); context.lineTo(c_, d_); //... code ... } In case this was too abstract for you, let's look at some actual numbers to see how this works. Let's say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3. So b' = 10 - 2 = 8 and d' = 10 - 3 = 7. We use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That's it, really. This is how it works for a single point, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like points. If you are still confused, I don't blame you. Here is the result. Draw something and see what happens. See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen. Part 4: Draw with 8 lines I have made yet another confusing sketch, with points C and D, so you understand what we're trying to do. Later on we'll look at points E, F, G and H as well. The circled point is the one we're adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. A sketch illustrating points C and D. This is the part where the math gets a bit mathier, as our drawLine function evolves further. We'll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let's add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different). function drawLine() { //... code ... // Reassign values a_ = w-a; b_ = b; c_ = w-c; d_ = d; // Draw the 3rd line context.moveTo(a_, b_); context.lineTo(c_, d_); // Reassign values a_ = w-a; b_ = h-b; c_ = w-c; d_ = h-d; // Draw the 4th line context.moveTo(a_, b_); context.lineTo(c_, d_); //... code ... What is happening? You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That's because we want the symmetry to hold for a rectangular canvas as well, and this way it will. Also, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It's for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). What happens to the x coordinates? As you recall, our x coordinates are a (prevX) and c (currX). For the third line we are adding, a' = w - a and c' = w - c, which means the x coordinates are mirrored to the opposite side of the canvas. For the fourth line, the same thing happens to our x coordinates a and c. What happens to the y coordinates? As you recall, our y coordinates are b (prevY) and d (currY).
For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis. For the fourth line, b' = h - b and d' = h - d, which we've seen before: that's a reflection across the x-axis. We have four more lines, or locations, to define. Note: the part of the code that's responsible for drawing a micro-line between the newly calculated coordinates is always the same: context.moveTo(a_, b_); context.lineTo(c_, d_); We can leave it out of the next code snippets and just focus on the calculations, i.e. the reassignments. Once again, we need some concrete examples to see where we're going, so here's another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code. A sketch illustrating points E and F. This is the code for E and F: // Reassign for 5 a_ = w/2+h/2-b; b_ = w/2+h/2-a; c_ = w/2+h/2-d; d_ = w/2+h/2-c; // Reassign for 6 a_ = w/2+h/2-b; b_ = h/2-w/2+a; c_ = w/2+h/2-d; d_ = h/2-w/2+c; Their x coordinates are identical and their y coordinates are reversed to one another. This one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3). A sketch illustrating points G and H. This is the code: // Reassign for 7 a_ = w/2-h/2+b; b_ = w/2+h/2-a; c_ = w/2-h/2+d; d_ = w/2+h/2-c; // Reassign for 8 a_ = w/2-h/2+b; b_ = h/2-w/2+a; c_ = w/2-h/2+d; d_ = h/2-w/2+c; //...code... } Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won't go into the full details, since this has been a long enough journey as it is, and I think we've covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them. I hope you had fun learning! This is our final app: See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.",2018,Hagar Shilo,hagarshilo,2018-12-02T00:00:00+00:00,https://24ways.org/2018/the-art-of-mathematics/,code 261,Surviving—and Thriving—as a Remote Worker,"Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there's an entire world of talent out there? I've been working remotely, full-time, for five and a half years. I've reached the point where I can't even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I've grown attached to my current level of freedom and flexibility. However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn't for everyone, but most people, with enough effort, can make it work — or even thrive. Here's what I've learned in over five years of working remotely.
Experiment with your environment

As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight. Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls. Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.) Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls. Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you.

Set boundaries

If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh? Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with.) For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why suddenly I hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot. I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work.
Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time. I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality. Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon. You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out.

Create intentional transition time

I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time. The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort. I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch. This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer.

Embrace async

If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now. For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers. Asynchronous communication is great.
Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.” Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it! (That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.) Seek out social opportunities Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide. I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.) Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh. If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work. If you don’t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own. These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment! 
Hope to see you all online soon 🙌",2018,Mel Choyce,melchoyce,2018-12-09T00:00:00+00:00,https://24ways.org/2018/thriving-as-a-remote-worker/,process 262,Be the Villain,"Inclusive Design is the practice of making products and services accessible to, and usable by as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing then I’ll consider it a success. As a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good. When designing, you take a user flow and make sure it can’t be broken. Ensuring that if something is removed, it can be restored. Or that something editable can also be updated at a later date—you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario. Lately, I’ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it’s all too easy to miss the forest for the trees when you’re a product designer focused on cranking out features. What I’m concerned about is that while trying to envision how a user flow could be broken, you should also think about how it could be subverted. In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end. If you pay attention, you’ll start to notice this subversion with increasing frequency:

Domestic abusers using internet-controlled devices to spy on and control their partner.
Zealots tanking a business’ rating on Google because its owners spoke out against unchecked gun violence.
Forcing people to choose between TV or stalking because the messaging center portion of a cable provider’s entertainment package lacks muting or blocking features.
White supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories.
Facebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion.
White supremacists also using a video game chat service as a recruiting tool.
The unchecked harassment of minors on Instagram.
Swatting.

If I were to guess why we haven’t heard more about this problem, I’d say that optimistically, people have settled out of court. Pessimistically, it’s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention. Subverted design isn’t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests.
User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you’re interested in this kind of thing. Subverted design also isn’t beholden design, or simple lack of attention. This phenomenon isn’t even necessarily premeditated. I think it arises from naïve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game. This is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting. To achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all cost within the identified demographic. The thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted. Don’t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be. They’re not going to be fun conversations. It’s not going to be easy convincing others that these aren’t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career. But consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office’s resident villain than to have to live with something like this: If the past few years have taught us anything, it’s that the choices we make—or avoid making—have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn’t neutral. You’re going to have to start thinking the way a monster does—if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start:

Providing a comparable experience becomes forcing a single path.
Considering situation becomes ignoring circumstance.
Being consistent becomes acting capriciously.
Giving control becomes removing autonomy.
Offering choice becomes limiting options.
Prioritizing content becomes obfuscating purpose.
Adding value becomes filling with gibberish.
Combined, these inverted principles start to paint a picture of something we’re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors. These kinds of services limit you in the ways you can interact with them. They kick you out or lock you in if you don’t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws. So, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good—perhaps the most important takeaway being admitting you have a problem in the first place. Another exercise is asking the question, “What is the evil version of this feature?” Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don’t care when, so long as the question is actually raised. In keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, “Whose benefits? Whose risks?” instead of “What benefits? What risks?” in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven. Few things in this world are intrinsically altruistic or good—it’s just the nature of the beast. However, that doesn’t mean we have to stand idly by when harm is created. If we can add terms like “anti-pattern” to our professional vocabulary, we can certainly also incorporate phrases like “abuser flow.” Design finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged. Subverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact.",2018,Eric Bailey,ericbailey,2018-12-06T00:00:00+00:00,https://24ways.org/2018/be-the-villain/,ux 263,Securing Your Site like It’s 1999,"Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities. As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned. What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate. 
Bad input validation: Trusting anything the user sends you

Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web. One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong. One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile. Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened? There aren’t any official rules for developing software on the web. But if there were, my golden rule would be: Never trust user input. Ever. Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe. Make sure you validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML. Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner. Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.
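To make that concrete, here is a minimal sketch of a safer profile-update handler. It’s an illustrative Express-style example of mine, with a hypothetical updateProfile helper, not code from the forum in the story:

var express = require('express');
var app = express();
app.use(express.json()); // parse JSON request bodies

// Assumes session middleware is already configured elsewhere.
app.post('/profile', function (req, res) {
  var name = req.body.name;

  // Validate type and length; never trust what the client sent.
  if (typeof name !== 'string' || name.length === 0 || name.length > 50) {
    return res.status(400).send('Invalid name');
  }

  // Check the action against an access control policy: users may only
  // edit their own profile. The user ID comes from the server-side
  // session, never from a hidden form field.
  updateProfile(req.session.userId, { name: name }); // hypothetical helper
  res.status(200).send('Profile updated');
});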
SQL injection: Allowing the user to run their own database queries

A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day. One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned. It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?

' OR '1'='1

SQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:

SELECT COUNT(*) FROM USERS WHERE USERNAME='Alice' AND PASSWORD='hunter2'

This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead!

SELECT COUNT(*) FROM USERS WHERE USERNAME='Admin' AND PASSWORD='' OR '1'='1'

Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum! So how can you protect yourself from SQL injection? Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.JS has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize. Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!
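As an illustration, here is roughly what that login check could look like with knex. This is a sketch of mine, assuming a users table shaped like the one in the story (and a real application would hash its passwords rather than store them in plain text):

// Bound parameters mean a value like ' OR '1'='1 is treated as
// literal text, never as part of the SQL itself.
const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

function countMatchingUsers(username, password) {
  return knex('users')
    .count('* as count')
    .where({ username: username, password: password })
    .then(function (rows) { return rows[0].count; });
}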
Cross site request forgery: Getting other users to do your dirty work for you

Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge. Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky… Let’s just say there are older and fouler things than Rick Astley in the dark places of the web. What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it: Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge! This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from. The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set. Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies. You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.

Cross site scripting: Someone else’s code running on your website

In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends. Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then… Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject and
tags into his headline, but As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene> which is the container for everything that is going to show up on the screen. There are a few <a-entity> which can be compared to
<div> as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.

Styling

Now we’ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave. Note, you will probably need a server running for images to work. You can do this by running python -m ""SimpleHTTPServer"" in the folder where the code is, then go to localhost:8000 in the browser.

Textures

Unless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor. The <a-assets> element is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image. Replace: With: This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.

box.setAttribute('material', 'src: #texture-wall');

That’s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than the texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object.

Lighting

The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished. There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit. To start with I wanted to light up the scene with a general light, type=""ambient"", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4, it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky. I felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity, this could have been as an attribute but I set it as a child a-light instead. By using type=""point"" with a high intensity and low distance, it showed close walls as being lighter. It also added a sort-of object to the player, it isn’t a walking human or anything but by moving light where the player is it feels a bit more physical.
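To see how those lighting choices fit together, here is a rough sketch in the same JavaScript style used elsewhere in this tutorial. The ambient colour and intensity are the values given above; the point light’s intensity and distance are my guesses:

// Ambient light: gives the whole scene a dim, reddish base.
var scene = document.querySelector('a-scene');
var ambient = document.createElement('a-light');
ambient.setAttribute('type', 'ambient');
ambient.setAttribute('color', '#92455E');
ambient.setAttribute('intensity', 0.4);
scene.appendChild(ambient);

// Point light attached to the player, so nearby walls show up lighter.
var playerLight = document.createElement('a-light');
playerLight.setAttribute('type', 'point');
playerLight.setAttribute('intensity', 0.75); // a guess; tune to taste
playerLight.setAttribute('distance', 5);     // a guess; tune to taste
document.querySelector('#player').appendChild(playerLight);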
By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the <a-scene> with type=exponential so it looks thicker the further away it is and a mid intensity, so you feel a bit lost but can still see. I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.

Movement

One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.

AFRAME.registerComponent('automove-controls', {
  init: function () {
    this.speed = 0.1;
    this.isMoving = true;
    this.velocityDelta = new THREE.Vector3();
  },
  isVelocityActive: function () {
    return this.isMoving;
  },
  getVelocityDelta: function () {
    this.velocityDelta.z = this.isMoving ? -this.speed : 0;
    return this.velocityDelta.clone();
  }
});

Replace:

universal-controls

With:

universal-controls=""movementControls: automove, gamepad, keyboard""

This works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction, it just checks if isMoving is true then moves the player by the set speed. Components can be created for adding all kinds of functionality with relative ease. It makes it very powerful for people of all difficulty levels.

Building a map

Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels. I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images), I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future.

var map = {
  ""data"": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
           1, 0, 0, 0, 0, 0, 0, 1, 0, 1,
           1, 0, 1, 1, 0, 1, 1, 1, 0, 1,
           1, 0, 1, 0, 0, 0, 0, 0, 0, 1,
           1, 0, 1, 0, 1, 0, 1, 0, 0, 1,
           1, 0, 0, 0, 1, 0, 1, 1, 0, 1,
           1, 0, 1, 0, 0, 0, 0, 1, 1, 1,
           1, 0, 1, 1, 1, 1, 0, 0, 0, 1,
           1, 0, 0, 1, 0, 0, 0, 1, 2, 1,
           1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
  ""height"": 10,
  ""width"": 10
}

As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.
document.querySelector('a-scene').addEventListener('render-target-loaded', function () {
  var WALL_SIZE = 5,
      WALL_HEIGHT = 3;
  var el = document.querySelector('#walls');
  var wall;

  for (var x = 0; x < map.height; x++) {
    for (var y = 0; y < map.width; y++) {
      var i = y*map.width + x;
      var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE;

      if (map.data[i] === 1) {
        // Create wall
        wall = document.createElement('a-box');
        el.appendChild(wall);
        wall.setAttribute('color', '#fff');
        wall.setAttribute('material', 'src: #texture-wall;');
        wall.setAttribute('width', WALL_SIZE);
        wall.setAttribute('height', WALL_HEIGHT);
        wall.setAttribute('depth', WALL_SIZE);
        wall.setAttribute('position', position);
        wall.setAttribute('static-body', '');
      }

      if (map.data[i] === 2) {
        // Set player position
        document.querySelector('#player').setAttribute('position', position);
      }
    }
  }

  console.info('Walls added.');
});

With this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop (there’s a quick sketch of that below). In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease.
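For instance, a hypothetical third tile type, say 3 for a collectible, would just be another branch in the same loop. This sketch is mine, not from the original tutorial:

// Inside the same x/y loop, after the wall and player checks:
if (map.data[i] === 3) {
  var orb = document.createElement('a-sphere');
  el.appendChild(orb);
  orb.setAttribute('color', '#ffcc00');
  orb.setAttribute('radius', 0.5);
  orb.setAttribute('position', position);
}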
As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.

It’s Not All Fun And Games

A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including story telling, data visualisation and even meditation. There have been a number of cases where it has been shown virtual reality can help as a tool for therapies:

Oxford study finds virtual reality can help treat severe paranoia
Virtual Reality Therapy for Phobias at the Duke Faculty Practice
Bravemind: Virtual Reality Exposure Therapy at the University of Southern California

These are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool.

Wrapping Up

Ten years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, that WAP 2.0 included the XHTML Mobile Profile meaning it would be familiar to web folk. “The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it.” We can look at that and laugh a little, we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the “desktop web” Cameron was referring to. So while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble, we don’t need to learn any new languages. If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it’s fun… have a go anyway.",2016,Shane Hudson,shanehudson,2016-12-11T00:00:00+00:00,https://24ways.org/2016/first-steps-in-vr/,code 299,What the Heck Is Inclusive Design?,"Naming things is hard. And I don’t just mean CSS class names and JSON properties. Finding the right term for what we do with the time we spend awake and out of bed turns out to be really hard too. I’ve variously gone by “front-end developer”, “user experience designer”, and “accessibility engineer”, all clumsy and incomplete terms for labeling what I do as an… erm… see, there’s the problem again. It’s tempting to give up entirely on trying to find the right words for things, but this risks summarily dispensing with thousands of years spent trying to qualify the world around us. So here we are again. Recently, I’ve been using the term “inclusive design” and calling myself an “inclusive designer” a lot. I’m not sure where I first heard it or who came up with it, but the terminology feels like a good fit for the kind of stuff I care to do when I’m not at a pub or asleep. This article is about what I think “inclusive design” means and why I think you might like it as an idea. Isn’t ‘inclusive design’ just ‘accessibility’ by another name? No, I don’t think so. But that’s not to say the two concepts aren’t related. Note the ‘design’ part in ‘inclusive design’ — that’s not just there by accident. Inclusive design describes a design activity; a way of designing things. This sets it apart from accessibility — or at least our expectations of what ‘accessibility’ entails. Despite every single accessibility expert I know (and I know a lot) recommending that accessibility should be integrated into design process, it is rarely ever done. Instead, it is relegated to an afterthought, limiting its effect. The term ‘accessibility’ therefore lacks the power to connote design process. It’s not that we haven’t tried to salvage the term, but it’s beginning to look like a lost cause. So maybe let’s use a new term, because new things take new names. People get that. The ‘access’ part of accessibility is also problematic. Before we get ahead of ourselves, I don’t mean access is a problem — access is good, and the more accessible something is the better. I mean it’s not enough by itself. Imagine a website filled with poorly written and lackadaisically organized information, including a bunch of convoluted and confusing functionality. To make this site accessible is to ensure no barriers prevent people from accessing the content. But that doesn’t make the content any better. It just means more people get to suffer it. Whoopdidoo. Access is certainly a prerequisite of inclusion, but accessibility compliance doesn’t get you all the way there. It’s possible to check all the boxes but still be left with an unusable interface. And unusable interfaces are necessarily inaccessible ones. Sure, you can take an unusable interface and make it accessibility compliant, but that only placates stakeholders’ lawyers, not users. Users get little value from it. So where have we got to? Access is important, but inclusion is bigger than access. Inclusive design means making something valuable, not just accessible, to as many people as we can. So inclusive design is kind of accessibility + UX? Closer, but there are some problems with this definition. UX is, you will have already noted, a broad term encompassing activities ranging from conducting research studies to optimizing the perceived affordance of interface elements.
But overall, what I take from UX is that it’s the pursuit of making interfaces understandable. As it happens, WCAG 2.0 already contains an ‘Understandable’ principle covering provisions such as readability, predictability and feedback. So you might say accessibility — at least as described by WCAG — already covers UX. Unfortunately, the criteria are limited, plus some really important stuff (like readability) is relegated to the AAA level; essentially “bonus points if you get the time (you won’t).” So better to let UX folks take care of this kind of thing. It’s what they do. Except, therein lies a danger. UX professionals don’t tend to be well versed in accessibility, so their ‘solutions’ don’t tend to work for that many people. My friend Billy Gregory coined the term SUX, or “Some UX”: if it doesn’t work for different users, it’s only doing part of the job it should be. SUX won’t do, but it’s not just a disability issue. All sorts of user circumstances go unchecked when you’re shooting straight for what people like, and bypassing what people need: device type, device settings, network quality, location, native language, and available time to name just a few. In short, inclusive design means designing things for people who aren’t you, in your situation. In my experience, mainstream UX isn’t very good at that. By bolting accessibility onto mainstream UX we labor under the misapprehension that most people have a ‘normal’ experience, a few people are exceptions, and that all of the exceptions pertain to disability directly. So inclusive design isn’t really about disability? It is about disability, but not in the same way as accessibility. Accessibility (as it is typically understood, anyway) aims to make sure things work for people with clinically recognized disabilities. Inclusive design aims to make sure things work for people, not forgetting those with clinically recognized disabilities. A subtle, but not so subtle, difference. Let’s go back to discussing readability, because that’s a good example. Now: everyone benefits from readable text; text with concise sentences and widely-understood words. It certainly helps people with cognitive impairments, but it doesn’t hinder folks who have less trouble with comprehension. In fact, they’ll more than likely be thankful for the time saved and the clarity. Readable text covers the whole gamut. It’s — you’ve got it — inclusive. Legibility is another one. A clear, well-balanced typeface makes the reading experience less uncomfortable and frustrating for all concerned, including those who have various forms of visual dyslexia. Again, everyone’s happy — so why even contemplate a squiggly, sketchy typeface? Leave well alone. Contrast too. No one benefits from low contrast; everyone benefits from high contrast. Simple. There’s no more work involved, it just entails better decision making. And that’s what design is really: decision making. How about zoom support? If you let your users pinch zoom on their phones they can compensate for poor eyesight, but they can also increase the touch area of controls, inspect detail in images, and compose better screen shots. Unobtrusively supporting options like zoom makes interfaces much more inclusive at very little cost. And when it comes to the underlying HTML code, you’re in luck: it has already been designed, from the outset, to be inclusive. HTML is a toolkit for inclusion. 
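As a quick sketch of what that toolkit buys you (my example, not one from this article), compare rebuilding a button out of a div with simply asking for one:

// Faking a button: you must bolt the semantics and keyboard
// behaviour back on yourself, and it's easy to get wrong.
var fake = document.createElement('div');
fake.setAttribute('role', 'button');
fake.setAttribute('tabindex', '0');
fake.addEventListener('keydown', function (e) {
  if (e.key === 'Enter' || e.key === ' ') { fake.click(); }
});

// The real thing: focusable, keyboard-operable and announced
// correctly by assistive technology, out of the box.
var real = document.createElement('button');
real.textContent = 'Save';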
Using the right elements for the job doesn’t just mean the few who use screen readers benefit, but keyboard accessibility comes out-of-the-box, you can defer to browser behavior rather than writing additional scripts, the code is easier to read and maintain, and editors can create content that is effortlessly presentable. Wait… are you talking about universal design? Hmmm. Yes, I guess some folks might think of “universal design” and “inclusive design” as synonymous. I just really don’t like the term universal in this context. The thing is, it gives the impression that you should be designing for absolutely everyone in the universe. Though few would adopt a literal interpretation of “universal” in this context, there are enough developers who would deliberately misconstrue the term and decry universal design as an impossible task. I’ve actually had people push back by saying, “what, so I’ve got to make it work for people who are allergic to computers? What about people in comas?” For everyone’s sake, I think the term ‘inclusive’ is less misleading. Of course you can’t make things that everybody can use — it’s okay, that’s not the aim. But with everything that’s possible with web technologies, there’s really no need to exclude people in the vast numbers that we usually are. Accessibility can never be perfect, but by thinking inclusively from planning, through prototyping to production, you can cast a much wider net. That means more and happier users at very little if any more effort. If you like, inclusive design is the means and accessibility is the end — it’s just that you get a lot more than just accessibility along the way. Conclusion That’s inclusive design. Or at least, that’s a definition for a thing I think is a good idea which I identify as inclusive design. I’ll leave you with a few tips. Involve code early Web interfaces are made of code. If you’re not working with code, you’re not working on the interface. That’s not to say there’s anything wrong with sketching or paper prototyping — in fact, I recommend paper prototyping in my book on inclusive design. Just work with code as soon as you can, and think about code even before that. Maintain a pattern library of coded solutions and omit any solutions that don’t adhere to basic accessibility guidelines. Respect conventions Your content should be fresh, inventive, radical. Your interface shouldn’t. Adopt accepted conventions in the appearance, placement and coding of interface elements. Users aren’t there to experience interface design; they’re there to use an interface. In other words: stop showing off (unless, of course, the brief is to experiment with new paradigms in interface design, for an audience of interface design researchers). Don’t be exact “Perfection is the enemy of good”. But the pursuit of perfection isn’t just to be avoided because nothing ever gets finished. Exacting design also makes things inflexible and brittle. If your design depends on elements retaining precise coordinates, they’ll break easily when your users start adjusting font settings or zooming. Choose not to position elements exactly or give them fixed, “magic number” dimensions. Make less decisions in the interface so your users can make more decisions for it. Enforce simplicity The virtue of simplicity is difficult to overestimate. The simpler an interface is, the easier it is to use for all kinds of users. Simpler interfaces require less code to make too, so there’s an obvious performance advantage. 
There are many design decisions that require user research, but keeping things simple is always the right thing to do. Not simplified or simple-seeming or simplistic, but simple. Do a little and do it well, for as many people as you can.",2016,Heydon Pickering,heydonpickering,2016-12-07T00:00:00+00:00,https://24ways.org/2016/what-the-heck-is-inclusive-design/,process 300,Taking Device Orientation for a Spin,"When The Police sang “Don’t Stand So Close To Me” they weren’t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we’re there, we’re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot. But if you can’t beat them, join them.

A brave new world

A couple of years back all sorts of specs for new HTML5 APIs sprang up much to our collective glee. Emboldened, we ran a few tests and found they basically didn’t work in anything and went off disheartened into the corner for a bit of a sob. Turns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable—we’re still talking about browsers, after all. Now, of course they’re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from Github that will fall out of fashion while we’re still trying to locate the documentation, right? Well, no! So what if we actually wanted to use one of these APIs, say to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed), how could we do something like that? Let’s find out.

The Device Orientation API

The phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn’t work, but also gives us three values that represent orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat. The main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too. The API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away:

window.addEventListener('deviceorientation', function(e) {
  console.log(e.alpha);
  console.log(e.beta);
  console.log(e.gamma);
});

If you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended.

One important note

Like many of these newfangled APIs, Device Orientation is only available over HTTPS. We’re not allowed to have too much fun without protection, so make sure that you’re working on a secure line. I’ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel.

ngrok http -host-header=rewrite mylocaldevsite.dev:80

ngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option. Right, where were we?
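(A quick aside of my own before we do: it’s worth checking the API exists at all before wiring everything up, since a desktop browser may support the event yet never fire it with useful values.)

// A hedged feature check — handleOrientation is our own handler name.
function handleOrientation(e) {
  console.log(e.alpha, e.beta, e.gamma);
}

if ('DeviceOrientationEvent' in window) {
  window.addEventListener('deviceorientation', handleOrientation);
} else {
  console.log('Device Orientation not supported here.');
}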
Make something to look at

It’s all well and good having a bunch of numbers, but they’re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we’re not trying to be too clever here). Yeah, let’s just build one of those. Our basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, and CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the ‘window’ so that the part we want to show is visible. Here it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.) The details of the slider aren’t important (we’re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or ‘alt-nazi’ or whatever).

var position_image = function(percent) {
  var pos = (img_W / 100)*percent;
  img.style.transform = 'translate(-'+pos+'px)';
};

All this does is figure out what that percentage means in terms of the image width, and set the transform: translate(…); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.) Ok. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we’re done. (We’re so not done.)

The maths bit

If we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you’ll note that the main value that is changing as we swing back and forth is the ‘alpha’ value. That’s the one we want to track. As our goal here is hey, these APIs are interesting and fun and not let’s build the world’s best panoramic image viewer, we’ll start by making a few assumptions and simplifications: When the image loads, we’ll centre the image and take the current nose-forward orientation reading as the middle. Moving left, we’ll track to the left of the image (lower percentage). Moving right, we’ll track to the right (higher percentage). If the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they’re on their own.

Nose-forward

When the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn’t really matter to us, as we don’t know which direction the user might be facing in anyway — we just need to record that initial state and then use it to compare any new readings.

var initial_position = null;

window.addEventListener('deviceorientation', function(e) {
  if (initial_position === null) {
    initial_position = Math.floor(e.alpha);
  };

  var current_position = initial_position - Math.floor(e.alpha);
});

(I’m rounding down the values with Math.floor() to make debugging easier - we’ll take out the rounding later.)
We get our initial position if it’s not yet been set, and then calculate the current position as a difference between the new value and the stored one.

These values are weird

One thing you need to know about these values, is that they range from 0 to 360 but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you’d expect if you do a forward roll. What I’m interested in is working out my rotation. If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can’t see it.

var rotation = current_position;
if (current_position > 180) rotation = current_position-360;

Which way up?

Since we’re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That’s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn’t well supported. The best I can come up with is:

var screen_portrait = false;
if (window.innerWidth < window.innerHeight) {
  screen_portrait = true;
}

It works. Then we can use screen_portrait to branch our code:

if (screen_portrait) {
  if (current_position > 180) rotation = current_position-360;
} else {
  if (current_position < -180) rotation = 360+current_position;
}

Here’s the code in action so you can see the values for yourself. If you change screen orientation you’ll need to refresh the page (it’s a demo!).

Limiting rotation

Now, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360° to view a photo. Instead, we need to limit the range of movement to something like 60°-from-nose in either direction and normalise our values to pan the entire image across that 120° range. -60 would be full-left (0%) and 60 would be full-right (100%). If we set max_rotation = 60, that code ends up looking like this:

if (rotation > max_rotation) rotation = max_rotation;
if (rotation < (0-max_rotation)) rotation = 0-max_rotation;

var percent = Math.floor(((rotation + max_rotation)/(max_rotation*2))*100);

We should now be able to get a rotation from -60° to +60° expressed as a percentage. (For example, a rotation of 30° works out as Math.floor(((30 + 60) / 120) * 100) = 75.) Try it for yourself.

The big reveal

All that’s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work.

position_image(percent);

You can see the final result and take it for a spin. Literally. So what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it. What we have made is progress. We’ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn’t have this morning. Something we probably didn’t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we’ve learned that our browsers are just a little bit more capable than we thought. The web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy things that are often dismissed as the preserve of native apps.
Like some sort of app marmalade. Poppycock. The web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let’s create things.",2016,Drew McLellan,drewmclellan,2016-12-24T00:00:00+00:00,https://24ways.org/2016/taking-device-orientation-for-a-spin/,code 301,Stretching Time,"Time is valuable. It’s a precious commodity that, if we’re not too careful, can slip effortlessly through our fingers. When we think about the resources at our disposal we’re often guilty of forgetting the most valuable resource we have to hand: time. We are all given an allocation of time from the time bank. 86,400 seconds a day to be precise, not a second more, not a second less. It doesn’t matter if we’re rich or we’re poor, no one can buy more time (and no one can save it). We are all, in this regard, equals. We all have the same opportunity to spend our time and use it to maximum effect. As such, we need to use our time wisely. I believe we can ‘stretch’ time, ensuring we make the most of every second and maximising the opportunities that time affords us. Through a combination of ‘Structured Procrastination’ and ‘Focused Finishing’ we can open our eyes to all of the opportunities in the world around us, whilst ensuring that we deliver our best work precisely when it’s required. A win win, I’m sure you’ll agree. Structured Procrastination I’m a terrible procrastinator. I used to think that was a curse – “Why didn’t I just get started earlier?” – over time, however, I’ve started to see procrastination as a valuable tool if it is used in a structured manner. Don Norman refers to procrastination as ‘late binding’ (a term I’ve happily hijacked). As he argues, in Why Procrastination Is Good, late binding (delay, or procrastination) offers many benefits: Delaying decisions until the time for action is beneficial… it provides the maximum amount of time to think, plan, and determine alternatives. We live in a world that is constantly changing and evolving, as such the best time to execute is often ‘just in time’. By delaying decisions until the last possible moment we can arrive at solutions that address the current reality more effectively, resulting in better outcomes. Procrastination isn’t just useful from a project management perspective, however. It can also be useful for allowing your mind the space to wander, make new discoveries and find creative connections. By embracing structured procrastination we can ‘prime the brain’. As James Webb Young argues, in A Technique for Producing Ideas, all ideas are made of other ideas and the more we fill our minds with other stimuli, the greater the number of creative opportunities we can uncover and bring to life. By late binding, and availing of a lack of time pressure, you allow the mind space to breathe, enabling you to uncover elements that are important to the problem you’re working on and, perhaps, discover other elements that will serve you well in future tasks. When setting forth upon the process of writing this article I consciously set aside time to explore. I allowed myself the opportunity to read, taking in new material, safe in the knowledge that what I discovered – if not useful for this article – would serve me well in the future. Ron Burgundy summarises this neatly: Procrastinator? No. I just wait until the last second to do my work because I will be older, therefore wiser. An ‘older, therefore wiser’ mind is a good thing. 
We’re incredibly fortunate to live in a world where we have a wealth of information at our fingertips. Don’t waste the opportunity to learn, rather embrace that opportunity. Make the most of every second to fill your mind with new material, the rewards will be ample. Deadlines are deadlines, however, and deadlines offer us the opportunity to focus our minds, bringing together the pieces of the puzzle we found during our structured procrastination. Like everyone I’ll hear a tiny, but insistent voice in my head that starts to rise when the deadline is approaching. The older you get, the closer to the deadline that voice starts to chirp up. At this point we need to focus. Focused Finishing We live in an age of constant distraction. Smartphones are both a blessing and a curse, they keep us connected, but if we’re not careful the constant connection they provide can interrupt our flow. When a deadline is accelerating towards us it’s important to set aside the distractions and carve out a space where we can work in a clear and focused manner. When it’s time to finish, it’s important to avoid context switching and focus. All those micro-interactions throughout the day – triaging your emails, checking social media and browsing the web – can get in the way of you hitting your deadline. At this point, they’re distractions. Chunking tasks and managing when they’re scheduled can improve your productivity by a surprising order of magnitude. At this point it’s important to remove distractions which result in ‘attention residue’, where your mind is unable to focus on the current task, due to the mental residue of other, unrelated tasks. By focusing on a single task in a focused manner, it’s possible to minimise the negative impact of attention residue, allowing you to maximise your performance on the task at hand. Cal Newport explores this in his excellent book, Deep Work, which I would highly recommend reading. As he puts it: Efforts to deepen your focus will struggle if you don’t simultaneously wean your mind from a dependence on distraction. To help you focus on finishing it’s helpful to set up a work-focused environment that is purposefully free from distractions. There’s a time and a place for structured procrastination, but – equally – there’s a time and a place for focused finishing. The French term ‘mise en place’ is drawn from the world of fine cuisine – I discovered it when I was procrastinating – and it’s applicable in this context. The term translates as ‘putting in place’ or ‘everything in its place’ and it refers to the process of getting the workplace ready before cooking. Just like a professional chef organises their utensils and arranges their ingredients, so too can you. Thanks to the magic of multiple users on computers, it’s possible to create a separate user on your computer – without access to email and other social tools – so that you can switch to that account when you need to focus and hit the deadline. Another, less technical way of achieving the same result – depending, of course, upon your line of work – is to close your computer and find some non-digital, unconnected space to work in. The goal is to carve out time to focus so you can finish. As Newport states: If you don’t produce, you won’t thrive – no matter how skilled or talented you are. Procrastination is fine, but only if it’s accompanied by finishing. Create the space to finish and you’ll enjoy the best of both worlds. 
In closing… There is a time and a place for everything: there is a time to procrastinate, and a time to focus. To truly reap the rewards of time, the mind needs both. By combining the processes of ‘Structured Procrastination’ and ‘Focused Finishing’ we can make the most of our 86,400 seconds a day, ensuring we are constantly primed to make new discoveries, but just as importantly, ensuring we hit the all-important deadlines. Make the most of your time, you only get so much. Use every second productively and you’ll be thankful that you did. Don’t waste your time, once it’s gone, it’s gone… and you can never get it back.",2016,Christopher Murphy,christophermurphy,2016-12-21T00:00:00+00:00,https://24ways.org/2016/stretching-time/,process 304,Five Lessons From My First 18 Months as a Dev,"I recently moved from Sydney to London to start a dream job with Twitter as a software engineer. A software engineer! Who would have thought. Having started my career as a journalist, the title ‘engineer’ is very strange to me. The notion of writing in first person is also very strange. Journalists are taught to be objective, invisible, to keep yourself out of the story. And here I am writing about myself on a public platform. Cringe. Since I started learning to code I’ve often felt compelled to write about my experience. I want to share my excitement and struggles with the world! But as a junior I’ve been held back by thoughts like ‘whatever you have to say won’t be technical enough’, ‘any time spent writing a blog would be better spent writing code’, ‘blogging is narcissistic’, etc.  Well, I’ve been told that your thirties are the years where you stop caring so much about what other people think. And I’m almost 30. So here goes! These are five key lessons from my first year and a half in tech: Deployments should delight, not dread Lesson #1: Making your deployment process as simple as possible is worth the investment. In my first dev job, I dreaded deployments. We would deploy every Sunday night at 8pm. Preparation would begin the Friday before. A nominated deployment manager would spend half a day tagging master, generating scripts, writing documentation and raising JIRAs. The only fun part was choosing a train gif to post in HipChat: ‘All aboard! The deployment train leaves in 3, 2, 1…” When Sunday night came around, at least one person from every squad would need to be online to conduct smoke tests. Most times, the deployments would succeed. Other times they would fail. Regardless, deployments ate into people’s weekend time — and they were intense. Devs would rush to have their code approved before the Friday cutoff. Deployment managers who were new to the process would fear making a mistake.  The team knew deployments were a problem. They were constantly striving to improve them. And what I’ve learnt from Twitter is that when they do, their lives will be bliss. TweetDeck’s deployment process fills me with joy and delight. It’s quick, easy and stress free. In fact, it’s so easy I deployed code on my first day in the job! Anyone can deploy, at any time of day, with a single command. Rollbacks are just as simple. There’s no rush to make the deployment train. No manual preparation. No fuss. Value — whether in the form of big new features, simple UI improvements or even production bug fixes — can be shipped in an instant. The team assures me the process wasn’t always like this. They invested lots of time in making their deployments better. And it’s clearly paid off. 
Code reviews need love, time and acceptance Lesson #2: Code reviews are a three-way gift. Every time I review someone else’s code, I help them, the team and myself. Code reviews were another pain point in my previous job. And to be honest, I was part of the problem. I would raise code reviews that were far too big. They would take days, sometimes weeks, to get merged. One of my reviews had 96 comments! I would rarely review other people’s code because I felt too junior, like my review didn’t carry any weight. The review process itself was also tiring, and was often raised in retrospectives as being slow. In order for code to be merged it needed to have ticks of approval from two developers and a third tick from a peer tester. It was the responsibility of the author to assign the reviewers and tester. It was felt that if it was left to team members to assign themselves to reviews, the “someone else will do it” mentality would kick in, and nothing would get done. At TweetDeck, no-one is specifically assigned to reviews. Instead, when a review is raised, the entire team is notified. Without fail, someone will jump on it. Reviews are seen as blocking. They’re seen as equally, if not more, important than your own work. I haven’t seen a review sit for longer than a few hours without comments. We also don’t work on branches. We push single commits for review, which are then merged to master. This forces the team to work in small, incremental changes. If a review is too big, or if it’s going to take up more than an hour of someone’s time, it will be sent back. What I’ve learnt so far at Twitter is that code reviews must be small. They must take priority. And they must be a team effort. Being a new starter is no “get out of jail free card”. In fact, it’s even more of a reason to be reviewing code. Reviews are a great way to learn, get across the product and see different programming styles. If you’re like me, and find code reviews daunting, ask to pair with a senior until you feel more confident. I recently paired with my mentor at Twitter and found it really helpful. Get friendly with feature flagging Lesson #3: Feature flagging gives you complete control over how you build and release a project. Say you’re implementing a new feature. It’s going to take a few weeks to complete. You’ll complete the feature in small, incremental changes. At what point do these changes get merged to master? At what point do they get deployed? Do you start at the back end and finish with the UI, so the user won’t see the changes until they’re ready? With feature flagging — it doesn’t matter. In fact, with feature flagging, by the time you are ready to release your feature, it’s already deployed, sitting happily in master with the rest of your codebase. A feature flag is a boolean value that gets wrapped around the code relating to the thing you’re working on. The code will only be executed if the value is true. if (TD.decider.get(‘new_feature’)) { //code for new feature goes here } In my first dev job, I deployed a navigation link to the feature I’d been working on, making it visible in the product, even though the feature wasn’t ready. “Why didn’t you use a feature flag?” a senior dev asked me. An honest response would have been: “Because they’re confusing to implement and I don’t understand the benefits of using them.” The fix had to wait until the next deployment. The best thing about feature flagging at TweetDeck is that there is no need to deploy to turn on or off a feature.
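If you’ve not met one before, a decider like TD.decider is, in spirit, just a lookup of a boolean by name. A generic sketch of the idea (mine, not TweetDeck’s actual implementation):

// A toy decider: flag states live in a map that can be
// refreshed from the server without redeploying any code
var TD = {
	decider: {
		flags: { new_feature: false },
		get: function(name) { return !!this.flags[name]; }
	}
};

if (TD.decider.get('new_feature')) {
	// code for new feature goes here
}

Flip the value in that flags map and the new code path lights up on the next check – no deploy required. The interesting part is where the flags come from.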
We set the status of the feature via an interface called Deckcider, and the code makes regular API requests to get the status.  At TweetDeck we are also able to roll our features out progressively. The first rollout might be to a staging environment. Then to employees only. Then to 10 per cent of users, 20 per cent, 30 per cent, and so on. A gradual rollout allows you to monitor for bugs and unexpected behaviour, before releasing the feature to the entire user base. Sometimes a piece of work requires changes to existing business logic. So the code might look more like this: if (TD.decider.get(‘change_to_existing_feature’)) { //new logic goes here } else { //old logic goes here } This seems messy, right? Riddling your code with if else statements to determine which path of logic should be executed, or which version of the UI should be displayed. But at Twitter, this is embraced. You can always clean up the code once a feature is turned on. This isn’t essential, though. At least not in the early days. When a cheeky bug is discovered, having the flag in place allows the feature to be very quickly turned off again. Let data and experimentation drive development Lesson #4: Use data to determine the direction of your product and measure its success. The first company I worked for placed a huge amount of emphasis on data-driven decision making. If we had an idea, or if we wanted to make a change, we were encouraged to “bring data” to show why it was necessary. “Without data, you’re just another person with an opinion,” the chief data scientist would say. This attitude helped to ensure we were building the right things for our customers. Instead of just plucking a new feature out of thin air, it was chosen based on data that reflected its need. But how do you design that feature? How do you know that the design you choose will have the desired impact? That’s where experiments come into play.  At TweetDeck we make UI changes that we hope will delight our users. But the assumptions we make about our users are often wrong. Our front-end team recently sat in a room and tried to guess which UIs from A/B tests had produced better results. Half the room guessed incorrectly every time. We can’t assume a change we want to make will have the impact we expect. So we run an experiment. Here’s how it works. Users are placed into buckets. One bucket of users will have access to the new feature, the other won’t. We hypothesise that the bucket exposed to the new feature will have better results. The beauty of running an experiment is that we’ll know for sure. Instead of blindly releasing the feature to all users without knowing its impact, once the experiment has run its course, we’ll have the data to make decisions accordingly. Hire the developer, not the degree Lesson #5: Testing candidates on real world problems will allow applicants from all backgrounds to shine. Surely, a company like Twitter would give their applicants insanely difficult code tests, and the toughest technical questions, that only the cleverest CS graduates could pass, I told myself when applying for the job. Lucky for me, this wasn’t the case. The process was insanely difficult—don’t get me wrong—but the team at TweetDeck gave me real world problems to solve. The first code test involved bug fixes, performance and testing. The second involved DOM traversal and manipulation. Instead of being put on the spot in a room with a whiteboard and pen I was given a task, access to the internet, and time to work on it. 
Similarly, in my technical interviews, I was asked to pair program on real world problems that I was likely to face on the job. In one of my phone screenings I was told Twitter wanted to increase diversity in its teams. Not just gender diversity, but also diversity of experience and background. Six months later, with a bunch of new hires, team lead Tom Ashworth says TweetDeck has the most diverse team it’s ever had. “We designed an interview process that gave us a way to simulate the actual job,” he said. “It’s not about testing whether you learnt an algorithm in school.” Is this lowering the bar? No. The bar is whether a candidate has the ability to solve problems they are likely to face on the job. I recently spoke to a longstanding Atlassian engineer who said they hadn’t seen an algorithm in their seven years at the company. These days, only about 50 per cent of developers have computer science degrees. The majority of developers are self taught, learn on the job or via online courses. If you want to increase diversity in your engineering team, ensure your interview process isn’t excluding these people.",2016,Amy Simmons,amysimmons,2016-12-20T00:00:00+00:00,https://24ways.org/2016/my-first-18-months-as-a-dev/,process 308,How to Make a Chrome Extension to Delight (or Troll) Your Friends,"If you’re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius. But today is different. You’re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack… right? Nope. If there’s one thing 2016 has taught me, it’s that we—the self-serious, world-changing tech movers and shakers of the universe—haven’t changed one bit from our younger, more delightable selves. How do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like “Stinky Dinosaur” or “Tiny Potato”. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there’s still plenty of room in our big, grown-up hearts for fun. Today, we’re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension. Here’s a sneak peek at my final result featuring my partner, my cat, and I in cheerfully weird accessories. Your result will look however you want it to. Along the way, we’ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful. 
What you’ll need Adobe Illustrator (or a similar illustration program to export PNG) Some images of your friends A text editor Note: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you’ll find ways to do it. Getting started Download a local copy of the boilerplate for today’s tutorial here, and open it in a text editor. Inside, you’ll find a simple web app that you can run in Chrome. Open index.html in Chrome. You should see a grey page that says “Noname”. Open template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template. Note: We’re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn’t totally cross-browser compatible, but that’s okay. Step 1: Gather your friends The first thing to do is choose who your muses are. Since the holidays are upon us, I’d suggest finding inspiration in your family. Create your artwork For each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here’s my crew: As you can see, my use of the word “family” extends to cats. There’s no judgement here. Notice that some of my photos don’t completely fill the artboard–that’s fine. The images will be clipped into ovals when they’re rendered in the application. Now, export your images by following these steps: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your faces. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your images to config.js Open scripts/config.js. This is where you configure your extension. Add key value pairs to the faces object. The key should be the person’s name, and the value should be the filepath to the image. faces: { leslie: 'images/face_leslie.png', kyle: 'images/face_kyle.png', beep: 'images/face_beep.png' } The application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that’s special about the faces object is that person’s name will also be displayed when their face is chosen. Now, when you refresh the project in Chrome, you should see one of your friends along with their name, like this: Congrats, you’re off and running! Step 2: Add adjectives Now that you’ve loaded your friends into the application, it’s time to call them names. This step definitely yields the most laughs for the least amount of effort. Add a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering… prefixes: [ 'Loving', 'Drunk', 'Chatty', 'Merry', 'Creepy', 'Introspective', 'Cheerful', 'Awkward', 'Unrelatable', 'Hungry', ... ] When you refresh Chrome, you should see one of these words prefixed before your friend’s name. Voila! 
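By the way, if you’re curious how the app does its choosing, everything comes down to plucking one item from an array at random. A sketch of the idea (randomChoice is my name for it – the boilerplate’s own helper may be named differently):

// Pick one item from an array at random
function randomChoice(options) {
	return options[Math.floor(Math.random() * options.length)];
}

// For faces, pick a random key from the faces object
// in scripts/config.js, then look up its image path:
var names = Object.keys(config.faces); // e.g. ['leslie', 'kyle', 'beep']
var who = randomChoice(names);
var facePath = config.faces[who];

Math.random() returns a float from 0 up to (but not including) 1, so flooring the product always lands on a valid index.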
Step 3: Choose your color palette Real talk: I’m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you’ve been blessed with the gift of color aptitude, skip ahead. How to choose colors To create a color palette, I start by going to Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like. Copy these colors into your swatches in Adobe Illustrator. They’ll be the base for any illustrations you create later. Now you need a set of background colors. Here’s my trick to making these consistent with your illustration palette without completely blending in. Use the “Adjust Palette” tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors. Add your background colors to config.js Copy your hex codes into the bgColors array in config.js. bgColors: [ '#FFDD77', '#FF8E72', '#ED5E84', '#4CE0B3', '#9893DA', ... ] Now when you go back to Chrome and refresh the page, you’ll see your new palette! Step 4: Accessorize This is the fun part. We’re going to illustrate objects, accessories, lizards—whatever you want—and layer them on top of your friends. Your objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like “hats” or “glasses”. This will allow combinations of accessories to show at once, without showing two of the same type on the same person. Create a group of accessories To get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don’t feel like illustrating, you can use cut-out images instead. Next, follow the same steps as you did when you exported the faces. Here they are again: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your hats. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your accessories to config.js In config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array: customProps: { hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ] } Refresh Chrome and behold, accessories! Create as many more accessories as you want Repeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final Illustrator file looks like this: The last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses: customProps: { hair: [ 'images/hair_bowl.png', 'images/hair_bob.png' ], hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ], glasses: [ 'images/glasses_aviators.png', 'images/glasses_monacle.png' ] } And, there you have it! Randomly generated friends with random accessories. Feel free to go much crazier than I did.
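A quick peek under the hood before we move on: stacking works because the app appends one randomly chosen image per group, in the order the groups are listed. A sketch of that idea (the function name is invented – this isn’t the boilerplate’s actual code):

// Append one accessory per group; later groups end up on top
function renderAccessories(customProps, container) {
	Object.keys(customProps).forEach(function(group) {
		var img = document.createElement('img');
		img.src = randomChoice(customProps[group]); // randomChoice from the earlier sketch
		container.appendChild(img);
	});
}

Assuming the page’s CSS positions each image over the face, DOM order alone decides whether the antlers sit over or under the aviators.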
I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress… Step 5: Publish it It’s time to put this in your new tabs! You have two options: Publish it as a Chrome extension in the Chrome Web Store. Host it as a website and point to it with the New Tab Redirect extension. Today, we’re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I’d suggest sticking to Option #2. How to make a simple Chrome extension to replace the new tab page All you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement: { ""manifest_version"": 2, ""name"": ""Your extension name"", ""version"": ""1.0"", ""chrome_url_overrides"" : { ""newtab"": ""index.html"" } } To test your extension, you’ll need to run it in Developer Mode. Here’s how to do that: Go to the Extensions page in Chrome by navigating to chrome://extensions/. Tick the checkbox in the upper-right corner labelled “Developer Mode”. Click “Load unpacked extension…” and select this project. If everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page. Voila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you. Share the love Now that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they’ve become a part of everyday life–just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats. At the end of the day, let’s make something that makes us happy. Cheers!",2016,Leslie Zacharkow,lesliezacharkow,2016-12-08T00:00:00+00:00,https://24ways.org/2016/how-to-make-a-chrome-extension/,code 314,Easy Ajax with Prototype,"There’s little more impressive on the web today than an appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of a task into a pleasurable activity. But it’s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it’s just so much damn effort. Well, the good news is that – ta-da – it doesn’t have to be a headache. But man does it still look impressive. Here’s how to amaze your friends. Introducing prototype.js Prototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it’s a JavaScript file which you link into your page that then enables you to do cool stuff. There’s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it’s good to go.
What a nice man that Mr Stephenson is – friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp. First step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree. Cutting to the chase Before I go on and set up an example of how to use this, let’s just get to the crux. Here’s how Prototype enables you to make a simple Ajax call and dump the results back to the page: var url = 'myscript.php'; var pars = 'foo=bar'; var target = 'output-div'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); This snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page. Knocking up a basic example So to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call to. So, to that basic HTML page for the user to interact with. Here’s something like the one I found whilst out carol singing (element ids here match the glue script we’ll write in a moment):

<html>
<head>
<title>Easy Ajax</title>
<script src=""prototype.js"" type=""text/javascript""></script>
<script src=""ajax.js"" type=""text/javascript""></script>
</head>
<body>
<form method=""get"" action=""greeting.php"">
<p>Enter your name: <input type=""text"" id=""greeting-name"" name=""greeting-name"" />
<input type=""submit"" id=""greeting-submit"" value=""Greet me"" /></p>
<div id=""greeting""></div>
</form>
</body>
</html>
As you can see, I’ve linked in prototype.js, and also a file called ajax.js, which is where we’ll be putting our glue. (Careful where you leave your glue, kids.) Our basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There’s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You’ll also notice that the form has a submit button – this is so that it can function as a regular form when no JavaScript is available. It’s important not to get carried away and forget the basics of accessibility. Meanwhile, back at the server So we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you’d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it. Here’s a quick PHP script – greeting.php – that Santa brought me early.

<?php
$the_name = $_GET['greeting-name'];
echo ""<p>Season's Greetings, $the_name!</p>
""; ?> You’ll perhaps want to do something a little more complex within your own projects. Just sayin’. Gluing it all together Inside our ajax.js file, we need to hook this all together. We’re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. He’s how we attach an onload event to the window object and get it to call a function named init(): Event.observe(window, 'load', init, false); Now we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field. function init(){ $('greeting-submit').style.display = 'none'; Event.observe('greeting-name', 'keyup', greet, false); } As you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this: function greet(){ var url = 'greeting.php'; var pars = 'greeting-name='+escape($F('greeting-name')); var target = 'greeting'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); } The key points to note here are that any user input needs to be escaped before putting into the parameters so that it’s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call. That’s it No, seriously. That’s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.",2005,Drew McLellan,drewmclellan,2005-12-01T00:00:00+00:00,https://24ways.org/2005/easy-ajax-with-prototype/,code 315,Edit-in-Place with Ajax,"Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn’t go that far towards implementing something really practical. We dipped our toes in, but haven’t learned to swim yet. So here is swimming lesson number one. Anyone who’s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page. Prototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we’ll need here) and a whole bunch of manipulations to the user interface – all through simple library calls. Here’s what we’re building, so let’s do it. Getting Started There are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you’ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here’s what Santa dropped down my chimney: Edit-in-Place with Ajax
<html>
<head>
<title>Edit-in-Place with Ajax</title>
<script src=""prototype.js"" type=""text/javascript""></script>
<script src=""editinplace.js"" type=""text/javascript""></script>
</head>
<body>
<h1>Edit-in-place</h1>
<div id=""desc"">Dashing through the snow on a one horse open sleigh.</div>
</body>
</html>
So that’s our page. The editable item is going to be the <div> called desc. The process goes something like this: Highlight the area onMouseOver Clear the highlight onMouseOut If the user clicks, hide the area and replace with a <textarea> for editing, plus SAVE and CANCEL buttons Here’s the edit() function that builds that editor as a string of markup:

function edit(obj){
	Element.hide(obj);
	var textarea ='<div id=""'+obj.id+'_editor""><textarea cols=""60"" rows=""4"" id=""'+obj.id+'_edit"" name=""'+obj.id+'_edit"">'+obj.innerHTML+'</textarea>';
	var button = '<input id=""'+obj.id+'_save"" type=""button"" value=""SAVE"" /> OR <input id=""'+obj.id+'_cancel"" type=""button"" value=""CANCEL"" /></div>
'; new Insertion.After(obj, textarea+button); Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false); Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false); } The first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object. The last thing to do before we leave the user to edit is to attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel. In the event of a cancel, we can clean up behind ourselves like so: function cleanUp(obj, keepEditable){ Element.remove(obj.id+'_editor'); Element.show(obj); if (!keepEditable) showAsEditable(obj, true); } Saving the Changes This is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response. function saveChanges(obj){ var new_content = escape($F(obj.id+'_edit')); obj.innerHTML = ""Saving...""; cleanUp(obj, true); var success = function(t){editComplete(t, obj);} var failure = function(t){editFailed(t, obj);} var url = 'edit.php'; var pars = 'id=' + obj.id + '&content=' + new_content; var myAjax = new Ajax.Request(url, {method:'post', postBody:pars, onSuccess:success, onFailure:failure}); } function editComplete(t, obj){ obj.innerHTML = t.responseText; showAsEditable(obj, true); } function editFailed(t, obj){ obj.innerHTML = 'Sorry, the update failed.'; cleanUp(obj); } As you can see, we first grab the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to “Saving…” to show that an update is occurring, and make the Ajax POST. If the Ajax fails, editFailed() sets the contents of the object to “Sorry, the update failed.” Admittedly, that’s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to “Saving…”. No one likes a failure – especially a messy one. If the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up. Meanwhile, back at the server The missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here’s what I have in PHP.

<?php
echo $_POST['content'];
?>

Not exactly rocket science is it? I’m just catching the content item from the POST and echoing it back. For your application to be useful, however, you’ll need to know exactly which record you should be updating. I’m passing in the ID of my <div>
, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update. You should also check the user’s credentials to make sure they have permission to edit whatever it is they’re editing. Basically the same rules apply as with any script in your application. Limitations There are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I’ve already mentioned. The second is that from an idealistic standpoint, I’d rather not be using innerHTML. However, the reality is that it’s presently the most efficient way of making large changes to the document. If you’re serving as XML, remember that you’ll need to replace these with proper DOM nodes. It’s also important to note that it’s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don’t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all – it fails silently. It is for this reason that this shouldn’t be used as a complete replacement for a traditional, universally accessible edit form. It’s a great time-saver for those with the ability to use it, but it’s no replacement. See it in action I’ve put together an example page using the inert PHP script above. That is to say, your edits aren’t committed to a database, so the example is reset when the page is reloaded.",2005,Drew McLellan,drewmclellan,2005-12-23T00:00:00+00:00,https://24ways.org/2005/edit-in-place-with-ajax/,code 320,DOM Scripting Your Way to Better Blockquotes,"Block quotes are great. I don’t mean they’re great for indenting content – that would be an abuse of the browser’s default styling. I mean they’re great for semantically marking up a chunk of text that is being quoted verbatim. They’re especially useful in blog posts.
<blockquote>
<p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p>
</blockquote>

Notice that you can’t just put the quoted text directly between the <blockquote> tags. In order for your markup to be valid, block quotes may only contain block-level elements such as paragraphs. There is an optional cite attribute that you can place in the opening <blockquote> tag. This should contain a URL containing the original text you are quoting (I’ll use example.com as a stand-in for the real source here):

<blockquote cite=""http://www.example.com/source"">
<p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p>
</blockquote>

Great! Except… the default behavior in most browsers is to completely ignore the cite attribute. Even though it contains important and useful information, the URL in the cite attribute is hidden. You could simply duplicate the information with a hyperlink at the end of the quoted text:

<blockquote cite=""http://www.example.com/source"">
<p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p>
<p><a href=""http://www.example.com/source"">source</a></p>
</blockquote>
But somehow it feels wrong to have to write out the same URL twice every time you want to quote something. It could also get very tedious if you have a lot of quotes. Well, “tedious” is no problem to a programming language, so why not use a sprinkling of DOM Scripting? Here’s a plan for generating an attribution link for every block quote with a cite attribute: Write a function called prepareBlockquotes. Begin by making sure the browser understands the methods you will be using. Get all the blockquote elements in the document. Start looping through each one. Get the value of the cite attribute. If the value is empty, continue on to the next iteration of the loop. Create a paragraph. Create a link. Give the paragraph a class of “attribution”. Give the link an href attribute with the value from the cite attribute. Place the text “source” inside the link. Place the link inside the paragraph. Place the paragraph inside the block quote. Close the for loop. Close the function. Here’s how that translates to JavaScript:

function prepareBlockquotes() {
	if (!document.getElementsByTagName || !document.createElement || !document.appendChild) return;
	var quotes = document.getElementsByTagName(""blockquote"");
	for (var i=0; i<quotes.length; i++) {
		var source = quotes[i].getAttribute(""cite"");
		if (!source) continue;
		var para = document.createElement(""p"");
		var link = document.createElement(""a"");
		para.className = ""attribution"";
		link.setAttribute(""href"", source);
		link.appendChild(document.createTextNode(""source""));
		para.appendChild(link);
		quotes[i].appendChild(para);
	}
}

Call that function once the page has loaded – window.onload = prepareBlockquotes; will do the trick – and every block quote with a cite attribute gains a little “source” link tucked inside its <blockquote> tags. You can style the attribution link using CSS. It might look good aligned to the right with a smaller font size. If you’re looking for something to do to keep you busy this Christmas, I’m sure that this function could be greatly improved. Here are a few ideas to get you started: Should the text inside the generated link be the URL itself? If the block quote has a title attribute, how would you take its value and use it as the text inside the generated link? Should the attribution paragraph be placed outside the block quote? If so, how would you do that (remember, there is an insertBefore method but no insertAfter)? Can you think of other instances of useful information that’s locked away inside attributes? Access keys? Abbreviations?",2005,Jeremy Keith,jeremykeith,2005-12-05T00:00:00+00:00,https://24ways.org/2005/dom-scripting-your-way-to-better-blockquotes/,code 326,Don't be eval(),"JavaScript is an interpreted language, and like so many of its peers it includes the all-powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It’s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you’re using eval() there’s probably something wrong with your design. Common mistakes Here’s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it – but you don’t know the name of the property until runtime. Here’s how NOT to do it: var property = 'bar'; var value = eval('foo.' + property); Yes it will work, but every time that piece of code runs JavaScript will have to kick back into interpreter mode, slowing down your app. It’s also dirt ugly. Here’s the right way of doing the above: var property = 'bar'; var value = foo[property]; In JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string. Security issues In any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript – you should be extremely cautious of running eval() against any code that may have been tampered with – for example, strings taken from the page query string.
Executing untrusted code can leave you vulnerable to cross-site scripting attacks. What’s it good for? Some programmers say that eval() is B.A.D. – Broken As Designed – and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that – it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval(). function evalRequest(url) { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { eval(xmlhttp.responseText); } } xmlhttp.open(""GET"", url, true); xmlhttp.send(null); } If you want this to work with Internet Explorer you’ll need to include this compatibility patch.",2005,Simon Willison,simonwillison,2005-12-07T00:00:00+00:00,https://24ways.org/2005/dont-be-eval/,code 327,Improving Form Accessibility with DOM Scripting,"The form label element is an incredibly useful little element – it lets you link the form field unquestionably with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element. What happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements? The classic example is date of birth – ideally, you’ll ask for the date of birth once but you may have three inputs, one each for day, month and year, for which you also need to provide hints about the format required. The problem is that to be truly accessible you need to label each field. So you end up needing something to say “this is a date of birth”, “this is the day field”, “this is the month field” and “this is the year field”. Seems like overkill, doesn’t it? And it can uglify a form no end. There are various ways that you can approach it (and I think I’ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high, merging it into the background so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this?
The technique I am suggesting as another alternative is as follows (here comes the pseudo-code): Start with a totally valid and accessible form Ensure that each form input has a label that is linked to its related form control Apply a class to any label that you don’t want to be visible (for example superfluous) Then, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded: Find all the label elements that are marked as superfluous and hide them Find out what input element each of these label elements is related to Then apply a hint about formatting required for input (gleaned from the original, now-hidden label text) – add it to the form input as default text Finally, add in a behaviour that clears or selects the default text (as you choose) So, here’s the theory put into practice – a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here’s the JavaScript that does the heavy lifting. But why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledge hammer to crack a nut? In all honesty, this is something of an experiment, but consider the following: Using the DOM, you can add extra levels of help, potentially across a whole form – or even range of forms – without necessarily increasing your markup (it goes beyond simply hiding labels) Screen readers today may identify a label that is set not to display, but they may not in the future – this might provide a way around By expanding this technique above, it might be possible to visually change the parent container that groups these items – in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers – while still retaining the underlying semantic/logical structure Well, it’s an idea to think about at least. How is it for you? How else might you use DOM scripting to improve the accessibility or usability of your forms?",2005,Ian Lloyd,ianlloyd,2005-12-03T00:00:00+00:00,https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/,code 328,Swooshy Curly Quotes Without Images,"The problem Take a quote and render it within blockquote tags, applying big, funky and stylish curly quotes both at the beginning and the end without using any images – at all. The traditional way Faint background images under the text, or an image in the markup housed in a little float. Often designers only use the opening curly quote as it’s just too difficult to float a closing one. Why is the traditional way bad? Well, for a start there are no actual curly quotes in the text (unless you’re doing some nifty image replacement). Thus with CSS disabled you’ll only have default blockquote styling to fall back on. Secondly, images don’t resize, so scaling text will have no effect on your graphic curlies. The solution Use really big text. Then it can be resized by the browser, resized using CSS, and even be restyled with a new font style if you fancy it. It’ll also make sense when CSS is unavailable. The problem Creating “Drop Caps” with CSS has been around for a while (Big Dan Cederholm discusses a neat solution in that first book of his), but drop caps are normal characters – the A to Z or 1 to 10 – and these can all be pulled into a set space and do not serve up a ton of whitespace, unlike punctuation characters. Curly quotes aren’t like traditional characters.
Like full stops, commas and hashes they float within the character space and leave lots of dead white space, making it bloody difficult to manipulate them with CSS. Styles generally fit around text, so cutting into that character is tricky indeed. Also, all that extra white space is going to push into the quote text and make it look pretty uneven. This grab highlights the actual character space: See how this is emphasized when we add a normal alphabetical character within the span. This is what we’re dealing with here: Then, there’s size. Call in a curly quote at less than 300% font-size and it ain’t gonna look very big. The white space it creates will be big enough, but the curlies will be way too small. We need more like 700% (as in this example) to make an impression, but that sure makes for a big character space. Prepare the curlies Firstly, remove the opening "" from the quote. Replace it with the opening curly quote character entity &#8220;. Then replace the closing "" with the entity reference for that, which is &#8221;. Now at least the curlies will look nice and swooshy. Add the hooks Two reasons why we aren’t using :first-letter pseudo class to manipulate the curlies. Firstly, only CSS2-friendly browsers would get what we’re doing, and secondly we need to affect the last “letter” of our text also – the closing curly quote. So, add a span around the opening curly, and a second span around the closing curly, giving complete control of the characters:
<blockquote>
<p><span class=""bqstart"">&#8220;</span>Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.<span class=""bqend"">&#8221;</span></p>
</blockquote>
So far nothing will look any different, aside from the curlies looking a bit nicer. I know we’ve just added extra markup, but the benefits as far as accessibility are concerned are good enough for me, and of course there are no images to download. The CSS OK, easy stuff first. Our first rule .bqstart floats the span left, changes the color, and whacks the font-size up to an exuberant 700%. Our second rule .bqend does the same tricks aside from floating the curly to the right. .bqstart { float: left; font-size: 700%; color: #FF0000; } .bqend { float: right; font-size: 700%; color: #FF0000; } That gives us this, which is rubbish. I’ve highlighted the actual span area with outlines: Note that the curlies don’t even fit inside the span! At this stage on IE 6 PC you won’t even see the quotes, as it only places focus on what it thinks is in the div. Also, the quote text is getting all spangled. Fiddle with margin and padding Think of that span outline box as a window, and that you need to position the curlies within that window in order to see them. By adding some small adjustments to the margin and padding it’s possible to position the curlies exactly where you want them, and remove the excess white space by defining a height: .bqstart { float: left; height: 45px; margin-top: -20px; padding-top: 45px; margin-bottom: -50px; font-size: 700%; color: #FF0000; } .bqend { float: right; height: 25px; margin-top: 0px; padding-top: 45px; font-size: 700%; color: #FF0000; } I wanted the blocks of my curlies to align with the quote text, whereas you may want them to dig in or stick out more. Be aware however that my positioning works for IE PC and Mac, Firefox and Safari. Too much tweaking seems to break the magic in various browsers at various times. Now things are fitting beautifully: I must admit that the heights, margins and spacing don’t make a lot of sense if you analyze them. This was a real trial and error job. Get it working on Safari, and IE would fail. Sort IE, and Firefox would go weird. Finished The final thing looks ace, can be resized, looks cool without styles, and can be edited with CSS at any time. Here’s a real example (note that I’m specifying Lucida Grande and then Verdana for my curlies): “Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.” Browsers happy As I said, too much tweaking of margins and padding can break the effect in some browsers. Even now, Firefox insists on dropping the closing curly by approximately 6 or 7 pixels, and if I adjust the padding for that, it’ll crush it into the text on Safari or IE. Weird. Still, as I close now it seems solid through resizing tests on Safari, Firefox, Camino, Opera and IE PC and Mac. Lovely. It’s probably not perfect, but together we can beat the evil typographic limitations of the web and walk together towards a brighter, more aligned world. Merry Christmas.",2005,Simon Collison,simoncollison,2005-12-21T00:00:00+00:00,https://24ways.org/2005/swooshy-curly-quotes-without-images/,business 335,Naughty or Nice? CSS Background Images,"Web Standards based development involves many things – using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable to as many people and user agents as we can. This we understand to be good. And it is good.
Except when we don’t clearly think through the full implications of using those techniques – which often happens when time is short and we need to get things done. Here are some naughty examples of CSS background images with their nicer, more accessible counterparts.

Transaction related messages

I’m as guilty of this as others (or, perhaps, I’m the only one that has done this, in which case this can serve as my holiday season confessional). We use lovely little icons to show status messages for a transaction: was the action successful, or was there a warning or error? For example:

“Your postal/zip code was not in the correct format.”

Notice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. All of this visual information is contained in the CSS rules for that div:
<div class="error">
	<p>Your postal/zip code was not in the correct format.</p>
</div>
div.error {
	background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px;
	color: #900;
	border-top: 1px solid #c00;
	border-bottom: 1px solid #c00;
	padding: 0.25em 0.5em 0.25em 2.5em;
	font-weight: bold;
}

Using this approach also makes it very easy to create div.success and div.warning rules, meaning we have less to change in our HTML. Nice, right?

No. Naughty.

Visual design communicates

The CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. With the icon as a background image, there is no way to specify any alt text for it, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an “error.”

The solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently, in the absence of CSS there will be no icon – which (I’m hoping you agree) is critical to communicating that there was an error. The icon should be considered content and not simply presentational. The borders and background colour are certainly much less critical – they belong in the CSS. Let’s change the code to place the image directly in the HTML, using appropriate alt text to better communicate the meaning of the icon to all users:
<div class="bettererror">
	<img src="../images/error_small.png" alt="Error" />
	<p>Your postal/zip code was not in the correct format.</p>
</div>
div.bettererror {
	background-color: #ffcccc;
	color: #900;
	border-top: 1px solid #c00;
	border-bottom: 1px solid #c00;
	padding: 0.25em 0.5em 0.25em 2.5em;
	font-weight: bold;
	position: relative;
	min-height: 1.25em;
}

div.bettererror img {
	display: block;
	position: absolute;
	left: 0.25em;
	top: 0.25em;
	padding: 0;
	margin: 0;
}

div.bettererror p {
	position: absolute;
	left: 2.5em;
	padding: 0;
	margin: 0;
}

Compare these two examples of transactional messages.

Status of a Record

This example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three “states” for a listing: new, normal, and sold. Here’s how they look:

Example of a New Listing
Example of a Sold Listing

If we (forgive the pun) blindly apply the “use a CSS background image” technique, we clearly run into problems with the new and sold images – they actually contain content, with no way to specify an alternative when placed in the CSS.

In the case of the “new” image, we can use the same strategy as we used in the first example (the transaction result). The “new” image should be considered content, and is placed in the HTML as part of the heading that identifies the listing:
<h2>
	<img src="new.png" alt="New!" />
	...
</h2>

However, when considering the “sold” listing, there are fewer changes to be made: we keep the same look by leaving the “SOLD” image as a background image and providing the equivalent information elsewhere in the listing – namely, right in the heading.
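By way of illustration, a sketch of how that might look – the class name, image path and listing details here are hypothetical, not taken from the original example:

<!-- sketch: class, path and address are hypothetical; the SOLD badge stays in the CSS -->
<div class="listing sold">
	<h2>Sold: 123 Example Street</h2>
	...
</div>

div.sold {
	background: url(../images/sold.png) no-repeat top right;
}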
For those who can’t see the background image, the status is communicated clearly and right away. A screen reader user who is navigating by headings, or viewing a listing, will know right away that a particular property is sold. Of note here: in both cases (new and sold), placing the status near the beginning of the record helps with a zoom layout as well.

Better Example of a Sold Listing

Summary

Remember: in the holiday season, it’s what you give that counts! Using CSS background images is easy and saves you time, but think of the children. And everyone else, for that matter… CSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere).",2005,Derek Featherstone,derekfeatherstone,2005-12-20T00:00:00+00:00,https://24ways.org/2005/naughty-or-nice-css-background-images/,code