This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from users’ profile pages.
Samy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.
alert(document.body.innerHTML)
Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.
alert(eval('document.body.inne' + 'rHTML'))
This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onreadystatechange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages.
One final obstacle stood in his way. The same-origin policy is a security mechanism that prevents scripts hosted on one domain from interacting with sites hosted on another domain.
if (location.hostname == 'profile.myspace.com') {
document.location = 'http://www.myspace.com'
+ location.pathname + location.search
}
Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited.
Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to do 90 days of community service.
This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross-site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.
So how can you help protect yourself from cross-site scripting?
Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.
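As a rough sketch of what this looks like in practice (assuming a Node.js server with the sanitize-html package installed; the variable names here are mine):

const sanitizeHtml = require('sanitize-html');

// Strip anything that isn't in an explicit allow-list of tags and attributes
const clean = sanitizeHtml(userInput, {
  allowedTags: ['b', 'i', 'em', 'strong', 'a', 'p'],
  allowedAttributes: {
    a: ['href']
  }
});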
You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.
You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.
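As a sketch, a response header along these lines allows scripts and styles only from your own origin and one trusted CDN (the CDN hostname is a placeholder):

Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self' https://cdn.example.com

Because this policy doesn’t include 'unsafe-eval', browsers will also refuse to run eval() on pages served with it.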
For content not under your control, consider setting up sub-resource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.
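A sketch of what this looks like in your HTML (the integrity value is a placeholder — the real one is a hash generated from the file’s contents):

<script src="https://cdn.example.com/library.js"
  integrity="sha384-[base64-encoded hash of library.js]"
  crossorigin="anonymous"></script>

If the file the CDN serves ever stops matching the hash, the browser refuses to execute it.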
npm audit: Protecting yourself from code you don’t own
JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit.
npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm:
npm install npm -g
npm audit
When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.
If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!
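For example (both commands are built into npm 6 and later):

npm audit        # print a report of known vulnerabilities
npm audit fix    # upgrade affected packages to patched versions where possible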
Securing your site like it’s 2019
The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.
However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.
As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years.

Katie Fenn, 1 December 2018
https://24ways.org/2018/securing-your-site-like-its-1999/
The System, the Search, and the Food Bank

Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center.
On the belt are plastic bins filled with assortments of shelf-stable food—one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like “Bottled Water” or “Cereal” or “Candy.”
Such was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins.
I could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there’s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into.
Look, I can’t quite explain it: I just know that I love sorting, organizing, and classifying things—groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask?
The opportunity to create meaning from nothing is at the core of my excitement, which is why I’ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. “I can’t believe they’re letting me do this,” I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey.
The jumble of donated items coming into the center needs to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It’s not just a nice-to-have that we spent our morning separating cookies from carrots—it’s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we’re talking about donated groceries or—there it is—web content.
This happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.)
Is X a Y? was the question at the heart of our food sorting—but it’s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves:
What are the criteria that define Y?
Does X meet those criteria?
We don’t usually articulate it so concretely because it’s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn’t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn’t allowed, causing a small cognitive hiccup—this X is NOT a Y—that sometimes meant having to re-sort our boxes.
On the web, we’re interested—I would hope—in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer—Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y?
We have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means—well, this means a lot of things, and I’ve got limited space here, so let’s focus on these three lessons from the food bank:
Use plain, familiar language. This advice seems to be given constantly, but that’s because it’s solid and it’s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it’s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I’ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows—not only will they be more likely to spot what they’re looking for on your site or app, but you’ll turn up more often in search results.
Create consistency in all things. Missteps in consistency look like my earlier chicken broth example—changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts—these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall.
Make the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it.
This idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn’t know every category label around the conveyer belt, nor what criteria each category warranted.
The system wasn’t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We’d ask staff members. We’d ask more seasoned volunteers. We’d ask each other. We’d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn.
The more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it.
The same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away—or, worse, misinform or hurt them.
This is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y—well-sorted content can give them the answer.

Lisa Maria Martin, 16 December 2018
https://24ways.org/2018/the-system-the-search-and-the-food-bank/
Build up Your Leadership Toolbox

Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won’t wake up one day and magically be imbued with all you need to do a good job at leading. If we don’t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title?
What even is it?
I got very frustrated way back in my days as a senior developer when I was given “advice” about my leadership style; at the time I didn’t have the words to describe the styles and ways in which I was leading to be able to push back. I heard these phrases a lot:
you need to step up
you need to take charge
you need to grab the bull by its horns
you need to have thicker skin
you need to just be more confident in your leading
you need to just make it happen
I appreciate some people’s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean. How exactly do you “step up” and how are you evaluating what step I’m on? I am confident, what does being even more confident help achieve with leading? Does that not lead you down the path of becoming an arrogant door knob? >___<
While there is no One True Way to Lead, there is an overwhelming pattern of people in positions of leadership within tech industry being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine associated qualities (#notallmen). I’ll leave the gendered nature of leadership qualities as an exercise in googling for the reader.
I’ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books.
I <3 books
I refused to buy into that style of leadership as being the only accepted way to be. There had to be room for different kinds of people to be leaders and have different leadership styles.
There are three books that changed me forever in how I approach and think about leadership.
Primal Leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee
Quiet, by Susan Cain
Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, by Brené Brown
I recommend you read them. Ignore the slightly cheesy titles and trust me, just read them.
Primal Leadership helped to give me the vocabulary and understanding I needed about the different styles of leadership there are, and how and when to apply them.
Quiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I’d had managers or support from someone who valued introverts’ strengths, things would’ve been very different. I would’ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It’s OK to be different and needing different things like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get.
Brené Brown’s book I am forever in debt to. Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.
It takes great courage to be vulnerable and open about what you can and can’t do. Open about your mistakes. To vocalise what you don’t know and ask for help. In some lights, these are seen as weaknesses and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work, they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and often as I can.
Just like gender, leadership is not binary
If you google for what a leader is, you’ll get many different answers. I personally think Brené’s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function.
I define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential.
Brené Brown
Being a leader isn’t about being the loudest in a room, having veto power, talking over people or ignoring everyone else’s ideas. It’s not about “telling people what to do”. It’s not about an elevated status that you’re better than others. Nor is it about creating a hand wavey far away vision and forgetting to help support people in how to get there.
Being a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. There is no Done.
Leaders are Made, they are not Born.
Be flexible in your leadership style
Typically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation.
The book Primal Leadership gives a summary of six leadership styles, which are:
Visionary
Coaching
Affiliative
Democratic
Pacesetting
Commanding
Visionary, moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they’re doing what they’re doing.
Coaching, is about connecting what a person wants and helping to align that with organisation’s goals. It’s a balance of helping someone improve their performance to fulfil their role and their potential beyond.
Affiliative, creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or to help strengthen connections within and across teams. During stressful times having a positive and supportive connection to those around us really helps see us through those times.
Democratic, values people’s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. The tricky part about this style, I find, is that when I gather and listen to everyone’s input, that doesn’t mean the end result is that I have to please everyone.
The next two, sadly, are the ones wielded far too often and have the greatest negative impact. It’s where the “telling people what to do” comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.
Pacesetting, when used well, it is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well performing team, this can be great to help achieve real focus and drive. Sadly it is so overused and poorly executed it becomes the “just make it happen” and driver of unrealistic workload which contributes to burnout.
Commanding, when used appropriately soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour.
Commanding style seems to be what a lot of people think being a leader is, taking control and commanding a situation. It should be used sparingly and only when absolutely necessary.
Be responsible for the power you wield
If reading through those you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven’t got all of these down and ready to use in your toolbox…
Take a breath. Take responsibility. Take action.
No one is perfect, and it’s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you’re going to try some different styles. You can be vulnerable and own up to mistakes you might’ve made followed with an apology. You can order those books and read them. Those books will give you more examples on those leadership styles and help you to find your own voice.
The impact you can have on the lives of those around you when you’re a leader, is huge. You can help be that positive impact, help discover and develop potential in someone.
Time spent understanding people is never wasted.
Cate Huston.
I believe in you. <3 Mazz.

Mazz Mosley, 10 December 2018
https://24ways.org/2018/build-up-your-leadership-toolbox/
Surviving—and Thriving—as a Remote Worker

Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there’s an entire world of talent out there?
I’ve been working remotely, full-time, for five and a half years. I’ve reached the point where I can’t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I’ve grown attached to my current level of freedom and flexibility.
However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn’t for everyone, but most people, with enough effort, can make it work — or even thrive. Here’s what I’ve learned in over five years of working remotely.
Experiment with your environment
As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight.
Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls.
Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.)
Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls.
Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you.
Set boundaries
If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh?
Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with).
For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why suddenly I hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot.
I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time.
I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality.
Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon.
You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out.
Create intentional transition time
I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time.
The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort.
I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch.
This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer.
Embrace async
If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now.
For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers.
Asynchronous communication is great. Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.”
Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it!
(That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.)
Seek out social opportunities
Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide.
I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.)
Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh.
If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work.
If you don’t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own.
These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment!
Hope to see you all online soon 🙌

Mel Choyce, 9 December 2018
https://24ways.org/2018/thriving-as-a-remote-worker/
Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist

We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.
*(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!)
Looking for critters in rocky intertidal habitats
One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’.
A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.)
They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, even though the tides are not as low as in our Winter and Spring seasons.
My favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones as well as being relatively accessible.
I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist.
Using technology and the crowdsourced species observations of the iNaturalist community we can shortcut our way to this superpower!
Finding nearby animals with iNaturalist
We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.
I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting.
This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.
iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button.
You’ll then get a URL that looks a bit like
https://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at
which you can call and interrogate using a programming language of your choice.
If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).
Rapid development using Observable Notebooks
We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn’t require any setup at all.
You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example.
Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock.
Developing your Naturalist superpowers
If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you.
For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.
We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com
The tool lets you export the bounding box in several formats using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811
A quick map primer:
The higher the latitude the more north it is
The lower the latitude the more south it is
Latitude 0 = the equator
The higher the longitude the more east it is of Greenwich
The lower the longitude the more west it is of Greenwich
Longitude 0 = Greenwich
In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:
nelat = highest latitude = north limit = 37.499811
nelng = highest longitude = east limit = -122.492738
swlat = smallest latitude = south limit = 37.492805
swlng = smallest longitude = west limit = -122.50542
As API parameters these look like this:
?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542
These parameters in this format can be used for most of the iNaturalist API methods.
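As a minimal sketch (the boundingBox object is my own naming, not part of the iNaturalist API), you could assemble this query string in JavaScript and let URLSearchParams handle the encoding:

const boundingBox = {
  nelat: 37.499811,   // north limit
  nelng: -122.492738, // east limit
  swlat: 37.492805,   // south limit
  swlng: -122.50542   // west limit
};

const query = new URLSearchParams(boundingBox).toString();
// "nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542"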
Nudibranch seasonality in Pillar Point
We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.
In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist’s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’.
Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words e.g. ‘Glaucus atlanticus’, the first Latin word is the ‘Genus’, like a family name ‘Glaucus’, and the second word identifies that particular species, like a given name ‘atlanticus’.
The two Latin words together indicate a specific species; the term we use colloquially to refer to a type of animal often differs wildly from region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead.
The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:
pillar_point_counts_per_week = fetch(
  "https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true"
).then(response => {
  return response.json();
})
Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks.
(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)
The iNaturalist API returns data that looks like this:
{
  "total_results": 53,
  "page": 1,
  "per_page": 53,
  "results": {
    "week_of_year": {
      "1": 136,
      "2": 20,
      "3": 150,
      "4": 65,
      "5": 186,
      "6": 74,
      "7": 47,
      "8": 87,
      "9": 64,
      "10": 56,
      ...
But for our Vega-Lite graph we need data that looks like this:
[{
  "week": "01",
  "value": 136
}, {
  "week": "02",
  "value": 20
}, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys:
objects_to_plot = {
  let objects = [];
  Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) {
    objects.push({
      week: `Wk ${week_index.toString()}`,
      observations: pillar_point_counts_per_week.results.week_of_year[week_index]
    });
  })
  return objects;
}
We can then plug this into Vega-Lite to draw us a graph:
vegalite({
  data: {values: objects_to_plot},
  mark: "bar",
  encoding: {
    x: {field: "week", type: "nominal", sort: null},
    y: {field: "observations", type: "quantitative"}
  },
  width: width * 0.9
})
It’s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences.
So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.
pillar_point_nudibranches = {
  let api_results = await fetch(
    "https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"
  ).then(r => r.json())
  let species_list = api_results.results.map(i => ({
    value: i.taxon.id,
    label: `${i.taxon.preferred_common_name} (${i.taxon.name})`
  }));
  return species_list
}
We can create an interactive select box by importing code from Jeremy Ashkanas’ Observable Notebook: add import {select} from "@jashkenas/inputs" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another!
viewof select_species = select({
  title: "Which Nudibranch do you want to see seasonality for?",
  options: [{value: "", label: "All the Nudibranchs!"}, ...pillar_point_nudibranches],
  value: ""
})
Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.
pillar_point_counts_per_month_per_species = fetch(
  `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`
).then(r => r.json())
Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite:
objects_to_plot_species_month = {
  let objects = [];
  Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) {
    objects.push({
      month: (new Date(2018, (month_index - 1), 1)).toLocaleString("en", {month: "long"}),
      observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]
    });
  })
  return objects;
}
(Note that in the above code we are creating a date object with our specific month in, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month.)
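For example, you can check this in any JavaScript console:

(new Date(2018, 0, 1)).toLocaleString("en", {month: "long"}) // returns "January"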
And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!
vegalite({
  data: {values: objects_to_plot_species_month},
  mark: "bar",
  encoding: {
    x: {field: "month", type: "nominal", sort: null},
    y: {field: "observations", type: "quantitative"}
  },
  width: width * 0.9
})
Now we can see the best time of year to plan a tidepooling trip to Pillar Point if we want to find a specific species of Nudibranch.
This tool is great for planning when to go rockpooling at Pillar Point, but what if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs?
Well… we can create ourselves a dynamic guide with a list of the species, their photos, names and how many times they have been observed in that month of the year!
Our select box this time looks as follows: simpler than before, but assigning the month value to the variable selected_month.
viewof selected_month = select({
  title: "When do you want to see Nudibranchs?",
  options: [
    { label: "Whenever", value: "" },
    { label: "January", value: "1" },
    { label: "February", value: "2" },
    { label: "March", value: "3" },
    { label: "April", value: "4" },
    { label: "May", value: "5" },
    { label: "June", value: "6" },
    { label: "July", value: "7" },
    { label: "August", value: "8" },
    { label: "September", value: "9" },
    { label: "October", value: "10" },
    { label: "November", value: "11" },
    { label: "December", value: "12" },
  ],
  value: ""
})
We then can use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.
all_species_data = fetch(
  `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`
).then(r => r.json())
You can render HTML directly in a notebook cell using Observable’s html tagged template literal:
If you go to Pillar Point ${
  {"": "",
   "1": "in January",
   "2": "in February",
   "3": "in March",
   "4": "in April",
   "5": "in May",
   "6": "in June",
   "7": "in July",
   "8": "in August",
   "9": "in September",
   "10": "in October",
   "11": "in November",
   "12": "in December",
  }[selected_month]
} you might see…

${all_species_data.results.map(s => `
  ${s.taxon.name}
  Seen ${s.count} times
`)}
These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!
Play with it yourself in this Observable Notebook.
Conclusion
I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower.
Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.
Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.

Natalie Downe, 18 December 2018
https://24ways.org/2018/observable-notebooks-and-inaturalist/
Turn Jekyll up to Eleventy

Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option.
The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.
By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.
Introducing Eleventy
Eleventy is a more flexible alternative to Jekyll. Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).
As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way.
Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:
git clone https://github.com/mattcone/markdown-guide.git
cd markdown-guide
Before you start
If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js but, if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM:
Install Node.js:
Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.
Windows: Download the Windows installer from the Node.js website and follow the instructions.
Initiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like a Gemfile in a Ruby project, this file contains a list of your project’s third-party dependencies.
If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.
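A minimal .gitignore for this setup might contain (the _site entry assumes you don’t want to version your build output either):

node_modules
_site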
Installing Eleventy
With Node.js installed and your project setup to work with NPM, we can now install Eleventy as a dependency:
npm install --save-dev @11ty/eleventy
If you open package.json you should see the following:
…
"devDependencies": {
  "@11ty/eleventy": "^0.6.0"
}
…
We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following:
npx eleventy --input=README.md --formats=md
This command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.
Configuration
Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on.
We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:
module.exports = function(eleventyConfig) {
  return {
    dir: {
      input: "./",      // Equivalent to Jekyll's source property
      output: "./_site" // Equivalent to Jekyll's destination property
    }
  };
};
A few other things to bear in mind:
Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).
By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins.
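A sketch of how that might look in .eleventy.js, assuming you wanted footnote support via the markdown-it-footnote plugin (setLibrary is Eleventy’s hook for swapping parsers; check the docs for your version):

const markdownIt = require("markdown-it");
const markdownItFootnote = require("markdown-it-footnote");

module.exports = function(eleventyConfig) {
  // Replace the default Markdown parser with our configured instance
  eleventyConfig.setLibrary("md", markdownIt({ html: true }).use(markdownItFootnote));
};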
Layouts
One area Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).
Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config:
module.exports = function(eleventyConfig) {
  // Aliases are in relation to the _includes folder
  eleventyConfig.addLayoutAlias('about', 'layouts/about.html');
  eleventyConfig.addLayoutAlias('book', 'layouts/book.html');
  eleventyConfig.addLayoutAlias('default', 'layouts/default.html');

  return {
    dir: {
      input: "./",
      output: "./_site"
    }
  };
};
Determining which template language to use
Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.
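For example, from the project root:

mv api/basic-syntax.json api/basic-syntax.json.liquid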
Variables
On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.
Site variables
Alongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:
title: ""Markdown Guide""
url: https://www.markdownguide.org
baseurl: """"
repo: http://github.com/mattcone/markdown-guide
comments: false
author:
name: ""Matt Cone""
og_locale: ""en_US""
Eleventy’s configuration uses JavaScript, which isn’t suited to storing values like these. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.
{
  "title": "Markdown Guide",
  "url": "https://www.markdownguide.org",
  "baseurl": "",
  "repo": "http://github.com/mattcone/markdown-guide",
  "comments": false,
  "author": {
    "name": "Matt Cone"
  },
  "og_locale": "en_US"
}
Page variables
The table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:
Jekyll         Eleventy
page.url       page.url
page.date      page.date
page.path      page.inputPath
page.id        page.outputPath
page.name      page.fileSlug
page.content   content
page.title     title
page.foobar    foobar
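In practice, a template that renders a page’s title and URL would change like this (the markup is purely illustrative):

<!-- Jekyll -->
<h1>{{ page.title }}</h1>
<a href="{{ page.url }}">Permalink</a>

<!-- Eleventy -->
<h1>{{ title }}</h1>
<a href="{{ page.url }}">Permalink</a>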
When iterating through pages, frontmatter values are available via the data object while content is available via templateContent:
Jekyll         Eleventy
item.url       item.url
item.date      item.date
item.path      item.inputPath
item.name      item.fileSlug
item.id        item.outputPath
item.content   item.templateContent
item.title     item.data.title
item.foobar    item.data.foobar
Ideally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.
Pagination variables
Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:
Jekyll                        Eleventy
paginator.page                pagination.pageNumber
paginator.per_page            pagination.size
paginator.posts               pagination.items
paginator.previous_page_path  pagination.previousPageHref
paginator.next_page_path      pagination.nextPageHref
Filters
Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where.
The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:
// {{ variable | jsonify }}
eleventyConfig.addFilter('jsonify', function (variable) {
  return JSON.stringify(variable);
});
Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:
{{ site.members | where: "graduation_year","2014" }}
To account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:
// {{ array | where: key,value }}
eleventyConfig.addFilter('where', function (array, key, value) {
  return array.filter(item => {
    const keys = key.split('.');
    const reducedKey = keys.reduce((object, key) => {
      return object[key];
    }, item);
    return (reducedKey === value ? item : false);
  });
});
There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew!
Includes
As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:
{% include include.html value="key" %}
{{ include.value }}
in Eleventy, you would do this:
{% include ""include.html"", value: ""key"" %}
{{ value }}
A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.
Tweaking Liquid
You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option.
Thankfully, Eleventy allows us to pass options to LiquidJS:
eleventyConfig.setLiquidOptions({
  dynamicPartials: false,
  root: [
    '_includes',
    '.'
  ]
});
Collections
Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.
Collections in Jekyll
In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. Our Markdown Guide has two collections:
collections:
  - basic-syntax
  - extended-syntax
These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:
{% for syntax in site.extended-syntax %}
  {{ syntax.title }}
{% endfor %}
Collections in Eleventy
There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:
---
title: Strikethrough
syntax-id: strikethrough
syntax-summary: "~~The world is flat.~~"
tag: extended-syntax
---
We can then iterate over tagged content like this:
{% for syntax in collections.extended-syntax %}
  {{ syntax.data.title }}
{% endfor %}
Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):
eleventyConfig.addCollection('basic-syntax', collection => {
  return collection.getFilteredByGlob('_basic-syntax/*.md');
});

eleventyConfig.addCollection('extended-syntax', collection => {
  return collection.getFilteredByGlob('_extended-syntax/*.md');
});
We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result:
eleventyConfig.addCollection('example', collection => {
  return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {
    return a.data.display_order - b.data.display_order;
  });
});
Hopefully, this gives you just a hint of what’s possible using this approach.
Using directory data to manage defaults
By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:
---
title: Lists
syntax-id: lists
api: ""no""
permalink: /basic-syntax/lists.html
---
This is probably not something we want to manage on a file-by-file basis, but again, Eleventy has features that can help: directory data and permalink variables.
For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:
{
  "layout": "syntax",
  "tag": "basic-syntax",
  "permalink": "basic-syntax/{{ title | slug }}.html"
}
However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs; it merely uses content files to store data. So let’s change things around a little. No longer tied to Jekyll’s rules about where collection folders should be saved and how they should be labelled, we’ll move them into a folder called _content:
markdown-guide
└── _content
├── basic-syntax
├── extended-syntax
├── getting-started
└── _content.json
We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:
{
  "permalink": false
}
Static files
Eleventy only transforms files whose template language it’s familiar with. But often we may have static assets that don’t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:
module.exports = function(eleventyConfig) {
  …
  // Copy the `assets` directory to the compiled site folder
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: {
      input: "./",
      output: "./_site"
    },
    passthroughFileCopy: true
  };
};
Final considerations
Assets
Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts — we have plenty of choices in that department already. If you’ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, which sadly are beyond the scope of this article.
Publishing to GitHub Pages
One of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy generated site — or any site not built with Jekyll — to GitHub Pages can be quite involved, typically requiring you to copy the generated site to the gh-pages branch or include that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It’s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere!
Going one louder
If you’ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder why it can be prudent to avoid jumping aboard bandwagons.
While it’s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy’s appeal, it’s only a year old, so it has little in the way of an ecosystem of plugins or themes. It also has only one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.
I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that’s perfectly fine!
But these go to 11.
[1] Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5.

Researching a Property in the CSS Specifications
by Rachel Andrew

I frequently joke that I’m “reading the specs so you don’t have to”, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However, waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial.
What if you could just look it up yourself? That’s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs.
Where are the CSS Specifications?
The easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4.
Who are the specifications for?
CSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected.
Which version of a spec should I look at?
There are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I’m referring to a particular Working Draft in an article I’ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing.
If you want the very latest additions and changes to the spec, then the Editor’s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor’s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor’s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed.
If you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated.
How to approach a spec
The first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won’t need to read all the way to the bottom, or read every section in detail.
The second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. There are some key specifications that many other specifications draw on, such as:
Values and Units
Intrinsic and Extrinsic Sizing
Box Alignment
When something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore.
Researching your property
As an example we will take a look at the property grid-auto-rows, this property defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property.
You might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties.
Clicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns.
Value
For value we can see that the property accepts a value of <track-size>+. The next thing to do is to find out what that actually means; clicking <track-size> will take you to where it is defined in the Grid spec.
The <track-size> value is defined as accepting various values:

<track-breadth>
minmax( <inflexible-breadth> , <track-breadth> )
fit-content( <length-percentage> )

We need to head down the rabbit hole to find out what each of these means. From here we essentially go down line by line until we have unpacked the value of <track-size>.
<track-breadth> is defined just below as:

<length-percentage>
<flex>
min-content
max-content
auto
So these are all things that would be valid to use as a value for grid-auto-rows.
The first value of <length-percentage> is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length>, in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values.
grid-auto-rows: 100px;
grid-auto-rows: 1em;
grid-auto-rows: 30%;
When using percentages, it is important to know what it is a percentage of, as a percentage has to resolve from something. There is text in the spec which explains how column and row percentages work.

“<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.”
This means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid.
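To illustrate the quoted text with a small sketch:

.grid {
  display: grid;
  /* each implicit column is 50% of the grid container's inline size (its width, in English) */
  grid-auto-columns: 50%;
  /* each implicit row is 25% of the grid container's block size (its height) */
  grid-auto-rows: 25%;
}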
The second value of <flex> is also defined here, as “A non-negative dimension with the unit fr specifying the track’s flex factor.” This is the fr unit, and the spec links to a fuller definition of fr as this unit is only used in Grid Layout, so it is therefore defined in the grid spec. We now know that a valid value would be:
grid-auto-rows: 1fr;
There is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a large auto size and therefore would be larger after spare space had been distributed.
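A small sketch of what that automatic minimum means in practice:

/* These two declarations behave identically: */
grid-auto-rows: 1fr;
grid-auto-rows: minmax(auto, 1fr);

/* To get rows that stay equal regardless of content, remove the automatic minimum: */
grid-auto-rows: minmax(0, 1fr);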
We then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out.
For example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary.
We can now add min-content and max-content to our available values.
grid-auto-rows: min-content;
grid-auto-rows: max-content;
The final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this.
grid-auto-rows: auto;
So, this was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). So this is telling us that we can use minmax() as a value, the final (max) value will be <track-breadth> and we have already unpacked all of the allowable values there. The first value (min) is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value:

<length-percentage>
min-content
max-content
auto

We already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>.
grid-auto-rows: minmax(100px, 200px);
grid-auto-rows: minmax(20%, 1fr);
grid-auto-rows: minmax(1em, auto);
grid-auto-rows: minmax(min-content, max-content);
Finally we can use fit-content( <length-percentage> ). We can see that fit-content takes a value of <length-percentage>, which we already know to be either a length unit, or a percentage. The spec details how fit-content is worked out, and it essentially allows a track which acts as if you had used the max-content keyword, however the track stops growing when it hits the length passed to it.
grid-auto-rows: fit-content(200px);
grid-auto-rows: fit-content(20%);
Those are all of our possible values, and to round things off, check again at the value <track-size>+: you can see it has a little + sign next to it. Click that and you will be taken to the CSS Values and Units module to find that, “A plus (+) indicates that the preceding type, word, or group occurs one or more times.” This means that we can pass a single track size to grid-auto-rows, or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size:
“If multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.”
Therefore with the following CSS, if five implicit rows were needed they would be as follows:
100px
1fr
auto
100px
1fr
.grid {
  display: grid;
  grid-auto-rows: 100px 1fr auto;
}
Initial
We can now move to the next line in the box, and you’ll be glad to know that it isn’t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one.
Applies to
This line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers.
Inherited
This tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element.
Percentages
Are percentages allowed for this property, and if so, how are they calculated? In this case we are referred to the definition for grid-template-columns and grid-template-rows, where we discover that the percentage is from the corresponding dimension of the content area.
Media
This defines the media group that the property belongs to. In this case visual.
Computed Value
This details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.
Canonical Order
If you have a property (generally a shorthand property) which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don’t need to worry about this value in the table.
Animation Type
This details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet!
That’s (mostly) it!
Sometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anything specific to this property.
In selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. However ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole.
Being able to work out what is valid for each property is incredibly useful. It means you don’t waste time trying to use a value that doesn’t work for that property. You also may find that there are values you weren’t aware of, that solve problems for you.
Further reading
Specifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need.
You may also find useful:
Value Definition Syntax on MDN.
The MDN Glossary defines many common terms.
Understanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax.
How to read W3C Specs - from 2001 but still relevant.
I hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else can be very useful indeed.

How to Use Audio on the Web
by Ruth John

I know what you’re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! 🙉
You’re having flashbacks, flashbacks to the days of yore, when we had a <bgsound> element and yup did everyone think that was the most rad thing since <marquee>. I mean put those two together with a <blink>, only use CSS colour names, make sure your borders were all set to ridge and you’ve got yourself the neatest website since 1998.
The sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then.
Yes it is 2018, the end of it in fact, soon to be 2019. We are certainly living in the future. Hoverboards (well, self-driving cars), holodecks (VR headsets), rocket boots (drone racing)… sound on websites? Get real, Ruth.
We can’t help but be jaded, even though the <bgsound> element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy “controls when video and audio is allowed to autoplay”, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future.
But then of course comes the question, having lived in a muted present for so long, where and why would you use audio?
✨ Showcase Time ✨
There are some incredible uses of audio on websites today. This is my personal favourite futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience.
futurelibrary.no
Another site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.
pottermore.com
Eighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive, it adds to the experience.
Eighty-six and a half years
Sound can be powerful and in some cases useful. Last year I wrote about using them to help validate forms. Audiochart is a library which “allows the user to explore charts on web pages using sound and the keyboard”. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here.
Then there’s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers with have with the Web Audio API today.
lightnote.co
learningmusic.ableton.com
Considerations
Yes, tis the season, let’s be more thoughtful about our audios. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to ‘enter’ the site and headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) sets the user’s expectations by stating headphones are recommended; they will expect sound, and if in a public setting can enlist the use of a common electronic device to cause less embarrassment.
Eighty-six and a half years
Allowing mute and/or volume control clearly within the user interface is a good idea. It won’t draw the user out of the experience, it’ll give more control to the user about what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a very visible volume control than to fumble with device settings.
Indicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn’t always the first place to look for everyone.
To The Future
So let’s go!
We see amazing demos built with Web Audio, and I’m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that.
But audio doesn’t actually need to be all bells and whistles (hey, it’s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design.
Isn’t it great then that there’s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).
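To give a flavour of what that tutorial covers, here is a minimal sketch: playing and pausing a sound with volume control and simple panning. The element IDs are assumptions for illustration, and the sound must start from a user gesture to satisfy the autoplay policy.

const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const audioEl = document.querySelector('#track');
const source = audioCtx.createMediaElementSource(audioEl);

const gainNode = audioCtx.createGain();       // volume, 0 to 1
const panner = audioCtx.createStereoPanner(); // pan, -1 (left) to 1 (right)

source.connect(gainNode);
gainNode.connect(panner);
panner.connect(audioCtx.destination);

document.querySelector('#play').addEventListener('click', () => {
  // Resume the context on a user gesture, as the autoplay policy requires
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
  if (audioEl.paused) {
    audioEl.play();
  } else {
    audioEl.pause();
  }
});

document.querySelector('#volume').addEventListener('input', (event) => {
  gainNode.gain.value = event.target.value; // e.g. a range input from 0 to 1
});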
This year I believe we have all experienced the web as a shopping mall more than ever. It’s shining store fronts, flashing adverts, fast food, loud noises.
Let’s use 2019 to create more forests to explore, oceans to dive and mountains to climb.

Inclusive Considerations When Restyling Form Controls
by Scott O’Hara

I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility.
I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year.
Returning to reality, styling form controls is more tricky and time consuming these days rather than flat out “hard”. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match.
However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve.
Quick level setting
Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following:
Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels.
It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics.
Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing.
Visually resetting and restyling form controls
Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors.
As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation.
See the Pen form control styling comparisons by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/gZOrZm/
Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated.
If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browsers respect without you even realizing.
You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting?
It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements:
Non-standard
This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future.
While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all.
Can’t make it? Fake it.
Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended.
Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling.
Getting clever with markup and CSS
The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox.
See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/RqEayN/
Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen?
As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others.
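As a rough sketch of the safer approach, the native input can be stacked over its visual stand-in rather than shrunk or moved off screen, so it remains discoverable by touch (the class names here are illustrative, not taken from the demo):

.custom-checkbox {
  position: relative;
  display: inline-block;
}
.custom-checkbox input[type="checkbox"] {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  margin: 0;
  opacity: 0.00001; /* effectively invisible, but still present and hit-testable */
}
.custom-checkbox .visual {
  /* custom unchecked styling goes here */
}
.custom-checkbox input:checked + .visual {
  /* checked styling via the sibling selector */
}
.custom-checkbox input:focus + .visual {
  outline: 2px solid currentColor; /* recreate the focus state we hid */
}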
Limitations to cleverness
Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to.
However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach.
Using ARIA when appropriate
Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA).
ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA:
If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible.
Exceptions to the rule
One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control.
Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized.
While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control.
See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen.
https://codepen.io/scottohara/pen/XyvoeE/
By adding role="switch" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, its inherent ability to be focused by the Tab key, and the Space key to toggle state.
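A minimal sketch of that repurposing (the ID and label text are illustrative):

<input type="checkbox" role="switch" id="autosave" checked>
<label for="autosave">Autosave</label>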
But while this is a valid approach to take in building a switch, how does this actually match up to reality?
Does it pass the test(s)?
Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better.
What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:
Is the control implemented in all supported browsers?
If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field?
If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline?
Test with multiple input devices.
Where the control is implemented, what is the quality of the user experience when using different input devices, such as mouse, touchscreen, keyboard, speech recognition or switch device, to name a few.
You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well.
How well does the control take to custom styles?
If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse effects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling.
Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled.
Do specifications match reality?
If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations?
For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order?
Wrapping up
While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented.
Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option.
2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs.
And hey. There’s always next year, right?

Fast Autocomplete Search for Your Website
by Simon Willison

Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.
I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.
Try it out here, then read on to see how I built it.
First step: crawling the data
The first step in building a search engine is to grab a copy of the data that you plan to make searchable.
There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.
I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites.
We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.
Then we’ll tell wget to recursively crawl the website, using the --recursive flag.
We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017
We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2
The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X "/*/*/comments".
Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.
Tie all of those options together and we get this command:
wget --recursive --wait 2 --no-clobber \
  -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 \
  -X "/*/*/comments" \
  https://24ways.org/archives/
If you leave this running for a few minutes, you’ll end up with a folder structure something like this:
$ find 24ways.org
24ways.org
24ways.org/2013
24ways.org/2013/why-bother-with-accessibility
24ways.org/2013/why-bother-with-accessibility/index.html
24ways.org/2013/levelling-up
24ways.org/2013/levelling-up/index.html
24ways.org/2013/project-hubs
24ways.org/2013/project-hubs/index.html
24ways.org/2013/credits-and-recognition
24ways.org/2013/credits-and-recognition/index.html
...
As a quick sanity check, let’s count the number of HTML pages we have retrieved:
$ find 24ways.org | grep index.html | wc -l
328
There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead:
wget --recursive --wait 2 --no-clobber \
  -I /2018 \
  -X "/*/*/comments" \
  https://24ways.org/
Thanks to --no-clobber, this is safe to run every day in December to pick up any new content.
We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index.
Building a search index using SQLite
There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.
I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.
SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!
SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.
This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.
It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.
I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday.
If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages:
$ python3 -m venv ./jupyter-venv
$ ./jupyter-venv/bin/pip install jupyter
# ... lots of installer output
# Now lets install some extra packages we will need later
$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib
# And start the notebook web application
$ ./jupyter-venv/bin/jupyter-notebook
# This will open your browser to Jupyter at http://localhost:8888/
You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.
A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account.
Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on.
from pathlib import Path
from bs4 import BeautifulSoup as Soup

base = Path("/Users/simonw/Dropbox/Development/24ways-search")
articles = list(base.glob("*/*/*/*.html"))
# articles is now a list of paths that look like this:
# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')
docs = []
for path in articles:
    year = str(path.relative_to(base)).split("/")[1]
    url = 'https://' + str(path.relative_to(base).parent) + '/'
    soup = Soup(path.open().read(), "html5lib")
    author = soup.select_one(".c-continue")["title"].split(
        "More information about"
    )[1].strip()
    author_slug = soup.select_one(".c-continue")["href"].split(
        "/authors/"
    )[1].split("/")[0]
    published = soup.select_one(".c-meta time")["datetime"]
    contents = soup.select_one(".e-content").text.strip()
    title = soup.find("title").text.split(" ◆")[0]
    try:
        topic = soup.select_one(
            '.c-meta a[href^="/topics/"]'
        )["href"].split("/topics/")[1].split("/")[0]
    except TypeError:
        topic = None
    docs.append({
        "title": title,
        "contents": contents,
        "year": year,
        "author": author,
        "author_slug": author_slug,
        "published": published,
        "url": url,
        "topic": topic,
    })
After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:
[
  {
    "title": "Why Bother with Accessibility?",
    "contents": "Web accessibility (known in other fields as inclus...",
    "year": "2013",
    "author": "Laura Kalbag",
    "author_slug": "laurakalbag",
    "published": "2013-12-10T00:00:00+00:00",
    "url": "https://24ways.org/2013/why-bother-with-accessibility/",
    "topic": "design"
  },
  {
    "title": "Levelling Up",
    "contents": "Hello, 24 ways. I’m Ashley and I sell property ins...",
    "year": "2013",
    "author": "Ashley Baxter",
    "author_slug": "ashleybaxter",
    "published": "2013-12-06T00:00:00+00:00",
    "url": "https://24ways.org/2013/levelling-up/",
    "topic": "business"
  },
  ...
My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries.
import sqlite_utils
db = sqlite_utils.Database("/tmp/24ways.db")
db["articles"].insert_all(docs)
That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.
(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)
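You can also peek at what was created without leaving the notebook. Recent versions of sqlite-utils expose introspection properties - a quick sketch:
print(db["articles"].count)   # the number of rows we just inserted
print(db["articles"].schema)  # the CREATE TABLE statement it generated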
You can inspect the table using the sqlite3 command-line utility (which ships with macOS) like this:
$ sqlite3 /tmp/24ways.db
sqlite> .headers on
sqlite> .mode column
sqlite> select title, author, year from articles;
title author year
------------------------------ ------------ ----------
Why Bother with Accessibility? Laura Kalbag 2013
Levelling Up Ashley Baxte 2013
Project Hubs: A Home Base for Brad Frost 2013
Credits and Recognition Geri Coady 2013
Managing a Mind Christopher 2013
Run Ragged Mark Boulton 2013
Get Started With GitHub Pages Anna Debenha 2013
Coding Towards Accessibility Charlie Perr 2013
...
There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:
db[""articles""].enable_fts([""title"", ""author"", ""contents""])
Introducing Datasette
Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet.
We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that?
If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article.
If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:
./jupyter-venv/bin/pip install datasette
This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.
Now you can run Datasette against the 24ways.db file we created earlier like so:
./jupyter-venv/bin/datasette /tmp/24ways.db
This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.
If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db
Publishing the database to the internet
One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish:
$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways
-----> Python app detected
-----> Installing requirements with pip
-----> Running post-compile hook
-----> Discovering process types
Procfile declares types -> web
-----> Compressing...
Done: 47.1M
-----> Launching...
Released v8
https://search-24ways.herokuapp.com/ deployed to Heroku
If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways.
You can run this command as many times as you like to deploy updated versions of the underlying database.
Searching and faceting
Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.
SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for acces* which returns articles on accessibility:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A
A neat feature of Datasette is the ability to calculate facets against your data. Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns:
http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic
Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:
http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A
Better search using custom SQL
The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.
This exposes the underlying SQL query, which looks like this:
select rowid, * from articles where rowid in (
select rowid from articles_fts where articles_fts match :search
) order by rowid limit 101
We can do better than this by constructing a custom SQL query. Here’s the query we will use instead:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || "*"
order by rank limit 10;
You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.
There’s a lot going on here! Let’s break the SQL down line-by-line:
select
snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. Its arguments are the FTS table, the column to take text from (-1 means any column), the strings to insert at the start and end of each match, the truncation marker and the maximum number of tokens to return. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on.
articles_fts.rank, articles.title, articles.url, articles.author, articles.year
These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.
from articles
join articles_fts on articles.rowid = articles_fts.rowid
articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.
where articles_fts match :search || "*"
order by rank limit 10;
:search || "*" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.
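If you want to experiment with this query outside of Datasette, here’s a sketch that runs it directly against the /tmp/24ways.db file from earlier, using Python’s sqlite3 module (the search term is just an example):
import sqlite3

conn = sqlite3.connect("/tmp/24ways.db")
query = """
select
  snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
  articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
  join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || '*'
order by rank limit 10
"""
# The named :search parameter is filled in from the dictionary argument
for snippet_text, rank, title, url, author, year in conn.execute(
    query, {"search": "access"}
):
    print(f"{rank:.2f} {title} ({year}) by {author}")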
How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects, which gives us a JSON API call that searches for articles matching SVG.
The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
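Here’s a sketch of that same API call made from Python with just the standard library, so you can see the moving parts before we get to the JavaScript. The database name in the URL contains a hash that changes with every deploy (so yours will differ), and I’ve trimmed the SQL down to keep the example compact:
import json
import urllib.parse
import urllib.request

# A trimmed-down version of the custom search SQL from above
sql = """select articles.title, articles_fts.rank
from articles join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || '*'
order by rank limit 5"""

params = urllib.parse.urlencode({"sql": sql, "search": "svg", "_shape": "array"})
url = "https://search-24ways.herokuapp.com/24ways-ae60295.json?" + params
with urllib.request.urlopen(url) as response:
    for row in json.load(response):
        print(row["rank"], row["title"])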
A simple JavaScript autocomplete search interface
I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.
We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:
function debounce(func, wait, immediate) {
    let timeout;
    return function() {
        let context = this, args = arguments;
        let later = () => {
            timeout = null;
            if (!immediate) func.apply(context, args);
        };
        let callNow = immediate && !timeout;
        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
        if (callNow) func.apply(context, args);
    };
}
We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.
Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these:
const htmlEscape = (s) => s.replace(
    /&/g, '&amp;'
).replace(
    />/g, '&gt;'
).replace(
    /</g, '&lt;'
).replace(
    /"/g, '&quot;'
).replace(
    /'/g, '&#039;'
);
We also need a little HTML for the interface itself - a search box and somewhere to put the results. The id attributes are the hooks the JavaScript below relies on:
<h1>Autocomplete search</h1>
<input id="searchbox" type="search" placeholder="Search 24ways">
<div id="results"></div>
And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:
// Embed the SQL query in a multi-line backtick string:
const sql = `select
  snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
  articles_fts.rank, articles.title, articles.url, articles.author, articles.year
from articles
  join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || "*"
order by rank limit 10`;
// Grab a reference to the <input id="searchbox"> element
const searchbox = document.getElementById("searchbox");
// Used to avoid race-conditions:
let requestInFlight = null;
searchbox.onkeyup = debounce(() => {
    const q = searchbox.value;
    // Construct the API URL, using encodeURIComponent() for the parameters
    const url = (
        "https://search-24ways.herokuapp.com/24ways-866073b.json?sql=" +
        encodeURIComponent(sql) +
        `&search=${encodeURIComponent(q)}&_shape=array`
    );
    // Unique object used just for race-condition comparison
    let currentRequest = {};
    requestInFlight = currentRequest;
    fetch(url).then(r => r.json()).then(d => {
        if (requestInFlight !== currentRequest) {
            // Avoid race conditions where a slow request returns
            // after a faster one.
            return;
        }
        // Render each result as a linked title, an author/year line
        // and the highlighted snippet
        let results = d.map(r => `
            <div class="result">
                <h3><a href="${r.url}">${htmlEscape(r.title)}</a></h3>
                <p><small>${htmlEscape(r.author)} - ${r.year}</small></p>
                <p>${highlight(r.snippet)}</p>
            </div>
        `).join("");
        document.getElementById("results").innerHTML = results;
    });
}, 100); // debounce every 100ms
There’s just one more utility function, used to help construct the HTML results:
const highlight = (s) => htmlEscape(s).replace(
    /b4de2a49c8/g, '<b>'
).replace(
    /8c94a2ed4b/g, '</b>'
);
This is what those unique strings passed to the snippet() function were for.
Avoiding race conditions in autocomplete
One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:
User types acces
Browser sends request A - querying documents matching acces*
User continues to type accessibility
Browser sends request B - querying documents matching accessibility*
Request B returns. It was fast, because there are fewer documents matching the full term
The results interface updates with the documents from request B, matching accessibility*
Request A returns results (this was the slower of the two requests)
The results interface updates with the documents from request A - results matching acces*
This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.
Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.
Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.
When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.
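The same trick translates beyond the browser. Here’s a purely illustrative Python asyncio sketch of the pattern - the function and delays are made up - in which the slower, staler request gets discarded:
import asyncio

request_in_flight = None

async def fake_search(q, delay):
    global request_in_flight
    current = object()  # unique sentinel object for this request
    request_in_flight = current
    await asyncio.sleep(delay)  # stands in for the fetch() round trip
    if request_in_flight is not current:
        print(f"discarding stale results for {q!r}")
        return
    print(f"rendering results for {q!r}")

async def main():
    # "acces" is sent first but takes longer to come back
    await asyncio.gather(fake_search("acces", 0.3), fake_search("accessibility", 0.1))

asyncio.run(main())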
It’s not a lot of code, really
And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like.
How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with.
A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.
More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.
For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.