rowid,title,contents,year,author,author_slug,published,url,topic 290,Creating a Weekly Research Cadence,"Working on a product team, it’s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development. As my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers—always busy and often difficult to reach. Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development. I wanted to find a way for our team to test ideas and validate assumptions sooner and more often—but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate. Setting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred—pushing us out of our comfort zone into a process of more rapid experimentation. Best of all, frequent conversations with users helped us become more customer-focused. After just a few weeks of a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like “Remember what Jenna said last week, about not being able to customize her lists?” would pop up as frequent reference points to guide our decisions. As discussions became less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions. Establishing an efficient recruitment process The key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process—along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users. The process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. Almost any team could use this as a starting point and adapt it to their own needs. Pick a dedicated time each week for research In order to make research a priority, we started by choosing a time each week when everyone on the product team was available. 
Between stand-ups, grooming sessions, and roadmap reviews, it wasn’t easy to do! Nevertheless, it’s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn’t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team. I blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours. Divide your time into several research slots After my weekly cadence was set, I divided the time into four 20-minute time slots. Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users’ busy schedules. Depending on your work, you may need to schedule longer sessions—but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own. I used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall. Invite users to sign up for a time that’s convenient for them With a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product, or occasionally promoting the opportunity to get a sneak peek at new features we’re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule. Setting up appointments the first time around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block. It may take a bit of experimentation to fine-tune your process, but it’s worth the effort to get it right. (The worst thing that’s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) I can now fill most of an afternoon with back-to-back user research sessions by sending just one or two emails each week, increasing our research pace while leaving plenty of time to focus on discovery and design. Getting the most out of your research sessions As you get comfortable with the rhythm of talking to users each week, you’ll find more and more ways to get value out of your conversations. 
At first, you may prefer to just show work in progress—such as mockups or a simple prototype—and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features—either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that’s the best way to validate the assumptions your team is working from. Whatever you do, plan some time a day or two later to come back together and review what you’ve learned each week. Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation. Over time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don’t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you’re ready.",2016,Wren Lanier,wrenlanier,2016-12-02T00:00:00+00:00,https://24ways.org/2016/creating-a-weekly-research-cadence/,ux 205,Why Design Systems Fail,"Design systems are so hot right now, and for good reason. They promote a modular approach to building a product, and ensure organizational unity and stability via reusable code snippets and utility styles. They make prototyping a breeze, and provide a common language for both designers and developers. A design system is a culmination of several individual components, which can include any or all of the following (and more): Style guide or visual pattern library Design tooling (e.g. Sketch Library) Component library (where the components live in code) Code usage guidelines and documentation Design usage documentation Voice and tone guideline Animation language guideline Design systems are standalone (internal or external) products, and have proven to be a very effective means of design-driven development. However, in order for a design system to succeed, everyone needs to get on board. I’d like to go over a few considerations to ensure design system success and what could hinder that success. Organizational Support Put simply, any product, including internal products, needs support. Something as cross-functional as a design system, which spans every vertical project team, needs support from the top and bottom levels of your organization. By that I mean there needs to be top-level support from project managers up through VPs to see the value of a design system, to provide resources for its implementation, and to advocate for its use company-wide. This is especially important in companies where such systems are being put in place on top of existing, crufty codebases, because it may mean there needs to be some time and effort put in the calendar for refactoring work. Support from the bottom-up means that designers and engineers of all levels also need to support this system and feel responsibility for it. A design system is an organization’s product, and everyone should feel confident contributing to it. If your design system supports external clients as well (such as contractors), they too can become valuable teammates. A design system needs support and love to be nurtured and to grow. It also needs investment. 
Investment To have a successful design system, you need to make a continuous effort to invest resources into it. I like to compare this to working out. You can work out intensely for 3 months and see some gains, but once you stop working out, those will slowly fade away. If you continue to work out, even if it’s less often than the initial investment, you’ll see yourself maintaining your fitness level at a much higher rate than if you stopped completely. If you invest once in a design system (say, 3 months of overhauling it) but neglect to keep it up, you’ll face the same situation. You’ll see immediate impact, but that impact will fade as it gets out of sync with new designs and you’ll end up with strange, floating bits of code that nobody is using. Your engineers will stop using it as the patterns become outdated, and then you’ll find yourself in for another round of large investment (while dreading going through the process since it’s fallen so far out of shape). With design systems, small incremental investments over time lead to big gains overall. With this point, I also want to note that because of how they scale, design systems can really make a large impact across the platform, making it extremely important to really invest in things like accessibility and solid architecture from the start. You don’t want to scale a brittle system that’s not easy to use. Take care of your design systems, and keep working on them to ensure their effectiveness. One way to ensure this is to have a dedicated team working on this design system, managing tickets and styling updates that trickle out to the rest of your company. Responsibility With some kind of team to act as an owner of a design system, whether it be the design team, engineering team, or a new team made of both designers and engineers (the best option), your company is more likely to keep a relevant, up-to-date system that doesn’t break. This team is responsible for a few things: Helping others get set up on the system (support) Designing and building components (development) Advocating for overall UI consistency and adherence (evangelism) Creating a rollout plan and update system (product management) As you can see, these are a lot of roles, so it helps to have multiple people on this team, at least part of the time, if you can. One thing I’ve found to be effective in the past is to hold office hours that coworkers can book slots in, to help them get set up and to answer any questions about using the system. Having an open Slack channel also helps for this sort of thing, as well as for bringing up bugs/issues/ideas and being a channel for announcements like new releases. Communication Once you have resources and a plan to invest in a design system, it’s really important that this person or team acts as a bridge between design and engineering. Continuous communication is really important here, and the way you communicate is even more important. Remember that nobody wants to be told what to do or prescribed a solution, especially developers, who are used to a lot of autonomy (usually they get to choose their own tools at work). Despite how much control the other engineers have over the process, they need to feel like they have input, and feel heard. This can be challenging, especially since ultimately, some party needs to be making a final decision on direction and execution. Because it’s a hard balance to strike, having open communication channels and being as transparent as possible as early as possible is a good start. 
Buy-in For all of the reasons we’ve just looked over, good communication is really important for getting buy-in from your users (the engineers and designers), as well as from product management. Building and maintaining a design system is surprisingly a lot of people-ops work. To get buy-in where you don’t have a previous consensus that this is the right direction to take, you need to make people want to use your design system. A really good way to get someone to want to use a product is to make it the path of least resistance, to show its value. Gather examples and usage wins, because showing is much more powerful than telling. If you can, have developers use your product in a low-stakes situation where it provides clear benefits. Hackathons are a great place to debut your design system. Having a hackathon internally at DigitalOcean was a perfect opportunity to: Evangelize for the design system See what people were using the component library for and what they were struggling with (excellent user testing there) Get user feedback afterward on how to improve it in future iterations Let people experience the benefits of using it themselves These kinds of moments, where people explore on their own, are where you can really get people on your side and using the design system, because they can get their hands on it and draw their own conclusions (and if they don’t love it — listen to them on how to improve it so that they do). We don’t always get so lucky as to have this sort of instantaneous user feedback from our direct users. Architecture I briefly mentioned the scalable nature of design systems. This is exactly why it’s important to develop a solid architecture early on in the process. Build your design system with growth and scalability in mind. What happens if your company acquires a new product? What happens when it develops a new market segment? How can you make sure there’s room for customization and growth? A few things we’ve found helpful include: Namespacing Use namespacing to ensure that the system doesn’t collide with existing styles if applying it to an existing codebase. This means prefixing every element in the system to indicate that this class is a part of the design system. To ensure that you don’t break parts of the existing build (which may have styled base elements), you can namespace the entire system inside of a parent class. Sass makes this easy with its nested structure. This kind of namespacing wouldn’t be necessary per se on new projects, but it is definitely useful when integrating new and old styles. Semantic Versioning I’ve used Semantic Versioning on all of the design systems I’ve ever worked on. Semantic versioning uses a system of Major.Minor.Patch for any updates. You can then tag releases on GitHub with versioned updates and ensure that someone’s app won’t break unintentionally when there is an update, if they are anchored to a specific version (which they should be). We also use this semantic versioning as a link with our design system assets at DigitalOcean (i.e. Sketch library) to keep them in sync, with the same version number corresponding to both Sketch and code. Our design system is served as a node module, but is also provided as a series of built assets using our CDN for quick prototyping and one-off projects. For these built assets, we run a deploy script that automatically creates folders for each release, as well as a latest folder if someone wanted the always-up-to-date version of the design system. 
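As a rough sketch of what such a deploy script might look like (an illustration only, in Node.js; the dist source folder, the releases output folder, and reading the version from package.json are assumptions for the example, not DigitalOcean’s actual setup - fs.cpSync also needs a recent Node version):

const fs = require('fs');
const path = require('path');

// Assumed: the release version lives in package.json
const { version } = require('./package.json');

const source = path.join(__dirname, 'dist'); // assumed build output folder
const releaseDir = path.join(__dirname, 'releases', 'v' + version);
const latestDir = path.join(__dirname, 'releases', 'latest');

// Copy the built assets into a folder named for this release
fs.mkdirSync(releaseDir, { recursive: true });
fs.cpSync(source, releaseDir, { recursive: true });

// Refresh the latest folder so it always mirrors the newest release
fs.rmSync(latestDir, { recursive: true, force: true });
fs.cpSync(source, latestDir, { recursive: true });

console.log('Deployed v' + version + ' and updated latest/');

The key design point is that old release folders are never touched, so anyone pinned to a version keeps working, while latest is rebuilt on every deploy.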
So, semantic versioning for the system I’m currently building is what links our design system node module assets, Sketch library assets, and statically built file assets. The reason we have so many ways of consuming our design system is to make adoption easier and to reduce friction. Friction A while ago, I posed the question of why design systems become outdated and unused, and a major conclusion I drew from the conversation was: “If it’s harder for people to use than their current system, people just won’t use it”. You have to make your design system the path of least resistance, lowering the cognitive overhead of development, not adding to it. This is vital. A design system is intended to make development much more efficient, enforce a consistent style across sites, and allow for the developer to not worry as much about small decisions like naming and HTML semantics. These are already sorted out for them, meaning they can focus on building product. But if your design system is complicated and over-engineered, they may find it frustrating to use and go back to what they know, even if it’s not the best solution. If you’re a Sass expert, and base your system on complex mixins and functions, you’d better hope your user (the developer) is also a Sass expert, or wants to learn. This is often not the case, however. You need to talk to your audience. With the DigitalOcean design system, we provide a few options: Option 1 Users can implement the component library into a development environment and use Sass, select just the components they want to include, and extend the system using a hook-based system. This is the most performant and extensible output. Only the components that are called upon are included, and they can be easily extended using mixins. But as noted earlier, not everyone wants to work this way (including Sass as a dependency and potentially needing to set up a build system for it and learn a new syntax). There is also the user who just wants to throw a link onto their page and have it look nice, and that’s where our versioned built assets come in. Option 2 With Option 2, users pull in links that are served via a CDN that contain JS, CSS, and our SVG icon library. The code is a bit bigger than the completely customized version, but often this isn’t the aim when people are using Option 2. Reducing friction for adoption should be a major goal of your design system rollout. Conclusion Having a design system is really beneficial to any product, especially as it grows. In order to have an effective system, it’s important to always keep your user in mind and garner support from your entire company. Once you have support and acceptance, this system will flourish and grow. Make sure someone is responsible for it, and make sure it’s built on a solid foundation from the start, one that will be carefully maintained into the future. Good luck, and happy holidays!",2017,Una Kravets,unakravets,2017-12-14T00:00:00+00:00,https://24ways.org/2017/why-design-systems-fail/,process 84,Responsive Responsive Design,"Now more than ever, we’re designing work meant to be viewed along a gradient of different experiences. Responsive web design offers us a way forward, finally allowing us to “design for the ebb and flow of things.” With those two sentences, Ethan closed the article that introduced the web to responsive design. Since then, responsive design has taken the web by storm. Seemingly every day, some company is touting their new responsive redesign. 
Large brands such as Microsoft, Time and Disney are getting in on the action, blowing away the once common criticism that responsive design was a technique only fit for small blogs. Certainly, this is a good thing. As Ethan, and John Allsopp before him, were right to point out, the inherent flexibility of the web is a feature, not a bug. The web’s unique ability to be consumed and interacted with on any number of devices, with any number of input methods, is something to be embraced. But there’s one part of the web’s inherent flexibility that seems to be increasingly overlooked: the ability for the web to be interacted with on any number of networks, with a gradient of bandwidth constraints and latency costs, on devices with varying degrees of hardware power. A few months back, Stephanie Rieger tweeted “Shoot me now…responsive design has seemingly become confused with an opportunity to reduce performance rather than improve it.” I would love to disagree, but unfortunately the evidence is damning. Consider the size and number of requests for four highly touted responsive sites that were launched this year: 74 requests, 1,511kb 114 requests, 1,200kb 99 requests, 1,298kb 105 requests, 5,942kb And those numbers were for the small screen versions of each site! These sites were praised for their visual design and responsive nature, and rightfully so. They’re very easy on the eyes and a lot of thought went into their appearance. But the numbers above tell an inconvenient truth: for all the time spent ensuring the visual design was airtight, seemingly very little (if any) attention was given to their performance. It would be one thing if these were the exceptions, but unfortunately they’re not. Guy Podjarny, who has done a lot of research around responsive performance, discovered that 86% of the responsive sites he tested were either the same size or larger on the small screen as they were on the desktop. The reality is that high performance should be a requirement on any web project, not an afterthought. Poor performance has been tied to a decrease in revenue, traffic, conversions, and overall user satisfaction. Case study after case study shows that improving performance, even marginally, will impact the bottom line. The situation is no different on mobile, where 71% of people say they expect sites to load as quickly or faster on their phone when compared to the desktop. The bottom line: performance is a fundamental component of the user experience. So, given its extreme importance in the success of any web project, why is it that we’re seeing so many bloated responsive sites? First, I adamantly disagree with the belief that poor performance is inherent to responsive design. That’s not a rule – it’s a cop-out. It’s an example of blaming the technique when we should be blaming the implementation. This argument also falls flat because it ignores the fact that the trend of fat sites is increasing on the web in general. While some responsive sites are the worst offenders, it’s hardly an issue confined to one technique. To fix the issue, we need to stop making excuses and start making improvements instead. Here, then, are some things we can do to start improving the state of responsive performance, and performance in general, right now. Create a culture of performance If you understand just how important performance is to the success of a project, the natural next step is to start creating a culture where high performance is a key consideration. One of the things you can do is set a baseline. 
Determine the maximum size and number of requests you are going to allow, and don’t let a page go live if either of those numbers is exceeded. The BBC does this with its responsive mobile site. A variation of that, which Steve Souders discussed in a recent podcast, is to create a performance budget based on those numbers. Once you have that baseline set, if someone comes along and wants to add something to the page, they have to make sure the page remains under budget. If it exceeds the budget, you have three options: Optimize an existing feature or asset on the page Remove an existing feature or asset from the page Don’t add the new feature or asset The idea here is that you make performance part of the process instead of something that may or may not get tacked on at the end. Embrace the pain This troubling trend of web bloat can be blamed in part on the lack of pain associated with poor performance. Most of us work on high-speed connections with low latency. When we fire up a 4Mb site, it doesn’t feel so bad. When I tested the previously mentioned 5,942kb site on a 3G network, it took over 93 seconds to load. A minute and a half just staring at a white screen. Had anyone working on that project experienced that, you can bet the site wouldn’t have launched in that state. Don’t just crunch numbers. Fire up your site on a slower network and see what it feels like to wait. If you don’t have access to a slow network, simulate one using a tool like Slowy, Throttle or the Network Conditioner found in Mac OS X 10.7. Watch for low-hanging fruit There are a bunch of general performance improvements that apply to any site (responsive or not) but often aren’t made. A great starting point is to refer to Yahoo!’s list of rules. Some of this might sound complicated or intimidating, but it doesn’t have to be. You can grab an .htaccess file from HTML 5 Boilerplate or use Sergey Chernyshev’s drop-in .htaccess file. You can use tools like SpriteMe to simplify the creation of sprites, and ImageOptim to compress images. Just by implementing these simple optimizations you will achieve a noticeable improvement in terms of weight and page load time. Be careful with images The most common offender for poor responsive performance is downloading unnecessarily large images, or worse yet, multiple sizes of the same image. For background images, simply being careful with where and how you include the image can ensure you don’t get caught in the trap of multiple background images being downloaded without being used. Don’t count on display:none to help. While it may hide elements from displaying on screen, those images will still be requested and downloaded. Content images can be a little trickier. Whatever you do, don’t serve a large image that works on a large screen display to small screens. It’s wasteful, not only in terms of adding weight to the page, but also in wasting precious memory. Instead, use a tool like Adaptive Images or src.sencha.io to make sure only appropriately sized images are being downloaded. The new picture element that has been so often discussed is another excellent solution if you’re feeling particularly future-oriented. A picture polyfill exists so that you can start using the element now without any worries about support. Conditional loading Don’t load any more than you absolutely need to. If a script isn’t needed at certain sizes, use the matchMedia polyfill to ensure it only loads when needed. Use eCSSential to do the same for unnecessary CSS files. 
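For example, a minimal sketch of conditionally loading a script (the breakpoint and the carousel.js file name here are hypothetical):

// Only request the heavy enhancement script on wider screens
if (window.matchMedia('(min-width: 40em)').matches) {
  var script = document.createElement('script');
  script.src = '/js/carousel.js'; // hypothetical large-screen-only feature
  script.async = true;
  document.getElementsByTagName('head')[0].appendChild(script);
}

Small screens never pay the download cost for functionality they will never use.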
Last year on 24 ways, Jeremy Keith wrote an article about conditional loading of content in a responsive design based on the screen width. The technique was later refined by the Filament Group into what they dubbed the Ajax-Include Pattern. It’s a powerful and simple way to lighten the load on small screens as well as reduce clutter. Go vanilla? If you take a look at the HTTP Archive you’ll see that other than image size, JavaScript is the heaviest asset on a page, weighing in at 215kb on average. It also boasts the fifth highest correlation to load time as well as the second highest correlation to render time. Much of the weight can be attributed to our industry’s increasing reliance on frameworks. This is especially a concern on mobile devices. PPK recently exclaimed that current JavaScript libraries are just “too heavy for mobile”. Research from Stoyan Stefanov on parse times supports this. On some Android and iOS devices, it can take as long as 200-300ms just to parse jQuery. There’s nothing wrong with using a framework, but the problem is that they’ve become the default. Before dropping another framework or plugin into a page, we should stop to consider the value it adds and whether we could accomplish what we need to do using a combination of vanilla JavaScript and CSS instead. (This is a great example of a scenario where a performance budget could help.) Start thinking beyond visual aesthetics We love to tout the web’s universality when discussing the need for responsive design. But that universality is not limited simply to screen size. Networks and hardware capabilities must factor in as well. The web is an incredibly dynamic and interactive medium, and designing for it demands that we consider more than just visual aesthetics. Let’s not forget to give those other qualities the attention they deserve.",2012,Tim Kadlec,timkadlec,2012-12-05T00:00:00+00:00,https://24ways.org/2012/responsive-responsive-design/,design 201,Lint the Web Forward With Sonarwhal,"Years ago, when I was a senior in college, much of my web development coursework focused on two things: the basics like HTML and CSS (and boy, do I mean basic), and Adobe Flash. I spent many nights writing ActionScript 3.0 to build interactions for the websites that I would add to my portfolio. A few months after graduating, I built one website in Flash for a client, then never again. Flash was dying, and it became obsolete in my résumé and portfolio. That was my first lesson in the speed at which things change in technology, and what a daunting realization that was as a new graduate looking to enter the professional world. Now, seven years later, I work on the Microsoft Edge team where I help design and build a tool that would have lessened my early career anxieties: sonarwhal. Sonarwhal is a linting tool, built by and for the web community. The code is open source and lives under the JS Foundation. It helps web developers and designers like me keep up with the constant change in technology while simultaneously teaching how to code better websites. Introducing sonarwhal’s mascot Nellie Good web development is hard. It is more than HTML, CSS, and JavaScript: developers are expected to have a grasp of accessibility, performance, security, emerging standards, and more, all while refreshing this knowledge every few months as the web evolves. It’s a lot to keep track of. Staying up-to-date on all this knowledge is one of the driving forces for developing this scanning tool. 
Whether you are just starting out, are a student, or you have over a decade of experience, the sonarwhal team wants to help you build better websites for all browsers. Currently sonarwhal checks for best practices in five categories: Accessibility, Interoperability, Performance, PWAs, and Security. Each check is called a “rule”. You can configure them and even create your own rules if you need to follow some specific guidelines for your project (e.g. validate analytics attributes, title format of pages, etc.). You can use sonarwhal in two ways: An online version, which provides a quick and easy way to scan any public website. A command line tool, if you want more control over the configuration, or want to integrate it into your development flow. The Online Scanner The online version offers a streamlined way to scan a website; just enter a URL and you will get a web page of scan results with a permalink that you can share and revisit at any time. The online version of sonarwhal When my team works on a new rule, we spend the bulk of our time carefully researching each subject, finding sources, and documenting it rather than writing the rule’s code. Not only is it important that we get you the right results, but we also want you to understand why something is failing. Next to each failing rule you’ll find a link to its detailed documentation, explaining why you should care about it, what exactly we are testing, examples that pass and examples that don’t, and useful links to even more in-depth documentation if you are interested in the subject. We hope that between reading the documentation and continued use of sonarwhal, developers can stay on top of best practices. As devs continue to build sites and identify recurring issues that appear in their results, they will hopefully start to automatically include those missing elements or fix those pieces of code that are producing errors. This also isn’t a one-way communication: the documentation is not only available on the sonarwhal site, but also on GitHub for editing so you can help us make it even better! A results report The current configuration for the online scanner is very strict, so it might hurt your feelings (it did when I first tested it on my personal website). But you can configure sonarwhal to any level of strictness as well as customize the command line tool to your needs! Sonarwhal’s CLI The CLI gives you full control of sonarwhal: what rules to use, tweaks to them, domains that are out of your control, and so on. You will need the latest Node LTS (v8) or Stable (v9) and your favorite package manager, such as npm: npm install -g sonarwhal You can now run sonarwhal from anywhere via: sonarwhal https://example.com Using the CLI The configuration is done via a .sonarwhalrc file. When analyzing a site, if no file is available, you will be prompted to answer a series of questions: What connector do you want to use? Connectors are what sonarwhal uses to access a website and gather all the information about the requests, resources, HTML, etc. Currently it supports jsdom, Microsoft Edge, and Google Chrome. What formatter? This is how you want to see the results: summary, stylish, etc. Make sure to look at the full list. Some are concise, perfect for a quick build assessment, while others are more verbose and informative. Do you want to use the recommended rules configuration? Rules are the things we are validating. 
First-timers, and anyone who hasn’t read the documentation, should probably use the recommended configuration. What browsers are you targeting? One of the best features of sonarwhal is that rules can adapt their feedback depending on your targeted browsers, suggesting things to add or remove. For example, the rule “Highest Document Mode” will tell you to add the “X-UA-Compatible” header if IE10 or lower is targeted, or to remove it if the opposite is true. sonarwhal configuration generator questions Once you answer all these questions the scan will start and you will have a .sonarwhalrc file similar to the following: { ""connector"": { ""name"": ""jsdom"", ""options"": { ""waitFor"": 1000 } }, ""formatters"": ""stylish"", ""rulesTimeout"": 120000, ""rules"": { ""apple-touch-icons"": ""error"", ""axe"": ""error"", ""content-type"": ""error"", ""disown-opener"": ""error"", ""highest-available-document-mode"": ""error"", ""validate-set-cookie-header"": ""warning"", // ... } } You should see the scan initiate in the command line and within a few seconds the results should start to appear. Remember, the scan results will look different depending on which formatter you selected, so try each one out to see which one you like best. sonarwhal results on my website and hurting my feelings 💔 Now that you have a list of errors, you can get to work improving the site! Note though, that when you scan your website, it scans all the resources on that page, and if you’ve added something like analytics or fonts hosted elsewhere, you are unable to change those files. You can configure the CLI to ignore files from certain domains so that you are only getting results for files you are in control of. The documentation should give enough guidance on how to fix the errors, but if it’s insufficient, please help us and suggest edits or contribute back to it. This is a community effort and chances are someone else will have the same question as you. When I scanned both my websites, sonarwhal alerted me to not having an Apple Touch Icon. If I search on the web as opposed to using the sonarwhal documentation, the top three results give me outdated information, telling me that I need to include many different icon sizes. In fact, I don’t need to include all the different size icons that target different devices: declaring one icon sized 180px x 180px will provide a large enough icon for devices and it will scale down as appropriate for people on older devices. The information at the top of the search results isn’t always the correct answer to an issue and we don’t want you to have to search through outdated documentation. As sonarwhal’s capabilities expand, the goal is for it to be the one-stop shop for helping preflight your website. The journey up until now and looking forward On the Microsoft Edge team, we’re passionate about empowering developers to build great websites. Every day we see so many sites come through our issue tracker. (Thanks for filing those bugs, they help us make Microsoft Edge better and better!) Some issues we see over and over are honest mistakes or outdated ‘best practices’ that could be avoided, so we built this tool to help everyone make the web a better place. When we decided to create sonarwhal, we wanted to create a tool that would help developers write better and more up-to-date code for their websites. We want sonarwhal to be useful to anyone so, early on, we defined three guiding principles we’ve used along the way: Community Driven. We build for the community’s best interests. 
The web belongs to everyone and this project should too. Not only is it open source, we’ve also donated it to the JS Foundation and have an inclusive governance model that welcomes the collaboration of anyone, individual or company. User Centric. We want to put the user at the center, making sonarwhal configurable for your needs and easy to use no matter what your skill level is. Collaborative. We didn’t want to reinvent the wheel, so we collaborated with existing tools and services that help developers build for the web. Some examples are aXe, snyk.io, Cloudinary, etc. This is just the beginning and we still have lots to do. We’re hard at work on a backlog of exciting features for future releases, such as: New rules for a variety of areas like performance, accessibility, security, progressive web apps, and more. A plug-in for Visual Studio Code: we want sonarwhal to help you write better websites, and what better moment than when you are in your editor? Configuration options for the online service: as we fine-tune the infrastructure, the rule configuration for our scanner is locked, but we look forward to adding CLI customization options here in the near future. This is a tool for the web community by the web community so if you are excited about sonarwhal, making a better web, and want to contribute, we have a few issues where you might be able to help. Also, don’t forget to check the rest of the sonarwhal GitHub organization. PRs are always welcome and appreciated! Let us know what you think about the scanner at @NarwhalNellie on Twitter and we hope you’ll help us lint the web forward!",2017,Stephanie Drescher,stephaniedrescher,2017-12-02T00:00:00+00:00,https://24ways.org/2017/lint-the-web-forward-with-sonarwhal/,code 249,Fast Autocomplete Search for Your Website,"Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive. I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface. Try it out here, then read on to see how I built it. First step: crawling the data The first step in building a search engine is to grab a copy of the data that you plan to make searchable. There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need. I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites. We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running. Then we’ll tell wget to recursively crawl the website, using the --recursive flag. We don’t want to fetch every single page on the site - we’re only interested in the actual articles. 
Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2 The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X ""/*/*/comments"". Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this. Tie all of those options together and we get this command: wget --recursive --wait 2 --no-clobber -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 -X ""/*/*/comments"" https://24ways.org/archives/ If you leave this running for a few minutes, you’ll end up with a folder structure something like this: $ find 24ways.org 24ways.org 24ways.org/2013 24ways.org/2013/why-bother-with-accessibility 24ways.org/2013/why-bother-with-accessibility/index.html 24ways.org/2013/levelling-up 24ways.org/2013/levelling-up/index.html 24ways.org/2013/project-hubs 24ways.org/2013/project-hubs/index.html 24ways.org/2013/credits-and-recognition 24ways.org/2013/credits-and-recognition/index.html ... As a quick sanity check, let’s count the number of HTML pages we have retrieved: $ find 24ways.org | grep index.html | wc -l 328 There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead: wget --recursive --wait 2 --no-clobber -I /2018 -X ""/*/*/comments"" https://24ways.org/ Thanks to --no-clobber, this is safe to run every day in December to pick up any new content. We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index. Building a search index using SQLite There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL. I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite. SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch! SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications. This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year. It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension. 
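To get a feel for what FTS5 provides before building the real index, here is a tiny self-contained sketch. It uses the better-sqlite3 npm module purely for illustration (an assumption for this sketch; the article itself builds its index with Python and sqlite-utils below) and requires an SQLite build with FTS5 enabled:

const Database = require('better-sqlite3'); // assumed dependency for this sketch

const db = new Database(':memory:');

// An FTS5 table is a virtual table dedicated to full-text search
db.exec('CREATE VIRTUAL TABLE docs USING fts5(title, contents)');

db.prepare('INSERT INTO docs (title, contents) VALUES (?, ?)')
  .run('Why Bother with Accessibility?', 'Web accessibility benefits everyone...');

// MATCH supports prefix wildcards; ordering by rank puts the best matches first
const rows = db
  .prepare('SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank')
  .all('access*');

console.log(rows); // [ { title: 'Why Bother with Accessibility?' } ]

This virtual-table-plus-MATCH pattern is exactly what we will lean on for the rest of the article.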
I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday. If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages: $ python3 -m venv ./jupyter-venv $ ./jupyter-venv/bin/pip install jupyter # ... lots of installer output # Now let's install some extra packages we will need later $ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib # And start the notebook web application $ ./jupyter-venv/bin/jupyter-notebook # This will open your browser to Jupyter at http://localhost:8888/ You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook. A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account. Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on. from pathlib import Path from bs4 import BeautifulSoup as Soup base = Path(""/Users/simonw/Dropbox/Development/24ways-search"") articles = list(base.glob(""*/*/*/*.html"")) # articles is now a list of paths that look like this: # PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html') docs = [] for path in articles: year = str(path.relative_to(base)).split(""/"")[1] url = 'https://' + str(path.relative_to(base).parent) + '/' soup = Soup(path.open().read(), ""html5lib"") author = soup.select_one("".c-continue"")[""title""].split( ""More information about"" )[1].strip() author_slug = soup.select_one("".c-continue"")[""href""].split( ""/authors/"" )[1].split(""/"")[0] published = soup.select_one("".c-meta time"")[""datetime""] contents = soup.select_one("".e-content"").text.strip() title = soup.find(""title"").text.split("" ◆"")[0] try: topic = soup.select_one( '.c-meta a[href^=""/topics/""]' )[""href""].split(""/topics/"")[1].split(""/"")[0] except TypeError: topic = None docs.append({ ""title"": title, ""contents"": contents, ""year"": year, ""author"": author, ""author_slug"": author_slug, ""published"": published, ""url"": url, ""topic"": topic, }) After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this: [ { ""title"": ""Why Bother with Accessibility?"", ""contents"": ""Web accessibility (known in other fields as inclus..."", ""year"": ""2013"", ""author"": ""Laura Kalbag"", ""author_slug"": ""laurakalbag"", ""published"": ""2013-12-10T00:00:00+00:00"", ""url"": ""https://24ways.org/2013/why-bother-with-accessibility/"", ""topic"": ""design"" }, { ""title"": ""Levelling Up"", ""contents"": ""Hello, 24 ways. 
I’m Ashley and I sell property ins..."", ""year"": ""2013"", ""author"": ""Ashley Baxter"", ""author_slug"": ""ashleybaxter"", ""published"": ""2013-12-06T00:00:00+00:00"", ""url"": ""https://24ways.org/2013/levelling-up/"", ""topic"": ""business"" }, ... My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries. import sqlite_utils db = sqlite_utils.Database(""/tmp/24ways.db"") db[""articles""].insert_all(docs) That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table. (I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.) You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this: $ sqlite3 /tmp/24ways.db sqlite> .headers on sqlite> .mode column sqlite> select title, author, year from articles; title author year ------------------------------ ------------ ---------- Why Bother with Accessibility? Laura Kalbag 2013 Levelling Up Ashley Baxte 2013 Project Hubs: A Home Base for Brad Frost 2013 Credits and Recognition Geri Coady 2013 Managing a Mind Christopher 2013 Run Ragged Mark Boulton 2013 Get Started With GitHub Pages Anna Debenha 2013 Coding Towards Accessibility Charlie Perr 2013 ... There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this: db[""articles""].enable_fts([""title"", ""author"", ""contents""]) Introducing Datasette Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet. We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that? If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article. If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter: ./jupyter-venv/bin/pip install datasette This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette. Now you can run Datasette against the 24ways.db file we created earlier like so: ./jupyter-venv/bin/datasette /tmp/24ways.db This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application. If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db Publishing the database to the internet One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. 
Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish: $ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways -----> Python app detected -----> Installing requirements with pip -----> Running post-compile hook -----> Discovering process types Procfile declares types -> web -----> Compressing... Done: 47.1M -----> Launching... Released v8 https://search-24ways.herokuapp.com/ deployed to Heroku If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways. You can run this command as many times as you like to deploy updated versions of the underlying database. Searching and faceting Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action. SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for access* which returns articles on accessibility: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A A neat feature of Datasette is the ability to calculate facets against your data. Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL: http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A Better search using custom SQL The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page. This exposes the underlying SQL query, which looks like this: select rowid, * from articles where rowid in ( select rowid from articles_fts where articles_fts match :search ) order by rowid limit 101 We can do better than this by constructing a custom SQL query. Here’s the query we will use instead: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || ""*"" order by rank limit 10; You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute. There’s a lot going on here! Let’s break the SQL down line-by-line: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on. 
articles_fts.rank, articles.title, articles.url, articles.author, articles.year These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table. from articles join articles_fts on articles.rowid = articles_fts.rowid articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it. where articles_fts match :search || ""*"" order by rank limit 10; :search || ""*"" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results. How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects: JSON API call to search for articles matching SVG The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature. A simple JavaScript autocomplete search interface I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own. We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh: function debounce(func, wait, immediate) { let timeout; return function() { let context = this, args = arguments; let later = () => { timeout = null; if (!immediate) func.apply(context, args); }; let callNow = immediate && !timeout; clearTimeout(timeout); timeout = setTimeout(later, wait); if (callNow) func.apply(context, args); }; }; We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing. Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these: const htmlEscape = (s) => s.replace( /&/g, '&amp;' ).replace( />/g, '&gt;' ).replace( /</g, '&lt;' ).replace( /""/g, '&quot;' ).replace( /'/g, '&#039;' ); We also need a little bit of markup for the interface itself - a heading, a search box and an empty container to render the results into: <h1>Autocomplete search</h1> <input id=""searchbox"" type=""search""> <div id=""results""></div>
And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:

// Embed the SQL query in a multi-line backtick string:
const sql = `select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,
  articles_fts.rank, articles.title, articles.url, articles.author, articles.year
  from articles
  join articles_fts on articles.rowid = articles_fts.rowid
  where articles_fts match :search || ""*""
  order by rank limit 10`;

// Grab a reference to the search box <input> element
const searchbox = document.getElementById(""searchbox"");

// Used to avoid race-conditions:
let requestInFlight = null;

searchbox.onkeyup = debounce(() => {
  const q = searchbox.value;
  // Construct the API URL, using encodeURIComponent() for the parameters
  const url = (
    ""https://search-24ways.herokuapp.com/24ways-866073b.json?sql="" +
    encodeURIComponent(sql) +
    `&search=${encodeURIComponent(q)}&_shape=array`
  );
  // Unique object used just for race-condition comparison
  let currentRequest = {};
  requestInFlight = currentRequest;
  fetch(url).then(r => r.json()).then(d => {
    if (requestInFlight !== currentRequest) {
      // Avoid race conditions where a slow request returns
      // after a faster one.
      return;
    }
    let results = d.map(r => `
      <div class=""result"">
        <h3><a href=""${r.url}"">${htmlEscape(r.title)}</a></h3>
        <p><small>${htmlEscape(r.author)} - ${r.year}</small></p>
        <p>${highlight(r.snippet)}</p>
      </div>
    `).join("""");
    document.getElementById(""results"").innerHTML = results;
  });
}, 100); // debounce every 100ms

There’s just one more utility function, used to help construct the HTML results:

const highlight = (s) => htmlEscape(s).replace(
  /b4de2a49c8/g, '<b>'
).replace(
  /8c94a2ed4b/g, '</b>'
);

This is what those unique strings passed to the snippet() function were for. Avoiding race conditions in autocomplete One trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:

User types acces
Browser sends request A - querying documents matching acces*
User continues to type accessibility
Browser sends request B - querying documents matching accessibility*
Request B returns. It was fast, because there are fewer documents matching the full term
The results interface updates with the documents from request B, matching accessibility*
Request A returns results (this was the slower of the two requests)
The results interface updates with the documents from request A - results matching acces*

This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on. Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null. Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well. When the fetch() completes, I use requestInFlight !== currentRequest to check whether the currentRequest object is still strictly identical to the one most recently put in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results. It’s not a lot of code, really And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like. How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with. A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite. More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now. For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.",2018,Simon Willison,simonwillison,2018-12-19T00:00:00+00:00,https://24ways.org/2018/fast-autocomplete-search-for-your-website/,code 326,Don't be eval(),"JavaScript is an interpreted language, and like so many of its peers it includes the all-powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code.
It’s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you’re using eval() there’s probably something wrong with your design. Common mistakes Here’s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it – but you don’t know the name of the property until runtime. Here’s how NOT to do it:

var property = 'bar';
var value = eval('foo.' + property);

Yes, it will work, but every time that piece of code runs JavaScript will have to kick back into interpreter mode, slowing down your app. It’s also dirt ugly. Here’s the right way of doing the above:

var property = 'bar';
var value = foo[property];

In JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string. Security issues In any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript – you should be extremely cautious of running eval() against any code that may have been tampered with – for example, strings taken from the page query string. Executing untrusted code can leave you vulnerable to cross-site scripting attacks. What’s it good for? Some programmers say that eval() is B.A.D. – Broken As Designed – and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that – it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval().

function evalRequest(url) {
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
      eval(xmlhttp.responseText);
    }
  }
  xmlhttp.open(""GET"", url, true);
  xmlhttp.send(null);
}

If you want this to work with Internet Explorer you’ll need to include this compatibility patch.",2005,Simon Willison,simonwillison,2005-12-07T00:00:00+00:00,https://24ways.org/2005/dont-be-eval/,code 267,Taming Complexity,"I’m going to step into my UX trousers for this one. I wouldn’t usually wear them in public, but it’s Christmas, so there’s nothing wrong with looking silly. Anyway, to business. Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right? Well, I’ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it’s OK to sometimes reflect that complexity in the products we build. Myths and legends Less is more, we’ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I’ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion.
Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff. To paraphrase Einstein, simple doesn’t have to be simpler. In other words, simple doesn’t dictate that we remove the complexity. Complex doesn’t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site. In our decision-making process, principles such as Occam’s razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they’ll be the judge. As a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you’re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there’s a truckload of suggested features and content because it all needs to be there. Don’t be afraid of that weight. In the right hands, less can indeed mean more, but it’s just as likely that less can very often lead to, well… less. Complexity is powerful Simple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important. It’s useful to ask throughout a site’s lifespan, “does the user have everything they need?” It’s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. That benefits us in the short term, but compromises the audience in the long term. The trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they’ll crash the plane or jump out. Give them everything they need and more, but make it feel simple, and you’re building a relationship, empowering people. This can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community. The design aesthetic Here’s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette’s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be “clean”. Of course, we all like our websites to be clean as that’s more hygienic. But what do these words even mean, really? Early in a project they’re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we’re building is complex, and many of the components perhaps necessary. Simple is not a formula.
It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience. Recognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user’s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they’re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic — the façade — is only a part of that. Divide and conquer I’m currently working on an app that’s complex in architecture, and complex in ambition. We’ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it’s out of beta and beyond. I’ve noticed that I’m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically. I’m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. It’s really OK to have a lot of stuff, so long as we make each component work smartly. It’s a divide and conquer approach. By finding simplicity and logic in each content bucket, I’ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places. I’m also making sure I don’t reduce the app’s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. I know it’s the minority who will be actively building the content, but the power is in providing those opportunities for all. Much of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity. Alas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating — even if they’re mind-meltingly complex behind that façade. Embrace, empathize and tame So, there are loads of ways to exploit complexity, and make it seem simple. I’ve hinted at some methods above, and we’ve already looked at gradual engagement as a way to make sense of complexity, so that’s a big thumbs-up for a release cycle that increases audience power. 
Prior to each and every release, it’s also useful to rest on the finished thing for a while and use it yourself, even if you’re itching to release. ‘Ready’ often isn’t, and ‘finished’ never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It’s definitely worth building in some contingency time for sitting on your work, so to speak. One thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall. I mentioned the need to put our design egos to one side and not throw out anything useful, and I think that’s vital. I’m a big believer in economy, reduction, and removing the extraneous, but I’m usually referring to decoration, bells and whistles, and fluff. I wouldn’t ever advocate the complete removal of powerful content from a project roadmap. Above all, don’t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things.",2011,Simon Collison,simoncollison,2011-12-21T00:00:00+00:00,https://24ways.org/2011/taming-complexity/,ux 328,Swooshy Curly Quotes Without Images,"The problem Take a quote and render it within blockquote tags, applying big, funky and stylish curly quotes both at the beginning and the end without using any images – at all. The traditional way Faint background images under the text, or an image in the markup housed in a little float. Often designers only use the opening curly quote as it’s just too difficult to float a closing one. Why is the traditional way bad? Well, for a start there are no actual curly quotes in the text (unless you’re doing some nifty image replacement). Thus with CSS disabled you’ll only have default blockquote styling to fall back on. Secondly, images don’t resize, so scaling text will have no effect on your graphic curlies. The solution Use really big text. Then it can be resized by the browser, resized using CSS, and even be restyled with a new font style if you fancy it. It’ll also make sense when CSS is unavailable. The problem Creating “Drop Caps” with CSS has been around for a while (Big Dan Cederholm discusses a neat solution in that first book of his), but drop caps are normal characters – the A to Z or 1 to 10 – and these can all be pulled into a set space and do not serve up a ton of whitespace, unlike punctuation characters. Curly quotes aren’t like traditional characters. Like full stops, commas and hashes they float within the character space and leave lots of dead white space, making it bloody difficult to manipulate them with CSS. Styles generally fit around text, so cutting into that character is tricky indeed. Also, all that extra white space is going to push into the quote text and make it look pretty uneven. This grab highlights the actual character space: See how this is emphasized when we add a normal alphabetical character within the span. This is what we’re dealing with here: Then, there’s size. Call in a curly quote at less than 300% font-size and it ain’t gonna look very big. The white space it creates will be big enough, but the curlies will be way too small. We need more like 700% (as in this example) to make an impression, but that sure makes for a big character space.
Prepare the curlies Firstly, remove the opening straight quote character from the quote. Replace it with the character entity for an opening curly quote: &#8220;. Then replace the closing straight quote with the entity reference for a closing curly quote, which is &#8221;. Now at least the curlies will look nice and swooshy. Add the hooks There are two reasons why we aren’t using the :first-letter pseudo-class to manipulate the curlies. Firstly, only CSS2-friendly browsers would get what we’re doing, and secondly we need to affect the last “letter” of our text also – the closing curly quote. So, add a span around the opening curly, and a second span around the closing curly, giving complete control of the characters:

<blockquote>
  <span class=""bqstart"">&#8220;</span>
  Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.
  <span class=""bqend"">&#8221;</span>
</blockquote>
So far nothing will look any different, aside from the curlies looking a bit nicer. I know we’ve just added extra markup, but the benefits as far as accessibility are concerned are good enough for me, and of course there are no images to download. The CSS OK, easy stuff first. Our first rule .bqstart floats the span left, changes the color, and whacks the font-size up to an exuberant 700%. Our second rule .bqend does the same tricks aside from floating the curly to the right.

.bqstart {
  float: left;
  font-size: 700%;
  color: #FF0000;
}

.bqend {
  float: right;
  font-size: 700%;
  color: #FF0000;
}

That gives us this, which is rubbish. I’ve highlighted the actual span area with outlines: Note that the curlies don’t even fit inside the span! At this stage on IE 6 PC you won’t even see the quotes, as it only places focus on what it thinks is in the div. Also, the quote text is getting all spangled. Fiddle with margin and padding Think of that span outline box as a window, and that you need to position the curlies within that window in order to see them. By adding some small adjustments to the margin and padding it’s possible to position the curlies exactly where you want them, and remove the excess white space by defining a height:

.bqstart {
  float: left;
  height: 45px;
  margin-top: -20px;
  padding-top: 45px;
  margin-bottom: -50px;
  font-size: 700%;
  color: #FF0000;
}

.bqend {
  float: right;
  height: 25px;
  margin-top: 0px;
  padding-top: 45px;
  font-size: 700%;
  color: #FF0000;
}

I wanted the blocks of my curlies to align with the quote text, whereas you may want them to dig in or stick out more. Be aware however that my positioning works for IE PC and Mac, Firefox and Safari. Too much tweaking seems to break the magic in various browsers at various times. Now things are fitting beautifully: I must admit that the heights, margins and spacing don’t make a lot of sense if you analyze them. This was a real trial and error job. Get it working on Safari, and IE would fail. Sort IE, and Firefox would go weird. Finished The final thing looks ace, can be resized, looks cool without styles, and can be edited with CSS at any time. Here’s a real example (note that I’m specifying Lucida Grande and then Verdana for my curlies): “Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.” Browsers happy As I said, too much tweaking of margins and padding can break the effect in some browsers. Even now, Firefox insists on dropping the closing curly by approximately 6 or 7 pixels, and if I adjust the padding for that, it’ll crush it into the text on Safari or IE. Weird. Still, as I close now it seems solid through resizing tests on Safari, Firefox, Camino, Opera and IE PC and Mac. Lovely. It’s probably not perfect, but together we can beat the evil typographic limitations of the web and walk together towards a brighter, more aligned world. Merry Christmas.",2005,Simon Collison,simoncollison,2005-12-21T00:00:00+00:00,https://24ways.org/2005/swooshy-curly-quotes-without-images/,business 34,Collaborative Responsive Design Workflows,"Much has been written about workflow and designer-developer collaboration in web design, but many teams still struggle with this issue; either with how to adapt their internal workflow, or how to communicate the need for best practices like mobile first and progressive enhancement to their teams and clients. Christmas seems like a good time to have another look at what doesn’t work between us and how we can improve matters.
Why is it so difficult? We’re still beginning to understand responsive design workflows, acknowledging the need to move away from static design tools and towards best practices in development. It’s not that we don’t want to change – so why is it so difficult? Changing the way we do something that has become routine is always problematic, even with small things, and the changes today’s web environment requires from web design and development teams are anything but small. Although developers also have a host of new skills to learn and things to consider, designers are probably the ones pushed furthest out of their comfort zones: as well as graphic design, a web designer today also needs an understanding of interaction design and ergonomics, because more and more websites are becoming tools rather than pages meant to be read like a book or magazine. In addition to that there are thousands of different devices and screen sizes on the market today that layout and interactions need to work on. These aspects make it impossible to design in a static design tool, so beyond having to learn about new aspects of design, the designer has to either learn how to code or learn to work with a responsive design tool. Why do it That alone is enough to leave anyone overwhelmed, as learning a new skill takes time and slows you down in a project – and on most projects time is in short supply. Yet we have to make time or fall behind in the industry as others pitch better, interactive designs. For an efficient workflow, both designers and developers must familiarise themselves with new tools and techniques. A designer has to be able to play with ideas, make small adjustments here and there, look at the result, go back to the settings and make further adjustments, and so on. You can only realistically do that if you are able to play with all the elements of a design, including interactivity, accessibility and responsiveness. Figuring out the right breakpoints in a layout is one of the foremost reasons for designing in a responsive design tool. Even if you create layouts for three viewport sizes (i.e. smartphone, tablet and the most common desktop size), you’d only cover around 30% of visitors and you might miss problems like line breaks and padding at other viewport sizes. Another advantage is consistency. In static design tools changes will not be applied across all your other layouts. A developer referring back to last week’s comps might work with outdated metrics. Furthermore, you cannot easily test what impact changes might have on previously designed areas. In a dynamic design tool such changes will be applied to the entire design and allow you to test things in site areas you had already finished. No static design tool allows you to do this, and having somebody else produce a mockup from your static designs or wireframes will duplicate work and is inefficient. How to do it When working in a responsive design tool rather than in the browser, there is still the question of how and when to communicate with the developer. I have found that working with Sass in combination with a visual style guide is very efficient, but it does need careful planning: fundamental metrics for padding, margins and font sizes, but also design elements like sliders, forms, tabs, buttons and navigational elements, should be defined at the beginning of a project and used consistently across the site. Working with a grid can help you develop a consistent design language across your site. 
Create a visual style guide that shows what the elements look like and how they behave across different screen sizes – and when interacted with. Put all metrics on paddings, margins, breakpoints, widths, colours and so on in a text document, ideally with names that your developer can use as Sass variables in the CSS. For example: $padding-default-vertical: 1.5em; Developers, too, need an efficient workflow to keep code maintainable and speed up the time needed for more complex interactions with an eye on accessibility and performance. CSS preprocessors like Sass allow you to work with variables and mixins for default rules, as well as style sheet partials for different site areas or design elements. Create your own boilerplate to use for your projects and then update your variables with the information from your designer for each individual project. How to get buy-in One obstacle when implementing responsive design, accessibility and content strategy is the logistics of learning new skills and iterating on your workflow. Another is how to sell it. You might expect everyone on a project (including the client) to want to design and develop the best website possible: ultimately, a great site will lead to more conversions. However, we often hear that people find it difficult to convince their teammates, bosses or clients to implement best practices. Why is that? Well, I believe a lot of it is down to how we sell it. You will have experienced this yourself: some people you trust to know what they are talking about, and others you don’t. Think about why you trust that first person but don’t buy what the other one is telling you. It is likely because person A has a self-assured, calm and assertive demeanour, while person B seems insecure and apologetic. To sell our ideas, we need to become person A! For a timid designer or developer suffering from imposter syndrome (like many of us do in this industry) that is a difficult task. So how can we become more confident in selling our expertise? Write We need to become experts. And I mean not just in writing great code or coming up with beautiful designs but at explaining why we’re doing what we’re doing. Why do you code this way or that? Why is this the best layout? Why does a website have to be accessible and responsive? Write about it. Putting your thoughts down on paper or screen is a really efficient way of getting your head around a topic and learning to make a case for something. You may even find that you come up with new ideas as you are writing, so you’ll become a better designer or developer along the way. Talk Then, talk about it. Start out in front of your team, then do a lightning talk at a web event near you, then a longer talk or workshop. Having to talk about a topic is going to help you put into spoken words the argument that you’ve previously put together in writing. Writing comes more easily when you’re starting out, but we use a different register when writing than when talking, and you need to learn how to speak your case. Do the talk a couple of times and after each talk make adjustments where you found it didn’t work well. By this time, you are more than ready to make your case to the client. In fact, you’ve been ready since that first talk in front of your colleagues ;) Pitch Pitches used to be based on a presentation of static layouts for three to five typical pages and three different designs.
But if we want to sell interactivity, structure, usability, accessibility and responsiveness, we need to demonstrate these things, and I believe that it can only do us good. Sitting in the client’s chair, I have seen a few pitches, and static layouts are always sort of dull. What makes a website a website is the fact that I can interact with it, and smooth interactions or animations add that extra sparkle. I can’t claim personal experience for this one but I’d be bold and go for only one design. One demo page matching the client’s corporate design but not any specific page for the final site. Include design elements like navigation, photography, typefaces, article layout (with real content), sliders, tabs, accordions, buttons, forms, tables (yes, tables) – everything you would include in a style tiles document, only interactive. Demonstrate how the elements behave when clicked, hovered and touched, and how they change across different screen sizes. You may even want to demonstrate accessibility features like tabbed navigation and screen reader use. Obviously, there are many approaches that will work in different situations but don’t give up on finding a process that works for you and that ultimately allows you to build delightful, accessible, responsive user experiences for the web. Make time to try new tools and techniques and don’t just work on them on the side – start using them on an actual project. It is only when we use a tool or process in the real world that we become true experts. Remember your driving lessons: once the instructor had explained how to operate the car, you were sent to practise driving on the road in actual traffic!",2014,Sibylle Weber,sibylleweber,2014-12-07T00:00:00+00:00,https://24ways.org/2014/collaborative-responsive-design-workflows/,process 27,Putting Design on the Map,"The web can leave us feeling quite detached from the real world. Every site we make is really just a set of abstract concepts manifested as tools for communication and expression. At any minute, websites can disappear, overwritten by a newfangled version or simply gone. I think this is why so many of us have desires to create a product, write a book, or play with the internet of things. We need to keep in touch with the physical world and to prove (if only to ourselves) that we do make real things. I could go on and on about preserving the web, the challenges of writing a book, or thoughts about how we can deal with the need to make real things. Instead, I’m going to explore something that gives us a direct relationship between a website and the physical world – maps. A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected. Reif Larsen, The Selected Works of T.S. Spivet The simplest form of map on a website tends to be used for showing where a place is and often directions on how to get to it. That’s an incredibly powerful tool. So why is it, then, that so many sites just plonk in a default Google Map and leave it at that? You wouldn’t just use dark grey Helvetica on every site, would you? Where’s the personality? Where’s the tailored experience? Where is the design? Jumping into design Let’s keep this simple – we all want to be better web folk, not cartographers. We don’t need to go into the history, mathematics or technology of map making (although all of those areas are really interesting to research).
For the sake of our sanity, I’m going to gloss over some of the technical areas and focus on the practical concepts. Tiles If you’ve ever noticed a map loading in sections, it’s because it uses tiles that are downloaded individually instead of requiring the user to download everything that they might need. These tiles come in many styles and can be used for anything that covers large areas, such as base maps and data. You’ve seen examples of alternative base maps when you use Google Maps as Google provides both satellite imagery and road maps, both of which are forms of base maps. They are used to provide context for the real world, or any other world for that matter. A marker on a blank page is useless. The tiles are representations of the physical; they do not have to be photographic imagery to provide context. This means you can design the map itself. The easiest way to conceive this is by comparing Google’s road maps with Ordnance Survey road maps. Everything about the two maps is different: the colours, the label fonts and the symbols used. Yet they still provide the exact same context (other maps may provide different context such as terrain contours). Comparison of Google Maps (top) and the Ordnance Survey (bottom). Carefully designing the base map tiles is as important as any other part of the website. The most obvious, yet often overlooked, aspect are aesthetics and branding. Maps could fit in with the rest of the site; for example, by matching the colours and line weights, they can enhance the full design rather than inhibiting it. You’re also able to define the exact purpose of the map, so instead of showing everything you could specify which symbols or labels to show and hide. I’ve not done any real research on the accessibility of base maps but, having looked at some of the available options, I think a focus on the typography of labels and the colour of the various elements is crucial. While you can choose to hide labels, quite often they provide the data required to make sense of the map. Therefore, make sure each zoom level is not too cluttered and shows enough to give context. Also be as careful when choosing the typeface as you are in any other design work. As for colour, you need to pay closer attention to issues like colour-blindness when using colour to convey information. Quite often a spectrum of colour will be used to show data, or to show the topography, so you need to be aware that some people struggle to see colour differences within a spectrum. A nice example of a customised base map can be found on Michael K Owens’ check-in pages: One of Michael K Owens’ check-in pages. As I’ve already mentioned, tiles are not just for base maps: they are also for data. In the screenshot below you can see how Plymouth Marine Laboratory uses tiles to show data with a spectrum of colour. A map from the Marine Operational Ecology data portal, showing data of adult cod in the North Sea. Technical You’re probably wondering how to design the base layers. I will briefly explain the concepts here and give you tools to use at the end of the article. If you’re worried about the time it takes to design the maps, don’t be – you can automate most of it. You don’t need to manually draw each tile for the entire world! We’ve learned the importance of web standards the hard way, so you’ll be glad (and I won’t have to explain the advantages) of the standard for web mapping from the Open Geospatial Consortium (OGC) called the Web Map Service (WMS). 
You can use conventional file formats for the imagery but you need a way to query for the particular tiles to show for the area and zoom level – that is what WMS does. Features Tiles are great for covering large areas but sometimes you need specific smaller areas. We call these features, and they usually consist of polygons, lines or points. Examples include postcode boundaries and routes between places, or even something more dynamic such as borders of nations changing over time. Showing features on a map presents interesting design challenges. If the colour or shape conveys some kind of data beyond geographical boundaries then it needs to be made obvious. This is actually really hard, without building complicated user interfaces. For example, in the image below, is it obvious that there is a relationship between the colours? Does it need a way of showing what the colours represent? Choropleth map showing ranked postcode areas, using ViziCities. Features are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject – it requires artistic judgement. Erwin Josephus Raisz, cartographer (1893–1968) Where lots of boundaries are small and close together (such as a high street or shopping centre), will it be obvious where the boundaries are and what they represent? When designing maps, the hardest challenge is dealing with how the data is represented and how it is understood by the user. Technical As you probably gathered, we use WMS for tiles and another standard called the web feature service (WFS) for specific features. I need to stress that the difference between the two is that WMS is for tiling, whereas WFS is for specific features. Both can use similar file formats but should be used for their particular use cases. You may be wondering why you can’t just use a vector format such as KML, GeoJSON (or even SVG) – and you can – but the issue is the same as for WMS: you need a way to query the data to get the correct area and zoom level. User interface There is of course never a correct way to design an interface as there are so many different factors to take into consideration for each individual project. Maps can be used in a variety of ways, to provide simple information about directions or for complex visualisations to explain large amounts of data. I would like to just touch on matters that need to be taken into account when working with maps. As I mentioned at the beginning, there are so many Google Maps on the web that people seem to think that its UI is the only way you can use a map. To some degree we don’t want to change that, as people know how to use them; but does every map require a zoom slider or base map toggle? In fact, does the user need to zoom at all? The answer to that one is generally yes, zooming does provide more context to where the map is zoomed in on. In some cases you will need to let users choose what goes on the map (such as data layers or directions), so how do they show and hide the data? Does a simple drop-down box work, or do you need search? Google’s base map toggle is quite nice since it doesn’t offer many options yet provides very different contexts and styling. It isn’t until we get to this point that we realise just plonking a quick Google map is really quite ridiculous, especially when compared to the amount of effort we make in other areas such as colour, typography or how the CSS is written.
Each of these is important but we need to make sure the whole site is designed, and that includes the maps as much as any other content. Putting it into practice I could ramble on for ages about what we can do to customise maps to fit a site’s personality and correctly represent the data. I wanted to focus on concepts and standards because tools constantly change and it is never good to just rely on a tool to do the work. That said, there are a large variety of tools that will help you turn these concepts into reality. This is not a comparison; I just want to show you a few of the many options you have for maps on the web. Google OK, I’ve been quite critical so far about Google Maps but that is only because there are so many default maps across the web. You can style them almost as much as anything else. They may not allow you to use custom WMS layers but Google Maps does have its own version, called styled maps. Using an array of map features (in the sense of roads and lakes and landmarks rather than the kind WFS is used for), you can style the base map with JavaScript. It even lets you toggle visibility, which helps to avoid the issue of too much clutter on the map. As well as lacking WMS, it doesn’t support WFS, but it does support GeoJSON and KML so you can still show the features on the map. You should also check out Google Maps Engine (the new version of My Maps), which provides an interface for creating more advanced maps with a selection of different base maps. A premium version is available, essentially for creating map-based visualisations, and it provides a step up from the main Google Maps offering. A useful feature in some cases is that it gives you access to many datasets. Leaflet You have probably seen Leaflet before. It isn’t quite as popular as Google Maps but it is definitely used often and for good reason. Leaflet is a lightweight open source JavaScript library. It is not a service, so you don’t have to worry about API throttling or the service’s longevity. It gives you two options for tiling: using WMS, or fetching tile files directly using variables in the filename, such as /{z}/{x}/{y}.png. I would recommend using WMS over dynamic file names because it is a standard, but the ability to use variables in a file name could be useful in some situations. Leaflet has a strong community and a well-documented API. Mapbox As a freemium service, Mapbox may not be perfect for every use case but it’s definitely worth looking into. The service offers incredible customisation tools as well as lots of data sources and hosting for the maps. It also provides plenty of libraries for the various platforms, so you don’t have to only use the maps on the web. Mapbox is a service, though its map design tool is open source. Mapbox Studio is a vector-only version of their previous tool called Tilemill. Earlier I wrote about how typography and colour are as important to maps as they are to the rest of a website; if you thought, “Yes, but how on earth can I design those parts of a map?” then this is the tool for you. It is incredibly easy to use. Essentially each map has a stylesheet. If you do not want to open a paid-for Mapbox account, then you can export the tiles (as PNG, SVG etc.) to use with other map tools. OpenLayers After a long wait, OpenLayers 3 has been released. It is similar to Leaflet in that it is a library not a service, but it has a much broader scope.
During the last year I worked on the GIS portal at Plymouth Marine Laboratory (which I used to show the data tiles earlier); it essentially used OpenLayers 2 to create a web-based geographic information system, taking a large amount of data and permitting analysis (such as graphs) without downloading entire datasets and complicated software. OpenLayers 3 has improved greatly on the previous version in both performance and accessibility. It is the ideal tool for complex map-based web apps, though it can be used for the simple use cases too. OpenStreetMap I couldn’t write an article about maps on the web without at least mentioning OpenStreetMap. It is the place to go for crowd-sourced data about any location, with complete road maps and a strong API. ViziCities The newest project on this list is ViziCities by Robin Hawkes and Peter Smart. It is an open source 3-D visualisation tool, currently in the very early stages of development. The basic example shows 3-D buildings around the world using OpenStreetMap data. Robin has used it to create some incredible demos such as real-time London underground trains, and planes landing at an airport. Edward Greer and I are currently working on using ViziCities to show ideal housing areas based on particular personas. We chose it because the 3-D aspect gives us interesting possibilities for the data we are able to visualise (such as bar charts on the actual map instead of in the UI). Despite not being a completely stable, fully featured system, ViziCities is worth taking a look at for some use cases and is definitely going to go from strength to strength. So there you have it – a whistle-stop tour of how maps can be customised. Now please stop plonking in maps without thinking about it and design them as you design the rest of your content.",2014,Shane Hudson,shanehudson,2014-12-11T00:00:00+00:00,https://24ways.org/2014/putting-design-on-the-map/,design 298,First Steps in VR,"The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not. I think of these two mindsets as production and prototyping, though of course there is lots of overlap, and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people. Every year we see articles saying it will be the “year of virtual reality”; that was especially prevalent this year. 2016 has been a year of progress; VR isn’t quite mainstream, but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web. WebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation.
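To give a flavour of the raw API before reaching for any helpers, here is a minimal sketch that polls a headset’s pose using the WebVR 1.1 interfaces (navigator.getVRDisplays() and VRFrameData); treat it as an illustration rather than production code:

// Ask the browser for any connected VR displays
navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) { return; }
  var display = displays[0];
  var frameData = new VRFrameData();

  // Read the pose on every frame the display renders
  function onFrame() {
    display.getFrameData(frameData); // fills frameData with the latest pose
    var pose = frameData.pose;
    console.log('position:', pose.position, 'orientation:', pose.orientation);
    display.requestAnimationFrame(onFrame);
  }
  display.requestAnimationFrame(onFrame);
});

That continuous stream of position and orientation data is exactly what libraries like Three.js and A-Frame consume on your behalf.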
Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make it easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that help to make the heavy lifting a bit easier. Getting Started with A-Frame I like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such; I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components: you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That’s not to say you can’t build complex things; you have complete access to write JavaScript and shaders. I’m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There’s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It’s fun. For this project I wanted to keep things as simple as possible, so I can easily explain it without the classic “draw a circle then draw an owl”. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:

Must work on Google Cardboard at a minimum, because of price
Therefore, it must not rely on having a controller
Auto-moving around a maze would be a good example
Move in direction you look
Stretch goal: Scoring, time until you hit a wall or get stuck in maze
Stretch goal: Levels, so the map doesn’t need to be random
Stretch goal: Snow!

I decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground. The scene boils down to markup along these lines:

<a-scene>
  <a-entity id=""player"" camera universal-controls kinematic-body position=""0 1.8 0""></a-entity>
  <a-entity id=""walls""></a-entity>
  <a-plane static-body rotation=""-90 0 0"" width=""100"" height=""100""></a-plane>
</a-scene>

As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an a-scene, which is the container for everything that is going to show up on the screen. There are a few a-entity elements, which can be compared to divs, as they are essentially non-semantic containers, able to be used for any purpose.
The attributes are used to define functionality: for example, the camera attribute sets the entity to function as a camera, and kinematic-body makes it collide instead of going through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them. Styling Now we’ve got the HTML written, we need to style it. To do this we add A-Frame-compatible attributes such as color and material. I recommend playing around; you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave. Note, you will probably need a server running for images to work. You can do this by running python -m ""SimpleHTTPServer"" in the folder where the code is, then going to localhost:8000 in your browser. Textures Unless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com; one image worked well for the walls and the other for the floor. The a-assets element is used to define (as well as preload and cache) all assets, including images, audio and video - something along these lines:

<a-assets>
  <img id=""texture-wall"" src=""wall.jpg"">
  <img id=""texture-floor"" src=""floor.jpg"">
</a-assets>

As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image. Replace:

<a-plane rotation=""-90 0 0"" width=""100"" height=""100""></a-plane>

With:

<a-plane material=""src: #texture-floor"" rotation=""-90 0 0"" width=""100"" height=""100""></a-plane>

This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that the material is set in JavaScript, as they are dynamically defined. box.setAttribute('material', 'src: #texture-wall'); That’s it for the textures, for now at least. These will not look completely realistic, as the light will bounce off the rectangular wall rather than the texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object. Lighting The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished. There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the a-light entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit. To start with I wanted to light up the scene with a general light, type=""ambient"", so that the whole game felt slightly dark. I chose to set the light to a reddish colour, #92455E. After playing around with intensity I chose 0.4; it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (a-sky), as it looked a bit odd with a white sky. I felt that the maze looked good with a red tinge but it was a bit flat; everything was the same colour and it was a bit dark. So I added a light within the #player entity. This could have been an attribute, but I set it as a child a-light instead. By using type=""point"" with a high intensity and low distance, it showed close walls as being lighter. It also added a sort-of object to the player: it isn’t a walking human or anything, but by moving the light with the player it feels a bit more physical.
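For reference, here is a sketch of that player light as markup - the colour, intensity and distance values are my own guesses rather than the exact numbers from the finished project:

<a-entity id=""player"" camera universal-controls kinematic-body>
  <a-light type=""point"" color=""#fff"" intensity=""1.5"" distance=""10""></a-light>
</a-entity>

Because the light is a child of the player entity, it moves with the camera, which is what makes nearby walls appear lighter as you walk around.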
By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the a-scene, with type=exponential so it looks thicker the further away it is, and a mid intensity, so you feel a bit lost but can still see. I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers. Movement One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.

AFRAME.registerComponent('automove-controls', {
  init: function () {
    this.speed = 0.1;
    this.isMoving = true;
    this.velocityDelta = new THREE.Vector3();
  },
  isVelocityActive: function () {
    return this.isMoving;
  },
  getVelocityDelta: function () {
    this.velocityDelta.z = this.isMoving ? -this.speed : 0;
    return this.velocityDelta.clone();
  }
});

Replace:

universal-controls

With:

universal-controls=""movementControls: automove, gamepad, keyboard""

This works by creating a component, automove-controls, that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction; it just checks if isMoving is true, then moves the player by the set speed. Components can be created to add all kinds of functionality with relative ease. It makes it very powerful for people of all skill levels. Building a map Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels. I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images); I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future.

var map = {
  ""data"": [
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
    1, 0, 0, 0, 0, 0, 0, 1, 0, 1,
    1, 0, 1, 1, 0, 1, 1, 1, 0, 1,
    1, 0, 1, 0, 0, 0, 0, 0, 0, 1,
    1, 0, 1, 0, 1, 0, 1, 0, 0, 1,
    1, 0, 0, 0, 1, 0, 1, 1, 0, 1,
    1, 0, 1, 0, 0, 0, 0, 1, 1, 1,
    1, 0, 1, 1, 1, 1, 0, 0, 0, 1,
    1, 0, 0, 1, 0, 0, 0, 1, 2, 1,
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1
  ],
  ""height"": 10,
  ""width"": 10
}

As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice; it could be anywhere). I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where it is 1 and position the player where the data is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2 (a WALL_HEIGHT of 3 gives the 1.5 you’ll see in the code below).
document.querySelector('a-scene').addEventListener('render-target-loaded', function () {
  var WALL_SIZE = 5,
      WALL_HEIGHT = 3;
  var el = document.querySelector('#walls');
  var wall;

  for (var x = 0; x < map.height; x++) {
    for (var y = 0; y < map.width; y++) {
      var i = y * map.width + x;
      var position = (x - map.width / 2) * WALL_SIZE + ' ' + 1.5 + ' ' + (y - map.height / 2) * WALL_SIZE;
      if (map.data[i] === 1) {
        // Create wall
        wall = document.createElement('a-box');
        el.appendChild(wall);
        wall.setAttribute('color', '#fff');
        wall.setAttribute('material', 'src: #texture-wall;');
        wall.setAttribute('width', WALL_SIZE);
        wall.setAttribute('height', WALL_HEIGHT);
        wall.setAttribute('depth', WALL_SIZE);
        wall.setAttribute('position', position);
        wall.setAttribute('static-body', '');
      }
      if (map.data[i] === 2) {
        // Set player position
        document.querySelector('#player').setAttribute('position', position);
      }
    }
  }
  console.info('Walls added.');
});

With this added, it’s nice and easy to change the map around, as well as to add new features. Perhaps you want monsters or objects: just give them a number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web. It’s Not All Fun And Games A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including storytelling, data visualisation and even meditation. There have been a number of cases showing that virtual reality can help as a therapeutic tool:
Oxford study finds virtual reality can help treat severe paranoia
Virtual Reality Therapy for Phobias at the Duke Faculty Practice
Bravemind: Virtual Reality Exposure Therapy at the University of Southern California
These are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, as both a teaching and a journalism tool. Wrapping Up Ten years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, and that WAP 2.0 included the XHTML Mobile Profile, meaning it would be familiar to web folk. “The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it.” We can look at that and laugh a little; we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the “desktop web” Cameron was referring to. So while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble; we don’t need to learn any new languages. If, in the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad?
Well it’s fun… have a go anyway.",2016,Shane Hudson,shanehudson,2016-12-11T00:00:00+00:00,https://24ways.org/2016/first-steps-in-vr/,code 211,Automating Your Accessibility Tests,"Accessibility is one of those things we all wish we were better at. It can lead to a bunch of questions like: how do we make our site better? How do we test what we have done? Should we spend time each day going through our site to check everything by hand? Or just hope that everyone on our team has remembered to check their changes are accessible? This is where automated accessibility tests can come in. We can set up automated tests and have them run whenever someone makes a pull request, and even alongside end-to-end tests too. Automated tests can’t cover everything, however; only 20 to 50% of accessibility issues can be detected automatically. For example, we can’t yet automate the comparison of an alt attribute with an image’s content, and there are some screen reader tests that need to be carried out by hand too. To ensure our site is as accessible as possible, we will still need to carry out manual tests, and I will cover these later. First, I’m going to explain how I implemented automated accessibility tests on Elsevier’s ecommerce pages, and share some of the lessons I learnt along the way. Picking the right tool One of the hardest, but most important, parts of creating our automated accessibility tests was choosing the right tool. We began by investigating aXe CLI, but soon realised it wouldn’t fit our requirements. It couldn’t check pages that required a visitor to log in, so while we could test our product pages, we couldn’t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history. The example below shows how the beforeScript step completes a login form and then waits for the login to complete before testing the page:

beforeScript: function(page, options, next) {
  // An example function that can be used to make sure changes have been confirmed before continuing to run Pa11y
  function waitUntil(condition, retries, waitOver) {
    page.evaluate(condition, function(err, result) {
      if (result || retries < 1) {
        // Once the changes have taken place continue with Pa11y testing
        waitOver();
      } else {
        retries -= 1;
        setTimeout(function() {
          waitUntil(condition, retries, waitOver);
        }, 200);
      }
    });
  }

  // The script to manipulate the page must be run with page.evaluate to be run within the context of the page
  page.evaluate(function() {
    const user = document.querySelector('#login-form input[name=""email""]');
    const password = document.querySelector('#login-form input[name=""password""]');
    const submit = document.querySelector('#login-form input[name=""submit""]');
    user.value = 'user@example.com';
    password.value = 'password';
    submit.click();
  }, function() {
    // Use the waitUntil function to set the condition, number of retries and the callback
    waitUntil(function() {
      return window.location.href === 'https://example.com';
    }, 20, next);
  });
}

The waitUntil callback allows the test to be delayed until our test user is successfully logged in. Another thing to consider when picking a tool is the type of error messages it produces. aXe groups all elements with the same error together, so the list of issues is a lot easier to read, and it’s easier to identify the most common problems. For example, here are some elements that have insufficient colour contrast:

Violation of ""color-contrast"" with 8 occurrences!
Ensures the contrast between foreground and background colors meets WCAG 2 AA contrast ratio thresholds. Correct invalid elements at:
- #maincontent > .make_your_mark > div:nth-child(2) > p > span > span
- #maincontent > .make_your_mark > div:nth-child(4) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(2) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(4) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(6) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(8) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(10) > p > span > span
- #maincontent > .inform_your_decisions > div:nth-child(12) > p > span > span
For details, see: https://dequeuniversity.com/rules/axe/2.5/color-contrast

aXe also provides links to their site where they discuss the best way to fix the problem. In comparison, Pa11y lists each individual error, which can lead to a very verbose list. However, it does provide helpful suggestions of how to fix problems, such as suggesting an alternative shade of a colour to use:

• Error: This element has insufficient contrast at this conformance level. Expected a contrast ratio of at least 4.5:1, but text in this element has a contrast ratio of 2.96:1. Recommendation: change text colour to #767676.
⎣ WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail
⎣ #maincontent > div:nth-child(10) > div:nth-child(8) > p > span > span
⎣ Featured products:

Integrating into our build pipeline We decided the perfect time to run our accessibility tests would be alongside our end-to-end tests. We have a Jenkins job that detects changes to our staging site and then triggers the end-to-end tests, and in turn our accessibility tests. Our Jenkins job retrieves the contents of a GitHub repository containing our Pa11y script file and npm package manifest. Once Jenkins has cloned the repository, it installs any dependencies and executes the tests via: npm install && npm test Bundling the URLs to be tested into our test script means we don’t have a command-line-style test where we list each URL we wish to test in the Jenkins CLI. Listing URLs that way works, but it can get cluttered and obscure which URLs are being tested. In the middle of the office we have a monitor displaying a Jenkins dashboard, and from this we can see if the accessibility tests are passing or failing. Everyone in the team has access to the Jenkins logs, and when the build fails they can see why and fix the issue. Fixing the issues As mentioned earlier, Pa11y can generate a long list of areas for improvement, which can be very verbose and quite overwhelming. I recommend going through the list to see which issues occur most frequently and fixing those first. For example, we initially had a lot of errors around colour contrast, and one shade of grey in particular. By making this colour darker, the number of errors decreased, and we could focus on the remaining issues. Another thing I like to do is to tackle the quick fixes, such as adding alt text to images. These are small things that allow us to make an impact instantly, giving us time to fix more detailed concerns such as addressing tabindex issues, or speaking to our designers about changing the contrast of elements on the site. Manual testing Adding automated tests to check our site for accessibility is great, but as I mentioned earlier, this can only cover 20-50% of potential issues. To improve on this, we need to test by hand too, either by ourselves or by asking others.
One way we can test our site is to throw our mouse or trackpad away and interact with the site using only a keyboard. This allows us to check items such as tab order, and ensure menu items, buttons etc. can be used without a mouse. The commands may be different on different operating systems, but there are some great guides online for learning more about these. It’s tempting to add alt text and aria-labels to make errors go away, but if they don’t make any sense, what use are they really? Using a screen reader we can check that alt text accurately represents the image. This is also a great way to double check that our ARIA roles make sense, and that they correctly identify elements and how to interact with them. When testing our site with screen readers, it’s important to remember that not all screen readers are the same and some may interact with our site differently to others. Consider asking a range of people with different needs and abilities to test your site, too. People experience the web in numerous ways, be they permanent, temporary or even situational. They may interact with your site in ways you hadn’t even thought about, so this is a good way to broaden your knowledge and awareness. Tips and tricks One of our main issues with Pa11y is that it may find issues we don’t have the power to solve. A perfect example of this is the one-pixel image Facebook injects into our site. So, we wrote a small function to go through such errors and ignore the ones that we cannot fix.

const test = pa11y({
  ...
  hideElements: '#ratings, #js-bigsearch',
  ...
});

const ignoreErrors: string[] = [
  '',
  '',
  ''
];
const filterResult = result => {
  if (ignoreErrors.indexOf(result.context) > -1) {
    return false;
  }
  return true;
};

Initially we wanted to focus on fixing the major problems, so we added a rule to ignore notices and warnings. This made the list of errors much smaller and allowed us to focus on fixing major issues such as colour contrast and missing alt text. The ignored notices and warnings can be added back in later, after these larger issues have been resolved.

const test = pa11y({
  ignore: [
    'notice',
    'warning'
  ],
  ...
});

Jenkins gotchas While using Jenkins we encountered a few problems. Sometimes Jenkins would indicate a build had passed when in reality it had failed. This was because Pa11y had timed out due to PhantomJS throwing an error, or the test didn’t go past the first URL. Pa11y has recently released a new beta version that uses headless Chrome instead of PhantomJS, so hopefully these issues will occur less often. We tried a few approaches to solve these issues. First we added error handling, iterating over the array of test URLs so that if an unexpected error happened, we could catch it and exit the process with an error indicating that the job had failed (using process.exit(1)).

for (const url of urls) {
  try {
    console.log(url);
    let urlResult = await run(url);
    urlResult = urlResult.filter(filterResult);
    urlResult.forEach(result => console.log(result));
  } catch (e) {
    console.log('Error:', e);
    process.exit(1);
  }
}

We also had issues with unhandled rejections, sometimes caused by a session disconnecting or similar errors.
To avoid Jenkins indicating our site was passing with 100% accessibility, when in reality it had not executed any tests, we instructed Jenkins to fail the job when an unhandled rejection or uncaught exception occurred:

process.on('unhandledRejection', (reason, p) => {
  console.log('Unhandled Rejection at:', p, 'reason:', reason);
  process.exit(1);
});
process.on('uncaughtException', (err) => {
  console.log(`Caught exception: ${err}\n`);
  process.exit(1);
});

Now it’s your turn That’s it! That’s how we automated accessibility testing for Elsevier ecommerce pages, allowing us to improve our site and make it more accessible for everyone. I hope our experience can help you automate accessibility tests on your own site, and bring the web a step closer to being accessible to all.",2017,Seren Davies,serendavies,2017-12-07T00:00:00+00:00,https://24ways.org/2017/automating-your-accessibility-tests/,code 295,Internet of Stranger Things,"This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension[1]. What you’ll see You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact, why not try it now: check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it; hit the “Send a message” button and wait your turn.) It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi. What you’ll need
Raspberry Pi, any of the following models: Zero (will need straight male header pins soldered[2] and a Micro USB OTG adaptor), A+, B+, 2, or 3
Micro SD card, at least 4GB, Class 10 speed[3]
Micro USB power supply, at least 2A
USB wifi dongle (unless you have a Pi 3 - that has wifi built in)
Addressable fairy lights
Logic level shifter (with pins soldered, unless you want to do it!)
Breadboard
Jumper wires (3x male to male and 4x female to male)
Optional but recommended: a base board to hold the Pi and breadboard (often comes with a breadboard!)
Find links for where to buy all of these items in the support document that goes along with this tutorial. The total price should be around $100[4]. Setting up the Raspberry Pi You’ll need to install the SD card for the Raspberry Pi. You’ll find a link to download a disk image in the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher[5]. Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details[6].

network={
  ssid=""mywifi""
  psk=""mypassword""
}

Save the file, eject the card from your computer and plug it into the Raspberry Pi. If you have a base board or holder for the Raspberry Pi, attach it now.
Then connect the wifi USB dongle[7] and power supply, but don’t plug it in yet! Wiring! Time to wire everything up! First of all, push the logic level converter into the middle of the breadboard: Logic Level Converter The logic level converter may be labelled differently from the one in the diagram, but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. Raspberry Pi pins only output 3.3V but the lights need 5V; that’s why we need the logic level converter in there, to boost up the signal. Connect the first two wires between the Raspberry Pi pins and the breadboard: Note that the pins on the Raspberry Pi are male, so you need a female-to-male jumper wire to connect between them and the breadboard. The colours don’t have to match, but it’s easier to follow (and check) if you use the same ones as in the diagram. Then the next two: This is what you should have so far: Lights Now to connect the lights! Mine have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard. Make sure that you connect the right end of the lights; mine has a male connector at the wrong end, so it’s impossible to connect it the wrong way round, but double check. Also make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN for data in). You can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out! That’s it! Everything’s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check! The Moment of Truth! Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down! However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying, so leave it running and maybe it’ll come back up. Rig up the lights! Fix the lights up on the wall however you want: pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50-light string, so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor, where I can keep the Raspberry Pi. Check the photo here to see how the lights line up; note that there are spare unused lights in-between each row. Now visit lights.seb.ly and you’ll see this: If you’re the only one online you’ll have a direct connection to the lights, and any letter you click on will light up both in the browser and in real life.
If there are other people there, you’ll need to click the button to join the queue and wait your turn. How it works - the geeky details! Electronics: The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them: LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer). Addressable LEDs or “Neopixels” We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string. The chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels[8] and I used them on my Laser Light Synths project. Neopixels with the chip and the LED all in one - it’s the white square-shaped component, and the darker square inside is the chip. These are only 5mm wide! A Laser Light Synth! Covered with around 800 super bright neopixels! Logic Level Converter The logic level converter is a really cheap and easy way to change the level from 3.3V to 5V and back again. You must be careful that you do not connect 5V into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. Power Neopixels can often draw a lot of current, so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA, so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit’s Neopixel Uberguide. Node.js There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server. The server-side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time. WebSockets I’m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connect to my WebSocket server. When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also to all the web browsers[9]. The Raspberry Pi code The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: node /home/pi/strangerthings/client.js > /dev/null & Anything in the rc.local file gets executed when the Pi boots up, and this line of code runs the Node.js app and routes its output to nowhere (i.e. /dev/null). The & means that it runs in the background and doesn’t hold up the boot process.
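The real client and server code lives in the repos above, but here is a minimal sketch of that message round trip using Socket.io. This is an illustration only, not the actual stranger-lights protocol: the 'letter' event name, the port, and the lightUpLetter helper are all assumptions.

// Server (sketch): broadcast each letter press to every connected client,
// i.e. the Raspberry Pi and all of the open browsers.
var io = require('socket.io')(8080);
io.on('connection', function (client) {
  client.on('letter', function (letter) {
    io.emit('letter', letter); // forward to everyone, Pi included
  });
});

// Raspberry Pi client (sketch): listen for letters and light the LEDs.
// lightUpLetter is a hypothetical stand-in for the real neopixel code.
var socket = require('socket.io-client')('http://localhost:8080');
socket.on('letter', function (letter) {
  lightUpLetter(letter);
});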
Working with the Raspberry Pi headless You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh[10]. If you’re on a Mac you can find your Pi on the network through the name raspberrypi.local[11], otherwise you’ll need to find its IP address. There’s more in the Remote Access guide on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial. Improvements This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style. Where next? Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses; please let me know if that’s something you’d be interested in, or sign up to my mailing list at st4i.com. There are many, many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice! Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources. I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly.
[1] Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, i.e. using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables ↩︎
[2] Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit ↩︎
[3] Slower cards will work but performance may suffer ↩︎
[4] Or £5,000 in UK money. Sorry, Brexit joke :) ↩︎
[5] You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. ↩︎
[6] SSID and password should be all that you need, but you can see all the config options in this wpa supplicant guide ↩︎
[7] The Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle ↩︎
[8] Thanks to Adafruit, who invented the term neopixels so we don’t have to refer to them as WS281x any more! ↩︎
[9] So you can see other people sending messages in the browser ↩︎
[10] ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. ↩︎
[11] You can change this default hostname using raspi-config ↩︎",2016,Seb Lee-Delisle,sebleedelisle,2016-12-01T00:00:00+00:00,https://24ways.org/2016/internet-of-stranger-things/,code 221,"“Probably, Maybe, No”: The State of HTML5 Audio","With the hype around HTML5 and CSS3 exceeding levels not seen since 2005’s Ajax era, it’s worth noting that the excitement comes with good reason: the two specifications render many years of feature hacks redundant by replacing them with native features. For fun, consider how many CSS2-based rounded-corner hacks you’ve probably glossed over, looking for a magic solution. These days, with CSS3, the magic is border-radius (and perhaps some vendor prefixes) followed by a coffee break.
CSS3’s border-radius, box-shadow, text-shadow and gradients, and HTML5’s ,