rowid | title | contents | year | author | author_slug | published | url | topic
26 Integrating Contrast Checks in Your Web Workflow It’s nearly Christmas, which means you’ll be sure to find an overload of festive red and green decorating everything in sight—often in the ugliest ways possible. While I’m not here to battle holiday tackiness in today’s 24 ways, it might just be the perfect reminder to step back and consider how we can implement colour schemes in our websites and apps that are not only attractive, but also legible and accessible for folks with various types of visual disabilities. This simulated photo demonstrates how red and green Christmas baubles could appear to a person affected by protanopia-type colour blindness—not as festive as you might think. Source: Derek Bruff I’ve been fortunate to work with Simply Accessible to redesign not just their website, but their entire brand. Although the new site won’t be launching until the new year, we’re excited to let you peek under the tree and share a few treats as a case study into how we tackled colour accessibility in our project workflow. Don’t worry—we won’t tell Santa! Create a colour game plan A common misconception about accessibility is that meeting compliance requirements hinders creativity and beautiful design—but we beg to differ. Unfortunately, like many company websites and internal projects, Simply Accessible has spent so much time helping others that they had not spent enough time helping themselves to show the world who they really are. This was the perfect opportunity for them to practise what they preached. After plenty of research and brainstorming, we decided to evolve the existing Simply Accessible brand. Or, rather, salvage what we could. There was no established logo to carry into the new design (it was a stretch to even call it a wordmark), and the Helvetica typography across the site lacked any character. The only recognizable feature left to work with was colour. It was a challenge, for sure: the oranges looked murky and brown, and the blues looked way too corporate for a company like Simply Accessible. We knew we needed to inject a lot of personality. The old Simply Accessible website and colour palette. After an audit to round up every colour used throughout the site, we dug in deep and played around with some ideas to bring some new life to this palette. Choose effective colours Whether you’re starting from scratch or evolving an existing brand, the first step to having an effective and legible palette begins with your colour choices. While we aren’t going to cover colour message and meaning in this article, it’s important to understand how to choose colours that can be used to create strong contrast—one of the most important ways to create hierarchy, focus, and legibility in your design. There are a few methods of creating effective contrast. Light and dark colours The contrast that exists between light and dark colours is the most important attribute when creating effective contrast. Try not to use colours that have a similar lightness next to each other in a design. The red and green colours on the left share a similar lightness and don’t provide enough contrast on their own without making some adjustments. Removing colour and showing the relationship in greyscale reveals that the version on the right is much more effective. It’s important to remember that red and green colour pairs cause difficulty for the majority of colour-blind people, so they should be avoided wherever possible, especially when placed next to each other. 
Complementary contrast Effective contrast can also be achieved by choosing complementary colours (other than red and green) that are opposite each other on a colour wheel. These colour pairs generally work better than choosing adjacent hues on the wheel. Cool and warm contrast Contrast also exists between cool and warm colours on the colour wheel. Imagine a colour wheel divided into cool colours like blues, purples, and greens, and compare them to warm colours like reds, oranges, and yellows. Choosing a dark shade of a cool colour, paired with a light tint of a warm colour, will provide better contrast than two warm colours or two cool colours. Develop colour concepts After much experimentation, we settled on a simple, two-colour palette of blue and orange, a cool-warm contrast colour scheme. We added swatches for call-to-action messaging in green, error messaging in red, and body copy and form fields in black and grey. Shades and tints of blue and orange were added to illustrations and other design elements for extra detail and interest. First stab at a new palette. We introduced the new palette for the first time on an internal project to test the waters before going full steam ahead with the website. It gave us plenty of time to get a feel for the new design before sharing it with the public. Putting the test palette into practice with an internal report It’s important to be open to changes in your palette as it might need to evolve throughout the design process. Don’t tell your client up front that this palette is set in stone. If you need to tweak the colour of a button later because of legibility issues, the last thing you want is your client pushing back because it’s different from what you promised. As it happened, we did tweak the colours after the test run, and we even adjusted the logo—what looked great printed on paper looked a little too light on screens. Consider how colours might be used Don’t worry if you haven’t had the opportunity to test your palette in advance. As long as you have some well-considered options, you’ll be ready to think about how the colour might be used on the site or app. Obviously, in such early stages it’s unlikely that you’re going to know every element or feature that will appear on the site at launch time, or even which design elements could be introduced to the site later down the road. There are, of course, plenty of safe places to start. For Simply Accessible, I quickly mocked up these examples in Illustrator to get a handle on the elements of a website where contrast and legibility matter the most: text colours and background colours. While it’s less important to consider the contrast of decorative elements that don’t convey essential information, it’s important for a reader to be able to discern elements like button shapes and empty form fields. A basic list of possible colour combinations that I had in mind for the Simply Accessible website Run initial tests Once these elements were laid out, I manually plugged in the HTML colour code of each foreground colour and background colour on Lea Verou’s Contrast Checker. I added the results from each colour pair test to my document so we could see at a glance which colours needed adjustment or which colours wouldn’t work at all. Note: Read more about colour accessibility and contrast requirements As you can see, a few problems were revealed in this test. To meet the minimum AA compliance, we needed to slightly darken the green, blue, and orange background colours for text—an easy fix.
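If you’d rather script these checks than paste each pair into a web tool by hand, the WCAG 2.0 contrast formula is simple enough to implement yourself. Here’s a minimal sketch in JavaScript – the function names are my own, not part of any of the tools mentioned here:

    // Convert an 8-bit sRGB channel (0–255) to its linearised value,
    // as defined by the WCAG 2.0 relative luminance formula.
    function linearise(channel) {
      var c = channel / 255;
      return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of a colour given as [r, g, b].
    function luminance(rgb) {
      return 0.2126 * linearise(rgb[0]) +
             0.7152 * linearise(rgb[1]) +
             0.0722 * linearise(rgb[2]);
    }

    // Contrast ratio between two colours: from 1:1 (identical)
    // up to 21:1 (black on white).
    function contrastRatio(a, b) {
      var lighter = Math.max(luminance(a), luminance(b));
      var darker = Math.min(luminance(a), luminance(b));
      return (lighter + 0.05) / (darker + 0.05);
    }

    // Dark grey text (#444444) on white: roughly 9.7:1, comfortably
    // past the 4.5:1 that AA requires for body text.
    contrastRatio([68, 68, 68], [255, 255, 255]);

Run every foreground and background pair through a function like this and you have the beginnings of an automated version of the checks above.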
A more complicated problem was apparent with the button colours. I had envisioned some buttons appearing over a blue background, but the contrast ratios were well under 3:1. Although there isn’t a guide in WCAG for contrast requirements of two non-text elements, the ISO and ANSI standard for visible contrast is 3:1, which is what we decided to aim for. We also checked our colour combinations in Color Oracle, an app that simulates the most extreme forms of colour blindness. It confirmed that coloured buttons over blue backgrounds were simply not going to work. The contrast was much too low, especially for the more common deuteranopia- and protanopia-type deficiencies. How our proposed colour pairs could look to people with three types of colour blindness Make adjustments if necessary As a solution, we opted to change all buttons to white when used over dark-coloured backgrounds. In addition to increasing contrast, it also gave more consistency to the button design across the site instead of introducing a lot of unnecessary colour variants. Putting more work into getting compliant contrast ratios at this stage will make the rest of implementation and testing a breeze. When you’ve got those ratios looking good, it’s time to move on to implementation. Implement colours in style guide and prototype Once I was happy with my contrast checks, I created a basic style guide and added all the colour values from my colour exploration files, introduced more tints and shades, and added patterned backgrounds. I created examples of every panel style we were planning to use on the site, with sample text, links, and buttons—all with working hover states. Not only does this make it easier for the developer, it allows you to check in the browser for any further contrast issues. Run a final contrast check During the final stages of testing and before launch, it’s a good idea to do one more check for colour accessibility to ensure nothing’s been lost in translation from design to code. Unless you’ve introduced massive changes to the design in the prototype, it should be fairly easy to fix any issues that arise, particularly if you’ve stayed on top of updating any revisions in the style guide. One of the better-known evaluation tools, WAVE, is web-based and will work in any browser, but I love using Chrome’s Accessibility Tools. Not only are they built right into the Inspector, but they’ll work if your site is password-protected or private, too. Chrome’s Accessibility Tools audit feature shows that there are no immediate issues with colour contrast in our prototype The human touch Finally, nothing beats a good round of user testing. Even evaluation tools have their flaws. Although they’re great at catching contrast errors for text and backgrounds, they aren’t going to be able to find errors in non-text elements, infographics, or objects placed next to each other where discernible contrast is important. Our final palette, compared with our initial ideas, was quite different, but we’re proud to say it’s not just compliant, but shows Simply Accessible’s true personality. Who knows, it may not be final at all—there are so many opportunities down the road to explore and expand it further. Accessibility should never be an afterthought in a project. It’s not as simple as adding alt text to images, or running your site through a compliance checker at the last minute and assuming that a pass means everything is okay.
Considering how colour will be used during every stage of your project will help avoid massive problems before launch, or worse, launching with serious issues. If you find yourself working on a personal project over the Christmas break, try integrating these checks into your workflow and make colour accessibility a part of your New Year’s resolutions. 2014 Geri Coady gericoady 2014-12-22T00:00:00+00:00 https://24ways.org/2014/integrating-contrast-checks-in-your-web-workflow/ design
27 Putting Design on the Map The web can leave us feeling quite detached from the real world. Every site we make is really just a set of abstract concepts manifested as tools for communication and expression. At any minute, websites can disappear, overwritten by a newfangled version or simply gone. I think this is why so many of us have desires to create a product, write a book, or play with the internet of things. We need to keep in touch with the physical world and to prove (if only to ourselves) that we do make real things. I could go on and on about preserving the web, the challenges of writing a book, or thoughts about how we can deal with the need to make real things. Instead, I’m going to explore something that gives us a direct relationship between a website and the physical world – maps. A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected. Reif Larsen, The Selected Works of T.S. Spivet The simplest form of map on a website tends to be used for showing where a place is and often directions on how to get to it. That’s an incredibly powerful tool. So why is it, then, that so many sites just plonk in a default Google Map and leave it at that? You wouldn’t just use dark grey Helvetica on every site, would you? Where’s the personality? Where’s the tailored experience? Where is the design? Jumping into design Let’s keep this simple – we all want to be better web folk, not cartographers. We don’t need to go into the history, mathematics or technology of map making (although all of those areas are really interesting to research). For the sake of our sanity, I’m going to gloss over some of the technical areas and focus on the practical concepts. Tiles If you’ve ever noticed a map loading in sections, it’s because it uses tiles that are downloaded individually instead of requiring the user to download everything that they might need. These tiles come in many styles and can be used for anything that covers large areas, such as base maps and data. You’ve seen examples of alternative base maps when you use Google Maps as Google provides both satellite imagery and road maps, both of which are forms of base maps. They are used to provide context for the real world, or any other world for that matter. A marker on a blank page is useless. The tiles are representations of the physical; they do not have to be photographic imagery to provide context. This means you can design the map itself. The easiest way to conceive this is by comparing Google’s road maps with Ordnance Survey road maps. Everything about the two maps is different: the colours, the label fonts and the symbols used. Yet they still provide the exact same context (other maps may provide different context such as terrain contours). Comparison of Google Maps (top) and the Ordnance Survey (bottom). Carefully designing the base map tiles is as important as any other part of the website. The most obvious, yet often overlooked, aspect is aesthetics and branding. Maps could fit in with the rest of the site; for example, by matching the colours and line weights, they can enhance the full design rather than inhibiting it. You’re also able to define the exact purpose of the map, so instead of showing everything you could specify which symbols or labels to show and hide.
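To make that concrete: with a JavaScript library such as Leaflet (one of the tools covered at the end of this article), swapping the default imagery for your own designed tiles is only a few lines of work. This is just a sketch – the custom tile server here is hypothetical, standing in for wherever your designed tiles live:

    // A map centred on Plymouth, using the standard z/x/y tile scheme.
    var map = L.map('map').setView([50.37, -4.14], 13);

    // The familiar default would be OpenStreetMap's imagery at
    // https://tile.openstreetmap.org/{z}/{x}/{y}.png – your own
    // designed base map plugs into exactly the same scheme.
    L.tileLayer('https://tiles.example.com/my-brand/{z}/{x}/{y}.png', {
      attribution: 'Map design © Your Studio'
    }).addTo(map);

The design work happens in the tiles themselves; the code only decides which set to load.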
I’ve not done any real research on the accessibility of base maps but, having looked at some of the available options, I think a focus on the typography of labels and the colour of the various elements is crucial. While you can choose to hide labels, quite often they provide the data required to make sense of the map. Therefore, make sure each zoom level is not too cluttered and shows enough to give context. Also be as careful when choosing the typeface as you are in any other design work. As for colour, you need to pay closer attention to issues like colour-blindness when using colour to convey information. Quite often a spectrum of colour will be used to show data, or to show the topography, so you need to be aware that some people struggle to see colour differences within a spectrum. A nice example of a customised base map can be found on Michael K Owens’ check-in pages: One of Michael K Owens’ check-in pages. As I’ve already mentioned, tiles are not just for base maps: they are also for data. In the screenshot below you can see how Plymouth Marine Laboratory uses tiles to show data with a spectrum of colour. A map from the Marine Operational Ecology data portal, showing data of adult cod in the North Sea. Technical You’re probably wondering how to design the base layers. I will briefly explain the concepts here and give you tools to use at the end of the article. If you’re worried about the time it takes to design the maps, don’t be – you can automate most of it. You don’t need to manually draw each tile for the entire world! We’ve learned the importance of web standards the hard way, so you’ll be glad (and I won’t have to explain the advantages) of the standard for web mapping from the Open Geospatial Consortium (OGC) called the Web Map Service (WMS). You can use conventional file formats for the imagery, but you need a way to query for the particular tiles to show for the area and zoom level; that is what WMS does. Features Tiles are great for covering large areas but sometimes you need specific smaller areas. We call these features and they usually consist of polygons, lines or points. Examples include postcode boundaries and routes between places, or even something more dynamic such as borders of nations changing over time. Showing features on a map presents interesting design challenges. If the colour or shape conveys some kind of data beyond geographical boundaries then it needs to be made obvious. This is actually really hard without building complicated user interfaces. For example, in the image below, is it obvious that there is a relationship between the colours? Does it need a way of showing what the colours represent? Choropleth map showing ranked postcode areas, using ViziCities. Features are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject – it requires artistic judgement. Erwin Josephus Raisz, cartographer (1893–1968) Where lots of boundaries are small and close together (such as a high street or shopping centre), will it be obvious where the boundaries are and what they represent? When designing maps, the hardest challenge is dealing with how the data is represented and how it is understood by the user. Technical As you probably gathered, we use WMS for tiles and another standard called the Web Feature Service (WFS) for specific features. I need to stress that the difference between the two is that WMS is for tiling, whereas WFS is for specific features.
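Before moving on, here’s a rough sketch of how the two standards tend to be consumed in the browser, again using Leaflet and carrying on from the map object above. The endpoint and layer names are placeholders, not a real service:

    // Tiles via WMS: the server renders map images for whatever
    // area and zoom level the library asks for.
    L.tileLayer.wms('https://maps.example.com/wms', {
      layers: 'base:roads', // hypothetical layer name
      format: 'image/png',
      transparent: true
    }).addTo(map);

    // Features via WFS: the server returns the geometry itself
    // (requested here as GeoJSON), which we style on the client.
    fetch('https://maps.example.com/wfs?service=WFS&version=2.0.0' +
          '&request=GetFeature&typeNames=boundaries:postcodes' +
          '&outputFormat=application/json')
      .then(function(response) { return response.json(); })
      .then(function(geojson) {
        L.geoJson(geojson, {
          style: { color: '#d35400', weight: 2 } // design the features too
        }).addTo(map);
      });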
Both can use similar file formats but should be used for their particular use cases. You may be wondering why you can’t just use a vector format such as KML, GeoJSON (or even SVG) – and you can – but the issue is the same as for WMS: you need a way to query the data to get the correct area and zoom level. User interface There is of course never a correct way to design an interface as there are so many different factors to take into consideration for each individual project. Maps can be used in a variety of ways, to provide simple information about directions or for complex visualisations to explain large amounts of data. I would like to just touch on matters that need to be taken into account when working with maps. As I mentioned at the beginning, there are so many Google Maps on the web that people seem to think that its UI is the only way you can use a map. To some degree we don’t want to change that, as people know how to use them; but does every map require a zoom slider or base map toggle? In fact, does the user need to zoom at all? The answer to that one is generally yes: zooming does provide more context to where the map is zoomed in on. In some cases you will need to let users choose what goes on the map (such as data layers or directions), so how do they show and hide the data? Does a simple drop-down box work, or do you need search? Google’s base map toggle is quite nice since it doesn’t offer many options yet provides very different contexts and styling. It isn’t until we get to this point that we realise just plonking in a quick Google map is really quite ridiculous, especially when compared to the amount of effort we make in other areas such as colour, typography or how the CSS is written. Each of these is important but we need to make sure the whole site is designed, and that includes the maps as much as any other content. Putting it into practice I could ramble on for ages about what we can do to customise maps to fit a site’s personality and correctly represent the data. I wanted to focus on concepts and standards because tools constantly change and it is never good to just rely on a tool to do the work. That said, there are a large variety of tools that will help you turn these concepts into reality. This is not a comparison; I just want to show you a few of the many options you have for maps on the web. Google OK, I’ve been quite critical so far about Google Maps but that is only because there are so many default maps across the web. You can style them almost as much as anything else. They may not allow you to use custom WMS layers but Google Maps does have its own version, called styled maps. Using an array of map features (in the sense of roads and lakes and landmarks rather than the kind WFS is used for), you can style the base map with JavaScript. It even lets you toggle visibility, which helps to avoid the issue of too much clutter on the map. As well as lacking WMS, it doesn’t support WFS, but it does support GeoJSON and KML so you can still show the features on the map. You should also check out Google Maps Engine (the new version of My Maps), which provides an interface for creating more advanced maps with a selection of different base maps. A premium version is available, essentially for creating map-based visualisations, and it provides a step up from the main Google Maps offering. A useful feature in some cases is that it gives you access to many datasets. Leaflet You have probably seen Leaflet before.
It isn’t quite as popular as Google Maps but it is definitely used often and for good reason. Leaflet is a lightweight open source JavaScript library. It is not a service so you don’t have to worry about API throttling and longevity. It gives you two options for tiling: the ability to use WMS, or to directly get the file using variables in the filename such as /{z}/{x}/{y}.png. I would recommend using WMS over dynamic file names because it is a standard, but the ability to use variables in a file name could be useful in some situations. Leaflet has a strong community and a well-documented API. Mapbox As a freemium service, Mapbox may not be perfect for every use case but it’s definitely worth looking into. The service offers incredible customisation tools as well as lots of data sources and hosting for the maps. It also provides plenty of libraries for the various platforms, so you don’t have to only use the maps on the web. Mapbox is a service, though its map design tool is open source. Mapbox Studio is a vector-only version of their previous tool called TileMill. Earlier I wrote about how typography and colour are as important to maps as they are to the rest of a website; if you thought, “Yes, but how on earth can I design those parts of a map?” then this is the tool for you. It is incredibly easy to use. Essentially each map has a stylesheet. If you do not want to open a paid-for Mapbox account, then you can export the tiles (as PNG, SVG etc.) to use with other map tools. OpenLayers After a long wait, OpenLayers 3 has been released. It is similar to Leaflet in that it is a library not a service, but it has a much broader scope. During the last year I worked on the GIS portal at Plymouth Marine Laboratory (which I used to show the data tiles earlier); it essentially used OpenLayers 2 to create a web-based geographic information system, taking a large amount of data and permitting analysis (such as graphs) without downloading entire datasets and complicated software. OpenLayers 3 has improved greatly on the previous version in both performance and accessibility. It is the ideal tool for complex map-based web apps, though it can be used for the simple use cases too. OpenStreetMap I couldn’t write an article about maps on the web without at least mentioning OpenStreetMap. It is the place to go for crowd-sourced data about any location, with complete road maps and a strong API. ViziCities The newest project on this list is ViziCities by Robin Hawkes and Peter Smart. It is an open source 3-D visualisation tool, currently in the very early stages of development. The basic example shows 3-D buildings around the world using OpenStreetMap data. Robin has used it to create some incredible demos such as real-time London Underground trains and planes landing at an airport. Edward Greer and I are currently working on using ViziCities to show ideal housing areas based on particular personas. We chose it because the 3-D aspect gives us interesting possibilities for the data we are able to visualise (such as bar charts on the actual map instead of in the UI). Despite not being a completely stable, fully featured system, ViziCities is worth taking a look at for some use cases and is definitely going to go from strength to strength. So there you have it – a whistle-stop tour of how maps can be customised. Now please stop plonking in maps without thinking about it and design them as you design the rest of your content.
2014 Shane Hudson shanehudson 2014-12-11T00:00:00+00:00 https://24ways.org/2014/putting-design-on-the-map/ design
28 Why You Should Design for Open Source Let’s be honest. Most designers don’t like working for nothing. We rally against spec work and make a stand for contracts and getting paid. That’s totally what you should do as a professional designer in the industry. It’s your job. It’s your hard-working skill. It’s your bread and butter. Get paid. However, I’m going to make a case for why you could also consider designing for open source. First, I should mention that not all open source work is free work. Some companies hire open source contributors to work on their projects full-time, usually because that project is used by said company. There are other companies that encourage open source contribution and even offer 20%-time for these projects (where you can spend one day a week contributing to open source). These are super rad situations to be in. However, whether you’re able to land a gig doing this type of work, or you’ve decided to volunteer your time and energy, designing for open source can be rewarding in many other ways. Portfolio building New designers often find themselves in a catch-22 situation: they don’t have enough work experience showcased in their portfolio, which leads to them not getting much work because their portfolio is bare. These new designers often turn to unsolicited redesigns to fill their portfolio. An unsolicited redesign is a proof of concept in which a designer attempts to redesign a popular website. You can see many of these concepts on sites like Dribbble and Behance and there are even websites dedicated to showcasing these designs, such as Uninvited Designs. There’s even a subreddit for them. There are quite a few negative opinions on unsolicited redesigns, though some people see things from both sides. If you feel like doing one or two of these to fill your portfolio, that’s of course up to you. But here’s a better suggestion. Why not contribute design for an open source project instead? You can easily find many projects in great need of design work, from branding to information design, documentation, and website or application design. The benefits to doing this are far better than an unsolicited redesign. You get a great portfolio piece that actually has greater potential to get used (especially if the core team is on board with it). It’s a win-win situation. Not all designers are in need of portfolio filler, but there are other benefits to contributing design. Giving back to the community My first experience with voluntary work was when I collaborated with my friend, Vineet Thapar, on a pro bono project for the W3C’s Web Accessibility Initiative redesign project back in 2004. I was very excited to contribute CSS to a website that would get used by the W3C! Unfortunately, it decided to go a different direction and my work did not get used. However, it was still pretty exciting to have the opportunity, and I don’t regret a moment of that work. I learned a lot about accessibility from this experience and it helped me land some of the jobs I’ve had since. Almost a decade later, I got super into Sass. One of the core maintainers, Chris Eppstein, lamented on Twitter one day that the Sass website and brand was in dire need of design help. That led to the creation of an open source task force, Team Sass Design, and we revived the brand and the website, which launched at SassConf in 2013. It helped me in my current job. I showed it during my portfolio review when I interviewed for the role. 
Then I was able to use inspiration from a technique I’d tried on the Sass website to help create the more feature-rich design system that my team at work is building. But most importantly, I soon learned that it is exhilarating to be a part of the Sass community. This is the biggest benefit of all. It feels really good to give back to the technology I love and use for getting my work done. Ben Werdmuller writes about the need for design in open source. It’s great to see designers contributing to open source in awesome ways. When A List Apart’s website went open source, Anna Debenham contributed by helping build its pattern library. Bevan Stephens worked with FontForge on the design of its website. There are also designers who have created their own open source projects. There’s Dan Cederholm’s Pears, which shares common patterns in markup and style. There’s also Brad Frost’s Pattern Lab, which shares his famous method of atomic design and applies it to a design system. These systems and patterns have been used in real-world projects, such as RetailMeNot, so designers have contributed to the web in an even larger way simply by putting their work out there for others to use. That’s kind of fun to think about. How to get started So are you stoked about getting into the open source community? That’s great! Initially, you might feel worried or uncomfortable about getting involved. That’s okay. But first consider that the project is open source for a reason. Your contribution (no matter how large or small) can help in a big way. If you find a project you’re interested in helping with, make sure you do your research. Sometimes project team members will be attached to their current design. Is there already a designer on the core team? Reach out to that designer first. Don’t be too aggressive with why you think your design is better than theirs. Rather, offer some constructive feedback and a proposal of what would make the design better. Chances are, if the designer cares about the project, and you make a strong case, they’ll be up for it. Are there contribution guidelines? It’s proper etiquette to read these and follow the community’s rules. You’ll have a better chance of getting your work accepted, and it shows that you take the time to care and add to the overall quality of the project. Does the project lack guidelines? Consider starting a draft for them before getting started on the design. When contributing to open source, use your initiative to solve problems in a manageable way. Huge pull requests are hard to review and will often either get neglected or rejected. Work in small, modular, and iterative contributions. So this is my personal take on what I’ve learned from my experience and why I love open source. I’d love to hear from you if you have your own experience in doing this and what you’ve learned along the way as well. Please share in the comments! Thanks to Drew McLellan, Eric Suzanne and Kyle Neath for sharing their thoughts with me on this! 2014 Jina Anne jina 2014-12-19T00:00:00+00:00 https://24ways.org/2014/why-you-should-design-for-open-source/ design
29 What It Takes to Build a Website In 1994 we lost Kurt Cobain and got the world wide web as a weird consolation prize. In the years that followed, if you’d asked me if I knew how to build a website I’d have said yes, I know HTML, so I know how to build a website. If you’d then asked me what it takes to build a website, I’d have had to admit that HTML would hardly feature. Among the design nerdery and dev geekery it’s easy to think that the nuts and bolts of building a page just need to be multiplied up and Ta-da! There’s your website. That can certainly be true with weekend projects and hackery for fun. It works for throwing something together on GitHub or experimenting with ideas on your personal site. But what about working professionally on client projects? The web is important, so we need to build it right. It’s 2015 – your job involves people paying you money for building websites. What does it take to build a website and to do it right? What practices should we adopt to make really great, successful and professional web projects in 2015? I put that question to some friends and 24 ways authors to see what they thought. Getting the tech right Inevitably, it all starts with the technology. We work in a technical medium, after all. From Notepad and WinFTP through to continuous integration and deployment – how do you build sites? Create a stable development environment There’s little more likely to send a web developer into a wild panic and a client into a wild rage than making a new site live and things just not working. That’s why it’s important to have realistic development and staging environments that mimic the live server as closely as possible. Are you in the habit of developing new sites right on the client’s server? Or maybe in a subfolder on your local machine? It’s time to reconsider. Charlie Perrins writes: Don’t work on a live server – this feels like one of those gear-changing moments for a developer’s growth. Build something that works just as well locally on your own machine as it does on a live server, and capture the differences in the code between the local and live version in a single config file. Ultimately, if you can get all the differences between environments down to a config level then you’ll be in a really good position to automate the deployment process at some point in the future. Anything that creates a significant difference between the development and the live environments has the potential to cause problems you won’t know about until the site goes live – and at that point the problems are very public and very embarrassing, not to mention unprofessional. A reasonable solution is to use a tool like MAMP PRO, which enables you to set up an individual local website for each project you work on. Importantly, individual sites give you both consistency of paths between development and live, and the ability to configure server options (like PHP versions and configuration, for example) to match the live site. Better yet is to use a virtual machine, managed with a tool such as Vagrant. If you’re interested in learning more about that, we have an article on that subject later in the series. Use source control Trent Walton writes: We use source control, and it’s become the centerpiece for how we handle collaboration, enhancements, and issues. It drives our process. I’m hoping by now that you’re either using source control for all your work, or feeling a nagging guilt that you should be.
Be it Git, Mercurial, Subversion (name your poison), a revision control system enables you to keep track of changes, revert anything that breaks, and keep rolling backups of your project. The benefits only start there, and Charlie Perrins recommends using source control “not just as a personal backup of your code, but as a way to play nicely with other developers.” Noting the benefits when collaborating with other developers, he adds: Graduating from being the sole architect of your codebase to contributing to a shared codebase is a huge leap for a developer. Perhaps a practical way for people who tend to work on their own to do that would be to submit a pull request or a patch to an open source project or plugin. Richard Rutter of Clearleft sees clear advantages for the client, too. He recommends using source control “preferably in some sort of collaborative environment that you can open up or hand over to the client” – a feature found with hosted services such as GitHub. If you’d like to hone your Git skills, Emma Jane Westby wrote Git for Grown-ups in last year’s 24 ways. Don’t repeat, automate! Tim Kadlec is a big proponent of automating your build process: I’ve been hammering that home to every client I’ve had this year. It’s amazing how many companies don’t really have a formal build/deployment process in place. So many issues on the web (performance, accessibility, etc.) can be greatly improved just by having a layer of automation involved. For example, graphic editing software spits out ridiculously bloated images. Very frequently, that’s what ends up getting put on a site. If you have a build process, you can have the compression automated and start seeing immediate gains for no effort. On a recent project, they were able to shave around 1.5MB from their site weight simply by automating compression. Once you have your code in source control, some of that automation can be made easier. Brian Suda writes: We have a few bash scripts that run on git commit: they compile the Less, run JSLint and remove white-space – basically the 3 Cs: Compress, Concatenate, Combine. This is now part of our workflow without even realising it. One great way to get started with a build process is to use a tool like Grunt, and a great way to get started with Grunt is to read Chris Coyier’s Grunt for People Who Think Things Like Grunt are Weird and Hard. Tim reinforces: Issues like [image compression] — or simple accessibility issues like alt tags on images — should never be able to hit a live server. If you can detect it, you can automate it. And if you can automate it, you can free up time for designers and developers to focus on more challenging — and interesting — problems. A clear call to arms to tighten up and formalise development and deployment practices. The less that has to be done manually or is susceptible to change, the less that can go wrong when a site is built and deployed. Any procedures that are automated are no longer dependent on a single person’s knowledge, making it easier to build your team or just cope when someone important is out of the office or leaves. If you’re interested in kicking the FTP habit and automating your site deployments, we have an article later in the series just for you.
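To give a flavour of what that automation looks like, here’s a skeleton Gruntfile covering Brian’s three Cs plus Tim’s image compression. The plugins are real (grunt-contrib-concat, grunt-contrib-uglify and grunt-contrib-imagemin), but the paths and settings are placeholders you’d adapt to your own project:

    // Gruntfile.js – concatenate and compress JavaScript,
    // and recompress images, on every build.
    module.exports = function(grunt) {
      grunt.initConfig({
        concat: {
          dist: {
            src: ['src/js/**/*.js'],
            dest: 'build/site.js'
          }
        },
        uglify: {
          dist: {
            files: { 'build/site.min.js': ['build/site.js'] }
          }
        },
        imagemin: {
          dist: {
            files: [{
              expand: true,
              cwd: 'src/img/',
              src: ['**/*.{png,jpg,gif}'],
              dest: 'build/img/'
            }]
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.loadNpmTasks('grunt-contrib-imagemin');

      grunt.registerTask('default', ['concat', 'uglify', 'imagemin']);
    };

Run grunt before every deployment (or hook it into a git commit, as Brian’s team does) and bloated images simply can’t reach the live server.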
Build systems, not sites One big theme arising this year was that of building websites as systems, not as individual pages. Brad Frost: For me, teams making websites in 2015 shouldn’t be working on just-another-redesign redesign. People are realizing that in order to make stable, future-friendly, scalable, extensible web experiences they’re going to need to think more systematically. That means crafting deliberate and thoughtful design systems. That means establishing front-end style guides. That means killing the out-dated, siloed, assembly-line waterfall process and getting cross-disciplinary teams working together in meaningful ways. That means treating development as design. That means treating performance as design. That means taking the time out of the day to establish the big picture, rather than aimlessly crawling along quarter by quarter. Designer and developer Jina Bolton also advocates the use of style guides, and recommends making the guide a project deliverable: Consider adding on a style guide/UI library to your project as a deliverable for maintainability and thinking through all UI elements and components. Val Head agrees: “build and maintain a style guide for each project”, she wrote. On the subject of approaching a redesign, she added: A UI inventory goes a long way to helping get your head around what a design system needs in the early stages of a redesign project. So what about that old chestnut, responsive web design? Should we be making sites responsive by default? How about mobile first? Richard Rutter: Think mobile first unless you have a very good reason not to. Remember to take the client with you on this principle, otherwise it won’t work as a convincing piece of design. Trent Walton adds: The more you can test and sort of skew your perception for what is typical on the web, the better. 4k displays hooked up to 100Mbps connections can make one extremely unsympathetic. The value of testing with real devices is something Ruth John appreciates. She wrote: I still have my own small device lab at home, even though I work permanently for a well-established company (which has a LOT of devices at its disposal) – it just means I can get a good overview of how things are looking during development. And speaking of systems, Mark Norman Francis recommends the use of measuring tools to aid the design process; “[U]se analytics and make decisions from actual data”, he suggests, rather than relying totally on intuition. Tim Kadlec adds a word on performance planning: I think having a performance budget in place should now be a given on any project. We’ve proven pretty conclusively through a hundred and one case studies that performance matters. And over the last year or so, we’ve really seen a lot of great tools emerge to help track and enforce performance budgets. There’s not really a good excuse for not using one any more. It’s clear that in the four years since Ethan Marcotte’s Responsive Web Design article the diversity of screen sizes, network connection speeds and input methods has only increased. New web projects should presume visitors will be using anything from a watch up to a big screen desktop display, and from being offline, through to GPRS, 3G and fast broadband. Will it take more time to design and build for those constraints? Yes, it most likely will. If Internet Explorer is brave enough to ask to be your default browser, you can be brave enough to tell your client they need to build responsively. Working collaboratively A big part of delivering a successful website project is how we work together, both as a design team and a wider project team with the client. Val Head recommends an open line of communication: Keep conversations going. With clients, with teammates.
Talking is so important with the way we work now. A good team conversation place, like Slack, is slowly becoming invaluable for me too. Ruth John agrees: We’ve recently opened up our lines of communication by using Slack. It has transformed the way we work. We’re easily more productive and collaborative on projects, as well as making it a lot easier for us all to work remotely (including freelancers). She goes on to point out how tools can be combined to ease team communication without adding further complications: We have a private GitHub organisation (which everyone who works with us is granted access to), which not only holds all our project code but also a team wiki. This has lots of information to get you set up within the team, as well as coding guidelines and best practices and other admin info, like contact numbers/emails for the team. Small-a agile is also the theme of the day, with Mark Norman Francis suggesting an approach of “small iterations with constant feedback around individual features, not spec-it-all-first”. He also encourages you to review as you go, at each stage of the project: Always reflect on what went well and what went badly, and how you can learn from that, even if not Doing Agile™. Ultimately “best practices” should come from learning lessons (both good and bad). Richard Rutter echoes this, warning against working in isolation from the client for too long: Avoid big reveals. Your engagement with the client should be participatory. In business no one likes surprises. This experience rings true for Ruth John, who recommends involving real users in the feedback loop, not just the client: We also try and get feedback on what we’re building as soon and as often as we can with our stakeholders/clients and real users. We should also remember that our role is to serve the client’s needs, not just bill them for whatever we can. Brian Suda adds: Don’t sell clients on things they don’t need. We can spout a lot of jargon and scare clients into thinking you are a god. We can do things few can now, but you can’t rip people off because they are unknowledgeable. But do clients know what they’re getting, even when they see it? Trent Walton has an interesting take: We focus on prototypes over image-based comps at all costs, especially when meetings are involved. It’s much easier to assess a prototype, and too often with image-based comps, discussions devolve into how something might feel when actually live, or how a layout could change to fit a given viewport. Val Head also likes to get work into the browser: Sketch design ideas with any software you like, but get to the browser as soon as possible. Beyond your immediate team, Emma Jane Westby has advice for looking further afield: Invest time into building relationships within your (technical) community. You never know when you might be able to lend a hand; or benefit from someone who’s able to lend theirs. And when things don’t go according to plan, Brian Suda has the following advice: If something doesn’t work out, be professional and don’t burn bridges. It will always come back to you. The best work comes from working collaboratively, not just as a team within an agency or department, but with the client and stakeholders too. If doing your job and chucking it over the fence ever worked, it certainly doesn’t fly any more. You can work in isolation, but doing really great work requires collaboration. The business end When you’re building sites professionally, every team member has to think about the business aspects.
Estimating time, setting billing rates, and establishing deliverables are all part of the job. In 2008, Andrew Clarke gave us the Contract Killer sample contract we could use to establish a working agreement for a web design project. Richard Rutter agrees that contracts are still an essential part of business: They are there for both parties’ protection. Make sure you know what will happen if you decide you don’t want to work with the client any more (it happens) and, of course, what circumstances mean they can stop taking your services. Having a contract is one thing, but does it adequately protect both you and the client? Emma Jane Westby adds: Find a good IP lawyer/legal counsel. I routinely had an IP lawyer read all of my contracts to find loopholes I wouldn’t have noticed. I didn’t always change the contract, but at least I knew what might come back to bite me. So, you have a contract in place, and know what the project is. Brian Suda recommends keeping track of time and making sure you bill fairly for the hours the project costs you: If I go to a meeting and they are 15 minutes late, the billing clock has already started. They can’t expect me to be in the 1h meeting and not bill for the extra 15–30 minutes they wasted. It goes both ways too. You need to do your best to respect their deadlines and time frame – this is always hard to get right. As ever, it’s good business to do good business. Perhaps we can at last shed the old image of web designers being snowboarding layabouts and demonstrate to clients that we care as much about conducting professional business as they do. Time to review It’s a lot to take in. Some of these ideas and practices will be familiar, others new and yet to be evaluated. The web moves at a fast pace, and we need to be constantly reexamining our tools, techniques and working practices. The most important thing is not to blindly adopt any and all suggestions, but to carefully look at what the benefits might be and decide how they apply to your work. Could you benefit from more formalised development and deployment procedures? Would your design projects run more smoothly and have a longer maintainable life if you approached the solution as a componentised system rather than a series of pages? Are your teams structured in a way that enables the most fluid communication, or are there changes you could make? Are your billing procedures and business agreements serving you and your clients in the best way possible? The new year is a good time to look at your working practices and see what can be improved, and maybe this time next year you’ll look back and think “thank goodness we don’t work like that any more”. 2014 Drew McLellan drewmclellan 2014-12-01T00:00:00+00:00 https://24ways.org/2014/what-it-takes-to-build-a-website/ business
30 Making Sites More Responsive, Responsibly With digital projects we’re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations. A product’s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities. The popularisation of responsive web design is a great example of how we are able to shape the web’s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn’t know or care about the difference in these approaches, but we did. Responsive design was similar in this respect – momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.  We tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text. The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens. When we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more responsive to people’s needs, enhancing their usability and accessibility along the way. Being responsibly responsive Of course, when we think about being more responsive, there’s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that: A responsible responsive design equally considers the following throughout a project: Usability: The way a website’s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions. Access: The ability for users of all devices, browsers, and assistive technologies to access and understand a site’s features and content. Sustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future. Performance: The speed at which a site’s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface. Scott’s book covers these ideas in a lot more detail than I’ll be able to here (put it on your Christmas list if it’s not there already), but for now let’s think a bit more about our roles as digital creators and the power this gives us. Our choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies. The power of the web We all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. 
We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions. You’ll have seen articles with flashy titles such as “Top 5 JavaScript APIs You’ve Never Heard Of!”, which you’ll probably read, think “That’s quite cool”, yet never use in any real work. There is great potential for technologies like these to be misused, but there are also great prospects for them to be used well to enhance experiences. Let’s have a look at a few examples you may not have considered. Offline first When we make websites, many of us follow a process which involves user stories – standardised snippets of context explaining who needs what, and why. “As a student I want to pay online for my course so I don’t have to visit the college in person.” “As a retailer I want to generate unique product codes so I can manage my stock.” We very often focus heavily on what needs doing, but may not consider carefully how it will be done. As in Scott’s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense – including under different conditions. Offline first is yet another ‘first’ methodology (my personal favourite being ‘tea first’), which encourages us to develop so that connectivity itself is an enhancement – letting users continue with tasks even when they’re offline. Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK’s signal-barren East Anglian wilderness as I do), then you’ll realise that connectivity isn’t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I’m sure we’re all familiar with – the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone’s likely to be offline after all. Wouldn’t it be better if we could do something like this instead? Someone visits our conference website. On this initial run, some assets may be cached for future use: the conference schedule, the site’s CSS, photos of the speakers. When the attendee revisits the site on the day, the page shell loads up from the cache. If we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use. If we don’t have something cached already, then we can try grabbing it online. If for any reason our requests for new content fail (we’re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is cached. There are a number of ways we can make something like this, including using the application cache (AppCache) if you’re that way inclined. However, you may want to look into service workers instead. There are also some great resources on Offline First! if you’d like to find out more about this. 
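Here’s roughly what the conference-site scenario above looks like as a service worker. Treat it as a hedged sketch rather than production code – the asset paths are made up, and you’d still need to register the worker from your page with navigator.serviceWorker.register('/sw.js'):

    // sw.js – cache a shell of conference assets on first visit,
    // then serve from the cache, falling back to the network.
    var CACHE = 'conf-v1';

    self.addEventListener('install', function(event) {
      event.waitUntil(
        caches.open(CACHE).then(function(cache) {
          return cache.addAll([
            '/',              // page shell
            '/css/site.css',
            '/schedule.html', // session timetable
            '/offline.html'   // pre-cached error message
          ]);
        })
      );
    });

    self.addEventListener('fetch', function(event) {
      event.respondWith(
        caches.match(event.request).then(function(cached) {
          // Cached copy first; otherwise try the network, and if
          // that fails too (we're offline), show the offline page.
          return cached || fetch(event.request).catch(function() {
            return caches.match('/offline.html');
          });
        })
      );
    });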
Building in offline functionality isn’t necessarily about starting offline first, and it’s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we’d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott’s criteria. As I mentioned, this isn’t necessarily the kind of development choice that our clients will ask us for, but it’s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation. Even more accessible We’ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices? Again using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen. Input Ambient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It’s not new – many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels. If our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in lux units using ambient light events, in JavaScript. We may then change our presentation based on different bandings, perhaps like this:

    window.addEventListener('devicelight', function(e) {
      var lux = e.value;
      if (lux < 50) {
        // Change things for dim light
      }
      if (lux >= 50 && lux <= 10000) {
        // Change things for normal light
      }
      if (lux > 10000) {
        // Change things for bright light
      }
    });

Live demo (requires light sensor and supported browser). Soon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it’ll probably look something like this:

    @media (light-level: dim) {
      /* Change things for dim light */
    }
    @media (light-level: normal) {
      /* Change things for normal light */
    }
    @media (light-level: washed) {
      /* Change things for bright light */
    }

While we may be quick to dismiss this kind of detection as being a gimmick, it’s important to consider that apps such as Light Detector, listed on Apple’s accessibility page, provide important context around exactly this functionality. “If you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is.
You can find out whether the shades are drawn by moving the device up and down.” everywaretechnologies.com/apps/lightdetector Input can be about so much more than what we enter through keyboards. Both an ever-increasing number of available sensors and wider API support in the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the Web Speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft. Output Web technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users. When we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we’re seeing and hearing. Haptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here’s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed with pauses in milliseconds.

navigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]);

Live demo (requires supported browser). With great power… What you’ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. This idea isn’t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kinds of approaches within our projects – if we don’t root for them, they probably won’t happen. This is where our professional responsibility comes in. We won’t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we’ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let’s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations. When you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible. 2014 Sally Jenkinson sallyjenkinson 2014-12-10T00:00:00+00:00 https://24ways.org/2014/making-sites-more-responsive-responsibly/ code
31 Dealing with Emergencies in Git The stockings were hung by the chimney with care, In hopes that version control soon would be there. This summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I’ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information. Let’s start by painting an updated version of Clement Clarke Moore’s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen. Perhaps a few of these images are familiar, or maybe they’re just settings you’ve seen in a movie. It doesn’t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: ‘What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?’ It’s okay if the map isn’t perfect. As you refine your understanding of a new topic, you’ll outgrow the initial metaphors, analogies, and comparisons. With apologies to Mr. Moore, let’s give it a try. Getting Interrupted in Git When on the roof there arose such a clatter! You’re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you’ve been working on is going to need to wait while you investigate the commotion. If you’ve got even a little bit of experience working with Git, you know that you cannot simply change what you’re working on in times of emergency. If you’ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state. Up to this point, you’ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of ‘switching branches for a sec’. This isn’t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren’t finished with what you were working on. You don’t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren’t putting your book away though, you’re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down. 
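What does that refusal actually look like? Roughly this – a sketch, where the file name is made up and the exact wording varies between Git versions. Git will sometimes let you switch branches with uncommitted changes, but as soon as your edits touch files that differ on the target branch, it digs in its heels:

$ git checkout another-branch
error: Your local changes to the following files would be overwritten by checkout:
        chimney.txt
Please commit your changes or stash them before you switch branches.
Aborting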
Let’s say you’ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:

$ git stash

After running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work. With the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it’s not reindeer after all, but just your boss who thought they’d help out by writing some code on the project you’ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time. You fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and check out a local copy:

$ git fetch
$ git branch -r
$ git checkout -b helpful-boss-branch origin/helpful-boss-branch

You are now in a local copy of the branch where you are free to look around, and figure out exactly what’s going on. You sigh audibly and say, ‘Okay. Tell me what was happening when you first realised you’d gotten into a mess’ as you look through the log messages for the branch.

$ git log --oneline
$ git log

By using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof. You may also want to compare the work your boss has done to the main branch for your project. For this article, we’ll assume the main branch is named master.

$ git diff master

Looking through the commits, you may be able to see that things started out okay but then took a turn for the worse. Checking out a single commit Using commands you’re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch. Using the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let’s say the unique identifier you want to check out is 25f6d7f.

$ git checkout 25f6d7f
Note: checking out '25f6d7f'.
You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:

$ git checkout -b new_branch_name
HEAD is now at 25f6d7f... Removed first paragraph.

This is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic. Take a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you’ve temporarily disconnected from a known chain of events. In other words, you’re currently looking at the middle of a story (or branch) about what happened – and you’re not at the endpoint for this particular story. Git allows you to view the history of your repository as a timeline (technically it’s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. 
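(A quick aside on that word ‘essentially’: orphaned commits aren’t destroyed straight away. Git keeps a private diary of everywhere HEAD has been – the reflog – and until those entries expire you can usually fish a lost commit back out. A sketch, with a made-up commit hash and branch name:)

$ git reflog
d3adb33 HEAD@{0}: commit: Experimental tinsel refactor
25f6d7f HEAD@{1}: checkout: moving from helpful-boss-branch to 25f6d7f

$ git branch rescued-tinsel d3adb33

More often, though, Git will warn you before you even need the reflog, as we’ll see next.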
If you make commits while you’re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.

$ git checkout master
Warning: you are leaving 1 commit behind, not connected to any of your branches:
7a85788 Your witty holiday commit message.
If you want to keep them by creating a new branch, this may be a good time to do so with:
$ git branch new_branch_name 7a85788
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.

So, if you want to save the commits you’ve made while in a detached HEAD state, you simply need to put them on a new branch.

$ git branch saved-headless-commits 7a85788

With this trick under your belt, you can jingle around in history as much as you’d like. It’s not like sliding around on a timeline though. When you check out a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you’ll need to move back to the branch tip by checking out the branch again.

$ git checkout helpful-boss-branch

You’re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you’ve followed the instructions Git gave you. That wasn’t so scary after all, now, was it? Back to our reindeer problem. If your boss is anything like the bosses I’ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you’ll want to capture the good work using one of several different methods. Back in the living room, we’ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work. Saving just one commit About that bowl of nuts. If you’re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we’re just going to pick out one nut from the bowl. In Git terms, we’re going to cherry-pick a commit and save it to another branch. First, check out the main branch for your development work. From this branch, create a new branch where you can copy the changes into.

$ git checkout master
$ git checkout -b rescue-the-boss

From your boss’s branch, helpful-boss-branch, locate the commit you want to keep.

$ git log --oneline helpful-boss-branch

Let’s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.

$ git cherry-pick e08740b

If you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss’s branch. At this point you might need to make a few additional fixes to help your boss out. (You’re angling for a bonus out of all this. Go the extra mile.) Once you’ve made your additional changes, you’ll need to add that work to the branch as well.

$ git add [filename(s)]
$ git commit -m "Building on boss's work to improve feature X."

Go ahead and test everything, and make sure it’s perfect. You don’t want to introduce your own mistakes during the rescue mission! Uploading the fixed branch The next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping them fix their branch. 
$ git push -u origin rescue-the-boss

Cleaning up and getting back to work With your boss rescued, and your bonus secured, you can now delete the local temporary branches.

$ git branch --delete rescue-the-boss
$ git branch --delete helpful-boss-branch

And settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.

$ git checkout waiting-for-st-nicholas
$ git stash pop

Your working directory has been returned to exactly the same state you were in at the beginning of the article. Having fun with analogies I’ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it’s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it’s like trying to find that one broken light on the string of Christmas lights). It doesn’t matter if the analogy isn’t perfect. It’s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner’s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works. Or, if you’re like me, you can choose to never grow old and just keep mucking about in the analogies. I’d argue it’s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway. 2014 Emma Jane Westby emmajanewestby 2014-12-02T00:00:00+00:00 https://24ways.org/2014/dealing-with-emergencies-in-git/ code
32 Cohesive UX With Yosemite, Apple users can answer iPhone calls on their MacBooks. This is weird. And yet it’s representative of a greater trend toward cohesion. Shortly after upgrading to Yosemite, a call came in on my iPhone and my MacBook “rang” in parallel. And I was all, like, “Wut?” This was a new feature in Yosemite, and honestly it was a little bizarre at first. Apple promotional image showing a phone call ringing simultaneously on multiple devices. However, I had just spoken at a conference on the very topic you’re reading about now, and therefore I appreciated the underlying concept: the cohesion of user experience, the cohesion of screens. This is just one of many examples I’ve encountered since beginning to speak about this topic months ago. But before we get ahead of ourselves, let’s look back at the past few years, specifically the role of responsive web design. RWD != cohesive experience I needn’t expound on the virtues of responsive web design (RWD). You’ve likely already encountered more than a career’s worth on the topic. This is a good thing. Count me in as one of its biggest fans. However, if we are to sing the praises of RWD, we must also acknowledge its shortcomings. One of these is that RWD ends where the browser ends. For all its goodness, RWD really has no bearing on native apps or any other experiences that take place outside the browser. This makes it challenging, therefore, to create cohesion for multi-screen users if RWD is the only response to “let’s make it work everywhere.” We need something that incorporates the spirit of RWD while unifying all touchpoints for the entire user experience—single device or several devices, in browser or sans browser, native app or otherwise. I call this cohesive UX, and I believe it’s the next era of successful user experiences. Toward a unified whole Simply put, the goal of cohesive UX is to deliver a consistent, unified user experience regardless of where the experience begins, continues, and ends. Two facets are vital to cohesive UX: Function and form Data symmetry Let’s examine each of these. Function AND form Function over form, of course. Right? Not so fast, kiddo. Consider Bruce Lawson’s dad. After receiving an Android phone for Christmas and thumbing through his favorite sites, he was puzzled why some looked different from their counterparts on the desktop. “When a site looked radically different,” Bruce observed, “he’d check the URL bar to ensure that he’d typed in the right address. In short, he found RWD to be confusing and it meant he didn’t trust the site.” A lack of cohesive form led to a jarring experience for Bruce’s dad. Now, if I appear to be suggesting websites must look the same in every browser—you already learned they needn’t—know that I recognize the importance of context, especially in regards to mobile. I made a case for this more than seven years ago. Rather, cohesive UX suggests that form deserves the same respect as function when crafting user experiences that span multiple screens or devices. And users are increasingly comfortable traversing media. For example, more than 40% of adults in the U.S. owning more than one device start an activity on one screen and finish it on another, according to a study commissioned by Facebook. I suspect that percentage will only increase in 2015, and I suspect the tech-affluent readers of 24 ways are among the 40%. There are countless examples of cohesive form and function. 
Consider Gmail, which displays email conversations visually as a stack that can be expanded and collapsed like the bellows of an accordion. This visual metaphor has been consistent in virtually any instance of Gmail—website or app—since at least 2007 when I captured this screenshot on my Nokia 6680: Screenshot captured while authoring Mobile Web Design (2007). Back then we didn’t call this an app, but rather a ‘smart client’. When the holistic experience is cohesive as it is with Gmail, users’ mental models and even muscle memory are preserved.1 Functionality and aesthetics align with the expectations users have for how things should function and what they should look like. In other words, the experience is roughly the same across screens. But don’t be ridiculous, peoples. Note that I said “roughly.” It’s important to avoid mindless replication of aesthetics and functionality for the sake of cohesion. Again, the goal is a unified whole, not a carbon copy. Affordances and concessions should be made as context and intuition require. For example, while Facebook users are accustomed to top-aligned navigation in the browser, they encounter bottom-aligned navigation in the iOS app as justified by user testing: The iOS app model has held up despite many attempts to better it: http://t.co/rSMSAqeh9m pic.twitter.com/mBp36lAEgc— Luke Wroblewski (@lukew) December 10, 2014 Despite the (rather minor) lack of consistency in navigation placement, other elements such as icons, labels, and color theme work in tandem to produce a unified, holistic whole. Data symmetry Data symmetry involves the repetition, continuity, or synchronicity of data across screens, devices, and platforms. As regards cohesive UX, data includes not just the material (such as an article you’re writing on Medium) but also the actions that can be performed on or with that material (such as Medium’s authoring tools). That is to say, “sync verbs, not just nouns” (Josh Clark). In my estimation, Amazon is an archetype of data symmetry, as is Rdio. When logged in, data is shared across virtually any device of any kind, irrespective of using a browser or native app. Add a product to your Amazon cart from your phone during the morning commute, and finish the transaction at work on your laptop. Easy peasy. Amazon’s aesthetics are crazy cohesive, to boot: Amazon web (left) and native app (right). With Rdio, not only are playlists and listening history synced across screens as you would expect, but the cohesion goes even further. Rdio’s remote control feature allows you to control music playing on one device using another device, all in real time. Rdio’s remote control feature, as viewed on my MacBook while music plays on my iMac. At my office I often work from my couch using my MacBook, but my speakers are connected to my iMac. When signed in to Rdio on both devices, my MacBook serves as proxy for controlling Rdio on my iMac, much the same as any Yosemite-enabled device can serve as proxy for an incoming iPhone call. Me, in my office. Note the iMac and speakers at far right. This is a brilliant example of cohesive design, and it’s executed entirely via the cloud. Things to consider Consider the following when crafting cohesive experiences: Inventory the elements that comprise your product experience, and cohesify them.2 Consider things such as copy, tone, typography, iconography, imagery, flow, placement, brand identification, account data, session data, user preferences, and so on. 
Then, create cohesion among these elements to the greatest extent possible, while adapting to context as needed. Store session data in the cloud rather than locally. For example, avoid using browser cookies to store shopping cart data, as cookies are specific to a single browser on a single device. Instead, store this data in the cloud so it can be accessed from other devices, as well as beyond the browser. Consider using web views when developing your native app. “You’re already using web apps in native wrappers without even noticing it,” Lukas Mathis contends. “The fact that nobody even notices, the fact that this isn’t a story, shows that, when it comes to user experience, web vs. native doesn’t matter anymore.” Web views essentially allow you to display HTML content inside a native wrapper. This can reduce the time and effort needed to make the overall experience cohesive. So whereas the navigation bar may be rendered by the app, for example, the remaining page display may be rendered via the web. There’s readily accessible documentation for using web views in C++, iOS, Android, and so forth. Nature is calling Returning to the example of Yosemite and synchronized phone calls, is it really that bizarre in light of cohesive UX? Perhaps at first. But I suspect that, over time, Yosemite’s cohesiveness — and the cohesiveness of other examples like the ones we’ve discussed here — will become not only more natural but more commonplace, too. 1 I browse Flipboard on my iPad nearly every morning as part of my breakfast routine. Swiping horizontally advances to the next page. Countless times I’ve done the same gesture in Flipboard for iPhone only to have it do nothing. This is because the gesture for advancing is vertical on phones. I’m so conditioned to the horizontal swipe that I often fail to make the switch to vertical swipe, and apparently others suffer from the same muscle memory, too. 2 Cohesify isn’t a thing. But chances are you understood what I meant. Yay neologism! 2014 Cameron Moll cameronmoll 2014-12-24T00:00:00+00:00 https://24ways.org/2014/cohesive-ux/ ux
34 Collaborative Responsive Design Workflows Much has been written about workflow and designer-developer collaboration in web design, but many teams still struggle with this issue; either with how to adapt their internal workflow, or how to communicate the need for best practices like mobile first and progressive enhancement to their teams and clients. Christmas seems like a good time to have another look at what doesn’t work between us and how we can improve matters. Why is it so difficult? We’re still beginning to understand responsive design workflows, acknowledging the need to move away from static design tools and towards best practices in development. It’s not that we don’t want to change – so why is it so difficult? Changing the way we do something that has become routine is always problematic, even with small things, and the changes today’s web environment requires from web design and development teams are anything but small. Although developers also have a host of new skills to learn and things to consider, designers are probably the ones pushed furthest out of their comfort zones: as well as graphic design, a web designer today also needs an understanding of interaction design and ergonomics, because more and more websites are becoming tools rather than pages meant to be read like a book or magazine. In addition to that there are thousands of different devices and screen sizes on the market today that layout and interactions need to work on. These aspects make it impossible to design in a static design tool, so beyond having to learn about new aspects of design, the designer has to either learn how to code or learn to work with a responsive design tool. Why do it That alone is enough to leave anyone overwhelmed, as learning a new skill takes time and slows you down in a project – and on most projects time is in short supply. Yet we have to make time or fall behind in the industry as others pitch better, interactive designs. For an efficient workflow, both designers and developers must familiarise themselves with new tools and techniques. A designer has to be able to play with ideas, make small adjustments here and there, look at the result, go back to the settings and make further adjustments, and so on. You can only realistically do that if you are able to play with all the elements of a design, including interactivity, accessibility and responsiveness. Figuring out the right breakpoints in a layout is one of the foremost reasons for designing in a responsive design tool. Even if you create layouts for three viewport sizes (i.e. smartphone, tablet and the most common desktop size), you’d only cover around 30% of visitors and you might miss problems like line breaks and padding at other viewport sizes. Another advantage is consistency. In static design tools changes will not be applied across all your other layouts. A developer referring back to last week’s comps might work with outdated metrics. Furthermore, you cannot easily test what impact changes might have on previously designed areas. In a dynamic design tool such changes will be applied to the entire design and allow you to test things in site areas you had already finished. No static design tool allows you to do this, and having somebody else produce a mockup from your static designs or wireframes will duplicate work and is inefficient. How to do it When working in a responsive design tool rather than in the browser, there is still the question of how and when to communicate with the developer. 
I have found that working with Sass in combination with a visual style guide is very efficient, but it does need careful planning: fundamental metrics for padding, margins and font sizes, but also design elements like sliders, forms, tabs, buttons and navigational elements, should be defined at the beginning of a project and used consistently across the site. Working with a grid can help you develop a consistent design language across your site. Create a visual style guide that shows what the elements look like and how they behave across different screen sizes – and when interacted with. Put all metrics on paddings, margins, breakpoints, widths, colours and so on in a text document, ideally with names that your developer can use as Sass variables in the CSS. For example: $padding-default-vertical: 1.5em; Developers, too, need an efficient workflow to keep code maintainable and speed up the time needed for more complex interactions with an eye on accessibility and performance. CSS preprocessors like Sass allow you to work with variables and mixins for default rules, as well as style sheet partials for different site areas or design elements. Create your own boilerplate to use for your projects and then update your variables with the information from your designer for each individual project. How to get buy-in One obstacle when implementing responsive design, accessibility and content strategy is the logistics of learning new skills and iterating on your workflow. Another is how to sell it. You might expect everyone on a project (including the client) to want to design and develop the best website possible: ultimately, a great site will lead to more conversions. However, we often hear that people find it difficult to convince their teammates, bosses or clients to implement best practices. Why is that? Well, I believe a lot of it is down to how we sell it. You will have experienced this yourself: some people you trust to know what they are talking about, and others you don’t. Think about why you trust that first person but don’t buy what the other one is telling you. It is likely because person A has a self-assured, calm and assertive demeanour, while person B seems insecure and apologetic. To sell our ideas, we need to become person A! For a timid designer or developer suffering from imposter syndrome (like many of us do in this industry) that is a difficult task. So how can we become more confident in selling our expertise? Write We need to become experts. And I mean not just in writing great code or coming up with beautiful designs but at explaining why we’re doing what we’re doing. Why do you code this way or that? Why is this the best layout? Why does a website have to be accessible and responsive? Write about it. Putting your thoughts down on paper or screen is a really efficient way of getting your head around a topic and learning to make a case for something. You may even find that you come up with new ideas as you are writing, so you’ll become a better designer or developer along the way. Talk Then, talk about it. Start out in front of your team, then do a lightning talk at a web event near you, then a longer talk or workshop. Having to talk about a topic is going to help you put into spoken words the argument that you’ve previously put together in writing. Writing comes more easily when you’re starting out but we use a different register when writing than talking and you need to learn how to speak your case. 
Do the talk a couple of times and after each talk make adjustments where you found it didn’t work well. By this time, you are more than ready to make your case to the client. In fact, you’ve been ready since that first talk in front of your colleagues ;) Pitch Pitches used to be based on a presentation of static layouts for three to five typical pages and three different designs. But if we want to sell interactivity, structure, usability, accessibility and responsiveness, we need to demonstrate these things and I believe that it can only do us good. I have sat through a few pitches in the client’s chair, and static layouts are always sort of dull. What makes a website a website is the fact that I can interact with it, and smooth interactions or animations add that extra sparkle. I can’t claim personal experience for this one but I’d be bold and go for only one design. One demo page matching the client’s corporate design but not any specific page for the final site. Include design elements like navigation, photography, typefaces, article layout (with real content), sliders, tabs, accordions, buttons, forms, tables (yes, tables) – everything you would include in a style tiles document, only interactive. Demonstrate how the elements behave when clicked, hovered and touched, and how they change across different screen sizes. You may even want to demonstrate accessibility features like tabbed navigation and screen reader use. Obviously, there are many approaches that will work in different situations, but don’t give up on finding a process that works for you and that ultimately allows you to build delightful, accessible, responsive user experiences for the web. Make time to try new tools and techniques and don’t just work on them on the side – start using them on an actual project. It is only when we use a tool or process in the real world that we become true experts. Remember your driving lessons: once the instructor had explained how to operate the car, you were sent to practise driving on the road in actual traffic! 2014 Sibylle Weber sibylleweber 2014-12-07T00:00:00+00:00 https://24ways.org/2014/collaborative-responsive-design-workflows/ process
37 JavaScript Modules the ES6 Way JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. This is something nearly all other languages come with out of the box, whether it be Ruby’s require, Python’s import, or any other language you’re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. To be clear: that doesn’t mean the new module system in the upcoming version of JavaScript won’t be useful to you if you’re building smaller websites rather than the next Instagram. Thankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won’t be landing in browsers for a while yet – but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I’d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It’s much simpler than you might think. What is ES6? ECMAScript is a scripting language that is standardised by an organisation called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It’s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn’t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet. The ES6 module spec The ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I’ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it’s what I’ll focus on more. Module syntax The key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we’ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it. 
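To make that concrete before we dive into the syntax, here’s the carousel example sketched as two small files. A sketch only: the file names and the named $ export are hypothetical (jQuery doesn’t really ship as an ES6 module), but the shape of the dependency/export relationship is the point.

// carousel.js – imports its dependency, exports one function
import { $ } from 'jquery'; // hypothetical named export, for illustration

export function setupCarousel(selector) {
  // anything that imports this module gets access to setupCarousel
  $(selector).addClass('carousel');
}

// app.js – depends on (and hence imports from) the carousel module
import { setupCarousel } from 'carousel';

setupCarousel('.slides');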
Another important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There’s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous. Named exports Modules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword:

export function double(x) {
  return x + x;
};

You can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces.

var double = function(x) {
  return x + x;
}

export { double };

A module can then import the double function like so:

import { double } from 'mymodule';

double(2); // 4

Again, curly braces are required around the variable you would like to import. It’s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension. The reason for those extra braces is that this syntax lets you export multiple variables:

var double = function(x) {
  return x + x;
}

var square = function(x) {
  return x * x;
}

export { double, square }

I personally prefer this syntax over the export function …, but only because it makes it much clearer to me what the module exports. Typically I will have my export {…} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting. A file importing both double and square can do so in just the way you’d expect:

import { double, square } from 'mymodule';

double(2); // 4
square(3); // 9

With this approach you can’t easily import an entire module and all its methods. This is by design – it’s much better and you’re encouraged to import just the functions you need to use. Default exports Along with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so:

export default function(x) {
  return x + x;
}

And that can be imported:

import double from 'mymodule';

double(2); // 4

This time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you’d like. Default exports are not named, so you can import them as anything you like:

import christmas from 'mymodule';

christmas(2); // 4

The above is entirely valid. Although it’s not something that is used too often, a module can have both named exports and a default export, if you wish. One of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that’s fine, and you shouldn’t change that to meet the preferences of those designing the spec. Programmatic API Along with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It’s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user’s browser didn’t support a feature your app relied on. 
An example of doing this is:

if (someFeatureNotSupported) {
  System.import('my-polyfill').then(function(myPolyFill) {
    // use the module from here
  });
}

System.import will return a promise, which, if you’re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete. This programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task. How to use it today It’s all well and good having this new syntax, but right now it won’t work in any browser – and it’s not likely to for a long time. Maybe in next year’s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we’re stuck with a bit of extra work. ES6 module transpiler One solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it – not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp. The advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that set up already. If you don’t have any system in place for handling modules in the browser, using the transpiler doesn’t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn’t do anything to help you get that code running in the browser, but if you have that part sorted it’s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler. SystemJS Another solution is SystemJS. It’s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser). To load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn’t have an easy-to-find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 compiler. 
It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you. All you need to do to get SystemJS running is to add a <script> element to load SystemJS into your webpage. It will then automatically load the ES6 module loader and Traceur files when it needs them. In your HTML you then need to use System.import to load in your module:

<script>
  System.import('./app');
</script>

When you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn’t make a lot of requests on the live site. In development though, it makes for a really nice workflow. When working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application’s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file. SystemJS also provides a workflow for bundling your application together into one file. Conclusion ES6 modules may be at least six months to a year away (if not more) but that doesn’t mean they can’t be used today. Although there is an overhead to using them now – with the work required to set up SystemJS, the module transpiler, or another solution – that doesn’t mean it’s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You’ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution. If you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road. Further reading If you’d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources:

ECMAScript 6 modules: the final syntax by Axel Rauschmayer
Practical Workflows for ES6 Modules by Guy Bedford
ECMAScript 6 resources for the curious JavaScripter by Addy Osmani
Tracking ES6 support by Addy Osmani
ES6 Tools List by Addy Osmani
Using Grunt and the ES6 Module Transpiler by Thomas Boyt
JavaScript Modules and Dependencies with jspm by myself
Using ES6 Modules Today by Guy Bedford

2014 Jack Franklin jackfranklin 2014-12-03T00:00:00+00:00 https://24ways.org/2014/javascript-modules-the-es6-way/ code
40 Don’t Push Through the Pain In 2004, I lost my web career. In a single day, it was gone. I was in too much pain to use a keyboard, a Wacom tablet (I couldn’t even click the pen), or a trackball. Switching my mouse to use my left (non-dominant) hand only helped a bit; then that hand went, too. I tried all the easy-to-find equipment out there, except for expensive gizmos with foot pedals. I had tingling in my fingers—which, when I was away from the computer, would rhythmically move as if some other being controlled them. I worried about Parkinson’s because the movements were so dramatic. Pen on paper was painful. Finally, I discovered one day that I couldn’t even turn a doorknob. The only highlight was that I couldn’t dust, scrub, or vacuum. We were forced to hire someone to come in once a week for an hour to whip through the house. You can imagine my disappointment. My injuries had gradually slithered into my life without notice. I’d occasionally have sore elbows, or my wrist might ache for a day, or my shoulders feel tight. But nothing to keyboard home about. That’s the critical bit of news. One day, you’re pretty fine. The next day, you don’t have your job—or any job that requires the use of your hands and wrists. I had to walk away from the computer for over four months—and partially for several months more. That’s right: no income. If I hadn’t found a gifted massage therapist, the right book of stretches, the equipment I should have been using all along, and learned how to pay attention to my body—even just a little bit more—I quite possibly wouldn’t be writing this article today. I wouldn’t be writing anything, anywhere. Most of us have heard of (and even claimed to have read all of) Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience, who describes the state of flow—the place our minds go when we are fully engaged and in our element. This lovely state of highly focused activity is deeply satisfying, often creative, and quite familiar to many of us on the web who just can’t quit until the copy sings or the code is untangled or we get our highest score yet in Angry Birds. Our minds may enter that flow, but too often as our brains take flight, all else recedes. And we leave something very important behind. Our bodies. My body wasn’t made to make the same minute movements thousands of times a day, most days of the year, for decades, and neither was yours. The wear and tear sneaks up on you, especially if you’re the obsessive perfectionist that we all pretend not to be. Oh? You’re not obsessed? I wasn’t like this all the time, but I remember sitting across from my husband, eating dinner, and I didn’t hear a word he said. I’d left my brain upstairs in my office, where it was wrestling in a death match with the box model or, God help us all, IE 5.2. I was a writer, too, and I was having my first inkling that I was a content strategist. Work was exciting. I could sit up late, in the flow, fingers flying at warp speed. I could sit until those wretched birds outside mocked me with their damn, cheerful “Hurray, it’s morning!” songs. Suddenly, while, say, washing dishes, the one magical phrase that captured the essence of a voice or idea would pop up, and I would have mowed down small animals and toddlers to get to my computer and hammer out that website or article, to capture that thought before it escaped. Note my use of the word hammer. Sound at all familiar? But where was my body during my work? 
Jaw jutting forward to see the screen, feet oddly positioned—and then left in place like chunks of marble—back unsupported, fingers pounding the keys, wrists and arms permanently twisted in unnatural angles that we thought were natural. And clicking. Clicking, clicking, clicking that mouse. Thumbing tiny keyboards on phones. A lethal little gesture for tiny little tendons. Though I was fine from, say, 1997 to 2004, by the end of 2004 this behavior culminated in disaster. I had repetitive stress injuries, aka repetitive motion injuries. As the Apple site says, “A brief exposure to these conditions would not cause harm. But a prolonged exposure may, in some people, result in reduced ability to function.” I’ll say. I frantically turned to people on lists and forums. “Try a track ball.” Already did that. “Try a tablet.” Worse. One person wrote, “I still come here once in a while and can type a couple sentences, but I’ve permanently got thoracic outlet syndrome and I’ll never work again.” Oh, beauteous web, oh, long-distance friends, farewell. The Wrist Bone’s Connected to the Brain Bone That variation on the old song tells part of the story. Most people (and many of their physicians) believe that tingling fingers and aching wrists MUST be carpal tunnel syndrome. Nope. If your neck juts forward, it tenses and stays tense the entire time you work in that position. Remember how your muscles felt after holding a landline phone with your neck tilted to one side for a long client meeting? Regrettable. Tensing your shoulders because your chair’s not designed properly puts you at risk for thoracic outlet syndrome, a career-killer if ever there was one. The nerves and tendons in your neck and shoulder refer down your arms, and muscles swell around nerves, causing pain and dysfunction. Your elbows have a tendon that is especially vulnerable to repetitive movements (think tennis elbow). Your wrists are performing something akin to a circus act with one thousand shows a day. So, all the fine tendons and ligaments in your fingers have problems that may not start at your wrists at all. Though some people truly do have carpal tunnel syndrome, my finger and wrist problems weren’t solved by heavily massaging my fingers (though that was helpful, too) or my wrists. They were fixed by work on my neck, upper back, shoulders, arms, and elbows. This explains why many people have surgery for carpal tunnel syndrome and just months later say, “What?! How can I possibly have it again? I had an operation!” Well, fellow buckaroo, you may never have had carpal tunnel syndrome. You may have had—or perhaps will have—one long disaster area from your neck to your fingertips. How to Crawl Back Before trying extreme measures, you may be able to function again even if you feel hopeless. I managed to heal, and so have others, but I’ll always be at risk. As Jen Simmons, of The Web Ahead podcast and other projects told me, “It took a long time to injure myself. It took a long time to get back to where I was. My right arm between my elbow and wrist would start aching intermittently. Eventually, my arm even ached at night. I started each day with yesterday’s pain.” Simple measures, used consistently, helped her back. 1. Massage therapy I don’t remember what the rest of the world is like, but in Portland, Oregon, we have more than one massage therapy college. (Of course we do.) I saw a former teacher at the most respected school. This is not your “It was all so soothing. Why, I fell asleep!” massage. 
This is “Holy crap, he’s grinding his elbow into my armpit!“ massage therapy, with the emphasis on therapy. I owe him everything. Make sure you have someone who really knows what they’re doing. Get many referrals. Try a question, “Does my psoas muscle affect my back?” If they can’t answer it, flee. Regularly see the one you choose and after a while, depending on how injured you are, you may be able to taper off. 2. Change your equipment You may need to be hands-on with several pieces of equipment before you find the ones that don’t cause more pain. Many companies have restocking fees, charges to ship the equipment you want to return, and other retail atrocities. Always be sure to ask what the return policies are at any company before purchasing. Mice You may have more success than I did with equipment such as the Wacom tablet. Mine came with a pen, and it hurt to repetitively click it. Trackballs are another option but, for many, they are better at prevention than recovery. But let’s get to the really effective stuff. One of the biggest sources of pain is using your mouse. One major reason is that your hand and wrist are in a perpetually unnatural position and you’re also moving your arm quite a bit. Each time you move the mouse, it is placing stress on your neck, shoulders and arms, because you need to lift them slightly in order to move the mouse and you need to angle your wrist. You may also be too injured to use the trackpad all the time, and this mouse, the vertical mouse is a dandy preventative measure, too. Shaking up your patterns is a wise move. I have long fingers, not especially thin, yet the small size works best for me. (They have larger choices available.) What?! A sideways mouse? Yep. All the weight of your hand will be resting on it in the handshake position. Your forearms aren’t constantly twisting over hill and dale. You aren’t using any muscles in your wrist or hand. They are relaxing. You’ll adapt in a day, and oh, oh, what a relief it is. Keyboards I really liked doing business with the people at Kinesis-Ergo. (I’m not affiliated with them in any way.) They have the vertical mouse and a number of keyboards. The one that felt the most natural to me, and, once again, it only takes a day to adapt, is the Freestyle2 for the Mac. They have several options. I kept the keyboard halves attached to each other at first, and then spread them apart a little more. I recommend choosing one that slants and can separate. You can adjust the angle. For a little extra, they’ll make sure it’s all set up and ready to go for you. I’m guessing that some Googling will find you similar equipment, wherever you live. Warning: if you use the ergonomic keyboards, you may have fewer USB ports. The laptop will be too far away to see unless you find a satisfactory setup using a stand. This is the perfect excuse for purchasing a humongous display. You may not look cool while jetting coast to coast in your skinny jeans and what appears to be the old-time orthopedic shoe version of computing gear. But once you have rested and used many of these suggestions consistently, you may be able to use your laptop or other device in all its lovely sleekness during the trip. Other doohickies The Kinesis site and The Human Solution have a wide selection of ergonomic products: standing desks, ergonomically correct chairs, and, yes, even things with foot pedals. Explore! 3. Stop clicking, at least for a while Use keyboard shortcuts, but use them slowly. This is not the time to show off your skillz. 
You’ll be sort of like a recovering alcoholic, in that you’ll be a recovering repetitive stress survivor for the rest of your life, once you really injure yourself. Always be vigilant. There’s also a bit of software sold by The Human Solution and other places, and it was my salvation. It’s called the McNib for Macs, and the Nib for PCs. (I’ve only used the McNib.) It’s for click-free mousing. I found it tricky to use when writing markup and code, but you may become quite adept at it. A little rectangle pops up on your screen; you mouse over it and choose, let’s say, “Double-click.” Until you change that choice, if you mouse over a link or anything else, it will double-click it for you. All you do is glide your mouse around. Awkward for a day or two, but you’ll pick it up quickly. Though you can use it all day for work, even if you just use this for browsing LOLcats or Gary Vaynerchuk’s YouTube videos, it will help you by giving your fingers a sweet break. But here’s the sad news. The developer who invented this died a few years ago. (Yes, I used to speak to him on the phone.) While it is for sale, it isn’t compatible with Mac OS X Lion or anything subsequent. PowerPC strikes again. His site is still up. Demos for use with older software can be downloaded free at his old site, or at The Human Solution. Perhaps an enterprising developer can invent something that would provide this help, without interfering with patents. Rumor has it among ergonomic retailers (yes, I’m like a police dog sniffing my way to a criminal once I head down a trail) that his company was purchased by a company in China, with no update in sight. 4. Use built-in features That little microphone icon that comes up alongside the keyboard on your iPhone allows you to speak your message instead of incessantly thumbing it. I believe it works in any program that uses the keyboard. It’s not Siri. She’s for other things, like having a personal relationship with an inanimate object. Apple even has a good section on ergonomics. You think I’m intense about this subject? To ward off repetitive stress, Apple doesn’t want you to use oral contraceptives, alcohol, or tobacco, to which I say, “Have as much sex, bacon, and chocolate as possible to make up for it.” Apple’s info even has illustrations of things like a faucet dripping into what is labeled a bucket full of “TRAUMA.” Sounds like upgrading to Yosemite, but I digress. 5. Take breaks If it’s a game or other non-essential activity, take a break for a month. Fine, now that I’ve called games non-essential, I suppose you’ll all unfollow me on Twitter. 6. Whether you are sore or not, do stretches throughout the day This is a big one. Really big. The best book on the subject of repetitive stress injuries is Conquering Carpal Tunnel Syndrome and Other Repetitive Strain Injuries: A Self-Care Program by Sharon J. Butler. Don’t worry, most of it is illustrations. Pretend it’s a graphic novel. I’m notorious for never reading instructions, and who on earth reads the introduction of a book, unless they wrote it? I wrote a book a long time ago, and I bet my house, husband, and life savings that my own parents never read the intro. Well, I did read the intro to this book, and you should, too. Stretching correctly, in a way that doesn’t further hurt you, that keeps you flexible if you aren’t injured, that actually heals you, calls for precision. Read and you’ll see. The key is to stretch just until you start to feel the stretch, even if that’s merely a tiny movement.
Don’t force anything past that point. Kindly nurse yourself back to health, or nurture your still-healthy body by stretching. Over the following days, weeks, months, you’ll be moving well past that initial stretch point. The book is brimming with examples. You only have to pick a few stretches if this is too much to handle. Do it every single day. I can tell you some of the best ones for me, but it depends on the person. You’ll also discover in Butler’s book that areas that you think are the problem are sometimes actually adjacent to the muscle or tendon that is the source of the problem. Add a stretch or two for that area, too. But please follow the instructions in the introduction. If you overdo it, or perform some other crazy-ass hijinks, as I would be tempted to do, I am not responsible for your outcome. I give you fair warning that I am not a healthcare provider. I’m just telling you as a friend, an untrained one, at that, who has been through this experience. 7. Follow good habits Develop habits like drinking lots of water (which helps with lactic acid buildup in muscles), looking away from the computer for twenty seconds every twenty to thirty minutes, eating right, and probably doing everything else your mother told you to do. Maybe this is a good time to bring up flossing your teeth, and going outside to play instead of watching TV. As your mom would say, “It’s a beautiful day outside, what are you kids doing in here?” 8. Speak instead of writing, if you can Amber Simmons, who is very smart and funny, once tweeted in front of the whole world that “@carywood is a Skype whore.” I was always asking people on Twitter if we could Skype instead of using iChat or exchanging emails. (I prefer the audio version so I don’t have to, you know, do something drastic like comb my hair.) Keyboarding is tough on hands, whether you notice it or not at the time, and when doing rapid-fire back-and-forthing with people, you tend to speed up your typing and not take any breaks. This is a hand-killer. Voice chats have made such a difference for me that I am still a rabid Skype whore. Wait, did I say that out loud? Speak your text or emails, using Dragon Dictate or other software. In about 2005, accessibility and user experience design expert Derek Featherstone, in Canada, and I, at home, chatted over the internet, each of us using a different voice-to-text program. The programs made so many mistakes communicating with each other that we began that sort of endless, tearful laughing that makes you think someone may need to call an ambulance. This type of software has improved quite a bit over the years, thank goodness. Lack of accessibility of any kind isn’t funny to Derek or me or to anyone who can’t use the web without pain. 9. Watch your position For example, if you lift up your arms to use the computer, or stare down at your laptop, you’ll need to rearrange your equipment. The internet has a lot of information about ideal ergonomic work areas. Please use a keyboard drawer. Be sure to measure the height carefully so that even a tented keyboard, like the one I recommend, will fit. I also recommend getting the version of the Freestyle with palm supports. Just these two measures did much to help both Jen Simmons and me. 10. If you need to take anti-inflammatories, stop working If you are all drugged up on ibuprofen, and pounding and clicking like mad, your body will not know when you are tired or injuring yourself. I don’t recommend taking these while using your computing devices.
Perhaps just take them at night, though I’m not a fan of that category of medications. Check with your healthcare provider. At least ibuprofen is an anti-inflammatory, which may help you. In contrast, acetaminophen (paracetamol) only makes your body think it’s not in pain. Ice is great, as is switching back and forth between ice and heat. But again, if you need ice and ibuprofen, you really need to take a major break. 11. Don’t forget the rest of your body I’ve zeroed in on my personal area of knowledge and experience, but you may be setting yourself up for problems in other areas of your body. There’s what is known to bad writers as “a veritable cornucopia” of information on the web about how to help the rest of your body. A wee bit of research on the web and you’ll discover simple exercises and stretches for the rest of your potentially catastrophic areas: your upper back, your lower back, your legs, ankles, and eyes. Do gentle stretches, three or four times a day, rather than powering your way through. Ease into new equipment such as standing desks. Stretch those newly challenged areas until your body adapts. Pay attention to your body, even though I too often forget mine. 12. Remember the children Kids are using equipment to play highly addictive games or to explore amazing software, and if these call for repetitive motions, children are being set up for future injuries. They’ll grab hold of something, as parents out there know, and play it 3,742 times. That afternoon. Perhaps by the time they are adults, everything will just be holograms and mind-reading, but adult fingers and hands are used for most things in life, not just computing devices and phones with keyboards sized for baby chipmunks. I’ll be watching you Quickly now, while I (possibly) have your attention. Don’t move a muscle. Is your neck tense? Are you unconsciously lifting your shoulders up? How long since you stopped staring at the screen? How bright is your screen? Are you slumping (c’mon now, ‘fess up) and inviting sciatica problems? Do you have to turn your hands at an angle relative to your wrist in order to type? Uh-oh. That’s a bad one. Your hands, wrists, and forearms should be one straight line while keyboarding. Future you is begging you to change your ways. Don’t let your #ThrowbackThursday in 2020 say, “Here’s a photo from when I used to be able to do so many wonderful things that I can’t do now.” And, whatever you do, don’t try for even a nanosecond to push through the pain, or the next thing you know, you’ll be an unpaid extra in The Expendables 7. 2014 Carolyn Wood carolynwood 2014-12-06T00:00:00+00:00 https://24ways.org/2014/dont-push-through-the-pain/ business
41 What Is Vagrant and Why Should I Care? If you run a web server, a database server and your scripting language(s) of choice on your main machine and you have not yet switched to using virtualisation in your workflow, then this essay may be of some value to you. I know you exist because I bump into you daily: freelancers coming in to work on our projects; internet friends complaining about reinstalling a development environment because of an operating system upgrade; fellow agency owners who struggle to brief external help when getting a particular project up and running; or even hardcore back-end developers who “don’t do ops” and prefer to run their development stack of choice locally. There are many perfectly reasonable arguments as to why you may not have already made the switch, from being simply too busy, all the way through to a distrust of the new. I’ll admit that there are many new technologies or workflows that I hear of daily and instantly disregard because I have tool overload, that feeling I get when I hear about a new shiny thing and think “Well, what I do now works – I’ll leave it for others to play with.” If that’s you when it comes to Vagrant then I hope you’ll hear me out. The business case is compelling enough for you to make that switch; as a bonus it’s also really easy to get going. In this article we’ll start off by going through the high level, the tools available and how it all fits together. Then we’ll touch on the justification for making the switch, providing a few use cases that might resonate with you. Finally, I’ll provide a very simple example that you can follow to get yourself up and running. What? You already know what virtualisation is. You use the ability to run an operating system within another operating system every day. Whether that’s Parallels or VMware on your laptop or similar server-based tools that drive the ‘cloud’, squeezing lots of machines onto physical hardware and making it really easy to copy servers and even clusters of servers from one place to another. It’s an amazing technology which has changed the face of the internet over the past fifteen years. Simply put, Vagrant makes it really easy to work with virtual machines. According to the Vagrant docs: If you’re a designer, Vagrant will automatically set everything up that is required for that web app in order for you to focus on doing what you do best: design. Once a developer configures Vagrant, you don’t need to worry about how to get that app running ever again. No more bothering other developers to help you fix your environment so you can test designs. Just check out the code, vagrant up, and start designing. While I’m not sure I agree with the implication that all designers would get others to do the configuring, I think you’ll agree that the “Just check out the code… and start designing” premise is very compelling. You don’t need Vagrant to develop your web applications on virtual machines. All you need is a virtualisation software package, something like VMware Workstation or VirtualBox, and some code. Download the half-gigabyte operating system image that you want and install it. Then download and configure the stack you’ll be working with: let’s say Apache, MySQL, PHP. Then install some libraries, cURL and ImageMagick maybe, and finally configure the ability to easily copy files from your machine to the new virtual one, something like Samba, or install an FTP server.
Once this is all done, copy the code over, import the database, configure Apache’s virtual host, restart and cross your fingers. If you’re a bit weird like me then the above is pretty easy to do and secretly quite fun. Indeed, the amount of traffic to one of my more popular blog posts proves that a lot of people have been building themselves development servers from scratch for some time (or at least trying to anyway), whether that’s on virtual or physical hardware. Or you could use Vagrant. It allows you, or someone else, to specify in plain text how the machine’s virtual hardware should be configured and what should be installed on it. It also makes it insanely easy to get the code on the server. You check out your project, type vagrant up and start work. Why? It’s worth labouring the point that Vagrant makes it really easy; I mean look-no-tangle-of-wires-or-using-vim-and-loads-of-annoying-command-line-stuff easy to run a development environment. That’s all well and good, I hear you say, but there’s a steep learning curve, an overhead to switch. You’re busy and this all sounds great but you need to get on; you’ve got a career to build or a business to run and you don’t have time to learn new stuff right now. In short, what’s the business case? The business case involves saved time, a very low barrier to entry and the ability to give the exact same environment to somebody else. Getting your first development virtual machine running will take minutes, not counting download time. Seriously, use pre-built Vagrant files and provisioners (we’ll touch on this below) and you can start developing immediately. Once you’ve finished developing you can check in your changes, ask a colleague or freelancer to check them out, and then they run the code on the exact same machine – even if they are on the other side of the world and regardless of whether they are on Windows, Linux or Apple OS X. The configuration to build the machine isn’t a huge binary disk image that’ll take ages to download from Git; it’s two small text files that can be version controlled too, so you can see any changes made to the config and roll back if needed. No more ‘It works for me’ reports; no ‘Oh, I was using PHP 5.3.3, not PHP 5.3.11’ – you’re both working on exact copies of the same development environment. With a tested and verified provisioning file you’ll have the confidence that when you brief your next freelancer into your team there won’t be that painful to-and-fro of getting the system up and running, where you’re on a Skype call and they are uttering the immortal words, ‘It still doesn’t work’. You know it works because you can run it too. This portability becomes even more important when you’re working on larger sites and systems. Need a load balancer? Multiple front-end servers and a clustered database back-end? No problem. Add each server into the same Vagrant file and a single command will build all of them. As you’ll know if you work on larger, business critical systems, keeping the operating systems in sync is a real problem: one server with a slightly different library causing sporadic and hard to trace issues is a genuine time black hole. Well, the good news is that you can use the same provisioning files to keep test and production machines in sync using your current build workflow. Let’s also not forget the most simple use case: a single developer with multiple websites running on a single machine.
If that’s you and you switch to using Vagrant-managed virtual machines then the next time you upgrade your operating system or do a fresh install there’s no chance that things will all stop working. The server config is all tucked away in version control with your code. Just pull it down and carry on coding. OK, got it. Show me already If you want to try this out you’ll need to install the latest VirtualBox and Vagrant for your platform. If you already have VMware Workstation or another supported virtualisation package installed, you can use that instead, but you may need to tweak my Vagrant file below. Depending on your operating system, a reboot might also be wise. Note: the commands below were executed on my MacBook, but should also work on Windows and Linux. If you’re using Windows, make sure to run the command prompt as Administrator or it’ll fall over when trying to update the hosts file. As a quick sanity check let’s just make sure that we have the vagrant command in our path, so fire up a terminal and check the version number: $ vagrant -v Vagrant 1.6.5 We’ve one final thing to install and that’s the vagrant-hostsupdater plugin. Once again, in your terminal: $ vagrant plugin install vagrant-hostsupdater Installing the 'vagrant-hostsupdater' plugin. This can take a few minutes... Installed the plugin 'vagrant-hostsupdater (0.0.11)'! Hopefully that wasn’t too painful for you. There are two things that you need to manage a virtual machine with Vagrant: a Vagrant file, which tells Vagrant what hardware to spin up, and a provisioning file, which tells Vagrant what to do on the machine. To save you copying and pasting I’ve supplied you with a simple example (ZIP) containing both of these. Unzip it somewhere sensible and in your terminal make sure you are inside the Vagrant folder: $ cd where/you/placed/it/24ways $ ls -l -rw-r--r--@ 1 bealers staff 11055 9 Nov 09:16 bealers-24ways.md -rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png drwxr-xr-x 5 bealers staff 170 8 Nov 22:54 vagrant $ cd vagrant/ $ ls -l -rw-r--r--@ 1 bealers staff 1661 8 Nov 21:50 Vagrantfile -rwxr-xr-x@ 1 bealers staff 3841 9 Nov 08:00 provision.sh The Vagrant file tells Vagrant how to configure the virtual hardware of your development machine. Skipping over some of the finer details, here’s what’s in that Vagrant file: www.vm.box = "ubuntu/trusty64" Use Ubuntu 14.04 for the VM’s OS. Vagrant will only download this once. If another project uses the same OS, Vagrant will use a cached version. www.vm.hostname = "bealers-24ways.dev" Set the machine’s hostname. If, like us, you’re using the vagrant-hostsupdater plugin, this will also get added to your hosts file, pointing to the virtual machine’s IP address. www.vm.provider :virtualbox do |vb| vb.customize ["modifyvm", :id, "--cpus", "2" ] end Here’s an example of configuring the virtual machine’s hardware on the fly. In this case we want two virtual processors. Note: this is specific to the VirtualBox provider, but you could also have a section for VMware or other supported virtualisation software. www.vm.network "private_network", ip: "192.168.13.37" This specifies that we want a private networking link between your computer and the virtual machine. It’s probably best to use a reserved private subnet like 192.168.0.0/16 or 10.0.0.0/8. www.vm.synced_folder "../", "/var/www/24ways", owner: "www-data", group: "www-data" A particularly handy bit of Vagrant magic. This maps your local 24ways parent folder to /var/www/24ways on the virtual machine.
This means the virtual machine already has direct access to your code and so do you. There’s no messy copying or synchronisation – just edit your files and immediately run them on the server. www.vm.provision :shell, :path => "provision.sh" This is where we specify the provisioner, the script that will be executed on the machine. If you open up the provisioner you’ll see it’s a bash script that does things like: install Apache, PHP, MySQL and related libraries; configure the libraries (set permissions, enable logging); create a database and grant some access rights; and set up some code for us to develop on (in this case, a vanilla WordPress installation). (These Vagrantfile and provisioner fragments are pulled together into sketches in the bonus level below.) To get this all up and running you simply need to run Vagrant from within the vagrant folder: $ vagrant up You should now get a Matrix-like stream of stuff shooting up the screen. If this is the first time Vagrant has used this particular operating system image – remember we’ve specified the latest version of Ubuntu – it’ll download the disc image and cache it for future reuse. Then all the packages are downloaded and installed and finally all our configuration steps occur, including the download and configuration of WordPress. Halfway through proceedings it’s likely that the process will halt at a prompt something like this: ==> www: adding to (/etc/hosts) : 192.168.13.37 bealers-24ways.dev # VAGRANT: 2dbfbced1b1e79d2a0942728a0a57ece (www) / 899bd80d-4251-4f6f-91a0-d30f2d9918cc Password: You need to enter your password to give vagrant sudo rights to add the IP address and hostname mapping to your local hosts file. Once finished, fire up your browser and go to http://bealers-24ways.dev. You should see a default WordPress installation. The username for wp-admin is admin and the password is 24ways. If you take a look at your local filesystem the 24ways folder should now look like: $ cd ../ $ ls -l -rw-r--r--@ 1 bealers staff 13074 9 Nov 10:14 bealers-24ways.md drwxr-xr-x 21 bealers staff 714 9 Nov 10:06 code drwxr-xr-x 3 bealers staff 102 9 Nov 10:06 etc -rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png drwxr-xr-x 5 bealers staff 170 9 Nov 10:03 vagrant -rwxr-xr-x 1 bealers staff 1315849 9 Nov 10:06 wp-cli $ cd vagrant/ $ ls -l -rw-r--r--@ 1 bealers staff 1661 9 Nov 09:41 Vagrantfile -rwxr-xr-x@ 1 bealers staff 3836 9 Nov 10:06 provision.sh The code folder contains all the WordPress files. You can edit these directly and refresh that page to see your changes instantly. Staying in the vagrant folder, we’ll now SSH to the machine and have a quick poke around. $ vagrant ssh Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-39-generic x86_64) * Documentation: https://help.ubuntu.com/ System information as of Sun Nov 9 10:03:38 UTC 2014 System load: 1.35 Processes: 102 Usage of /: 2.7% of 39.34GB Users logged in: 0 Memory usage: 16% IP address for eth0: 10.0.2.15 Swap usage: 0% Graph this data and manage this system at: https://landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest: http://www.ubuntu.com/business/services/cloud 0 packages can be updated. 0 updates are security updates.
vagrant@bealers-24ways:~$ You’re now logged in as the Vagrant user; if you want to become root this is easy: vagrant@bealers-24ways:~$ sudo su - root@bealers-24ways:~# Or you could become the webserver user, which is a good idea if you’re editing the web files directly on the server: root@bealers-24ways:~# su - www-data www-data@bealers-24ways:~$ www-data’s home directory is /var/www so we should be able to see our magically mapped files: www-data@bealers-24ways:~$ ls -l total 4 drwxr-xr-x 1 www-data www-data 306 Nov 9 10:09 24ways drwxr-xr-x 2 root root 4096 Nov 9 10:05 html www-data@bealers-24ways:~$ cd 24ways/ www-data@bealers-24ways:~/24ways$ ls -l total 1420 -rw-r--r-- 1 www-data www-data 13682 Nov 9 10:19 bealers-24ways.md drwxr-xr-x 1 www-data www-data 714 Nov 9 10:06 code drwxr-xr-x 1 www-data www-data 102 Nov 9 10:06 etc -rw-r--r-- 1 www-data www-data 118152 Nov 9 10:08 it-works.png drwxr-xr-x 1 www-data www-data 170 Nov 9 10:03 vagrant -rwxr-xr-x 1 www-data www-data 1315849 Nov 9 10:06 wp-cli We can also see some of our bespoke configurations: www-data@bealers-24ways:~/24ways$ cat /etc/php5/mods-available/siftware.ini upload_max_filesize = 15M log_errors = On display_errors = On display_startup_errors = On error_log = /var/log/apache2/php.log memory_limit = 1024M date.timezone = Europe/London www-data@bealers-24ways:~/24ways$ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 43 Nov 9 10:06 bealers-24ways.dev.conf -> /var/www/24ways/etc/bealers-24ways.dev.conf If you want to leave the server, simply type Ctrl+D a few times and you’ll be back where you started. www-data@bealers-24ways:~/24ways$ logout root@bealers-24ways:~# logout vagrant@bealers-24ways:~$ logout Connection to 127.0.0.1 closed. $ You can now halt the machine: $ vagrant halt ==> www: Attempting graceful shutdown of VM... ==> www: Removing hosts Bonus level The example I’ve provided isn’t very realistic. In the real world I’d expect the Vagrant file and provisioner to be included with the project and for it not to create the directory structure, which should already exist in your project. The same goes for the Apache VirtualHost file. You’ll also probably have a default SQL script to populate the database. As you work with Vagrant you might start to find bash provisioning to be quite limiting, especially if you are working on larger projects which use more than one server. In that case I would suggest you take a look at Ansible, Puppet or Chef. We use Ansible because we like YAML, but they all do the same sort of thing. The main benefit is being able to use the same Vagrant provisioning scripts to also provision test, staging and production environments using your build workflows. Having to supply a password so the hosts file can be updated gets annoying very quickly, so you can give Vagrant sudo rights: $ sudo visudo Add these lines to the bottom (Shift+G then i then Ctrl+V then Esc then :wq) Cmnd_Alias VAGRANT_HOSTS_ADD = /bin/sh -c echo "*" >> /etc/hosts Cmnd_Alias VAGRANT_HOSTS_REMOVE = /usr/bin/sed -i -e /*/ d /etc/hosts %staff ALL=(root) NOPASSWD: VAGRANT_HOSTS_ADD, VAGRANT_HOSTS_REMOVE Vagrant caches the operating system images that you download but it’ll download the installed software packages every time. You can get around this by using a plugin like vagrant-cachier or, if you’re really keen, maintain local Apt repositories (or whatever the equivalent is for your server architecture).
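For reference, here is roughly what the Vagrantfile fragments quoted earlier add up to when assembled into a single file. Treat it as a sketch rather than the exact file from the ZIP; in particular, the config.vm.define "www" wrapper is an assumption based on the www prefix in the fragments and the ==> www: lines in Vagrant’s output.

Vagrant.configure("2") do |config|
  # "www" is the machine name assumed from the fragments above
  config.vm.define "www" do |www|
    www.vm.box = "ubuntu/trusty64"            # Ubuntu 14.04, downloaded once then cached
    www.vm.hostname = "bealers-24ways.dev"    # added to your hosts file by vagrant-hostsupdater
    www.vm.network "private_network", ip: "192.168.13.37"
    www.vm.synced_folder "../", "/var/www/24ways", owner: "www-data", group: "www-data"
    www.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--cpus", "2"]   # two virtual processors
    end
    www.vm.provision :shell, :path => "provision.sh"
  end
end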
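In the same spirit, here is a hypothetical, much-abbreviated provision.sh showing the shape of what the provisioner does. The real script in the ZIP does more (it fires up WordPress via wp-cli, sets permissions and enables logging); the package names below are the standard Ubuntu 14.04 ones, and the database name and credentials are invented for illustration.

#!/usr/bin/env bash
# Sketch of a provisioner: install the stack, configure it, wire up the site.
export DEBIAN_FRONTEND=noninteractive   # suppress the MySQL root password prompt
apt-get update
apt-get install -y apache2 php5 libapache2-mod-php5 php5-mysql mysql-server curl
# Create a database and grant some access rights (names invented for this sketch)
mysql -e "CREATE DATABASE IF NOT EXISTS wordpress;"
mysql -e "GRANT ALL ON wordpress.* TO 'wp'@'localhost' IDENTIFIED BY '24ways';"
# Enable the virtual host that lives in the synced folder, then restart Apache
ln -sf /var/www/24ways/etc/bealers-24ways.dev.conf /etc/apache2/sites-enabled/
service apache2 restart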
At some point you might start getting a large number of virtual machines running on your poor hardware all at the same time, especially if you’re switching between projects a lot and each of those projects uses lots of servers. We’re just getting to that stage now, so are considering a medium-term move to a containerised option like Docker, which seems to be maturing now. If you are keen not to use any command line tools whatsoever and you’re on OS X, then you could check out Vagrant Manager as it looks quite shiny. Finally, there are a huge number of resources to give you pre-built Vagrant machines, from the likes of VVV for WordPress, something similar for Perch, PuPHPet for generating various configurations, and a long list of pre-built operating systems at VagrantBox.es. Wrapping up Hopefully you can now see why it might be worthwhile to add Vagrant to your development workflow, whether you’re an agency drafting in freelancers or a one-person band running lots of sites on your laptop using MAMP or something similar. Vagrant makes it easy to launch exact copies of the same machine in a repeatable and version controlled way. The learning curve isn’t too steep and, once configured, you can forget about it and focus on getting your work done. 2014 Darren Beale darrenbeale 2014-12-05T00:00:00+00:00 https://24ways.org/2014/what-is-vagrant-and-why-should-i-care/ process
47 Developing Robust Deployment Procedures Once you have developed your site, how do you make it live on your web hosting? For many years the answer was to log on to your server and upload the files via FTP. Over time most hosts and FTP clients began to support SFTP, ensuring your files were transmitted over a secure connection. The process of deploying a site, however, remained the same. There are issues with deploying a site in this way. You are essentially transferring files one by one to the server without any real management of that transfer. If the transfer fails for some reason, you may end up with a site that is only half updated. It can then be really difficult to work out what hasn’t been replaced or added, especially where you are updating an existing site. If you are updating some third-party software, your update may include files that should be removed, but that may not be obvious to you, and you risk leaving outdated files littering your file system. Updating using (S)FTP is a fragile process that leaves you open to problems caused by both connectivity and human error. Is there a better way to do this? You’ll be glad to know that there is. A modern professional deployment workflow should have you moving away from fragile manual file transfers to deployments linked to code committed into source control. The benefits of good practice You may never have experienced any major issues while uploading files over FTP, and good FTP clients can help. However, there are other benefits to moving to modern deployment practices. No surprises when you launch If you are deploying in the way I suggest in this article you should have no surprises when you launch, because the code you committed from your local environment should be the same code you deploy – and to staging if you have a staging server. A missing vital file won’t cause things to start throwing errors on updating the live site. Being able to work collaboratively Source control and good deployment practice makes working with your clients and other developers easy. Deploying first to a staging server means you can show your client updates and then push them live. If you subcontract some part of the work, you can give your subcontractor the ability to deploy to staging, leaving you with the final push to launch, once you know you are happy with the work. Having a proper backup of site files with access to them from anywhere The process I will outline requires the use of hosted, external source control. This gives you a backup of your latest commit and the ability to clone those files and start working on them from any machine, wherever you are. Being able to jump back into a site quickly when the client wants a few changes When doing client work it is common for some work to be handed over, then several months might go by without you needing to update the site. If you don’t have a good process in place, just getting back to work on it may take several hours for what could be only a few hours of work in itself. A solid method for getting your local copy up to date and deploying your changes live can cut that set-up time down to a few minutes. The tool chain In the rest of this article I assume that your current practice is to deploy your files over (S)FTP, using an FTP client. You would like to move to a more robust method of deployment, but without blowing apart your workflow and spending all Christmas trying to put it back together again. Therefore I’m selecting the most straightforward tools to get you from A to B.
Source control Perhaps you already use some kind of source control for your sites. Today that is likely to be Git, but you might also use Subversion or Mercurial. If you are not using any source control at all then I would suggest you choose Git, and that is what I will be working with in this article. When you work with Git, you always have a local repository. This is where your changes are committed. You also have the option to push those changes to a remote repository; for example, GitHub. You may well have come across GitHub as somewhere you can go to download open source code. However, you can also set up private repositories for sites whose code you don’t want to make publicly accessible. A hosted Git repository gives you somewhere to push your commits to and deploy from, so it’s a crucial part of our tool chain. A deployment service Once you have your files pushed to a remote repository, you then need a way to deploy them to your staging environment and live server. This is the job of a deployment service. This service will connect securely to your hosting and, either automatically or on the click of a button, transfer files from your Git commit to the hosting server. If files need removing, the service should do this too, so you can be absolutely sure that your various environments are the same. Tools to choose from What follows are not exhaustive lists, but any of these should allow you to deploy your sites without FTP. Hosted Git repositories GitHub Beanstalk Bitbucket Standalone deployment tools Deploy dploy.io FTPloy I’ve listed Beanstalk as a hosted Git repository, though it also includes a bundled deployment tool. Dploy.io is a standalone version of that tool just for deployment. In this tutorial I have chosen two separate services to show how everything fits together, and because you may already be using source control. If you are setting up all of this for the first time then using Beanstalk saves having two accounts – and I can personally recommend them. Putting it all together The steps we are going to work through are: Getting your local site into a local Git repository Pushing the files to a hosted repository Connecting a deployment tool to your web hosting Setting up a deployment Get your local site into a local Git repository Download and install Git for your operating system. Open up a Terminal window and tell Git your name using the following command (use the name you will set up on your hosted repository). > git config --global user.name "YOUR NAME" Use the next command to give Git your email address. This should be the address that you will use to sign up for your remote repository. > git config --global user.email "YOUR EMAIL ADDRESS" Staying in the command line, change to the directory where you keep your site files. If your files are in /Users/rachel/Sites/mynicewebsite you would type: > cd /Users/rachel/Sites/mynicewebsite The next command tells Git that we want to create a new Git repository here. > git init We then add our files: > git add . Then commit the files: > git commit -m "Adding initial files" The bit in quotes after -m is a message describing what you are doing with this commit. It’s important to add something useful here to remind yourself later why you made the changes included in the commit. Your local files are now in a Git repository! However, everything should be just the same as before in terms of working on the files or viewing them in a local web server.
The only difference is that you can add and commit changes to this local repository. Want to know more about Git? There are some excellent resources in a range of formats here. Setting up a hosted Git repository I’m going to use Atlassian Bitbucket for my first example as they offer a free hosted and private repository. Create an account on Bitbucket. Then create a new empty repository and give it a name that will identify the repository easily. Click Getting Started and under Command Line select “I have an existing project”. This will give you a set of instructions to run on the command line. The first instruction is just to change into your working directory as we did before. We then add a remote repository, and run two commands to push everything up to Bitbucket. cd /path/to/my/repo git remote add origin https://myuser@bitbucket.org/myname/24ways-tutorial.git git push -u origin --all git push -u origin --tags When you run the push command you will be asked for the password that you set for Bitbucket. Having entered that, you should be able to view the files of your site on Bitbucket by selecting the navigation option Source in the sidebar. You will also be able to see commits. When we initially committed our files locally we added the message “Adding initial files”. If you select Commits from the sidebar you’ll see we have one commit, with the message we set locally. You can imagine how useful this becomes when you can look back and see why you made certain changes to a project that perhaps you haven’t worked on for six months. Before working on your site locally you should run: > git pull in your working directory to make sure you have all of the most up-to-date files. This is especially important if someone else might work on them, or you just use multiple machines. You then make your changes and add any changed or modified files, for example: > git add index.php Commit the change locally: > git commit -m "updated the homepage" Then push it to Bitbucket: > git push origin master If you want to work on your files on a different computer you clone them using the following command: > git clone https://myuser@bitbucket.org/myname/24ways-tutorial.git You then have a copy of your files that is already a Git repository with the Bitbucket repository set up as a remote, so you are all ready to start work. Connecting a deployment tool to your repository and web hosting The next step is deploying files. I have chosen to use a deployment tool called Deploy as it has support for Bitbucket. It does have a monthly charge – but offers a free account for open source projects. Sign up for your account, then log in and create your first project. Select Create an empty project. Under Configure Repository Details choose Bitbucket and enter your username and password. If Deploy can connect, it will show you your list of projects. Select the one you want. The next screen is Add New Server and here you need to configure the server that you want to deploy to. You might set up more than one server per project. In an ideal world you would deploy to a staging server for your client to preview changes and then deploy once everything is signed off. For now I’ll assume you just want to set up your live site. Give the server a name; I usually use Production for the live web server. Then choose the protocol to connect with. Unless your host really does not support SFTP (which is pretty rare) I would choose that instead of FTP.
You now add the same details your host gave you to log in with your SFTP client, including the username and password. The Path on server should be where your files are on the server. If, when you log in with your SFTP client, you get put in the directory above public_html, then you should just be able to add public_html here. Once your server is configured, you can deploy. Click Deploy now and choose the server you just set up. Then choose the last commit (which will probably be selected for you) and click Preview deployment. You will then get a preview of which files will change if you run the deployment: the files that will be added and any that will be removed. At the very top of that screen you should see the commit message you entered right back when you initially committed your files locally. If all looks good, run the deployment. You have taken the first steps to a more consistent and robust way of deploying your websites. It might seem like quite a few steps at first, but you will very soon come to realise how much easier deploying a live site is through this process. Your new procedure step by step Edit your files locally as before, testing them through a web server on your own computer. Commit your changes to your local Git repository. Push changes to the remote repository. Log into the deployment service. Hit the Deploy now button. Preview the changes. Run the deployment and then check your live site. Taking it further I have tried to keep things simple in this article because so often, once you start to improve processes, it is easy to get bogged down in all the possible complexities. If you move from deploying with an FTP client to working in the way I have outlined above, you’ve taken a great step forward in creating more robust processes. You can continue to improve your procedures from this point. Staging servers for client preview When we added our server we could have added an additional server to use as a staging server for clients to preview their site on. This is a great use of a cheap VPS server, for example. You can set each client up with a subdomain – clientname.yourcompany.com – and this becomes the place where they can view changes before you deploy them. In that case you might deploy to the staging server, let the client check it out and then go back and deploy the same commit to the live server. Using Git branches As you become more familiar with using Git, and especially if you start working with other people, you might need to start developing using branches. You can then have a staging branch that deploys to staging and a production branch that is always a snapshot of what has been pushed to production. This guide from Beanstalk explains how this works. (There’s a short command-line sketch of this flow at the end of this section.) Automatic deployment to staging I wouldn’t suggest doing automatic deployment to the live site. It’s worth having someone on hand hitting the button and checking that everything worked nicely. If you have configured a staging server, however, you can set it up to deploy the changes each time a commit is pushed to it. If you use Bitbucket and Deploy, you would create a deployment hook on Bitbucket that posts to a URL on Deploy when a push happens, triggering a deployment of the code. This can save you a few steps when you are just testing out changes. Even if you have made lots of changes to the staging deployment, the commit that you push live will include them all, so you can do that manually once you are happy with how things look in staging.
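To make the branch-based flow above concrete, here is one way it might look on the command line. It’s a sketch under a couple of assumptions: your branches are named staging and production, your deployment service is set up to deploy each branch to the matching server, and the file name is invented for illustration.

> git checkout -b staging
(edit your files and test them locally as usual)
> git add templates/header.php
> git commit -m "Update the header for the redesign"
> git push origin staging
(the staging server deploys this commit; the client reviews it at their subdomain)
> git checkout production
> git merge staging
> git push origin production
(log into the deployment service and deploy this commit to the live server)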
Further Reading The tutorials from Git Client Tower, already mentioned in this article, are a great place to start if you are new to Git. A presentation from Liam Dempsey showing how to use the GitHub App to connect to Bitbucket. Try Git from Code School. The Git Workbook, a self-study guide to Git from Lorna Mitchell. Get set up for the new year I love to start the New Year with a clean slate and improved processes. If you are still wrangling files with FTP then this is one thing you could tick off your list to save you time and energy in 2015. Post to the comments if you have suggestions of tools or ideas for ways to enhance this type of set-up for those who have already taken the first steps. 2014 Rachel Andrew rachelandrew 2014-12-04T00:00:00+00:00 https://24ways.org/2014/developing-robust-deployment-procedures/ process
48 A Holiday Wish A friend and I were talking the other day about why clients spend more on toilet cleaning than design, and how the industry has changed since the mid-1990s, when we got our starts. Early in his career, my friend wrote a fine CSS book, but for years he has called himself a UX designer. And our conversation got me thinking about how I reacted to that title back when I first started hearing it. “Just what this business needs,” I said to myself, “another phony expert.” Okay, so I was wrong about UX, but my touchiness was not altogether unfounded. In the beginning, our industry was divided between freelance jack-of-all-trade punks, who designed and built and coded and hosted and Photoshopped and even wrote the copy when the client couldn’t come up with any, and snot-slick dot-com mega-agencies that blew up like Alice and handed out titles like impoverished nobles in the years between the world wars. I was the former kind of designer, a guy who, having failed or just coasted along at a cluster of other careers, had suddenly, out of nowhere, blossomed into a web designer—an immensely curious designer slash coder slash writer with a near-insatiable lust to shave just one more byte from every image. We had modems back then, and I dreamed in sixteen colors. My source code was as pretty as my layouts (arguably prettier) and I hoovered up facts and opinions from newsgroups and bulletin boards as fast as any loudmouth geek could throw them. It was a beautiful life. But soon, too soon, the professional digital agencies arose, buying loft buildings downtown, jacking in at T1 speeds, charging a hundred times what I did, and communicating with their clients in person, in large artfully bedecked rooms, wearing hand-tailored Barney’s suits and bringing back the big city bullshit I thought I’d left behind when I quit advertising to become a web designer. Just like the big bad ad agencies of my early career, the new digital agencies stocked every meeting with a totem pole worth of ranks and titles. If the client brought five upper middle managers to the meeting, the agency did likewise. If fifteen stakeholders got to ask for a bigger logo, fifteen agency personnel showed up to take notes on the percentage of enlargement required. But my biggest gripe was with the titles. The bigger and more expensive the agency, the lousier it ran with newly invented titles. Nobody was a designer any more. Oh, no. Designer, apparently, wasn’t good enough. Designer was not what you called someone you threw that much money at. Instead of designers, there were user interaction leads and consulting middleware integrators and bilabial experience park rangers and you name it. At an AIGA Miami event where I was asked to speak in the 1990s, I once watched the executive creative director of the biggest dot-com agency of the day make a presentation where he spent half his time bragging that the agency had recently shaved down the number of titles for people who basically did design stuff from forty-six to just twenty-three—he presented this as though it were an Einsteinian coup—and the other half of his time showing a film about the agency’s newly opened branch in Oslo. The Oslo footage was shot in December. I kept wondering which designer in the audience who lived in the constant breezy balminess of Miami they hoped to entice to move to dark, wintry Norway. But I digress. 
Shortly after I viewed this presentation, the dot-com world imploded, brought about largely by the euphoric excess of the agencies and their clients. But people still needed websites, and my practice flourished—to the point where, in 1999, I made the terrifying transition from guy in his underwear working freelance out of his apartment to head of a fledgling design studio. (Note: you never stop working on that change.) I had heard about experience design in the 1990s, but assumed it was a gig for people who only knew one font. But sometime around 2004 or 2005, among my freelance and small-studio colleagues, like a hobbit in the Shire, I began hearing whispers in the trees of a new evil stirring. The fires of Mordor were burning. Web designers were turning in their HTML editing tools and calling themselves UXers. I wasn’t sure if they pronounced it “uck-sir,” or “you-ex-er,” but I trusted their claims to authenticity about as far as I trusted the actors in a Doctor Pepper commercial when they claimed to be Peppers. I’m an UXer, you’re an UXer, wouldn’t you like to be an UXer too? No thanks, said I. I still make things. With my hands. Such was my thinking. I may have earned an MFA at the end of some long-past period of soul confusion, but I have working-class roots and am profoundly suspicious of, well, everything, but especially of anything that smacks of pretense. I got exporting GIFs. I didn’t get how white papers and bullet points helped anybody do anything. I was wrong. And gradually I came to know I was wrong. And before other members of my tribe embraced UX, and research, and content strategy, and the other airier consultant services, I was on board. It helped that my wife of the time was a librarian from Michigan, so I’d already bought into the cult of information architecture. And if I wasn’t exactly the seer who first understood how borderline academic practices related to UX could become as important to our medium and industry as our craft skills, at least I was down a lot faster than Judd Apatow got with feminism. But I digress. I love the web and all the people in it. Today I understand design as a strategic practice above all. The promise of the web, to make all knowledge accessible to all people, won’t be won by HTML5, WCAG 2, and responsive web design alone. We are all designers. You may call yourself a front-end developer, but if you spend hours shaving half-seconds off an interaction, that’s user experience and you, my friend, are a designer. If the client asks, “Can you migrate all my old content to the new CMS?” and you answer, “Of course we can, but should we?”, you are a designer. Even our users are designers. Think about it. Once again, as in the dim dumb dot-com past, we seem to be divided by our titles. But, O, my friends, our varied titles are only differing facets of the same bright gem. Sisters, brothers, we are all designers. Love on! Love on! And may all your web pages, cards, clusters, clumps, asides, articles, and relational databases be bright. 2014 Jeffrey Zeldman jeffreyzeldman 2014-12-18T00:00:00+00:00 https://24ways.org/2014/a-holiday-wish/ ux