Reading List

The most recent articles from a list of feeds I subscribe to.

Perfwizardry

When Harry Roberts tweeted he would be free to give a workshop in The Netherlands this week, Joël approached Xebia and Fronteers and made it happen. Almost 20 of us gathered in the Wibautstraat in Amsterdam and spent a whole day getting all nerdy about web performance.

Harry told us all about how to sell performance, how (and what) to measure and which tools to use. And of course, how to fix the problems you measure. He also went into how all of this is impacted by HTTP/2 and Service Workers (spoiler: turn them off when performance testing). The workshop came with plenty of theory, which I liked, as it helped me understand how things work in practice. We ended by scrutinising a number of actual websites using the things we learned.

Please find some of my notes below.

Selling performance

  • When trying to sell performance to others in your company, tune the message towards the audience. With performance, it is easy to get all excited about all the technical stuff, especially as a developer. Concurrent downloads in HTTP/2 may interest your audience less than the forecast of paying less for bandwidth.
  • Use stats. If your company does not keep them, you can also refer to stats of others; WPOstats is great.
  • Your users may have capped data plans, they may be in an area with poor network conditions. Think of the next billion internet users, they may not have fast iPhones and great WiFi. See also What does my website cost and Web World Wide.

What to measure

  • Testing on your own device with your own network helps and it is easy, but it is nowhere near as valuable as Real User Monitoring (RUM). There are services that help with this, such as SpeedCurve and Pingdom. Google Analytics also provides some rudimentary RUM tools.
  • Beware of metrics such as page weight, number of requests and ‘fully loaded’. It depends on your situation whether and how those apply to your performance strategy.
  • Perceived performance is better to focus on, although it is harder to measure. Many automated tools require thorough evaluation. Human judgment is essential.
  • Speed Index is a good metric to focus on. It is a ‘user-centric measurement’: the number is based on visual completeness of a page.
  • You can use Speed Index’s median as a metric in Web Page Test by adding ?medianMetric=SpeedIndex at the end of your Web Page Test URL.

The network and requests

  • TCP/IP comes with head-of-line blocking: if one of the packets fails to come through, it has to be retransmitted, holding up the packets behind it in the meantime.
  • The network is hostile, assume it will work against you.
  • Beware of the difference between latency and bandwidth. Latency is how fast a certain path can be travelled, bandwidth is how much we can carry on our journey. Compare it with a motorway: latency is how fast we can drive, bandwidth is how many lanes are available for us to use.
  • The Connection: Keep-Alive header will leave a connection open for other transfers to happen, preventing new connections (with new DNS lookups, handshakes, etc.) having to be started all the time.
  • High Performance Browser Networking by Ilya Grigorik is a great book and it is available online for free. It is very technical, the mobile section is especially great.
  • Avoid redirects, they waste your user’s time. This is especially true on mobile, where requests are even more fragile.
  • Beware of whether your URLs have a trailing slash. If you are not precise about where you link, you can trigger unnecessary redirects. For example, if your ‘About’ page lives at yoursite.com/about/ and you link to yoursite.com/about with a redirect set up, you can save your user that round trip by just linking to the right path in the first place.
  • Eliminate 404s, as they are not cached (this is a feature).
  • Make use of cache, do not rely on it.
  • Use compression, for example gzip.
  • Concatenate your files to circumvent HTTP/1.1’s limit on concurrent downloads per host (stop doing this when using HTTP/2).
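As a sketch, the Keep-Alive and compression tips above are just response headers; this fragment is illustrative only, with made-up values:

```http
HTTP/1.1 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=1000
Content-Encoding: gzip
```

The timeout and max parameters (how long to keep the connection open, and for how many requests) are widely implemented but server-configurable, so the numbers here are assumptions.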

Resource hints to increase performance

In what was likely the most exciting section of the workshop, Harry went into resource hints: DNS Prefetch, Preconnect, Prefetch and Prerender. Web performance hero Steve Souders talked about these concepts in his Fronteers 2013 talk “Pre-browsing”. Browser support for resource hints has improved massively since.

  • If you use both dns-prefetch and preconnect for the same origin, put the preconnect hint first; the browser will then skip the dns-prefetch, as the DNS lookup already happens as part of establishing the connection.
  • Long request chains are expensive: this is what happens when you include a CSS file, which imports another, which then requests a font, etc. The browser will only know about the font once it has parsed the second CSS file. With preload you can tell the browser to fetch assets ahead of time, so that it already has them when the second CSS file requests them.
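As an illustrative sketch of these hints in a document head (the origin and file paths are made up):

```html
<!-- preconnect covers DNS lookup plus TCP/TLS handshake; the dns-prefetch
     for the same origin acts as a fallback for browsers without preconnect -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.example.com">

<!-- preload flattens the request chain: the font is fetched before the
     CSS file that references it has even been parsed -->
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
```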

Anti patterns

  • Avoid Base64 encoding as it has little repetition and therefore gzips badly.
  • Don’t use domain sharding (the practice of using different domains to allow for more concurrent downloads) for any assets that are critical to your site, such as your (critical) CSS.
  • Many of today’s best practices are HTTP/1.1 oriented. When you start serving over HTTP/2, beware some may become anti patterns.

The future

  • If you serve over HTTP/2, you can benefit from multiplexing and concurrency (downloading all assets at the same time), improved caching, much smaller headers and Server Push, which lets the server send files the user has not yet requested.
  • Service Workers can speed up things tremendously as you can use them to have more granular control over what you serve from where.

Although I was familiar with most of the concepts covered, I did pick up lots of things that I did not know about. There were also practical examples, and Harry explained everything with lots of patience and at just the right speed. If Harry is coming to your town with his performance workshop, I would recommend coming along.


Originally posted as Perfwizardry on Hidde's blog.

Reply via email

The importance of web standards and design for accessibility

Yesterday I attended the Inclusive Design and Accessibility meetup (“idea11y”) at De Voorhoede in Amsterdam, which was about development tools, graphic design and, the surprise act, the accessibility of React apps.

Florian Beijers, who currently interns at Microsoft, talked about his experience using text editors. He showed us popular editors like Sublime Text and Atom, and how they are impossible to use for him as a blind user. Some just read out BLANK BLANK BLANK, due to bugs in the underlying framework (Electron, in Atom’s case). It was easy for us all to conclude that the products had not looked into accessibility much. During the talk and Q&A two questions arose: why does this happen, and how hard is it to make products accessible? The why, Florian said, seems to be a combination of lack of knowledge and lack of education, which then also causes accessibility bugs to be deprioritised. If you don’t know that your designs or your code places a burden on some of your users, why make improvements? Then the how. According to Florian, adhering to standards can help a great deal in making things accessible. An example is labeling: if you do not use the standard way of labeling controls in your application, your users’ accessibility tools will not understand them.

Design consultant Agnieszka Czajkowska talked about accessibility from a design point of view. She emphasised that good design can help a great deal in removing barriers for some of your users. To come to good design, she said, it is important to include users early on, do accessibility by default and avoid treating it as an optional feature. This is something I personally see a lot in projects: specific user stories or backlog items for accessibility (‘as a user, I want all the things to be accessible’). It doesn’t work like that, every task should include doing it accessibly. In the Q&A, the discussion moved towards flexibility of interfaces. Some people need more contrast or larger fonts. If an interface is flexible enough, they can use their browser settings to do this. Agnieszka said this comes down to giving users agency: make sure your interface allows for rather than hinders user preferences. In her talk, she showed a practical example of this: on a site she used, text overlapped with other text when she set a larger font size. With responsive design, it is perfectly possible to avoid this.

De Voorhoede’s Jasper Moelker stepped in at the last minute to do a lightning talk that he had done before for the ReactNL conference. In the talk, he used examples from the React docs and showed how they performed in three browsers: Chrome (a modern browser), Opera mini (a proxy browser) and Lynx (a text browser). Sadly, most of the examples worked great in Chrome, hardly in Opera mini and not at all in Lynx. He then showed how to rewrite the examples by using semantic HTML. Basic markup 101 like pairing labels with inputs and wrapping form controls in form elements helped a great deal and made some of the examples work in all three browsers. Takeaway: semantic markup is extremely important. It is a great strategy to ensure your application is understood by the largest number of browsers and devices possible. Again, as Florian also concluded, following existing standards helps a great deal. Another strategy Jasper recommended is to choose ‘mount nodes’ wisely: some turn their whole body into one big React app, which does nothing without React. Instead, you could turn specific parts of your page into mini React apps, and have the rest of the page contain sensible defaults. The latter likely deals more gracefully with failures.

This was my first idea11y meetup, and I had a great evening. Many thanks to the organisers for putting on this event and to De Voorhoede for their hospitality. I am looking forward to attending again!


Originally posted as The importance of web standards and design for accessibility on Hidde's blog.


Using JavaScript to trap focus in an element

For information on how to accessibly implement the components I’m working on, I often refer to the WAI-ARIA Authoring Practices. One thing this document sometimes recommends is to trap focus in an element, for example in a modal dialog while it is open. In this post I will show how to implement this.

Some history

In the early days of the web, web pages used to be very simple: there was lots of text, and there were links to other pages. Easy to navigate with a mouse, easy to navigate with a keyboard.

The modern web is much more interactive. We now have ‘rich internet applications’. Elements appear and disappear on click, scroll or even form validation. Overlays, carousels, AJAX… most of these have ‘custom’ behaviour. Therefore we cannot always rely on the browser’s built-in interactions to ensure user experience. We go beyond default browser behaviour, so the duty to fix any gaps in user experience is on us.

One common method of ‘fixing’ the user’s experience is carefully shifting focus around with JavaScript. The browser does this for us in common situations, for example when tabbing between links, when clicking form labels or when following anchor links. In less browser-predictable situations, we will have to do it ourselves.

When to trap focus

Trapping focus is a behaviour we usually want when there is modality in a page: components that could have been pages of their own, such as overlays and dialog boxes. When such a component is active, the rest of the page is usually blurred and the user is only allowed to interact with our component.

Not all users can see the visual website, so we will also need to make this work non-visually. The idea is that if we prevent clicks on part of the page, we should also prevent focus there.

Some examples of when to trap focus:

  • user opens a modal in which they can pick a seat on their flight, with a semitransparent layer underneath it
  • user tries to submit a form that could not be validated, and is shown an error message; they can only choose ‘OK’ and cannot interact with the rest of the page
  • user opens a huge navigation menu, the background behind the navigation is blurred (“Categorie kiezen” at Hema.nl)

In these cases we would like to trap focus in the modal, alert or navigation menu, until they are closed (at which point we want to undo the trapping and return focus to the element that instantiated the modal).

Requirements

We need these two things to be the case during our trap:

  • When a user presses TAB, the next focusable element receives focus. If this element is outside our component, focus should be set to the first focusable element in the component.
  • When a user presses SHIFT TAB, the previous focusable element receives focus. If this element is outside our component, focus should be set to the last focusable element in the component.
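The wrap-around behaviour in these two requirements can be sketched as a pure function, which makes it easy to reason about before involving the DOM. The function name and index-based signature here are my own, for illustration only; the actual implementation later in this post compares elements directly instead of tracking indices.

```javascript
// Sketch: given the index of the currently focused element within the
// component's list of focusable elements, return the index that should
// receive focus next. Wraps at both ends, per the requirements above.
function nextFocusIndex(currentIndex, count, shiftKey) {
  if (shiftKey) {
    // SHIFT TAB on the first focusable element wraps to the last
    return currentIndex === 0 ? count - 1 : currentIndex - 1;
  }
  // TAB on the last focusable element wraps back to the first
  return currentIndex === count - 1 ? 0 : currentIndex + 1;
}
```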

Implementation

In order to implement the above behaviour on a given element, we need to get a list of the focusable elements within it, and save the first and last one in variables.

In the following, I assume the element we trap focus in is stored in a variable called element.

Get focusable elements

In JavaScript we can figure out if elements are focusable, for example by checking whether they are interactive elements or have a tabindex.

This gives a list of common elements that are focusable:

var focusableEls = element.querySelectorAll('a[href]:not([disabled]), button:not([disabled]), textarea:not([disabled]), input[type="text"]:not([disabled]), input[type="radio"]:not([disabled]), input[type="checkbox"]:not([disabled]), select:not([disabled])');

This is an example list of common elements; there are many more focusable elements. Note that it is useful to exclude disabled elements here.

Save first and last focusable element

This is a way to get the first and last focusable elements within an element:

var firstFocusableEl = focusableEls[0];  
var lastFocusableEl = focusableEls[focusableEls.length - 1];

We can later compare these to document.activeElement, which contains the element in our page that currently has focus.

Listen to keydown

Next, we can listen to keydown events happening within the element, check whether they were TAB or SHIFT TAB, and then apply logic if the first or last focusable element had focus.

var KEYCODE_TAB = 9;

element.addEventListener('keydown', function(e) {
  if (e.key === 'Tab' || e.keyCode === KEYCODE_TAB) {
    if ( e.shiftKey ) /* shift + tab */ {
      if (document.activeElement === firstFocusableEl) {
        lastFocusableEl.focus();
        e.preventDefault();
      }
    } else /* tab */ {
      if (document.activeElement === lastFocusableEl) {
        firstFocusableEl.focus();
        e.preventDefault();
      }
    }
  }
});

Alternatively, you can add the event listener to the first and last items. I like the above approach: with a single listener, there is only one to remove later.

Putting it all together

With some minor changes, this is my final trapFocus() function:

function trapFocus(element) {
  var focusableEls = element.querySelectorAll('a[href]:not([disabled]), button:not([disabled]), textarea:not([disabled]), input[type="text"]:not([disabled]), input[type="radio"]:not([disabled]), input[type="checkbox"]:not([disabled]), select:not([disabled])');
  var firstFocusableEl = focusableEls[0];
  var lastFocusableEl = focusableEls[focusableEls.length - 1];
  var KEYCODE_TAB = 9;

  element.addEventListener('keydown', function(e) {
    var isTabPressed = (e.key === 'Tab' || e.keyCode === KEYCODE_TAB);

    if (!isTabPressed) {
      return;
    }

    if (e.shiftKey) /* shift + tab */ {
      if (document.activeElement === firstFocusableEl) {
        lastFocusableEl.focus();
        e.preventDefault();
      }
    } else /* tab */ {
      if (document.activeElement === lastFocusableEl) {
        firstFocusableEl.focus();
        e.preventDefault();
      }
    }
  });
}

In this function, we have moved the check for tab to its own variable (thanks Job), so that we can stop function execution right there.

Further reading

See also Trapping focus inside the dialog by allyjs, Dialog (non-modal) in WAI-ARIA Authoring Practices and Can a modal dialog be made to work properly for screen-reader users on the web? by Everett Zufelt.


Originally posted as Using JavaScript to trap focus in an element on Hidde's blog.


On initialising JavaScript from markup

In Don’t initialise JavaScript automagically, Adam Silver argues that we should not rely on markup to trigger bits of JavaScript. I disagree with the advice he gives, and think markup is a great place to trigger behaviour. Where else?

A quick disclaimer up front: I believe that when it comes to how to trigger JavaScript, there is no good or bad. There is personal preference though, and mine is a markup-based approach.

In this post I will go into three of Adam’s concerns with that approach. They are about what behaviour is being triggered, how many times and whether hiding complexity is a problem. His post does not go into any non-markup-based initialisation methods, so I can only guess what such methods would involve.

What you’re triggering

With markup to trigger behaviour, Adam explains, it is harder to know what you’re initialising. I think this depends on how you declare the behaviour. Enforcing a solid naming approach is essential here.

This is an approach I like:

<a href="#section1" 
   data-handler="open-in-overlay">
Section 1</a>

Let’s assume a website consistently uses the data-handler attribute (see my earlier post about handlers), with a verb as its value that corresponds to a function name. In the codebase for that website, it will be trivial to see what you are initialising.

In the example, the initialised JavaScript will open a thing called “Section 1” in an overlay. The function name is pretty much self-documenting, and it will live wherever your team decided functions to live.
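As a sketch of how such a convention could be wired up (the registry and function names here are hypothetical, not from an actual codebase):

```javascript
// Hypothetical central registry: each data-handler value maps to a function.
var handlers = {
  'open-in-overlay': function (el) {
    // real code would open the element's target in an overlay;
    // a string is returned here purely for illustration
    return 'overlay:' + el;
  }
};

// Resolve a handler name to its function and run it;
// unknown names are ignored rather than throwing.
function dispatch(name, el) {
  var fn = handlers[name];
  return fn ? fn(el) : null;
}
```

An initialiser would then query for all [data-handler] elements and call dispatch with each element’s attribute value, keeping the lookup logic in one place.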

Another approach is the syntax ConditionerJS uses:

<a href="http://maps.google.com/?ll=51.741,3.822"
   data-module="ui/Map"
   data-conditions="media:{(min-width:40em)} and element:{was visible}"> ... </a>

Conditioner (it’s frizz-free) does not use verbs as a naming convention, but expects module names in a data-module attribute. All modules use the same convention for declaring they exist. Again, there is no confusion as to which behaviour is being triggered.

In the above examples agreed upon data attributes are used to declare behaviour. The reason why I think they are intuitive, is that they look very similar to standard HTML. Take this link:

<a href="#section1" 
   title="Section">
Section</a>

We could see this as a ‘link’ module that has two options set in its attributes: href tells the browser where it needs to take the user, title sets which text to display in a tooltip when the user hovers over the link.

Or this image:

<img src="temple.jpg" alt="Four monks in front of a temple" />

This ‘image’ module comes with two settings: src tells the browser where the image is, alt provides text that browsers can display (and assistive technologies can announce) if the image is unavailable. With srcset and sizes, even more attributes are available to trigger behaviour.

In summary: custom attributes are great to declare custom behaviour, because HTML already has lots of native attributes that declare native behaviour. This doesn’t mean we’re doing behaviour in HTML: we’re doing behaviour in JavaScript, and are merely using the markup to declare the nature and conditions of this behaviour.

How many times you’re triggering

When you trigger JavaScript for each occurrence of a class name or attribute, you do not know how many times you are triggering it, says Adam.

Correct, but is this a problem? This phenomenon is quite common in other parts of our work. In a component-based approach to markup, you also don’t know how many times a heading or a <p> is going to appear on a given page. In CSS, you also style elements without knowing on which pages they will exist (or not), or how many times.

Hiding complexity

[Defining behaviour in markup] is magical. […] Automagical loops obfuscate complexity. They make things appear simple when they’re not.

It’s unfair to label markup-based initialisation concepts ‘magical’, as if that is something we should avoid at all times. The label makes it sound like there is a simpler and more sensible alternative, but is there?

We are always going to be applying script to DOM elements, whether we define where scripts run in the scripts themselves, or in the markup they apply to. Neither is more or less magical.

Besides, if using classes or attributes to trigger behaviour is magical, is using classes to trigger CSS also magical?

For example:

h2 { color: red; }

Your browser ‘automagically’ applies a red colour to all the headings. Its algorithms for this are hidden away. That’s good. We can declare a colour, without knowing exactly how our declaration is going to be applied. This is very powerful. It is this simplicity that has helped so many people publish great stuff on the web.

The same goes for my HTML examples earlier. The fact that link title tooltips and alt texts are just attributes, makes them easier to use. The fact that the browser’s logic to ‘use’ them is obfuscated is a benefit, if anything.

Abstracting the complexities of initialisation away also helps to keep code DRY. There is one method for initialisation, which is testable and maintainable. Everything else is just suitable function names declared on well-structured HTML, which themselves are also testable and maintainable.

Conclusion

I think defining behaviour in markup is absolutely fine, and it is probably one of the most powerful ways to define behaviour out there. Looping through elements with specific attributes helps developers write DRY, maintainable and testable JavaScript code. Not looping through elements would do away with most of those advantages, without any clear wins.

There is something magical about how DOM structures work together with the styles and scripts that are applied to them, but that will always be the case, regardless of where our initialisation happens. I would be interested to see examples of approaches that do not use markup to infer what behaviour a script needs to apply where, but until then, I’ll be more than happy to use some of the above methods.


Originally posted as On initialising JavaScript from markup on Hidde's blog.


Leaving the Fronteers board: what happened

This week I resigned from the Fronteers board. I tweeted about it earlier, but didn’t express myself as carefully as I would have liked to do. In this post I hope to explain things properly.

TL;DR: Fronteers members democratically voted to hire an external company run by two former conference team members to take care of the logistics of its conference organisation. Personally it made sense for me to leave the board as a consequence. I think Fronteers has a bright future, and the decisions made can help make Fronteers more professional, and possibly more fair, as less pressure will be put on volunteers who spend too much time on Fronteers.

Also, there should be no confusion over this: this was a perfectly reasonable proposal, which was accepted by a majority of the Fronteers members who came out to vote. Personally, I feel the decision is bad for Fronteers as a multi-faceted organisation: meet-ups, workshops, conferences, a job board and lots of community outreach. It puts the conference and its organisers first by making that facet a paid job, and is bad for the other facets. I suspect fewer volunteers will choose to spend a lot of time on Fronteers for free. In a year’s time, I think Fronteers should probably review what today’s decision has meant for all of these facets.

The discussion was one between people with good intentions and people with good intentions. People who love Fronteers and people who love Fronteers.

My tweet

I tweeted this:

Yesterday I resigned from the @fronteers board after members voted in favour of a proposal that lets some volunteers charge for their work.

This tweet was phrased harsher than I intended. I did not mean any nastiness, but can see that it came across as such. I really should have used different words.

What happened is that people, who were previously volunteers, left the conference team. At the same time they put a proposal forward to Fronteers members, that offered conference organising services through their company. Where I said volunteers would get paid, the nuance is that the company of former volunteers would get paid.

I want to clarify that all of this happened through a fair member consultation in which all members were able to vote. There was no nepotism.

I was planning to leave at some point anyway, as it is almost 10 years since I first volunteered for Fronteers (at the first conference). I’ve been involved with most aspects of Fronteers since: more conferences, workshops, marketing, meetups, the board, finances, partnerships, et cetera. The situation brought me to do that a bit earlier and a bit more abruptly.

What happened at the members meeting

Fronteers members voted to hire an external company to take care of the logistics of its conference organisation.

The problem being solved

Fronteers can take a lot of its volunteers’ time. Some volunteers have literally spent 1-3 days a week, for years, working on stuff like the yearly conference. Evenings, weekends, but also lots of day time. Freelancers amongst them had to balance this with their clients’ work and give up billable time for Fronteers tasks. Employees amongst them required lenience from their employers. This has been the case for years and is far from ideal.

It has taken me a lot of time to see the above as a problem, as it all seemed so natural. I also realised that it was in fact a problem that affected me, too. Evenings, weekends, day time. Life/Fronteers balance is important, and it is up to individuals to make sure they maintain that balance.

The arguments in favour

  • The proposed solution was ready to execute as it was well defined, with plenty of room for Fronteers to negotiate whatever boundaries necessary.
  • The members who brought the proposal into vote had done years of volunteer work for Fronteers. To hire their company made perfect sense: they had experience specifically with Fronteers Conference, spent years organising it for free and contributed a lot to what it is now.
  • Their proposal was fair: the ballpark figures came to a price that would be hard to match on the market; many commercial conference organisers cost a lot more.
  • If Fronteers were to hire a company to help with logistics, that company could be held responsible and accountable for various aspects.
  • Fronteers already outsourced tasks to third parties, such as catering, WiFi and flight booking.
  • Finally, someone recognised The Problem and tried to do something about it.

The arguments against

  • The proposed solution only solved The Problem for some members. Others also spent evenings, weekends and day time.
  • This would be weird for Fronteers volunteers who have to work together with the supplier (weirder than working with a caterer, hotels or venues).
  • I suspected the proposed supplier would be hard to separate from the people behind the company, as they were former conference team members.
  • Fronteers Conference is the product of years of volunteer effort. Can anyone ever earn any money from that? (This argument regards the caterers, hotels, travel agents etc as fundamentally different third parties, as they are not run by former volunteers, which is both snarky to even mention and pretty much the case)
  • I suspected this would completely change how Fronteers works and possibly cause volunteers to leave.

One last point is that others in Fronteers history who have also faced The Problem are not going to be paid for their past efforts (this includes those who made this proposal). But this makes sense: none of these people acted as a separate supplier, and applying the new rules to volunteers retroactively would be overdoing it.

To address some of my concerns, I co-signed Arjan’s proposal, which asked the board to investigate the intricacies around paying all active Fronteers volunteers (financially, legally, ethically, etc.), i.e. those who spend a lot of time on Fronteers. If all active people were paid, there would be less weirdness. Unpaid people could still remain, and could feel happy to do a lot less work.

My personal context

To give you an idea of where I’m coming from, I would like to share some of my Fronteers history (feel free to skip ahead to the next section). After I was approached to volunteer at the first conference in 2008, I continued volunteering for every single conference until 2013. In the first few years I was a ‘runner’, in the last years I joined the actual team. I was given responsibilities like sponsorship, volunteers, marketing, visual design, speaker curation and even had the honour to be the chair in my last year.

I also took on other tasks within Fronteers: I briefly chaired the marketing and workshops teams, and after I joined the board in 2014, I became treasurer in 2015. The last year was one of my heaviest: I helped Fronteers transition to a new bookkeeper, organised the preparation of two sets of accounts (2014 and 2015), restarted the workshops team and organised four workshops, helped organise four meetups and kept in touch with various organisations for our new front-end community support program.

I’ve very much enjoyed (most of) this. I made many friends, had fun, learned loads and met many friendly Fronteers fans who appreciated the stuff Fronteers did for the community. Yet it did cost me a lot of time. Evenings, weekends, day time.

I resigned from the board immediately after the vote. I had spent about a week contemplating doing this in case of a vote in favour of the proposal. The reason? I accept the decision, but assumed the proposal would take time to work out, and foresaw a backlash that would take the board’s attention for a while. I decided that I did not want to spend my free time on dealing with the situation. I was looking forward to spending the last year of my 3-year board term organising workshops, meetups, community support and possibly the new website. The acceptance of both proposals radically changed the board’s priorities, and I could not see myself staying on while ignoring that. I saw an early end as the best way to restore my Fronteers/life balance.

What now?

Both the proposal to hire a supplier and the proposal to investigate paying Fronteers volunteers were accepted at the members meeting. The vote happened and this is now the direction in which Fronteers is moving. The board and others (like Peter-Paul Koch) have constructively started thinking about how to carry out the proposals. This will be a complex task, because there are a lot of questions about the details and conditions.

The 2017 conference will be fine too. It’s going to be the tenth year of Fronteers! The idea is that a team of volunteers will be assembled to organise the conference, and that they will be in charge of contracting Eventstack (and any details concerning that contract).

Personally, I committed to keep carrying out my treasurer tasks until the end of the year, including making sure the accounts for 2016 are prepared and transferring all I know to my successor. I will also continue chairing the workshop team, and hope to organise many more workshops in the new year. After, I hope to get some new hobbies.

Update: as of 1 April I have left the workshop team and am no longer involved in organising things for Fronteers.


Originally posted as Leaving the Fronteers board: what happened on Hidde's blog.
