Reading List

The most recent articles from a list of feeds I subscribe to.

The accessibility tree

At this month’s Accessible Bristol, Léonie Watson talked about improving accessibility by making use of WAI-ARIA in HTML, JavaScript, SVG and Web Components.

The talk started with the distinction between the DOM tree, which I assume all front-end developers will have heard of, and the accessibility tree. I had a vague, intuitive idea of the accessibility tree but, to be honest, I did not know it literally existed.

The accessibility tree and the DOM tree are parallel structures. Roughly speaking the accessibility tree is a subset of the DOM tree.
W3C on tree types

The accessibility tree contains ‘accessibility objects’, which are based on DOM elements. But, and this is the important bit, only on those DOM elements that expose something. Properties, relationships or features are examples of things that DOM elements can expose. Some elements expose something by default, like <button>s; others, like <span>, don’t.

Every time you add HTML to a page, think ‘what does this expose to the accessibility tree?’

This could perhaps lead us to a new approach to building accessible web pages: every time you add HTML to a page, think ‘what does this expose to the accessibility tree?’.

Exposing properties, relationships, features and states to the accessibility tree

An example:

<label for="foo">Label</label>
<input type="checkbox" id="foo" />

In this example, various things are being exposed: the property that ‘Label’ is a label, the relationship that it is a label for the checkbox foo, the feature of checkability and the state of the checkbox (checked or unchecked). All this info is added into the accessibility tree.

Exposing to the accessibility tree benefits your users, and it comes for free with the above HTML elements. It is simply built into most browsers; as a developer, you do not need to add anything else.

Non-exposure and how to fix it

There are other elements, like <span> and <div>, that are considered neutral in meaning, i.e. they don’t expose anything. With CSS and JavaScript, it is possible to explain what they do, by adding styling and behaviour.

<!-- this is not a good idea -->
<span class="button">I look like a button</span>

With CSS and JavaScript, it can be made to look and behave like a button (and many front-end frameworks do). But these looks and behaviours can only be accessed by some of your users. It does not expose nearly enough. As it is still a <span> element, browsers will assume its meaning is neutral, and nothing is exposed to the accessibility tree.

Léonie explained that this can be fixed by complementing our non-exposing mark-up manually, using role and other ARIA attributes, and the tabindex attribute.

<!-- not great, but at least it exposes to the accessibility tree -->
<span class="button" role="button">I am a bit more button</span>
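
The role attribute exposes the button nature, but the span is still not focusable or operable from the keyboard; that is where tabindex and a little scripting come in. A rough sketch of the idea (the key handling below is my illustration, not code from the talk):

<!-- still not as good as a real <button>, but at least focusable and exposed -->
<span class="button" role="button" tabindex="0">I am almost a button</span>

<script>
  // a real <button> responds to Enter and Space for free; a span needs it scripted
  var fakeButton = document.querySelector('[role="button"]');
  fakeButton.addEventListener('keydown', function (event) {
    if (event.key === 'Enter' || event.key === ' ' || event.key === 'Spacebar') {
      event.preventDefault();
      fakeButton.click(); // run whatever the click handler does
    }
  });
</script>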

For more examples of which ARIA attributes exist and how they can be used, I must refer you to elsewhere on the internet — a lot has been written about the subject.

Exposing from SVG

With SVG becoming more and more popular for all kinds of uses, it is good to know that ARIA can be used in SVG. SVG 1.1 does not support it officially, but SVG 2.0, which is being worked on at the moment, will have it built in.

SVG elements do not have a lot of means to expose what they are in a non-visual way (besides <title> and <desc>), and this can be improved with ARIA labels. Léonie gave some examples, and wrote a blog post about SVG accessibility which I can highly recommend.
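
As a small illustration of the kind of thing that is possible (a sketch of my own, not an example from the talk), an SVG graphic can be given an accessible name by pointing aria-labelledby at its <title>:

<svg role="img" aria-labelledby="chart-title" viewBox="0 0 100 100">
  <title id="chart-title">Visitors per month in 2015</title>
  <circle cx="50" cy="50" r="40" fill="currentColor" />
</svg>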

Exposing from Web Components

Web Components, without going into too much detail, are custom HTML elements that can be created from JavaScript. For example, if <button> does not suit your use-case, you can create a <my-custom-button> element.

At the moment, there are two ways to create a new element:

  • an element based on an existing element (i.e. it is like an existing element, but with some additions). It inherits its properties/roles from the existing element where possible
  • a completely new element. This inherits no properties/roles, so those will have to be added by the developer that creates the element.

The first method was recently nominated to be dropped from the specification (see also Bruce Lawson’s article about why he considers this a violation of HTML’s priority of constituencies, as well as the many comments on both sides of the debate).

Again, like <span>s used as buttons and like SVG, these custom elements, especially those not inherited from existing elements, scream for more properties and relationships to be exposed to the accessibility tree. There is more info on how to do that in the Semantics section of the spec.
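
To give an idea of what adding them yourself could look like, here is a minimal sketch that sets a role and tabindex from the element’s own code, assuming the Custom Elements JavaScript API (the element name is made up):

class MyCustomButton extends HTMLElement {
  connectedCallback() {
    // nothing is inherited from <button>, so expose the semantics manually
    if (!this.hasAttribute('role')) this.setAttribute('role', 'button');
    if (!this.hasAttribute('tabindex')) this.setAttribute('tabindex', '0');
  }
}
customElements.define('my-custom-button', MyCustomButton);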

“Code like you give a damn”

Léonie ended her talk on a positive note: to build accessible websites, there is no need to compromise on functionality or design, as long as you “code like you give a damn”. This showed in her examples: many elements, like <button>, expose to the accessibility tree and come with some accessibility features for free. And even if you use elements that do not come with built-in accessibility features, you can still use ARIA to expose useful information about them to the accessibility tree.

Personally, I think the best strategy is to always use elements for what they are meant for (e.g. I would prefer using a <button type="button"> for a button to using a <span> supplemented with ARIA). In other words: I think it would be best to write meaningful HTML where possible, and resort to ARIA only to improve those things that don’t already have meaning (like SVG, Web Components or even the <span>s output by the front-end framework you may be using).


Originally posted as The accessibility tree on Hidde's blog.

Reply via email

The web is fast by default, let’s keep it fast

A major web corporation recently started serving simplified versions of websites to load them faster¹. Solving which problem? The slowness of modern websites. I think this slowness is optional, and that it can and should be avoided. The web is fast by default, let’s keep it fast.

Why are websites slow?

There are many best practices for web performance. There are Google’s Pagespeed Insights criteria, like server side compression, browser caching, minification and image optimisation. Yahoo’s list of rules is also still a great resource.

If there is so much knowledge out there on how to make super fast websites, why aren’t many websites super fast? I think the answer is that slowness sneaks into websites because of decisions made by designers, developers or the business. Despite best intentions, things that look like great ideas in the meeting room can end up making the website ‘obese’ and therefore slow for end users.² Usually this is not the fault of one person.³ All sorts of decisions can quietly cause seconds of load time to be added:

  • A designer can include many photos or typefaces, as part of the design strategy and add seconds
  • A developer can use CSS or JavaScript frameworks that aren’t absolutely necessary and add seconds
  • A product owner can require social media widgets to be added to increase brand engagement and add seconds

Photos and typefaces can make a website much livelier, frameworks can be incredibly useful in a developer’s workflow and social media widgets have the potential to help users engage. Still, these are all things that can introduce slowness, because they use bytes and requests.

An example: an airline check-in page

To start a check-in, this is the user input KLM needs in step one:

  • ticket number
  • flight number

[Image: KLM’s check-in page] What the check-in page looks like

For brevity, let’s just focus on this first step and see how it is done. To submit the above details to KLM, we would expect to be presented with a form with a field for each. However, at the time of writing, the check-in page of KLM.com presents us with:

  • 151 requests
  • 1.3 MB transferred
  • lots of images, including photos of Hawaii, London, Rome, palm trees and phone boxes, various KLM logos, icon sprites and even a spacer gif
  • over 20 JavaScript files

From the looks of it, more bytes are being transferred than absolutely necessary. Let me emphasise, by the way, that this thing is cleverly put together. Lots of optimising was done and we should not forget there are many, many things this business needs this website to do.

On a good connection, it is ready for action within seconds, and provides a fairly smooth UX. On GPRS however, it took me over 30 seconds to see any content, and 1 minute and 30 seconds to get to the form I needed to enter my details in. This excludes mobile network instabilities. On 3G, I measured 6 and 10 seconds respectively.

Is there a different way? I mean… what if we JUST use plain HTML? As a thought experiment, not as a best practice or something we would realistically get signed off.

Constructed with just HTML, the page weighs just over 2kb and takes 0.2 seconds to load on GPRS. This is what it would look like:

[Image: a check-in page built with just HTML] Just HTML. Alas, it looks boring, but it is extremely fast
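
The markup behind such a page could be roughly this (the field names and form action are made up; KLM’s actual form will differ):

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Check in</title>
  </head>
  <body>
    <h1>Check in</h1>
    <form method="post" action="/check-in">
      <label for="ticket">Ticket number</label>
      <input type="text" id="ticket" name="ticket">
      <label for="flight">Flight number</label>
      <input type="text" id="flight" name="flight">
      <button type="submit">Start check-in</button>
    </form>
  </body>
</html>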

This is the web in its default state, and it is very fast. No added styles or interactions, just HTML. I’m not arguing we should serve plain HTML, but we should realise that everything that is added to this will make the page slower. There is plenty one can add before worrying about slowness: adding a logo, colours and maybe even a custom font can still leave us with a very fast page.

Designing a fast website does not have to be boring at all. As Mark Skinner of cxpartners said, “it’s about being aware of the performance cost of things you add and being deliberate with your choices”.

Performance budgets to the rescue

In a world where speed is the most important aspect of a website, the web can be incredibly fast (plain looking, but incredibly fast). If branding and business needs are taken into account, websites will be slower. As long as everyone involved is aware of that, this is a challenge more than it is a problem.

The key question here is how much slowness we find acceptable. Just like a family can set a budget for how much spending they find acceptable, a web team can set a budget for how much slowness it finds acceptable. Performance budgets (see Tim Kadlec’s post about this) are a great method for this.

The idea of a performance budget, as Tim explains, is just what it sounds like: ‘you set a “budget” on your page and do not allow the page to exceed that.’ Performance budgets can be based on all kinds of metrics: load time in seconds, or perhaps page size in (kilo)bytes or number of HTTP requests. They make sure everyone is aware of performance, and literally set a limit to slowness.

Agencies like Clearleft and the BBC (‘make the website usable [on GPRS] within 10 seconds’) have shared their experiences with using performance budgets. An advantage of setting a budget, they say, is that it ensures performance is part of the conversation. With the budget, avoiding additions that cause slowness no longer depends on a single developer; it becomes a team’s commitment. With the Grunt plugin it can even be part of the build workflow, and out-of-budget additions would, in theory, never make it to production.
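
To sketch the idea of a budget check in a build (this is not the Grunt plugin’s actual configuration; the file names and budget figure are made up):

// budget-check.js: fail the build when the built assets exceed the agreed budget
var fs = require('fs');

var BUDGET_BYTES = 300 * 1024; // the team’s agreed budget, e.g. 300 kB of CSS and JavaScript
var files = ['dist/styles.css', 'dist/bundle.js']; // the build output to measure

var total = files.reduce(function (sum, file) {
  return sum + fs.statSync(file).size;
}, 0);

if (total > BUDGET_BYTES) {
  console.error('Over budget: ' + total + ' bytes (budget: ' + BUDGET_BYTES + ' bytes)');
  process.exit(1); // an out-of-budget build never makes it to production
}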

Conclusion

The web is not slow by default, but still, many websites have ended up being slow. Apart from the best practices that can often be automated, there are many human decisions that have an impact on page speed. A way to make page speed part of the conversation, and optimising it part of a website’s requirements, is to set a performance budget.

Update 02/07/2015: Someone pointed me at this page that provides a lighter check-in option

Notes

1 Or as Benjamin Reid tweeted: ‘Facebook doesn’t think the web is slow. That’s their “legitimate” reason to stop users navigating off facebook.com. It looks better for them to say that rather than “we’re opening external websites in popups so that you don’t leave our website”.’

2 As Jason Grigsby said

3 As Maaike said (in Dutch)


Originally posted as The web is fast by default, let’s keep it fast on Hidde's blog.

Reply via email

Solving problems with CSS

Writing CSS is, much like the rest of web development, about solving problems. There’s an idea for laying out content: the front-end developer comes up with a way to get the content marked up, and she writes some CSS rules to turn the marked-up content into the lay-out as it was intended. Simple enough, until we decide to outsource our abstraction needs.

TL;DR: I think there is a tendency to add solutions to code bases before carefully considering the problems they solve. Abstractions and the tooling that makes them possible are awesome, but they do not come without some problems.

When going from paper, Photoshop, Illustrator or whatever into the browser, we use HTML, CSS and JavaScript to turn ideas about content, design and behaviour into some sort of system. We solve our problems by making abstractions. We don’t style specific page elements, we style classes of page elements. Some abstract more from there on, and generate HTML with Haml templates, write CSS through Sass or LESS or use CoffeeScript to write JavaScript.

I work at agencies a lot, so in my daily work I see problems solved in lots of different ways. Some choose to enhance their workflow with preprocessors and build tools, some don’t. All fine, many roads lead to Rome. Often, specifics like grids or media query management (apparently that is a thing) are outsourced to a mixin or Sass function. With so many well-documented libraries, plug-ins and frameworks available, there’s a lot of choice.

What tools can do

Libraries like Bourbon for Sass have lots of utilities that solve specific problems. Their mixins can do things like:

@include linear-gradient($colour, $colour, $fallback, $fallback_colour);

which is transformed into a linear-gradient preceded by a -webkit prefixed version and a fallback background colour. Someone found it would be good to outsource typing the fallback background colour, the prefixed one and then the actual one. So they came up with a solution that involves typing the gradient colours and the fallback colour as arguments of a mixin.
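
In hand-written CSS, the generated output would look roughly like this (the selector and colours are illustrative):

.promo {
  background-color: #c0392b;                                   /* the fallback colour */
  background-image: -webkit-linear-gradient(#e74c3c, #c0392b); /* the -webkit prefixed version */
  background-image: linear-gradient(#e74c3c, #c0392b);         /* the actual gradient */
}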

This kind of abstraction is great, because it can ensure all linear-gradient values in the project are composed in the same, consistent way. Magically, it does all our linear gradienting for us. This has many advantages and it has become a common practice in recent years.

So, let’s import all the abstractions into our project! Or shouldn’t we?

Outsourcing automation can do harm

Outsourcing the automation of our CSS code can be harmful, as it introduces new vocabulary, potentially imposes solutions on our codebase that solve problems we may not have, and makes complex rules look deceptively simple.

It introduces a new vocabulary

The first problem is that we are replacing a vocabulary that all front-end developers know (CSS) with one that is specific to our project or a framework. We need to learn a new way to write linear-gradient. And, more importantly, a new way to write all the other CSS properties that the library contains a mixin for (Bourbon comes with 24 at the time of writing). Quite simple syntax in most cases, but it is new nonetheless. Sometimes it looks like its CSS counterpart, but it accepts different arguments. The more mixin frameworks in a project, the more new syntax to learn. Do we want to raise the requirement from ‘know CSS’ to ‘know CSS and the list of project-related functions to generate it’? This can escalate quickly. New vocabulary is potentially harmful, as it requires new team members to learn it.

It solves problems we may not have

A great advantage of abstracting things like linear-gradient into a linear-gradient mixin is that you only need to make changes once. One change in the abstraction, and all linear-gradients throughout the code will be outputted differently. Whilst this is true, we should not forget to consider which problem this solves. Will we reasonably often need to change our linear-gradient output? It is quite unlikely the W3C decides linear-gradient is to be renamed to linear-grodient. And if this were to happen, would I be crazy if I suggested Find and Replace to do this one-time change? Should we insist on having abstractions to deal with changes that are unlikely to happen? Admittedly, heavy fluctuation in naming has happened before (looking at you, Flexbox), but I would call this an exception, not something of enough worry to justify an extra abstraction layer.

Doesn’t abstracting the addition of CSS properties like linear-gradient qualify as overengineering?

It makes complex CSS rules look simple

Paraphrasing something I’ve overheard a couple of times in the past year: “if we use this mixin [from a library], our problem is solved automatically”. But if we are honest, there is no such thing.

Imagine a mathematician shows us this example of his add-function: add(5,2). We would not need any argument over the internals of the function to understand it will yield 7 (unless we are Saul Kripke). Adding 5 + 2 yields 7.

Now imagine a front-end developer showing us their grid function: grid(700,10,5,true). As a curious fellow front-end person, I would have lots of questions about the function’s internals: are we floating, inline-blocking, do we use percentages, min-widths, max-widths, which box model have we set, what’s happening?

Until CSS Grids are well supported, we can’t do grids in CSS. Yet we can: we have many ways to organise content grid-wise by using floats, display modes, tables or flexible boxes. Technically they are all ‘hacks’, and they have their issues: floats need clearing, inline-block elements come with spaces, tables aren’t meant for lay-out, etc. There is no good or bad, really. An experienced front-end developer will be able to tell which solution has which impact, and that is an important part of the job.
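
To make that concrete: whatever a grid(700,10,5,true) call generates, its output is still one of those ‘hacks’. A float-based column, for example, boils down to something like this (class names and percentages are illustrative):

.row::after {
  /* floats need clearing, so the generated code includes a clearfix */
  content: "";
  display: table;
  clear: both;
}

.column {
  float: left;
  box-sizing: border-box;
  width: 30%;       /* some percentage derived from the mixin’s arguments */
  margin-right: 5%; /* the gutter */
}

.column:last-child {
  margin-right: 0;
}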

CSS problems are often solved with clever combinations of CSS properties. Putting these in a black box can make complex things look simple, but it will not make the actual combination of CSS properties being used less complex. The solution will still be the same complex mix of CSS properties. And solving bugs requires knowledge of that mix.

Solve the problem first

When we use a mixin, we abstract the pros and cons of one solution into a thing that can just do its magic once it is included. Every time the problem exists in the project, the same magic is applied to it. Given a nice and general function name is used, we can then adjust what the magic is whenever we like. This is potentially super powerful.

All I’m saying is: I think we add abstractions to our projects too soon, and make things more complex than necessary. We often overengineer CSS solutions. My proposal would be to solve a problem first, then think about whether to abstract the solution and then about whether to use a third-party abstraction. To solve CSS problems, it is much more important to understand the spec and how browsers implemented it, than which abstraction to use (see also Confessions of a CSS expert). An abstraction can be helpful and powerful, but it is just a tool.


Originally posted as Solving problems with CSS on Hidde's blog.

Reply via email

Making our websites even more mobile friendly

From 21 April, Google will start preferring sites it considers “mobile friendly”. The criteria are listed in a blog post and Google provide a tool for webmasters to check whether they deem websites eligible for the label.

First, we should define what we are trying to be friendly for here. What’s the ‘mobile’ in ‘mobile friendly’? Is it people on small screen widths, people on the move, people with a slow connection? The latter is interesting, as there are people using their fast home WiFi on a mobile device on the couch or loo. There are also people who access painfully slow hotel WiFi on their laptops.

Assumptions are pretty much useless for making decisions about what someone is doing on your site. Mobile, as argued before by many, is not a context that helps us make design decisions. We’ll accept that for the purpose of this article, and think of mobile as a worst-case scenario that we can implement technical improvements for: people on the go, using small devices on unstable connections.

Great principles to start with

So what are Google’s criteria for mobile friendliness? According to their blog post, they consider a website mobile friendly if it:

  • Avoids software that is not common on mobile devices, like Flash
  • Uses text that is readable without zooming
  • Sizes content to the screen so users don’t have to scroll horizontally or zoom
  • Places links far enough apart so that the correct one can be easily tapped
    (Source)

Websites that stick to these criteria will be much friendlier to mobile users than websites that don’t. Still, do they cover all it takes to be mobile friendly?

It passes heavy websites, too

At Fronteers, we celebrated April Fool’s by announcing, tongue-in-cheek, that we went all Single Page App. We included as many JavaScript libraries into the website as we could, without ever actually making use of them. It made the page much heavier in size. We also wrapped all of our markup in elements, with no actual JavaScript code to fetch and render content. The site became non-functional for users with JavaScript on. External scripts were included in the <head> of the page, so that their requests would block the page from rendering. With JavaScript turned off, the site was usable as usual; with JavaScript on, all the user saw was an animated spinner gif.

[Image: screenshot of the Fronteers website] All the user saw was an animated spinner gif

Probably not particularly funny, but it showed a problem that many modern websites have: loading content is made dependent on loading JavaScript, and lots of it. This is not user friendly, and certainly not friendly to users on wobbly mobile connections. In our extreme case, the content was made inaccessible. Still, as Krijn Hoetmer pointed out on Twitter this morning, the joke — still live on the page announcing it (view the source) — passed Google’s test for mobile friendliness:

Nope, Google, a media query doesn’t make a site “Mobile-friendly”. Your algo needs some love: google.com/search?q=site:… pic.twitter.com/GPgEfkpKLV

I think he is right, the algorithm could be improved. This is not trivial, because the algorithm has no way to find out if we intended to do anything more than display a spinner. Maybe we really required all the frameworks.

The tweet inspired me to formulate some other criteria that are not currently part of Google’s algorithm, but are essential for mobile friendliness.

Even more friendly mobile sites

There are various additional things we can do:

Minimise resources to reach goals

Slow loading pages are particularly unfriendly to mobile users. Therefore, we should probably:

  • Be careful with web fonts
    Using web fonts instead of native fonts often causes blank text while the fonts are being requested (29% of page loads on Chrome for Android displayed blank text). This is even worse on mobile networks, as requests there often take longer. Possible warning: “Your website’s text was blank for x seconds due to web fonts”
  • Initially, only load code we absolutely need
    On bad mobile connections, every byte counts. Minimising render-blocking code is a great way to be friendlier to mobile users. Tools like Filament’s loadCSS / loadJS and Addy Osmani’s Critical are useful for this (see the sketch after this list). Possible warning: “You have loaded more than 300 lines of JavaScript, is it absolutely needed before content display?”
  • Don’t load large images on small screens
    Possible warning: “Your website shows large images to small screens”
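
As a sketch of what ‘only load what is needed’ can look like in practice, critical CSS can be inlined and the rest loaded without blocking rendering, for example with Filament’s loadCSS (the file names are made up):

<style>
  /* critical, above-the-fold rules inlined here so the page can render immediately */
</style>

<!-- everything else is loaded without blocking rendering -->
<script src="/js/loadCSS.js"></script>
<script>loadCSS('/css/site.css');</script>
<noscript><link rel="stylesheet" href="/css/site.css"></noscript>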

Google has fantastic resources for measuring page speed. Its tool Pagespeed Insights is particularly helpful to determine if you are optimising enough. A reason not to include it in the “mobile friendly” algorithm is that speed is arguably not a mobile-specific issue.

Use enough contrast

If someone wants to read your content in their garden, on the window seat of a moving train or anywhere with plenty of light, they require contrast in the colours used. The same applies to users with low-brightness screens, and those with ultramodern, bright screens who turned down their brightness to save battery.

Using grey text on a white background can make content much harder to read on small screens. Added benefit for using plenty of contrast: it is also a good accessibility practice.

Again, this is probably not mobile specific; more contrast helps many others.

Attach our JavaScript handlers to ‘mobile’ events

Peter-Paul Koch dedicated a chapter of his Mobile Web Handbook to touch and pointer events. Most mobile browsers have mouse events for backwards compatibility reasons (MWH, 148), he explains. That is to say, many websites check for mouse clicks in their scripts, and mobile browser makers decided not to wait for all those developers to also check for touch behaviour. Instead, they implemented a touch version of the mouse click behaviour, so that click events also fire on touch.

Most mobile browsers listen to touch events, like touchstart or gesturechange (iOS). So maybe we could measure mobile friendliness by checking if such events are listened to? This is a tricky one, because a script that only listens to click events can be mobile friendly, because of backwards compatibility.

Depending on the functionality, JavaScript interactions can be improved by making use of mobile-specific events. Carousels (if you really need one) are a good example of this: adding some swipe events to your script can make it much more mobile friendly. Or actually, like the two above, it would make it friendlier for all users, including those with non-mobile touch screens.
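
A rough sketch of what adding swipe support could look like (the selector, threshold and slide functions are made up; existing click handlers stay in place for mouse and keyboard users):

var carousel = document.querySelector('.carousel');
var startX = null;

carousel.addEventListener('touchstart', function (event) {
  startX = event.changedTouches[0].clientX;
});

carousel.addEventListener('touchend', function (event) {
  if (startX === null) return;
  var deltaX = event.changedTouches[0].clientX - startX;
  if (deltaX > 50) showPreviousSlide();  // hypothetical: swipe right goes back
  if (deltaX < -50) showNextSlide();     // hypothetical: swipe left goes forward
  startX = null;
});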

Even more friendly sites

Page speed, colour contrast and touch events may not be mobile specific, but I would say that goes for all four of Google’s criteria too. Legible text helps all users. Tappable text also helps users of non-mobile touch screens.

If we are talking about improving just for mobile, Google’s criteria are a great start. But we need to work harder to achieve more mobile friendliness. Above, I’ve suggested some other things we should do. Some can be measured, like improving load times and providing enough contrast. These things are friendly to all users, including mobile ones. Other things will have to be decided case by case, like including ‘mobile’ events in JavaScript, but probably also how much of our CSS or JavaScript is critical. As always, it depends.


Originally posted as Making our websites even more mobile friendly on Hidde's blog.

Reply via email

Switching to HTTPS

From yesterday, this website is only available via HTTPS. In this post I will explain my reasons for making the switch, and how I did it at Webfaction.

HTTP with SSL

HTTPS stands for HyperText Transfer Protocol with Secure Sockets Layer, or HTTP with SSL for short. Data over HTTP is sent as plain text. Data over HTTPS is encrypted before it is sent off, and not decrypted until it arrives at the user.

HTTPS is great and a must-use for websites that deal with sensitive data, e.g. credit card details, because sending such data in plain text would make it trivial to intercept. With HTTPS, the data is encrypted, and thus safe from hackers, at least in theory.

The certificates that are required to serve your website over HTTPS are not expensive (starting at £10 per year), and there are various initiatives that aim to make it easier or free. Let’s Encrypt, sponsored by Mozilla and the EFF, amongst others, looks promising. What’s more, you used to need a dedicated IP address, but if your server supports SNI, this is no longer required. Note that not all browsers support the latter (notably IE6 and older versions of Android).

Why use it on this website?

“Your website contains just some blog posts”, I hear you say, “so why would you encrypt it?” Even for sites that do not take sensitive data like payment details, there are additional benefits of HTTPS over HTTP. The most important one is content integrity.

Content integrity

The user can be more certain that the content is genuine. On a news website, we want to be sure the news stories were not meddled with. There are quite a number of countries where the state censors the news by default. If, in such countries, news websites send their content as plain text, it is much easier for the government to change what is being served, e.g. replace all instances of “war” with “peace”.

I am the last to pretend anyone would want to alter information on my website, but for the sake of argument: even on this website content integrity could be an issue. Serving over HTTPS makes sure that the phone number on my contact page is not replaced by that of someone with malicious intentions.

Preventing ad insertion

Imagine you are on a WiFi hotspot. If the owner of the hotspot wants to earn some extra money, they could insert advertising into websites served via their internet connection (happens). This would not work on websites that are served over HTTPS, hence this website stays the way it was intended.

Protecting form data

If you want to comment below, I will require your name and email address. Imagine you leave a comment whilst on airport WiFi and someone in the lounge is bored and listens in for e-mail addresses going over the line. They could, if you submit the data over non-secure HTTP. With HTTPS on, we can be more sure that this data goes from you to me, and nowhere else.

Securing my back-end logins

The content on this website is managed by Perch. If I log in to add new content whilst on public WiFi, my (login) data will be encrypted. The same applies when I access my website’s analytics package, Piwik.

Other potential benefits

A personal benefit of serving my site over HTTPS is that I will now be able to experiment with new standards like Service Workers, which are only available for websites served over HTTPS.

Another benefit in the long term could be if search engines start to prefer sites that are served over HTTPS. Google has said they will. This makes serving over HTTPS more attractive to those interested in a higher search engine ranking. Yes, that’s right, serving your website over HTTPS optimises your SEO in search engines.

How I did it

To get a signed certificate for your domain and serve your content over HTTPS, these are the steps:

  1. Applying for a certificate
    Create a key on your web server, use it to generate a Certificate Signing Request (CSR) and give the CSR to a Certification Authority
  2. Uploading the certificate
    When the Certification Authority has been able to sign your certificate and has returned it to you, upload the certificate to your server
  3. Installing the certificate
    Install the certificate by enabling HTTPS on the server and reloading the server configuration

My host, Webfaction, was able to do the last step for me, so I only had to do the first two steps myself: applying for the certificate and uploading the signed certificate.

Applying for a certificate

Certificates are sold by many different providers; I bought mine at sslcertificaten.nl.

There are various types of certificates; they differ in how much is actually verified. The ones that are only domain-name verified seem to be the simplest, so I went for one of those. They can be as cheap as ten pounds per year. There are more complex ones that involve more thorough checks and will cost around a hundred pounds. Certificates are for one domain only, unless you buy one with a wildcard. Wildcard certificates are more expensive than single-domain certificates, but will work for anything.yourdomain.com.

In the purchase process, you will be asked to input a Certificate Signing Request (CSR). The easiest way to generate one is by opening an SSH session to your web server in your terminal:

ssh yourserver 

Then, create a key (replace ‘domainname’ with any name):

openssl genrsa -out domainname.key 2048

A key file domainname.key is created. Then create the signing request:

openssl req -new -key domainname.key -out domainname.csr

The command prompt will ask you for some information. When asked for a “common name”, enter the domain name that you need the certificate to work on (in my case: hiddedevries.nl).

Then, to copy the CSR, type this:

more domainname.csr

The code will appear in your Terminal window, ready to copy and paste.

Depending on where you buy your certificate, you will have to fill in your information to complete the purchasing process.

In the type of certificate I chose, the only verification step was done with an email from the Certification Authority to admin@mydomain. The e-mail contained a link that I, i.e. the person with access to the domain name’s admin email address, could click to approve the certificate request.

About 15 minutes after approving the certificate request, I was sent my certificate. For the certificate to function, they sent me two other certificates: a root certificate and an intermediate certificate.

Uploading the certificate

Using SFTP, I uploaded my certificate, the root certificate and the intermediate certificate, to the root of my web server.

Installing the certificate

To get the certificate installed, I opened a support ticket at Webfaction, as per their instructions. Fifteen minutes later I received an email stating that the certificate had been installed.

That’s about it

I have little experience of how it works at other hosting providers, but was quite pleased to see how quick it went at mine. Basically, I uploaded the files and got in touch with support, who then did the rest. Whether “the rest” is incredibly painful, or just a few simple steps, I am not sure (instructions for Apache).

Note that once it is set up, you should also make sure your assets (CSS, JavaScript, images) are served over HTTPS. If you use CDNs, use one that supports HTTPS. Likewise, if you use plug-ins that do stuff with URLs, make sure they know about the change (or use relative URLs).
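
For that last point, a quick sketch of what such URLs look like (the file names and CDN host are made up):

<!-- a scheme-relative URL inherits the page’s protocol, so it loads over HTTPS on an HTTPS page -->
<script src="//cdn.example.com/library.js"></script>

<!-- a relative URL stays on the current origin and protocol -->
<img src="/images/photo.jpg" alt="A photo">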


Originally posted as Switching to HTTPS on Hidde's blog.

Reply via email