So… after about 11 months, I’m still working on the dwayne.xyz / DwayneWeb redesign. I’m building a lot of parts at once, and one of those parts is actually a full analytics project. I’ve been using a self-hosted Fathom instance for the current version of the site for a while, but at some point I realized the self-hosted version is their “lite” version, while the real version that does all the stuff I want is only available as a paid version that’s hosted on their servers.[1]
Some of the things I want:
- Exact numbers of views and visitors per hour on all of my pages
- Percentages of users of different operating systems, browsers, screen sizes, and locations
- A public view count on my articles
- Accurate sponsorship views (per “version”) for website sponsors
- Exact numbers of queries/conversations with DwayneBot
I didn’t think I was gonna get all of this from third-party software, so I started building these features as another sub-project (Rust + SQLite) within the bigger “website redesign” project (a big Rust workspace) I’ve been working on all this time. Development was going pretty well until I started working on the User Agent string parsing I would have to do for item #2 on that list.
User Agent Strings…
Every browser sends a user agent string along with each request to a server. It contains a bunch of details about the device, operating system, and browser the client is using. There are a few articles around detailing the history of user agent strings, but long story short… they’re a mess:
And then Google built Chrome, and Chrome used Webkit, and it was like Safari, and wanted pages built for Safari, and so pretended to be Safari. And thus Chrome used WebKit, and pretended to be Safari, and WebKit pretended to be KHTML, and KHTML pretended to be Gecko, and all browsers pretended to be Mozilla, and Chrome called itself Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13, and the user agent string was a complete mess, and near useless, and everyone pretended to be everyone else, and confusion abounded.
But I figured at least they can still be parsed somewhat reliably to get the browser and OS names/versions. I added the user-agent-parser crate to my project and started testing with Safari on macOS 13, only to find… that the parsed result showed that I was on Mac OS X 10.15?
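Something like this minimal sketch reproduces what I was seeing (this isn’t my actual analytics code; it assumes the user-agent-parser crate plus the uap-core regexes.yaml file it loads its rules from, and the UA string below is the frozen one Safari sends):

```rust
use user_agent_parser::UserAgentParser;

fn main() {
    // The crate is regex-based and loads its rules from the uap-core
    // project's regexes.yaml file (the path here is an assumption).
    let parser = UserAgentParser::from_path("regexes.yaml").unwrap();

    // The frozen UA string Safari sends, even on macOS 13.
    let ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) \
              AppleWebKit/605.1.15 (KHTML, like Gecko) \
              Version/16.1 Safari/605.1.15";

    println!("browser: {:?}", parser.parse_product(ua));
    // Reports Mac OS X 10.15, no matter what the machine actually runs.
    println!("os:      {:?}", parser.parse_os(ua));
}
```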
I thought it was an issue with the crate at first, but after looking into it a bit, I found this WebKit bug report with the following comment:
The world apparently isn’t ready for macOS 11.0 in User Agents. Because of the depressing UA sniffing code on the web, let’s just report our current shipping release until things settle down.
Apparently, Firefox did the same thing. And then I found out Microsoft also decided not to report Windows 11 in the user agent either:
User-Agent strings won’t be updated to differentiate between Windows 11 and Windows 10. We don’t recommend using User-Agent strings to retrieve user agent data. Browsers that don’t support User-Agent Client Hints won’t be able to differentiate between Windows 11 and Windows 10.
So according to Microsoft, the only way to get accurate OS stats is to use HTTP Client Hints.
Client Hints?
Client Hints are apparently supposed to be the replacement for User Agent string parsing. If you want to use them in your web project, you have to have your web server send a client hints header (Accept-CH) specifying the data you want, and then have your JavaScript code query the browser for that information. That data includes stuff like:
- Device to pixel ratio
- Device architecture
- Amount of RAM
- Viewport width
- Approximate bandwidth of the connection to the server
- Browser and Platform information
It’s supposed to be safer, since the browser won’t send much of this info unless it’s requested, and the browser can also choose to never send some of the data if the user doesn’t want to.
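Since my analytics live on the server, the part of this that’s actually useful to me is the header flow: respond with Accept-CH once, and supporting browsers will attach the requested Sec-CH-UA-* headers to subsequent requests. Here’s a rough, framework-free sketch of what I’d store; the helper names (unquote, os_from_hints) are made up for illustration:

```rust
/// Response header asking the browser to start sending high-entropy hints.
/// (Low-entropy ones like Sec-CH-UA and Sec-CH-UA-Platform are sent by default.)
const ACCEPT_CH: (&str, &str) = (
    "Accept-CH",
    "Sec-CH-UA-Platform-Version, Sec-CH-UA-Arch, Sec-CH-UA-Full-Version-List",
);

/// Client-hint values arrive quoted (e.g. `"Windows"` or `"13.0.0"`),
/// so strip the quotes before storing anything.
fn unquote(value: &str) -> &str {
    value.trim_matches('"')
}

/// Turn the raw Sec-CH-UA-Platform and Sec-CH-UA-Platform-Version request
/// header values (if present) into one string for the analytics database.
fn os_from_hints(platform: Option<&str>, platform_version: Option<&str>) -> String {
    match (platform.map(unquote), platform_version.map(unquote)) {
        (Some(p), Some(v)) => format!("{p} {v}"),
        (Some(p), None) => p.to_owned(),
        _ => "unknown".to_owned(),
    }
}

fn main() {
    println!("send once: {}: {}", ACCEPT_CH.0, ACCEPT_CH.1);

    // What a Chromium browser on Windows 11 reportedly sends back
    // (platform versions of 13.0.0 and up correspond to Windows 11).
    let os = os_from_hints(Some("\"Windows\""), Some("\"13.0.0\""));
    println!("stored OS: {os}"); // Windows 13.0.0
}
```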
Cool.
So like with everything else in the frontend web-dev world, the next question is: which browsers actually support this?
Only Chromium-based browsers.[2] And when will either Safari or Firefox (both of which no longer accurately report operating system versions) support this? Who knows?
So it turns out I actually have no ability to determine how many users are using either Safari or Firefox on any macOS version after 10.15 (nor how many users are using Firefox on Windows 11). And there’s no way of knowing when I might be able to do so. And therefore my new analytics project will be inaccurate from the start and there’s no way around it right now.
Not cool.
I mean, I completely respect the privacy angle, since some[3] of the reasoning for all of this is to avoid tracking/fingerprinting based on the specific details of your hardware and software. I don’t think people should be tracked across the web, so it’s good that it’s harder for servers to get this info. But it would have been nice if Apple and Mozilla actually had some kind of replacement (or even just a general timeframe for a replacement) before freezing the User Agent strings like this.
[1] I don’t really have a problem with Fathom making money like this, but I’m building out all my own infrastructure. I want my data on my own servers. Also, it feels like there’s way too much of a difference between the paid and “lite” versions.
[2] And not even Brave, which is based on Chromium but had Client Hint support removed.
[3] Although some of the reasoning is also apparently just “some things on the web will break if we update the version numbers”.
There is a method for identifying browser/platform that is much more reliable than UA/CH headers: browser fingerprinting via feature testing. One need not look much further than the MDN compatibility tables to devise a method of detecting specific browsers. I’m willing to bet there are multiple bits of information unique to Chromium on Windows 11, too.
See: https://coveryourtracks.eff.org/
The EFF’s Panopticlick is open source, and the anti-fingerprinting code of browsers is open source. There are likely many relevant projects easily discoverable on GitHub as well. For proprietary methods, spending some time reverse engineering WAFs such as Cloudflare’s can’t hurt. Cloudflare goes as far as weighing low-level networking protocol parameters into its bot score rating system, i.e., TCP and TLS properties such as cipher parameters and protocol extensions (signature algorithms). The browser/platform/CPU combination of your system will result in a different TLS configuration, and Cloudflare does a good job of using AI to learn/filter its traffic.
Some random site (https://amiunique.org/fp) correctly identifies the Linux kernel version I’m using, etc., etc. I also think Nmap deserves a mention.
There’s more than enough information to correctly identify the browser/platform. There may very well be enough information to make fingerprinting more reliable than IP addresses.