Reading List
The most recent articles from a list of feeds I subscribe to.
Notes on using a single-person Mastodon server
I started using Mastodon back in November, and it’s the Twitter alternative where I’ve been spending most of my time recently, mostly because the Fediverse is where a lot of the Linux nerds seem to be right now.
I’ve found Mastodon quite a bit more confusing than Twitter because it’s a distributed system, so here are a few technical things I’ve learned about it over the last 10 months. I’ll mostly talk about what using a single-person server has been like for me, as well as a couple of notes about the API, DMs and ActivityPub.
I might have made some mistakes; please let me know if I’ve gotten anything wrong!
what’s a mastodon instance?
First: Mastodon is a decentralized collection of independently run servers instead of One Big Server. The software is open source.
In general, if you have an account on one server (like ruby.social), you can follow people on another server (like hachyderm.io), and they can follow you.
I’m going to use the terms “Mastodon server” and “Mastodon instance” interchangeably in this post.
on choosing a Mastodon instance
These were the things I was concerned about when choosing an instance:
- An instance name that I was comfortable being part of my online identity. For example, I probably wouldn’t want to be @b0rk@infosec.exchange because I’m not an infosec person.
- The server’s stability. Most servers are volunteer-run, and volunteer moderation work can be exhausting – will the server really be around in a few years? For example mastodon.technology and mastodon.lol shut down.
- The admins’ moderation policies.
- The server’s general reputation with other servers. I started out on mastodon.social, but some servers choose to block or limit mastodon.social for various reasons.
- The community: every Mastodon instance has a local timeline with all posts from users on that instance – would I be interested in reading the local timeline?
- Whether my account would be a burden for the admin of that server (since I have a lot of followers).
In the end, I chose to run my own mastodon server because it seemed simplest – I could pick a domain I liked, and I knew I’d definitely agree with the moderation decisions because I’d be in charge.
I’m not going to give server recommendations here, but here’s a list of the top 200 most common servers people who follow me use.
using your own domain
One big thing I wondered was – can I use my own domain (and have the username @b0rk@jvns.ca or something) but be on someone else’s Mastodon server?
The answer to this seems to be basically “no”: if you want to use your own domain on Mastodon, you need to run your own server. (You can kind of do this, but it’s more like an alias or redirect – if I used that method to direct b0rk@jvns.ca to b0rk@mastodon.social, my posts would still show up as being from b0rk@mastodon.social.)
There’s also other ActivityPub software (Takahē) that supports people bringing their own domain in a first-class way.
notes on having my own server
I really wanted to have a way to use my own domain name for identity, but to share server hosting costs with other people. This isn’t possible on Mastodon right now, so I decided to set up my own server instead.
I chose to run a Mastodon server (instead of some other ActivityPub implementation) because Mastodon is the most popular one. Good managed Mastodon hosting is readily available, there are tons of options for client apps, and I know for sure that my server will work well with other people’s servers.
I use masto.host for Mastodon hosting, and it’s been great so far. I have nothing interesting to say about what it’s like to operate a Mastodon instance because I know literally nothing about it. Masto.host handles all of the server administration and Mastodon updates, and I never think about it at all.
Right now I’m on their $19/month (“Star”) plan, but it’s possible I could use a smaller plan with no problems. Right now their cheapest plan is $6/month and I expect that would be fine for someone with a smaller account.
Some things I was worried about when embarking on my own Mastodon server:
- I wanted to run the server at social.jvns.ca, but I wanted my username to be b0rk@jvns.ca instead of b0rk@social.jvns.ca. To get this to work I followed these “Setting up a personal fediverse ID” directions from Jacob Kaplan-Moss and it’s been fine (there’s a sketch of how this works after this list).
- The administration burden of running my own server. I imported a small list of servers to block/defederate from but didn’t do anything else. That’s been fine.
- Reply and profile visibility. This has been annoying and we’ll talk about it next
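For the curious, the WebFinger trick from Jacob’s directions works roughly like this (a sketch – the details are in his post, and I’m assuming a redirect-based setup): when another server looks up b0rk@jvns.ca, it queries jvns.ca’s WebFinger endpoint, and jvns.ca just points that query at the real Mastodon server.
$ # other servers resolve b0rk@jvns.ca with a WebFinger query like this one...
$ curl -s 'https://jvns.ca/.well-known/webfinger?resource=acct:b0rk@jvns.ca'
$ # ...which jvns.ca can redirect to (or copy from) the actual Mastodon server:
$ curl -s 'https://social.jvns.ca/.well-known/webfinger?resource=acct:b0rk@social.jvns.ca'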
downsides to being on a single-person server
Being on a 1-person server has some significant downsides. To understand why, you need to understand a little about how Mastodon works.
Every Mastodon server has a database of posts. Servers only have posts that they were explicitly sent by another server in their database.
Some reasons that servers might receive posts:
- someone on the server follows a user
- a post mentions someone on the server
As a 1-person server, my server does not receive that many posts! I only get posts from people I follow or posts that explicitly mention me in some way.
This causes several problems:
- when I visit someone’s profile on Mastodon who I don’t already follow, my server will not fetch the profile’s content (it’ll fetch their profile picture, description, and pinned posts, but not any of their post history). So their profile appears as if they’ve never posted anything
- bad reply visibility: when I look at the replies to somebody else’s post (even if I follow them!), I don’t see all of the replies, only the ones which have made it to my server. If you want to understand the exact rules about who can see which replies (which are quite complicated!), here’s a great deep dive by Sebastian Jambor. I think it’s possible to end up in a state where no one person can see all of the replies, including the original poster.
- favourite and boost counts are inaccurate – usually posts show up having at most 1 or 2 favourites / boosts, even if the post was actually favourited or boosted hundreds of times. I think this is because my server only sees favourites/boosts from people I follow.
All of these things will happen to users of any small Mastodon server, not just 1-person servers.
bad reply visibility makes conversations harder
A lot of people are on smaller servers, so when they’re participating in a conversation, they can’t see all the replies to the post.
This means that replies can get pretty repetitive because people literally cannot see each other’s replies. This is especially annoying for posts that are popular or controversial, because the person who made the post has to keep reading similar replies over and over again by people who think they’re making the point for the first time.
To get around this (as a reader), you can click “open link to post” or something in your Mastodon client, which will open up the page on the poster’s server where you can read all of the replies. It’s pretty annoying though.
As a poster, I’ve tried to reduce repetitiveness in replies by:
- putting requests in my posts like “(no need to reply if you don’t remember, or if you’ve been using the command line comfortably for 15 years — this question isn’t for you :) )”
- occasionally editing my posts to include very common replies
- very occasionally deleting the post if it gets too out of hand
The Mastodon devs are extremely aware of these issues – there are a bunch of GitHub issues about them. My guess is that there are technical reasons these features are difficult to add, since those issues have been open for 5-7 years.
The Mastodon devs have said that they plan to improve reply fetching, but that it requires a significant amount of work.
some visibility workarounds
Some people have built workarounds for fetching profiles / replies.
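One manual workaround (a sketch – $TOKEN is an access token for your own server, and the post URL is made up): you can force your server to fetch a remote post or profile by “searching” for its URL with resolve=true, which is what pasting a URL into the search box does.
$ curl -s -H "Authorization: Bearer $TOKEN" \
    'https://social.jvns.ca/api/v2/search?q=https://example.social/@someone/110000000000000000&resolve=true'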
Also, there are a couple of Mastodon clients which will proactively fetch replies. For iOS:
- Mammoth does it automatically
- Mona will fetch posts if I click “load from remote server” manually
I haven’t tried those yet though.
other downsides of running your own server: discovery is much harder
Mastodon instances have a “local timeline” where you can see everything other people on the server are posting, and a “federated timeline” which shows a combined feed of posts from everyone followed by anyone on the server. This means you can see trending posts, get an idea of what’s going on, and find people to follow. You don’t get that if you’re on a 1-person server – it’s just me talking to myself! (plus occasional interjections from my reruns bot).
Some workarounds people mentioned for this:
- you can populate your federated timeline with posts from another instance by using a relay. I haven’t done this but someone else said they use FediBuzz and I might try it out.
- some mastodon clients (like apparently Moshidon on Android) let you follow other instances
If anyone else on small servers has suggestions for how to make discovery easier I’d love to hear them.
account migration
When I moved to my own server from mastodon.social, I needed to run an account migration to move over my followers. First, here’s how migration works:
- Account migration does not move over your posts. All of my posts stayed on my old account. This is part of why I moved to running my own server – I didn’t want to ever lose my posts a second time.
- Account migration does not move over the list of people you follow/mute/block. But you can import/export that list in your Mastodon settings so it’s not a big deal. If you follow private accounts they’ll have to re-approve your follow request.
- Account migration does move over your followers
The follower move was the part I was most worried about. Here’s how it turned out:
- over ~24 hours, most of my followers moved to the new account
- one or two servers did not get the message about the account migration for some reason, so about 2000 followers were “stuck” and didn’t migrate. I fixed this by waiting 30 days and re-running the account migration, which moved over most of the remaining followers. There’s also a tootctl command that the admin of the old instance can run to retry the migration
- about 200 of my followers never migrated over, I think because they’re using ActivityPub software other than Mastodon which doesn’t support account migration. You can see the old account here
using the Mastodon API is great
One thing I love about Mastodon is – it has an API that’s MUCH easier to use than Twitter’s API. I’ve always been frustrated with how difficult it is to navigate large Twitter threads, so I made a small mastodon thread view website that lets you log into your Mastodon account. It’s pretty janky and it’s really only made for me to use, but I’ve really appreciated the ability to write my own janky software to improve my Mastodon experience.
Some notes on the Mastodon API:
- You can build Mastodon client software totally on the frontend in Javascript, which is really cool.
- I couldn’t find a vanilla Javascript Mastodon client, so I wrote a crappy one
- API docs are here
- Here’s a tiny Python script I used to list all my Mastodon followers, which also serves as a simple example of how easy using the API is.
- The best documentation I could find for which OAuth scopes correspond to which API endpoints is this github issue
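To give a flavour of how simple the API is, here’s roughly what listing followers looks like with plain curl (a sketch – ACCOUNT_ID is a placeholder, and for public accounts you don’t even need a token):
$ # look up an account's ID
$ curl -s 'https://social.jvns.ca/api/v1/accounts/lookup?acct=b0rk'
$ # then page through its followers, up to 80 at a time
$ curl -s 'https://social.jvns.ca/api/v1/accounts/ACCOUNT_ID/followers?limit=80'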
Next I’ll talk about a few general things about Mastodon that confused or surprised me that aren’t specific to being on a single-person instance.
DMs are weird
The way Mastodon DMs work surprised me in a few ways:
- Technically DMs are just regular posts with visibility limited to the people mentioned in the post. This means that if you accidentally mention someone in a DM (“@x is such a jerk”), you might send the message to them by mistake
- DMs aren’t very private: the admins on the sending and receiving servers can technically read your DMs if they have access to the database. So they’re not appropriate for sensitive information.
- Turning off DMs is weird. Personally I don’t like receiving DMs from strangers – it’s too much to keep track of and I’d prefer that people email me. On Twitter, I can just turn it off and people won’t see an option to DM me. But on Mastodon, when I turn off notifications for DMs, anyone can still “DM” me, but the message will go into a black hole and I’ll never see it. I put a note in my profile about this.
defederation and limiting
There are a couple of different ways for a server to block another Mastodon server. I haven’t really had to do this much but people talk about it a lot and I was confused about the difference, so:
- A server can defederate from another server (this seems to be called suspend in the Mastodon docs). This means that nobody on the server can follow someone from the other server.
- A server can limit (also known as “silence”) a user or server. This means that content from that user is only visible to that user’s followers – people can’t discover the user through retweets (aka “boosts” on Mastodon).
One thing that wasn’t obvious to me is that the list of servers an instance defederates from or limits is sometimes hidden, so it’s hard to suss out what’s going on if you’re considering joining a server, or trying to understand why you can’t see certain posts.
there’s no search for posts
There’s no way to search past posts you’ve read. If I see something interesting on my timeline and want to find it later, I usually can’t. (Mastodon has an Elasticsearch-based search feature, but it only allows you to search your own posts, your mentions, your favourites, and your bookmarks)
These limitations on search are intentional (and a very common source of arguments) – it’s a privacy / safety issue. Here’s a summary from Tim Bray with lots of links.
It would be personally convenient for me to be able to search more easily but I respect folks’ safety concerns so I’ll leave it at that.
My understanding is that the Mastodon devs are planning to add opt-in search for public posts relatively soon.
other ActivityPub software
We’ve been talking about Mastodon a lot, but not everyone I follow is using Mastodon: Mastodon uses a protocol called ActivityPub to distribute messages, and other software can speak the same protocol.
Here are some examples of other software I see people talking about, in no particular order:
- Calckey
- Akkoma
- gotosocial
- Takahē
- writefreely
- pixelfed (for images)
I’m probably missing a bunch of important ones.
what’s the difference between Mastodon and other ActivityPub software?
This confused me for a while, and I’m still not super clear on how ActivityPub works. What I’ve understood is:
- ActivityPub is a protocol (you can explore how it works with blinry’s nice JSON explorer)
- Mastodon servers communicate with each other (and with other ActivityPub servers) using ActivityPub
- Mastodon clients communicate with their server using the Mastodon API, which is its own thing
- There’s also software like GoToSocial that aims to be compatible with the Mastodon API, so that you can use a Mastodon client with it
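One way to see the difference for yourself (a sketch – any Mastodon account would work here): the same account can be fetched over both protocols, and the two documents look quite different.
$ # ActivityPub (server-to-server): fetch the "actor" document
$ curl -s -H 'Accept: application/activity+json' 'https://social.jvns.ca/users/b0rk'
$ # Mastodon API (client-to-server): fetch the same account
$ curl -s 'https://social.jvns.ca/api/v1/accounts/lookup?acct=b0rk'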
more mastodon resources
- Fedi.Tips seems to be a great introduction
- I think you can still use FediFinder to find folks you followed on Twitter on Mastodon
- I’ve been using the Ivory client on iOS, but there are lots of great clients. Elk is an alternative web client that folks seem to like.
I haven’t written here about what Mastodon culture is like because other people have done a much better job of talking about it than me, but of course it’s the biggest thing that affects your experience and it was the thing that took me longest to get a handle on. A few links:
- Erin Kissane on frictions people run into when joining Mastodon
- Kyle Kingsbury wrote some great moderation guidelines for woof.group (note: woof.group is a LGBTQ+ leather instance, be prepared to see lots of NSFW posts if you visit it)
- Mekka Okereke writes lots of great posts about issues Black people encounter on Mastodon (though they’re all on Mastodon so it’s a little hard to navigate)
that’s all!
I don’t regret setting up a single-user server – even though it’s inconvenient, it’s important to me to have control over my social media. I think “have control over my social media” is more important to me than it is to most other people though, because I use Twitter/Mastodon a lot for work.
I am happy that I didn’t start out on a single-user server though – I think it would have made getting started on Mastodon a lot more difficult.
Mastodon is pretty rough around the edges sometimes but I’m able to have more interesting conversations about computers there than I am on Twitter (or Bluesky), so that’s where I’m staying for now.
What helps people get comfortable on the command line?
Sometimes I talk to friends who need to use the command line, but are intimidated by it. I never really feel like I have good advice (I’ve been using the command line for too long), and so I asked some people on Mastodon:
if you just stopped being scared of the command line in the last year or three — what helped you?
(no need to reply if you don’t remember, or if you’ve been using the command line comfortably for 15 years — this question isn’t for you :) )
This list is still a bit shorter than I would like, but I’m posting it in the hopes that I can collect some more answers. There obviously isn’t one single thing that works for everyone – different people take different paths.
I think there are three parts to getting comfortable: reducing risks, motivation and resources. I’ll start with risks, then a couple of motivations and then list some resources.
ways to reduce risk
A lot of people are (very rightfully!) concerned about accidentally doing some destructive action on the command line that they can’t undo.
A few strategies people said helped them reduce risks:
- regular backups (one person mentioned they accidentally deleted their entire home directory last week in a command line mishap, but it was okay because they had a backup)
- for code, using git as much as possible
- aliasing rm to a tool like safe-rm or rmtrash so that you can’t accidentally delete something you shouldn’t (or just rm -i)
- mostly avoiding wildcards and using tab completion instead (my shell will tab complete rm *.txt and show me exactly what it’s going to remove)
- fancy terminal prompts that tell you the current directory, the machine you’re on, the git branch, and whether you’re root
- making a copy of files if you’re planning to run an untested / dangerous command on them
- having a dedicated test machine (like a cheap old Linux computer or Raspberry Pi) for particularly dangerous testing, like testing backup software or partitioning
- using --dry-run options for dangerous commands, if they’re available
- building your own --dry-run options into your shell scripts (see below)
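That last one might look something like this – a minimal sketch of a homemade --dry-run flag, where the run wrapper and the commands are made up for illustration:
#!/bin/bash
# wrap every dangerous command in "run": it prints the command
# instead of executing it when --dry-run is passed
[ "$1" = "--dry-run" ] && DRY_RUN=1
run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}
run rm -rf ./build
run cp -r ./src ./build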
a “killer app”
A few people mentioned a “killer command line app” that motivated them to start spending more time on the command line. For example:
- ripgrep
- jq
- wget / curl
- git (some folks found they preferred the git CLI to using a GUI)
- ffmpeg (for video work)
- yt-dlp
- hard drive data recovery tools (from this great story)
A couple of people also mentioned getting frustrated with GUI tools (like heavy IDEs that use all your RAM and crash your computer) and being motivated to replace them with much lighter weight command line tools.
inspiring command line wizardry
One person mentioned being motivated by seeing cool stuff other people were doing with the command line, like:
- Command-line Tools can be 235x Faster than your Hadoop Cluster
- this “command-line chainsaw” talk by Gary Bernhardt
explain shell
Several people mentioned explainshell, where you can paste in any shell incantation and have it break the command down into its parts.
history, tab completion, etc
There were lots of little tips and tricks mentioned that make it a lot easier to work on the command line, like:
- up arrow to see the previous command
- Ctrl+R to search your bash history
- navigating inside a line with Ctrl+w (to delete a word), Ctrl+a (to go to the beginning of the line), Ctrl+e (to go to the end), and Ctrl+left arrow / Ctrl+right arrow (to jump back/forward a word)
- setting bash history to unlimited (see the example after this list)
- cd - to go back to the previous directory
- tab completion of filenames and command names
- learning how to use a pager like less to read man pages or other large text files (how to search, scroll, etc)
- backing up configuration files before editing them
- using pbcopy/pbpaste on Mac OS to copy/paste from your clipboard to stdout/stdin
- on Mac OS, you can drag a folder from the Finder into the terminal to get its path
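For example, “unlimited bash history” might look like this in a ~/.bashrc (a sketch – exact settings are a matter of taste):
# in bash, a negative HISTSIZE/HISTFILESIZE means "no limit"
export HISTSIZE=-1
export HISTFILESIZE=-1
shopt -s histappend   # append to the history file instead of overwriting it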
fzf
Lots of mentions of using fzf as a better way to fuzzy search shell history. Some other things people mentioned using fzf for:
- picking git branches: git checkout $(git for-each-ref --format='%(refname:short)' refs/heads/ | fzf)
- quickly finding files to edit: nvim $(fzf)
- switching kubernetes contexts: kubectl config use-context $(kubectl config get-contexts -o name | fzf --height=10 --prompt="Kubernetes Context> ")
- picking a specific test to run from a test suite
The general pattern here is that you use fzf to pick something (a file, a git branch, a command line argument), fzf prints the thing you picked to stdout, and then you insert that as the command line argument to another command.
You can also use fzf as a tool to automatically preview the output and quickly iterate, for example:
- automatically previewing jq output: echo '' | fzf --preview "jq {q} < YOURFILE.json"
- or for sed: echo '' | fzf --preview "sed {q} YOURFILE"
- or for awk: echo '' | fzf --preview "awk {q} YOURFILE"
You get the idea.
Folks will generally define an alias for their fzf incantations, so you can type gcb or something to quickly pick a git branch to check out.
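For example, a gcb alias for the branch-picking incantation above could go in your shell config like this (a sketch):
alias gcb='git checkout $(git for-each-ref --format="%(refname:short)" refs/heads/ | fzf)'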
raspberry pi
Some people started using a Raspberry Pi, where it’s safer to experiment without worrying about breaking your computer (you can just erase the SD card and start over!)
a fancy shell setup
Lots of people said they got more comfortable with the command line when they started using a more user-friendly shell setup like oh-my-zsh or fish. I really agree with this one – I’ve been using fish for 10 years and I love it.
A couple of other things you can do here:
- some folks said that making their terminal prettier helped them feel more comfortable (“make it pink!”).
- set up a fancy shell prompt to give you more information (for example you can make the prompt red when a command fails – there’s a sketch of this below). Specifically transient prompts (where you set a super fancy prompt for the current command, but a much simpler one for past commands) seem really nice.
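Here’s a minimal sketch of the “red prompt when a command fails” idea in plain bash (the prompt tools below do this for you, with much nicer output):
# PROMPT_COMMAND runs before each prompt; $? is the last command's exit code
PROMPT_COMMAND='if [ $? -eq 0 ]; then PS1="\[\e[32m\]\w \$\[\e[0m\] "; else PS1="\[\e[31m\]\w \$\[\e[0m\] "; fi'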
Some tools for theming your terminal:
- I use base16-shell
- powerlevel10k is a popular fancy zsh theme which has transient prompts
- starship is a fancy prompt tool
- on a Mac, I think iTerm2 is easier to customize than the default terminal
a fancy file manager
A few people mentioned fancy terminal file managers like ranger or nnn, which I hadn’t heard of.
a helpful friend or coworker
Someone who can answer beginner questions and give you pointers is invaluable.
shoulder surfing
Several mentions of watching someone more experienced using the terminal – there are lots of little things that experienced users don’t even realize they’re doing which you can pick up.
aliases
Lots of people said that making their own aliases or scripts for commonly used tasks felt like a magical “a ha!” moment, because:
- they don’t have to remember the syntax
- then they have a list of their most commonly used commands that they can summon easily
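The aliases themselves can be tiny – a few made-up but typical examples:
alias gs='git status'
alias la='ls -alh'
alias serve='python3 -m http.server'   # throwaway web server for the current directory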
cheat sheets to get examples
A lot of man pages don’t have examples; the openssl s_client man page, for example, has none. This makes it a lot harder to get started!
People mentioned a couple of cheat sheet tools, like:
- tldr.sh
- cheat (which has the bonus of being editable – you can add your own commands to reference later)
- um (an incredibly minimal system that you have to build yourself)
For example the cheat page for openssl is really great – I think it includes almost everything I’ve ever actually used openssl for in practice (except the -servername option for openssl s_client).
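(For reference, the -servername option sets the hostname sent via SNI, so the server knows which certificate to present – it looks like this:)
$ openssl s_client -connect example.com:443 -servername example.com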
One person said that they configured their .bash_profile to print out a cheat sheet every time they log in.
don’t try to memorize
A couple of people said that they needed to change their approach – instead of trying to memorize all the commands, they realized they could just look up commands as needed and they’d naturally memorize the ones they used the most over time.
(I actually recently had the exact same realization about learning to read x86 assembly – I was taking a class and the instructor said “yeah, just look everything up every time to start, eventually you’ll learn the most common instructions by heart”)
Some people also said the opposite – that they used a spaced repetition app like Anki to memorize commonly used commands.
vim
One person mentioned that they started using vim on the command line to edit files, and once they were using a terminal text editor it felt more natural to use the command line for other things too.
Also apparently there’s a new editor called micro which is like a nicer version of pico/nano, for folks who don’t want to learn emacs or vim.
use Linux on the desktop
One person said that they started using Linux as their main daily driver, and having to fix Linux issues helped them learn. That’s also how I got comfortable with the command line back in ~2004 (I was really into installing lots of different Linux distributions to try to find my favourite one), but my guess is that it’s not the most popular strategy these days.
being forced to only use the terminal
Some people said that they took a university class where the professor made them do everything in the terminal, or that they created a rule for themselves that they had to do all their work in the terminal for a while.
workshops
A couple of people said that workshops like Software Carpentry workshops (an introduction to the command line, git, and Python/R programming for scientists) helped them get more comfortable with the command line.
You can see the software carpentry curriculum here.
books & articles
a few that were mentioned:
articles:
- The Terminal
- command line kung fu (has a mix of Unix and Windows command line tips)
books:
- efficient linux at the command line
- unix power tools (which might be outdated)
- The Linux Pocket guide
videos:
- CLI tools aren’t inherently user-hostile by Mindy Preston
- Gary Bernhardt’s destroy all software screencasts
- DistroTube
Some tactics for writing in public
Someone recently asked me – “how do you deal with writing in public? People on the internet are such assholes!”
I’ve often heard the advice “don’t read the comments”, but actually I’ve learned a huge amount from reading internet comments on my posts from strangers over the years, even if sometimes people are jerks. So I want to explain some tactics I use to try to make the comments on my posts more informative and useful to me, and to try to minimize the number of annoying comments I get.
talk about facts
On here I mostly talk about facts – either facts about computers, or stories about my experiences using computers.
For example this post about tcpdump contains some basic facts about how to use tcpdump, as well as an example of how I’ve used it in the past.
Talking about facts means I get a lot of fact-based comments like:
- people sharing their own similar (or different) experiences (“I use tcpdump a lot to look at our RTP sequence numbers”)
- pointers to other resources (“the documentation from F5 about tcpdump is great”)
- other interesting related facts I didn’t mention (“you can use tcpdump -X too”, “netsh on windows is great”, “you can use sudo tcpdump -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420' to filter for HTTP GET requests”)
- potential problems or gotchas (“be careful about running tcpdump as root, try just setting the required capabilities instead”)
- questions (“Is there a way to place the BPF filter after IP packet reassembly?” or “what’s the advantage of tcpdump over wireshark?”)
- mistakes I made
In general, I’d say that people’s comments about facts tend to stay pretty normal. The main kinds of negative comments I get about facts are:
- occasionally people get a little rude about facts I didn’t mention (“Didn’t use -n in any of the examples… please…”). I think I didn’t mention -n in that post because at the time I didn’t know why the -n flag was useful (it’s useful because it turns off the annoying reverse DNS lookup that tcpdump does by default, so you can see the IP addresses – there’s an example below).
- people are also sometimes weird about mistakes. I mostly try to head this off by trying to be self-aware about my knowledge level on a topic, and saying “I’m not sure…” when I’m not sure about something.
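(For the curious, the difference is just:)
$ sudo tcpdump port 443      # does a reverse DNS lookup on every IP address
$ sudo tcpdump -n port 443   # -n: just shows the raw IP addresses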
stories are great
I think stories encourage pretty good discussion. For example, why you should understand (a little) about TCP is a story about a time it was important for me to understand how TCP worked.
When I share stories about problems I solved, the comments really help me understand how what I learned fits into a bigger context. For example:
- is this a common problem? people will often comment saying “this happened to me too!”
- what are other common related problems that come up?
- are there other possible solutions I didn’t consider?
Also I think these kinds of stories are incredibly important – that post describes a bug that was VERY hard for me to solve, and the only reason I was able to figure it out in the first place was that I read this blog post.
ask technical questions
Often in my blog posts I ask technical questions that I don’t know the answer to (or just mention “I don’t know X…”). This helps people focus their replies a little bit – an obvious comment to make is to provide an answer to the question, or explain the thing I didn’t know!
This is fun because it feels like a guaranteed way to get value out of people’s comments – people LOVE answering questions, and so they get to look smart, and I get the answer to a question I have! Everyone wins!
fix mistakes
I make a lot of mistakes in my blog posts, because I write about a lot of things that are on the edge of my knowledge. When people point out mistakes, I often edit the blog post to fix it.
Usually I’ll stay near a computer for a few hours after I post a blog post so that I can fix mistakes quickly as they come up.
Some people are very careful to list every single error they made in their blog posts (“errata: the post previously said X which was wrong, I have corrected it to say Y”). Personally I make mistakes constantly and I don’t have time for that so I just edit the post to fix the mistakes.
ask for examples/experiences, not opinions
A lot of the time when I post a blog post, people on Twitter/Mastodon will reply with various opinions they have about the thing. For example, someone recently replied to a blog post about DNS saying that they love using zone files and dislike web interfaces for managing DNS records. That’s not an opinion I share, so I asked them why.
They explained that there are some DNS record types (specifically TLSA) that they find often aren’t supported in web interfaces. I didn’t know that people used TLSA records, so I learned something! Cool!
I’ve found that asking people to share their experiences (“I wanted to use X DNS record type and I couldn’t”) instead of their opinions (“DNS web admin interfaces are bad”) leads to a lot of useful information and discussion. I’ve learned a lot from it over the years, and written a lot of tweets like “which DNS record types have you needed?” to try to extract more information about people’s experiences.
I try to model the same behaviour in my own work when I can – if I have an opinion, I’ll try to explain the experiences I’ve had with computers that caused me to have that opinion.
start with a little context
I think internet strangers are more likely to reply in a weird way when they have no idea who you are or why you’re writing this thing. It’s easy to make incorrect assumptions! So often I’ll mention a little context about why I’m writing this particular blog post.
For example:
A little while ago I started using a Mac, and one of my biggest frustrations with it is that often I need to run Linux-specific software.
or
I’ve started to run a few more servers recently (nginx playground, mess with dns, dns lookup), so I’ve been thinking about monitoring.
or
Last night, I needed to scan some documents for some bureaucratic reasons. I’d never used a scanner on Linux before and I was worried it would take hours to figure out…
avoid causing boring conversations
There are some kinds of programming conversations that I find extremely boring (like “should people learn vim?” or “is functional programming better than imperative programming?”). So I generally try to avoid writing blog posts that I think will result in a conversation/comment thread that I find annoying or boring.
For example, I wouldn’t write about my opinions about functional programming: I don’t really have anything interesting to say about it and I think it would lead to a conversation that I’m not interested in having.
I don’t always succeed at this of course (it’s impossible to predict what people are going to want to comment about!), but I try to avoid the most obvious flamebait triggers I’ve seen in the past.
There are a bunch of “flamebait” triggers that can set people off on a conversation that I find boring: cryptocurrency, tailwind, DNSSEC/DoH, etc. So I have a weird catalog in my head of things not to mention if I don’t want to start the same discussion about that thing for the 50th time.
Of course, if you think that conversations about functional programming are interesting, you should write about functional programming and start the conversations you want to have!
Also, it’s often possible to start an interesting conversation about a topic where the conversation is normally boring. For example I often see the same talking points about IPv6 vs IPv4 over and over again, but I remember the comments on Reasons for servers to support IPv6 being pretty interesting. In general if I really care about a topic I’ll talk about it anyway, but I don’t care about functional programming very much so I don’t see the point of bringing it up.
preempt common suggestions
Another kind of “boring conversation” I try to avoid is suggestions of things I have already considered. Like when someone says “you should do X” but I already know I could have done X and chose not to because of A B C.
So I often will add a short note like “I decided not to do X because of A B C” or “you can also do X” or “normally I would do X, here I didn’t because…”. For example, in this post about nix, I list a bunch of Nix features I’m choosing not to use (nix-shell, nix flakes, home manager) to avoid a bunch of helpful people telling me that I should use flakes.
Listing the things I’m not doing is also helpful to readers – maybe someone new to nix will discover nix flakes through that post and decide to use them! Or maybe someone will learn that there are exceptions to when a certain “best practice” is appropriate.
set some boundaries
Recently on Mastodon I complained about some gross terminology (“domain information groper”) that I’d just noticed in the dig man page on my machine. A few dudes in the replies (who by now have all deleted their posts) asked me to prove that the original author intended it to be offensive (which of course is beside the point – there’s just no need to have a term widely understood to refer to sexual assault in the dig man page) or tried to explain to me why it actually wasn’t a problem.
So I blocked a few people and wrote a quick post:
man so many dudes in the replies demanding that i prove that the person who named dig “domain information groper” intended it in an offensive way. Big day for the block button I guess :)
I don’t do this too often, but I think it’s very important on social media to occasionally set some rules about what kind of behaviour I won’t tolerate. My goal here is usually to drive away some of the assholes (they can unfollow me!) and try to create a more healthy space for everyone else to have a conversation about computers in.
Obviously this only works in situations (like Twitter/Mastodon) where I have the ability to garden my following a little bit over time – I can’t do this on HN or Reddit or Lobsters or whatever and wouldn’t try.
As for fixing it – the dig maintainers removed the problem language years ago, but Mac OS still has a very outdated version for license reasons.
(you might notice that this section is breaking the “avoid boring conversations” rule above, this section was certain to start a very boring argument, but I felt it was important to talk about boundaries so I left it in)
don’t argue
Sometimes people seem to want to get into arguments or make dismissive comments. I don’t reply to them, even if they’re wrong. I dislike arguing on the internet and I’m extremely bad at it, so it’s not a good use of my time.
analyze negative comments
If I get a lot of negative comments that I didn’t expect, I try to see if I can get something useful out of it.
For example, I wrote a toy DNS resolver once and some of the commenters were upset that I didn’t handle parsing the DNS packet. At the time I thought this was silly (I thought DNS parsing was really straightforward and that it was obvious how to do it, who cares that I didn’t handle it?) but I realized that maybe the commenters didn’t think it was easy or obvious, and wanted to know how to do it. Which makes sense! It’s not obvious at all if you haven’t done it before!
Those comments partly inspired implement DNS in a weekend, which focuses much more heavily on the parsing aspects, and which I think is a much better explanation of how to write a DNS resolver. So ultimately those comments helped me a lot, even if I found them annoying at the time.
(I realize this section makes me sound like a Perfectly Logical Person who does not get upset by negative public criticism, I promise this is not at all the case and I have 100000 feelings about everything that happens on the internet and get upset all the time. But I find that analyzing the criticism and trying to take away something useful from it helps a bit)
that’s all!
Thanks to Shae, Aditya, Brian, and Kamal for reading a draft of this.
Some other similar posts I’ve written in the past:
Behind "Hello World" on Linux
Today I was thinking about – what happens when you run a simple “Hello World” Python program on Linux, like this one?
print("hello world")
Here’s what it looks like at the command line:
$ python3 hello.py
hello world
But behind the scenes, there’s a lot more going on. I’ll describe some of what happens, and (much much more importantly!) explain some tools you can use to see what’s going on behind the scenes yourself. We’ll use readelf, strace, ldd, debugfs, /proc, ltrace, dd, and stat. I won’t talk about the Python-specific parts at all – just what happens when you run any dynamically linked executable.
Here’s a table of contents:
- parse “python3 hello.py”
- figure out the full path to python3
- stat, under the hood
- time to fork
- the shell calls execve
- get the binary’s contents
- find the interpreter
- dynamic linking
- go to _start
- write a string
before execve
Before we even start the Python interpreter, there are a lot of things that have to happen. What executable are we even running? Where is it?
1: The shell parses the string python3 hello.py into a command to run and a list of arguments: python3, and ['hello.py']
A bunch of things like glob expansion could happen here. For example if you run python3 *.py, the shell will expand that into python3 hello.py
2: The shell figures out the full path to python3
Now we know we need to run python3. But what’s the full path to that binary? The way this works is that there’s a special environment variable named PATH.
See for yourself: Run echo $PATH in your shell. For me it looks like this:
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
When you run a command, the shell will search every directory in that list (in order) to try to find a match.
In fish (my shell), you can see the path resolution logic here. It uses the stat system call to check if files exist.
See for yourself: Run strace -e stat bash, and then run a command like python3. You should see output like this:
stat("/usr/local/sbin/python3", 0x7ffcdd871f40) = -1 ENOENT (No such file or directory)
stat("/usr/local/bin/python3", 0x7ffcdd871f40) = -1 ENOENT (No such file or directory)
stat("/usr/sbin/python3", 0x7ffcdd871f40) = -1 ENOENT (No such file or directory)
stat("/usr/bin/python3", {st_mode=S_IFREG|0755, st_size=5479736, ...}) = 0
You can see that it finds the binary at /usr/bin/python3 and stops: it doesn’t continue searching /sbin or /bin.
(if this doesn’t work for you, instead try strace -o out bash, and then grep stat out. One reader mentioned that their version of libc uses a different system call instead of stat)
2.1: A note on execvp
If you want to run the same PATH searching logic as the shell does without reimplementing it yourself, you can use the libc function execvp (or one of the other exec* functions with p in the name).
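A quick way to watch the same search from your shell: the type builtin reports every match for a command, in PATH order (the output will look something like this):
$ type -a python3
python3 is /usr/bin/python3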
3: stat, under the hood
Now you might be wondering – Julia, what is stat doing? Well, when your OS opens a file, it’s split into 2 steps.
- It maps the filename to an inode, which contains metadata about the file
- It uses the inode to get the file’s contents
The stat system call just returns the contents of the file’s inode – it doesn’t read the contents at all. The advantage of this is that it’s a lot faster. Let’s go on a short adventure into inodes. (this great post “A disk is a bunch of bits” by Dmitry Mazin has more details)
$ stat /usr/bin/python3
File: /usr/bin/python3 -> python3.9
Size: 9 Blocks: 0 IO Block: 4096 symbolic link
Device: fe01h/65025d Inode: 6206 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2023-08-03 14:17:28.890364214 +0000
Modify: 2021-04-05 12:00:48.000000000 +0000
Change: 2021-06-22 04:22:50.936969560 +0000
Birth: 2021-06-22 04:22:50.924969237 +0000
See for yourself: Let’s go see where exactly that inode is on our hard drive.
First, we have to find our hard drive’s device name:
$ df
...
tmpfs 100016 604 99412 1% /run
/dev/vda1 25630792 14488736 10062712 60% /
...
Looks like it’s /dev/vda1. Next, let’s find out where the inode for /usr/bin/python3 is on our hard drive:
$ sudo debugfs /dev/vda1
debugfs 1.46.2 (28-Feb-2021)
debugfs: imap /usr/bin/python3
Inode 6206 is part of block group 0
located at block 658, offset 0x0d00
I have no idea how debugfs is figuring out the location of the inode for that filename, but we’re going to leave that alone.
Now we need to calculate how far into the hard drive “block 658, offset 0x0d00” is, in the big array of bytes that is the disk. Each block is 4096 bytes, so we need to go 4096 * 658 + 0x0d00 bytes in. A calculator tells me that’s 2698496
$ sudo dd if=/dev/vda1 bs=1 skip=2698496 count=256 2>/dev/null | hexdump -C
00000000 ff a1 00 00 09 00 00 00 f8 b6 cb 64 9a 65 d1 60 |...........d.e.`|
00000010 f0 fb 6a 60 00 00 00 00 00 00 01 00 00 00 00 00 |..j`............|
00000020 00 00 00 00 01 00 00 00 70 79 74 68 6f 6e 33 2e |........python3.|
00000030 39 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |9...............|
00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000060 00 00 00 00 12 4a 95 8c 00 00 00 00 00 00 00 00 |.....J..........|
00000070 00 00 00 00 00 00 00 00 00 00 00 00 2d cb 00 00 |............-...|
00000080 20 00 bd e7 60 15 64 df 00 00 00 00 d8 84 47 d4 | ...`.d.......G.|
00000090 9a 65 d1 60 54 a4 87 dc 00 00 00 00 00 00 00 00 |.e.`T...........|
000000a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
Neat! There’s our inode! You can see it says python3 in it, which is a really good sign. We’re not going to go through all of this, but the ext4 inode struct from the Linux kernel says that the first 16 bits are the “mode”, or permissions. So let’s work out how ffa1 corresponds to file permissions.
- The bytes ffa1 correspond to the number 0xa1ff, or 41471 (because x86 is little endian)
- 41471 in octal is 0120777
- This is a bit weird – that file’s permissions could definitely be 777, but what are the first 3 digits? I’m not used to seeing those! You can find out what the 012 means in man inode (scroll down to “The file type and mode”). There’s a little table that says 012 means “symbolic link”.
Let’s list the file and see if it is in fact a symbolic link with permissions 777:
$ ls -l /usr/bin/python3
lrwxrwxrwx 1 root root 9 Apr 5 2021 /usr/bin/python3 -> python3.9
It is! Hooray, we decoded it correctly.
4: Time to fork
We’re still not ready to start python3. First, the shell needs to create a new child process to run. The way new processes start on Unix is a little weird – first the process clones itself, and then runs execve, which replaces the cloned process with a new process.
See for yourself: Run strace -e clone bash, then run python3. You should see something like this:
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f03788f1a10) = 3708100
3708100 is the PID of the new process, which is a child of the shell process.
Some more tools to look at what’s going on with processes:
- pstree will show you a tree of all the processes on your system
- cat /proc/PID/stat shows you some information about the process. The contents of that file are documented in man proc. For example the 4th field is the parent PID.
4.1: What the new process inherits
The new process (which will become python3) has inherited a bunch of things from the shell. For example, it’s inherited:
- environment variables: you can look at them with cat /proc/PID/environ | tr '\0' '\n'
- file descriptors for stdout and stderr: look at them with ls -l /proc/PID/fd
- a working directory (whatever the current directory is)
- namespaces and cgroups (if it’s in a container)
- the user and group that’s running it
- probably more things I’m not thinking of right now
5: The shell calls execve
Now we’re ready to start the Python interpreter!
See for yourself: Run strace -f -e execve bash, then run python3. The -f is important because we want to follow any forked child subprocesses. You should see something like this:
[pid 3708381] execve("/usr/bin/python3", ["python3"], 0x560397748300 /* 21 vars */) = 0
The first argument is the binary, and the second argument is the list of command line arguments. The command line arguments get placed in a special location in the program’s memory so that it can access them when it runs.
Now, what’s going on inside execve?
6: get the binary’s contents
The first thing that has to happen is that we need to open the python3 binary file and read its contents. So far we’ve only used the stat system call to access its metadata, but now we need its contents.
Let’s look at the output of stat again:
$ stat /usr/bin/python3
File: /usr/bin/python3 -> python3.9
Size: 9 Blocks: 0 IO Block: 4096 symbolic link
Device: fe01h/65025d Inode: 6206 Links: 1
...
This takes up 0 blocks of space on the disk. This is because the contents of the symbolic link (python3.9) are actually in the inode itself: you can see them here (from the binary contents of the inode above, it’s split across 2 lines in the hexdump output):
00000020 00 00 00 00 01 00 00 00 70 79 74 68 6f 6e 33 2e |........python3.|
00000030 39 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |9...............|
So we’ll need to open /usr/bin/python3.9 instead. All of this is happening inside the kernel, so you won’t see another system call for it.
Every file is made up of a bunch of blocks on the hard drive. I think each of these blocks on my system is 4096 bytes, so the minimum size of a file is 4096 bytes – even if the file is only 5 bytes, it still takes up 4KB on disk.
See for yourself: We can find the block numbers using debugfs like this (again, I got these instructions from Dmitry Mazin’s “A disk is a bunch of bits” post):
$ debugfs /dev/vda1
debugfs: blocks /usr/bin/python3.9
145408 145409 145410 145411 145412 145413 145414 145415 145416 145417 145418 145419 145420 145421 145422 145423 145424 145425 145426 145427 145428 145429 145430 145431 145432 145433 145434 145435 145436 145437
Now we can use dd to read the first block of the file. We’ll set the block size to 4096 bytes, skip 145408 blocks, and read 1 block.
$ dd if=/dev/vda1 bs=4096 skip=145408 count=1 2>/dev/null | hexdump -C | head
00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|
00000010 02 00 3e 00 01 00 00 00 c0 a5 5e 00 00 00 00 00 |..>.......^.....|
00000020 40 00 00 00 00 00 00 00 b8 95 53 00 00 00 00 00 |@.........S.....|
00000030 00 00 00 00 40 00 38 00 0b 00 40 00 1e 00 1d 00 |....@.8...@.....|
00000040 06 00 00 00 04 00 00 00 40 00 00 00 00 00 00 00 |........@.......|
00000050 40 00 40 00 00 00 00 00 40 00 40 00 00 00 00 00 |@.@.....@.@.....|
00000060 68 02 00 00 00 00 00 00 68 02 00 00 00 00 00 00 |h.......h.......|
00000070 08 00 00 00 00 00 00 00 03 00 00 00 04 00 00 00 |................|
00000080 a8 02 00 00 00 00 00 00 a8 02 40 00 00 00 00 00 |..........@.....|
00000090 a8 02 40 00 00 00 00 00 1c 00 00 00 00 00 00 00 |..@.............|
You can see that we get the exact same output as if we read the file with cat, like this:
$ cat /usr/bin/python3.9 | hexdump -C | head
00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|
00000010 02 00 3e 00 01 00 00 00 c0 a5 5e 00 00 00 00 00 |..>.......^.....|
00000020 40 00 00 00 00 00 00 00 b8 95 53 00 00 00 00 00 |@.........S.....|
00000030 00 00 00 00 40 00 38 00 0b 00 40 00 1e 00 1d 00 |....@.8...@.....|
00000040 06 00 00 00 04 00 00 00 40 00 00 00 00 00 00 00 |........@.......|
00000050 40 00 40 00 00 00 00 00 40 00 40 00 00 00 00 00 |@.@.....@.@.....|
00000060 68 02 00 00 00 00 00 00 68 02 00 00 00 00 00 00 |h.......h.......|
00000070 08 00 00 00 00 00 00 00 03 00 00 00 04 00 00 00 |................|
00000080 a8 02 00 00 00 00 00 00 a8 02 40 00 00 00 00 00 |..........@.....|
00000090 a8 02 40 00 00 00 00 00 1c 00 00 00 00 00 00 00 |..@.............|
an aside on magic numbers
This file starts with ELF, which is a “magic number”: a byte sequence that tells us that this is an ELF file. ELF is the binary file format on Linux.
Different file formats have different magic numbers; for example, the magic number for gzip is 1f8b. The magic number at the beginning is how file blah.gz knows that it’s a gzip file.
I think file has a variety of heuristics for figuring out the file type of a file, not just magic numbers, but the magic number is an important one.
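See for yourself: you can peek at the magic numbers directly:
$ head -c 4 /usr/bin/python3.9 | hexdump -C    # prints 7f 45 4c 46, i.e. ".ELF"
$ printf hi | gzip | head -c 2 | hexdump -C    # prints 1f 8b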
7: find the interpreter
Let’s parse the ELF file to see what’s in there.
See for yourself: Run readelf -a /usr/bin/python3.9. Here’s what I get (though I’ve redacted a LOT of stuff):
$ readelf -a /usr/bin/python3.9
ELF Header:
Class: ELF64
Machine: Advanced Micro Devices X86-64
...
-> Entry point address: 0x5ea5c0
...
Program Headers:
Type Offset VirtAddr PhysAddr
INTERP 0x00000000000002a8 0x00000000004002a8 0x00000000004002a8
0x000000000000001c 0x000000000000001c R 0x1
-> [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
...
-> 1238: 00000000005ea5c0 43 FUNC GLOBAL DEFAULT 13 _start
Here’s what I understand of what’s going on here:
- it’s telling the kernel to run /lib64/ld-linux-x86-64.so.2 to start this program. This is called the dynamic linker, and we’ll talk about it next
- it’s specifying an entry point (at 0x5ea5c0, which is where this program’s code starts)
Now let’s talk about the dynamic linker.
8: dynamic linking
Okay! We’ve read the bytes from disk and we’ve started this “interpreter” thing. What next? Well, if you run strace -o out.strace python3, you’ll see a bunch of stuff like this right after the execve system call:
execve("/usr/bin/python3", ["python3"], 0x560af13472f0 /* 21 vars */) = 0
brk(NULL) = 0xfcc000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=32091, ...}) = 0
mmap(NULL, 32091, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f718a1e3000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 l\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=149520, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f718a1e1000
...
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3
This all looks a bit intimidating at first, but the part I want you to pay attention to is openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpthread.so.0". This is opening a C threading library called pthread that the Python interpreter needs to run.
See for yourself: If you want to know which libraries a binary needs to load at runtime, you can use ldd. Here’s what that looks like for me:
$ ldd /usr/bin/python3.9
linux-vdso.so.1 (0x00007ffc2aad7000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2fd6554000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2fd654e000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f2fd6549000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2fd6405000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f2fd63d6000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2fd63b9000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fd61e3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2fd6580000)
You can see that the first library listed is /lib/x86_64-linux-gnu/libpthread.so.0, which is why it was loaded first.
on LD_LIBRARY_PATH
I’m honestly still a little confused about dynamic linking. Some things I know:
- Dynamic linking happens in userspace, and the dynamic linker on my system is at /lib64/ld-linux-x86-64.so.2. If you’re missing the dynamic linker, you can end up with weird bugs like this weird “file not found” error
- The dynamic linker uses the LD_LIBRARY_PATH environment variable to find libraries
- The dynamic linker will also use the LD_PRELOAD environment variable to override any dynamically linked function you want (you can use this for fun hacks, or to replace your default memory allocator with an alternative one like jemalloc – there’s an example after this list)
- there are some mprotects in the strace output which are marking the library code as read-only, for security reasons
- on Mac, it’s DYLD_LIBRARY_PATH instead of LD_LIBRARY_PATH
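For example, the jemalloc swap from that list looks something like this (the exact library path varies by distro, so treat it as a placeholder):
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 python3 hello.py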
You might be wondering – if dynamic linking happens in userspace, why don’t we see a bunch of stat system calls where it’s searching through LD_LIBRARY_PATH for the libraries, the way we did when bash was searching the PATH?
That’s because ld has a cache in /etc/ld.so.cache, and all of those libraries have already been found in the past. You can see it opening the cache in the strace output – openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3.
There are still a bunch of system calls after dynamic linking in the full strace output that I still don’t really understand (what’s prlimit64 doing? where does the locale stuff come in? what’s gconv-modules.cache? what’s rt_sigaction doing? what’s arch_prctl? what’s set_tid_address and set_robust_list?). But this feels like a good start.
aside: ldd is actually a simple shell script!
Someone on Mastodon pointed out that ldd is actually a shell script that just sets the LD_TRACE_LOADED_OBJECTS=1 environment variable and starts the program. So you can do exactly the same thing like this:
$ LD_TRACE_LOADED_OBJECTS=1 python3
linux-vdso.so.1 (0x00007ffe13b0a000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f01a5a47000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f01a5a41000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f2fd6549000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2fd6405000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f2fd63d6000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2fd63b9000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fd61e3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2fd6580000)
Apparently ld is also a binary you can just run, so /lib64/ld-linux-x86-64.so.2 --list /usr/bin/python3.9 also does the same thing.
on init and fini
Let’s talk about this line in the strace output:
set_tid_address(0x7f58880dca10) = 3709103
This seems to have something to do with threading, and I think this might be happening because the pthread library (and every other dynamically loaded library) gets to run initialization code when it’s loaded. The code that runs when the library is loaded is in the .init section (or maybe also the .ctors section).
See for yourself: Let’s take a look at that using readelf:
$ readelf -a /lib/x86_64-linux-gnu/libpthread.so.0
...
[10] .rela.plt RELA 00000000000051f0 000051f0
00000000000007f8 0000000000000018 AI 4 26 8
[11] .init PROGBITS 0000000000006000 00006000
000000000000000e 0000000000000000 AX 0 0 4
[12] .plt PROGBITS 0000000000006010 00006010
0000000000000560 0000000000000010 AX 0 0 16
...
This library doesn’t have a .ctors section, just an .init. But what’s in that .init section? We can use objdump to disassemble the code:
$ objdump -d /lib/x86_64-linux-gnu/libpthread.so.0
Disassembly of section .init:
0000000000006000 <_init>:
6000: 48 83 ec 08 sub $0x8,%rsp
6004: e8 57 08 00 00 callq 6860 <__pthread_initialize_minimal>
6009: 48 83 c4 08 add $0x8,%rsp
600d: c3 retq
So it’s calling __pthread_initialize_minimal. I found the code for that function in glibc, though I had to find an older version of glibc because it looks like in more recent versions libpthread is no longer a separate library.
I’m not sure whether this set_tid_address system call actually comes from __pthread_initialize_minimal, but at least we’ve learned that libraries can run code on startup through the .init section.
Here’s a note from man elf on the .init section:
$ man elf
.init This section holds executable instructions that contribute to the process initialization code. When a program starts to run, the system arranges to execute the code in this section before calling the main program entry point.
There’s also a .fini section in the ELF file that runs at the end, and .ctors / .dtors (constructors and destructors) are other sections that could exist.
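See for yourself: here’s a tiny sketch of load-time initialization code (assuming you have gcc). Modern toolchains usually put constructors in a .init_array section rather than .init or .ctors, but the effect is the same: the function runs when the library is loaded, before main.
$ cat > ctor.c <<'EOF'
#include <stdio.h>

/* runs when the library is loaded, before main() */
__attribute__((constructor))
static void loaded(void) { puts("library loaded!"); }
EOF
$ gcc -shared -fPIC -o libctor.so ctor.c
$ LD_PRELOAD=./libctor.so /bin/true
library loaded!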
Okay, that’s enough about dynamic linking.
9: go to _start
After dynamic linking is done, we go to _start in the Python interpreter.
Then it does all the normal Python interpreter things you’d expect.
I’m not going to talk about this because here I’m interested in general facts about how binaries are run on Linux, not the Python interpreter specifically.
10: write a string
We still need to print out “hello world” though. Under the hood, the Python print function calls some function from libc. But which one? Let’s find out!
See for yourself: Run ltrace -o out python3 hello.py.
$ ltrace -o out python3 hello.py
$ grep hello out
write(1, "hello world\n", 12) = 12
So it looks like it’s calling write.
I honestly am always a little suspicious of ltrace – unlike strace (which I
would trust with my life), I’m never totally sure that ltrace is actually
reporting library calls accurately. But in this case it seems to be working. And
if we look at the cpython source code, it does seem to be calling write()
in some places. So I’m willing to believe that.
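If you share my suspicion of ltrace, strace can at least corroborate the system call side: you can filter the trace to just write calls. (This is a standard strace filter; the output below is what I’d expect to see, lightly trimmed.)
$ strace -e trace=write -o out python3 hello.py
hello world
$ grep hello out
write(1, "hello world\n", 12)           = 12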
what’s libc?
We just said that Python calls the write function from libc. What’s libc? It’s the C standard library, and it’s responsible for a lot of basic things like:
- allocating memory with malloc
- file I/O (opening/closing/reading/writing files)
- executing programs (with execvp, like we mentioned before)
- looking up DNS records with getaddrinfo
- managing threads with pthread
Programs don’t have to use libc (on Linux, Go famously doesn’t use it and calls Linux system calls directly instead), but most other programming languages I use (node, Python, Ruby, Rust) all use libc. I’m not sure about Java.
You can find out if you’re using libc by running ldd on your binary: if you see something like libc.so.6, that’s libc.
why does libc matter?
You might be wondering – why does it matter that Python calls the libc write and then libc calls the write system call? Why am I making a point of saying that libc is in the middle?
I think in this case it doesn’t really matter (AFAIK the write libc function maps pretty directly to the write system call).
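To make “libc is in the middle” concrete, here’s a little sketch: libc’s syscall() helper lets you make the write system call without going through the write() wrapper (syscall() itself still comes from libc, but it skips the wrapper function):
$ cat > raw.c <<'EOF'
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* invoke the write system call directly instead of
       calling libc's write() wrapper */
    syscall(SYS_write, 1, "hello world\n", 12);
    return 0;
}
EOF
$ gcc -o raw raw.c && ./raw
hello world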
But there are different libc implementations, and sometimes they behave differently. The two main ones are glibc (GNU libc) and musl libc.
For example, until recently musl’s getaddrinfo didn’t support TCP DNS; here’s a blog post talking about a bug that this caused.
a little detour into stdout and terminals
In this program, stdout (file descriptor 1) is a terminal. And you can do funny things with terminals! Here’s one:
- In a terminal, run ls -l /proc/self/fd/1. I get /dev/pts/2
- In another terminal window, write echo hello > /dev/pts/2
- Go back to the original terminal window. You should see hello printed there! (there’s a sample transcript below)
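Here’s roughly what that looks like as a transcript (the pts number, and of course the user and date, will be different on your machine):
# in terminal 1:
$ ls -l /proc/self/fd/1
lrwx------ 1 user user 64 Jul 28 10:00 /proc/self/fd/1 -> /dev/pts/2
# in terminal 2:
$ echo hello > /dev/pts/2
# terminal 1 now shows:
hello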
that’s all for now!
Hopefully you have a better idea of how hello world gets printed! I’m going to stop adding more details for now because this is already pretty long, but obviously there’s more to say and I might add more if folks chip in with extra details. I’d especially love suggestions for other tools you could use to inspect parts of the process that I haven’t explained here.
Thanks to everyone who suggested corrections / additions – I’ve edited this blog post a lot to incorporate more things :)
Some things I’d like to add if I can figure out how to spy on them:
- the kernel loader and ASLR (I haven’t figured out yet how to use bpftrace + kprobes to trace the kernel loader’s actions)
- TTYs (I haven’t figured out how to trace the way write(1, "hello world", 11) gets sent to the TTY that I’m looking at)
I’d love to see a Mac version of this
One of my frustrations with Mac OS is that I don’t know how to introspect my system on this level – when I print hello world, I can’t figure out how to spy on what’s going on behind the scenes the way I can on Linux. I’d love to see a really in-depth explainer.
Some Mac equivalents I know about:
- ldd -> otool -L
- readelf -> otool
- supposedly you can use dtruss or dtrace on Mac instead of strace, but I’ve never been brave enough to turn off System Integrity Protection to get it to work
- strace -> sc_usage seems to be able to collect stats about syscall usage, and fs_usage about file usage
more reading
Some more links:
- A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux
- an exploration of “hello world” on FreeBSD
- hello world under the microscope for Windows
- From LWN: how programs get run (and part two) have a bunch more details on the internals of execve
- Putting the “You” in CPU by Lexi Mattick
- “Hello, world” from scratch on a 6502 (video from Ben Eater)
Why is DNS still hard to learn?
I write a lot about technologies that I found hard to learn about. A while back my friend Sumana asked me an interesting question – why are these things so hard to learn about? Why do they seem so mysterious?
For example, take DNS. We’ve been using DNS since the 80s (for more than 35 years!). It’s used in every website on the internet. And it’s pretty stable – in a lot of ways, it works the exact same way it did 30 years ago.
But it took me YEARS to figure out how to confidently debug DNS issues, and I’ve seen a lot of other programmers struggle with debugging DNS problems as well. So what’s going on?
Here are a couple of thoughts about why learning to troubleshoot DNS problems is hard.
(I’m not going to explain DNS very much in this post, see Implement DNS in a Weekend or my DNS blog posts for more about how DNS works)
it’s not because DNS is super hard
When I finally learned how to troubleshoot DNS problems, my reaction was “what, that was it???? that’s not that hard!”. I felt a little bit cheated! I could explain to you everything that I found confusing about DNS in a few hours.
So – if DNS is not all that complicated, why did it take me so many years to figure out how to troubleshoot pretty basic DNS issues (like “my domain doesn’t resolve even though I’ve set it up correctly” or “dig and my browser have different DNS results, why?”)?
And I wasn’t alone in finding DNS hard to learn! I’ve talked to a lot of smart friends who are very experienced programmers about DNS over the years, and many of them either:
- didn’t feel comfortable making simple DNS changes to their websites
- or were confused about basic facts about how DNS works (like that records are pulled and not pushed)
- or did understand DNS basics pretty well, but had some of the same knowledge gaps that I’d struggled with (negative caching and the details of how dig and your browser do DNS queries differently)
So if we’re all struggling with the same things about DNS, what’s going on? Why is it so hard to learn for so many people?
Here are some ideas.
a lot of the system is hidden
When you make a DNS request on your computer, the basic story is:
- your computer makes a request to a server called a resolver
- the resolver checks its cache, and makes requests to some other servers called authoritative nameservers
Here are some things you don’t see:
- the resolver’s cache. What’s in there?
- which library code on your computer is making the DNS request (is it libc’s getaddrinfo? if so, is it the getaddrinfo from glibc, or musl, or Apple? is it your browser’s DNS code? is it a different custom DNS implementation?). All of these options behave slightly differently and have different configuration, approaches to caching, available features, etc. For example musl DNS didn’t support TCP until early 2023.
- the conversation between the resolver and the authoritative nameservers. I think a lot of DNS issues would be SO simple to understand if you could magically get a trace of exactly which authoritative nameservers were queried downstream during your request, and what they said. (like, what if you could run dig +debug google.com and it gave you a bunch of extra debugging information? there’s a partial workaround right after this list)
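(dig +trace is the closest existing thing I know of: it redoes the whole delegation chain itself, starting at the root nameservers, so you get to see one version of that conversation. The caveat is that it’s dig redoing the queries client-side, not a trace of what your resolver actually did. The output looks roughly like this, heavily abbreviated:)
$ dig +trace google.com
.			518400	IN	NS	a.root-servers.net.
(... the other root servers ...)
com.			172800	IN	NS	a.gtld-servers.net.
(... the other .com servers ...)
google.com.		172800	IN	NS	ns1.google.com.
(... and finally the A record, from google.com's own nameservers ...)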
dealing with hidden systems
A couple of ideas for how to deal with hidden systems:
- just teaching people what the hidden systems are makes a huge difference. For a long time I had no idea that my computer had many different DNS libraries that were used in different situations and I was confused about this for literally years. This is a big part of my approach.
- with Mess With DNS we tried out this “fishbowl” approach where it shows you some parts of the system (the conversation with the resolver and the authoritative nameserver) that are normally hidden
- I feel like it would be extremely cool to extend DNS to include a “debugging information” section. (edit: it looks like this already exists! It’s called Extended DNS Errors, or EDE, and tools are slowly adding support for it.)
Extended DNS Errors seem cool
Extended DNS Errors are a new way for DNS servers to provide extra debugging information in DNS responses. Here’s an example of what that looks like:
$ dig @8.8.8.8 xjwudh.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 39830
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
; EDE: 12 (NSEC Missing): (Invalid denial of existence of xjwudh.com/a)
;; QUESTION SECTION:
;xjwudh.com. IN A
;; AUTHORITY SECTION:
com. 900 IN SOA a.gtld-servers.net. nstld.verisign-grs.com. 1690634120 1800 900 604800 86400
;; Query time: 92 msec
;; SERVER: 8.8.8.8#53(8.8.8.8) (UDP)
;; WHEN: Sat Jul 29 08:35:45 EDT 2023
;; MSG SIZE rcvd: 161
Here I’ve requested a nonexistent domain, and I got the extended error EDE: 12 (NSEC Missing): (Invalid denial of existence of xjwudh.com/a). I’m not sure what that means (it’s some DNSSEC Thing), but it’s cool to see an extra debug message like that.
I did have to install a newer version of dig to get the above to work.
confusing tools
Even though a lot of DNS stuff is hidden, there are a lot of ways to figure out what’s going on by using dig.
For example, you can use dig +norecurse to figure out if a given DNS resolver has a particular record in its cache. 8.8.8.8 seems to return a SERVFAIL response if the response isn’t cached.
here’s what that looks like for google.com:
$ dig +norecurse @8.8.8.8 google.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11653
;; flags: qr ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;google.com. IN A
;; ANSWER SECTION:
google.com. 21 IN A 172.217.4.206
;; Query time: 57 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Jul 28 10:50:45 EDT 2023
;; MSG SIZE rcvd: 55
and for homestarrunner.com:
$ dig +norecurse @8.8.8.8 homestarrunner.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 55777
;; flags: qr ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;homestarrunner.com. IN A
;; Query time: 52 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Jul 28 10:51:01 EDT 2023
;; MSG SIZE rcvd: 47
Here you can see we got a normal NOERROR response for google.com (which is in 8.8.8.8’s cache) but a SERVFAIL for homestarrunner.com (which isn’t). This doesn’t mean there’s no DNS record for homestarrunner.com (there is!), it’s just not cached.
But this output is really confusing to read if you’re not used to it! Here are a few things that I think are weird about it:
- the headings are weird (there’s ->>HEADER<<-, flags:, OPT PSEUDOSECTION:, QUESTION SECTION:, ANSWER SECTION:)
- the spacing is weird (why is there no newline between OPT PSEUDOSECTION and QUESTION SECTION?)
- MSG SIZE rcvd: 47 is weird (are there other fields in MSG SIZE other than rcvd? what are they?)
- it says that there’s 1 record in the ADDITIONAL section but doesn’t show it; you have to somehow magically know that the “OPT PSEUDOSECTION” record is actually in the additional section
In general dig’s output has the feeling of a script someone wrote in an ad hoc way that grew organically over time, not something that was intentionally designed.
dealing with confusing tools
some ideas for improving on confusing tools:
- explain the output. For example I wrote how to use dig explaining how dig’s output works and how to configure it to give you a shorter output by default
- make new, more friendly tools. For example for DNS there’s dog and doggo and my dns lookup tool. I think these are really cool but personally I don’t use them because sometimes I want to do something a little more advanced (like using +norecurse) and as far as I can tell neither dog nor doggo supports +norecurse. I’d rather use 1 tool for everything, so I stick to dig. Replacing the breadth of functionality of dig is a huge undertaking.
- make dig’s output a little more friendly. If I were better at C programming, I might try to write a dig pull request that adds a +human flag to dig that formats the long form output in a more structured and readable way, maybe something like this:
$ dig +human +norecurse @8.8.8.8 google.com
HEADER:
opcode: QUERY
status: NOERROR
id: 11653
flags: qr ra
records: QUESTION: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
QUESTION SECTION:
google.com. IN A
ANSWER SECTION:
google.com. 21 IN A 172.217.4.206
ADDITIONAL SECTION:
EDNS: version: 0, flags:; udp: 512
EXTRA INFO:
Time: Fri Jul 28 10:51:01 EDT 2023
Elapsed: 52 msec
Server: 8.8.8.8:53
Protocol: UDP
Response size: 47 bytes
This makes the structure of the DNS response more clear – there’s the header, the question, the answer, and the additional section.
And it’s not “dumbed down” or anything! It’s the exact same information, just formatted in a more structured way. My biggest frustration with alternative DNS tools is that they often remove information in the name of clarity. And though there’s definitely a place for those tools, I want to see all the information! I just want it to be presented clearly.
We’ve learned a lot about how to design more user friendly command line tools in the last 40 years and I think it would be cool to apply some of that knowledge to some of our older crustier tools.
dig +yaml
One quick note on dig: newer versions of dig do have a +yaml output format which feels a little clearer to me, though it’s too verbose for my taste (a pretty simple DNS response doesn’t fit on my screen).
weird gotchas
DNS has some weird stuff that’s relatively common to run into, but pretty hard to learn about if nobody tells you what’s going on. A few examples (there are more in some ways DNS can break):
- negative caching! (which I talk about in this talk) It took me probably 5 years to realize that I shouldn’t visit a domain that doesn’t have a DNS record yet, because then the nonexistence of that record will be cached, and it gets cached for HOURS, and it’s really annoying. (there’s a quick way to see this in action after this list)
- differences in getaddrinfo implementations: until early 2023, musl didn’t support TCP DNS
didn’t support TCP DNS - resolvers that ignore TTLs: if you set a TTL on your DNS records (like “5 minutes”), some resolvers will ignore those TTLs completely and cache the records for longer, like maybe 24 hours instead
- if you configure nginx wrong (like this), it’ll cache DNS records forever.
- how ndots can make your Kubernetes DNS slow
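Here’s one way to watch negative caching happen (a sketch, reusing the nonexistent domain from earlier; the exact numbers you see will vary): query a resolver for a name that doesn’t exist, and look at the SOA record in the AUTHORITY section. Its TTL is roughly how long the NXDOMAIN will stay cached, and if you repeat the query you can watch it count down:
$ dig @8.8.8.8 xjwudh.com | grep SOA
com.			900	IN	SOA	a.gtld-servers.net. nstld.verisign-grs.com. 1690634120 1800 900 604800 86400
$ # run it again a minute later: the 900 will have counted down,
$ # because the "this domain doesn't exist" answer is cached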
dealing with weird gotchas
I don’t have as good answers here as I would like, but knowledge about weird gotchas is extremely hard won (again, it took me years to figure out negative caching!) and it feels very silly to me that people have to rediscover them for themselves over and over and over again.
A few ideas:
- It’s incredibly helpful when people call out gotchas when explaining a topic. For example (leaving DNS for a moment), Josh Comeau’s Flexbox intro explains this minimum size gotcha which I ran into SO MANY times for several years before finally finding an explanation of what was going on.
- I’d love to see more community collections of common gotchas. For bash, shellcheck is an incredible collection of bash gotchas.
One tricky thing about documenting DNS gotchas is that different people are going to run into different gotchas – if you’re just configuring DNS for your personal domain once every 3 years, you’re probably going to run into different gotchas than someone who administers DNS for a domain with heavy traffic.
A couple more quick reasons:
infrequent exposure
A lot of people only deal with DNS extremely infrequently. And of course if you only touch DNS every 3 years it’s going to be harder to learn!
I think cheat sheets (like “here are the steps to changing your nameservers”) can really help with this.
it’s hard to experiment with
DNS can be scary to experiment with – you don’t want to mess up your domain. We built Mess With DNS to make this one a little easier.
that’s all for now
I’d love to hear other thoughts about what makes DNS (or your favourite mysterious technology) hard to learn.