Reading List
The most recent articles from a list of feeds I subscribe to.
Mozilla wants its documentation to gaslight you

Mozilla is one of the most important companies on the Internet. For a very long time, they have represented the only real competitor that Microsoft and Google have had as far as web browsers go. Mozilla is widely seen as a force for good by vast numbers of the developer community, but they seem to be torching all that goodwill by just giving up.
One of their most critical resources is the Mozilla Developer Network (MDN). This documents every HTML, CSS, and JavaScript feature so that developers of all skill levels can understand and implement things. To say this is widely used is an understatement. I'd be willing to bet that it's used by developers at Microsoft, Facebook, Google, Amazon, Apple, and all Fortune 500 companies. I personally use MDN more than basically any other resource for learning how HTML, CSS, and JavaScript features work. It is critical load-bearing infrastructure on the Internet.
Recently they added an AI Help feature, and I'm not convinced it was a good idea. One of the greatest assets that Mozilla has in the MDN team is the technical writers who make fantastic documentation, examples, and breakdowns of every single feature in browsers. This takes an unimaginable amount of work and has led to one of the best resources possible. So obviously, we need to go and replace all of that with the new technical writer that never sleeps, eats, has kids, goes on vacation, or gets burnout: ChatGPT. This opens up the possibility of turning this source of joy and creativity into a source of a new philosophical horror: gaslighting as a service.
In the interest of noting my biases: I found out about this issue after being mentioned on Mastodon in a post pointing to a GitHub thread. I was taken aback at first, but I honestly do see how this could let skilled human writers speed up, in the way that AI lets you avoid the "blank canvas problem".
I've been meaning to write more about this, but the basic idea is that for many people there is nothing more terrifying than a blank canvas, empty editor frame, or camera sitting on a desk. One of the first things you learn as an artist is to use the blank canvas as an asset and get over the fear of it by starting with something, anything to help you get started. Even a single line or circle to position things around.
I think that tools like ChatGPT could legitimately help people get past that "blank canvas" problem by giving them something, anything to start from. When you combine that tool with the genuine skill that the MDN docs team has, this could lead to amazing things.
I don't see that as possible with Mozilla's current leadership and business model. I don't know if Mozilla's business model ever made sense; it makes a lot more sense for something closer to a nonprofit than for a commercial entity. Certainly not for something that spends five million dollars per year on the CEO's paycheck. We're just watching Mozilla circle down the drain towards irrelevance.
In 2017 Mozilla released Firefox Quantum, a ground-up reimagining of what using Firefox really means. XUL was excised out of the browser, Rust was introduced to have truly memory safe code in the browser, browser speed was drastically increased, and overall it's one of the best updates that Firefox has ever had or likely will have. I checked the LinkedIn profiles of the major players that made Quantum possible and found out that nearly none of them still work at Mozilla.
There were claims that they were victims of layoffs in 2020 because apparently people weren't using web browsers in the age of COVID. During the age of COVID. When everyone was locked down. At home. Bored out of their minds. With nothing to really entertain them but. The Internet. Which you access with browsers.
Other big-ticket projects from Mozilla are no longer being made in-house. Rust is the biggest one. Rust was made as a language to replace C++ in Firefox and let the browser have actually memory-safe code, so that we don't run into the perennial C++ landmines that lead to remote code execution and trigger panic updates. Rust is still used in Firefox (and soon Chrome too), but it's no longer a Mozilla project.
Servo was the browser engine to surpass Gecko. It was a ground-up reimagining of how browser engines could work, taking advantage of native GPU hardware features as much as possible. This would allow the browser to render at literal thousands of frames per second and absolutely sip battery power in the process. This would have been an even more profound reimagining of what a browser engine could be, one that would make even the Quantum update look like a mere recolor in comparison.
But no, the entire Servo team was sacked and Firefox limps on with Gecko. Its only real competitors are WebKit and Blink, which are both forks of forks of KDE's KHTML library.
It really hurts to see Mozilla fall and to watch other browser engines go the way of the dodo. I hate living in a world where the only real option for web browsers is Google Chrome. I realize that browser engines are mind-numbingly complicated because they have to be, but good lord it hurts to see diversity in implementation fade to nil because investors want an infinite money generator.
This is the legacy that our generation will leave. One where true genius and innovation are swept under the rug because they couldn't make a line go up fast enough to make line-obsessed fools only focus on the line.
I'm sad to see such a critical web resource fall prey to hype cycles that rot out the best part of MDN: the technical writers. The technical writing at MDN is seriously some of the best you can get in the industry. I'd pay for a printed version of MDN. But now with this automated gaslighting as a service, I don't think I can justify paying for "MDN Plus" to find new and interesting ways for ChatGPT to lie to me about what CSS features can do. If I wanted to do that, I could just ask Mimi.
Array.prototype.with

Q: …to replace a value in an array that I pass to a function?

A: Array.prototype.with is not a valid JavaScript feature. The Array.prototype object does not include a method called with for replacing values in an array.

If you would like to replace values in an array, you can use the standard JavaScript array methods such as splice, map, or forEach. These methods can help you modify specific elements within an array or create a new array with replaced values.

If you provide more details on what you are trying to achieve, such as the specific value you want to replace and the conditions for replacement, I would be happy to provide you with a code example using built-in array methods.
Array.prototype.with is a real method that shipped with ES2023, and MDN's own documentation covers it. Imagine what developers feel when they get told something exists, then, upon trying to get more information about it, get told that it does not in fact exist. On the same page.
If this isn't gaslighting as a service, what is? How will the next generation of developers get trained if they are constantly misled by the shadows of meaning we used to displace human authorship because it's expensive to make sure people can afford to eat?
I don't know how I feel about email

So recently I've been trying to use email and I just don't know how I should feel about it. There are a lot of core problems that seem to seep into every part of the protocol and user experience, and the whole thing leaves me uncertain about its future.
I have to choose a server
One of the biggest things that confuses me is that I have to choose a server to use email. I'm not really given explanations of what each of these servers means or what the differences are, but I have heard that some servers can't interact with each other due to petty administration disputes. I don't know which server I should pick, but my phone keeps trying to get me to use iCloud mail.
A lot of my friends use this Gmail thing run by Google, but I don't know if I really want to put Google back in my life after I cut it out.
Also apparently there's no real way to migrate between email servers either, so whatever I choose is going to be my permanent home. Lots of online applications will tie into your email address (some even make your email address a primary key for your account with no way to change it!), so whatever email address I choose will probably have to be used forever. I can forward emails from an old account to a new account, but Google is going to delete inactive accounts and I can't imagine that other providers aren't going to follow suit.
Oh and if that server goes down and stays down, I lose access to any of my emails that I haven't downloaded yet.
I can't run my own server
Okay, so if I have to use a server, can't I just run my own server? I have an AnalogBrocean and I can set up an Ubuntu instance or something to act as a mailserver. It shouldn't be that complicated, right?
What do you mean spam is a thing? Doesn't the spamfilter take care of that?
What do you mean email servers don't come with spamfilters by default?
What do you mean that the default configuration of email servers means that I have to vigilantly monitor everything to ensure that bots don't send emails and destroy my reputation at unreasonable hours of the night?
What do you mean that if someone else on my server chooses a bad password, bots will figure it out and start sending a torrent of spam?
What do you mean that IP addresses aren't a reliable way to detect who people are?
What do you mean that the spamfilter software takes 12 gigabytes of RAM, 4 CPU cores, and 35 GB of local space in order to work?
What do you mean I can do everything right but some AI model will get angry at me and all of the efforts I do to "fix" it are wasted time?
I don't know which client to use
Once I have my email account (and maybe my own email server if I really hate myself), I need to connect to it with a client. Every OS comes with a mail client, but all of them suck (Apple Mail seems decent though?). If I open the App Store and search "email client" I get hundreds of results. Same in the Google Play store. More if I look for clients for Windows. Even more if I look for clients for my Steam Deck. I don't know enough to know if these mail clients are legit or not. Which ones are reputable? Which ones are made by development teams I can trust? Which ones will support the kinds of emails I will read?
Oh and even better, apparently Gmail is starting to lock down access to your emails with only a username and password in the interest of "security", so you have to go through convoluted hoops in order to check your email in something that isn't the Gmail web UI. It is literally impossible for me to check my work mail in something like Aerc.
And then comes the issue of clients and message formatting. When I compose and send an email, I type text in the box, drag in attachments, and maybe bold important things. Then I try to send my email to a friend and they tell me they can't read it and send back a bunch of HTML garbage. I don't get it. Why does the format of messages matter? I just want to send my emails and have my friends read them, but then I have to dig through confusing or impossible to set configuration options to only send things in "plain text" instead of formatted messages. My iPad doesn't have the option to send emails in "plain text".
Apparently some smaller email servers will defederate with yours if you send HTML emails too, so if my client fucks something up in the eyes of another server admin, it could get my server defederated. What the hell kind of user experience is that?
Mailing lists
So email has groups called mailing lists, but using them with many email clients is an exercise in futility. Apparently some of the biggest open source projects in the world are developed on these things? Every mailing list has their own unspoken rules on how you're supposed to use your client, reply to emails, and more.
If you break these rules, you're threatened with banning. It's bullshit. I don't know how people deal with this. I just want to ask questions about the Linux kernel, I don't want to have to redo my entire setup on my iPad just to send clarifying questions back when I'm at a coffee shop. I don't care about "plain text", I care about getting the answer to my question.
Maybe this is why Microsoft was looking at proposing that the Linux kernel move to something more amenable to the modern age. They got shot down for this of course, but how can we have nice things when "simple configuration options" are spread across a billionty email clients?
I give up on email. If you want to talk with me, you'll need to project to my astral sigil. You can find it in the upper left hand corner of the website. There's a villa in the northwest corner of the island. I'll be waiting there.
HVE-BC1750-0001: Deceptive Information Disclosure Vulnerability in Human Interaction Protocols

In this report, we describe a discovered remote code execution vulnerability in neural language processing systems. These systems, currently in active use by major social media networks including but not limited to Twitter, Facebook, and LinkedIn, allow for the crafting of a carefully selected message that allows successful attackers to gain control over the target victim.
We have demonstrated evidence of this proposed attack to be currently in active use, and be unpatched in current implementations. Additionally, we have found evidence this attack has been employed successfully in the past, affecting a copper ore processing facility's communication systems.
This technique is known to be wormable, with common cases causing spread across networks and social groups. This geometric spread can lead to arbitrary philosophical execution on target systems, which will result in denial of service in all cases.
The vulnerability arises from the intentional distortion of messages, deviating from the expected interaction protocol. It can be classified, partially, as a social engineering attack, whereby an individual purposefully distorts ground truths, fabricating false protocol axioms, to manipulate the perceptions of targets.
As the vulnerability lies within human interaction protocols, rather than vulnerable systems, it can be classified as a supply chain issue. As patching the vulnerable dependency is, as of right now, infeasible, and potentially undesirable, software developers, social media platforms, and communication service providers can implement user interfaces and algorithms that alleviate the unpatched vulnerability, until a proper fix can be implemented.
Nnaki Systems (the vendor of the vulnerable components of the human instrument) has not yet released a patch to rectify this vulnerability, with their CEO Anu claiming that this is "an intentional feature" and releasing the following statement to shareholders:
Dear valued customers and stakeholders,
I would like to address recent claims regarding the alleged vulnerability, HVE-BC1750-0001, associated with our product. After a thorough internal investigation conducted by our expert security team, we firmly deny the existence of any such vulnerability in our system.
While we appreciate concerns raised by certain individuals or entities, it is important to emphasize that our product has undergone rigorous testing and adheres to industry-leading security standards. We maintain the utmost confidence in the robustness and reliability of our technology.
Nnaki Systems has always been committed to prioritizing the security and privacy of our users. We stand by the integrity of our product, which has been trusted by countless customers worldwide. The claims being made are baseless and lack substantial evidence.
We encourage all our users to remain assured of the safety and stability of our product. Our dedicated support team is available to address any concerns or questions you may have. We value your trust and will continue to deliver cutting-edge solutions with unwavering commitment.
Thank you for your continued support.
Sincerely, Anu - CEO, Nnaki Systems
Users are advised to take reasonable action to protect their systems from these specially crafted messages and prevent spreading exploit messages to others. It may be advisable to delete social media applications such as LinkedIn, Twitter, and Threads to avoid being exploited.
This report would be impossible without the efforts of Layl Bongers. Many thanks to her for alerting us at Sovereign Integral Solutions so that we could issue this bulletin and allow users to protect themselves against this glaring flaw.
Of course the network can be a filesystem

One of the fun parts about doing developer relations work is that you get to write and present interesting talks to programming communities. Recently I traveled to Berlin to give a talk at GopherCon EU. It was my first time in Berlin and I enjoyed my time there (more details later). This year at GopherCon EU I gave a talk about WebAssembly. Specifically, how to use WebAssembly in new and creative ways by abusing facts about how Unix works. During that talk I covered a lot of the basic ideas of Unix's design (file I/O is device I/O, the filesystem is for discovering new files, programs should be filters) and then put all the parts together into a live demo that I don't think I explained as well as I could have.
Today I'm going to go into more details about how that live demo worked, in ways that I couldn't during the talk because I was on a time limit.
Dramatis Personae
I glossed over this diagram in the talk, but here's the overall flowchart of all the moving parts in my live demo (for those of you on screen readers, skip this image description because I'm going to explain things in detail):
There are two main components in this demo: yuechu and aiyou (extra credit if you can be the first person to tell me the origin of those names). yuechu is an echo server, but it takes all lines of user input and feeds them into a WebAssembly program. The output of that WebAssembly program is fed back to the user. You can change the behavior of yuechu by changing the WebAssembly program that it uses as a filter.

aiyou is a WebAssembly runtime that exposes the network as a filesystem. It doesn't really do anything special and doesn't pass through some fundamentally assumed things like command line arguments and other filesystem mounts. It really is just intended to act as an echo client for my demo. The most exciting part of it is the ConnFS type, which exposes the network as a filesystem.
Otherwise most of this is really boring Rust and Go code. The really exciting part is that it's embedding Rust code into a Go process without having to use the horrors of CGo.
ConnFS
In Wazero, you can mount a filesystem to a WASI program. You can also use one of the Wazero library types to mount multiple filesystems into the same thing, namespaced much like they are in the Linux kernel. In Linux these filesystems are usually either implemented by kernel drivers, or programs that use FUSE to act as a filesystem as far as the kernel cares.
In Go, we have io/fs.FS as a fundamental building block for making things that quack like filesystems. io/fs is fairly limited in most cases, but the ways that Wazero uses it can make things fun. One of the main ways that io/fs falls over in the real world is that files opened from an io/fs filesystem don't normally have a .Write method exposed.

However, an io/fs file is an interface. In Go, interfaces are views onto types so that you can expose the same API for different backend implementations (writing to another file, to standard out, or to a network connection). The File interface doesn't immediately look like it has a .Write method, but there's nothing that says there can't be a .Write method under the interface wrapper.

In Wazero, if you have your files implement the .Write call, they will just work. Write calls in WASI will just automagically get fed into your filesystem implementation.
In my talk I said that these methods are common to both sockets and files: open(), close(), read(), write(). So you can use this to shim filesystem operations over to network operations. I did exactly this with ConnFS in my demo program aiyou.
As an aside, you can use /dev/tcp in bash to do most of the same thing as ConnFS.

The server
All that's left in the stack is the echo server, which really is the boring part of this demo. The echo server listens on port 1997 (the significance of this number is an exercise for the reader and definitely not the result of typing a random four digit number that was free on my development box) and every time a connection is accepted it tries to read a line of input from the other side. When it gets a line of input, it runs that through the WebAssembly program and returns the results to the user.
That's about it really.
Programs are like functions for your shell
This lets you use programs as functions. Stdin and flags become args, stdout becomes the result. I go into more detail about this in this talk based on this article. This is something we use at Tailscale for our fediverse bot. Specifically for parsing Mastodon HTML.
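As a sketch of that idea (assuming a Unix-like system with tr on the PATH; the helper name is mine), here's a program being called like a function from Go: stdin is the argument, stdout is the return value.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// runFilter "calls" a program like a function: stdin and flags are the
// arguments, stdout is the result.
func runFilter(name string, stdin string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	cmd.Stdin = strings.NewReader(stdin)
	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// tr is a classic Unix filter; here it behaves like an
	// uppercase() function.
	got, err := runFilter("tr", "hello", "a-z", "A-Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```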
So realistically, if you can use something as stupid as the network as a filesystem, you can use anything as a filesystem. The cloud's the limit! But do keep in mind that any complicated abomination of a code-switched mess between Go and Rust can and will have a cost and if you use this ability irresponsibly I retain the right to take that power away from you. Don't ask how I'd do it.
Some pictures
I've never been to Berlin before and I took some time to take pictures with my DSLR. I think the results are pretty good. I've attached some of my favorites:



It's been great fun. I'd love to come back to Berlin in the future. I'm considering getting some of the better photos printed and might sign some to send to my patrons. Let me know what you think!
I'm going to upload more of the photos to my blog later, I need to invent a new "photo gallery" feature for my blog engine. I could use something like Instagram or Google Drive for this, but I really like the tactility of having everything on my infrastructure. I'll figure something out.
Time is not a synchronization primitive
Programming is so complicated. I know this is an example of the nostalgia paradox in action, but it really feels like everything has gotten so much more complicated over the course of my career. One of the biggest sources of that complexity is that working with other people is always super complicated.
One of the axioms you end up working with is "assume best intent". This has sometimes been used as a dog-whistle to defend pathological behavior, but there really is a good idea at its core: everyone is trying to do the best they can given their limited time and energy, so it's usually better to start from the position of "the system that allowed this failure to happen is the thing that must be fixed".
However, we work with other people, and this can result in things that troll you by accident. One of the biggest sources of friction is when people end up creating tests that can fail for no reason. To make this even more fun, this will end up breaking people's trust in CI systems. This lack of trust trains people that it's okay for CI to fail because sometimes it's not your fault. This leads to hacks like the flaky decorator in Python, which retries failing tests until they pass. Or even worse, it trains people to merge broken code to main because they're trained that sometimes CI just fails but everything is okay.
Today I want to talk about one of the most common ways that I see things fall apart. This has caused tests, production-load-bearing bash scripts, and normal application code to be unresponsive at best and randomly break at worst. It's when people use time as a synchronization mechanism.
Time as an effect
I think that the best way to explain this is to start with a flaky test that I wrote years ago and break it down to explain why things are flaky and what I mean by a "synchronization mechanism". Consider this Go test:
func TestListener(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	go func() {
		lis, err := net.Listen("tcp", ":1337")
		if err != nil {
			t.Error(err)
			return
		}
		defer lis.Close()
		for {
			select {
			case <-ctx.Done():
				return
			default:
			}
			conn, err := lis.Accept()
			if err != nil {
				t.Error(err)
				return
			}
			_ = conn // do something with conn
		}
	}()

	time.Sleep(150 * time.Millisecond)

	conn, err := net.Dial("tcp", "127.0.0.1:1337")
	if err != nil {
		t.Error(err)
		return
	}
	_ = conn // do something with conn
}
This code starts a new goroutine that opens a network listener on port 1337 and then waits for it to be active before connecting to it. Most of the time, this will work out okay. However there's a huge problem lurking at the core of this: This test will take a minimum of 150 milliseconds to run no matter what. If the logic of starting a test server is lifted into a helper function then every time you create a test server from any downstream test function, you spend that additional 150 milliseconds.
Additionally, the TCP listener is probably ready near-instantly, and worse, if you run multiple tests in parallel they'll all fight over that one hardcoded port and everything will fail randomly.
This is what I mean by "synchronization primitive". The idea here is that by having the main test goroutine wait for the other one to be ready, we are using the effect of time passing (and the Go runtime scheduling/executing that other goroutine) as a way to make sure that the server is ready for the client to connect. When you are synchronizing the state of two goroutines (the client being ready to connect and the server being ready for connections), you generally want to use something that synchronizes that state, such as a channel or even by eliminating the need to synchronize things at all.
Consider this version of that test:
func TestListener(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	lis, err := net.Listen("tcp", ":0")
	if err != nil {
		t.Error(err)
		return
	}

	go func() {
		defer lis.Close()
		for {
			select {
			case <-ctx.Done():
				return
			default:
			}
			conn, err := lis.Accept()
			if err != nil {
				t.Error(err)
				return
			}
			_ = conn // do something with conn
		}
	}()

	conn, err := net.Dial(lis.Addr().Network(), lis.Addr().String())
	if err != nil {
		t.Error(err)
		return
	}
	_ = conn // do something with conn
}
Not only have we gotten rid of that time.Sleep call, we also made it support running multiple instances of the server in parallel! This code is ultimately much more robust than the old test ever was and will easily scale for your needs. If each of your tests took 600 ms to run, cutting out that one 150 ms sleep removes 25% of the wait!
Putting it into practice
So let's put this into practice and make this kind of behavior more difficult to cause. Let's add a roadblock against using time.Sleep in tests with the nosleep linter. nosleep is a Go linter that checks for the presence of time.Sleep in your test code and fails the run if it finds one. That's it. That's the whole tool. You can run it against your Go code by installing it with go install:
go install within.website/x/linters/cmd/nosleep@latest
And then you can run it with the nosleep command:
nosleep ./...
I do recognize that sometimes you actually do need to use time as a synchronization method because god is dead and you have no other option. If this does genuinely happen, you can use the magic comment //nosleep:bypass here's a very good reason. If you don't put a reason after the bypass directive, the magic comment won't work.
Let me know how it works for you! Add it to your CI config if you dare.