Reading List

The most recent articles from a list of feeds I subscribe to.

Ebb and flow

I’ve been thinking a lot about work-life balance lately. I used to see it as a balancing scale that should remain in the same position at all times. However, I believe a better mental model is to see it as ebb and flow.

There are times you need to focus on work. There are times you need to focus on life. There are times you can focus on either. Transitioning between these states can happen in a forced way (circumstance), or you can slowly move towards a new area like ebb and flow.

The trick is to learn to pace yourself moving in one direction, or you’ll become overwhelmed. Don’t let ebb become drought, or flow become flood. If you try to maintain an arbitrary, exact balance at all times, you risk not moving forward on any focus area.

This can be applied to other places as well, like working on a main project vs a side project, or maintaining essential vs accidental complexity in a software project.

Use Blink to execute something once and only once

Our Blink package is marketed as a caching solution to memoize data for the duration of a web request. Recently, we came upon another use case for the package: to execute something once and only once.

Blink looks just like Laravel’s cache. You can store things and retrieve them. The Blink cache is stored on a singleton object, so it’s destroyed at the end of the request (similar to Laravel’s array cache driver).

$orders = blink('orders', fn () => Order::all()); // runs the query once and caches the result
$orders = blink('orders'); // returns the cached result

Now for a different use case. Say we want to generate and store PDF invoices for an order. We want to regenerate them whenever an order or an order line changes. We can use Eloquent events to determine when to dispatch a job.

Order::saved(fn (Order $order) =>
    dispatch(new GenerateInvoicePdf($order))
);

OrderLine::saved(fn (OrderLine $orderLine) =>
    dispatch(new GenerateInvoicePdf($orderLine->order))
);

If someone modifies an order and an order line in the same request, GenerateInvoicePdf will be dispatched twice, clogging the queue with unnecessary work.

Instead, we want to make sure the job only gets dispatched once. Here’s where Blink comes in. Instead of using Blink to store something, we’ll use it to execute something only once.

Order::saved(fn (Order $order) =>
    blink('generate-invoice-pdf:' . $order->id, fn () =>
        dispatch(new GenerateInvoicePdf($order))
    )
);

OrderLine::saved(fn (OrderLine $orderLine) =>
    blink('generate-invoice-pdf:' . $orderLine->order->id, fn () =>
        dispatch(new GenerateInvoicePdf($orderLine->order))
    )
);

Now it doesn’t matter how many saved events are triggered; the job will only be dispatched once.
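The once-only behaviour boils down to request-scoped memoization. Here’s a minimal plain-PHP stand-in to show the idea; the real package stores values on a singleton object, and the blink() function below is a simplified sketch, not the package’s implementation.

```php
<?php

// Simplified stand-in for Blink's request-scoped cache: values live in
// memory and disappear when the request (here: the script) ends.
$blinkStore = [];

function blink(string $key, ?callable $callback = null)
{
    global $blinkStore;

    // Without a callback, just return whatever was stored earlier.
    if ($callback === null) {
        return $blinkStore[$key] ?? null;
    }

    // The callback only runs the first time the key is seen.
    if (! array_key_exists($key, $blinkStore)) {
        $blinkStore[$key] = $callback();
    }

    return $blinkStore[$key];
}

$dispatched = 0;

// Two saved events in the same request, same key: one dispatch.
blink('generate-invoice-pdf:1', function () use (&$dispatched) {
    $dispatched++;
});
blink('generate-invoice-pdf:1', function () use (&$dispatched) {
    $dispatched++;
});

echo $dispatched; // 1
```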

To clean things up, I like keeping cache calls that use the same key in one place.

class Order extends Model
{
    public function generateInvoicePdf(): void
    {
        blink('generate-invoice-pdf:' . $this->id, fn () =>
            dispatch(new GenerateInvoicePdf($this))
        );
    }
}

Order::saved(fn (Order $order) =>
    $order->generateInvoicePdf()
);

OrderLine::saved(fn (OrderLine $orderLine) =>
    $orderLine->order->generateInvoicePdf()
);

This won’t work when application state persists across requests, like in a Horizon worker or under Laravel Octane. In that case, you can instead store a should_regenerate boolean on the orders table and schedule a command that dispatches the GenerateInvoicePdf jobs.
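For illustration, the flag-and-sweep fallback can be sketched in plain PHP. The array stands in for the orders table and the loop for the scheduled command; this is an assumed shape, not code from any package.

```php
<?php

// Stand-in for the orders table with a should_regenerate flag.
$orders = [
    1 => ['should_regenerate' => false],
    2 => ['should_regenerate' => false],
];

// Saving an order (or one of its lines) only flips the flag,
// so repeated saves are naturally idempotent.
$orders[1]['should_regenerate'] = true;
$orders[1]['should_regenerate'] = true; // a second save changes nothing

// The scheduled command then dispatches one job per flagged order
// and resets the flag.
$dispatched = [];
foreach ($orders as $id => $order) {
    if ($order['should_regenerate']) {
        $dispatched[] = $id; // dispatch(new GenerateInvoicePdf(...)) in the real app
        $orders[$id]['should_regenerate'] = false;
    }
}

echo count($dispatched); // 1
```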

Adding backlinks to a GitHub wiki

Backlinks, or bi-directional links, are becoming table stakes for productivity apps since being popularized by Roam. It’s a simple concept: when you create a link from page A to page B, page B will also reference page A. With traditional hyperlinks, page B wouldn’t know about page A unless you explicitly link it back.

Backlinks allow a graph of knowledge to grow organically. When you create a doc for Orders, and mention Products, a Products page will be created or updated with a backlink. Even when not actively documenting Products, readers can get an idea of what they entail because of the linked references.

For example, Orders might reference Products.

Orders contain one or more [[Products]].

A system that supports backlinks would add a reference to the Products page.

## Backlinks
- [[Orders]]
- Orders contain one or more [[Products]].

Sometimes we use a GitHub wiki for documentation. Documenting a large system can be difficult because there are a lot of interdependencies, and it’s not always obvious where something belongs. This is a challenge for the writer (where does this go?) and for the reader (where can I find this?). Backlinks help out because they ensure something is mentioned in multiple places where relevant.

GitHub wikis are based on Markdown files. They’re also a plain git repository that supports GitHub Actions, so you can automate tasks. Andy Matuschak created note-link-janitor, a script that crawls files and adds backlinks to your Markdown notes. A fork of the script is also available as a GitHub Action, which you can configure to run whenever your wiki is updated.

Note-link-janitor indexes all your wiki-style links and creates a References section at the bottom of each page with backlinks.
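To get a feel for what the script does, here’s a toy backlink index in PHP. Note-link-janitor itself is a Node script; the pages array and regex below are illustrative only.

```php
<?php

// Two wiki pages; Orders links to Products with a wiki-style link.
$pages = [
    'Orders'   => 'Orders contain one or more [[Products]].',
    'Products' => 'A product has a name and a price.',
];

// Collect every [[Target]] link, then invert the map:
// for each target page, remember which pages link to it.
$backlinks = [];
foreach ($pages as $page => $content) {
    preg_match_all('/\[\[([^\]]+)\]\]/', $content, $matches);
    foreach ($matches[1] as $target) {
        $backlinks[$target][] = $page;
    }
}

echo implode(', ', $backlinks['Products']); // Orders
```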

To enable note-link-janitor, create a workflow file.

# .github/workflows/note-link-janitor.yml

name: Note Link Janitor

on:
  gollum:
  workflow_dispatch:

jobs:
  update:
    runs-on: self-hosted

    steps:
      - uses: actions/checkout@v2
        with:
          repository: ${{github.repository}}.wiki
      - uses: sander/note-link-janitor@v5

This workflow will run whenever the wiki is updated (the gollum event), or when triggered manually (workflow_dispatch). Whenever you use wiki-style links in your documentation, the Action will push a commit with newly generated backlinks.

Finding out which ports are in use

Sometimes you want to spin up a process, but the port it wants to bind to is already in use. Or a process isn’t listening on the port you expected. lsof is a debugging lifesaver in these situations.

lsof -i -P -n | grep LISTEN

This will list all processes listening on a port.

nginx 514 sebastiandedeyne 7u IPv4 0x2718856ef232ee5b 0t0 TCP 127.0.0.1:80 (LISTEN)
nginx 514 sebastiandedeyne 8u IPv4 0x2718856ef233026b 0t0 TCP 127.0.0.1:443 (LISTEN)
nginx 514 sebastiandedeyne 9u IPv4 0x2718856ef2330c73 0t0 TCP 127.0.0.1:60 (LISTEN)

If you want to find a process on a specific port, you can chain another grep.

lsof -i -P -n | grep LISTEN | grep 80
nginx 514 sebastiandedeyne 7u IPv4 0x2718856ef232ee5b 0t0 TCP 127.0.0.1:80 (LISTEN)

Alternatively, lsof -i :80 filters on the port directly, without the grep chain.

Leaner feature branches

In most projects, we use git flow to some extent — depending on the project and team size. This includes feature branches.

Feature branches (or sometimes called topic branches) are used to develop new features for the upcoming or a distant future release. When starting development of a feature, the target release in which this feature will be incorporated may well be unknown at that point. The essence of a feature branch is that it exists as long as the feature is in development, but will eventually be merged back into develop (to definitely add the new feature to the upcoming release) or discarded (in case of a disappointing experiment).

Working with a bigger team on a project with many interdependent features brings a new set of git challenges.

We’ve recently set up a new guideline: if it’s not directly tied to your feature, don’t put it in your feature branch.

In practice, our instinct seems to be “work on the feature branch until the feature is complete”, without thinking twice.

For example, say you’re working on a dynamic footer feature in a multi-tenant app. The footer contains the tenant’s address (among other things). You want the tenant to store the address on their settings page and pull that data into the footer. You created a feature/footer branch from develop.

While you could keep everything on the footer branch, your team members (and sometimes users) are better off if you branch out. Create a new feature/address branch from develop to add the address settings, merge it into develop, and finally, bring feature/footer back up to date with develop.
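The flow above can be sketched with plain git commands. The throwaway repo setup is only there to make the snippet self-contained; the branch names come from the example.

```shell
set -e

# Throwaway repo so the snippet runs anywhere; in a real project
# you'd already have a develop branch with history.
cd "$(mktemp -d)"
git init -q
git checkout -q -b develop
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit on develop"

# The main feature lives on its own branch.
git checkout -q -b feature/footer
git commit -q --allow-empty -m "wip: dynamic footer"

# Branch the derived feature from develop, not from feature/footer.
git checkout -q develop
git checkout -q -b feature/address
git commit -q --allow-empty -m "feat: address settings"

# Merge the derived feature into develop on its own...
git checkout -q develop
git merge -q --no-ff feature/address -m "Merge feature/address"

# ...then bring the main feature branch up to date with develop.
git checkout -q feature/footer
git merge -q develop -m "Merge develop into feature/footer"
```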

If other developers are building something that requires the same derived feature, they don’t have to wait for the main feature to be merged in to continue.

This also keeps PRs smaller and more focused, which makes reviews easier and faster to process, resulting in a tighter feedback loop.

Finally, if it turns out the main feature has a lot of rabbit holes, we can ship the derived feature on its own.

The hard part is identifying derived features as features in their own right, not as a bullet point in the main feature’s specs.