My progress on this massive Go to Rust rewrite

Dwayne Harris

I’ve been pretty obsessively working on my web architecture rewrite, so I’ve been completely neglecting posting lately. I was planning on saying a lot more about the rewrite as I worked on it, but I just… haven’t.

Part of the reason is that I wanted to wait until I published the source code before I started talking about it, but I’m still not sure when I plan to do that. Another reason is that it’s just been a lot of work, and I kept going at it without taking the time to pause and talk/write about it. Once I put a big plan like this together I tend to get a little too wrapped up in it.

So I still have weeks (months?) to go on this, but I really feel like I should push myself to write about it right now instead of finishing the whole thing silently like I’ve been doing.

So let me tell you a little bit about what I’ve been working on these past few months.

The Plan

Almost everything running here on this website (the web server, admin pages, RSS reader, DwayneBot, etc) has been written as one big Go app. I decided to rewrite the entire project in Rust and do the following:

  • Split all the services I’ve been building into separate apps
  • Change the type of database(s) being used
  • Improve user authentication to work better with both the website and APIs
  • Update the website design
  • Redesign, improve, and remove features

I even ended up making a few big architecture changes in the middle of the rewrite, including:

  • Moving from using a single Postgres database to multiple SQLite (and DuckDB) data stores
  • Changing the in-progress frontend web app (which is new to the Rust redesign) to a Rust/WebAssembly app

I decided to do all of this at once, which was basically a decision to rewrite the entire web project from scratch again (while learning Rust). That meant learning and using Rust libraries like the Actix web framework, the Tokio async runtime, the Tonic gRPC framework, and the Yew frontend web framework, and rebuilding all the features I’ve been adding to the site over the past year and a half, including stuff like contact form handling, markdown editing of posts, article comments, notes/drafts, webmentions, and multiple APIs.

This was obviously a very ambitious thing to do, and is actually turning out to be one of the biggest and most complicated projects I’ve worked on by myself.

I didn’t have a good estimate of how long it would take, but it’s been about 4 months of development so far, and I’m guessing I’m somewhere around 60% of the way to launching it. Right now (in my dev environment) I can log in and out, use the API, get responses from the new version of DwayneBot, and see all the content I imported into the new database, so it’s in pretty good shape so far. But there’s still a lot to go.

I’ll really try to write more about individual parts of the project as I get closer to launch. And I will definitely make almost all of the source code accessible on my Gitea server once I’m done making big architecture changes.

Goals for the rewrite

Building a project like this requires making infinite decisions about how to do things and what technologies to use. The easiest way for me to make those decisions is to have a few overall principles/goals to always keep in mind.

Of course the programming language (and frameworks) to use is one of those early decisions. Obviously, Rust is very popular these days, and hearing so much about it over the past few years led me to make the switch. There’s a big learning curve with the language, but I feel like it’s really paid off so far.

Rust is known for speed and memory efficiency/safety (which is now a marketing/selling point for all the new Rust apps coming out), so a lot of my goals for the project involve taking advantage of that. Scalability often comes along with speed and efficiency, so being able to scale out and handle large amounts of traffic is another goal. I mean, the current Go architecture can handle many, many times more hits than I regularly get right now, but I decided I never want the site to get taken down by a sudden spike in traffic after having a link posted somewhere like Reddit, so scalability is still a big one. So my goals, in order, are:

  1. Efficiency - In memory usage, CPU usage, database usage, network traffic, etc. This project is meant to be a foundation for me to build more off of over the years, so the core should be as efficient as possible.
  2. Scalability - Specifically being able to split services up onto separate servers. The server I’m using for Go is bigger than I need it to be. I want to use more, smaller servers for different parts of the project. And if I’m expecting more traffic, I can spin up bigger servers and move apps if I need to.
  3. Capability - I want to have a lot of really powerful and stable services here that I can rely on to build even more cool shit in the future. And I want to provide even more useful APIs for users.
  4. Speed - As usual, all of my web projects are meant to have as little latency as possible. Using Rust on my own servers means I probably don’t have to worry about speed/latency very much, but it’s still always high priority.

Every time I need to make another big tech/architecture decision, I think about these goals to figure out what option makes the most sense.

Rust Project Organization

The first thing I had to do was figure out how I wanted to organize this project. The current site is a project called website-go, which contains 266 Go files with 36,253 lines of code (and 214 HTML files, 1,040 lines of SQL, and 6,995 lines of JavaScript). I compile it into one executable and deploy it to my web server. (Note: For comparison, so far, at maybe 60% completion, the Rust project has 126 Rust files with 11,043 lines of code.)

I wanted to split this functionality into multiple projects/executables. So I made a Rust workspace and added the following projects to it:

  • analytics - My own version of Fathom/Google Analytics
  • api - The API server
  • backend - The core data service (originally the only part of the entire architecture that talked to the Postgres database; more on that below)
  • bot - All of the DwayneBot code
  • bot-communicator - A project that connects to IRC/Matrix and communicates with the bot project
  • db - Command line database import/export app
  • frontend-app - The frontend WebAssembly app
  • frontend-server - The main web server
  • lib - Shared library for all the other projects
  • reader - My own version of Miniflux/Google Reader

Some of these talk to each other using gRPC (via Tonic), so the workspace also contains a “proto” folder with all the files needed for that to happen.
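
The top-level Cargo.toml for a workspace like this is mostly just the member list; something like this (a sketch using the folder names above):

[workspace]
members = [
    "analytics",
    "api",
    "backend",
    "bot",
    "bot-communicator",
    "db",
    "frontend-app",
    "frontend-server",
    "lib",
    "reader",
]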

Splitting things up into multiple executables like this means I can put any of the projects on separate servers. And if any of the projects becomes overloaded or crashes or whatever, it won’t affect the others. And using a workspace is nice since Rust gets to build smaller executables while sharing the build output for all the dependencies the projects have in common (stuff like the futures lib, Actix, Chrono, Tonic, Serde, Tokio, etc).

The build times for the project are still kind of high, but luckily I don’t have to build the whole thing very often. But when I do… ugh.

Finished dev [unoptimized + debuginfo] target(s) in 5m 06s

5 minutes. 😬 My fresh Go builds (right after a go clean) take 30 seconds at most.

Anyway, once I decided on the project organization, I started focusing on the database stuff.

The Database(s)

The database is usually the core part of a web project, so I figured it was a good place to start, and a good way to start learning Rust by building a console app. The db project started as basically a command line migration manager. It let me create migrations (SQL files to update the database) and manage which version of the schema each environment is on.

The overall design at that point had all the different Rust projects talking to the backend project, which was the only one that communicated with the Postgres database. Something felt off about that though. It didn’t feel like I was getting much of the advantage of this new architecture when the backend project would end up handling every single database request from every part of the system, and could potentially be held up trying to make all those network requests.

Part way through development in that direction, I came across a few articles on how useful SQLite can be: how it isn’t a toy database like some people might think, and how, used right, it can be much faster and more efficient than other database options, since it’s an in-process database and doesn’t involve any network calls. It still seems like most developers consider it completely unsuitable for a web app, so maybe it’s a controversial choice for something like this (keeping in mind my scalability and speed goals).

But I decided to try it anyway. I completely changed the database layer to try using individual SQLite and DuckDB databases/files for each one of the projects that need to store things. Now (so far) analytics will store its own data using DuckDB, and the api, backend, bot, and reader projects will each use SQLite databases. It’s working pretty well in my testing so far, but I haven’t run any benchmarks or done any stress testing yet.

Because of the database change, the db project went from a migration manager to just an import app: it moves data from the Go version’s database into each of the SQLite/DuckDB stores (each project manages its own migrations now). The import process will only need to happen once in production, so it’s mostly just useful during development.

The project uses clap for command line argument parsing, colored to make things look nice, and the basic postgres crate for the import process.
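
To give a rough idea of the shape of the tool, a clap + colored CLI looks something like this (the command and flag names here are made up for illustration, not the real app’s):

use clap::{Parser, Subcommand};
use colored::Colorize;

#[derive(Parser)]
#[command(name = "db", about = "Import data into the SQLite/DuckDB stores")]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Import one project's data from the old Postgres database
    Import {
        /// Postgres connection string to import from
        #[arg(long)]
        source: String,
        /// Path to the target SQLite/DuckDB file
        #[arg(long)]
        target: String,
    },
}

fn main() {
    match Cli::parse().command {
        Command::Import { source, target } => {
            println!("{} {} -> {}", "importing".green().bold(), source, target);
            // connect with the postgres crate and copy rows here...
        }
    }
}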

When it comes to the actual database schema, the first set of changes (before I decided to go with SQLite/DuckDB) changed some things from the old database: using UUIDs instead of the serial (integer) type for primary keys (which still seems to be a controversial choice, but I wanted to try it out), improving indexes (including adding full text search), improving how authentication works, making more columns NOT NULL, and stuff like that.

Once I decided to use SQLite and DuckDB, the entire schema changed to take advantage of the strengths of those projects. I laughed at the fact that redesigning back to integer primary keys instead of UUIDs was one of the things I had to do. But I’ve also had to do a lot of rethinking of how I do queries for things like comments and tags since the performance implications are completely different.

Some other things I had to figure out:

  • How to effectively use a very synchronous library like rusqlite (a SQLite database is a file, so most of the connection logic is basic I/O, which is generally sync) from a very async context (everything about Actix involves using the Tokio async runtime).
  • Best performance practices, including making a custom ConnectionManager for the bb8 connection pool library so I can set the right pragma statements on each connection. (Note: There’s already a rusqlite bb8 connection manager out there, but it doesn’t allow for automatically running a command on each connection like I want, so I created my own version with that functionality. There’s a sketch of the idea below.)
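
Roughly, the idea behind that custom manager looks like this (a simplified sketch; the type name is mine and the real version does more):

use async_trait::async_trait;
use bb8::ManageConnection;
use rusqlite::Connection;

// Opens the SQLite file and runs the pragmas I want on every new connection
pub struct PragmaConnectionManager {
    path: String,
}

#[async_trait]
impl ManageConnection for PragmaConnectionManager {
    type Connection = Connection;
    type Error = rusqlite::Error;

    async fn connect(&self) -> Result<Self::Connection, Self::Error> {
        // Opening is quick; heavier blocking work should go through spawn_blocking
        let conn = Connection::open(&self.path)?;
        // Settings that generally help SQLite under web traffic
        conn.execute_batch(
            "PRAGMA journal_mode = WAL;
             PRAGMA synchronous = NORMAL;
             PRAGMA foreign_keys = ON;",
        )?;
        Ok(conn)
    }

    async fn is_valid(&self, conn: &mut Self::Connection) -> Result<(), Self::Error> {
        conn.query_row("SELECT 1", [], |_| Ok(()))
    }

    fn has_broken(&self, _conn: &mut Self::Connection) -> bool {
        false
    }
}

Getting a connection out of the pool is async, but the queries themselves still block, so the actual query work has to stay short or get pushed onto something like Tokio’s spawn_blocking.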

Once I got far enough with the db project, I started working on the backend project.

The Backend Project

From the start, I knew I would have at least three projects in this workspace that need the same type of core data from the database (and external APIs): frontend-server, api, and bot. Those three projects use gRPC to get that data from the backend project.

The Go version of the site doesn’t use any frontend frameworks at all (and therefore never really needed a separate API portion). This time around, I decided to use a frontend app in some places, mostly for admin pages and a few of the more complicated features. The rest of the site will still be regular server-rendered HTML pages. That meant spending more time on a full API, and it means the frontend-server project, the api project (for the frontend app and any other API access I’ll provide), and the DwayneBot project all need access to the same data.

This project provides that data. It’s a gRPC server that uses its own SQLite database to store everything. It also handles things like caching, data pagination, external requests and API keys, etc.
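
To give a feel for the shape of it, a Tonic handler in a server like this looks roughly like the following (the service and message names here are hypothetical stand-ins for code generated by tonic-build from the workspace’s proto folder):

use tonic::{Request, Response, Status};

// Hypothetical generated code from a backend.proto package
pub mod proto {
    tonic::include_proto!("backend");
}

use proto::backend_server::Backend;
use proto::{GetPostRequest, GetPostResponse};

pub struct BackendService {
    // connection pool, caches, etc. live here
}

#[tonic::async_trait]
impl Backend for BackendService {
    async fn get_post(
        &self,
        request: Request<GetPostRequest>,
    ) -> Result<Response<GetPostResponse>, Status> {
        let id = request.into_inner().id;
        // Check the cache, then query the backend's own SQLite
        // database, then map the row into the proto response type
        Err(Status::unimplemented(format!("lookup for post {} goes here", id)))
    }
}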

Once I figured that part out, I moved on to adding stuff to the shared library.

Lib

I created a library as part of the Rust workspace for shared code. For some reason I thought using a shared library would be complicated when I started implementing it, but it’s not. Just specify where it is in your Cargo.toml:

[dependencies]
dwayne-lib = { path = "../dwayne-lib", features = [] }

And then use it:

use dwayne_lib::database::{DatabaseError, PaginationToken, SqlBuilder};
use dwayne_lib::rand::new_sid;
use dwayne_lib::text::combine_text;
use dwayne_lib::uuid::new_v4;

Once I decided to use a WebAssembly app on the frontend instead of a JavaScript frontend framework, I started looking into including the library on the frontend (in the browser) too. I started paying a lot more attention to how Rust handles specifying features for conditional compilation and optional dependencies. Since including all this code on the frontend increases the size of the assets the user has to download, having parts of the code (including their dependencies) that can be skipped when the library is included there helps keep the total size down. I’ll say more about that in the frontend app section.
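
The conditional compilation part ends up being pretty simple: modules in the library’s lib.rs get gated behind the features, and each project turns on only what it needs. Something like this (a sketch; the real gating is also per-function inside some modules):

// lib.rs: each module is only compiled when its feature is enabled
#[cfg(feature = "database")]
pub mod database;

#[cfg(feature = "random")]
pub mod rand;

#[cfg(feature = "server")]
pub mod server;

#[cfg(feature = "time")]
pub mod time;

pub mod text;

So the frontend-app build can ask for just the lighter features it uses and skip the heavy database and server dependencies entirely.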

Some of the stuff I have in the library so far:

  • database module: ConnectionPool stuff and some result pagination logic. It’s actually a pretty heavy module that depends on crates like tokio and rusqlite.
  • time module: Some time based logic that uses the chrono time and date library heavily. Chrono can be used with WebAssembly, but only with a specific feature enabled.
  • text module: Lots of text functions that are mostly used for DwayneBot, but will maybe be used on the frontend. Includes an optional feature (recognition) that tries to recognize certain things in a string of text, and a markdown feature that will convert markdown text to HTML (this can be used on the frontend too! there’s a sketch of it just after this list).
  • server module: Some shared Actix middleware and HttpRequest handler helpers. It depends on multiple Actix crates.
  • rand module: Some functions for randomness (again mostly for DwayneBot) and uuid generation.
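
For example, the markdown feature is basically a thin wrapper around pulldown-cmark, which compiles fine to wasm. A minimal sketch of what that function could look like (the real name and options probably differ):

#[cfg(feature = "markdown")]
pub fn markdown_to_html(input: &str) -> String {
    use pulldown_cmark::{html, Options, Parser};

    // Parse with all the extensions (tables, strikethrough, etc.) enabled
    let parser = Parser::new_ext(input, Options::all());
    let mut output = String::new();
    html::push_html(&mut output, parser);
    output
}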

My Cargo.toml file has the following feature section listing the optional dependencies for each:

[features]
database = [
    "dep:async-trait",
    "dep:base64",
    "dep:bb8",
    "dep:bincode",
    "dep:prost",
    "dep:prost-types",
    "dep:rusqlite",
    "dep:sql-builder",
    "dep:thiserror",
    "dep:tokio",
]
markdown = [
    "dep:pulldown-cmark",
]
random = [
    "dep:rand",
    "dep:uuid",
]
recognition = [
    "dep:lazy_static",
    "dep:regex",
    "time",
]
server = [
    "dep:actix-service",
    "dep:actix-utils",
    "dep:actix-web",
    "dep:futures",
    "dep:futures-core",
    "dep:futures-util",
    "dep:validator",
]
time = [
    "dep:chrono",
    "dep:prost-types",
]


frontend-server and api

These are the two projects in the workspace that get put behind nginx (I haven’t deployed these yet, but I’m guessing I’ll put them on separate cheap DigitalOcean droplets). They both use the backend project at runtime via gRPC for the core web data (users, posts, tags, etc). The api project uses its own SQLite database for API client information, rate limiting, statistics, etc.

frontend-server will serve everything you see on the website, and on some pages will include a frontend app that makes calls to the api.

These projects were a little more difficult than expected, mostly because of the Actix web framework. Overall it’s a good one, but it definitely feels a little over-complicated, especially when it comes to writing and maintaining middleware.

Actix comes with its own set of middleware structs, and the source for any one of them shows how much code is involved (I wanted to paste an example here, but it’s way too long). I also followed a guide to help me write my own authentication middleware.
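
To show what I mean about the complication: every hand-written middleware needs a pair of structs like this skeleton (this is just the standard Transform/Service pattern from the Actix docs, trimmed down, not my actual auth code):

use std::future::{ready, Ready};

use actix_web::dev::{forward_ready, Service, ServiceRequest, ServiceResponse, Transform};
use actix_web::Error;
use futures_util::future::LocalBoxFuture;

// The "factory" struct you pass to App::wrap()
pub struct Auth;

impl<S, B> Transform<S, ServiceRequest> for Auth
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
    S::Future: 'static,
    B: 'static,
{
    type Response = ServiceResponse<B>;
    type Error = Error;
    type InitError = ();
    type Transform = AuthMiddleware<S>;
    type Future = Ready<Result<Self::Transform, Self::InitError>>;

    fn new_transform(&self, service: S) -> Self::Future {
        ready(Ok(AuthMiddleware { service }))
    }
}

// The struct that actually wraps each request
pub struct AuthMiddleware<S> {
    service: S,
}

impl<S, B> Service<ServiceRequest> for AuthMiddleware<S>
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
    S::Future: 'static,
    B: 'static,
{
    type Response = ServiceResponse<B>;
    type Error = Error;
    type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;

    forward_ready!(service);

    fn call(&self, req: ServiceRequest) -> Self::Future {
        // This is where a session cookie or Authorization header
        // would get checked before the request reaches the handler
        let fut = self.service.call(req);
        Box::pin(async move { fut.await })
    }
}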

I wrote authentication and authorization middleware for both API clients and user accounts in the api project (4 structs), and authentication and authorization middleware for users in the frontend-server project (another 2 structs). I tried making them generic enough to put into the lib project so they can be shared, but it got a lot more complicated than I wanted it to be, so right now some of the code is duplicated instead.

The DwayneBot Project

The bot project has been really interesting and very difficult to write so far. In the Go app, it was just part of the one web app. When the server starts, it also starts a “bot loop” which checks for requests to respond to.

The new Rust version is a gRPC server that runs a similar loop. It waits for either streaming requests (from a chatroom through the bot-communicator project) or sync requests (from the api project). It talks to the backend project for some of the data it needs, and uses its own SQLite database for all the conversational stuff it has to remember.

I think the hardest part of this project was implementing the queues of requests DwayneBot responds to. Because the bot can connect to chat rooms, one of its features is to store the last few things that were asked and respond to them all at once on an interval (right now, every two seconds). So when it connects to a room, a new queue of requests is created in memory. When the bot checks the queues, it has to respond to the right room through the right gRPC connection that was created.
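
A rough sketch of the in-memory queue shape (the type and field names are mine, and QueryRequest/QueryResponse stand in for the generated proto types):

use std::collections::HashMap;
use tokio::sync::mpsc::Sender;

// Stand-ins for the generated proto types
pub struct QueryRequest;
#[derive(Clone)]
pub struct QueryResponse;

// Stand-in for the bot's actual answer logic
fn answer(_request: QueryRequest) -> QueryResponse {
    QueryResponse
}

#[derive(Default)]
pub struct RoomQueue {
    // The last few requests, drained every two seconds
    requests: Vec<QueryRequest>,
    // One tx per open stream into this room
    channels: Vec<Sender<QueryResponse>>,
}

// The server holds this behind a tokio::sync::RwLock
#[derive(Default)]
pub struct Queues {
    rooms: HashMap<String, RoomQueue>,
}

impl Queues {
    pub async fn add_channel(&mut self, room_id: &str, tx: Sender<QueryResponse>) {
        self.rooms.entry(room_id.to_string()).or_default().channels.push(tx);
    }

    // Called on the two-second tick: answer everything queued per room
    // and push the responses out through every channel for that room
    pub async fn drain(&mut self) {
        for queue in self.rooms.values_mut() {
            for request in queue.requests.drain(..) {
                let response = answer(request);
                for tx in &queue.channels {
                    let _ = tx.send(response.clone()).await;
                }
            }
        }
    }
}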

The gRPC handler that streams responses from the bot to the IRC/Matrix room looks like this:

async fn async_query(
    &self,
    request: Request<AsyncQueryRequest>,
) -> Result<Response<Self::AsyncQueryStream>, Status> {
    // Create a channel for Bot responses
    let (tx, rx) = tokio::sync::mpsc::channel(100);
    let request = request.into_inner();

    // Lock the list of queues so we can add tx
    // to the queue for this room (request.room_id)
    let mut queues = self.queues.write().await;
    queues.add_channel(&request.room_id, tx).await;

    // Define a ResponseStream to be returned to the client
    struct ResponseStream(Receiver<QueryResponse>);

    impl Stream for ResponseStream {
        type Item = Result<AsyncQueryResponse, Status>;

        // The Tokio runtime uses polling for async
        // functionality. On poll, just check the underlying
        // rx's poll status and return a response if one
        // exists
        fn poll_next(
            mut self: Pin<&mut Self>,
            cx: &mut Context<'_>,
        ) -> Poll<Option<Self::Item>> {
            match self.0.poll_recv(cx) {
                Poll::Pending => Poll::Pending,
                Poll::Ready(Some(response)) => Poll::Ready(Some(Ok(AsyncQueryResponse {
                    responses: vec![response],
                }))),
                Poll::Ready(None) => Poll::Ready(None),
            }
        }
    }

    // Create a new ResponseStream and pass the rx part of
    // the channel to it, then return the stream to the client
    Ok(Response::new(
        Box::pin(ResponseStream(rx)) as Self::AsyncQueryStream
    ))
}

It accepts a gRPC request from a room and creates a Tokio channel that will send the bot’s responses between threads. It adds the tx (transmit) half of the channel to the room’s queue, and uses the rx (receive) half to create an async streaming gRPC response.

This definitely took me a minute to put together.

WebAssembly Frontend

WebAssembly (wasm) is a technology that lets you compile code written in other languages to a binary format that runs in the browser. When I first started the frontend app (when I first started this Rust rewrite), I had heard about wasm, but not enough to consider using it. I actually started with React, just because I didn’t know which framework I wanted to use yet, so I figured I would start testing the API with something I already knew.

When I came across wasm again recently, I was actually surprised to hear how good the browser support is now. I guess I assumed it would take forever for Safari to support it properly, but it turns out it works very well in all major browsers these days.

The idea of wasm is that any language toolchain could add support for it as a compilation target, so theoretically it can work with any language. But Rust seems to have some of the best support for it right now. With that in mind, I decided it made sense to give it a try.

I looked up Rust/wasm web frameworks and came across Yew, which seems to be the most popular. Interestingly, Yew was inspired by React, so it has a similar development API. It even has its own version of function components and hooks, so I mostly kept the same techniques I was using from the React version of the frontend when I switched. (Note: I’ve used React/Redux and Hooks a lot at a few different jobs I’ve had. I’m comfortable with them, but I don’t really love the direction Facebook has been taking them [in fact, I don’t like that they’re from Facebook in the first place] and would rather go with less convoluted ways of building frontend apps. I might eventually switch from Yew, but this works for now.)

So I looked at Yew’s getting started guide and started using trunk like they suggested. It was actually really easy to get set up. I had a new wasm frontend app up and running pretty quickly, and it immediately worked well in all the browsers I tested. I do find that translating some Rust mechanics into this frontend environment and having to do things like cloning objects constantly in the components is pretty goofy:

let onsubmit = {
    let loading = loading.clone();
    let email = email.clone();
    let password = password.clone();
    let email_error = email_error.clone();
    let password_error = password_error.clone();

    Callback::from(move |e: FocusEvent| {
        e.prevent_default();

        let toast_agent = toast_agent.clone();
        let auth_agent = auth_agent.clone();
        let loading = loading.clone();

        let email = email.clone();
        let password = password.clone();
        let email_error = email_error.clone();
        let password_error = password_error.clone();

        wasm_bindgen_futures::spawn_local(async move {
            let credentials = Credentials {
                email: (*email).clone(),
                password: (*password).clone(),
            };

            // A bunch of code here that uses toast_agent,
            // auth_agent, loading, etc...
        });
    })
};

All of those cloned objects are either UseStateHandle or UseBridgeHandle objects that come from Yew’s use_state and use_bridge React-like hooks, and they’re meant to be cheaply cloned when needed (the internal structures are reference counted pointers). The cloning is needed because the actual variables are “moved” into those scopes (notice the move keyword on both the Callback closure and the async block), so they can’t be referenced afterwards. They’re both eventually deref’ed to get the underlying string values, and then those string values are… cloned again. 😐

But overall I’ve been enjoying using Yew so far.

One thing I’ve been focusing on lately is controlling the wasm output size. When you build a wasm app using trunk, you end up with a .wasm output file, an HTML file, and a small JavaScript file that loads the .wasm one. Depending on how you build and what you link into your project, that .wasm file can get pretty big.

The Yew docs mention some release mode configuration changes and a command line tool called wasm-opt that shrinks the output further. It’s been very interesting playing around with the results, both with and without a function from the library included, and comparing them to the JavaScript output size from the React version before the wasm switch. Some notes:

  • The React and wasm apps being compared have almost the same set of pages/functionality. The wasm app has a few more pages in it.
  • Both the lib and no-lib versions of the wasm app actually have the lib project linked in. So far, I’m only using one function from the library. The “no-lib” version of the project just has that one function call commented out. The compiler is smart enough to leave the whole library out of the output when that happens.
  • The wasm binary format was meant to be very compression friendly, and that definitely looks true (though apparently not quite as compression friendly as JavaScript).
  • The compressed output comes from the Actix Compress middleware. It looks at the standard Accept-Encoding request header and automatically compresses the response (there’s a snippet just after these notes).
  • When the wasm apps are run in “Debug” mode, they include the debug symbols that are normally included with Rust debug builds. Since the React build is just JavaScript being converted to more JavaScript, there aren’t debug and release builds to compare.
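
Enabling that compression on the server side is just a matter of wrapping the app with the standard middleware, shown here on a minimal Actix server:

use actix_web::{middleware, web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Negotiates based on the request's Accept-Encoding header
            .wrap(middleware::Compress::default())
            .route("/", web::get().to(|| async { HttpResponse::Ok().body("hello") }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}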

The sizes:

                    Debug   Release   wasm-opt   Compressed
WASM App (No-lib)   5.7M    1.2M      1.0M       318.2 KB (3.24x)
WASM App (Lib)      6.6M    1.4M      1.2M       407.7 KB (3.02x)
React App                   1.1M                 281.8 KB (4.22x)

I’ll be messing around with the release configuration and wasm-opt settings a lot more as I continue development. I’m guessing these sizes will get a lot bigger before I’m done building out the rest of the frontend app, and that definitely hurts my efficiency goal, so I’ll absolutely be keeping an eye on it.
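
For reference, the size-focused release settings along the lines of what the Yew docs suggest look something like this in Cargo.toml (I’m still experimenting, so treat this as a starting point), with wasm-opt run over the resulting .wasm file afterwards:

[profile.release]
# Optimize for size ('z' is smaller than 's')
opt-level = 'z'
# Let LLVM optimize across all crates at link time
lto = true
# One codegen unit gives the optimizer the whole picture
codegen-units = 1
panic = 'abort'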

Next Steps

This is a huge project and I’ve gotten a lot done so far, but there’s still a long way to go. I still have to:

Finish all of the CRUD functionality for the core backend data: The database schema for this website is actually pretty big, so I have more entities to write queries and create gRPC handlers for (and in most cases also make API endpoints for).

Do some stress testing and benchmarking: One of the obvious questions in doing all of this is whether the Rust version of the site is actually any faster or more scalable than the Go version. It’s a hard question, since I’m doing a lot of things very differently in the new version. Either way, I really need to stress test all the parts of the system and benchmark pretty extensively to understand if I’m doing things as efficiently as I think I am. It would be pretty shitty if, after all of this hard Rust development, I ended up with code that doesn’t perform anywhere near as well as the Go version.

Start the analytics and reader projects: I wrote the database schemas and proto (gRPC) files for both, but I haven’t started on the code. I’m really hoping these don’t take me longer than I think they will.

Come up with a brand new frontend design: I have a design I’ve been using so far to test functionality, but I’ve been putting a lot of thought into some kind of design framework for all the many parts of this thing. The plain HTML pages and the wasm app both use the same .css file and the same styles and components, so I’m building a design framework that works pretty generally and is completely responsive to things like screen size and light/dark mode settings and all that.

Figure out deployment processes: Deploying the Go app to production was pretty easy. I just rsync’d the source files to the server, did a quick build, moved the executable to the right place, and restarted nginx. Now I have a much bigger and longer build process, including some extra steps like transforming wasm output. I have to come up with some good build scripts to handle that process.


I’ll be working on these things over the next few weeks, but I really hope I manage to write more about individual parts of the project as I finish things up. I will be publishing the code at some point before launch too, so keep an eye out if you’re interested in a project that includes Rust code all the way through the stack, from the database layer, through the API, up to Rust/WebAssembly in the browser.
