Have you or your team ever been in a situation where developer productivity and application speed were critical, but business demands were changing rapidly? My team, which works with the IBM Garage Methodology, found itself in this exact scenario when we were asked to create a proof-of-concept transpiler as a solution for our content files.
I had to find a technology platform that could deliver on all fronts and integrate well with our existing toolchain. This article explains how I decided to use Rust with WebAssembly, and why this combination is so powerful.
Note: This tool is still in development, and it currently implements only some of the business requirements.
The journey to Rust and WebAssembly
Let me first elaborate on the business requirements that I mentioned earlier. Our team is rewriting and restructuring our entire application, which uses a DSL for all of its content. These files contain thousands of lines and sometimes multiple templates.
For the rewrite to succeed, we had to automate transferring the content to the new system. It is important to remember that hundreds of our content authors are already familiar with the DSL, and retraining them all would be costly. Faced with these requirements, we had two choices:
Build a command-line tool to mass migrate the existing content once, and teach content writers the new system.
Build a command-line tool that could perform a mass migration or run on locally modified files for incremental builds.
We needed maximum flexibility for our team, and retraining all of our content writers wasn't feasible, so we chose the second option: a tool that could be run once or as part of incremental builds.
Performance considerations
Our first consideration was speed. If we expect this to run every time someone commits changes, the transpiling process must be quick, completing within seconds. One of our scenarios was to expose the tool as an API that transforms data and returns the result to the client. This raises the performance stakes, as files can easily exceed a megabyte in size. In this scenario, the file would be sent, parsed, converted to the new format, compressed, and returned to the client, all within seconds. Speed was therefore an important consideration.
A second important consideration was the possibility of running the transpiler in the cloud. These files could be processed quickly in many languages on a local computer, but processing can take much longer once files must be transferred over the internet.
We also needed a language that integrated easily with our Node.js-based environment. Our team has come to appreciate the Node.js ecosystem. Our newly refactored application is a Vue client with an Express-based backend implemented as a REST API. Node's open-source community, tooling, and the ability to write the client and server in the same language (in our case, TypeScript) are considerable boons to our agile delivery schedule. While we knew we had the opportunity to preserve these benefits for our apps, we also knew that we would need to look elsewhere to achieve the required performance. It was therefore essential that whatever we chose integrate well with our Node apps.
Once we had a clear understanding of our business requirements, it was time to build some proofs of concept.
Scala: First proof of concept
I started by looking for a language with the following properties:
Support for functional programming
Well-established, with large businesses using it in production
Demonstrable ability to use the language for parsing
Interface with JavaScript and Node.js
I was initially drawn to Scala because it met all of my requirements: it is a well-established language that runs on the Java Virtual Machine (JVM), it has excellent functional programming support, there is documentation showing that it works well for parsing and compilation, and Scala.js allows it to interface with JavaScript and Node.js.
After initially working on the parsing module in Scala, I made some progress. I was even able to parse the basic rules of our DSL (which is based on Handlebars and Markdown). However, as I continued working on the architecture, I discovered many issues.
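To make the parsing problem concrete, a first step in handling a Handlebars/Markdown-style DSL is tokenizing the input into plain text and `{{expression}}` tokens. The sketch below is a hypothetical illustration under that assumption; our actual DSL grammar is richer than this.

```typescript
// A token is either literal text or a Handlebars-style expression.
type Token =
  | { kind: "text"; value: string }
  | { kind: "expression"; value: string };

// Minimal tokenizer: splits the source into text chunks and
// {{ ... }} expressions, trimming whitespace inside the braces.
function tokenize(source: string): Token[] {
  const tokens: Token[] = [];
  const pattern = /\{\{\s*([^}]+?)\s*\}\}/g;
  let last = 0;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(source)) !== null) {
    if (match.index > last) {
      tokens.push({ kind: "text", value: source.slice(last, match.index) });
    }
    tokens.push({ kind: "expression", value: match[1] });
    last = match.index + match[0].length;
  }
  if (last < source.length) {
    tokens.push({ kind: "text", value: source.slice(last) });
  }
  return tokens;
}
```

A real transpiler would feed these tokens into a proper parser that also understands Markdown structure, which is where the architectural complexity we ran into begins.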
Scala development was also slow. The test-driven development process was hampered considerably because tests took as long as 5 minutes to run each time. Performance fell short as well: basic parsing tests took between 500-1000 ms and 300-450 ms in the development and production configurations, respectively.
Note: Although I'm not a JVM developer by trade and better performance can likely be achieved, we wanted something efficient by default, without having to hack around in Scala and the JVM unless it was necessary.