Using Rust and WebAssembly: performing heavy calculations on the client side
We are witnessing the emergence of supercomputers and data centers capable of handling heavy calculations. With the rise of applications that demand serious computing power, such as artificial intelligence and machine learning, the need for efficient computing solutions is more pressing than ever. Rust has become one of my favorite languages since reading "The Rust Programming Language," and I am excited to explore its full potential.
I believe we can harness the power of Rust and WebAssembly to perform heavy calculations on the client side. By compiling Rust to WebAssembly and running it inside a Web Worker, we can get far more processing power out of the browser than a traditional JavaScript-based solution.
To demonstrate this, I plan to build a web application that searches for prime numbers. The primality checks themselves run on the client side; the server only manages connections, hands out candidate numbers, and collects the results each client reports back.
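The heavy lifting each browser will do is a primality check. As a rough idea of that workload, here is a minimal trial-division sketch; the actual check used in the project may differ.

// Minimal trial-division primality test: the kind of CPU-bound work each
// browser will run. This is only an illustrative sketch.
fn is_prime(n: u64) -> bool {
    if n < 2 {
        return false;
    }
    let mut d = 2u64;
    while d * d <= n {
        if n % d == 0 {
            return false;
        }
        d += 1;
    }
    true
}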
To build this application, I will create a back end in Rust that manages each client socket, passing the socket to whichever function needs it. A Tokio runtime spawns a task for every client that connects to the server.
fn handle_server(addr: String, sender: Sender<Event>) {
    // Run the accept loop on its own Tokio runtime.
    Runtime::new().expect("Could not start a new runtime").block_on(async {
        let listener = TcpListener::bind(addr.to_string()).await.expect("Failed to bind socket");
        let mut id_manager = Counter::new(0);
        loop {
            match listener.accept().await {
                Ok((stream, socket_addr)) => {
                    // Spawn a dedicated task per client; each one gets its own clone of the event sender.
                    tokio::spawn(handle_new_client(Connection::new(
                        id_manager.next(),
                        sender.clone(),
                        stream,
                        socket_addr,
                    )));
                }
                Err(e) => {
                    eprintln!("Failed to accept a connection from a remote client, error: {}", e);
                }
            }
        }
    });
}
Each client-handling task is given a Sender<Event>, the sending half of a channel that lets the task communicate with the main thread.
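The Event type itself does not appear in this post. Judging from the variants matched in the main loop further down, a minimal reconstruction of it and of the channel wiring could look like the following; the exact definitions (including the Connection and CloseFrame types) live in the repository, and the address shown here is a placeholder.

use std::sync::mpsc::{channel, Sender};
use std::thread;

// Hypothetical sketch: the variants mirror those matched in the main loop below.
// Connection and CloseFrame come from the project and its WebSocket library.
#[derive(Debug)]
enum Event {
    ConnectionOpened(Connection),
    MessageReceived(Connection, String),
    ConnectionClosed(Connection),
    CloseReceived(Connection, CloseFrame),
}

fn main() {
    // The main thread keeps the receiving half; the server thread gets the Sender.
    let (sender, receiver) = channel::<Event>();
    thread::spawn(move || handle_server("0.0.0.0:8080".to_string(), sender));
    // ... the event loop shown later consumes `receiver` ...
}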
async fn handle_new_client(connection: ImmatureConnection) {
    // Complete the handshake, then notify the main thread that the connection is open.
    connection
        .accept_handshake().await
        .send_connection_opened_notification();
}
Once the handshake is complete, the Connection object has a method that sends a connection_opened notification using the channel.
Here's where it gets interesting: the channel sends the Connection object itself to the main thread, wrapped in a notification.
fn send_connection_opened_notification(self) {
    // Consumes the connection and hands it to the main thread inside the event.
    self.event_sender.clone().send(Event::ConnectionOpened(self)).expect("Failed to send `Connection Opened` event");
}
And all of those notifications are handled in the main thread:
loop {
    // Block until the server task sends the next event over the channel.
    let event = match sm.poll_event() {
        Ok(event) => { event }
        Err(e) => {
            eprintln!("An unexpected error happened when trying to poll an event: {}", e);
            exit(1);
        }
    };
    match event {
        Event::ConnectionOpened(connection) => {
            // Hand the new worker its first candidate number and refill the job queue.
            let next = job.pop_front().unwrap();
            id_worker_number_map.entry(connection.get_id()).or_insert(next);
            connection.send_message(next.to_string());
            job.push_back(number_manager.next())
        }
        Event::MessageReceived(connection, message) => {
            // The worker replies "true" if the number it was given is prime.
            let value = id_worker_number_map.get_mut(&connection.get_id()).expect("An unexpected error happened");
            if message == "true" {
                //println!("worker n°{} has found a prime number : {}", connection.get_id(), value);
                *last_prime.lock().unwrap() = *value
            }
            // Send the worker its next candidate and keep the queue topped up.
            let next = job.pop_front().unwrap();
            connection.send_message(next.to_string());
            *value = next;
            job.push_back(number_manager.next());
        }
        Event::ConnectionClosed(connection) => {
            // The worker left without answering: its number goes back into the queue.
            let value = id_worker_number_map.get(&connection.get_id()).expect("An unexpected error happened");
            println!("Worker n°{} has closed the connection without finishing its job, its number {} goes back to the queue", connection.get_id(), value);
            job.push_back(*value);
            id_worker_number_map.remove(&connection.get_id()).expect("This worker should have been in the map");
        }
        Event::CloseReceived(connection, _close_frame) => {
            // Same treatment when the worker sends a close frame.
            let value = id_worker_number_map.get(&connection.get_id()).expect("An unexpected error happened");
            println!("Worker n°{} has closed the connection without finishing its job, its number {} goes back to the queue", connection.get_id(), value);
            job.push_back(*value);
            id_worker_number_map.remove(&connection.get_id()).expect("This worker should have been in the map");
        }
        e => println!("{:?}", e),
    }
}
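One detail the loop glosses over: sm is not defined in this post. Assuming it is a small manager struct that simply wraps the receiving half of the event channel, poll_event could be as thin as this (a hypothetical sketch, not the repository's actual code):

use std::sync::mpsc::{Receiver, RecvError};

// Hypothetical: `sm` as a thin wrapper around the channel's receiving half.
struct StateManager {
    receiver: Receiver<Event>,
}

impl StateManager {
    // Blocks until the server task sends the next event.
    fn poll_event(&self) -> Result<Event, RecvError> {
        self.receiver.recv()
    }
}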
Now that the back end is ready, I will build the front end with Yew, which makes it easy to compile Rust into a WebAssembly package that runs in the browser. To implement the Web Worker, I will use Gloo, which is relatively easy to work with.
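For readers who have not used Yew before, the entry point of such a front end looks roughly like this (a minimal sketch assuming Yew 0.20 or newer; the real application also spawns the worker and displays the results):

use yew::prelude::*;

// Minimal illustrative component; the real UI lives in the repository.
#[function_component(App)]
fn app() -> Html {
    html! { <h1>{ "Distributed prime search" }</h1> }
}

fn main() {
    // Mounts the component; a bundler such as trunk compiles it to WebAssembly.
    yew::Renderer::<App>::new().render();
}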
After some work, I have created an agent that spawns a Web Worker; the worker connects to the back end, gets a number, and replies with either true (prime) or false (not prime). This implementation is not optimized (it would be better to send each client a range of numbers and have it return the primes it finds in that range), but it is enough to showcase the architecture as a proof of concept.
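In outline, the worker opens a WebSocket to the back end, reads each number it is given, tests it, and replies with "true" or "false". A sketch of that inner loop, assuming gloo-net for the socket and reusing the is_prime function from earlier (the URL is a placeholder, and the repository wraps this logic in Gloo's worker API rather than a bare loop):

use futures::{SinkExt, StreamExt};
use gloo_net::websocket::{futures::WebSocket, Message};

// Sketch of the worker's message loop; the endpoint URL is a placeholder.
async fn run_worker() {
    let mut ws = WebSocket::open("wss://example.invalid/ws").expect("Failed to open the WebSocket");
    while let Some(Ok(Message::Text(text))) = ws.next().await {
        // The back end sends one candidate number per message.
        let Ok(n) = text.parse::<u64>() else { continue };
        // Reply with "true" or "false", which is what the back end matches on.
        let reply = Message::Text(is_prime(n).to_string());
        ws.send(reply).await.expect("Failed to send the reply");
    }
}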
I used Traefik to handle the TLS certificate and route traffic to the application, and now it is ready to go! Check it out at https://distributed-computing.mathias-vandaele.dev/
If you're interested, the entire codebase is available at https://github.com/mathias-vandaele/project-distributed-calculus. Thank you for reading!