Ephemeral Sandboxes and Trusted Authentication: Rethinking Application Security

It's amazing how sometimes, when you just stop, take a step back, and do some thinking (or in some cases stop thinking altogether), you can stumble onto an epiphany about whatever you're creating.
Some time ago I started working on an experimental project I refer to as a file-based operating system (F-BOS), or a file-based instruction string system. The truth is, neither term really explains what I set out to build, but I'll attempt to explain it in a moment.
Shortly after I started coding this project—let's continue to refer to it as F-BOS until I come up with something better—I hit a wall. Not the kind of wall you hit because of technological limitations, but something more like writer's block.
I knew what I wanted to build and how I wanted it to function, but I honestly didn't know the best approach to take. I had some experimental code written in C++ that formed a basic prototype. It compiled without errors, and then... I stopped.
I stopped because at the time I thought it was going nowhere and wasn't producing the results I envisioned. I set the project aside with no real plan of when or if I would return to it.
Many software developers and engineers do the same thing—create code samples and prototypes, only to cast them aside for another day. Sometimes things work, and sometimes they don't. That's the nature of coding.
Recently, while looking for something unrelated in an old backup directory on my server, I found a directory called 1V—the code name I used for the project since I never had an actual name for F-BOS.
Curiosity got the better of me, as it always does, so I gave the source code the batcat treatment. Much to my surprise, I had written some really good code that not only had real potential to achieve what I had set out to do, but was at least 50% completed.
Some quick tinkering with the code and a few more compiles with g++ showed it wasn't anywhere near as bad as I had originally thought. Why I thought it was bad at the time, prompting me to put the project on hiatus, I'll never know. But I have now revisited F-BOS and resumed working on the code.
There's a strange form of irony in my resurfacing this old project. I have also recently been working on a new form of application launcher that functions via scripts.
On the surface, it looks and functions like any other application launcher. But under the hood, applications and all launcher entries are executed via scripts, with support for Bash, Python, and Perl. The flexibility is effectively unlimited, since any language could be added at any time, but I wanted to stick to those three for simplicity and their reputation for robustness.
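To make that concrete, here is a minimal sketch of how a script-backed launcher entry could be dispatched. This is not the actual launcher code; the entry path, the interpreter table, and the `launch_entry` helper are all hypothetical, assuming entries are plain scripts identified by their file extension.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a script-backed launcher entry dispatcher.

Nothing here reflects the real launcher's code; names and paths are
illustrative only.
"""
import subprocess
from pathlib import Path

# Assumed mapping of supported script types to the interpreters that run them.
INTERPRETERS = {
    ".sh": ["bash"],
    ".py": ["python3"],
    ".pl": ["perl"],
}


def launch_entry(script_path: str) -> int:
    """Run a launcher entry by handing its script to the matching interpreter."""
    path = Path(script_path)
    runner = INTERPRETERS.get(path.suffix)
    if runner is None:
        raise ValueError(f"Unsupported script type: {path.suffix}")
    # The entry itself is just a script; the launcher only dispatches it.
    return subprocess.run(runner + [str(path)], check=False).returncode


if __name__ == "__main__":
    # Example: a hypothetical entry that starts a text editor.
    launch_entry("/usr/local/share/launcher/entries/editor.sh")
```

In a scheme like this, the launcher itself stays tiny: it only decides which interpreter to hand a script to, and everything else lives in the scripts.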
I have always been a firm believer in the "encrypt everything" philosophy. At the very least, if encryption isn't possible or functional, privacy and security must be key considerations in any project I create.
How does this fit with an application launcher? Well, application launchers aren't exactly considered sexy—they're rather boring. Most users have probably never given any thought to the security implications of their launcher executing an application they never added or don't understand.
This got me thinking: what if an application launcher worked on the basis of scripts, pointing to specific locations that are secured by design?
No matter how you approach security while maintaining free and open-source principles, there are always vulnerabilities and possible entry points for nefarious code injection. There's a delicate trade-off between security and keeping things transparent and open.
I don't ignore the potential for someone to dive into the code and add nefarious entries linked to underlying scripts, modify existing scripts to launch malicious code without user knowledge, or execute malware alongside regular applications so users remain unaware. These are existential problems that need deep consideration.
I've been working on exactly this problem, with two possible implementation routes. The first approach sandboxes applications on launch locally and natively. When an application launches, a sandbox is immediately created, isolating the application until the user closes it.
For maximum security, sandboxes must not pre-exist, must be destroyed when applications close, and each launch must create a completely fresh, unique sandbox with built-in verification via an on-file authentication system.
This is where F-BOS comes in—it's essentially a file-based operating system that handles security and sandbox authentication, making it harder (though not impossible) for an attacker to fake or emulate a legitimate sandbox.
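For illustration only, here is a rough sketch of that lifecycle under some loud assumptions: the sandbox is reduced to a throwaway directory, and the "on-file authentication" is reduced to a single token file whose digest is checked before launch. None of these names or checks should be read as F-BOS's actual mechanism.

```python
#!/usr/bin/env python3
"""Minimal sketch of the ephemeral-sandbox lifecycle described above.

This is not F-BOS code; the token scheme and directory layout are
assumptions used purely for illustration.
"""
import hashlib
import secrets
import shutil
import subprocess
import tempfile
from pathlib import Path


def launch_in_ephemeral_sandbox(command: list[str]) -> int:
    """Create a fresh sandbox, verify its on-file token, run the app, destroy it."""
    sandbox = Path(tempfile.mkdtemp(prefix="sandbox-"))  # unique per launch
    try:
        # On-file authentication: write a one-time token and record its digest.
        # (In a real design, issuing and checking would be F-BOS's job, not the
        # launcher's.)
        token = secrets.token_hex(32)
        (sandbox / ".auth").write_text(token)
        expected = hashlib.sha256(token.encode()).hexdigest()

        # Verify the token before anything runs inside the sandbox.
        found = hashlib.sha256((sandbox / ".auth").read_text().encode()).hexdigest()
        if found != expected:
            raise RuntimeError("Sandbox authentication failed; refusing to launch.")

        # Run the application with the sandbox as its working directory.
        # (A real implementation would also isolate namespaces, mounts, etc.)
        return subprocess.run(command, cwd=sandbox, check=False).returncode
    finally:
        # The sandbox never outlives the application.
        shutil.rmtree(sandbox, ignore_errors=True)
```

The point of the sketch is the shape of the flow: nothing exists before launch, the authentication artifact lives in a file, and the sandbox is destroyed the moment the application exits.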
The second implementation builds on the first but resolves its security vulnerabilities. Instead of F-BOS running locally, it could run on an external server operated by a trusted party.
I know what you're thinking: "Who are these 'trusted' parties?" Put aside that skepticism for a moment.
The sandbox would still be built and trashed locally with each open/close cycle, but it would be hard-coded with authentication that pulls from a trusted server. If authentication isn't verified correctly or problems occur during application launch, the application simply fails to launch, prompting the user to investigate.
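Here is a sketch of what that remote check might look like, assuming a hypothetical HTTPS endpoint operated by the trusted party and a simple digest-based handshake; the endpoint URL, payload, and response format are invented for illustration.

```python
#!/usr/bin/env python3
"""Sketch of the second approach: the sandbox is still created and destroyed
locally, but launch only proceeds if a trusted remote service vouches for it.

The endpoint, payload, and response format are all hypothetical.
"""
import hashlib
import json
import urllib.request
from pathlib import Path

TRUSTED_ENDPOINT = "https://auth.example.org/verify"  # hypothetical trusted party


def remote_verify(script_path: str) -> bool:
    """Ask the trusted server whether this entry's script digest is known-good."""
    digest = hashlib.sha256(Path(script_path).read_bytes()).hexdigest()
    payload = json.dumps({"digest": digest}).encode()
    request = urllib.request.Request(
        TRUSTED_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.load(response).get("verified", False)
    except (OSError, ValueError):
        # Any network or response problem means the launch simply fails,
        # prompting the user to investigate rather than running unverified code.
        return False
```

Failing closed is the important design choice here: any error from the network or the server is treated the same as a failed verification, so nothing unverified ever launches.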
All of this might sound tedious, but it really isn't. I've built makeshift prototypes of everything described here, and even in their rough state they show that very little overhead is added to these processes. What overhead there is looks negligible next to the heightened security benefits.
As the digital world finds itself fully engaged in an information war, more focus needs to be placed on localized user security.