Dev's Journal 2
Welcome back to the Dev's Journal. I'm going to try to make this a regular Thursday publication. This past week I mostly worked on project clean-up, bug fixes, formal verification research, and math. It was pretty fun.
Building with Solmate Auth + Roles Authority
I've been implementing access control on a project using Solmate RolesAuthority and the design is elegant and simple. Also, since it's Solmate, you know transmissions11 has siphoned out as much gas as possible to make it a lightweight addition to your system.
The authority allows setting function-level permissions (by selector) on a per-contract basis. This means you can deploy one RolesAuthority and use it to define roles and permissions for multiple contracts, unlike other access control frameworks where you inherit some functionality that only applies to that contract. A RolesAuthority contract can have up to 256 Roles, denoted by a uint8 value (0 to 255). Capabilities, i.e. permission to call a specific function (identified by its selector) on a specific contract, are granted to a Role. Addresses can then be assigned a Role, giving them the permissions granted to that Role by its Capabilities.
By default, it has an owner which can assign capabilities to roles and roles to addresses. If desired, you can provide another top-level authority to control granting role permissions (nesting authorities). If you'd like to set access control and then make it immutable, you can set the owner to the zero address. Permissions will not be able to be changed at that point.
To use the access control on a contract, you simply inherit the Auth base and set the Authority on construction. Each contract allows its owner to swap out the Authority it uses, but you can set the owner to the zero address to prevent this from happening. Functions on a contract can be gated by adding the requiresAuth modifier. If you'd rather use an internal function to save bytecode space, you can call isAuthorized(msg.sender, msg.sig) at the top of your function. msg.sig returns the 4-byte function selector from the calldata and is how the system checks whether the caller's Role has the Capability to call the function.
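As a minimal sketch of both gating styles (the contract and function names here are hypothetical, not from a real system), inheriting Solmate's Auth looks like this:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Auth, Authority} from "solmate/auth/Auth.sol";

// Hypothetical contract demonstrating Solmate Auth gating.
contract Vault is Auth {
    bool public active;

    // Owner and authority are set once at deployment.
    constructor(address owner_, Authority authority_) Auth(owner_, authority_) {}

    // Style 1: the requiresAuth modifier checks the caller's permissions.
    function toggleActive() external requiresAuth {
        active = !active;
    }

    // Style 2: an explicit internal check, which avoids duplicating the
    // modifier's bytecode across many functions.
    function setDefaults() external {
        require(isAuthorized(msg.sender, msg.sig), "UNAUTHORIZED");
        // ... update defaults ...
    }
}
```

Either style resolves to the same check: the call passes if the sender is the contract's owner or if the configured Authority grants the sender's Role the Capability for that selector.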
A useful pattern I came up with using forge script is to deploy a RolesAuthority with msg.sender as the owner, deploy the smart contracts in your system, create the roles you need with specific capabilities, assign them to the provided addresses, and then transfer ownership of the authority to one of the provided addresses. Here is an example:
```solidity
function deploy(address guardian_, address policy_) external {
    vm.startBroadcast();

    /// Create system roles authority
    authority = new RolesAuthority(msg.sender, Authority(address(0)));

    /// Deploy system in this order: Contract1, Contract2
    contract1 = new Contract1(guardian_, authority);
    contract2 = new Contract2(guardian_, authority, contract1);

    /// Configure access control on Authority

    /// Role 0 - Guardian
    /// Contract 1
    authority.setRoleCapability(
        uint8(0),
        address(contract1),
        contract1.toggleActive.selector,
        true
    );

    /// Contract 2
    authority.setRoleCapability(
        uint8(0),
        address(contract2),
        contract2.initialize.selector,
        true
    );

    /// Role 1 - Policy
    /// Contract 1
    authority.setRoleCapability(
        uint8(1),
        address(contract1),
        contract1.setDefaults.selector,
        true
    );

    /// Assign roles to addresses
    authority.setUserRole(guardian_, uint8(0), true);
    authority.setUserRole(policy_, uint8(1), true);

    /// Transfer ownership of authority to guardian
    authority.setOwner(guardian_);

    vm.stopBroadcast();
}
```
Kelvin Fichter (aka smartcontracts) recently tweeted that it'd be nice to have a standard UI for interacting with these types of auth libraries to make it easier to manage permissions. To add on to this, I think it would also be useful to have a UI that shows a smart contract system's architecture and lists the addresses with various roles or permissions on each contract. That way, access control is more transparent to non-technical users. The double-edged sword of mutable access control is that it makes developers' lives easier while requiring the community to place more trust in the team.
Building Simulations in Solidity
A colleague and I implemented a simulation in hardhat to prove the existence of a bug in a contract we'd been reviewing. After proving it was there, we spent a bit of time re-working the contract logic to resolve it. The codebase and tests are written in foundry, which meant that a test needed to be written in Solidity to ensure the changes worked as intended (we currently don't have a fancy foundry + hardhat setup like Seaport).
Writing the test was challenging because the number of variables required in the simulation was high. This was not a problem in Javascript, but Solidity is a different story. I ended up fighting "stack too deep" errors for a while (read: way too long). While doing testing and scripting in forge has been great for avoiding context-switching penalties and dealing with value conversions, stack size is a downside for certain use cases. Well-written code should be able to deal with it most of the time, but it may take a bit to get there.
Tricks to remember to help with stack too deep:
- Pack local variables into a struct so that they can be offloaded from the stack to memory (pass this struct into the function or internal functions if possible)
- Factor code into smaller functions to limit the number of variables in each scope
- Use block scoping where able to reduce variables added to the stack in the top-level function context
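As a minimal sketch of the first and third tricks (the struct and function names are hypothetical), packing parameters into a memory struct and block-scoping intermediates keeps the stack shallow:

```solidity
// Hypothetical simulation parameters packed into one struct so a single
// memory pointer occupies a stack slot instead of five separate variables.
struct SimParams {
    uint256 initialPrice;
    uint256 decayRate;
    uint256 capacity;
    uint256 duration;
    uint256 deposits;
}

function simulate(SimParams memory p) internal pure returns (uint256 finalPrice) {
    uint256 decayed;
    // Block scoping: `elapsed` is popped from the stack when the block
    // ends, freeing its slot for later calculations.
    {
        uint256 elapsed = p.duration / 2;
        decayed = p.initialPrice - (p.decayRate * elapsed);
    }
    // `elapsed` is out of scope here; only `decayed` remains live.
    finalPrice = decayed + (p.deposits * decayed) / p.capacity;
}
```

The same struct can be threaded through internal helper functions, which combines the first two tricks: each function sees one pointer rather than the full variable set.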
Building with Git Commit Standards
We've been trying to continuously improve development standards at Olympus over the past few months and have made a lot of progress. One area in particular is guidelines for structuring commits to organization repositories (we're currently doing this on newer internal ones, but will likely expand it later). As we've migrated to foundry and updated other development practices, another dev suggested using a keyword for each commit we make to a branch to denote the type of changes being made, and then limiting the changes per commit to that context. This is similar to the foundry repository, which I believe was the inspiration. Here are the keywords we use for commits:
- build: Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
- ci: Changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)
- docs: Documentation only changes
- feat: A new feature
- fix: A bug fix
- perf: A code change that improves performance
- refactor: A code change that neither fixes a bug nor adds a feature
- style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
- test: Adding missing tests or correcting existing tests
- git: anything unholy you have to do with git (this last one makes me laugh 🤣)
Documenting commits this way makes it easier for others to review PRs and understand what type of changes are being made (assuming they are used correctly). Additionally, it enforces better commit habits by separating changes into different commits. I've definitely been guilty of working on a branch all day and then just pushing a monster commit with a rambling explanation. Following this structure helps me avoid that.
Researching Formal Verification
I've done more hard thinking about math in the last week than I did in the 5 years before I started smart contract development. I studied math for my undergraduate degree and participated in a research fellowship during my time there. I've been getting back to these roots recently, and it has felt really good.
I joined OlympusDAO as a smart contract dev in February after publishing my deep dive on their Olympus Bonds V2 contracts, which spurred some conversations with various team members. Since joining, one of the things I've been working on is a set of updates to bonds (shhh). As we've thought more about the system and how to ensure its behavior is 1. as intended and 2. optimal, I started thinking about the need to formally model the logic as a system of equations.
Some experienced readers may say "shouldn't that be done first?" The answer to that is complicated. Systems come in different flavors, but to generalize: some time-based systems compute certain variables, like price, with closed-form (absolute) calculations (such as the Paradigm Continuous GDA design), while others implement their logic iteratively. Iterative systems are (relatively) easy to implement, but they are hard to formalize, since incremental relationships should be generalized with induction when possible. The Olympus bond system uses a number of iterative calculations to adjust the price of a market over time based on market activity. This helps ensure the price is fair when external prices move.
In a broader sense, I think this is representative of a number of DeFi and smart contract systems. In many cases, they have been developed intuitively, refined based on testing and simulations, and are functional despite lacking more formal specification and verification of their underlying logic. Many of the blue-chip protocols are exceptions and have been formalized either during development or afterwards.
However, as the space matures (and has matured), we are seeing more thorough research, specifications, and formulations of smart contract systems that have a theoretical grounding. I'm excited to dive deeper into these areas.
Formal Verification is the act of proving the correctness of a system with respect to a formal specification using mathematical techniques. Broadly, I'm envisioning Formal Verification in the following steps:
- Create a specification of what we intend the system to do
- Create a mathematical model of the intended system
- Prove the correctness of the model
- Verify that the system implements the model
An example is the Uniswap x*y=k formal verification paper by Runtime Verification which was commissioned prior to the initial release of Uniswap.
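As a toy illustration of the first two steps (this is a spec fragment I'm writing here for illustration, not taken from the Runtime Verification paper), the core property of the constant product market can be stated as: for a trade of $\Delta x$ in and $\Delta y$ out (ignoring fees), the product of the reserves must not decrease:

```latex
(x + \Delta x)(y - \Delta y) \;\ge\; x \cdot y
```

Steps 3 and 4 would then prove this inequality holds in the model and verify that the contract's swap implementation, with its integer rounding, always preserves it.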
Thinking about Project Evolution over Time
I recently spent an afternoon implementing some minor changes to a "nearly done" smart contract system. The codebase is robust, well documented, and well tested at this point. I spent about 30 minutes making the changes and then 2.5 hours fixing all the test files and cleaning up documentation. This is probably the norm for projects once they reach a certain size, and it isn't isolated to smart contracts. It's really a symptom of a broader phenomenon: the evolution of a project from new -> established, free flowing -> rigid, sketchy -> dependable, etc.
The problem is that working on new things where we can move fast is fun, whereas working on established projects and making concrete improvements that affect a lot of users is useful. When you start working on a new project, you can structure things however you want. There is no existing structure to mind, guidelines to follow, or tests to ensure are updated. When you're part of a large existing project, things are more rigid and changes take longer. One is not better than the other. Both are needed to build new things and then scale them to reach their potential. I think some people are geared more towards one or the other, but being comfortable in both environments is good if you want to build something and see it through.
In the context of smart contract development, the difficulty of changes increases if you update base contracts which are inherited multiple times, or an interface which is used in a number of different contexts. One main way to make this more bearable is to factor re-used code out into dedicated files or functions, even in your test framework. I didn't do this early for tests, and the number of test files has grown to the point (~12 files) where keeping them in sync is a big pain. I need to take some time to factor out redundant components and remove some of the technical debt here.
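As a minimal sketch of that factoring (contract names reuse the hypothetical deploy example above; system imports are omitted for brevity), a shared forge test base deploys the system once so each test file only describes its own scenarios:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Test} from "forge-std/Test.sol";
// (imports of RolesAuthority, Authority, and Contract1 omitted for brevity)

// Shared base: deployment and common actors live here, so a change to a
// constructor signature is fixed in one place instead of ~12 test files.
abstract contract SystemTest is Test {
    RolesAuthority internal authority;
    Contract1 internal contract1;
    address internal guardian = address(0xA11CE);

    function setUp() public virtual {
        authority = new RolesAuthority(guardian, Authority(address(0)));
        contract1 = new Contract1(guardian, authority);
    }
}

// Each test file inherits the base instead of repeating the setup.
contract Contract1Test is SystemTest {
    function testToggleActive() public {
        // guardian is the Auth owner of contract1, so this call is authorized.
        vm.prank(guardian);
        contract1.toggleActive();
    }
}
```

Individual test contracts can still override setUp, call super.setUp(), and add scenario-specific state on top of the shared deployment.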
Currently reading
- Snow Crash - Neal Stephenson. Nearly finished. It's been an excellent read. I'm going to think twice before staring at any bitmaps in the future...
- Some older articles (A Short History of Uniswap, On Path Independence) from the founding of Uniswap, about the inspiration behind x*y=k and the formal verification linked above. As I've been working on Olympus bonds, thinking about expanded use cases, and OTC markets in general, I wanted to understand more about how other market primitives were developed. Luckily, there is a lot of great information out there, and it helps me fill in gaps in my knowledge from not being part of the space at the time. Additionally, it sparked my interest in Formal Verification, which I described above.
Written by
Jordan Hall
I’m Jordan, a developer mostly working on Ethereum Virtual Machine (EVM) based smart contracts and applications. I'm interested in Decentralized Finance (DeFi) and Gaming use cases on the blockchain. I am part of the Solidity Guild, a collective of web3 developers. I built raider.finance, an auto-compounding vault implementation for the Crypto Raiders LP staking contracts. I'm currently working with OlympusDAO to build and enhance their smart contract systems. I write a weekly newsletter, Dev's Journal, based on my notes as a smart contract developer. I try to learn by doing deep-dive reviews of popular blockchain protocols and thinking about new innovations that could build on them. 📱 Message me on Twitter @oightytag