Enterprise-Grade EVM through Hyperledger Besu

Igor Berlenko
67 min read


Unlocking enterprise potential means embracing cutting-edge technology and redefining how small and medium businesses access innovation.

Thank you for joining us in Brussels. This location is a bit new for us, but we are very happy to be here. A special thanks to the people of VC for hosting us today on Enterprise-grade EVM and Hyperledger. We have a lot of technical topics to cover, especially in the context of the Side Event of LC, where we will finally discuss Enterprise blockchain. So, no crypto discussions today—sorry! We will focus on more Enterprise solutions.

Let me provide a quick overview. I am Daniel, the research coordinator at Howest. At Howest, we focus on seeing beyond obstacles—sorry for the pun—by creating opportunities, developing talents, raising the bar, and inspiring people to realize their full potential. We challenge individuals to do things differently, which is the focus of the university college of West Flanders, as well as in CC. Today, we are exceptionally gathered here in Brussels.

We are also a proud member of the Hyperledger Belgium group. This session is an opportunity to share insights. As the research coordinator of one of the research units, the Howest Cyber3Lab, we strongly focus on how small and medium enterprises (SMEs) can utilize cutting-edge technology. There is a heavy burden in adopting these technologies, as big companies usually have easier access to such information; for SMEs wanting to use Web3, AI, cybersecurity, and other emerging technology, it can be quite challenging. This is where our group of 15 to 20 researchers focuses on immersive tech, cybersecurity, and Web3, as well as the combination of these areas.

To give you a very brief overview, here are some of the projects we are currently working on. We have numerous European projects, including the DEUM project on FCN, which will be discussed in later sessions. This project looks at how we can build operations for the European Blockchain Services Infrastructure. We have also worked on blockchain and government through the BLINK project. Another project we are starting is EXITE, which examines how the metaverse can provide access to local governments, referred to as the Cityverse. Additionally, we are partners in the BIG LSP project, which is part of the European Digital Identity initiative.

We don't only conduct research; we also offer a variety of master classes. If you're interested, we have a basic Web 3 master class, secure blockchain development, and identity courses. As a university college, we focus on these educational opportunities. Today, I would also like to introduce my colleagues: Patrick, who is present, and Shane, who is actively working on these topics. Please feel free to reach out to them if you have any questions or discussions.

We would also like your assistance. We are currently reflecting on our position as a university college and considering whether there is a need for blockchain certification. We have numerous courses and master classes, but there may also be interest in having a certification that proves competency, such as being a certified architect or member. We will be sending out emails to invite you to provide feedback on the idea of blockchain certification.

The same applies to all the slides; they will be sent out as well. We kindly ask you to complete a small survey to share your impressions, what went well, and what could be improved. Your feedback is invaluable to us.


=> 00:05:40

Open source is the future of decentralized technology; collaboration thrives in transparency and trust.

Before we begin, I would like to mention that we are already five minutes into the program. However, I believe we have an interesting program lined up for you. The welcome session has now concluded, so I would like to introduce Hart Montgomery, who will give a brief introduction to the completely new Hyperledger Foundation, an initiative so fresh that it hasn't even officially started yet.

Following Hart's introduction, we will have a combined session where the team from Kaleido will discuss running Hyperledger Besu in the enterprise. They will cover what's new and what's next, and provide a very practical approach to privacy for enterprise use cases. This session promises to be informative, as there is a wealth of information to share.

Next, we will transition into another session on combining the diamond proxy and the beacon proxy patterns for upgradeable smart contracts without size limits. Alberto will lead this discussion, which I presume will be a bit technical; however, I don't think that will be a problem for this audience.

I will then share insights about the European projects and the next generation of EBSI. We will explore how the European Union and its member states are responding to blockchain technology and what their ambitions are. This segment will also include a 20-minute Q&A session.

Our last speaker will discuss the Alastria public-permissioned blockchain consortium, also related to Hyperledger. We anticipate that these talks will last approximately 20 minutes each, with a five-minute Q&A following each session. If you have any urgent questions, please feel free to ask, as we are a small group and encourage interaction.

At the end of the program, we would like to invite you to our networking session. Please connect and stay with us, but do keep in mind that we need to vacate the building by 9 PM. I understand that some of you may want to leave early to watch the football game, particularly since Spain is playing.

I hope you enjoy this evening, which is packed with information. As a reward, we will conclude with a reception.

Now, I would like to hand it over to Hart Montgomery.

[Applause]

Thank you, everyone, for coming tonight. I am Hart, and I work as the CTO of the Hyperledger Foundation. I will provide you with a very brief overview of what Hyperledger is. First, I want to extend a warm welcome to all of you and remind everyone of the antitrust policy notice that accompanies all Linux Foundation participation.

So, what is Hyperledger? Hyperledger is an open-source global ecosystem for enterprise-grade blockchain technologies, focusing on critical implementations and developments worldwide. We are part of the Linux Foundation. Are people here familiar with the Linux Foundation?

For those who are not, the Linux Foundation is known for projects like the Linux kernel, Kubernetes, Automotive Grade Linux, and the Academy Software Foundation. As a sub-foundation of the Linux Foundation, we enable developers, enterprises, and organizations to collaboratively develop in the open. We inherit all the principles of open source, transparency, and open governance from the Linux Foundation.

In summary, we believe that open source, open development, and open governance are the future of decentralized technologies. We do not support centralized governance for decentralized software, as it does not inherently make sense. When you encounter projects that claim to be open source, we encourage you to ask if they are also openly governed.

As a foundation, we serve as a neutral third-party home for code, standards, and sometimes even data collaborations between different entities that may not fully trust one another. We are backed by a trusted legal framework and have a fantastic staff experienced in open source.

The goal of Hyperledger and the Linux Foundation is to facilitate collaboration on open development, allowing others to use and contribute to your code base. While these numbers may not be entirely up to date, we have a significant global effort underway. We have been doing open blockchain work for longer than almost anyone else, and this overview does not even capture some of our most recent projects scheduled for 2024.


=> 00:10:33

Collaboration in open source is the key to unlocking innovation in blockchain technology.


What many people may not know is that Hyperledger has been involved with Ethereum for quite some time. Does anyone here remember Hyperledger Burrow? That was a really cool project that I believe was actually too far ahead of its time, as it was essentially a modular EVM that could be used with a number of different blockchains. This initiative dates back to 2017, and since then, we have had a lot of collaboration and work with Ethereum and the Ethereum Foundation, including the EEA joining and, of course, as everyone will tell you about today, Besu.

Very briefly, the goals for this meetup are to ensure that everyone learns a lot about Besu today, and importantly, I hope everyone can explore possible opportunities for collaboration. If you are wondering how you can get what you want out of the community, I encourage you to join our Discord. We have many different ways to get involved if you are interested in the community or want to learn more.

Thank you very much for your attention; that concludes my five minutes. I will now turn it over to Matt, and if you have any questions for me, I will be around, so please feel free to ask.


Hello, my name is Matthew Whitehead, and I have about 15 minutes to share with you before handing over to Jim Zhang. I am going to give an update on what's new and what's next in Besu for enterprise users.

A bit about me: I am a principal engineer at Kaleido, and as you will see on some of the subsequent slides, Kaleido and I lead the enterprise roadmap for the project. You will see a lot of me on Discord, where we have an enterprise-specific channel in the Besu section. Here, we discuss enterprise-specific requirements and features. I work very closely with the main open source project leads, largely from Consensys, to shape how the public and enterprise roadmaps evolve, how they relate to each other, what we have in common, and what things we will work on separately.

Kaleido is a Web3 technology company that offers a wide range of Web3-oriented technologies. In this context, the most relevant thing we do specifically around Besu is packaging it for both hosted and self-hosted environments. We take the Besu releases, validate them, and test them to a greater extent than the regular open source releases are tested, and then make them available as part of our offering.

Kaleido has been heavily invested in Besu and has been running Besu nodes on our platform for several years now. With our leadership of the enterprise roadmap for Besu, we are driving it even further forward.

I will skip a few slides since I am a little short on time. I suspect most people here have a good understanding of why Besu is particularly good for the enterprise. Its growth on the public chain, even though that work is very public-chain oriented, is a significant part of the story for enterprises. It demonstrates that the client is maturing, well used, and well exercised, with Besu being run by home stakers and people running validators on Mainnet. As that usage continues to grow, the combination of it with all the features that permissioned chains need, such as QBFT and IBFT consensus, account permissioning, and node permissioning, makes Besu a really good fit for enterprises. Additionally, it has a permissive Apache 2.0 license.


=> 00:15:07

Enterprise growth thrives on the synergy between public and permissioned chains, driving innovation and stability in blockchain technology.


Turning to the Besu roadmap, I would encourage anyone here to go and look at it on the Besu wiki. As part of having the public and the permissioned (enterprise-focused) features developed in parallel, we have separated out the roadmaps. This lets you see what is coming next for Besu, what is being worked on right now, what is in progress, and what has been implemented and released recently. We are trying to ensure that the public and the enterprise routes move along cohesively. This is the part of the roadmap that I lead, and I would also encourage you to come chat with me here, on Discord, or elsewhere to discuss features you think Besu should deliver for your use cases.

Most of the rest of my talk will be a summary of what's been going on in Besu for the enterprise over the last 12 to 18 months. At the end, I will focus mainly on the right-hand side of the slide, discussing the things that are being actively worked on at the moment. This hopefully gives you an idea of the amount of work that has gone into Besu in the last 12 months, the relevance of the features that have been worked on, and why they matter. I have a slide that highlights the main things, particularly the two middle items, which contain the most interesting content. I will talk about each of those items in detail.

It's probably worth mentioning some of the developments from late last year. We introduced a transaction pool that is much more focused on enterprise use cases. This was an evolution of what was previously the only pool, but the public main-chain pool has really different requirements compared to enterprises. One of the features that came in last year was the sequenced transaction pool, which provides much more predictable behavior for transactions. Transactions are processed more or less in the order that they arrive, which tends to be the order that people want in an enterprise. This makes diagnosing issues much easier, as you don't have transactions stuck in the pool without knowing why they haven't been mined while others have gone ahead.

A significant amount of work was done, partly through the engagement of companies like Kaleido with many production customers, to iron out issues related to high performance and stability when running for extended periods. Now, I would like to discuss some of the more substantial work that we've accomplished. One of the first things delivered at the beginning of the year was support for QBFT chains working with the Shanghai fork. For those who aren't familiar, the Shanghai fork is an upgrade of the public chain from about a year or more ago. It introduces several features that are specific to the public chain, but it also adds an additional operation to the EVM (the PUSH0 opcode) that improves the efficiency of certain transaction types.

There are two compelling reasons why Enterprises should care about some of the public EIPs. First, the performance enhancements provide benefits to any transaction type, whether it's on the public chain or in a permissioned chain. Second, the ecosystem that developers are using in your application teams evolves alongside the public chain. They are not left waiting for two, three, or four years with developers using outdated tools and old versions of the compiler. These development tools and technologies are advancing at the speed of the public chain.


=> 00:18:55

Stay ahead in blockchain development by embracing the latest tools and features—don't let outdated tech hold your team back.


At the end of last year and the beginning of this year, we observed many developers expressing frustration. They would say, "Well, I'm using the latest version of Solidity against a QBFT chain, and the contracts won't deploy," or "I'm using OpenZeppelin 5 against an IBFT chain, and the contracts don't deploy." The underlying issue is that as compiler versions and libraries advance, they begin by default to target the latest features of the EVM, which may not be supported if you are running a QBFT or IBFT chain that hasn't adopted the newer forks.

To address this, we specifically worked on ensuring that the EVM for QBFT and IBFT chains was up to date with the latest forks and supported the Shanghai fork. This allows your development teams and application teams to leverage all the latest tooling, such as OpenZeppelin 5, which was released around the first quarter of this year.
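The flip side, before a network has been upgraded, is that application teams often pin the compiler's EVM target so the generated bytecode avoids newer opcodes such as PUSH0. Here is a minimal sketch of what that can look like in a Hardhat project; the RPC URL and chain ID are placeholders, and which fork to target depends on what your network actually supports:

```typescript
// hardhat.config.ts -- illustrative only; adjust versions and URLs for your network
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: {
    version: "0.8.24",
    settings: {
      // Target an EVM version the permissioned chain actually supports.
      // "paris" (pre-Shanghai) avoids emitting PUSH0; once the chain has the
      // Shanghai fork enabled, this can be raised to "shanghai" or removed.
      evmVersion: "paris",
      optimizer: { enabled: true, runs: 200 },
    },
  },
  networks: {
    besuPermissioned: {
      url: "http://localhost:8545", // placeholder RPC endpoint for a Besu node
      chainId: 1337,                // placeholder chain ID
    },
  },
};

export default config;
```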

Additionally, we recognized that the defaults for Besu were simply that: defaults. If you ran Besu with its defaults, you received a configuration generally designed for public chains. Therefore, we explored how to make it easier to run Besu for the enterprise. The solution we implemented is profiles. We began with two or three profiles, including a staker profile with sensible defaults for public-chain validators and an enterprise profile.

The purpose of the enterprise profile is to provide sensible defaults tailored for a permissioned environment. For example, it uses the sequenced transaction pool and defaults to a minimum gas price of zero. It also turns off various transaction prioritization behaviors that most permissioned-chain users do not want. Users can still override these defaults and customize their settings, but using the --profile Enterprise option will yield a much more sensible set of defaults for a permissioned chain.
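As a rough illustration of what a zero minimum gas price means for client code, here is a hedged sketch (using ethers.js, with placeholder addresses and endpoints) of submitting a legacy, zero-gas-price transaction to a free-gas permissioned network. This assumes the network really is configured with a zero minimum gas price and a zero base fee; otherwise the node will reject or never mine it:

```typescript
// send-free-gas-tx.ts -- illustrative sketch for a free-gas permissioned network
import { ethers } from "ethers";

async function main() {
  const provider = new ethers.JsonRpcProvider("http://localhost:8545"); // placeholder Besu RPC
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

  // Legacy (type 0) transaction with an explicit zero gas price. On a chain
  // started with enterprise-style defaults (min gas price 0, zero base fee)
  // this is accepted; on the public chain it would never be mined.
  const tx = await wallet.sendTransaction({
    to: "0x0000000000000000000000000000000000000001", // placeholder recipient
    value: 0n,
    type: 0,
    gasPrice: 0n,
    gasLimit: 21_000n,
  });

  console.log("submitted", tx.hash);
  await tx.wait();
  console.log("mined");
}

main().catch(console.error);
```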

Moreover, we noticed that while Besu generally behaves well, feedback from enterprise users indicated that accidentally starting an old version of Besu against data written by a newer version was a real risk. Users expressed concerns about data corruption, for example if a GitOps pipeline malfunctioned and pulled the wrong version of Besu. To mitigate this risk, we ensured that recent versions of Besu detect if an earlier version is started against newer data and prevent the start by default. Users can still override this behavior if they are confident in their version checks, but refusing to start rather than risking corrupted data is a much more desirable default behavior for enterprise users.

Most recently, these enhancements were delivered into the mainline of Besu about a week ago. This work aims to bring permissioned chains to parity with public chains. The introduction of the Bonsai database is particularly significant, as it is much better suited for pruning data over time and offers lower read times for state from the world state trie. Additionally, snap sync provides much faster synchronization times when adding a node to your network.

Previously, the combination of QBFT and the Bonsai database had not received significant investment from the community. While users could enable these features, they often did not function correctly. Early in the year, some work was done to prevent Besu from starting if these features were enabled together, as they simply did not work. However, the Bonsai database offers substantial benefits, and the future for Besu is to eventually phase out the Forest database in favor of Bonsai.

=> 00:22:51

Bonsai database and Snap sync are game-changers for faster data pruning and node synchronization, paving the way for a more efficient future in blockchain technology.

The direction of travel for Besu is clear, and it is important that we get good Bonsai support for permissioned chains. Likewise, snap sync is going to be the way you sync anything that isn't an archive node in the future. Fast sync, which is somewhat similar to snap sync, is being deprecated as Besu moves away from it, as it was a kind of stop-gap syncing protocol. Snap sync makes it much quicker to sync a new node against a chain that has been running for five years and is millions of blocks deep.

We have ensured that the work has been completed to allow the use of QBFT with the Bonsai database and snap sync, and it all works out of the box. This includes addressing a number of edge cases that you wouldn't expect if you are a public chain user. A lot of the work involved scenarios such as performing a snap sync against a chain that doesn't have any accounts in it. No one had invested time to code for that case because everything was focused on the public chain with millions of accounts. Many of those edge cases arose when I started a new chain just to develop against, which had no state. A lot of the snap sync code didn't expect the state to be empty, so it simply didn't function. Thus, there were many loose ends to tie up and testing to conduct.

Now, this means that you can use this combination to achieve much quicker sync times when you add nodes to your network. You are now utilizing all the technologies that Besu is heading towards, which aligns with the strategic direction for Besu.

As for the details of how to turn some of these features on, you will receive this information afterwards. You need to turn on the snap sync server on all the nodes that will serve snap sync data; this feature is marked as experimental while we let the dust settle and address any bugs. Typically, you also want to reduce the number of peers the node expects before syncing. On the public chain, finding 25 peers is very easy, so the default of five is fine; however, in permissioned chains, if you're just running one or two nodes, you will typically want to reduce that number.

Lastly, we have spent a lot of time running nodes and helping customers diagnose issues in Besu, which has not always been easy, particularly around QBFT chains. Diagnosing many cases was complicated because not many people had taken the time to figure out what would really help in those situations. A good example is a stalled chain. If you have four QBFT validators, you always need three of them at any one time to keep producing blocks. If you go down to two, you will see that no new blocks are produced and no transactions can be mined. However, in the logs you see nothing, really; you notice that blocks were produced, then four minutes go by while the situation recovers, but there is no information available.
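To make that arithmetic concrete, BFT-style consensus such as QBFT needs a super-majority of validators to agree before a block is sealed; Besu's IBFT 2.0/QBFT quorum is commonly described as ceil(2N/3). A tiny sketch of the calculation, my own illustration rather than code from Besu:

```typescript
// qbft-quorum.ts -- illustrative arithmetic, not Besu source code
function qbftQuorum(validatorCount: number): number {
  // Super-majority required to agree on a block: ceil(2N / 3).
  return Math.ceil((2 * validatorCount) / 3);
}

for (const n of [4, 5, 7]) {
  const quorum = qbftQuorum(n);
  console.log(`${n} validators -> quorum ${quorum}, can lose ${n - quorum}`);
}
// 4 validators -> quorum 3, can lose 1
// so with only 2 of 4 validators left, no new blocks can be produced.
```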

You could turn on debug or trace logging, but you would probably need to do that on all the nodes, or at least some of them, to see which nodes are talking to each other and which ones are offline. The QBFT protocol is quite complicated, going through increasing rounds where all of the nodes must reach the same round to agree on a new block. Unfortunately, you don't receive any useful information about this process.

From the pain we have seen customers experience while diagnosing these issues, we looked at how we could make it clearer what is happening in a network during these situations. Recently, we implemented a feature that, when a node goes offline, logs information about the QBFT round-change logic. It now indicates that this node has gone through QBFT round one, which has expired, and that it will move on to round two, providing you with valuable insights into the network's status.

=> 00:26:45

Simplifying complex network diagnostics can transform the user experience, making it easier to troubleshoot and optimize performance.


About a week ago, we observed a situation where some nodes went offline. In this instance, the logs provided valuable information about the QBFT logic, indicating that a specific node had completed QBFT round one, which had expired, and was moving on to round two. The logs also specified the duration until round two would expire and listed the nodes it was communicating with, including their respective round statuses. It is important to remember that all nodes must reach the same round to agree on a new block.

Initially, we noted that two nodes were progressing through the rounds, and eventually a third node joined the process. After someone restarted a node, the network recovered and the round counter returned to one. This illustrates how QBFT operates. From a single node, without needing to enable trace or debug modes, we can observe how these nodes converge over time, indicating that blocks will soon be produced again. For instance, we saw a fourth node join the network, which is a positive sign. By simply examining the logs of this one node, we can ascertain that progress is being made without needing to log into other nodes.

Eventually, all nodes reached an agreed round, and we were able to confirm that blocks were being produced again. While these improvements may seem simple, they significantly benefit enterprise users. I have spent nearly 20 years working for IBM in Enterprise messaging, which has given me ample experience in how a small amount of work can greatly ease the lives of operators and those debugging issues.

In terms of performance, we have received feedback from customers regarding the best practices for optimizing performance. We have spent time assessing the current state of performance, referencing a paper from 2022 that indicated 400 TPS is achievable without excessive effort. We investigated whether this benchmark still holds and explored tuning options to identify limits and thresholds. While I cannot delve into extensive details here, I will be hosting a webinar in a couple of weeks that will cover this topic in greater depth.

Through our efforts, we have conducted extensive testing of JVM profiles to understand the lower limits for setting tuning parameters. As a result of this work and minimal configuration changes, we are now comfortably achieving 50% to 70% higher performance than the figures reported a couple of years ago. We anticipate further performance enhancements, particularly with the transition from Forest DB to Bonsai DB, which has already contributed additional TPS to our top-level performance metrics. We are also aware of other inefficiencies in block validation processes and plan to continue focusing on performance improvements.
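Numbers like these obviously depend on hardware, block gas limits, and workload; for a rough sense of what your own network is doing, you can derive throughput from recent block headers. A hedged sketch with ethers.js (the endpoint is a placeholder, and this is only a back-of-the-envelope estimate, not a proper benchmark):

```typescript
// measure-tps.ts -- rough throughput estimate from recent blocks (illustrative)
import { ethers } from "ethers";

async function recentTps(rpcUrl: string, window = 100): Promise<number> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const head = await provider.getBlockNumber();
  const first = await provider.getBlock(head - window);
  const last = await provider.getBlock(head);
  if (!first || !last) throw new Error("missing blocks");

  // Count transactions across the window of blocks.
  let txCount = 0;
  for (let n = head - window + 1; n <= head; n++) {
    const block = await provider.getBlock(n);
    txCount += block?.transactions.length ?? 0;
  }

  const elapsedSeconds = last.timestamp - first.timestamp;
  return txCount / Math.max(elapsedSeconds, 1);
}

recentTps("http://localhost:8545").then((tps) =>
  console.log(`~${tps.toFixed(1)} TPS over the last 100 blocks`)
);
```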

Before I hand over to Jim, I would like to address what is next on our agenda. The work on Bonsai DB is a promising start for enterprise users; however, it currently lacks archive capabilities. For example, it does not allow users to query the state of a smart contract on block four while being on block one million. If you were to run in full sync mode with Forest DB, you could achieve this, but with Bonsai DB, you cannot because it inherently prioritizes current state over historical data.
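What "archive" means in practice is the ability to point a read call at an old block. A small hedged illustration with ethers.js: against a Forest full-sync node (or a future Bonsai archive node) the historical call succeeds, while a plain Bonsai node will typically only answer for blocks close to the head. Addresses and the ABI here are placeholders:

```typescript
// historical-read.ts -- querying contract state at an old block (illustrative)
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("http://localhost:8545"); // placeholder
const token = new ethers.Contract(
  "0x0000000000000000000000000000000000001234",        // placeholder contract
  ["function balanceOf(address) view returns (uint256)"],
  provider
);

async function main() {
  const holder = "0x0000000000000000000000000000000000005678"; // placeholder account

  // Current state: works on any node type.
  console.log("now:", await token.balanceOf(holder));

  // Historical state: requires the node to have retained the world state for
  // that block (archive-style data); otherwise the call errors.
  console.log("at block 4:", await token.balanceOf(holder, { blockTag: 4 }));
}

main().catch(console.error);
```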

To address this limitation, I am actively working, and Kaleido is actively pushing as part of the Besu roadmap, on a method for retrieving archive data from a Bonsai database. This will enable enterprises to access all necessary information from historical parts of the blockchain. Additionally, we will be examining QBFT behavior further. The logging I demonstrated earlier illustrated how the chain recovered after an issue, but it did not expedite the process. It still took four minutes after restarting two nodes for them to begin producing blocks again.

=> 00:30:42

Unlocking blockchain privacy means hiding transactions while keeping them decentralized. It's all about finding the balance between confidentiality and programmability.

Four minutes after you've restarted all your infrastructure before it starts doing something useful is not acceptable in an enterprise environment, so we will be looking at further improvements to QBFT recovery in this area.

That's pretty much it for now. I encourage you to come and join us on Discord and find us on GitHub. I would also like to make one shout out for the Besu Financial Services working group, which meets every three weeks on a Friday or Thursday, depending on which time zone you're in. Hart is probably the best person to put people in contact with that group, or you can reach out to me as well. We can share links to join those calls, which are hosted by DTCC, who chair the working group.

Thank you very much, and feel free to come find me with questions later.

Hi everybody, I'm Jim, one of the co-founders of Kaleido and head of protocols at the company. Today, I'm going to talk about the new project that we just contributed to Hyperledger Labs. This is part of a journey for us to make a more significant contribution. As some of you may know, we are the creators of a larger project called FireFly, which focuses on the off-chain components that make building blockchain-based solutions easier. This is another significant contribution we are planning to make, and this project at Hyperledger Labs is the first step, focusing on privacy.

As anyone who has been dealing with an EVM protocol may know, there is no built-in support for privacy. Tessera was an early attempt to implement privacy on EVM protocols, but it encountered issues. One of the biggest problems with the approach that Tessera takes is that you can't really run a token design on this privacy model. Therefore, we need something more that works at the EVM level and provides all the abilities you need, regardless of what kind of solution you're building, specifically for tokens. This is particularly challenging because, by necessity, a token economy must be fully decentralized; the entire supply of the token must be visible to every node. However, at the same time, you need to hide everything. So, how do we do that? That's what this project is for.

As you can see, this has already been contributed to Hyperledger Labs. If you are interested, after today you can go there and try out the tutorial yourself. What exactly are we doing in terms of privacy? There are four key things we need to achieve. First, confidentiality is about hiding information. When you send transactions to the chain, you shouldn't be declaring, "I'm sending a million dollars to Matt." We need to hide that.

We also need to hide ownership. If people can see that I am sending something to Matt, even though they can't see the amount, that's still not good for privacy purposes. Additionally, programmability is crucial. Can we do other things on top of the privacy tokens? For example, can we perform DvP (delivery versus payment) from one token to another? This is very important for enterprise use cases. Finally, we added a fourth requirement: we need options. In the enterprise space, it's never a one-size-fits-all situation; every use case has slightly different requirements regarding privacy.


=> 00:35:59

In the world of privacy tokens, one size never fits all; we need tailored options to meet diverse privacy requirements.


To provide context, let's compare different approaches to building privacy tokens. Generally, there are two ways to build them. One can follow the current ERC-20 or ERC-721 account model, which maintains what is essentially a big key-value map on-chain of account addresses and balances. Privacy can be achieved in this model by encrypting the balance, resulting in an encrypted balance rather than a clear balance, and by using FHE cryptography one can perform mathematical operations on the ciphertext.

However, if you have explored this space, you may have encountered existing solutions like Anonymous Zether, which is built on this model. There is a significant issue with this approach: if multiple people attempt to modify the same account, you may never be able to spend from your account. This occurs because Anonymous Zether relies on generating a proof to demonstrate that you know your balance, while others do not. When you send a value to another person, you must prove that you have enough in your account to support that transfer.

The challenge arises when, before your transaction is mined by the node, someone else sends you even a small amount, such as a penny or a dollar. If that transaction is included in the chain before yours, your balance would have been modified, rendering the proof you generated invalid. Consequently, you would be unable to spend your money. If someone continues to flood the chain with small transactions, you will never be able to spend.

To address this issue, certain mechanisms can be introduced, such as a spending window. While this does help with the spending issue, it limits you to one spending transaction per spending window, which in turn restricts your throughput.

On the other hand, the alternative approach to building a privacy token is the UTXO model. This model resembles how Bitcoin operates, as it is built on UTXO rather than maintaining a table of accounts and balances. In this model, all tokens are viewed as a pile of coins, where each coin has a different denomination to represent its value. Your balance is not just a single number; rather, it is a collection of all the coins you own. For example, you might have 10,000 different coins totaling $1 million. When spending, you can execute multiple transactions simultaneously, each using a different part of your collection, without them interfering with each other. If others send you more coins, they simply add to your collection without impacting your spending transactions.

This UTXO-based approach offers much better parallelism, allowing for higher throughput. This is indeed how we built Zeto.
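Here is a toy sketch of why the coin model parallelizes: two transfers can each lock a disjoint subset of the sender's coins, so neither invalidates the other, and an incoming coin simply joins the pile. This is only an illustration of the data structure, not Zeto's actual implementation:

```typescript
// utxo-parallelism.ts -- toy model of coin selection (illustrative only)
interface Coin { id: string; value: number; }

// The sender's "balance" is just the set of unspent coins they own.
const wallet: Coin[] = [
  { id: "c1", value: 500 },
  { id: "c2", value: 300 },
  { id: "c3", value: 250 },
  { id: "c4", value: 100 },
];

// Greedily pick coins covering `amount`, skipping ones already reserved.
function selectCoins(coins: Coin[], amount: number, reserved: Set<string>): Coin[] {
  const picked: Coin[] = [];
  let total = 0;
  for (const c of coins) {
    if (reserved.has(c.id)) continue;
    picked.push(c);
    total += c.value;
    if (total >= amount) return picked;
  }
  throw new Error("insufficient unreserved coins");
}

const reserved = new Set<string>();

// Two transfers prepared concurrently use disjoint inputs, so neither
// transaction's proof is invalidated by the other being mined first.
const txA = selectCoins(wallet, 600, reserved); // e.g. c1 + c2
txA.forEach((c) => reserved.add(c.id));
const txB = selectCoins(wallet, 300, reserved); // e.g. c3 + c4
txB.forEach((c) => reserved.add(c.id));

console.log("tx A spends", txA.map((c) => c.id));
console.log("tx B spends", txB.map((c) => c.id));

// An incoming payment just adds a new coin; it never touches txA or txB.
wallet.push({ id: "c5", value: 50 });
```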

The basic constructs of UTXOs in Zeto operate similarly to how Bitcoin works. In Bitcoin, the ledger consists of a collection of UTXOs, each with a spending condition that requires you to prove your identity, such as signing something to confirm you are Alice or Bob. Each UTXO represents a different value. For privacy purposes, it is essential to hide these values. A great method for achieving this is hashing. By hashing these values along with the public key of the owner and some additional data, we can maintain confidentiality.

Thus, on the ledger for the smart contract managing the tokens, only a series of hashes are visible. Hashes are beneficial because they are a one-way function, meaning you cannot reverse-engineer them to discover what they represent. However, the challenge arises when sending a transaction to the chain: you are spending a collection of hashes and producing new hashes. How will the smart contract handle this?
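To make the "hash of value plus owner" idea concrete, here is a minimal sketch of computing such a commitment with ethers.js. It uses keccak256 purely for illustration; a real implementation such as Zeto's works inside ZK circuits and would use a ZK-friendly hash and a proper randomness scheme, so treat the field layout here as a made-up example:

```typescript
// utxo-commitment.ts -- illustrative commitment hash, not Zeto's actual scheme
import { ethers } from "ethers";

// Commit to (value, owner, salt). Only the hash goes on-chain; the owner keeps
// the preimage so they can later prove, in zero knowledge, that they can spend it.
function commit(value: bigint, owner: string, salt: string): string {
  return ethers.solidityPackedKeccak256(
    ["uint256", "address", "bytes32"],
    [value, owner, salt]
  );
}

const salt = ethers.hexlify(ethers.randomBytes(32));
const utxoHash = commit(
  1_000_000n,                                   // hidden value
  "0x0000000000000000000000000000000000000abc", // placeholder owner address
  salt
);

console.log("on-chain the contract only ever sees:", utxoHash);
```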

=> 00:41:03

Confidentiality in blockchain is achieved through hashing, ensuring your balance stays private while still proving honest transactions with advanced cryptography.

The smart contract needs to be convinced that the hashes you are spending and the new hashes you are producing represent an honest transaction. How do you do that? This is where zero-knowledge proofs come in. It is a very advanced cryptographic technique that we don't have time to delve into today, but it is based on some 40 years of research; if you want to know how it works in depth, you can talk to an expert in cryptography, for example from Stanford.

Essentially, it is a proof that shows the smart contract that, against those hashes, you can be certain they represent honest transactions, guaranteed by mathematics. Although it sounds complex and fancy, it is actually a mature technology used in many Web3 projects. I won't go into the details of the circuit designs, but just know that this toolkit provides a lot of choices for different use cases. Sometimes you may want on-chain privacy, while other times you may not. You can choose to use nullifiers or not. There are fungible tokens, non-fungible tokens, and also DvP (delivery versus payment) flows that can be built on top of this in terms of programmability.

I will skip all of that and just leave you with this: all this information is in the repository. You are welcome to either scan this QR code or visit Hyperledger Labs to find more information. If you have any questions or feedback, feel free to come talk to us; we have our dedicated channel. Thank you.

Now, I would like to open the floor for questions. Can anyone from the audience ask a question? If you have a question, please raise your hand. If not, feel free to reach out to us; we welcome your inquiries here.

Next, I would like to introduce the next presentation.

"Can you hear me?"

Okay, so my name is Alberto, and I am part of the innovation team at Builders. I joined my company a couple of years ago and the industry for that matter. Before that, I worked as a software developer for more than 10 years. My presentation will be a little bit technical, but I promise to keep it accessible for you.

Let me begin by introducing my company. We are a Spanish blockchain technology firm founded in 2018, specializing in enterprise solutions. Our team is made up of architects, blockchain specialists, and business and legal experts who partner with companies to unlock their potential in blockchain and distributed ledger technology (DLT) adoption. We have a business and regulatory compliance mindset, helping our clients navigate the complex landscape of DLT technologies. Our solutions are designed with compliance in mind, and we collaborate with top-tier law firms while closely engaging with regulators like the European Securities and Markets Authority (ESMA) and the Spanish National Securities Market Commission (CNMV) to ensure the highest standards in our solutions.

Innovation is embedded in our DNA, as we are constantly exploring new technologies to transform and evolve specific markets. Finally, here are some numbers about the company: we have collaborated with more than 30 clients worldwide, delivering over 50 projects, and currently, we employ around 80 employees.

This is how I will structure my presentation today. First, we will examine the business requirements that we received. After that, we will look at the things that we can and cannot do within the Ethereum Virtual Machine (EVM), including some restrictions when it comes to deploying smart contracts. Then, we will go through the thinking process that we carried out to find the best solution, which involved first trying to reuse existing patterns and then identifying the reasons why they didn't work for us. Finally, we will present our findings.

=> 00:46:30

Navigating the complexities of blockchain requires innovative solutions to meet common business needs, like deploying identical smart contracts across various sectors.


The business requirement we received was to deploy and manage independent equities and bonds on-chain. All equities would follow the same rules, and the same would apply to bonds. Technically speaking, this requirement translates to deploying multiple independent items on-chain with identical functionality, essentially creating clones of each other. Although our particular context is finance, this requirement could apply to multiple other domains. For instance, in insurance, we could have car insurance policies on one hand and life insurance policies on the other. All car insurance policies would operate in the same way; obviously, the data of each specific policy would change, but the behavior behind it would remain consistent. The same applies to real estate, where you can have real estate shares and loans that behave similarly, with only the data differing from one to another. Thus, this is a very common business requirement that anyone working on blockchain will eventually face.

Now, regarding the smart contract restrictions, these are the rules of the game that we must respect in order to find a solution. The EVM imposes a couple of restrictions on smart contracts: immutability and maximum size.

First, let's discuss immutability. Once we deploy a smart contract, we cannot change its logic. This is to ensure security, but it also means that we cannot fix errors or extend the functionality of the contract. Upgrading a smart contract is difficult; to update the contract, we need to deploy a new one and migrate all the data from the first to the second. This means that all users interacting with those smart contracts will have to update the smart contract address they are using. Additionally, data migration from contract one to contract two will likely require multiple transactions, resulting in increased time and transaction fees.

The second restriction pertains to the size limit. Smart contracts cannot be larger than 24 kilobytes. If we need to deploy business logic that exceeds this limit, which is often the case, we must deploy multiple smart contracts. Consequently, users interacting with our system will need to store multiple smart contract addresses, not just one. To complicate matters further, if we need to upgrade several smart contracts, users will also have to follow through and update their respective addresses. As you can see, migrating from one contract to another in this manner can be quite cumbersome.
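The 24 KB figure comes from EIP-170's 24,576-byte cap on deployed bytecode. A quick, hedged way to check how close a deployed contract is to that limit (the address and endpoint are placeholders):

```typescript
// check-code-size.ts -- compare deployed bytecode size to the EIP-170 limit
import { ethers } from "ethers";

const MAX_CODE_SIZE = 24_576; // bytes, per EIP-170

async function main() {
  const provider = new ethers.JsonRpcProvider("http://localhost:8545"); // placeholder
  const address = "0x0000000000000000000000000000000000001234";          // placeholder

  const code = await provider.getCode(address);
  const sizeInBytes = (code.length - 2) / 2; // strip "0x", two hex chars per byte

  console.log(`deployed code: ${sizeInBytes} bytes (${MAX_CODE_SIZE} allowed)`);
  if (sizeInBytes > MAX_CODE_SIZE * 0.9) {
    console.warn("within 10% of the limit -- consider splitting or a diamond");
  }
}

main().catch(console.error);
```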

To implement the equities and bonds we were tasked with, we will need to deploy business logic that exceeds 24 kilobytes and should be upgradeable for obvious reasons. If you deploy code and forget about it, you will encounter problems.

Our initial approach was to utilize the patterns and solutions that already exist in the market. The first pattern we considered is a very well-known one, often referred to as the proxy pattern. This solution is well established in the ecosystem and is defined by EIP 1967. Although the specific details of this EIP may not be crucial for our discussion, it is important to note that this pattern addresses the immutability constraint effectively.

=> 00:51:11

Upgrade your smart contracts seamlessly with the proxy and diamond patterns, ensuring flexibility and scalability without disrupting user experience.


The way the proxy pattern works is that we deploy a proxy contract, which is essentially a smart contract with almost no logic. The only functionality this proxy contract has is to delegate its execution to another smart contract, known as the implementation smart contract. This implementation smart contract contains the actual business logic. By doing so, the proxy is borrowing the implementation contract's logic and executing it in its own context. This means that the storage used for the variables will be the proxy's storage. In simpler terms, we are separating the data from the logic.

The delegation functionality is something that the EVM offers, and it is actually very powerful. The big advantage of this pattern is that if we need to upgrade the business logic, we simply have to deploy a new implementation. We then go to the proxy and make it point to the new implementation we just deployed. This process is completely transparent to the user, as the user only knows about the proxy's existence. They continue to interact with the proxy, which now delegates to a different smart contract that has a different business logic, effectively fixing or extending functionality without the user being aware of the changes.
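Because EIP-1967 standardizes where the proxy stores the implementation address, anyone can inspect which logic contract a proxy currently points to. A short hedged sketch (proxy address and endpoint are placeholders):

```typescript
// read-eip1967-implementation.ts -- inspect which logic contract a proxy delegates to
import { ethers } from "ethers";

// keccak256("eip1967.proxy.implementation") - 1, as defined by EIP-1967.
const IMPLEMENTATION_SLOT =
  "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc";

async function main() {
  const provider = new ethers.JsonRpcProvider("http://localhost:8545"); // placeholder
  const proxy = "0x0000000000000000000000000000000000001234";            // placeholder proxy

  const raw = await provider.getStorage(proxy, IMPLEMENTATION_SLOT);
  const implementation = ethers.getAddress("0x" + raw.slice(-40));

  console.log(`proxy ${proxy} currently delegates to ${implementation}`);
}

main().catch(console.error);
```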

The two significant advantages of this approach are that we no longer need to migrate any data because the data is stored in the proxy, which does not change. Additionally, users do not need to upgrade or change the address of the contract they are using; they will continue to interact with the proxy and remain oblivious to the implementation contracts deployed behind the scenes. However, if our logic exceeds 24 KB, we still need multiple contracts, meaning this pattern does not completely solve our needs.

To address this limitation, we can utilize another pattern, the EIP 2535, known as the diamond pattern. This pattern is also well known and serves to help with the second constraint I mentioned earlier: the maximum size limit. If you have understood the proxy pattern I presented before, the diamond pattern can be seen as an extension of the first one.

The way the diamond pattern works is that we deploy a diamond contract, which, like the proxy, is a smart contract with its own storage. It delegates its execution to an implementation smart contract, but the significant difference here is that diamonds can delegate to different implementation contracts, referred to as facets. Depending on the incoming call, the diamond can delegate execution to various facets.

To illustrate, consider a user invoking selector A on the diamond. A selector is like the method ID, and when a user interacts with a diamond, they are calling a specific method identified by this selector. The diamond will then delegate the execution to facet A. If the user invokes selector C, the delegation will be made to facet B. This capability allows us to delegate to multiple smart contracts, not just one, thereby bypassing the restriction of 24 KB. Consequently, we can encapsulate an unlimited amount of business logic within a single diamond contract, far exceeding the previous limitations.
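EIP-2535 also standardizes "loupe" functions that let you ask a diamond, off-chain, which facet will serve a given selector. A hedged sketch of computing a selector and looking it up (the diamond address and function signature are placeholders):

```typescript
// diamond-loupe-lookup.ts -- which facet serves a given function selector?
import { ethers } from "ethers";

const loupeAbi = [
  // Standard EIP-2535 loupe function.
  "function facetAddress(bytes4 _functionSelector) view returns (address)",
];

async function main() {
  const provider = new ethers.JsonRpcProvider("http://localhost:8545"); // placeholder
  const diamond = new ethers.Contract(
    "0x0000000000000000000000000000000000001234", // placeholder diamond address
    loupeAbi,
    provider
  );

  // The "selector" the talk refers to: the first 4 bytes of keccak256 of the signature.
  const iface = new ethers.Interface(["function transfer(address,uint256)"]);
  const selector = iface.getFunction("transfer")!.selector;

  const facet = await diamond.facetAddress(selector);
  console.log(`selector ${selector} is routed to facet ${facet}`);
}

main().catch(console.error);
```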

=> 00:55:46

Streamline your blockchain experience: with a diamond architecture, users only need to remember one address for unlimited business logic and seamless upgrades.


When we need to upgrade some business logic, we will proceed similarly to how we handle proxies. We will change the facet contract address within the diamond, and that’s it. We deploy facet A2, update the address of facet A in the diamond, and it remains completely transparent to the user. As a result, users no longer need to store multiple addresses; they only need to remember the address of the diamond. This gives them access to an unlimited amount of business logic, and any upgrades are completely transparent to them.

Now, if we delve deeper into the inner workings of the diamond, we can see that it functions like a proxy, because it delegates execution. It contains a mapping table that maps input selectors to output facets, along with some logic related to this mapping table. That functionality is divided into two parts: the loupe and the cut.

The loupe functionality encompasses the code required to match an incoming request's selector with its corresponding facet contract; once the facet is determined, the diamond delegates to that facet. The cut functionality, on the other hand, manages the mapping table, including adding new selectors and facets, and is used when upgrading the diamond. It is important to note that the loupe functionality is public and accessible to everyone, while the cut functionality is protected and only available to authorized parties, since they can change the business logic behind the smart contracts.
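The cut side corresponds to the standard `diamondCut` function from EIP-2535, which adds, replaces, or removes selector-to-facet mappings. A hedged sketch of an upgrade that repoints one selector at a newly deployed facet (all addresses are placeholders, and the diamond's owner key is assumed to be available):

```typescript
// diamond-cut-upgrade.ts -- repoint a selector at a new facet (illustrative)
import { ethers } from "ethers";

const cutAbi = [
  "function diamondCut((address facetAddress, uint8 action, bytes4[] functionSelectors)[] _diamondCut, address _init, bytes _calldata)",
];

enum FacetCutAction { Add = 0, Replace = 1, Remove = 2 }

async function main() {
  const provider = new ethers.JsonRpcProvider("http://localhost:8545"); // placeholder
  const owner = new ethers.Wallet(process.env.OWNER_KEY!, provider);     // authorized admin
  const diamond = new ethers.Contract(
    "0x0000000000000000000000000000000000001234",                        // placeholder diamond
    cutAbi,
    owner
  );

  const selector = new ethers.Interface(["function transfer(address,uint256)"])
    .getFunction("transfer")!.selector;

  // One transaction: replace the facet that serves `transfer` with facet A2.
  const tx = await diamond.diamondCut(
    [{
      facetAddress: "0x00000000000000000000000000000000000000a2", // placeholder new facet
      action: FacetCutAction.Replace,
      functionSelectors: [selector],
    }],
    ethers.ZeroAddress, // no initialization call
    "0x"
  );
  await tx.wait();
}

main().catch(console.error);
```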

The two patterns presented so far are quite interesting as they help us bypass the two initial restrictions: immutability and size limit. However, another issue arises when we consider multiple clones. Until now, the discussion has focused on a single proxy with its implementation and one diamond with multiple facets. However, our business requirement involves deploying multiple equities or bonds, where each equity corresponds to one diamond. If we are deploying thousands of equities, we would end up with thousands of diamonds. Upgrading them would then require thousands of transactions, which is obviously impractical.

There are several problems associated with this scenario. For instance, there would be a higher maintenance cost because upgrading the proxies one by one means submitting a transaction for each, leading to multiple transaction fees. Additionally, there is a lack of consistency risk. Without a unified pattern, ensuring consistent behavior and updates across all contracts becomes challenging. It is possible to forget or fail to update some proxies, resulting in some being updated while others are not, which can create significant issues.

Another disadvantage is the increased deployment cost. If each proxy or diamond is individually upgradeable, then each must contain its own upgrade logic, resulting in larger bytecode and higher deployment costs. There are also synchronization challenges; since each item is upgraded individually, some may still point to the old implementation contract until the entire process is completed, leading to a lack of synchronization.

Furthermore, there is reduced transparency. Some proxies could point to different implementations at a given time, making it difficult for users and developers to track the current version. Finally, there are governance complications, as coordinating governance and decision-making becomes more complex in this scenario.

=> 01:00:29

Upgrading multiple smart contracts can be a headache, but the resolver pattern simplifies it by centralizing upgrade logic, making it easier to manage multiple proxies at once.

These drawbacks arise not from the Ethereum Virtual Machine (EVM) itself, but from the business requirement of deploying and maintaining many identical clones.

To address these challenges, there is another pattern known as the beacon pattern, which is commonly used to solve the clone issue mentioned previously. A smart contract called the beacon is deployed to store the address of the implementation contract that proxies should delegate to, and all proxies point to the beacon instead of directly to the implementation. Whenever a proxy receives an incoming call, it queries the beacon to determine which implementation contract to delegate to. As a result, upgrading multiple proxies becomes as simple as updating the beacon contract itself, which allows an unlimited number of proxies to be upgraded simultaneously.
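As a rough Solidity sketch of the beacon idea (names are illustrative; libraries such as OpenZeppelin ship hardened versions of both contracts):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative beacon: holds the single implementation address all proxies share.
contract Beacon {
    address public owner;
    address public implementation;

    constructor(address impl) {
        owner = msg.sender;
        implementation = impl;
    }

    // One transaction here retargets every proxy that points at this beacon.
    function upgradeTo(address newImpl) external {
        require(msg.sender == owner, "not authorized");
        implementation = newImpl;
    }
}

// Illustrative beacon proxy: on every call it asks the beacon where to delegate.
contract BeaconProxy {
    Beacon public immutable beacon;

    constructor(Beacon _beacon) {
        beacon = _beacon;
    }

    fallback() external payable {
        address impl = beacon.implementation();
        assembly {
            calldatacopy(0, 0, calldatasize())
            let ok := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch ok
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

    receive() external payable {}
}
```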

However, the beacon pattern is primarily useful for regular proxies and did not exist for diamonds. Therefore, we developed our own solution called the resolver pattern, which is very similar to the beacon approach. In this case, a contract called the resolver is deployed, and all diamonds point to it. When a diamond receives an incoming call, it asks the resolver which facet must be used for the specific selector. The resolver then returns the facet address, allowing the diamond to delegate to it.

The main difference between the beacon pattern and the resolver pattern is that resolvers return different addresses depending on the selector provided as an argument. The resolver contains the same logic that a regular diamond has, specifically the logic that returns the facet address for a specific selector. When diamonds receive an incoming call, they invoke a method in the resolver that looks up the facet address in a mapping table and returns it, allowing the diamond to delegate to the appropriate facet.
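Here is a hedged sketch of what such a resolver could look like in Solidity; the names are hypothetical, and the audited production code referred to in the talk is certainly more involved.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative resolver: one shared selector => facet table for every diamond.
contract Resolver {
    address public admin;
    mapping(bytes4 => address) internal facets;

    constructor() {
        admin = msg.sender;
    }

    // Diamonds call this on every incoming request to learn where to delegate.
    function resolve(bytes4 selector) external view returns (address) {
        return facets[selector];
    }

    // A single transaction here upgrades the behavior of every diamond at once.
    function setFacet(bytes4[] calldata selectors, address facet) external {
        require(msg.sender == admin, "not authorized");
        for (uint256 i = 0; i < selectors.length; i++) {
            facets[selectors[i]] = facet;
        }
    }
}

// Illustrative "resolver diamond": it keeps no routing table of its own.
contract ResolverDiamond {
    Resolver public immutable resolver;

    constructor(Resolver _resolver) {
        resolver = _resolver;
    }

    fallback() external payable {
        address facet = resolver.resolve(msg.sig);
        require(facet != address(0), "unknown selector");
        assembly {
            calldatacopy(0, 0, calldatasize())
            let ok := delegatecall(gas(), facet, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch ok
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

    receive() external payable {}
}
```

With this shape, deploying a new equity means deploying another small diamond pointed at the shared resolver, and upgrading thousands of them is a single setFacet transaction on the resolver.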

In essence, we have extracted all diamond-related functionality from the diamonds themselves and made it accessible through the resolver. This solution has already been tested and deployed on a hyper network and another Ethereum-compatible network. It applies to all Ethereum-based blockchains and has been audited as part of a larger project.

=> 01:05:28

Centralization can streamline upgrades, but true decentralization is key for a resilient digital future.

The project in question was focused on AED, and it encompassed various components. I apologize for being a bit late to the discussion, so I hope my question hasn’t already been addressed. My inquiry revolves around the centralization aspect of the project. Typically, such initiatives are designed to be completely decentralized. However, as we delve into the details, it becomes clear that the resolver is centralizing certain elements.

This centralization does pose some risks, particularly if the party in control of the diamondCut functionality exercises full control over the business logic behind all the diamonds. Nevertheless, it is important to note that this diamondCut functionality is also part of the diamonds even when they are deployed individually, so the admin of each diamond retains control over the business logic of that specific diamond. The key distinction is that, while there may be a single admin, that admin can in practice be multiple parties; for instance, a multisig setup could be employed.

By centralizing this control in a single location, a significant advantage arises: with just one transaction, all diamonds can be upgraded. The benefits of this approach far outweigh the disadvantages.

Moving on to a less technical aspect, I would like to discuss what Europe is doing in this space, particularly the European Commission, its member states, and the public sector's involvement. I want to introduce you to Europeum, the European Digital Infrastructure Consortium behind EBSI, the European Blockchain Services Infrastructure. I serve as the Belgian representative in this consortium and am involved in numerous projects related to EBSI.

Europe has faced numerous challenges and setbacks in the technology sector compared to the US, and it is determined not to repeat those mistakes. The continent has identified several cutting-edge technologies that will form the foundation of the new digital society of tomorrow. These technologies include quantum computing, microcomputing, and 5G, with blockchain being recognized as a key technology that will underpin other advancements. In this new digital society, the rules will evolve to focus on data spaces and security.

Moreover, the implementation of these technologies must also consider aspects such as participation and sustainability. This perspective emphasizes that the objective is not merely technical; rather, these technologies should serve a higher purpose. This principle is shared by Europe and allied states globally, as they strive for a new digital decade that transcends technicalities.

Despite the struggles that blockchain has faced—especially during the highs of 2017—Europe remains committed to its potential. The vision is to develop a multi-country project because, at its core, Europe is inherently decentralized. A single centralized solution cannot effectively address the complexities of the ecosystem, which comprises numerous member states and organizations across the public and private sectors. Therefore, a decentralized approach is essential, which is why investing in these technologies is a priority.

However, a challenge arises when considering the implementation of decentralized governance models. In Europe, there is a notable issue: to implement something in production, there must be a liable organization responsible for such initiatives. Currently, Europe lacks a definitive answer to this challenge. In response, a new entity has been established, known as the European Digital Infrastructure Consortium. This organization aims to manage decentralized infrastructures, recognizing that no single member state can govern them all. Instead, a collaborative approach among multiple member states is necessary to ensure effective decentralized governance. This situation presents a contradiction, as liability is essential for operationalizing these initiatives, yet it cannot rest with a single organization.

=> 01:09:48

Decentralization in governance is the future, but it needs a solid framework to thrive. Europe is stepping up with a new digital infrastructure that balances innovation and accountability.

Belgium has taken the initiative to explore blockchain applications across different member states, aiming to create a public sector blockchain. While some may argue for a permissioned blockchain that is public sector-driven, the goal is to transform public services by integrating these technologies. This European initiative is not intended to be the only blockchain solution; rather, it serves as a gateway for member states' public services to enter the Web 3.0 space in a secure and controlled manner. Currently, ten countries have engaged in forming this new European digital infrastructure, known as Europeum. The aim is to expand the initiative to include more countries, ideally reaching all 27 member states. Interest has also been expressed by countries such as Canada, Ukraine, Taiwan, and the Nordic states, all seeking to collaborate on this project.

The philosophy behind this project emphasizes decentralization and the need for added value in an increasingly complex world. The public sector requires innovative solutions, as existing registries and services are no longer sufficient. The complexity of interactions necessitates scalable technologies, which is why embracing these advancements is crucial for public services. The project is built on a stable infrastructure, providing a multi-domain trust infrastructure where actors are equal and not hierarchically aligned, fostering a culture of transparency.

Moreover, the initiative aims to create data spaces that facilitate the sharing of valuable information, moving away from centralized silos. Compliance with EU values and regulations is paramount, particularly concerning GDPR and AI regulations, as well as emerging data space regulations. These aspects are central to the project's foundation, which is designed to serve the public sector effectively.

=> 01:14:09

Building a decentralized infrastructure for public services isn't just about technology; it's about aligning with regulations and creating a collaborative ecosystem where the public and private sectors can thrive together.

Of course, there is a real reason why people on the government side of the public sector wanted to create a new infrastructure based on their own rules and governance model, serving the public sector.

Now, what is our overall architecture? It is actually quite simple. The EBSI ledger, or the Besu implementation, is the foundation and base of everything. Built on that, we try to define and implement specifications and frameworks, like the verifiable credentials profile. This is the most difficult part: you can have an infrastructure, but you need to agree on a technical and functional level. How do you deal with those kinds of things? What kind of technical standards do you follow?

What we don't do, because it's not our job, is look at the private sector, which is responsible for the wallet and all the applications provided for business users. We want to be the road, but all the traffic—the cars and everything else—are something we expect from the private sector. This leads to a lot of conversations and discussions. We don't want to provide an end-to-end solution; it's something in layers, a public good that can be used by others.

A classical example is the decentralized identity model of issuers, holders, and verifiers. In that sense, we try to ensure that public services can act as issuers, providing government documents to citizens as holders, and that those documents can be verified without needing to check back with the issuer. Blockchain plays a small but essential role here. This kind of model is also reflected in regulation, such as the European digital identity wallet, which is a classical state-driven solution.
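One way to picture that "small but essential" on-chain role is a registry that verifiers can consult, for instance to check that an issuer is accredited or that a credential has not been revoked. The sketch below is purely illustrative and assumes such a registry; it is not EBSI's actual contract, and no personal data ever goes on chain.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Purely illustrative: an on-chain trust registry a verifier could consult.
// Only issuer accreditation and revocation flags are stored, never documents.
contract TrustRegistry {
    address public registrar; // e.g. the governing public-sector body

    mapping(address => bool) public accreditedIssuer; // issuer => accredited?
    mapping(bytes32 => bool) public revoked;          // credential hash => revoked?

    constructor() {
        registrar = msg.sender;
    }

    modifier onlyRegistrar() {
        require(msg.sender == registrar, "not authorized");
        _;
    }

    // The registrar accredits or removes issuers (public services).
    function setIssuer(address issuer, bool ok) external onlyRegistrar {
        accreditedIssuer[issuer] = ok;
    }

    // Issuers flag a credential identifier (a hash, never the document itself).
    function revoke(bytes32 credentialId) external {
        require(accreditedIssuer[msg.sender], "not an accredited issuer");
        revoked[credentialId] = true;
    }

    // A verifier checks the document off-chain, then consults the registry
    // without ever contacting the issuer directly.
    function isValid(address issuer, bytes32 credentialId) external view returns (bool) {
        return accreditedIssuer[issuer] && !revoked[credentialId];
    }
}
```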

We aim to implement it in a way that is inspired by these concepts. We do this not only for the EUDI (European Digital Identity) wallet, but also because the other big project, the FP project, is inspired by the decentralized concepts associated with Web 3. You can see a movement in Europe, slowly but steadily, embracing decentralized technology concepts.

As mentioned, we need alignment with European regulation. AI and EUDI regulation also play a role in this context. While it sounds easy, it is not simple, because we must comply with a lot of regulations. As you know, Europe places significant weight on regulation, and we need to understand what this means for our infrastructure to be legally compliant: GDPR-proofed, eIDAS-proofed, and so on. Therefore, there are many regulations we need to consider in our governance model.

The project itself has evolved from the commission and is now transferred to the member states, supported by numerous projects that are trying to deploy various services and capabilities. The Digital Europe Program, a funding program, is also looking to see if we can have multiple projects that develop business cases or deploy certain activities. We need to view this as not just one project but a multitude of projects that are related.

The most important ones currently are the large-scale pilots for decentralized identity, or EUDI. To be clear, these include the European Digital Identity Wallet Consortium and Digital Credentials for Europe, which aim to implement the decentralized identity framework together with the European wallet that will be provided. Additionally, many projects, such as FC VOR and FC TW, represent multimillion-dollar efforts to enhance this aspect.

Next to these big projects, we see a lot of attention in the market to explore how we can use this for our specific use cases. There are over 25 projects, mostly focused on the verification of documents, which was one of the initial use cases, but also on traceability, verification of products, and verification of legal entities. Again, it is not our ambition to be the only blockchain or ledger; rather, we want to provide a neutral place for all things related to the public sector to be deployed and used by a broader audience of products.

We also see many wallet providers looking to link their services to platforms such as EBSI, Alastria, and IDunion. There is a multitude of these kinds of integrations, and we aim to ensure easy access for the public sector at a general level. However, I must clarify that I am not a technical expert, so my understanding is hopefully broadly correct.

=> 01:18:45

Building a secure and accessible blockchain infrastructure for public services is just the beginning of transforming how we verify and trace important information across Europe.

Although I am not a technical expert, my understanding is that we operate a Hyperledger-based architecture encompassing an infrastructure layer, nodes, and external capabilities.

At the core of our efforts, we are actively engaging with the I community to explore deployment strategies. We recognize this as a significant undertaking and are assessing the necessary components for successful implementation. Currently, our network consists of the European Commission infrastructure for operations, which includes both private and public environments. We are focusing on three private environments and three public environments for deployment tests, pilot production, and production-grade operations.

As we evolve from the pilot network, we are encountering challenges not only on a technical level but also on a legal level as we transition into production. Gradually, we are moving forward, but we need the EDIC to facilitate this process. It is important to note that this is only the beginning; we anticipate that our applications could potentially serve millions of citizens, with current estimates ranging from 5 million to 12 million users.

In terms of a live map, we are developing a geographic view of the network. We also serve as a node provider, but it is important to clarify that our model is not like Ethereum, where anyone can host a node. Instead, nodes are endorsed collaboratively by the public services of the participating countries. We aim for diverse geographical representation across Europe, and while we do not need thousands of nodes, we maintain a controlled environment.

We are particularly focused on public services and sensitive information, such as data related to social security and education. Although we do not store this information on our chain, it is imperative that we ensure security during testing and information presentation. A significant portion of our resources is dedicated to addressing security aspects, which is a priority for us.

We closely monitor developments within the Hyperledger Foundation and have recently transitioned to a new protocol, as mentioned by Matthew. This transition is significant for us as we strive to implement the latest security measures. It is worth noting that our focus in Europe is primarily on the Hyperledger Besu network, as most use cases are deployed using this technology.

While we have experimented with Hyperledger Fabric, we have found it challenging to deploy at a larger scale due to its complexity. Although it is manageable for a single controller, it becomes cumbersome when deploying across multiple instances. This realization has prompted us to explore the next generation of capabilities and use cases, particularly in the context of high volume and high scalability transactions.

We are currently investigating potential use cases that may require alternative protocols, but for now, we are satisfied with our existing framework. We remain open to exploring new protocols as our network evolves and as new use cases emerge.

=> 01:23:25

The future of digital identity hinges on collaboration between public and private sectors, where infrastructure is built by the latter but governance remains a shared responsibility.

Besu works well for this generation of capabilities and use cases. However, we must consider what the next steps are. What if we have millions of transactions and high-volume, high-scalability needs? In this context, a pre-commercial procurement is being explored to identify what the next generation of solutions could be. Based on this exploration, we are looking at what IOTA offers and how ChromaWay is trying to create a new protocol.

To be honest, at the current moment we don't see use cases that really require something else. We are very happy with Besu and all the developments that have been made, particularly the work done on tokens. Currently, we are largely sticking to these kinds of solutions, but we are open to the possibility that some potential use cases may require other protocols. Our network is, by design, intended to be potentially multiprotocol.

To conclude this brief presentation, we have a lot of technical information available on our website. Our main focus is on identifying what kind of use cases exist, as this constitutes about 80% to 90% of our work. While the infrastructure is nice, it requires a significant amount of effort. This is why we need to reach out to many partners, including both technical and business partners.

We see a lot of interest in areas such as carbon credits, credential sharing, and the notarization of various elements, always with an emphasis on collaboration. A key reflection we must consider is the role of the public sector in this landscape: should governance be managed by the public sector, or is it something society or the private sector will handle? This is a very difficult balance to navigate, especially in relation to larger European projects where we can contribute.

We also see ourselves playing a role in the digital identity landscape in Europe, and in digital product passports: can they be implemented in a way aligned with the Web 3.0 focus? We view it as our task to motivate stakeholders in that direction.

This was a brief overview of my presentation on Europeum. If you have any questions, or if you would like me to elaborate on the less technical aspects, I am happy to help.

To clarify, the idea is that all public services are implemented by private companies. We deploy the infrastructure, but the linking to that infrastructure needs to be managed by the private sector. The Enterprise wallet, as we call it, or the interface, is something we will rely on the private sector to develop. Essentially, we only build the infrastructure, which also provides opportunities for solution providers to create applications quickly.

Regarding the scalability of QBFT, you mentioned that it can scale to 120 validators. Is the idea that every country will be a validator? The answer is no; not every country will be a validator. It is a complex process, and we are working with various stakeholders to determine how many validators we actually need, along with the number of regular nodes and pilot nodes. I believe we will need around 12 to 20 validators, which should be sufficient.

Additionally, it is not necessary to have a proof-of-work system in place. With the proof-of-authority model, there is no mining competition; a known set of authorized validators produces the blocks.

I have two questions regarding the wallet aspect. I believe that the wallet should be considered part of the infrastructure because, as we see where things are going, people are using platforms like Metamask. Applications are becoming infrastructure, encompassing all financial and private details. This allows for the development of applications that are natively GDPR compliant.

I think this is something that should be managed by the private sector, but it is indeed a significant discussion point. We want to clarify that it is not our responsibility to control these aspects, but we recognize the need for effective solutions. The discussion becomes even more complex when considering the digital identity passport and related elements. We want to ensure ownership and control, but this leads to a challenging conversation with our colleagues working on the European digital identity wallet. They view it differently, seeing it as something issued by member states and closely linked to your base identity, such as a Belgium wallet linked with your Belgian identity.

=> 01:28:13

Navigating the future of digital identity is a balancing act between decentralization and control, but the real challenge lies in bridging the gap between technology and policy understanding.

The European digital identity wallet is perceived differently by various stakeholders. Some view it as something issued by member states, tightly linked to one’s base identity. For instance, you would have a Belgium wallet that is connected to your Belgian identity, allowing the state to revoke it if necessary. However, this is not the model we aspire to develop. Currently, we find ourselves in a hybrid status, utilizing both decentralized and centralized components. There is ongoing debate about who will serve as the wallet provider. Should it be an infrastructure claimed by society, or will banking providers offer these wallets? The direction of this development remains unclear, and there is a risk of heading in the wrong direction.

In relation to the decision-making process within the EU, particularly concerning digital identity, there are numerous options available, such as the digital product passport. The commission has indicated that it will be a decentralized solution, yet they have not confirmed that it will be built on blockchain infrastructure. This raises the question of whether we can expect a clear choice to be made in the future. The challenge lies in the fact that the commission and the public sector aim to remain neutral regarding technology. They prefer to stay technology-agnostic, which complicates the development of specific solutions.

Furthermore, many decentralized concepts are essential for implementing a digital product passport. The European digital identity wallet is also tied to decentralization, yet the implementation remains a constant struggle. There is a recognition that the current approach is a hybrid wallet, which is a positive step forward. However, it is anticipated that challenges will persist over the next five years, particularly concerning privacy and control.

The issue is compounded by the fact that policy decision-makers often lack a deep understanding of the consequences of certain technology choices. The eIDAS working group, for instance, is highly technical, yet many policymakers do not grasp the implications of their decisions. There is also a notable presence of lobbyists in this arena, which adds another layer of complexity.

In the latest amendments, there is a specific mention of qualified electronic ledgers, yet regulators and legislators remain hesitant to align themselves with any specific technology, even though the term effectively implies blockchain. The historical context of this evolution is important; the focus was initially on what was written into the regulation, and that has since shifted. Even so, the reluctance to commit to specific technologies remains a challenge.

Despite the ongoing struggle, it is crucial to differentiate between blockchain and crypto, as the latter often carries negative connotations. This is why we refer to decentralized identity rather than blockchain technology. This distinction reflects the reality we face in the current landscape.

Next speaker—thank you very much! We can continue this discussion during the networking drinks and food that will follow.

Hello, I am Migelo, the lead of emerging technologies at 30s, the company I work for, and I am also a board member of Alasa. Right now, I am here as a member of...

=> 01:33:32

Blockchain isn't just crypto; it's a powerful tool for innovation and collaboration in technology.

Following this introduction, the next speaker, Migelo, expresses gratitude and invites further discussion during the networking event. He introduces himself as the lead of emerging technologies at his company, a rapidly growing public company with nearly 2,000 employees. He also serves as a board member of Alastria and has started five companies, including one successful exit.

Migelo elaborates on his role, stating that he leads a group focused on building digital products using emerging technologies, including blockchain and quantum technologies. His team is involved in both research and development, as well as market deployment. One of their notable products is Identify, an open-source wallet compatible with EFC and Asria L chain, which is currently being adapted to Polygon. He highlights their collaboration with the European Commission on some of the largest scale pilot projects in Europe.

He then discusses Alastria, described as one of the largest public permissioned networks globally. Alastria was likely the first of its kind, inspiring blockchain initiatives in the LatAm area and predating many current efforts.

Migelo also mentions Alastria, an association in Spain aimed at fostering a public-private ecosystem around blockchain technology. The association comprises more than five hundred members, with half being corporate and public sector representatives. He notes that members include Telefónica and other corporates, all participating under equal circumstances.

The association has developed two blockchain networks and is working on a standard similar to the C standard, which has been in development for the last five years. This standard is open for collaboration among members, who compete in providing smart contracts and services to customers.

In 2019, they decided to build a public permissioned network, which allows transparency in operations. Currently, they operate two networks: one based on Quorum, built in collaboration with J.P. Morgan Chase and ConsenSys, and another introduced in 2022. They are in the process of merging data from these networks, which is a complex endeavor. As of now, there are more than 90 documented use cases within the network, showcasing the breadth of their initiatives.

=> 01:39:11

Transparency and collaboration are the keys to building innovative networks that drive real-world impact.

At the time, running a permissioned network where everybody can see what is happening was genuinely innovative; the concept is well adopted now, but it was quite new back then.

Right now, we have validator nodes and regular nodes, which provide more information. There are more than 90 use cases documented on the network, spanning a variety of economic sectors, including notarization, evidence registration, identity, and utilities. The association is well represented and actively participates in decision-making forums on blockchain, both in Europe and Latin America.

Indeed, many Spanish participants are involved in European projects, and we have been collaborating actively in fostering projects not only in Spain but also globally. The most recently promoted initiative was the first technical standard on decentralized digital identity worldwide, and we have been actively participating in standardization forums as well. Some of our projects are really significant, and the framework we are developing is open source, available on GitHub, with participation from more than 20 members.

Regarding the networks, we currently have the D Network and the B Network. The D Network is named for its underlying technology, while the B Network was initiated as a Polygon network. We also started a P Network, but that has not progressed as expected. The Hyperledger Fabric was challenging to scale, so we are working on that. In our D Network, we have 191 regular nodes and 6 validator nodes. While we could have more validator nodes, we found that increasing their number negatively impacts the network's performance. We are working on a rotating validator system to maintain the neutrality of the network without affecting performance.

We have deployed the identity model and made progress in our decentralized monitoring system, which we consider crucial. In the B Network, which we restarted in 2022, we currently have 50 regular nodes and 6 validator nodes. There are additional nodes deployed on top of that, and the infrastructure is similar to that of the D Network.

I wanted to provide a screenshot to share the URL and some numbers regarding the network. Typically, we have around 4,000 transactions, although I am unsure what happened today. Members can start nodes and join the networks, which is open to everyone. We have been working on off-chain governance, which is very important to us. There is a link in the document for anyone interested, and it is available in Spanish, with some sections in English.

We have reached a level of maturity in off-chain governance, and we aim to ensure a balance between control and decentralization. We are currently on the path to implementing this in our governance model. Some lessons learned from deploying these networks include the realization that having nodes without a company structure can be challenging.

=> 01:45:06

Balancing control and decentralization is key to effective offchain governance.

Some lessons learned from our deployment include the importance of having identifiable nodes. Entities in this space are never anonymous; identification is essential for operation. While read access to the network is open and nondiscriminatory, you must be a member of Alastria to write on the network.

CI protection is critical for governance to remain open and inclusive. We strive to be as sustainable and friendly as possible. We have also identified the need to revisit the consensus algorithm, especially considering the number of validator nodes we have. I am not sure whether the current QBFT supports a very large number of validators; in our tests, performance decreases significantly as the number of validator nodes grows.

Currently, write access is permissioned, and the validator set is limited to six nodes. However, an issue arises when one validator node crashes or becomes stuck; there is no way to rotate that node out unless we do it manually. This situation can lead to complications, which I will discuss when we are off the record.

We are working on having standby validator nodes operated by different entities, companies, and members of Alastria. This setup allows us to rotate validator nodes regularly, not just keep them as backups. This way, a company or potential attacker cannot predict how long a validator node will remain active.

Additionally, we need to automate the process of putting nodes in quarantine. We have learned that with a large number of companies operating nodes, an automated system is necessary to manage them effectively. We aim to develop a centralized monitoring tool that can isolate a node from the network if it is not providing the proper service.
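To illustrate the rotation and quarantine ideas, here is a hedged Solidity sketch of a validator registry with a standby pool. It assumes a QBFT-style network whose tooling can read the validator set from a contract; it is not Alastria's production mechanism, and all names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative validator registry with a standby pool and periodic rotation.
// Assumes a consensus layer or operator tooling that reads the validator set
// from this contract; it sketches the rotation idea, not a production setup.
contract RotatingValidatorSet {
    address public governance;   // e.g. a multisig of association members
    address[] public active;     // currently signing validators
    address[] public standby;    // operated by other member entities
    uint256 public lastRotation;
    uint256 public constant ROTATION_INTERVAL = 1 days;

    constructor(address[] memory initialActive, address[] memory initialStandby) {
        governance = msg.sender;
        active = initialActive;
        standby = initialStandby;
        lastRotation = block.timestamp;
    }

    // Tooling reads the current signing set from here.
    function getValidators() external view returns (address[] memory) {
        return active;
    }

    // Swap one active validator for one standby validator, at most once per
    // interval, so no operator can predict how long their node stays active.
    function rotate(uint256 activeIndex, uint256 standbyIndex) external {
        require(msg.sender == governance, "not authorized");
        require(block.timestamp >= lastRotation + ROTATION_INTERVAL, "too soon");
        require(activeIndex < active.length && standbyIndex < standby.length, "bad index");

        (active[activeIndex], standby[standbyIndex]) =
            (standby[standbyIndex], active[activeIndex]);
        lastRotation = block.timestamp;
    }

    // Quarantine: drop a misbehaving validator without a replacement.
    function quarantine(uint256 activeIndex) external {
        require(msg.sender == governance, "not authorized");
        require(activeIndex < active.length, "bad index");
        active[activeIndex] = active[active.length - 1];
        active.pop();
    }
}
```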

We usually hold meetings on technology strategy committees, where we discuss various questions and ongoing projects. We are trying to engage with different providers to enhance the network and implement innovative experiments, as we have done in the past. This includes the implementation of monitoring tools and explorers.

In conclusion, if you are interested in joining Alastria, you are welcome to do so, and I encourage you to look for more information. Thank you very much for your attention. Can you please stop the recording so we can proceed with the closure?
