Today we continue with a project that is old wine in a new bottle: Sonic. It has been quite hot recently because its TGE (Token Generation Event) takes place tomorrow, along with an airdrop. Sonic is an L2 on Solana, which surprises many people: why does a chain as fast as Solana still need an L2? Isn't this just another hot narrative the crypto world needs?
So why does Solana need an L2?
As dApp and DeFi activity on Solana accelerates, daily on-chain transactions exceeded 200 million in January 2024, and analysts conservatively estimate that transaction volume will exceed 4 billion by 2026.
Under this foreseeable pressure, Solana's TPS hovers around 2,500-4,000, and the average ping time across Solana clusters fluctuates between 6 and 80 seconds; when TPS approaches saturation or even exceeds 4,000, the success rate of Solana transactions reaches only 70%-85%.
Beyond the physical latency and fluctuations caused by network conditions, the main cause is clearly the increasing saturation of TPS. Extrapolating from Solana's growth trend, TPS is predicted to reach 10,000 or more in the coming years, which will make these performance issues even more pronounced.
The above is the answer given by the Sonic white paper; I would add two points:
1. A Messari report on Ethereum L2s published a few days ago noted that today's L1s are best suited for secure settlement, while applications are better placed on L2s.
2. Sonic currently positions itself as a gaming chain with an emphasis on fully on-chain games. If a game truly has 100,000 or even a million concurrent players, and all of its data lives on-chain, then a few thousand to tens of thousands of TPS genuinely cannot bear the load, so a faster and cheaper L2 is indeed needed.
I. Introduction
Sonic is the first atomic SVM chain (atomic-level interoperability meaning its accounts and programs interoperate directly with Solana), and it is also Solana's fastest L2. Sonic is built on HyperGrid, Solana's first concurrent scaling framework. Through HyperGrid's interpreter, dApps can be easily deployed from EVM chains onto Solana, so Sonic is compatible with both EVM and SVM transactions.
Sonic offers natively composable on-chain game primitives and scalable data types, built on the ECS framework. Its game engine sandbox gives developers practical tools for building business logic on-chain.
II. Rush ECS framework
Rush is a declarative, fast Entity-Component-System (ECS) framework built entirely in Rust. Its sole goal is to minimize the complexity of integrating blockchain technology with the tools developers already know (such as game engine SDKs and APIs) by adopting proven developer-experience abstractions.
Rush envisions a future in which any game, already built or yet to be built, can easily be turned into a Fully On-Chain Game (FOCG) or an Autonomous World (AW) through Rush and the developer's preferred game-development tech stack.
How Rush Works
Typically, game developers use game engines to create games.
With a game engine, the complexity of the underlying logic is greatly reduced: developers can focus on game design and mechanics while the engine handles the rest.
Fully on-chain games (FOCG) and autonomous worlds (AW), on the other hand, must rely on decentralized data storage, i.e., a blockchain.
This decentralization greatly improves data persistence compared with a single centralized repository, but it also comes at an additional cost.
Challenges faced by developers
To build an FOCG or AW, game developers need to master the full blockchain stack or hire blockchain experts.
Whether learning or hiring, this demands significant resources and is often a huge barrier for game developers moving toward fully on-chain games or autonomous worlds.
Rush was born to solve this problem.
It applies proven, efficient developer-experience abstractions, such as:
- Declarative configuration
- Entity-Component-System (ECS)
- Code generation
thereby greatly reducing the complexity developers face.
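To make these abstractions concrete, here is a minimal, self-contained sketch of the ECS pattern itself, written in Rust since Rush is built in Rust. The types and methods (`World`, `Position`, `Velocity`, `movement_system`) are purely illustrative and are not Rush's actual API:

```rust
// Minimal illustration of the Entity-Component-System pattern.
// All names here are illustrative placeholders, not Rush's API.
use std::collections::HashMap;

type Entity = u32;

#[derive(Debug, Clone, Copy)]
struct Position { x: f32, y: f32 }

#[derive(Debug, Clone, Copy)]
struct Velocity { dx: f32, dy: f32 }

#[derive(Default)]
struct World {
    next_id: Entity,
    // Components live in storage keyed by entity, not inside objects.
    positions: HashMap<Entity, Position>,
    velocities: HashMap<Entity, Velocity>,
}

impl World {
    fn spawn(&mut self) -> Entity {
        let id = self.next_id;
        self.next_id += 1;
        id
    }

    // A "system": pure logic that runs over every entity holding
    // both components and advances its position by one tick.
    fn movement_system(&mut self) {
        for (entity, vel) in &self.velocities {
            if let Some(pos) = self.positions.get_mut(entity) {
                pos.x += vel.dx;
                pos.y += vel.dy;
            }
        }
    }
}

fn main() {
    let mut world = World::default();
    let player = world.spawn();
    world.positions.insert(player, Position { x: 0.0, y: 0.0 });
    world.velocities.insert(player, Velocity { dx: 1.0, dy: 0.5 });
    world.movement_system();
    println!("{:?}", world.positions[&player]); // Position { x: 1.0, y: 0.5 }
}
```

The appeal for on-chain games is that entities, components, and systems map naturally onto accounts, data, and programs, which is presumably why frameworks like Rush start from ECS rather than from object-oriented game code.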
III. HyperGrid Framework
The HyperGrid protocol is a rollup scaling and orchestration framework designed to support rollup operators in the SVM ecosystem. It targets potentially unlimited transaction throughput through state compression and Byzantine Fault Tolerance (BFT), achieved by scaling horizontally across multiple grids, as demonstrated by Sonic (a grid focused on gaming); transactions ultimately settle on Solana.
3.1 Architectural Overview
Overview of HyperGrid's multi-grid architecture: Semi-autonomous grids achieve consensus and finality through Solana.
The architecture of HyperGrid is based on a multi-grid model, with each grid operating in a semi-autonomous manner while achieving consensus and finality through the Solana mainnet.
3.2 HyperGrid system architecture
Key components
1. Solana base layer:
- The foundation of the HyperGrid system, providing ultimate consensus and data finality.
2. HyperGrid Shared State Network (HSSN):
- Core of the system, operating across all grids.
- Includes multiple validators (Validator 1 to Validator N).
- Shares state between the grids and the Solana base layer.
- Manages batched zero-knowledge proofs (ZK proofs) for settlement.
3. Grid structure (taking Grid 1 and Grid 2 as examples):
- Each grid represents a semi-autonomous ecosystem that can be dedicated to specific applications (such as different games).
- Components of each grid include (see the sketch after this list):
- ZK co-processor: Handles grid-specific operations and Merkle proofs.
- SVM runtime: Executes grid operations on the Solana virtual machine.
- Sonic Gas engine: Manages computational resources.
- Concurrent Merkle tree generator: Efficiently handles state transitions.
4. User interaction:
- Users can independently interact with each grid.
- Transactions (both SVM and EVM transactions) flow between users and the corresponding grid runtime.
- Transaction responses are returned to users.
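As a rough mental model for item 3, one grid can be pictured as a struct bundling those four components. Everything below is an illustrative placeholder, not actual HyperGrid code:

```rust
// Rough mental model of one semi-autonomous grid, mirroring the
// component list above. All types are illustrative placeholders.
struct ZkCoprocessor;        // handles grid-specific ops and Merkle proofs
struct SvmRuntime;           // executes transactions on the Solana VM
struct SonicGasEngine;       // meters and manages computational resources
struct ConcurrentMerkleTree; // efficiently records state transitions

struct Grid {
    name: String, // e.g., a grid dedicated to one game
    zk_coprocessor: ZkCoprocessor,
    svm_runtime: SvmRuntime,
    gas_engine: SonicGasEngine,
    merkle_tree: ConcurrentMerkleTree,
}

fn main() {
    // Two grids, each dedicated to a specific application,
    // both ultimately settling to the Solana base layer.
    let grids = [
        Grid {
            name: "grid-1 (game A)".into(),
            zk_coprocessor: ZkCoprocessor,
            svm_runtime: SvmRuntime,
            gas_engine: SonicGasEngine,
            merkle_tree: ConcurrentMerkleTree,
        },
        Grid {
            name: "grid-2 (game B)".into(),
            zk_coprocessor: ZkCoprocessor,
            svm_runtime: SvmRuntime,
            gas_engine: SonicGasEngine,
            merkle_tree: ConcurrentMerkleTree,
        },
    ];
    for g in &grids {
        println!("{} runs independently and settles on Solana", g.name);
    }
}
```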
3.3 Data flow
Interoperability and state sharing:
- Bidirectional state sharing between the Solana base layer and HSSN.
- HSSN shares state with each grid.
- State sharing can also occur between different grids.
3.4 ZK proof flow
1. Transactions are compressed and aggregated into Merkle trees.
2. For each block, the corresponding root state hash is submitted.
3. A validity proof is computed on the grid.
4. HSSN uses ZK proofs for settlement and submits them to the Solana base layer.
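To make steps 1-2 concrete, here is a small sketch of how a block's transactions could be hashed and folded into a single root state hash. It uses a plain binary Merkle tree over SHA-256 (the `sha2` and `hex` crates); the real HyperGrid uses concurrent Merkle trees with ZK proofs on top, which this sketch does not attempt:

```rust
// Sketch of the per-block commitment step: hash each transaction,
// then fold the hashes pairwise into one Merkle root that can be
// submitted to the base layer. Plain binary tree for illustration;
// HyperGrid itself uses concurrent Merkle trees.
// Dependencies: sha2 = "0.10", hex = "0.4".
use sha2::{Digest, Sha256};

fn hash(data: &[u8]) -> [u8; 32] {
    Sha256::digest(data).into()
}

fn merkle_root(mut level: Vec<[u8; 32]>) -> [u8; 32] {
    assert!(!level.is_empty(), "a block must contain transactions");
    while level.len() > 1 {
        // Duplicate the last node when a level has odd length.
        if level.len() % 2 == 1 {
            level.push(*level.last().unwrap());
        }
        level = level
            .chunks(2)
            .map(|pair| {
                let mut combined = [0u8; 64];
                combined[..32].copy_from_slice(&pair[0]);
                combined[32..].copy_from_slice(&pair[1]);
                hash(&combined)
            })
            .collect();
    }
    level[0]
}

fn main() {
    let txs: [&[u8]; 3] = [b"tx1", b"tx2", b"tx3"];
    let leaves: Vec<[u8; 32]> = txs.iter().map(|tx| hash(tx)).collect();
    // This root is what would be committed per block (step 2).
    println!("root state hash: {}", hex::encode(merkle_root(leaves)));
}
```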
IV. Grid and network communication
The network architecture of HyperGrid: The relationship between the shared state network, grid instances, and the Solana base layer provides support for scalable dApp deployment.
Data flow in the HyperGrid Shared State Network
The HyperGrid framework is designed to support a wide range of application-specific networks, i.e., decentralized applications (dApps), with a particular focus on high-demand applications within its ecosystem, such as games, DeFi, and AI agents.
The goals of this architecture include:
1. Alleviate performance pressure on the Solana base layer.
2. Minimize performance conflicts and competition caused by block space contention between the base layer and various domain-specific dApps.
Key features
1. Flexibility, giving grid network creators a choice:
- Developers can choose between:
- Using the HyperGrid public network.
- Scaling horizontally by creating a dedicated network for their specific needs.
2. Performance and cost optimization:
- The choice between public networks and dedicated networks depends on the developer's assessment of performance needs and related costs.
3. Network independence:
- Developers can disable their dedicated networks at any time without affecting other networks in the ecosystem.
Operational framework
1. Verification:
- Each network independently verifies its own transactions and state changes.
2. Logging:
- Each network independently maintains its own transaction and state-change logs.
3. Data retrieval:
- Data retrieval is performed independently within each network.
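As a rough illustration of this independence, each network can be modeled as an object that verifies, logs, and serves its own data without touching its neighbors. The trait and type names below are assumptions for illustration, not real HyperGrid interfaces:

```rust
// Hypothetical model of the operational framework: every grid
// network implements its own verification, logging, and retrieval.
struct Transaction {
    id: u64,
    payload: Vec<u8>,
}

struct StateChange {
    tx_id: u64,
    description: String,
}

trait GridNetwork {
    /// 1. Verification: each network checks its own transactions.
    fn verify(&self, tx: &Transaction) -> Result<(), String>;
    /// 2. Logging: each network keeps its own state-change log.
    fn log(&mut self, change: StateChange);
    /// 3. Data retrieval: served locally, not from other grids.
    fn retrieve(&self, tx_id: u64) -> Option<&StateChange>;
}

struct GameGrid {
    journal: Vec<StateChange>,
}

impl GridNetwork for GameGrid {
    fn verify(&self, tx: &Transaction) -> Result<(), String> {
        if tx.payload.is_empty() {
            Err(format!("transaction {} has no payload", tx.id))
        } else {
            Ok(())
        }
    }

    fn log(&mut self, change: StateChange) {
        self.journal.push(change);
    }

    fn retrieve(&self, tx_id: u64) -> Option<&StateChange> {
        self.journal.iter().find(|c| c.tx_id == tx_id)
    }
}

fn main() {
    let mut grid = GameGrid { journal: Vec::new() };
    let tx = Transaction { id: 1, payload: b"move player".to_vec() };
    assert!(grid.verify(&tx).is_ok());
    grid.log(StateChange { tx_id: tx.id, description: "player moved".into() });
    assert!(grid.retrieve(1).is_some());
}
```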
V. Interoperability with Solana
5.1 Reading data from Solana into HyperGrid
State synchronization from Solana to a grid on HyperGrid (e.g., Sonic) proceeds as follows:
1. Initial loading: pre-existing Solana programs are loaded from storage into HyperGrid's cache.
2. A user sends a read request for a specific program to HyperGrid's Sonic RPC.
3. The synchronization program checks the cache for the requested program and does not find it.
4. The synchronization program requests the program from the Solana base layer RPC.
5. The Solana base layer responds with the program data.
6. The synchronization program receives the response and updates HyperGrid's cache with the new program data.
7. The synchronization program sends the read response back to the Sonic RPC.
8. The Sonic RPC forwards the read response to the user.
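In caching terms, this read path is a classic read-through cache sitting in front of the base-layer RPC. Below is a minimal sketch; `BaseLayerRpc`, `fetch_program`, and the other names are invented for illustration and are not the actual Sonic RPC API:

```rust
// Read path of section 5.1 as a read-through cache: check the
// HyperGrid cache, fall back to the Solana base layer on a miss,
// then cache the result. All names are illustrative assumptions.
use std::collections::HashMap;

type ProgramId = String;
type ProgramData = Vec<u8>;

trait BaseLayerRpc {
    /// Fetch program data from the Solana base layer (steps 4-5).
    fn fetch_program(&self, id: &ProgramId) -> Option<ProgramData>;
}

struct SyncProgram<R: BaseLayerRpc> {
    cache: HashMap<ProgramId, ProgramData>,
    base_layer: R,
}

impl<R: BaseLayerRpc> SyncProgram<R> {
    /// Handle a read request arriving from the Sonic RPC (step 2).
    fn read(&mut self, id: &ProgramId) -> Option<ProgramData> {
        // Step 3: check the HyperGrid cache first.
        if let Some(data) = self.cache.get(id) {
            return Some(data.clone());
        }
        // Steps 4-5: cache miss, so ask the base layer RPC.
        let data = self.base_layer.fetch_program(id)?;
        // Step 6: update the cache with the fresh program data.
        self.cache.insert(id.clone(), data.clone());
        // Steps 7-8: return the read response toward the user.
        Some(data)
    }
}

// A mock base layer so the sketch runs end to end.
struct MockBaseLayer;

impl BaseLayerRpc for MockBaseLayer {
    fn fetch_program(&self, id: &ProgramId) -> Option<ProgramData> {
        Some(format!("bytecode of {id}").into_bytes())
    }
}

fn main() {
    let mut sync = SyncProgram { cache: HashMap::new(), base_layer: MockBaseLayer };
    let id = "token_program".to_string();
    assert!(sync.read(&id).is_some());     // first read: fetched from base layer
    assert!(sync.cache.contains_key(&id)); // now cached for subsequent reads
}
```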
5.2 Writing updates back to the Solana base layer
State synchronization from a grid on HyperGrid (e.g., Sonic) back to Solana proceeds as follows:
1. Initial loading: pre-existing programs are loaded from storage into HyperGrid's cache.
2. A user sends a write request for a specific program to HyperGrid's Sonic RPC.
3. The synchronization program checks the cache for the requested program and does not find it.
4. The synchronization program sends a request to lock the program on the Solana base layer.
5. The Solana base layer RPC locks the requested program.
6. The Solana base layer responds with the program data.
7. The synchronization program receives the response and updates HyperGrid's cache with the new program data.
8. The synchronization program sends a request to release the lock and writes the updated program data to the Solana base layer.
9. The Solana base layer RPC releases the lock and writes the updated data.
10. The synchronization program sends the write response back to the Sonic RPC.
11. The Sonic RPC forwards the write response to the user.
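The write path wraps the same cache in a lock-modify-release cycle so the base layer stays authoritative. Another hedged sketch with invented names (`LockingBaseLayerRpc`, `lock_program`, `write_and_release`):

```rust
// Write path of section 5.2: lock the program on the base layer,
// apply the update, refresh the cache, then write back and release.
// All names are illustrative assumptions, not the real API.
use std::collections::HashMap;

trait LockingBaseLayerRpc {
    /// Steps 4-6: lock the program and return its current data.
    fn lock_program(&mut self, id: &str) -> Result<Vec<u8>, String>;
    /// Steps 8-9: write the updated data and release the lock.
    fn write_and_release(&mut self, id: &str, data: Vec<u8>) -> Result<(), String>;
}

fn write_through(
    cache: &mut HashMap<String, Vec<u8>>,
    base_layer: &mut impl LockingBaseLayerRpc,
    id: &str,
    update: impl Fn(&mut Vec<u8>),
) -> Result<(), String> {
    // Lock on the base layer and pull the current program data.
    let mut data = base_layer.lock_program(id)?;
    // Apply the grid-side update and refresh the HyperGrid cache (step 7).
    update(&mut data);
    cache.insert(id.to_string(), data.clone());
    // Write back and release the lock (steps 8-9).
    base_layer.write_and_release(id, data)
}

// In-memory mock so the sketch runs end to end.
struct MockBaseLayer {
    programs: HashMap<String, Vec<u8>>,
    locked: Option<String>,
}

impl LockingBaseLayerRpc for MockBaseLayer {
    fn lock_program(&mut self, id: &str) -> Result<Vec<u8>, String> {
        self.locked = Some(id.to_string());
        self.programs.get(id).cloned().ok_or_else(|| format!("unknown program {id}"))
    }

    fn write_and_release(&mut self, id: &str, data: Vec<u8>) -> Result<(), String> {
        self.programs.insert(id.to_string(), data);
        self.locked = None;
        Ok(())
    }
}

fn main() {
    let mut base = MockBaseLayer {
        programs: HashMap::from([("game_state".to_string(), vec![0u8])]),
        locked: None,
    };
    let mut cache = HashMap::new();
    write_through(&mut cache, &mut base, "game_state", |data| data.push(1)).unwrap();
    assert_eq!(base.programs["game_state"], vec![0, 1]); // update settled on the base layer
}
```

Locking on the base layer presumably prevents two grids from writing back conflicting versions of the same program state, since the authoritative copy lives on Solana.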
VI. HyperGrid Shared State Network (HSSN)
The HyperGrid Shared State Network (HSSN) is a key component of the Grid ecosystem. It serves as the consensus layer, communication hub, and state-management cluster, mediating interaction between the grids and the Solana base layer. It manages the state of all communications, including regularly synchronizing block data from grid rollups to the Solana base layer.
Key components and functions
HSSN architecture: Built on the Cosmos framework, ensuring reliability and security for cross-chain communication.
Data structure: manages the state between the grids and the Solana base layer, including:
- Grid registration
- Communication data source
- Version control
- Read/write state
Expanded account data fields: the native Solana base layer account data fields are extended with new fields for HSSN management, keeping them synchronized with grid account state.
Refactored grid RPC: enables direct communication between grids and the HSSN, facilitating interoperability within the ecosystem.
Gas refueling and distribution mechanism: users pay gas fees for certain grid requests, and a dedicated grid (the Sonic Grid) runs the gas computation program to centrally manage gas across the entire grid ecosystem.
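To visualize the data structure described above, here is one hypothetical shape for an HSSN state entry; the fields mirror the bullet list (registration, data source, versioning, read/write state), but the schema itself is an assumption for illustration:

```rust
// Hypothetical HSSN state entry mirroring the fields listed above.
// Not the actual HSSN schema; for illustration only.
enum AccessState {
    ReadOnly,
    ReadWrite,
    Locked, // e.g., held while a write is settling to the base layer
}

struct GridRegistration {
    grid_id: String,
    /// Endpoint the HSSN treats as this grid's communication data source.
    data_source: String,
    /// Version of the shared state last synchronized with this grid.
    state_version: u64,
    access: AccessState,
}

fn main() {
    let entry = GridRegistration {
        grid_id: "sonic-grid".to_string(),
        data_source: "https://rpc.example-grid.invalid".to_string(), // placeholder URL
        state_version: 42,
        access: AccessState::ReadWrite,
    };
    println!("grid {} at state version {}", entry.grid_id, entry.state_version);
}
```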
Project financing
Sonic completed a $12 million Series A round led by Bitkraft Ventures, with participation from Galaxy Interactive, BigBrain Holdings, and others, at a current valuation of $120 million.
TGE:
The total supply of SONIC is 2.4 billion, of which 57% is allocated to the community, covering community and ecosystem development (30%), initial claims (7%), and HyperGrid rewards (20%). The TGE is scheduled for January 7, 2025, with an initial circulating supply of 15% of the total. SONIC will be fully distributed within six years, at which point all tokens will be in circulation, with the majority allocated to the community. No team or investor tokens unlock in the first 12-18 months after launch, and locked tokens cannot be staked.
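A quick arithmetic check of those figures, with the three community buckets summing to 57% and the 15% initial float on a 2.4 billion supply:

```rust
// Sanity-checking the published allocation numbers.
fn main() {
    const TOTAL_SUPPLY: u64 = 2_400_000_000;

    // Community & ecosystem (30%) + initial claims (7%) + HyperGrid rewards (20%).
    let community_bps: u64 = 3000 + 700 + 2000; // in basis points
    assert_eq!(community_bps, 5700); // = 57%

    println!("community allocation: {} SONIC", TOTAL_SUPPLY * community_bps / 10_000); // 1,368,000,000
    println!("initial circulation:  {} SONIC", TOTAL_SUPPLY * 1500 / 10_000);          // 360,000,000
}
```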
To sum up this project: Solana has begun to spawn L2s, which suggests that even today's fast chains have real L2 demand. By the same reasoning, AVAX's multi-chain architecture turns out to be useful after all, and Solana L2s may well rise in popularity.