Dragon's Mouth gRPC Subscriptions

Streaming Account Updates for Backend Applications.

Dragon's Mouth is our Geyser-fed gRPC interface that supports streaming:

  • Account Writes

  • Transactions

  • Entries

  • Block notifications

  • Slot notifications

It also supports unary operations:

  • getLatestBlockhash

  • getBlockHeight

  • getSlot

  • isValidBlockhash

The gRPC streams and RPC calls are supported through Solana's Geyser interface. This is the fastest way to receive updates on on-chain events. This interface is more stable and faster than the traditional WebSocket interface. We recommend using gRPC for all future development of backend clients.

Dragon's Mouth also streams transactions as they are processed in real-time. You will receive multiple account updates within the current slot. This contrasts with regular RPC, where you receive only one update at the end of the slot. For DeFi traders, Dragon's Mouth can give you up to a 400ms advantage over other traders!

Use Dragon's Mouth to stream data directly to your application middle-layer hosted on a cloud service provider. Update your backend database with the lowest possible latency.

gRPC is unsupported by web browsers, so Dragon's Mouth is entirely targeted at backend software. Another Yellowstone project, Whirligig, provides a WebSocket interface to replace the current Solana WebSocket implementation.

Protocol files

You can find the latest version of protobuf files in the repository https://github.com/rpcpool/yellowstone-grpc/tree/master/yellowstone-grpc-proto/proto or use Rust crate https://crates.io/crates/yellowstone-grpc-proto.

Clients

We offer sample clients in multiple languages, and you can also use the generic grpcurl client to test the interface. As the underlying gRPC proto can change, it is essential to test with clients matching the current version of the Solana/gRPC interface.

grpcurl

grpcurl is a good client for testing. You will also need the two Protobuf files that describe the protocol, geyser.proto and its dependency solana-storage.proto, from the repository above.

Example subscription:

./grpcurl \
  -proto geyser.proto \
  -d '{"slots": { "slots": { } }, "accounts": { "usdc": { "account": ["9wFFyRfZBsuAha4YcuxcXLKwMxJR43S7fPfQLusDBzvT"] } }, "transactions": {}, "blocks": {}, "blocks_meta": {}}' \
  -H "x-token: <token>" \
  api.rpcpool.com:443 \
  geyser.Geyser/Subscribe

Customers should specify their endpoint and token in the example above. Developers looking to run their own RPC nodes can test against their own Solana instances; just remove the x-token header, as it is likely not relevant in that case.

Rust

A sample Rust client is available at https://github.com/rpcpool/yellowstone-grpc/tree/master/examples/rust.

Golang

A sample Golang client is available at https://github.com/rpcpool/yellowstone-grpc/tree/master/examples/golang.

NodeJS/TypeScript

You can include NodeJS Yellowstone gRPC client as a dependency by running the following command:

npm install --save @triton-one/yellowstone-grpc

# or, for yarn:

yarn add @triton-one/yellowstone-grpc

A sample TypeScript/NodeJS client is available at https://github.com/rpcpool/yellowstone-grpc/tree/master/examples/typescript. You can also switch the language of code samples to TypeScript in the following documentation.

Initializing the client

Once you have installed the client dependency, you can initialize it as follows:

import Client from "@triton-one/yellowstone-grpc";

const client = new Client("https://api.rpcpool.com:443", "<insert your token here>");

// now you can call the client methods, e.g.:

const version = await client.getVersion(); // gets the version information
console.log(version);

Please note that the client is asynchronous, so it is expected that all calls are executed inside an async block or async function.

Subscription streams

You can get updates and send requests through the subscription stream. You can create it by calling the client.subscribe() method:

import { SubscribeRequest } from "@triton-one/yellowstone-grpc";

// Create a subscription stream.
const stream = client.subscribe();

// Collecting all incoming events.
stream.on("data", (data) => {
  console.log("data", data);
});

// Create a subscription request.
const request: SubscribeRequest = {
  // you can use the standard JSON request format here.
  // the following documentation describes available requests and filters.
  ...
};

// Sending a subscription request.
await new Promise<void>((resolve, reject) => {
  stream.write(request, (err) => {
    if (err === null || err === undefined) {
      resolve();
    } else {
      reject(err);
    }
  });
}).catch((reason) => {
  console.error(reason);
  throw reason;
});

Sizing recommendations

Benchmarks: Account Updates + Transactions

The throughput of Dragon's Mouth depends heavily on the current traffic of the blockchain, so it is hard to quantify the actual throughput that Geyser can provide. Running benchmarks directly on an RPC node, we tend to observe the following metrics:

  1. Every 6-8 seconds, about 100,000 Geyser events are produced, whether account updates or new transactions.

  2. The average account update "true" memory size is 8-10 KB.

  3. The average transaction "true" memory size is about 2.5 KB.

  4. Every batch of 100,000 Geyser events has an average cumulative "true" memory size of 750 MB to 1 GB.

  5. The "shallow size" of every Geyser event is 536 bytes on Solana 1.18. It is called the "shallow size" because a SubscribeUpdate event holds little data directly; it mostly holds pointers to variable-size data.

Operator recommendations

When configuring the Geyser plugin at the RPC node level, make sure to size channel_capacity correctly.

The channel_capacity represents the buffer capacity used when a client connects to Dragon's Mouth. This buffer holds "shallow" Geyser events. We use the term "shallow" because such an event contains hardly any data itself, mostly pointers to variable-size data. The "true" size of an event is typically several multiples of the shallow size; please refer to the benchmarks above.

Please note: the channel_capacity directly dictates how much RAM is allocated when a new client establishes a connection with Dragon's Mouth.

If you pick a channel_capacity of 1_000_000 (one million), the runtime will allocate 536 bytes × 1,000,000 = 536 MB.

This amount of RAM is pre-allocated right away and is fixed. Obviously, as data comes in, dynamic memory allocations must be made to accommodate the variable-size data each Geyser event needs. As data is sent to the remote consumer, these dynamic allocations are freed. However, if a client is too far away from our RPC node or processes its events too slowly, this can cause drastic memory consumption at the server level if we are too generous with the channel_capacity.

As stated before, the true size of a geyser event is much bigger than its shallow size.

Let's say someone subscribes to every account update and new transaction happening in real time, and suppose the channel_capacity is very generous, set to 1_000_000.

If every 100,000 account updates/new transactions cumulate to about 1 GB, then a single slow client could cause RAM allocations of up to roughly 10 GB.

If many slow clients try to connect at the same time, this could cause a server failure, as the Geyser process may resort to swap memory, which would drastically reduce performance.

gRPC guarantee and back-pressure

Fortunately, gRPC already has a built-in backpressure mechanism. The protocol waits for the client's acknowledgment before sending the next batch of data, which helps avoid wasting CPU and I/O resources on clients that are too busy.

However, there's a downside. If a client accumulates too much lag, Dragon's Mouth has a safeguard in place that disconnects lagging clients. This is intentional and necessary to prevent potential issues. Clients need to be aware that if they can't keep up with the data they requested, they will eventually be disconnected.

What channel_capacity should we use?

The reality is that we likely already have slow-consuming clients, and reducing the channel_capacity could quickly expose them.

If we currently have a high channel_capacity, we should consider gradually lowering it to a more manageable level.

There's no absolute right or wrong choice here—only trade-offs. The fact is that Geyser is designed for fast consumers that are geographically close to us. Clients could also apply more filters to reduce incoming traffic, allowing them to keep up the pace.

From an operational perspective, a smaller channel_capacity allows us to handle more concurrent clients and improve reliability. However, this comes at the cost of requiring clients to apply more filtering. Another option for clients is to open multiple connections with different filter sets to different RPC nodes, which could help them manage ingestion rates more effectively.

Example Subscribe Requests

Here are examples of subscribe requests you can make to the gRPC interface.

Subscribe to an account

{"slots": { "slots": {} }, "accounts": { "wsol/usdc": { "account": ["8BnEgHoWFysVcuFFX7QztDmzuH8r5ZFvyP3sYwn1XTh6"] } }, "transactions": {}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": [], "commitment": 1}

This sample subscribes to the SOL-USDC OpenBook account on confirmed commitment level. In the example above, "wsol/usdc" is a client-assigned label. You can specify different JSON files to subscribe to different items. You can combine any of these variables below into a JSON to receive a combination of program, account, block, and slot updates.

Subscribe to an account with `accounts_data_slice`

{
    "accounts": {
        "usdc": {
            "owner": ["TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"],
            "filters": [{
                "token_account_state": true
            }, {
                "memcmp": {
                    "offset": 0,
                    "data": {
                        "base58": "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v"
                    }
                }
            }]
        }
    },
    "accounts_data_slice": [{ "offset": 32, "length": 40 }]
}

This sample subscribes to the USDC Tokenkeg accounts. With accounts_data_slice, instead of receiving all 165 bytes of account data we receive only 40 bytes (offset 32 with length 40 gives us the owner and the token amount).

Subscribe to a program

{"slots": { "slots": {} }, "accounts": { "solend": {  "owner": ["So1endDq2YkqhipRh3WViPa8hdiSpxWy6z3Z6tMCpAo"] } }, "transactions": {}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": [], "commitment": 0}

Subscribe to multiple programs

{"slots": { "slots": {} }, "accounts": { "programs": {  "owner": [ "So1endDq2YkqhipRh3WViPa8hdiSpxWy6z3Z6tMCpAo", "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin"] } }, "transactions": {}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

OR, if you want different tags for different program updates:

{"slots": { "slots": {} }, "accounts": { "solend": {  "owner":  ["So1endDq2YkqhipRh3WViPa8hdiSpxWy6z3Z6tMCpAo"] }, "serum": { "owner": ["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin"] } }, "transactions": {}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to all finalized non-vote and non-failed transactions

{"slots": { "slots": {} }, "accounts": {}, "transactions": { "alltxs": { "vote": false, "failed": false }}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": [], "commitment": 2}

For transactions, if all fields are empty, then all transactions are broadcast. Otherwise, fields work as a logical AND, and values in arrays as a logical OR. You can include/exclude vote transactions and include/exclude failed transactions.
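The AND/OR semantics above can be sketched as a small client-side predicate. This is an illustrative model only, not part of the Yellowstone API; the type and function names are ours:

```typescript
// Illustrative model of the transaction filter semantics: unset fields
// match everything, set fields combine as AND, and the values inside an
// account array combine as OR.
type Tx = { isVote: boolean; failed: boolean; accounts: string[] };
type TxFilter = {
  vote?: boolean;
  failed?: boolean;
  accountInclude?: string[];
};

function matchesFilter(tx: Tx, f: TxFilter): boolean {
  if (f.vote !== undefined && tx.isVote !== f.vote) return false;     // AND
  if (f.failed !== undefined && tx.failed !== f.failed) return false; // AND
  // Within account_include, any single match is enough (OR).
  if (
    f.accountInclude !== undefined &&
    !f.accountInclude.some((a) => tx.accounts.includes(a))
  ) {
    return false;
  }
  return true;
}
```

For example, a filter of `{ vote: false, accountInclude: ["A", "B"] }` matches a non-vote transaction mentioning either A or B, but not a vote transaction mentioning both.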

Subscribe to non-vote transactions mentioning an account

{"slots": { "slots": {} }, "accounts": {}, "transactions": { "serum": { "vote": false, "account_include": [ "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin" ]}}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to transactions excluding accounts

{"slots": { "slots": {} }, "accounts": {}, "transactions": { "serum": { "account_exclude": [ "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin", "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA" ]}}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to transactions mentioning accounts & excluding certain accounts

{"slots": { "slots": {} }, "accounts": {}, "transactions": { "serum": { "account_include": [ "9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin" ], "account_exclude": [ "9wFFyRfZBsuAha4YcuxcXLKwMxJR43S7fPfQLusDBzvT" ] }}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to a transaction signature

You can subscribe to an individual transaction signature, which will provide updates as the signature is confirmed and finalized.

{"slots": {}, "accounts": {}, "transactions": { "sign": { "signature": "5rp2hL9b6kexex11Mugfs3vfU9GhieKruj4CkFFSnu52WLxiGn4VcLLwsB62XURhMmT1j4CZiXT6FFtYbXsLq2Zs"}}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to slots

You do not need to provide further details to subscribe to slot notifications. All you need to provide is a label under which the slot updates will be tagged.

{"slots": { "incoming_slots": {} }, "accounts": {}, "transactions": {}, "blocks": {}, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to blocks

This will return all the blocks as they are produced. It will send blocks along with the transactions:

{"slots": {}, "accounts": { }, "transactions": {}, "blocks": { "blocks": {} }, "blocks_meta": {}, "accounts_data_slice": []}

By default the Block message includes all transactions, but you can exclude them or include updated accounts:

{"slots": {}, "accounts": { }, "transactions": {}, "blocks": { "blocks": {"include_transactions": false, "include_accounts": true} }, "blocks_meta": {}, "accounts_data_slice": []}

If you are interested only in transactions/accounts in which any of the specified accounts are mentioned, you can use a special filter:

{"slots": {}, "accounts": { }, "transactions": {}, "blocks": { "blocks": {"account_include": ["So1endDq2YkqhipRh3WViPa8hdiSpxWy6z3Z6tMCpAo"]} }, "blocks_meta": {}, "accounts_data_slice": []}

Subscribe to block metadata

If you want to subscribe just to notifications as blocks are processed without receiving all the transactions, then you can use the block meta subscription:

{"slots": {}, "accounts": {}, "transactions": {}, "blocks": {}, "blocks_meta": { "blockmetadata": {} }, "accounts_data_slice": []}

Modifying subscription

The Subscribe method offers a bi-directional stream, so you can modify the subscription by simply submitting your newly updated subscription string, and you will start receiving updates on your modified filters.

This will entirely overwrite the previous subscription, so ensure your client maintains a local register of the entire subscription config you are interested in.
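One way to maintain that local register is sketched below. The type and helper names are ours, not the client API; the point is that every change mutates a local copy and resubmits the entire request, since each write replaces the previous subscription wholesale:

```typescript
// Local register of the full subscription config. Field names mirror the
// SubscribeRequest JSON shown earlier (illustrative, not the exact client types).
type LocalSubscription = {
  slots: Record<string, object>;
  accounts: Record<string, object>;
  transactions: Record<string, object>;
  blocks: Record<string, object>;
  blocksMeta: Record<string, object>;
};

const current: LocalSubscription = {
  slots: {},
  accounts: {},
  transactions: {},
  blocks: {},
  blocksMeta: {},
};

// Add an account filter locally, then return the FULL request so the caller
// can write it back to the stream (e.g. stream.write(addAccountFilter(...))).
function addAccountFilter(label: string, keys: string[]): LocalSubscription {
  current.accounts[label] = { account: keys };
  return { ...current };
}
```

Because the server forgets the old filters on every write, forgetting to resend an existing filter silently unsubscribes it.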

Unsubscribing

If you want to unsubscribe from all streams, send the following request:

{"slots": {}, "accounts": {}, "transactions": {}, "blocks": {}, "blocks_meta": {}}

This will clear all current subscriptions but keep the connection open for future subscriptions.

Managing commitment levels

The gRPC streams happen by default on the processed commitment level.

We also support specifying confirmed and finalized commitment levels. In these cases, Dragon's Mouth will buffer the incoming updates for you and release them once the updates have become confirmed or finalized.

For maximum performance, however, we recommend handling commitment levels client side.

To specify commitment level in your Dragon's Mouth gRPC calls provide the following values:

enum CommitmentLevel {
  PROCESSED = 0;
  CONFIRMED = 1;
  FINALIZED = 2;
}

Benefits of working at processed

The benefit of working at processed is that you can process transactions as soon as they arrive, but only commit to them once you know whether they are confirmed or finalized. This means you can get faster response times in your UI by doing much of the processing work at a lower commitment level, and then surface the changes as soon as you see that the event is committed.

How to manage `confirmed` and `finalized`

To manage confirmed and finalized you need to buffer events by slot. Each event (transaction or account write) will have a slot attached to it. You store these events in a buffer ordered by slot.

You then also make sure you subscribe to slot notifications. This will give you information about when a slot is confirmed or finalized. Depending on the commitment level you are interested in, you should release your buffer when you receive the slot notification for a particular slot at a particular commitment level.

You will receive all the transaction notifications or account write notifications for the slot before you receive the "confirmed" and "finalized" notification for that slot.
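The buffering described above can be sketched as follows. The class and field names are illustrative, not part of the Yellowstone client; you would push events from the data stream and call release when a slot notification reaches your desired commitment level:

```typescript
// Buffer Geyser events per slot and release them only once the matching
// slot notification reaches the desired commitment level.
type GeyserEvent = { slot: number; payload: unknown };

class SlotBuffer {
  private buffers = new Map<number, GeyserEvent[]>();

  // Store an incoming transaction or account-write event under its slot.
  push(event: GeyserEvent): void {
    const list = this.buffers.get(event.slot) ?? [];
    list.push(event);
    this.buffers.set(event.slot, list);
  }

  // On a slot notification at the desired commitment level, hand back
  // (and drop) everything buffered for that slot.
  release(slot: number): GeyserEvent[] {
    const events = this.buffers.get(slot) ?? [];
    this.buffers.delete(slot);
    return events;
  }
}
```

Since all transaction and account-write notifications for a slot arrive before that slot's "confirmed"/"finalized" notification, releasing on the slot notification is safe.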

The special thing about finalized

Unfortunately, due to a quirk in the way Geyser works on Solana (fixed in Solana's master branch), not every finalized slot notification is issued. This means you need some special processing if you want to handle finalized correctly.

The special handling is the following: whenever you see a finalized slot notification, you need to check the parents, grandparents (and great-grandparents, and so on) of that slot and mark those as finalized too, even if you did not receive a notification for them.
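That ancestor walk can be sketched like this. The class is ours for illustration; it assumes you record the parent link carried by each slot notification as it arrives:

```typescript
// Track slot parent links from slot notifications. When a slot is reported
// finalized, mark it and every known ancestor as finalized too, since their
// own finalized notifications may never arrive.
class FinalizedTracker {
  private parents = new Map<number, number>();
  readonly finalized = new Set<number>();

  // Record the parent link from a slot notification.
  observeSlot(slot: number, parent: number): void {
    this.parents.set(slot, parent);
  }

  // Mark this slot and walk up the parent chain, marking each ancestor,
  // stopping at already-finalized slots or unknown parents.
  markFinalized(slot: number): void {
    let current: number | undefined = slot;
    while (current !== undefined && !this.finalized.has(current)) {
      this.finalized.add(current);
      current = this.parents.get(current);
    }
  }
}
```

Stopping the walk at already-finalized slots keeps the cost proportional to the gap since the last finalized notification rather than the full chain.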
