Atoma smart contract

The Atoma Network is supported by an on-chain smart contract on the Sui blockchain. That said, the Atoma protocol is chain agnostic: we plan to expand Atoma's functionality to other chains, such as EVM-compatible chains, Solana, Near, etc. We will also explore the possibility of integrating as an EigenLayer AVS, or of building our own L1/L2 for native payments.

This document outlines the key features, upcoming developments, and usage instructions for interacting with Atoma's smart contracts on Sui.

Atoma Contract Features

You can find Atoma's open-source smart contract repository [here](https://github.com/atoma-network/atoma-contracts/).

The Atoma contract on Sui implements the following key features:

  1. Node Registration: Nodes must register to participate in the Atoma Network and process requests.

  2. Collateral Management: Nodes deposit TOMA tokens as collateral upon registration.

  3. Fee Accrual: Nodes earn fees in TOMA tokens based on processed requests, withdrawable after two epochs.

  4. Model Subscription: Nodes specify which AI models they host and can process.

  5. Node Deregistration: Allows nodes to exit the network and withdraw collateral.

  6. Hardware Specification: Nodes declare their GPU configurations to ensure deterministic outputs within quorums.

  7. Echelon System: Organizes nodes into compute shards (echelons) based on hardware capabilities.

  8. Request Handling: Manages submission and payment (in TOMA) for network requests.

  9. Load Balancing: Distributes requests across suitable echelons based on performance and workload.

  10. Random Node Sampling: Selects a subset of nodes within an echelon to process each request.

  11. Timeout Enforcement: Monitors request processing times and slashes collateral for late responses.

  12. Output Consensus: Nodes submit cryptographic commitments of outputs to reach consensus.

  13. Dispute Resolution: Handles disagreements on output state using high-reputation nodes.

Upcoming Features

  1. Staking: Reward system for nodes based on performance within echelons.

  2. Governance: Voting mechanism for TOMA holders to influence network decisions.

  3. Enhanced Dispute Resolution: Implementing BFT and trusted hardware oracle solutions.

  4. General Compute Tasks: Support for WASM applications running inside Trusted Execution Environments (TEEs).

This contract design ensures a robust, scalable, and secure decentralized compute network for AI and other intensive tasks.

Atoma Contract Documentation

The following instructions provide a detailed description of how to interact with the Atoma contract on the Sui blockchain.

Atoma on Sui


The packages and the CLI are pinned to the currently released Sui mainnet version tag.

Upgrade your CLI to the appropriate mainnet version that matches the Move.toml:

cargo install --locked --git https://github.com/MystenLabs/sui.git --tag mainnet-vX.Y.Z sui

When upgrading, the version needs to be changed in:

  • atoma package

  • toma package

  • cli binary

Events

The Atoma contract emits various types of events:

  • db::NodeRegisteredEvent is emitted when a new node puts up collateral to register.

  • db::NodeSubscribedToModelEvent is emitted when a node subscribes to a model echelon and is ready to receive prompts.

  • gate::Text2TextPromptEvent is emitted when a user submits a text to text prompt.

  • gate::Text2ImagePromptEvent is emitted when a user submits a text to image prompt.

  • settlement::FirstSubmissionEvent is emitted when a node submits the first response to a prompt.

  • settlement::DisputeEvent is emitted when a node disputes a submission. At this point, an oracle is needed to resolve the dispute.

  • settlement::SettledEvent is emitted when a ticket is settled and the fee is distributed.

  • settlement::NewlySampledNodesEvent is emitted when a new set of nodes is sampled for a prompt because of timeout.

Create a Sui Wallet

To interact with the Atoma contract on the Sui blockchain, you'll need a Sui wallet. If you already have one, you can skip to the next section. Otherwise, follow these steps to create a new wallet:

  1. Choose a Sui wallet:

  2. Install your chosen wallet and follow the setup instructions.

  3. Securely store your recovery phrase (seed words) in a safe place.

  4. Fund your wallet:

    • For testnet: Use the Sui Faucet in the official Sui Discord.

    • For mainnet: Purchase SUI tokens from a supported exchange.

  5. Verify your wallet balance using the Sui Explorer or your wallet interface.

For more detailed instructions and additional wallet options, refer to the official Sui documentation on wallets.

How to use the Atoma protocol

To interact with the Atoma protocol, use the gate module within the atoma package, which is responsible for prompt submission.

A crucial parameter is the shared object ID for AtomaDb. This, along with the package ID, should be configured once and remain unchanged. If necessary, the AtomaDb object ID can be derived from the package ID by querying the first transaction of the package and locating the shared object with the type name AtomaDb.

Before we list all the parameters, here are some general rules:

  • Floats are stored on-chain as u32 integers. To convert from float to u32, convert the float to little-endian bytes and then interpret those bytes as a little-endian u32, e.g. u32::from_le_bytes(xxx_f32.to_le_bytes()). Conversely, to convert from u32 to float, use the reverse process.
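
For example, a minimal Rust sketch of this round-trip encoding (the helper names are illustrative and not part of the contract API):

// Encode an f32 (e.g. temperature, top_p) into the u32 representation
// expected by the contract: little-endian bytes reinterpreted as a u32.
fn encode_f32(value: f32) -> u32 {
    u32::from_le_bytes(value.to_le_bytes())
}

// Decode a u32 read back from chain state into the original f32.
fn decode_f32(raw: u32) -> f32 {
    f32::from_le_bytes(raw.to_le_bytes())
}

fn main() {
    let temperature = 0.7_f32;
    let encoded = encode_f32(temperature);
    assert_eq!(decode_f32(encoded), temperature);
    println!("temperature {temperature} encodes to {encoded}");
}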

As of now, the supported modalities are:

  • submit_text2text_prompt with params Text2TextPromptParams:

    • max_tokens: determines the maximum number of output tokens to be generated and also the amount of TOMA tokens charged. Unused tokens are refunded upon response generation. We discuss pricing below.

    • model: a string identifier of the model for text-to-text generation. Refer to our website for supported models.

    • pre_prompt_tokens: For in-context applications, this is the number of tokens already generated before the user's current prompt.

    • prompt: input text prompt. There's no limit to the prompt length at the protocol level, but a Sui transaction can be at most 128KB.

    • random_seed: any random number to seed the random generator for consistent output across nodes. Until Sui stabilizes its on-chain random generator, you can use atoma::utils::random_u64.

    • repeat_last_n: instructs the model to avoid reusing tokens within the last n tokens.

    • repeat_penalty: a float penalizing repeated tokens; higher values reduce repetition.

    • should_stream_output: a boolean indicating whether the output should be streamed to a suitable output destination.

    • temperature: a float number determining randomness in the output.

    • top_k: an integer limiting sampling to the k most probable tokens when generating the next token.

    • top_p: a float limiting sampling to the smallest set of tokens whose cumulative probability exceeds p (nucleus sampling).

  • submit_text2image_prompt with params Text2ImagePromptParams:

    • guidance_scale: a float determining how strongly the generation follows the prompt guidance.

    • height: height of the image. See pricing below.

    • img2img: an optional string referencing an initial image for image-to-image generation with stable diffusion.

    • img2img_strength: a float indicating how strongly the initial img2img image influences the output.

    • model: same as above.

    • n_steps: an integer indicating how many steps the model should take to generate the image.

    • num_samples: an integer indicating how many samples the model should generate.

    • prompt: same as above.

    • random_seed: same as above.

    • uncond_prompt: the negative (unconditional) prompt.

    • width: width of the image.
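
Putting these parameter descriptions and the float encoding rule together, here is a hedged Rust sketch of assembling text-to-text prompt parameters on the client side. The struct is an illustrative mirror only; the authoritative Text2TextPromptParams definition lives in the gate module, and the field types below are assumptions.

// Illustrative client-side mirror of Text2TextPromptParams; the Move struct
// in the gate module is authoritative, and these field types are assumptions.
struct Text2TextPromptParams {
    max_tokens: u64,
    model: String,
    pre_prompt_tokens: u64,
    prompt: String,
    random_seed: u64,
    repeat_last_n: u64,
    repeat_penalty: u32,       // f32 encoded as a little-endian u32
    should_stream_output: bool,
    temperature: u32,          // f32 encoded as a little-endian u32
    top_k: u64,
    top_p: u32,                // f32 encoded as a little-endian u32
}

fn encode_f32(value: f32) -> u32 {
    u32::from_le_bytes(value.to_le_bytes())
}

fn example_params() -> Text2TextPromptParams {
    Text2TextPromptParams {
        max_tokens: 512,
        model: "llama3_8b_instruct".to_string(),
        pre_prompt_tokens: 0,
        prompt: "YOUR_PROMPT".to_string(),
        random_seed: 42,
        repeat_last_n: 64,
        repeat_penalty: encode_f32(1.1),
        should_stream_output: false,
        temperature: encode_f32(0.7),
        top_k: 40,
        top_p: encode_f32(0.9),
    }
}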

A wallet with TOMA tokens is required to pay for prompts, with charges varying based on the prompt type. Each model has separate prices for input and output tokens, configured as two parameters. For text-to-text models, these two prices are likely to be the same.

The parameter nodes_to_sample is optional and defaults to a sensible value. A higher number of nodes means higher confidence in the generated output, but also a higher price, as the number of nodes multiplies the prompt price.

  • Text2TextPromptParams charges nodes_to_sample * (prompt_len * input_token_price + max_tokens * output_token_price) upon prompt submission.

  • Text2ImagePromptParams charges nodes_to_sample * (prompt_len * input_token_price + num_samples * output_token_price) upon submission.

Unused tokens are reimbursed upon response generation by sending a Coin<TOMA> object to the prompt submitter.
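
As an illustration of the formulas above, a small Rust sketch that estimates the TOMA charged at submission time (function and variable names are ours, not part of the contract):

// Estimated TOMA charged when a text-to-text prompt is submitted.
fn text2text_charge(
    nodes_to_sample: u64,
    prompt_len: u64,
    input_token_price: u64,
    max_tokens: u64,
    output_token_price: u64,
) -> u64 {
    nodes_to_sample * (prompt_len * input_token_price + max_tokens * output_token_price)
}

// Estimated TOMA charged when a text-to-image prompt is submitted.
fn text2image_charge(
    nodes_to_sample: u64,
    prompt_len: u64,
    input_token_price: u64,
    num_samples: u64,
    output_token_price: u64,
) -> u64 {
    nodes_to_sample * (prompt_len * input_token_price + num_samples * output_token_price)
}

fn main() {
    // 3 sampled nodes, 100-token prompt, 512 max output tokens, unit prices of 1.
    println!("{}", text2text_charge(3, 100, 1, 512, 1)); // prints 1836
}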

The submit_text2text_prompt function has a max_fee_per_token parameter, which applies to both input and output token prices. If no nodes can generate the prompt within the budget, the transaction fails.

The submit_text2image_prompt function has max_fee_per_input_token and max_fee_per_output_token parameters, which apply to the input and output token prices, respectively.

As mentioned above, nodes_to_sample is an optional parameter. If specified, a higher number of nodes means higher overall confidence in the generated output, but also a higher price, as the number of nodes multiplies the prompt price. This behavior is part of our standard Sampling Consensus protocol. If nodes_to_sample is not specified, the protocol instead uses the Cross-Validation Sampling Consensus mechanism: a single node is sampled by the contract and, once that node generates the response, the contract samples additional nodes to attest to the response's correctness with some probability p, specified at the protocol level. This approach reduces the cost of verifiable inference while guaranteeing that the protocol converges to a game-theoretic Nash equilibrium in which honest nodes are incentivized to act honestly.
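
To make the cost trade-off concrete, here is a hedged Rust sketch of the expected number of paid nodes under each mode; the probability p and the size of the attestation set are protocol-level values that we simply treat as inputs here:

// Standard Sampling Consensus: the caller fixes nodes_to_sample, and every
// sampled node is paid for the prompt.
fn sampling_consensus_nodes(nodes_to_sample: u64) -> f64 {
    nodes_to_sample as f64
}

// Cross-Validation Sampling Consensus: one node always runs the prompt, and
// with probability p the contract samples `attesters` additional nodes to
// verify the response.
fn cross_validation_expected_nodes(p: f64, attesters: u64) -> f64 {
    1.0 + p * attesters as f64
}

fn main() {
    // Example: with p = 0.1 and 5 attesters, the expected load is 1.5 nodes,
    // versus a fixed 3 nodes under standard Sampling Consensus.
    println!("{}", cross_validation_expected_nodes(0.1, 5)); // prints 1.5
    println!("{}", sampling_consensus_nodes(3));
}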

Refer to the atoma::prompts module for sample implementations. If you are developing a custom smart contract for prompt submission, this module is a great starting point.

Since these functions are public but not entry, they must be called from Sui programmable transaction blocks on the client side.

Dev Environment

There's a check shell script that builds all packages.

As of right now we don't use localnet for testing because the Sui CLI support for faucet is broken.

CLI

Env

The CLI loads values from environment variables. You can set these in your shell or in a .env file in the root of the repository.

If any value is not provided, the CLI makes a best effort to figure it out from context. For example, if you provide the package ID but not the AtomaDb object ID, the CLI will query Sui to find it.

WALLET_PATH=
PACKAGE_ID=
ATOMA_DB_ID=
MANAGER_BADGE_ID=
NODE_BADGE_ID=
NODE_ID=
TOMA_WALLET_ID=
GAS_BUDGET=

You can also generate these values by running the following command:

./cli db print-env --package "YOUR PACKAGE ID"

The following commands should get you started once you have the Sui binary installed.

# check what environment (devnet, testnet, mainnet) you're in
sui client active-env
# check what is your active address
sui client active-address
# get some coins into the active address from the faucet
sui client faucet

TOMA token

The TOMA token is used as collateral that nodes must lock up to participate. It's defined in the toma package.

Node registration

In order to register a node, it is required to deposit a given amount of collateral into the Atoma contract, denominated in TOMA tokens. Therefore, a node must acquire enough TOMA tokens before registration. Currently, the required collateral is 10_000 TOMA tokens.

Once the node operator has enough TOMA tokens, they can register a node with the following CLI command:

./cli db register-node \
    --package "TODO(add package id here)" \
    --echelon NODE_ECHELON

Current node echelons are the following (based on the node's type of GPU):

Node model subscription

In order to subscribe to a given model, the node operator can run the following command:

./cli db add-node-to-model \
    --package "0x8fc663315a07208e86473b808d902c9b97a496a3d2c3779aa6839bd9d26272b8" \
    --model "MODEL" \

Notice that once a node subscribes to a given model, it is entitled to execute requests for that specific model. If the node doesn't host the model, its submissions will time out, and part of the node's deposited collateral will be slashed for the timeout. To avoid this, the node operator must be sure to subscribe only to models it currently hosts.

The available list of supported models is:

Submit prompt requests on the Atoma Network

In order to interact with Atoma's CLI, you should first clone Atoma's smart contract repository:

$ git clone https://github.com/atoma-network/atoma-contracts/

Once you have cloned the repository, you should change directory to the sui/cli crate:

$ cd atoma-contracts/sui/cli

Text Prompt Request

To submit a text prompt request to the Atoma network, say for the Llama3 8B Instruct model, while sampling 3 nodes for verifiability, a user can run the following command:

./cli gate send-prompt-to-ipfs \
    --package "your package id can be found when publishing" \
    --model "llama3_8b_instruct" \
    --prompt "YOUR_PROMPT" \
    --max-tokens 512 \
    --max-fee-per-token 1 \
    --nodes-to-sample 3

The above command will submit a text prompt request to the Atoma network and print the corresponding transaction digest. The output text will be stored on IPFS, and the user can retrieve it with the corresponding IPFS CID. We also support storage on Gateway. To do so, the user can run the following command:

./cli gate send-prompt-to-gateway \
    --package "your package id can be found when publishing" \
    --model "llama3_8b_instruct" \
    --prompt "YOUR_PROMPT" \
    --max-tokens 512 \
    --max-fee-per-token 1 \
    --gateway-user-id "YOUR_GATEWAY_USER_ID" \
    --nodes-to-sample 3

Where you need to provide your Gateway user ID, which you set when registering on the Atoma Gateway portal.
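
To retrieve a response that was stored on IPFS via send-prompt-to-ipfs, any public IPFS gateway can be used once you have the output CID. A minimal Rust sketch, assuming the reqwest crate with its blocking feature; the gateway URL and CID are placeholders:

use std::error::Error;

fn fetch_from_ipfs(cid: &str) -> Result<String, Box<dyn Error>> {
    // Any public IPFS gateway works; ipfs.io is used here as an example.
    let url = format!("https://ipfs.io/ipfs/{cid}");
    let body = reqwest::blocking::get(url)?.text()?;
    Ok(body)
}

fn main() -> Result<(), Box<dyn Error>> {
    // Replace with the CID of your prompt's output.
    let output = fetch_from_ipfs("YOUR_OUTPUT_CID")?;
    println!("{output}");
    Ok(())
}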

Image Prompt Request

Image Prompt Request to IPFS

To submit an image prompt request to the Atoma network, say for the Flux-dev model, while sampling 3 nodes for verifiability, a user can run the following command:

./cli gate send-image-prompt-to-ipfs \
    --package "your package id can be found when publishing" \
    --model "flux_dev" \
    --prompt "YOUR_PROMPT" \
    --height 512 \
    --width 512 \
    --max_fee_per_input_token 1 \
    --max_fee_per_output_token 1 \
    --nodes-to-sample 3

where max_fee_per_input_token and max_fee_per_output_token are the maximum fees to be paid to nodes per text input token and output image pixel, respectively.

Image Prompt Request to Gateway

To submit an image prompt request to the Atoma network, say for the Flux-dev model, while sampling 3 nodes for verifiability, a user can run the following command:

./cli gate send-image-prompt-to-gateway \
    --package "your package id can be found when publishing" \
    --model "flux_dev" \
    --prompt "YOUR_PROMPT" \
    --height 512 \
    --width 512 \
    --max_fee_per_input_token 1 \
    --max_fee_per_output_token 1 \
    --gateway-user-id "YOUR_GATEWAY_USER_ID" \
    --nodes-to-sample 3

Where you need to provide your Gateway user ID, which you set when registering on the Atoma Gateway portal.
