Deploying an Atoma node

Quickstart

  1. Clone the repository

```bash
git clone https://github.com/atoma-network/atoma-node.git
cd atoma-node
```
  2. Configure environment variables by creating a .env file:

Start by running:

```bash
cp .env.example .env
```

You should then see a file of the form:

```bash
# Hugging Face Configuration
HF_CACHE_PATH=~/.cache/huggingface
HF_TOKEN=   # Required for gated models

# Inference Server Configuration
INFERENCE_SERVER_PORT=50000    # External port for vLLM service
MODEL=meta-llama/Llama-3.1-70B-Instruct
MAX_MODEL_LEN=4096            # Context length
GPU_COUNT=1                   # Number of GPUs to use
TENSOR_PARALLEL_SIZE=1        # Should be equal to GPU_COUNT

# Sui Configuration
SUI_CONFIG_PATH=~/.sui/sui_config

# Atoma Node Service Configuration
ATOMA_SERVICE_PORT=3000       # External port for Atoma service
```

You need to fill in the HF_TOKEN variable with your Hugging Face API token, which is required for gated models. See the official [documentation](https://huggingface.co/docs/hub/security-tokens) for more information on how to create one.

  3. Configure config.toml, using config.example.toml as a template, by running:
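```bash
cp config.example.toml config.toml
```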

You should now have a config.toml file with contents of the following form.
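The exact contents depend on config.example.toml in the repository; the sketch below is illustrative only, and every key name in it is an assumption:

```toml
# Illustrative sketch -- mirror config.example.toml, not this block
[atoma_service]
service_bind_address = "0.0.0.0:3000"
# Public URL(s) under which the service(s) this node runs are reachable
chat_completions_service_url = "http://<your-public-ip>:50000"

[atoma_sui]
config_path = "~/.sui/sui_config"
```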

You can enable multiple services for your node, such as inference, embeddings, and multi-modal, by setting the corresponding public URL for each.

  4. Create the required directories:
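These match the log and database volume mounts described below:

```bash
mkdir -p logs data
```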

  5. Start the containers:

If you plan to run a chat completions service:
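A sketch, assuming a Compose profile named chat_completions_vllm (check the repository's docker-compose.yaml for the exact profile names):

```bash
docker compose --profile chat_completions_vllm up -d
```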

For text embeddings:
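Again a sketch, with the profile name assumed:

```bash
docker compose --profile embeddings_tei up -d
```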

For image generation:
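Likewise, with the profile name assumed:

```bash
docker compose --profile image_generations_mistralrs up -d
```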

It is possible to run any combination of the above, provided the node has enough GPU compute available. For example, to run all services simultaneously, run:
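A sketch combining the assumed profile names from above:

```bash
docker compose --profile chat_completions_vllm \
  --profile embeddings_tei \
  --profile image_generations_mistralrs up -d
```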

Container Architecture

The deployment consists of two main services:

  • vLLM Service: Handles the AI model inference

  • Atoma Node: Manages the node operations and connects to the Atoma Network

Service URLs

  • vLLM Service: http://localhost:50000 (configured via INFERENCE_SERVER_PORT)

  • Atoma Node: http://localhost:3000 (configured via ATOMA_SERVICE_PORT)

Volume Mounts

  • HuggingFace cache: ~/.cache/huggingface:/root/.cache/huggingface

  • Sui configuration: ~/.sui/sui_config:/root/.sui/sui_config

  • Logs: ./logs:/app/logs

  • SQLite database: ./data:/app/data
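Taken together, the compose file is shaped roughly as follows. This is an illustrative sketch only (the service names, image, and vLLM's internal port are assumptions), not the repository's actual file:

```yaml
services:
  vllm:
    image: vllm/vllm-openai:latest        # assumed image
    ports:
      - "${INFERENCE_SERVER_PORT}:8000"   # vLLM's default internal port
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface
  atoma-node:
    build: .
    ports:
      - "${ATOMA_SERVICE_PORT}:3000"
    volumes:
      - ~/.sui/sui_config:/root/.sui/sui_config
      - ./logs:/app/logs
      - ./data:/app/data
```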

Managing the Deployment

Check service status:
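```bash
docker compose ps
```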

View logs:
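For example (the service name is an assumption):

```bash
docker compose logs -f              # all services
docker compose logs -f atoma-node   # a single service
```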

Stop services:
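```bash
docker compose down
```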

Troubleshooting

  1. Check if services are running:
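```bash
docker ps
```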

  2. Test vLLM service:
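vLLM's OpenAI-compatible server exposes a health endpoint and a model listing:

```bash
curl http://localhost:50000/health
curl http://localhost:50000/v1/models
```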

  3. Test Atoma Node service:
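A sketch; the route here is an assumption, so consult the Atoma node's API for the actual health endpoint:

```bash
curl http://localhost:3000/health
```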

  4. Check GPU availability:
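For example (the compose service name is an assumption):

```bash
nvidia-smi                            # on the host
docker compose exec vllm nvidia-smi   # inside the container
```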

  5. View container networks:
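The network name below assumes Compose's default of project_default:

```bash
docker network ls
docker network inspect atoma-node_default
```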

Security Considerations

  1. Firewall Configuration
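For example, with ufw, expose only what clients need to reach (ports per the .env above; adjust to your deployment):

```bash
sudo ufw allow 3000/tcp     # Atoma service
# Expose the inference port only if clients must reach vLLM directly:
# sudo ufw allow 50000/tcp
sudo ufw enable
```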

  2. HuggingFace Token

  • Store HF_TOKEN in the .env file

  • Never commit .env file to version control

  • Consider using Docker secrets for production deployments (see the sketch below)
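A minimal sketch of the Docker secrets route, using Compose file-based secrets (names and paths illustrative); the container then reads the token from /run/secrets/hf_token instead of an environment variable:

```yaml
services:
  atoma-node:
    secrets:
      - hf_token
secrets:
  hf_token:
    file: ./secrets/hf_token.txt   # keep this file out of version control
```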

  3. Sui Configuration

  • Ensure Sui configuration files have appropriate permissions (see the example below)

  • Keep keystore file secure and never commit to version control
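For example, restricting the configuration directory and keystore to your user:

```bash
chmod 700 ~/.sui/sui_config
chmod 600 ~/.sui/sui_config/sui.keystore
```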
