Hardware Requirements

Running an InfraMind node is open to any individual or organization with a stable internet connection and basic Linux-based compute. The system is designed to accommodate a wide range of environments, from low-power ARM boards serving edge workloads to high-performance GPU servers running large-scale inference.

Nodes are not provisioned through a central approval process. They self-register by proving capability through periodic heartbeats and successful job completions, so any machine that meets the minimum requirements can join the mesh, contribute compute, and begin earning rewards. The more performant and reliable the hardware, the higher the tier of jobs it is assigned.
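
As an illustration of the self-registration flow, a capability heartbeat could be assembled from locally measured facts about the host. The field names below are assumptions made for this sketch, not a documented InfraMind schema.

import json
import os
import shutil
import time

def build_heartbeat(node_id: str) -> str:
    """Assemble a hypothetical capability heartbeat from locally measured host facts."""
    payload = {
        "node_id": node_id,  # illustrative field names throughout
        "timestamp": int(time.time()),
        "cpu_count": os.cpu_count(),
        "ram_bytes": os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES"),
        "free_disk_bytes": shutil.disk_usage("/").free,
        "container_runtime": next((r for r in ("docker", "podman") if shutil.which(r)), None),
    }
    return json.dumps(payload)

print(build_heartbeat("node-example-01"))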

Minimum requirements are intentionally low to encourage geographic diversity and hardware heterogeneity across the network.

Minimum CPU/RAM

For basic CPU-bound inference workloads:

  • 2 vCPU (x86_64 or ARM64)

  • 4 GB RAM

  • 10 GB free disk space (for caching containers)

  • Linux (Debian, Ubuntu, Arch, Alpine, or CentOS)

  • Docker (20.x or newer) or Podman

  • Python 3.10+ (optional for local CLI)
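
A minimal pre-flight sketch of these minimums, using only the Python standard library, is shown below; it is not an official InfraMind tool, just a quick way to sanity-check a host before registering it.

import os
import shutil

def meets_minimum_spec() -> bool:
    """Check the host against the minimum CPU, RAM, disk and runtime requirements above."""
    cpu_ok = (os.cpu_count() or 0) >= 2
    ram_ok = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") >= 4 * 1024**3
    disk_ok = shutil.disk_usage("/").free >= 10 * 1024**3
    runtime_ok = any(shutil.which(r) for r in ("docker", "podman"))
    return all((cpu_ok, ram_ok, disk_ok, runtime_ok))

if __name__ == "__main__":
    print("minimum spec met:", meets_minimum_spec())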

Example hardware that meets the minimum spec:

  • DigitalOcean 2vCPU / 4GB droplet

  • Raspberry Pi 5 8GB

  • An otherwise idle local laptop

  • Low-tier spot instances on AWS/GCP

Nodes with this configuration are eligible to serve:

  • Lightweight transformer models (e.g. distilled BERT)

  • Quantized classification or embedding models

  • Stateless job pipelines (one-off inference)

Optional GPU (for higher-tier jobs)

Nodes equipped with CUDA-compatible GPUs unlock access to large-scale jobs such as:

  • LLM inference (>6B parameters)

  • Diffusion model generation

  • Batched sequence models

  • Real-time vision processing

Recommended specs for GPU-tier jobs:

  • NVIDIA GPU with:

    • CUDA Compute Capability >= 7.0

    • 8 GB+ VRAM

  • NVIDIA Container Toolkit (NVIDIA Docker runtime) installed

  • nvidia-smi reporting the expected driver version
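
One quick way to confirm the driver and VRAM items is to query nvidia-smi directly, as in the sketch below. The compute_cap query field is only available on relatively recent drivers, so treat that part as an assumption about your driver version.

import subprocess

def query_gpus() -> list[str]:
    """Return one CSV line per GPU: name, driver version, total VRAM (MiB), compute capability."""
    fields = "name,driver_version,memory.total,compute_cap"  # compute_cap needs a recent driver
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

for gpu in query_gpus():
    print(gpu)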

Supported cards include:

  • NVIDIA RTX 3060 / 3070 / 3080 / 3090

  • A6000 / A100 / H100

  • RTX 4000 series

  • Jetson-class devices (in edge mode)

For multi-GPU servers, the agent pins each job to a specific device by assigning CUDA_VISIBLE_DEVICES for that job.

Example systemd service override:

[Service]
Environment="CUDA_VISIBLE_DEVICES=0"
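
After editing the override, apply it with systemctl daemon-reload and restart the agent unit. For illustration only, per-job pinning can be approximated by launching each job with its own CUDA_VISIBLE_DEVICES value; the sketch below is not the agent's actual scheduler, and serve_model.py is a placeholder command.

import os
import subprocess

def run_job_on_device(cmd: list[str], device_index: int) -> int:
    """Launch a job process with CUDA_VISIBLE_DEVICES pinned to a single GPU index."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(device_index))
    return subprocess.run(cmd, env=env).returncode

# e.g. route one inference job to GPU 1 (command is illustrative)
run_job_on_device(["python3", "serve_model.py"], device_index=1)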

Nodes that misreport GPU capability or fail to serve within resource limits will be slashed and downgraded in reputation.

Bandwidth, Storage, Reliability Tiers

All nodes must be reachable over the public internet and expose at least one open port for job coordination. Relying on NAT traversal or dynamic DNS is discouraged, since it makes job routing unpredictable.

Minimum recommended bandwidth:

  • Downlink: 10 Mbps

  • Uplink: 5 Mbps

  • Latency to gateway/scheduler: < 150 ms
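
The latency target can be sanity-checked with a timed TCP connect; gateway.inframind.example and port 443 below are placeholders for this sketch, not real protocol endpoints.

import socket
import time

def tcp_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the average TCP connect time to host:port in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000

print(f"latency: {tcp_latency_ms('gateway.inframind.example'):.1f} ms")  # placeholder host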

Nodes that meet higher bandwidth and availability thresholds receive faster job routing and larger container pulls.

Storage requirements vary by workload:

  • Default: 10–50 GB (for container cache)

  • Heavy reuse: 100–250 GB (for persistent multi-model caching)

  • Pinning nodes: up to 1 TB (for model mirroring and registry)

The disk must sustain at least 100 MB/s of sequential writes for reliable job turnaround.
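
A rough check of the 100 MB/s target is to time a flushed, fsync'd sequential write. The sketch below writes 256 MB to a test file; run it on the same filesystem that will hold the container cache, since /tmp may be RAM-backed on some systems.

import os
import time

def sequential_write_mb_s(path: str = "./inframind_disk_test.bin", size_mb: int = 256) -> float:
    """Write size_mb of zeros sequentially, fsync, and return throughput in MB/s."""
    chunk = b"\0" * (1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

print(f"sequential write: {sequential_write_mb_s():.0f} MB/s")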

Reliability tiers are assessed by:

  • Uptime in the last 30 days

  • Job completion ratio

  • Proof return latency

  • Protocol heartbeat consistency

Nodes that exceed 98.5% SLA over 30 days are marked as “Tier-1” nodes and prioritized for paid workloads.
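
As a rough sketch of how the 98.5% bar translates into heartbeat terms (the 30-second heartbeat interval here is an assumption, not a protocol constant):

def uptime_percent(heartbeats_received: int, interval_s: int = 30, window_days: int = 30) -> float:
    """Uptime as the share of expected heartbeats actually received over the window."""
    expected = window_days * 24 * 3600 // interval_s
    return 100.0 * min(heartbeats_received, expected) / expected

# A node that missed about 1,200 of 86,400 expected heartbeats still clears the 98.5% bar.
print(uptime_percent(85_200) >= 98.5)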

Recommended Setups

Home Deployment:

  • Intel NUC with 16GB RAM

  • Raspberry Pi 5 with passive cooling

  • Custom mini-ITX build with GTX 1660

  • Proxmox VM on home server with Docker runtime

  • Always-on Internet, UPS backup preferred

Data Center Server:

  • 16-core AMD or Intel CPU

  • 64–128GB RAM

  • Dual A6000 GPUs or A100

  • 1Gbps+ symmetrical bandwidth

  • Redundant power and cooling

  • IPv4 and IPv6 address

Cloud Spot Setup:

  • AWS g5.xlarge (NVIDIA A10G GPU, 4 vCPU)

  • Hetzner GPU SX62 with RTX 4080

  • Vast.ai bid node with RTX 3090

  • Cloud-init for node auto-start

  • Cron job for price rebid or fallback

Nodes can run unattended, provided logs are written to persistent disk and remote health checks are enabled.
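
A remote health check can be as simple as a small HTTP endpoint that the operator polls from outside. The port and path below are arbitrary choices for this sketch, not part of the InfraMind protocol.

from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report a flat "ok" if the process is alive; a real probe would also check the agent itself.
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8800), HealthHandler).serve_forever()  # port 8800 is an arbitrary example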

InfraMind validates performance periodically through synthetic workload injection, so hardware that performs below its declared spec will be throttled or suspended. Operators are therefore encouraged to benchmark their nodes and register them with realistic capability declarations.

Node classification:

Tier      Description                         Example Use
Tier-0    Edge nodes, low latency, ARM        Vision filters, scoring
Tier-1    CPU-only, general-purpose           Text classifiers, routing
Tier-2    GPU-enabled, real-time inference    LLM, vision, speech
Tier-3    Multi-GPU, stateful workloads       Sequence batch, pipelines
Tier-X    Secure enclave (TEE/WASM)           Encrypted model serving

InfraMind’s architecture treats every node as sovereign compute. No centralized provisioning, no API key lock-in, no fleet control. All participation is validated through declared capability and delivered service. If your machine can execute the job and verify the result, it belongs in the mesh.
