# Welcome

## Welcome to InfraMind

### Overview

InfraMind is a decentralized compute mesh built for high-performance AI model deployment. It provides an open, fault-tolerant, and latency-aware infrastructure for developers, node operators, and autonomous agents to deploy and run containerized AI models across a globally distributed network of independent servers.

InfraMind is not a hosted platform. It’s an execution layer. Every endpoint, every job, every node interaction happens without centralized orchestration. Intelligence should not be bottled up in a data center. InfraMind allows it to move.

***

### Features

* **Decentralized Runtime Mesh**\
  Models are served from nodes distributed globally, selected based on latency, capacity, and reliability.
* **Containerized AI Serving**\
  Models are packaged in portable OCI-compliant containers, versioned, and distributed with integrity proofs.
* **Low-Latency Scheduling**\
  Job routing is dynamic and locality-aware: the closest capable node receives each inference call.
* **Permissionless Node Contribution**\
  Any machine can become a compute node by running a lightweight agent. No KYC. No centralized approval.
* **Token-Incentivized Execution**\
  Jobs are cryptographically signed, performance is measured, and work is rewarded in $INFRA tokens.
* **Support for gRPC & REST APIs**\
  Endpoints support modern communication protocols and stateless or session-aware inference payloads.
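
As a concrete sketch, a REST inference call to a node's default port (9000, listed under System Requirements below) might look like the following. The `/v1/infer` path and the JSON payload fields are illustrative assumptions, not a confirmed API:

```shell
# Hypothetical REST inference call. The endpoint path and payload fields
# are assumptions for illustration; consult the API reference for the
# actual schema. "node.example" stands in for a real node address.
curl -s -X POST "http://node.example:9000/v1/infer" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-model:1.0", "input": "Hello, mesh"}'
```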

***

### Supported Use Cases

* On-demand model inference with REST/gRPC endpoints
* Distributed training across multi-node mesh clusters
* Vision/audio model deployment to edge environments
* Quantized model serving in low-resource regions
* Swarm-based AI agent orchestration
* Private model execution using TEE/WASM/FHE (in progress)

***

### System Requirements

**Minimum Requirements for Node Operators:**

* Linux/macOS (x86\_64 or ARM64)
* 2+ vCPU, 4GB+ RAM
* Stable public IP and bandwidth
* Docker installed (>= 20.10.x)
* Optional: NVIDIA GPU with CUDA >= 11.0 (for high-capacity workloads)

**Required Ports:**

* 9000 TCP (default inference port)
* 4369 TCP (heartbeat + peer index)
* 9100 TCP (metrics, optional)
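
A small pre-flight script can verify these requirements before you install anything. This is an illustrative sketch, not an official tool; `sort -V` for version comparison assumes GNU coreutils:

```shell
#!/bin/sh
# Pre-flight check for the requirements above (illustrative, not official).

# version_ok A B: succeeds if version A >= version B.
version_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if command -v docker >/dev/null 2>&1; then
  v=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)
  if version_ok "$v" "20.10"; then
    echo "docker $v: ok"
  else
    echo "docker $v: too old (need >= 20.10)"
  fi
else
  echo "docker: not installed"
fi

# Ports the node needs (9100 is optional, for metrics).
for port in 9000 4369 9100; do
  echo "remember to open TCP port $port"
done
```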

***

### Quick Start

```bash
curl -sL https://inframind.host/install.sh | bash
```

This command performs the following:

* Installs Docker if not present
* Pulls the official InfraMind Node container
* Registers the node using a cryptographic identity
* Connects to the InfraMind scheduler mesh
* Starts listening for jobs in the background
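
Piping a remote script straight into `bash` is convenient but opaque. If you prefer, download the installer, review it, and only then run it:

```shell
# Same installer as above, but inspected before execution.
curl -sL https://inframind.host/install.sh -o install.sh
less install.sh        # review what the script will do
bash install.sh
```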

Check status:

```bash
docker logs -f inframind-node
```

Or via the CLI:

```bash
infra status
```

***

### Node Roles

| Node Type        | Requirements                  | Capabilities                              |
| ---------------- | ----------------------------- | ----------------------------------------- |
| CPU-only Node    | Standard VM or bare-metal     | Basic inference, control routing          |
| GPU-enabled Node | CUDA-capable + drivers        | LLMs, Stable Diffusion, quantized chains  |
| Edge Node        | ARM SBC or browser-based WASM | Low-power jobs, privacy-aware processing  |
| TEE Node         | Trusted enclave / SGX         | Confidential workloads (FHE/zkML support) |
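
Which role a machine can fill follows mostly from its hardware. A rough detection sketch, using the role names from the table above (the detection logic itself is an assumption, not part of the node agent):

```shell
#!/bin/sh
# Rough node-role suggestion based on local hardware (illustrative).
arch=$(uname -m)
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi -L >/dev/null 2>&1; then
  role="GPU-enabled Node"          # CUDA-capable device detected
elif [ "$arch" = "aarch64" ] || [ "$arch" = "arm64" ]; then
  role="Edge Node (candidate)"     # ARM hardware, e.g. an SBC
else
  role="CPU-only Node"
fi
echo "suggested role: $role"
```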

***

### What InfraMind Is Not

* Not a cloud replacement — it's compute infrastructure that operates outside cloud vendors.
* Not a model hub — it executes models but doesn’t host their weights.
* Not a blockchain — it's not tied to any single chain, though it uses on-chain proofs and payments.
* Not speculative — rewards are based on completed work, not token emissions.

***

### Ecosystem Compatibility

* Container Format: OCI / Docker
* Model Runtimes: Python, ONNX, Torch, TensorFlow, Rust (via WASM)
* Networking: REST, gRPC, ZeroMQ (coming)
* Storage: IPFS, Arweave, HTTP fallback
* Token Layer: EVM-compatible (initial), with rollup abstraction planned

***

### Reading the Documentation

This documentation is organized into five primary sections:

1. **Introduction**\
   Learn about the vision, problem space, and evolution of InfraMind as a compute layer.
2. **Architecture**\
   Deep dive into the technical design — from container standards to job orchestration.
3. **Running a Node**\
   Complete walkthrough for becoming a node operator, earning rewards, and contributing to the mesh.
4. **Deploying Models**\
   How to prepare, containerize, and publish AI workloads to the network.
5. **Economics & Governance**\
   Token design, staking mechanics, reputation systems, and DAO transition path.

***

### Community & Support

* 📡 [Join the Community](https://inframind.host/)
* 📧 Email support: `support@inframind.host`
* GitHub (coming soon): `github.com/inframind`
* CLI Reference: `infra help`

***

InfraMind is not infrastructure as a service.\
It’s intelligence as a right — served by anyone, anywhere.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.inframind.host/welcome.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
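
For example, the question must be URL-encoded before it goes into the query string. This sketch uses Python's standard library for the encoding, but any URL-encoder works:

```shell
# Ask the documentation a specific, self-contained question.
question="Which TCP ports does a node operator need to open?"
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$question")
curl -s "https://docs.inframind.host/welcome.md?ask=${encoded}"
```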
