Testing & Simulating Jobs Locally

Testing and simulating InfraMind jobs locally is essential for ensuring container correctness, schema compliance, and runtime reliability before deploying models to the mesh. The InfraMind CLI provides a dedicated set of tools to help developers validate their containers, troubleshoot execution errors, inspect input/output behavior, and trace performance metrics — all without requiring live mesh integration.

These tools run your model inside the same container environment used by production nodes, apply the declared model.yaml config, and simulate a full execution lifecycle, including schema enforcement, resource limiting, and container log capture.


Dry Run with CLI

To validate a container and its manifest without pushing it to the mesh, use:

infra test --model model.yaml

This command will:

  • Parse and validate model.yaml

  • Load the container image from disk or local Docker

  • Start the container with declared resources

  • Send a synthetic test input (if no input provided)

  • Validate output against output_schema

  • Print logs, latency, and error traces

You can provide an input payload using --input:

infra test --model model.yaml --input test.json

test.json might look like:

{
  "text": "Decentralized compute protocols enable permissionless AI."
}

If the test fails, the CLI will return a detailed breakdown:

[Validation] input schema: ✅
[Runtime] container launch: ✅
[Execution] output schema: ❌

Error: Expected key `summary` missing from response

Simulate a Full Job Execution

To simulate the full lifecycle of a job, including proof generation, use:

infra simulate --model model.yaml --input test.json

This runs the job under a temporary job ID and mimics mesh behavior:

  • Assigns a fake scheduler signature

  • Creates a sandboxed container runtime

  • Records execution time

  • Writes a job receipt to ~/.inframind/receipts/devmode/

Simulated job output:

{
  "job_id": "dev-f9ae1d",
  "latency_ms": 172,
  "status": "success",
  "output": {
    "summary": "Decentralized compute enables..."
  },
  "signature": "0x4ab9...",
  "timestamp": 1720321012
}
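
Because the dev-mode receipt is plain JSON, you can post-process it with ordinary tooling. A minimal Python sketch that scans the receipts directory; the one-file-per-job naming is an assumption, so adjust the glob to whatever the CLI actually writes:

import json
from pathlib import Path

# Directory where `infra simulate` writes dev-mode receipts (see above).
receipt_dir = Path.home() / ".inframind" / "receipts" / "devmode"

# Assumption: one JSON file per simulated job; the *.json pattern is illustrative.
for receipt_path in sorted(receipt_dir.glob("*.json")):
    receipt = json.loads(receipt_path.read_text())
    print(f"{receipt['job_id']}: {receipt['status']} in {receipt['latency_ms']} ms")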

To include full logs:

infra simulate --model model.yaml --input test.json --verbose

You’ll see:

[Container Logs]
[INFO] Server started at port 9000
[INFO] Received input: Decentralized compute protocols...
[INFO] Summary generated.

Debugging Failed Runs

When a simulation fails, the CLI captures the full execution context:

  • stderr and stdout from the container

  • Exit codes

  • Schema mismatch information

  • Container pull/cache behavior

  • Resource constraint violations

Check the most recent error:

infra debug last

Or inspect a specific run:

infra debug --job dev-f9ae1d

This prints:

  • Job input and output

  • Output hash and signature

  • Container logs

  • Line number of failure (if applicable)

If the container exited prematurely:

[ERROR] Container exited with code 137 (OOMKilled)
Try lowering input size or increasing memory in model.yaml

If schema validation failed:

[Schema Error] 'summary' is a required property
Returned: {"result": "text"}
Expected: {"summary": "string"}
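
The same check can be reproduced outside the CLI if you want to iterate quickly on your container's response shape. A minimal sketch using the jsonschema package; the schema literal below is an assumed stand-in for the output_schema declared in model.yaml, not copied from it:

from jsonschema import ValidationError, validate

# Assumed stand-in for output_schema: a required string field `summary`.
output_schema = {
    "type": "object",
    "properties": {"summary": {"type": "string"}},
    "required": ["summary"],
}

response = {"result": "text"}  # the failing payload from the example above

try:
    validate(instance=response, schema=output_schema)
except ValidationError as err:
    print(f"[Schema Error] {err.message}")  # 'summary' is a required property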

Local Tracing Tools

The CLI provides optional tracing overlays to visualize performance during simulation:

infra trace --model model.yaml --input test.json

This reports:

  • Input processing time

  • Container boot time

  • Inference duration

  • Schema validation latency

  • Total wall clock time

Output:

[Trace Summary]
Input Validation:       3ms
Container Startup:     421ms
Model Inference:       117ms
Output Validation:       6ms
Total Runtime:         547ms

Enable continuous profiling mode (developer preview):

infra profile --watch

This launches a local Flask server on localhost:3001 that graphs container memory usage, CPU spikes, and job durations in real time.


Run Container Standalone

To bypass the InfraMind runtime and run your container directly:

docker run -it --rm -p 9000:9000 summarizer-v1

Test with curl:

curl -X POST http://localhost:9000/inference \
  -H "Content-Type: application/json" \
  -d '{"text": "Test run"}'

You can combine this with infra test to compare runtime and container behavior side by side.
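
The contract exercised by the curl call above is small: the container listens on port 9000 and answers POST /inference with JSON. A minimal Flask sketch of a compatible server, where summarize() is a placeholder for your actual model and the exact response shape is governed by your declared output_schema:

from flask import Flask, jsonify, request

app = Flask(__name__)

def summarize(text: str) -> str:
    # Placeholder: call your real model here.
    return text[:80]

@app.route("/inference", methods=["POST"])
def inference():
    payload = request.get_json(force=True)
    # Input mirrors test.json above: {"text": "..."}
    result = summarize(payload["text"])
    # Output must satisfy output_schema, e.g. a required `summary` string.
    return jsonify({"summary": result})

if __name__ == "__main__":
    # Port 9000 matches the docker run / curl examples above.
    app.run(host="0.0.0.0", port=9000)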


Recommended Testing Flow

  1. Write model.yaml and containerize your model

  2. Run infra test --model model.yaml to validate schema and image

  3. Run infra simulate with test input

  4. Use infra debug and infra trace to inspect edge cases

  5. Run infra profile or expose the Prometheus port if doing batch tests (a batch-driver sketch follows this list)

  6. Once confirmed, deploy via infra register
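
For step 5, batch tests can be driven by a plain script that shells out to the CLI. A hedged sketch: it assumes a local tests/ directory of JSON payloads and that infra test exits non-zero on failure; only the flags documented above are used:

import subprocess
from pathlib import Path

# Assumption: one JSON payload per file under tests/; adjust to your layout.
for input_file in sorted(Path("tests").glob("*.json")):
    result = subprocess.run(
        ["infra", "test", "--model", "model.yaml", "--input", str(input_file)],
        capture_output=True,
        text=True,
    )
    status = "ok" if result.returncode == 0 else f"failed (exit {result.returncode})"
    print(f"{input_file.name}: {status}")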


Summary

InfraMind’s simulation and testing CLI mirrors the real execution behavior of jobs on the mesh — including schema checks, container isolation, runtime latency, and proof generation. Whether you're validating a new model, debugging inconsistent outputs, or optimizing latency before deployment, these tools ensure that your container behaves predictably under production conditions. Nothing is assumed. Everything is verified.
