# Generic

## Overview

Generic artifacts extend Bluebricks orchestration beyond native IaC tools. They let you run any containerized task (scripts, CLIs, data migrations, automation flows) as a one-shot Kubernetes Job that executes your code, captures outputs, and cleans up.

<table><thead><tr><th width="289.23828125">Feature</th><th>Details</th></tr></thead><tbody><tr><td><strong>Bring-your-own Docker image</strong></td><td>Run any language or runtime that fits in a container</td></tr><tr><td><strong>No-code plan</strong></td><td>Planning always reports "no changes"</td></tr><tr><td><strong>Container I/O wiring</strong></td><td>Props and secrets injected as JSON files and environment variables</td></tr><tr><td><strong>Portable outputs</strong></td><td>Your script writes <code>outputs.json</code>; Bluebricks stores the values for downstream use</td></tr><tr><td><strong>Ephemeral execution</strong></td><td>Kubernetes Job, auto-cleaned after exit</td></tr></tbody></table>

Common use cases:

<table><thead><tr><th width="177.2109375">Use case</th><th width="248.5078125">Example image</th><th>Typical command</th></tr></thead><tbody><tr><td>Shell automation</td><td><code>alpine:latest</code></td><td><code>sh run.sh</code></td></tr><tr><td>Python script</td><td><code>python:3.12</code></td><td><code>python main.py</code></td></tr><tr><td>Ansible playbook</td><td><code>alpine/ansible</code></td><td><code>ansible-playbook site.yml</code></td></tr><tr><td>Database migration</td><td><code>python:3.11-slim</code></td><td><code>python migrate.py</code></td></tr><tr><td>API integration</td><td><code>curlimages/curl:latest</code></td><td><code>sh api_call.sh</code></td></tr></tbody></table>

For a complete guide to how inputs and outputs work across all IaC tools, see [Inputs & Outputs](https://bluebricks.co/docs/core-concepts/packages/inputs-and-outputs).

## Required files and directory structure

A Generic artifact requires user-defined scripts or executables in the directory specified by `native.path`:

```
my-generic-artifact/
├── bricks.json              # Artifact manifest
└── src/                     # native.path points here → mounted at /workspace
    ├── main.py              # Entry script
    ├── helper.py
    └── sidefiles/
        └── config.json
```

The `native.path` directory is mounted as `/workspace` inside the container at runtime. This is the working directory and the only writable mount point.

## bricks.json reference

<table><thead><tr><th width="119.76171875">Field</th><th width="131.4296875">Required</th><th>Description</th></tr></thead><tbody><tr><td><strong><code>image</code></strong></td><td>No</td><td>Docker image (default: <code>busybox:stable</code>). Must come from an <a href="container-config#approved-container-registries">approved registry</a>.</td></tr><tr><td><strong><code>command</code></strong></td><td>No</td><td>Entry command array. Omit to use the image's default entrypoint.</td></tr><tr><td><strong><code>args</code></strong></td><td>No</td><td>Arguments array appended to command.</td></tr><tr><td><strong><code>path</code></strong></td><td>Yes</td><td>Folder in the artifact to mount as <code>/workspace</code>.</td></tr><tr><td><strong><code>env_vars</code></strong></td><td>No</td><td>Key-value map of environment variables injected into the container. See <a href="generic/container-config">Container Configuration</a> for details.</td></tr><tr><td><strong><code>lifecycle</code></strong></td><td>No</td><td>Per-stage overrides for plan, apply, and destroy. See <a href="generic/lifecycle">Lifecycle and Execution</a>.</td></tr></tbody></table>
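For orientation, a minimal manifest might look like the sketch below. The values are illustrative, and the nesting under `native` follows the `native.path` convention referenced above; check a published artifact for the exact schema:

```json
{
  "native": {
    "image": "python:3.12",
    "command": ["python"],
    "args": ["main.py"],
    "path": "src",
    "env_vars": {
      "LOG_LEVEL": "info"
    }
  }
}
```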

## Approved container registries

Only images from the registries below are accepted at publish time.

| Registry                          | Description                  |
| --------------------------------- | ---------------------------- |
| `docker.io`                       | Docker Hub                   |
| `ghcr.io`                         | GitHub Container Registry    |
| `quay.io`                         | Red Hat Quay                 |
| `registry.gitlab.com`             | GitLab Container Registry    |
| `mcr.microsoft.com`               | Microsoft Container Registry |
| `gcr.io`                          | Google Container Registry    |
| `artifactregistry.googleapis.com` | Google Artifact Registry     |
| `ecr.aws`                         | AWS ECR                      |
| `*.us-east-1.amazonaws.com`       | AWS ECR us-east-1            |
| `*.eu-west-1.amazonaws.com`       | AWS ECR eu-west-1            |

{% hint style="warning" %}
Images from registries not on this list are rejected at publish time. See [Container Configuration](https://bluebricks.co/docs/core-concepts/packages/artifacts-overview/generic/container-config) for image selection best practices.
{% endhint %}

## How to create this artifact

The only requirement is a directory with your script or executable. Any language, any runtime: if it runs in a container, Bluebricks can orchestrate it. Bluebricks handles input injection, output capture, and container lifecycle for you.

You can create a Generic artifact in two ways:

* **In the Bluebricks app** during [blueprint creation](https://github.com/bluebricks-dev/Bluebricks-Documentation/blob/main/core-concepts/packages/artifacts-overview/blueprints-overview/creating-blueprints.md): select your repository and the directory containing your code, and Bluebricks generates the artifact automatically
* **Via CLI**: run `bricks bprint publish` from the directory containing your code. See [Creating Artifacts](https://github.com/bluebricks-dev/Bluebricks-Documentation/blob/main/core-concepts/packages/artifacts-overview/generic/creating-artifacts.md) for the full workflow

***

> The sections below explain how Bluebricks maps your code to inputs, outputs, and operations under the hood. Everything here is optional reading. To get started, head to [Creating Blueprints](https://bluebricks.co/docs/core-concepts/packages/blueprints-overview/creating-blueprints).

## Inputs

### What becomes an input

Generic artifacts have no native input construct; all inputs are declared explicitly as `props` in `bricks.json`. There is no auto-discovery.

### How inputs are delivered at runtime

Bluebricks delivers inputs through three mechanisms simultaneously:

<table><thead><tr><th width="225.1796875">Mechanism</th><th>Location inside the container</th><th>Notes</th></tr></thead><tbody><tr><td><strong>Environment variables</strong></td><td><code>GREETING</code>, <code>REGION</code>, ... (one per prop/secret)</td><td>Secrets are also passed; avoid printing them</td></tr><tr><td><strong><code>/workspace/vars.json</code></strong></td><td>JSON file with all props</td><td>Non-secret configuration</td></tr><tr><td><strong><code>/workspace/secrets.json</code></strong></td><td>JSON file with all secrets</td><td>0600 permissions</td></tr></tbody></table>

### Reading inputs in code

{% tabs %}
{% tab title="Python" %}

```python
import json

with open('/workspace/vars.json', 'r') as f:
    variables = json.load(f)

database_host = variables['database_host']
```

{% endtab %}

{% tab title="Node.js" %}

```javascript
const fs = require('fs');

const vars = JSON.parse(fs.readFileSync('/workspace/vars.json', 'utf8'));
const databaseHost = vars.database_host;
```

{% endtab %}

{% tab title="Bash" %}

```bash
#!/bin/bash
VARS_FILE="/workspace/vars.json"

DATABASE_HOST=$(jq -r '.database_host' "$VARS_FILE")
DATABASE_NAME=$(jq -r '.database_name' "$VARS_FILE")
```

{% endtab %}
{% endtabs %}

### Auto-injected environment variables

Bluebricks injects the following environment variables into every Generic artifact execution, in addition to any `env_vars` you define:

| Variable        | Description                                                             |
| --------------- | ----------------------------------------------------------------------- |
| `BRICKS_ACTION` | Current deployment stage: `plan`, `apply`, `plan-destroy`, or `destroy` |
| `BRICKS_STATE`  | Base64-encoded JSON of previous deployment state (when state exists)    |
| `BRICKS_JOB_ID` | UUID identifying the current job execution                              |

See [Lifecycle and Execution](https://bluebricks.co/docs/core-concepts/packages/artifacts-overview/generic/lifecycle) for details on using these variables.
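A sketch of how a script might branch on these variables; `load_previous_state` assumes `BRICKS_STATE` carries base64-encoded JSON, as described in the table above:

```python
import base64
import json
import os

def load_previous_state() -> dict:
    """Decode BRICKS_STATE (base64-encoded JSON) if a previous deployment exists."""
    raw = os.environ.get("BRICKS_STATE")
    if not raw:
        return {}
    return json.loads(base64.b64decode(raw))

def main() -> None:
    action = os.environ.get("BRICKS_ACTION", "apply")
    state = load_previous_state()
    if action == "apply":
        print(f"applying; previous state had {len(state)} keys")
    elif action == "destroy":
        # Generic destroy is a no-op server-side; cleanup lives in your script
        print("running script-level cleanup")
    else:  # plan / plan-destroy: Generic plans are always empty
        print(f"{action}: nothing to do")

if __name__ == "__main__":
    main()
```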

## Outputs

### What becomes an output

Your code defines the outputs by writing a JSON file. There is no native output construct; outputs are whatever your script produces.

### Output contract

Your code must create `/workspace/outputs.json` before exiting:

<table><thead><tr><th width="149.51171875">Rule</th><th>Detail</th></tr></thead><tbody><tr><td><strong>File name</strong></td><td><code>outputs.json</code> (exact)</td></tr><tr><td><strong>Location</strong></td><td><code>/workspace/outputs.json</code></td></tr><tr><td><strong>Size limit</strong></td><td>1 MiB maximum</td></tr><tr><td><strong>Format</strong></td><td>JSON object at the top level; values may be nested objects or arrays</td></tr><tr><td><strong>Auto-added</strong></td><td><code>job_id</code> is automatically included</td></tr></tbody></table>

### Writing outputs in code

{% tabs %}
{% tab title="Python" %}

```python
import json
import pathlib

# Outputs become available to downstream packages after apply
outputs = {"message": "done", "records": 42}

pathlib.Path("/workspace/outputs.json").write_text(json.dumps(outputs, indent=2))
```

{% endtab %}

{% tab title="Node.js" %}

```javascript
const fs = require('fs');

const outputs = {
    migration_status: 'success',
    rows_affected: 150,
    completion_time: new Date().toISOString()
};

fs.writeFileSync('/workspace/outputs.json', JSON.stringify(outputs, null, 2));
```

{% endtab %}

{% tab title="Bash" %}

```bash
#!/bin/bash
cat > /workspace/outputs.json <<EOF
{
  "migration_status": "success",
  "rows_affected": 150,
  "completion_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
```

{% endtab %}
{% endtabs %}

### Error handling pattern

Always write `outputs.json` even on failure, then exit with a non-zero code:

```python
import json, sys

try:
    result = perform_operation()
    with open('/workspace/outputs.json', 'w') as f:
        json.dump({'status': 'success', 'result': result}, f)
    sys.exit(0)
except Exception as e:
    with open('/workspace/outputs.json', 'w') as f:
        json.dump({'status': 'failed', 'error': str(e)}, f)
    sys.exit(1)
```

If the file is missing or invalid, the deployment fails.

### Referencing outputs downstream

In a blueprint, reference a Generic artifact's output the same way as any other package's, for example `Data.hello_world.cidr_list`.

## Testing locally

You can test your Generic artifact locally with Docker before publishing:

```bash
# Create a test vars.json
cat > /tmp/vars.json << 'EOF'
{
  "database_host": "localhost",
  "database_name": "test_db"
}
EOF

# Run the container with mounted files
docker run --rm \
  -v /tmp/vars.json:/workspace/vars.json \
  -v "$(pwd)/src":/workspace \
  python:3.11-slim \
  python /workspace/main.py

# Check the outputs (written into ./src, which is mounted as /workspace)
cat src/outputs.json
```

## Supported operations

<table><thead><tr><th width="138.7109375">Operation</th><th>What happens</th><th>Notes</th></tr></thead><tbody><tr><td><strong>Plan</strong></td><td>Always returns an empty plan (<code>{}</code>) and lists outputs as "known after apply"</td><td>No resources are previewed; Generic is execution-only</td></tr><tr><td><strong>Apply</strong></td><td>Launches a Kubernetes Job with your image, mounts your artifact, runs the command, captures <code>outputs.json</code>, then cleans up</td><td>Job fails if exit code is not 0, timeout is hit, or <code>outputs.json</code> is invalid</td></tr><tr><td><strong>Plan Destroy</strong></td><td>Also always empty</td><td>Nothing to show; there are no persistent resources</td></tr><tr><td><strong>Apply Destroy</strong></td><td>No-op. Marks the deployment destroyed immediately.</td><td>Any cleanup must be coded into your script.</td></tr></tbody></table>

## Runtime environment

<table><thead><tr><th width="124.80078125">Resource</th><th>Value</th></tr></thead><tbody><tr><td><strong>CPU</strong></td><td>0.1 core guaranteed, burstable</td></tr><tr><td><strong>Memory</strong></td><td>256 MiB request, 256 MiB limit</td></tr><tr><td><strong>Timeout</strong></td><td>60 minutes (hard stop)</td></tr><tr><td><strong>User</strong></td><td>UID 1000 (non-root); no privilege escalation</td></tr><tr><td><strong>Filesystem</strong></td><td><code>/workspace</code> is read-write; remainder is read-only</td></tr></tbody></table>

Need more resources? Split the task, optimize code, or run it in your own environment and call back to Bluebricks via API.

## Python dependencies

If your Python script needs remote dependencies at runtime:

```sh
export HOME="$PWD" && pip install --user -r requirements.txt
```

This installs dependencies to the user's local Python package directory, avoiding system-wide permissions.

## Best practices

* Pick a minimal image with only what you need; smaller = faster pull
* Pin image digests (`python@sha256:...`) for reproducible builds
* Log verbosely to stdout; Bluebricks streams logs in real time
* Validate inputs early; exit 1 with a helpful message if invalid
* Keep execution under 5 minutes or design for resumable/partitioned runs
* Never echo secrets; they're in env vars, so accidental `print(os.environ)` will leak them to logs
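The validate-early bullet can be sketched as a small guard in your entry script; the required field names below are illustrative:

```python
import json
import sys

# Illustrative: replace with the props your script actually needs
REQUIRED = ("database_host", "database_name")

def validate(variables: dict) -> list[str]:
    """Return human-readable problems; an empty list means inputs are valid."""
    return [f"missing required prop: {name}" for name in REQUIRED if name not in variables]

def check_inputs(path: str = "/workspace/vars.json") -> dict:
    """Load vars.json and exit 1 with a clear message if anything is missing."""
    with open(path) as f:
        variables = json.load(f)
    problems = validate(variables)
    if problems:
        # Fail fast with a helpful message instead of a stack trace mid-run
        print("\n".join(problems), file=sys.stderr)
        sys.exit(1)
    return variables
```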
