Generic

Generic artifact reference: run any containerized task as a one-shot Kubernetes Job

Overview

Generic artifacts extend Bluebricks orchestration beyond native IaC tools. They let you run any containerized task (scripts, CLIs, data migrations, automation flows) as a one-shot Kubernetes Job that executes your code, captures outputs, and cleans up.

| Feature | Details |
| --- | --- |
| Bring-your-own Docker image | Run any language or runtime that fits in a container |
| No-code plan | Planning always reports "no changes" |
| Container I/O wiring | Props and secrets injected as JSON files and environment variables |
| Portable outputs | Your script writes `outputs.json`; Bluebricks stores the values for downstream use |
| Ephemeral execution | Kubernetes Job, auto-cleaned after exit |

Common use cases:

| Use case | Example image | Typical command |
| --- | --- | --- |
| Shell automation | `alpine:latest` | `sh run.sh` |
| Python script | `python:3.12` | `python main.py` |
| Ansible playbook | `alpine/ansible` | `ansible-playbook site.yml` |
| Database migration | `python:3.11-slim` | `python migrate.py` |
| API integration | `curlimages/curl:latest` | `sh api_call.sh` |

For a complete guide to how inputs and outputs work across all IaC tools, see Inputs & Outputs.

Required files and directory structure

A Generic artifact requires user-defined scripts or executables in the directory specified by native.path:

my-generic-artifact/
├── bricks.json              # Artifact manifest
└── src/                     # native.path points here → mounted at /workspace
    ├── main.py              # Entry script
    ├── helper.py
    └── sidefiles/
        └── config.json

The native.path directory is mounted as /workspace inside the container at runtime. This is the working directory and the only writable mount point.

bricks.json reference

| Field | Required | Description |
| --- | --- | --- |
| `image` | No | Docker image (default: `busybox:stable`). Must come from an approved registry. |
| `command` | No | Entry command array. Omit to use the image's default entrypoint. |
| `args` | No | Arguments array appended to `command`. |
| `path` | Yes | Folder in the artifact to mount as `/workspace`. |
| `env_vars` | No | Key-value map of environment variables injected into the container. See Container Configuration for details. |
| `lifecycle` | No | Per-stage overrides for plan, apply, and destroy. See Lifecycle and Execution. |
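A minimal manifest sketch tying these fields together. It assumes, following the `native.path` wording above, that the fields nest under a `native` key; the values are illustrative, and other manifest fields (name, version, props) are omitted:

```json
{
  "native": {
    "image": "docker.io/library/python:3.12",
    "command": ["python"],
    "args": ["main.py"],
    "path": "src",
    "env_vars": {
      "LOG_LEVEL": "info"
    }
  }
}
```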

Approved container registries

Only images from the registries below are accepted at publish time. Attempting to use an image from any other registry will be rejected.

| Registry | Description |
| --- | --- |
| `docker.io` | Docker Hub |
| `ghcr.io` | GitHub Container Registry |
| `quay.io` | Red Hat Quay |
| `registry.gitlab.com` | GitLab Container Registry |
| `mcr.microsoft.com` | Microsoft Container Registry |
| `gcr.io` | Google Container Registry |
| `artifactregistry.googleapis.com` | Google Artifact Registry |
| `ecr.aws` | AWS ECR |
| `*.us-east-1.amazonaws.com` | AWS ECR (us-east-1) |
| `*.eu-west-1.amazonaws.com` | AWS ECR (eu-west-1) |

How to create this artifact

The only requirement is a directory with your script or executable. Any language, any runtime: if it runs in a container, Bluebricks can orchestrate it. Bluebricks handles input injection, output capture, and container lifecycle for you.

You can create a Generic artifact in two ways:


The sections below explain how Bluebricks maps your code to inputs, outputs, and operations under the hood. Everything here is optional reading. To get started, head to Creating Blueprints.

Inputs

What becomes an input

Generic artifacts have no native input construct; all inputs are declared explicitly as props in bricks.json. There is no auto-discovery.

How inputs are delivered at runtime

Bluebricks delivers inputs through three mechanisms simultaneously:

| Mechanism | Location inside the container | Notes |
| --- | --- | --- |
| Environment variables | `GREETING`, `REGION`, ... (one per prop/secret) | Secrets are also passed; avoid printing them |
| `/workspace/vars.json` | JSON file with all props | Non-secret configuration |
| `/workspace/secrets.json` | JSON file with all secrets | `0600` permissions |

Reading inputs in code
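A minimal Python sketch reading both delivery mechanisms. The prop name `greeting` (mirrored as the `GREETING` environment variable) is a hypothetical example, not part of the contract:

```python
import json
import os


def load_inputs(workspace="/workspace"):
    """Read props and secrets the way Bluebricks delivers them."""
    # All props arrive as a JSON object in vars.json
    with open(os.path.join(workspace, "vars.json")) as f:
        props = json.load(f)

    # Secrets live in a separate 0600 file
    secrets = {}
    secrets_path = os.path.join(workspace, "secrets.json")
    if os.path.exists(secrets_path):
        with open(secrets_path) as f:
            secrets = json.load(f)

    # The same values are also mirrored as environment variables,
    # e.g. a prop named "greeting" appears as GREETING
    greeting = os.environ.get("GREETING", props.get("greeting"))
    return props, secrets, greeting
```

Prefer the JSON files for structured values; the environment variables are convenient for simple strings.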

Auto-injected environment variables

Bluebricks injects the following environment variables into every Generic artifact execution, in addition to any env_vars you define:

| Variable | Description |
| --- | --- |
| `BRICKS_ACTION` | Current deployment stage: `plan`, `apply`, `plan-destroy`, or `destroy` |
| `BRICKS_STATE` | Base64-encoded JSON of the previous deployment state (when state exists) |
| `BRICKS_JOB_ID` | UUID identifying the current job execution |

See Lifecycle and Execution for details on using these variables.
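A sketch of consuming these variables in Python, assuming `BRICKS_STATE` decodes to a JSON object as described above:

```python
import base64
import json
import os


def read_bricks_context():
    """Return the stage, job id, and decoded prior state for this run."""
    # plan | apply | plan-destroy | destroy
    action = os.environ.get("BRICKS_ACTION")
    job_id = os.environ.get("BRICKS_JOB_ID")

    state = None
    raw = os.environ.get("BRICKS_STATE")
    if raw:  # only set when a previous deployment left state behind
        state = json.loads(base64.b64decode(raw))
    return action, job_id, state
```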

Outputs

What becomes an output

Your code defines the outputs by writing a JSON file. There is no native output construct; outputs are whatever your script produces.

Output contract

Your code must create /workspace/outputs.json before exiting:

| Rule | Detail |
| --- | --- |
| File name | `outputs.json` (exact) |
| Location | `/workspace/outputs.json` |
| Size limit | 1 MiB maximum |
| Format | Single JSON object (nested objects/arrays allowed as values) |
| Auto-added | `job_id` is automatically included |

Writing outputs in code
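A minimal sketch of the contract in Python; the `cidr_list` and `status` keys are illustrative, not required:

```python
import json


def write_outputs(values, path="/workspace/outputs.json"):
    """Write the output file for Bluebricks to capture.

    `values` must serialize to a JSON object under 1 MiB.
    """
    with open(path, "w") as f:
        json.dump(values, f)


# At the end of your script:
# write_outputs({"cidr_list": ["10.0.0.0/16"], "status": "ok"})
```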

Error handling pattern

Always write outputs.json even on failure, then exit with a non-zero code:
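One way to structure this in Python (the `status`/`error` keys are an illustrative convention, not part of the contract):

```python
import json
import sys


def main(workspace="/workspace"):
    outputs = {"status": "failed"}
    exit_code = 1
    try:
        # ... real work goes here ...
        outputs = {"status": "ok"}
        exit_code = 0
    except Exception as exc:
        outputs["error"] = str(exc)  # surface the failure downstream

    # Write outputs.json no matter what happened above
    with open(f"{workspace}/outputs.json", "w") as f:
        json.dump(outputs, f)
    return exit_code


# Entrypoint: sys.exit(main())
```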

If the file is missing or invalid, the deployment fails.

Referencing outputs downstream

In a blueprint, reference a Generic artifact's output the same way as any other package's, for example Data.hello_world.cidr_list.

Testing locally

You can test your Generic artifact locally with Docker before publishing:
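A local sketch that approximates the runtime environment: mount the `native.path` directory as `/workspace`, inject the Bluebricks variables by hand, and run as UID 1000. The image, paths, and values are examples:

```shell
docker run --rm \
  -v "$(pwd)/src:/workspace" \
  -w /workspace \
  --user 1000:1000 \
  -e BRICKS_ACTION=apply \
  -e BRICKS_JOB_ID=local-test \
  python:3.12 python main.py

# Then check what your script produced:
cat src/outputs.json
```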

Supported operations

| Operation | What happens | Notes |
| --- | --- | --- |
| Plan | Always returns an empty plan (`{}`) and lists outputs as "known after apply" | No resources are previewed; Generic is execution-only |
| Apply | Launches a Kubernetes Job with your image, mounts your artifact, runs the command, captures `outputs.json`, then cleans up | Job fails if the exit code is non-zero, the timeout is hit, or `outputs.json` is invalid |
| Plan Destroy | Also always empty | Nothing to show; there are no persistent resources |
| Apply Destroy | No-op; marks the deployment destroyed immediately | Any cleanup must be coded into your script |

Runtime environment

| Resource | Value |
| --- | --- |
| CPU | 0.1 core guaranteed, burstable |
| Memory | 256 MiB request, 256 MiB limit |
| Timeout | 60 minutes (hard stop) |
| User | UID 1000 (non-root); no privilege escalation |
| Filesystem | `/workspace` is read-write; the rest of the filesystem is read-only |

Need more resources? Split the task, optimize code, or run it in your own environment and call back to Bluebricks via API.

Python dependencies

If your Python script needs remote dependencies at runtime:
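One approach is to install them at container start with pip's `--user` flag, which works under the non-root UID 1000. A sketch, with `requests` as a hypothetical dependency:

```python
import subprocess
import sys


def install_requirements(packages):
    """Install third-party packages into the per-user site-packages.

    --user avoids writing to the system site-packages, which the
    non-root container user cannot modify.
    """
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--user", *packages]
    )


# install_requirements(["requests"])  # then: import requests
```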

This installs dependencies into the user's local Python package directory, avoiding the need for system-wide write permissions.

Best practices

  • Pick a minimal image with only what you need; smaller = faster pull

  • Pin image digests (python@sha256:...) for reproducible builds

  • Log verbosely to stdout; Bluebricks streams logs in real time

  • Validate inputs early; exit 1 with a helpful message if invalid

  • Keep execution under 5 minutes or design for resumable/partitioned runs

  • Never echo secrets; they're present in environment variables, so an accidental print(os.environ) will leak them to the logs
