Create a Generic Artifact

Convert custom scripts and tools into Bluebricks packages using Docker containers.

Prerequisites

  • Bluebricks CLI installed

  • Authenticated with Bluebricks (bricks login)

  • Docker image published to an approved registry

  • Basic knowledge of Docker and containers

Quick Start

  1. Create package directory:

    mkdir my-generic-package
    cd my-generic-package
  2. Create bricks.json manually:

    {
      "name": "my-generic-package",
      "version": "0.1.0",
      "description": "Custom executor package",
      "native": {
        "type": "generic",
        "path": "./src",
        "image": "python:3.11-slim",
        "command": ["python"],
        "args": ["/workspace/scripts/my_script.py"]
      },
      "props": {
        "input_param": {
          "type": "string",
          "description": "Input parameter"
        }
      },
      "outs": {
        "job_id": {
          "type": "string",
          "description": "Job ID of the generic package job"
        },
        "result": {
          "type": "string",
          "description": "Execution result"
        }
      }
    }
  3. Create your script:

    # src/scripts/my_script.py
    import json
    
    # Read inputs passed by Bluebricks
    with open('/workspace/vars.json', 'r') as f:
        inputs = json.load(f)
    
    # Your custom logic here
    result = f"Processed: {inputs['input_param']}"
    
    # Write outputs for Bluebricks to register
    with open('/workspace/outputs.json', 'w') as f:
        json.dump({'result': result}, f)
  4. Test and deploy:

    bricks run . --dry --props-file properties.json
    bricks run . --apply --props-file properties.json
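
The commands above read inputs from a props file. A minimal properties.json matching the props schema declared in bricks.json might look like this (the value is illustrative):

```json
{
  "input_param": "hello-world"
}
```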

When to Use Generic Artifacts

Use Generic artifacts when you need to:

  • Run custom deployment scripts (Bash, Python, Node.js)

  • Execute API calls or database migrations

  • Integrate with tools not natively supported

  • Build complex workflows with custom logic

  • Orchestrate multi-step processes

How It Works

  1. Package Structure: You create bricks.json manually with custom configuration

  2. Execution: Bluebricks runs your container as a Kubernetes Job

  3. Input: Props are passed as vars.json file mounted in container

  4. Output: The container writes results to the /workspace/outputs.json file. These outputs are registered as deployment outputs and can be referenced by downstream packages using the Data.<package_id>.<output_key> syntax

  5. State: Bluebricks tracks the outputs and the input hash in state.json
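
The input/output contract described above can be exercised locally before publishing: write a vars.json, run your script's logic against it, and inspect outputs.json. A minimal sketch, where a temporary directory stands in for the container's /workspace mount (the `run_script` helper is hypothetical, not part of the Bluebricks CLI):

```python
import json
import tempfile
from pathlib import Path

def run_script(workspace: Path) -> None:
    """Mimic the container contract: read vars.json, write outputs.json."""
    inputs = json.loads((workspace / "vars.json").read_text())
    result = f"Processed: {inputs['input_param']}"
    (workspace / "outputs.json").write_text(json.dumps({"result": result}))

# Simulate what Bluebricks does: mount props, run the script, collect outputs
with tempfile.TemporaryDirectory() as tmp:
    workspace = Path(tmp)
    (workspace / "vars.json").write_text(json.dumps({"input_param": "demo"}))
    run_script(workspace)
    outputs = json.loads((workspace / "outputs.json").read_text())
    print(outputs["result"])  # Processed: demo
```

Keeping the script's logic in a function like this also makes it easy to unit-test without Docker or a cluster.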
