# Managing Storage

## Overview

Storage changes include resizing disks, changing disk types, and adding new volumes. Like other [Day 2 operations](https://bluebricks.co/docs/managing-infrastructure/managing-infrastructure), you update the relevant [blueprint](https://bluebricks.co/docs/core-concepts/packages/blueprints-overview) inputs and start a new run of the environment. However, storage changes have important constraints: some modifications are in-place, while others require resource replacement.

## Prerequisites

* An existing [environment](https://bluebricks.co/docs/core-concepts/environments) with a completed run
* A blueprint that exposes inputs for storage properties (e.g., `disk_size_gb`, `storage_type`, `disk_sku`)
* Access to the [collection](https://bluebricks.co/docs/core-concepts/collections) that the environment belongs to

## How to update storage

{% tabs %}
{% tab title="Git" %}
Update the storage-related input in your environment manifest file and push the change. If you use [GitOps environments](https://bluebricks.co/docs/core-concepts/environments/gitops-environments), Bluebricks triggers a plan automatically. Review the plan carefully for any `destroy` + `create` operations before merging.

```yaml
# environment manifest (e.g., data-layer-prod.yaml)
inputs:
  disk_size_gb: 256
```

See [Managing Configuration on Git](https://bluebricks.co/docs/workflows/bluebricks-git-repository-guide/managing-configuration-on-git) for the full manifest format.
{% endtab %}

{% tab title="Bluebricks app" %}

1. Open the **Environments** page and go to the environment you want to update
2. In the three-dot menu, click **Deploy**
3. Update the storage-related input values (e.g., change `disk_size_gb` from `128` to `256`)
4. Review the plan carefully for any `destroy` + `create` operations
5. Click **Deploy**
{% endtab %}

{% tab title="CLI" %}

```bash
bricks install data-layer \
  --collection=production \
  --env-slug=data-layer-prod \
  --props '{"disk_size_gb": 256}'
```

Always preview first with `--plan-only`:

```bash
bricks install data-layer \
  --collection=production \
  --env-slug=data-layer-prod \
  --props '{"disk_size_gb": 256}' \
  --plan-only
```

{% endtab %}
{% endtabs %}

## Common storage changes

### Increase disk size

Increasing disk size is generally safe and applied in-place by most cloud providers. The underlying volume expands without data loss.

<table><thead><tr><th width="150.98046875">Cloud provider</th><th width="183.51171875">Typical input</th><th>Example change</th></tr></thead><tbody><tr><td>Azure</td><td><code>disk_size_gb</code></td><td><code>128</code> to <code>256</code></td></tr><tr><td>AWS</td><td><code>volume_size</code></td><td><code>100</code> to <code>200</code></td></tr><tr><td>GCP</td><td><code>disk_size_gb</code></td><td><code>50</code> to <code>100</code></td></tr></tbody></table>

{% hint style="info" %}
Disk size can only be increased, not decreased. If you need a smaller disk, you must create a new volume and migrate data.
{% endhint %}

### Change disk type

Switching between disk performance tiers (e.g., Standard HDD to Premium SSD) changes the IOPS and throughput profile of the volume.

```bash
bricks install data-layer \
  --collection=production \
  --env-slug=data-layer-prod \
  --props '{"storage_account_type": "Premium_LRS"}'
```

{% hint style="warning" %}
Some disk type changes require resource replacement depending on the cloud provider and IaC resource definition. Always run with `--plan-only` first and check whether the plan shows an in-place update or a destroy/create cycle.
{% endhint %}
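
One lightweight way to guard against accidental replacements is to scan the saved `--plan-only` output for destroy actions before deploying. The sketch below assumes a plain-text plan in which replacements mention the word `destroy`; the exact plan format depends on your IaC engine, so adjust the pattern to match yours.

```shell
#!/bin/sh
# Sketch: a pre-deploy guard that scans plan text for destroy actions.
# The "destroy" keyword and plain-text plan format are assumptions about
# your IaC engine's output, not a documented Bluebricks interface.
plan_has_destroy() {
  grep -qiE '(^|[^a-z])destroy' -
}

# Example: a plan line that forces replacement trips the guard
if printf 'disk_type change forces replacement: destroy then create\n' | plan_has_destroy; then
  echo "BLOCKED: plan contains destroy operations -- review before deploying"
fi
```

In CI, you could pipe the plan output through a check like this and fail the pipeline when it trips, so a destructive storage change can never merge unreviewed.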

### Add additional volumes

If the blueprint supports multiple volumes (e.g., a `data_disks` list input), you can add new disks by updating the input:

```bash
bricks install data-layer \
  --collection=production \
  --env-slug=data-layer-prod \
  --props-file=./storage-config.json
```

<details>

<summary>Example storage-config.json</summary>

```json
{
  "data_disks": [
    { "name": "data-01", "size_gb": 256, "type": "Premium_LRS" },
    { "name": "data-02", "size_gb": 512, "type": "Premium_LRS" }
  ]
}
```

</details>

## Constraints and caveats

Storage changes carry more risk than compute scaling because they can involve data. Keep these constraints in mind:

* **Shrinking disks is not supported** by most cloud providers. You must create a new volume and migrate data manually
* **Disk type changes may require replacement**: if the IaC resource does not support in-place type changes, the plan will show a destroy + create cycle. This means data loss unless you have backups
* **OS disks vs data disks**: changing the OS disk often requires VM replacement. Data disk changes are usually independent
* **Filesystem expansion**: increasing disk size at the cloud layer does not automatically expand the filesystem. Your application or startup scripts must handle partition and filesystem resizing
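
To illustrate the filesystem-expansion point: after the cloud layer grows the disk, a Linux guest typically needs to grow the partition and then the filesystem. The sketch below prints the commands for a given filesystem type rather than running them; the device `/dev/sda`, partition `1`, and mount point `/` are assumptions, and the real commands require root.

```shell
#!/bin/sh
# Sketch: print the commands needed to expand a grown disk into the filesystem.
# Device /dev/sda, partition 1, and mount point / are assumptions -- adjust them.
expansion_commands() {
  fstype="$1"
  echo "growpart /dev/sda 1"                        # grow the partition to fill the disk
  case "$fstype" in
    ext2|ext3|ext4) echo "resize2fs /dev/sda1" ;;   # ext* grows via the block device
    xfs)            echo "xfs_growfs /" ;;          # XFS grows via the mount point
    *)              echo "# unsupported filesystem: $fstype" ;;
  esac
}

expansion_commands ext4
```

Baking this logic into a startup script is a common way to make resizes hands-off: the script runs on boot, detects the filesystem type, and expands only when there is unallocated space.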

{% hint style="danger" %}
If the plan shows a `destroy` operation on a storage resource, stop and verify that you have backups before proceeding. Destroyed disks cannot be recovered.
{% endhint %}

## Cloud provider behavior

Storage change behavior varies by provider. Size increases are generally safe and online, but type changes and shrinking have significant constraints.

<details>

<summary>Azure Managed Disks</summary>

* Disk size increases are in-place but may require the VM to be deallocated
* Changing between Standard HDD, Standard SSD, and Premium SSD is supported in-place for most configurations
* Ultra Disk changes have additional constraints around availability zones

</details>

<details>

<summary>AWS EBS</summary>

* Volume size increases are applied online (no downtime) for most volume types
* Volume type changes (e.g., `gp2` to `gp3`) are applied in-place
* IOPS and throughput modifications for `gp3` and `io1`/`io2` volumes are in-place
* After resizing, the OS must extend the filesystem (`resize2fs` or `xfs_growfs`)

</details>

<details>

<summary>GCP Persistent Disks</summary>

* Disk size increases are online and do not require VM downtime
* Switching between `pd-standard`, `pd-balanced`, and `pd-ssd` requires creating a new disk
* Regional persistent disks have additional replication constraints

</details>

## What to check after storage changes

1. **Run status**: confirm the run completed successfully
2. **Disk state**: verify the new size and type in your cloud provider console
3. **Filesystem**: confirm the OS-level filesystem reflects the new disk size
4. **Application health**: check that databases and storage-dependent services are operating correctly
5. **Backups**: verify your backup schedule covers the updated volumes
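
As a quick sanity check for step 3, you can compare a mount point's filesystem size against the size you expected after the resize. This is a sketch: the `MOUNT` and `EXPECTED_GB` values are assumptions to replace with your own, and `df --output` assumes GNU coreutils.

```shell
#!/bin/sh
# Sketch: verify a filesystem is at least the expected size after a resize.
# MOUNT and EXPECTED_GB are assumptions -- set them for your environment.
MOUNT="${MOUNT:-/}"
EXPECTED_GB="${EXPECTED_GB:-1}"
actual_kb=$(df -k --output=size "$MOUNT" | tail -1 | tr -d ' ')
actual_gb=$((actual_kb / 1024 / 1024))
if [ "$actual_gb" -ge "$EXPECTED_GB" ]; then
  echo "OK: $MOUNT is ${actual_gb}G (expected at least ${EXPECTED_GB}G)"
else
  echo "PENDING: $MOUNT is ${actual_gb}G -- the filesystem may still need expanding"
fi
```

A `PENDING` result after a successful run usually means the cloud disk grew but the guest filesystem was never expanded (see the filesystem-expansion caveat above).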
