Terraform not overwriting an object storage object
My GitHub CI process uses Terraform then Ansible to manage Linode resources.
On the first run, Terraform generates a local Ansible inventory file and copies it to a Linode Object Storage bucket.
But when the inventory changes, subsequent CI runs regenerate the local Ansible file correctly, yet the revised file is not copied to object storage: the Terraform plan stage doesn't see that the remote object needs replacing (unless I first delete the remote Object Storage object).
Here's an extract from the Terraform file (modified to anonymise):
resource "local_file" "ansible_inventory" {
depends_on = [linode_instance.hosts]
filename = "inventory"
content = templatefile(
"inventory.tpl",
{
host_ip_addresses = linode_instance.hosts.*.ip_address
host_names = linode_instance.hosts.*.label
}
)
}
resource "linode_object_storage_object" "ansible_inventory" {
depends_on = [local_file.ansible_inventory]
bucket = "pale"
cluster = "us-east-1"
key = "ansible_inventory"
source = "inventory"
access_key = var.bucket_access_key_ansible_inventory
secret_key = var.bucket_secret_key_ansible_inventory
}
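For context, inventory.tpl isn't shown above; a simplified stand-in that renders the two lists into an INI-style inventory would be along these lines:
# simplified example template; the real inventory.tpl will differ
%{ for index, name in host_names ~}
${name} ansible_host=${host_ip_addresses[index]}
%{ endfor ~}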
I've been unable to find any documentation explaining the behaviour, or any flags for linode_object_storage_object that would help (I was hoping to find something like "always_overwrite_remote = true").
I've been able to work around the problem for now by getting the GitHub workflow to delete the remote object storage object using "s3cmd … del …" before any Terraform operations are run, but this grates horribly against Terraform's declarative nature.
So, is this a misunderstanding on my part regarding the workings of Terraform, Object Storage, or linode_object_storage_object? Or is there an underlying problem…? My suspicion currently lies with linode_object_storage_object.
3 Replies
Hello!
It looks like the etag argument is what you're looking for. This is used to trigger object updates whenever the hash of the source file changes.
For example:
resource "linode_object_storage_object" "ansible_inventory" {
depends_on = [local_file.ansible_inventory]
bucket = "pale"
cluster = "us-east-1"
key = "ansible_inventory"
source = "inventory"
etag = filemd5("inventory")
access_key = var.bucket_access_key_ansible_inventory
secret_key = var.bucket_secret_key_ansible_inventory
}
Thank you for this; etag was indeed the solution (my bad for missing that in the docs).
I wasn't able to use etag = filemd5("inventory") though, due to this error at the terraform plan stage:
│ Error: Error in function call
│
│ on main.tf line 62, in resource "linode_object_storage_object" "ansible_inventory":
│ 62: etag = filemd5("inventory")
│
│ Call to function "filemd5" failed: no file exists at inventory; this
│ function works only with files that are distributed as part of the
│ configuration source code, so if this file will be created by a resource in
│ this configuration you must instead obtain this result from an attribute of
│ that resource.
╵
Error: Process completed with exit code 1.
Which makes sense, as the "inventory" file is indeed created by a resource in my configuration.
The following does seem to work though, by passing the "inventory" file's content into the md5 function:
etag = md5(local_file.ansible_inventory.content)
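For anyone who finds this later, the full working resource, with the etag derived from the rendered content rather than the file on disk, looks something like this (and because it now references an attribute of local_file.ansible_inventory, Terraform infers the dependency, so the explicit depends_on can be dropped):

resource "linode_object_storage_object" "ansible_inventory" {
  bucket  = "pale"
  cluster = "us-east-1"
  key     = "ansible_inventory"
  source  = "inventory"

  # Hash the rendered content, not the file on disk, so the value is
  # known at plan time even before the local file has been created.
  etag = md5(local_file.ansible_inventory.content)

  access_key = var.bucket_access_key_ansible_inventory
  secret_key = var.bucket_secret_key_ansible_inventory
}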