Terraform aws_ecs_service force_new_deployment: How to Make It Work?
```hcl
resource "aws_ecs_service" "main" {
  # ...snip...
  force_new_deployment = true
  triggers = {
    redeployment = # what to put in here?
  }
}
```
plantimestamp
```hcl
resource "aws_ecs_service" "main" {
  # ...snip...
  force_new_deployment = true
  triggers = {
    redeployment = plantimestamp()
  }
}
```
This will cause the service to be redeployed on every apply, since the timestamp changes on each run.
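As a standalone sketch (not from the original post), the key property of `plantimestamp()` can be seen in isolation: it is evaluated once at plan time and keeps that value through apply, so plan and apply always agree, but every new run produces a fresh value.

```hcl
# Standalone sketch: plantimestamp() is fixed for the whole plan/apply cycle,
# so the trigger value never differs between plan and apply; it only differs
# between runs, which is exactly what forces a redeployment every time.
output "redeploy_trigger" {
  value = plantimestamp()
}
```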
some value that changes
```hcl
resource "aws_ecs_service" "main" {
  # ...snip...
  force_new_deployment = true
  triggers = {
    redeployment = aws_ecs_task_definition.task.revision
  }
}
```
Using a value that only changes during apply, such as the task definition revision, will result in this kind of error:

```
When expanding the plan for aws_ecs_service.main to include new values learned
so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an
invalid new value for .triggers["redeployment"]: was cty.StringVal("42"), but
now cty.StringVal("43").
```
The value at planning time was 42; during apply, the new task definition is registered, bumping the revision to 43, so that is the value the service actually receives. The result is a plan/apply difference that Terraform does not accept.
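To make the failure concrete, here is a hedged sketch of the other side of the dependency (resource names, family, and image are assumptions, not from the original post): any change to the task definition registers a new revision during apply, so `revision` is unknown, or stale, at plan time.

```hcl
# Hypothetical task definition the service points at. Changing
# container_definitions registers a new revision on apply (42 -> 43), so the
# "revision" attribute only settles after the service's plan was already fixed.
resource "aws_ecs_task_definition" "task" {
  family = "app" # assumed family name
  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "example.com/app:latest" # assumption: any image reference
      essential = true
    }
  ])
}
```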
don't rely on Terraform

Leave the trigger machinery off entirely and force the deployment from outside Terraform, for example with the AWS CLI in a GitHub Actions workflow:
```hcl
resource "aws_ecs_service" "main" {
  # ...snip...
  force_new_deployment = false
  triggers = {}
}
```
```yaml
- name: Deploy to ECS
  id: DeployToECS
  run: |
    cd terraform
    terraform init
    terraform apply
    echo "CLUSTER_ARN=$(terraform output -raw ecs_cluster_arn)" >> $GITHUB_OUTPUT
    echo "SERVICE_ARN=$(terraform output -raw ecs_service_arn)" >> $GITHUB_OUTPUT
- name: Trigger new ECS service deployment
  run: aws ecs update-service --cluster ${{ steps.DeployToECS.outputs.CLUSTER_ARN }} --service ${{ steps.DeployToECS.outputs.SERVICE_ARN }} --force-new-deployment
```
— https://github.com/ministryofjustice/modernisation-platform-terraform-ecs-cluster/pull/28
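The workflow above reads two Terraform outputs. A minimal sketch of how they might be defined (the output names come from the `terraform output -raw` calls above; the attribute choices are assumptions based on `aws_ecs_service` exporting its own ARN as `id` and the cluster identifier it was given as `cluster`):

```hcl
# Assumed definitions for the outputs consumed by the workflow above.
output "ecs_cluster_arn" {
  value = aws_ecs_service.main.cluster # the cluster ARN the service was given
}

output "ecs_service_arn" {
  value = aws_ecs_service.main.id # the id of aws_ecs_service is the service ARN
}
```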
use terraform_data to provide a plan/apply-fixed value1
```hcl
resource "terraform_data" "trigger_task_redeploy" {
  input = plantimestamp()

  triggers_replace = {
    revision = aws_ecs_task_definition.task.revision
  }

  lifecycle {
    ignore_changes = [input]
  }
}

resource "aws_ecs_service" "main" {
  # ...snip...
  force_new_deployment = false
  triggers = {
    "redeploy_trigger" = terraform_data.trigger_task_redeploy.input
  }
}
```
The terraform_data input changes every time Terraform runs, but thanks to the ignore_changes lifecycle meta-argument, it does not trigger anything by itself.
When the task definition revision changes, however, triggers_replace forces the terraform_data resource to be replaced, which changes its input and thereby triggers a change in the aws_ecs_service resource. But instead of the revision itself, the trigger value is the plantimestamp() one, which does not suffer from the plan/apply discrepancy.
1. It works by inserting terraform_data as follows.

```hcl
resource "aws_ecs_service" "main" {
  force_new_deployment = true
  triggers = {
    redeployment = terraform_data.triggers.input
  }
}

resource "terraform_data" "triggers" {
  input = plantimestamp()

  triggers_replace = {
    secret_id  = aws_secretsmanager_secret_version.main.secret_id
    version_id = aws_secretsmanager_secret_version.main.version_id
  }

  lifecycle {
    ignore_changes = [input]
  }
}
```

— https://github.com/hashicorp/terraform-provider-aws/issues/29241