If you deploy the code for your Lambda function via Terraform, that code is usually zipped and uploaded to Amazon S3 by Terraform, and the ZIP file's hash is then stored in Terraform's state. However, we have observed that zipping the same files can produce ZIP archives with different hashes on different machines. This means that if you collaborate with colleagues, e.g. via git, each run of Terraform may see a different hash for the code's ZIP archive and try to replace the Lambda function even though the code itself is unchanged.
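One common cause is that ZIP archives embed each entry's modification timestamp, and git does not preserve mtimes across checkouts. The following minimal Python sketch (the file name `script.py` mirrors the example below; the content is arbitrary) zips identical content with two different timestamps and shows that the resulting hashes differ:

```python
import hashlib
import io
import zipfile

def zip_bytes(date_time):
    """Zip the same one-line script, stamping the entry with date_time."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        # ZipInfo lets us control the timestamp stored in the archive
        info = zipfile.ZipInfo("script.py", date_time=date_time)
        zf.writestr(info, "def handler(event, context):\n    return 'ok'\n")
    return buf.getvalue()

# Identical file content, two different modification times --
# as happens when colleagues check out the same repo at different moments.
a = zip_bytes((2024, 1, 1, 0, 0, 0))
b = zip_bytes((2024, 1, 2, 0, 0, 0))

print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # False
```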
Workaround
This workaround is for single-file Lambdas. You can't use the source_code_hash attribute of the aws_lambda_function resource, since it refers to the ZIP archive; you have to tell Terraform to ignore it. Instead, hash the source code file yourself and append the hash to the output filename. That way, the filename that gets uploaded to AWS S3 changes only when the actual code changes, which in turn triggers an update of the aws_lambda_function resource.
Example
locals {
  lambda_function_file_path   = "${path.module}/assets/script.py"
  lambda_function_hash        = filesha256(local.lambda_function_file_path)
  lambda_function_output_path = "${path.module}/assets/script-${local.lambda_function_hash}.zip"
}

data "archive_file" "function_zip" {
  type        = "zip"
  source_file = local.lambda_function_file_path
  output_path = local.lambda_function_output_path
}

resource "aws_lambda_function" "function" {
  # referencing the archive_file data source ensures the ZIP
  # is built before the function is created or updated
  filename      = data.archive_file.function_zip.output_path
  function_name = "function"
  role          = aws_iam_role.function.arn
  # the handler module must match the source file name (script.py)
  handler       = "script.handler"
  runtime       = "python3.8"

  # we ignore the hash of the function as
  # we control code updates via the uploaded filename
  lifecycle {
    ignore_changes = [
      source_code_hash,
      last_modified,
    ]
  }
}