Manage AWS Auto Scaling Groups

AWS Auto Scaling groups (ASGs) let you easily scale and manage a collection of
EC2 instances that run the same instance configuration. You can then manage
the number of running instances manually or dynamically, allowing you to lower
operating costs. Since ASGs are dynamic, Terraform does not manage the
underlying instances directly because every scaling action would introduce
state drift. You can use Terraform lifecycle arguments to avoid drift or
accidental changes.

In this tutorial, you will use Terraform to provision and manage an Auto
Scaling group and learn how Terraform configuration supports the dynamic
aspects of the resource. You will launch an ASG with traffic managed by a load
balancer and define a scaling policy to automatically modify the number of
instances running in the group. You will learn how to use lifecycle arguments to avoid
unwanted scaling of your ASG.

Prerequisites

This tutorial assumes that you are familiar with the standard Terraform
workflow. If you are new to Terraform, complete the Get Started
tutorials first.

For this tutorial, you will need:

  • the Terraform CLI installed locally.
  • the AWS CLI installed.
  • an AWS account and associated credentials that allow you to create resources.

Clone example repository

Clone the example
repository for this
tutorial, which contains configuration for an Auto Scaling group.

$ git clone https://github.com/hashicorp/learn-terraform-aws-asg.git

Change into the repository directory.

$ cd learn-terraform-aws-asg

Review configuration

In your code editor, open the main.tf file to review the configuration in this repository.

This configuration uses the vpc
module
to create a new VPC with public subnets for you to provision the rest of the
resources in. The other resources reference the VPC module’s outputs. For
example, the aws_lb_target_group resource references the VPC ID.

main.tf

resource "aws_lb_target_group" "hashicups" {
  name     = "learn-asg-hashicups"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id
}
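
The repository also defines the vpc module call itself, which this section does not reproduce. A minimal sketch of what that module block might look like follows; the source matches the public terraform-aws-modules/vpc/aws registry module, but the name and CIDR values here are illustrative and may not match the repository exactly.

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "learn-asg-vpc"   # illustrative name, may differ from the repository
  cidr = "10.0.0.0/16"     # illustrative CIDR range

  # The availability zones data source appears in this repository's state.
  azs            = data.aws_availability_zones.available.names
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}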

EC2 launch configuration

A launch configuration specifies the EC2 instance configuration that an ASG
will use to launch each new instance.

main.tf

resource "aws_launch_configuration" "terramino" {
  name_prefix     = "learn-terraform-aws-asg-"
  image_id        = data.aws_ami.amazon-linux.id
  instance_type   = "t2.micro"
  user_data       = file("user-data.sh")
  security_groups = [aws_security_group.terramino_instance.id]

  lifecycle {
    create_before_destroy = true
  }
}

Launch configurations support many arguments and customization options for your instances.

This configuration specifies:

  • a name prefix to use for all versions of this launch configuration. Terraform will append a unique identifier to the prefix for each launch configuration created.
  • an Amazon Linux AMI specified by a data source.
  • an instance type.
  • a user data script, which configures the instances to run the user-data.sh file in this repository at launch time. The user data script installs dependencies and initializes Terramino, a Terraform-skinned Tetris application.
  • a security group to associate with the instances. The security group (defined later in this file) allows ingress traffic on port 80 and egress traffic to all endpoints.

You cannot modify a launch configuration, so any changes to the definition
force Terraform to create a new resource. The create_before_destroy argument
in the lifecycle block instructs Terraform to create the new version before
destroying the original to avoid any service interruptions.
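
The launch configuration's image_id references a data source defined elsewhere in main.tf. A sketch of what that data source might look like is below; the owner and filter values are illustrative and may differ from the repository.

data "aws_ami" "amazon-linux" {
  most_recent = true
  owners      = ["amazon"]

  # Illustrative filter; the repository may match a different AMI name pattern.
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}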

Auto Scaling group

An ASG is a logical grouping of EC2 instances running the same configuration.
ASGs allow for dynamic scaling and make it easier to manage a group of
instances that host the same services.

main.tf

resource "aws_autoscaling_group" "terramino" {
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1
  launch_configuration = aws_launch_configuration.terramino.name
  vpc_zone_identifier  = module.vpc.public_subnets
}

This ASG configuration sets:

  • the minimum and maximum number of instances allowed in the group.
  • the desired count to launch (desired_capacity).
  • a launch configuration to use for each instance in the group.
  • a list of subnets where the ASG will launch new instances. This configuration references the public subnets created by the vpc module.

Load balancer resources

Since you will launch multiple instances running your Terramino application, you
must provision a load balancer to distribute traffic across the instances.

The aws_lb resource creates an application load balancer, which routes traffic at the application layer.

main.tf

resource "aws_lb" "terramino" {
  name               = "learn-asg-terramino-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.terramino_lb.id]
  subnets            = module.vpc.public_subnets
}

The aws_lb_listener
resource
specifies how to handle any HTTP requests to port 80. In this case, it
forwards all requests to the load balancer to a target group. You can define
multiple listeners with distinct listener rules for more complex traffic
routing.

main.tf

resource "aws_lb_listener" "terramino" {
  load_balancer_arn = aws_lb.terramino.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.terramino.arn
  }
}
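
As a sketch of the listener rules mentioned above, the following hypothetical rule would forward requests whose path matches /api/* to a separate target group. The aws_lb_target_group.api resource it references is not part of this tutorial's configuration.

resource "aws_lb_listener_rule" "api" {
  listener_arn = aws_lb_listener.terramino.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn  # hypothetical target group
  }

  condition {
    path_pattern {
      values = ["/api/*"]
    }
  }
}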

A target group defines the collection of instances your load balancer sends
traffic to. It does not manage the configuration of the targets in that group
directly, but instead specifies a list of destinations the load balancer can
forward requests to.

main.tf

resource "aws_lb_target_group" "terramino" {
  name     = "learn-asg-terramino"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id
}

resource "aws_autoscaling_attachment" "terramino" {
  autoscaling_group_name = aws_autoscaling_group.terramino.id
  alb_target_group_arn   = aws_lb_target_group.terramino.arn
}

While you can use an aws_lb_target_group_attachment
resource
to directly associate an EC2 instance or other target type with the target
group, the dynamic nature of instances in an ASG makes that hard to maintain in
configuration. Instead, this configuration links your Auto Scaling group with
the target group using the aws_autoscaling_attachment resource. This allows
AWS to automatically add and remove instances from the target group over their
lifecycle.

Security groups

This configuration also defines two security groups: one to associate with your
ASG EC2 instances, and another for the load balancer.

main.tf

resource "aws_security_group" "terramino_instance" {
  name = "learn-asg-terramino-instance"

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.terramino_lb.id]
  }

  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    security_groups = [aws_security_group.terramino_lb.id]
  }

  vpc_id = module.vpc.vpc_id
}

resource "aws_security_group" "terramino_lb" {
  name = "learn-asg-terramino-lb"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  vpc_id = module.vpc.vpc_id
}

Both of these security groups allow ingress HTTP traffic on port 80 and all
outbound traffic. However, the aws_security_group.terramino_instance group
restricts inbound traffic to requests coming from any source associated with
the aws_security_group.terramino_lb security group, ensuring that only
requests forwarded from your load balancer will reach your instances.

Apply configuration

In your terminal, initialize your configuration.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v3.50.0...
- Installed hashicorp/aws v3.50.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Now, apply the configuration to create the VPC and networking resources, Auto
Scaling group, launch configuration, load balancer, and target group. Respond
yes to the prompt to confirm the operation.

$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

#...

Plan: 18 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + lb_endpoint = (known after apply)

Do you want to perform these actions in workspace "rita-asg"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

#...

Apply complete! Resources: 18 added, 0 changed, 0 destroyed.

Outputs:

application_endpoint = "https://learn-asg-terramino-lb-1572171601.us-east-2.elb.amazonaws.com/index.php"
asg_name = "terramino"
lb_endpoint = "https://learn-asg-terramino-lb-1572171601.us-east-2.elb.amazonaws.com"

Next, use cURL to send a request to the lb_endpoint output, which reports
the instance ID of the EC2 instance responding to your request.

$ curl $(terraform output -raw lb_endpoint)
i-0735ecca64f49e5e1

Then, visit the address in the application_endpoint output value in your
browser to test out your application.

Scale instances

Use the AWS CLI to scale the number of instances in your ASG.

$ aws autoscaling set-desired-capacity --auto-scaling-group-name $(terraform output -raw asg_name) --desired-capacity 2

You can verify whether the newly launched instance has finished initializing in
the EC2
console.

Once the instance is running, make a few requests to the load balancer
endpoint.

$ for i in `seq 1 5`; do curl $(terraform output -raw lb_endpoint); echo; done
i-0ae5296368d386a56
i-084167eee4ef1bce0
i-084167eee4ef1bce0
i-084167eee4ef1bce0
i-0ae5296368d386a56

The response now varies between two IDs, confirming that your target
group includes the new EC2 instance and that the load balancer is distributing
requests across multiple hosts.

Now, run a terraform plan to review the execution plan Terraform proposes to
reconcile your scaled Auto Scaling group with the written configuration in your
working directory.

$ terraform plan
...

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # aws_autoscaling_group.terramino has changed
  ~ resource "aws_autoscaling_group" "terramino" {
      ~ desired_capacity     = 1 -> 2
      + enabled_metrics      = []
        id                   = "terramino"
      + load_balancers       = []
        name                 = "terramino"
      + suspended_processes  = []
      + target_group_arns    = [
          + "arn:aws:elasticloadbalancing:us-east-2:561656980159:targetgroup/learn-asg-terramino/29d2f819df0d2494",
        ]
      + termination_policies = []
        # (17 unchanged attributes hidden)
    }

Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.

─────────────────────────────────────────────────────────────────────────────

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_autoscaling_group.terramino will be updated in-place
  ~ resource "aws_autoscaling_group" "terramino" {
      ~ desired_capacity  = 2 -> 1
        id                = "terramino"
        name              = "terramino"
      ~ target_group_arns = [
          - "arn:aws:elasticloadbalancing:us-east-2:561656980159:targetgroup/learn-asg-terramino/29d2f819df0d2494",
        ]
        # (21 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Terraform proposes to scale your instances back down to 1, since your
configuration specifies desired_capacity = 1. While it may make sense to
define a desired capacity at launch time, you should rely on scaling policies
or other mechanisms to manage the instance count over the ASG's lifecycle. To
do so, ignore the desired_capacity value in future Terraform operations with a
Terraform lifecycle rule. Otherwise, if you manually scale your group to 5
instances to handle higher traffic and later modify your user data script,
applying the configuration would update your launch configuration with the new
user data, but would also scale the group back down to 1 instance, risking an
overload of the remaining instance.

Terraform also attempts to overwrite the association between your ASG and the
target group. You can associate a target group with an ASG either through a
standalone resource, as in the current configuration, or through an inline
argument on the aws_autoscaling_group resource. The two are mutually exclusive,
so if you use the aws_autoscaling_attachment resource as this configuration
does, you must also ignore changes to the target_group_arns attribute of the
ASG resource itself.
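
For reference, the inline alternative would look something like the sketch below. You would set the target_group_arns argument on the ASG itself and remove the aws_autoscaling_attachment resource; this is not the approach this tutorial uses.

resource "aws_autoscaling_group" "terramino" {
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1
  launch_configuration = aws_launch_configuration.terramino.name
  vpc_zone_identifier  = module.vpc.public_subnets

  # Inline attachment; mutually exclusive with aws_autoscaling_attachment.
  target_group_arns = [aws_lb_target_group.terramino.arn]
}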

Set lifecycle rule

To prevent Terraform from scaling your instances when it changes other aspects
of your configuration, use a lifecycle argument to ignore changes to the
desired capacity and target groups. Add the following code to your
aws_autoscaling_group resource block.

main.tf

resource "aws_autoscaling_group" "terramino" {
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1
  launch_configuration = aws_launch_configuration.terramino.name
  vpc_zone_identifier  = module.vpc.public_subnets

  lifecycle {
    ignore_changes = [desired_capacity, target_group_arns]
  }
}

Now run terraform apply to set the lifecycle rule on the resource.

$ terraform apply

No changes. Your infrastructure matches the configuration.

Terraform now respects dynamic scaling operations and does not disassociate
your ASG from the load balancer target group.

Now, list the resources Terraform is tracking in your state file.

$ terraform state list
data.aws_ami.amazon-linux
data.aws_availability_zones.available
aws_autoscaling_attachment.terramino
aws_autoscaling_group.terramino
aws_launch_configuration.terramino
aws_lb.terramino
aws_lb_listener.terramino
aws_lb_target_group.terramino
aws_security_group.terramino_instance
aws_security_group.terramino_lb
module.vpc.aws_internet_gateway.this[0]
module.vpc.aws_route.public_internet_gateway[0]
module.vpc.aws_route_table.public[0]
module.vpc.aws_route_table_association.public[0]
module.vpc.aws_route_table_association.public[1]
module.vpc.aws_route_table_association.public[2]
module.vpc.aws_subnet.public[0]
module.vpc.aws_subnet.public[1]
module.vpc.aws_subnet.public[2]
module.vpc.aws_vpc.this[0]

Notice that Terraform does not list your ASG’s EC2 instances in the state’s
resources. This is because Terraform is not aware of the member instances of
the group, only the capacity.
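
If you need to reference the member instances elsewhere in your configuration, for example in an output, a data source like the sketch below could look them up by the tag that AWS applies to every instance an ASG launches. This is illustrative and not part of the example repository.

data "aws_instances" "asg_members" {
  # ASGs automatically tag their instances with the group name.
  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.terramino.name
  }

  instance_state_names = ["running"]
}

output "asg_instance_ids" {
  value = data.aws_instances.asg_members.ids
}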

Add scaling policy

You can scale the number of instances in your ASG manually as you did earlier
in this tutorial. This allows you to easily launch more instances running the
same configuration, but requires you to monitor your infrastructure to
understand when to modify capacity.

Auto Scaling groups also support automated scaling events, which you can
implement using Terraform. You can scale instances on a schedule – for example,
if certain services receive less traffic overnight, you can use the
aws_autoscaling_schedule
resource
to scale accordingly.
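
For example, a scheduled action like the sketch below would scale the group down to one instance every evening; the name and recurrence expression are illustrative, not part of this tutorial's configuration.

resource "aws_autoscaling_schedule" "scale_down_overnight" {
  scheduled_action_name  = "terramino-scale-down-overnight"  # illustrative name
  autoscaling_group_name = aws_autoscaling_group.terramino.name
  recurrence             = "0 22 * * *"  # 22:00 UTC every day (illustrative)
  min_size               = 1
  max_size               = 3
  desired_capacity       = 1
}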

Alternatively, you can trigger scaling events in response to metric thresholds
or other benchmarks.

Open your main.tf file and paste in the following configuration for an
automated scaling policy and CloudWatch metric alarm.

main.tf

resource "aws_autoscaling_policy" "scale_down" {
  name                   = "terramino_scale_down"
  autoscaling_group_name = aws_autoscaling_group.terramino.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = -1
  cooldown               = 120
}

resource "aws_cloudwatch_metric_alarm" "scale_down" {
  alarm_description   = "Monitors CPU utilization for Terramino ASG"
  alarm_actions       = [aws_autoscaling_policy.scale_down.arn]
  alarm_name          = "terramino_scale_down"
  comparison_operator = "LessThanOrEqualToThreshold"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  threshold           = "10"
  evaluation_periods  = "2"
  period              = "120"
  statistic           = "Average"

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.terramino.name
  }
}

This policy removes one instance from your ASG when the group's average CPU
utilization stays at or below 10% for two consecutive two-minute evaluation
periods. This type of policy helps you optimize costs.
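
A matching scale-up policy would follow the same pattern in reverse. The sketch below, which is not part of this tutorial's configuration and is not applied in the steps that follow, would add an instance when average CPU utilization stays at or above an illustrative 70% threshold.

resource "aws_autoscaling_policy" "scale_up" {
  name                   = "terramino_scale_up"
  autoscaling_group_name = aws_autoscaling_group.terramino.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 120
}

resource "aws_cloudwatch_metric_alarm" "scale_up" {
  alarm_description   = "Monitors CPU utilization for Terramino ASG"
  alarm_actions       = [aws_autoscaling_policy.scale_up.arn]
  alarm_name          = "terramino_scale_up"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  threshold           = "70"  # illustrative threshold
  evaluation_periods  = "2"
  period              = "120"
  statistic           = "Average"

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.terramino.name
  }
}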

Apply the configuration to create the metric alarm and scaling policy. Respond
yes to the prompt to confirm the operation.

$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

#...

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

#...

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

application_endpoint = "learn-asg-terramino-lb-196810715.us-east-2.elb.amazonaws.com/index.php"
asg_name = "terramino"
lb_endpoint = "learn-asg-terramino-lb-196810715.us-east-2.elb.amazonaws.com"

Given the lightweight application you are running in this group, AWS will
remove one of the 2 instances you scaled up to. Monitor your ASG’s instance
count in the AWS console for a few minutes to observe the change.

AWS will not continue to scale down your instances, since you set a minimum
capacity for the group of 1 instance.

Destroy configuration

Now that you have completed this tutorial, destroy the AWS resources you
provisioned to avoid incurring unnecessary costs. Respond yes to the prompt
to confirm the operation.

$ terraform destroy

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

#...

Plan: 0 to add, 0 to change, 20 to destroy.

Changes to Outputs:
  - application_endpoint = "learn-asg-terramino-lb-196810715.us-east-2.elb.amazonaws.com/index.php" -> null
  - asg_name             = "terramino" -> null
  - lb_endpoint          = "learn-asg-terramino-lb-196810715.us-east-2.elb.amazonaws.com" -> null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

#...

Destroy complete! Resources: 20 destroyed.

Next steps

In this tutorial, you used Terraform to provision an Auto Scaling group with
traffic managed by an application load balancer and learned how to use
Terraform’s lifecycle rules to support scaling the instances in your ASG. You
also learned how to use Terraform to create a dynamic scaling policy based on
your instances’ CPU utilization.

Learn more about managing autoscaling groups and AWS resources with Terraform:
