cfm-reslib¶
CloudFormation Resource Library: a collection of useful custom resources that are missing from CloudFormation.
Instructions¶
Installation¶
cfm-reslib is delivered as a single CloudFormation template that exports a single output called cfm-reslib. To use it you must first install it in the account and region where it will be used.
Install¶
Installation is a simple one-liner. Make sure you have the AWS CLI installed and configured.
aws cloudformation create-stack --stack-name cfm-reslib --template-url https://s3.amazonaws.com/cfm-reslib/cfm-reslib-latest.template --capabilities CAPABILITY_IAM
You can also download the template and manually install it using the AWS Console.
Update¶
If you’ve already installed this library, you need to run a different command to update to the latest version.
aws cloudformation update-stack --stack-name cfm-reslib --template-url https://s3.amazonaws.com/cfm-reslib/cfm-reslib-latest.template --capabilities CAPABILITY_IAM
Usage¶
Once installed, cfm-reslib can be used by defining a custom resource with ServiceToken set to the exported value.
See Available Custom Resources for a list of supported custom resource types.
YAML¶
Resources:
  SomeCustomResource:
    Type: Custom::SomeCustomResourceType
    Properties:
      ServiceToken: !ImportValue cfm-reslib
      SomeParameter: some value
JSON¶
{
  "Resources": {
    "SomeCustomResource": {
      "Type": "Custom::SomeCustomResourceType",
      "Properties": {
        "ServiceToken": {"Fn::ImportValue": "cfm-reslib"},
        "SomeParameter": "some value"
      }
    }
  }
}
Available Custom Resources¶
Custom::ElasticTranscoderPipeline¶
The Custom::ElasticTranscoderPipeline resource creates an Elastic Transcoder pipeline.
Syntax¶
JSON¶
{
  "Type" : "Custom::ElasticTranscoderPipeline",
  "Properties" : {
    "ServiceToken" : {"Fn::ImportValue": "cfm-reslib"},
    "Name" : string,
    "InputBucket" : string,
    "OutputBucket" : string,
    "Role" : string,
    "AwsKmsKeyArn" : string,
    "Notifications" : Notifications,
    "ContentConfig" : PipelineOutputConfig,
    "ThumbnailConfig" : PipelineOutputConfig
  }
}
YAML¶
Type: Custom::ElasticTranscoderPipeline
Properties:
  ServiceToken: !ImportValue cfm-reslib
  Name: string
  InputBucket: string
  OutputBucket: string
  Role: string
  AwsKmsKeyArn: string
  Notifications: Notifications
  ContentConfig: PipelineOutputConfig
  ThumbnailConfig: PipelineOutputConfig
Properties¶
Name¶
The name of the pipeline. We recommend that the name be unique within the AWS account, but uniqueness is not enforced.
Constraints: Maximum 40 characters.
Required: Yes
Type: string
Update requires: No interruption
InputBucket¶
The Amazon S3 bucket in which you saved the media files that you want to transcode.
Required: Yes
Type: string
Update requires: No interruption
OutputBucket¶
The Amazon S3 bucket in which you want Elastic Transcoder to save the transcoded files. (Use this, or use ContentConfig:Bucket plus ThumbnailConfig:Bucket.)
Specify this value when all of the following are true:
You want to save transcoded files, thumbnails (if any), and playlists (if any) together in one bucket.
You do not want to specify the users or groups who have access to the transcoded files, thumbnails, and playlists.
You do not want to specify the permissions that Elastic Transcoder grants to the files.
When Elastic Transcoder saves files in OutputBucket, it grants full control over the files only to the AWS account that owns the role that is specified by Role.
You want to associate the transcoded files and thumbnails with the Amazon S3 Standard storage class.
If you want to save transcoded files and playlists in one bucket and thumbnails in another bucket, specify which users can access the transcoded files or the permissions the users have, or change the Amazon S3 storage class, omit OutputBucket and specify values for ContentConfig and ThumbnailConfig instead.
Required: Yes
Type: string
Update requires: Replacement
Role¶
The IAM Amazon Resource Name (ARN) for the role that you want Elastic Transcoder to use to create the pipeline.
Required: Yes
Type: string
Update requires: No interruption
AwsKmsKeyArn¶
The AWS Key Management Service (AWS KMS) key that you want to use with this pipeline.
If you use either s3 or s3-aws-kms as your Encryption:Mode, you don't need to provide a key with your job because a default key, known as an AWS-KMS key, is created for you automatically. You need to provide an AWS-KMS key only if you want to use a non-default AWS-KMS key, or if you are using an Encryption:Mode of aes-cbc-pkcs7, aes-ctr, or aes-gcm.
Required: Yes
Type: string
Update requires: No interruption
Notifications¶
The Amazon Simple Notification Service (Amazon SNS) topic that you want to notify to report job status.
To receive notifications, you must also subscribe to the new topic in the Amazon SNS console.
Progressing: The topic ARN for the Amazon Simple Notification Service (Amazon SNS) topic that you want to notify when Elastic Transcoder has started to process a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic. For more information, see Create a Topic in the Amazon Simple Notification Service Developer Guide.
Complete: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder has finished processing a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic.
Warning: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters a warning condition while processing a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic.
Error: The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters an error condition while processing a job in this pipeline. This is the ARN that Amazon SNS returned when you created the topic.
Required: Yes
Type: Notifications
Update requires: No interruption
ContentConfig¶
The optional ContentConfig object specifies information about the Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists: which bucket to use, which users you want to have access to the files, the type of access you want users to have, and the storage class that you want to assign to the files.
If you specify values for ContentConfig, you must also specify values for ThumbnailConfig.
If you specify values for ContentConfig and ThumbnailConfig, omit the OutputBucket object.
Bucket: The Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists.
Permissions (Optional): The Permissions object specifies which users you want to have access to transcoded files and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups.
GranteeType: Specify the type of value that appears in the Grantee object:
Canonical: The value in the Grantee object is either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. For more information about canonical user IDs, see Access Control List (ACL) Overview in the Amazon Simple Storage Service Developer Guide. For more information about using CloudFront origin access identities to require that users use CloudFront URLs instead of Amazon S3 URLs, see Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content. A canonical user ID is not the same as an AWS account number.
Email: The value in the Grantee object is the registered email address of an AWS account.
Group: The value in the Grantee object is one of the following predefined Amazon S3 groups: AllUsers, AuthenticatedUsers, or LogDelivery.
Grantee: The AWS user or group that you want to have access to transcoded files and playlists. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group.
Access: The permission that you want to give to the AWS user that you specified in Grantee. Permissions are granted on the files that Elastic Transcoder adds to the bucket, including playlists and video files. Valid values include:
READ: The grantee can read the objects and metadata for objects that Elastic Transcoder adds to the Amazon S3 bucket.
READ_ACP: The grantee can read the object ACL for objects that Elastic Transcoder adds to the Amazon S3 bucket.
WRITE_ACP: The grantee can write the ACL for the objects that Elastic Transcoder adds to the Amazon S3 bucket.
FULL_CONTROL: The grantee has READ, READ_ACP, and WRITE_ACP permissions for the objects that Elastic Transcoder adds to the Amazon S3 bucket.
StorageClass: The Amazon S3 storage class, Standard or ReducedRedundancy, that you want Elastic Transcoder to assign to the video files and playlists that it stores in your Amazon S3 bucket.
Required: Yes
Type: PipelineOutputConfig
Update requires: No interruption
ThumbnailConfig¶
The ThumbnailConfig object specifies several values, including the Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files, which users you want to have access to the files, the type of access you want users to have, and the storage class that you want to assign to the files.
If you specify values for ContentConfig, you must also specify values for ThumbnailConfig even if you don't want to create thumbnails.
If you specify values for ContentConfig and ThumbnailConfig, omit the OutputBucket object.
Bucket: The Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files.
Permissions (Optional): The Permissions object specifies which users and/or predefined Amazon S3 groups you want to have access to thumbnail files, and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups.
GranteeType: Specify the type of value that appears in the Grantee object:
Canonical: The value in the Grantee object is either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. A canonical user ID is not the same as an AWS account number.
Email: The value in the Grantee object is the registered email address of an AWS account.
Group: The value in the Grantee object is one of the following predefined Amazon S3 groups: AllUsers, AuthenticatedUsers, or LogDelivery.
Grantee: The AWS user or group that you want to have access to thumbnail files. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group.
Access: The permission that you want to give to the AWS user that you specified in Grantee. Permissions are granted on the thumbnail files that Elastic Transcoder adds to the bucket. Valid values include:
READ: The grantee can read the thumbnails and metadata for objects that Elastic Transcoder adds to the Amazon S3 bucket.
READ_ACP: The grantee can read the object ACL for thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
WRITE_ACP: The grantee can write the ACL for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
FULL_CONTROL: The grantee has READ, READ_ACP, and WRITE_ACP permissions for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
StorageClass: The Amazon S3 storage class, Standard or ReducedRedundancy, that you want Elastic Transcoder to assign to the thumbnails that it stores in your Amazon S3 bucket.
Required: Yes
Type: PipelineOutputConfig
Update requires: No interruption
Notifications¶
{
  "Progressing" : string,
  "Completed" : string,
  "Warning" : string,
  "Error" : string
}
Progressing: string
Completed: string
Warning: string
Error: string
The Amazon Simple Notification Service (Amazon SNS) topic that you want to notify when Elastic Transcoder has started to process the job.
Required: No
Type: string
Update requires: No interruption
The Amazon SNS topic that you want to notify when Elastic Transcoder has finished processing the job.
Required: No
Type: string
Update requires: No interruption
The Amazon SNS topic that you want to notify when Elastic Transcoder encounters a warning condition.
Required: No
Type: string
Update requires: No interruption
The Amazon SNS topic that you want to notify when Elastic Transcoder encounters an error condition.
Required: No
Type: string
Update requires: No interruption
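For reference, a filled-in Notifications block in template YAML might look like the following sketch; the SNS topic ARNs are placeholders for topics that already exist in your account.
Notifications:
  Progressing: arn:aws:sns:us-east-1:123456789012:transcoder-progressing
  Completed: arn:aws:sns:us-east-1:123456789012:transcoder-completed
  Warning: arn:aws:sns:us-east-1:123456789012:transcoder-warning
  Error: arn:aws:sns:us-east-1:123456789012:transcoder-error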
PipelineOutputConfig¶
{
  "Bucket" : string,
  "StorageClass" : string,
  "Permissions" : [ Permission, ... ]
}
Bucket: string
StorageClass: string
Permissions:
  - Permission
The Amazon S3 bucket in which you want Elastic Transcoder to save the transcoded files. Specify this value when all of the following are true:
You want to save transcoded files, thumbnails (if any), and playlists (if any) together in one bucket.
You do not want to specify the users or groups who have access to the transcoded files, thumbnails, and playlists.
You do not want to specify the permissions that Elastic Transcoder grants to the files.
You want to associate the transcoded files and thumbnails with the Amazon S3 Standard storage class.
If you want to save transcoded files and playlists in one bucket and thumbnails in another bucket, specify which users can access the transcoded files or the permissions the users have, or change the Amazon S3 storage class, omit OutputBucket and specify values for ContentConfig and ThumbnailConfig instead.
Required: No
Type: string
Update requires: No interruption
The Amazon S3 storage class, Standard or ReducedRedundancy, that you want Elastic Transcoder to assign to the video files and playlists that it stores in your Amazon S3 bucket.
Required: No
Type: string
Update requires: No interruption
Optional. The Permissions object specifies which users and/or predefined Amazon S3 groups you want to have access to transcoded files and playlists, and the type of access you want them to have. You can grant permissions to a maximum of 30 users and/or predefined Amazon S3 groups.
If you include Permissions, Elastic Transcoder grants only the permissions that you specify. It does not grant full permissions to the owner of the role specified by Role. If you want that user to have full control, you must explicitly grant full control to the user.
If you omit Permissions, Elastic Transcoder grants full control over the transcoded files and playlists to the owner of the role specified by Role, and grants no other permissions to any other user or group.
Required: No
Type: List of Permission
Update requires: No interruption
Permission¶
{
  "GranteeType" : string,
  "Grantee" : string,
  "Access" : [ string, ... ]
}
GranteeType: string
Grantee: string
Access:
  - string
The type of value that appears in the Grantee object:
Canonical: Either the canonical user ID for an AWS account or an origin access identity for an Amazon CloudFront distribution. A canonical user ID is not the same as an AWS account number.
Group: One of the following predefined Amazon S3 groups: AllUsers, AuthenticatedUsers, or LogDelivery.
Required: No
Type: string
Update requires: No interruption
The AWS user or group that you want to have access to transcoded files and playlists. To identify the user or group, you can specify the canonical user ID for an AWS account, an origin access identity for a CloudFront distribution, the registered email address of an AWS account, or a predefined Amazon S3 group.
Required: No
Type: string
Update requires: No interruption
The permission that you want to give to the AWS user that is listed in Grantee. Valid values include:
READ: The grantee can read the thumbnails and metadata for thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
READ_ACP: The grantee can read the object ACL for thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
WRITE_ACP: The grantee can write the ACL for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
FULL_CONTROL: The grantee has READ, READ_ACP, and WRITE_ACP permissions for the thumbnails that Elastic Transcoder adds to the Amazon S3 bucket.
Required: No
Type: List of string
Update requires: No interruption
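Example¶
Create a Basic Pipeline¶
The following is a minimal sketch of a pipeline declaration. It uses OutputBucket and therefore omits ContentConfig and ThumbnailConfig, as described above; the other optional settings such as Notifications and AwsKmsKeyArn are left out for brevity. The bucket names and the IAM role ARN are placeholders for resources that already exist in your account.
YAML¶
Resources:
  TranscoderPipeline:
    Type: Custom::ElasticTranscoderPipeline
    Properties:
      ServiceToken: !ImportValue cfm-reslib
      Name: my-pipeline
      InputBucket: my-input-bucket
      OutputBucket: my-output-bucket
      Role: arn:aws:iam::123456789012:role/Elastic_Transcoder_Default_Role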
Custom::FindAMI¶
The Custom::FindAMI resource finds an AMI by owner, name, and architecture. The result can then be used with Ref.
Syntax¶
JSON¶
{
  "Type" : "Custom::FindAMI",
  "Properties" : {
    "ServiceToken" : {"Fn::ImportValue": "cfm-reslib"},
    "Owner" : string,
    "Name" : string,
    "Architecture" : string
  }
}
YAML¶
Type: Custom::FindAMI
Properties:
  ServiceToken: !ImportValue cfm-reslib
  Owner: string
  Name: string
  Architecture: string
Properties¶
Owner¶
Image owner (e.g. "679593333241" for CentOS)
Required: No
Type: string
Update requires: Replacement
Name¶
Image name (e.g. "CentOS Linux 7 x86_64 HVM EBS *")
Required: No
Type: string
Update requires: Replacement
Architecture¶
Image architecture (e.g. "x86_64")
Required: No
Type: string
Update requires: Replacement
Examples¶
Create EC2 Instance With Latest Ubuntu¶
The following example searches for the latest Ubuntu 16.04 AMI and creates a new EC2 instance with this image.
JSON¶
{
  "UbuntuAMI": {
    "Type": "Custom::FindAMI",
    "Properties": {
      "ServiceToken": {
        "Fn::ImportValue": "cfm-reslib"
      },
      "Owner": "099720109477",
      "Name": "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04*",
      "Architecture": "x86_64"
    }
  },
  "UbuntuInstance": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "InstanceType": "t2.micro",
      "ImageId": {
        "Ref": "UbuntuAMI"
      }
    }
  }
}
YAML¶
UbuntuAMI:
  Properties:
    Architecture: x86_64
    Name: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04*
    Owner: "099720109477"
    ServiceToken:
      Fn::ImportValue: cfm-reslib
  Type: Custom::FindAMI
UbuntuInstance:
  Properties:
    ImageId:
      Ref: UbuntuAMI
    InstanceType: t2.micro
  Type: AWS::EC2::Instance
Custom::KafkaCluster¶
The Custom::KafkaCluster resource creates a Kafka cluster (MSK). This is now officially available in CloudFormation as AWS::MSK::Cluster.
Syntax¶
JSON¶
{
  "Type" : "Custom::KafkaCluster",
  "Properties" : {
    "ServiceToken" : {"Fn::ImportValue": "cfm-reslib"},
    "BrokerNodeGroupInfo" : BrokerNodeGroupInfo,
    "ClientAuthentication" : ClientAuthentication,
    "ClusterName" : string,
    "ConfigurationInfo" : ConfigurationInfo,
    "EncryptionInfo" : EncryptionInfo,
    "EnhancedMonitoring" : string,
    "OpenMonitoring" : OpenMonitoringInfo,
    "KafkaVersion" : string,
    "LoggingInfo" : LoggingInfo,
    "NumberOfBrokerNodes" : integer,
    "Tags" : map
  }
}
YAML¶
Type: Custom::KafkaCluster
Properties:
  ServiceToken: !ImportValue cfm-reslib
  BrokerNodeGroupInfo: BrokerNodeGroupInfo
  ClientAuthentication: ClientAuthentication
  ClusterName: string
  ConfigurationInfo: ConfigurationInfo
  EncryptionInfo: EncryptionInfo
  EnhancedMonitoring: string
  OpenMonitoring: OpenMonitoringInfo
  KafkaVersion: string
  LoggingInfo: LoggingInfo
  NumberOfBrokerNodes: integer
  Tags: map
Properties¶
BrokerNodeGroupInfo¶
Information about the broker nodes in the cluster.
Required: Yes
Type: BrokerNodeGroupInfo
Update requires: Replacement
ClientAuthentication¶
Includes all client authentication related information.
Required: Yes
Type: ClientAuthentication
Update requires: Replacement
ClusterName¶
The name of the cluster.
Required: Yes
Type: string
Update requires: Replacement
ConfigurationInfo¶
Represents the configuration that you want MSK to use for the brokers in a cluster.
Required: Yes
Type: ConfigurationInfo
Update requires: Replacement
EncryptionInfo¶
Includes all encryption-related information.
Required: Yes
Type: EncryptionInfo
Update requires: Replacement
EnhancedMonitoring¶
Specifies the level of monitoring for the MSK cluster. The possible values are DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, and PER_TOPIC_PER_PARTITION.
Required: Yes
Type: string
Update requires: Replacement
OpenMonitoring¶
The settings for open monitoring.
Required: Yes
Type: OpenMonitoringInfo
Update requires: Replacement
KafkaVersion¶
The version of Apache Kafka.
Required: Yes
Type: string
Update requires: Replacement
LoggingInfo¶
NumberOfBrokerNodes¶
The number of broker nodes in the cluster.
Required: Yes
Type: integer
Update requires: Replacement
Tags¶
Create tags when creating the cluster.
Required: Yes
Type: map
Update requires: Replacement
BrokerNodeGroupInfo¶
{
  "BrokerAZDistribution" : string,
  "ClientSubnets" : [ string, ... ],
  "InstanceType" : string,
  "SecurityGroups" : [ string, ... ],
  "StorageInfo" : StorageInfo
}
BrokerAZDistribution: string
ClientSubnets:
  - string
InstanceType: string
SecurityGroups:
  - string
StorageInfo: StorageInfo
The distribution of broker nodes across Availability Zones. This is an optional parameter. If you don't specify it, Amazon MSK gives it the value DEFAULT. You can also explicitly set this parameter to the value DEFAULT. No other values are currently allowed.
Amazon MSK distributes the broker nodes evenly across the Availability Zones that correspond to the subnets you provide when you create the cluster.
Required: Yes
Type: string
Update requires: No interruption
The list of subnets to connect to in the client virtual private cloud (VPC). AWS creates elastic network interfaces inside these subnets. Client applications use elastic network interfaces to produce and consume data. Client subnets can't be in Availability Zone us-east-1e.
Required: Yes
Type: List of string
Update requires: No interruption
The type of Amazon EC2 instances to use for Kafka brokers. The following instance types are allowed: kafka.m5.large, kafka.m5.xlarge, kafka.m5.2xlarge, kafka.m5.4xlarge, kafka.m5.12xlarge, and kafka.m5.24xlarge.
Required: Yes
Type: string
Update requires: No interruption
The AWS security groups to associate with the elastic network interfaces in order to specify who can connect to and communicate with the Amazon MSK cluster. If you don't specify a security group, Amazon MSK uses the default security group associated with the VPC.
Required: Yes
Type: List of string
Update requires: No interruption
ClientAuthentication¶
Details for ClientAuthentication using SASL.
Required: No
Type: Sasl
Update requires: No interruption
ConfigurationInfo¶
EncryptionInfo¶
{
  "EncryptionAtRest" : EncryptionAtRest,
  "EncryptionInTransit" : EncryptionInTransit
}
The data-volume encryption details.
Required: No
Type: EncryptionAtRest
Update requires: No interruption
The details for encryption in transit.
Required: No
Type: EncryptionInTransit
Update requires: No interruption
EncryptionInTransit¶
{
  "ClientBroker" : string,
  "InCluster" : boolean
}
ClientBroker: string
InCluster: boolean
Indicates the encryption setting for data in transit between clients and brokers. The following are the possible values.
TLS means that client-broker communication is enabled with TLS only.
TLS_PLAINTEXT means that client-broker communication is enabled for both TLS-encrypted, as well as plaintext data.
PLAINTEXT means that client-broker communication is enabled in plaintext only.
The default value is TLS_PLAINTEXT.
Required: No
Type: string
Update requires: No interruption
When set to true, it indicates that data communication among the broker nodes of the cluster is encrypted. When set to false, the communication happens in plaintext.
The default value is true.
Required: No
Type: boolean
Update requires: No interruption
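For example, an EncryptionInfo block that enforces TLS between clients and brokers while keeping in-cluster encryption enabled might look like the following sketch (EncryptionAtRest is omitted here, so the service default applies):
EncryptionInfo:
  EncryptionInTransit:
    ClientBroker: TLS
    InCluster: true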
OpenMonitoringInfo¶
{
  "JmxExporter" : JmxExporterInfo,
  "NodeExporter" : NodeExporterInfo
}
Indicates whether you want to enable or disable the JMX Exporter.
Required: No
Type: JmxExporterInfo
Update requires: No interruption
Indicates whether you want to enable or disable the Node Exporter.
Required: No
Type: NodeExporterInfo
Update requires: No interruption
LoggingInfo¶
{
  "CloudWatchLogs" : CloudWatchLogs,
  "Firehose" : Firehose,
  "S3" : S3
}
CloudWatchLogs: CloudWatchLogs
Firehose: Firehose
S3: S3
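Example¶
Create a Kafka Cluster¶
The following is a minimal sketch of a cluster declaration. The subnet IDs, security group ID, and Kafka version are placeholders, and optional blocks such as ClientAuthentication, ConfigurationInfo, EncryptionInfo, OpenMonitoring, and LoggingInfo are omitted for brevity.
YAML¶
Resources:
  KafkaCluster:
    Type: Custom::KafkaCluster
    Properties:
      ServiceToken: !ImportValue cfm-reslib
      ClusterName: my-cluster
      KafkaVersion: "2.2.1"
      NumberOfBrokerNodes: 3
      EnhancedMonitoring: DEFAULT
      BrokerNodeGroupInfo:
        InstanceType: kafka.m5.large
        ClientSubnets:
          - subnet-11111111
          - subnet-22222222
          - subnet-33333333
        SecurityGroups:
          - sg-12345678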
Custom::Route53Certificate¶
The Custom::Route53Certificate resource requests an AWS Certificate Manager (ACM) certificate that you can use to enable secure connections. For example, you can deploy an ACM certificate to an Elastic Load Balancer to enable HTTPS support. For more information, see RequestCertificate in the AWS Certificate Manager API Reference.
Unlike AWS::CertificateManager::Certificate, this resource automatically validates the certificate for you. This only works if you request a certificate for a domain that’s hosted on Route53.
Syntax¶
JSON¶
{
  "Type" : "Custom::Route53Certificate",
  "Properties" : {
    "ServiceToken" : {"Fn::ImportValue": "cfm-reslib"},
    "DomainName" : string,
    "SubjectAlternativeNames" : [ string, ... ]
  }
}
YAML¶
Type: Custom::Route53Certificate
Properties:
  ServiceToken: !ImportValue cfm-reslib
  DomainName: string
  SubjectAlternativeNames:
    - string
Properties¶
DomainName¶
Fully qualified domain name (FQDN), such as www.example.com, that you want to secure with an ACM certificate. Use an asterisk (*) to create a wildcard certificate that protects several sites in the same domain. For example, *.example.com protects www.example.com, site.example.com, and images.example.com.
The first domain name you enter cannot exceed 64 octets, including periods. Each subsequent Subject Alternative Name (SAN), however, can be up to 253 octets in length.
Required: Yes
Type: string
Update requires: Replacement
SubjectAlternativeNames¶
Additional FQDNs to be included in the Subject Alternative Name extension of the ACM certificate. For example, add the name www.example.net to a certificate for which the DomainName field is www.example.com if users can reach your site by using either name. The maximum number of domain names that you can add to an ACM certificate is 100. However, the initial quota is 10 domain names. If you need more than 10 names, you must request a quota increase. For more information, see Quotas.
The maximum length of a SAN DNS name is 253 octets. The name is made up of multiple labels separated by periods. No label can be longer than 63 octets. Consider the following examples:
(63 octets).(63 octets).(63 octets).(61 octets) is legal because the total length is 253 octets (63+1+63+1+63+1+61) and no label exceeds 63 octets.
(64 octets).(63 octets).(63 octets).(61 octets) is not legal because the total length exceeds 253 octets (64+1+63+1+63+1+61) and the first label exceeds 63 octets.
(63 octets).(63 octets).(63 octets).(62 octets) is not legal because the total length of the DNS name (63+1+63+1+63+1+62) exceeds 253 octets.
Required: Yes
Type: List of string
Update requires: Replacement
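Example¶
Request a Certificate With a Wildcard SAN¶
The following is a minimal sketch that requests a certificate for a domain and a wildcard Subject Alternative Name. The domain name is a placeholder and must be hosted in a Route53 hosted zone in the same account so that automatic DNS validation can succeed.
YAML¶
Resources:
  SiteCertificate:
    Type: Custom::Route53Certificate
    Properties:
      ServiceToken: !ImportValue cfm-reslib
      DomainName: example.com
      SubjectAlternativeNames:
        - "*.example.com"
Assuming the resource returns the certificate ARN as its physical ID, Ref on SiteCertificate can then be used wherever an ACM certificate ARN is expected.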
Development¶
Preparing Environment¶
Get the source code
git clone https://github.com/CloudSnorkel/cfm-reslib.git
Switch to the code directory
cd cfm-reslib
Install requirements
pip install -r requirements.txt
Create a virtual environment with all of the requirements
poetry install
Building¶
The build process creates a CloudFormation template that can be deployed and exposes cfm-reslib to be imported by other CloudFormation stacks. This template uses Lambda, and its source code needs to be uploaded to an S3 bucket. The build script creates both a ZIP file and a template and uploads them to a given S3 bucket.
BUCKET=my-bucket-name
poetry run python build.py $BUCKET
Just like when deploying the released versions of cfm-reslib, you can deploy this with the aws CLI tool.
BUCKET=my-bucket-name
aws cloudformation create-stack --stack-name cfm-reslib --template-url https://s3.amazonaws.com/$BUCKET/cfm-reslib-latest.template --capabilities CAPABILITY_IAM
Or when updating:
BUCKET=my-bucket-name
aws cloudformation update-stack --stack-name cfm-reslib --template-url https://s3.amazonaws.com/$BUCKET/cfm-reslib-latest.template --capabilities CAPABILITY_IAM
Note that you won’t be able to deploy multiple stacks of cfm-reslib in the same region because the exported name has to be unique across all stacks in a certain region.
Adding Custom Resources¶
There are two ways to implement a new custom resource. In both, you will need to create a class for your resource.
If the custom resource uses just one boto3 call to create, update and delete a resource, you can inherit from cfmreslib.boto.BotoResourceHandler. Simply override all of the constants with the names of the methods that need to be called and you’re done. Check out ElasticTranscoderPipeline for an example.
If you need more control of the process, inherit from cfmreslib.base.CustomResourceHandler. You will have to implement some methods that will be called for requests coming from CloudFormation. Check out Route53Certificate for an example.
Once you’ve added your custom resource, make sure to add it to ALL_RESOURCES at the end of resources.py.
Classes¶
class cfmreslib.base.CustomResourceHandler¶
Abstract base class for all custom resources. Implement this class for new resources. Check the documentation for each method. Not all methods are always required.
NAME = '<not set>'¶
Custom resource name to be used in CloudFormation with Custom:: prefix.
DESCRIPTION = '<not set>'¶
Resource description for automatically generated documentation.
EXAMPLES: List[Dict[str, str]] = []¶
Optional resource examples to be used in documentation. Each example needs “title”, “description” and “template”.
REPLACEMENT_REQUIRED_ATTRIBUTES = {}¶
Set of properties that require a replacement on update when their value changes.
exists() → bool¶
Checks if the resource specified in self.physical_id exists.
Must always be implemented.
Returns: True if the resource exists, False if not.
ready() → bool¶
Checks if the resource specified in self.physical_id is ready. Can just return True if a resource existing means it’s ready.
Must always be implemented.
Returns: True if the resource is ready, False if not.
data() → Optional[Dict[str, object]]¶
Retrieves the current data that should be returned for this resource.
Only required if _wait_ready() is used.
Returns: resource data, can be None.
create(args: Dict[str, object]) → None¶
Creates a new resource with supplied arguments.
Must set self.physical_id and must call _success(), _fail() or _wait_ready().
Must always be implemented.
Parameters: args – arguments as passed from CloudFormation.
can_update(old_args: Dict[str, object], new_args: Dict[str, object], diff: List[str]) → bool¶
Checks if a resource can safely be updated or whether a new one has to be created.
Must always be implemented, but can just return False if needed.
Parameters:
old_args – existing arguments as passed from CloudFormation for the current resource
new_args – requested arguments as passed from CloudFormation
diff – a list of argument names that have changed value
Returns: True if the resource can be updated or False if it needs to be recreated.
update(old_args: Dict[str, object], new_args: Dict[str, object], diff: List[str]) → None¶
Updates the resource specified in self.physical_id based on the old and new arguments.
Must call _success(), _fail() or _wait_ready().
Only required if can_update() ever returns True.
Parameters:
old_args – existing arguments as passed from CloudFormation for the current resource
new_args – requested arguments as passed from CloudFormation
diff – a list of argument names that have changed value
delete() → None¶
Deletes the resource specified in self.physical_id.
Must call _success(), _fail() or _wait_delete().
Must always be implemented.
get_iam_actions() → List[str]¶
Returns a list of required IAM permissions for all operations.
Must always be implemented.
class cfmreslib.boto.BotoResourceHandler¶
NAME = None¶
Custom resource name to be used in CloudFormation with Custom:: prefix.
SERVICE = None¶
boto3 service name that will be used to create the client (e.g. s3, acm, ec2).
CREATE_METHOD = {}¶
Descriptor for the method used to create a resource. Requires “name” with the name of the method, and “physical_id_query” used to query for the physical id of the newly created resource from the method return value.
UPDATE_METHODS = []¶
Optional list of descriptors for methods used to update an existing resource. Each item requires “name” with the name of the method, and “physical_id_argument” with the name of the method argument that needs to have the physical id of the updated resource.
EXISTS_METHOD = {}¶
Descriptor for the method used to check if a resource exists. Requires “name” with the name of the method, and “physical_id_argument” with the name of the method argument that needs to have the physical id of the checked resource. This method will raise the exception set in NOT_FOUND_EXCEPTION when the resource does not exist.
EXIST_READY_QUERY = {}¶
Optional descriptor of a query to check against the result of EXISTS_METHOD. When set, we will wait until the resource is ready before finishing with create and update operations. Requires “query” with the query to run over the exists method result, “expected_value” with the expected value (e.g. READY), and “failed_values” with values that denote failure and should stop the operation.
DELETE_METHOD = {}¶
Descriptor for the method used to delete an existing resource. Requires “name” with the name of the method, and “physical_id_argument” with the name of the method argument that needs to have the physical id of the resource.
NOT_FOUND_EXCEPTION = ''¶
Name of the exception thrown by the exists method if the resource doesn’t exist.
EXTRA_PERMISSIONS = []¶
A list of extra permissions required by any operations for this resource. Most permissions will be deduced from method names, but sometimes extra IAM permissions are required.