When working with Amazon Web Services (AWS) EC2 instances, bootstrapping refers to using scripts provided at launch to configure new EC2 instances (servers). For Windows servers, there are several considerations when choosing the best bootstrapping method. Bootstrap scripts can be applied directly from the management console, but here we will look at a programmatic method: applying bootstrapping scripts through the AWS CLI. There are many alternative approaches, and many of them are quite elaborate; this one has very few dependencies. The scope of this blog covers creating a batch file that runs the AWS CLI from the command prompt.
Bootstrapping Prerequisites
AWS Account: To get started, you must have an AWS account. AWS provides a 12-month free tier and many free resources that can be used to build and test functionality.
VPC: Although not required, the scripts discussed here assume the prior existence of a fully configured VPC with NAT instances, security groups, etc.
Windows and Application Resources: With Windows Server 2012 and later, the prerequisite files for roles and features are not provided with the installation media. For applications such as SharePoint, additional resources need to be made available. For instances with internet access this is not a problem; however, for deployments where internet access is not permitted, a repository must be provided where resources can be staged. In this case, S3 buckets are used.
S3 Storage Bucket: Amazon provides many resources for Linux servers in its native repositories. Windows servers, however, require resources to be stored in an accessible repository. An S3 storage bucket provides a convenient place to store Windows prerequisites and any application prerequisites. You can learn about creating and managing S3 buckets through the AWS documentation.
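For reference, the bucket can be created and the prerequisite files staged with a couple of AWS CLI commands. This is only a sketch: the bucket name matches the one referenced later in the user data script, and the local staging path is hypothetical.
:: Create the repository bucket and stage the prerequisite files under a "wfe" prefix
aws s3 mb s3://win-code-repository
aws s3 cp c:\staging\wfe s3://win-code-repository/wfe --recursive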
IAM Server Roles: For newly launched instances to access the necessary S3 bucket repositories, permissions must be provided. One easy way to do this is to assign an IAM role to the instance. IAM roles can be assigned to AWS resources, but only one role can be assigned to an instance at a time, and roles can be assigned to instances only at launch. However, role permissions can be altered after the EC2 instance is up and running. I have chosen to create roles based on the planned server functionality, in my case for a SharePoint farm (AD, ADFS, WFE, APP, SQL, etc.). Creating and assigning roles is a topic for another blog post.
AWS CLI Tools for Windows: The AWS CLI must be installed and configured. Amazon provides a bundled installer for Windows (supported on Windows XP and later). When configuring the AWS CLI, the best practice is to set up a named credential profile so that no plain-text credentials are needed during execution.
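For example, a profile can be created with the aws configure command; the profile name below is just an illustration. The CLI prompts for the access key, secret key, default region, and output format and stores them in the .aws folder under your user profile rather than in your scripts.
aws configure --profile spadmin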
AWS PowerShell Tools
The preferred administrative command-line tool for Windows is PowerShell. The AWS CLI can function from the command line alone using batch files and JSON files; for extended functionality, PowerShell and XML files are used. The AWS Tools for Windows PowerShell module must also be downloaded, installed, and configured. To use the AWS Tools for Windows PowerShell, your system must meet the following requirements: Microsoft Windows XP or later, and Windows PowerShell 2.0 or later.
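Before running any launch scripts, a quick sanity check from the command prompt can confirm that both toolsets are installed (a minimal sketch):
aws --version
powershell -NoProfile -Command "Get-AWSPowerShellVersion"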
Obstacles
It has been said that “Windows itself is still hostile to automation.” In addition, AWS does not support many features that would make the automation process more Windows-friendly. For Windows and SharePoint specifically, there are many steps that require continuing the script after a reboot; joining an existing domain is one example.
[NOTE: That solution is also beyond the scope of this blog post but is something to be aware of.]
Code
All of that being said, the process begins with a simple batch file. This batch file consists of the command to launch an instance and the references that determine where and how the instance will be deployed. The initial batch file also contains references and paths to the support files necessary to continue the process, including JSON, XML, and PowerShell scripts.
Example
aws ec2 run-instances --iam-instance-profile Name=%ProfileName% ^
 --cli-input-json file://c:\scripts\ec2.win.wfe.launch.json ^
 --user-data file://c:\scripts\ec2.win.wfe.userdata.json ^
 --associate-public-ip-address ^
 >> %filename%
The first line of the launch batch script:
aws ec2 run-instances --iam-instance-profile Name=%ProfileName% ^
launches an instance and assigns an instance profile using the %ProfileName% variable. In a batch file, the command needs to be one continuous line, so the “^” caret symbol continues the command onto the following line; start each continuation line with a space so the arguments do not run together.
The second line of the launch batch script:
--cli-input-json file://c:\scripts\ec2.win.wfe.launch.json ^
references the input-json file which contains the initial settings for the new EC2 server.
Example
{ "DryRun": false, "ImageId": "ami-3586ac5f", "MinCount": 1, "MaxCount": 1, "KeyName": "MyKey", "PrivateIpAddress": "10.10.10.15", "SecurityGroupIds": [ "sg-12508246" ], "UserData": "", "InstanceType": "t2.micro", "Monitoring": { "Enabled": true }, "SubnetId": "subnet-7ee3721f" }
Basically, these are the same settings that would be provided through the AWS Management Console when launching an instance. This example uses a t2.micro, which is currently a free-tier instance type. It is significant to note that image IDs change regularly, so an AMI ID used this week may not work next week. PrivateIpAddress is an optional setting that may be supplied with the launch command.
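Because image IDs change, it can help to look up a current Windows AMI rather than hard-coding one. The following is a sketch; the name filter is illustrative and should be adjusted to the Windows Server version you need.
aws ec2 describe-images --owners amazon ^
 --filters "Name=name,Values=Windows_Server-2012-R2_RTM-English-64Bit-Base-*" ^
 --query "Images[*].[ImageId,Name,CreationDate]" --output text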
The third line of the launch batch script:
--user-data file://c:\scripts\ec2.win.wfe.userdata.json ^
references the user data file. The user data file is basically a script that begins to configure the instance and puts in place the necessary hooks and settings that allow additional scripts to be launched and XML files to be referenced for configuration. Since we are in a Windows environment, we are using a PowerShell script. Note that the file must begin with the <powershell> tag and end with the </powershell> tag.
The following example starts by creating setup logs for troubleshooting, in the event that any errors occur, using the Start-Transcript command. Once the initial directories are set up and the log is started, a function is called to begin downloading the resource files. Once the resource files have downloaded, the XML file is read for the settings needed to continue. Breaking the process down keeps the scope of each function limited and easily modified, for a less brittle implementation.
Next, the script sets the IP configuration before continuing and then launches a configuration script that was downloaded with the other resources. By storing different resources, XML files, and configuration scripts in different S3 buckets, multiple server types can be staged. The bucket choice can be passed to the batch file or hard-coded into the user data script. In the more elaborate version of my launch batch script, user input is gathered and stored in a %ServerType% variable that is concatenated into the reference paths for the support files.
Example
<powershell>
function Create-SetupLogs {
    $SetupLogDir = "c:\SetupLogs"
    $SetupLog = "SetupLog.txt"
    $ErrorActionPreference = "SilentlyContinue"
    mkdir $SetupLogDir
    $path = $SetupLogDir + "\" + $SetupLog
    Start-Transcript -Path $path -Append
}

function Download-Resources {
    Write-Host "Begin Download Resources function"
    $ScriptsDir = "c:\Scripts"
    mkdir $ScriptsDir
    cd $ScriptsDir
    $localPath = $ScriptsDir
    $Bucket = "win-code-repository"
    $keyPrefix = "wfe"
    $objects = Get-S3Object -BucketName $Bucket -KeyPrefix $keyPrefix
    foreach ($object in $objects) {
        $localFileName = $object.Key -replace $keyPrefix, ''
        if ($localFileName -ne '') {
            $localFilePath = Join-Path $localPath $localFileName
            Copy-S3Object -BucketName $Bucket -Key $object.Key -LocalFile $localFilePath
        }
    }
    Write-Host "Completed Resource download function"
}

function Import-XMLFile {
    Write-Host "Begin Import-XMLFile function"
    [xml]$varFile = Get-Content c:\scripts\ec2.userdata.xml
    if ($varFile) {
        Write-Host "Using ec2.userdata variable file..."
        [string]$ServerType = $varFile.DocumentElement.Variables.ServerType
        [string]$SubnetMask = $varFile.DocumentElement.Variables.SubnetMask
        [string]$Gateway = $varFile.DocumentElement.Variables.Gateway
        [string]$PrimaryDNSServer = $varFile.DocumentElement.Variables.PrimaryDNSServer
        [string]$Domain = $varFile.DocumentElement.Variables.Domain
        [string]$ServiceAccount = $varFile.DocumentElement.Variables.ServiceAccount
        [string]$Password = $varFile.DocumentElement.Variables.Password
    }
    else {
        Write-Host "Unable to resolve installation variable file."
    }
    Write-Host "Completed Import-XMLFile function"
    return $ServerType, $SubnetMask, $Gateway, $PrimaryDNSServer, $Domain, $ServiceAccount, $Password
}

function Set-StaticIP ($ServerType, $SubnetMask, $Gateway, $PrimaryDNSServer, $Domain, $ServiceAccount, $Password) {
    Write-Host "Set static IP, subnet mask, gateway, DNS server"
    # Get the current private IP address for this server
    $PrivateIp = ((ipconfig | findstr [0-9].\.)[0]).Split()[-1]
    # Enable Network Discovery
    netsh advfirewall firewall set rule group="network discovery" new enable=yes
    # Set up the network adapter for the domain join
    $wmi = Get-WmiObject win32_networkadapterconfiguration -Filter "ipenabled = 'true'"
    $wmi.EnableStatic($PrivateIp, $SubnetMask)
    $wmi.SetGateways($Gateway, 1)
    $wmi.SetDNSServerSearchOrder($PrimaryDNSServer)
}

################ Start Processing Logic #######################
# Set-ExecutionPolicy Unrestricted -Force
Write-Host "Start Processing"
try {
    # Call the Create-SetupLogs function
    Create-SetupLogs
    # Call the Download-Resources function
    Download-Resources
    # Call the Import-XMLFile function
    $return = Import-XMLFile
    $ServerType = $return[0]
    $SubnetMask = $return[1]
    $Gateway = $return[2]
    $PrimaryDNSServer = $return[3]
    $Domain = $return[4]
    $ServiceAccount = $return[5]
    $Password = $return[6]
    # Call the Set-StaticIP function
    Set-StaticIP $ServerType $SubnetMask $Gateway $PrimaryDNSServer $Domain $ServiceAccount $Password
    # Launch the configuration script downloaded from S3
    Invoke-Expression ".\dhs.hsin.hadr.ec2.win.hrd.config.ps1"
}
Catch {
    Write-Host ""
    Write-Host "Error : " $Error[0] -ForegroundColor Red
    throw
}
Finally {
    Write-Host "The User Data script has completed execution"
}
</powershell>
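For context, the ec2.userdata.xml file read by the Import-XMLFile function might look something like the following. This is a hypothetical sketch inferred from the property paths in the script; the root element name and all values are placeholders.
<?xml version="1.0"?>
<Settings>
  <Variables>
    <ServerType>wfe</ServerType>
    <SubnetMask>255.255.255.0</SubnetMask>
    <Gateway>10.10.10.1</Gateway>
    <PrimaryDNSServer>10.10.10.10</PrimaryDNSServer>
    <Domain>corp.example.com</Domain>
    <ServiceAccount>corp\svc-setup</ServiceAccount>
    <Password>PlaceholderPassword</Password>
  </Variables>
</Settings>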
The fourth line of the launch batch script:
--associate-public-ip-address ^
is optional and assigns a public IP address for internet-facing servers. Logic to include or omit this option can be added to the launch batch script.
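One simple way to handle that logic is to build the flag into a variable, for example keyed off the %ServerType% value gathered earlier (the WFE check here is just illustrative), and then include %PublicIpFlag% in the run-instances command:
set PublicIpFlag=
if /I "%ServerType%"=="WFE" set PublicIpFlag=--associate-public-ip-address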
The fifth line of the launch batch script:
>> %filename%
is more important than it might seem. This line redirects the output of the previous command to the text file named by the %filename% variable, which can be hard-coded or defined earlier in the launch batch script. As it turns out, the command outputs the metadata associated with the new instance, which would otherwise just be displayed on the console. The metadata file can be saved as documentation for the newly created instance, and important information that will be needed later in the process can be extracted from it. For instance, the following code iterates through the metadata file and extracts the instance ID:
for /f delims^=^{^"^ tokens^=1-5 %%I in (%filename%) do (
    echo "I %%I J %%J K %%K L %%L M %%M"
    set ID=%%L
    If %%J == InstanceId GOTO :Done-2
)
:Done-2
echo The new Instance Id is: "%ID%"
pause
Once the instance ID is known, the administrative password for the instance can be retrieved using the private key specified at launch (the KeyName in the JSON above). You can then use Remote Desktop to access the new instance as needed, or use the password programmatically for additional operations. The administrative password is necessary for many operations, such as programmatically joining the instance to an existing domain. The following AWS CLI snippet is used to retrieve the password:
aws ec2 get-password-data --instance-id %Instance-ID% --priv-launch-key C:\Scripts\MyKey.pem
[NOTE: It can take several minutes before the password is available, so logic must be added to retry after a specified period or to keep trying until a password is returned. The alternative is to run the command manually and repeatedly until the administrative password is returned. In addition, you can now set the computer Name tag, which is useful for identifying the instance and for scripted backup scenarios.]
Example
:: Set computer Name tag
set TagValue=HSINCor-%serverType%-test2
aws ec2 create-tags --resources %ID% --tags Key=Name,Value=%TagValue%
echo.
echo The computer name has been set to: %TagValue%
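For the retry logic mentioned in the note above, a minimal sketch looks like the following; the polling interval, temp file name, and use of %ID% for the instance ID are illustrative. The loop keeps polling until get-password-data returns a non-empty PasswordData value (a maximum attempt count could be added as well).
:GetPassword
aws ec2 get-password-data --instance-id %ID% --priv-launch-key C:\Scripts\MyKey.pem --query PasswordData --output text > password.tmp
set AdminPassword=
set /p AdminPassword=<password.tmp
if "%AdminPassword%"=="" (
    echo Password not available yet, waiting 30 seconds...
    timeout /t 30 >nul
    goto :GetPassword
)
echo The administrator password has been retrieved.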
The Wrap-Up
There are situations when it is desirable to launch instances one at a time. Having an AWS CLI script can greatly reduce the time to launch and allows an array of server types to be preconfigured and ready for deployment. The programmatic approach also enhances the ability to treat instances as disposable. In a dev environment, it is especially useful to be able to launch an instance to test “what if?” configurations. Scripting also offers repeatability when launching multiple servers. Last of all, scripting offers very granular control of launch parameters without having to use Remote Desktop to log in and configure the servers.
What other benefits do you see while using AWS CLI scripts in bootstrapping Windows servers? Share your wisdom with us in a comment below!
Found this blog post useful? Make yourself comfortable and check out our blog home page to explore other technologies we use on a daily basis and the problems we've solved in our day-to-day work. To make your life easier, subscribe to our blog to get instant updates sent straight to your inbox.