Connecting Multiple Windows Azure Virtual Networks with AWS

This post is inspired by the Connecting Windows Azure to Amazon post that Michael Washam wrote. In his post, he showed how to connect a Windows Azure Virtual Network (VNET) to a Virtual Private Cloud (VPC) hosted in Amazon Web Services (AWS) with a site-to-site VPN and OpenSwan.  I will extend his post to show you how to connect multiple Windows Azure VNETs together. I will use the following architecture diagram as an example for illustration.

cloud hub

The address space for the VPC in AWS is, and there are four VNETs in Windows Azure with the following address spaces.

  • Windows Azure VNET 16:
  • Windows Azure VNET 20:
  • Windows Azure VNET 24:
  • Windows Azure VNET 28:

The VNETs do not have to be in the same Windows Azure account, and they don’t have to be in the same region. The address space has to be unique among the VNETs and the VPC. You need to create one Local Network per VNET. You can create a Local Network when you create a VNET, but I recommend creating the Local Networks in advance. Then you can just pick a Local Network from the drop-down list when you create a VNET.

local networks

There are four VNETs, so four Local Networks are required. The Address Space of each Local Network is basically all the other address spaces minus the address space of the VNET itself. I named the Local Networks vnet16-local, vnet20-local, vnet24-local, and vnet28-local for VNET 16, VNET 20, VNET 24, and VNET 28 respectively.

  • vnet16-local:,,,
  • vnet20-local:,,,
  • vnet24-local:,,,
  • vnet28-local:,,,

Local Networks in Windows Azure are equivalent to Route Tables in AWS. The VPN Gateway Address is the Elastic IP of the OpenSwan Linux instance in AWS.

In the OpenSwan Linux instance, you will need to create four connections. I recommend using one configuration file per connection under the /etc/ipsec.d folder. The files have to have .conf as an extension. I called them aws-to-vnet16.conf, aws-to-vnet20.conf, aws-to-vnet24.conf, and aws-to-vnet28.conf respectively. The format of the configuration files is slightly different from the one Michael showed.

  • [CONNECTION NAME] – The name of the IPSec tunnel connection. I would just use the name of the configuration file without the extension, such as aws-to-vnet16.
  • [LOCAL NETWORK ADDRESS SPACE] – The Local Network address spaces. They are defined in the Local Network in Windows Azure. For example:,,,
  • [AZURE VNET GATEWAY] – The Gateway IP Address of  the Windows Azure VNET
  • [AZURE VNET ADDRESS SPACE] – The Windows Azure VNET address space. It is defined in the Virtual Network in Windows Azure. For example:
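Putting the placeholders together, one of the four files (say /etc/ipsec.d/aws-to-vnet16.conf) might look like the sketch below. This is an assumption based on common OpenSwan site-to-site configurations for classic Azure gateways, not a copy of the original post's file; the ike/esp proposals in particular are illustrative:

```
conn [CONNECTION NAME]
    authby=secret
    auto=start
    type=tunnel
    left=%defaultroute
    # All the other address spaces (VPC plus the other VNETs),
    # as defined in the matching Local Network in Windows Azure.
    leftsubnets={[LOCAL NETWORK ADDRESS SPACE]}
    # The Gateway IP Address of this Windows Azure VNET.
    right=[AZURE VNET GATEWAY]
    # This VNET's own address space.
    rightsubnet=[AZURE VNET ADDRESS SPACE]
    ike=aes128-sha1;modp1024
    esp=aes128-sha1
    pfs=no
```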

You will need to update the /etc/ipsec.secrets file to include one entry per VNET with the following format.

  • [AZURE VNET GATEWAY] – The Gateway IP Address of  the Windows Azure VNET
  • [PRE-SHARED KEY] – The pre-shared key of the VNET Gateway. You can retrieve it from MANAGE KEY in the VNET Dashboard.

manage key

This is the sample /etc/ipsec.secrets.
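The sample below is a hedged sketch of the format with placeholders rather than real values; each line pairs a VNET gateway IP with its pre-shared key:

```
# One entry per VNET: <gateway IP> %any : PSK "<pre-shared key>"
[VNET 16 GATEWAY] %any : PSK "[VNET 16 PRE-SHARED KEY]"
[VNET 20 GATEWAY] %any : PSK "[VNET 20 PRE-SHARED KEY]"
[VNET 24 GATEWAY] %any : PSK "[VNET 24 PRE-SHARED KEY]"
[VNET 28 GATEWAY] %any : PSK "[VNET 28 PRE-SHARED KEY]"
```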

Don’t forget to update the Security Group in AWS to allow UDP 500 and 4500 from the Azure VNET Gateways to the OpenSwan Linux instance.

security rules

Restart the ipsec service in the OpenSwan instance to establish IPsec tunnels between VPC and Windows Azure VNETs.

If you need other instances in your VPC to connect to the Windows Azure VNETs, you will also need to add route entries to the Route Table in AWS. The Destination is the Windows Azure VNET Address Space, and the Target is the Elastic Network Interface or the Instance ID of the OpenSwan Linux instance.

route table

You should be able to communicate among the VNETs through AWS. Note that the OpenSwan Linux instance is a single point of failure in this architecture. You may want to consider creating a second elastic network interface for the OpenSwan Linux instance and setting up a standby OpenSwan Linux instance for failover. When the primary OpenSwan Linux instance goes down, you can switch the second elastic network interface over to the standby instance so that it takes over for the primary.

Posted in Uncategorized | Tagged , , , , , , , | 4 Comments

0 KB DATA IN/OUT in Site-to-Site VPN with Cisco ASA 8.4

I was working with a customer to set up a site-to-site VPN between Windows Azure and a corporate network. On the Windows Azure Virtual Network Dashboard, it showed the VPN tunnel was connected, but DATA IN and DATA OUT stayed at 0 KB even after a long time. Firewalls were opened to allow traffic from the Windows Azure gateway into the corporate network. What went wrong?


The router on the corporate network was a Cisco ASA 5500 Series device running ASA OS version 8.4. A VPN configuration script was downloaded from the Virtual Network Dashboard in Windows Azure, but the script was for OS version 8.3.

Obviously, the script did not work well for OS version 8.4. It turned out that two changes were required in the following sections to resolve the issue.

  • Internet Key Exchange (IKE) configuration
  • Tunnel configuration

Internet Key Exchange (IKE) configuration

In this section, replace isakmp with ikev1 on the second line before policy 10.
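As an illustration (the exact policy values come from the downloaded script and are assumptions here; the point is the isakmp-to-ikev1 keyword change), the corrected section might look like:

```
! ASA 8.4 syntax: "crypto ikev1 policy" replaces the 8.3 "crypto isakmp policy"
crypto ikev1 policy 10
  authentication pre-share
  encryption aes
  hash sha
  group 2
  lifetime 28800
```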


Tunnel configuration

In this section, add ikev1 in front of the keyword pre-shared-key.
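A sketch of the corrected tunnel-group section, with the gateway IP and key as placeholders (the surrounding lines come from the generated script; only the ikev1 prefix is the actual fix):

```
tunnel-group [AZURE GATEWAY IP] type ipsec-l2l
tunnel-group [AZURE GATEWAY IP] ipsec-attributes
  ! ASA 8.4 requires the ikev1 keyword in front of pre-shared-key
  ikev1 pre-shared-key [PRE-SHARED KEY]
```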


After re-running the modified script in the Cisco VPN device, the IN/OUT KB started to increase. VMs were able to communicate between the two networks via PING. Everything seemed to work fine.



Auto Start and Stop Your EC2 Instances

Have you ever forgotten to shut down test instances that you used for just a few days and then got a surprise when the monthly AWS bill came? Do you want to stop your development instances automatically at night and on weekends when you don’t use them? If your answers are yes, you will want to continue reading this post.

It is fairly easy to set up a schedule to start and stop your EC2 instances automatically to control your EC2 usage. There are quite a few 3rd-party providers offering this service. If you search for “EC2 schedule” or “auto start stop EC2” on Google or in the AWS forums, you will be able to see what is available. I list a few of them in alphabetical order for your reference in case you are interested:

Of course, there is a monthly subscription fee associated with these 3rd-party services, and you also need to give your access key and secret key to the 3rd-party service provider. These providers offer more features than just auto-starting and stopping EC2 instances, so it is probably worth using them if you plan to use those other features as well. Otherwise, it may be hard to justify when you can do it on your own without revealing your credentials to a 3rd party.

In fact, AWS provides all the APIs you need to manipulate EC2 instances; that is how these 3rd-party providers are able to manage AWS resources. It is indeed pretty straightforward. All you really have to do is put together a simple Python script that leverages these APIs through Boto, which is well documented and supported.

In this post, I will show you how to use a micro instance with an IAM role to start and stop your EC2 instances automatically within a region. Here are the 3 simple steps to get started:

  1. Create an IAM role
  2. Provision a micro EC2 instance
  3. Tag your instances

STEP 1: Create an IAM Role

On the IAM Management Console, go to Roles and click Create New Role to open a wizard that starts the process of creating a new role. In the screenshots below, I named the IAM role ec2-operator. You can call it whatever you want as long as you can identify it.


Type a self-explanatory name in the Role Name field. It is used for identifying your IAM role, and you will need the name later when you create the micro EC2 instance.


On the next screen, select AWS Service Roles and then Amazon EC2.


On the next screen, select Custom Policy.


Type an arbitrary name in the Policy Name field.


Paste the following JSON block into the Policy Document field to indicate that the IAM role is allowed to perform the describe instances, start instances, and stop instances actions.
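A minimal policy granting exactly those three actions would look like the fragment below. This is a reconstruction, not the original block from the post, though the action names are the standard EC2 API actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
```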

Review the role information and click Create Role to finish. The new role should be created immediately.


STEP 2: Provision a micro EC2 instance

I recommend using a micro EC2 instance. I think it is sufficient to handle what we need here, which is to set up a cron job that runs a Python script to process auto-start and auto-stop requests, but feel free to use a different instance type.

On the EC2 Management Console, click Launch Instance to start the process of provisioning a micro EC2 instance. Let’s call this instance the EC2 operator.


I recommend selecting the latest Amazon Linux AMI, which has the AWS tools, Python, and Boto pre-installed.


You can pretty much take the default value for each step except on Step 3: Configure Instance Details and Step 6: Configure Security Group.

On the Configure Instance Details screen, make sure you select the IAM role you created and assign it to the instance. In my case, I selected ec2-operator as the IAM role. You will not be able to change the IAM role for the instance once it is launched.


You can cut and paste the following shell script into the User Data field. It will install a few Python libraries and modules, create a Python script, and set up a cron job to execute it every 5 minutes. The shell script is set to operate on instances in all regions; in other words, the script will go through each region to start and stop instances.
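The shell script itself is not reproduced here, but the heart of the scheduling logic can be sketched in Python. The following is a hypothetical, simplified matcher (it supports only "*", single numbers, and comma lists per cron field); the real script would combine a check like this with Boto's describe-, start-, and stop-instance calls:

```python
from datetime import datetime

def field_matches(field, value):
    """Return True when one cron field ("*", a number, or a comma list) matches a value."""
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr, when):
    """Match a 5-field cron expression (minute hour day month weekday) against a datetime."""
    minute, hour, day, month, weekday = expr.split()
    # cron counts Sunday as 0, while Python's weekday() counts Monday as 0.
    cron_weekday = (when.weekday() + 1) % 7
    return (field_matches(minute, when.minute) and
            field_matches(hour, when.hour) and
            field_matches(day, when.day) and
            field_matches(month, when.month) and
            field_matches(weekday, cron_weekday))

# An auto:stop tag of "0 14 * * *" means stop at 2 pm (UTC on a default Amazon Linux AMI).
print(cron_matches("0 14 * * *", datetime(2014, 1, 6, 14, 0)))  # True
```

A real implementation would run this every few minutes from cron and also honor the 30-minute grace window described below.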

On the Configure Security Group screen, make sure you do not open the instance to the world in the security group. In fact, you can get rid of the security rule altogether; you don’t really need to SSH to the instance.

STEP 3: Tag your instances

The last step is to go to the EC2 Dashboard and tag the instances that you want to start and stop automatically at certain times. The EC2 operator instance looks for the auto:start and auto:stop tags to determine when to start and stop instances. The names of the tags are case-sensitive, and the values of the tags should be in cron format. For instance, 0 14 * * * is 2 pm every day. The time is based on the OS time of the EC2 operator instance; if you have not changed its default time zone, it will be UTC. The script ignores any instances that do not have the tags set or whose tag format is invalid. The auto:start tag holds the start schedule, and the auto:stop tag holds the stop schedule. Do not tag the EC2 operator instance itself, otherwise it will stop itself.


The script may not start or stop instances at exactly the requested time. For the start operation, it will start the instance some time between 30 minutes prior to the start time and the requested time. For the stop operation, it will stop the instance between the requested time and 30 minutes after. The EC2 operator instance is set to check every 5 minutes, so it will process an auto-start or auto-stop request a few times within the 30-minute window in case an operation does not go well.

I have been using the script for a while, and it has been performing pretty well. I am able to get all my development instances stopped during non-business hours. I can even check the operations with CloudTrail to audit whether the instances were indeed started and stopped within a few minutes of the requested time.


Regenerating Administrator Password for EC2 Windows Instance

If you happen to forget your administrator password for an EC2 Windows instance and don’t have another Windows account to log into the instance to reset it, the good news is that you can easily regenerate it with a few simple steps:

  1. Shut down the Windows instance to detach the root volume
  2. Attach the detached root volume to another instance
  3. Change Ec2SetPassword to Enabled from another instance
  4. Re-attach the root volume to the original Windows instance
  5. Start the Windows instance to retrieve the new password

STEP 1: Shut Down the Windows Instance to Detach the Root Volume

It is always a good practice to shut down the instance before detaching the root volume to make sure I/O is suspended and the volume is not corrupted. This can be done through the EC2 Dashboard: highlight the Windows instance and select the Stop option to shut down the instance completely.


STEP 2: Attach the Detached Root Volume to Another Instance

You will need to detach the root volume before you can attach it to another instance in the same availability zone. You can use the instance ID to locate the root volume and select the Detach Volume option to make it available for another instance. You may see a few volumes if the Windows instance has others attached; you only need to detach the root volume, which is attached as /dev/sda1. Before detaching the root volume, you may want to write down the volume ID, or even tag it, to help you locate it later on.


I recommend attaching the volume to a running Linux instance. You probably can attach it to another Windows instance, but doing so may make the volume non-bootable and prevent the original Windows instance from booting it up properly.


After attaching the volume to the Linux instance, you should see a message in the /var/log/messages log file indicating the volume is detected within the Linux instance. You can use the tail command to confirm it.

I attached the volume as /dev/sdf, so it was attached and mapped as xvdf1 in the messages log file.


Before you can mount the volume, you will need to create a local directory as the mount point. You can use mkdir to create an empty directory. Let’s call the directory /mnt/c-drive.

Now it is time to mount the volume, assuming the device is xvdf1 and the mount point is /mnt/c-drive. Adjust them accordingly to reflect your settings.

STEP 3: Change Ec2SetPassword to Enabled from Another Instance

Once the volume is mounted, change the current directory to the Settings directory.

In the config.xml, set the state of the Ec2SetPassword parameter to Enabled and save it.
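The edit itself is a one-line substitution. The printf line below just creates a stand-in file so the snippet is self-contained; on the real volume you would run the sed command against the config.xml under the mount point (typically /mnt/c-drive/Program Files/Amazon/Ec2ConfigService/Settings/config.xml — the exact path and XML layout may vary between AMI versions):

```shell
# Stand-in for the real config.xml on the mounted volume.
printf '<Plugin><Name>Ec2SetPassword</Name><State>Disabled</State></Plugin>\n' > config.xml

# Flip the Ec2SetPassword plugin from Disabled to Enabled in place.
sed -i 's|<State>Disabled</State>|<State>Enabled</State>|' config.xml

cat config.xml
```

You can of course make the same change with vi or any other editor; sed just avoids opening the file interactively.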

Get out of the /mnt/c-drive directory and unmount it so that you can detach the volume from the Linux instance gracefully.

STEP 4: Re-attach the Root Volume to the Original Windows Instance

The config.xml has been edited and the volume unmounted, so the volume is ready to be detached from the Linux instance and re-attached to the original Windows instance. You can perform these actions in the Volumes section of the EC2 Dashboard. When you re-attach the volume, make sure to set the Device field to /dev/sda1 to indicate this is a root volume; otherwise, the Windows instance will not be able to start.


STEP 5: Start the Instance to Retrieve the New Password

The instance is still in the stopped state. You can start it from the EC2 Dashboard. It may take 15-30 minutes for the new password to be generated.


You can use your key pair to retrieve the new password  with the Get Windows Password option and log into the Windows instance.



Getting Error 1001 during the TFS PowerShell Cmdlets Installation in Windows Azure

It was pretty frustrating to keep getting error 1001 when I tried to enable the TFS PowerShell Cmdlets feature during the installation of the Team Foundation Server 2012 Power Tools in Windows Azure. If I did not select the TFS PowerShell Cmdlets, the installation went well without any errors; the error appeared whenever I enabled the TFS PowerShell Cmdlets feature.

error 1001

I found out that the TFS PowerShell Cmdlets require the .NET Framework 3.5 Features. However, they were not enabled on the Windows Server 2012 R2 IaaS VM instance I created from the Microsoft image in the Gallery, and the image also did not include the source to install .NET Framework 3.5. I was able to confirm this with the Get-WindowsFeature PowerShell cmdlet.
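For reference, the check (and, once the ISO is mounted later on, the install) can be done with these cmdlets; NET-Framework-Core is the standard feature name for .NET Framework 3.5, and the F: drive letter is an assumption based on where the ISO mounts:

```powershell
# Shows the install state of .NET Framework 3.5; "Removed" means the
# payload is not on the image and an alternative source is required.
Get-WindowsFeature -Name NET-Framework-Core

# Once the Windows Server 2012 R2 ISO is mounted as F:, install from it.
Install-WindowsFeature -Name NET-Framework-Core -Source F:\sources\sxs
```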


When I tried to install the .NET Framework 3.5, the Add Roles and Features Wizard displayed a warning asking if I wanted to specify an alternative source path. Note that it was just a warning; it still allowed me to proceed even though I did not specify an alternative source path. Of course, the re-installation of the TFS PowerShell Cmdlets did not work and gave me the same error 1001. It took me a while to realize that the .NET Framework 3.5 was not installed properly.


Because I did not have the source for the .NET Framework 3.5, I downloaded the ISO file of Windows Server 2012 R2 from MSDN and mounted it. It only took a minute or two to download the ISO file within Azure. Then I clicked “Specify an alternative source path” and pointed it to F:\sources\sxs, where the source was. The wizard did not provide an option to browse for the folder, so I had to enter the path manually.


With .NET Framework 3.5 properly installed this time, the re-installation of the TFS PowerShell Cmdlets was completed successfully without any errors.


Generating a Self-Signed Certificate for Windows Azure Cloud Service

In a Windows Azure Cloud Service, you will need an X.509 certificate to enable SSL for your site and RDP for your role instances. For test purposes, you may just want to generate a self-signed certificate instead of getting one from a Certificate Authority (CA). In this post, I will walk through how to generate and apply a self-signed certificate to a Cloud Service.

To create your own self-signed certificate, you can open a Visual Studio command prompt as an administrator and run the makecert command. You need to replace the certificate name with the name you want your certificate to be called:

For example, the following command will create a certificate named:
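A typical invocation looks like the following; mytestcert is a hypothetical name, and the switches mirror the standard self-signed-certificate command from the Azure documentation:

```
makecert -sky exchange -r -n "CN=mytestcert" -pe -a sha1 -len 2048 -ss My "mytestcert.cer"
```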

The makecert command will add the self-signed certificate to the Personal certificate store automatically. You can open CertMgr.msc to view the certificate in the Personal certificate store.


Before you can use it in the Cloud Service, you will need to export the certificate to a pfx format including the private key. In the Certificate Manager Tool, right click the certificate and select All Tasks > Export.


Follow the Certificate Export Wizard and make sure to select Yes, export the private key option.

export private key

You can take the default settings in the Export File Format screen.

export file format

You will need to select the Password option and set a password to protect the exported file. You will need to type this password when you import the certificate to the Cloud Service.


In the File to Export screen, provide where you want to save the exported certificate.

file to export

Click Finish on the confirmation screen to export the certificate.

completing

Go to the Cloud Service to which you want to apply the certificate in the Windows Azure Portal.

cloud service

On the CERTIFICATES section, click UPLOAD at the bottom of the screen to start the import process. You will need to locate your exported file, the one with the .pfx extension, and provide the password you set.

upload certificate

The certificate should be uploaded and imported into the Cloud Service shortly. Make a note of the Thumbprint of the certificate; you will need it when you adjust your Cloud Service application in Visual Studio.

cert uploaded

In Visual Studio, right-click the role in your Cloud Service project, open its Properties, and go to the Certificates section. There you can add the certificate you uploaded to the Cloud Service in Windows Azure. In my screenshot, I have two entries: Certificate1 was the one I added, and the Microsoft.Windows… one was added when I enabled RDP to role instances during the publishing process. You will need to provide the Thumbprint that was captured after you uploaded the certificate. Make sure the Thumbprint is in all capital letters without any spaces or hyphens. The Name of the certificate is an arbitrary name that helps you identify the certificate.

role properties

You will need to add an endpoint in the Endpoints section and associate it with the certificate to enable SSL. The SSL Certificate Name is the name you set in the Certificates section, and the Name is an arbitrary name that helps you identify the endpoint. Make sure to set the Protocol to https. The Public Port is the external port that users will use to access the site; for SSL it is usually 443, but feel free to use a different port for your requirements.


You will also need to select the HTTPS endpoint in the Configuration section to enable HTTPS.


The last step is to publish your cloud service application to Windows Azure.


In the Publish Settings screen, you may want to select the Enable Remote Desktop for all roles option to enable RDP access to the role instances provisioned for your Cloud Service. If you enable RDP access, you will need to set the user name and password used to connect to your role instances. You can use the same certificate to encrypt your user credentials.

publish settings

Once your Cloud Service application is published, you should be able to access the site over HTTPS and RDP into the role instances with the credentials you set.

deployment complete

You will see a warning when accessing the site in the browser because it is a self-signed certificate. If you want to get rid of the warning, you will need to make sure the certificate name matches the site DNS name, and you also need to install the certificate into the Trusted Root Certification Authorities store in your Certificate Manager Tool.


If you prefer to modify the service definition and configuration files directly, you can refer to Configuring SSL for an application in Windows Azure.


Setting up OpenVPN Access Server with CloudFormation

This post continues my previous post about Setting up OpenVPN Access Server in Amazon VPC. To make it easy to launch in an existing AWS VPC, I have put together a CloudFormation template to automate the process. You can find the CloudFormation template in my GitHub repository.

You will be able to set up an OpenVPN Access Server with the CloudFormation template in the CloudFormation Management Console.

Click the Create New Stack button to start the process.

create stack

Give the stack a name and specify where the template is located. The stack name is case-sensitive and has to be unique within your AWS account. It must start with a letter and can only contain alphanumeric characters; in other words, no spaces or special characters are allowed.

select template

You will need to enter a few inputs to specify things like the VPC ID, Subnet ID, admin user name, password, and the like.

specify parameters

Add optional tags to mark the resources created by the stack to help identify them.

add tags

The final step is to review and click the Continue button. The stack will be created and ready shortly. You should be able to get the OpenVPN URLs in the Outputs tab of the stack.


I am going to go through each section of the CloudFormation template. It consists of four sections:

  1. Parameters
  2. Mappings
  3. Resources
  4. Outputs


The Parameters section defines the inputs that you will need to create the OpenVPN Access Server and related infrastructure components.

  • Instance Type – Instance type for the OpenVPN Access Server
  • VPC ID – ID of your VPC where the OpenVPN Access Server will be launched
  • Subnet ID – ID of your subnet where the OpenVPN Access Server will be launched
  • Group Description – Security group description
  • Admin User – Admin user name
  • Admin Password – Admin user password
  • Admin CIDR IP – IP block from which you will access the Admin portal
  • Key Name – Key pair to SSH into the OpenVPN Access Server’s console


The Mappings section defines which AMI should be used when launching the OpenVPN Access Server. It is essentially a mapping table between regions and AMIs; each region uses a different AMI.


The Resources section consists of the resources that will be created in a specific order:

  1. SecurityGroup – A security group for the OpenVPN Access Server
  2. IPAddress – An Elastic IP for the OpenVPN Access Server
  3. Instance – The OpenVPN Access Server
  4. IPAssoc – An association between the elastic IP and the OpenVPN Access Server

Security Group

The SecurityGroup resource defines the firewall rules for the OpenVPN Access Server in the VPC you have specified. The following are the minimum ports you will need:

  • TCP 443 – Users log into the OpenVPN Access Server via this port.
  • UDP 1194 – VPN connections run through this port.
  • TCP 943 – The Admin portal runs on this port by default.

These ports can be changed in the OpenVPN Admin portal if needed.
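As an illustration of how those rules map to the template, the SecurityGroup resource might look roughly like the JSON fragment below. The resource and parameter names here are assumptions rather than copies from the actual template, and UDP 1194 is OpenVPN Access Server's default VPN port:

```json
"OpenVPNSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "VpcId": { "Ref": "VpcId" },
    "GroupDescription": { "Ref": "GroupDescription" },
    "SecurityGroupIngress": [
      { "IpProtocol": "tcp", "FromPort": "443",  "ToPort": "443",  "CidrIp": "" },
      { "IpProtocol": "udp", "FromPort": "1194", "ToPort": "1194", "CidrIp": "" },
      { "IpProtocol": "tcp", "FromPort": "943",  "ToPort": "943",  "CidrIp": { "Ref": "AdminCidrIp" } }
    ]
  }
}
```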

IP Address

The IPAddress resource allocates a public IP (Elastic IP) that will be associated with the OpenVPN Access Server. Users will access the OpenVPN Access Server via this public IP.


Instance

The Instance resource launches the OpenVPN Access Server. Depending on the region where you run the CloudFormation template, it will select the appropriate AMI accordingly. It supplies the admin user name, initial password, and public IP as user-data to the instance, so you no longer have to SSH into the OpenVPN Access Server console to run the initial configuration. Since the password is stored as clear text in the user-data, you probably want to change it afterward.

IP Association

The IPAssoc resource associates the public IP with the OpenVPN Access Server instance.


The Outputs section returns a few things that you will need to access the OpenVPN Access Server after the stack is created:

  • OpenVPN Server Admin Portal – The URL for the  Admin portal
  • OpenVPN Server URL – The URL for the server
  • Group Name – The security group name