The Doppler Quarterly Spring 2016 | Page 13

and cutover window, all application endpoints were updated to point to the new servers and databases in AWS.

The following migration steps were used for both applications:

1. Create cloned servers in AWS using the disk replication product. For the Windows servers, we used an isolated network in AWS to prevent SID, name, and DNS issues on the production network, and only RDP access was allowed into each server, using a temporary local admin account created before the clone.

2. Perform miscellaneous server preparation steps, including updating the time zone and installing monitoring software.

3. Create clones for the database servers during migration.

4. Deploy a two-node Linux-based cluster with HAProxy and keepalived software for each of the applications.

5. Complete application configuration updates:
For Application 1: Update the software cluster configuration to reflect the new IPs and AWS local domain controllers.
For Application 2: Update host names and web server configurations to make them IP address agnostic, and remove the custom bonded network configuration.

The application team did extensive application testing with the cloned servers, including testing each endpoint, database interaction, storage content validation, and testing all the scripts.

Once the servers were ready in AWS, we followed these steps during the migration window:

1. Update the AWS Security Groups to allow the Windows servers to communicate with the production network and Domain Controllers.

2. Perform a final sync of the NFS data to the new AWS-based NFS server. Unmount the old NFS mount points and mount the new AWS-based NFS volumes. This was done incrementally on each of the servers to coordinate with the load balancer pool configuration and avoid any downtime.

3. Update the Citrix NetScaler and HAProxy load balancers with both the old servers and the new servers in AWS, and perform a graceful pool update from old servers to new servers by testing each of the servers in the load balancer pool.

4. Update external and internal DNS. Because the current Citrix NetScaler load balancer configuration was intact, it could serve application requests while DNS changes were being propagated.

5. After validating that the new servers are handling the application requests successfully, drain and remove the old servers from the load balancer pool.

6. Shut down application services on the old servers.

7. Update the group policy back to the pre-migration setting.

8. After validating all tests, initiate the decommissioning process.

The key to achieving zero downtime, beyond the technical architecture and approach, was a thorough application analysis that validated the application behavior using tests. It is also critical to perform the migration in the Dev and Staging environments before Production. Last but not least, effective project management, active involvement from third parties, and consistent communication are key.
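A two-node HAProxy/keepalived pair like the one described in the preparation steps can be sketched roughly as follows. The backend name, server IPs, health-check path, and VRRP values are illustrative assumptions, not the actual AVID configuration:

```text
# /etc/haproxy/haproxy.cfg (fragment) -- old and new servers coexist
# in the pool during cutover; names and IPs are placeholders
backend app_pool
    balance roundrobin
    option httpchk GET /health           # assumed health-check endpoint
    server old-app1 10.0.1.11:8080 check   # on-premises server
    server new-app1 172.31.5.11:8080 check # AWS server

# /etc/keepalived/keepalived.conf (fragment) -- keepalived floats a
# virtual IP between the two HAProxy nodes so either can fail over
vrrp_instance VI_1 {
    state MASTER             # the peer node uses state BACKUP
    interface eth0
    virtual_router_id 51
    priority 101             # peer uses a lower priority, e.g. 100
    virtual_ipaddress {
        172.31.5.100         # placeholder VIP that clients target
    }
}
```

With `check` enabled on each server line, HAProxy only sends traffic to servers that pass the health check, which is what makes the graceful pool update in the migration steps possible.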
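The graceful cutover described above reduces to a simple invariant: new servers join the pool only after passing a health check, and old servers are drained only after every new server is in service, so capacity never drops. A minimal sketch of that logic in Python (the pool model and health-check callable are hypothetical, not a NetScaler or HAProxy API):

```python
# Sketch of a graceful load balancer pool cutover: add new servers,
# verify each one, then drain the old servers. Pool is simulated as a set.

def graceful_cutover(pool, new_servers, is_healthy):
    """Add healthy new servers to the pool, then drain the old ones.

    pool        -- set of servers currently receiving traffic (mutated)
    new_servers -- iterable of servers to bring into the pool
    is_healthy  -- callable: server -> bool (e.g. an HTTP health check)
    """
    old_servers = set(pool)
    for server in new_servers:
        if not is_healthy(server):
            raise RuntimeError(f"{server} failed its health check; aborting")
        pool.add(server)  # new server starts taking traffic alongside old
    # Only after every new server is in service are the old ones drained,
    # so the pool never falls below its original capacity.
    for server in old_servers:
        pool.discard(server)
    return pool

# Simulated cutover: old and new servers briefly coexist in the pool.
pool = {"old-app1", "old-app2"}
result = graceful_cutover(pool, ["new-app1", "new-app2"], lambda s: True)
print(sorted(result))  # ['new-app1', 'new-app2']
```

A failed health check aborts before any old server is removed, mirroring the "test each server in the pool" step above.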
In the end, AVID achieved zero downtime in their migration to AWS. The client team also gained deep insights into their applications through the extensive analysis, testing, and documentation. For many companies, this analysis and the discovery artifacts can be as valuable as the migration itself.