Longtime Windows users are familiar with the lifecycle of a new Windows laptop. When you buy it, the computer is fast and your programs run without any problems. Over time, everything gets a little worse. Little by little the computer becomes slower to start up. Programs stop working or show errors. Eventually the laptop will no longer go to sleep, or takes a long time to wake up. Finally you are forced to start over, reinstalling everything from scratch. Rinse and repeat.
This decay is caused by Windows' historical failure to isolate applications from one another. Each new version of Windows improves on the last, but the problem has never been completely fixed. Over time, the Windows registry, the startup process list, and shared libraries are abused by the applications we install until the stability of the entire machine is compromised.
To avoid this fate on Windows servers, administrators simply avoid installing anything on them. Everything on a Windows server is closely controlled to avoid conflicts. Enormous time and effort go into testing software installations before they reach production. Windows servers are typically single purpose, or run a small set of services known to be safe together.
In Windows, isolation was mainly designed to protect individual users from each other. One user cannot access another user's files. A program run by one user cannot inspect or interfere with another user's application running on the same system. Users cannot impersonate one another.
While this model made sense in the world in which it was created, today's cloud servers rarely have any logged-in users at all. Ideally, no one ever logs onto those servers; those who do are typically troubleshooting problems, and are logged on as administrators.
On today's systems, it is more important to protect programs from one another, even when they are run by the same user. Virtualization solves this problem, but at the cost of memory overhead and the licensing and runtime expense of dedicating a complete operating system to a single application.
Isolation should allow developers to update services and libraries without breaking other applications. It should allow applications to be removed without a trace. Upgrades to the parent system should not affect isolated applications.
It is no secret that over the past decade Google and other companies have solved these problems on the server. Rather than handcrafting servers to run single applications, Google deployed what it called "Linux containers," or simply containers, using a system called Borg.
Containers look like servers to the applications that run inside of them. Containers share the host operating system and use far less memory than virtual machines. Moreover, containers can be “shipped” to available servers in a cluster, running as long as needed and removed when complete. Unlike pre-2016 Windows, Linux could be used as a generic server platform without worrying about cross contamination.
Isolation through containers started to gain traction and companies like Heroku and Google created platforms based on Linux container technology. In 2013, Docker was released and quickly became the standard for containers on Linux.
Microsoft was not sitting still during this time, and worked to add better isolation for Windows applications. When Microsoft saw the explosive popularity of Docker on Linux, it began adding Linux-style containerization features to Windows. Microsoft and Docker then worked together to bring the Docker interface and tooling to Windows.
Microsoft released Windows Server 2016 on October 12th, 2016. This release brought full Docker support and containerization to Windows. Windows Server 2016 containers isolate the file system, registry, Windows services, network, and more.
With file system isolation, applications are no longer required to play nice with every other application on the system. Inside a container, an application has its own isolated file system and can write anywhere without worrying about interfering with applications outside that container.
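As an illustrative sketch (assuming a Windows Server 2016 host with Docker installed and the `microsoft/windowsservercore` base image pulled), two containers started from the same image each see their own private copy of the file system:

```shell
# Write a file inside one container; the change lives only in that
# container's writable layer, not in the shared image or the host.
docker run microsoft/windowsservercore cmd /c "echo hello > C:\state.txt"

# A second container from the same image starts from the clean image
# file system, so the file written above is not visible here.
docker run microsoft/windowsservercore cmd /c "if exist C:\state.txt (echo found) else (echo not found)"
```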
The registry is a notoriously fragile part of Windows, where changes can break applications or the entire system. Now each container has its own completely isolated registry. Applications in different containers can have conflicting registry settings without breaking each other.
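A minimal Dockerfile sketch of this idea (the `MyApp` key and its value are hypothetical, used only for illustration; assumes the `microsoft/windowsservercore` base image):

```dockerfile
FROM microsoft/windowsservercore

# Registry changes made here exist only in images built from this
# Dockerfile; containers from other images never see them, so two
# containers can hold conflicting values for the same key.
RUN reg add "HKLM\Software\MyApp" /v ConnectionLimit /t REG_DWORD /d 50 /f
```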
Windows Services were another shared area of the system. No two services could share the same name. Different versions of services would need different names, or would need to remain backward compatible when updated. Now containers may have isolated Windows services. In many ways, containers are actually a modern replacement for Windows services.
One last example is the networking system. With TCP, each IP address exposes a single range of ports, and a port normally cannot be shared between applications. Now containers can be configured with their own IP addresses, so applications in those containers can keep their default port configuration and simplify their installation.
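One way to sketch this (the network and image names here are hypothetical): Windows containers support a `transparent` network driver that attaches each container directly to the physical network, so each can receive its own IP address, for example via DHCP.

```shell
# Create a transparent network; containers attached to it get their own
# IP addresses rather than sharing the host's address.
docker network create -d transparent TransparentNet

# Two web servers can now both listen on their default port 80,
# each on its own IP address, with no port conflict.
docker run -d --network TransparentNet my-web-app
docker run -d --network TransparentNet my-other-web-app
```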
With container isolation, Windows servers can now serve as generic application platforms in the same way Docker enabled Linux. In this new world the base system remains clean. Hosted applications are shipped to servers and run in containers. Misbehaving applications do not interfere with others. Servers do not degrade over time. Systems are more stable.
I believe these changes are so compelling that containers will rapidly become the default way to deploy server applications on Windows as well as Linux. The payoff for porting applications to containers does not end with isolation and stability. Containerized applications also gain access to the ecosystem building up around Docker.
Windows Docker images can be stored in a registry, a central storage area for container images. Registry services are available from cloud providers. Out of the box, Docker supports pulling images from a registry and deploying them to servers.
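The basic workflow looks like this (the registry host and image names are placeholders):

```shell
# Tag a locally built image with the address of a private registry.
docker tag my-app registry.example.com/my-app:1.0

# Push the image to the registry's central storage.
docker push registry.example.com/my-app:1.0

# Any server in the cluster can now pull and run the same image.
docker pull registry.example.com/my-app:1.0
docker run -d registry.example.com/my-app:1.0
```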
Orchestration services like Amazon EC2 Container Service and Google Container Engine group machines into clusters. A cluster is presented as one large pool of computing resources. You assign resource requirements to containers, and the service finds machines with the available resources and sends your containers to those machines. AWS has announced that it will support Windows Server 2016 on ECS by the end of 2016.
Monitoring also becomes easier in a containerized world. Docker can report statistics on how your application is behaving and how much processing time and memory it is using. It is easier to track down misbehaving applications when you can view each one separately. Running services reliably across a cluster would be impossible without this visibility.
You can also watch how these metrics change over time, and container services let you take automatic actions in response: starting more service instances, for example, or sending an alert.
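For example, Docker's built-in commands expose these per-container metrics directly (the container name is hypothetical):

```shell
# A snapshot of CPU, memory, network, and disk I/O for each container.
docker stats --no-stream

# Recent log output from a single, separately visible application.
docker logs --tail 100 my-web-app
```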
You will also want safety controls in place when you start deploying your containerized applications onto your cluster. You need to protect against runaway processor or memory use in your services. Systems also run both interactive and batch processes, and interactive services need to be able to preempt batch services when they are busy.
Docker now allows you to control the amount of CPU and memory a container is allowed to use. You can configure your batch services to use as much CPU as is available, but provide less when any interactive services are in use. This allows for far greater machine density and can save you money on the cloud.
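A sketch of that configuration (service names are hypothetical): `--memory` sets a hard cap, while `--cpu-shares` sets a relative CPU weight that only matters under contention, so the batch job uses idle CPU freely but yields whenever the interactive service is busy.

```shell
# Interactive service: hard memory cap, high relative CPU weight.
docker run -d --memory 2g --cpu-shares 1024 my-interactive-service

# Batch job: smaller memory cap and a low CPU weight, so it is
# deprioritized whenever the interactive service needs the CPU.
docker run -d --memory 1g --cpu-shares 128 my-batch-job
```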
Windows servers have historically had fewer tools for isolating applications from one another. The arrival of Docker and containers in the Windows ecosystem changes all of this. With Windows Server 2016, Windows is a full participant in the containerized future of hosted application deployment.
We expect most enterprises will shift completely to containerized deployment on Windows Server 2016 environments in time.