
Sunday, September 13, 2020

Elastic Container Service - Running Tasks and Services

Once the task definition is created along with the container definition, we can now create the task. Go to the Task Definition, click on Actions and choose “Run Task”. This creates a task in the Ecs Cluster; the task will run on one of the cluster instances in the Ecs Cluster.
Once you click on “Run Task”, it opens a new wizard that allows you to choose a cluster and other details to run the task.

I will keep all the remaining values at their defaults. Click on the Run Task button. It will take a few minutes to download the image and then run the container from that image. We can see the running tasks inside the Tasks tab in the Cluster.
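For readers who prefer the API view, the console's Run Task action corresponds roughly to the Ecs RunTask call. Below is a minimal sketch of the parameters as a Python dictionary; the cluster and task definition names are illustrative assumptions, not values from this demo.

```python
# Hypothetical sketch of what the "Run Task" wizard submits, shaped like
# the parameters of the Ecs RunTask API (names are illustrative).
run_task_params = {
    "cluster": "my-ecs-cluster",        # assumed cluster name
    "taskDefinition": "sample-test:1",  # family:revision of the task definition
    "count": 1,                         # how many copies of the task to start
    "launchType": "EC2",                # run on the cluster's Ec2 instances
}

# With boto3 this would be invoked roughly as:
#   import boto3
#   ecs = boto3.client("ecs")
#   response = ecs.run_task(**run_task_params)
print(run_task_params["taskDefinition"])
```

The console wizard fills in the same fields behind the scenes; keeping the defaults is equivalent to omitting the optional parameters.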

Check if the Container is Running
Let's see if the container is running. Click on the Tasks tab, which shows the running task details, including the tasks themselves and the instances they are running on. Click on a task to see more details, note the instance where it is running and log in to that instance.

Run the “docker ps” command to confirm the container is running. We can see the container is up and running.

Running a Service
Once the task definition is created along with the container definition, we can now create the service. Go to the Task Definition, click on Actions and choose “Create Service”.
Once you click on “Create Service”, it opens a new wizard that allows you to choose a cluster and other details to run the service.

I filled in the details above for my application. The important settings are the service type and the number of tasks. As we discussed earlier about service types, I have chosen the Replica type. The number of tasks is 1, which means at least 1 instance of the container will be running at all times.

The service configuration also includes the “Deployments” option, which controls how the service is deployed. The default is “Rolling update”; we can also choose “Blue/green deployment”. Choose the option appropriate for your application.
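The rolling update option maps to the service's deployment configuration in the Ecs API. As a rough sketch, the field names below follow the API, but the values and service name are illustrative assumptions:

```python
# Sketch of a service's rolling-update settings. maximumPercent and
# minimumHealthyPercent bound how many tasks may run during a deployment.
deployment_configuration = {
    "maximumPercent": 200,         # may temporarily run up to 2x the desired tasks
    "minimumHealthyPercent": 100,  # never drop below the desired count
}

service_request = {
    "serviceName": "sample-test-service",  # assumed name
    "desiredCount": 1,
    "deploymentConfiguration": deployment_configuration,
    # For blue/green, a deployment controller of type CODE_DEPLOY is used instead.
}
print(service_request["serviceName"])
```

With these values, a deployment starts a new task first and only stops the old one once the new task is healthy.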

Click Next and choose the default options and create the Service.

In the next article, we will see how we can configure a load balancer to route the requests to the Containers running in the Elastic Container service.

Elastic Container Service - Creating Your First Task Definition

Once the Cluster is created, we move on to creating the Task Definition. As we already discussed, a task definition is a blueprint describing which docker containers to run and represents your application. A task definition contains details about:
1. What containers to run
2. Where the images should be downloaded from to start the containers
3. CPU and memory details
4. Volume details
5. Launch type
6. Networking mode backed by VPC
7. Logging configuration of containers
8. Command to run as Entrypoint
9. IAM roles

A task definition contains one or more container definitions. Say we have a frontend web application and a database the frontend connects to; it is logical to create both containers at the same time, so for this use case we can define both containers in one task definition. Similarly, you will define one or more containers based on your application architecture; a task definition can also contain just a single container.
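The frontend-plus-database example above can be sketched as one task definition with two container definitions. The dictionary below is shaped like an Ecs task definition; the family name, images, links and resource values are all illustrative assumptions:

```python
# Illustrative task definition with a frontend and a database container
# defined together, as described above.
task_definition = {
    "family": "webapp-with-db",     # assumed task definition family name
    "networkMode": "bridge",
    "containerDefinitions": [
        {
            "name": "frontend",
            "image": "my-repo/frontend:latest",  # assumed image
            "cpu": 256,
            "memory": 512,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
            "links": ["db"],  # bridge-mode link so the frontend can reach the DB
        },
        {
            "name": "db",
            "image": "mysql:5.7",
            "cpu": 256,
            "memory": 512,
            "environment": [{"name": "MYSQL_ROOT_PASSWORD", "value": "changeme"}],
        },
    ],
}
print(len(task_definition["containerDefinitions"]))
```

Both containers in this definition are scheduled together as a single task on the same instance.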

Creating a Task Definition:
On the first screen, choose the Launch type - select which launch type you want your task definition to be compatible with, based on where you want to launch your tasks.

On the second screen, give a name to the Task Definition.
Choose the Task Role: this is the IAM role that tasks use to make API requests to authorized AWS services, so choosing it correctly is very important.

Choose the Network mode: there are 4 options to choose from - bridge, host, awsvpc and none. By default, Ecs will start your container using Docker's default networking mode, which is bridge on Linux and NAT on Windows. The awsvpc mode is a networking type provided by AWS, in which a dedicated ENI (elastic network interface) is attached so the container can be reached directly; the other networking modes are provided by Docker itself.

Choose the Task Execution IAM role: this role is required by tasks to pull container images and publish container logs to Amazon CloudWatch on your behalf.

Provide the Task Size, which is the memory and CPU requirements.

Add the Container definition: click on “Add Container” to configure your container details. A popup appears where you fill in the container details. The most important for now are the container name and the container image; we will not touch the other details yet. Other details we can provide in a container definition include health checks, environment variables, network settings, container timeouts, storage, security, resource limits and labels.
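A single container definition with the kinds of details mentioned above (port mappings, environment variables, a health check and log configuration) could look roughly like this; all names and values here are illustrative assumptions:

```python
# Sketch of one container definition with the optional details filled in.
container_definition = {
    "name": "web",
    "image": "nginx:latest",  # assumed image
    "portMappings": [{"containerPort": 80, "hostPort": 8080, "protocol": "tcp"}],
    "environment": [{"name": "ENV", "value": "dev"}],
    "healthCheck": {
        # Command Ecs runs inside the container to check health
        "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
        "interval": 30,  # seconds between checks
        "timeout": 5,
        "retries": 3,
    },
    "logConfiguration": {
        "logDriver": "awslogs",  # ship container logs to CloudWatch
        "options": {"awslogs-group": "/ecs/web", "awslogs-region": "us-west-2"},
    },
}
print(container_definition["name"])
```

The console popup is just a form over these same fields; anything you leave blank is simply omitted from the definition.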

At the end of the second page, we have the Volumes section where we can mount volumes from the host or EBS to a container path. For the demo, I have provided the following details,

Here are the details I provided for my container definition, along with the port details below,
Port Mappings :

Save the Details and save the Task Definition.

With this we have created the Task Definition used to create our Task (container workloads). In the next article we will see how we can run a Task from this task definition.

Elastic Container Service - Creating your First Ecs Cluster

An Ecs Cluster is nothing but a grouping of tasks, services and Ec2 instances. It can be thought of as a logical group of Ec2 instances that host application containers. Ecs clusters are region specific, which means the whole cluster must be in one AWS region.

An Ecs Cluster can be created in two modes:
Ec2 (Linux/Windows) + networking: this mode runs Ec2 instances to host the containers, with networking available by default, which means we launch these Ec2 instances inside a VPC.

Networking only: in this mode we have only a VPC available. Launching the containers on hosts is taken care of by AWS itself. This mode is used with the Fargate launch type, where AWS takes care of launching the instances that host the containers. It can be thought of as a serverless Ecs launch.

Container Instance IAM role :
When we launch an Ec2 instance in an Ecs cluster, it contains an Ecs agent which talks to the Ecs cluster service continuously, sending details of the running containers and receiving instructions from the Ecs cluster service. In order for the Ecs agent inside an Ec2 instance to talk to the cluster service, a specific role needs to be assigned to the Ec2 instance. The role is called “AmazonEC2ContainerServiceforEC2Role”.
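For reference, a role like this has two parts: a trust policy letting Ec2 instances assume the role, and the AmazonEC2ContainerServiceforEC2Role managed policy attached to it to grant the agent its permissions. A sketch of the trust policy document (standard IAM JSON, shown here as a Python dictionary):

```python
# Trust policy allowing Ec2 instances to assume the container instance role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The managed policy attached on top of this trust relationship:
managed_policy_name = "AmazonEC2ContainerServiceforEC2Role"
print(managed_policy_name)
```

When you let AWS create the role for you in the cluster wizard, it produces this same trust relationship and attaches the managed policy automatically.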

Create Cluster ->
As we already discussed, we can see the 2 modes: Ec2 (Linux + Windows) + networking, and networking only. We will go with Ec2 Linux + networking. Once we choose this, AWS takes care of creating the instances for us using the Ecs optimized AMI, a VPC, subnets and also an auto scaling group. This auto scaling group will be used when we need to increase the number of instances in the cluster.

We also have an option to create an empty cluster from the console. There are 2 provisioning models, on-demand and spot; we will go with the on-demand model.

We can choose the instance type from the list box. I will go with “t2.micro”, but it depends on your application workload.

I set the number of instances to 2. The AMI used to create the Ec2 instances is also provided by AWS; we can choose it from the list box, either Amazon Linux 2 or Amazon Linux 1.

EBS storage is 30GB by default, which means we can't choose less than this. Choose the key pair to log in to the Ec2 instances once they are created.

In the network section, we can ask AWS to create a VPC, subnets and all other necessary details, or provide our own VPC and subnet details. I will choose my default VPC and use a security group that I created; the security group is open to all traffic.

Next is the Container Instance IAM role. This is the same “AmazonEC2ContainerServiceforEC2Role” role we discussed earlier, which the Ecs agent on each Ec2 instance needs in order to talk to the cluster service. We can create a role with the specific permissions assigned, or ask AWS to create one for us, which is what I will do here.

We can see that AWS uses CloudFormation templates to launch the Ecs Cluster. We can see the details once the cluster is created, as below,

If we check the stack, we can see that a launch configuration and auto scaling group are created. Once the templates have finished running, we can see that the cluster is created.

The cluster is active, but nothing is shown running in it yet: since there are no tasks to run, no tasks or services appear. However, if we go to the Ec2 console we can see the Ec2 instances running and available, and similarly the launch configuration and auto scaling group.

In the next article, we will see the basics of Task Definition and understand its components.

Elastic Container Service - Understanding ECS Architecture and Components

Containers are the new way of running applications. Containers provide a logical packaging mechanism in which applications can be abstracted from the environment in which they run. This decoupling allows container based applications to be deployed easily no matter what the underlying environment is. 

One important benefit of containers is that application teams can focus on code development without needing to know the underlying infrastructure details, while platform teams focus on deployment and management without needing to know the application details. But containers come with their own challenges, the biggest being running them at scale: it is very hard for platform teams to manage and monitor these containers when there are many of them. This is where container orchestration tools come into the picture.

Understanding orchestrator 
Container orchestration refers to the automated arrangement, coordination and management of software containers. 

Container orchestration is used to automate the following tasks at scale:
1. Configuring and scheduling of containers
2. Provisioning and deployment of containers
3. Availability of containers
4. The configuration of applications in terms of the containers that they run in
5. Scaling of containers to equally balance application workloads across infrastructure
6. Allocation of resources between containers
7. Load balancing, traffic routing and service discovery of containers
8. Health monitoring of containers
9. Securing the interactions between containers.

Amazon also provides a container orchestrator called Ecs. In this article we will understand how Ecs works and how to configure it.

Introducing Ecs
Ecs is a highly scalable, fast container management service that makes it easy to run, stop and manage containers on a cluster.

Ecs runs our containers on a cluster of Ec2 (Elastic Compute Cloud) instances. These instances are created using Ecs Optimized Images, which come with a couple of components pre-installed: Docker and the Ecs agent.

Ecs handles installing containers, scaling, monitoring and managing these instances through both an API and a management console. The specific instance a container runs on, and the maintenance of all instances, is handled by the platform.


Here are the components of the Ecs cluster
Ecr ( Elastic Container registry )
Ecs Cluster
Ecs agent
Task Definition
Task
Docker
Launch Types

Now let's understand the Ecs Cluster and its components.
Ecr Registry - A separate article is written about Ecr and how to push images to the repository from our local machine. Check the link here.

Ecs Cluster - If we want to run our application in a container, we usually build the application into a Docker container and run it. If we run the container with a plain run command, we are responsible for managing it ourselves. In Amazon, an Ecs cluster is a logical grouping of tasks (containers). An Ecs cluster has one or more Ec2 instances where we can run our application containers; each Ec2 instance needs the Docker engine (or another container engine) installed to run them.

Task Definition - A Task Definition is a collection of one or more container definitions and configurations. In Ecs, we define a configuration file that tells Ecs what containers we are trying to run. We can define one or more containers, how the containers are linked, resource definitions like memory and CPU, ports to expose to the host machine, how to collect logs, environment variables, and the storage volumes that need to be attached to the containers.

Task - Once the task definition is defined, we use it to create a task. A task is a running instance of the task definition. If we have one container defined in the task definition, the task runs a single container; if we have more than one container defined, the task runs more than one container. No matter how many containers you define and run as part of the task definition, it is still a single task. Tasks run until they are stopped or exit on their own.

Services - A service is used to guarantee that you always have the defined number of tasks running. For example, say I have defined a task definition sample-test with 1 container definition. Next I define a service sample-test-service from that task definition, stating that I want 1 task running at all times. Now a task (the 1 or more containers defined in the task definition) will run, and the Ecs service will make sure that task keeps running: if we stop the containers within that task manually, the Ecs service will start them again.

There are 2 ways we can run the service: Replica mode or Daemon mode. If we create the service in replica mode with 1 task, then 1 task will run, and if the task dies, the Ecs service takes responsibility for restarting it. If we run the service in daemon mode, a copy of the task runs on every member of the Ecs cluster, and the Ecs service takes care of restarting that container if it exits.
A service is responsible for creating the tasks. Services are mainly used with long-running applications like web services. For example, if I deployed my website powered by Node.js in Oregon (us-west-2), I would want at least three tasks running across three Availability Zones (AZs) for the sake of high availability; if one fails, I still have another two, and the failed one will be replaced.
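The replica and daemon modes described above can be sketched as two service definitions. The field names below follow the Ecs API; the service and task definition names are illustrative assumptions:

```python
# Replica mode: keep a fixed number of copies of the task running.
replica_service = {
    "serviceName": "sample-test-service",   # assumed name
    "taskDefinition": "sample-test",
    "desiredCount": 3,                      # e.g. one per Availability Zone
    "schedulingStrategy": "REPLICA",
}

# Daemon mode: one copy of the task on every instance in the cluster.
# desiredCount is managed by Ecs in this mode and is not specified.
daemon_service = {
    "serviceName": "monitoring-agent-service",  # assumed name
    "taskDefinition": "monitoring-agent",
    "schedulingStrategy": "DAEMON",
}
print(replica_service["schedulingStrategy"], daemon_service["schedulingStrategy"])
```

Daemon mode is a natural fit for per-host helpers like log shippers or monitoring agents, while replica mode suits the web-service example above.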

Another important point is that a service can be configured to use a load balancer: the service creates the containers defined in the task definition and then automatically registers those containers' underlying hosts (Ec2 instances) with the load balancer. Tasks cannot be configured to use a load balancer; only services can.

Ecs Agent: the Amazon Ecs agent is software that AWS developed for Amazon Ecs that allows container instances to connect to your clusters. The Ecs container agent runs on each Ec2 instance within an Ecs cluster and sends telemetry data about the tasks and resource utilization of that instance to the Amazon Ecs service. It also has the ability to start and stop tasks based on requests from Ecs.

The agent is included in the Amazon Ecs Optimized AMI by default. The agent can be installed on other operating systems too, but there can be issues joining the cluster and sending data. The agent also interacts with the Amazon API, as well as Docker.

The components of the Ecs service look like the diagram below.
In the next article we will see how we can create a cluster and create task definition, services and tasks.

Saturday, September 12, 2020

Getting Started with Windows Containers

Containers are a type of operating system virtualization that runs applications along with all their dependencies in a resource-isolated process. Containers have come a long way on *nix based operating systems, and there are many container runtimes available for running containers on Linux. Running containers on Windows is a capability that was added relatively recently.

Docker came up with an installation package for Windows that helps in running Windows containers, but the way containers run on Windows is quite different from containers running on Linux. In 2016, Microsoft partnered with Docker to come up with the Docker specification that supports running Docker Windows containers. Only Windows 10, Windows Server 2016 and Windows Server 2019 support running containers; Docker supports running containers natively on these 3 operating systems only. Most cloud providers offer images with prebuilt Windows container support: we can use the Windows Server 2019 Datacenter with Containers VM image on Azure, and Amazon's Microsoft Windows Server 2019 Base with Containers AMI on AWS, to run our Windows containers.

How does Communication happen?
The Docker Engine communication on Linux is pretty well known. All the tools like the CLI, Compose etc. talk to the engine using REST-based API calls. The engine in turn talks to containerd or runC to create an OCI-compliant container. The underlying host kernel is shared by multiple containers, and kernel features like cgroups, chroot and namespaces provide the required resources to the containers.

The architecture looks the same for most of the top-level components on the Windows side too, but it is quite different at the host level. The kernel mode in Windows is quite different from the Linux kernel, as it includes not just the kernel but other services as well. Various managers for objects, processes, memory, security, cache, Plug and Play (PnP), power, configuration and I/O, collectively called the Windows Executive (ntoskrnl.exe), are available. There are no separate namespace or cgroup implementations in Windows; instead, Microsoft came up with a “Compute Service layer” at the OS level, which provides namespaces, resource control and UnionFS capabilities. This layer is responsible for managing the containers (start, stop etc.), which containerd does on Linux. So there is no separate containerd or runC on Windows; all those tasks are done by this compute layer.

What is different from Linux Containers?
The way Windows containers run on the Windows platform is quite different from the way Linux containers run on *nix based operating systems.

Windows is a highly integrated operating system that exposes its API through DLLs, not syscalls. On Windows, an application cannot directly make a system call like on *nix based systems. Every kernel action the application wants to perform goes through a .dll (dynamic link library) that talks to the Windows manager, which in turn performs the kernel action. The underlying process of how the DLLs work and how they talk to the operating system services is undocumented. In basic terms, all kernel actions like allocating memory, hardware requests etc. go through a .dll to the Windows manager, which then provides the requested service. The same applies to applications running inside Windows containers: every kernel action requested by the application running inside the container goes through the .dll and then to the Windows operating system, so the application running in the container requires the same .dll files for routing kernel actions to the operating system.

For this reason, a Windows container contains the application plus certain system services the application needs to connect to the Windows operating system. This is what we call Windows Server containers, or process containers.

Isolation modes
In order to run a container, Windows provides two different modes of runtime isolation, i.e. process isolation and Hyper-V isolation. These 2 modes let Windows containers run in different ways.

Process Containers 
As we can see in the image above, Windows Server containers or process containers contain both the application process (asp.net or C#) and system processes (.dll and other API-related components). This is the reason why Windows container images are huge in size: they hold the application code along with system processes like DLLs.

This is where a very important point comes into the picture. On a Linux operating system, we can run a CentOS container on a RHEL machine, or a SUSE Linux container on an Ubuntu machine; this works because the containers share the same host kernel. But on Windows, we have a dependency on the system processes (.dll etc.) running inside a container, which can change from build to build of Windows. For this reason, we can only run a Windows container on a Windows machine of the same operating system flavor: we cannot run a Windows container built with Windows 10 on a Windows Server 2016 or Windows Server 2019 host. The container and the host need the same flavor of operating system; often the container will start anyway, but some of its features will be restricted or not guaranteed.

Creating your first Windows Process Container
For creating our Windows container, we will use an Amazon AMI that already has Docker installed to create our Windows instance. Create a Windows Ec2 instance from the AMI “Windows_server-2019-ECS_Optimized” (ami-05f51a63adfbce152). I will be using the Windows 2019 Base machine with Docker preinstalled.
Run the Windows process container by pulling the 2019 image and starting a container from it as below,
docker run mcr.microsoft.com/windows/servercore:ltsc2019

Or we can run the Windows process container by passing the isolation parameter as below,
docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd
The default entrypoint for each Windows base OS image is a console, either cmd.exe or PowerShell. The command can be something similar to how we run on Linux, as below,
docker run -it mcr.microsoft.com/windows/servercore:ltsc2019 cmd.exe
docker run -it mcr.microsoft.com/windows/servercore:ltsc2019 powershell.exe

In both cases we are running the same Docker Windows image, with either cmd.exe or powershell.exe as the landing shell, similar to bash; the default is cmd.exe. Most Docker commands remain the same on both Linux and Windows machines. There are 4 types of base images provided by Microsoft:
Windows Server Core: for supporting traditional .NET Framework applications
Windows Nano Server: for .NET Core applications
Windows: provides full Windows capabilities
Windows IoT: for IoT applications.

Many Windows users want to containerize applications that have a dependency on .NET. In addition to the four base images described here, Microsoft publishes several Windows container images that come pre-configured with popular Microsoft frameworks, such as the .NET Framework image and the ASP.NET image.

We can see that we have 2 base images available for Windows .NET applications: Windows Server Core and Nano Server. Both images are quite commonly used but have a few differences. Nano Server has a significantly smaller API surface, in which some Windows services like PowerShell, WMI etc. are not available. Nano Server was built to provide just enough API surface to run apps that have a dependency on .NET Core.

Note - An important point here: Microsoft does not publish images with the latest tag; there will always be an ltsc (Long Term Servicing Channel) or similar tag. Please declare a specific tag when pulling or referencing images from this repo.

Building Your First Docker Windows Image for Windows Containers
Building a Docker image is much the same as building a Linux image. Below are the Dockerfile contents to run an Nginx process,

FROM mcr.microsoft.com/windows/servercore:ltsc2019
LABEL Description="Nginx" Vendor="Nginx" Version="1.0.13"
RUN powershell -Command \
    $ErrorActionPreference = 'Stop'; \
    Invoke-WebRequest -Method Get -Uri http://nginx.org/download/nginx-1.9.13.zip -OutFile c:\nginx-1.9.13.zip ; \
    Expand-Archive -Path c:\nginx-1.9.13.zip -DestinationPath c:\ ; \
    Remove-Item c:\nginx-1.9.13.zip -Force

WORKDIR /nginx-1.9.13
CMD ["/nginx-1.9.13/nginx.exe"]

All the Commands above are quite the same as those we use in Linux Image Building. 

Build the image using the docker build command as,
PS C:\Users\Administrator\windows-docker-files> docker build -t first-windows-dockerfile .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM mcr.microsoft.com/windows/servercore:ltsc2019
 ---> 8351e66084ac
Step 2/5 : LABEL Description="Nginx" Vendor="Nginx" Version="1.0.13"
 ---> Running in 99a432826c7b
Removing intermediate container 99a432826c7b
 ---> 20ab94cb14d8
Step 3/5 : RUN powershell -Command     $ErrorActionPreference = 'Stop';     Invoke-WebRequest -Method Get -Uri http://nginx.org/download/nginx-1.9.13.zip -OutFile c:\nginx-1.9.13.zip ;     Expand-Archive -Path c:\nginx-1.9.13.zip -DestinationPath c:\ ;     Remove-Item c:\nginx-1.9.13.zip -Force
 ---> Running in c80bfc7227e1
Removing intermediate container c80bfc7227e1
 ---> 7ccd7d81b336
Step 4/5 : WORKDIR /nginx-1.9.13
 ---> Running in a46cf2a47431
Removing intermediate container a46cf2a47431
 ---> 7f8fd47864de
Step 5/5 : CMD ["/nginx-1.9.13/nginx.exe"]
 ---> Running in 75821cd15632
Removing intermediate container 75821cd15632
 ---> f793d005d447
Successfully built f793d005d447
Successfully tagged first-windows-dockerfile:latest

Once the image is built, check the images using “docker images”,
PS C:\Users\Administrator\windows-docker-files> docker images
REPOSITORY                 TAG      IMAGE ID       CREATED         SIZE
first-windows-dockerfile   latest   f793d005d447   2 minutes ago   4.85GB

Run the Docker image using the “docker run” command as,
C:\Users\Administrator\windows-docker-files> docker run -d -p 80:80 first-windows-dockerfile
ef113bcb3567711958909394c6beff4082e618bea92dd57374a7cfb11d847519

Check if the docker container is running or not using,
PS C:\Users\Administrator\windows-docker-files> docker ps
CONTAINER ID   IMAGE                      COMMAND
ef113bcb3567   first-windows-dockerfile   "/nginx-1.9.13/nginx…"
*******

Hyper-V Containers
With the Windows process containers above, we have a challenge: there is a dependency on the host operating system version. As we said earlier, the container operating system version and the host version have to be the same, since the container communicates with system processes (DLLs). Any change on the host operating system can break the application running in the container.

This is where Hyper-V containers came in use. Hyper-V is a hypervisor-based virtualization technology for certain x64 versions of Windows. The hypervisor is core to virtualization. It is the processor-specific virtualization platform that allows multiple isolated operating systems to share a single hardware platform.

Rather than running a Windows container directly on the host operating system, we run the containers in a Hyper-V virtual machine, which in turn runs on the host machine. Hyper-V containers take the base image defined for the application and automatically create a Hyper-V VM from it. Inside the VM we have the necessary binaries and libraries for our application, along with the Windows container. The container still contains the application and the Windows services needed to talk to the kernel; the only difference is that it now runs inside a Hyper-V VM, which provides kernel isolation and separates the host patch/version level from the one used by the application. Another advantage is that multiple Hyper-V containers can share a common base image, and no management is required to create the Hyper-V VM, as it is taken care of automatically.
Creating your first Hyper-V Container
For a Windows process container, we ran it without passing the isolation parameter, since Docker creates a Windows process container by default. For a Hyper-V container, we have to pass the isolation parameter explicitly.

Run the Hyper-V container using,
docker run -it --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd

Once you run the above command, we can still see the container using the “docker ps” command as below,
PS C:\windows\system32> docker ps
CONTAINER ID        IMAGE          COMMAND        CREATED       STATUS              PORTS               NAMES
1fa84490a13b        mcr.microsoft.com/windows/servercore:ltsc2019   "cmd"               ******

Windows also provides another way to check if the Hyper-V container is up and running: we can check the processes using the PowerShell “get-process” command as below,

PS C:\windows\system32> get-process -Name vmwp

Handles  NPM(K)   PM(K)   WS(K)   CPU(s)      Id  SI  ProcessName
-------  ------   -----   -----   ------      --  --  -----------
   1565      18    8764   17028     2.45    9792   0  vmwp
    280      14    5104   19148     1.47   10148   0  vmwp

For every Hyper-V container, there will be a vmwp process created. There is a Virtual machine worker process ( vmwp ) for every virtual Machine that gets created. This is actually the running virtual machine that is encapsulating the running container and protecting the running processes from the host operating system.

Image2Docker
We now have support for running containers on Windows, but many existing Windows applications still run in Hyper-V virtual machines. So there is a need for a tool to analyze these virtual machines, identify what type of application is running, and convert them to a Dockerfile that can be used to build a Windows image.

Image2Docker is a PowerShell module that analyzes a virtual hard disk image (VHDX), scans for the most common Windows components and suggests a Dockerfile.

Install the PowerShell Module
Start a PowerShell prompt and run the following commands to install the Image2Docker module,

PS C:\dockfile> Install-Module Image2Docker
PS C:\dockfile> Import-Module Image2Docker

Once the module is installed, run the “Get-WindowsArtifact” command to see the supported Windows components,
PS C:\dockfile> Get-WindowsArtifact
AddRemovePrograms
AllWindowsFeatures
Apache
DHCPServer
DNSServer
IIS
MSMQ
SQLServer

For this article, I created a virtual machine and used it for our Image2Docker scan as below,
PS C:\dockfile> ConvertTo-Dockerfile -ImagePath "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\Test1.vhdx" -Artifact IIS -ArtifactParam aspnet-webapi -OutputPath c:\i2d2

The command we use is “ConvertTo-Dockerfile”, passing the VHDX location with -ImagePath, the artifact type with -Artifact, and the -OutputPath location where the Dockerfile will be generated.

ConvertTo-Dockerfile
-ImagePath "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\Test1.vhdx"
-Artifact IIS
-ArtifactParam aspnet-webapi
-OutputPath c:\i2d2

In order to run this command we first have to stop the virtual machine. The command mounts the VHDX image file and unmounts it once the scan is done. It also generates the Dockerfile for us in the OutputPath location we pass.

In the OutputPath location, run the “docker build” command as below,
PS C:\dockfile> docker build --isolation hyperv -t sample .
PS C:\dockfile> docker run -d --isolation hyperv sample

Two important things to remember here are to build and run the image passing the isolation parameter; the image build will also fail if we don't pass the isolation parameter as above.

More to Come, Happy Learning :=)
