Deploying an ASP.NET Core Application behind IIS in a Windows Container


Introduction

Containers are a robust platform. Any existing application can be containerized in a few minutes, and the resulting benefits in cost and scalability are well worth it. If you’re new to containers, I strongly recommend you read my latest publication, which is primarily about designing web applications that can be deployed as Windows Containers.


Writing this book has been an incredible quest; containers are conceptually massive and continuously evolving. It takes quite some effort to understand the nitty-gritty of this model and the way it was conceptualized from the ground up to fit into the Windows ecosystem.

This document describes the problems I encountered while deploying a basic ASP.NET Core application behind IIS in a Windows Container. This seemingly simple scenario has a few gotchas which should be understood. The steps below describe in detail the process of deploying a sample ASP.NET Core application called Music Store behind IIS in a Windows Container built using Docker.

Note!! The example below still needs a manual intervention, which is explained at the end. One challenge remains unsolved; I will update this blog if I find a solution.

Problem Statement

Deploying an ASP.NET Core application as a Windows Container is an easy job, because the application is decoupled from the hosting platform: technically, an ASP.NET Core application can host itself using the Kestrel server. When you want to deploy an ASP.NET Core application behind IIS, the complexity arises mainly from the relationship between IIS and Kestrel. In this scenario IIS acts as a reverse proxy for the Kestrel server, simply forwarding requests to and accepting responses from Kestrel. The .NET Core Windows Server Hosting bundle is a must on any server which hosts ASP.NET Core; the bundle installs a module (AspNetCoreModule) on IIS which acts as the handler for ASP.NET Core applications, as explained in great detail here. This module helps IIS run the application by invoking the command “dotnet myapp.dll”, which is stored in the web.config at the application’s root. IIS uses the application pool’s identity to run this command, so that account needs access to the dotnet.exe installed while building the image; this is taken care of by following the steps below.
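For reference, here is a minimal sketch of the relevant web.config section generated on publish (the dll name is this sample's; the exact attributes vary with the tooling version):

<configuration>
  <system.webServer>
    <handlers>
      <!-- AspNetCoreModule is installed by the Windows Server Hosting bundle -->
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <!-- IIS launches the application by invoking: dotnet .\MusicStore.dll -->
    <aspNetCore processPath="dotnet" arguments=".\MusicStore.dll" stdoutLogEnabled="false" />
  </system.webServer>
</configuration>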

Steps to run the container

  • Download the source code from https://github.com/vishwanathsrikanth/mycode/tree/master/aspnetcore-iis
  • Open the solution MusicStore.sln in VS 2015 Enterprise and build the solution.
  • On a successful build, Visual Studio invokes the below PowerShell command, which creates a Docker image. The command is added in the project.json file as a post-compile script.

powershell -ExecutionPolicy ByPass ./Docker/DockerTask.ps1 -Build -ProjectName '%project:Name%-iis' -Configuration '%compile:Configuration%' -Version '1.0.0'

  • The above command creates a Docker image using the Dockerfile shown below.

FROM microsoft/windowsservercore:latest

SHELL ["powershell"]

RUN Add-WindowsFeature Web-Server

RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
    Install-WindowsFeature Web-Asp-Net45

COPY ./Docker/Setups /Windows/Temp/Setups

COPY ./publishoutput test

ENV ASPNETCORE_URLS http://*:80

RUN Remove-Website -Name 'Default Web Site'

RUN New-Website -Name 'test' -Port 80 \
    -PhysicalPath 'c:\test' -ApplicationPool '.NET v4.5'

RUN C:\Windows\Temp\Setups\Install-DotNetCore.ps1 -InstallDir 'C:\Program Files\dotnet'

RUN C:\Windows\Temp\Setups\DotNetCore.1.0.0-WindowsHosting.exe /quiet /install

EXPOSE 80

WORKDIR /test

  • The above Dockerfile installs IIS, the ASP.NET 4.5 framework, .NET Core and the ASP.NET Core Module for IIS. The setup files required inside the container are copied from ./Docker/Setups to C:/Windows/Temp/Setups in the image. The application binaries are deployed to c:/test inside the container, and a new website named test is created under IIS on port 80.
  • The RUN statements install .NET Core and the Windows Server Hosting bundle inside the Windows container image. To build the image by hand rather than through Visual Studio, see the sketch below.
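A command along these lines should work (assuming it is run from the folder containing the Dockerfile; the tag matches the one used in the run command further down):

docker build -t learningwsc/musicstore-iis:1.0.0 .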
  • Run the below command to create a container and open a PowerShell window inside the container.

docker run -p 80:80 -it learningwsc/musicstore-iis:1.0.0 powershell

  • Once the PowerShell window is open, run the below command to reinstall the Windows Server Hosting bundle for ASP.NET Core.

C:\Windows\Temp\Setups\DotNetCore.1.0.0-WindowsHosting.exe /quiet /install

Note: This is a workaround on Windows Containers; for some unknown reason, the installation performed while building the Docker image is lost. You can check whether the module survived with the snippet below.
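A quick check (a sketch, using the WebAdministration module that ships with IIS):

Import-Module WebAdministration
Get-WebGlobalModule | Where-Object { $_.Name -like '*AspNetCore*' }
# No output means the module is missing; re-run the hosting bundle installer as shown above.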

  • The Docker container should now be accessible on port 80 of the container host, as shown below.

[Screenshot: the application responding on port 80 of the container host]

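Tip: on the container host itself you cannot browse the site via localhost because of a WinNAT loopback limitation; browse from another machine, or use the container's NAT IP address, which can be looked up like this (the network name nat is the default and an assumption about your setup):

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" <container-id>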

For more queries, write to: Vishwanath.srikanth@gmail.com or srikanthma@live.com.


DevOps: Deploying Websites and Background Jobs as Docker Containers – Part 2


In the previous part of this blog series we deployed an ASP.NET vNext application as a Docker container manually. In this post we will see how to deploy a console application/background job as a Docker container, and also make the web application and the background job communicate using Azure Storage Queues.

Deploying Background Worker role

The DevOps flow for deploying a web role or a background role is identical; the only difference is the contents of the Dockerfile, sketched below.

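The Dockerfile screenshot did not survive; reconstructed from the tooling of that era, it would have looked roughly like this (the base image tag is an assumption):

FROM microsoft/aspnet:1.0.0-beta4
ADD . /app
WORKDIR /app
ENTRYPOINT ["dnx", ".", "run"]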

As you can see there is no endpoint information here.

Step 1: Publish the binaries of the BackgroundWorker project to the file system.

[Screenshot: publishing the BackgroundWorker binaries to the file system]

Step 2: Run the below commands to build the image and deploy a container for our background worker.

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 build -t backgroundworker -f "C:\BackgroundWorkerDeploy\Dockerfile" "C:\BackgroundWorkerDeploy"

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 run backgroundworker

The background job starts running…

To test the integration, create a user by registering from the web application and hit submit as shown below; the background job receives the message from the UI and mocks sending an email to the user.

[Screenshots: the registration form and the background worker console receiving the queue message]

As you can see, the background worker collected the message from the queue and mocked sending an email to the respective user.

At any point you can hook into the diagnostics of the containers by running the below command.

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 logs [containername]

Happy Coding !! In an extension to this blog I will explain how to balance load across multiple identical web containers using NGINX and the Azure Load Balancer.

DevOps: Deploying Websites and Background Jobs as Docker Containers – Part 1


In my previous blog we discussed the advantages of the new container technology called Docker. In this blog series I will explain how to deploy an ASP.NET web application and a background job as Docker containers.

Tools required:

[Image: list of required tools]

Application Description:

The ASP.NET vNext web application used in this example allows a user to enter a username (email) and password, and mocks a successful registration. A background job (a vNext console application) mocks sending an email to the user with the registration details. The sample application can be downloaded from here.

The Visual Studio 2015 RC Tools for Docker make it easy to build and deploy ASP.NET web sites or background jobs to Docker machines with simple right-click deploy options, but in this sample we will do everything manually to understand the nitty-gritty of the background tasks done by VS 2015.

Deploying an ASP.NET vNext Application as a Docker Container

The below artifacts are required to manually deploy an ASP.NET application as a Docker container: the publish folder, the Dockerfile and a Docker machine.

How to create a Docker Machine:

Below are a few options for creating Docker hosts on Azure. Of course you can also create your own custom Docker host; the details of a custom Docker host are beyond the scope of this article. For more information see here.

  • Create a Linux Machine with Docker extension from https://portal.azure.com
  • Create a Docker Machine from VS 2015 RC.
  • Create one from the Marketplace.

In this example I used the VS extension to create a Docker machine. Right-click on your ASP.NET solution, click Publish and select Docker Containers. Click New to create a new VM with the Docker extension as shown below.

[Screenshot: creating a new Docker VM from the Visual Studio publish dialog]

It might take 3-8 minutes for the machine to be ready. While the machine is being provisioned, let us get the artifacts ready.

Preparing Deployment Artifacts: Dockerfile and Publish Folder

[Screenshot: the HelloDockerWeb ASP.NET vNext project in Solution Explorer]

  1. The Dockerfile contains information about the dependencies of this application, for example: Kestrel, ASP.NET and endpoints (placeholders).
  2. The pubxml file contains the values for various environment parameters and flags like container name, ports and build configuration. This file is used by VS during the publish process to create the appropriate Docker commands. [Note: We are not going to use this file]

In most DevOps scenarios the application is dropped to a target location with all its binaries; we are going to do the same using the Publish to File System option in VS.

[Screenshots: the Publish to File System dialog]

Click Publish to drop the binaries to C:\Deploy as selected above; this is our first artifact. The next step is to prepare the second artifact, the Dockerfile for our web application.

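The screenshot of the file is lost; reconstructed from the line-by-line notes below, it would have looked roughly like this (the base image tag and URL are assumptions):

FROM microsoft/aspnet:1.0.0-beta4
ADD . /app
WORKDIR /app
ENTRYPOINT ["dnx", ".", "Kestrel", "--server.urls", "http://localhost:80"]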

The above file, called the Dockerfile (available in the website solution), contains the commands to build and publish our project as an image.

  1. Line 1: Makes sure the Docker host downloads the aspnet base image before installing our project. It is equivalent to including namespaces in C# development.
  2. Line 2: Adds our project artifacts to their default path inside the image.
  3. Line 3: Sets the working directory to our project path inside the container.
  4. Line 4: EntryPoint command-line arguments, for example the web server name (Kestrel in this case) and the server URLs; we can also override these command-line arguments while deploying the containers.

Since the Docker host is a Linux machine, we are using Kestrel as our web server to deploy our ASP.NET solution. For more details about Kestrel see here.

At this point we should have our build artifacts ready: the project binaries and the Dockerfile.

Deploying Docker Container

Deploying a Docker container is a 2-step process:

  1. Build the Docker image: In this step we create a Docker image for our application, which can be used to deploy any number of containers (on different ports).
  2. Run the Docker image: In this step we deploy an instance of the image, called a Docker container.

Building Docker Image

Run the below command from the VS command prompt to build and publish the image on the Docker host.

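The command was a screenshot; following the same pattern as the worker deployment in Part 2, it would have been along these lines (the host name and paths are assumptions):

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 build -t helloworldweb -f "C:\Deploy\Dockerfile" "C:\Deploy"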

The above command downloads the aspnet base image onto the host machine and also builds our helloworldweb image. Run the below command to get the list of images created on the host machine.

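The command and its output were screenshots; the listing command would have been (same assumed host as above):

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 images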

The above command should list two images: one for our project and one for the aspnet base image.

Run Docker Image

Run the below command to deploy the image helloworldweb on port 80.

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 run -t -d -p 80:80 --entrypoint dnx helloworldweb . Kestrel --server.urls http://localhost:80

In the above command we override the entry-point command-line arguments by supplying the image name and the endpoint we want to use. Running the command should produce the following result.

[Screenshot: docker run output showing the new container ID]

Run the below command to get a list of running containers

docker --tls -H tcp://mydockerworld.cloudapp.net:2376 ps -a

[Screenshot: docker ps output listing the running container]

Docker assigns a unique name to our container, reverent_hawking in this case.

If your Docker host machine is listening on port 80, you should be able to browse the site now, as shown below. For more information on creating endpoints on Azure VMs see here.

[Screenshot: the site running in the browser]

In the next update we will deploy the background job as another container and make them communicate using Azure Storage Queues.

In future versions we will also see how to use Azure Load Balancer to load balance between multiple website containers.

Containerization: Docker on Azure


Containers, Containers, Containers: the new buzzword. Containers are going to significantly impact DevOps. If you are new to container technology, I would definitely recommend you start digging into it, because containers are here to stay.

In modest terms, containerization is a virtualization technology within a single machine (only Linux as of today). For example: Azure provides virtualization using Microsoft data centers; we can build and deploy applications across any data center, and Azure handles allocating resources, creating machines, deploying the packages and managing the machines (the PaaS world). Similarly, Docker uses Linux OS features (LXC, CGroups and Namespaces) to provide virtualization within the OS, hosting multiple containers\application instances, each having its own view of the operating system.

Docker uses Linux kernel features, specifically Namespaces and CGroups, to run multiple isolated containers (for example, instances of an application) on a single OS; each container is a unit of deployment having its own view\share of OS resources like CPU, memory, file system, network I/O etc.

Note: Although each container runs in an isolated environment on a single Linux OS, there are also ways to link containers and share context. I will publish a separate blog focusing on just this, so stay tuned!!

Let us understand containers better with a simple example:

Let us take a simple online discussion forum ASP.NET application which does all the foreground work (login, registration, threads, discussions etc.) with some background jobs like sending emails, resizing images etc. One way to run this on Azure would be to create a Cloud Service with one web role and one worker role, one for each job.

[Diagram: cloud service deployment with separate web and worker role instances]

The biggest question with this approach is: why spin off 4 virtual machines? Even though the platform is managed by Azure, there are a few things you may not like as a DevOps guy. For example: How much time does it take to deploy a new patch? Not less than 3-5 minutes, sometimes more, because Azure spins off a brand new instance once again. What about the CPU utilization of the worker roles? A whole machine just to run a simple job? All my job needs is some share of CPU and memory.

Let us consider another deployment model, why not like this?

[Diagram: a single IaaS VM hosting the web application and jobs]

This time I have used the IaaS model: I used one Azure VM and deployed my web application and jobs inside the machine. Fairly simple, isn’t it? I’m just paying for one machine. But this means we are entirely managing the virtual machine, which is overkill. How do I scale the application instances independently? How do I set up my release cycle? How do I maintain different environments: Staging, QA, UAT, Prod etc.?

Containers come to the rescue here: they help you deploy apps within containers inside a machine. Each container runs under an isolated context. You can create multiple containers (multiple instances of a single application) using images, so you can technically scale up/down when required. All of this can be managed from outside the machine through TCP/REST. With Azure or any cloud platform it is virtualization within virtualization.

The model looks like the one below; as of today this is possible only on Linux machines, until Microsoft ships a container technology for Windows.

[Diagram: containers running inside a VM]

Docker is a popular container technology built on open container standards, using Linux Namespaces and CGroups to deploy applications/services in isolated environments. The Docker team provides all the toolkits necessary to build, ship and run containers on Linux machines. As of today Docker also ships Windows-based clients (boot2docker), and Azure allows you to create Linux VMs equipped with the Docker Engine from the VM Gallery and the Marketplace as well. Docker also provides public and private repositories for hosting your images/templates.

For more information: https://www.docker.com/

How does it work?

Technically, Docker = Docker Engine + Docker Hub (a repository for images, both public and private).

1. First you need a machine with the Docker Engine installed (which is what Azure provides today from the Marketplace; you can also deploy one from VS 2015). The Docker Engine running inside a virtual machine (it could be a physical machine as well) is exposed through a TCP/REST API.

2. Build your application as you do locally.

3. Connect to the Docker Engine using the Windows/Linux clients for Docker (for Windows: Boot2Docker, or the Docker CLI which ships with VS 2015).

4. Push the binaries to the Docker Engine: at this point the binaries are not deployed; Docker creates a reusable image. These images can be public or private, and you can build your own Docker Hub. Docker Hub (https://hub.docker.com) is an online repository which Docker provides for all users; any image you put there is public by default. There are a lot of images available which you can pull at any time or reference in your own images.

5. Run an image, in other words create a container: this is when an instance of your application is deployed.
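To make steps 4 and 5 concrete, here is a minimal sketch of the flow (the image name is a placeholder):

docker build -t myuser/forumapp .        # build a reusable image from a Dockerfile
docker push myuser/forumapp              # publish the image to a repository such as Docker Hub
docker run -d -p 80:80 myuser/forumapp   # run the image, i.e. create a container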

Happy Coding

In the next part of this series we will see how to get started developing (.NET) applications using docker and deploy them to Docker Machines.

Desired State Configuration using Chef, Knife on Azure – Part 2


In the first part of this blog post we covered a few basics of DSC, an introduction to Chef and Knife Azure, and their fundamentals. We also set up our development environment to write Chef code using Knife Azure.

In this part we will begin writing Chef code and running it on Azure VMs using the Azure Management Portal. We will also see how to automate the whole use case: creating Azure VMs with Knife Azure and running Chef DSC on them.

Creating Cookbooks

Let’s create a cookbook first. Open a command prompt as administrator and make your chef-repo the current directory.

As a first step we will create a cookbook named azurecookbook by running the following command.

knife cookbook create azurecookbook


The above command creates a folder structure for authoring the cookbook, as sketched below.
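From memory, the layout generated by knife cookbook create looks roughly like this:

azurecookbook/
    attributes/
    definitions/
    files/
        default/
    libraries/
    providers/
    recipes/
        default.rb
    resources/
    templates/
        default/
    metadata.rb
    README.md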

Each folder has a purpose:

  • attributes stores the variables, connection strings or configuration data that you can use across recipes.
  • files stores the files/folders which you want to move to your destination server(s).
  • recipes stores the recipes themselves.
  • libraries extend the chef-client and/or provide helpers to Ruby code.

For more information: http://docs.chef.io/cookbooks.html

As discussed in the previous part of this blog, we will author a small recipe that installs IIS on a machine and configures default.html.

Open the default.rb file from the recipes folder and add the below code; the responsibility of each code block is described in the comments.
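The original code screenshot is gone; a minimal sketch of what recipes/default.rb would contain, assuming the standard IIS demo recipe of the era:

# Install the IIS role using PowerShell (idempotent: Chef converges only when needed)
powershell_script 'Install IIS' do
  action :run
  code 'Add-WindowsFeature Web-Server'
end

# Ensure the IIS service is enabled and started
service 'w3svc' do
  action [:enable, :start]
end

# Copy default.html from the cookbook's files/ folder to the web root
cookbook_file 'c:\inetpub\wwwroot\default.html' do
  source 'default.html'
  action :create
end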

The default.html file which needs to be copied to the target machine(s) is placed under the files\ folder of the cookbook.

Uploading Cookbook

This cookbook needs to be published to the Chef server so that it is reachable from the chef-client installed on the target machine(s).

As mentioned earlier it can be published to a pre-installed Chef Server or you can sign up for a trial account.

Note: Make sure the knife.rb downloaded from the Chef portal (https://manage.chef.io) is placed under the .chef folder, because this file has all the details needed to connect to your Chef server and is used whenever you run commands.
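With knife.rb in place, the upload itself is a single command:

knife cookbook upload azurecookbook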

The uploaded cookbook should appear in the portal as shown below.

You will be able to view the files and the content from the portal as well.

Now let us create a VM from the Azure Management Portal and run the cookbook's DSC against it. The resulting state of the VM will be that IIS is installed and default.html is configured.

Running Cookbook on the VM

To run a Chef cookbook against the VM, create a VM from the gallery option of the Management Portal and, on the last screen, configure it as shown below.


Ensure that the VM agent is installed. You can run multiple recipes here; for the demo's sake I'm using the one which I've recently uploaded to my trial Chef server.

A few important things:

The knife.rb file which is uploaded to the portal has a property called node_name; make sure the node_name value matches the VM name on the below screen while configuring the VM, otherwise the desired state will not be achieved.

Result

The machine is configured with IIS, and default.html is reachable.



Automation

In fact, you do not have to go to the portal to do this. Knife Azure also has commands to create Azure resources like VMs and storage accounts, and while doing so you can also feed in the Chef DSC so that the complete use case is automated: just one command does it all, along the lines of the sketch below.
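The command screenshot did not survive; a sketch of what it looks like with knife-azure (all values are placeholders, and exact flags vary across knife-azure versions):

knife azure server create --azure-publish-settings-file 'C:\mysub.publishsettings' --azure-dns-name 'chefdemo' --azure-vm-name 'chefdemo' --azure-source-image '<windows-image-name>' --azure-service-location 'West US' --winrm-user azureuser --winrm-password 'P@ssw0rd!' --bootstrap-protocol winrm --run-list 'recipe[azurecookbook]'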

The above command creates a VM with the chef-client installed.

You can simplify the above command by adding the common parameters to the knife.rb file, as sketched below.
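For example, the publish-settings path can move into knife.rb (a sketch; the path is a placeholder):

knife[:azure_publish_settings_file] = 'C:\mysub.publishsettings'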

Since the *.publishsettings file is common to all the commands, I've added it to the knife.rb file; you can do the same for other parameters like location, VM size etc.

We can run the DSC script against the VM using a command like the one sketched below. Together, the whole use case can be automated completely.
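The original command did not survive the migration; one way to do this (an assumption on my part, using the knife-windows plugin) is to trigger a chef-client run remotely over WinRM:

knife winrm 'chefdemo.cloudapp.net' 'chef-client' --manual-list --winrm-user azureuser --winrm-password 'P@ssw0rd!'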


Summary:

Azure DSC is a simple-to-configure, easy-to-run platform for automating your infrastructure and its configuration. It works with both Linux and Windows. Chef is a huge and long-established community, so existing users can easily port their Chef scripts to Azure and start running their workloads there. As we have seen, authoring the scripts is very easy to learn; they need very few lines and can be extended by authoring reusable components.

TIP: It is a common DevOps practice to add and manage DSC scripts along with the code in source control systems, which makes multi-user environments and versioning easy to manage.

Happy Coding !!

 

Desired State Configuration using Chef, Knife on Azure – Part 1


In this 2-part blog post (Part 2) we will learn to automate infrastructure using Chef Desired State Configuration (DSC). This post explains how to set up simple infrastructure and configure it to a desired state (installing software and applications) using Chef on Azure.

In Part 1 we will learn how to set up the development environment, tools and trial accounts needed: all the mind-boggling work I've done over 2-3 days to learn Chef, Knife and Ruby and connect them to Azure is here. If you're from the .NET/MS background and pretty new to open source, trust me, it is going to be a little difficult gathering things from different sources, so this is my small attempt to put it all in one place for you to get started. It's worth it, because it's fun learning new things, always !!

Let us define a few basics here, with some intro to the open source tools I'm going to use.

What is desired state configuration?

DSC is a way to automate creating\building, managing and maintaining infrastructure. It's a configuration-driven approach wherein you describe the desired state of your machine and make your machine(s) adhere to it at all times. Most importantly, the configurations should be as simple as possible: you only say what you need on the machine, NOT how it is going to be installed.

For ex:

  • Which web server do you need on the machine: IIS, Apache?
  • What are the default directories or files you need?
  • Which database platform do you need on the machine: SQL, MySQL, Oracle?
  • Who has access to your machine, or what roles can access it?

Put these in simple configuration files and make machines adhere to them. It is the responsibility of the DSC client on the machine to make it so. I agree this can also be done using remote PowerShell, but there we follow an imperative approach: lots of if-else, loops etc. Here it is a little different; it's idempotent. Even if you run the script on the same machine multiple times, the tool (be it Chef or any DSC-based tool) ensures the desired state is achieved.

What are the different technologies or partners we have on Azure who help run DSC?

There are many clients which run DSC in a wide variety of ways on both Windows & Linux systems. Azure today supports Puppet, Chef, and PowerShell DSC.

This is what the whole picture looks like:

[Diagram: the DSC big picture]

Beyond installing the required software, tools, files and services (collectively called resources), any DSC-based tool should also look for drift (changes to the infrastructure) in the configuration and react accordingly; we will see how Chef reacts. It also helps you easily create multiple copies of the same machine adhering to a configuration.

What is Chef?

Chef provides an automation system for building, deploying, and managing your infrastructure.
Chef code is authored and managed using a folder structure called a cookbook (Chef-specific); each component of a cookbook is called a resource, and Chef code contains reusable definitions that provide instructions for tasks such as configuring a web server. A cookbook may contain more than one recipe (more than one task to do), and each recipe contains reusable resources like files, templates, libraries and attributes.

Chef helps you write recipes (DSC code) and store them on a Chef server (either create your own Chef server or take a trial account; in this blog we will use the hosted service\trial account). The recipes, along with their resources, are bundled together as a cookbook.

Here’s a very basic use case, to install IIS on a VM and configure default page.

  • Create a recipe for installing IIS on a machine
  • Configure cookbook with recipe written above and default.html (file resource)
  • Upload the cookbook to your Chef Server (Hosted or Your Own)
  • Create a VM with Chef Client Installed
  • Run your recipe on the VM (We can do this using Azure Management Portal or script)

Additionally, the chef-client on each VM periodically checks the Chef server for changes in the recipe, and also checks the VM itself. If there is any change in the recipe or the VM's state, the VM is reconfigured accordingly and any drift from the desired configuration is discarded. If you would like to change the desired state of a machine(s), you can edit the cookbook recipe and simply re-publish. You can bundle it along with your application code, and you can have one cookbook per environment (Staging, Testing) or per role (App Server, Monitoring Server etc.).

You can also apply recipes to VMs by environment (staging, production, testing), by role (web role, AD, monitoring server) or by node (individual); the chef-client runs on each and every node.

The last component I want to introduce is

Knife Azure

Knife Azure is an add-on to the Chef Development Kit which helps you create, manage and bootstrap Azure resources. Knife Azure uses the Azure API.

Setting up Development Environment

We are going to execute a small use case (from a Windows machine) which involves creating a VM on Azure, bootstrapping it with IIS and configuring a default HTML page. The sample can be extended to install any web application or service on IIS or any other web server.

Step-1

You will need a Chef server to store Chef recipes, so either create a Chef server (details for creating one are provided at the end of this blog) or create an account on the hosted Chef server here:

https://manage.chef.io/signup

Step-2

Install the Chef Development Kit (ChefDK) for Windows; it can be downloaded from here: https://www.chef.io/

ChefDK installs chef, knife, gem plugins and lots of other components which help you author and run machine configurations from your machine\dev environment. Gem also helps you install sub-components for working with third parties (like Linux, Windows); one such component is Knife Azure, which helps you write code that interacts with the Azure APIs.

Knife-Azure and Knife-Windows are plugins to Knife which are distributed as Ruby gems.

Step-3

Knife is a command-line tool which comes with the Chef Development Kit, and it is where we run our commands from. You can use the Windows command prompt, Notepad or Notepad++; I use PowerShell ISE.

Although the Chef Development Kit helps you create cookbooks and recipes, to work with Azure resources we have to install a Knife plugin called knife-azure.

Open command prompt as administrator and navigate to C:\opscode\chefdk\embedded\bin\ and run the following command.

gem install knife-azure

Once the installation is successful, verify the installation and version using the below command.

gem list knife-azure


Now you should be able to run knife commands.

For example, below is the command to list Azure Virtual Machine images. We can run the commands from the command prompt or from PowerShell.

knife azure image list --azure-publish-settings-file C:\VinAzure-12-5-2014-credentials.publishsettings

Note: The command requires a *.publishsettings file to authenticate against your Azure subscription; it can be downloaded from here: https://manage.windowsazure.com/publishsettings/index?client=powershell

 

Step-4

Once you register and log in to the hosted Chef server as shown in Step 1, you should see a screen similar to the one below.

Note: You will be asked to create an organization here, which is a logical container for your Chef cookbooks and nodes. You can create multiple organizations.

Select the target organization and download the starter kit and extract it to C:\ or any other drive.

The starter kit contains the default Knife configuration file (knife.rb), the authentication file ([orgname]-validator.pem) and sample cookbooks.

The development environment is now ready to author Chef Cookbooks.

Open the command prompt as administrator and make chef-repo your current directory as shown below.


C:\chef-starter\chef-repo>

 

All the knife azure commands look for the default configuration file (knife.rb) and the validation key ([orgname]-validator.pem), which are placed under chef-repo.

The knife.rb file will always be required while authoring and running cookbooks, so make sure it is valid; it can be downloaded anytime from the Chef portal by clicking Generate Knife Config. Its contents look roughly like the sketch below.
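As a sketch (names are placeholders; your generated file will carry your organization's values):

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                'myorg'
client_key               "#{current_dir}/myorg.pem"
validation_client_name   'myorg-validator'
validation_key           "#{current_dir}/myorg-validator.pem"
chef_server_url          'https://api.chef.io/organizations/myorg'
cookbook_path            ["#{current_dir}/../cookbooks"]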

Now you’re ready to author your first cookbook.

In the next part of this blog we will write our first cookbook recipe and run it on an Azure Virtual Machine.

Internet of Things: The Big Picture


The Internet of Things (IoT) has been a buzzword for quite a while; here I go, describing what I know about it and how important it is for you and me.

As a first word of notice, while I collect my thoughts I'm going to be as technology agnostic as possible, since IoT is not tied to any technology or platform like Azure or AWS. IoT is going to be (or should I say, already is) the NEXT revolution in human civilization.

Imagine I sell you a device which easily fits into your car, and on a busy day while you're heading home the device starts speaking to you: “Hey, you are about to reach home in the next 10 minutes. I can set your home to the same temperature as your car while you get there; would you like me to do so?”, and a simple “YES” turns ON your AC and sets the temperature automatically. Now, would you buy it? I would actually jump and snatch it (provided it fits into my budget :))

And what if the same sweet voice continues: “You're out of groceries for this week; would you like me to repeat last week's order at Big Basket (or any other online store)?” Wow!! Isn't it something.

Now, that is the Internet of Things for you.

It is not about you communicating with the device (I would call those voice commands); this is about devices talking to each other and helping you lead a better life, as simple as that.

From a technology perspective, it is sensors (or what we call IoT devices) collecting data\environmental variables (location, air pressure, humidity, temperature, calendars, weight, height etc.) and transmitting them to an always-available centralized store; services (data analytics) analyzing the information and coming up with interesting insights; and your favorite devices bringing those insights to you.

IoT devices could be anything: your phone sensing location information, a wrist wearable monitoring your heart rate, a Kinect sensing your gestures, a web-cam etc. In the future we're going to see more such devices which can collect all kinds of information and transmit it at regular intervals.

The cloud plays a very important role here: it provides highly scalable, always-available infrastructure to easily connect devices and build applications. For example, using Event Hubs (Microsoft Azure) we can log millions of events per second in near real time; combine that with Stream Analytics (and Azure Storage or any other storage service) and we can convert information into facts\insights.

Here is a sample scenario

Original: https://weblogs.asp.net/scottgu/azure-announcing-new-real-time-data-streaming-and-data-factory-services

A user, or another IoT device, is the end recipient of the insights. In the case of another IoT device, the scenario becomes more complex and exciting; and yes, it has no limits.

For developers, how would you embrace this change? The way I see it, it's not a technology or language to learn; it is about innovation, intelligence and making the best use of it. To some extent it has a lot to do with data analytics (big data), so I suggest you keep watching this space pretty closely, as it is going to become the next big thing!! After mobile, after cloud and after big data. Of course it's also about building great applications, so just stay focused towards IoT :)

You can find some interesting IoT example scenarios here:

http://postscapes.com/internet-of-things-examples/

http://blog.nuvemconsulting.com/top-5-internet-of-things-examples

http://readwrite.com/2009/12/08/top_10_internet_of_things_products_of_2009

Storage Options in Microsoft Azure



Data can be broadly classified into two categories:

  1. Transactional/Operational Data
    1. Data that is generated as a result of daily operations, for example: bank transactions, shopping transactions, POS systems.
  2. Analytics/Accumulated Data
    1. Data accumulated over a period of time which is stored for predictive/prescriptive analysis, thereby bringing business insights.

Storage options, for either transactional or analytics data, can also be broadly classified into 2 categories:

  1. Relational Storage
  2. Non-Relational Storage

The idea behind writing this blog is to introduce the storage options we have in Azure and see how DocumentDB fits into the space of NoSQL stores on Azure.

In this blog we are going to see the different technologies available, under the categories of relational and non-relational storage, for storing either transactional or analytics data.

Relational Storage

Relational storage has been around for a long time; it holds a prominent position and is not going to vanish completely. It has a significant set of advantages which non-relational data technologies cannot provide, for example: strongly typed data, transaction support, indexing on secondary columns etc.

Azure provides a handful of technologies to store relational data

SQL Azure: A managed, multi-tenant SQL Server as a service. Users can add servers, and databases to those servers, and are charged on a usage & storage-size basis. For the most part it is similar to on-premises SQL Server.

SQL Server/any relational DB server on a VM: A SQL Server (any version) can in fact be installed on an Azure Virtual Machine (IaaS). For that matter, we can install any relational store, like Oracle or MySQL, on a Windows/Linux virtual machine.

In certain cases storing data in a relational store is overkill, for example modern applications which would like to store simple data (tasks lists, shopping cart items, bookmarks).

Relational data stores, on the other hand, also have a few disadvantages; for example, SQL Server is not infinitely scalable, because it does not support sharding*.

*Sharding allows a database to spread data across multiple machines, providing horizontal scalability.

 

Non-Relational Storage

Non-relational/NoSQL is a set of approaches to storing data without following a fixed structure. Below are a few approaches for storing non-relational data on Azure.

  • Key/Value Stores
    • Data stored in key-value pairs; supports sharding. For example: Windows Azure Table Storage.
  • Column Family Stores
    • Data stored in columns; a key-value system with a little more structure. For example: HBase on an Azure VM.
  • Document Stores
    • Data stored in JSON documents. For example: DocumentDB (a managed service), or MongoDB or CouchDB on Azure Virtual Machines.
  • Graph Stores
    • Data stored in graphs; suitable for storing inherently relational data (social networks). For example: Neo4j on Azure Virtual Machines.
  • Big Data Stores
    • A managed service provided by Windows Azure HDInsight; suitable for big-data analysis such as predictive analysis, targeted advertising etc. The service implements Hortonworks Hadoop MapReduce. For other versions of Hadoop we can deploy the custom software on Azure VMs.

In the next blog we will look at Azure DocumentDB in detail: how well it can be utilized to store JSON documents, along with its querying and optimization options.


Windows Azure VMImages & Updates to CloneVM PowerShell script


The following blog introduces Windows Azure VMImages, announced recently @Build 2014, and the updates made to the CloneVM script.

Build 2014 has been a really exciting event for Azure users and developers, with lots of exciting updates and new features. Here I'm going to publish a few details about the new extension\switch to the Save-AzureVMImage Azure PowerShell cmdlet.

Azure VM Images.

Remember Azure Images? An Azure Image is a SysPrep'ed OS image which can be used to deploy identical VMs for scaling your deployment environment up/down. Similarly we also have Windows Azure Disks; but unlike images, disks are ready-to-deploy entities: a disk can be used to deploy a copy (or a snapshot) of an existing virtual machine. Hmmm… now the real big question is: what about the data disks attached to each VM?

Azure VM Images answer this question with a new switch to Save-AzureVMImage called OSState. There are two options for this switch: Generalized & Specialized.

A generalized AzureVMImage is an image capture (sysprep'ed, for Windows) of a running VM along with all the data disks attached to it. Generalized VMImages are similar to the Azure Images feature we have today, with the addition of capturing data disks as well; using a generalized VMImage we can deploy N identical virtual machines. It's a kind of template\model which can be reused for scaling out your front-end or back-end web servers.

A specialized AzureVMImage is a capture of a running VM along with its data disks. It is similar to the Azure Disk feature we have today. A specialized VMImage can be used to take a snapshot of an existing VM which can be restored later. We can provision a new VM from a specialized VMImage only once.

The below picture presents the new features available with Save-AzureVMImage.

[Diagram: Save-AzureVMImage with the new OSState switch]

Here is how a Generalized/Specialized VMImage can be captured using Azure PowerShell.

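The command screenshots did not survive; they looked roughly like this (service, VM and image names are placeholders):

# Capture a generalized image (run sysprep inside the VM first)
Save-AzureVMImage -ServiceName 'mysvc' -Name 'myvm' -ImageName 'myvmimage' -OSState Generalized

# Or capture a specialized image (a restorable snapshot of the running VM)
Save-AzureVMImage -ServiceName 'mysvc' -Name 'myvm' -ImageName 'myvmsnapshot' -OSState Specialized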

Once the image is created, the procedure to create a VM from it using PowerShell (a quick VM, or one built from an Azure VM config) is still the same, as sketched below.

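A sketch of the quick-VM path (names, credentials and location are placeholders):

New-AzureQuickVM -Windows -ServiceName 'mynewsvc' -Name 'myclonedvm' -ImageName 'myvmimage' -AdminUsername 'azureuser' -Password 'P@ssw0rd!' -Location 'West US' -InstanceSize Small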

Updates to CloneVM PowerShell script.

Thanks to the developers for coming up with such a beautiful update; my CloneVM script now looks more compact and simple.

Unfortunately, VMImages do not capture the endpoints of the source VM, so my script still needs to collect the endpoint information from the source VM and recreate the endpoints on the new cloned VM, roughly as sketched below.
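The endpoint copy boils down to something like this (a simplified sketch; probes and load-balanced endpoints are not handled):

$src = Get-AzureVM -ServiceName $srcService -Name $srcName
$new = Get-AzureVM -ServiceName $newService -Name $newName
foreach ($ep in ($src | Get-AzureEndpoint)) {
    # Recreate each input endpoint on the cloned VM
    $new = $new | Add-AzureEndpoint -Name $ep.Name -Protocol $ep.Protocol -LocalPort $ep.LocalPort -PublicPort $ep.Port
}
$new | Update-AzureVM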

Here is the updated CloneVM, marked as CloneVM_V1.1.

Happy Coding !!

Cloning Azure Virtual Machine using PowerShell


Once a virtual machine is created, it is a tedious task to move it to another location along with its endpoints; in other words, to deep copy it. The below PowerShell script assists in deep copying (OS + data) a virtual machine to a new location\the same location, with endpoints.

Tested with: PowerShell 4.0

Description:
Cloning does a deep copy of the OS + data disks to a new storage location provided by the user and creates a new VM out of it, with endpoints.
Supports both Windows & Linux.

Below is the sequence of actions which take place:
1. Locates the VM and shuts it down after capturing information about the OS, size and data disks
2. Creates a cloud service and storage account (dynamically generates names if the names already exist)
3. Copies the VHDs
4. Creates a new VM
5. Attaches the data disks
6. Adds endpoints (probes and load-balanced endpoints are not considered as of now)
7. Restores the source VM to its original state (if it was shut down as part of the script)

Script Location: https://github.com/vishwanathsrikanth/AzurePowerShell.git

Notes:

  • You might find errors while creating a new disk; ignore them.
    • Reason: While trying to fetch a disk with a proposed name, the script throws an exception that no disk with that name was found; in such cases a new disk with the proposed name is created, or a new name is generated and the process repeats until an available disk name is found. This pattern is common across various scenarios in the script.
  • The script does not yet support adding the VM to the same cloud service\an existing cloud service, since endpoints already existing on the service would need to be load balanced; this option will be available in a future version of the script.