Category Archives: Microsoft Azure

Managing and Updating Custom Azure VM Images

When you create a Virtual Machine in Azure, you do this from an image. The Azure gallery contains quite a few images with Windows, several flavors of Linux, and some with middleware such as BizTalk Server, SharePoint Server, and Oracle WebLogic. When updates are available for the operating system or middleware, the images are updated so you don’t have to install the updates yourself after creating a new VM. This is great, because updating can take quite some time. Instead, you can go straight to adding your own software and configuration. In a typical environment this is what costs the most time, regardless of whether the VM is for development, test, or production purposes. You can speed that up by using scripts and desired state configuration through PowerShell DSC, Puppet, or Chef. Another option is to create custom images with your own configuration. This is particularly effective in scenarios where you need to be able to create additional VMs quickly, for instance when ramping up a test team, or adding developers to a team. Even if you script it, post-configuration of a VM created from the gallery can take quite some time. Installing Visual Studio with updates, needed SDKs, etc. can easily take 3-4 hours, and more complicated setups even longer.

Image Types

In Azure you can use two types of images: generalized images and specialized images. The difference between the two is very significant.

Generalized images have been stripped of computer-specific information such as the computer name and system identifier (SID). This ensures that (virtual) machines provisioned from the image are unique. It also means that you need to provide a new set of credentials to access a VM created from it. The images you get from the gallery are generalized images. You generalize a Windows image with sysprep, and a Linux image with the waagent command. The biggest problem of doing this is that some software doesn’t work well after you’ve created a VM from a generalized image, because the software configuration is based on some computer-specific information. For SQL Server, for instance, you need to take some additional measures, as explained in Install SQL Server 2014 Using SysPrep. Another example is SharePoint, which will only work if you don’t run the SharePoint Products and Technologies Configuration (PSConfig) Wizard before creating the image. Basically this means you can install SharePoint, but you can’t configure it yet.

Specialized images on the other hand are basically just a copy of the original virtual machine. If you provision a VM based on such an image, you get an exact copy of the original, including the name, SID, etc. For a single-machine environment, this is not much of an issue. However, in a virtual network and with computers joined to a directory, it is an issue if multiple computers have the same name.

Creating Images

How to create an image has been described in numerous places. This step-by-step guide tells you in detail how to do it for both Windows and Linux VMs. I’ll summarize the steps for both image types, so you have an understanding of the steps involved. Be aware that when you create a custom image, the image is stored in the storage account you created the VM in. In addition, any VM you create from the image will use the same storage account. This has two important consequences:

  1. You can only create a VM in the region where the image is stored. If you need it in other regions as well, you need to copy or recreate it there.
  2. You need to be aware of the limits of a storage account. These are listed here. The most important is the 20,000 maximum for Total Request Rate, which basically boils down to a maximum of 20,000 IOPS per storage account.
    Note: these limits apply to regular storage; the limits for Premium Storage are different.
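
To get a feel for when that limit starts to matter: a standard disk tops out at around 500 IOPS, so a quick back-of-the-envelope calculation (just a sketch of the arithmetic, not an official formula) shows how many fully loaded disks a single account can sustain:

```python
STORAGE_ACCOUNT_IOPS_LIMIT = 20000  # Total Request Rate limit per standard storage account
STANDARD_DISK_IOPS = 500            # rough maximum IOPS of a single standard disk

# How many disks can run at full speed before the account itself is the bottleneck?
max_busy_disks = STORAGE_ACCOUNT_IOPS_LIMIT // STANDARD_DISK_IOPS
print(max_busy_disks)  # 40
```

So roughly 40 heavily used disks will saturate one storage account; beyond that you need to plan for additional accounts.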

Creating a Generalized Image

  1. Create a VM in Azure based on the gallery image of your choice.
  2. Configure the VM to your liking. Keep in mind that some restrictions apply, as discussed earlier. If you’re unsure, check whether the software you install can survive the generalization process.
  3. Generalize the image (see the step-by-step guide for details).
  4. When generalization is done, the VM will be marked as Stopped in the Azure Portal. You can then do a Shutdown in the Azure Portal, so it won’t incur any more charges.
  5. Capture the image. Make sure you select the option I have run Sysprep on the virtual machine as shown below.
    Capture Image

Note that after you’ve captured the image the VM is gone.

Creating a Specialized Image

  1. Create a VM in Azure based on the gallery image of your choice.
  2. Configure the VM to your liking.
  3. Shutdown your VM.
  4. Capture the image. Do not select the option I have run Sysprep on the virtual machine.

After the image is captured, you can start the VM again, and continue using it.

Using Custom Images

You can select the images you created when creating a VM from the gallery. One of the gallery items is My Images, as shown below. You can also see that the image information tells you whether the image is Specialized or Generalized.

Choose Image

When you select a Generalized image, the process of creating a VM is pretty much the same as with an image provided by Azure. The major difference is that you can’t select the storage account, and you can only select the region (or affinity groups or virtual networks in that region) in which that storage account is located. The same applies when you select a Specialized image, but then you also can’t provide the credentials of the administrator account. Those are the same as in the VM the image was created from (so you need to keep that information somewhere).

Keeping Images Updated

Keeping specialized images up to date is easy. You create one VM that you only use as a base. VMs you run in your environment are actually a copy of the base VM. The base VM is turned off most of the time. You just fire it up when you need to apply updates. When you’ve applied updates, you shut it down again and capture an updated version of the image. This is particularly useful in scenarios where there is a single machine that may need to be redeployed at some point. A good example is a production environment in which you want to keep a working copy of a VM around, so you can quickly go back to a working state if the running VM breaks.

If your environment is more complex and you need generalized images, the process is slightly more involved. You still create a base VM as explained above. But then you need to take some additional steps.

  1. Capture the base VM as a specialized image.
  2. Create a new VM from the specialized image (VM 2).
  3. Generalize VM 2.
  4. Create a generalized image from VM 2 (which deletes VM 2).
  5. Delete the specialized image.
  6. Update VM 1 when needed.
  7. Repeat steps 1 through 5 to create an updated image.

Re step 5: alternatively, you can keep the specialized image and delete the base VM, creating a new VM from the specialized image when you need to update. My experience is that it is easier to keep the base VM around.

You may wonder why you wouldn’t just delete the base VM, and create a VM from the specialized template to perform updates in. The reason is that you can only generalize (Windows) VMs two times, so after the first update, you can’t update and generalize again. By keeping the base VM around, you’re always generalizing for the first time.

You typically don’t want to perform updates in the production environment. This is mostly a networking issue, except for the storage account limits discussed earlier. If you have your acceptance environment setup in the same Azure subscription, but in a different VNET, you can update in the acceptance environment, and then promote to the production environment. Remember that because everything is tied the same storage account, this also means the storage account is used for both your acceptance and production environments. Whether this is an issue depends on your specific requirements for acceptance and production environment.

Alternatively you can create a separate “Update VNET”, in which you only perform updates. Lastly, you can copy images from one storage account to another, even if these are not in the same subscription. In that case you have to copy the underlying blobs, and make them into an image. How to do that is explained here.

Optimizing Performance with Windows Azure Premium Storage

Azure recently started the Preview of Premium Storage. Premium Storage provides much better IO performance than standard storage. Standard storage can reach up to 500 IOPS per disk, with a minimum latency of 4-5 ms. Premium Storage on the other hand can reach up to 5000 IOPS per disk, depending on the disk size (bigger disks get more IOPS; see Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads). When tuning performance on Premium Storage, you need to be aware of a few things. First, of course, the limits of the disks you provision. The table below (source: Microsoft) shows these limits.

Disk Type            P10         P20         P30
Disk Size            128 GB      512 GB      1023 GB
IOPS per Disk        500         2300        5000
Throughput per Disk  100 MB/sec  150 MB/sec  200 MB/sec

The second limit is the machine limit, which is determined by the number of CPUs. Per CPU you get approximately 3200 IOPS and 32 MB/s bandwidth (or disk throughput). See the DS series table in Virtual Machine and Cloud Service Sizes for Azure. I will not go into creating a Premium Storage VM (you can read about that here), but rather how to see where your machine is “hurting” if the application you are running isn’t performing well.
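Those two per-CPU figures are easy to turn into the per-machine ceiling you are working against. A small sketch of the arithmetic (the numbers are the approximate DS-series figures quoted above, not exact guarantees):

```python
# Approximate DS-series per-CPU storage limits quoted above.
IOPS_PER_CPU = 3200
MB_PER_SEC_PER_CPU = 32

def vm_storage_caps(cpus):
    """Per-machine IOPS and throughput caps implied by the CPU count."""
    return cpus * IOPS_PER_CPU, cpus * MB_PER_SEC_PER_CPU

# A 2-CPU DS VM: no matter how many disks you attach,
# you won't get past ~6400 IOPS or ~64 MB/s.
iops_cap, throughput_cap = vm_storage_caps(2)
print(iops_cap, throughput_cap)  # 6400 64
```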

Configuring Performance Monitor

When looking for performance bottlenecks you’re basically always looking at 4 things:

  • CPU utilization
  • Memory utilization
  • Disk utilization
  • Network utilization

In this post I will focus on the first three, because I’ve mainly seen issues with these. CPU utilization and memory utilization are single metrics in performance monitor, but disk utilization consists of the number of reads and writes, and the throughput. Another disk metric is the length of the queued IO operations. If the application reaches the disk limits, the length of the queue goes up. To collect these metrics, you need to create a Data Collector Set in Performance Monitor and run that during testing. Take the following steps to do so:

  1. Start Performance Monitor (Windows + R, type perfmon)
  2. In the tree view on the left navigate to Performance \ Data Collector Sets \ User Defined
  3. Right click on the User Defined item and select New \ Data Collector Set, as shown below
    Creating a new Data Collector Set
  4. In the dialog that follows, enter a name, select Create manually (Advanced), and click Next.
  5. Select Performance counter, and click Next.
  6. Click Add…
  7. Select the following counters:
    Counter Instance(s)
    Processor \ % Processor Time _Total
    Memory \ Available Mbytes N/A
    Logical Disk \ Current Disk Queue Length _Total and *
    Logical Disk \ Disk Bytes/Sec _Total and *
    Logical Disk \ Disk Reads/Sec _Total and *
    Logical Disk \ Disk Writes/Sec _Total and *
  8. Click OK.
  9. Set Sample Interval to 1 second, and click Next.
  10. Select the location where the data must be saved. On Azure it makes sense to put the logs on the D: drive, which is a local (temporary) disk, instead of on one of the attached disks.
  11.  Click Next.
  12. If you want to start and stop the collection of data manually, click Finish. Otherwise, select Open properties for this data collector and click Finish.
  13. In the next dialog you can set a schedule for data collection. A very good idea is to set a Stop Condition, either for a maximum duration or a maximum file size.
  14. When you are done, you will see the new Data Collector under the User Defined Collector Sets.

When you’re ready to test your application, click the Data Collector and press the play button, or right click the Data Collector and select Start. When the test is done press the stop button, or right click the Data Collector and select Stop.

Analyzing the Data

The data collected by Performance Monitor is stored in CSV format. To use it, import it into Excel, as follows:

  1. Start Excel and create an empty worksheet.
  2. Go to the Data tab and click From Text under the Get External Data section.
    Import CSV
  3. Select the CSV file generated by Performance Monitor and click Import.
  4. Select My data has headers and click Next.
    Import Dialog 1
  5. Select Comma as delimiter and click Next.
    Import Dialog 2
  6. Ensure . is used as the decimal separator: click Advanced… and ensure the values in the popup are as shown below, then click OK.
    Import Dialog 3
  7. Click Finish.

Once the data is imported, you can create charts that tell you what’s going on. Not all columns are handy at one time, so you may want to create a copy of the sheet and remove certain columns before creating a chart. For instance, in the below chart I only kept the columns with disk reads and disk writes.

Disk IO chart

As you can see, Logical Disk F tops off at approximately 2400 IOs. A closer look at the test data also shows that all disks together never use more than 3160 IOs, while CPU and memory were not impacted. In a second test I added a P30 disk, and moved the data previously on Disk F to this disk (Disk X). The results of that test are shown in the chart below.

Disk IO chart

Notice that disk X tops off at approximately 5000 IOs. Total IOs for all disks never reached above 5300.

Understanding the Data

A key point is that there is more or less a one-to-one relationship between the IOs measured and the IOPS specifications of the disk as shown in the table at the top. Combined with the knowledge that a VM with 2 CPUs has approximately 6400 IOPS available, that means there is still room for improvement. You could change the application to use multiple disks, or you can use Windows Storage Spaces to combine physical disks into a logical disk.
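To see why combining disks helps, here is a rough sketch of the headroom calculation (the disk figures come from the table at the top; the 2-CPU VM cap is the approximate DS-series number used in this post):

```python
# Premium Storage disk limits from the table at the top of this post.
DISK_IOPS = {"P10": 500, "P20": 2300, "P30": 5000}
VM_IOPS_CAP = 2 * 3200  # approximate cap for a VM with 2 CPUs

def usable_iops(disks):
    # A striped (Storage Spaces) volume can in principle deliver the sum
    # of its member disks, but never more than the VM cap.
    return min(sum(DISK_IOPS[d] for d in disks), VM_IOPS_CAP)

print(usable_iops(["P30"]))         # 5000: a single P30 leaves VM headroom unused
print(usable_iops(["P30", "P20"]))  # 6400: now the VM, not the disks, is the limit
```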

If you analyze the disk queue length that was also measured, you would see that it has quite a few IOs queued, an indication the disk can’t handle more IOs. The total bytes transferred, however, would not reach its maximum. For other types of workloads, you may see different behavior. Applications that stream large files, for instance, will likely issue fewer IOs, but will reach VM or disk throughput limits. Because I used disks with higher throughput than available to the VM, my throughput would be capped at 64 MB/s. Still other workloads may be more CPU bound, and will show % Processor Time at 100 for extended periods.

Understanding how the workload affects your Azure VM helps you determine what to do when performance isn’t what you want it to be. In my case adding a faster disk improved the performance. If I want even better performance, I have to add disks and utilize their combined IO capabilities. Once the test no longer shows a cap on IOs, I have to look elsewhere. Note that if you reach the VM IO cap, you need to get a bigger VM to get better performance, which will also decrease the likelihood that you will reach CPU or memory limits.

Save The Planet With Machine Learning

I have a new car and I love it. To achieve better fuel efficiency, it tells me when to shift. Now I like to get to know my car, so I keep a close eye on how much fuel I use. The display can show me this in real-time. While driving home yesterday I noticed something odd.

When I drove 130 km/h, the car used the same amount of fuel as when driving 100 km/h in the same gear (as suggested by the car). My assumption was that 100 km/h was too slow for that particular gear. I tested this assumption by shifting back a gear on the next 100 km/h stretch. Even though my car was telling me to shift to 6th gear, I found that in 5th gear the car used 0.3 l/100km less fuel. This morning I tried again, and found no difference between 5th and 6th gear. Apparently there are environmental factors (e.g. wind, incline, engine temperature) that influence which gear is most efficient. The algorithm in my car doesn’t take these into account. It just looks at speed and acceleration to determine the right gear.


We could try to make the algorithm smarter, but that is a flawed approach. The premise that we can create an algorithm upfront that makes the best calculation is fundamentally wrong. This is a perfect case for Microsoft Azure Machine Learning. Through learning it can figure out when to use which gear based on telemetry data. And not just for my car, but for all the cars of the same model. There are approximately 1 billion cars in the world. Assuming these drive an average of 10,000 km a year, saving just 0.1 l/100km would save 10 billion liters of fuel per year.
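The arithmetic behind that estimate:

```python
cars = 1000000000        # roughly 1 billion cars in the world
km_per_year = 10000      # assumed average distance per car per year
saving_per_100km = 0.1   # liters saved per 100 km

liters_per_car = km_per_year / 100 * saving_per_100km  # 10 liters per car per year
total_liters = cars * liters_per_car
print(total_liters)  # 10000000000.0, i.e. 10 billion liters per year
```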

It’s The Platform, Stupid

In software development the platform you build on has always been a key piece of how you build applications. For a long time the platform was the system you were developing for, like a PDP-11 or Commodore 64. You were stuck with the processor, memory, and I/O capabilities of the platform. If your application didn’t run well, you had to change your application. Beefing up the hardware was virtually impossible.

Developers have become lazy

Although it is still true we develop on platforms today, these platforms are virtual. Java, .NET, Node.js, and most relational databases are all independent of the hardware they run on. The common practice is therefore to develop an application, and then figure out which hardware to run it on. Memory and CPU capacity are available in abundance, so scaling your application is easy. Well… it was anyway.

Cloud redefines the platform

When developing for Platform-as-a-Service (PaaS), the possible variance of the hardware platform is again limited. You have to deal with the platform as a whole. Aspects such as CPU, memory, network & disk latency, and failure rate, all have to be taken into account when building applications. Most Infrastructure-as-a-Service (IaaS) platforms have similar limitations. IaaS is not just a data center in the cloud which you can shape to fit your needs.

The platform is expanding, rapidly

Cloud vendors such as Amazon, Google, and Microsoft are all adding services we can use in our applications. Data(base) Services, Identity & Access, Mobile Notification, Big Data, and Integration are just a few areas where developers can now use highly available and reliable services, instead of hosting their own services on some infrastructure. The Cloud has become the platform, and we need to use the services it offers as-is.

Getting Started with Windows Azure Scheduler – Part 2: Storage Queue

In part 1 I discussed using Windows Azure Scheduler with an HTTP or HTTPS endpoint. In this second and final part I’ll discuss using the scheduler with a Storage Queue. The advantage of using a Storage Queue instead of HTTP(S) is that it allows for more reliable scenarios. If an HTTP call fails, the scheduler will not retry. It will just log the failure and try again the next time the schedule fires. With a Storage Queue, you place an item in the queue, and then get it out of the queue on the other end. If processing of the item fails, the item is not taken out of the queue.

Creating a queue

Before you can create a job, you need to have a queue. Creating a queue is a two-step process. First you need to create a storage account. You can do this from the Windows Azure Management Portal. Then you can create a queue, which you can’t do from the management portal. You have to do that in code.

Creating a storage account

To create a storage account, log in to the Windows Azure Management Portal and click +NEW in the bottom left corner. If you’re not already in Storage, select Data Services, followed by Storage and Quick Create. That will reveal input fields to create an account, as shown below.

Create Storage Account

Notice that besides a name you have to specify the region and whether the storage is geo-redundant (the default). Geo-redundant storage is of course more expensive, because it gives you higher availability and assurance that your data is safe.

Getting an Access Key

To do anything with the storage account you created you need an Access Key, basically a password to access the account. To get it you have to select the storage account and then click MANAGE KEYS at the bottom, as shown below.

Manage Access Keys

Clicking MANAGE KEYS will show you the dialog below.

Manage Access Keys Dialog

From the dialog above you need to copy one of the keys to put in your code (or better still, in the configuration). The key itself is of course not enough. You need to create a connection string that the Cloud Storage library can use to access the storage account. This connection string looks like this:

DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<access key>

For example:

DefaultEndpointsProtocol=https;AccountName=schedulerdemo;AccountKey=(one of the keys copied from the dialog above)

Creating a queue

You can create queues in your storage account. You can do this the first time a queue is needed in an application, regardless of whether that is in the sender or the receiver. In this case, however, the sender is Windows Azure Scheduler, which can’t create a queue; you have to select an existing one. This means you either need to create the queue with a tool, or make sure the receiver has already run. In either case, you can use the C# method below to create a queue, and (if needed) return it to the caller.

public CloudQueue GetQueue(string connectionstring, string queueName)
{
    // Create a queue client for the given storage account.
    var storageAccount = CloudStorageAccount.Parse(connectionstring);
    var queueClient = storageAccount.CreateCloudQueueClient();

    // Get a reference to the queue with the given name.
    var queue = queueClient.GetQueueReference(queueName);

    // If the queue doesn't exist, this creates it with the given name.
    queue.CreateIfNotExists();
    return queue;
}

You can run the above code from any application that has access to Windows Azure. Because you need some Windows Azure assemblies, the easiest way is to create a Windows Azure WebRole project. You can then insert the code above and call it from the startup task like this:

public override bool OnStart()
{
    var connectionstring = CloudConfigurationManager.GetSetting("StorageConnectionString");
    var queueName = "jobqueue";
    GetQueue(connectionstring, queueName);
    return base.OnStart();
}

Create a job

In part 1 I already explained how to create a job for HTTP(S). You follow the same steps for a Storage Queue, except that you select the latter when the time comes. The wizard then changes to ask for the details needed to connect to the queue, as shown below. To do this you need to have created the queue with the code shown previously, otherwise you can’t select a queue and you can’t finish the wizard. As you can see, the wizard automatically selected the queue I created, because it’s the only queue in the storage account.

Create Storage Queue Job Dialog

In the dialog above you also need to create a SAS token. This is an access token that allows Scheduler to write to the queue. Just click the button to generate one, add some content you want to send to the target and you’re good to go.

Getting the Next Message From the Queue

Getting a message from the queue is easy. Just get a reference to the queue with the GetQueue method shown earlier, and then call GetMessage. If you received a message you can then read it as a string, as shown below.

public string GetMessage(string connectionstring, string queueName)
{
    var queue = GetQueue(connectionstring, queueName);
    var message = queue.GetMessage();
    if (message == null) return null;
    // Note: once processing succeeds you should call queue.DeleteMessage(message),
    // otherwise the message becomes visible again after the visibility timeout.
    return message.AsString;
}

You need to call the above method with a certain frequency to get messages. How quickly you need to process a job message determines how often you should call the method to see if there is a message.
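The polling pattern itself is language-agnostic, so here is a sketch in Python rather than the post’s C# (the get_message and handle callables stand in for the GetMessage method above and your own processing; the interval numbers are illustrative assumptions). The idea is to back off while the queue is empty, and poll eagerly again as soon as a message arrives:

```python
import time

def next_wait(current_wait, got_message, min_wait=1.0, max_wait=30.0):
    """Back off exponentially while the queue is empty; reset once a
    message arrives so a busy queue is polled eagerly."""
    if got_message:
        return min_wait
    return min(current_wait * 2, max_wait)

def poll(get_message, handle):
    # Sketch of the polling loop (runs forever, like a worker role would).
    wait = 1.0
    while True:
        message = get_message()
        if message is not None:
            handle(message)
        else:
            time.sleep(wait)
        wait = next_wait(wait, message is not None)

print([next_wait(w, False) for w in (1.0, 2.0, 16.0, 30.0)])  # [2.0, 4.0, 30.0, 30.0]
```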

What’s in the Message?

The information in the message is similar to the headers discussed in Part 1, but it is formatted as XML. Below is an example of an XML message received through a Storage Queue.

<?xml version="1.0" encoding="utf-16"?>
<StorageQueueMessage xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Region>North Europe</Region>
  <Message>Job Queue Demo Content</Message>
</StorageQueueMessage>

Getting Started with Windows Azure Scheduler – Part 1: HTTP

Ever since we’ve had (web) applications we’ve had a need for tasks that are executed on a regular basis or at specific points in time. A common way to do this is through some sort of scheduler, like the Windows Task Scheduler or from some custom (Windows) service. In a web based or cloud scenario you can now also use the Windows Azure Scheduler to do this. Scheduler basically offers two options to kick off a task in an application: with an HTTP(S) call or using a Windows Azure Storage Queue. In this post I will focus on the former.

Getting Started

Right now Scheduler is in preview, so you’ll have to request it before you can use it. To do so, go to the preview page, click try it now, and follow the process until Scheduler is enabled for you.

Creating a job

Once you can use Scheduler you can create new jobs. Just click +NEW at the bottom left of the page and select Scheduler, as shown below.

Creating a new job

When you click CUSTOM CREATE a wizard pops up to guide you through the process of creating a job. First you have to select or create a Job Collection, as shown below.

Create a job collection

A Job Collection is tied to a specific region, so if you select a region where you don’t have a collection yet, it will default to creating a new one. Next, you need to specify the job details, as shown below.

Specifying job details

You can select three Action Types: HTTP, HTTPS, and Storage Queue. Here I’ve selected HTTP, which gives you four method types: GET, POST, PUT, and DELETE. Although you can use these differently, they correspond to Read, Insert, Update, and Delete in most REST-based APIs. Above I’m creating an HTTP GET job. You just have to specify the URL that gets called when the job fires.

The last thing you have to do is specify the schedule. You have a choice for a one time job that fires immediately or at a specified time, or a recurring job as shown below.

Specifying the schedule

When you create a recurring job you also have the choice of starting it immediately or at a specific time. You also have to specify when the schedule ends. Above I’ve set that to the year 9999, which is effectively indefinitely.

Getting Job Information

When you’ve created your first job, you can go to the Scheduler section in the Management Portal. It will show you all collections you’ve created, in my case just the one, as shown below.

The job collections

When you click the collection you go to the dashboard, which shows you what’s been happening, as you can see below.

Job collection dashboard

For more details you can go to HISTORY, where you can select the job you want information about, and filter all the jobs by status. You see a list of all jobs that have been executed and their result, as shown below for one of my jobs.

Job history overview

When you select one of the jobs you can click on VIEW HISTORY DETAILS to get details about the exact response you received. For a successful job that looks something like the figure below, just the full HTTP response from the server.

Succeeded job details

For a failed job it’s not much different, as shown below. Notice that the body contains more information, so if you have control over the endpoint the scheduler is calling, you can add a comprehensive error message that enables you to debug the endpoint.

Failed job details

Managing Jobs

For now, editing jobs is not possible. You can only create jobs, delete jobs, or enable/disable all jobs. You can do the latter by clicking UPDATE JOBS at the bottom of the dashboard of a Job Collection, as shown below.

Updating jobs


There are two plans for the scheduler. The preview defaults to Standard, which allows for a maximum of 50 jobs and an interval up to a minute. The free plan allows for a maximum of 5 jobs, which can run at most every hour. You can change your plan under SCALE, as shown below.

Scaling the scheduler

What happens exactly?

So you’ve created a job, now what? If it’s a GET job, it’s basically going to call the URL you specified at the interval you specified. At your endpoint you can run a page, a Web API GET method, or something similar. The request sent to the endpoint looks like this:

Connection: Keep-Alive
x-ms-execution-tag: c912f04ea3d225912c8e9dcc82090fe3
x-ms-client-request-id: 6009d929-587c-4051-b588-0ad2f9b14f16
x-ms-scheduler-expected-execution-time: 2014-01-01T17:16:13
x-ms-scheduler-jobid: DemoGetJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: North Europe

As you can see Azure Scheduler adds several headers with information about the job. Part of it is static information about the job, but the execution tag, request id, and execution time are unique for each request.

Notice that the region is North Europe, despite the fact that I defined the Job Collection in West Europe. This is not a fluke on my part. As you can see in the requests for the POST, PUT, and DELETE jobs below, the region sometimes differs. In fact, if you go into the management portal you will sometimes see a different region. I assume this has something to do with high availability between data centers, and that the two data centers closest to one another are used for this.


Creating a post job

Connection: Keep-Alive
Content-Length: 17
Content-Type: text/plain
x-ms-execution-tag: 728d411206720536d592f1f2cde52e8a
x-ms-client-request-id: 134dea00-e323-4832-9aae-e847ed3884ba
x-ms-scheduler-expected-execution-time: 2014-01-01T19:21:04
x-ms-scheduler-jobid: DemoPostJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: West Europe

Demo POST content


Creating a put job

Connection: Keep-Alive
Content-Length: 16
Content-Type: text/plain
x-ms-execution-tag: d62c789c2574f287af9216226d7e48a2
x-ms-client-request-id: 7003fe19-e127-4004-a9e1-1973f066155c
x-ms-scheduler-expected-execution-time: 2014-01-01T19:19:54
x-ms-scheduler-jobid: DemoPutJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: North Europe

Demo PUT Content


Creating a delete job

Connection: Keep-Alive
Content-Length: 0
x-ms-execution-tag: 5eb0e16e3eb9e880ee6edf969c376014
x-ms-client-request-id: 5d2b18e5-4e45-48f4-bf64-620393195c56
x-ms-scheduler-expected-execution-time: 2014-01-01T17:20:48
x-ms-scheduler-jobid: DemoDeleteJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: West Europe

Continue with Part 2.

Book Review: Developing Applications for the Cloud on the Microsoft Windows Azure Platform

If you’re a developer using the Microsoft platform and want to learn Windows Azure development, Developing Applications for the Cloud on the Microsoft Windows Azure Platform (Patterns & Practices) is the book for you. It’s a clear book built around a good practical case that covers most of the important angles. Because this is a Patterns & Practices book, it also spends quite some time teaching you the right mindset for building (multi-tenant) cloud applications.

The downside of the book is that it really assumes a good familiarity with the Microsoft .NET Framework and C#. Without that, you’re not going to understand much of the cases, apart from the high-level cloud information. That said, the book starts with a good explanation of why you would want to build cloud applications, the types of scenarios that fit well, and what Windows Azure (and in more general terms Platform-as-a-Service) development means. The example case really covers most scenarios and choices people will come across, and that means it also covers all the core technologies within Windows Azure. Another great thing about the book is the many links to articles and other (free e-)books that provide deeper insight into a certain aspect or technology. Be aware that Windows Azure is a fast-moving platform, with changes on a regular basis. Although most of the core concepts in this book will remain the same for a long time, it can’t keep up with all the new developments. I hope new editions will follow to keep up with the changes.

Azure Appliance

In the early stages I commented that Microsoft was not going the right way, because a major selling point could be that you can run Azure in the cloud or in your own data center. That seemed not to be possible. By the time Azure went live, Microsoft had changed it sufficiently to maybe make this possible in the future. With Azure Appliance, this is now definitely a reality.

Azure story much better from PDC 09

Earlier this year I was pretty negative about the Azure story from Microsoft. My main gripe was that (from my perspective at the time) it was not a write-once, run-anywhere story, so you couldn’t run your current apps in Azure without modification. I’m very pleased about what I’ve seen now from PDC. Microsoft has opened up Azure in many ways, giving you much more control over what’s happening. In fact, you can get your own virtual machines and have complete remote admin access. Also, they’ve been really thinking about how to tie your existing hosting environment to Azure and vice versa. It will be possible to connect a web app inside Azure securely to a database server in your own data center.

I must say I’m impressed at how well Microsoft has listened to all the feedback about Azure. With all the changes they’ve implemented I feel that it has now become interesting for some of the services my company is implementing, whereas previously we weren’t even considering Azure.

Windows Azure licensing disappointment

I read this post from Steven Martin at Microsoft and frankly I’m disappointed. Microsoft is not the only company building cloud computing services, but they have a clear advantage over most of the providers: they own the operating system. As such, a unique selling point would definitely be that they can provide you with cloud services, but also enable you to run your applications in your own data center without modifications. If I build an application for the Windows platform, I want to build it once and be able to run it on any server infrastructure. As it looks now, this is not possible. Once built for the cloud, it must remain in the cloud unless you refactor the application for use in your own environment. I really hope Microsoft sees that this is a mistake and that it will actually gain them clients if they allow this. There is another factor here and that’s trust. I’d like to have a backup scenario in case Microsoft fails to deliver. With the Azure platform as is, there is no backup scenario. You either go for it full-blown, or you don’t. It is my belief that many people will decide not to go with Azure in the first place because of this. In fact, I am now much more reluctant to tell my clients about Azure as an option.