Save The Planet With Machine Learning

I have a new car and I love it. To achieve better fuel efficiency, it tells me when to shift. Now I like to get to know my car, so I keep a close eye on how much fuel I use. The display can show me this in real-time. While driving home yesterday I noticed something odd.

When I drove 130 km/h, the car used the same amount of fuel as when driving 100 km/h in the same gear (as suggested by the car). My assumption was that 100 km/h was too slow for that particular gear. I tested this assumption by shifting down a gear on the next 100 km/h stretch. Even though my car was telling me to shift to 6th gear, I found that in 5th gear the car used 0.3 l/100km less fuel. This morning I tried again, and found no difference between 5th and 6th gear. Apparently there are environmental factors (e.g. wind, incline, engine temperature) that influence which gear is most efficient. The algorithm in my car doesn't take these into account. It just looks at speed and acceleration to determine the right gear.

Speedometer

We could try to make the algorithm smarter, but that is a flawed approach. The premise that we can create an algorithm upfront that makes the best calculation is fundamentally wrong. This is a perfect case for Microsoft Azure Machine Learning. Through learning it can figure out when to use which gear based on telemetry data. And not just for my car, but all the cars of the same model. There are approximately 1 billion cars in the world. Assuming each drives an average of 10,000 km a year, saving just 0.1 l/100km would save 10 liters per car, or 10 billion liters of fuel per year.

It’s The Platform, Stupid

In software development the platform you build on has always been a key piece of how you build applications. For a long time the platform was the system you were developing for, like a PDP-11 or Commodore 64. You were stuck with the processor, memory, and I/O capabilities of the platform. If your application didn’t run well, you had to change your application. Beefing up the hardware was virtually impossible.

Developers have become lazy

Although it is still true we develop on platforms today, these platforms are virtual. Java, .NET, Node.js, and most relational databases are all independent of the hardware they run on. The common practice is therefore to develop an application, and then figure out which hardware to run it on. Memory and CPU capacity are available in abundance, so scaling your application is easy. Well… it was anyway.

Cloud redefines the platform

When developing for Platform-as-a-Service (PaaS), the possible variance of the hardware platform is again limited. You have to deal with the platform as a whole. Aspects such as CPU, memory, network & disk latency, and failure rate, all have to be taken into account when building applications. Most Infrastructure-as-a-Service (IaaS) platforms have similar limitations. IaaS is not just a data center in the cloud which you can shape to fit your needs.

The platform is expanding, rapidly

Cloud vendors such as Amazon, Google, and Microsoft are all adding services we can use in our applications. Data(base) Services, Identity & Access, Mobile Notification, Big Data, and Integration are just a few areas where developers can now use highly available and reliable services, instead of hosting their own services on some infrastructure. The Cloud has become the platform, and we need to use the services it offers as-is.

Cloud Standard Time (CST)

For years we've built applications that assume the system is only used from a single location. As a result most applications work with local time, with the local time set to the time zone the application lives in. So an application of one of our Dutch customers would run in UTC/GMT +1, whereas the reservation site of a Las Vegas hotel would run in Pacific Standard Time (UTC/GMT-8) or Pacific Daylight Time (UTC/GMT-7) depending on the time of year. You might think there is no problem; after all, the systems work as they are supposed to. There are however at least two problems.

Applications are interconnected

Suppose the application of our Dutch customer interacts with the reservation system of the Las Vegas hotel, for instance to get the latest time a reservation can be cancelled. The systems would need to agree which time to use, and make a conversion when necessary. That is possible but cumbersome, for instance because Daylight Saving Time starts and ends on different dates in the two regions.

Time zone is not the same on every machine

If we move an application to another machine, we have to be sure the time zone is the same on the new machine, otherwise the chance is pretty good that the application will run into problems. Any operation comparing stored time data against local time would yield different results.

Cloud Platform Time

In Cloud platforms such as Microsoft Azure, all machines use the same time: UTC. And when using their PaaS instances, Microsoft recommends not changing that (see bit.ly/azuretimezone). The best solution is to use UTC anywhere date/time is stored, queried, or manipulated, and to format date/time as local time only for input or output. UTC is the universal time zone: Cloud Standard Time (CST).
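As a minimal sketch of this convention in C#: store and compare timestamps in UTC, and convert only at the presentation boundary. The Windows time-zone ID below is an assumption for a Dutch application; substitute whatever zone your users are in.

```csharp
using System;

class CloudStandardTimeDemo
{
    static void Main()
    {
        // Store and compare in UTC.
        DateTime storedUtc = DateTime.UtcNow;

        // Convert to local time only when formatting for display.
        TimeZoneInfo dutchZone =
            TimeZoneInfo.FindSystemTimeZoneById("W. Europe Standard Time");
        DateTime displayTime =
            TimeZoneInfo.ConvertTimeFromUtc(storedUtc, dutchZone);

        Console.WriteLine("Stored (UTC): {0:o}", storedUtc);
        Console.WriteLine("Display (NL): {0:o}", displayTime);
    }
}
```

Because the stored value is always UTC, comparisons keep working no matter which machine, or which time zone, the application runs in.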

Getting Started with Windows Azure Scheduler – Part 2: Storage Queue

In part 1 I discussed using Windows Azure Scheduler with an HTTP or HTTPS endpoint. In this second and final part I'll discuss using the scheduler with a Storage Queue. The advantage of using a Storage Queue instead of HTTP(S) is that it allows for more reliable scenarios. If an HTTP call fails, the scheduler will not retry. It will just log the failure and try again the next time the schedule fires. With a Storage Queue, you place an item in the queue, and then get it out of the queue on the other end. If processing of the item fails, the item is not removed from the queue, so it can be picked up again.

Creating a queue

Before you can create a job, you need to have a queue. Creating a queue is a two-step process. First you need to create a storage account. You can do this from the Windows Azure Management Portal. Then you can create a queue, which you can’t do from the management portal. You have to do that in code.

Creating a storage account

To create a storage account, log in to the Windows Azure Management Portal and click +NEW in the bottom left corner. If you're not already in Storage, select Data Services, followed by Storage and Quick Create. That will reveal input fields to create an account, as shown below.

Create Storage Account

Notice that besides a name you have to specify the region and whether the storage is geo redundant (the default). Geo redundant storage is of course more expensive, because it gives you higher availability and more assurance that your data is safe.

Getting an Access Key

To do anything with the storage account you created you need an Access Key, basically a password to access the account. To get it you have to select the storage account and then click MANAGE KEYS at the bottom, as shown below.

Manage Access Keys

Clicking MANAGE KEYS will show you the dialog below.

Manage Access Keys Dialog

From the dialog above you need to copy one of the keys to put in your code (or better still in the configuration). The key itself of course is not enough. You need to create a connection string that the Cloud Storage library can use to access the storage account. This connection string looks like this:

DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accountkey]

For example:

DefaultEndpointsProtocol=https;AccountName=michieldemo;AccountKey=ChnU8fmFvS3y9vT7wYLew0Nl6dZ7ABGw2Ne/uQ/tgPZ6yKBNbibszPxiiFt1EhVedkIQvWfijT3719J2TrYqmw==
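As suggested above, the connection string is better kept in configuration than in code. A sketch of what that could look like in a cloud project's ServiceConfiguration.cscfg (the setting name matches the one read in the OnStart code later in this post; the surrounding Role element is elided):

```xml
<!-- ServiceConfiguration.cscfg (fragment) -->
<ConfigurationSettings>
  <Setting name="StorageConnectionString"
           value="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accountkey]" />
</ConfigurationSettings>
```

CloudConfigurationManager.GetSetting reads from the service configuration when running as a role, and falls back to appSettings in app.config or web.config otherwise.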

Creating a queue

You can create queues in your storage account. Normally you can do this the first time a queue is needed in an application, regardless of whether that is in the sender or the receiver. In this case, however, the sender is Windows Azure Scheduler, which can't create a queue; you can only select an existing one. This means you either need to create the queue with a tool, or make sure the receiver has already run. In either case, you can use the C# method below to create a queue, and (if needed) return it to the caller.

public CloudQueue GetQueue(string connectionstring, string queueName)
{
    // Create a queue client for the given storage account.
    var storageAccount = CloudStorageAccount.Parse(connectionstring);
    var queueClient = storageAccount.CreateCloudQueueClient();

    // Get a reference to the queue with the given name.
    var queue = queueClient.GetQueueReference(queueName);

    // If the queue doesn't exist, this will create it with the given name.
    queue.CreateIfNotExists();
    return queue;
}

You can run the above code from any application that has access to Windows Azure. Because you need some Windows Azure assemblies, the easiest way is to create a Windows Azure WebRole project. You can then insert the code above and call it from the WebRole's OnStart method like this:

public override bool OnStart()
{
    var connectionstring = CloudConfigurationManager.GetSetting("StorageConnectionString");
    var queueName = "jobqueue";
    GetQueue(connectionstring, queueName);
    return base.OnStart();
}

Create a job

In part 1 I already explained how to create a job for HTTP(S). You follow the same steps for a Storage Queue, except that you select the latter when the time comes. The wizard then changes to ask for the details to connect to the queue, as shown below. At this point you need to have created the queue with the code shown previously; otherwise you can't select a queue and you can't finish the wizard. You can see that the wizard automatically selected the queue I created, because it's the only queue in the storage account.

Create Storage Queue Job Dialog

In the dialog above you also need to create a SAS token. This is an access token that allows Scheduler to write to the queue. Just click the button to generate one, add some content you want to send to the target and you’re good to go.

Getting the Next Message From the Queue

Getting a message from the queue is easy. Just get a reference to the queue with the GetQueue method shown earlier, and then call GetMessage. If you received a message you can then read it as a string, as shown below.

public string GetMessage(string connectionstring, string queueName)
{
    var queue = GetQueue(connectionstring, queueName);
    var message = queue.GetMessage();
    if (message == null) return null;
    return message.AsString;
} 

You need to poll the queue by calling the above method periodically. How quickly a job message must be processed determines how often you should check whether there is a message.
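A minimal polling loop could look like the sketch below. Note that GetMessage only makes a message invisible for a visibility timeout; you have to call DeleteMessage explicitly once processing succeeds, otherwise the message reappears and is retried. ProcessJob is a hypothetical placeholder for your own logic, and the sketch assumes the GetQueue method shown earlier.

```csharp
public void PollQueue(string connectionstring, string queueName)
{
    var queue = GetQueue(connectionstring, queueName);
    while (true)
    {
        // GetMessage hides the message for a visibility timeout;
        // it is not removed until DeleteMessage is called.
        var message = queue.GetMessage();
        if (message != null)
        {
            ProcessJob(message.AsString);  // hypothetical processing method
            queue.DeleteMessage(message);  // only delete after success
        }
        else
        {
            // Back off when the queue is empty.
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}
```

This delete-after-success pattern is exactly what makes the Storage Queue approach more reliable than HTTP(S): a failed run leaves the message in the queue for the next attempt.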

What’s in the Message?

The information in the message is similar to the headers discussed in Part 1, but it is formatted as XML. Below is an example of an XML message received through a Storage Queue.

<?xml version="1.0" encoding="utf-16"?>
<StorageQueueMessage xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ExecutionTag>c3b67e748b93b0bac3718f1058e12907</ExecutionTag>
  <ClientRequestId>2fb66b67-e251-4c09-8d61-8627b8bf9bfd</ClientRequestId>
  <ExpectedExecutionTime>2014-01-13T22:32:30</ExpectedExecutionTime>
  <SchedulerJobId>DemoStorageQueueJob</SchedulerJobId>
  <SchedulerJobcollectionId>DemoCollection</SchedulerJobcollectionId>
  <Region>North Europe</Region>
  <Message>Job Queue Demo Content</Message>
</StorageQueueMessage>
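If you want more than the raw string, you could deserialize this XML into a small class with XmlSerializer. A sketch based on the message above; the class and property names simply mirror the element names shown, they are not part of an official SDK.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class StorageQueueMessage
{
    public string ExecutionTag { get; set; }
    public string ClientRequestId { get; set; }
    public DateTime ExpectedExecutionTime { get; set; }
    public string SchedulerJobId { get; set; }
    public string SchedulerJobcollectionId { get; set; }
    public string Region { get; set; }
    public string Message { get; set; }
}

public static class SchedulerMessageParser
{
    // Turn the queue message body into a typed object.
    public static StorageQueueMessage Parse(string xml)
    {
        var serializer = new XmlSerializer(typeof(StorageQueueMessage));
        using (var reader = new StringReader(xml))
        {
            return (StorageQueueMessage)serializer.Deserialize(reader);
        }
    }
}
```

You would call SchedulerMessageParser.Parse with the string returned by the GetMessage method shown earlier, and then dispatch on properties such as SchedulerJobId.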

Getting Started with Windows Azure Scheduler – Part 1: HTTP

Ever since we’ve had (web) applications we’ve had a need for tasks that are executed on a regular basis or at specific points in time. A common way to do this is through some sort of scheduler, like the Windows Task Scheduler or from some custom (Windows) service. In a web based or cloud scenario you can now also use the Windows Azure Scheduler to do this. Scheduler basically offers two options to kick off a task in an application: with an HTTP(S) call or using a Windows Azure Storage Queue. In this post I will focus on the former.

Getting Started

Right now Scheduler is in preview, so you’ll have to request it before you can use it. To do so, go to http://www.windowsazure.com/en-us/services/preview/ and click try it now and follow the process until Scheduler is enabled for you.

Creating a job

Once you can use Scheduler you can create new jobs. Just click +NEW at the bottom left of the page and select Scheduler, as shown below.

Creating a new job

When you click CUSTOM CREATE a wizard pops up to guide you through the process of creating a job. First you have to select or create a Job Collection, as shown below.

Create a job collection

A Job Collection is tied to a specific region, so if you select a region where you don’t have a collection yet, it will default to creating a new one. Next, you need to specify the job details, as shown below.

Specifying job details

You can select three Action Types: HTTP, HTTPS, and Storage Queue. Here I've selected HTTP, which gives you four method types: GET, POST, PUT, and DELETE. Although you can use them differently, they correspond to Read, Insert, Update, and Delete in most REST-based APIs. Above I'm creating an HTTP GET job. You just have to specify the URL that gets called when the job fires.

The last thing you have to do is specify the schedule. You have a choice for a one time job that fires immediately or at a specified time, or a recurring job as shown below.

Specifying the schedule

When you create a recurring job you also have the choice of starting it immediately or at a specific time. You also have to specify when the schedule ends. Above I've set that to the year 9999, which effectively means the job runs indefinitely.

Getting Job Information

When you’ve created your first job, you can go to the Scheduler section in the Management Portal. It will show you all collections you’ve created, in my case just the one, as shown below.

The job collections

When you click the collection you go to the dashboard, which shows you what’s been happening, as you can see below.

Job collection dashboard

For more details you can go to HISTORY, where you can select the job you want information about, and filter the runs by status. You see a list of all job executions and their result, as shown below for one of my jobs.

Job history overview

When you select one of the jobs you can click on VIEW HISTORY DETAILS to get details about the exact response you received. For a successful job that looks something like the figure below, just the full HTTP response from the server.

Succeeded job details

For a failed job it’s not much different, as shown below. Notice that the body contains more information, so if you have control over the endpoint the scheduler is calling, you can add a comprehensive error message that enables you to debug the endpoint.

Failed job details

Managing Jobs

For now, editing jobs is not possible. You can only create jobs, delete jobs, or enable/disable all jobs. You can do the latter by clicking UPDATE JOBS at the bottom of the dashboard of a Job Collection, as shown below.

Updating jobs

Scaling

There are two plans for the scheduler. The preview defaults to Standard, which allows for a maximum of 50 jobs and an interval as short as one minute. The free plan allows for a maximum of 5 jobs, which can run at most once every hour. You can change your plan under SCALE, as shown below.

Scaling the scheduler

What happens exactly?

So you've created a job, now what? If it's a GET job, the scheduler is basically going to call the URL you specified at the interval you specified. At your endpoint you can run a page, a Web API GET method, or something similar. The request sent to the endpoint looks like this:

GET http://schedulerdemoenpoint.cloudapp.net/api/job/ HTTP/1.1
Connection: Keep-Alive
Host: schedulerdemoenpoint.cloudapp.net
x-ms-execution-tag: c912f04ea3d225912c8e9dcc82090fe3
x-ms-client-request-id: 6009d929-587c-4051-b588-0ad2f9b14f16
x-ms-scheduler-expected-execution-time: 2014-01-01T17:16:13
x-ms-scheduler-jobid: DemoGetJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: North Europe

As you can see Azure Scheduler adds several headers with information about the job. Part of it is static information about the job, but the execution tag, request id, and execution time are unique for each request.
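On the receiving side, a Web API controller could read these headers to log or correlate job runs. The sketch below is not from the original post; the controller name and route are assumptions matching the demo URL above.

```csharp
using System.Linq;
using System.Web.Http;

public class JobController : ApiController
{
    // Handles the GET http://.../api/job/ request fired by the scheduler.
    public IHttpActionResult Get()
    {
        // The scheduler headers may be absent when other clients call this endpoint.
        string jobId = Request.Headers
            .Where(h => h.Key == "x-ms-scheduler-jobid")
            .SelectMany(h => h.Value)
            .FirstOrDefault();

        // Do the actual work here; returning 200 marks the run as succeeded,
        // and the response body shows up in the job history.
        return Ok("Handled scheduler job: " + (jobId ?? "(unknown)"));
    }
}
```

Returning a non-success status code (or throwing) makes the run show up as failed in HISTORY, so a descriptive error body pays off when debugging.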

Notice that the region is North Europe, even though I defined the Job Collection in West Europe. This is not a fluke on my part. As you can see in the POST, PUT, and DELETE requests below, the region differs between requests, and if you go into the management portal you will sometimes see a different region as well. I assume this has something to do with high availability between data centers, and that the two data centers closest to one another are used for this.

POST

Creating a post job

POST http://schedulerdemoenpoint.cloudapp.net/api/job/ HTTP/1.1
Connection: Keep-Alive
Content-Length: 17
Content-Type: text/plain
Host: schedulerdemoenpoint.cloudapp.net
x-ms-execution-tag: 728d411206720536d592f1f2cde52e8a
x-ms-client-request-id: 134dea00-e323-4832-9aae-e847ed3884ba
x-ms-scheduler-expected-execution-time: 2014-01-01T19:21:04
x-ms-scheduler-jobid: DemoPostJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: West Europe

Demo POST content

PUT

Creating a put job

PUT http://schedulerdemoenpoint.cloudapp.net/api/job/1 HTTP/1.1
Connection: Keep-Alive
Content-Length: 16
Content-Type: text/plain
Host: schedulerdemoenpoint.cloudapp.net
x-ms-execution-tag: d62c789c2574f287af9216226d7e48a2
x-ms-client-request-id: 7003fe19-e127-4004-a9e1-1973f066155c
x-ms-scheduler-expected-execution-time: 2014-01-01T19:19:54
x-ms-scheduler-jobid: DemoPutJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: North Europe

Demo PUT Content

Delete

Creating a delete job

DELETE http://schedulerdemoenpoint.cloudapp.net/api/job/1 HTTP/1.1
Connection: Keep-Alive
Content-Length: 0
Host: schedulerdemoenpoint.cloudapp.net
x-ms-execution-tag: 5eb0e16e3eb9e880ee6edf969c376014
x-ms-client-request-id: 5d2b18e5-4e45-48f4-bf64-620393195c56
x-ms-scheduler-expected-execution-time: 2014-01-01T17:20:48
x-ms-scheduler-jobid: DemoDeleteJob
x-ms-scheduler-jobcollectionid: DemoCollection
x-ms-scheduler-execution-region: West Europe

Continue with Part 2.

ASP.NET OpenID/OAuth Login With ASP.NET 4.5 – Part 7

With ASP.NET 4.5 it is very easy to enable users to login to your site with their accounts from Facebook, Google, LinkedIn, Twitter, Yahoo, and Windows Live. In this 7 part series I’ll show you how for each of the identity providers.

Note: Out-of-the-box this only works with WebForms and MVC4. MVC3 is not supported by default.

Part 7: Logging in with Windows Live

If you want to enable users to log in to your application with Windows Live, you have to register an App, like you do with Facebook, LinkedIn, or Twitter. This means you first need a Windows Live account (which you get with Outlook.com, among others). Follow the steps below to get your application running:

  1. Go to https://account.live.com/developers/applications/index and login if needed.
  2. If you have not created any Apps, you're automatically redirected to the page to create an App.
  3. Give your application a name, as shown in the figure below.
    Application name dialog
  4. Click I accept. This brings up the API Settings, as shown below.
    API settings
  5. Enter the URL of the application in the Redirect domain textbox, and click Save.
  6. Open Visual Studio (if you haven't already).
  7. Open the project created in Part 1 (or quickly create a project in the same manner).
  8. Find the App_Start folder and open AuthConfig.cs.
  9. Register the identity provider:
    1. In MVC go to the RegisterAuth method and add the following line of code:
      OAuthWebSecurity.RegisterMicrosoftClient("0000000048104C52", "Jm45Zcvj.........");
    2. In WebForms go to the RegisterOpenAuth method and add the following line of code:
      OpenAuth.AuthenticationClients.AddMicrosoft("0000000048104C52", "Jm45Zcvj.........");
  10. Save the file.
  11. Run the project.
  12. Click the Log in link. You will notice Microsoft has automatically been added next to the other providers you added under Use another service to log in.
  13. Clicking the Microsoft button will send you to Microsoft to log in.
  14. After you log in, you are asked whether you want to allow the App access (see image below), and what information from your profile the App may see. In this case we aren't doing anything with that information, but the App will receive a key that would allow it to retrieve it.

    Consent form

  15. When you click Yes, you are automatically sent back to the web application, where you will be asked to register the account as you learned in previous parts.

ASP.NET OpenID/OAuth Login With ASP.NET 4.5 – Part 6

With ASP.NET 4.5 it is very easy to enable users to login to your site with their accounts from Facebook, Google, LinkedIn, Twitter, Yahoo, and Windows Live. In this 7 part series I’ll show you how for each of the identity providers.

Note: Out-of-the-box this only works with WebForms and MVC4. MVC3 is not supported by default.

Part 6: Logging in with Yahoo

If you want to enable users to log in to your application with Yahoo, you don't have to register an App like you do with Facebook, LinkedIn, or Twitter. All you have to do is enable Yahoo as a provider. Assuming you already have a project set up, you do this as follows:

  1. Open Visual Studio (if you haven't already).
  2. Open the project created in Part 1 (or quickly create a project in the same manner).
  3. Find the App_Start folder and open AuthConfig.cs.
  4. Register the identity provider:
    1. In MVC go to the RegisterAuth method and add the following line of code:
      OAuthWebSecurity.RegisterYahooClient("Yahoo!");
    2. In WebForms go to the RegisterOpenAuth method and add the following line of code:
      OpenAuth.AuthenticationClients.Add("Yahoo!", () => new DotNetOpenAuth.AspNet.Clients.YahooOpenIdClient());

    Note that in both cases you have to specify a display name. This is what's shown on the page where the user selects the identity provider.

  5. Save the file.
  6. Run the project.
  7. Click the Log in link. You will notice Yahoo has automatically been added next to the other providers you added under Use another service to log in.
  8. Clicking the Yahoo! button will send you to Yahoo to log in.
  9. Login with a Yahoo account on the page shown below.

    Yahoo! Login screen

  10. When you sign in you are asked whether you want to sign in to the application with your account. The name of the application is shown in the text (red highlight).

    Yahoo! Consent page

  11. When you agree, you are automatically sent back to the web application, where you will be asked to register the account as you learned in previous parts.

Windows Store App demo with OAuth 1.x and OAuth 2.0

There are several demos online that connect a Windows Store App to Facebook, Twitter, etc. using OAuth 1.x and OAuth 2.0. Although these demos show how this works, the code is hard to reuse across applications, because it is tightly coupled to the main app page. I've completely rewritten the code to make it reusable, and to make the OAuth 1.x and OAuth 2.0 interfaces almost identical, so you can use a single codebase to connect with both protocols. You can download the OAuth demo (71 KB), which includes an OAuth 1.x library you can use in the same manner as OAuth 2.0.

Book Review – Arduino Workshop: A Hands-On Introduction with 65 Projects

Before reading Arduino Workshop: A Hands-On Introduction with 65 Projects (John Boxall, No Starch Press) I knew what Arduino was, but other than that knew nothing about it. This book for me was the perfect starting point, because it not only tells you what Arduino is, but effectively demonstrates its capabilities. Along the way you get a crash course in electronics (resistors, transistors, switches, etc.) and programming. I personally didn't need the latter, but for people with no programming experience this book will do the trick. That said, I believe the learning curve is quite steep, so you may want to look at another book for the basics of programming (preferably in a language like C#, Java, or JavaScript, as those come close to the language used with Arduino). Overall this book is a good read to get started with Arduino. It is written fairly well and the projects give you a good idea of the possibilities. You can also pick stuff from the projects to use in your own. Because some projects build on previous projects, you get a sense of how to build something with Arduino.