Category Archives: Development

Periodically calling a function in your web app

A recurring theme in web programming is calling a function periodically and/or at a specific date and time. This has two aspects:

  • Calling a function on a scheduled basis
  • Making sure time-outs don’t interfere

Calling a function on a scheduled basis

To be able to call a function on your web app, you first need an endpoint (a URL) that you can call to kick the function off. In ASP.NET you can do this in several ways:

  1. Create a page that calls the function.
  2. Create a handler (ASHX) that calls the function (more efficient than a page).
  3. Create a WCF service that allows calls with HTTP GET, as discussed in this blog post by Sasi Suryadevara.
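For option 2, a minimal handler could look like the sketch below. RunScheduledTask is a placeholder for your own function.

```csharp
using System.Web;

// Sketch of a minimal ASHX handler that kicks off the scheduled function.
public class ScheduledTaskHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        RunScheduledTask();
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }

    private void RunScheduledTask()
    {
        // Your periodic work goes here.
    }
}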

With your endpoint in place, you can use the Windows Task Scheduler to invoke the function at any given time, at intervals as low as one minute. Here you again have several options:

  1. Create a VB Script that calls the URL, as discussed in this blog post by Steve Schofield.
  2. Create a PowerShell script that calls the URL (same as option 1, but more modern).
  3. Have the Windows Task Scheduler open Internet Explorer and open the specified URL (e.g. C:\PROGRA~1\INTERN~1\iexplore.exe -extoff http://www.google.com, which starts IE without extensions). If you do this, you also need to specify that the Task Scheduler closes IE after 1 minute, which you can do in the Settings tab of the task (Windows 2003), or in the Trigger configuration (Windows 2008), as shown below. NOTE: I’ve found that IE sometimes doesn’t close, even if you tell Windows to close it. Eventually this will cripple your scheduled task.


Task Settings in Windows 2003


Trigger configuration in Windows 2008

Note: In Windows 2008 the dropdowns governing duration and interval show 30 minutes as the lowest value. You can in fact change this to 1 minute by editing the text directly.
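If you prefer to avoid scripting and Internet Explorer altogether, the Task Scheduler can also run a small console application that calls the URL. A minimal sketch (the URL is a placeholder; the timeout ensures the process always exits, unlike the IE approach):

```csharp
using System;
using System.Net;

// Minimal console app the Task Scheduler can run instead of a script or IE.
class CallEndpoint
{
    static void Main()
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://localhost/scheduledtask.ashx");
        request.Timeout = 60 * 1000; // give up after one minute
        try
        {
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Called endpoint: " + response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            Console.WriteLine("Call failed: " + ex.Message);
        }
    }
}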

Making sure time-outs don’t interfere

A web-based call is bound to time out after a few minutes. If your task takes longer than that, the call may be aborted, depending on how you programmed it and on the web server settings regarding disconnected clients. To ensure a time-out does not interfere, you can spawn a new thread and have it call the function. That way the thread handling the request can return a response to the client, and the function runs to completion regardless. One issue that may arise is that the function itself hangs or takes too long. You may want to add logic to abort it after a certain time, add logging to notify you when that happens, and possibly also ensure that the function can only be run by one caller at a time.
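In code, the spawn-a-thread approach could look like this sketch (RunScheduledTask is a placeholder; an Interlocked flag ensures only one caller runs the function at a time):

```csharp
using System.Threading;
using System.Web;

// Sketch: return to the caller immediately and run the work on a background
// thread. The Interlocked flag guards against two callers running the task at once.
private static int running = 0;

public void ProcessRequest(HttpContext context)
{
    if (Interlocked.CompareExchange(ref running, 1, 0) == 0)
    {
        Thread worker = new Thread(() =>
        {
            try
            {
                RunScheduledTask(); // placeholder for the long-running function
            }
            finally
            {
                Interlocked.Exchange(ref running, 0);
            }
        });
        worker.IsBackground = true;
        worker.Start();
        context.Response.Write("Task started");
    }
    else
    {
        context.Response.Write("Task already running");
    }
}
```

Aborting the function after a set time would go inside the worker thread; that part is left out here.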

Forwarding cookies

Sometimes we come across integration scenarios that look straightforward, but where the devil is in the details. We needed to integrate our ASP.NET/Silverlight application into an existing ASP "classic" site (yes, they still exist). The catch was that we needed to call the ASP "classic" site in a server-to-server call to get some information, but we needed to do this in the context of the current user. You may be wondering why we didn't go through a shared database or something, but the problem is that there is little knowledge left of the old app, so changing the existing app was a no-go.
So, in order to impersonate the user, you need to make your server-side request look like that user. This means forwarding the cookies the user sends, and sending back the cookies the server sends to the user. Below is code that demonstrates that.

// Create the server-to-server request and give it a cookie container,
// otherwise no cookies are sent at all.
HttpWebRequest webRequestToServer = (HttpWebRequest)HttpWebRequest.Create("http://somedomain/somepage.asp");
webRequestToServer.CookieContainer = new CookieContainer();

// Forward the cookies the browser sent us to the other server.
foreach (String cookieKey in Request.Cookies)
{
    HttpCookie cookie = Request.Cookies[cookieKey];
    Cookie serverCookie = new Cookie(cookie.Name, cookie.Value, "/", "somedomain");
    webRequestToServer.CookieContainer.Add(serverCookie);
}
HttpWebResponse webResponseFromServer = (HttpWebResponse)webRequestToServer.GetResponse();

// Send the cookies the other server returned back to the browser.
foreach (Cookie serverCookie in webResponseFromServer.Cookies)
{
    HttpCookie clientCookie = Response.Cookies[serverCookie.Name];
    if (clientCookie == null)
    {
        clientCookie = new HttpCookie(serverCookie.Name);
    }
    clientCookie.Value = serverCookie.Value;
    clientCookie.Expires = serverCookie.Expires;
    Response.Cookies.Add(clientCookie);
}
webResponseFromServer.Close();

This code works fine in a test environment, but there is a catch: in some cases the domain of the server is not set in the cookie you receive on the server side. The problem is that when you then set the domain yourself, it doesn't correspond to what the server expects. You can see this by writing the cookies you send/receive (both on the browser connection and on the server-to-server connection, including the domain) to a log. It took a while to figure out, but replacing "somedomain" with Request.ServerVariables["LOCAL_ADDR"] did the trick.
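Applied to the forwarding loop above, the fix looks like this:

```csharp
// Use the server's local address instead of the hard-coded domain when
// forwarding the browser's cookies.
Cookie serverCookie = new Cookie(cookie.Name, cookie.Value, "/",
    Request.ServerVariables["LOCAL_ADDR"]);
webRequestToServer.CookieContainer.Add(serverCookie);
```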

Receiving unsecured response with WCF

When you’re using signing or encryption on your SOAP requests, WCF expects the response to be signed/encrypted too. When the response is not signed/encrypted, the message encoder throws a MessageSecurityException. This is perfectly fine behavior, but in interop scenarios it can really bug you, because some WS-* implementations don’t sign/encrypt Fault messages. And because the message encoder throws the exception, you can’t get to the underlying SOAP fault. This means that you have no clue why you received a fault in the first place.

To fix this, Microsoft has provided a hotfix. With this hotfix in place you can specify enableUnsecuredResponse="true" in the binding configuration to allow unsecured responses. Unfortunately, this also means that valid responses no longer have to be signed/encrypted, defeating the purpose of signing and encryption altogether!
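With the hotfix installed, the setting goes on the security element of a custom binding, roughly like this (binding name, authentication mode, and the other elements are placeholders for your own configuration):

```xml
<customBinding>
  <binding name="unsecuredResponseBinding">
    <security authenticationMode="MutualCertificate"
              enableUnsecuredResponse="true" />
    <textMessageEncoding />
    <httpTransport />
  </binding>
</customBinding>
```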

As an alternative, you can implement your own message encoder that wraps the encoder that is actually used. In the wrapper you can either store the received XML for use higher up in the call stack, or retrieve the fault and throw a FaultException<>. Without jumping through hoops the latter option does require your wrapper to know about the fault types it needs to handle. With the former option you can handle the exception higher up in the call stack by catching the MessageSecurityException and throwing a new exception with the XML of the message as a property.
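A heavily simplified sketch of such a wrapping encoder is shown below. The class and property names are mine, the MessageEncoderFactory and binding element plumbing needed to actually install it is omitted, and it assumes a text-based encoding when capturing the bytes:

```csharp
using System;
using System.IO;
using System.ServiceModel.Channels;
using System.Text;

// Sketch: captures the raw XML of each incoming message so it is still
// available when the security layer later throws a MessageSecurityException.
public class CapturingMessageEncoder : MessageEncoder
{
    private readonly MessageEncoder inner;

    // Made-up storage; in real code you'd pick something request-scoped.
    [ThreadStatic]
    public static string LastReceivedMessageXml;

    public CapturingMessageEncoder(MessageEncoder inner)
    {
        this.inner = inner;
    }

    public override string ContentType { get { return inner.ContentType; } }
    public override string MediaType { get { return inner.MediaType; } }
    public override MessageVersion MessageVersion { get { return inner.MessageVersion; } }

    public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager, string contentType)
    {
        // Capture the wire bytes before handing them to the real encoder.
        LastReceivedMessageXml = Encoding.UTF8.GetString(buffer.Array, buffer.Offset, buffer.Count);
        return inner.ReadMessage(buffer, bufferManager, contentType);
    }

    public override Message ReadMessage(Stream stream, int maxSizeOfHeaders, string contentType)
    {
        return inner.ReadMessage(stream, maxSizeOfHeaders, contentType);
    }

    public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
    {
        return inner.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
    }

    public override void WriteMessage(Message message, Stream stream)
    {
        inner.WriteMessage(message, stream);
    }
}
```

Higher up in the call stack you can then catch the MessageSecurityException and inspect LastReceivedMessageXml to see the fault you actually received.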

WSDL and WCF: WCF requires <sp:OnlySignEntireHeadersAndBody>

If you’ve ever tried to use svcutil.exe to import WSDL that doesn’t have <sp:OnlySignEntireHeadersAndBody> specified in the security policy, you’ll know that this doesn’t fly. SvcUtil will tell you that the security policy is not supported. So why is this? I assume it has something to do with a statement in paragraph 6.6 of the WS-SecurityPolicy specification, which states:

Setting the value of this property to ‘true’ mitigates against some possible re-writing attacks.

So apparently Microsoft decided that setting it to false is not a good idea, and chose not to support setting it to false (i.e. omitting the element).

Removing the ReplyTo element if it is anonymous

Talking to a non-WCF webservice is like a box of chocolates… you never know what you’re going to get. After solving the issue mentioned in my previous blog post, I had another problem. For some reason the service didn’t expect a <wsa:ReplyTo> element if the value was anonymous. Later on the other party adjusted the service so it actually worked as expected by WCF, but in the meantime I wrote a message inspector to solve the problem. Besides solving the problem, it is also a nice little example of a message inspector.

public class RemoveAnonymousReplyToMessageInspector : IClientMessageInspector
{
    private const string ReplyToNode = "ReplyTo";
    private const string WSAddressingNamespace = "http://www.w3.org/2005/08/addressing";

    public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {} // Not used for this scenario.

    public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
    {
        // This method is called before the request is sent; you can read/manipulate
        // the message here. If you're using signing or encryption, that is applied
        // after this point, so this is the unencrypted/unsigned message.
        request = RemoveAnonymousReplyTo(request);
        return null;
    }

    private Message RemoveAnonymousReplyTo(Message message)
    {
        if (message.Headers.ReplyTo != null && message.Headers.ReplyTo.IsAnonymous)
        {
            int index = message.Headers.FindHeader(ReplyToNode, WSAddressingNamespace);
            message.Headers.RemoveAt(index);
        }
        return message;
    }
}

To use this, you’ll need to create a class implementing IEndpointBehavior and add the message inspector in ApplyClientBehavior, as follows:

public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
{
    RemoveAnonymousReplyToMessageInspector inspector = new RemoveAnonymousReplyToMessageInspector();
    clientRuntime.MessageInspectors.Add(inspector);
}
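A complete minimal behavior class and a way to attach it could look like this; the other IEndpointBehavior methods can stay empty, and SomeServiceClient is a placeholder for your generated proxy type:

```csharp
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class RemoveAnonymousReplyToBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new RemoveAnonymousReplyToMessageInspector());
    }

    // Not needed for a client-side message inspector.
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Attaching it to a generated client proxy:
SomeServiceClient client = new SomeServiceClient();
client.Endpoint.Behaviors.Add(new RemoveAnonymousReplyToBehavior());
```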

Troubles with WCF and certificate signing

Recently I found myself trying to talk to a web service using signing: a WCF client calling a Java web service, using a certificate to sign messages. I kept getting the following exception message:
The incoming message was signed with a token which was different from what used to encrypt the body. This was not expected.
After a wild goose chase we finally figured out that the certificate was corrupted. Just installing the certificate again solved the issue.

Awesome mockup tool

I’ve written quite a few functional designs over the years, and I’ve found that for users who need to validate them, having visuals is key. In most cases prospective users don’t understand what they will really get until they see screens. On the opposite side of the spectrum, telling developers what to do is also much easier with a screen, especially when you are debating the best and most efficient (coding-wise) way to give a user certain functionality. In these situations, just getting a piece of paper and drawing is the best you can do. The last few years I’ve done this on and off on my tablet. I can sketch on it, but the results are often so poor to look at (and read!) that I can’t possibly put them in a functional design. This is where a good mockup tool comes in.

A good mockup tool should make you feel like you are drawing, but provide you with predefined controls to make your job fast and easy. Recently I came across Balsamiq Mockups, which is simply jaw-dropping. Let’s start with the result, which looks pretty much like a hand-drawn sketch. At first glance that may not seem like a big deal, but it is. It clearly states: “This is a mockup, the actual thing may look different.” If you give users something that looks like a screenshot, that is what they expect to get. With this they know it will look different when it is done, and this also makes it much easier to debate your choices and come up with better ideas (to quote David Platt, “Thy User Is Not You”, so they will come up with stuff you didn’t even dream about).

Ok, so the result is great, what about getting there? Well, that’s a piece of cake, really. Balsamiq is as intuitive a tool as I’ve seen and I was able to create a pretty complex screen in about 10 minutes. There’s a bunch of commonly used controls (and some less common), and you can easily find what you need. Also, you can download tons of additional controls from http://mockupstogo.net. Placing, moving, resizing etc. is all very easy because of the snapping support. Want to see for yourself? Look here.

The last thing I find refreshing is the licensing model and fee. It costs only $79 for a single license, and that comes with updates forever (and they update frequently, so they say). Because the tool is already so good, this means you can use it for years without having to worry about support or having to get a new version.

This is just a great tool. I am sure I will be using it often.

Starter STS

At my company we were looking at creating a generic STS that does not require Active Directory Federation Services 2.0, and we were also thinking about putting it up on CodePlex. Dominick Baier from Thinktecture beat us to it with StarterSTS. He’s also posted some webcasts on how to use it. Good stuff, so instead of rolling our own, we’ll be using/extending this one.

System.IdentityModel.Claims.ClaimTypes vs. Microsoft.IdentityModel.Claims.ClaimTypes

Windows Identity Foundation introduces a new ClaimTypes class. It contains predefined claim type URIs for claims defined by OASIS and Microsoft. In the WIF SDK project templates for a custom STS this ClaimTypes class is mixed with the one already in System.IdentityModel.Claims, which is rather confusing. So, what’s the difference?


Functionally: None. All claim type URIs in Microsoft.IdentityModel.Claims.ClaimTypes are identical to corresponding types in System.IdentityModel.Claims.ClaimTypes. That said, Microsoft.IdentityModel.Claims.ClaimTypes adds a few new claim types.


Technically: Claim types in System.IdentityModel.Claims.ClaimTypes are defined as static read-only string properties, whereas in Microsoft.IdentityModel.Claims.ClaimTypes they are string constants.
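That technical difference matters when you need a compile-time constant, for example in an attribute argument or a switch statement; the sketch below assumes claim is a Microsoft.IdentityModel.Claims.Claim:

```csharp
// Both resolve to the same URI string:
string fromSystem = System.IdentityModel.Claims.ClaimTypes.Name;  // static property
string fromWif = Microsoft.IdentityModel.Claims.ClaimTypes.Name;  // constant

// Only a constant can be used where a compile-time value is required:
switch (claim.ClaimType)
{
    case Microsoft.IdentityModel.Claims.ClaimTypes.Name:
        // handle the name claim
        break;
}
// case System.IdentityModel.Claims.ClaimTypes.Name: would not compile.
```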


My advice: for clarity always use Microsoft.IdentityModel.Claims.ClaimTypes.