Category Archives: ASP.NET

Common Windows Identity Foundation misconceptions

I am on a crusade to get Windows Identity Foundation (WIF) adopted by the Microsoft .NET community at large. Why? Because maintaining a user store within an application, as ASP.NET Membership encourages, is just plain stupid these days. Yes, I may be a little harsh with that judgment, but sparing the rod spoils the child. Applications should no longer be islands; they should work together. And if applications such as Spotify and Flickr can (re)use a user’s identity from Facebook, Twitter, LinkedIn, and so on, why can’t yours?

“A thousand mile journey begins with one step” – Lao Tze

In the past couple of years I’ve been speaking about WIF on many occasions, both at conferences and with individual developers. Across the board I can say that WIF is largely misunderstood. Hence my first step is to address some of the misconceptions surrounding WIF, and more generally the concepts underlying it.

Misconception 1: WIF is Microsoft-only and not interoperable.

WIF actually implements the WS-Federation standard. Microsoft is an active participant in the standards commonly known as the WS-* specifications, a host of web services specifications for security, transactions, and reliable messaging. The WS-Federation standard is implemented by many other platforms, and WIF can interoperate with these just fine.

Misconception 2: WIF can only be used to secure web applications, not web services.

The WS-Federation protocol defines two profiles: Active and Passive. Passive federation is for browser-based applications, because browsers don’t support the full cryptographic stack required for WS-Federation to work. Active federation is used for web services and for clients that do support the needed cryptographic capabilities. I’ll get back to what this all means in another post.

Misconception 3: WIF can only be used to secure web services, not web applications.

See misconception #2.

Misconception 4: WIF is only for cloud (Azure) applications.

WIF works with any application written in .NET 3.5 and up. You can host that application anywhere you like, in the cloud or in your own data center. In fact, nothing prevents you from using WIF in applications that run purely on your local network, for internal use only.

Misconception 5: You can’t do role-based security with WIF.

Quite the opposite is true. You can still do role-based security if you want to, but you can also do much more. The underlying protocol is much more flexible, and you can implement security checks in your applications based on the information you get about a user, in any way you like.
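To make that concrete, here is a minimal sketch of a check based on what you know about the user rather than on a role. It assumes the WIF Microsoft.IdentityModel assembly and a date-of-birth claim issued by the STS; the class name and the claim value format are illustrative assumptions.

using System;
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class AgeChecks
{
    // Hypothetical check: authorize on the user's date-of-birth claim instead of a role.
    public static bool IsAdult()
    {
        IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;
        Claim dateOfBirth = identity.Claims
            .FirstOrDefault(c => c.ClaimType == ClaimTypes.DateOfBirth);

        // Assumes the claim value parses as a date; adjust to whatever format your STS issues.
        return dateOfBirth != null
            && DateTime.Parse(dateOfBirth.Value) <= DateTime.Today.AddYears(-18);
    }
}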

Misconception 6: WIF only adds complexity.

It is indeed true that properly connecting a WIF-enabled application to a security token service can be a challenge. You need to get the protocol settings to match, and you need certificates for encryption and signing. However, inside your application WIF is just as easy as the role-based security you are used to. If you want to do more elaborate things, things obviously get more complex, but that is true for any type of security.

Misconception 7: To use WIF in an existing application I need to re-architect the whole application.

WIF extends the IIdentity and IPrincipal interfaces. This means that your existing application will keep working if you migrate to WIF to authenticate and authorize users. The only thing you need to be aware of is that, because you no longer have a local user directory, you can’t do things that require information about another user. This means you may have to provide a different way to deal with such scenarios. If you use ASP.NET Profiles for this kind of information, a custom provider may be all you need.
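To illustrate, here is a minimal sketch (the page and role names are hypothetical): pre-WIF authorization code like this keeps working after the migration, because the claims principal WIF puts on the request still implements IPrincipal and answers IsInRole from the role claims in the token.

using System;
using System.Web.UI;

public partial class OrdersPage : Page   // hypothetical page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Existing role-based checks are untouched by the move to WIF.
        if (!User.IsInRole("Sales"))
        {
            Response.Redirect("~/AccessDenied.aspx");   // hypothetical access-denied page
        }
    }
}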

IE9 Developer Tools

In recent years the development story for Internet Explorer wasn’t particularly appealing. If you wanted to fix CSS and JavaScript errors, IE was definitely not the tool you wanted to use. Also, seeing what was going over the wire wasn’t possible with IE, and as a result developers flocked to Firefox and other browsers offering plugins to help with these issues. You don’t have to be a genius to understand that in the long run this wasn’t helping IE in terms of market share. With the renewed focus on web-based (HTML5) apps, Microsoft has stepped up and produced built-in developer tools, also known as the F12 developer tools. So, what’s in there and what can you do with it?

What’s taking so long?

As with IE8, there are inspector tools for HTML, CSS, and script. Since I am by no means an HTML/CSS guy, I’m not the best judge of these tools, but for what I need from them I’ve been pretty satisfied. For me, the new profiler and network tools are much more interesting, because they hook into the browser rendering engine and into what goes over the wire with HTTP, respectively. If you’ve been using tools such as Fiddler or HttpWatch, the latter should be more or less familiar. As you can see in the image below, it shows all the HTTP requests going out to the server, where in the timeline these requests were made, and how long each took. If you’ve never seen something like this before, you can see that it provides great insight into what goes on under the covers.

If you need more details about the timing, you can select one of the items and see more. As you can see below, that information includes not only HTTP information, but also the time it took to render and for JavaScript to fire. If a page is slow to appear in the browser, this gives you great insight into where the time is going.

Is this functionality better than commercial tools such as HttpWatch? Not at this time, but I have a feeling Microsoft isn’t done yet. Tools like that are specialized, and Microsoft is playing catch-up. One annoying thing I found is that if I have multiple requests bouncing back and forth, filling in a form, etc., the IE9 tools only show me the last interaction. It could be that I’m missing something, but I haven’t been able to figure out how to see the whole list of requests since I started capturing, and I’m too lazy to dig much deeper. That means I find myself going back to HttpWatch for that (at the moment). That said, the tooling is good, so if you don’t want to spend the extra dime on other tooling, this will do in most cases. Except of course that this only works in IE9, whereas some of the other tools out there work in multiple browsers. But wait… there’s more.

What am I getting?

An interesting question is always: what HTML will a certain browser actually get? This is where the F12 tools have another nice new feature. You can change the user agent string the server receives, and inspect what happens on the HTML, CSS, and script side when other browsers come in. Obviously this doesn’t make IE9 behave like one of the other browsers, but it can provide nice insights nonetheless, especially for tweaking what robots are seeing.
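Why does this matter? Because server-side code often branches on the incoming user agent. Here is a contrived sketch (page name and markup are made up) of the kind of code you can exercise from IE9 this way:

using System;
using System.Web.UI;

public partial class HomePage : Page   // hypothetical page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // By changing the user agent string in the F12 tools you can hit each branch.
        string userAgent = Request.UserAgent ?? string.Empty;
        if (userAgent.Contains("Googlebot"))
        {
            Response.Write("<!-- lean markup for crawlers -->");
        }
        else
        {
            Response.Write("<!-- full markup for interactive browsers -->");
        }
    }
}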

How will it look?

The last thing I found really useful is the ability to change the browser so you can check the user experience for users with different settings. As you can see from the image below, you can disable CSS, script, and the pop-up blocker. In the environment I’m working in now, there’s often a need to check whether everything still works when JavaScript is disabled, and for that this is a great tool. It definitely beats going into the browser settings and changing them every time you have to test.

Last but not least, you can easily resize the browser window to a certain size. I always used Windows Sizer for this, but having it built in is better, because I rarely use it for anything but web development.

What’s more?

There’s a whole bunch of stuff I haven’t gone into here, so I advise you to play around with the F12 tools for a while. I’m also betting we’ll see a lot more where this came from in the not too distant future. Microsoft is investing heavily in HTML5, and is actually trying to use “the best HTML5 support” as a unique selling point for Windows.

How to get {get; set;} properties automatically from the Visual Studio Class Diagram

I sometimes use the Visual Studio Class Diagram when I’m designing a system. Because I like to test my assumptions in such a situation, I want to be able to quickly create classes that just work. Unfortunately, when you add a property to a class, Visual Studio generates code like this:

public string SomeProperty
{
    get
    {
        throw new System.NotImplementedException();
    }
    set
    {
    }
}

In most cases what I need is:

public string SomeProperty { get; set; }

Fortunately, the PowerToys for the Class Designer and Distributed System Designer solve this problem. After installing these (and turning them on in the Add-In Manager), the right-click menu is enhanced with a lot of new options. One of them is Add->Auto-Implemented Property, as shown below.

Tips on Finding Performance Issues in Your ASP.NET App – Part 2

Earlier I blogged about finding performance issues in an ASP.NET app “in the large” (see here). I’d like to reiterate that doing this for a web app is critical, because it not only shows you where the bottlenecks are, but also how they affect the entire application. I said I’d follow up on profiling, so here it is…

Once you know what the bottlenecks or “hot spots” are, you can dive into figuring out what the problem is with these pages. This is where profiling comes in. Profiling shows you what is happening inside your code: how long a method call takes and how often a call is made. Both are interesting, because performance bottlenecks can be caused by calls taking long, but also by too many calls to a method. Now, I won’t get into the details of how to profile with Visual Studio 2010 (you can read Beginners Guide to Performance Profiling for that), but when you use this tooling, you should focus on one page at a time. The profiler will throw a heap of information at you, so dealing with one page is hard enough. Once you have this information you have to determine what’s really going on. Is this something you can fix by changing a few lines of code, or is there a more structural problem you need to solve? Pages that take 10 seconds or more under no load likely have a structural problem, so check whether a lot of the code being executed is just waste. Also, be sure to focus on the big fish first. You can worry about micro-optimizations (such as a different string compare mechanism) later. That said, you should try to make such optimizations part of your coding guidelines, rather than looking at them afterwards. Micro-optimizations are only really interesting for very high performance apps. A tenth of a second lost here and there isn’t going to make a lot of difference, apart from maybe needing to scale out a little earlier.
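As a tiny, hypothetical illustration of the string-compare micro-optimization mentioned above:

using System;

public static class NameComparer
{
    // Allocates two lowered copies of the strings on every call.
    public static bool Slower(string a, string b)
    {
        return a.ToLower() == b.ToLower();
    }

    // Compares in place, without the allocations.
    public static bool Faster(string a, string b)
    {
        return string.Equals(a, b, StringComparison.OrdinalIgnoreCase);
    }
}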

Merging Claims Based Authorization and Application Authorization

It is typically a good idea to separate the general authorizations of a user from application-specific authorizations. This is why we invented Application Roles (e.g. Settings Administrator), which are separate from Organizational Roles (e.g. System Administrator). When using Application Roles, we can map these roles to Organizational Roles. In organizations using Active Directory, Organizational Roles are typically stored in AD. Application Roles can then be stored using Authorization Manager (AzMan) in XML, or in AD in Application Mode (separate from the organization’s AD).

Over the years I’ve built quite a few applications that use the above model, and it works well if you authorize with roles. But these days I do most of my work using things like Claims Based Authorization, so the question is: “Does this translate to the CBA world? And if so, how?” The answer is that yes, it translates (very well actually), at least in Windows Identity Foundation.

In the CBA world an application receives a token with claims about the user. Like with roles, this should typically be claims not specific to the application, unless the only source for the claim information lies within (or is only accessible to) the STS. This serves two purposes:

  1. The token is generally usable across applications, so the STS can deal with this more easily.
  2. Tokens are not stuffed with a lot of claims.

The latter is actually more important than you might think. Adding more claims means a bigger token, and there comes a point where the token is so large that, for instance, ASP.NET rejects the request because it exceeds the accepted request size (which you should only increase if really necessary).

Now, one of the great things about CBA is that it enables me to create business logic that checks authorization based on meaningful values, rather than a role. On top of that, I wouldn’t want a hybrid security system for the claims stuff and the application-specific stuff. Fortunately, in Windows Identity Foundation I can add claims to a claims identity, and these claims then behave the same as the claims acquired from the STS token. The only difference is that the issuer is set to LOCAL AUTHORITY rather than the STS, which means these claims are really only usable locally in my app (or service). The code to add a local claim is easy:

// Requires a reference to the WIF assembly and: using Microsoft.IdentityModel.Claims;
IClaimsPrincipal claimsPrincipal = Page.User as IClaimsPrincipal;
IClaimsIdentity claimsIdentity = (IClaimsIdentity)claimsPrincipal.Identity;
// The issuer of this claim ends up as LOCAL AUTHORITY instead of the STS.
claimsIdentity.Claims.Add(new Claim("http://MyApp/SomeAppClaim", "SomeValue"));

You can execute code like this when a session starts, adding all application-specific claims for the user (identified by an STS claim) to the claims identity. The local claims then get the same lifetime as the claims originally from the token, so you only have to add them once. This way, adding application-specific claims is still separated from the functional code, which was the benefit we started with.
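As a rough sketch of that idea, assuming a Global.asax Session_Start handler and a hypothetical ApplicationClaimsRepository that looks up the application-specific claims for a user:

using System;
using System.Linq;
using System.Web;
using Microsoft.IdentityModel.Claims;

public class Global : HttpApplication
{
    protected void Session_Start(object sender, EventArgs e)
    {
        IClaimsIdentity identity = (IClaimsIdentity)Context.User.Identity;

        // Identify the user by a claim from the STS token (the claim type is just an example).
        string userId = identity.Claims
            .First(c => c.ClaimType == ClaimTypes.NameIdentifier).Value;

        // ApplicationClaimsRepository is a hypothetical store of application-specific claims.
        foreach (Claim appClaim in ApplicationClaimsRepository.GetClaimsFor(userId))
        {
            // Claims added here get LOCAL AUTHORITY as their issuer, not the STS.
            identity.Claims.Add(appClaim);
        }
    }
}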

Although the above code will definitely work, there is another option when using WCF, known as Claims Transformation. With Claims Transformation you can define policies that define ClaimSets to add to a user’s token. This model is much more flexible, as explained in the MSDN article Service Station: Authorization in WCF-Based Services (this link jumps straight to the Claims Transformation section). That article is from the pre-WIF era, but you can do similar things with the ClaimsAuthorizationManager in Microsoft.IdentityModel.

Tips on Finding Performance Issues in Your ASP.NET App

Performance issues can creep up in all sorts of places. Finding them is all about knowing where to look. This also depends on how you look, which can be at the application as a whole (“in the large”) or at individual functions (“in the small”). The latter is known as profiling. Because (ASP.NET) web applications are all about large numbers (of users), looking at the application as a whole is a good place to start. This is where load testing (a.k.a. stress testing) comes in. Load tests will show you which pages are performing poorly, which is the first step in determining where to take a closer look. Load Testing 101: ASP.NET Web Applications is a great starting point to get yourself up to speed with the mechanics of a good load test, even though it’s from 2006.

One thing in the article that I think is absolutely critical is creating a single-user baseline. This shows you which pages do well when run on their own versus pages that do not. The results of that test already give you an indication of where to look. In fact, a full load test may actually skew the results, because fast pages can be held up by slow pages if the request queue fills up. Fast pages can be identified under load by the difference between the best, average, and worst results: for fast pages these show huge differences (from a few tenths of a second to tens of seconds), whereas slow pages have numbers that are bad across the board.

If you’re looking for tools to do load tests, check out the Web Capacity Analysis Tool (WCAT) provided by Microsoft. The downloads can be found here:

An interesting tool you can use with WCAT is the WCAT Fiddler Extension for Web Server Performance Tests. It helps you to record a path through your app with Fiddler, and then use that path in a WCAT load test.

Note: I will cover profiling (“in the small” testing) in a different post.

Notes from the SDN Event of 13 December 2010

At the SDN Event on 13 December I gave two presentations. Below you can download the notes I made on my tablet (for those who weren’t there: instead of slides, I did my sessions by drawing in OneNote).

Periodically calling a function in your web app

A recurring theme in web programming is calling a function periodically and/or at a specific date and time. This has two aspects:

  • Calling a function on a scheduled basis
  • Making sure time-outs don’t interfere

Calling a function on a scheduled basis

To be able to call a function in your web app, you first need an endpoint (a URL) that you can call to kick the function off. In ASP.NET you can do this in several ways:

  1. Create a page that calls the function.
  2. Create a handler (ASHX) that calls the function (more efficient than a page; see the sketch after this list).
  3. Create a WCF service that allows calls with HTTP GET, as discussed in this blog post by Sasi Suryadevara.
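Here is a minimal sketch of option 2; the handler class and the MaintenanceTasks placeholder are hypothetical names.

using System.Web;

// Hypothetical placeholder for the actual periodic work.
public static class MaintenanceTasks
{
    public static void RunMaintenance()
    {
        // ... the function you want to call periodically ...
    }
}

// Minimal handler; map it to a URL such as /tasks/run.ashx.
public class ScheduledTaskHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        MaintenanceTasks.RunMaintenance();
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }
}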

With your endpoint in place, you can use the Windows Task Scheduler to invoke the function at any given time and at intervals as low as one minute. With the Windows Task Scheduler you have several options again:

  1. Create a VB Script that calls the URL, as discussed in this blog post by Steve Schofield.
  2. Create a PowerShell script that calls the URL (same as option 1, but more modern).
  3. Have the Windows Task Scheduler open Internet Explorer with the specified URL (e.g. C:\PROGRA~1\INTERN~1\iexplore.exe -extoff http://www.google.com, which starts IE without extensions). If you do this, you also need to specify that the Task Scheduler closes IE after 1 minute, which you can do in the Settings tab of the task (Windows 2003) or in the Trigger configuration (Windows 2008), as shown below. NOTE: I’ve found that IE sometimes doesn’t close, even if you tell Windows to close it. Eventually this will cripple your scheduled task.


Task Settings in Windows 2003


Trigger configuration in Windows 2008

Note: In Windows 2008 the dropdowns governing duration and interval show 30 minutes as the lowest value. You can in fact change this to 1 minute by editing the text.

Making sure time-outs don’t interfere

A web-based call is bound to time out after a few minutes. If your task takes longer than that, the call may be aborted, depending on how you programmed it and on the web server settings regarding disconnected clients. To ensure a time-out does not interfere, you can spawn a new thread and have it call the function. That way the thread handling the request can return a response to the client, and the function is carried out regardless. One issue that may arise is that the function itself hangs or takes too long. You may want to add logic to ensure that it’s aborted after a certain time, add logging to notify you of this, and possibly also ensure that the function can only be run by one caller at a time.
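A rough sketch of that approach, reusing the hypothetical MaintenanceTasks from the earlier sketch: the handler returns immediately while the work continues on a background thread, and an Interlocked flag keeps a second caller from starting the work while it is still running.

using System.Threading;
using System.Web;

public class LongRunningTaskHandler : IHttpHandler
{
    // 0 = idle, 1 = running; guards against overlapping runs.
    private static int running;

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        if (Interlocked.CompareExchange(ref running, 1, 0) == 0)
        {
            // Do the work on a background thread so this request can return
            // before any client or server time-out kicks in.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                try
                {
                    MaintenanceTasks.RunMaintenance();
                }
                finally
                {
                    Interlocked.Exchange(ref running, 0);
                }
            });
            context.Response.Write("Started");
        }
        else
        {
            context.Response.Write("Already running");
        }
    }
}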

Forwarding cookies

Sometimes we come across integration scenarios that look straightforward, but where the devil is in the details. We needed to integrate our ASP.NET/Silverlight application into an existing ASP “classic” site (yes, they still exist). The catch was that we needed to call the ASP “classic” site in a server-to-server call to get some information, but we needed to do this in the context of the current user. You may be wondering why we didn’t go through a shared database or something, but the problem is that there is little knowledge left of the old app, so changing the existing app was a no-go.
So, in order to impersonate the user, you need to make your server-side request look like it comes from that user. This means forwarding the cookies the user sends, and sending back the cookies the server sends to the user. Below is code that demonstrates this.

// Forward the cookies the browser sent us on the server-to-server request.
HttpWebRequest webRequestToServer = (HttpWebRequest)HttpWebRequest.Create("http://somedomain/somepage.asp");
webRequestToServer.CookieContainer = new CookieContainer();
foreach (String cookieKey in Request.Cookies)
{
    HttpCookie cookie = Request.Cookies[cookieKey];
    Cookie serverCookie = new Cookie(cookie.Name, cookie.Value, "/", "somedomain");
    webRequestToServer.CookieContainer.Add(serverCookie);
}
// Send the request and copy any cookies the server returns back to the browser.
HttpWebResponse webResponseFromServer = (HttpWebResponse)webRequestToServer.GetResponse();
foreach (Cookie serverCookie in webResponseFromServer.Cookies)
{
    HttpCookie clientCookie = new HttpCookie(serverCookie.Name, serverCookie.Value);
    clientCookie.Expires = serverCookie.Expires;
    // Set adds the cookie or updates an existing one, so we don't end up with
    // duplicate Set-Cookie headers for the same name.
    Response.Cookies.Set(clientCookie);
}
webResponseFromServer.Close();

This code works fine in a test environment, but there is a catch… in some cases the domain of the server is not set in the cookie you get on the server side. The problem is that when you then set the domain yourself, it doesn’t correspond to what the server expects. You can see this if you write the cookies you send and receive (both on the browser connection and on the server-to-server connection, including the domain) to a log. It took a while to figure out, but replacing “somedomain” with Request.ServerVariables["LOCAL_ADDR"] did the trick.
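In other words, the line that creates the server-side cookie in the code above becomes something like this:

// Use the address the request actually arrived on instead of a hard-coded domain.
Cookie serverCookie = new Cookie(cookie.Name, cookie.Value, "/",
    Request.ServerVariables["LOCAL_ADDR"]);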