Monthly Archives: December 2010

Tips on Finding Performance Issues in Your ASP.NET App – Part 2

Earlier I blogged about finding performance issues in an ASP.NET app “in the large” (see here). I’d like to reiterate that doing this for a web app is critical, because it not only shows you where the bottlenecks are, but also how these affect the entire application. I said I’d follow up on profiling, so here it is…

Once you know what the bottlenecks or “hot spots” are, you can dive into figuring out what the problem is with these pages. This is where profiling comes in. Profiling shows you what is happening inside your code: how long a method call takes and how often it is made. Both are interesting, because performance bottlenecks can be caused by calls that take long, but also by too many calls to a method. Now, I won’t get into the details of how to profile with Visual Studio 2010 (you can read Beginners Guide to Performance Profiling for that), but when you use this tooling, you should focus on one page at a time. The profiler will throw a heap of information at you, so dealing with one page is hard enough. Once you have this information, you have to determine what’s really going on. Is this something you can fix by changing a few lines of code, or is there a more structural problem you need to solve? Pages that take 10 seconds or more under no load likely have a structural problem, so check whether a lot of the code being executed is just waste. Also, be sure to focus on the big fish first. You can worry about micro-optimizations (such as a different string compare mechanism) later. That said, you should try to make such optimizations part of your coding guidelines, rather than looking at them afterwards. Micro-optimizations are only really interesting for very high performance apps. A tenth of a second lost here and there isn’t going to make a lot of difference, apart from maybe needing to scale out a little earlier.
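To illustrate the kind of string-compare micro-optimization mentioned above, here is a minimal sketch (method and parameter names are hypothetical): comparing with StringComparison.OrdinalIgnoreCase avoids the temporary strings that ToLower() allocates on every call.

```csharp
using System;

static class KeyComparer
{
    // Allocates two temporary lowered strings on every call
    // (and throws if either argument is null).
    public static bool SlowMatch(string a, string b)
    {
        return a.ToLower() == b.ToLower();
    }

    // Compares in place, without allocations; appropriate for
    // identifiers and keys that are not culture-sensitive.
    public static bool FastMatch(string a, string b)
    {
        return string.Equals(a, b, StringComparison.OrdinalIgnoreCase);
    }
}
```

In a hot code path called thousands of times per request this difference can show up in a profile, but as noted above, it only matters after the big fish are dealt with.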

Merging Claims Based Authorization and Application Authorization

It is typically a good idea to separate general authorizations of a user from application-specific authorizations. This is why we invented Application Roles (e.g. Settings Administrator), which are separate from Organizational Roles (e.g. System Administrator). When using Application Roles, we can map these roles to Organizational Roles. In organizations using Active Directory, Organizational Roles are typically stored in AD. Application Roles can then be stored using Authorization Manager (AzMan), either in XML or in AD in Application Mode (separate from the organization’s AD).
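The mapping idea can be sketched as follows. This is a minimal illustration with hard-coded, hypothetical role names; in a real application the mapping would be read from AzMan or AD in Application Mode rather than from code.

```csharp
using System.Collections.Generic;
using System.Security.Principal;

static class ApplicationRoleMapper
{
    // Organizational Role -> Application Roles. In practice this mapping
    // lives in AzMan or AD in Application Mode, not in source code.
    static readonly Dictionary<string, string[]> Map =
        new Dictionary<string, string[]>
        {
            { "System Administrator", new[] { "Settings Administrator" } },
            { "Help Desk",            new[] { "User Viewer" } }
        };

    // Returns the Application Roles that correspond to the
    // Organizational Roles the user is a member of.
    public static IEnumerable<string> GetApplicationRoles(IPrincipal user)
    {
        foreach (KeyValuePair<string, string[]> entry in Map)
        {
            if (user.IsInRole(entry.Key))
            {
                foreach (string appRole in entry.Value)
                {
                    yield return appRole;
                }
            }
        }
    }
}
```

The functional code then only ever checks Application Roles, so it stays decoupled from how the organization happens to group its users.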

Over the years I’ve built quite a few applications that use the above model, and it works well if you authorize with roles. But these days I do most of my work using things like Claims Based Authorization (CBA), so the question is: “Does this translate to the CBA world? And if so, how?”. The answer is that yes, it translates (very well actually), at least in Windows Identity Foundation.

In the CBA world an application receives a token with claims about the user. As with roles, these should typically be claims that are not specific to the application, unless the only source for the claim information lies within (or is only accessible to) the STS. This serves two purposes:

  1. The token is generally usable across applications, so the STS can deal with this more easily.
  2. Tokens are not stuffed with a lot of claims.

The latter is actually more important than you might think. More claims mean a bigger token, and there comes a point where the token is so large that, for instance, ASP.NET rejects the request because it is bigger than the accepted request size (which you should only increase if really necessary).
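For reference, in ASP.NET that request size limit is the maxRequestLength attribute on httpRuntime in web.config. The value is in KB and defaults to 4096; the value below is just an example of raising it, which, as said, you should only do when really necessary.

```xml
<configuration>
  <system.web>
    <!-- Value is in KB; the default is 4096. Only increase when necessary. -->
    <httpRuntime maxRequestLength="8192" />
  </system.web>
</configuration>
```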

Now, one of the great things about CBA is that it enables me to create business logic which checks authorization based on meaningful values, rather than a role. On top of that, I wouldn’t want to have a hybrid security system for the claims stuff and the application-specific stuff. Fortunately, in Windows Identity Foundation I can add claims to a claims identity, and these claims then behave the same as the claims acquired from the STS token. The only difference is that the issuer is set to LOCAL AUTHORITY, rather than the STS, which means these claims are really only usable locally in my app (or service). The code to add a local claim is easy:

// Get the WIF claims principal and identity for the current user.
IClaimsPrincipal claimsPrincipal = (IClaimsPrincipal)Page.User;
IClaimsIdentity claimsIdentity = (IClaimsIdentity)claimsPrincipal.Identity;

// Add an application-specific claim; WIF sets its issuer to LOCAL AUTHORITY.
claimsIdentity.Claims.Add(new Claim("http://MyApp/SomeAppClaim", "SomeValue"));

You can execute code like this when a session starts, and add all application-specific claims for the user (identified by an STS claim) to the claims identity. The local claims then get the same lifetime as the claims originally from the token, so you only have to add them once. This way, adding application-specific claims is still separated from the functional code, which was the benefit to start with.
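Once the local claims are in place, business logic can check them exactly like STS claims. A minimal sketch, using the hypothetical claim type and value from the snippet above (the method name is made up for illustration):

```csharp
using System.Linq;
using Microsoft.IdentityModel.Claims;

static class SettingsGuard
{
    // True if the user has the application-specific claim, regardless of
    // whether it came from the STS token or was added locally; the business
    // logic does not need to care about the issuer.
    public static bool MayEditSettings(IClaimsIdentity identity)
    {
        return identity.Claims.Any(c =>
            c.ClaimType == "http://MyApp/SomeAppClaim" &&
            c.Value == "SomeValue");
    }
}
```

Because the check is on a claim rather than a role, the rule reads in terms of what the user may do, not which group the user happens to be in.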

Although the above code will definitely work, there is another option when using WCF, known as Claims Transformation. With Claims Transformation you can define policies that define ClaimSets to add to a user’s token. This model is much more flexible, as explained in the MSDN article Service Station: Authorization in WCF-Based Services (jumps straight to the Claims Transformation section). That article is from the pre-WIF era, but you can do similar stuff with the ClaimsAuthorizationManager in Microsoft.IdentityModel.
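For orientation, a minimal sketch of that extension point: you derive from ClaimsAuthorizationManager and override CheckAccess, and WCF calls it per operation. The resource check and claim type below are hypothetical, and the class would be registered in the microsoft.identityModel section of the service configuration.

```csharp
using System.Linq;
using Microsoft.IdentityModel.Claims;

public class MyAuthorizationManager : ClaimsAuthorizationManager
{
    // The context carries the resource, the requested action, and the
    // caller's claims principal.
    public override bool CheckAccess(AuthorizationContext context)
    {
        IClaimsIdentity identity =
            (IClaimsIdentity)context.Principal.Identity;

        // Hypothetical rule: only the settings resource is restricted.
        bool isSettingsResource = context.Resource.Any(
            r => r.Value.Contains("Settings"));
        if (!isSettingsResource)
        {
            return true;
        }

        // Access requires the application-specific claim.
        return identity.Claims.Any(c =>
            c.ClaimType == "http://MyApp/SomeAppClaim");
    }
}
```

The advantage over the session-start approach is that the policy runs centrally for every call, instead of relying on each page or service to have added the right local claims.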

Tips on Finding Performance Issues in Your ASP.NET App

Performance issues can creep up in all sorts of places. Finding them is all about knowing where to look. This also depends on how you look, which can be at the application as a whole (“in the large”) or at individual functions (“in the small”). The latter is known as profiling. Because (ASP.NET) web applications are all about large numbers (of users), looking at the application as a whole is a good place to start. This is where load testing (a.k.a. stress testing) comes in. Load tests will show you which pages are performing poorly, which is the first step in determining where to take a closer look. Load Testing 101: ASP.NET Web Applications is a great starting point to get yourself up to speed with the mechanics of a good load test, even though it’s from 2006.

One thing in the article that I think is absolutely critical is creating a single-user baseline. This will show you which pages do well when run on their own versus pages that don’t. The results of that test already give you an indication of where to look. In fact, a full load test may actually skew the results, because fast pages can be held up by slow pages if the request queue fills up. Under load, such fast pages can be identified by the difference between the best, average, and worst results: for fast pages these show huge differences (from a few tenths of a second to tens of seconds), whereas slow pages have numbers which are bad across the board.

If you’re looking for tools to do load tests, check out the Web Capacity Analysis Tool (WCAT) provided by Microsoft. The downloads can be found here:

An interesting tool you can use with WCAT is the WCAT Fiddler Extension for Web Server Performance Tests. It lets you record a path through your app with Fiddler and then use that path in a WCAT load test.
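To give an impression of what such a test definition looks like, here is a minimal WCAT scenario file sketch. The URL, durations, and transaction name are hypothetical, and the exact syntax may differ between WCAT versions; the Fiddler extension generates files along these lines from a recorded session.

```
scenario
{
    warmup      = 30;   // seconds of load before measurements start
    duration    = 120;  // seconds of measured load

    default
    {
        version = HTTP11;
    }

    transaction
    {
        id     = "browse_home";
        weight = 1;

        request
        {
            url = "/Default.aspx";
        }
    }
}
```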

Note: I will cover profiling (“in the small” testing) in a different post.

Notes from the SDN Event, 13 December 2010

I gave two presentations at the SDN Event on 13 December. Below you can download the notes I made on the tablet (for those who weren’t there: instead of slides, I did my session by drawing in OneNote).