SharePoint Internet Sites – Performance Optimization for Data Access (1)

So, now it comes to displaying something on your pages!

I presume that you are building a dynamic web site 🙂

By “dynamic” I mean that its content comes from a data store (SharePoint being one of them), where it is persisted in a serialized format. This data is then extracted from the data store during the request processing phase, and some sort of transformation is applied to make it presentable as HTML markup on your final pages.

This process (extract => transform => render) can be extremely slow, and you can probably guess why: if you load a million rows from a database table (or 4,000 items from a SharePoint list), transform the result set into XML, and then apply a complex XSLT transformation that finally produces 4 bytes of HTML markup, it’s clear that something is missing from your architectural design!

But even if you optimize every single step in the above process, you may end up with CPU and memory consumption that is excessive under heavy load.

The answer seems obvious: the best way to reduce the time required to load data from persistent storage is… just not reading anything at all!

That is, use caching!

Cache?

Cool, you may say, and while saying “cool” you enable output caching on every page of your portal.

It will take just a couple of minutes for you to receive a phone call from your customer, saying that users are complaining about strange behavior during navigation. For example:

  • “I logged in but the name that is displayed on the top of the page is not my name”
  • “I did not add any item to the cart, but suddenly I see the cart filling up with articles I’m not even interested in!”

Well, caching has its own drawbacks, for sure.

Here’s a list of pain points you need to be aware of:

  • Cached data has to be stored somewhere, and it will consume resources
  • Windows processes do not share memory (unless you do this explicitly, which I don’t suggest anyway), so in a multi-server scenario you get duplicated information (one copy of each data set for each process serving http requests)
  • Sometimes you end up having multiple processes, even on a single WFE server topology (this is called web gardening)
  • If you choose to externalize data to a common, shared location, you probably need to consider data serialization as a limitation (you can save a string, but you cannot save an XslCompiledTransform instance, just to give you an example)
  • Once you put data into a caching location, that data becomes stale, unless you implement a valid cache invalidation mechanism
  • This cache invalidation mechanism is often hard to implement
  • Coding can be tricky
  • Coding can be error-prone (you should never rely on a copy of your data being available in the cache)

This list is by no means a suggestion to avoid caching. On the contrary, I strongly suggest you apply caching whenever it fits.

Therefore, I would like to summarize what SharePoint offers OOTB, trying to provide some best practices for each case.

Cache

You get three different flavors of cache in SharePoint 2010.

Here’s a small diagram that displays them, giving you some background that we will use later when discussing when you should use each of these techniques.

Object caching

In a word: use it!

SharePoint uses it by default as an optimization for some key components of a typical web site (the Content By Query Web Part, the navigation structure, and so on).

You should just be aware that some query filters (for example, one based on the current user) make it not applicable (and indeed the site query engine prevents caching in these situations).

And…

<developerOnly>

I would encourage you to use object caching when you write code against the SharePoint server object model.

How? You cannot query the cache structure explicitly, but you can use classes (SPSiteDataQuery and, in the publishing namespace, CrossListQueryInfo and CrossListQueryCache) that do the hard work for you. This is transparent, which is fine, since you can forget about checking for null or stale data: everything is under the control of the cache manager.
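To give you an idea, here is a minimal PowerShell sketch of a cached cross-list query. Treat it as an illustration only: the site URL and the CAML fragments are hypothetical, and the class and property names are the ones I recall from the Microsoft.SharePoint.Publishing namespace.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$site = Get-SPSite "http://www.contoso.com"   # hypothetical site collection URL

# Describe the cross-list query; UseCache hands storage and invalidation
# over to the publishing cache manager.
$queryInfo = New-Object Microsoft.SharePoint.Publishing.CrossListQueryInfo
$queryInfo.Webs       = "<Webs Scope='Recursive' />"
$queryInfo.Lists      = "<Lists ServerTemplate='850' />"
$queryInfo.Query      = "<Where><Eq><FieldRef Name='ContentType' /><Value Type='Text'>Article Page</Value></Eq></Where>"
$queryInfo.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='FileRef' />"
$queryInfo.RowLimit   = 20
$queryInfo.UseCache   = $true

$cache   = New-Object Microsoft.SharePoint.Publishing.CrossListQueryCache($queryInfo)
$results = $cache.GetSiteData($site)          # returns a DataTable, served from the cache when possible
$results.Rows | ForEach-Object { $_["Title"] }

$site.Dispose()

The point is that you never touch the cache directly: you only describe the query, and the publishing infrastructure decides what to reuse and when to refresh it.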

Output Caching

In a word: always consider output caching while designing and developing pages and page components, and try to apply a design that makes output caching applicable.

A little example could be helpful in this case.

Imagine you have implemented a page layout that displays a lot of aggregated data coming from external resources. This data takes quite a long time to load, and the presentation layer takes some time to render it too. Plus, this data does not change very often, so you should not worry about invalidation.

This is a perfect candidate for output caching, except for a very small portion of the page layout: a box that displays weather information read from an external RSS service, filtered by the location that the user has specified in his profile settings.

If you apply output caching to the page layout, every user will see the weather for a single location (that of the first user hitting the page), and the weather will stay the same for the whole duration of the page layout caching interval.

This should not be an obstacle to applying output caching to the page layout. How can you do this?

Here’s a couple of possible approaches:

  • Use a combination of AJAX requests and client-side JavaScript to read the information “on the fly” and transform the page accordingly. The HTML of the page can be “weather ignorant”, since the only pieces remaining there are an empty container and the client script that issues the asynchronous HTTP request and parses the results, producing the final markup. And both the empty container and the script can be cached!
  • Use post-cache substitution. This is a somewhat complex technique (it’s easy for simple tasks, but it can get tricky quickly). In a nutshell, you register a control for post-cache substitution, and the ASP.NET runtime calls your control back, asking for a string value that it will insert into the page exactly where the control markup would have been rendered. The page keeps being cached, although parts of it are indeed recalculated for every request.

Blob Caching

I’m mentioning Blob Caching here for the sake of completeness. But I would like to point out that it is not at all related to data or markup caching, so it does not reduce the computation and rendering time of a page per se. It creates copies of static resources (CSS, JS, images, and so on; you specify the resources by extension) that are saved to the file system of each web front-end server. An HTTP module is responsible for retrieving the resources, effectively bypassing the need for the document to be loaded from SharePoint (and therefore from SQL Server, which is expensive compared to raw file system access).

I’m going to talk about Blob Caching in a future part of this article series, but I hope this was enough to explain at least what it is, especially compared to the other available caching techniques.

Tools

That said, what tools can help you investigate data access issues related to caching?

Here I’ll name a few, but consider that this list is by no means exhaustive.

  • SharePoint logging
    • ULS logs contain information about Cross Site Queries, which may or may not use caching
    • Logging database for blocking query reports (a blocking query is a good candidate for replacement with different data access logic)
  • Developer Dashboard
    • You get the execution time at a very detailed level, which may help you investigate which part of the page lifecycle needs further optimization
    • If you are a developer, you can use the SPMonitoredScope class for instrumentation (see the sketch after this list)
  • Performance counters
    • By monitoring resource consumption, you may discover that you need some caching optimization
    • ASP.NET provides several counters related to its cache engine
  • DbgView
    • You can output trace messages that will be consumable even on a live production server. This is not related to caching by itself, but it can definitely be a useful companion
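To make the developer-oriented bullets a bit more concrete, here is a small PowerShell sketch touching three of them: SPMonitoredScope instrumentation, the ASP.NET cache counters and trace messages that DbgView can capture. It is only an illustration; the scope name and trace message are hypothetical, and the counter path may differ slightly depending on the installed ASP.NET version.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Developer Dashboard: wrap a slow block in SPMonitoredScope so that its timing
# shows up on the dashboard (and in ULS) when the code runs inside a SharePoint request.
$scope = New-Object Microsoft.SharePoint.Utilities.SPMonitoredScope("My expensive data access")
try {
    Start-Sleep -Milliseconds 200   # placeholder for the real work
}
finally {
    $scope.Dispose()
}

# Performance counters: sample the ASP.NET cache hit ratio.
Get-Counter '\ASP.NET Applications(__Total__)\Cache Total Hit Ratio' -SampleInterval 5 -MaxSamples 3

# DbgView: anything written through Trace goes to OutputDebugString,
# which DbgView can show even on a live production box.
[System.Diagnostics.Trace]::WriteLine("Cache miss for key 'weather-box'")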

SEO Toolkit and disk usage

A short introduction, for those of you who do not know the SEO Toolkit.

In a nutshell, it’s the Search Engine Optimization Toolkit that you can download and install on top of your IIS setup.

It adds a new feature that allows you to run a spider over a web site (typically a public, anonymous site, although the tool is not limited to anonymous authentication) and get back a ton of results.

The tool itself is able to read these results, aggregate them and provide you with reports.

You get reports about SEO rules that are not fulfilled (pages without the title tag, images without the alt attribute, and several other, sometimes complex rules).

You also get reports about page performance (which page or resource took the longest to download?).

You get tons, tons of interesting stuff in the form of reports.

But nothing comes for free.

A couple of days ago I was running out of disk space on one of my development machines. I tried to find out where the cause was, and… bingo! I had 19 GB (19,000+ MB, yes) occupied by SEO Toolkit results.

Wooooo!

SharePoint Internet Sites – Performance Optimization for IT Professionals

If you have read the introduction to this article series, you will know that a web site implementation should be carried out by a heterogeneous team. A senior, skilled system administrator should be part of this team.

Why?

Well, first of all SharePoint needs to be installed (this is easy) and configured (this is not always easy). I should say it should be configured well, with security and performance in mind. And this, believe me, is not easy at all.

This cannot be a guide to SharePoint configuration (I suggest you get a book on the subject, where you will find valuable information on each and every configuration area).

But I would like to point out a few things you should consider, especially in public web site projects.

Network I/O

This may seem obvious, but low network throughput is one of the most frequent reasons for slow response times (and unsatisfied users!).

As a system administrator you are not always in charge of network connectivity, especially when the web site is hosted by an ISP. But as an expert, you should always give suggestions to your customer and be prepared to test the network connectivity, defining metrics and possibly a baseline that you will use for simulations when you perform stress tests.

Sometimes, though, you control part of the network of the hosting system: maybe not the peripheral segment, but the internal segment is often under your control.

Here you may suffer from very high latency in server-to-server communication. Please do not use a 10/100 link to connect your SharePoint servers to the SQL back end!

And even if the network connectivity between the servers is considered good in low traffic conditions, you should consider isolating the SharePoint farm and its SQL back-end in a private subnet, maybe planning for multihoming. This way you will reduce the “noise” that other services could introduce into the network traffic, preventing contention with the packets that the SharePoint services generate.

The Microsoft Windows Performance Monitor is a great tool that can help you investigate these issues. Combining it with the HTTP traffic reports generated by a Fiddler session can also be a valid aid, although you will need to do some processing on the data you collect.
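Before you fire up Performance Monitor or Fiddler, a couple of PowerShell commands can already give you a very rough latency and throughput baseline. This is just a sketch, and the SQL server name is hypothetical:

# Round-trip latency from a web front end to the SQL back end.
Test-Connection -ComputerName "SQL01" -Count 20 |
    Measure-Object -Property ResponseTime -Average -Maximum

# Raw throughput on the network interfaces while a load test is running.
Get-Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 2 -MaxSamples 5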

Disk I/O

Network connectivity is not the only point you should pay attention to: disk I/O may be another bottleneck if you buy a $99 external hard drive for your SQL data files!

As usual, you need some capacity planning beforehand, as well as a baseline and some supporting tools.
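As a very first cut at that baseline, the classic PhysicalDisk counters are easy to collect with PowerShell. Consider this a sketch; the sampling values are arbitrary, and you should point the counters at the volumes hosting your SQL data and log files:

# Average disk latency and queue length for the disks hosting the SQL data files.
$counters = '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
            '\PhysicalDisk(_Total)\Avg. Disk sec/Write',
            '\PhysicalDisk(_Total)\Current Disk Queue Length'

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }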

I would suggest you take a look at these two valuable resources related to capacity planning and SQL I/O subsystem measurement:

 

Authentication

Your web site will probably be accessible to anonymous users as well as to authenticated users.

Which authentication authority are you going to use? The answer to this question may require some special consideration, since it may involve SSL protection (SSL is secure, but it adds some overhead due to traffic encryption and decryption) or a connection to an external authentication authority you trust.

The claims-based authentication that SharePoint 2010 supports is centered on the concept of security tokens, which are typically saved as cookies and, as such, passed back and forth, increasing the request payload: if you start playing with claims augmentation and have dozens of claims assigned to users, your security token size will grow accordingly.

And this is just about user-to-server authentication.

But you should remember that the SharePoint servers, the SQL servers and potentially any other service you use on the server side usually require authentication: this authentication happens on the server side only, is typically based on Windows identities, and may be claims based or based on NTLM or Kerberos. Some of these settings do not depend on the configuration you apply, while others are completely your responsibility (NTLM vs. Kerberos is one example… and you are choosing Kerberos, right?!).
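As a quick sanity check, you can ask the object model how a web application is currently configured. This is only a sketch: the URL is hypothetical, and the property names are the ones I recall from the SharePoint 2010 administration API.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$wa  = Get-SPWebApplication "http://www.contoso.com"   # hypothetical URL
$iis = $wa.GetIisSettingsWithFallback([Microsoft.SharePoint.Administration.SPUrlZone]::Default)

$wa.UseClaimsAuthentication   # claims or classic mode?
$iis.DisableKerberos          # for classic Windows authentication: $false means Negotiate/Kerberos is allowed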

Taking these considerations to the extreme (not so extreme, believe me), sometimes you end up with a domain controller within your network segment, so that you reduce the latency caused by authentication requests. Maybe you do not need this kind of topology, but it should give you an idea of how performance optimization is an extremely hard topic that requires wider knowledge than basic SharePoint configuration 🙂

Scaling

Needless to say, you will need to scale, because a single-box deployment will hardly be enough for a heavily loaded web site.

Talking about scaling, you know that you have the option of either:

  • Increase the resources of each server (scaling up)
  • Add more servers (scaling out)

In the first case, you should have a deep understanding of which resources need to be increased: do you need additional RAM? Faster CPUs? Additional disk space to support more aggressive blob caching (I’m going to talk about blob caching in another article of this series)? The list could continue…

In the second case, you should decide what you are going to duplicate. In other words, if you add servers you need to know which server roles you want to be redundant (which may add fault tolerance on top of the performance improvements!).

Sometimes you need to add a load balancer (hardware or software) in front of your servers. This is the case for your web front-end servers: without an NLB in front of them, what would route client requests anywhere other than the single server you had before? 🙂

SharePoint Internet Sites – Performance Optimization

SharePoint has evolved over time. There was a significant step forward with the release of Windows SharePoint Services 3.0 and Microsoft Office SharePoint Server 2007, and there are several architectural improvements we can all see now that the 2010 wave has been widely adopted by customers worldwide.

From a Web Content Management perspective, MOSS 2007 brought into the SharePoint family the former MCMS 2002 product, which was modified in order to make it an integral part of the SharePoint platform.

Since then, a constantly increasing number of web sites have been developed on top of MOSS 2007 and, now, on top of SPS2010 (at the risk of being redundant, I have to name Ferrari.com as a stunning example).

Web sites, especially those that will be visited by hundreds of thousands of users, need special consideration up front, starting with the architectural phase, where the global components and services are envisioned and planned.

During these early steps, a team needs to be created so that every single aspect of the web site implementation is taken into account.

You need a deep understanding of the network and server infrastructure you are going to put in place, as well as solid knowledge of the HTML/CSS/JS standards on which you will build the pages presented to the end user. And… well, you will be developing something custom (SharePoint is a platform, not a complete, ready-to-use product, isn’t it?), and you need to do this with special attention, trying to minimize the server load so that the site can scale out and reach a wider audience with service continuity.

That’s why this series of articles tries to categorize some best practices you need to be aware of when designing and building public, internet-facing web sites, and the categorization I’m going to propose is based on your role in the project: whether you are an IT pro, a web designer or a developer, there’s something you should think about in this particular kind of project.

Enough of an introduction: let’s start with some real-world insight!

(…continued…)

Measuring HTTP requests response time

Needless to say, this topic is complex, first of all because the total response time varies due to network congestion, server load and the number of bytes transferred, just to name a few factors.

Anyway, if you just need a rough indication, you could try out this small script:

function GetRequestTime([string]$url)
{
  # Download the page and measure the elapsed time in milliseconds.
  $wc = New-Object System.Net.WebClient
  $wc.Credentials = [System.Net.CredentialCache]::DefaultCredentials
  $start = Get-Date
  $output = $wc.DownloadString($url)
  $span = (Get-Date).Subtract($start)
  return $span.TotalMilliseconds
}

function ShootRequests([string]$url, [int]$count)
{
  # Fire the request $count times and return the average response time.
  $totalTime = 0
  1..$count | ForEach-Object { $totalTime += GetRequestTime $url }
  return $totalTime / $count
}

What it does is fire a request a configurable number of times and return the average response time. Here’s how you may invoke the function:

ShootRequests http://blog.claudiobrotto.com 50

Just a few notes, though:

  • It does not consider any client-side caching (you could easily overcome this issue by passing some query string parameter that makes the client consider it a brand new resource; see the small variant after this list)
  • It does not automatically download any resource referenced in the HTML of the page (e.g. if a page contains img tags that force the browser to download 10 MB of content… this script just ignores that)
  • Rendering time is completely invisible to it as well (you may have pages with a small network footprint but a lot of JS code that slows down the browser rendering)
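Regarding the first note, a quick (and admittedly crude) way to defeat client-side caching is a small variant of GetRequestTime that appends a random query string parameter, so that every request looks like a brand new resource to the client (the parameter name is arbitrary):

function GetRequestTimeNoCache([string]$url)
{
  # Make every URL unique so that the client cannot serve it from its cache.
  $separator = if ($url.Contains("?")) { "&" } else { "?" }
  GetRequestTime ($url + $separator + "nocache=" + (Get-Random))
}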