Then it comes time to display something on your pages!
I presume that you are building a dynamic web site 🙂
By “dynamic” I mean that its content comes from a data store (SharePoint being one of these), where it is written in a serialized format used for persistence. This data is then extracted from the data store during the request processing phase, and some sort of transformation is applied to make it presentable as HTML markup on the final page.
This process (extract => transform => render) can be extremely slow, and you may guess the reason: if you load a million rows from a database table (or 4K rows from a SharePoint list), transform the resultset into XML, then apply a complex XSLT transformation that finally produces 4 bytes of HTML markup, it’s clear that something is missing from your architecture design!
But even if you optimize every single step of the above process, you may end up with excessive CPU and memory consumption under heavy load.
The answer seems obvious: the best way to reduce the time required to load data from a persistent storage is… just not reading anything at all!
That is, use caching!
Cool, you may say, and while saying “cool” you enable output caching on every page of your portal.
It will take just a couple of minutes for you to receive a phone call from your customer, saying that users are complaining about strange behaviors during navigation. For example:
- “I logged in but the name that is displayed on the top of the page is not my name”
- “I did not add any item to the cart, but suddenly I see the cart is filling up with articles I’m not even interested in!”
Well, caching has its own drawbacks, for sure.
Here’s a list of pain points you need to be aware of:
- Cached data must be stored somewhere, and that storage consumes resources
- Windows processes do not share memory (unless you do so explicitly, which I don’t suggest anyway), so in a multi-server scenario you get duplicated information (one copy of each data set for each process serving HTTP requests)
- Sometimes you end up with multiple processes even on a single-WFE topology (this is called web gardening)
- If you choose to externalize data to a common, shared location, you need to consider serialization as a limitation (you can save a string, but you cannot save an XslCompiledTransform instance, just to give an example)
- Once you put data into a caching location, it becomes stale unless you implement a valid cache invalidation mechanism
- This cache invalidation mechanism is often hard to implement
- Coding can be tricky
- Coding can be error prone (you should never rely on a copy of your data being available in the cache)
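The last point deserves a sketch: always treat the cached copy as optional, and fall back to the authoritative data source when it is missing. A minimal get-or-load pattern in C# against the ASP.NET cache — the cache key and the loader method are hypothetical placeholders for your own data access:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class ProductCatalog
{
    // Hypothetical cache key; pick something unique to the data set.
    private const string CacheKey = "ProductListHtml";

    public static string GetProducts()
    {
        // Never assume the item is there: it may have been evicted,
        // invalidated, or never loaded at all.
        string products = HttpRuntime.Cache[CacheKey] as string;
        if (products == null)
        {
            products = LoadProductsFromStore(); // the slow, authoritative path
            HttpRuntime.Cache.Insert(
                CacheKey,
                products,
                null,                             // no cache dependency
                DateTime.UtcNow.AddMinutes(5),    // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return products;
    }

    private static string LoadProductsFromStore()
    {
        // Placeholder for the expensive query against your data store.
        return "<ul><li>…</li></ul>";
    }
}
```

Note that two requests may race and both execute the slow path; that is usually acceptable, since both end up inserting equivalent data.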
This list is by no means a suggestion to avoid caching. On the contrary, I strongly suggest you apply caching whenever it fits.
Therefore, I would like to summarize what SharePoint offers out of the box, trying to provide some best practices for each case.
You get three different flavors of cache in SharePoint 2010.
Here’s a small diagram that displays them, giving you some background that we will use later to discuss when you should use each of these techniques.
The first flavor is the Object Cache. In a word: use it!
SharePoint uses it by default as an optimization for some key components of a typical web site (ContentByQueryWebParts, Navigation structure, etc…).
You should just be aware that some query filters (for example, one based on the current user) make it inapplicable (and indeed the site query engine prevents caching in these situations).
I would encourage you to use object caching when you write code against the SharePoint server object model.
How? You cannot query the cache structure explicitly, but you can use classes (SPSiteDataQuery, CrossSiteQueryInfo and CrossSiteQueryCache) that do the hard work for you. This is transparent, which is fine since you can forget about checking for null or stale data: everything is under the control of the cache manager.
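As a sketch, a cross-site query routed through the publishing object cache could look like this — the CAML fragments (webs scope, list template, query, view fields) are illustrative, not prescriptive:

```csharp
using System.Data;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

// Describe the cross-site query; the CAML fragments are illustrative.
CrossSiteQueryInfo queryInfo = new CrossSiteQueryInfo();
queryInfo.Webs = "<Webs Scope=\"Recursive\" />";
queryInfo.Lists = "<Lists ServerTemplate=\"850\" />"; // publishing Pages libraries
queryInfo.ViewFields = "<FieldRef Name=\"Title\" />";
queryInfo.RowLimit = 100;
queryInfo.UseCache = true; // let the publishing cache manager handle caching

// The cache wrapper runs the query and caches the resulting DataTable;
// subsequent identical queries are served from the object cache.
CrossSiteQueryCache queryCache = new CrossSiteQueryCache(queryInfo);
DataTable results = queryCache.GetSiteData(SPContext.Current.Site);
```

If the cached copy is missing or expired, the wrapper re-executes the query for you — which is exactly the “forget about null or stale data” behavior mentioned above.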
The second flavor is the Output Cache. In a word: always consider output caching while designing and developing pages and page components, and try to apply a design that makes output caching applicable.
A little example could be helpful in this case.
Imagine you have implemented a page layout that displays a lot of aggregated data coming from external resources. This data takes quite a long time to load, and the presentation layer takes some time to render it too. Plus, this data does not change very often, so you should not worry about invalidation.
This is a perfect candidate for output caching, except for a very small portion of the page layout: a box that displays weather information read from an external RSS service, filtered by the location that the user has specified in their profile settings.
If you apply output caching to the page layout, every user will see the weather for a single location (that of the first user hitting the page), and the weather will stay constant for the whole duration of the caching period.
This should not be an obstacle to applying output caching to the page layout. How can you do this?
Here’s a couple of possible approaches:
- Use a combination of AJAX requests and client-side JavaScript to read the information “on the fly” and transform the page accordingly. The HTML code of the page can be “weather ignorant”, since the only pieces remaining there are an empty container and the client script that issues the asynchronous HTTP request and parses the results, producing the final markup. And both the empty container and the script can be cached!
- Use post-cache substitution. This is a somewhat complex technique (it’s easy for simple tasks, but it can get tricky quickly). In a nutshell, you register a control for post-cache substitution, and the ASP.NET runtime calls your control back asking for a string value that it will insert into the page exactly where the control markup would have been rendered. The page keeps being cached, although parts of it are indeed recalculated for every request.
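The second approach can be sketched as an ASP.NET user control that registers a substitution callback via Response.WriteSubstitution. The callback must be static and match the HttpResponseSubstitutionCallback signature; the weather lookup below is a hypothetical stand-in for the real profile/RSS logic:

```csharp
using System;
using System.Web;
using System.Web.UI;

public partial class WeatherBox : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Register the callback: ASP.NET invokes it on every request,
        // even when the rest of the page is served from the output cache.
        Response.WriteSubstitution(RenderWeather);
    }

    // Must be static and match HttpResponseSubstitutionCallback:
    // the returned string replaces the control's output in the cached page.
    private static string RenderWeather(HttpContext context)
    {
        // Hypothetical lookup; a real implementation would read the
        // location from the user profile and query the weather feed.
        string location = context.Request.QueryString["loc"] ?? "Milan";
        return "<div class=\"weather\">Weather for " + location + "…</div>";
    }
}
```

The declarative equivalent is the asp:Substitution control with its MethodName attribute pointing at the same static callback.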
The third flavor is BLOB caching, which I’m mentioning here for the sake of completeness. I would like to point out that it is not at all related to data or markup caching, so it does not reduce the computation and rendering time of a page per se. It creates copies of static resources (CSS, JS, images, etc… you can specify the resources by extension) that are saved to the file system of each web front-end server. An HTTP module is responsible for retrieving the resource, effectively bypassing the need for the document to be loaded from SharePoint (and then from SQL Server, which is expensive compared to raw file system access).
I’m going to talk about BLOB caching in a future part of this article series, but I hope this was enough to explain at least what it is, especially compared to the other available caching techniques.
That said, what tools can help you investigate data access issues related to caching?
Here I’ll name a few, but consider that this list is by no means exhaustive.
- SharePoint logging
  - ULS logs contain information about cross-site queries, which may or may not use caching
  - The logging database provides reports on blocking queries (a blocking query is a good candidate for replacement with cached data access logic)
- Developer Dashboard
  - You get the execution time at a very detailed level, which may help you investigate which part of the page lifecycle needs further optimization
  - If you are a developer, you can use SPMonitoredScope for instrumentation
- Performance counters
  - By monitoring resource consumption, you may discover that you need some caching optimization
  - ASP.NET provides several counters related to its cache engine
- ASP.NET tracing
  - You can output trace messages that are consumable even on a live production server. This is not related to caching by itself, but it can definitely be a useful companion
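SPMonitoredScope, mentioned above, is the simplest of these to adopt: wrap the code you want timed, and the elapsed time shows up in the Developer Dashboard and in the ULS logs. A minimal sketch — the scope name is arbitrary, and the body stands in for whatever expensive data access you want to measure:

```csharp
using Microsoft.SharePoint.Utilities;

// Wrap the expensive section; the elapsed time is reported in the
// Developer Dashboard and written to the ULS logs under this name.
using (new SPMonitoredScope("Load aggregated external data"))
{
    // ... expensive data access or rendering you want to measure ...
}
```

Scopes can be nested, which makes it easy to see which inner step dominates the total time.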