Category Archives


Posted by Ronald Anthonissen on

How to create a zip archive and download it in ASP.NET

In a previous post, How to download multiple files in ASP.NET, I explained how to generate multiple documents and offer them as separate downloads in ASP.NET. One of the options I considered when looking for a way to offer multiple downloads was to add all the documents to one single zip archive container and offer that as a single download to the user. That solution didn't completely satisfy the end users, but I'm describing it here for those who want to use it.

In this post I will explain how I take the same list of documents and offer them as a zip archive to download. Starting from the multiple-download solution, this only requires one extra step in the process: creating a zip archive and adding all the documents to it. The rest of the process is as described in the previous post.

The method takes the same argument as when creating separate download links, namely a list of byte arrays. Each byte array in turn contains the binary content of a document. I use SharpZipLib from ICSharpCode, which can be downloaded here: The Zip, GZip, BZip2 and Tar Implementation For .NET. This is what the method looks like:

Private Function ZipDocuments(ByVal reports As IList(Of Byte())) As Boolean

    ' Add documents to 1 zip file, and open in browser
    Using zipOutMemoryStream As New MemoryStream()
        Using zipOutStream As New ZipOutputStream(zipOutMemoryStream)

            ' Add documents to the zip file
            Dim cnt As Integer = 1
            For Each buffer As Byte() In reports
                Dim entry As New ZipEntry(String.Format("{0}_{1}.pdf", "GeneratedFile", cnt))
                zipOutStream.PutNextEntry(entry)
                zipOutStream.Write(buffer, 0, buffer.Length)
                zipOutStream.CloseEntry()
                cnt += 1
            Next

            ' Finish writes the zip central directory; without it the archive is invalid
            zipOutStream.Finish()

            Dim responseBytes As Byte() = zipOutMemoryStream.ToArray()

            ' Return False on an empty zip file (an empty archive is 22 bytes)
            Const ZIP_FILE_EMPTY As Integer = 22
            If responseBytes.Length <= ZIP_FILE_EMPTY Then
                Return False
            End If

            RegisterDocumentDownload(Guid.NewGuid().ToString(), responseBytes, ContentTypes.ZIP)
            Return True
        End Using
    End Using
End Function

I first create a (binary) MemoryStream (zipOutMemoryStream) to hold the content of the zip file, and wrap it in a ZipOutputStream (zipOutStream).
Then I loop over the list of documents (or files), create an entry in the zip file (entry As ZipEntry), and write the binary content to that zip entry.
After adding the files to the zip and finishing the archive, I can use the same RegisterDocumentDownload() method from the previous post, and the zip archive will be offered to the user and opened in the browser.

And that’s it…

Posted by Ronald Anthonissen on

How to download multiple files in ASP.NET

The project I'm currently assigned to already has an option to generate reports (PDF), which simply streams the binary output of the report generator to the response output stream. Something like this:

Dim binReader As New System.IO.BinaryReader(report.ExportToStream())

With Response
    .ContentType = "application/pdf"
    .AddHeader("Content-Disposition", "inline; filename=AankondigingControlesEnGevolgen.pdf")
    ' Write the report bytes to the response
    .BinaryWrite(binReader.ReadBytes(CInt(binReader.BaseStream.Length)))
End With

This piece of code streams the binary output of the report to the response object and, by setting the right ContentType and header, opens the document in the user's browser. Works like a charm.

But now I was asked to create a form where the user can select multiple reports to download and open them in the browser. My first answer was: We can’t do that (easily). But then I started to look at the options we have when working in ASP.NET and generating output to the client browser.

The solution I ended up with was that easy, that I found myself kind of stupid that I didn’t think about it earlier. This is what I did:

  1. Generate the documents and store them (binary), together with a unique key, in a session variable
  2. Generate download links with that unique key as parameter
  3. Open the links with clientside javascript
  4. In the download page, retrieve the content from the session variable and stream it to the client browser

Let’s take a look at that in detail.

1. Generate the documents and store them (binary), together with a unique key, in a session variable

I created a custom class to hold the binary document content, together with extra information that can be helpful when generating the download:

Private Class ContentTypes
    Public Const PDF As String = "application/pdf"
    Public Const ZIP As String = "application/zip"
End Class

<Serializable()> _
Private Class Download
    Public Name As String
    Public Content() As Byte
    Public ContentType As String
End Class

Name: The name of the file that is generated and is used when the user downloads the file (save to disk)
Content: The binary content of the file
ContentType: Because I don’t want to be limited to 1 specific file type, I include the content type with the download

Currently I’m only using 2 types of documents, but as you can see, this can be easily extended.

2. Generate download links with that unique key as parameter

For each document I created, I stored it together with a unique key in a session variable, and generated the client-side script to open a new window with the download link. Because I use the same page to download the document, I can create a URL starting with just the query-string question mark:

Private Sub RegisterDocumentDownload(ByVal key As String, ByVal content() As Byte, ByVal contentType As String)
    Dim script As String = String.Format("window.open('?key={0}');", key)
    Dim download As New Download()
    download.Content = content
    download.ContentType = contentType
    download.Name = key

    Session.Add(key, download)
    ScriptManager.RegisterStartupScript(Me, Me.GetType(), "Download_" & key, script, True)
End Sub

3. Open the links with clientside javascript

The JavaScript that is generated will look something like this when generating 3 downloads (the GUID keys here are just examples):

<script type="text/javascript">
window.open('?key=7c9e6679-7425-40de-944b-e07fc1f90ae7');
window.open('?key=0f8fad5b-d9cb-469f-a165-70867728950e');
window.open('?key=9a3c1b2d-1e4f-4a6b-8c7d-2f5e6a7b8c9d');
</script>

4. In the download page, retrieve the content from the session variable and stream it to the client browser

Because I use the same page to download the file as well, I added code to the Page_Load() event that checks for the "key" parameter:

If Not Request.QueryString("key") Is Nothing Then
    StreamDownload(Request.QueryString("key"))
End If

This calls the StreamDownload() method, which takes the download from the session, streams the content to the client browser, and cleans up before ending processing:

Private Sub StreamDownload(ByVal key As String)

    Guard.ArgumentNotNull(Session(key), "download")

    Dim download As Download = DirectCast(Session(key), Download)

    With Response
        .ContentType = download.ContentType

        Select Case download.ContentType
            Case ContentTypes.ZIP
                ' Give zip archives a meaningful file name for the save dialog
                .AppendHeader("Content-Disposition", String.Format("attachment; filename={0}.zip", download.Name))
        End Select

        .BinaryWrite(download.Content)
    End With

    ' Clean up temporary objects
    Session.Remove(key)

    Response.End()
End Sub

As you can see, I also have the possibility to generate zip archives. This offers the functionality of downloading multiple documents in one zip archive container. I could just as easily offer the zip download directly from within the page, but I prefer this generic solution, even if I'm only offering one file to download. It also gives me the possibility to offer other file formats: I just need to add a new content type and alter the code where needed in the StreamDownload() method.

In a next post, I will show how I create one zip archive that contains multiple documents, and offer it as a single download to the user.

Posted by Ronald Anthonissen on

Aspect Oriented Programming (AOP) with PostSharp

What is AOP (Aspect Oriented Programming)?

Aspect-oriented programming breaks programming logic down into separate concerns. It separates and groups blocks of code that perform a specific operation and that can be applied to or re-used by different pieces of code, be it methods, classes, properties, and so on.
Common examples of functionality that is implemented using aspects are logging, exception handling, caching, and authorization.

Why use AOP?

You can write aspects as classes that perform specific functionality. These aspects can then be attached to code objects (classes, methods, properties, events, and so on) as attributes. This means that you only have to write the code once and attach it anywhere you want, mostly with a single line of code.
By separating this code from your business logic into aspects, changes made to these aspects have no impact on your business logic. Your code becomes much cleaner and more robust, and it is much easier to maintain, resulting in fewer defects. With no need to write the same code over and over again, you write less code that is more robust, you can focus on the important parts (the business logic) of your code, and you can be more productive as a programmer and save money on development time.

What can AOP do?

Aspect-oriented programming can be applied in plenty of usage scenarios:

  • Logging: Whether you log to logging files, a database, or any other device, it’s up to the logging aspect to determine what and where to log it, so there’s no need to do this inside your application logic over and over again.
  • Tracing: When you want to start tracing the performance of your application it can become a tedious task. It becomes even more complicated when you want to be able to turn tracing on and off when debugging or testing your code. By placing your tracing code in an aspect, you can do this in 1 single location, instead of muddling around your business code.
  • Exception handling: In production environments you don’t want your exceptions (yes, they will occur) to appear to your user and possibly reveal sensitive information. Aspects can handle these exceptions, take appropriate actions and show user-friendly messages.
  • Caching: You can write an aspect that captures a method’s output, store it in a cache, and return it from the cache the next time the method is called again.
  • Authorization: Go further than the built-in security functionality and write your own logic to grant or deny access to certain functionality or data.
  • Auditing: Keep an audit trail of who accesses or changes what data and when.
  • NotifyPropertyChanged: Remember implementing INotifyPropertyChanged into your classes over and over again? This can be solved with 1 aspect applied to your classes as 1 single attribute.
  • Even more examples:
    • Undo / Redo pattern
    • Thread dispatching & synchronization
    • Transaction handling
    • Persistence
    • And so on…

How does PostSharp work?

PostSharp weaves its aspects into your code at compile time, so they get executed at the right time.
From the PostSharp website:


Think of the source code for your project as the parts of a car, and the build process as the assembly line in the factory. PostSharp aspects are written as source code, and applied to other source code artifacts in your application using .NET attributes.


The compiler for your language takes all of your application’s source files and converts them into executable binaries. It is just one of the many phases of the build process.


PostSharp is a compiler post-processor: it takes the output from the compiler, and instruments your assemblies and executables to execute your aspects at the appropriate times.


Once compiled, your application only needs one or two lightweight PostSharp assemblies to execute. No need to ship the factory with the car!

AOP With PostSharp

There's no better explanation than a real example. In the following example I will explain how to get started with PostSharp and create your first aspect, for caching. In a second example, I will create an easy tracing aspect to prove that the caching aspect indeed improves performance.

Getting started using PostSharp

The first step is to download PostSharp from the PostSharp website. There's a free Community and a paid Professional edition available; a comparison of the features of each edition can be found on the feature-comparison page of their site.

The sample application: Ordering pizzas

We start from the very beginning by creating a sample application: a "Pizza Ordering System" as an MVC 3 web application. To make development easier, I will use Entity Framework with SQL Server Compact Edition and MvcScaffolding. This allows me to write a few model classes and let the scaffolding generate controllers and views for me. The Code First feature of Entity Framework creates the database for me based on the model.
This creates a good starting point to begin this example.
First of all, we'll add a reference to PostSharp.dll. (SharpCrafters announced that they will have a NuGet package available very soon; in the meantime we'll have to add it the old-fashioned way.)
And because I want to quickly set up a sample application, I install the EntityFramework.SqlServerCompact and MvcScaffolding packages from NuGet. These packages install their dependencies themselves, so I don’t need to take care of that.
I create 3 Model classes, Pizza, PizzaSize and Order for our application, and use the Scaffold command to create controllers and views for them.
Now, as you can see when you take a look at the controllers, the scaffolding created a DbContext that is used and addressed directly in each of the controllers. This isn't very useful when we want to use caching; we need some sort of service or repository pattern for that. Let's instruct scaffolding to use a repository (I could have done it like this from the beginning, but I also wanted to show some of the functionality and strength of the MvcScaffolding package):

Remember, when you instruct scaffolding to recreate the controllers and the data context, it will need to recreate the database if you changed something in your model classes. Follow the instructions in the context file to achieve this.

I also created 3 menu items pointing to the Index action of each of these controllers, to make navigation easier.

Now, let's create a really simple and straightforward caching class. I know you can do this with the Caching Application Block from the Enterprise Library or some other framework, but I want to keep the sample simple, and since caching isn't the subject of this blog post, I won't go deeper into it.

public class Cache
{
    private static readonly IDictionary<string, object> _cache = new Dictionary<string, object>();
    private const int _timeout = 60 * 60 * 24;

    public static bool Contains(string key)
    {
        return _cache.ContainsKey(key);
    }

    public static object Get(string key)
    {
        if (_cache.ContainsKey(key))
            return _cache[key];
        return null;
    }

    public static void Add(string key, object item)
    {
        Add(key, item, _timeout);
    }

    public static void Add(string key, object item, int timeout)
    {
        if (!_cache.ContainsKey(key))
            _cache.Add(key, item);
    }

    public static void Remove(string key)
    {
        _cache.Remove(key);
    }

    public static string GenerateKey(Arguments arguments)
    {
        var key = new StringBuilder();

        foreach (var argument in arguments)
            key.AppendFormat("_{0}_{1}", argument.GetType(), argument);

        return key.ToString();
    }
}
This creates an in-memory cache and supports adding, retrieving, and removing an object, and checking its presence in the cache. It also has a GenerateKey() method that I will use later to generate a unique key based on the arguments passed to the method whose result I want to cache.

The caching aspect

Now, time for some action: creating the caching aspect!

Start by creating an "Aspects" folder (we want our project to stay clean, of course) and create a new class called "CacheAttribute". To have our aspect execute code before and after a method is called, it must derive from the OnMethodBoundaryAspect base class. This class also needs to be serializable, so apply the [Serializable] attribute.

To execute code before and after a method call, we implement the OnEntry and OnSuccess methods. In OnEntry we check whether the item already exists in the cache; if it does, we skip further execution of the method and use the cached value as the return value. In OnSuccess, we add the return value to the cache.

[Serializable]
public class CacheAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        var key = args.Method + "_" + Cache.GenerateKey(args.Arguments);
        var value = Cache.Get(key);

        if (value == null)
        {
            // Not cached yet: remember the key so OnSuccess can store the result
            args.MethodExecutionTag = key;
        }
        else
        {
            // Cached: skip the method body and return the cached value
            args.ReturnValue = value;
            args.FlowBehavior = FlowBehavior.Return;
        }
    }

    public override void OnSuccess(MethodExecutionArgs args)
    {
        var key = args.MethodExecutionTag.ToString();
        Cache.Add(key, args.ReturnValue);
    }
}

The next step is to apply the attribute to the methods that we want to cache the result from. We do this by applying the Cache attribute to the All(), AllIncluding() and Find(int id) methods of the PizzaRepository class that scaffolding created for each of our model classes.

When we launch the debugger after setting breakpoints in the OnEntry() and OnSuccess() methods of the CacheAttribute class, and in the All(), AllIncluding() and Find(int id) methods of the PizzaRepository class, we can see that the OnEntry() method of the CacheAttribute is executed first. The first time the PizzaRepository methods are executed, execution is passed to the original method, and the result is stored in the cache after it completes. The next time, the method execution is skipped, and the results are returned directly from the cache.

Nice, isn’t it? But does this really improve the performance of our application?

The performance aspect

To answer this question, we'll create another aspect that traces the execution time of a method: the TimeTracingAttribute.

Again, start by creating an attribute, called "TimeTracingAttribute", in the "Aspects" folder; make it serializable and inherit from OnMethodBoundaryAspect.

Again we use the OnEntry() and OnExit() methods, together with a Stopwatch this time. The Stopwatch is a static instance on the TimeTracingAttribute. In the OnEntry() method we store the value of the ElapsedTicks property in the MethodExecutionTag property of the aspect's args. In the OnExit() method we read it back to calculate the elapsed time (in ticks) and write that to the trace.

[Serializable]
public class TimeTracingAttribute : OnMethodBoundaryAspect
{
    static readonly Stopwatch _stopwatch = new Stopwatch();

    static TimeTracingAttribute()
    {
        // Start timing once, when the type is initialized
        _stopwatch.Start();
    }

    public override void OnEntry(MethodExecutionArgs args)
    {
        args.MethodExecutionTag = _stopwatch.ElapsedTicks;
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var executionTime = _stopwatch.ElapsedTicks - (long)args.MethodExecutionTag;
        Trace.WriteLine(string.Format("{0}: {1} ticks.", args.Method.Name, executionTime));
    }
}

Now, apply this TimeTracing attribute to the Index() and Details(int id) methods of the PizzaController and the Create() and Edit(int id) methods of the OrdersController.

When you start the debugger of Visual Studio, you will see the output of the TimeTracingAttribute written to the output window when you open the Pizza page or Edit an order multiple times. See the performance boost?

Now, this is nice when we have Visual Studio open in debugging mode, but it would be even nicer if we could see the results outside of Visual Studio. We don't do that in our aspect (it even has nothing to do with AOP), but with another gem that is available from NuGet: Glimpse.

Glimpse is a web debugger used to gain a better understanding of what’s happening inside of your webserver. From the Glimpse website:

What Firebug is for the client, Glimpse does for the server… in other words, a client side Glimpse into what’s going on in your server.

Get the Glimpse package from NuGet, rebuild your application, and start it in the browser. One action we must take before we can see Glimpse at work is enabling it for our application. Do this by opening the /Glimpse/Config page in your browser and clicking the big "Turn Glimpse On" button.

Now when you open your page again, you will see a small eye icon in the bottom-right corner of your browser: the Firebug for your server. Clicking it will open the Glimpse window with tracing information in the "Trace" tab.

Aspect-oriented programming (AOP) with PostSharp or another AOP tool significantly improves the robustness of your application and keeps your code clean. It also improves the productivity of the development team and allows developers to focus on their important tasks.

Posted by Ronald Anthonissen on

Telerik Grid ClientTemplate with collection inside column

I have defined a ClientTemplate which needs to display an employee and its roles as a list of items inside a single column.

Take following example:

Model person.cs:

public class Person
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string[] Roles { get; set; }
}

The grid looks like this:

<%= Html.Telerik().Grid<Person>()
        .DataBinding(dataBinding => dataBinding.Ajax()
            .Select("Employees", "Persons"))
        .Columns(columns =>
        {
            columns.Bound(e => e.Id);
        })
        .DetailView(dv => dv.ClientTemplate(
            "Name: <#= LastName #>, <#= FirstName #><br />" +
            "Roles: <ul>" +
            /* Person.Roles should come here: "<li><#= Roles[i] #></li>" */
            "</ul>"))
%>

Is there a way to have the roles of the employee displayed as a list inside the same column?

Yes there is!

You can embed executable code in your client template like this:

ClientTemplate("Roles: <ul>" +
    "<# for (var i = 0; i < Roles.length; i++) {" +
    "#> <li><#= Roles[i] #></li> <#" +
    "} #>" +
    "</ul>")
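For an employee with, say, the roles "Admin" and "Developer" (made-up sample data), a template like the one above would render markup along these lines:

```
Roles: <ul> <li>Admin</li> <li>Developer</li> </ul>
```

The `<# … #>` delimiters mark executable JavaScript inside the template, while `<#= … #>` emits a value into the output.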

Many thanks to Atanas Korchev of the Telerik team for answering my question: ClientTemplate with collection inside.

Posted by Ronald Anthonissen on

Prevent caching of stylesheet and javascript files

First something about caching

The numerous caching options you have in ASP.NET (MVC) are mainly focused on data and page-output caching. But caching also occurs at the web server, network, and browser level, and those you can't always control from within your code.

When your content leaves your application, it is processed by the web server which, depending on the server and version, has numerous options to control how and when content is cached. Between the web server and the browser, the network can also control caching, namely through proxy and web-acceleration servers. Finally the content arrives in the browser, and the browser itself also has numerous options related to caching. Generally speaking, they all use the same parameters, or at least some of them, to determine when, what, and how long content should be cached.

How does this caching work? Generally speaking, the following rules apply:

  1. If the response headers say not to cache, the content isn't cached.
  2. If the transfer is secure or authenticated, like HTTPS, it isn't cached either.
  3. If the expiry time or another age-controlling header says the content is still "fresh", it is served directly from the cache.
  4. If there's a stale version in the cache, the server is asked to validate it. If that version is still good, it is served from the cache.
  5. When the server cannot be reached due to network failure or disconnection, the content is sometimes also served directly from the cache.

Then what parameters are used, and how are they used?

  • HTTP headers: these are sent with the response, but are not visible in the content
    • Expires: tells the cache how long the content stays fresh. After that time, the cache will always check back with the server. It uses an HTTP date in Greenwich Mean Time (GMT); any other or invalid format will be interpreted as a date in the past and makes the content uncacheable. For static data you can set a time in the very far future; for highly dynamic content, you can set a time much closer, or even in the past, to have the cache refresh the content more often or at every request.
    • Cache-Control: in response to some of the drawbacks of the Expires header, the Cache-Control header was introduced. It includes (some, not all):
      • max-age=[seconds]
      • public / private
      • no-cache / no-store
      • must-revalidate
    • Pragma: no-cache: the HTTP specifications aren't clear about what this means, so don't rely on it; use the headers above
  • HTML meta tags: unlike HTTP headers, HTML meta tags are present in the visible content, more precisely in the <HEAD> section of your HTML page. A huge drawback of HTML meta tags is that they can only be interpreted by browsers, and not all browsers use them like you would expect. So prefer HTTP headers over HTML meta tags.
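To make this concrete, a response that may be publicly cached for one day could carry headers like these (the values are purely illustrative):

```
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: public, max-age=86400
Expires: Tue, 10 Aug 2010 10:00:00 GMT
```

Here max-age and Expires express the same freshness lifetime in two ways; caches that understand Cache-Control prefer it over Expires when both are present.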

A great caching tutorial can be found here, and another one here: Save Some Cash: Optimize Your Browser Cache.

An easy solution

Now, all of these caching systems rely in some way on the full request string to identify the content that is being cached.

So, the easiest solution would be to request a new unique URL every time the resource has changed, with a new version number.

How we do it in ASP.NET MVC

ASP.NET MVC (and ASP.NET Webforms also) doesn’t generate a new version number automatically.  You need to tell it to do so in the AssemblyInfo.cs file.  After a default project setup it contains a line like:

[assembly: AssemblyVersion("1.0.0.0")]

The version number is a four-part string with the following format: <major version>.<minor version>.<build number>.<revision>.  You usually set the major and minor version manually, as they are used as the type-library version number when the assembly is exported, and you don't (need to) care about the build and revision numbers.  Well, now we do.

When you change this line to (or add it if it doesn’t exist):

[assembly: AssemblyVersion("1.0.*")]

We tell the compiler to generate a build and revision number for us. The generated build number is the number of days since 01-01-2000 (so 09-08-2010 gives 3873) and the revision number is the number of two-second intervals since midnight local time (so a build at 11:59:12 gives 21576).

Now that we have instructed our application to generate a new unique build number with every build, and thus with every (possible) change of a resource, we can use this number as a unique parameter value in the URL of the resource.

First we need to pass this version number from controller to view.  In the constructor of the (base) controller we put the version number in the ViewData dictionary. With ViewData you can easily pass data from the controller to the view using a key-value pattern.

protected BaseController()
{
    ViewData["version"] = Assembly.GetExecutingAssembly().GetName().Version;
}

And finally, in the view, all you need to do is append this version number to the URL of the files you want to prevent from being cached:

<script type="text/javascript" language="javascript" src="<%: Url.Content("~/Scripts/commonFunctions.js?" + ViewData["version"]) %>"></script>

This makes sure we have a unique URL for our resources and they are not cached by the browser or a proxy.

Of course, as stated above, there are other ways of preventing files from being cached anywhere between the server and the browser, but the advantage of this method is that you don't need to poke around in IIS settings (in case you don't have access to them) and you can define when and which version of a file should be cached.  And you can of course use any other method to generate a unique URL.

One more remark: when building a multi-tier application, make sure you set the version number in the AssemblyInfo.cs of the project where you use it. That means that if you put your base controller in a shared assembly, you need to specify the version number in the shared assembly's project.