Automated testing for Tridion templates: Followup

Earlier this week I released two blog posts on automated testing for Tridion templates. The idea to blog about this subject came from this Stack Overflow thread, where several members of the Tridion community were discussing automated testing for Tridion templates. The technical solutions I offered in my last blog post each have their specific limitations. A post on the aforementioned thread by Nuno Linhares triggered my interest: he mentioned a web service that the Template Builder uses to debug templates.

After some research, it turns out this web service is a real solution for achieving automated testing for templates. In essence, you feed the web service some basic data, the template is executed, and the package with its content is returned to you. The whole solution was easy to set up, and I created a demo solution to share with the community. All you have to do is add a reference to the service and you're good to go. Check the config file on the Tridion content management machine for the service definition location.
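
To give an idea of what such a test could look like, here is a minimal sketch. The TemplateDebugServiceClient class and its Execute operation are placeholders for whatever proxy your service reference generates, not the actual Tridion service contract, and the tcm URIs are just example values:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace TemplateServiceTests
{
    [TestClass]
    public class ComponentTemplateTests
    {
        [TestMethod]
        public void ComponentTemplateProducesOutput()
        {
            // Hypothetical WCF proxy generated from the service reference; the class
            // and operation names are placeholders, not the actual Tridion contract.
            using (var client = new TemplateDebugServiceClient())
            {
                // Feed the service the component and the template to execute
                // (example URIs); the rendered package content comes back.
                string output = client.Execute("tcm:1-123", "tcm:1-456-32");

                // Make assertions on the output, for example that the fields
                // you expect are present in the published xml.
                Assert.IsFalse(string.IsNullOrEmpty(output));
            }
        }
    }
}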

Implementing automated testing this way has the following advantages and limitations:

Advantages: This solution is quite easy to set up and can be used right away, since it does not require any changes to the existing templates. And since we use a web service provided by Tridion, we can be sure the output is reliable.

Limitations: This solution only enables automated testing for complete template building blocks or templates. Testing individual functions within a template building block is not possible with this solution alone.

Wrapping up

This approach is a very pragmatic way to test Tridion templates. Normally I would be put off by the fact that individual functions cannot be tested. But since templates are data transformations, making assertions on the output is pretty much all you need.

Using the same web service as the Template Builder turned out to be a very sweet deal. I wonder why Tridion doesn't document and offer this solution as a way to automate testing for templates. I would be happy to write a more detailed article plus a demo project for the Tridion World platform if I were asked ;)

Automated testing for Tridion templates: Technical

This is the second post on automated testing for Tridion component templates. The previous post was an overview of the usefulness of a testing strategy and the different approaches to automated testing in general.

This post will focus on actually implementing integration testing for Tridion templates. I will cover integration testing since this approach is well suited for testing template building blocks, which are basically data transformations.

Before we dive into the technical bit, let's take a moment to discuss creating the test input for integration tests. Integration tests require data, which has to be coded by hand or pulled from somewhere. Both are valid choices, but it is a choice between stability and maintainability:

  • When favoring stability, the input has to be local and coded by hand in order to avoid a dependency on any external system. When using hand-coded input, changing the tests to accommodate new requirements will take longer, since the input will likely have to change as well.
  • When favoring the ability to quickly change tests, the input can be pulled from an external component. Since the input isn't coded by hand, it takes less time to change the automated tests. The downside is that you create a dependency on the external system; downtime of that system causes tests to fail.

Now that we got the options for test input covered, let’s dive into the technical bits. I have two possible implementations for integration testing lined up:

Integration testing using TOM classes with Microsoft Fakes

The first implementation is the creation of an integration test input using a local test setup with the normal TOM (Tridion object model) classes. Since these classes are not developed to be testable I had to use the Microsoft Fakes framework in order to make this work. This framework enables the testing of classes that are not designed to be testable. Using Fakes I was able to create the item fields for a component. See the example code below:

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Xml;

using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

using Tridion.ContentManager;
using Tridion.ContentManager.ContentManagement;
using Tridion.ContentManager.ContentManagement.Fields;
using Tridion.ContentManager.ContentManagement.Fields.Fakes;
using Tridion.ContentManager.Fakes;

namespace Tridion.Deloitte.Libraries.Tests.ContentArticle
{
    [TestClass]
    //Use the default visual studio debugger: http://blog.degree.no/2012/09/visual-studio-2012-fakes-shimnotsupportedexception-when-debugging-tests/
    public class ItemFieldsTests
    {
        [TestMethod]
        public void ItemFieldTest()
        {
            using (ShimsContext.Create())
            {
                //Session constructor is bypassed completely to prevent interaction with Tridion. 
                ShimSession.Constructor = x => { };
                Session session = (Session)Activator.CreateInstance(typeof(Session), new object[] { });

                //Finalize is suppressed to prevent interaction with Tridion.
                GC.SuppressFinalize(session);

                //IdentifiableObject constructor is bypassed completely to prevent interaction with Tridion. 
                ShimIdentifiableObject.ConstructorTcmUriSession = (x, uri, sess) => { };
                Schema schema = (Schema)Activator.CreateInstance(typeof(Schema), new object[] { TcmUri.UriNull, session });

                //Create item fields without any interaction with Tridion using Fakes.
                ItemFields fields = GetItemFields(schema, session);

                //Assert fields were actually created.
                Assert.AreEqual(1, fields.Count);
            }
        }

        private ItemFields GetItemFields(Schema schema, Session session)
        {
            List<ItemField> fields = new List<ItemField> { GetItemField(session) };

            ShimItemFields.ConstructorSchema = (x, y) =>
            {
                FieldInfo fieldsField = typeof(ItemFields).GetField("_fields", BindingFlags.NonPublic | BindingFlags.Instance);
                if (fieldsField != null)
                {
                    fieldsField.SetValue(x, fields);
                }
            };

            ItemFields itemFields = (ItemFields)Activator.CreateInstance(typeof(ItemFields), new object[] { schema });

            var shimFields = new ShimItemFields(itemFields);
            shimFields.ItemGetString = x => GetItemField(session);

            return itemFields;
        }

        private ItemField GetItemField(Session session)
        {
            ShimItemField.ConstructorXmlElementSession = (x, y, z) => { };

            const BindingFlags bindingFlags = BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.CreateInstance;
            XmlDocument emptyDoc = new XmlDocument();
            var arguments = new object[] { emptyDoc.DocumentElement, session };

            Type type = typeof(SingleLineTextField);
            var itemField = (ItemField)Activator.CreateInstance(type, bindingFlags, null, arguments, null);

            var shimField = new ShimItemField(itemField);
            shimField.NameGet = () => "Title";

            var field = itemField as SingleLineTextField;
            field.Value = "This is a title field";

            return itemField;
        }
    }
}

The basic implementation above is pretty decent, but you have to code the item fields by hand instead of just using a local xml file with component xml generated by Tridion. As discussed earlier, coding the input by hand affects the time it takes to change the tests.

In order to overcome this I intended to extend the code above to generate item fields from component and schema xml files generated by Tridion. Again, I used Microsoft Fakes to change the behavior of the TOM classes in order to make this scenario testable.

The end result worked fine, but it's not something I would recommend other developers try. Microsoft Fakes is a very powerful framework, but relying on it too much has drawbacks in terms of readability. A great deal of Microsoft Fakes configuration was required to make the scenario testable, making the code complex, hard to read and hard to understand. On top of that, coding the test and configuring Fakes properly required extensive interaction with the inner workings of Tridion.

In conclusion, taking this route for automated testing has the following advantages and limitations:

Advantages: This implementation makes the TOM classes testable, so the template building blocks themselves do not have to be changed in order to start creating automated tests.

Limitations: This implementation requires expensive versions of Visual Studio and additional tooling for decompiling and debugging Tridion libraries. Coding the test setup requires a lot of interaction with the inner workings of Tridion. A great deal of Microsoft Fakes configuration makes the code complex, hard to read, and hard to understand.

Test input using Tridion Core Service classes

The second implementation is the creation of an integration test input using a local test setup with Tridion core service classes. This might not seem like the most obvious approach at first, but it does offer certain advantages. The core service classes are perfect for creating test input, since they are simple data classes that can be created without any additional effort. Converting the data into fields is also very easy, since there is a library by Frank van Puffelen that handles that nicely. See the example code below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Tridion.ContentManager;
using Tridion.ContentManager.CoreService.Client;

namespace Tridion.Deloitte.Libraries.Tests
{
    [TestClass]
    public class ComponentFieldsTests
    {
        [TestMethod]
        public void TestComponentFieldCreation()
        {
            const string fieldName = "Title";
            const string expectedTitle = "Test";

            ComponentData componentData = new ComponentData { Id = TcmUri.UriNull};
            componentData.Content = string.Format("<Content xmlns=\"Tridion.Deloitte.Libraries.Tests\"><Title>{0}</Title></Content>", expectedTitle);

            ItemFieldDefinitionData definition = new SingleLineTextFieldDefinitionData();
            definition.Name = fieldName;

            SchemaFieldsData schemaFieldsData = new SchemaFieldsData();
            schemaFieldsData.NamespaceUri = "Tridion.Deloitte.Libraries.Tests";
            schemaFieldsData.RootElementName = "Content";
            schemaFieldsData.Fields = new [] { definition };

            ComponentFields contentFields = ComponentFields.ForContentOf(schemaFieldsData, componentData);

            Assert.AreEqual(expectedTitle, contentFields[fieldName].Value);
        }
    }
}

The code above is really simple, but you have to code the fields instead of just using content from Tridion. However, this approach offers the freedom to either code the input yourself or pull it from the core service, since it's core service classes you're using in the first place. See the example code below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Tridion.ContentManager;
using Tridion.ContentManager.CoreService.Client;

namespace Tridion.Deloitte.Libraries.Tests
{
    [TestClass]
    public class ComponentFieldsTests
    {
        [TestMethod]
        public void TestComponentFieldCreationWithCoreService()
        {
            const string fieldName = "Title";
            const string expectedTitle = "Test";

            ISessionAwareCoreService client = TridionCoreServiceProvider.InitClient();
            var schemaFieldsData = client.ReadSchemaFields("tcm:106-269739-8", true, new ReadOptions());
            var componentData = (ComponentData)client.Read("tcm:106-273325", new ReadOptions());

            ComponentFields contentFields = ComponentFields.ForContentOf(schemaFieldsData, componentData);
            Assert.AreEqual(expectedTitle, contentFields[fieldName].Value);
        }
    }
}

The code above is also really simple, and saves the time of coding the input yourself. This will reduce the time required to change the tests to accommodate new requirements. There is a risk in creating a dependency on the external system: downtime of that system causes tests to fail. But since this external system is actually another Tridion component, the risks are limited.

From a testing standpoint, using the core service classes is a much better approach. However, this approach might not be favorable for existing implementations with their set of existing template building blocks, which would have to be changed in order to accommodate automated testing this way. I would not recommend changing a large set of existing template building blocks; the costs will most likely be higher than the benefits that automated testing will bring.

For new implementations, using the core service classes is a valid choice if automated testing is a requirement. The Tridion object model classes are easily mapped to the core service classes since they're basically the same. In the code package I will include classes that map fields from a component to testable fields. This download should offer a decent start.
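
As a rough illustration of such a mapping (not the actual classes from the code package), a TOM component's content and metadata can be copied into a core service ComponentData instance; the schema fields would still come from the core service or a hand-coded definition, as shown earlier:

using Tridion.ContentManager.ContentManagement;
using Tridion.ContentManager.CoreService.Client;

namespace TemplateTestSupport
{
    // Rough illustration only, not the mapper from the code package: copy a TOM
    // component's xml into the core service ComponentData class so it can be fed
    // to ComponentFields.ForContentOf together with the schema fields.
    public static class ComponentDataMapper
    {
        public static ComponentData ToComponentData(Component component)
        {
            return new ComponentData
            {
                Id = component.Id.ToString(),
                Title = component.Title,
                Content = component.Content != null ? component.Content.OuterXml : null,
                Metadata = component.Metadata != null ? component.Metadata.OuterXml : null
            };
        }
    }
}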

In conclusion, taking this route for automated testing has the following advantages and limitations:

Advantages: The test setup is simple and doesn't require any trickery. Data can easily be pulled from the core service, which reduces the time required to change the tests to accommodate new requirements. Overall, this is the best approach from a testing standpoint.

Limitations: In order to use this approach existing templates have to be changed, which is a serious limitation for existing implementations with their set of existing template building blocks.

Wrapping up

Tridion should definitely be looking at automated testing for future versions of their product. The Tridion object model classes are a nightmare to work with from a testing perspective. I touched on several possible solutions, but they all have their limitations. There is a lot of room for improvement.

Tridion caters to enterprise clients, and these clients expect a certain quality from their developers and consultants. We all have to work together in order to deliver that quality. A documented and supported approach to automated testing would benefit the overall quality greatly.

As I noted in my first post, the technical solutions I described are not the only way to achieve automated testing. The Stack Overflow thread I linked at the beginning of my first post mentions several others, but not in much detail. If there is a good approach I didn't cover yet, please say so in the comments. I will definitely look into it.

Automated testing for Tridion templates: Introduction

Last year I wrote a blog post about developing Tridion component templates using .Net. I have been developing component templates for an implementation that uses dynamic component publishing. The one thing with templating that needs improving is automated testing. Many Tridion templating classes are not testable, unnecessarily restricting the options for automated testing. I decided to write two blog posts outlining the options that you do have for implementing automated testing when writing templates.

This first post will be an introduction covering the testing strategy and the different approaches for automated testing. I will wrap up with my personal preferences on which approach to use for testing templates.

The next post will cover technical example implementations for automated testing of Tridion component templates. There is also a question on Stack Overflow about this topic; be sure to check it out.

The testing strategy: why is it useful?

Most people have strong associations with the word strategy. So before we go any further, let's settle on what strategy means within the context of this series of blog posts:

“The determination of the basic long-term goals and objectives, and the adoption of courses of action and the allocation of resources necessary for carrying out these goals”

Creating a testing strategy is about making fact-based decisions when setting the goals and objectives for testing. Testing is one of the important tools in managing risks when developing software. Risks have to be identified and choices have to be made on how to manage these risks, thus requiring a strategy. The strategy should be revisited for every project, since the risks are different for every Tridion implementation. Consider the following implementations:

  • A Tridion implementation that uses dynamic component publishing.  The component templates are usually simple data transformations in order to publish xml to the broker. The templates don’t have complex business logic, since that is usually handled by another web application that consumes content from the broker database. The component templates usually consist of one or maybe two building blocks. There is no heavy interaction between the template building blocks.
  • A Tridion implementation that uses pages. The page and component templates are likely to be more complex. The page and component templates consist of more building blocks. This also means there is likely more interaction between template building blocks.

Both Tridion implementations are quite different and require different testing strategies in order to manage the risks that the implementation has to deal with. I will now move to the core subject, automated testing. There are two base approaches for automated testing: unit-testing and integration testing.

The testing strategy: unit-testing a template

The first approach is unit-testing: testing small pieces of code within a template building block independently. This is done by properly isolating the code you want to test and avoiding interactions with the untestable Tridion classes inside the TOM (Tridion object model).

Advantages: The tests all cover a small portion of code, making it easy to pinpoint a mistake. The tests don’t have to rely on any external data in order to pass.

Limitations: This approach to testing is vulnerable to the standard unit-testing pitfall. The risk is that all the individual pieces work correctly, but somewhere in the interaction it all goes horribly wrong.

The testing strategy: integration-testing a template

The second approach is the integration testing strategy where you test a bigger, functional unit of code. This functional unit can consist of many smaller units. A functional unit can be the template building block from start to finish.

Advantages: The tests each cover a larger portion of code in order to test a functional unit. This means the interaction between different pieces of code is also tested, increasing the chance of noticing a mistake in the early stages of development.

Limitations: Since the tests cover a large portion of code, it is harder to pinpoint the cause of a mistake and thus increases the time required to fix it. The test setup is generally more complex because it requires a dataset, so maintaining the test setups will increase the amount of time required to maintain test code.

Wrapping up

The two approaches mentioned above can be used exclusively, but they can also be combined, negating each other's limitations and creating a more all-round, robust testing strategy. Personally I feel that an integration testing approach is the more pragmatic and logical approach for data transformations. Making assertions against the output is much more efficient to code, and also easier for other developers to understand.

Implementing an integration test combined with unit-tests at appropriate places makes it easier to pinpoint a mistake within a functional unit. This is only recommended in more complex cases.

Time to wrap up the introduction post. In my next post I will be going over some technical implementations for automated testing of Tridion component templates.

Developing Tridion 2011 .Net component templates

Recently I had the opportunity to develop component templates for a Tridion 2011 implementation. In this article I want to share my thoughts on the new templating features, provide quick tips and wrap up with some templating limitations that should be addressed in future versions of Tridion.

My previous Tridion component template development experience is all based on VBScript and the templating features from Tridion 5.3. Since I have plenty of experience with .Net development I was confident the switch to Tridion 2011 template development would be a smooth one. It's worth mentioning that I only developed component templates, since the implementation used a dynamic publishing model (only component presentation xml is published to the database). The overall experience with the new templating features was very positive; the biggest improvements I would like to point out:

  • Better productivity because of the ability to work with Visual Studio, a professional IDE which offers syntax highlighting, code completion and other nifty tools that increase developer productivity.
  • Faster bug fixing because of remote debugging and the logging functionality offered by the Tridion libraries.

The overall improvement of the template development experience has been quite substantial for me; being able to use a full-fledged IDE and having decent debugging options makes a world of difference to me and my productivity as a developer.
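
For reference, a minimal .Net component template building block looks roughly like the sketch below. The rendered output is purely illustrative, and the TemplatingLogger is the logging functionality mentioned above:

using Tridion.ContentManager.Templating;
using Tridion.ContentManager.Templating.Assembly;

namespace TemplateExamples
{
    // Minimal sketch of a .Net component template building block; a real building
    // block would transform the component's fields into the xml to publish.
    public class HelloWorldTemplate : ITemplate
    {
        private static readonly TemplatingLogger Log =
            TemplatingLogger.GetLogger(typeof(HelloWorldTemplate));

        public void Transform(Engine engine, Package package)
        {
            Log.Debug("HelloWorldTemplate started");

            // Grab the component being rendered from the package.
            Item componentItem = package.GetByName(Package.ComponentName);
            Log.Debug("Rendering component: " + componentItem.GetAsString());

            // Push the rendered output back into the package.
            package.PushItem(Package.OutputName,
                package.CreateStringItem(ContentType.Text, "<content>Hello world</content>"));
        }
    }
}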

Tridion .Net templating quick tips

When getting started with .Net template development there are some things you should look at, as they will have a positive effect on productivity and possibly save some frustration:

  • If you have the possibility, install Visual Studio on the same machine as the Tridion Content Manager server. This allows for easier remote debugging of your templates without the risk of network/domain components causing issues and leaving you unable to use remote debugging. If you are unable to get remote debugging working on the customer network and you cannot install Visual Studio on the Tridion Content Manager, consider setting up a separate development environment where you can, because debugging will boost productivity greatly. There is documentation on how to get remote debugging working on SDL Live Content.
  • Download the AssemblyUploader2 plugin for Visual Studio. This plugin allows for fast uploading of dll files into Tridion and will save you a lot of manual, repetitive upload tasks.
  • Making the switch from VBScript to .Net, I needed to get familiar with the new TOM.Net API. There are two things which make this process easier and faster. The most important is to obtain the Tridion 2011 SP1 TOM.Net API documentation provided by Tridion. Also use the Visual Studio Object Browser to quickly navigate the Tridion libraries and see what functionality they expose.

Tridion .Net templating limitations

As stated in the introduction, there are some limitations to the Tridion .Net template development which I encountered during the project.

  • The most noticeable for me is the limited support for automated testing. Tridion classes used for templating are not testable, making it harder to isolate code for testing. During training at Tridion, the trainer did mention that Tridion will look at this for future versions.
  • A limitation worth mentioning is that all templating code has to be uploaded to Tridion in a single dll file, so all the code has to be maintained within one Visual Studio project. I found this out the hard way: template execution failed when I referenced a second project containing utility/support classes, even though both dll files were uploaded into Tridion correctly. This limitation can be frustrating when you want to reuse code from utility/support classes which reside in a different project. Fortunately there is a workaround: Visual Studio offers the choice to add classes from a different project to the current project as a link. This way you can use the code from other projects within your templating project without duplicating it.

Wrapping up

Tridion still supports VBScript templates, and the reason is simple: upgrading Tridion should not break any existing templates and implementations. Even though VBScript is not officially deprecated, I strongly recommend making the switch to .Net development for your next template development project. The switch has to be made sometime; better to make it soon and start seeing the benefits of the new .Net template development features for yourself.

Telligent Community 6.0 custom caching

In this article I will briefly describe a custom caching approach which I developed for custom Telligent Community 6.0 widgets. Understanding caching and anticipating issues with the cache is important to keep the load time of widgets and pages in check.

Widgets within Telligent are cached for five seconds out-of-the-box. This is a balance between the performance gained by not having to re-evaluate the widget code and preventing the content of the widget from becoming stale. While the five-second cache is fine for most widgets out there, some widgets may require additional caching to reduce load. Examples are widgets that perform many expensive database queries or connect to external APIs.

To illustrate how many expensive database queries can cause high load times for widgets, I will describe a widget I recently developed for a client. This custom widget provides an overview of the amount of content within the community, to give the user an indication of how active the groups within the community are. The community has ten groups, with a total of fifty sub groups. These groups contain roughly five hundred threads and twenty-five hundred replies. Querying all this content causes high load on the database server and increases the load time of the widget. We remedied this problem by enhancing the caching. While this is not the only possible form of improvement, this article will only discuss caching.

There are two approaches known to me for increasing caching within Telligent. The first approach is changing the standard Telligent caching through caching.config. For more information, see the documentation. I would like to point out that changes to the standard Telligent caching affect everything that is put into the cache. If you require only a portion of the widgets to be cached for a longer period of time, changing the standard Telligent caching will probably yield undesirable results in other parts of the community.

The second approach requires customization through an extension, but you can achieve more fine-grained control over caching than you can by changing the standard caching. If you want to cache a portion of the widgets for ten minutes, and leave the rest on the standard five-second caching, this approach is the way to go. If this is your first time writing an extension for Telligent Community 6.0, this article by Adam Seabridge is a good read. Basically, what we want to do is write our own extension to handle caching for us and enable us to determine how long an object remains in the cache. The extension will look like this:

using System;

using Telligent.Evolution.Extensibility.Caching.Version1;

namespace KvKVelocityScriptExtensions
{

    public class CachingVelocityScriptExtensions : Telligent.Evolution.Extensibility.UI.Version1.IScriptedContentFragmentExtension
    {

        public string Name
        {
            get { return "KvK Cache Extension Plugin"; }
        }

        public string ExtensionName
        {
            get { return "KvKCachingExtension"; }
        }

        public string Description
        {
            get { return "Caching Extension Class For Use In Custom Widgets"; }
        }

        public void Initialize()
        {
            //stuff here;
        }

        public object Extension
        {
            //Pass back an instance of the CachingService which exposes the caching methods
            get { return new CachingService(); }
        }
    }

    public class CachingService
    {
        #region GroupStatistics

        public const string KvKGroupListWidgetCacheKeyPrefix = "KVKGROUPLISTWIDGET{0}";

        public GroupStatistics GetGroupStatistics(int groupId) 
        {
            string cacheKey = string.Format(KvKGroupListWidgetCacheKeyPrefix, groupId);

            //Returns null when there is no entry for this group in the cache.
            return (GroupStatistics)CacheService.Get(cacheKey, CacheScope.All);
        }

        public void PutGroupStatistics(int groupId, int minutesToCache, int forumThreadCount, int forumThreadReplyCount, DateTime lastForumThreadReply)
        {
            string cacheKey = string.Format(KvKGroupListWidgetCacheKeyPrefix, groupId);
            GroupStatistics groupStatistics = new GroupStatistics
            {
                ForumThreadCount = forumThreadCount, 
                ForumThreadReplyCount = forumThreadReplyCount, 
                LastForumThreadReply = lastForumThreadReply
            };

            CacheService.Put(cacheKey, groupStatistics, CacheScope.All, null, TimeSpan.FromMinutes(minutesToCache));
        }

        #endregion
    }

    public class GroupStatistics
    {
        public int ForumThreadCount { get; set; }
        public int ForumThreadReplyCount { get; set; }
        public DateTime LastForumThreadReply { get; set; }
    }

}

As the code above shows, we create a simple extension with a few functions that interact with the Telligent caching. As you cannot cast objects within Velocity script, the functions within your caching extension have to return concrete instances of objects, in this case an instance of the GroupStatistics class. You have to write a small amount of additional code when you want to cache an additional type of object. Now that we've got our simple extension added to Telligent Community 6.0, we can start using it from Velocity script like this:

#foreach($group in $groups)
	#set($groupStatistics = $CachingExtension.GetGroupStatistics($group.Id))
	#if ($groupStatistics)
		## Use cached results.
	#else
		## Execute normal widget code.
		## Insert results into cache.
		$CachingExtension.PutGroupStatistics($group.Id, 15, $threadCount, $replyCount, $lastReactionDate)
	#end
#end

While this solution takes some additional effort, it adds great value over the standard Telligent caching in that it gives developers the power to cache widgets individually according to their needs. While caching is not the answer to every performance issue, adding additional caching in the right places can reduce load on the servers and help keep page loads in check. If you have any questions feel free to leave a comment or contact me.

Telligent Community 6.0 custom tagging solution

In this article I will briefly describe a custom component I developed for integrating two web platforms at a client: the website and their community (running Telligent Community 6.0). The website uses a tagging system which matches news articles and seminars to content pages on the website. The website itself is developed in .Net Web Forms. The tagging is managed via the Tridion Content Management System (5.3). The tagging system will be expanded in the near future by pulling content from the community and matching it to content pages on the website using the same tagging system.

The community is developed using Telligent Community 6.0, which gave me the opportunity to work with this new product. The Telligent platform has evolved greatly compared to its predecessor, Community Server, especially when it comes to customization and integration. The easiest way to integrate Telligent Community 6.0 is through the REST API, which is quite elaborate, as the documentation will show.

Unfortunately, while the Telligent API is pretty elaborate, there are no endpoints available for pulling content by tag. Something I hope will be available in the future, since the data structure for storing tags within Telligent is pretty straightforward.

When developing the integration component we went for a simple integration with maximum performance. Since the website can communicate directly with the Telligent Community 6.0 database, we chose not to develop a web service, but to integrate the code into the website directly. We used standard Linq2Sql with a repository pattern to hold the objects, and developed some business logic to sort the results by relevancy. Nothing especially fancy from a technical perspective, but I always find it fun to integrate platforms.
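
To give an idea of the shape of that integration component, here is a rough sketch; the CommunityDataContext and TaggedContent names are made up for illustration and do not reflect the actual Telligent Community 6.0 database schema:

using System.Collections.Generic;
using System.Linq;

namespace Community.Integration
{
    // Rough sketch only: CommunityDataContext and the TaggedContent class/table
    // are made-up Linq2Sql names, not the actual Telligent Community 6.0 schema.
    public class TaggedContentRepository
    {
        private readonly CommunityDataContext _context;

        public TaggedContentRepository(CommunityDataContext context)
        {
            _context = context;
        }

        // Returns the ids of community content matching the given tags, most
        // relevant first, where relevancy is simply the number of matching tags.
        public IList<int> FindContentIdsByTags(IEnumerable<string> tags)
        {
            return (from item in _context.TaggedContents
                    where tags.Contains(item.Tag)
                    group item by item.ContentId into byContent
                    orderby byContent.Count() descending
                    select byContent.Key).ToList();
        }
    }
}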

Since I will be developing a full community in Telligent Community 6.0 at a later period, there may be more articles regarding Telligent custom widget development on the way.

Asp.Net MvC 3 from scratch: Models

This is the third article in the series where we create a book review application from scratch using Asp.Net MvC 3. In this article we're going to start developing the data layer for our application. Since this goal is fairly large I will be chopping it up into three articles; this article will get our data layer up and running. The upcoming two articles will cover unit-testing and proper integration with the rest of the application through dependency injection.

The data layer

The main concern of the data layer is to manage and persist the data of the application. We will build our data layer using the Entity Framework 4.0, an object-relational mapping tool built by Microsoft. The newest version of the framework offers a lot of neat new features, the most prominent being the "code first" approach. This feature allows us to build our data layer starting with hand-coding the data classes (the models). The database is generated later using the models as the blueprint. The benefits of this approach: the models are clean, not generated by any tool and not directly coupled to the Entity Framework. To utilize this approach we have to install an extension for the Entity Framework called "EFCodeFirst", which you can install through the NuGet package manager.

The basics

Before we get started: the data layer can be created inside a separate project in our solution. This makes it easier to physically recognize the existence of a separate layer, and dividing the application into multiple projects helps with structuring as the application grows larger. All right then, let's get started with coding our models. I will start with coding the basic models we need at this point, the book and the review:

public class Book
{
        public int Id {get; set; }
        public string Name { get; set; }
        public string Author { get; set; }
        public string ISBN { get; set; }
}

public class Review
{
        public int Id { get; set; }
        public Book Book { get; set; }
        public string Content { get; set; }
        public DateTime Created { get; set; }
}

Now that we've got our basic models in place, we have to make the Entity Framework aware of their existence. The code first way to do this is to define a context that derives from DbContext. The models themselves live inside the context as DbSets:

public class EntityContext : DbContext
{
        public DbSet<Review> Reviews { get; set; }
        public DbSet<Book> Books { get; set; }
}

As mentioned before, the database is generated using the models as the blueprint. Personally, I like to start with a standard set of dummy data every time I run my application: the database is recreated by the Entity Framework on every run, and then the dummy data is inserted. We have to code the dummy data, so the example provided here is pretty basic:

public class EntityBaseData : DropCreateDatabaseAlways<EntityContext>
{
        protected override void Seed(EntityContext context)
        {
            context.Books.Add(new Book{Author = "Oskar uit de Bos", ISBN = "1234567", Name = "Book1"});
        }
}

Notice that our class inherits from the DropCreateDatabaseAlways<EntityContext> class, which tells the Entity Framework to drop and recreate a fresh database. Register it in the global.asax like this:

protected void Application_Start()
{
            AreaRegistration.RegisterAllAreas();

            RegisterGlobalFilters(GlobalFilters.Filters);
            RegisterRoutes(RouteTable.Routes);

            DbDatabase.SetInitializer(new EntityBaseData());
}

The last step is setting up the connection to the database. By default the Entity Framework looks for a connection string with a name that matches our custom DbContext name, in our case EntityContext. Let's give the Entity Framework what it wants and add the following connection string to the web.config file:

<connectionStrings>
    <add name="EntityContext"
         connectionString="data source=.\SQLEXPRESS;Database=BookReviews5;Integrated Security=SSPI;"
         providerName="System.Data.SqlClient" />
</connectionStrings>

The next step

While we've got our basic setup covered, there is little to no structure at this point, so let's apply some. Most of you should be familiar with the concept of repositories: they encapsulate the mechanisms for storing, retrieving and querying data from the rest of the application. Repositories act as gateways and guardians for our data, preventing data access logic from being scattered all over our data layer. Now that we have some basic structure in place, we have testability to think about. While there are many different approaches for testing a data layer, I will take a strict unit-testing approach, so no integration testing with the database.

The focus of the unit-testing lies with the data access logic inside the repositories. To enable testing without the database we will abstract the link to the database (our custom DbContext) away from the repositories into a separate component. Let's start with this separate component, since we will need it when we start working on our repositories.

I named this component EntityProvider, since its main function is to provide the application’s data (entities) to our repositories. The basic idea is that every version of the EntityProvider exposes data through several IDbSet interfaces. The first implementation, our main implementation, works with the actual database. The implementation looks like this:

public interface IEntityProvider
{
        IDbSet<Review> Reviews { get; set; }
        IDbSet<Book> Books { get; set; }

        void PersistChanges();
}

public class SqlEntityProvider : IEntityProvider
{
        private readonly DbContext _context;

        public SqlEntityProvider()
        {
            _context = new EntityContext();
        }

        public IDbSet<Review> Reviews
        {
            // The sets always come straight from the context; the setter only
            // satisfies the interface and is intentionally empty, since assigning
            // to the property itself (Reviews = value) would recurse forever.
            get { return _context.Set<Review>(); }
            set { }
        }

        public IDbSet<Book> Books
        {
            get { return _context.Set<Book>(); }
            set { }
        }

        public void PersistChanges()
        {
            _context.SaveChanges();
        }
}

Now on to the first repository. As mentioned before, repositories act as gateways and guardians for our data. Since there is a basic set of functionality that every repository should have, I have created a simple interface. The first implementation, our book repository, will only implement the basic functionality defined by the interface. Our first repository looks like this:

public interface IRepository<T> where T : class
{
        IEnumerable<T> All();

        void Change(int id, T entity);
        void Add(T entity);
        void Remove(T entity);
}

public class BookRepository : IRepository<Book>
{
        private IEntityProvider Provider { get; set; }

        public BookRepository(IEntityProvider provider)
        {
            Provider = provider;
        }

        public IEnumerable<Book> All()
        {
            return Provider.Books.AsEnumerable();
        }

        public void Change(int id, Book entity)
        {
            Book item = Provider.Books.Where(e=>e.Id == id).FirstOrDefault();
            item.Name = entity.Name;
            item.ISBN = entity.ISBN;
            item.Author = entity.Author;

            Provider.PersistChanges();
        }

        public void Add(Book entity)
        {
            Provider.Books.Add(entity);
            Provider.PersistChanges();
        }

        public void Remove(Book entity)
        {
            Provider.Books.Remove(entity);
            Provider.PersistChanges();
        }
}
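
To show the testability this setup buys us, here is a minimal sketch of what a unit-test for the book repository could look like, assuming MOQ is available in the test project (we will cover unit-testing properly in the upcoming articles). The mocked IDbSet is backed by an in-memory list, so the database is never touched:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class BookRepositoryTests
{
        [TestMethod]
        public void AllReturnsEveryBookFromTheProvider()
        {
            var books = new List<Book>
            {
                new Book { Id = 1, Name = "Book1", Author = "Oskar uit de Bos", ISBN = "1234567" }
            }.AsQueryable();

            // Back the mocked IDbSet with the in-memory list, so no database is needed.
            var mockSet = new Mock<IDbSet<Book>>();
            mockSet.Setup(m => m.Provider).Returns(books.Provider);
            mockSet.Setup(m => m.Expression).Returns(books.Expression);
            mockSet.Setup(m => m.ElementType).Returns(books.ElementType);
            mockSet.Setup(m => m.GetEnumerator()).Returns(books.GetEnumerator());

            // The repository only sees the IEntityProvider abstraction.
            var mockProvider = new Mock<IEntityProvider>();
            mockProvider.Setup(p => p.Books).Returns(mockSet.Object);

            var repository = new BookRepository(mockProvider.Object);

            Assert.AreEqual(1, repository.All().Count());
        }
}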

Wrapping up

We got a lot of work done, so it's time to wrap up this first article on the models. We have created basic models and wired them up to the Entity Framework using the code first approach. For more in-depth information on the code first approach, check out this resource. We also created our first testable repository, which contains basic functionality. But how about some results for all our hard work? A possible quick test is adding the following code to the index action of the home controller:

public ActionResult Index()
{
            SqlEntityProvider provider = new SqlEntityProvider();
            BookRepository repository = new BookRepository(provider);
            var result = repository.All();

            ViewBag.Message = result.FirstOrDefault().Author;

            return View();
}

Note that this is not the proper way of integrating the repositories with the controllers; this is simply for quick and dirty results. The proper, loosely coupled way to integrate these components is through dependency injection, which will be covered in a later article. I expect the next article to be up within a week or so; until then, happy coding. Feel free to ask questions, feedback is much appreciated. Full source for all that we have done so far is available here.

Asp.Net MvC 3 from scratch: Routing

This is the second article in the series where we create a book review application from scratch using Asp.Net MvC 3. In the first article I did a general introduction and created the project. In this article we are going to discuss the importance of automated testing, and we're finally going to write some code for testing our routes.

The routing mechanism is vital to the mapping of requests within our application. This makes subjecting routes to automated testing a valid investment of time. Automated testing of routes requires the help of a mocking framework; there are several frameworks available, and I will be using MOQ for this article series.

We will install MOQ with the help of the NuGet package manager. The easiest way to install third party libraries using NuGet is simply through the GUI. Click References inside the unit-test project and click "Add package reference". In the next screen select "All" under the "Online" section on the left, then enter your search in the upper right corner as shown in the screenshot below. MOQ is the fourth result in the search results; click it and then click install.

This is just a simple example of the power of NuGet; installing complex libraries with dependencies is just as easy. You can use the same approach to update your third party libraries by selecting "All" under the "Updates" section on the left. No more manual importing of libraries, hurray!

Unit-testing

We will be writing tests during the development of our application. Now that we have MOQ installed, let's talk about the importance of automated testing in the form of unit-testing. If you have limited or no experience with automated testing or unit-testing, it is recommended to read up on the subject, at least the basics, before continuing. Check out my resource page for recommended reading.

Before we get to why we should unit-test, let's set a definition for unit-testing. The clearest definition I have come across is one made by Roy Osherove: a unit-test is an automated piece of code that invokes the method or class being tested and then checks some assumptions about the logical behavior of that method or class. A unit-test is always written using a unit-testing framework. It is fully automated, trustworthy, readable and maintainable.

But why should we as developers be bothered with unit-testing in the first place? Basically because we should care about the quality of the web applications we develop. Delivering quality can be a challenge with a constantly evolving application; unit-testing provides a means to enhance and maintain the quality of our web application. The three major advantages offered by unit-testing are:

  • Unit-testing allows for easy refactoring. Readily-available unit tests make it easy for the programmer to check whether a piece of code is still working properly after a change. If a change causes a fault, it can be quickly identified and fixed.
  • Unit-testing makes for great technical documentation. Developers somehow need to convey the functionality offered by a class or method to other developers. One of the possibilities is looking at the unit-tests to gain a basic understanding of a class or method.
  • Unit-testing promotes good coding. An important aspect of good coding is loose coupling. Loose coupling is a prerequisite for testing code with dependencies, because it enables the replacement of dependencies with test doubles (stubs, mocks). This is the only way to control how dependencies behave when under test, which is vital for unit-testing.

Having discussed the advantages of unit-testing, it's only fair to point out that unit-testing is not able to cover all the bases; unit-testing has its limitations and you should not rely solely on one technique. The first limitation of unit-testing is that it does not cover integration between different parts of the system. The second limitation is that the test scenarios are created by the developer. Somehow end-users tend to be creative and use/abuse the application in ways that developers do not expect, and that therefore are not covered by unit-testing.

While unit-testing does not cover all the bases, it is very important for the overall quality of the application. Just do not rely solely on unit-testing; some form of integration testing and user testing are important to prepare an application for "the real world". Now let's do some unit-testing of our own.

Routing

As mentioned before, we will start with the automated testing of our routes! The routing mechanism is vital to the mapping of requests within our application. This makes subjecting routes to automated testing a valid investment of time.

The challenge with testing routes is that the routing mechanism expects a request from the user, in the form of an HttpContext instance. This is a dependency, which we are going to replace with a special test double called a mock, using the MOQ framework. We configure the mock to behave the way we want, faking a request from the user and giving us control over how the dependency behaves.

With the mock in place, we call the routing mechanism, passing in our mocked object. The routing mechanism then returns the route data, which contains the information on how the routing mechanism will map the request. We can check the contents of the route data to see how the request would be mapped. Now let's see the code:

using System.Web;
using System.Web.Routing;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

namespace BookReviews.Web.Tests.Routes
{
    public static class RouteTestHelpers
    {
        public static Mock<HttpContextBase> BuildMockHttpContext(string url)
        {
            // Create a mock instance of the HttpContext class.
            var mockHttpContext = new Mock<HttpContextBase>();

            // Decorate our mock object with the desired behaviour.
            var mockRequest = new Mock<HttpRequestBase>();
            mockHttpContext.Setup(x => x.Request).Returns(mockRequest.Object);
            mockRequest.Setup(x => x.AppRelativeCurrentExecutionFilePath).Returns(url);

            var mockResponse = new Mock<HttpResponseBase>();
            mockHttpContext.Setup(x => x.Response).Returns(mockResponse.Object);
            mockResponse.Setup(x => x.ApplyAppPathModifier(It.IsAny<string>())).Returns<string>(x => x);

            return mockHttpContext;
        }

        public static RouteData GetRouteDataForMockedRequest(string url)
        {
            var routes = new RouteCollection();
            MvcApplication.RegisterRoutes(routes);

            // Create a mock instance of the HttpContext class with the desired behaviour.
            var mockHttpContext = BuildMockHttpContext(url);

            return routes.GetRouteData(mockHttpContext.Object);
        }
    }

    [TestClass]
    public class RouteTests
    {
        [TestMethod]
        public void TestReviewControllerIndexRoute()
        {
            RouteData routeData = RouteTestHelpers.GetRouteDataForMockedRequest("~/review/index");

            // Check if the route was mapped as expected.
            Assert.IsTrue(routeData != null && routeData.Values["controller"].ToString() == "review");
            Assert.IsTrue(routeData != null && routeData.Values["action"].ToString() == "index");
        }
    }
}

The code should be pretty straightforward with the comments provided and the explanation earlier. As you may have noticed, pretty much all of the code is reusable, so adding tests for new routes takes very little effort. In the next article we will start coding a lot more with our model. So this is it for the second article! Feel free to ask questions, feedback is much appreciated. Get the full source code for our progress so far from here!

Asp.Net MvC 3 from scratch: Introduction

Web development is best suited for those who like continuous learning; you have to keep up with the constant changes in technologies and available frameworks. To get a grip on new technologies and frameworks I often find myself writing code simply to learn and experiment. Now I have decided to combine this experimenting with my blogging by developing a series of articles based on developing a web application using Asp.Net MvC 3.

The Asp.Net MvC 3 framework

I chose to develop my application with the Asp.Net MvC framework because it enables me to develop high quality web applications. The framework utilizes successful design patterns (Model View Controller) and is built using important object oriented principles and best practices like separation of concerns, loose coupling and testability. While it may take some getting used to for those coming from Asp.Net Web Forms, the initial learning investment will prove worthwhile.

The Asp.Net MvC framework recently had its third major release, and while many new features have been added, there are two that got me really excited. First is the Razor view engine, which offers a cleaner and "easy on the eyes" syntax for the views. Razor comes with full IntelliSense support. The second feature is the NuGet package manager, which enables developers to easily manage third party libraries and their dependencies from within Visual Studio. NuGet makes installing and updating all third party libraries a breeze. We will be working with both these new features during this article series. Having discussed the framework, let's talk about the application we are going to build.

The Application

Before we begin: the concept behind the application is pretty straightforward. Community demo applications like NerdDinner proved that a simple concept can be effective for those wanting to learn a framework. We are going to develop a community website centered on book reviews. We start simple with the core functionality: write, tag and submit book reviews. As the series progresses, functionality will be extended: we will develop responses to book reviews with a badge/kudos system, and we should also implement some membership mechanism like OpenID integration. But before we get carried away, let's actually start by creating a new project!

Project creation

Time to fire up Visual Studio and get coding! Make sure you have installed Asp.Net MvC 3, otherwise it will not show up in the project templates when creating a new project. In the project creation screen select the Asp.Net MvC 3 project template. Name the project BookReviews.Web, and the solution BookReviews. In the second screen of the project creation, select the following settings:

  1. Project template: Internet application
  2. View engine: Razor
  3. Create a unit-test project: Yes
  4. Test project Name: BookReviews.Web.Tests
  5. Unit-test framework: Visual Studio Unit Test

Wrapping up

After creating the project, your Solution Explorer should look like the screenshot on the right, containing the web project along with a unit-test project. The unit-test project offers the opportunity for automated testing, which we will start with in the next article. We will be using automated testing for our routes, ensuring that requests made by users get mapped properly within our application. So this is it for the first article; the next article will get posted within a few days! Feel free to ask questions, feedback is much appreciated.

Generating bulk SQL data scripts

Most web developers who develop Asp.Net web applications come across SQL Server. While developers may have less direct contact with databases than before due to ORM (object-relational mapping) tools, most of us still work with databases on a regular basis. Today was one of those days: I was faced with generating a couple of large SQL data export scripts. These scripts have to be executed by a service provider, and since I don't have access to their machines, delivering scripts is the only simple way.

There are many tools for generating large SQL data export scripts, but not all are suited for managing large/bulk scripts. SQL Data Compare, a tool by Red Gate, generates scripts that are rather large, without an option to split the generated script. Since SQL Data Compare was my only tool for managing SQL data, I had to search for additional tooling, and I found the wonderful BCP utility.

It turns out the BCP utility is included in the SQL server install, and its specifically made for dealing with large bulk exports and imports. Its old school command-line, which might not make it the most user-friendly tool, but damn, is it fast. And it’s available on every machine that has SQL server installed. The basic usage of the BCP utility is clearly explained in this article, it’s pretty straightforward and the tool is awesome so use it! That is all, happy coding!