EF Code First Index Column not created

A while back I tried to create a unique index on a column.
The configuration file looked something like this.

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasColumnAnnotation("Alias", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

This resulted in the following migration.

            CreateTable(
                "dbo.Departments",
                c => new
                    {
                        Id = c.Guid(nullable: false),
                        Alias = c.String(
                            annotations: new Dictionary<string, AnnotationValues>
                            {
                                { 
                                    "Alias",
                                    new AnnotationValues(oldValue: null, newValue: "IndexAnnotation: { Name: IX_Alias, IsUnique: True }")
                                },
                            }),
                        Name = c.String(nullable: false),
                    })
                .PrimaryKey(t => t.Id);

Which seemed fine; it looked like it did what it was supposed to do.
But when the migration was run, no index whatsoever was created. So I started googling about indexes and came across the following:

Columns that are of the large object (LOB) data types ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image cannot be specified as key columns for an index.

So I limited the Alias to 50 characters in the configuration file:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).HasColumnAnnotation("Alias", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

But still no index. So I continued my search on the internet and finally found the problem.
It is the first argument of the HasColumnAnnotation method: the annotation name must be set to “Index” when you want to create an index. This seems a bit unnecessary to me, since the second argument is already an IndexAnnotation. So once again I changed my configuration file:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

The generated migration file now contains:

            CreateIndex("dbo.Departments", "Alias", unique: true);

Now that I know the annotation name passed to HasColumnAnnotation is fixed to “Index”, I recommend creating an extension method for creating unique indexes:

    public static class PrimitivePropertyConfigurationExtensions
    {
        public static PrimitivePropertyConfiguration IsUnique(this PrimitivePropertyConfiguration configuration)
        {
            return configuration.HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute { IsUnique = true }));
        }
    }

And you can use it as follows:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).IsUnique();
            Property(p => p.Name).IsRequired();
        }
    }

Sass mixin for transparent gradient

I’m a big fan of Sass.
Not just because of the awesome name, but because it allows me to use variables and mixins.
This keeps my code DRY.

When I was working on a mixin for creating linear gradients, I kept running into problems with older (IE) browsers.
My first attempt:

    @mixin linear-gradient($top-color, $bottom-color, $opacity) {
        background: -moz-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* FF3.6-15 */
        background: -webkit-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* Chrome10-25,Safari5.1-6 */
        background: linear-gradient(to bottom, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='rgba($top-color, $opacity)', endColorstr='rgba($bottom-color, $opacity)',GradientType=0 ); /* IE6-9 */
    }

This resulted in the variable values not being rendered in the filter property: the Sass compiler does not substitute variables inside the filter value without interpolation.

My second attempt:

    @mixin linear-gradient($top-color, $bottom-color, $opacity) {
        background: -moz-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* FF3.6-15 */
        background: -webkit-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* Chrome10-25,Safari5.1-6 */
        background: linear-gradient(to bottom, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#{rgba($top-color, $opacity)}', endColorstr='#{rgba($bottom-color, $opacity)}',GradientType=0 ); /* IE6-9 */
    }

This did trigger the Sass compiler to render the variable values, but they were rendered as rgba values, and the IE gradient filter does not work well with rgba values.
After some Google searches I found “ie-hex-str”. This outputs the #AARRGGBB hex code of a color (alpha channel included) instead of the rgba value.
This resulted in the following mixin, which works well for me:

    @mixin linear-gradient($top-color, $bottom-color, $opacity) {
        background: -moz-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* FF3.6-15 */
        background: -webkit-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* Chrome10-25,Safari5.1-6 */
        background: linear-gradient(to bottom, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#{ie-hex-str(rgba($top-color, $opacity))}', endColorstr='#{ie-hex-str(rgba($bottom-color, $opacity))}',GradientType=0 ); /* IE6-9 */
    }

The mixin can be used as follows:

    .background-image-overlay {
        @include linear-gradient($black, $white, .5);
    }
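
Assuming $black is #000 and $white is #fff, the IE filter line in the compiled CSS then comes out roughly like this:

    filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#80000000', endColorstr='#80FFFFFF',GradientType=0 ); /* IE6-9 */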

Running gulp tasks on a buildserver

With the newest ASP.NET release coming, Microsoft is removing its own optimization framework and pushing developers to use Gulp, NPM and Bower.
I do not want to manually minify and bundle my css and js files, so I want a Gulp task to do it.
My NPM file (package.json) looks like:

{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": true,
  "devDependencies": {
    "bower": "1.7.7",
    "gulp": "3.9.1",
    ....
  }
}

My bower file (bower.json) looks like:

{
  "name": "ASP.NET",
  "private": true,
  "dependencies": {
    "jquery": "2.2.3",
    "jquery-validation-unobtrusive": "3.2.6",
    "bootstrap": "3.3.6",
    ....
  }
}

I also do not want my bundles to be source controlled.
It is a task of the buildserver to prepare my solution for release.
This means that the buildserver should be able to run the same Gulp tasks as we do in our development environment.

The following software should be installed on the buildserver to let it run Gulp tasks:

  • Node.js (which includes npm)
  • Git (Bower uses it to fetch packages)

When installing Git, set the install option to “Run GIT from the Windows Command Prompt”.
I’d like to have all my configuration source controlled, so I create a targets file which contains the targets for running Npm, Gulp and Bower, and import this file in my web project (the import itself is shown below, after the targets).

  <!-- Gulp -->
  <Target Name="RunGulpTask" AfterTargets="BowerInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running gulp task 'default'" Importance="high" />
    <Exec Command="node_modules\.bin\gulp" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Bower -->
  <Target Name="BowerInstall" AfterTargets="NpmInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running bower install" Importance="high"/>
    <Exec Command="node_modules\.bin\bower install" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Npm -->
  <Target Name="NpmInstall" BeforeTargets="BeforeBuild" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running npm install" Importance="high"/>
    <Exec Command="npm install" WorkingDirectory="$(ProjectDir)" />
  </Target>
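
Importing the targets file into the web project is a single line in the csproj (the path and file name are just examples):

  <Import Project="..\build\GulpTasks.targets" />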

I do not want these tasks to run in my development environment (because the Task Runner of Visual Studio takes care of it), so I added a RunGulpTasks parameter. When this is provided (by adding /p:RunGulpTasks=true to the msbuild command), the targets will be run before the solution is built.
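
For example, the buildserver can build the solution with (the solution name is just an example):

msbuild MySolution.sln /p:RunGulpTasks=true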

My gulpfile.js looks like:

var gulp = require('gulp'),
.....
gulp.task('default', ['bundleCss', 'minCss', 'bundleJs', 'minJs'], function() {});

I did not provide a Gulp task to run, so Gulp will run the default task by convention. My default task has a dependency on all tasks I want to run on the buildserver.
The buildserver now bundles and minifies my css and js files by using the same Gulp tasks.
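
For reference, a minimal sketch of what such a gulpfile could look like. The plugin choices (gulp-concat, gulp-cssnano, gulp-uglify, gulp-rename) and the paths are my own assumptions, not the exact file from the project:

var gulp = require('gulp'),
    concat = require('gulp-concat'),     // bundling
    cssnano = require('gulp-cssnano'),   // css minification
    uglify = require('gulp-uglify'),     // js minification
    rename = require('gulp-rename');

// Bundle all css files into one site.css
gulp.task('bundleCss', function () {
    return gulp.src('Content/**/*.css')
        .pipe(concat('site.css'))
        .pipe(gulp.dest('wwwroot/css'));
});

// Minify the bundled css into site.min.css
gulp.task('minCss', ['bundleCss'], function () {
    return gulp.src('wwwroot/css/site.css')
        .pipe(cssnano())
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('wwwroot/css'));
});

// Bundle all js files into one site.js
gulp.task('bundleJs', function () {
    return gulp.src('Scripts/**/*.js')
        .pipe(concat('site.js'))
        .pipe(gulp.dest('wwwroot/js'));
});

// Minify the bundled js into site.min.js
gulp.task('minJs', ['bundleJs'], function () {
    return gulp.src('wwwroot/js/site.js')
        .pipe(uglify())
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('wwwroot/js'));
});

gulp.task('default', ['bundleCss', 'minCss', 'bundleJs', 'minJs'], function () {});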

Picking up jQuery dom manipulations with AngularJS

AngularJS and jQuery are two of many frameworks that manipulate the dom.
They both have their own way of doing this.
I prefer AngularJS for its data-driven approach.
Mixing jQuery dom manipulation and AngularJS data binding is not a recipe for success; it is a bad practice and should be avoided.
But sometimes it just cannot be avoided and you have to deal with it. In a project I was working on, jQuery was used for dom manipulation. My task was to build a new feature with AngularJS that uses the same part of the dom. Completely refactoring the existing code was not an option, so I had to deal with dom changes triggered by jQuery. AngularJS won’t pick up expressions when they are injected into the dom by something else, so I figured out the following solution.

Monitoring (a section of) the dom

In my first attempt, I tried to monitor a section of the dom. Every change will be picked up and compiled.

angular.module('myApp', [], function ($compileProvider) {
    $compileProvider.directive('compile', function ($compile) {
        return function (scope, element, attrs) {
            scope.$watch(
                function (scope) {
                    return element.html();
                },
                function (value) {
                    $compile(element.contents())(scope);
                }
            );
        };
    });
});

This directive is triggered by adding the compile attribute to an element in the app scope.

<div ng-app="myApp">
    <div compile>
    ....
    </div>
</div>

At first, I thought it worked perfectly.
It detected every change. But as I added more Angular expressions, I noticed a performance drop.
Every data binding triggered a recompile of the section being watched, and that caused the performance drop. Though it wasn’t a real issue yet, I just did not think it was a good solution.

Monitoring an attribute of an element in the dom

In my second attempt, I tried to limit the compiles to a minimum.
I asked myself what I could use as a trigger for an update and realized that the correct answer is "it depends" ("it depends" seems to be the answer to every question in developer land). So I tried to keep the trigger as flexible as possible and came up with this solution.

angular.module('myApp', [], function ($compileProvider) {
    $compileProvider.directive('compileWhenAttributeOfChildChanges', function ($compile) {
        return function (scope, element, attrs) {
            scope.$watch(
                function (scope) {
                    var attributeToWatch = element.attr("compile-when-attribute-of-child-changes");
                    return element.children().attr(attributeToWatch);
                },
                function (value) {
                    $compile(element.contents())(scope);
                }
            );
        };
    });
});

This directive is triggered by elements with the compile-when-attribute-of-child-changes attribute. It uses the value of this attribute as the attribute to watch on its child element. When that attribute changes value, a compile is triggered.

<div ng-app="myApp" compile-when-attribute-of-child-changes="watch">
    <div watch="valueBeingWatched">
    ....
    </div>
</div>

In this example, a compile is triggered when the “watch” attribute changes.
This solution works well and there are no unnecessary compiles.
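
To illustrate: a jQuery manipulation like the one below (a made-up example) changes the watched attribute, so the injected markup gets compiled on the next digest cycle.

$('[watch]')
    .html('<span>{{ someScopeValue }}</span>')     // inject markup containing an Angular expression
    .attr('watch', new Date().getTime());          // change the watched attribute to trigger a compile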

JavaScript unit tests and Visual Studio

Although all my code is always completely bug free (I love sarcasm), I’m still a big fan of unit tests.
As logic moves from server-side to client-side, I’m very happy to see that unit testing for client-side code is getting better supported and more common these days.

Jasmine

Jasmine is a behavior-driven JavaScript testing framework. It allows you to test your client-side code.
It has a clear, easy-to-understand syntax and is very popular in the community. Even Angular, which is designed with testing in mind, uses Jasmine and has tons of Jasmine examples on its website.

Example

In the example below I have a very simple math function named multiply. This function is stored in math.js; it accepts two arguments and should multiply argument 1 by argument 2.

function multiply(a, b)
{
    return a * b;
}

The test below makes sure that the multiply function works as expected.

/// <reference path="math.js" />

describe('multiply', function () {
    it('multiplies a by b', function () {
        var a = 2;
        var b = 4;
        var result = multiply(a, b);
        expect(result).toEqual(8);
    });
});

The test requires math.js to be loaded, so make sure you reference all files needed to run the test.
Of course one test is not enough to prove this function is completely bug free, but this is just an example. Normally I would write tests for all edge cases to make sure the function handles everything as expected.

Visual Studio

My IDE is Visual Studio and I also use it for all my client-side code (I’m experimenting with TypeScript and enjoying it a lot).
Resharper, one of my most important Visual Studio extensions, does support client-side unit testing, but I do not have a Resharper license at home, so I was searching for a different solution until I found Chutzpah.

This Visual Studio extension adds JavaScript unit testing support and integration with the Visual Studio test runner.
It shows server-side and client-side unit tests side by side.

The result is as follows:

[screenshot: client-side test runner]

Pretty sweet!

TFS: Discard changesets when merging to branches

When changes are branch specific and should not be merged (back) to other branches, these changes should be discarded.

The following TFS command will discard changesets:

tf merge $/Project/SourceBranch $/Project/TargetBranch /discard /recursive /version:C10000~C10000

This example command discards pending merge changesets from SourceBranch to TargetBranch; in this case it discards changeset 10000.
The version argument is a from~to range, so you can discard multiple changesets at once.
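
For example, to discard a range of changesets in one go (the numbers are made up):

tf merge $/Project/SourceBranch $/Project/TargetBranch /discard /recursive /version:C10000~C10005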

When the command has finished, you still need to check in the merge.
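
A check-in from the command line could look like this (the comment text is just an example):

tf checkin /recursive /comment:"Discard changesets from SourceBranch"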

WebForms and triggering submit buttons on pressing the enter key

Default button

Filling in forms using only the keyboard is very common and should get more attention than most developers give it.
I have seen a lot of weird JavaScript functions in WebForms pages that fake a button click when a user presses the Enter key.
This is really unnecessary: there is a built-in feature which allows you to set the submit button for a (sub)section of a WebForms page.
Every container element (e.g. a Panel) supports this feature. Pressing Enter on any focusable element inside the container triggers a click on the button defined by the DefaultButton property of that container element.

<asp:Panel ID="pnlFormName" runat="server" DefaultButton="btnSubmit">
    <asp:TextBox ID="txtName" runat="server" />
    <asp:TextBox ID="txtMiddleName" runat="server" />
    <asp:TextBox ID="txtLastName" runat="server" />
    <asp:Button ID="btnSubmit" runat="server" />
</asp:Panel>

Tab order

Another thing to keep in mind is the tab order. The order can be set via the TabIndex property of an HTML input element (or any other element which should be focusable).
The default tab order fits most cases; it is equal to the order in which the elements appear.

<asp:Panel ID="pnlFormName" runat="server">
    <asp:TextBox ID="txtName" runat="server" TabIndex="1" />
    <asp:TextBox ID="txtMiddleName" runat="server" TabIndex="2" />
    <asp:TextBox ID="txtLastName" runat="server" TabIndex="3" />
    <asp:Button ID="btnSubmit" runat="server" />
</asp:Panel>

Setting up Git source control on a QNAP NAS

When I start a new project, the first thing I do is set up source control. Source control is key!
That way I know all my source code is safe, and when I make a mistake I can easily roll back.

So why not take it seriously and use Git?
GitHub is great, but it is not free when using private repositories. Though I’m a big fan of open source, not all my projects are open source.

So I wanted to configure Git on my NAS, an old QNAP TS-410 (currently running on firmware 4.2.0).
This is how I configured Git for a QNAP NAS.

Install Git

First of all, install the Git package from the QNAP App Center (currently version 2.1.0) and make sure it is turned on.

There seems to be something wrong with the QNAP Git package, because a manual action is required.
Open an SSH connection to your NAS.

If you’re not familiar with SSH, you can download a client (e.g. PuTTY) and open a new connection by entering the IP of your NAS.

Now log in with your admin account and enter the following commands:


#  cd /usr/bin
#  ln -s /Apps/git/bin/git-upload-pack
#  ln -s /Apps/git/bin/git-receive-pack

This fixes an issue with the git-upload-pack and git-receive-pack not being found.

Hosting your repositories

Next, create a new share for your repositories.
I created a new share named ‘git’, but you’re free to choose.

Again, open an SSH connection and go to the newly created share:

#  cd /share/MD0_DATA/git

If this does not work, the MD0_DATA folder is probably named differently. Go to the /share folder and check the folder name with the following command:

#  ls -la

This will show a full list of all items and you can figure out what the right name is.

In the ‘git’ folder, enter the following command to create a new repository:

git init --bare NameOfMyRepository

This creates a new repository with the name ‘NameOfMyRepository’. It will automatically create a new subfolder with an identical name.

Cloning the repository

On your development machine, open your git tool and go to the directory where you want to work.
Now enter the following command:

git clone admin@YourIP:/share/git/NameOfMyRepository

This will ask for the admin’s password.
You can also use automatic login by generating an SSH key file, but I do not want that for security reasons.

Once the password is entered, the repository is cloned into a folder named ‘NameOfMyRepository’ and you’re good to go!

The Git controls integrated in VS2015 do not work with SSH yet, but the team is working on it. You can read about it here.

Keep EntityFramework state objects valid by encapsulating it in domain entities

There are already a lot of great posts (e.g. by Vaughn Vernon) about how to use EntityFramework with Domain Entities.
So why am I writing about it? Because I think it is important not to leak your state objects to your entire application.

EntityFramework cannot map private fields, so you need to make all mapped properties public.
The domain entity is responsible for making sure that the state object is always in a valid state.
EntityFramework also does not work with ValueObjects, so you cannot use ValueObjects in a nice way in your state: another plus for Domain Entities.
A state object does not necessarily need to be wrapped by an entity; it could also be wrapped by a ValueObject when you don’t care about uniquely identifying it. The state will still have an id, but you can hide it in your ValueObject.

In this example I’m talking about products; they are stored with EntityFramework as ProductState objects.

    public class ProductState
    {
        public Guid Id { get; set; }
        public string ProductCode { get; set; }
        public string ProductTitle { get; set; }
    }

Constructing

It starts with the constructors. I usually create two constructors, one for reviving the entity from the database and one for newly constructing the entity. Every constructor should result in a valid Product and ProductState.
The following constructor revives the product from its state.

    public class Product
    {
        private readonly ProductState _state;
        public Product(ProductState productState)
        {
            Assert.NotNull(productState, nameof(productState));

            _state = productState; 
        }
    }

The state is provided by the repository. It is already in a valid state, so a null check is sufficient.
Notice the nameof? This new C# 6 feature can come in very handy, e.g. for logging purposes.
In this example, a product is valid when it has a ProductCode; this is the unique identifier of the product. The constructor for creating a new product is as follows:

    public class Product
    {
        private readonly ProductState _state;
        public Product(ProductCode productCode)
        {
            Assert.NotNull(productCode, nameof(productCode));

            _state = new ProductState
            {
                Id = Guid.NewGuid(),
                ProductCode = productCode.Value,
            };
        }
    }

The Product entity is responsible for instantiating a ProductState and making sure it is in a valid state. The Id is the (surrogate) primary key for EntityFramework; the product code is the natural identifier of the product.
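
A sketch of what the EF configuration for this state class could look like (the maximum length and the unique index are my own assumptions, not taken from the original project):

    public class ProductStateConfiguration : EntityTypeConfiguration<ProductState>
    {
        public ProductStateConfiguration()
        {
            HasKey(p => p.Id);

            // ProductCode is the natural identifier, so give it a unique index.
            Property(p => p.ProductCode)
                .IsRequired()
                .HasMaxLength(50)
                .HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute("IX_ProductCode") { IsUnique = true }));

            Property(p => p.ProductTitle);
        }
    }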

Exposing data

Now that we have instantiated a Product, we can use it in our application.
The private (and readonly) state is used as backing variable for every get or set method/property.
The methods in Product look as follows:

        public ProductCode GetProductCode()
        {
            return new ProductCode(_state.ProductCode);
        }

        public string GetProductTitle()
        {
            return _state.ProductTitle;
        }

        public void SetProductTitle(string title)
        {
            Assert.NotNull(title, nameof(title));
            if (title.Length > 255)
            {
                throw new ArgumentException("ProductTitle cannot be more than 255 characters.");
            }
            _state.ProductTitle = title;
        }

There is no set method for ProductCode? Correct! The ProductCode is the identifier for the Product and it is immutable. A different product code is a different product, so that requires instantiating a new/different Product.
The ProductTitle does not identify the Product, so there is a set method for the ProductTitle. In this set method there are some business rules; in this example the ProductTitle cannot be null and should not be more than 255 characters. This makes sure that the state object can never get an invalid title in its ProductTitle property.
I prefer void as the return type for set methods: when the provided data is invalid, I throw exceptions. Returning a bool to indicate whether the operation was successful has some disadvantages, e.g.:
– It does not give any detail of what went wrong
– It suggests we can still continue normally.
In this example I use methods for the get operations; these could just as well have been properties.

Attaching the State Entity to EntityFramework

Unfortunately, there is a downside to this. Now that the Product creates the ProductState, we need to attach it to the DbContext before EntityFramework will pick it up.
So we do need to expose the inner state object. I always try to make it internal so not everybody can reach it, but there are (many) situations where internal is not enough and you need to make it public.

        internal ProductState InnerState
        {
            get { return _state; }
        }
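
A repository can then hand the inner state to EntityFramework. Below is a minimal sketch, assuming a DbContext named MyDataContext with a ProductStates DbSet (both names are hypothetical) and a repository living in the same assembly as Product so it can reach the internal InnerState:

    public class ProductRepository
    {
        private readonly MyDataContext _context;

        public ProductRepository(MyDataContext context)
        {
            _context = context;
        }

        public void Add(Product product)
        {
            // Attach the inner state object so EntityFramework starts tracking it.
            _context.ProductStates.Add(product.InnerState);
        }

        public Product Get(Guid id)
        {
            // Revive the domain entity from its persisted state.
            return new Product(_context.ProductStates.Find(id));
        }
    }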

Updating EntityFramework State objects before DbContext Saves their state

In most of our projects we use Entity Framework as ORM. It works ok in most cases.
We always try to hide the state objects as much as possible by encapsulating them in Domain Entities. Repositories can be used to retrieve these Domain Entities.

For a project, we needed to store serialized data in a state object. These are some reasons why we chose to store data serialized:

  • The data structure can vary by entity
  • There is no need to query this data
  • It is a complex structure and would require lots of tables to store it deserialized

We need to make sure this serialized data is always up to date before saving.
In a first attempt, we serialized the state on every command on the Entity.
As the API of the Entity grew, the number of serializations increased. It wasn’t a performance issue yet, but it also wasn’t a piece of code to be proud of.
So we started brainstorming and came to the following solution.

We created the following interface:

public interface ICanBeOutOfSync
{
    void SyncMe();
}

All state objects with serialized state implement this interface.

Now we need to implement this method on our state objects. We do not want a reference from a state object to the entity, so we added a method on the state object through which the entity can register an Action that syncs the state:

public class MyEntityState : ICanBeOutOfSync
{
    public void SyncMe()
    {
        _syncMethod();
    }

    private Action _syncMethod;
    public void RegisterSyncMethod(Action syncMethod)
    {
        _syncMethod = syncMethod;
    }
}
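
On the entity side, registering the sync method could look roughly like this. This is a sketch: the SerializedData property on the state, the ComplexStructure type and the use of Json.NET are assumptions, not the actual project code.

public class MyEntity
{
    private readonly MyEntityState _state;
    private readonly ComplexStructure _data;   // hypothetical in-memory representation of the serialized data

    public MyEntity(MyEntityState state)
    {
        _state = state;

        // Deserialize once when the entity is revived (using Newtonsoft.Json).
        _data = JsonConvert.DeserializeObject<ComplexStructure>(state.SerializedData);

        // Serialize the in-memory data back into the state right before SaveChanges() runs.
        _state.RegisterSyncMethod(() => _state.SerializedData = JsonConvert.SerializeObject(_data));
    }
}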

Now that we can call SyncMe() on the state object, we want to force that this method is called before SaveChanges() is called on the DbContext.

public class MyDataContext : DbContext
{
    public override int SaveChanges()
    {
        SyncEntitiesWhoCanBeOutOfSync();

        return base.SaveChanges();
    }

    private void SyncEntitiesWhoCanBeOutOfSync()
    {
        var syncableEntities = ChangeTracker.Entries().Where(e => e.Entity.GetType().GetInterfaces().Any(x => x == typeof(ICanBeOutOfSync)));

        foreach (var syncableEntity in syncableEntities)
        {
            ((ICanBeOutOfSync)syncableEntity.Entity).SyncMe();
        }
    }
}

The SaveChanges() of the DbContext is overridden and we make sure all Entities are synced.
We ask the ChangeTracker for all ICanBeOutOfSync Entities and call SyncMe() on all Entities to make sure they update their serialized data. When the serialized data is changed, the ChangeTracker will set the state to Modified.
When syncing is completed, we can call the SaveChanges() of the DbContext and let EntityFramework do its work.