Author: Vincent Keizer

EntityFrameworkCore and IDesignTimeDbContextFactory

In one of my first attempts at using Entity Framework Core, I quickly ran into the following error.
This problem appears when your DbContext is in a different project than your web project.

Unable to create an object of type ‘….DbContext’. Add an implementation of ‘IDesignTimeDbContextFactory’ to the project, or see https://go.microsoft.com/fwlink/?linkid=851728 for additional patterns supported at design time.

I fixed this by implementing the IDesignTimeDbContextFactory as follows:

    public class YourDbContextDesignTimeFactory : IDesignTimeDbContextFactory<YourDbContext>
    {
        public YourDbContext CreateDbContext(string[] args)
        {
            var optionsBuilder = new DbContextOptionsBuilder<YourDbContext>();
            optionsBuilder.UseSqlServer(@"ConnectionStringGoesHere");

            return new YourDbContext(optionsBuilder.Options);
        }
    }

A better solution would be storing the connection string in your appsettings file.

        IConfigurationRoot configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        var connectionString = configuration.GetConnectionString("YourDbContext");

        var builder = new DbContextOptionsBuilder<YourDbContext>();
        builder.UseSqlServer(connectionString);

        return new YourDbContext(builder.Options);

The appsettings.json file would look something like

{
  "ConnectionStrings": {
    "YourDbContext": "ConnectionStringGoesHere"
  }
}

Posting a form with files with jQuery Ajax

When a form includes files, the serialize() method on a form element is not going to help you.
The serialize() method serializes a form to a URL-encoded string, but file inputs are not included.
The trick is to use FormData.

var jqueryForm = $("form");
var formAsHtmlElement = jqueryForm[0];
var formData = new FormData(formAsHtmlElement);

The form data can be posted as follows

$.ajax({
    type: jqueryForm.attr("method"),
    url: jqueryForm.attr("action"),
    data: formData,
    cache: false,
    contentType: false,
    processData: false,
    success: function (data) {
        ....
    }
});

The contentType: false and processData: false settings are very important: they stop jQuery from processing the data and setting its own content type, so the browser sends the request as “multipart/form-data” with the proper boundary.
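
To make clear why FormData succeeds where serialize() falls short, here is a minimal sketch (the field names are made up for illustration; it assumes an environment where FormData and Blob are globals, i.e. a browser or Node 18+):

```javascript
// Build a FormData by hand, mixing a regular field and a file part.
var formData = new FormData();
formData.append("name", "Vincent");
formData.append("attachment", new Blob(["file contents"]), "notes.txt");

// Every entry is kept, including the file part that serialize() would drop.
var keys = [];
formData.forEach(function (value, key) {
    keys.push(key);
});
console.log(keys); // ["name", "attachment"]
```

When posted with the $.ajax settings above, the browser serializes these entries as multipart/form-data, one part per entry.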

EF Code First Index Column not created

A while back I tried to create a unique index on a column.
The configuration file looked something like this.

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasColumnAnnotation("Alias", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

This resulted in the following migration.

            CreateTable(
                "dbo.Departments",
                c => new
                    {
                        Id = c.Guid(nullable: false),
                        Alias = c.String(
                            annotations: new Dictionary<string, AnnotationValues>
                            {
                                { 
                                    "Alias",
                                    new AnnotationValues(oldValue: null, newValue: "IndexAnnotation: { Name: IX_Alias, IsUnique: True }")
                                },
                            }),
                        Name = c.String(nullable: false),
                    })
                .PrimaryKey(t => t.Id);

Which seemed fine. It looked like it did what it was supposed to do.
But when the migration was run, no index appeared. So I started googling about indexes and came across the following:

Columns that are of the large object (LOB) data types ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image cannot be specified as key columns for an index.

So I limited the Alias to 50 characters in the configuration file:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).HasColumnAnnotation("Alias", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

But still no index. So I continued my search on the internet and finally found the problem.
It is the name argument of the HasColumnAnnotation method. This should be set to “Index” when you want to create an index. This seems a bit unnecessary to me when the second argument is an IndexAnnotation. So once again I changed my configuration file:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

The migration file generated:

            CreateIndex("dbo.Departments", "Alias", unique: true);

Now that I know the HasColumnAnnotation name must be “Index”, I would recommend creating an extension method for creating unique indexes:

    public static class PrimitivePropertyConfigurationExtensions
    {
        public static PrimitivePropertyConfiguration IsUnique(this PrimitivePropertyConfiguration configuration)
        {
            return configuration.HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute { IsUnique = true }));
        }
    }

And you can use it as follows:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).IsUnique();
            Property(p => p.Name).IsRequired();
        }
    }

Sass mixin for transparent gradient

I’m a big fan of Sass.
Not just because of the awesome name, but because it allows me to use variables and mixins.
This keeps my code DRY.

When I was working on a mixin for creating linear gradients, I kept running into problems with older (IE) browsers.
My first attempt:

    @mixin linear-gradient($top-color, $bottom-color, $opacity) {
        background: -moz-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* FF3.6-15 */
        background: -webkit-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* Chrome10-25,Safari5.1-6 */
        background: linear-gradient(to bottom, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='rgba($top-color, $opacity)', endColorstr='rgba($bottom-color, $opacity)',GradientType=0 ); /* IE6-9 */
    }

This resulted in the variable values not being rendered in the filter property: Sass does not substitute variables inside a plain string, so they were emitted literally.

My second attempt:

    @mixin linear-gradient($top-color, $bottom-color, $opacity) {
        background: -moz-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* FF3.6-15 */
        background: -webkit-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* Chrome10-25,Safari5.1-6 */
        background: linear-gradient(to bottom, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#{rgba($top-color, $opacity)}', endColorstr='#{rgba($bottom-color, $opacity)}',GradientType=0 ); /* IE6-9 */
    }

This did trigger the Sass compiler to render the variable values, but they were rendered as rgba values, and the IE gradient filter does not accept rgba values.
After some google searches I found “ie-hex-str”. This outputs the hex code (with alpha channel) of a color instead of the rgba value.
So this resulted in the following mixin which was working well for me:

    @mixin linear-gradient($top-color, $bottom-color, $opacity) {
        background: -moz-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* FF3.6-15 */
        background: -webkit-linear-gradient(top, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* Chrome10-25,Safari5.1-6 */
        background: linear-gradient(to bottom, rgba($top-color, $opacity) 0%, rgba($bottom-color, $opacity) 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#{ie-hex-str(rgba($top-color, $opacity))}', endColorstr='#{ie-hex-str(rgba($bottom-color, $opacity))}',GradientType=0 ); /* IE6-9 */
    }

The mixin can be used as follows:

    .background-image-overlay {
        @include linear-gradient($black, $white, .5);
    }

Running gulp tasks on a buildserver

With the newest ASP.NET release coming, Microsoft is removing its own optimization framework and pushing developers to use Gulp, NPM and Bower.
I do not want to manually minify and bundle my css and js files, so I want a Gulp task to do it.
My NPM file (package.json) looks like:

{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": true,
  "devDependencies": {
    "bower": "1.7.7",
    "gulp": "3.9.1",
    ....
  }
}

My bower file (bower.json) looks like

{
  "name": "ASP.NET",
  "private": true,
  "dependencies": {
    "jquery": "2.2.3",
    "jquery-validation-unobtrusive": "3.2.6",
    "bootstrap": "3.3.6",
    ....
  }
}

I also do not want my bundles to be source controlled.
It is a task of the buildserver to prepare my solution for release.
This means that the buildserver should be able to run the same Gulp tasks as we do in our development environment.

To let the buildserver run Gulp tasks, the following software should be installed on it: Node.js (which includes NPM) and Git (Bower needs it to fetch packages).

When installing Git, set the install option to “Run Git from the Windows Command Prompt”.
I’d like to have all my configuration source controlled, so I create a targets file containing the targets for running NPM, Bower and Gulp, and import this file in my web project.

  <!-- Gulp -->
  <Target Name="RunGulpTask" AfterTargets="BowerInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running gulp task 'default'" Importance="high" />
    <Exec Command="node_modules\.bin\gulp" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Bower -->
  <Target Name="BowerInstall" AfterTargets="NpmInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running bower install" Importance="high"/>
    <Exec Command="node_modules\.bin\bower install" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Npm -->
  <Target Name="NpmInstall" BeforeTargets="BeforeBuild" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running npm install" Importance="high"/>
    <Exec Command="npm install" WorkingDirectory="$(ProjectDir)" />
  </Target>

I do not want these tasks to run in my development environment (because the Task Runner of Visual Studio takes care of it), so I added a RunGulpTasks parameter. When it is provided (by adding /p:RunGulpTasks=true to the msbuild command), the targets will be run before the solution is built.

My gulpfile.js looks like:

var gulp = require('gulp'),
.....
gulp.task('default', ['bundleCss', 'minCss', 'bundleJs', 'minJs'], function() {});

I did not provide a Gulp task to run, so Gulp runs the default task by convention. My default task depends on all tasks I want to be run on the buildserver.
The buildserver now bundles and minifies my css and js files by using the same Gulp tasks.

Picking up jQuery DOM manipulations with AngularJS

AngularJS and jQuery are two of many frameworks that manipulate the DOM.
They both have their own way of doing this.
I prefer AngularJS for its data-driven approach.
Mixing jQuery DOM manipulation and AngularJS data binding is not a success; it is a bad practice and should be avoided.
But sometimes it just cannot be avoided and you have to deal with it. In a project I was working on, jQuery was used for DOM manipulation. My task was to build a new feature with AngularJS which uses the same part of the DOM. Completely refactoring it was not an option, so I had to deal with DOM changes triggered by jQuery. AngularJS won’t pick up expressions when they are injected into the DOM. I figured out the following solution.

Monitoring (a section of) the DOM

In my first attempt, I tried to monitor a section of the dom. Every change will be picked up and compiled.

angular.module('myApp', [], function ($compileProvider) {
    $compileProvider.directive('compile', function ($compile) {
        return function (scope, element, attrs) {
            scope.$watch(
                function (scope) {
                    return element.html();
                },
                function (value) {
                    $compile(element.contents())(scope);
                }
            );
        };
    });
});

This directive is triggered by adding the compile attribute to an element in the app scope.

<div ng-app="myApp">
    <div compile>
    ....
    </div>
</div>

At first, I thought it worked perfectly.
It detected every change. But when I was adding more Angular expressions, I noticed a performance drop.
Every data binding triggered a recompile of the watched section, which caused the performance drop. Though it wasn’t an issue yet, I just did not think it was a good solution.
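
To illustrate the cost, here is a toy model of a digest loop (a simplification for illustration only, not Angular's actual implementation): each digest cycle re-evaluates every watch expression, so a watcher on element.html() re-serializes the whole subtree on every cycle, no matter what changed.

```javascript
// Toy digest loop: every cycle evaluates each watch expression to detect changes.
var serializations = 0;
var fakeElement = {
    // Stand-in for element.html(): serializing the subtree is the expensive part.
    html: function () {
        serializations++;
        return "<div>...</div>";
    }
};

var watchers = [{ watchFn: function () { return fakeElement.html(); }, last: undefined }];

function digest(cycles) {
    for (var i = 0; i < cycles; i++) {
        watchers.forEach(function (w) {
            var value = w.watchFn();
            if (value !== w.last) {
                w.last = value;
            }
        });
    }
}

// 100 digests (e.g. 100 unrelated data-binding updates elsewhere in the app)
// mean 100 full subtree serializations, even though the section never changed.
digest(100);
console.log(serializations); // 100
```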

Monitoring an attribute of an element in the DOM

In my second attempt, I tried to limit the compiles to a minimum.
I was asking myself what I could use as a trigger for an update. I realized that the correct answer is “it depends”. It depends seems to be the answer to every question in developer land. I tried to keep the trigger as flexible as possible and came up with this solution.

angular.module('myApp', [], function ($compileProvider) {
    $compileProvider.directive('compileWhenAttributeOfChildChanges', function ($compile) {
        return function (scope, element, attrs) {
            scope.$watch(
                function (scope) {
                    var attributeToWatch = element.attr("compile-when-attribute-of-child-changes");
                    return element.children().attr(attributeToWatch);
                },
                function (value) {
                    $compile(element.contents())(scope);
                }
            );
        };
    });
});

This directive is triggered by elements with the attribute compile-when-attribute-of-child-changes. It uses the value of this attribute as the attribute to watch on its child element. When that attribute changes value, a compile is triggered.

<div ng-app="myApp" compile-when-attribute-of-child-changes="watch">
    <div watch="valueBeingWatched">
    ....
    </div>
</div>

In this example, a compile is triggered when the “watch” attribute changes.
This solution worked well and there are no unnecessary compiles.

JavaScript unit tests and Visual Studio

Although all my code is always completely bug free (I love sarcasm), I’m still a big fan of unit tests.
As logic moves from server-side to client-side, I’m very happy to see that unit testing for client-side code is getting better supported and more common these days.

Jasmine

Jasmine is a behavior-driven JavaScript testing framework. It allows you to test your client-side code.
It has a clear and easy to understand syntax and is very popular in the community. Even Angular, which is designed with testing in mind, uses Jasmine and has tons of Jasmine examples on their website.

Example

In the example below I have a very simple math function, named multiply. This function is stored in math.js; it accepts two arguments and should multiply argument 1 by argument 2.

function multiply(a, b)
{
    return a * b;
}

The test below makes sure that the multiply function works as expected.

/// <reference path="math.js" />

describe('multiply', function () {
    it('multiplies a by b', function () {
        var a = 2;
        var b = 4;
        var result = multiply(a, b);
        expect(result).toEqual(8);
    });
});

It requires math.js to be loaded. Make sure you reference all required files needed to run the test.
Of course one test is not enough to prove this function is completely bug free, but this is just an example. Normally I would write tests for all edge cases to make sure it handles everything as expected.
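
For instance, a few of those edge cases for multiply, written as plain assertions here so the snippet runs without a test runner (in Jasmine each would be its own it block):

```javascript
function multiply(a, b) {
    return a * b;
}

// Edge cases beyond the happy path.
console.assert(multiply(0, 5) === 0, "multiplying by zero yields zero");
console.assert(multiply(-2, 4) === -8, "a negative operand flips the sign");
console.assert(Number.isNaN(multiply(2, undefined)), "a missing argument yields NaN");
```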

Visual Studio

My IDE is Visual Studio and I also use it for all my client-side code (I’m experimenting with TypeScript and enjoying it a lot).
Resharper, one of my most important Visual Studio extensions, does support client-side unit testing, but I do not have a Resharper license at home, so I was searching for a different solution until I found Chutzpah.

This Visual Studio extension adds JavaScript unit testing support and integration with the Visual Studio test runner.
It combines server-side unit tests and client-side unit tests.

The result is as follows (screenshot: the client-side test runner integrated in Visual Studio):

Pretty sweet!

TFS: Discard changesets when merging to branches

When changes are branch specific and should not be merged (back) to other branches, these changes should be discarded.

The following TFS command will discard changesets:

tf merge $/Project/SourceBranch $/Project/TargetBranch /discard /recursive /version:C10000~C10000

This example command discards pending merge changesets from SourceBranch to TargetBranch.
It discards changeset 10000. The version is a range (from~to), so you can discard multiple changesets at once, for example /version:C10000~C10005.

When the command has finished, you still need to check in the merge.

WebForms and triggering submit buttons on pressing the enter key

Default button

Filling in forms using only the keyboard is very popular and should get more attention than most developers give it.
I have seen a lot of weird JavaScript functions in a WebForms page to fake a button click when a user presses the enter key.
This is really unnecessary. There is a nice feature which allows you to set the submit button for a (sub)section of a WebForms page.
Every container element (e.g. Panel) supports this feature. Pressing enter on any focusable element will trigger a click on the button defined by the DefaultButton property of the parent container element.

<asp:Panel ID="pnlFormName" runat="server" DefaultButton="btnSubmit">
    <asp:TextBox ID="txtName" runat="server" />
    <asp:TextBox ID="txtMiddleName" runat="server" />
    <asp:TextBox ID="txtLastName" runat="server" />
    <asp:Button ID="btnSubmit" runat="server" />
</asp:Panel>

Tab order

Another thing to keep in mind is the tab order. The order can be set with the TabIndex property of an input element (or any other element which should be focusable).
The default tab order will fit most cases; it is equal to the order in which the elements appear.

<asp:Panel ID="pnlFormName" runat="server">
    <asp:TextBox ID="txtName" runat="server" TabIndex="1" />
    <asp:TextBox ID="txtMiddleName" runat="server" TabIndex="2" />
    <asp:TextBox ID="txtLastName" runat="server" TabIndex="3" />
    <asp:Button ID="btnSubmit" runat="server" />
</asp:Panel>

Setting up Git source control on a QNAP NAS

When I start a new project, the first thing I do is set up source control. Source control is key!
I know all my source code is safe and when I make mistakes I can easily do a rollback.

So why not take it seriously and use Git?
GitHub is great, but it is not free for private repositories. Though I’m a big fan of open source, not all my projects are open source.

So I wanted to configure Git on my NAS, an old QNAP TS-410 (currently running on firmware 4.2.0).
This is how I configured Git for a QNAP NAS.

Install Git

First of all, install the Git package from the QNAP App Center (currently version 2.1.0) and make sure it is turned on.

There seems to be something wrong with the QNAP Git package, because a manual action is required.
Open an SSH connection to your NAS.

If you’re not familiar with SSH, you can download a client (e.g. PuTTY) and open a new connection by entering the IP of your NAS.

Now log in with your admin account and enter the following commands:


#  cd /usr/bin
#  ln -s /Apps/git/bin/git-upload-pack
#  ln -s /Apps/git/bin/git-receive-pack

This fixes an issue with the git-upload-pack and git-receive-pack not being found.

Hosting your repositories

Next, create a new share for your repositories.
I created a new share named ‘git’, but you’re free to choose.

Again, open an SSH connection and go to the newly created share:

#  cd /share/MD0_DATA/git

If this does not work, the MD0_DATA folder is probably named differently. Go to the /share folder and check the folder name with the following command:

#  ls -la

This will show a full list of all items and you can figure out what the right name is.

In the ‘git’ folder, enter the following command to create a new repository:

git init --bare NameOfMyRepository

This creates a new repository with the name ‘NameOfMyRepository’. It will automatically create a new subfolder with an identical name.
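
If you want to verify what --bare produces before doing this on the NAS, you can try it locally (a sketch assuming git is installed; the /tmp path is just for the demo). A bare repository has no working tree, so the git internals sit directly in the folder:

```shell
# Create a throwaway bare repository locally (demo only).
rm -rf /tmp/git-demo && mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init --bare NameOfMyRepository

# No working tree: HEAD, refs/ and objects/ live at the top level.
ls NameOfMyRepository
```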

Cloning the repository

On your development machine, open your git tool and go to the directory where you want to work.
Now enter the following command:

git clone admin@YourIP:/share/git/NameOfMyRepository

This will ask for the admin’s password.
You can also use automatic login by generating an SSH key file, but I do not want that for security reasons.

Once entered, the repository is cloned in a folder named ‘NameOfMyRepository’ and you’re good to go!

The Git controls integrated in VS2015 do not work with SSH yet, but the team is working on it. You can read about it here