
Setting up Git source control on a QNAP NAS

When I start a new project, the first thing I do is set up source control. Source control is key!
It keeps all my source code safe, and when I make a mistake I can easily roll back.

So why not take it seriously and use Git?
GitHub is great, but it is not free for private repositories. Though I’m a big fan of open source, not all my projects are open source.

So I wanted to configure Git on my NAS, an old QNAP TS-410 (currently running on firmware 4.2.0).
This is how I configured Git for a QNAP NAS.

Install Git

First of all, install the QNAP package from the app center (currently version 2.1.0) and make sure it is turned on.

There seems to be something wrong with the QNAP Git package, because a manual action is required.
Open an SSH connection to your NAS.

If you’re not familiar with SSH, you can download a client (e.g. PuTTY) and open a new connection by entering the IP of your NAS.

Now log in with your admin account and enter the following commands:

#  cd /usr/bin
#  ln -s /Apps/git/bin/git-upload-pack
#  ln -s /Apps/git/bin/git-receive-pack

This fixes an issue with the git-upload-pack and git-receive-pack not being found.

Hosting your repositories

Next, create a new share for your repositories.
I created a new share named ‘git’, but you’re free to choose.

Again, open an SSH connection and go to the newly created share:

#  cd /share/MD0_DATA/git

If this does not work, the MD0_DATA folder is probably named differently. Go to the /share folder and check the folder name with the following command:

#  ls -la

This will show a full list of all items and you can figure out what the right name is.

In the ‘git’ folder, enter the following command to create a new repository:

git init --bare NameOfMyRepository

This creates a new repository with the name ‘NameOfMyRepository’. It will automatically create a new subfolder with an identical name.
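As a quick illustration, this is what creating a bare repository looks like on any machine with git installed (the /tmp path is only for this demo; on the NAS you would run it inside your share folder):

```shell
# Demo in a temporary folder; on the NAS this would be /share/MD0_DATA/git
rm -rf /tmp/git-bare-demo && mkdir -p /tmp/git-bare-demo
cd /tmp/git-bare-demo
git init --bare NameOfMyRepository
# A bare repository has no working copy: HEAD, config and refs/ sit at its top level
ls NameOfMyRepository
```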

Cloning the repository

On your development machine, open your git tool and go to the directory where you want to work.
Now enter the following command:

git clone admin@YourIP:/share/git/NameOfMyRepository

This will ask for the admin’s password.
You can also use automatic login by generating an SSH key file, but I do not want that for security reasons.

Once entered, the repository is cloned in a folder named ‘NameOfMyRepository’ and you’re good to go!
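You can simulate this clone locally (no NAS or SSH required) by cloning a bare repository via its file path; over SSH only the URL part changes:

```shell
# Local simulation; over SSH you would clone admin@YourIP:/share/git/NameOfMyRepository
rm -rf /tmp/git-clone-demo && mkdir -p /tmp/git-clone-demo
cd /tmp/git-clone-demo
git init --bare NameOfMyRepository
git clone NameOfMyRepository MyWorkingCopy
# The working copy contains a .git folder pointing at the (still empty) bare repository
ls MyWorkingCopy
```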

The Git controls integrated in VS2015 do not work with SSH yet, but the team is working on it.

Keep EntityFramework state objects valid by encapsulating it in domain entities

There are already a lot of great posts (f.e. by Vaughn Vernon) about how to use EntityFramework with Domain Entities.
So why am I writing about it? Because I think it is important not to leak your state objects to your entire application.

EntityFramework cannot map private fields, so you need to make all mapped properties public.
The domain entity is responsible for making sure that the state object is always in a valid state.
EntityFramework does not work with ValueObjects either, so you cannot use ValueObjects in a clean way in your state. Another plus for Domain Entities.
A state object does not necessarily need to be an entity; it could also be a ValueObject when you don’t care about uniquely identifying it. The state will still have an id, but you can hide it in your ValueObject.

In this example I’m talking about products; they are stored with Entity Framework as ProductState entities.

    public class ProductState
    {
        public Guid Id { get; set; }
        public string ProductCode { get; set; }
        public string ProductTitle { get; set; }
    }


It starts with the constructors. I usually create two: one for reviving the entity from the database and one for newly constructing the entity. Every constructor should result in a valid Product and ProductState.
The following constructor revives the product from its state.

    public class Product
    {
        private readonly ProductState _state;

        public Product(ProductState productState)
        {
            Assert.NotNull(productState, nameof(productState));

            _state = productState;
        }
    }

The state is provided by the repository. It is already in a valid state, so a null check is sufficient.
Noticed the nameof? This new C# 6 trick can come in very handy, e.g. for logging purposes.
In this example, a product is valid when it has a ProductCode; this is the unique identifier of the product. The constructor for creating a new product is as follows:

    public class Product
    {
        private readonly ProductState _state;

        public Product(ProductCode productCode)
        {
            Assert.NotNull(productCode, nameof(productCode));

            _state = new ProductState
            {
                Id = Guid.NewGuid(),
                ProductCode = productCode.Value,
            };
        }
    }

The Product entity is responsible for instantiating a ProductState and making sure it is in a valid state. The Id is the surrogate primary key for EntityFramework; the product code is the natural identifier of the product.
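The constructor above assumes a ProductCode value object (note the productCode.Value call). The original post does not show its definition, but a minimal sketch could look like this:

```csharp
// Hypothetical sketch of the ProductCode value object assumed above;
// the validation rule shown here is illustrative.
public class ProductCode
{
    public ProductCode(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("A product code cannot be empty.", nameof(value));
        Value = value;
    }

    public string Value { get; private set; }
}
```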

Exposing data

Now that we have instantiated a Product, we can use it in our application.
The private (and readonly) state is used as the backing variable for every get or set method/property.
The methods in Product look as follows:

        public ProductCode GetProductCode()
        {
            return new ProductCode(_state.ProductCode);
        }

        public string GetProductTitle()
        {
            return _state.ProductTitle;
        }

        public void SetProductTitle(string title)
        {
            Assert.NotNull(title, nameof(title));
            if (title.Length > 255)
                throw new ArgumentException("ProductTitle cannot be more than 255 characters.");
            _state.ProductTitle = title;
        }

There is no Set method for ProductCode? Correct! The ProductCode is the identifier for the Product, so it is immutable. A different product code means a different product, which requires instantiating a new Product.
The ProductTitle does not identify the Product, so there is a set method for it. This set method contains some business rules; in this example the ProductTitle cannot be null and should not be more than 255 characters. This makes sure that the state object can never get an invalid title in the ProductTitle property.
I prefer void as the return type for set methods. When the provided data is invalid, I throw exceptions. Returning a bool to indicate whether the operation was successful has some disadvantages, e.g.:
– It does not give any detail of what went wrong
– It suggests we can still continue normally.
In this example I use methods for the Get operations; these could as well have been properties.

Attaching the State Entity to EntityFramework

Unfortunately, there is a downside to this. Now that the Product creates the ProductState, we need to attach it to the DbContext before EntityFramework will pick it up.
So we do need to expose the inner state entity. I always try to make it internal so not everybody can reach it, but there are (many) situations where internal is not enough and you need to make it public.

        internal ProductState InnerState
        {
            get { return _state; }
        }

Updating EntityFramework State objects before DbContext Saves their state

In most of the projects, we use Entity Framework as ORM. It works ok in most cases.
We always try to hide the state object as much as possible, we try to encapsulate the state objects with Domain Entities. Repositories can be used to retrieve these Domain Entities.

For a project, we needed to store serialized data in a state object. These are some reasons why we chose to store data serialized:

  • The data structure can vary by entity
  • There is no need to query this data
  • It is a complex structure and would require lots of tables to store it deserialized

We need to make sure this serialized data is always up to date (serialized) before saving.
In a first attempt, we serialized the state on every command on the Entity.
As the API of the Entity grew, the number of serializations increased. It wasn’t a performance issue yet, but it also wasn’t a piece of code to be proud of.
So we started brainstorming and came to the following solution.

We created the following interface:

public interface ICanBeOutOfSync
{
    void SyncMe();
}

All state objects with serialized state implement this interface.

Now we need to implement this method on our state objects. We do not want a reference from a state object to the entity, so we provide a method on the state object through which the entity can register an Action to sync the state:

public class MyEntityState : ICanBeOutOfSync
{
    private Action _syncMethod;

    public void RegisterSyncMethod(Action syncMethod)
    {
        _syncMethod = syncMethod;
    }

    public void SyncMe()
    {
        if (_syncMethod != null)
            _syncMethod();
    }
}

Now that we can call SyncMe() on the state object, we want to force that this method is called before SaveChanges() is called on the DbContext.

public class MyDataContext : DbContext
{
    public override int SaveChanges()
    {
        SyncEntitiesWhoCanBeOutOfSync();
        return base.SaveChanges();
    }

    private void SyncEntitiesWhoCanBeOutOfSync()
    {
        var syncableEntities = ChangeTracker.Entries().Where(e => e.Entity is ICanBeOutOfSync);

        foreach (var syncableEntity in syncableEntities)
            ((ICanBeOutOfSync)syncableEntity.Entity).SyncMe();
    }
}
The SaveChanges() of the DbContext is overridden and we make sure all Entities are synced.
We ask the ChangeTracker for all ICanBeOutOfSync Entities and call SyncMe() on all Entities to make sure they update their serialized data. When the serialized data is changed, the ChangeTracker will set the state to Modified.
When syncing is completed, we can call the SaveChanges() of the DbContext and let EntityFramework do its work.
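One way to wire this up (a sketch, not taken from the original post; MyEntity, SerializedData and Serialize are illustrative names) is to let the domain entity register its serialization routine when it receives the state object:

```csharp
// Hypothetical domain entity registering its sync action on the state object.
public class MyEntity
{
    private readonly MyEntityState _state;

    public MyEntity(MyEntityState state)
    {
        _state = state;
        // When the DbContext calls SyncMe(), this lambda serializes the
        // current in-memory structure back into the state object.
        _state.RegisterSyncMethod(() => _state.SerializedData = Serialize());
    }

    private string Serialize()
    {
        // serialize the entity's complex structure here
        return "...";
    }
}
```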

PowerShell CmdLets

PowerShell is a powerful language and can be used in several situations.
One of these situations is the deployment process (continuous delivery). It is also integrated in several systems, e.g. NuGet, which uses PowerShell for post-installation processing of packages.

I use PowerShell for the following cases (in continuous delivery):

  • Replace/rename config files
  • Replace variables in config files
  • Call/post to webservices
  • Run SQL commands

Note: external modules need to be imported for calling webservices and running SQL commands. This can be done with the Import-Module CmdLet.

PowerShell can be used as a scripting language, but you can also create CmdLets: commands which can be invoked from the PowerShell command line.

CmdLets are created by inheriting your class from Cmdlet, which is available in the System.Management.Automation namespace.

CmdLets use the Verb-Noun naming convention. The Verb and Noun are provided as arguments in the Cmdlet attribute which decorates your CmdLet class.
For example, the VerbsCommunications class comes with the following fields:

  • Connect
  • Disconnect
  • Read
  • Receive
  • Send
  • Write

You are not restricted to these Verbs, you can also use custom Verbs.

The ProcessRecord method is where the work happens: it is called once for every item in the pipeline.

[Cmdlet(VerbsCommunications.Send, "Greeting")]
public class SendGreetingCommand : Cmdlet
{
    [Parameter(Mandatory = true)]
    public string Name { get; set; }

    protected override void ProcessRecord()
    {
        WriteObject("Hello " + Name + "!");
    }
}

This CmdLet example can be called from powershell command line as follows:

Send-Greeting -Name "Vincent"   # Outputs: Hello Vincent!

Command line arguments are automatically bound to properties which are decorated with the Parameter attribute. A parameter is mandatory when the Mandatory argument of its Parameter attribute is set to true.

Maybe it’s interesting to know that the package manager console in Visual Studio also uses PowerShell; commands like Add-Migration and Update-Database are all CmdLets.

So if you haven’t given PowerShell a try, you really should!

Out Of Memory Exceptions when using Images in Android

I haven’t talked about my app for a while. Development was going quite well: as the learning curve of working with Android flattened, the development effort decreased and I started to embrace the Android development framework and its lifecycle.
The functionality of my app increased rapidly, with tons of new features, and the look and feel got better and better.

That’s when crashes of my app started to occur at random moments.
The debugger couldn’t help me at all, and all I got was this exception message:

 Out of memory: Heap Size=49159KB, Allocated=40884KB, Limit=49152KB

I couldn’t really figure out why; I did use some memory by drawing some images, but it wasn’t huge.
So I googled it, and it seems to be a common problem: a lot of people run into these memory issues.

The garbage collector collects all elements which are no longer used. Unfortunately, views hold a callback, which is why the garbage collector cannot detect that they aren’t used anymore.

I found some code which unbinds all views and their descendants. I tweaked it a bit and rewrote it in C#.

protected void UnbindDrawables(View view)
{
    if (view == null) { return; }

    if (view.Background != null)
        view.Background.Callback = null;

    if (view is ViewGroup)
    {
        for (int i = 0; i < ((ViewGroup)view).ChildCount; i++)
            UnbindDrawables(((ViewGroup)view).GetChildAt(i));

        if (!(view is AdapterView))
            ((ViewGroup)view).RemoveAllViews();
    }
}

Now that we’ve got this method, we can call it in the OnDestroy of an Activity.
In this example, I use my LayoutContainer view, which is my outer wrapper view which contains all views of the current layout. Feel free to use your own view id, but make sure the images you want to unbind are in this view.

protected override void OnDestroy()
{
    base.OnDestroy();
    UnbindDrawables(FindViewById(Resource.Id.LayoutContainer));
}

When I implemented this in all activities (perhaps consider a BaseActivity), the memory issues were gone and I have not seen them since.
A lot of people suggest adding android:largeHeap="true" to your manifest file so your app can use a larger heap. I don’t consider that a good solution. You should try to keep your application clean and only use the memory you really need. Just clean everything up as you’re supposed to.

Dumping objects to string for logging purposes in C#

Logging is very important and in many cases not given as much attention as it deserves.
When connecting to external datasources, logging becomes crucial.
Whenever data provided by an external source is not as expected, you would like to know what went wrong.

When objects are dumped to the log, a common approach is to override the ToString method:

    public class Phone
    {
        public Guid Id { get; set; }

        public string Brand { get; set; }

        public override string ToString()
        {
            return string.Format("Phone with ID {0} and Brand {1}", Id, Brand);
        }
    }

This works fine for objects with only a few properties.
When the object becomes complex, overriding the ToString method becomes unclear and a lot of work.
Instead, we want to see the data of the entire object without doing lots of work for it.
Let’s create an interface for dumping objects to string, which we can use for logging purposes.

public interface IObjectDumper
{
    string WriteToString(object objectToDump);
}

Objects can be dumped in several formats. You can pick whatever format you prefer, think of json, xml, ….
I prefer json. I think it is an easy to understand and readable format.
ASP.NET has its own JSON serializer (JavaScriptSerializer), but I prefer the one from Newtonsoft’s Json.NET (this package can be downloaded from NuGet):

public class JsonObjectDumper : IObjectDumper
{
    public string WriteToString(object objectToDump)
    {
        return JsonConvert.SerializeObject(objectToDump, Formatting.Indented);
    }
}

The Formatting.Indented makes sure the output is indented.
Dumping a phone object looks as follows:

{
  "Id": "d768bbc4-a1d5-441f-ae14-ebb5bef92b41",
  "Brand": "Google"
}

Using the object dumper with logging would look something like this:

    public class PhoneRepository : IPhoneRepository
    {
        private readonly IObjectDumper _objectDumper;
        private readonly IConnector _connector;
        private readonly IPhonesParser _phonesParser;

        public PhoneRepository(IObjectDumper objectDumper, IConnector connector, IPhonesParser phonesParser)
        {
            _objectDumper = objectDumper;
            _connector = connector;
            _phonesParser = phonesParser;
        }

        public IEnumerable<Phone> GetPhones()
        {
            var phones = new List<Phone>();
            string result = null;
            try
            {
                result = _connector.GetPhones();
                phones = _phonesParser.Parse(result);
            }
            catch (ParseException e)
            {
                Logging.Error(string.Format("Error parsing result for GetPhones with data: {0}", _objectDumper.WriteToString(result)), e);
            }
            catch (Exception e)
            {
                Logging.Error("Error retrieving data for GetPhones.", e);
            }
            return phones;
        }
    }

Value objects and operator overloading in C#

Value objects are small objects which are not equal by reference, but by value. They do not have an identity. Comparison of two objects is based on their values; not all values in the Value object have to take part in the comparison.

Another attribute of a Value object is that it is immutable.

An example of a Value object is a price.

public class Price
{
    private readonly double _value;

    public Price(double value)
    {
        _value = value;
    }

    public double Value
    {
        get { return _value; }
    }
}

This is a simple value object which holds one property, Value. This Value property holds the actual price.
The value is set by the constructor and stored in a readonly variable. This makes it immutable.

When validation is required, it can be added to the constructor. When an invalid value is provided as an argument, an exception should be thrown. It is the object’s responsibility to make sure it is valid. A best practice is to throw custom exceptions; e.g. when a negative value is provided, an InvalidPriceException could be thrown. This specific exception can be caught at a higher level which knows how to handle it.

Comparing two Price objects with the same value but a different reference will result in false.
They can only be compared by value by using the Value property. This means that we have to understand how the Price value object works in order to compare it, which is not a best practice. It is the responsibility of the value object to determine if it is equal to another value object of the same type.
This is where operator overloading comes to the rescue! Operator overloading allows us to override the default implementation of the operators used on the value object.

Let’s start with implementing comparison.

public static bool operator ==(Price x, Price y)
{
    if (ReferenceEquals(x, y))
        return true;
    if ((object)x == null || (object)y == null)
        return false;
    return x.Value == y.Value;
}

public static bool operator !=(Price x, Price y)
{
    return !(x == y);
}

When implementing the == operator, you are required to implement the != operator as well.
Do not forget to override the Equals method; by default, it compares objects by reference. It should return the same result as the == and != operators. When overriding Equals, also override GetHashCode.

public override bool Equals(System.Object obj)
{
    Price p = obj as Price;
    if ((object)p == null)
        return false;
    return Value == p.Value;
}

public override int GetHashCode()
{
    return Value.GetHashCode();
}

Now that we can compare Prices, the next step is to add some simple math functions like adding, subtracting, multiplying and dividing. These math operations are also the responsibility of the value object.

public static Price operator +(Price x, Price y)
{
    return new Price(x.Value + y.Value);
}

public static Price operator -(Price x, Price y)
{
    return new Price(x.Value - y.Value);
}

Remember to return a new instance of your object: the value object is immutable, so operations must never modify an existing instance.
Operator overloading is not limited to the current type; it can be implemented for any type. For multiplication and division a double is used.

public static Price operator *(Price x, double y)
{
    return new Price(x.Value * y);
}

public static Price operator /(Price x, double y)
{
    return new Price(x.Value / y);
}

These are some very simple math operations on a Price value object. The key point is that the value object itself is responsible for handling all the operations.
Although this is very basic, it is a perfect scenario for some unit tests.
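To illustrate, a small MSTest-style unit test for the operators (a sketch, assuming the Price class above) could look like this:

```csharp
// Sketch of a unit test exercising the overloaded operators on Price.
[TestMethod]
public void OperatorsOnPriceWorkByValue()
{
    // Adding two Prices returns a new Price with the summed value
    var total = new Price(10.0) + new Price(2.5);
    Assert.AreEqual(12.5, total.Value);

    // Equality is by value, not by reference
    Assert.IsTrue(new Price(5.0) == new Price(5.0));
    Assert.IsTrue(new Price(5.0) != new Price(6.0));
}
```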

Add msbuild parameters for NuGet to the TFS build process template

NuGet packages can be a great artifact of a build, and NuGet can easily be integrated in your build process.
Make sure the NuGet targets file is imported in the projects which should generate a NuGet package. Please read my previous post about NuGet to see how you can integrate NuGet in your solution.
By default, NuGet registers itself in a project file as follows:

<Import Project="$(SolutionDir)\.nuget\NuGet.targets" Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />

The targets file contains a BuildPackage target which can be triggered by setting an msbuild parameter named BuildPackage to true.
Now we need to create a property in the build process template which sets the BuildPackage parameter in the msbuild command.

Open Visual Studio and open the Team Explorer window. Click on Builds to display your TFS Builds. When you right-click on a Build, you can select “Edit Build Definition”. This will bring up the edit screen for the selected Build.
Now click on the “Process” tab. This will display which Build template is associated with the Build and what the parameters are.

Click the “show details” icon in the top right corner to see which template is used.
The template is a Windows Workflow Foundation (WF) template. The xaml file is an xml-based file which describes the workflow.
Edit the .xaml build process template in an xml editor, I prefer notepad++.
Find <x:Members> and add the following (do not remove any existing child elements):

   <x:Property Name="NuGet_BuildPackage" Type="InArgument(x:Boolean)" />

Now that we’ve added the property, we can add it to a section.
Find the <this:Process.Metadata> element and add the following item (and again, do not delete any existing child elements):

    <mtbw:ProcessParameterMetadata BrowsableWhen="Always" Category="#400 NuGet" Description="Set this to true to create NuGet packages" DisplayName="NuGet : Build Package" ParameterName="NuGet_BuildPackage" />

This will create a section called “NuGet” and add the parameter to this section. The #400 sets the priority of the section; it will be positioned in fourth place (just beneath the default sections).


Now that we can set the parameter in the build process, we need to provide it as a build argument to msbuild.
Switch from the xml editor to the Workflow editor and find “Run MSBuild for Project”.

Edit the command arguments in the properties window and add the new build property to the CommandLineArguments property:

String.Format("/p:SkipInvalidConfigurations=true {0} /p:BuildPackage={1}", MSBuildArguments, NuGet_BuildPackage)


You can now enable or disable NuGet package generation from the build process window of your build. Creating NuGet packages is triggered for every project which has the NuGet targets file imported.

When you want to push the NuGet packages to a NuGet repository, take a look at NuGet’s push command. This can also be implemented as an msbuild task.

Providing data to JavaScript functions from code behind

In my previous post, I talked about JavaScript namespaces and functions.
When using WebForms, it can be difficult to call these functions from the code behind in a nice way.
It usually requires some data to initialize the JavaScript function, for example some html element ids as trigger elements.
The ids of html elements are unpredictable (except when using static ids, but this brings in a whole bunch of different problems) and should be provided from the code behind to avoid the ugly <% %> syntax in your markup file.
I see a lot of people making use of a StringBuilder to write out a JavaScript object. This works, of course, but it is not the nicest way to do it, because you lose strong typing and intellisense.

I prefer to create a model and use a JavaScript Serializer to create a json object and provide that to a function.

public class SearchManagerOptions
{
    public string Url { get; set; }
}

.NET has its own serializer: JavaScriptSerializer. It is a very simple and straightforward serializer and does the trick.

var options = new SearchManagerOptions { Url = "someurlhere" };
var json = new JavaScriptSerializer().Serialize(options);

The result (json) will look like:

{"Url":"someurlhere"}
This is fine for most common cases.

If you want some extra control over the serialization process, the JavaScriptSerializer will not be your friend, and I recommend switching to the serializer of Json.NET. This is a third-party library (available on NuGet) which allows you to control the serialization process by decorating your properties with attributes.
For example, when I want to follow the JavaScript convention of camel-cased properties, I can use an attribute to change the output name of the property.

public class SearchManagerOptions
{
    [JsonProperty("url")]
    public string Url { get; set; }
}

The Json.NET serializer is used as follows:

var options = new Models.SearchManagerOptions { Url = "SomeUrl" };
var json = JsonConvert.SerializeObject(options);

This results in:

{"url":"SomeUrl"}

That’s even better, we now conform to the standard!
Just play around with the Json.NET library; it is full of nice serialization tricks. For example, you can exclude properties from serialization by adding the JsonIgnore attribute.

Now we need to call the JavaScript function and provide this data.
WebForms provides us with two methods for injecting scripts: RegisterClientScriptBlock and RegisterStartupScript.
What is the difference? Good question! Both method signatures are the same. The difference lies in the position where the script is injected in the page.
RegisterClientScriptBlock injects the script at the top of the form element (as one of the first children). This means that none of the html elements are rendered yet. Remember that! All your selectors won’t work unless you use a document ready event.
RegisterStartupScript injects the script at the bottom of the form element, which means that all html elements are already rendered.
I usually go for RegisterStartupScript, because I think it is a cleaner solution to inject scripts at the end of your page. It is still injected inside the form element, but that is a limitation of WebForms.

var options = new Models.SearchManagerOptions { Url = "SomeUrl" };
var json = JsonConvert.SerializeObject(options);
var script = string.Format("ViCreative.Managers.SearchManager.init({0})", json);

if (!Page.ClientScript.IsStartupScriptRegistered("SearchManagerInitialization"))
    Page.ClientScript.RegisterStartupScript(GetType(), "SearchManagerInitialization", script, true);

I do not want this script to be injected more than once, which is why I first check whether it is already registered. I would usually put “SearchManagerInitialization” in a constant, but for the clarity of this blog post I inlined it.

JavaScript Globals, Namespaces and Scopes

Everybody hates (or at least should hate) globals. They make your application vulnerable to attacks, and globals can easily be overwritten (even by mistake).

Hiding functions and variables in scopes is one way to decrease the use of globals.
But globals cannot always be avoided. For instance, when you want your application to communicate with the outside world.

I like to use namespaces to make sure that I do not overwrite any globals. It also brings structure to your application. It groups related objects and functions.

var ViCreative = ViCreative || {};
ViCreative.Managers = ViCreative.Managers || {};

This creates a ViCreative.Managers namespace (or reuses it if it already exists). I usually use the domain name as the first part of the namespace, just to make sure I don’t clash with other plugin/framework namespaces.
Always make sure you do not overwrite the namespace when creating/extending it: check if the namespace already exists!
This also allows the JavaScript files with namespace definitions to be loaded in any sequence. Now that we have a namespace, we can add objects and functions to it.

ViCreative.Managers.SearchManager = {
    _url : '',
    _value : '',

    doSearch : function() {
        // logic for searching
    },

    init : function(options) {
        // initialization logic here
        this._url = options.url;
    },

    search : function(searchValue) {
        // post searchValue and return response
        this._value = searchValue;
    }
};

This is already a lot better than using just some global methods, but we’re still not there yet.
All variables and methods are still out in the open. We need to create a scope to hide the private members and functions. In the example above, we only want to expose the init and search method. The doSearch method should not be called directly.
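To see the problem in action, here is a condensed, runnable sketch of the object-literal version: nothing is actually hidden, so "private" members can be read and overwritten from anywhere.

```javascript
var ViCreative = ViCreative || {};
ViCreative.Managers = ViCreative.Managers || {};

// Condensed sketch of the object literal above
ViCreative.Managers.SearchManager = {
    _url: '',
    init: function (options) { this._url = options.url; }
};

ViCreative.Managers.SearchManager.init({ url: '/api/search' });
// The "private" member is freely readable and writable from the outside:
console.log(ViCreative.Managers.SearchManager._url); // '/api/search'
ViCreative.Managers.SearchManager._url = 'overwritten!';
```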

Functions have their own scope and allow you to hide variables. Wrapping the SearchManager in a function allows us to create a new scope for all variables inside the SearchManager object.

ViCreative.Managers.SearchManager = (function () {
    var url = '';
    var value = '';

    var doSearch = function() {
        // logic for searching
    };

    var init = function(options) {
        // initialization logic here
        url = options.url;
    };

    var search = function(searchValue) {
        // post searchValue and return response
        value = searchValue;
    };

    return {
        init: init,
        search: search
    };
})();

The example above creates and immediately executes a function and stores the result in ViCreative.Managers.SearchManager.
The function returns an object which exposes the init and search functions. All other variables and functions are not accessible from the global scope.
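A condensed, runnable sketch of this scoped version shows that only init and search leak out (the search body here is a stand-in, since the original only contains placeholder comments):

```javascript
var ViCreative = ViCreative || {};
ViCreative.Managers = ViCreative.Managers || {};

// Condensed sketch of the scoped SearchManager above
ViCreative.Managers.SearchManager = (function () {
    var url = '';
    var search = function (searchValue) { return url + '?q=' + searchValue; };
    var init = function (options) { url = options.url; };
    return { init: init, search: search };
})();

ViCreative.Managers.SearchManager.init({ url: '/api/search' });
console.log(ViCreative.Managers.SearchManager.search('phones')); // '/api/search?q=phones'
console.log(typeof ViCreative.Managers.SearchManager.url);       // 'undefined' – url is private
```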


Frameworks, variables and other objects from the global scope which are used by this function should be provided as arguments to the function to bring them into the local scope.

ViCreative.Managers.SearchManager = (function ($) {
    // ...
})(jQuery);

This example adds jQuery to the scope of this function and stores it in $.
Try not to pollute the global scope and hide all methods and variables which are meant to be private.

JavaScript is a first-class citizen! Treat it as such!