Category: JavaScript

Posting a form with files with jQuery Ajax

When a form includes file inputs, the serialize() method on a form element is not going to help you.
The serialize() method serializes a form to a URL-encoded string, but file inputs are not included.
The trick is to use FormData.

var jqueryForm = $("form");
var formAsHtmlElement = jqueryForm[0];
var formData = new FormData(formAsHtmlElement);

The form data can be posted as follows:

$.ajax({
    type: jqueryForm.attr("method"),
    url: jqueryForm.attr("action"),
    data: formData,
    cache: false,
    contentType: false,
    processData: false,
    success: function (data) {
        ....
    }
});

Setting contentType and processData to false is very important. It prevents jQuery from setting a content type header and from converting the data to a query string, so the request is sent as “multipart/form-data”.
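If you need to send extra values that are not form fields, you can append them to the FormData object before posting. A small sketch (the field names are just examples, and FormData.get is a newer API used here only to inspect the object):

```javascript
// Build a FormData object and append values that are not part of the form.
var formData = new FormData();
formData.append('searchTerm', 'jquery');
formData.append('page', '2');

// Non-file values are always stored as strings.
console.log(formData.get('searchTerm')); // "jquery"
console.log(formData.get('page'));       // "2"
```

The appended values arrive on the server as regular form fields, next to the uploaded files.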

Running gulp tasks on a buildserver

With the newest ASP.NET release coming, Microsoft is removing its own optimization framework and pushing developers toward Gulp, NPM and Bower.
I do not want to manually minify and bundle my css and js files, so I want a Gulp task to do it.
My NPM file (package.json) looks like:

{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": true,
  "devDependencies": {
    "bower": "1.7.7",
    "gulp": "3.9.1",
    ....
  }
}

My bower file (bower.json) looks like:

{
  "name": "ASP.NET",
  "private": true,
  "dependencies": {
    "jquery": "2.2.3",
    "jquery-validation-unobtrusive": "3.2.6",
    "bootstrap": "3.3.6",
    ....
  }
}

I also do not want my bundles to be source controlled.
It is a task of the buildserver to prepare my solution for release.
This means that the buildserver should be able to run the same Gulp tasks as we do in our development environment.

The following software should be installed on the buildserver to let it run Gulp tasks: Node.js (which includes npm) and Git (required by Bower).

When installing Git, set install option to “Run GIT from the Windows Command Prompt”.
I’d like to have all my configuration source controlled, so I created a targets file containing the targets for running npm, Gulp and Bower, and imported this file in my web project.

  <!-- Gulp -->
  <Target Name="RunGulpTask" AfterTargets="BowerInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running gulp task 'default'" Importance="high" />
    <Exec Command="node_modules\.bin\gulp" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Bower -->
  <Target Name="BowerInstall" AfterTargets="NpmInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running bower install" Importance="high"/>
    <Exec Command="node_modules\.bin\bower install" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Npm -->
  <Target Name="NpmInstall" BeforeTargets="BeforeBuild" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running npm install" Importance="high"/>
    <Exec Command="npm install" WorkingDirectory="$(ProjectDir)" />
  </Target>

I do not want these tasks to run in my development environment (because the Task Runner of Visual Studio takes care of it), so I added a RunGulpTasks parameter. When this is provided (by adding /p:RunGulpTasks=true to the msbuild command), the targets run before the solution is built.

My gulpfile.js looks like:

var gulp = require('gulp'),
.....
gulp.task('default', ['bundleCss', 'minCss', 'bundleJs', 'minJs'], function() {});

I did not provide a Gulp task to run, so Gulp runs the default task by convention. My default task depends on all tasks I want to run on the buildserver.
The buildserver now bundles and minifies my css and js files by using the same Gulp tasks.
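As a sketch, one of those tasks could look like this (the gulp-concat plugin and the paths are assumptions for illustration; your gulpfile will differ):

```javascript
var gulp = require('gulp');
var concat = require('gulp-concat');

// Hypothetical bundling task: concatenate all css files into one bundle file.
gulp.task('bundleCss', function () {
    return gulp.src('Content/**/*.css')
        .pipe(concat('bundle.css'))
        .pipe(gulp.dest('Content/bundles'));
});
```

Because the task is plain JavaScript in the gulpfile, the buildserver runs exactly the same code as the Task Runner does locally.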

Picking up jQuery dom manipulations with AngularJS

AngularJS and jQuery are two of the many frameworks that manipulate the dom.
They both have their own way of doing this.
I prefer AngularJS for its data-driven approach.
Mixing jQuery dom manipulation and AngularJS data binding is a bad practice and should be avoided.
But sometimes it just cannot be avoided and you have to deal with it. In a project I was working on, jQuery was used for dom manipulation. My task was to build a new feature with AngularJS which uses the same part of the dom. A complete refactoring was not an option, so I had to deal with dom changes triggered by jQuery. AngularJS won’t pick up expressions when they are injected into the dom. I figured out the following solution.

Monitoring (a section of) the dom

In my first attempt, I tried to monitor a section of the dom. Every change will be picked up and compiled.

angular.module('myApp', [], function ($compileProvider) {
    $compileProvider.directive('compile', function ($compile) {
        return function (scope, element, attrs) {
            scope.$watch(
                function (scope) {
                    return element.html();
                },
                function (value) {
                    $compile(element.contents())(scope);
                }
            );
        };
    });
});

This directive is triggered by adding the compile attribute to an element in the app scope.

<div ng-app="myApp">
    <div compile>
    ....
    </div>
</div>

At first, I thought it worked perfectly.
It detected every change. But when I added more angular expressions, I noticed a performance drop.
Every data bind triggered a compile of the section being watched, and that caused the performance drop. Though it wasn’t an issue yet, I just did not think it was a good solution.

Monitoring an attribute of an element in the dom

In my second attempt, I tried to limit the compiles to a minimum.
I asked myself what I could use as a trigger for an update and realized that the correct answer is “it depends” (which seems to be the answer to every question in developer land). I tried to keep the trigger as flexible as possible and came up with this solution.

angular.module('myApp', [], function ($compileProvider) {
    $compileProvider.directive('compileWhenAttributeOfChildChanges', function ($compile) {
        return function (scope, element, attrs) {
            scope.$watch(
                function (scope) {
                    var attributeToWatch = element.attr("compile-when-attribute-of-child-changes");
                    return element.children().attr(attributeToWatch);
                },
                function (value) {
                    $compile(element.contents())(scope);
                }
            );
        };
    });
});

This directive is triggered by elements with the attribute compile-when-attribute-of-child-changes. It uses the value of this attribute as the attribute to watch on its child element. When that attribute changes value, a compile is triggered.

<div ng-app="myApp" compile-when-attribute-of-child-changes="watch">
    <div watch="valueBeingWatched">
    ....
    </div>
</div>

In this example, a compile is triggered when the “watch” attribute changes.
This solution worked well, and there are no unnecessary compiles.

JavaScript unit tests and Visual Studio

Although all my code is always completely bug free (I love sarcasm), I’m still a big fan of unit tests.
As logic moves from server-side to client-side, I’m very happy to see that unit testing for client-side code is getting better supported and more common these days.

Jasmine

Jasmine is a behavior-driven JavaScript testing framework. It allows you to test your client-side code.
It has a clear and easy to understand syntax and is very popular in the community. Even Angular, which is designed with testing in mind, uses Jasmine and has tons of Jasmine examples on its website.

Example

In the example below I have a very simple math function, named multiply. This function is stored in math.js and accepts two arguments; it should multiply argument 1 by argument 2.

function multiply(a, b)
{
    return a * b;
}

The test below makes sure that the multiply function works as expected.

/// <reference path="math.js" />

describe('multiply', function () {
    it('multiplies a by b', function () {
        var a = 2;
        var b = 4;
        var result = multiply(a, b);
        expect(result).toEqual(8);
    });
});

It requires math.js to be loaded. Make sure you reference all required files needed to run the test.
Of course one test is not enough to prove this function is completely bug free, but this is just an example. Normally I would write tests for all edge cases to make sure it handles everything as expected.
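Those edge cases could include zero, negative, fractional and non-numeric operands. Sketched below as plain assertions rather than Jasmine specs, so the snippet runs on its own:

```javascript
function multiply(a, b) {
    return a * b;
}

// Edge cases beyond the happy path:
console.assert(multiply(0, 5) === 0);           // zero operand
console.assert(multiply(-2, 4) === -8);         // negative operand
console.assert(multiply(0.5, 4) === 2);         // fractional operand
console.assert(Number.isNaN(multiply('a', 2))); // non-numeric input yields NaN
```

In Jasmine each of these would be its own it() block, which gives a readable report of exactly which case breaks.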

Visual Studio

My IDE is Visual Studio and I also use it for all my client-side code (I’m experimenting with TypeScript and enjoying it a lot).
Resharper, one of my most important Visual Studio extensions, does support client-side unit testing, but I do not have a Resharper license at home, so I was searching for a different solution until I found Chutzpah.

This Visual Studio extension adds JavaScript unit testing support and integration with the Visual Studio test runner.
It combines server-side unit tests and client-side unit tests.

The result is as follows:
[Screenshot: the Visual Studio test runner listing client-side and server-side tests together]

Pretty sweet!

Providing data to JavaScript functions from code behind

In my previous post, I talked about JavaScript namespaces and functions.
When using WebForms, it can be difficult to call these functions from the code behind in a nice way.
It usually requires some data to initialize the JavaScript function, for example, providing some html element ids as trigger elements.
The ids of html elements are not predictable (except when using static ids, but that brings in a whole bunch of different problems) and should be provided from the code behind to avoid the ugly <% %> syntax in your markup file.
I see a lot of people using a StringBuilder to write out a JavaScript object. This works, of course, but it is not the nicest way to do it, because you lose strong typing and IntelliSense.

I prefer to create a model and use a JavaScript serializer to create a json string and provide that to a function.


public class SearchManagerOptions
{
    public string Url { get; set; }
}

.Net has its own serializer: JavaScriptSerializer. It is a very simple and straightforward serializer and does the trick.


var options = new SearchManagerOptions { Url = "SomeUrl" };
var json = new JavaScriptSerializer().Serialize(options);

The result (json) will look like


"{\"Url\":\"SomeUrl\"}"

This is fine for most common cases.

If you want some extra control over the serialization process, the JavaScriptSerializer will not be your friend, and I recommend switching to the json.NET serializer. This is a third-party library (available on NuGet) which allows you to control the serialization process by decorating your properties with attributes.
For example, when I want to conform to the JavaScript convention of camel-cased properties, I can use an attribute to change the output name of the property.


public class SearchManagerOptions
{
    [JsonProperty("url")]
    public string Url { get; set; }
}

The json.NET serializer is used as follows:


var options = new Models.SearchManagerOptions { Url = "SomeUrl" };
var json = JsonConvert.SerializeObject(options);

This results in:


"{\"url\":\"SomeUrl\"}"

That’s even better, we now conform to the convention!
Just play around with the json.NET library; it is full of nice serialization tricks, for example ignoring properties during serialization by adding the JsonIgnore attribute.

Now we need to call the JavaScript function and provide this data.
WebForms provides us with two methods for injecting scripts: RegisterClientScriptBlock and RegisterStartupScript.
What is the difference? Good question! Both method signatures are the same. The difference lies in the position where the script is injected in the page.
RegisterClientScriptBlock injects the script at the top of the form element (as one of the first children). This means that none of the html elements are rendered yet. Remember that: your selectors won’t work unless you use a document ready event.
RegisterStartupScript injects the script at the bottom of the form element, which means that all html elements are already rendered.
I usually go for RegisterStartupScript, because I think it is a cleaner solution to inject scripts at the end of your page. It is still injected inside the form element, but that is a limitation of WebForms.


var options = new Models.SearchManagerOptions { Url = "SomeUrl" };
var json = JsonConvert.SerializeObject(options);
var script = string.Format("ViCreative.Managers.SearchManager.init({0})", json);

if (!Page.ClientScript.IsStartupScriptRegistered("SearchManagerInitialization"))
{
    Page.ClientScript.RegisterStartupScript(GetType(), "SearchManagerInitialization", script, true);
}

I do not want this script to be injected more than once, which is why I check whether it is already registered. I would usually extract “SearchManagerInitialization” into a constant, but for the clarity of this blog post, I inlined the string.
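On the client side, the init function that receives this object could look like the sketch below (the namespace and the url property match the examples in this post; the function body is hypothetical):

```javascript
// Create the namespace without overwriting it if it already exists.
var ViCreative = ViCreative || {};
ViCreative.Managers = ViCreative.Managers || {};

ViCreative.Managers.SearchManager = {
    _url: '',

    init: function (options) {
        // options is the deserialized object produced by JsonConvert.SerializeObject,
        // e.g. { url: "SomeUrl" } with the camel-cased property name.
        this._url = options.url;
    }
};
```

The injected startup script then simply calls ViCreative.Managers.SearchManager.init({"url":"SomeUrl"}) once the page has rendered.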

JavaScript Globals, Namespaces and Scopes

Everybody hates (or at least should hate) globals. They make your application vulnerable to attacks, and globals can easily be overwritten (even by mistake).

Hiding functions and variables in scopes is one way to decrease the use of globals.
But globals cannot always be avoided. For instance, when you want your application to communicate with the outside world.

I like to use namespaces to make sure that I do not overwrite any globals. It also brings structure to your application. It groups related objects and functions.

var ViCreative = ViCreative || {};
ViCreative.Managers = ViCreative.Managers || {};

This creates a ViCreative.Managers namespace (or reuses it if it already exists). I usually use the domain name as first part of the namespace, just to make sure I don’t mess with other plugin/framework namespaces.
Always make sure you do not overwrite the namespace when creating/extending a namespace. Check if the namespace already exists!
This also allows the JavaScript files with namespace definitions to be loaded in any sequence. Now that we have a namespace, we can add objects and functions to it.

ViCreative.Managers.SearchManager = {
    _url : '',
    _value : '',

    doSearch : function() {
        // logic for searching
    },

    init : function(options) {
        /// initialization logic here
        this._url = options.url;
    },

    search : function(searchValue) {
        // post searchValue and return response
        this._value = searchValue;
        this.doSearch();
    }
};

This is already a lot better than using just some global methods, but we’re still not there yet.
All variables and methods are still out in the open. We need to create a scope to hide the private members and functions. In the example above, we only want to expose the init and search method. The doSearch method should not be called directly.

Functions have their own scope and allow you to hide variables. Wrapping the SearchManager in a function allows us to create a new scope for all variables inside the SearchManager object.

ViCreative.Managers.SearchManager = function () {
    var url = '';
    var value = '';

    var doSearch = function() {
        // logic for searching
    };

    var init = function(options) {
        /// initialization logic here
        url = options.url;
    };

    var search = function(searchValue) {
        // post searchValue and return response
        value = searchValue;
        doSearch();
    };
    return {
        init: init,
        search: search
    };
}();

The example above creates and executes a function and stores the result in ViCreative.Managers.SearchManager.
The function returns an object, which exposes the init and search functions. All other variables and functions are not accessible from the global scope:

[Screenshot: the browser console showing that only init and search are exposed on SearchManager]
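A quick check confirms this; the module is repeated here in condensed form so the snippet is self-contained:

```javascript
var ViCreative = ViCreative || {};
ViCreative.Managers = ViCreative.Managers || {};

// Condensed version of the module above: only search is returned.
ViCreative.Managers.SearchManager = function () {
    var value = '';
    var doSearch = function () { /* private */ };
    var search = function (searchValue) {
        value = searchValue;
        doSearch();
    };
    return { search: search };
}();

console.log(typeof ViCreative.Managers.SearchManager.search);   // "function"
console.log(typeof ViCreative.Managers.SearchManager.doSearch); // "undefined"
```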

Frameworks, variables and other objects from the global scope, which are used by this function, should be provided as arguments to this function to bring them into the local scope.

ViCreative.Managers.SearchManager = function ($) {
   ...
}(jQuery);

This example adds jQuery to the scope of this function and stores it in $.
Try not to pollute the global scope and hide all methods and variables which are meant to be private.

JavaScript is a first-class citizen! Treat it as such!

Creating and maintaining minified files in Visual Studio 2012 (with Web essentials)

Web Essentials brings a lot of cool features to Visual Studio. It is a creation of Mads Kristensen. Many of these features will be added to Visual Studio in future releases, and Web Essentials is downloadable as a Visual Studio extension.
One of the cool features of Web Essentials is minification and bundling.
When you right click on a js file, Web Essentials adds an option to the context menu called “Minify JavaScript file(s)”.
This creates two files, one minified file (adds min.js to the original filename) and a map file (adds min.js.map to the original filename).
Map files can be used to make the minified file readable and debuggable. The creation of a map file can be enabled or disabled in Tools > Options.

But now the coolest feature: when you make changes to the original JavaScript file, the minified file gets updated automatically.

I use this feature a lot for my JavaScript plugins (see http://www.vicreative.nl/Projects). I work in the original files, the minified files are updated automatically and I commit them all to Git.

When selecting more than one file, the option “Create Javascript bundle file” gets enabled. This creates the following files:

  • a bundle file, an xml file containing all filenames for this bundle.
  • a javascript file
  • a minified javascript file
  • a map file

The same features also work for CSS files, but no map is created because this is not necessary for css files.

Creating bundles can also be done automatically, read this post.

Update:
The bundling and minification has moved from Web essentials to a different NuGet package: Bundler & Minifier

Manually setting the navigator.geolocation

Firefox is probably my favorite browser, mostly because of all the great extensions.
Every web developer should know Firebug. It really makes my life so much easier. I use it a lot for:

  • Examining and manipulating dom elements
  • Running JavaScript
  • Inspecting ajax calls (request and response)
  • Monitoring resources (seeing what’s being loaded, cached, etc.)

But this post is not about Firebug.
Another great extension is Geolocator.
This tool allows you to change your geolocation. Let me first explain what the geolocation is.
The geolocation is your current position, provided by your browser.
It can be requested via JavaScript and is one of the new features described in the HTML5 standard.

navigator.geolocation

The geolocation object has three methods:

  • getCurrentPosition
  • watchPosition
  • clearWatch

getCurrentPosition gives you the current position. This method accepts two arguments: the first is the callback invoked when the position is successfully retrieved; the second is the callback invoked when the position could not be retrieved, for example because the user has denied location sharing.


function onSuccess(position)
{
    var lat = position.coords.latitude;
    var long = position.coords.longitude;
}

function onError()
{
    // handle failed retrieving of location.
}

navigator.geolocation.getCurrentPosition(onSuccess, onError);

The watchPosition method also returns the current position, but continues to return updated positions.
The clearWatch method stops the watchPosition method.
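A sketch of watchPosition and clearWatch; the stop-after-first-fix policy is just an example, and the guard keeps the snippet safe in environments without geolocation:

```javascript
var watchId = null;

function onPosition(position) {
    console.log(position.coords.latitude, position.coords.longitude);
    // Example policy: stop watching once we have a fix.
    navigator.geolocation.clearWatch(watchId);
}

function onPositionError(error) {
    // error.code distinguishes permission denied, position unavailable and timeout.
    console.log('geolocation failed: ' + error.code);
}

if (typeof navigator !== 'undefined' && navigator.geolocation) {
    // watchPosition returns an id that clearWatch uses to cancel the watch.
    watchId = navigator.geolocation.watchPosition(onPosition, onPositionError);
}
```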

Not all browsers support HTML5, so not all browsers support geolocation. You first need to check whether geolocation is available.


if (navigator.geolocation) {
    // do location based stuff here
}

I was working on a location based application and wanted to test the applications with different positions, and that’s when I came across the Geolocator extension.
I created several locations and when I reloaded the page, Firefox showed a dropdownlist with my locations so I could easily switch my position.

Locations can be added by going to the Add-ons menu and select the Geolocator options. Search for your location, leave the search screen (tweak settings a bit if you’d like) and click the save button.

Automatic Bundling and Minification in .Net

One of the cool new features of ASP.NET 4.5 is the bundling and minification feature. It saves many requests to your server and many kilobytes, without creating a debugging hell. The amount of data transferred is getting more and more important because of mobile devices: people pay a lot of money for the data bundles of their mobile devices, so we had better not waste them. It also decreases loading time! Reasons enough to start using bundling and minification!

First thing you need to do is download and install the Microsoft ASP.NET Web Optimization Framework NuGet package.

The bundles are registered at application startup. This event is handled in the Application_Start method in the Global.asax. I always try to keep my Global.asax as clean as possible and split all registrations into separate files in the App_Start folder. So I created a static BundleConfig class in the App_Start folder. This class contains one method, RegisterBundles, which takes the current bundle collection as argument.

BundleConfig.RegisterBundles(BundleTable.Bundles);

BundleConfig

The BundleConfig class registers all JavaScript and stylesheet bundles. Both can be added to the same BundleCollection, but each has its own class: JavaScript bundles are created using the ScriptBundle class and stylesheet bundles using the StyleBundle class.

bundles.Add(new ScriptBundle("~/bundles/jquery").Include("~/Scripts/jquery-{version}.js"));

This creates a JavaScript bundle named “~/bundles/jquery” and includes one file, “~/Scripts/jquery-{version}.js”. {version} is automatically resolved to the version of the jquery file in the scripts folder.

bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));

This creates a stylesheet bundle named “~/Content/css” which contains one file, “~/Content/site.css”. Adding more files is easy: just add them to the include, separated by commas.

Adding bundles to a view

The only thing left to do, is adding the bundles to a view.

MVC (Razor)

@Scripts.Render("~/bundles/jquery")

WebForms

<%: Scripts.Render("~/bundles/jquery") %>

The above code renders the JavaScript bundle.

MVC (Razor)

@Styles.Render("~/Content/css")

WebForms

<%: Styles.Render("~/Content/css") %>

The above code renders the Stylesheet bundle.

All the requests to the bundles have a querystring parameter (v=CkVTG71m7lHB5jSCpyOSxbeCVJLIPag7u7NL4ykFenk1). This is to make sure that no old versions of this file are cached and retrieved.

Config files

Bundles can also be configured in config files. This allows you to make changes to a bundle without having to rebuild the project. It also allows frontend developers to easily include, remove or change files in a bundle without having to use Visual Studio.

An example of a bundle.config file:

<?xml version="1.0" encoding="utf-8" ?>
<bundles version="1.0">
  <styleBundle path="~/Content/css">
    <include path="~/Content/Site.css" />
  </styleBundle>
</bundles>

Debugging

One of the biggest problems of minified files is debugging. It’s almost impossible to debug those files.

That’s when the best feature of this minification framework comes in place. When compilation is set to debug (in web.config), the files will not be minified, which makes it a lot easier to debug.

<system.web>
    <compilation debug="true" />
    <!-- Lines removed for clarity. -->
</system.web>

Creating custom bundle transforms

Custom transforms can be created by implementing the IBundleTransform interface.

using System.Web.Optimization;

public class MyBundleTransform : IBundleTransform
{
   public void Process(BundleContext context, BundleResponse response)
   {
       // Process bundle transform here
   }
}

The custom transform can now be added to the Transforms collection of a bundle:

var myBundle = new Bundle("~/My/Files/To/Include");
myBundle.Transforms.Add(new MyBundleTransform());

This bundle can now be added to the bundles collection in the BundleConfig file.

Why create custom bundle transforms? Just think of LESS or CoffeeScript, or whatever transform you need.

UpdatePanels and JavaScript request events

When my journey continued in the land of UpdatePanels (read my previous post about UpdatePanels here), I was looking for a way to create a clientside success event handler. I have been working with jQuery for some years now, so I’m a bit spoiled when it comes to Ajax. The framework makes it very easy to register success or error handlers:

$.ajax({
   url : "/someurl",
   type: "POST",
   success: function(data) {
      // your success handling here
   }
});

I was hoping to find something similar for partial postback requests. After searching the web for a while, I took a look at the Sys.WebForms.PageRequestManager object. This singleton object holds all data relevant to async page requests and provides some useful functions for registering your functions as event handlers.

In this case, I was looking for two events: the begin and end request events. The begin request event handler can be registered using the add_beginRequest function and the end request handler using the add_endRequest function.

var requestManager = Sys.WebForms.PageRequestManager.getInstance();
requestManager.add_beginRequest(function(sender, args) {
    // begin request logic here
});
requestManager.add_endRequest(function(sender, args) {
    // end request logic here
});

The event handlers are called with two arguments: the PageRequestManager (sender in this example) and the event arguments (args in this example). These are very important to determine which UpdatePanel or PostBack element (Button, etc.) triggered the request.

The BeginRequestEventArgs object exposes three interesting functions:

  • a get_postBackElement function for retrieving the element that caused the PostBack.
  • a get_updatePanelsToUpdate function for retrieving a list of UniqueIds of the UpdatePanels which will be updated by this request.
  • a get_request function for retrieving the event context.

var requestManager = Sys.WebForms.PageRequestManager.getInstance();
requestManager.add_beginRequest(function(sender, args) {
    var postBackElement = args.get_postBackElement();
    var updatePanels = args.get_updatePanelsToUpdate();
    var request = args.get_request();
});

The EndRequestEventArgs object exposes five functions; two are very useful for my case:

  • a get_error function for retrieving the error. This function returns null when no error has occurred.
  • a get_response function to retrieve the response.

var requestManager = Sys.WebForms.PageRequestManager.getInstance();
requestManager.add_endRequest(function(sender, args) {
    var error = args.get_error();
    var response = args.get_response();
    if (error) {
       // handle error
    }
});

The PageRequestManager contains a lot more useful functions and properties for hooking up to the webforms ajax system.
Some I like are abortPostBack, get_isInAsyncPostBack, remove_beginRequest and remove_endRequest.
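The wiring and unwiring can be wrapped in a small helper so handlers are easy to detach again. A sketch (the handler bodies are placeholders, and the helper name is my own):

```javascript
// Attach begin/end request handlers and return a function that detaches them.
function wirePageRequestHandlers(requestManager) {
    function onBeginRequest(sender, args) {
        // e.g. show a loading indicator for the panels being updated
    }

    function onEndRequest(sender, args) {
        var error = args.get_error();
        if (error) {
            // handle error
        }
    }

    requestManager.add_beginRequest(onBeginRequest);
    requestManager.add_endRequest(onEndRequest);

    return function unwire() {
        requestManager.remove_beginRequest(onBeginRequest);
        requestManager.remove_endRequest(onEndRequest);
    };
}

// In the browser:
// var unwire = wirePageRequestHandlers(Sys.WebForms.PageRequestManager.getInstance());
// ... later, when the handlers are no longer needed: unwire();
```

Keeping references to the handler functions is what makes remove_beginRequest and remove_endRequest work; anonymous inline handlers cannot be removed.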