Category: ASP.NET

EntityFrameworkCore and IDesignTimeDbContextFactory

In one of my first attempts at using EntityFrameworkCore I quickly ran into the following error. It appears when your DbContext is in a different project than your web project.

Unable to create an object of type ‘….DbContext’. Add an implementation of ‘IDesignTimeDbContextFactory’ to the project, or see https://go.microsoft.com/fwlink/?linkid=851728 for additional patterns supported at design time.

I fixed this by implementing the IDesignTimeDbContextFactory as follows:

    public class YourDbContextDesignTimeFactory : IDesignTimeDbContextFactory<YourDbContext>
    {
        public YourDbContext CreateDbContext(string[] args)
        {
            var optionsBuilder = new DbContextOptionsBuilder<YourDbContext>();
            optionsBuilder.UseSqlServer(@"ConnectionStringGoesHere");

            return new YourDbContext(optionsBuilder.Options);
        }
    }

A better solution is to store the connection string in your appsettings file:

        IConfigurationRoot configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        var builder = new DbContextOptionsBuilder<YourDbContext>();

        var connectionString = configuration.GetConnectionString("YourDbContext");
        builder.UseSqlServer(connectionString);

The appsettings.json file would look something like

{
  "ConnectionStrings": {
    "YourDbContext": "ConnectionStringGoesHere"
  }
}
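Putting the two together, the design-time factory can read the connection string from appsettings.json. The class and connection-string names below are placeholders:

```csharp
// Sketch of a design-time factory that reads its connection string from
// appsettings.json (names are placeholders; requires the
// Microsoft.Extensions.Configuration.Json package).
public class YourDbContextDesignTimeFactory : IDesignTimeDbContextFactory<YourDbContext>
{
    public YourDbContext CreateDbContext(string[] args)
    {
        IConfigurationRoot configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        var optionsBuilder = new DbContextOptionsBuilder<YourDbContext>();
        optionsBuilder.UseSqlServer(configuration.GetConnectionString("YourDbContext"));

        return new YourDbContext(optionsBuilder.Options);
    }
}
```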

EF Code First Index Column not created

A while back I tried to create a unique index on a column.
The configuration file looked something like this.

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasColumnAnnotation("Alias", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

This resulted in the following migration.

            CreateTable(
                "dbo.Departments",
                c => new
                    {
                        Id = c.Guid(nullable: false),
                        Alias = c.String(
                            annotations: new Dictionary<string, AnnotationValues>
                            {
                                { 
                                    "Alias",
                                    new AnnotationValues(oldValue: null, newValue: "IndexAnnotation: { Name: IX_Alias, IsUnique: True }")
                                },
                            }),
                        Name = c.String(nullable: false),
                    })
                .PrimaryKey(t => t.Id);

Which seemed fine. It looked like it did what it was supposed to do.
But when the migration was run, no index was created at all. So I started googling about indexes and came across the following:

Columns that are of the large object (LOB) data types ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image cannot be specified as key columns for an index.

So I limited the Alias to 50 characters in the configuration file:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).HasColumnAnnotation("Alias", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

But still no index. So I continued my search on the internet and finally found the problem:
it is the name argument of the HasColumnAnnotation method. This must be set to “Index” when you want to create an index, which seems a bit unnecessary to me since the second argument is already an IndexAnnotation. So once again I changed my configuration file:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute("IX_Alias") { IsUnique = true }));
            Property(p => p.Name).IsRequired();
        }
    }

The migration file generated:

            CreateIndex("dbo.Departments", "Alias", unique: true);

Now that I know the HasColumnAnnotation name is fixed to “Index”, I recommend creating an extension method for creating unique indexes:

    public static class PrimitivePropertyConfigurationExtensions
    {
        public static PrimitivePropertyConfiguration IsUnique(this PrimitivePropertyConfiguration configuration)
        {
            return configuration.HasColumnAnnotation("Index", new IndexAnnotation(new IndexAttribute { IsUnique = true }));
        }
    }

And you can use it as follows:

    public class DepartmentEntityConfiguration : EntityTypeConfiguration<Department>
    {
        public DepartmentEntityConfiguration()
        {
            HasKey(p => p.Id);
            Property(p => p.Alias).IsRequired().HasMaxLength(50).IsUnique();
            Property(p => p.Name).IsRequired();
        }
    }
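If you also want to control the index name, a hypothetical overload can pass it through to the IndexAttribute:

```csharp
public static class PrimitivePropertyConfigurationExtensions
{
    // Hypothetical overload: creates a unique index with an explicit name.
    public static PrimitivePropertyConfiguration IsUnique(
        this PrimitivePropertyConfiguration configuration, string indexName)
    {
        return configuration.HasColumnAnnotation(
            "Index",
            new IndexAnnotation(new IndexAttribute(indexName) { IsUnique = true }));
    }
}
```

Usage would then be `Property(p => p.Alias).IsRequired().HasMaxLength(50).IsUnique("IX_Alias");`.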

Running gulp tasks on a buildserver

With the newest ASP.NET release coming, Microsoft is removing its own optimization framework and pushing developers towards Gulp, NPM and Bower.
I do not want to manually minify and bundle my css and js files, so I want a Gulp task to do it.
My NPM file (package.json) looks like:

{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": true,
  "devDependencies": {
    "bower": "1.7.7",
    "gulp": "3.9.1",
    ....
  }
}

My bower file (bower.json) looks like

{
  "name": "ASP.NET",
  "private": true,
  "dependencies": {
    "jquery": "2.2.3",
    "jquery-validation-unobtrusive": "3.2.6",
    "bootstrap": "3.3.6",
    ....
  }
}

I also do not want my bundles to be source controlled.
It is a task of the buildserver to prepare my solution for release.
This means that the buildserver should be able to run the same Gulp tasks as we do in our development environment.

The following software should be installed on the buildserver to let it run Gulp tasks: Node.js (which includes NPM) and Git (which Bower uses to fetch packages).

When installing Git, set install option to “Run GIT from the Windows Command Prompt”.
I’d like to have all my configuration source controlled, so I create a targets file which contains the targets for running Npm, Gulp and Bower and I import this file in my web project.

  <!-- Gulp -->
  <Target Name="RunGulpTask" AfterTargets="BowerInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running gulp task 'default'" Importance="high" />
    <Exec Command="node_modules\.bin\gulp" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Bower -->
  <Target Name="BowerInstall" AfterTargets="NpmInstall" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running bower install" Importance="high"/>
    <Exec Command="node_modules\.bin\bower install" WorkingDirectory="$(ProjectDir)" />
  </Target>

  <!--Npm -->
  <Target Name="NpmInstall" BeforeTargets="BeforeBuild" Condition="'$(RunGulpTasks)' != ''">
    <Message Text="Running npm install" Importance="high"/>
    <Exec Command="npm install" WorkingDirectory="$(ProjectDir)" />
  </Target>

I do not want these tasks to run in my development environment (because the Task Runner of Visual Studio takes care of it), so I added a RunGulpTasks parameter. When this is provided (by adding /p:RunGulpTasks=true to the msbuild command), the targets will run before the solution is built.

My gulpfile.js looks like:

var gulp = require('gulp'),
.....
gulp.task('default', ['bundleCss', 'minCss', 'bundleJs', 'minJs'], function() {});

I did not provide a Gulp task to run, so Gulp runs the default task by convention. My default task depends on all tasks I want to run on the buildserver.
The buildserver now bundles and minifies my css and js files by using the same Gulp tasks.

WebForms and triggering submit buttons on pressing the enter key

Default button

Filling in forms using only the keyboard is very popular and deserves more attention than most developers give it.
I have seen a lot of weird JavaScript functions in WebForms pages that fake a button click when a user presses the enter key.
This is really unnecessary: there is a nice feature which allows you to set the submit button for a (sub)section of a WebForms page.
Every container element (e.g. a Panel) supports this feature. Pressing enter on any focusable element will trigger a click on the button defined by the DefaultButton property of the parent container element.

<asp:Panel ID="pnlFormName" runat="server" DefaultButton="btnSubmit">
    <asp:TextBox ID="txtName" runat="server" />
    <asp:TextBox ID="txtMiddleName" runat="server" />
    <asp:TextBox ID="txtLastName" runat="server" />
    <asp:Button ID="btnSubmit" runat="server" />
</asp:Panel>

Tab order

Another thing to keep in mind is the tab order. The order can be set by setting the TabIndex property of an HTML Input element (or any other element which should be focusable).
The default tab order will fit most cases; it is equal to the positioning order of the elements.

<asp:Panel ID="pnlFormName" runat="server">
    <asp:TextBox ID="txtName" runat="server" TabIndex="1" />
    <asp:TextBox ID="txtMiddleName" runat="server" TabIndex="2" />
    <asp:TextBox ID="txtLastName" runat="server" TabIndex="3" />
    <asp:Button ID="btnSubmit" runat="server" />
</asp:Panel>

Keep EntityFramework state objects valid by encapsulating it in domain entities

There are already a lot of great posts (e.g. by Vaughn Vernon) about how to use EntityFramework with domain entities.
So why am I writing about it? Because I think it is important not to leak your state objects into your entire application.

EntityFramework cannot map private fields, so you need to make all mapped properties public.
The domain entity is responsible for making sure the state object is always in a valid state.
EntityFramework also does not work with ValueObjects, so you cannot use ValueObjects in a nice way in your state: another plus for domain entities.
A state object does not necessarily need to back an entity; it can also back a ValueObject when you don’t care about uniquely identifying it. The state will still have an id, but you can hide it in your ValueObject.

In this example I’m talking about products; they are stored with EntityFramework as ProductState entities.

    public class ProductState
    {
        public Guid Id { get; set; }
        public string ProductCode { get; set; }
        public string ProductTitle { get; set; }
    }

Constructing

It starts with the constructors. I usually create two: one for reviving the entity from the database and one for constructing a new entity. Every constructor should result in a valid Product and ProductState.
The following constructor revives the product from its state.

    public class Product
    {
        private readonly ProductState _state;
        public Product(ProductState productState)
        {
            Assert.NotNull(productState, nameof(productState));

            _state = productState; 
        }
    }
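The Assert helper used in these constructors is not a framework class; a minimal guard-clause sketch could look like this:

```csharp
using System;

// Minimal guard-clause helper; the real implementation may differ.
public static class Assert
{
    public static void NotNull(object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}
```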

The state is provided by the repository. It is already valid, so a null check is sufficient.
Notice the nameof? This new trick can come in very handy, e.g. for logging purposes.
In this example, a product is valid when it has a ProductCode; this is the unique identifier of the product. The constructor for creating a new product is as follows:

    public class Product
    {
        private readonly ProductState _state;
        public Product(ProductCode productCode)
        {
            Assert.NotNull(productCode, nameof(productCode));

            _state = new ProductState
            {
                Id = Guid.NewGuid(),
                ProductCode = productCode.Value,
            };
        }
    }

The Product entity is responsible for instantiating a ProductState and making sure it is in a valid state. The Id is the surrogate primary key for EntityFramework; the product code is the natural identifier of the product.

Exposing data

Now that we have instantiated a Product, we can use it in our application.
The private (and readonly) state is used as the backing field for every get or set method/property.
The methods in Product look as follows:

        public ProductCode GetProductCode()
        {
            return new ProductCode(_state.ProductCode);
        }

        public string GetProductTitle()
        {
            return _state.ProductTitle;
        }

        public void SetProductTitle(string title)
        {
            Assert.NotNull(title, nameof(title));
            if (title.Length > 255)
            {
                throw new ArgumentException("ProductTitle cannot be more than 255 characters.");
            }
            _state.ProductTitle = title;
        }

There is no set method for ProductCode? Correct! The ProductCode is the identifier of the Product, so it is immutable. A different product code means a different product, which requires instantiating a new Product.
The ProductTitle does not identify the Product, so there is a set method for it. This set method contains some business rules: in this example the ProductTitle cannot be null and must not be longer than 255 characters. This makes sure the state object can never get an invalid title in its ProductTitle property.
I prefer void as the return type for set methods. When the provided data is invalid, I throw exceptions. Returning a bool to indicate whether the operation succeeded has some disadvantages, e.g.:
– It does not give any detail about what went wrong.
– It suggests we can still continue normally.
In this example I use methods for the get operations; these could as well have been properties.

Attaching the State Entity to EntityFramework

Unfortunately, there is a downside to this. Now that the Product creates the ProductState, we need to attach it to the DbContext before EntityFramework will pick it up.
So we do need to expose the inner state entity. I always try to make it internal so not everybody can reach it, but there are (many) situations where internal is not enough and you need to make it public.

        internal ProductState InnerState
        {
            get { return _state; }
        }
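With the inner state exposed, a repository can hand it to EntityFramework when adding a new Product. The DbContext and DbSet names below are assumptions:

```csharp
// Sketch of a repository attaching the inner state to a hypothetical
// DbContext with a DbSet<ProductState> called ProductStates.
public class ProductRepository
{
    private readonly MyDataContext _context;

    public ProductRepository(MyDataContext context)
    {
        _context = context;
    }

    public void Add(Product product)
    {
        // Attach the state so the ChangeTracker starts tracking it.
        _context.ProductStates.Add(product.InnerState);
    }
}
```

Because InnerState is internal, this repository has to live in the same assembly as Product (or the assembly must expose it via InternalsVisibleTo).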

Updating EntityFramework State objects before DbContext Saves their state

In most of our projects we use EntityFramework as ORM. It works ok in most cases.
We always try to hide the state objects as much as possible by encapsulating them in domain entities; repositories are used to retrieve these domain entities.

For a project, we needed to store serialized data in a state object. These are some reasons why we chose to store data serialized:

  • The data structure can vary by entity
  • There is no need to query this data
  • It is a complex structure and would require lots of tables to store it deserialized

We need to make sure this serialized data is always up to date before saving.
In a first attempt, we serialized the state on every command on the entity.
As the API of the entity grew, the number of serializations grew with it. It wasn’t a performance issue yet, but it also wasn’t a piece of code to be proud of.
So we started brainstorming and came to the following solution.

We created the following interface:

public interface ICanBeOutOfSync
{
    void SyncMe();
}

All state objects with serialized state implement this interface.

Now we need to implement this method on our state objects. We do not want a reference from a state object to the entity, so we added a method to the state object with which the entity can register an Action that syncs the state:

public class MyEntityState : ICanBeOutOfSync
{
    public void SyncMe()
    {
        _syncMethod();
    }

    private Action _syncMethod;
    public void RegisterSyncMethod(Action syncMethod)
    {
        _syncMethod = syncMethod;
    }
}
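On the entity side, the sync action can be registered in the constructor. The ComplexData type and the SerializedData property on the state are assumptions for illustration; the serializer could be anything:

```csharp
// Hypothetical entity wiring: ComplexData and MyEntityState.SerializedData
// are illustrative assumptions, not part of the original post.
public class MyEntity
{
    private readonly MyEntityState _state;
    private readonly ComplexData _data;

    public MyEntity(MyEntityState state, ComplexData data)
    {
        _state = state;
        _data = data;

        // The data is only (re)serialized when the DbContext is about to save.
        _state.RegisterSyncMethod(
            () => _state.SerializedData = JsonConvert.SerializeObject(_data));
    }
}
```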

Now that we can call SyncMe() on the state object, we want to force that this method is called before SaveChanges() is called on the DbContext.

public class MyDataContext : DbContext
{
    public override int SaveChanges()
    {
        SyncEntitiesWhoCanBeOutOfSync();

        return base.SaveChanges();
    }

    private void SyncEntitiesWhoCanBeOutOfSync()
    {
        var syncableEntities = ChangeTracker.Entries().Where(e => e.Entity.GetType().GetInterfaces().Any(x => x == typeof(ICanBeOutOfSync)));

        foreach (var syncableEntity in syncableEntities)
        {
            ((ICanBeOutOfSync)syncableEntity.Entity).SyncMe();
        }
    }
}

The SaveChanges() of the DbContext is overridden to make sure all entities are synced first.
We ask the ChangeTracker for all entities implementing ICanBeOutOfSync and call SyncMe() on each of them so they update their serialized data. When the serialized data changes, the ChangeTracker marks the entity as Modified.
When syncing is completed, we call the base SaveChanges() and let EntityFramework do its work.

Powershell CmdLets

Powershell is a powerful language and can be used in many situations.
One of these is the deployment process (continuous delivery). It is also integrated in several systems, e.g. NuGet, which uses powershell for post-package-installation processing.

I use powershell for the following cases (in continuous delivery):

  • Replace/rename config files
  • Replace variables in config files
  • Call/Post to webservices
  • Run SQL commands

Note: External Modules need to be imported for calling webservices and running SQL commands. This can be done by calling the Import-Module CmdLet.

Powershell can be used as scripting language, but you can also create CmdLets. These commands can be invoked from the command line in the powershell environment.

CmdLet

A CmdLet is a command which can be called from the powershell command line.

Cmdlets are created by inheriting your class from Cmdlet.
This is available in the System.Management.Automation namespace.

CmdLets use the Verb-Noun naming convention. The Verb and Noun are provided as arguments to the Cmdlet attribute which decorates your CmdLet class.
The standard verbs are grouped into classes; the VerbsCommunications class, for example, comes with the following fields:

  • Connect
  • Disconnect
  • Read
  • Receive
  • Send
  • Write

You are not restricted to these Verbs, you can also use custom Verbs.

Override the ProcessRecord method; it is called once for every item in the pipeline.

[Cmdlet(VerbsCommunications.Send, "Greeting")]
public class SendGreetingCommand : Cmdlet
{
    [Parameter(Mandatory = true)]
    public string Name { get; set; }

    protected override void ProcessRecord()
    {
        WriteObject("Hello " + Name + "!");
    }
}

This CmdLet example can be called from powershell command line as follows:

Send-Greeting -Name "Vincent"   # Outputs: Hello Vincent!

Command line arguments are automatically bound to properties decorated with the Parameter attribute. A parameter is mandatory when the Mandatory argument of its Parameter attribute is set to true.
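Parameters can also be bound from the pipeline. A hypothetical variant of the cmdlet above sets ValueFromPipeline, so ProcessRecord greets every name that is piped in:

```csharp
[Cmdlet(VerbsCommunications.Send, "Greeting")]
public class SendGreetingCommand : Cmdlet
{
    // ValueFromPipeline binds piped input to this parameter.
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public string Name { get; set; }

    protected override void ProcessRecord()
    {
        // Called once per pipeline item.
        WriteObject("Hello " + Name + "!");
    }
}
```

It can then be called as `"Vincent","John" | Send-Greeting`, producing a greeting per name.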

Maybe it’s interesting to know that the package manager console in Visual Studio also uses powershell and commands like Add-Migration and Update-Database are all CmdLets.

So if you haven’t given powershell a try, you really should!

Dumping objects to string for logging purposes in c#

Logging is very important and in many cases does not get the attention it deserves.
When connecting to external datasources, logging becomes crucial.
Whenever data provided by an external source is not as expected, you want to know what went wrong.

When objects are dumped to the log, a common approach is to override the ToString method:

    public class Phone
    {
        public Guid Id { get; set; }

        public string Brand { get; set; }

        public override string ToString()
        {
            return string.Format("Phone with ID {0} and Brand {1}", Id, Brand);
        }
    }

This works fine for objects with only a few properties.
When the object becomes complex, overriding the ToString method becomes unclear and a lot of work.
Instead, we want to see the data of the entire object without having to do much work for it.
Let’s create an interface for dumping objects to string, which we can use for logging purposes.

public interface IObjectDumper
{
    string WriteToString(object objectToDump);
}

Objects can be dumped in several formats; pick whatever format you prefer, e.g. json or xml.
I prefer json: I think it is an easy to understand, readable format.
Asp.Net has its own json serializer (JavaScriptSerializer), but I prefer the one from Newtonsoft.Json (the package can be downloaded from NuGet):

public class JsonObjectDumper : IObjectDumper
{
   public string WriteToString(object objectToDump)
   {
       return JsonConvert.SerializeObject(objectToDump, Formatting.Indented);
   }
}

The Formatting.Indented makes sure the output is indented.
Dumping a phone object looks as follows:

{
  "Id": "d768bbc4-a1d5-441f-ae14-ebb5bef92b41",
  "Brand": "Google"
}

Using the object dumper with logging would look something like this:

    public class PhoneRepository : IPhoneRepository
    {
        private readonly IObjectDumper _objectDumper;
        private readonly IConnector _connector;
        private readonly IPhonesParser _phonesParser;

        public PhoneRepository(IObjectDumper objectDumper, IConnector connector, IPhonesParser phonesParser)
        {
            _objectDumper = objectDumper;
            _connector = connector;
            _phonesParser = phonesParser;
        }

        public IEnumerable<Phone> GetPhones()
        {
            var phones = new List<Phone>();
            object result = null;
            try
            {
                result = _connector.GetPhones();
                phones = _phonesParser.Parse(result);
            }
            catch (ParseException e)
            {
                Logging.Error(string.Format("Error parsing result for GetPhones with data: {0}", _objectDumper.WriteToString(result)), e);
                throw;
            }
            catch (Exception e)
            {
                Logging.Error("Error retrieving data for GetPhones.", e);
                throw;
            }
            return phones;
        }
    }

Value objects and operator overloading in c#

Value objects are small objects which are not compared by reference but by value. They do not have an identity. Equality of two value objects is based on their values; depending on the object, not all of its values have to take part in the comparison.

Another attribute of a Value object is that it is immutable.

An example of a Value object is a price.

public class Price
{
    private readonly double _value;
    public Price(double value)
    {
        _value = value;
    }

    public double Value {
        get { return _value; }
    }
}

This is a simple value object which holds one property, Value. This Value property holds the actual price.
The value is set by the constructor and stored in a readonly variable. This makes it immutable.

When validation is required, it can be added to the constructor. When an invalid value is provided as argument, an exception should be thrown: it is the object’s responsibility to make sure it is valid. A best practice is to throw custom exceptions; e.g. when a negative value is provided, an InvalidPriceException could be thrown. This specific exception can be caught at a higher level which knows how to handle it.
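For example, a sketch of the Price constructor guarding against negative values, with a hypothetical InvalidPriceException:

```csharp
using System;

// InvalidPriceException is a hypothetical custom exception.
public class InvalidPriceException : Exception
{
    public InvalidPriceException(string message) : base(message) { }
}

public class Price
{
    private readonly double _value;

    public Price(double value)
    {
        // The value object guards its own validity.
        if (value < 0)
        {
            throw new InvalidPriceException("A price cannot be negative.");
        }
        _value = value;
    }

    public double Value
    {
        get { return _value; }
    }
}
```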

Comparing two Price objects with the same value but a different reference will result in false.
They can only be compared by value through the Value property, which means we have to understand how the Price value object works in order to compare it. That is not a best practice: it is the responsibility of the value object itself to determine whether it is equal to another value object of the same type.
This is where operator overloading comes to the rescue! Operator overloading allows us to override the default implementation of the operators used on the value object.

Let’s start with implementing comparison.

public static bool operator ==(Price x, Price y)
{
    if (ReferenceEquals(x, y))
    {
        // Same instance, or both null.
        return true;
    }
    if ((object)x == null || (object)y == null)
    {
        return false;
    }
 
    return x.Value == y.Value;
}
 
public static bool operator !=(Price x, Price y)
{
    return !(x == y);
}

When implementing the == operator, the compiler requires you to also implement the != operator.
Do not forget to override the Equals method as well. By default, it compares objects by reference; it should return the same result as the == and != operators.

public override bool Equals(System.Object obj)
{
    Price p = obj as Price;
    if ((object)p == null)
    {
        return false;
    }
 
    return Value == p.Value;
}

public override int GetHashCode()
{
    return Value.GetHashCode();
}

Now that we can compare Prices, the next step is to add some simple math functions: adding, subtracting, multiplying and dividing. These math operations are also the responsibility of the value object.


public static Price operator +(Price x, Price y)
{
    return new Price(x.Value + y.Value);
}

public static Price operator -(Price x, Price y)
{
    return new Price(x.Value - y.Value);
}

Remember to return a new instance of your object. This keeps the value object immutable and prevents strange side effects through shared references.
Operator overloading is not limited to the current type; it can be implemented for any type. For multiplication and division, a double is used.

public static Price operator *(Price x, double y)
{
    return new Price(x.Value * y);
}

public static Price operator /(Price x, double y)
{
    return new Price(x.Value / y);
}

These are some very simple math operations on a Price value object. The key point is that the value object itself is responsible for handling all the operations.
Although this is very basic, it is a perfect scenario for some unit tests.
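A few of those unit tests could look like this, written as plain assertions against the Price class and operators defined above:

```csharp
// Plain-assertion sketch; assumes the Price class with the
// ==, !=, +, -, * and / operators from this post.
public static class PriceTests
{
    public static void Run()
    {
        // Equality is by value, not by reference.
        if (!(new Price(10) == new Price(10))) throw new Exception("10 == 10 expected");
        if (new Price(10) == new Price(11)) throw new Exception("10 != 11 expected");

        // Math operators return new instances with the computed value.
        if ((new Price(10) + new Price(5)).Value != 15) throw new Exception("+ failed");
        if ((new Price(10) - new Price(4)).Value != 6) throw new Exception("- failed");
        if ((new Price(10) * 2).Value != 20) throw new Exception("* failed");
        if ((new Price(10) / 2).Value != 5) throw new Exception("/ failed");
    }
}
```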

Providing data to JavaScript functions from code behind

In my previous post, I talked about JavaScript namespaces and functions.
When using WebForms, it can be difficult to call these functions from the code behind in a nice way.
It usually requires some data to initialize the JavaScript function, for example, providing some html element ids as trigger elements.
The ids of html elements are non-predictable (except when using static ids, but those bring in a whole bunch of different problems) and should be provided from the code behind to avoid the ugly <% %> syntax in your markup file.
I see a lot of people using a StringBuilder to write out a JavaScript object. This works, of course, but it is not the nicest way to do it, because you lose strong typing and intellisense.

I prefer to create a model and use a JavaScript Serializer to create a json object and provide that to a function.


public class SearchManagerOptions
{
    public string Url { get; set; }
}

.Net has its own serializer, the JavaScriptSerializer. It is a very simple and straightforward serializer and does the trick.


var options = new SearchManagerOptions { Url = "SomeUrl" };
var json = new JavaScriptSerializer().Serialize(options);

The result (json) will look like


"{\"Url\":\"SomeUrl\"}"

This is fine for most common cases.

If you want extra control over the serialization process, the JavaScriptSerializer will not be your friend, and I recommend switching to the serializer of json.NET. This is a third-party library (available on NuGet) which lets you control the serialization process by decorating your properties with attributes.
For example, when I want to conform to the JavaScript convention of camel-cased property names, I can use an attribute to change the output name of the property.


public class SearchManagerOptions
{
    [JsonProperty("url")]
    public string Url { get; set; }
}

The json.NET serializer is used as follows:


var options = new Models.SearchManagerOptions { Url = "SomeUrl" };
var json = JsonConvert.SerializeObject(options);

This results in:


"{\"url\":\"SomeUrl\"}"

That’s even better; we now conform to the standard!
Just play around with the json.NET library, it is full of nice serialization tricks. For example, you can exclude properties from serialization by adding the JsonIgnore attribute.

Now we need to call the JavaScript function and provide this data.
WebForms provides us two methods for injecting scripts; RegisterClientScriptBlock and RegisterStartupScript.
What is the difference? Good question! Both method signatures are the same; the difference lies in where the script is injected in the page.
RegisterClientScriptBlock injects the script at the top of the form element (as one of its first children). This means that none of the html elements are rendered yet. Remember that! None of your selectors will work unless you use a document ready event.
RegisterStartupScript injects the script at the bottom of the form element, which means all html elements are already rendered.
I usually go for RegisterStartupScript, because I think it is a cleaner solution to inject scripts at the end of your page. It is still injected inside the form element, but that is a limitation of WebForms.


var options = new Models.SearchManagerOptions { Url = "SomeUrl" };
var json = JsonConvert.SerializeObject(options);
var script = string.Format("ViCreative.Managers.SearchManager.init({0})", json);

if (!Page.ClientScript.IsStartupScriptRegistered("SearchManagerInitialization"))
{
    Page.ClientScript.RegisterStartupScript(GetType(), "SearchManagerInitialization", script, true);
}

I do not want this script to be injected more than once, which is why I check whether it is already registered. I would usually put “SearchManagerInitialization” in a constant, but for the clarity of this blog post I inlined the string.