Tuesday, 11 October 2016

Angular 2 - Polymorphic Component Container using ContentChildren

As I do a deep dive into Angular 2 I've been finding some amazing features, one which really stands out being the ContentChildren decorator. ContentChildren allows a component to access the child components that are placed between its selector tags. An example of such a setup is as follows:
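Something along these lines (`parent-component` and `child-component` are placeholder selector names for this sketch):

```html
<parent-component>
  <child-component></child-component>
  <child-component></child-component>
  <child-component></child-component>
</parent-component>
```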


Now, inside of the parent component, we can define the following:

@ContentChildren(ChildComponent) editors: QueryList<ChildComponent>;

ngAfterViewInit() {
    let myChildComponents = this.editors.toArray();
}

NOTE: you cannot access the child components inside of ngOnInit as they have not been resolved yet; ngAfterContentInit is the earliest hook in which the query is populated (ngAfterViewInit, as above, also works). ContentChildren and QueryList are both found inside of @angular/core.

The benefit of ContentChildren over AngularJS's traditional transclude is that the child components need not know about the parent component. This is great because a common design pattern is top down (for example, a tab component).

The real hidden gem here, though, is creating a system where you have an abstract base class for your child components. By doing so you can achieve a polymorphic system which lets you combine multiple like components under a single base class.
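Stripped of the Angular machinery, the core idea is ordinary TypeScript polymorphism: several concrete editors extend one abstract base, and the container only ever deals with the base type. A minimal sketch (FooEditor and BarEditor mirror the example editors used through the rest of this post):

```typescript
// Abstract base: the only type the container ever needs to know about.
abstract class BaseEditor {
  public value: string = '';

  constructor(private privateEditorName: string, public isSelected: boolean = false) {}

  get editorName(): string {
    return this.privateEditorName;
  }
}

// Two concrete editors; each only supplies its own display name.
class FooEditor extends BaseEditor {
  constructor() { super('foo editor'); }
}

class BarEditor extends BaseEditor {
  constructor() { super('bar editor'); }
}

// The container treats them uniformly through the base class.
const editors: BaseEditor[] = [new FooEditor(), new BarEditor()];
const names = editors.map(e => e.editorName);
console.log(names); // [ 'foo editor', 'bar editor' ]
```

The Angular-specific part of the trick, covered below, is getting the DI system to hand the container each concrete editor *as* a BaseEditor.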

Let's say, for example, you have a dropdown of question types that a user can choose to answer with. You could either couple all of the editors together and hard code them into the component that produces this part of your website, OR you could have a container component which accepts any number of different editors which are just placed in and magically work! I much prefer magic, so let's have a look at the markup that could achieve this:
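Something like this would do it (question-editor is the container component; foo-editor and bar-editor are the two example editors discussed below):

```html
<question-editor>
  <foo-editor></foo-editor>
  <bar-editor></bar-editor>
</question-editor>
```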


The contents of question-editor don't matter yet, so let's think about what information we would require from each of our editors (foo and bar). First, we need to be able to hide them when they are inactive, then get some sort of human-friendly name, and finally get the value that the form has extracted from the user. This contract can be fulfilled with the following abstract class (I'm using an abstract class over an interface because interfaces are compile-time only, and I need something injectable!):

export abstract class BaseEditor {
  public value: string;

  constructor(private privateEditorName: string, public isSelected: boolean = false) {}

  get editorName(): string {
    return this.privateEditorName;
  }
}

As you can see, this base class exposes a getter for the editorName, a value property and an isSelected flag (used to toggle visibility).

A component would then extend this base class like so:

import { Component, forwardRef } from '@angular/core';
import { BaseEditor } from 'app/editor/base-editor';

@Component({
  selector: 'bar-editor',
  template: `
  <div style="color: blue" *ngIf="isSelected">
    <p>bar editor</p>
    <input [(ngModel)]="value">
  </div>
  `,
  providers: [{provide: BaseEditor, useExisting: forwardRef(() => BarEditorComponent)}]
})
export class BarEditorComponent extends BaseEditor {
  constructor() {
    super('bar editor', false);
  }
}

As this editor is a test editor there is no complicated editor logic inside of the component, but in a real-world situation there would be. What makes this editor different from the foo editor is that it allows the user to enter an answer via an input field, as we can see inside of the template.

The other thing to notice here is that we are providing the Angular DI system an implementation of BaseEditor through use of the ExistingProvider provider (as seen inside of @Component's providers array). This provider basically tells Angular: if somebody asks for BaseEditor, I'm your man! As we are defining this provider at the component level, we don't have to worry about breaking the DI every time we define a new editor either, as the provider is scoped to this component and below.

Next, let's have a look at the question-editor:

import { Component, ContentChildren, QueryList, Output, EventEmitter } from '@angular/core';
import { BaseEditor } from 'app/editor/base-editor';

@Component({
    selector: 'question-editor',
    templateUrl: 'app/editor/editor.component.html' // unfortunately need full uri
})
export class EditorComponent {
    @Output() formValueChange: EventEmitter<string> = new EventEmitter<string>();
    @ContentChildren(BaseEditor) editors: QueryList<BaseEditor>;

    onQuestionChange(newQuestion: string) {
        // reset editors
        this.hideAllEditors();

        let editor = this.editors.filter(editor => editor.editorName === newQuestion)[0];

        if (editor == null) {
            throw new Error(`Cannot find question editor for: ${newQuestion}`);
        }

        editor.isSelected = true;
    }

    onClickSubmit() {
        let currentEditor = this.editors.filter(editor => editor.isSelected)[0];
        console.log(`submitting: ${currentEditor.value}`);
    }

    private hideAllEditors(): void {
        this.editors.forEach(editor => {
            editor.isSelected = false;
        });
    }
}

The part to take note of is the @ContentChildren. As you can see, we request all children of this component which have the type BaseEditor. Both FooEditor and BarEditor have set up their DI to point all requests for BaseEditor to themselves, so as Angular scans over the components, each editor is picked up as a BaseEditor and placed inside the editors query list. The remainder of the code in the component is used to hide, display and gather input from the editors.

The html for this component is as follows:

    <select (change)="onQuestionChange($event.target.value)">
        <option *ngFor="let editor of editors" [value]="editor.editorName">{{ editor.editorName }}</option>
    </select>

    <ng-content></ng-content>

    <button (click)="onClickSubmit()">submit</button>

As you can see, we are using our list of editors to create options for our select, then displaying the projected editors inside of our component via ng-content.

In summary: @ContentChildren is an amazing new tool for every Angular developer's utility belt. It allows for creating top-down architectures and, with some DI wizardry, it also allows for polymorphic designs!

Working example over here on Plunker 

Thursday, 18 February 2016

Getting MVC 6 and .Net Core running on Ubuntu in 2016

I thought I would make my first blog post in a while about something I have been learning recently: .NET Core, more specifically on Ubuntu 14.

First things first, open a terminal! Personally I run all these commands under the root user, so I will be using 'sudo su' so that I don't have to enter sudo every time I need super user privileges (if you don't, you will need sudo in front of almost all of the commands). I have also found that permissions between the root user and the local user can cause issues if you don't install everything against ONE user. If you use 'sudo su' you will have to use the root account for everything you install through apt-get, but you will still have to run the packages you install via npm under your own account, as root won't have access (this is important for Yeoman).

Installing .Net Core

The following will walk you through the preparation of your system.

Installing DNVM

The next step is to enter some commands that will download and install DNVM. DNVM stands for Dot Net Version Manager and is the tool used to set and manage the .NET runtimes on your machine. For more information check out the aspnet documentation on DNVM.

      ~$ sudo su
      ~$ apt-get install unzip curl
      ~$ curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

After running the above commands you can test that the installation worked by typing 'dnvm', which should print the DNVM banner and usage information.

If it does not, look for error messages and/or run 'source ~/.dnx/dnvm/dnvm.sh' yourself (this should happen automatically though). If you would like to find out more about DNVM you can also type 'dnvm -h' to bring up the help.

Installing DNX and Core CLR

The following commands will download the prerequisites of the DNX and then install the Core CLR. The Core CLR is the new open source implementation of .NET which has been built from the ground up for building .NET apps cross-platform, and it has been cloud optimised. It is lightweight and highly decoupled. Unlike .NET 4 and Mono it is installed per app rather than per system; this per-app design is great as it will allow devs in the future to install new apps to their servers without having to upgrade all their old apps, and without the system admin's input!

      ~$ apt-get install libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
      ~$ dnvm upgrade -r coreclr

The first line installs the prerequisites and the second instructs DNVM to either install the latest Core CLR or update to the latest; you MUST include '-r coreclr' as it tells DNVM you don't want to upgrade the default Mono CLR. At this time you could also install Mono, but I have chosen not to as this blog post is about the Core CLR.

To verify that you have the core clr installed and to check what runtime is active, type 'dnvm list'

Two things to note:
  1. Active: shows you which runtime is currently in use.
  2. Alias: when you upgrade, a default alias is automatically added; if you have multiple runtimes you can use the 'dnvm use' command with '-p' to switch the active runtime, remove the current default alias and set it to the newly selected one.

Installing Kestrel requirements

When we eventually create an MVC 6 website we will need something to run it! So I have opted to use the default web server that .NET Core is provided with, Kestrel.

To run Kestrel, we will need to install libuv, a multi-platform asynchronous IO library!

      ~$ apt-get install make automake libtool curl
      ~$ curl -sSL https://github.com/libuv/libuv/archive/v1.8.0.tar.gz | sudo tar zxfv - -C /usr/local/src
      ~$ cd /usr/local/src/libuv-1.8.0
      ~$ sh autogen.sh
      ~$ ./configure
      ~$ make
      ~$ make install
      ~$ rm -rf /usr/local/src/libuv-1.8.0 && cd ~/
      ~$ ldconfig

The above installs the build tools and curl, then downloads the libuv source, configures it, compiles it and finally installs it. ldconfig is used to update the ld.so.cache so that dlopen can load the new library.

Congratulations! You now have a working .net environment on your system! What's next? Actually making something!

MVC 6 with Yeoman scaffolding!

To use Yeoman to scaffold the default project that we have all grown to love from Visual Studio, we first need to install a few things, namely node, npm, yo and generator-aspnet!

Installing Node.js and NPM

Installing node is simple enough; there are just a few apt-get commands to run and you're good!

      ~$ apt-get install nodejs
      ~$ apt-get install npm
      ~$ ln -s /usr/bin/nodejs /usr/bin/node

What this does is fairly self explanatory: install nodejs and npm, then add a symbolic link so we can use 'node' in the terminal. From here we can enter 'node -v' and 'npm -v' to confirm the installation.

Updating NodeJs

The next step is optional, but if your system already had nodejs installed, making sure you have the latest version is a good idea!

      ~$ npm cache clean -f
      ~$ npm install -g n
      ~$ n stable

The first line cleans out the npm cache and the next installs the n package (a node helper) globally (-g). From there we run 'n stable' to tell the helper to update node to the latest stable release; you could also specify a particular version instead of stable. Now when you type 'node -v' you will find your node is up to date!

Installing Yeoman and the aspnet generators

Almost on the home stretch now! All that is left is to install Yeoman and the aspnet generators.

      ~$ npm install -g yo
      ~$ npm install -g generator-aspnet

This will install Yeoman and the aspnet Yeoman generators globally.

Scaffolding the MVC Starter Template

All that is left now is to actually run the Yeoman aspnet generator! Today we will be creating the default MVC web template, the one you get when you choose the default authentication in Visual Studio.

First, type 'yo aspnet' into the console. You may be presented with an error! Oh no! Don't worry; if you see this there is a good chance it is because you are still under the root account, and you just need to type 'exit' to go back to your user's session. I don't know why 'yo' won't work for me under the root account, but the fix is easy enough. Try again and this time you will be presented with the generator's menu (after choosing whether or not to contribute usage data):

From here you can use your arrow keys to navigate the menu; choose Web Application.
Then choose a nice name (I will be using 'HelloMvcUbuntu').

All that is left to do is follow the instructions it gives you. If you ran the installation under the root (sudo su) account you will need to log in to the super user account again (type 'sudo su'):

      ~$ cd "HelloMvcUbuntu"
      ~$ dnu restore 
      ~$ dnu build (optional, build will also happen when it's run)
      ~$ dnx web

This will move you into the newly created directory and restore all of the NuGet packages required, as documented inside project.json. If you want to have a look around the directory, either use 'ls' and 'cd' OR use 'xdg-open .' to open the current directory in the Ubuntu file explorer.

If you want to be able to run 'dnu build' you will need to make a modification to project.json. Inside the frameworks section you will see 'dnx451', which is actually only available on Windows, so you can either remove it or add mono there if you have it installed.
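Assuming you take the removal route, the frameworks section of project.json would end up looking something like this (the exact Core CLR target moniker may differ between releases):

```json
"frameworks": {
  "dnxcore50": { }
}
```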

After running all of those commands, congratulations! Your ASP.NET website is now running on http://localhost:5000.

Just as a heads up: in the next release candidate (rc2) and all future versions, the two commands dnu and dnx are merging into the single command line tool 'dotnet', so instead of the above commands you would use:

      ~$ cd "HelloMvcUbuntu"
      ~$ dotnet restore 
      ~$ dotnet build (optional, build will also happen when it's run)
      ~$ dotnet run

I hope this helps somebody out there. If you have any questions or feedback please feel free to leave them in the comments below.


Tuesday, 4 November 2014

Downgrading from Visual Studio 2013 Premium to Professional - The easy way

Visual Studio

Due to a misunderstanding of the MSDN Subscriber download site, I accidentally downloaded and subsequently used up the trial of Premium. This was a real annoyance, as the internet connection I am currently on doesn't take too kindly to downloading 600 MB .iso files. It was going to take 12+ hours to complete!

This was unacceptable as I had work to get done. So I tried the other option, the web installer (traditionally I avoid these as I am a hoarder of installers, so I don't have to re-download). This is where the awesomeness happened... It didn't have to download anything! (Well, at least it doesn't look like it had to.) The acquiring progress bar went through a lot faster than it should have, and it only took 10-20 minutes to install, which from previous memory is the same install time as via the .iso!

This speed increase (from 12+ hours to 20 minutes) could be caused by either:
A. The network policies favor the method the web installer used (don't think so)
B. The download speed on the .iso from Microsoft is capped at 100kb/s (I hope not)
C. It re-used my premium's install data to not have to download everything again (What I think happened as it downloaded to the same folder as Premium and I couldn't launch it until I uninstalled Premium)

Then all I had to do was uninstall Premium from Add/Remove Programs (both Premium and Professional showed up), enter my key from the MSDN Subscriber portal and BAM! I had my Visual Studio up and running again. Also, as an extra spurt of luck, I didn't lose my recent projects. This is only a small thing, but I really was dreading un-installing and potentially losing my file history.

I would love to hear in the comments from anyone else who was having this problem and if my solution had the same outcome for them. Or if you know how the web installer works, maybe either confirm or deny my theory.

Thanks for reading!

Monday, 3 November 2014

Quick and easy Console Logging - Trace


Just a quick blog post!

I was wanting to get the output of my Console.Write calls into a log file but still show them on the console! So off to Google I went and I found this amazing solution via StackOverflow!

I thought that I would pass on the love with a slightly modified version (cleaned up, with a simplified output location):

      /// <summary>
      /// Initiates a Tracer which will print to both
      /// the Console and to a log file, log.txt
      /// </summary>
      private static void InitiateTracer()
      {
        var twtl = new TextWriterTraceListener("log.txt")
        {
          Name = "TextLogger",
          TraceOutputOptions = TraceOptions.ThreadId | TraceOptions.DateTime
        };
        var ctl = new ConsoleTraceListener(false) { TraceOutputOptions = TraceOptions.DateTime };

        Trace.Listeners.Clear();
        Trace.Listeners.Add(twtl);
        Trace.Listeners.Add(ctl);
        Trace.AutoFlush = true;
      }

The next step is to use replace (Ctrl+H) to change 'Console.Write(' to 'Trace.Write(' and 'Console.WriteLine(' to 'Trace.WriteLine('. After this it probably will not build; that will be because you need to add 'using System.Diagnostics;' to the top of the files where you replaced Console with Trace. The next error you might get is a bit less likely: if you have been using Console.WriteLine's built-in String.Format-style overloads, you will have to call String.Format yourself, as Trace doesn't do it automagically :(

Then simply call InitiateTracer() from within your console application's Main and voilà, you have your normal console output plus output to a log.txt file in the same path as your executable, so you can check over your logs at a later time!

Hope this helps :) Thanks once again to the author of the original StackOverflow answer!

Tuesday, 7 October 2014

Knockout.js easy line clamping

After spending an annoyingly long time (more than I would have liked) I found a rather neat way to incorporate line clamping into my project!

My first need for line clamping was a 1 line header in my web page. It didn't look too dandy once the text flowed over it :(

This was solved rather easily with some simple CSS:

 span.panelTitle {  
   display: inline-block; /* so height and width actually take effect */  
   max-width: 50%;  
   overflow: hidden;  
   -moz-text-overflow: ellipsis;  
   text-overflow: ellipsis;  
   white-space: nowrap; /* important to show ellipsis, or words will just be broken off */  
 }  

 h3 span {  
   height: 1em; /* so overflow hidden works and keeps text on one row */  
 }  

(thanks to http://www.answerques.com/s1NigUgSgePU/dynamic-maximum-width-for-text)

which worked amazingly... but only for one line...

It seems that text-overflow doesn't really work too well for multi-line situations.

After doing a fair amount of research into how to get a multi-line solution working, I found clamp.js. It worked great! The only problem was that I was dynamically creating new notifications with Knockout, and these new notifications needed clamping!

I tried subscribing to the observable array's change event, but unfortunately that didn't work too well because the inner HTML hadn't been rendered when the subscription fired; plus I would be re-clamping every item (1000+ clamps every time a notification came in could be problematic!).

My next attempt was to attach myself to the afterAdd binding of foreach. That didn't work too well either, as only the root element of the notification had been created at that point, but I was clamping a child node (which hadn't been populated yet).

Being new to Knockout I had never made my own binding, which is why it didn't cross my mind. But when I tried and failed to use the foreach binding's afterAdd, I realised: why not make my own binding?

I then found that it was trivial to do. I simply created a new js file, binding-clamp.js, containing the following few lines of code:

 ko.bindingHandlers.ellipseOutput = {  
   update: function (element, valueAccessor) {  
     // access ellipseOutput's value, this will contain the desired line count  
     var linesToDisplay = ko.unwrap(valueAccessor()); // edit: thanks Patrick Steele for update  
     $clamp(element, { clamp: linesToDisplay });  
   }  
 };  

and put my custom data binding inside of my notification panel:

 <div class="container" style="margin-top: 10px;" data-bind="foreach: notifications">  
   <div class="col-lg-4 col-md-6 col-sm-6 col-xs-12 notification">  
     <div class="panel panel-primary shadowBorder">  
       <div class="panel-heading">  
         <h3><span class="glyphicon glyphicon-heart-empty"></span> <span class="panelTitle" data-bind="text: title"></span><button class="btn btn-danger pull-right" data-bind="click: $parent.removeNotification">&times;</button></h3>  
       </div>  
       <div class="panel-body">  
         <span data-bind="text: content, ellipseOutput: 3"></span>  
       </div>  
     </div>  
     <textarea cols="55" rows="5" data-bind="value: content"></textarea>  
     <div data-bind="text: content"></div>  
   </div>  
 </div>  

Now, whenever I add or remove notifications, the text is automatically clamped!!!

After creating this I wondered if anybody else had ever come across this issue and made a solution, and I found another implementation during my research. That solution looks a lot more thought through and backwards compatible (I am using Knockout 3.2), but a lot of the code is to do with saving the original text, as clamp.js is destructive.

I then frowned, thinking: great, the data in my View Model will now contain "..." whenever it's too long.

But when I created a textarea whose value was bound to my view model's data, plus a div repeating the data out without clamping, I found that the data in the VM hadn't been messed with.


This must be because the data-bind I used was a text binding, which is one-way: Knockout pushes the view model's value into the DOM, but the changes clamp.js makes to the DOM never flow back into the view model.

The result ended up looking perfect: the top right notification had its content clamped (via clamp.js) and the bottom right notification had a clamped title (via CSS).


I hope this blog proves to be useful for any other people out there learning Knockout.js.
I plan to make blog posts more regularly, so feel free to subscribe to my RSS through the button up top or here!

Thanks for reading.

Friday, 1 March 2013

2010 -> 2011 Murdoch University

In the first two years at Murdoch University during my Bachelor of Science (Computer Science and Games Technology) I was taught many things, including C, C++, C#, Java, assembly, applied mathematics, systems analysis, databases (SQL), OpenGL and computer graphics applications such as Autodesk 3DS Max for modelling, texturing and rendering.

My first two semesters comprised an introduction unit, in which we were taught how to write documents at a university standard, plus several programming units. In the first semester's units we were taught non object oriented programming, including C. C is a brilliant first language for all would-be programmers; it is very fast and powerful AND will also blow up in your face if you do anything wrong! Learning through pain is a great way to learn: if you don't design the system with the proper mindset, it won't work. This is vital because in newer languages such as Java or C++ you can code on the fly with little design and not worry too much, because the modular nature of OOP makes it easy to replace parts of your code. BUT as a system gets bigger, design becomes VERY important in an OOP project, thus the harshness of C teaches good planning and design!

I also took two maths units in the first semester, Computational Mathematics and Applied Mathematics. The first, Computational Mathematics, was mostly comprised of maths I had done in high school and therefore wasn't too difficult. It did teach me, though, how to use programs such as MATLAB to bring computers into my solutions. The second unit taught new mathematics which was useful in my third-year games units, including imaginary numbers and integration!

In the first year I had my first Games Technology unit, Introduction to 3D Graphics and Animation. In this unit we were taught organic and non-organic modelling techniques; you can see images and videos from this unit below. We were also taught camera operation inside 3DS Max and rendering techniques. Autodesk 3DS Max was still useful during my non-modelling units: I used it twice to create a tool for placing objects in a 3D OpenGL scene through an XML level loader which contained data such as position, rotation and scale. This allowed for easy and fast level prototyping using 3DS Max's highly refined placement tools.

My second year comprised object oriented programming, games software development, systems analysis and databases. This was the first "real" year of university; it was the first year in which my time management skills were put to the test and shown to not be fully developed yet. The reason for this sudden jump in both difficulty and workload was my first games assignment and project.

In this first games technology assignment I was given the task of creating a 3D world containing at least one simulation of the physical world, written in C. I chose to create a water fountain set in a back yard, built through use of structures and iteration. It was very basic and had many flaws, although it was my first ever OpenGL project, so as much can be expected. Please note that when the video pauses, this is due to a right click menu being used; it's just not viewable.

The two other units for the first semester taught me systems analysis and object oriented programming in C++ (the previous semester we did Java, but only for a little bit). C++ was the first language I took a great interest in, due to the sheer power it makes available. We were put to the task of creating a binary search tree and then integrating one into a book management system for a library.

The second semester's games technology project was the first project which required teamwork. In this unit I was grouped with two other people from the same unit; one of them and myself took on the majority of the programming, and the other person took on texturing and modelling. We used the agile development methodology SCRUM, and we followed it alright for the first few weeks, but soon we stopped following formal SCRUM and started to fall behind because of it. The project turned into two weeks of hell at the end when we realised just how far behind we were... Although in the end it worked, and you can see the demo above. I worked on the terrain, physics and AI. Unfortunately I am unable to find out which version of the OpenGL/glut framework I was using, as it will not compile no matter what I do, so no demo is available.

In the second semester we also did Artificial Intelligence, in which I created an ant simulator... In this project ants navigate around an area looking for food, laying scent tracks which they then use to return to the hole or to find more food... The ants are able to navigate around objects and, given enough time, will find the food and return it even if the food is hidden in a maze!! I also completed a database unit in which I used Oracle SQL.

Next up I will discuss my last year of uni @ Murdoch!

ICT215 - fountain
executable: http://www.filedropper.com/executable
source: http://www.filedropper.com/source

ICT219 - antsim
source/exe: http://www.filedropper.com/antsim
online demo 1: http://www.megaswf.com/serve/1188283/
online demo 2: http://www.megaswf.com/serve/1188284/
online demo 3: http://www.megaswf.com/serve/1188285/

Monday, 31 December 2012

restaurant order system

MySQL and Android!
Recently, whilst having lunch with a friend at a restaurant, the question came up of why you can't create separate orders at the table when you decide on your meals: the ability to split the table's order into portions so that you can keep track of your own meal, and not have to factor the cost of the other people at the table into your own subtotal until you finally go to the cashier to pay.

This sparked an idea in my head to create my own order management system running off MySQL and Android devices. The main idea was that the waiter would have an easy to navigate database of all the items on the patrons' menus. Later on, a friend of mine (who has worked in waiting) told me that the reason this doesn't happen is purely lack of time; it takes too long to organise several separate orders for a single table. This time constraint then led to another idea... Why not have an application which guests can install (or run in their mobile browsers) which lets them look through an electronic menu and create an order for themselves? The customer gets their order fully organised from their device, then when the waiter comes around they just cite their reference number and their order is added to the table, either under the whole table's bill or a partial bill which is separate from the main one (as in, person A has their bill and so does person B).

I soon realised that this opens up tons of other possibilities, so I started out on creating a MySQL-based Android application for restaurants. In my first build I have a working Windows Server 2012 machine running MySQL/PHP/Apache and an application which can connect to the database, look at the available data and create new entries!!

Make sure to follow my Facebook/Twitter (above) to keep up to date on my progress in creating this system. I hope to keep everyone posted!!