Showing posts with label angularjs.

Saturday, June 10, 2017

Stuffing Angular into a Tiny Docker Container (< 2 MB)

I’ve been doing a lot of work with Angular and Docker lately. In one of my workshops I demonstrate how to take an Angular app and its related services, package them as containers, and leverage Docker Compose to spin them up with a single command. You can read about a more involved example at Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker.

Using the nginx container, my Angular images average several hundred megabytes in size.

[Image: Docker image listing showing nginx-based Angular images at several hundred megabytes each]

Microservices have transformed the way modern apps are architected. Angular, for example, can produce a highly optimized production distribution of its assets as static files. The web tier for an Angular app can be incredibly simple because separate services and APIs do the heavy lifting, and any browser-based logic is encapsulated in the bundled JavaScript.

Therefore, standing up the front end should be a lot easier! Even with a minimal web server, containers are easy to spin up and scale out, so load no longer has to be the web server's concern when the orchestrator can manage it. As an experiment, I set out to see how small I could make an Angular app.

If you have Docker installed, you can run the tiny Angular app yourself. Instructions are available at this link.

In searching for a solution I came across BusyBox, a set of Unix commands in an extremely small container that is around a megabyte in size. BusyBox contains httpd, an HTTP daemon. Let’s see how small we can make our Angular app!

The app we’ll target is the simple Angular app I built for Music City Code. You can clone the repo from GitHub here. When you build the app, it creates a simple interactive fractal app using a bifurcation diagram.

Let’s build an optimized distribution of our Angular app. This generates about 400kb of assets.
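With the Angular CLI of that era, the production build was generated with a command along these lines (flag names changed in later CLI versions):

```
ng build --prod
```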

[Image: Angular CLI production build output]

Next, create a Dockerfile with these contents:

FROM busybox:latest
RUN mkdir /www
COPY /dist /www
EXPOSE 80
CMD ["httpd", "-f", "-p", "80", "-h", "/www"]

The steps are straightforward. It pulls down the latest busybox image, creates a directory, copies the Angular assets into the directory, exposes the web port and instructs the container to run the httpd daemon on startup.

Add a .dockerignore and ignore the src, e2e, and node_modules directories.
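The .dockerignore keeps the large source and dependency trees out of the build context, and can be as simple as:

```
src
e2e
node_modules
```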

Let’s build our container:

[Image: docker build output]

To see the image that was built, you can type:

docker images | grep "tinyng"

(Replace grep with find on Windows machines.) On my machine the image is 1.57 MB.

Running it and browsing to localhost proves it is working:

docker run -d -p 80:80 jlikness/tinyng

So I push to Docker Hub, and confirm the size there:

[Image: the image size confirmed on Docker Hub]

The experiment is finished. Success! Of course now we have to try it under load and scale, but it’s good to know there is a path to optimize the size of your containers!

Wednesday, June 7, 2017

Music City Code 2017

This was my first year to attend the Music City Code conference in “the Music City” Nashville, Tennessee. It was held on the beautiful Vanderbilt University campus, where they advertise a 3-to-1 squirrel to student ratio. I imagine it was about 5-to-1 squirrels to conference attendees.

[Image: the Vanderbilt University campus]

I presented two talks there, the last talk of the first day and the first talk of the last day.

Rapid Development with Angular and TypeScript

This is a talk focused on demonstrating just how powerful Angular, TypeScript, and the Angular CLI are for rapidly building apps. Don’t be scared by the 360 photo; if you can’t see the audience, just scroll it around.

The first half of the talk focused on the features Angular provides, as well as an overview of TypeScript. The second half was a hands-on demo building an app using services, settings, rendering, data-binding, and a few other features.

You can access the deck and source code here.

Contain Your Excitement

The next talk focused on containers. In true “Inception” style, the container talk itself can be run from a container.

The talk briefly covers the difference between “metal” in your data center and Docker containers, then walks through building simple containers and evolves to multi-stage containers, using Docker compose, and an overview of orchestrators like Kubernetes.

As part of the talk, I took a 360 degree picture with my Samsung Gear 360 (I have the older model, there is a newer 2017 version). I updated the source with the embed link, then synced my changes to GitHub. This triggered an automated build that prepared a Node.js environment, ran unit tests, then built and published the Docker image to demonstrate the continuous integration and deployment pipeline that is possible with containers.

You can access the source code here and run the container from here.

Parting Thoughts

As far as conferences go, this is one of my favorites. There were great people, terrific speakers, friendly and helpful volunteers, and a fun venue. The food was awesome and speakers were able to stay in the dormitories on campus which was a fun experience. I definitely look forward to coming back in future years!

Sunday, April 2, 2017

Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker

Docker is a pretty amazing tool.

To prove it, I want to show you how you can build, deploy, and stand up an N-tier Angular app backed by MongoDB in just three steps. Literally. Without installing any prerequisites other than Docker itself.

[Image: architecture diagram of the USDA microservices app]

First, seeing is believing. Once you have Docker installed (OK, and git, too), type the following commands:

git clone https://github.com/JeremyLikness/usda-microservice.git

cd usda-microservice

docker-compose up

It will take some time for everything to spin up, but once it does you should see several services start in the console. You’ll know the application is ready when you see it has imported the USDA database:

[Image: console output showing the USDA database import]

After the import you should be able to navigate to localhost and run the app. Here is an example of searching for the word “soy” and then tapping “info” on one of the results:

[Image: the web app showing search results for “soy” and nutrient info]

On the console, you can see the queries in action:

[Image: console output of the queries in action]

How easy was that?

Of course, you might find yourself asking, “What just happened?” To understand how the various steps were orchestrated, it all begins with the docker-compose.yml file.

The file declares a set of services that work together to define the app. Services can depend on each other and often specify an image to use as a baseline for building a container, as well as a Dockerfile to describe how the container is built. Let’s take a look at what’s going on:

Seed

The seed service specifies a Dockerfile named Dockerfile-seed. The entire purpose of this image is to import the USDA data from flat files along with some helper scripts, then expose the data through a volume so that it can be imported into the database. It is based on the existing lightweight Ubuntu Linux image.

Containers by default are black boxes. You cannot communicate with them and are unable to explore their contents. The volume command exposes a mounting point to share data. The file simply updates the container to the latest version, creates a directory, copies over a script and an archive, then unzips the archive and changes permissions.

db

The db service is the backend database. It inherits from the baseline mongo image that provides a pre-configured and ready-to-run instance of mongodb.

You’ll notice a command is specified to run the shell script seed.sh. This is a bash script that does the following:

  1. Launch mongodb
  2. Wait until it is running
  3. Iterate through the food database files and import them into the database
  4. Swap to the foreground so it continues running and can be connected to

At this point, Docker created an interim container to stage data, then used that data to populate a mongo database that was created from the public, trusted registry and is now ready for connections and queries.

descriptions

The next container has a directory configured for the build (“./descriptions”), so you can view the Dockerfile in that directory to discern its steps. This is an incredibly simple file. It leverages an image from node that contains a build trigger. This allows the image definition to specify how a derived image can be built.

In this instance, the app is a Node app using micro. The build steps simply copy the contents into the container, run an install to load dependent packages, then commit the image. This leaves you with a container that will run the microservice exposed on port 3000 using node.
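Based on that description, the Dockerfile for descriptions is likely as short as something like this (the exact node tag is an assumption; the onbuild variant's triggers copy the source and run npm install when a derived image is built):

```
FROM node:6-onbuild
EXPOSE 3000
```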

Going back to the compose file, there are two more directives, “links” and “ports”. In this configuration, the mongodb container is not available outside of the host or even accessible from other containers, because no ports are exposed as part of its definition.

The “link” directive allows the microservice to connect with the container running the database. This creates a secure, internal link – in other words, although the microservice can now see the database, the database is not available to any other containers that aren’t linked and not visible outside of the Docker host.

On the other hand, because this service will be called from an Angular app hosted in a user’s web browser, it must be exposed outside of the host. The “port” directive maps the internal port 3000 to an external port 3000 so the microservice is accessible.

This service exposes two functions: a list of all groups that the user can filter, and a list of descriptions of nutrients based on search text.

nutrients

Nutrients is another microservice that is setup identical to descriptions. It exposes the individual nutrients for a description that was selected. The only difference in configuration is that because it runs on the same port (3000) internally, it is mapped to a new port (3001) externally to avoid a duplicate port conflict.
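The links and ports directives for the two microservices can be sketched as follows (an abbreviated sketch; the exact contents live in the linked repo):

```yaml
services:
  db:
    image: mongo        # no ports entry: mongo stays private to the host
  descriptions:
    build: ./descriptions
    links:
      - db              # visible to this service only, via the link
    ports:
      - "3000:3000"     # external:internal, reachable from the browser
  nutrients:
    build: ./nutrients
    links:
      - db
    ports:
      - "3001:3000"     # same internal port, different external port
```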

ngbuild

This image points to an Angular app and is used as an interim container to build the app (in production deployments it is more common to have a dedicated build box perform this step). I included this to demonstrate how powerful and flexible containers can be.

Inside the Dockerfile, the script installs node and the node package manager, then the specific version of the angular-cli used to build the app. Once the Angular CLI is installed, a target directory is created. Dependent packages are installed using the node package manager, and the Angular CLI is called to build a production-ready image with ahead-of-time compilation of templates. This produces a highly optimized bundle.

The compose file specifies a volumes directive that names “ng2”. This is a mount point for sharing storage between containers. The ngbuild service mounts “ng2” at “/src/dist”, which is where the build output lands.

web

Finally, the web service hosts the Angular app. There is no Dockerfile because it is completely based on an existing nginx image. The “ng2” mount points to “/usr/share/nginx/html” which is where the container serves HTML pages from by default. The “ng2” shared volume connects the output of the build from ngbuild to the input for the web server in web.

This app uses the micro-locator service I created to help locate services in apps. The environment.ts file maps configuration to endpoints. This allows you to specify different endpoints for debug vs. production builds. In this case the root service is mapped to port 3000, while nutrients is mapped to the root of port 3001.

Even though the services are running on different nodes, the micro-locator package allows the code to call a consistent set of endpoints. You can see this in the descriptions component that simply references “/descriptions” and “/groups” and uses the micro-locator service to resolve them in its constructor.

They are mapped to the same service in configuration, but if groups were later pulled out to a separate endpoint, the only thing you would need to change is the configuration of the locator itself. The end code remains the same.

The standard web port 80 is exposed for access, and the service is set to depend on descriptions so it doesn’t get spun up until after the dependent microservices are.

Summary

The purpose of this project is to demonstrate the power and flexibility of containers. Specifically:

  1. Availability of existing, trusted images to quickly spin up instances of databases or node-based containers, for example
  2. Least privilege security by only allowing “opt-in” access to services and file systems
  3. The ability to compose services together and share common resources
  4. The ease of set up for a development environment
  5. The ability to build in a “clean” environment without having to install prerequisites on your own machine

Although these features make development and testing much easier, Docker also provides powerful features for managing production environments. These include the ability to scale services out on the fly, and plug into orchestrators like Kubernetes to scale across hosts for zero downtime and to manage controlled roll-outs (and roll-backs) of version changes.

Happy containers!

Monday, January 9, 2017

Comprehensive End-to-End Angular 2 with Redux and Kendo UI DevOps Example

I know, the title is a mouthful. But it describes exactly what I’m sharing! Before and during the Christmas break I created a project to illustrate how to leverage Redux in Angular 2 apps. As part of the process I integrated Kendo UI and created a full DevOps solution with continuous integration and automated gated deployments to a Docker host on Azure.

To start with, read about Improving the State of your App with Redux.

[Image: Redux data flow diagram]

After you get the gist of how and why the app was built, learn how I set up DevOps Continuous Deployment with Visual Studio Team Services and Docker.

[Image: the container deployment stack]

Finally, a common question related to Angular 2 is how to test it. On this blog, learn about Integrating Angular 2 Unit Tests with Visual Studio Team Services using PhantomJS and JUnit to report back test results.

[Image: unit test results reported in Visual Studio Team Services]

Even if you don’t use VSTS for your automation, many of the processes and steps described in this triad of articles will help you build your own deployment pipeline.

Happy DevOps in 2017!

Thursday, December 8, 2016

Integrating Angular 2 Unit Tests with Visual Studio Team Services

DevOps focuses on continuous delivery of value by removing barriers between application development and operations teams. A crucial component of the DevOps pipeline is continuous integration (CI). CI is the process of automating a build and tests to ensure a stable code branch. Traditionally this has been difficult to achieve in web-centric and Single Page Applications (SPA) that focus on the front-end, but modern libraries and tools make it a lot easier to “chase the unicorns.”

[Image: JUnit test results integrated into VSTS]

I recently published an application to GitHub that features Angular 2, Redux, and Kendo UI. It also continuously integrates and deploys to a Docker host. You can view the running app here. It is run as a Docker container, managed by a Docker host, on an Ubuntu (Linux) server hosted in Azure.

Although the source is hosted on GitHub, I connected the repository to Visual Studio Team Services (VSTS) for continuous integration and deployment. The app has over sixty (60) automated tests that are run as part of integration. It is critical to ensure that all tests pass before the Docker image is created and deployed to production. It is also important to see the detailed results of tests so that broken builds can be quickly triaged and addressed.

Good Karma

I used the Angular Command Line Interface to scaffold the project. Out of the box, the Angular CLI leverages Jasmine to define tests and Karma as its test runner. A Jasmine test might be a simple unit test based on pure JavaScript:
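The post's embedded example is not preserved in this archive; the following sketch shows the shape of such a test. The `describe`/`it`/`expect` stand-ins are minimal shims so the sketch runs standalone; in the real project these globals come from the Jasmine framework:

```typescript
// Minimal stand-ins for Jasmine's globals (provided by the framework in a real project).
function describe(name: string, fn: () => void): void { fn(); }
function it(name: string, fn: () => void): void { fn(); }
function expect(actual: unknown) {
  return {
    toEqual(expected: unknown): void {
      if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
    }
  };
}

// A pure-JavaScript unit under test.
const add = (a: number, b: number): number => a + b;

// The Jasmine-style test itself.
describe('a pure function', () => {
  it('adds two numbers', () => expect(add(2, 2)).toEqual(4));
});
```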

Angular 2 provides a test service that enables integration-style tests that interact with actual web components:

Either way, the tests “as configured” aren’t good enough for an automated build via VSTS for two reasons:

1. They depend on a browser to host the tests (Chrome, by default) that isn’t available on the build server.

2. They only generate output to the console and don’t create a file that can be parsed for test results.

The Phantom Browser

The first step is to get rid of the browser dependency. Fortunately, there is a project that provides a “headless” browser, one that runs without rendering “real” UI: PhantomJS. To include it in my project, I issued the following command:

npm i phantomjs-prebuilt --save-dev

This creates a development dependency on the pre-built version of PhantomJS so that the project can pull it down and install it as a dependency. It adds the reference to the project’s package.json file.

The next step is to add a launcher to Karma. These packages help link Karma to browsers so Karma is able to launch the host to run the tests. The Karma launcher is installed like this:

npm i karma-phantomjs-launcher --save-dev

Finally, you need to edit the karma.conf.js configuration file to include the launcher:
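The original embedded config isn't shown here; the relevant addition looks roughly like this sketch (the rest of the CLI-generated file is unchanged):

```javascript
// karma.conf.js (excerpt)
module.exports = function (config) {
  config.set({
    // ...existing Angular CLI settings...
    plugins: [
      require('karma-jasmine'),
      require('karma-phantomjs-launcher') // registers the PhantomJS browser
    ],
    browsers: ['PhantomJS']
  });
};
```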

Now you verify the setup by running the tests through PhantomJS:

ng test --browsers=PhantomJS

You should see the same output you normally see from Chrome, with the exception that no external browser is launched.

Note: some build machines may require you to install additional prerequisites for PhantomJS. For example, Ubuntu requires additional font libraries to be installed.

JUnit of Measure

The next requirement is to generate output that can be parsed by the build. Karma uses reporters to provide test results, and ships with a “progress” reporter that writes test results out to the command line.

[Image: Karma progress reporter output on the command line]

By default, VSTS is able to process JavaScript unit tests in the JUnit format. Karma has a JUnit reporter that can be installed:

npm i karma-junit-reporter --save-dev

This can be added to the Karma config file the same way the PhantomJS launcher was. Now you can run tests using the --reporters=junit flag and the test run will generate a file named TESTS-browser_(platform).xml. For example, a local run on Windows 10 creates TESTS-Chrome_54.0.2840_(Windows_10_0.0.0).xml. If you open the file, you’ll see XML that defines the various test cases, how long they ran, and even a structure that holds the console output.

Configuring VSTS

I assume you know how to configure builds in VSTS. If not, check out the full CI/CD article. The build steps I created look like this:

[Image: the VSTS build steps]

The first step ensures the Angular Command Line interface is installed on the environment. The package manager command is install and the arguments are:

-g angular-cli@&lt;version&gt;

(This is the version the project was built with). The second step installs the dependencies for the project itself and just uses the install command with no arguments.

With the Angular-CLI installed, we can now run a command to execute the tests and generate the output file. I use two reporters. The progress reporter allows me to see the progress of the test run in the console output for the build and will abort the build if any tests fail. The JUnit reporter writes the test results file. The tool is ng and the arguments:

test --watch=false --single-run=true --reporters=junit,progress --browsers=PhantomJS

The next step instructs the VSTS agent to read the test results file and integrate it into the build results. This is what the configuration looks like:

[Image: the Publish Test Results configuration]

Here is a snippet of the test run output:

[Image: snippet of the test run output]

That’s it! Now the automated build can create the Angular 2 production application after verifying tests successfully ran. The VSTS build log will contain specific test results and even allow you to set up a widget to chart pass/fail percentage over time. Release Management can then take the results of a successful build and deploy to production. For a more comprehensive overview of the CI/CD process, please read DevOps: Continuous Deployment with Visual Studio Team Services and Docker.

Happy DevOps!

Saturday, July 30, 2016

An Adventure in Redux: Building redux-adventure

Redux is a “predictable state container for JavaScript apps.” If you’re like me, reading about a new technology is nice but it takes a good project to really understand it. For some reason, when I hear “state machine” I immediately think of the Z-machine that was created “on a coffee table in Pittsburgh in 1979” that revolutionized computer games in the early 80s by bringing text-based adventure games to myriad platforms.

[Image: redux-adventure screenshot]

I originally thought of re-factoring my 6502 emulator to use Redux, but realized it would be a far bigger task to take on so I decided to build something from scratch instead. Borrowing from an app I wrote for a book I published a few years ago, I built redux-adventure using Angular 2 and TypeScript with the Angular-CLI.

Redux Concepts

There are numerous tutorials online that cover Redux. One problem I find is that many of them overcomplicate the description and throw in diagrams that make Redux look far more involved than it really is. Rather than re-inventing the wheel, I’ll share a simple description here and then walk through the app that uses it.

Redux is a simple state management tool. Your application may transition through multiple states. At any given time you may raise an event, or create an action, that results in a new state. State is immutable, so actions will never modify the existing model that represents your state but instead will generate a new model. This is the concept that is sometimes difficult to understand.

Redux keeps track of state for you, and offers three key services (there are other APIs, but I’m keeping this simple).

  • The ability to dispatch an action, indicating a transition in state
  • A set of reducers that respond to an action by providing the new state
  • A subscription that receives a notification any time the state changes
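Those three services can be sketched in a few lines of TypeScript. This hand-rolled miniature is not the actual Redux API; it is only an illustration of the shape (the counter domain is invented for the example):

```typescript
interface Action { type: string; }

type Reducer<S> = (state: S, action: Action) => S;

class MiniStore<S> {
  private listeners: Array<() => void> = [];
  constructor(private reducer: Reducer<S>, private state: S) {}

  getState(): S { return this.state; }

  // 1. Dispatch an action, indicating a transition in state.
  dispatch(action: Action): void {
    // 2. A reducer responds to the action by providing the new state.
    this.state = this.reducer(this.state, action);
    // 3. Subscribers are notified any time the state changes.
    this.listeners.forEach(l => l());
  }

  subscribe(listener: () => void): void { this.listeners.push(listener); }
}

// Example domain: a counter. The reducer never mutates; it returns new state.
const counter: Reducer<{ count: number }> = (state, action) =>
  action.type === "INCREMENT" ? { count: state.count + 1 } : state;

const store = new MiniStore(counter, { count: 0 });
store.subscribe(() => console.log(`count is now ${store.getState().count}`));
store.dispatch({ type: "INCREMENT" }); // logs "count is now 1"
```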

The game

The redux-adventure game is fairly straightforward. You are dropped in a random room in a dungeon and must explore the dungeon to find various artifacts. You can look or travel in the four compass directions, and if there is an item you can get it to put it into your inventory. You win the game by retrieving all of the available items.

State

The state itself is really just a domain model represented by a plain-old JavaScript object (POJO). A “thing” or artifact has a name and a description. Then there are rooms that look like this:
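The embedded code from the original post isn't reproduced in this archive; based on the description that follows, the room's shape is roughly this (property names are my assumption, not necessarily the original source):

```typescript
// A "thing" or artifact has a name and a description.
interface Thing {
  name: string;
  description: string;
}

// A room may hold multiple inventory items and tracks its neighbors
// by compass direction; null stands in for a wall with no room beyond.
interface Room {
  things: Thing[];
  north: Room | null;
  south: Room | null;
  east: Room | null;
  west: Room | null;
  visited: boolean;
}

// Example: a room holding one artifact, walled on all sides.
const cell: Room = {
  things: [{ name: "key", description: "A small brass key." }],
  north: null, south: null, east: null, west: null,
  visited: false
};
```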

Notice that a room may contain more than one inventory item. It also keeps track of other rooms based on compass direction and walls where there are no rooms to navigate to.

The world itself is represented by a dungeon class that contains rooms, the player’s inventory, the count of total items they must obtain, the current room, a console that contains the text displayed to the user, and a flag indicating whether or not the player has won.

There is also a dungeonMaster that generates the world from some seed information and randomly generates walls. Any classes or services with behavior have their own tests. Now that we have the world defined, what can we do?

Actions

The user can type in any number of commands that are represented by the action list. Although an action may start as these commands, based on the current state they end up being translated into four key actions:

  • Move: updates the current room to the room the user has navigated to, and updates the console to indicate the movement and display the description of the new room
  • Get: transfers inventory from the current room to the user
  • Text: adds a line of text to the console
  • Won: transfers the final item of inventory to the user, sets the won flag, and updates the console to indicate the user has won

The createAction method is responsible for this logic. TypeScript allows me to write interfaces to make it clearer what an action expects. Here is the “get” action’s interface:

And here is the code that takes the original action and transforms it into an internal one:
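The original embedded snippet isn't preserved in this archive; the translation logic was along these lines. This is a sketch only: the names, the snarky text, and the simplification to plain strings (rather than full Thing objects) are my stand-ins:

```typescript
interface IAction { type: string; }
interface IGetAction extends IAction { thing: string; }
interface ITextAction extends IAction { text: string; }

interface State {
  roomThings: string[];   // items in the current room
  inventory: string[];    // items the player holds
  totalItems: number;     // the player wins when inventory reaches this count
}

// Translate the user's "get" command into one of three internal actions.
function createGetActions(state: State): IAction[] {
  // Nothing here? Respond with a snarky text action.
  if (state.roomThings.length === 0) {
    return [{ type: "TEXT", text: "Get what, exactly?" } as ITextAction];
  }
  const thing = state.roomThings[0];
  // Taking the final item wins the game; otherwise it is a plain transfer.
  const winning = state.inventory.length + 1 === state.totalItems;
  return [{ type: winning ? "WON" : "GET", thing } as IGetAction];
}
```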

Notice that one “incoming” action can translate to three “internal” actions: text with a snarky comment when there is nothing to get, an action to transfer the inventory to the user, and an action to indicate the user has won.

The translation of actions is fully testable. Note that to this point we’ve been working in pure TypeScript/JavaScript – none of this code depends on any external framework yet.

Reducers

Reducers may take a while to get used to, but in essence they simply return a new state based on an action and ensure the existing state isn’t mutated. The easiest way to tackle reducers is from the “bottom up”, meaning take the lower-level properties or nested objects, handle their state, then compose them into higher levels.

As an example, a room contains a set of inventory items. The “get” action transfers inventory to the user, so the things property of the room is updated with a new array that no longer contains the item. Here is the TypeScript code:
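The embed isn't preserved here; a reducer for the room's things property likely looked something along these lines (a sketch, with names assumed):

```typescript
interface Thing { name: string; description: string; }

// Returns a NEW array that omits the taken item; the input is never mutated.
function thingsReducer(things: Thing[], action: { type: string; thing: Thing }): Thing[] {
  if (action.type !== "GET") {
    return things;
  }
  const idx = things.indexOf(action.thing);
  if (idx < 0) {
    return things;
  }
  // The spreads compose the segments before and after the item into a new array.
  return [...things.slice(0, idx), ...things.slice(idx + 1)];
}

// Usage: freezing the input (as the post's tests do) surfaces any accidental mutation.
const sword: Thing = { name: "sword", description: "A rusty sword." };
const roomThings: Thing[] = [sword];
Object.freeze(roomThings);
const after = thingsReducer(roomThings, { type: "GET", thing: sword }); // after.length === 0
```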

If the ellipses notation is confusing, it’s the spread operator from a newer ECMAScript specification: each spread expands a portion of the existing array in place, so what is returned is a brand-new array that no longer contains the item. Here is the compiled JavaScript:

You can view the corresponding tests written in TypeScript here. Notice that in the tests, I use Object.freeze to ensure that the original instances are not mutated. I freeze both the individual items and the list, and then test that the item is successfully removed.

Another reducer will operate on the array of inventory items for the player. Instead of removing the item as it does from the room, it will return a new array that adds the item to the player’s inventory.

The reducer for the room calls the reducer for the things property and returns a new room with properties copied over (and, in the case of navigating to the room, sets the visited flag).

You can view the main reducer code to see the logic of handling various actions, and calling other reducers as well (i.e. main calls the reducer for the rooms list, and rooms calls the reducer for the individual room).

In the end, the tests simply validate that the state changes appropriately based on an action and doesn’t mutate the existing state.

At this stage the entire game logic is complete – all state transitions through to a win are there, and we could write some simple AI to have a robot play the game and output its results. Everything is testable and we have no dependencies on any frameworks (including Redux) yet.

This is a powerful way to build software, because now whether you decide to use Angular, React, plain JavaScript or any other framework, the main business logic and domain remains the same. The code doesn’t change, the tests are all valid and framework agnostic, and the only decision is how you render it.

The Redux Store

The purpose of Redux is to maintain the state in a store that handles the actions and applies the reducers. We’ve already done all of the legwork, all that’s left is to create the store, respond to changes in state, and dispatch actions as they occur.

The root component of the Angular application handles all of this:

Notice how simple the component is! It doesn’t have to handle any business logic. It just creates the store, refreshes a property when the state changes, and dispatches actions.

The template is simple as well. It lists the console, provides a parser to receive user input if the game hasn’t been won yet, and renders a map of the rooms.

With this approach, the components themselves have no business logic at all, but simply respond to the bound data. Let’s dig a little deeper to see.

Components

Approaching the application in this fashion makes it very easy to build components. For example, this is the console component. It does just two things: exposes a list of text, and responds to changes by setting properties on the div element so that it always scrolls the latest information into view:

If you’re nervous about seeing HTML elements mixed in with the component, don’t worry! They are completely testable without the browser:

The parser component solely exists to take input and dispatch actions. The main component listens to the parser and uses the event emitter to dispatch actions to the Redux store (that code was listed earlier). The parser itself has an action to emit the input, and another action that auto-submits when the user hits ENTER from within the input box:

After playing the game I realized it would be a lot easier to test if I had a map, so I created the map component to render the grid and track progress. The map component itself simply translates the list of rooms into a matrix for rendering cells. For each cell, a green square indicates where the user is, a white square is a visited cell (with walls indicated) and a black cell is a place on the map that hasn't been explored yet.

Despite the heavy manipulation of styles to indicate background colors and walls, this component is also completely testable without relying on the browser.

Conclusion

You can view the full source code on GitHub and play the game here. Overall, building this was a great learning experience for me. Many of the articles I read had me slightly confused and left me with the feeling it was overcomplicating things, but having gone through the process I can clearly see the benefits of leveraging Redux for apps.

In general, it enables me to build a domain using vanilla TypeScript/JavaScript and declare any logic necessary on the client in a consistent way by addressing actions and reducers. These are all completely testable, so I was able to design and validate the game logic without relying on any third party framework.

Linking Redux was an easy step, and it made the logic for my components even easier. Instead of encapsulating services to drive the application, I was able to create a store, respond to changes to state within the store, and build every component as a completely testable, independent unit.

What do you think? Are you using Redux in your apps? If you are, please use the comments below to share your thoughts.

Thursday, July 21, 2016

Back to the ngFuture

Angular 2.0 is close to a production-ready release. Initially the community was in an uproar over the lack of backwards compatibility, but that has changed in recent months with the release of version 1.5 and several modules, including ngUpgrade. In this talk, Jeremy Likness discusses the differences between major production versions of Angular, the options for migrating your apps to 2.0, and demonstrates how to get your apps back into the future with the tools that are available today.

Special thanks to the Atlanta AngularJS Meetup group for hosting this event! You can view the deck here:

You can also Visit the GitHub repository to download the code examples or run them live in your browser.

As a fun technology aside, here's a 360 degree photo I took with my Samsung Gear 360 at the Ponce City Market in downtown Atlanta right before I presented the talk (click the photo to be able to view in 360 and use your mouse or phone to scroll around the view).

Enjoy!

Jeremy Likness

Wednesday, June 29, 2016

Learn the Angular 2 CLI Inside and Out

Angular 2 represents a major step in the evolution of modern web front-end frameworks, but it comes with a price. From TypeScript compilation to running test scripts, bundling JavaScript, and following the Angular 2 Style Guide, "ng2 developers" are faced with myriad problems to solve and challenges to overcome.

Fortunately, there is a way to simplify the process of building Angular 2 applications. Whether your goal is to stand up a rapid prototype or build an enterprise-ready line-of-business application that is continuously deployed to the cloud, the Angular CLI is a tool you don't want to code without.

» Read the full article: Rapid Cross-Platform Development with the Angular 2 CLI

Jeremy Likness

Tuesday, May 17, 2016

Tic-Tac-Toe in Angular 2 and TypeScript

I’ve built a lot of small apps and games over the years, often to either learn a new framework or platform, or to help teach it. It’s always fun to dig up old code and migrate it to new technologies. For example, I took the Silverlight C# code to generate a plasma effect from my Old School app and ported it to JavaScript with optimizations. I also recently built a bifurcation diagram with Reactive Extensions for JavaScript (RxJS) and div tags.

The other day I reviewed some older projects and came across an article I wrote as an introduction to Silverlight. I used a tic-tac-toe game and built the logic to enable a computer opponent.

tictactoe

I realized this would be a perfect project for Angular 2 so I proceeded to make the port. You can play it here to test it out. I did not build it responsive (shame on me, being lazy) so I may refactor it in the future to make it easier to play on phones. Tablets and computers should be fine.

Scaffolding

To create my project I used a combination of cross-platform tools including Visual Studio Code and Node.js. I also used the Angular-CLI for just about everything. The first step is to get to a Node.js command prompt, then install the Angular CLI and initialize the project:

npm i angular-cli -g
ng new tic-tac-toe-ng2
cd tic-tac-toe-ng2
ng serve

By now I had a working project that I could navigate to, run unit tests against:

ng test

…and even run an end-to-end set of tests:

ng e2e

Great! Now to start porting the code!

Quotes

The original game had some wonky quotes that would show up in each tile that hadn’t been clicked yet. It would also give you a different, random quote if you tried to tap a cell while it was the computer’s turn. We’ll get to the computer strategy in a bit (I built in a random delay for the computer to “think”).

A reusable unit of JavaScript code in Angular 2 is referred to as a “service” and we scaffold it like this:

ng g service quotes

This will generate the service and the specification (tests) for the service. One great feature of Angular is that it addresses testing right out of the box.

I really didn’t have many specifications, other than making sure I get an actual string from the service and that multiple calls randomly return all available quotes.

(Note: For the test, I iterate a large number of times to check the quotes, but technically it’s not a “good” test because you could randomly miss a quote and the test will fail even though the service is doing what it is supposed to).

The service itself just randomly sorts an array each time a quote is requested and returns the first element.
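Sketched as a plain function, the idea looks like this (the names and sample quotes are mine; the real service is an Angular injectable):

```typescript
// Illustrative quotes only; the real app ships its own list.
const QUOTES: string[] = [
  "Your move!",
  "Take your time...",
  "Is that your best?"
];

// Core logic: randomly sort the array on each request and return
// the first element.
function randomQuote(quotes: string[] = QUOTES): string {
  // a random comparator is a cheap (if statistically biased) shuffle
  const shuffled = [...quotes].sort(() => Math.random() - 0.5);
  return shuffled[0];
}
```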

I repeated that pattern for the “bad quotes” (i.e. when you click out of turn) and then turned my attention to individual cells on the tic-tac-toe board.

Cell and Game States

Thinking about the game, I determined there would be exactly three states for a cell to be in:

export enum State {
    None = 0,
    X = 1,
    O = 2
}

Either “not played” or marked with an ‘X’ or an ‘O’. I created an interface for the data of a cell to represent where it is on the grid, the current state, and whether it is part of a winning row.

export interface ICell {
    row: number;
    col: number;
    state: State;
    winningCell: boolean;
}

Finally, the game flow will either allow a turn, or end in a win or a draw.

export enum GameState {
    XTurn = 0,
    OTurn = 1,
    Won = 2,
    Draw = 3
}

With these in place, I then built the component for an individual cell.

The Cell

To scaffold a component I used the following syntax:

ng g component cell

This created a sub-directory with the related files (CSS, HTML, code-behind and test). In the CSS you can see the styles to define the size (sorry, this one isn’t responsive for now, but that can be readily fixed), margins, etc.

Components have their own specifications. For example, one parameter that is input to the cell is the row and column. In the test, we create a test component that wraps the tested component, then verify the data-binding is working correctly (note the bindings in the template should match what is picked up by the component):

The “builder” is defined earlier in the source and spins up the instance of the test controller to host the component. This is all generated for you by the command line interface.

The cell itself takes several inputs, specifically the row, column, state, and whether it is a winning cell.

It also exposes an event when it is tapped. The template uses these to display either a random quote, an ‘X’, an ‘O’, and also color the background if the cell is part of a winning row.

The logic triggered when it is tapped checks to make sure it hasn’t already been set and whether it is a valid turn (the valid turn property is bound, so it can be set for testing or bound to other logic for the running application). If it is not the user’s turn, a random quote is set on the square to react to the tap. 
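That tap logic can be sketched free of the framework; the shapes and names below are assumptions for illustration, not the component's actual members:

```typescript
// Cell states, as defined earlier in the post.
enum State { None = 0, X = 1, O = 2 }

// Minimal stand-in for the cell's bound data.
interface TappableCell { state: State; quote: string; }

// Returns true if the tap resulted in a move; otherwise reacts with a quote.
function handleTap(
  cell: TappableCell,
  validTurn: boolean,
  badQuote: () => string
): boolean {
  if (cell.state !== State.None) {
    return false; // already played; ignore the tap
  }
  if (!validTurn) {
    cell.quote = badQuote(); // react to the out-of-turn tap
    return false;
  }
  cell.state = State.X; // mark the user's move
  return true;
}
```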

Another interesting behavior to note is the way the cell reacts to changes.

Components can implement the OnChanges interface that will fire when the model mutates. This is useful for responding to change without setting up individual watches (as was the case in Angular 1.x). Instead of using a timer to update quotes as I did in the old Silverlight app, I decided that I could just update the quotes randomly when changes occur.

The Matrix

“The Matrix is everywhere. It is all around us. Even now, in this very room … it is the world that has been pulled over your eyes to blind you from the truth.” – Morpheus

OK, the tic-tac-toe matrix isn’t quite as interesting. The matrix service is what manages the game state. The state machine for the game allows alternating between turns and ends at a draw or win. It is illustrated like this:

tictacstate

This is captured through the specifications for the matrix service. The service itself builds up a list of “winning rows” to make it easy to determine if a given row is a draw, a winning row, or still has open slots.
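One way to sketch the "winning rows" list is as index triples over a flattened nine-cell board (the types and names here are illustrative, not the repo's exact API):

```typescript
// 9 cells, left-to-right, top-to-bottom: 0 = empty, 1 = X, 2 = O
type Board = number[];

const WINNING_ROWS: number[][] = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6]             // diagonals
];

// Returns 1 or 2 when a line is held entirely by one player, else 0.
function winner(board: Board): number {
  for (const [a, b, c] of WINNING_ROWS) {
    if (board[a] !== 0 && board[a] === board[b] && board[b] === board[c]) {
      return board[a];
    }
  }
  return 0;
}

// A draw: no winner and no open slots remain.
function isDraw(board: Board): boolean {
  return winner(board) === 0 && board.every(cell => cell !== 0);
}
```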

Each time the state changes, the logic first checks to see if the game was won and whether the computer or the user won it:

Next, it checks to see if any slots are available. This is the “dumb logic” for a draw. I could have eliminated this code since it was the first pass at the algorithm, but I decided a two-phase approach would be fine to illustrate, as the second pass does a more intelligent look “row-by-row”. If it’s not a draw, it switches to the next turn.

The main component binds to the matrix service and uses this to drive the state of the individual cells.

Strategies

To drive the computer’s moves, I created two strategies. The first strategy is a simple one and simply picks a random empty cell for the computer move. Notice it is a simple function that is exported.
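A minimal sketch of such an exported strategy function, assuming a flattened board where 0 marks an empty cell (the name is mine):

```typescript
// Pick a random empty cell; assumes at least one empty cell exists.
export function easyStrategy(board: number[]): number {
  const empty = board
    .map((cell, idx) => ({ cell, idx }))
    .filter(entry => entry.cell === 0)
    .map(entry => entry.idx);
  return empty[Math.floor(Math.random() * empty.length)];
}
```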

For the hard strategy, I devised a simple algorithm. Each row is assigned a point value based on the state of the row. The point values are listed here:

  • Row is a draw (one from each) – 0 points
  • Row is empty – 1 point
  • Row has one of theirs – 10 points
  • Row has one of mine – 50 points
  • Row has two of theirs – 100 points
  • Row has two of mine – 1000 points

This is an aggressive (not defensive) strategy because there are more points assigned to building up a winning row than blocking the opponent’s. Each empty cell is assigned a weight based on the sum of all intersecting rows, and then the highest weighted cell wins. Here is a visualization where the computer is “O”:

tictacstrategy

The top grid shows the row values (first and last columns on the top represent the diagonals) and the bottom grid shows the computed values for the cell. Note the highest score is the cell that will win the game.

The logic is encapsulated in the hard strategy function. The pseudo-code follows:

  1. Create a matrix of cell ranks
  2. Iterate each row. If the cell is occupied, set its weight to a negative value.
  3. Sum the count of X and O values for the row.
  4. Update the cell’s weight based on the logic described earlier.
  5. Sort by weight.
  6. Create a short array of the cells with the highest weight (in case multiple cells “tie”)
  7. Pick a cell and populate it.

That’s it – a simple strategy that works well.
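Putting the point table and the pseudo-code together, a hedged sketch of the hard strategy might look like this (the board encoding and names are my assumptions, not the repo's exact implementation):

```typescript
// The 8 lines of the grid (3 rows, 3 columns, 2 diagonals) as index triples.
const LINES: number[][] = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6]
];

// Point values mirroring the list above.
function lineValue(mine: number, theirs: number): number {
  if (mine > 0 && theirs > 0) return 0; // row is a draw (one from each)
  if (mine === 2) return 1000;          // two of mine
  if (theirs === 2) return 100;         // two of theirs
  if (mine === 1) return 50;            // one of mine
  if (theirs === 1) return 10;          // one of theirs
  return 1;                             // row is empty
}

// board: 0 = empty, 1 = opponent ("X"), 2 = computer ("O").
// Returns the index of the highest-weighted empty cell, or -1 if full.
function hardStrategy(board: number[], me: number = 2): number {
  // occupied cells get a negative weight so they are never chosen
  const weights = board.map(cell => (cell === 0 ? 0 : -1));
  for (const line of LINES) {
    const mine = line.filter(i => board[i] === me).length;
    const theirs = line.filter(i => board[i] !== me && board[i] !== 0).length;
    const value = lineValue(mine, theirs);
    // each empty cell accumulates the sum of its intersecting rows
    for (const i of line) {
      if (board[i] === 0) weights[i] += value;
    }
  }
  let best = -1;
  weights.forEach((w, i) => {
    if (w >= 0 && (best < 0 || w > weights[best])) best = i;
  });
  return best;
}
```

Note how the aggressive bias falls out of the point table: completing the computer's own row (1000) always outweighs blocking the opponent's (100).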

Putting it All Together

The main component orchestrates everything. A flag is synchronized with the strategy service to run the selected algorithm, and the matrix is consulted for the first turn. (Note the MatrixService is bootstrapped with the main component so the same copy is available throughout).

On initialization, it is determined whether the component is running with a “slow computer.” This is the default and uses a timeout to emulate time for the computer to decide its next move. It makes for more realistic gameplay. When set to false, the statements execute immediately to make it easier for testing.

The remaining methods simply check for the game state and pass it through to properties on the component for data-binding, and advance the state. The user is responsible for tapping a cell to trigger their move. This is handled by the stateChange method:

The template that generates the cells iterates through the grid and binds the cell attributes to each CellComponent:

The updateStats method queries the matrix to determine the game state. If it is the computer's turn, the computerMove method is called. This simply calls the strategy service to make the next move and passes control back to the user. That's pretty much it!

Bonus Opportunity: if you like challenges, the AI in this is not perfect. You can take on a two-part challenge. First, the computer is absolutely beatable when you have the first turn. If you solve it, comment here and let us know the solution! My only hint is that it does not start with placing an X in the middle. Second, once you've done that, is there a better algorithm that can beat the winning strategy?

You can view the entire project (along with projects to install and run it locally) at the tic-tac-toe-ng2 repository. I hope this helps illustrate building applications with Angular 2 and TypeScript using the Angular Command Line interface.

Please share your thoughts and comments below, and if you get bored, play some tic-tac-toe!

Until next time,

The Angular 2 CLI and TypeScript

AngularJS is the incredibly popular framework for building single-page web applications. Version 2.0 is a major leap from the 1.x version, designed to address shortcomings in the original 5+ year old framework and to embrace modern browsers and language features. It is being written using TypeScript, a superset of JavaScript that allows you to build code using next-generation features and compile it to JavaScript that will run on current browsers. Visual Studio Code is the perfect platform to explore Angular applications because it is free, open source, and cross-platform, and it supports advanced features such as extensions, code completion, and IntelliSense. In this session Jeremy Likness goes hands-on to show you how to set up your environment and build your first application while teaching you about the advantages of the framework and language, based on his years of in-the-field experience architecting enterprise Angular applications.

In this talk I focused on scaffolding the app using the Angular-CLI to rapidly build a reference app. Here is the video:

The deck has most references, and stay tuned for a new post that will go into more detail with building a tic-tac-toe game with a computer opponent! Here is the deck:

Jeremy Likness

Sunday, April 24, 2016

The Three D’s of Modern Web Development

Modern web development using JavaScript has evolved over the past decade to embrace patterns and good practices that are implemented through various libraries and frameworks. Although it is easy to get caught up in the excitement of frameworks like AngularJS or the KendoUI implementation of MVVM (that’s Model-View-ViewModel, which I’ll explain more about in an upcoming article), it is important to remember the fundamental patterns and repeatable practices that make development easier. In fact, it is difficult to make a qualified decision about your development stack without understanding “how” and “why” a particular tool, library, or framework may benefit the application and, more importantly, your team.

I recently authored a series of articles for the Telerik Developer Network that covers what I believe are three fundamental concepts that have revolutionized modern web app development.

You can read the series here:

  1. Declarative vs. Imperative
  2. Data-Binding
  3. Dependency Injection

The three D’s are just a few of the reasons why JavaScript development drives so many consumer and enterprise experiences today. Although the main point of this series was to demonstrate the maturity of front-end development and the reason why JavaScript development at enterprise scale is both relevant and feasible today, it is also the answer to a question I often receive. “Why use Angular 2 and TypeScript?” My answer is this: together, Angular and TypeScript provide a modern, fast, and practical implementation of the three D’s of modern web development.

Regards,

Saturday, March 19, 2016

Lessons Learned (Re) Writing a 6502 Emulator in TypeScript with Angular 2 and RxJs

In preparation for an upcoming webinar I decided to build a sizeable Angular 2 project. I’ve always been fascinated with the Commodore 64 and the 6502 chipset is not a tough one to emulate. I also wrote it the first time several years back, so I had a good baseline of code to draw from. Thus the Angular 2 6502 emulator was born.

screenshot

Want to see it in action? Here is the running version.

Building a Console that Works

The first thing I needed was a decent console to give information to the user. The console itself isn’t too difficult. A service holds an array of strings, splices them when it gets beyond a certain size, and exposes a method to clear them.

Here is the basic code:

Now you may notice there is a new class (to some) for notifying subscribers when a console message is sent. Angular 2 relies heavily on Reactive Extensions for JavaScript (RxJS).

RxJS

In a nutshell, this library provides a different mechanism for dealing with asynchronous workflows and streams. You essentially end up observing a collection and are handed off “items” like an iterator and then can deal with it as you choose.

You can see that emitting an event is easy, but what does it look like consuming it?

Interacting with the DOM

The component for the console simply data-binds the console messages to a bunch of text contained inside of a div element. If that’s all it had to do, there would be no need for an event emitter. Unfortunately, because new items are appended at the bottom, the div can quickly overflow its scroll area and new messages are no longer in view.

Fixing this is simple. First, the component needs to know when the contents of the div may have changed (via the event fired from the service). Second, it simply needs to interact with the DOM element so it can scroll into view.

How do we reference the element associated with an Angular component? Take a look at this source:

There are just two steps needed to reference the element. First, import the ElementRef class. Second, include it on the constructor so it is injected. You probably noticed that the component doesn’t do anything with it in the constructor. This is because the constructor is called before the UI has been wired up and rendered, so there is nothing inside of the element.

So how do you know when the element is ready?

Angular 2 Lifecycle

Angular 2 components have a lifecycle and if certain methods are present, they are called during a specific phase. I provided a high level list at an Angular 2 talk I gave. In this case, the component implements a method that is called after the view is initialized. Once initialized, it’s possible to grab the child div because it’s rendered.

A nice feature of TypeScript is auto-completion and documentation. By casting the element to the HTMLDivElement type I get a clearly typed list of APIs available for that element. The component subscribes to the console events (notice it uses a de-bounce rate of 100ms so if a ton of messages are sent at once, it will still only fire 10 times a second to avoid locking the UI). When the event fires, a property on the div is set and that will force it to scroll to the most recent message.
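The 100ms de-bounce in the component comes from RxJS; as a framework-free illustration, here is a minimal trailing-edge debounce with the same effect (the helper name is my own):

```typescript
// Collapse a burst of calls into at most one invocation per quiet interval.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  ms: number
): (...args: T) => void {
  let handle: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    // each new call cancels the pending one, so only the last fires
    if (handle !== undefined) clearTimeout(handle);
    handle = setTimeout(() => fn(...args), ms);
  };
}
```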

A similar lifecycle method is used in the main app to indicate everything is initialized.

Creating the Display

The next challenge was creating a graphical display. I decided to implement the emulator using a memory-mapped, palette-based display. This means a block of memory is reserved to represent “pixels” of the display, and when a value is set on a specific address, the number corresponds to a palette entry of red, green and blue values.

If you’re curious you can see the code for the algorithm I use to generate the palette. It basically builds up a distinct set of red, green, and blue values and then sorts them based on their luminosity based on a well known equation. Each item has a hexadecimal rendition and the last five palette slots are manually built as shades of gray.
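The "well known equation" is presumably the Rec. 601 luma formula; a sketch of the sort, with my own names:

```typescript
interface Rgb { r: number; g: number; b: number; }

// Perceived brightness: green contributes most, blue least (Rec. 601 weights).
function luminosity(c: Rgb): number {
  return 0.299 * c.r + 0.587 * c.g + 0.114 * c.b;
}

// Order palette entries from darkest to brightest.
function sortPalette(palette: Rgb[]): Rgb[] {
  return [...palette].sort((a, b) => luminosity(a) - luminosity(b));
}
```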

The display service simply keeps track of a pixel buffer that represents the display, then makes a callback to the component when a value changes.

The display component is a little more involved. It holds an array of values that represent rectangles that will be drawn as a “pixel.” These are literally scalable vector graphics-based rectangles inside an svg element. The entries hold x and y offsets from the upper corner of the display, the width and height (you can adjust this if you like), and the current fill color for that pixel.

The component sets a callback on the display service to be notified of any changes. In this case, a callback is used because it is up to 50x faster than using an event emitter (I tested this). The callback ensures the request is in the valid range of memory and is a byte of data, then cross-references the palette to set the proper fill value.

Finally, this is all rendered by Angular as an array of rectangles. Angular’s dirty tracking will check when the fill is updated and update the attribute. Here is the HTML for the display:

The CPU State

The emulator CPU provides the simplest possible mechanism for emulating the operation of instructions. It contains the memory registers, the stack, a program counter (the area of memory that the next machine instruction will be read from), and tracks things like how many instructions per second it is able to execute.

I decided to take a straightforward approach, and have the CPU manage memory, registers, stack, and program counter but create “self-aware” operation classes that would actually “do work.”

For that reason the CPU itself doesn’t have any logic like loading or adding accumulators. Instead, it knows how to look at memory, update memory (including triggering a call to the display service), set and reset flags, pop addresses and handle various modes for addressing memory.

It has instructions to run, step through code line-by-line for the debug mode, and can be halted. When halted or in an error state, the only recovery is to reset it. Basically, the workflow for the CPU is to look at the operation at the program counter, ask it to execute itself, then update the program counter and grab the next instruction.

The executeBatch() function is where I did most of the optimization. Running the program until it stops would lock the UI, and running a single instruction per event loop was slow. The CPU therefore compromises by running up to 255 instructions before it uses setTimeout to allow other events to queue.
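The batching compromise can be sketched as a framework-free function (the names and the halt convention are my assumptions, not the repo's exact API):

```typescript
const MAX_BATCH = 255;

// step() runs one instruction and returns false when the CPU halts or errors.
function executeBatch(step: () => boolean, done: () => void): void {
  for (let i = 0; i < MAX_BATCH; i += 1) {
    if (!step()) {
      done();
      return;
    }
  }
  // yield to the event loop so UI updates and input can be processed
  setTimeout(() => executeBatch(step, done), 0);
}
```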

(This is an area where using animation frames might make sense, but I haven’t explored that avenue yet).

Angular 1.x users will note I can call setTimeout directly and not rely on a $timeout service. Angular 2 is able to detect when I’ve modified the data model even from within an asynchronous function due to the magic of ZoneJS.

Loading Op Codes

An operation is fairly simple. It exposes how it handles addresses, the numeric value of the instruction, the name, how many bytes the instruction and any parameters take up, and can compile itself to text.

There is an execute method that uses the CPU to do work.

Operation codes derive from a base class:

And then implement themselves like this:

opcode

The example snippet is an operation that adds a value to the accumulator (a special memory register) with a value. It will set the carry bit if there is an overflow, and it is in immediate mode which means the byte after the instruction contains the value to add (as opposed to obtaining it from a memory location, for example). When executed, it uses the utility methods on the OpCodes class to add with carry and passes in the instance of the cpu and the address it is at.

The OpCodes utility will pop the instruction, inspect the address mode, pull the value, perform the add, and update the program counter with the cpu’s help.

Because each operation has multiple addressing modes, and there is one class per combination of op code and address mode, there end up being several hundred classes. This presents a challenge from the perspective of wiring up the op codes into an array unless they “self-register.” In my original emulator, that’s exactly what they did, but with modern TypeScript there’s an even better way!

TypeScript Decorators

TypeScript decorators are heavily used by Angular 2. I decided to create one of my own. First, I created an array of operations to export and make available to the CPU, compiler, and other components that need it. Next, I created a function to serve as a class decorator. It simply takes the target (which is the class, or more precisely the function constructor) then creates the instance and stores it in an array.

With that simple step, because the signature is correct, I can now use that function as a decorator and simply adorn each class that should register as an op code with the @IsOpCode attribute.

Take a look and see for yourself! Now when I implement the other op codes I haven’t finished yet, they will be automatically registered if I decorate them.
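In miniature, the pattern looks like this (OPERATIONS and the operation shape are my assumptions; the decorator is applied as a plain function call here so the sketch runs without special compiler flags, where the repo writes it as @IsOpCode above the class):

```typescript
interface IOperation { name: string; opCode: number; }

// The shared registry, exported so the CPU and compiler can import it.
const OPERATIONS: IOperation[] = [];

// A class decorator is just a function that receives the constructor:
// it instantiates the class and self-registers the instance.
function IsOpCode(target: new () => IOperation): void {
  OPERATIONS.push(new target());
}

// Normally adorned with @IsOpCode; applied manually for this sketch.
class AddWithCarryImmediate implements IOperation {
  public name = "ADC";
  public opCode = 0x69; // 6502 ADC, immediate addressing mode
}
IsOpCode(AddWithCarryImmediate);
```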

The Compiler

Aside from the individual op code definitions, the most logic exists in the compiler class. This is because it is a full two-pass assembly compiler that parses labels and handles both decimal and hexadecimal entries as well as label addition and memory syntax.

The compiler has to be able to take a pass and lay out how much memory each instruction takes so it can assign memory addresses to labels, then use that for “label math” (when the programmer adds or subtracts an offset from a label), then parse the code and get the instructions and memory addresses written out correctly.

Needless to say, a lot of regular expressions are involved. You can look at the source code linked earlier to see how I used interfaces and broke the compiler steps into methods to make it easier to organize all of the steps required.

The CPU Display

The CPU was the easiest component to build. It simply exposes the CPU service:

And then binds to values and cpu methods directly:

There is a compiler component that interacts with the compiler class, and uses forms to collect data. If you go beyond straight data-binding, forms are actually quite interesting.

Forms in Angular 2

The compiler component uses Angular 2’s form builder to expose the form for entering code, loading code, and setting the program counter. The form builder is used to create a group of controls. Each has a name, an initial value, and a set of validators. In the case of the program counter, a “required” validator is combined with a custom validator that parses the value to ensure it is a valid hexadecimal address.

You can see that in the “pcValidator” method. Passed validations return null, failed return … well, whatever you want except null. I reference the control group to create individual references for each control, so I can use the value of the control for compiling, setting the program counter, etc.
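The real validator receives a form control; this standalone sketch shows just the pass/fail convention described here, null on success and an error object on failure (the regex and error key are my assumptions):

```typescript
// Accept a 1-4 digit hexadecimal address like "200" or "C000".
function pcValidator(value: string): { invalidPc: boolean } | null {
  return /^[0-9a-fA-F]{1,4}$/.test(value) ? null : { invalidPc: true };
}
```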

The controls have their own method to update values (such as loading the decompiled code into the corresponding control). I also use it to load source code for some of the pre-built programs I included. Unlike Angular 1.x, the Http service uses RxJS to return its results.

HTTP in Angular 2

There are a few steps to interact with HTTP in Angular 2 apps. First, you must include a JavaScript file if you are using bundles and not loading all of Angular 2 at once. In your app bootstrapper you’ll want to import HTTP_PROVIDERS and bootstrap them.

The component will import Http and should also import the Rx library to handle the special observable results. After that, it’s fairly straightforward.

The get call is issued and returns an observable that you can begin chaining methods onto. In this example, I’ve mapped the result to text (I’m not getting a JSON object but the entire contents of a text file) and then I subscribe.

The subscription has methods for when the next item appears, when an exception is thrown, and when the observable stream reaches the end. I leverage this to grab the loaded data and set it onto the control and then send a console message when it’s loaded.
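That chain can be mimicked without Angular; this toy observable (entirely my own construction, not the RxJS API) shows the next/error/complete shape the subscription uses:

```typescript
interface Observer<T> {
  next: (value: T) => void;
  error?: (err: unknown) => void;
  complete?: () => void;
}

interface MiniObservable<T> {
  map<U>(fn: (value: T) => U): MiniObservable<U>;
  subscribe(observer: Observer<T>): void;
}

// Emit a fixed set of values, supporting map(...).subscribe(...) chaining.
function of<T>(...values: T[]): MiniObservable<T> {
  return {
    map<U>(fn: (value: T) => U): MiniObservable<U> {
      return of(...values.map(fn));
    },
    subscribe(observer: Observer<T>): void {
      values.forEach(value => observer.next(value));
      observer.complete?.(); // the stream reached its end
    }
  };
}
```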

The Final Results

After everything is said and done I was very pleased with how quickly I was able to leverage the existing code and migrate it to Angular 2 and the more current version of TypeScript. A few things I noted along the way:

  • Dynamic static properties like arrays aren’t really a good idea in the modular/asynchronous world – instead of having a static array for the op codes, I made an instance and exported it so it could be imported by other modules
  • Interfaces are useful for annotations but not very useful for dependency injection in Angular 2 – I found if I didn’t want to use magic strings, I had to still reference the implementation and use the @Inject decorator to make it work if I wanted to declare variables by their interface type
  • Performance surprised me (in a good way)

I’d like to elaborate on that last bullet. In the original implementation, I had a display service that explicitly held an array of svg elements and set the fill attribute on them directly when a value changed. With the Angular 2 version, I rely completely on Angular’s dirty tracking and let Angular re-render the element (or set the attribute) when the values change. Despite that difference, the improved data-binding is incredible and the new app keeps pace with the old one based on instructions per second.

Overall it was a great and fun experience. I still have some op codes to add to the mix and my binary-coded decimal (BCD) mode is broken, so if you want to tinker, I do accept pull requests.

Until next time,

Tuesday, February 2, 2016

Angular 2.0 with TypeScript

On Monday, January 25th, 2016 I met with a large group at the Microsoft offices in Alpharetta, GA to present how to build Angular 2.0 apps using TypeScript.

highres_446255655

The talk was aimed at individuals who have little to no experience with Angular 2.0. It covers building an app from scratch, starting with some tools to get you started and ending with a discussion of the more advanced features like data-binding.

I wrote the deck in JavaScript and you can view the presentation here.

A week later I followed up with a hands-on lab. Instead of presenting how to build an Angular 2.0 app, I shared setup instructions and discussed the various steps to build the project as well as supported the attendees while they built the project on their own laptops.

Although the lab was not recorded, you can follow my step-by-step lab instructions and duplicate the steps on your own machine.

Click or tap here to access the lab.

If you have any questions please feel free to reach out to me directly through the contact form on my website. Enjoy building your Angular 2.0 apps!