
Sunday, July 20, 2014

AngularJS Debugging and Performance

Just over a week ago an Angular project went live after several years of development. Our team peaked at over 25 developers distributed around the world. The source ended up at over 80,000 lines of TypeScript code (although I jokingly note this ends up as just one line of minified JavaScript) and includes hundreds of controllers, services, views, etc. You don’t get that far in an enterprise app without learning a thing or two about debugging and performance.

I just released my video “AngularJS: Debugging and Performance” as the 12th lesson in my course, Mastering AngularJS. This course covers everything you need to know to build business apps in Angular. Topics start out with fundamentals that provide an end-to-end overview, followed by a deep dive into scope and the digest loop. Learn about advanced filters, how dependency injection works, and what the built-in Angular services are. Lessons cover consuming web services from Angular (using both $http and ngResource), handling routing with ngRoute and ui-router, and the fundamentals of directives as well as advanced directives. I cover testing using Jasmine, and this latest video is all about debugging and performance.

The debugging and performance lesson starts by covering some of the built-in Angular services like $exceptionHandler and $log, and how to leverage the JavaScript debugger keyword. I then dive into how to troubleshoot Angular applications by inspecting the HTML DOM and using that to dive into source code and unravel the HTML nodes generated by templates. I cover the Chrome Batarang extension and then go into using Chrome Developer Tools to troubleshoot a memory leak. Learn how to use the Zone library to instrument your apps, how to avoid the overhead of extra watches when data-binding to render static data, and how to deal with extremely long (one million records or more) lists on the client. I include tips for using the right directives to conditionally render the DOM, how to minimize the overhead of watches, and how to take advantage of lazy loading or instantiating new components “on the fly.”

The course includes full access to working examples and source code. If you are working with Angular, I know you will want to check this lesson out.

Saturday, April 16, 2011

Performance Optimization of Silverlight Applications using Visual Studio 2010 SP1

Today I decided to work on some miscellaneous performance optimization tasks for my Sterling NoSQL Database that runs on both Silverlight and Windows Phone 7. The database will serialize any class to any level of depth without you having to use attributes or derive from other classes, and despite doing this while tracking both keys and relationships between classes, you can see it manages to serialize and deserialize almost as fast as native methods that don't provide any of the database features:

Note: click on any smaller-sized image in this post to see a full-sized version in case the text is too small for the sample screenshots.

Obviously the query column is lightning fast because Sterling uses in-memory indexes and keys. I decided to start focusing on some areas that can really help speed up the performance of the engine. While there are a lot of fancy profiling tools on the market, there really isn't much need to look beyond the tools built into Visual Studio 2010. To perform my analysis, I decided to run the profiler against the unit tests for the in-memory version of the database. This takes isolated storage out of the picture and helps me focus on issues that are part of the application logic itself rather than artifacts of blocking file I/O.
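Since the tests exercise the database API directly, here for context is the kind of plain class Sterling persists, along with an approximate registration sketch. The registration members are reproduced from memory and the exact names may differ, so treat this as illustrative rather than Sterling's literal API:

// Plain classes: no serialization attributes and no required base class.
public class Address
{
    public string City { get; set; }
}

public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Address Address { get; set; } // nested objects serialize to any depth
}

// Approximate sketch: a table definition maps a type to a lambda that
// extracts its key (member names from memory; they may differ).
public class AppDatabase : BaseDatabaseInstance
{
    protected override List<ITableDefinition> RegisterTables()
    {
        return new List<ITableDefinition>
        {
            CreateTableDefinition<Contact, int>(c => c.Id)
        };
    }
}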

The performance profiling wizard is easy to use. With my tests set as the start project, I simply launched the performance wizard:

I chose CPU sampling. This gives me an idea of where most of the work is being performed. While sampling is not as precise as instrumentation, which measures the actual time spent in functions, when the sample count for a function goes down you can be reasonably assured that less time is being spent there.

Next, I picked the test project.

Finally, I chose to launch profiling so I could get started right away and finished the process.

The profiler ran my application and I had it execute all of the tests. When done, I closed the browser window and the profiler automatically generated a report. The report showed CPU samples over time, which wasn't as useful to me as the section that reads "Functions doing the most individual work." That is where most of the time is spent. Some of the methods weren't a surprise because they involved reflection, but one was a list Contains method, which implied an issue with looking up items in a collection.

Clicking on the offending function quickly showed me where the issues lie. It turns out that a lot of time is being spent in my index and key collections. This is expected, as lookups are a primary function of the database, but it is certainly an area to focus on optimizing.

Clicking on this takes me directly to the source in question, with color-coded lines of source indicating time spent (number of samples):

Swapping my view to the functions view, I can bring the functions with the most samples to the top. A snapshot paints an interesting picture. Of course, Contains is a side effect of having to walk the list, which relies on the Equals and GetHashCode implementations. Sure enough, you can see Sterling is spending a lot of time evaluating these:
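To see why Contains leans so heavily on equality, note that List<T>.Contains performs a linear scan, invoking the default equality comparer on each element until it finds a match. Conceptually it behaves like this sketch (not the actual BCL source):

// Conceptual sketch: each lookup calls Equals once per element scanned,
// which for TableKey lands on the overridden Equals method shown below.
static bool ContainsSketch<T>(List<T> list, T item)
{
    var comparer = EqualityComparer<T>.Default;
    foreach (var element in list)
    {
        if (comparer.Equals(element, item))
        {
            return true;
        }
    }
    return false;
}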

So diving into the code, the first thing I looked at was the Equals method. For the key, it basically said "what I'm being compared to must also be the same type of key, and then the value of the key I hold must also be the same."

public override bool Equals(object obj)
{
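    // the compared object must be the same kind of key, and the wrapped key values must match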
    return obj is TableKey<T, TKey> && ((TableKey<T, TKey>) obj).Key.Equals(Key);
}

Because these keys are mainly being compared via LINQ functions and other queries on the same table, I'm fairly confident there won't be an issue with the types not matching. So, I removed that condition from the keys and indexes and ran the performance wizard again to see if there were any significant gains.

This time the list stayed on top, but the table index comparison also bubbled up:

One nice feature of the profiler is that you can compare two sessions. I highlighted both sessions and chose the comparison feature:

The comparison actually showed more samples for the index now, which still remained one of the most problematic areas of the code:

Obviously, my issue is with the comparison of keys. In Sterling you can define anything as your key, so Sterling has no guaranteed way to optimize the comparison. Or does it? One comparison guaranteed to be fast is integer comparison, and all objects have a hash code. Instead of always comparing the key values, why not evaluate the hash code for the key and compare that first to eliminate any mismatches? Then, only if the hash codes are equal, have Sterling compare the actual keys themselves to be sure.

I made those tweaks, and here is the new Equals function:

public override bool Equals(object obj)
{
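    // cheap integer comparison first; fall through to the full key comparison only on a hash match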
    return obj.GetHashCode() == _hashCode && ((TableKey<T, TKey>) obj).Key.Equals(Key);
}

Notice the fast check of the hash code first, with evaluation down to the key only if the hash code check succeeds. What's nice about this approach is that most comparisons are mismatches; the extra key comparison only takes place when the "interesting event" of a hash match happens. All of the boring mismatches now benefit from a fast integer comparison that quickly rejects them.
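The snippet assumes a cached _hashCode field, which the post doesn't show being populated. A plausible (hypothetical) companion looks like this, computing the hash once in the constructor and keeping GetHashCode consistent with Equals:

// Hypothetical sketch; the post does not show this part of TableKey.
public class TableKey<T, TKey>
{
    private readonly int _hashCode;

    public TableKey(TKey key)
    {
        Key = key;
        _hashCode = key.GetHashCode(); // computed once, reused on every compare
    }

    public TKey Key { get; private set; }

    public override int GetHashCode()
    {
        return _hashCode; // stays consistent with the Equals override
    }
}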

That's the theory, anyway. What did the profiler say? I executed a final profiler run. This time there was definite improvement in the equality function and the list lookup:

Of course, the only caveat here is that the sample counts aren't statistically significant. The improvement is slight, with a real margin of error. Fortunately, I was able to run the profiler multiple times against both the old and the new code and consistently saw improvement, and even a little counts because this is where Sterling spends most of its time. After evaluating this, a HashSet<T> may outperform the list, but it is not available on the Windows Phone 7 version, so I have some more tweaking and testing to do.
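For comparison, here is a minimal sketch of what a hash-based lookup buys: an average O(1) bucket probe instead of an O(n) scan. (Where HashSet<T> is missing, as on Windows Phone 7 at the time, a Dictionary keyed on the same values is one possible stand-in.)

// Minimal sketch: Contains probes a hash bucket rather than scanning
// every element, so lookups stay fast as the collection grows.
var keys = new HashSet<int> { 1, 2, 3 };
bool found = keys.Contains(2); // average O(1) versus O(n) for List<int>.Contains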

I also managed to find a few other nuggets, including converting the reflection cache to use dynamic methods, and taking an expensive operation that was evaluating types and caching the lookup in a dictionary instead. After tweaking and checking in these improvements, next up will be looking at the isolated storage driver and potential bottlenecks there.
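Both of those tweaks follow the same pattern; here is a generic sketch of the idea (not Sterling's actual code): compile a property getter once per type and property, cache the delegate in a dictionary, and pay the reflection cost a single time:

// Generic sketch of the technique, not Sterling's implementation: replace
// repeated PropertyInfo.GetValue calls with a compiled delegate that is
// cached per type and property name.
public static class GetterCache
{
    private static readonly Dictionary<string, Func<object, object>> _cache =
        new Dictionary<string, Func<object, object>>();

    public static Func<object, object> For(Type type, string propertyName)
    {
        var cacheKey = type.FullName + "." + propertyName;
        Func<object, object> getter;
        if (_cache.TryGetValue(cacheKey, out getter))
        {
            return getter; // dictionary hit: no reflection at all
        }

        // One-time cost: build and compile (object o) => (object)((T)o).Property
        var property = type.GetProperty(propertyName);
        var instance = Expression.Parameter(typeof(object), "instance");
        var body = Expression.Convert(
            Expression.Property(Expression.Convert(instance, type), property),
            typeof(object));
        getter = Expression.Lambda<Func<object, object>>(body, instance).Compile();
        _cache[cacheKey] = getter;
        return getter;
    }
}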

As you can see, the built-in profiling is very powerful and a useful tool to help improve application performance and identify issues before they become serious.

Jeremy Likness