
Tech Guides

852 Articles

Benefits of using Kotlin Java for Android

HariVigneshJayapalan
06 Mar 2017
6 min read
Kotlin is a statically typed programming language for the JVM, Android, and the browser. It is a new programming language from JetBrains, the maker of the world's best IDEs.

Why Kotlin?

Before we jump into the benefits of Kotlin, we need to understand how it originated and evolved. We already have many programming languages, so how has Kotlin emerged to capture programmers' hearts? A 2013 study showed that language features matter little compared with ecosystem issues when developers evaluate programming languages.

Kotlin compiles to JVM bytecode or JavaScript. It is not a language you will write a kernel in. It is of the greatest interest to people who work with Java today, although it could appeal to all programmers who use a garbage-collected runtime, including people who currently use Scala, Go, Python, Ruby, and JavaScript.

Kotlin comes from industry, not academia. It solves problems faced by working programmers today. As an example, the type system helps you avoid null pointer exceptions. Research languages tend not to have null at all, but that is of no use to people working with large codebases and APIs that do.

Kotlin costs nothing to adopt! It's open source, but that's not the point. The point is that there is a high-quality, one-click Java-to-Kotlin converter tool (available in Android Studio) and a strong focus on Java binary compatibility. You can convert an existing Java project one file at a time and everything will still compile, even for complex programs that run to millions of lines of code. Kotlin programs can use all existing Java frameworks and libraries, even advanced frameworks that rely on annotation processing. The interop is seamless and does not require wrappers or adapter layers, and Kotlin integrates with Maven, Gradle, and other build systems.

Kotlin is approachable: it can be learned in a few hours by simply reading the language reference. The syntax is clean and intuitive. Kotlin looks a lot like Scala, but it is simpler, and it balances terseness and readability well. It also enforces no particular philosophy of programming, such as an overly functional or OOP style. Combined with the appearance of frameworks like Anko and Kovenant, this lightness means Kotlin has become popular among Android developers. You can read a report written by a developer at Square on their experience with Kotlin and Android.

Kotlin features

Let's summarize why it's the right time to jump from Java to Kotlin:

  • Concise: Drastically reduces the amount of boilerplate code you need to write.
  • Safe: Avoids entire classes of errors, such as null pointer exceptions.
  • Versatile: Build server-side applications, Android apps, or frontend code running in the browser.
  • Interoperable: Leverage existing frameworks and libraries of the JVM with 100% Java interoperability.

Brief discussion

Let's discuss a few important features in detail.

Functional programming support

Functional programming is not easy, at least in the beginning, until it becomes fun. Kotlin offers zero-overhead lambdas and the ability to do mapping, folding, and so on over standard Java collections. The Kotlin type system also distinguishes between mutable and immutable views over collections.

Function purity

The concept of a pure function (a function that does not have side effects) is the most important functional concept: it allows us to greatly reduce code complexity and get rid of most mutable state.
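As a minimal sketch of what purity means in Kotlin (the function and variable names here are ours, not from the article):

    // Impure: the result depends on mutable state outside the function.
    var taxRate = 0.10
    fun priceWithTax(price: Double) = price * (1 + taxRate)

    // Pure: the result depends only on the arguments, with no side effects.
    fun priceWithTax(price: Double, taxRate: Double) = price * (1 + taxRate)

    fun main(args: Array<String>) {
        println(priceWithTax(100.0, 0.10)) // the same inputs always give the same output
    }

The pure version can be tested and reasoned about in isolation, which is exactly the complexity reduction described above.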
Higher-order functions

Higher-order functions take functions as parameters, return functions, or both. Higher-order functions are everywhere; you just pass functions to collections to make the code easy to read:

    titles.map { it.toUpperCase() }

reads like plain English. Isn't it beautiful?

Immutability

Immutability makes it easier to write, use, and reason about code (the class invariant is established once and then unchanged). The internal state of your app components will be more consistent. Kotlin encourages immutability by introducing the val keyword, as well as Kotlin collections, which are immutable by default. Once a val or a collection is initialized, you can be sure about its validity.

Null safety

Kotlin's type system is aimed at eliminating the danger of null references from code, also known as the Billion Dollar Mistake. One of the most common pitfalls in many programming languages, including Java, is accessing a member of a null reference, resulting in a null reference exception; in Java, this is the NullPointerException, or NPE for short. In Kotlin, the type system distinguishes between references that can hold null (nullable references) and those that cannot (non-null references). For example, a regular variable of type String can't hold null:

    var a: String = "abc"
    a = null // compilation error

To allow nulls, you can declare a variable as a nullable string, written String?:

    var b: String? = "abc"
    b = null // ok

Anko DSL for Android

Anko DSL for Android is a great library that significantly simplifies working with views, threads, and the Android lifecycle. The GitHub description states that Anko is "Pleasant Android application development", and it has truly proven to be so.

Removing the ButterKnife dependency

In Kotlin, you can reference a view property by its @id XML parameter; these properties have the same names as those declared in your XML file. More info can be found in the official docs.

Smart casting

    // Java
    if (node instanceof Tree) {
        return ((Tree) node).symbol;
    }

    // Kotlin
    if (node is Tree) {
        return node.symbol // smart cast, no explicit cast needed
    }

    if (document is Payable && document.pay()) { // smart cast
        println("Payable document ${document.title} was paid for.")
    }

Kotlin short-circuits the && operator just as Java does, so if the document were not a Payable, the second part would not be evaluated in the first place. Hence, if the second part is evaluated, Kotlin knows that the document is a Payable and applies a smart cast.

Try it now!

Like many modern languages, Kotlin has a way to try it out via your web browser. Unlike those other languages, Kotlin's tryout site is practically a full-blown IDE that features fast autocompletion, real-time background compilation, and even online static analysis!

About the author

HariVigneshJayapalan is a Google Certified Android App developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and a wannabe entrepreneur.


Security in 2017: What's new and what's not

Erik Kappelman
22 Feb 2017
5 min read
Security has been a problem for web developers since before the Internet existed. By this, I mean network security was a problem before the Internet, the network of networks, was created. Internet and network security has gotten a lot of play recently in the media, mostly due to some high-profile hacks. From the personal security perspective, very little has changed: the prevalence of phishing attacks continues to increase as networks become more secure, because human beings remain a serious liability when securing a network. That type of security discussion, however, is outside the scope of this blog.

Due to the vast breadth of this topic, I am going to focus on one specific area of web security: securing websites and apps from the perspective of an open source developer, with a focus on the tools that can be used to secure Node.js. This is not an exhaustive guide to secure web development. Consider this blog a quick overview of the current security tools available to Node.js developers.

A good starting point is a brief discussion of injection theory (this article provides a more in-depth discussion if you are interested). The fundamental strategy of an injection attack is to find a way to modify a command on the server by manipulating unsecured data. A classic example is the SQL injection, in which SQL is injected through a form into the server in order to compromise the server's database. Luckily, injection is a well-known infiltration strategy, and there are many tools that help defend against it.

One method of injection compromises HTTP headers. A quick way to secure your Node.js project from this attack is the helmet module. The following code snippet shows how easy it is to start using helmet with the default settings:

    var express = require('express')
    var helmet = require('helmet')

    var app = express()
    app.use(helmet())

Just the standard helmet settings should go a long way toward a more secure web app. By default, helmet will prevent clickjacking, remove the X-Powered-By header, keep clients from sniffing the MIME type, add some small cross-site scripting (XSS) protections, and more. For further defense against XSS, the sanitizer module is probably a good idea. The sanitizer module is relatively simple: it helps remove syntax from HTML documents that could allow for easy XSS.

Another form of injection attack is the SQL injection, which consists of injecting SQL into the backend as a means of entry or destruction. The sqlmap project offers a tool that can test an app for SQL injection vulnerabilities. There are many tools like sqlmap, and I would recommend weaving a variety of automated vulnerability testing into your development pattern. One easy way to avoid SQL injection is the use of parameterized queries; the PostgreSQL database module supports parameterized queries as a guard against SQL injection.
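Here is a minimal sketch of what a parameterized query looks like with the pg module; the table, column, and function names are illustrative, not from the article:

    // The user-supplied value travels separately from the SQL text,
    // so it is never interpolated into the query string.
    const { Pool } = require('pg')
    const pool = new Pool()

    function findUser (name, callback) {
      pool.query('SELECT id, name FROM users WHERE name = $1', [name], callback)
    }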
A fundamental part of any secure website or app is the use of secure transmission via HTTPS. Accomplishing encryption for your Node.js app can be fairly easy, depending on how much money you feel like spending. In my experience, if you are already using a deployment service, such as Heroku, it may be worth the extra money to pay the deployment service for HTTPS protection. If you are categorically opposed to spending extra money on web development projects, Let's Encrypt is a free and open way to supply your web app with browser-trusted HTTPS protection. Furthermore, Let's Encrypt automates the process of using an SSL certificate. Let's Encrypt is a growing project and is definitely worth checking out, if you haven't already.

Once you have created or purchased a security certificate, Node's onboard https module can do the rest of the work for you. The following code shows how simply HTTPS can be added to a Node server once a certificate is procured:

    // curl -k https://localhost:8000/
    const https = require('https');
    const fs = require('fs');

    const options = {
      key: fs.readFileSync('/agent2-key.pem'),
      cert: fs.readFileSync('/agent2-cert.pem')
    };

    https.createServer(options, (req, res) => {
      res.writeHead(200);
      res.end('hello security\n');
    }).listen(8000);

If you are feeling adventurous, the crypto Node module offers a suite of OpenSSL functions that you could use to create your own security protocols. These include hashes, HMAC authentication, ciphers, and others.
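For instance, here is a small sketch of the crypto module's hash and HMAC helpers (the message and key here are placeholders, not recommendations):

    const crypto = require('crypto');

    // A SHA-256 digest of a message.
    const digest = crypto.createHash('sha256')
      .update('hello security')
      .digest('hex');

    // An HMAC signature of the same message under a shared secret.
    const signature = crypto.createHmac('sha256', 'a-secret-key')
      .update('hello security')
      .digest('hex');

    console.log(digest, signature);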
Internet security is often overlooked by hobbyists and up-and-coming developers. Instead of letting it take a back seat, you should make securing a web app one of your highest priorities, especially as threats on the Web grow with each passing day. As for the topic of this post, what's new and what's not: most of what I have discussed is not new. This is in part due to the proliferation of social engineering, rather than technological methods, as a means to compromise networks; most of the newest methods for protecting networks revolve around educating and monitoring authorized network users instead of more traditional security activities. What is absolutely new (and exciting) is the introduction of Let's Encrypt. Having access to free security certificates that are easily deployed will benefit individual developers and Internet users as a whole. HTTPS should become ubiquitous as Let's Encrypt and similar projects continue to grow.

As I said at the beginning of this blog, security is a broad topic, and this post has merely scratched the surface of ways to secure a Node.js app. I do hope, however, that some of the information leads you in the right, safe direction.

About the Author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.


React Native Performance

Pierre Monge
21 Feb 2017
7 min read
Since React Native [1] came out, the core group of developers, as well as the community, has kept improving the framework, including the performance and stability of the technology. In this article, we talk about React Native's performance. This post is aimed at people who want to learn more about React Native, but it might be a bit too complex for beginners.

How does it work?

In order to understand how to optimize our application, we have to understand how React Native works. But don't worry; it's not too hard. The architecture looks like this:

    JavaScript -> Bridge -> Native

React Native is based on two environments: a JS (JavaScript) environment and a Native environment. These two entities communicate through a bridge. The JS side is our "organizer": this is where we run our algorithms, manage our views, make network calls, and so on. The Native side is there for the display and the physical link part. It senses physical events, as well as some virtual ones if we ask it to, and then sends them to the JS part. The bridge exists as the link between the two, as shown in the following code:

    render(): ReactElement<*> {
      return (
        <TextInput
          value={''}
          onChangeText={() => { /* Here we handle the native event */ }}
        />
      );
    }

Here, we simply have a TextInput. This one component involves all the layers of the React Native stack: the TextInput is declared in JS, but is displayed on the device natively. Every character typed into the Native component triggers a physical event, which is transformed into a letter or an action and then transmitted over the bridge to the JS component. In every transaction of data between JS and Native, the Bridge intervenes so that the data is usable on both sides, which means the Bridge has to serialize the data. The Bridge is simple; it's a bit stupid, and it has only one job, but it is the one that will bother us the most.

The Bridge and other losses of performance

Imagine that you are in an airport. You got your ticket online in five minutes, and once you are on the plane, the flight takes the time it is supposed to. Before that, however, there is the check-in: it takes horribly long to find the right flight, drop your luggage at the right place, go through security, and so on. Well, this is our Bridge. JS is fast, even though it runs on a single main thread. Native is also fast. The Bridge, however, is slow. Actually, it's more that it has so much data to serialize that serialization eats all its time; it cannot improve its performance. Or, rather: it is slow simply because you made it go slow!

The Bridge is optimized to batch the data [2]. Therefore, we can't send it data too fast, and if we really have to, we must minimize the amount as much as possible. Let's take an animation as an example. We want to make a square go from the left to the right in 10 seconds. The pure JS version:

    /* at the top of the class */
    let i = 0;

    loadTimer = () => {
      if (i < 100) {
        i += 1;
        setTimeout(loadTimer, 100);
      }
    };

    ...

    componentDidMount(): void {
      loadTimer();
    }

    ...
    render(): ReactElement<*> {
      let animationStyle = {
        transform: [
          {
            translateX: i * Dimensions.get('window').width / 100,
          },
        ],
      };

      return (
        <View
          style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }]}
        />
      );
    }

Here is an implementation in pure JS of a pseudo animation. This version, where we make raw data go through the bridge on every tick, is dirty code and very slow. To be banned!

The Animated version:

    ...

    componentDidMount(): void {
      this.state.animatedValue.setValue(0);
      Animated.spring(this.state.animatedValue, {
        toValue: 1,
      }).start(); // the animation must be started explicitly
    }

    ...

    render(): ReactElement<*> {
      let animationStyle = {
        transform: [
          {
            translateX: this.state.animatedValue.interpolate({
              inputRange: [0, 1],
              outputRange: [0, Dimensions.get('window').width],
            }),
          },
        ],
      };

      return (
        <Animated.View
          style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }]}
        />
      );
    }

This is already much more understandable. The Animated library was created to improve the performance of animations, and its objective is to lighten the use of the bridge by sending predictions of the data to the native side before starting the animation. The animation will be much smoother and more reliable with the right library, and the general performance of the app will automatically improve. However, animations are not the only culprit: you have to take time to verify that you don't have too much unnecessary data going through the bridge.

Other factors

Thankfully, the Bridge isn't the only thing at fault, and there are many other ways to optimize a React Native application. Here is a list of why and how you can optimize your application:

  • Do not neglect your business logic; even if JS and Native are supposed to be fast, you still have to optimize them.
  • Ban busy while loops and synchronous functions that take time in your application. Blocking the JS thread is the same as blocking the application.
  • The rendering of a view is costly, and most of the time it is done without anything having changed. That is why you MUST use the shouldComponentUpdate method in your components (see the sketch after this list).
  • If you do not manage to optimize a JavaScript component, it is probably best to rewrite it in Native.
  • The transactions with the bridge should be minimized.
  • A React Native application can run in several states: a debug state and a release state. The release state greatly increases the performance of the app through compilation flags, while taking out the dev mode. It doesn't solve everything, though: the debugger mode will slow down your application, because the JS then runs in your browser and not on your phone.
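Here is the promised minimal sketch of shouldComponentUpdate; the component and prop names are ours, purely for illustration:

    import React from 'react';
    import { Text } from 'react-native';

    // A row that only re-renders when the title it displays changes.
    class Row extends React.Component {
      shouldComponentUpdate(nextProps) {
        // Skip the costly render when nothing visible has changed.
        return nextProps.title !== this.props.title;
      }

      render() {
        return <Text>{this.props.title}</Text>;
      }
    }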
Tools

The React Native tooling is not yet very mature, but a large part of the toolset ships with the framework, and one hundred percent of that functionality is native. Here is a short list of some of the important tools that should help you out:

  • react-addons-perf (both platforms): a tool that provides simple benchmarks of component rendering; it also reports the wasted time (time spent re-rendering components that did not change).
  • Systrace (Android): hard to use, but useful for detecting big bottlenecks.
  • Xcode (iOS): this function of Xcode lets you see how your application is rendered (great for finding unnecessary views).
  • rn-snoopy (both platforms): Snoopy is a piece of software that allows you to spy on the bridge. Its main use is debugging, but it can also be used for optimization.

You now have some more tricks and tools to optimize your React Native application. However, there is no hidden recipe or magic potion: it will take some time and research. The performance of a React Native application is very important. The joy of creating a mobile application in JavaScript must be at least equal to the experience of the user testing it.

About the Author

Pierre Monge is an IT student from Bordeaux, France. His interests include C, JS, Node, React, React Native, and more. He can be found on GitHub @azendoo.

[1] React Native allows you to build mobile apps using only JavaScript. It uses the same design as React, letting you compose a rich mobile UI from declarative components.

[2] Batch processing.


The Web Development Tools Behind A Large Portion of the Modern Internet: Pornography

Erik Kappelman
20 Feb 2017
6 min read
Pornography is one of, if not the, most common forms of media on the Internet, whether you go by the number of websites or the amount of data transferred. Despite this fact, Internet pornography is rarely discussed or written about in positive terms. This is somewhat unexpected, given that pornography has spurred many technological advances throughout its history. Many of the advances in video capture and display were driven by the need to make and display pornography better. The desire to purchase pornography on the Internet with more anonymity was one of the ways PayPal drew, and continues to draw, customers to its services. This blog will look into some of the tools being used by some of the more popular Internet pornography sites today. We will be examining the HTML source for some of the largest websites in this industry. The content of this blog will not be explicit, and the intention is not titillation. YouPorn is one of the top 100 most-visited websites on the Internet, so I believe it is relevant to have a serious conversation about the technologies used by these sites. This conversation does not have to be explicit in any way, and it will not be.

Much of what is in the <head> tag in the YouPorn HTML source is related to loading assets, such as stylesheets. After several <meta> tags, most designed to enhance the website's SEO, a very large chunk of JavaScript appears. It is hard to say, at this point, whether YouPorn is using a common frontend framework or whether this JavaScript was wholly written by a developer somewhere. It certainly was minified before it was sent to the frontend, which is the least you would expect. This script does a variety of things. It handles that lovely popup that occurs as soon as a viewer clicks anywhere on the page; this is done with vanilla JavaScript. The script also collects a large amount of information about the viewer's device, including the operating system, the browser, the device type and brand, and even some information about the CPU; this information is used to optimize the viewer's experience. The script also detects whether the viewer is using AdBlock, and modifies the page accordingly.

Two third-party tools that appear in this script are jQuery and AJAX. These would be very necessary for a website whose main purpose is the display of pornographic content: AJAX can help expedite the movement of content from backend to frontend, and jQuery can simplify the DOM manipulation needed for the viewer's user interface. AJAX and jQuery can also be seen in the source code of the PornHub website. Again, this is really the least you would expect from a website that serves as much content as any of the currently popular porn websites.

The source code for these pages shows that YouPorn and PornHub both use Google Analytics tools, presumably to assist in their content targeting. This is part of how pornography websites grow more and more geared toward a specific viewer over time. PornHub and YouPorn spend a lot of lines of code building what could be considered a profile of their viewers. This way, viewers can see what they want immediately, which ought to enhance their experience and keep them online. xHamster follows a similar template: it identifies information about the user's device and uses Google Analytics to target the viewer with specific content.

Layout and navigation of any website are important.
Although pornography is very desirable to some, websites that display it have so many competitors that they must all try very hard to satisfy their viewers, which makes every detail very important. YouPorn and PornHub appear to use Bootstrap as the foundation of their frontend design. There is quite a bit of customization performed by the sites, but Bootstrap is still the foundation. Although it is somewhat less clear, it seems that xHamster also uses Bootstrap as its design foundation.

Now, let's choose a video and see what the source code tells us about what happens when viewers attempt to interact with the content. On PornHub, there is a series of video previews; when one is rolled over, a video sprite appears in order to give the viewer a preview of the video. Once the video is clicked, the viewer is sent to a new page to view that specific video. In the case of PornHub, this is done through the execution of a PHP script that uses the video's ID and an AJAX request to get the user onto the right page with the right video. Once we are on the video's page, we can see that PornHub, and probably xHamster and YouPorn as well, are using Flash Video. I am viewing these websites on a MacBook, so it is likely that the video type is different when viewed on a device that does not support Flash Video. This is part of the reason so much information about a viewer's device is collected upon visiting these websites.

This short investigation into the tools used by these websites has revealed that, although pornographers have been on the cutting edge of web technology in the past, some of the current pornography providers are using tools that are somewhat unimpressive, or at least run of the mill. That being said, there is never a good reason to reinvent the wheel, and these websites are clearly doing fine in terms of viewership. For the aspiring developers out there, I would take it to heart that some of the most viewed websites on the Internet are using some of the most basic tools to provide their users with content. This confirms what I have often found to be true: getting it done right is far more important than getting it done in a fancy way. I have only scratched the surface of this topic. I hope that others will investigate more of the technologies used by this very large portion of the modern Internet.

About the Author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.


GoMobile: GoLang's Foray into the Mobile World

Erik Kappelman
15 Feb 2017
6 min read
There is no question that the trend today in mobile app design is to get every possible language on board for creating mobile applications, and this is sort of the case with GoMobile. Far from being originally intended for mobile apps, Go, or GoLang, was originally created at Google in 2007. Go has true concurrency capabilities, which can lend themselves well to any programming task, certainly mobile app creation.

The first thing you need to do to follow along with this blog is get the GoLang binaries on your machine. Although there is a GCC tool to compile Go, I would strongly recommend using the Go tools. I like Go because it is powerful, safe, and it feels new. It may simply be a personal preference, but I think Go is a largely underrated language. This blog assumes a minimum understanding of Go; don't worry so much about the syntax, but you will need to understand how Go handles projects and packages.

To begin, let's create a new folder and specify it as our $GOPATH bash variable. This tells Go where to look for code and where to place downloaded packages, such as GoMobile. After we specify our $GOPATH, we add the bin subdirectory of the $GOPATH to our global $PATH variable. This allows us to execute Go tools like any other bash command:

    $ cd ~
    $ mkdir GoMobile
    $ export GOPATH=~/GoMobile
    $ export PATH=$PATH:$GOPATH/bin

The next step is somewhat more convoluted. Today, we are getting started with Android development. I chose Android over iOS because GoMobile can build for Android on any platform, but can only build for iOS on OS X. In order for GoMobile to be able to work its magic, you'll need to install the Android NDK. I think the easiest way to do this is through Android Studio.

Once you have the Android NDK installed, it's time to get started. We are going to be using an example app from our friends over at Go today. The app structure required for Go-based mobile apps is fairly complex, so I would suggest using this codebase as you begin developing your own apps. This might save you some time. So, let's first install GoMobile:

    $ go get golang.org/x/mobile/cmd/gomobile

Now, let's get that example app:

    $ go get -d golang.org/x/mobile/example/basic

With the next command, we initialize GoMobile and specify the NDK location. The online help for this example is somewhat vague when it comes to specifying the NDK location, so hopefully my research will save you some time:

    $ gomobile init -ndk=$HOME/Library/Android/sdk/ndk-bundle/

Obviously, this is the path on my machine, so yours may be different; however, if you're on anything Unix-like, it ought to be relatively close. At this point, you are ready to build the example app. All you have to do is use the command below, and you'll be left with a real live Android application:

    $ gomobile build golang.org/x/mobile/example/basic

This builds an APK file and places it in your $GOPATH. The file can be transferred to and installed on an actual Android device, or you can use an emulator. To use the emulator, you'll need to install the APK file using the adb command. This command should already be on board with your installation of Android Studio.
The following command adds the adb command to your path (your path might be different, but you'll get the idea):

    $ export PATH=$PATH:$HOME/Library/Android/sdk/platform-tools/

At this point, you ought to be able to run the adb install command and try out the app on your emulator:

    $ adb install basic.apk

As you will see, there isn't much to this particular app, but in this case, it's about the journey and not the destination. There is another way to install the app on your emulator. First, uninstall the app from your Android VM. Second, run the following command:

    $ gomobile install golang.org/x/mobile/example/basic

Although the result is the same, the second method is almost identical to the way regular Go builds applications. For consistency's sake, I would recommend the second method.

If you're new to Go, at this point I would recommend checking out some of the documentation. There is an interactive tutorial called A Tour of Go, which I have found enormously helpful for beginner-to-intermediate needs. You will need a pretty deep understanding of Go to be an effective mobile app developer.

If you are new to mobile app design in general, that is, you don't already know Java, I would recommend taking the Go route. Although Java is still the most widely used language the world over, I have a strong preference for Go. If you will be using Go in the other elements of your mobile app, say a web server that controls access to data required for the app's operations, using Go and GoMobile can be even more helpful, because it keeps the code consistent across the various levels of the app. This is similar to the benefit of using the MEAN stack for web development, in that one language controls all the levels of the app. In fact, there are tools now that allow JavaScript to be used in the creation of mobile apps, and then, presumably, a developer could use Node.js for a backend server, ending up with a MEAN-like mobile stack. While this would probably work fine, Go is stronger and perhaps safer than JavaScript. Also, because mobile development is essentially software development, which is fundamentally different from web development, using a language geared toward software development makes more intuitive sense. However, these thoughts are largely opinions, and as I have said before in many blogs, there are so many options; just find the one you like that gets the job done.

About the Author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.


RxSwift Part 1: Where to Start? Beginning with Hot and Cold Observables

Darren Karl
09 Feb 2017
6 min read
In earlier articles, we gave a short introduction to RxSwift and talked about the advantages of the functional aspect of Rx, achieved by using operators and composing streams of operations. In my journey to discover and learn Rx, I was drawn to it after finding various people talking about its benefits. After I finally bought into the idea that I wanted to learn it, I began to read the documentation, and I was overwhelmed by how many objects, classes, and operators it covered, along with loads of terminology I had never encountered. The documentation was (and still is) there, but because I was still at page one, my elementary Rx vocabulary prevented me from actually being able to appreciate, maximize, and use RxSwift. I had to soak in the documentation for months until it saturated my brain and things finally clicked. After talking with some members of the RxSwift community in Slack, I found that I wasn't the only one who had experienced this.

This is the gap. RxSwift is a beautifully designed API (I'll talk about why exactly, later), but I personally didn't know how long it would take to go from my working non-Rx knowledge to the well-designed tools that Rx provides. The problem wasn't that the documentation was lacking, because it was sufficient. It was that, while reading the documentation, I didn't even know what questions to ask or which documentation answered which of my questions. What I did know was programming concepts in the context of application development, in a non-Rx way. What I wanted to discover was how things would be done in RxSwift, along with the thought processes that led to the design of elegant units of code, such as operators like flatMap or concatMap, or constructs such as Subjects or Drivers.

This article aims to walk you through real programming situations that software developers encounter, while gradually introducing the Rx concepts that can be used. It assumes that you've read through the last two articles on RxSwift I've written, which are linked above, and that you've found and read some of the documentation but don't know where to start. It also assumes that you're familiar with how network calls or database queries are made and how to wrap them using Rx.

A simple queuing application

Let's start with something simple, such as a mobile application for queuing. We can have multiple queues, each containing zero to many people in order. Let's say that we have the following code that performs a network query to get the queue data from your REST API. We assume that these are network requests wrapped using Observable.create():

    private func getQueues() -> Observable<[Queue]>
    private func getPeople(in queue: Queue) -> Observable<[Person]>
    private var disposeBag = DisposeBag()

An example of the Observable code for getting the queue data is available here.

Where do I write my subscribe code?
Initially, a developer might write the following code in the viewDidLoad() method and bind it to some UITableView:

    override func viewDidLoad() {
        super.viewDidLoad()

        getQueues()
            .subscribeOn(ConcurrentDispatchQueueScheduler(queue: networkQueue))
            .observeOn(MainScheduler.instance)
            .bindTo(tableView.rx.items(cellIdentifier: "Cell")) { index, model, cell in
                cell.textLabel?.text = model
            }
            .addDisposableTo(disposeBag)
    }

However, if the getQueues() observable code loads the data from a cold observable network call, then, by definition, the cold observable will perform the network call only once, during viewDidLoad(): it loads the data into the views, and it is done. The table view will not update when the queue is updated by the server, unless the view controller is disposed of and viewDidLoad() runs again. Note that should the network call fail, we can use the catchError() operator right after it and swap in a database query or a cache read instead, assuming we've persisted the queue data in a file or database. This way, we're assured that this view controller will always have data to display.

Introduction to cold and hot observables

By cold observable, we mean that the observable code (that is, the network call to get the data) will begin emitting items only on subscription (which currently happens in viewDidLoad). This is the difference between a hot and a cold observable: hot observables can emit items even when no observers are subscribed, while cold observables run only once an observer subscribes. Examples of cold observables are things you've wrapped using Observable.create(), while examples of hot observables are things like UIButton.rx.tap or UITextField.rx.text, which can emit items such as Void for a button press or String for a text field even when no observers are subscribed to them. Inherently, we are wrong to use a cold observable here, because its definition simply does not meet the demands of our application.

A quick fix might be to write it in viewWillAppear

Going back to our queuing example, one could write the code in the viewWillAppear() lifecycle method so that the app refreshes its data every time the view appears. The problem with this solution is that we perform a network query too frequently. Furthermore, every time viewWillAppear is called, a new subscription is added to the disposeBag. If, for some reason, the last subscription has not been disposed of (that is, it is still processing and emitting items and has not yet reached the onCompleted or onError state) and you begin to perform a network query again, you have a possible memory leak!

Here's an example of the (impractical) code that refreshes on every view. The code will work (it will refresh every time), but it isn't good code:

    public override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        getQueues()
            .bindTo(tableView.rx.items(cellIdentifier: "Cell")) { index, model, cell in
                cell.textLabel?.text = model
            }
            .addDisposableTo(self.disposeBag)
    }

So, if we don't want to query only one time, and we don't want to query too frequently, it begs the question: how many times should the queries really be performed? In part 2 of this article, we'll discuss what the right amount of querying is.

About the Author

Darren Karl Sapalo is a software developer, an advocate of UX, and a student taking up his Master's degree in Computer Science. He enjoyed developing games in his free time when he was twelve.
He finished his undergraduate thesis on computer vision and took up industry work with Apollo Technologies Inc., developing for both Android and iOS platforms.

Shift to Swift in 2017

Shawn Major
27 Jan 2017
3 min read
It's a great time to be a Swift developer, because this modern programming language has a lot of momentum and community support behind it and a big future ahead of it. Swift became a real contender when it went open source in December 2015, giving developers the power to build their own tools and port it into the environments in which they work. The release of Swift 3 in September 2016 really shook things up by enabling broad-scale adoption across multiple platforms, including portability to Linux/x86, Raspberry Pi, and Android.

Swift 3 is the "spring cleaning" release that, while not backwards compatible, has resulted in a massively cleaner language and ensured sound and consistent language fundamentals that will carry across future releases. If you're a developer using Swift, the best thing you can do is get on board with Swift 3, as the next release promises to deliver stability from 3.0 onwards. Swift 4 is expected in late 2017, with the goals of providing source stability for Swift 3 code and ABI stability for the Swift standard library.

Despite the shake-up caused by the new release, developers are still enthusiastic about Swift: it was one of the "most loved" programming languages in Stack Overflow's 2015 and 2016 Developer Surveys. Swift was also one of the top 3 trending techs in 2016, as it has been stealing market share from Objective-C. The keen interest that developers have in Swift is reflected in the 35,000+ stars it has amassed on GitHub and the impressive amount of ongoing collaboration between its core team and the wider community. Rumour has it that Google is considering making Swift a "first class" language and that Facebook and Uber are looking to make Swift more central to their operations. Lyft's migration of its iOS app to Swift in 2015 shows that the lightness, leanness, and maintainability of the code are worth it, and services like the web server and toolkit Perfect are proof that server-side Swift is ready.

People are starting to do some cool and surprising things with Swift, including:

  • Shaping the language itself. Apple has made a repository on GitHub called swift-evolution that houses proposals for enhancements and changes to the Swift language.
  • Bringing Swift 3 to as many ARM-based systems as possible. For example, you can get Swift 3 for all the Raspberry Pi boards, or you can program a robot in Swift on a BeagleBone.
  • IBM has adopted Swift as the core language for its cloud platform. This opens the door to radically simpler app development: developers can build the next generation of apps in native Swift from end to end, deploy applications with both server and client components, and build microservice APIs on the cloud.
  • The Swift Sandbox lets developers of any level of experience actively build server-based code. Since launching, it has had over 2 million code runs from over 100 countries.

We think there are going to be a lot of exciting opportunities for developers to work with Swift in the near future. The iOS Developer Skill Plan on Mapt is perfect for diving into Swift, and we have plenty of Swift 3 books and videos if you have more specific projects in mind. The large community of developers using iOS/OS X and making libraries, combined with the growing popularity of Swift as a general-purpose language, makes jumping into Swift a worthwhile venture.

Interested in what other developers have been up to across the tech landscape?
Find out in our free Skill Up: Developer Talk report on the state of software in 2017.


Why you should learn Isomorphic JavaScript in 2017

Sam Wood
26 Jan 2017
3 min read
One of the great challenges of JavaScript development has been wrangling your code for both the server side and the client side of your site or app. Fullstack JS devs have worked to master the skills to work on both the frontend and the backend, and numerous JS libraries and frameworks have been created to make your life easier. That's why Isomorphic JavaScript is your Next Big Thing to learn for 2017.

What even is Isomorphic JavaScript?

Isomorphic JavaScript refers to JavaScript applications that run on both the client and the server side. The term comes from a mathematical concept, whereby a property remains constant even as its context changes. Isomorphic JavaScript therefore shares the same code whether it's running in the context of the backend or the frontend. It's often called the 'holy grail' of web app development.

Why should I use Isomorphic JavaScript?

"[Isomorphic JavaScript] provides several advantages over how things were done 'traditionally'. Faster perceived load times and simplified code maintenance, to name just two," says Google engineer and Packt author Matt Frisbie in our 2017 Developer Talk Report. Netflix, Facebook, and Airbnb have all adopted isomorphic libraries for building their JS apps.

Isomorphic JS apps are *fast*: operating off one base of code means that no time is spent loading and parsing client-side JavaScript before a page can first be displayed. It might only be a second, but that slow load time can be all it takes to frustrate and lose a user. Isomorphic apps can render HTML content directly on first load, ensuring a better user experience overall.

Isomorphic JavaScript isn't just quick for your users; it's also quick for you. By utilizing one framework that runs on both the client and the server, you'll open yourself up to faster development times and easier code maintenance.

What tools should I learn for Isomorphic JavaScript?

The premier and most powerful tool for isomorphic JS is probably Meteor, the fullstack JavaScript platform. With 10 lines of JavaScript in Meteor, you can do what would take you thousands elsewhere. There is no need to worry about building your own stack of libraries and tools; Meteor does it all in one single package.

Other isomorphic-focused libraries include Rendr, created by Airbnb. Rendr allows you to build a Backbone.js + Handlebars.js single-page app that can also be fully rendered on the server side; it was used to build the Airbnb mobile web app, with drastically improved page load times. Rendr also strives to be a library rather than a framework, meaning that it can be slotted into your stack as you like, giving you a bit more flexibility than a complete solution such as Meteor.
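To make the idea concrete, here is a tiny sketch (ours, not from the article) of the isomorphic trick: one function that produces markup, usable unchanged from Node on the server and from a script in the browser. The file and function names are made up:

    // greeting.js - shared between server and client.
    function renderGreeting (name) {
      return '<h1>Hello, ' + name + '!</h1>'
    }

    // On the server (Node): res.end(renderGreeting('world'))
    // In the browser: document.body.innerHTML = renderGreeting('world')

    // Export for Node without breaking plain <script> usage in the browser.
    if (typeof module !== 'undefined' && module.exports) {
      module.exports = { renderGreeting }
    }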


Introduction to Keras

Janu Verma
13 Jan 2017
6 min read
Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn type API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical methods. The key idea behind Keras is to facilitate fast prototyping and experimentation. In the words of Francois Chollet, the creator of Keras, "Being able to go from idea to result with the least possible delay is the key to doing good research."

Key features of Keras:

  • Either the Theano or the TensorFlow backend can be used.
  • Supports both CPU and GPU.
  • Keras is modular in the sense that each component of a neural network model is a separate, standalone module, and these modules can be combined to create new models. New modules are easy to add.
  • Write only Python code.

Installation

Keras has the following dependencies: numpy, scipy, pyyaml, hdf5 (for saving/loading models), theano (for the Theano backend), and tensorflow (for the TensorFlow backend). The easiest way to install Keras is via the Python Package Index (PyPI):

    sudo pip install keras

Example: MNIST digit classification using Keras

We will learn the basic functionality of Keras through an example: a simple neural network for classifying hand-written digits from the MNIST dataset. Classification of hand-written digits was the first big problem where deep learning outshone all the other known methods, and this paved the way for deep learning's successful track record.

Let's start by importing the data; we will use the sample of hand-written digits provided with the scikit-learn base package:

    from sklearn import datasets

    mnist = datasets.load_digits()
    X = mnist.data
    Y = mnist.target

Let's examine the data:

    print X.shape, Y.shape
    print X[0]
    print Y[0]

Since we are working with numpy arrays, let's import numpy:

    import numpy as np

    # set seed
    np.random.seed(1234)

Now, we'll split the data into training and test sets by randomly picking 70% of the data points for training and keeping the rest for validation:

    from sklearn.cross_validation import train_test_split

    train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.7, random_state=0)

Keras requires the labels to be one-hot encoded, i.e., the labels 1, 2, 3, etc. need to be converted to vectors like [1,0,0,...], [0,1,0,...], [0,0,1,...], respectively:

    from keras.utils import np_utils

    def one_hot_encode_object_array(arr):
        '''One hot encode a numpy array of objects (e.g. strings)'''
        uniques, ids = np.unique(arr, return_inverse=True)
        return np_utils.to_categorical(ids, len(uniques))

    # One hot encode labels for training and test sets.
    train_y_ohe = one_hot_encode_object_array(train_y)
    test_y_ohe = one_hot_encode_object_array(test_y)

We are now ready to build a neural network model. Start by importing the relevant classes from Keras:

    from keras.models import Sequential
    from keras.layers import Dense, Activation

In Keras, we have to specify the structure of the model before we can use it. A Sequential model is a linear stack of layers. There are other alternatives in Keras, but we will stick with Sequential for simplicity:

    model = Sequential()

This creates an instance of the constructor; we don't have anything in the model as yet. As stated previously, Keras is modular, and we add different components to the model via modules. Let's add a fully connected layer with 32 units.
Each unit receives input from every unit in the input layer, and since the number of units in the input is equal to the dimension (64) of the input vectors, we need the input shape to be 64. Keras uses the Dense module to create a fully connected layer:

    model.add(Dense(32, input_shape=(64,)))

Next, we add an activation function after the first layer. We will use sigmoid activation; other choices, like relu, are also possible:

    model.add(Activation('sigmoid'))

We can add any number of layers this way, but for simplicity we will restrict ourselves to one hidden layer. Now add the output layer. Since the output is a 10-dimensional vector, the output layer needs 10 units:

    model.add(Dense(10))

Add an activation for the output layer. In classification tasks, we use softmax activation; it provides a probabilistic interpretation of the output labels:

    model.add(Activation('softmax'))

Next, we need to configure the model. There are some more choices to make before we can run it, e.g., an optimization method, a loss function, and a metric of evaluation:

    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

The compile method configures the model, and the model is now ready to be trained on the data. Similar to sklearn, Keras has a fit method for training:

    model.fit(train_X, train_y_ohe, nb_epoch=10, batch_size=30)

Training neural networks often involves minibatching, which means showing the network a subset of the data, adjusting the weights, and then showing it another subset of the data. When the network has seen all the data once, that's called an "epoch". Tuning the minibatch/epoch strategy is a somewhat problem-specific issue. After the model has been trained, we can compute its accuracy on the validation set:

    loss, accuracy = model.evaluate(test_X, test_y_ohe)
    print accuracy

Conclusion

We have seen how a neural network can be built using Keras, and how easy and intuitive the Keras API is. This is just an introduction, a hello-world program, if you will. There is a lot more functionality in Keras, including convolutional neural networks, recurrent neural networks, language modeling, Deep Dream, and more.

About the author

Janu Verma is a Researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journals, etc. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email to schedule a meeting.


Hierarchical Data Format

Janu Verma
10 Jan 2017
6 min read
Hierarchical Data Format (HDF) is an open source file format for storing huge amounts of numerical data. It's typically used in research applications to distribute and access very large datasets in a reasonable way, without centralizing everything through a database. We can use the HDF5 data format for pretty fast serialization of, or random access to, fairly large datasets in a local/development environment. The Million Song Dataset, for example, is distributed this way. HDF was developed by the National Center for Supercomputing Applications.

Think of HDF as a file system within a file. It lets you organize data hierarchically and manage large amounts of data very efficiently. Every object in an HDF5 file has a name, and objects are arranged in a POSIX-style hierarchy with / separators, e.g.:

    /path/to/resource

HDF5 has two kinds of objects:

  • Groups
  • Datasets

Groups are folder-like objects that contain datasets and other groups. Datasets contain the actual data in the form of arrays.

HDF in Python

For my work, I had to study data stored in HDF5 files. These files are not human-readable, so I had to write some Python code to access the data. Luckily, there is the PyTables package, which has a framework to parse HDF5 files. The PyTables package does much more than that: it can be used in any scenario where you need to save and retrieve large amounts of multidimensional data and provide metadata for it. PyTables can also be employed if you need to structure some portions of your cluttered RDBMS. For example, if you have very large tables in your existing relational database, you can move those tables to PyTables so as to reduce the burden on your existing database while efficiently keeping those huge tables on disk.

Reading an HDF5 file in Python:

    from tables import *

    h5file = open_file("myHDF5file.h5", "a")

All the nodes in the file:

    for node in h5file:
        print node

This will print all the nodes in the file, which by itself is of little use; it is like listing all the files in my filesystem. The main advantage of a hierarchical framework is that you want to retrieve data in a hierarchical fashion, so the first step is to look at all the groups (folders):

    for group in h5file.walk_groups():
        print group

    > / (RootGroup) ''
      /group1 (Group)
      /group2 (Group)

We have three groups in this file: the root, group1, and group2. Everything is either a direct or an indirect child of the root, as in a tree; think of the home folder on your computer. Now we want to look at the contents of the groups (which will be either subgroups or datasets):

    print h5file.root._v_children

    > {'group1': /group1 (Group) ''
         children := ['group2' (Group), '/someList' (Array(40000,)],
       'list2': /list2 (Array(2500,)) ''
         atom := Int8Atom(shape=(), dflt=0)
         maindim := 0
         flavor := 'numpy'
         byteorder := 'irrelevant'
         chunkshape := None,
       'tags': /tags (Array(2, 19853)) ''
         atom := Int64Atom(shape=(), dflt=0)
         maindim := 0
         flavor := 'numpy'
         byteorder := 'little'
         chunkshape := None}

_v_children gives a dictionary of the children of a group, the root in the above example. We can now see that three children hang from the root: a group and two arrays. We can also read that group1 has two children of its own, a group and an array. We saw earlier that h5file.walk_groups() is a way to iterate through all the groups of the HDF5 file; this can be used to loop over the groups:

    for group in h5file.walk_groups():
        nodes = group._v_children
        namesOfNodes = nodes.keys()
        print namesOfNodes

This will print the names of the children for each group.
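For orientation, here is a small sketch (ours, not from the article) of how a file with this kind of structure could be created with PyTables in the first place; the file, group, and array names are made up:

    import numpy as np
    from tables import open_file

    # Create a new HDF5 file containing one group with one array inside it.
    h5file = open_file("example.h5", "w")
    group1 = h5file.create_group("/", "group1", "An example group")
    h5file.create_array(group1, "dataArray", np.array([12, 24, 36]), "An example dataset")
    h5file.close()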
One can do more interesting things using .walk_groups(). A very important procedure you can run on a group is:

x = group._v_name
for array in h5file.list_nodes(x, classname="Array"):
    array_name = array._v_name
    array_contents = array.read()
    print array_contents

This will print the contents of all the arrays that are children of the group. The supported classes in classname are 'Group', 'Leaf', 'Table', and 'Array'. Recall that array.read() returns a NumPy array for each array, so all the NumPy operations, like ndim, shape, etc., work on these objects. With these operations, you can start exploring an HDF5 file. For more procedures and methods, check out the tutorials on PyTables.

Converting HDF to JSON

I wrote a class to convert the contents of an HDF5 file into a JSON object. The code can be found here. Feel free to use and comment. The motivation for this is two-fold:

JSON format provides a very easy tool for data serialization, and it has always been my first choice for serialization/deserialization. The JSON schema is used in many NoSQL databases, e.g. Membase and MongoDB. We can store information in JSON schema in relational databases as well; in fact, there are claims that PostgreSQL 9.4 is now faster than MongoDB at storing JSON documents.

We know that HDF5 files are not human-readable. This class renders them into human-readable data objects consisting of key-value pairs.

The class creates a JSON file with the same name as the input HDF5 file, but with the .json extension. When decoded, the file contains a nested Python dictionary:

HDF5toJSON.py hdf2file.h5

json_data = converter(h5file)
contents = json_data.jsonOutput()
> 'hdf2file.json'

Recall that every object in an HDF5 file has a name and is arranged in a POSIX-style hierarchy with / separators, e.g. /group1/group2/dataArray. I wanted to maintain the same hierarchy in the JSON file as well. So, if you want to access the contents of dataArray in the JSON file:

json_file = open('createdJSONfile.json')
for line in json_file:
    record = json.loads(line)
    print record['/']['group1']['group2']['dataArray']

The main key is always going to be the root key, '/'. This class also has methods to access the contents of a group directly, without following the hierarchy. If you want to get a list of all the groups in the HDF5 file:

json_data = converter(h5file)
groups = json_data.Groups()
print groups
> ['/', 'group1', 'group2']

One can also look directly at the contents of group1:

json_data = converter(h5file)
contents = json_data.groupContents('group1')
print contents
> {'group2': {'dataArray': [12, 24, 36]}, 'array1': [1, 2, 4, 9]}

Or, if you are interested in the group objects hanging from group1:

json_data = converter(h5file)
groups = json_data.subgroups('group1')
> ['group2']

About the author

Janu Verma is a Researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, etc. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting.
Deep Dream: Inceptionistic art from neural networks

Janu Verma
04 Jan 2017
9 min read
The following image, known as dog-slug, was posted on Reddit and was reported to be generated by a convolutional neural network. There was a lot of speculation about the validity of such a claim. It was later confirmed that this image was indeed generated by a neural network, after Google described the mechanism for generating such images; they called it DeepDream and released their code for anyone to produce these images. This marks the beginning of inceptionistic art creation using neural networks.

Deep convolutional neural networks (CNNs) have been very effective in image recognition problems. A deep neural network has an input layer, where the data is fed in; an output layer, which produces the prediction for each data point; and a lot of layers in between. The information moves from one layer to the next. CNNs work by progressively extracting higher-level features from the image at the successive layers of the network. Initial layers detect edges and corners; these features are then fed into the next layers, which combine them to produce features that make up the image, e.g. segments of the image that discern the type of object. The final layer builds a classifier from these features, and the output is the most likely category for the image.

Deep Dream works by reversing this process. An image is fed to the network, which is trained to recognize different categories for the images in the ImageNet dataset, which contains 1.2 million images across 1,000 categories. As each layer of the network 'learns' features at a different level, we can choose a layer, and the output of that layer shows how that layer interprets the input image. The output of this layer is enhanced to produce an inceptionistic-looking picture. Thus, a roughly puppy-looking segment of the image becomes super puppy-like.

In this post, we will learn how to create inceptionistic images like Deep Dream using a pre-trained convolutional neural network called VGG (also known as OxfordNet). This network architecture is named after the Visual Geometry Group at Oxford, who developed it. It was used to win the ILSVRC (ImageNet) competition in 2014. To this day, it is considered an excellent vision model, although it has been somewhat outperformed by more recent advances such as Inception (also known as GoogLeNet), which Google used to produce its deep dream images. We will use a library called Keras for our examples.

Keras

Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn type API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical methods.

Installation

Keras has the following dependencies:

numpy
scipy
pyyaml
hdf5 (for saving/loading models)
theano (for the Theano backend)
tensorflow (for the TensorFlow backend)

The easiest way to install Keras is via the Python Package Index (PyPI):

sudo pip install keras

Deep dream in Keras

The following script is taken from the official Keras source code on GitHub.
from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
import numpy as np
from scipy.misc import imsave
from scipy.optimize import fmin_l_bfgs_b
import time
import argparse

from keras.applications import vgg16
from keras import backend as K
from keras.layers import Input

parser = argparse.ArgumentParser(description='Deep Dreams with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
                    help='Path to the image to transform.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
                    help='Prefix for the saved results.')

args = parser.parse_args()
base_image_path = args.base_image_path
result_prefix = args.result_prefix

# dimensions of the generated picture
img_width = 800
img_height = 800

# path to the model weights file
weights_path = 'vgg_weights.h5'

# some settings we found interesting
saved_settings = {
    'bad_trip': {'features': {'block4_conv1': 0.05,
                              'block4_conv2': 0.01,
                              'block4_conv3': 0.01},
                 'continuity': 0.01,
                 'dream_l2': 0.8,
                 'jitter': 5},
    'dreamy': {'features': {'block5_conv1': 0.05,
                            'block5_conv2': 0.02},
               'continuity': 0.1,
               'dream_l2': 0.02,
               'jitter': 0},
}
# the settings we will use in this experiment
settings = saved_settings['dreamy']

# util function to open, resize and format pictures into appropriate tensors
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_width, img_height))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img

# util function to convert a tensor into a valid image
def deprocess_image(x):
    if K.image_dim_ordering() == 'th':
        x = x.reshape((3, img_width, img_height))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_width, img_height, 3))
    # remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # BGR -> RGB
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

if K.image_dim_ordering() == 'th':
    img_size = (3, img_width, img_height)
else:
    img_size = (img_width, img_height, 3)
# this will contain our generated image
dream = Input(batch_shape=(1,) + img_size)

# build the VGG16 network with our placeholder
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=dream,
                    weights='imagenet', include_top=False)
print('Model loaded.')

# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])

# continuity loss util function
def continuity_loss(x):
    assert K.ndim(x) == 4
    if K.image_dim_ordering() == 'th':
        a = K.square(x[:, :, :img_width - 1, :img_height - 1] -
                     x[:, :, 1:, :img_height - 1])
        b = K.square(x[:, :, :img_width - 1, :img_height - 1] -
                     x[:, :, :img_width - 1, 1:])
    else:
        a = K.square(x[:, :img_width - 1, :img_height - 1, :] -
                     x[:, 1:, :img_height - 1, :])
        b = K.square(x[:, :img_width - 1, :img_height - 1, :] -
                     x[:, :img_width - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))

# define the loss
loss = K.variable(0.)
for layer_name in settings['features']:
    # add the L2 norm of the features of a layer to the loss
    assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
    coeff = settings['features'][layer_name]
    x = layer_dict[layer_name].output
    shape = layer_dict[layer_name].output_shape
    # we avoid border artifacts by only involving non-border pixels in the loss
    if K.image_dim_ordering() == 'th':
        loss -= coeff * K.sum(K.square(x[:, :, 2: shape[2] - 2, 2: shape[3] - 2])) / np.prod(shape[1:])
    else:
        loss -= coeff * K.sum(K.square(x[:, 2: shape[1] - 2, 2: shape[2] - 2, :])) / np.prod(shape[1:])

# add continuity loss (gives image local coherence, can result in an artful blur)
loss += settings['continuity'] * continuity_loss(dream) / np.prod(img_size)
# add image L2 norm to loss (prevents pixels from taking very high values, makes image darker)
loss += settings['dream_l2'] * K.sum(K.square(dream)) / np.prod(img_size)

# feel free to further modify the loss as you see fit, to achieve new effects...

# compute the gradients of the dream wrt the loss
grads = K.gradients(loss, dream)

outputs = [loss]
if type(grads) in {list, tuple}:
    outputs += grads
else:
    outputs.append(grads)

f_outputs = K.function([dream], outputs)

def eval_loss_and_grads(x):
    x = x.reshape((1,) + img_size)
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values

# this Evaluator class makes it possible
# to compute loss and gradients in one pass
# while retrieving them via two separate functions,
# "loss" and "grads". This is done because scipy.optimize
# requires separate functions for loss and gradients,
# but computing them separately would be inefficient.
class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the loss
x = preprocess_image(base_image_path)
for i in range(15):
    print('Start of iteration', i)
    start_time = time.time()

    # add a random jitter to the initial image.
    # This will be reverted at decoding time
    random_jitter = (settings['jitter'] * 2) * (np.random.random(img_size) - 0.5)
    x += random_jitter

    # run L-BFGS for 7 steps
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=7)
    print('Current loss value:', min_val)

    # decode the dream and save it
    x = x.reshape(img_size)
    x -= random_jitter
    img = deprocess_image(np.copy(x))
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))

This script can be run using the following schema:

python deep_dream.py path_to_your_base_image.jpg prefix_for_results

For example:

python deep_dream.py mypic.jpg results

Examples

I created the following pictures using this script. More examples can be found at the Google Inceptionism gallery.

About the author

Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics.
He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, and so on. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting.
Simple Player Health

Gareth Fouche
22 Dec 2016
8 min read
In this post, we'll create a simple script to manage player health, then use that script and Unity triggers to create health pickups and environmental danger (lava) in a level.

Before we get started on our health scripts, let's create a prototype 3D environment to test them in. Create a new project with a new scene and save it as "LavaWorld". Begin by adding two textures to the project, a tileable rock texture and a tileable lava texture. If you don't have those assets already, there are many sources of free textures online; this site is a good start. Create two new Materials named "LavaMaterial" and "RockMaterial" to match the new textures by right-clicking in the Project pane and selecting Create > Material. Drag the rock texture into the Albedo slot of RockMaterial. Drag the lava texture into the Emission slot of LavaMaterial to create a glowing lava effect. Now our materials are ready to use.

In the Hierarchy view, use Create > 3D Object > Cube to create a 3D cube in the scene. Drag RockMaterial into the Materials > Element 0 slot of the cube's Mesh Renderer to change the cube's texture from the default blue material to your rock texture. Use the scale controls to stretch and flatten the cube. We now have a simple "rock platform". Copy and paste the platform a few times, moving the new copies away to form small "islands". Create a few more copies of the rock platform, scale them so that they're long and thin, and position them as bridges between the "islands".

Now, create a new cube named "LavaVolume", and assign it the LavaMaterial. Scale it so that it is large enough to encompass all the islands but shallow (scale the y-axis height down). Move it so that it sits lower than the islands, so they appear to float in a lava field. To make it possible for a player to fall into the lava, check the Box Collider's "Is Trigger" property on LavaVolume. The Box Collider will now act as a trigger volume, no longer physically blocking objects that come into contact with it, but notifying the script when an object moves through the collider volume.

This presents a problem, as objects will now fall through the lava into infinite space! To deal with this problem, make another copy of the rock platform and scale/position it so that it is similar in dimension to the lava, also wide but flat, and position it just below the lava volume, so it forms a rock "floor" under the lava. To make your scene a little nicer, repeat the process to create rock walls around the lava, hiding where the lava volume ends. A few point lights (Create > Light > Point Light) scattered around the islands will also add interesting visual variety.

Now it's time to add a player! First, import the "Standard Assets" package from the Unity Asset Store (if you don't know how to do this, google the Unity Asset Store to learn about it). In the newly imported Standard Assets project folder, go to Characters > FirstPersonCharacter > Prefabs. There you will find the FPSController prefab. Drag it into your scene, rename it to "Player", and position it on one of the islands. Delete the old main camera that you had in your scene; the FPSController has its own camera.

If you run the project, you should be able to walk around your scene, from island to island. You can also walk in the lava, but it doesn't harm you yet. To make the lava an actual threat, we start by giving our player the ability to track its health. In the Project pane, right-click and select Create > C# Script.
Name the script "Player" and drag it onto the Player object in the Hierarchy view. Open the script in Visual Studio and add the health-tracking code (a sketch of this script appears at the end of this section). This script exposes a variable, maxHealth, which determines how much health the Player starts with and the maximum health they can ever have. It exposes a function to alter the Player's current health. And it uses a reference to a Text object to display the Player's current health on screen.

Back in Unity, you can now see the Max Health property exposed in the Inspector. Set Max Health to 100. There is also a field for Current Health Label, but we don't currently have a GUI. To remedy this, in the Hierarchy view, select Create > UI > Canvas and then Create > UI > Text. This will create the UI root and a text label on it. Change the label's text to "Health:", the font size to 20, and the colour to white. Drag it to the bottom left corner of the screen (and make sure the Rect Transform anchor is set to bottom left). Duplicate that text label, offset it a little to the right of the previous label, and change its text to "0". Rename this new label "CurrentHealthLabel". In the Hierarchy view, drag CurrentHealthLabel into your Player script's Current Health Label property.

If we run now, we'll have a display in the bottom corner of the screen showing our Player's health of 100. By itself, this isn't particularly exciting. Time to add lava!

Create a new C# script as before; call it Lava. Add this Lava script to the LavaVolume scene object. Open the script in Visual Studio and insert the lava damage logic (a sketch of this script also appears below). Note the OnTriggerEnter and OnTriggerExit functions. Because LavaVolume, the object we've added this script to, has a collider with Is Trigger checked, whenever another object enters LavaVolume's box collider, OnTriggerEnter will be called, with the colliding object's Collider passed as a parameter. Similarly, when an object leaves LavaVolume's collider volume, OnTriggerExit will be called.

Taking advantage of this functionality, we keep a list of all players who enter the lava. Then, during the Update call, if any players are in the lava, we apply damage to them periodically. damageTickTime determines the interval between each time we apply damage (a "tick"), and damagePerTick determines how much damage we apply per tick. Both properties are exposed in the Inspector by the script, so they're customizable. Set the values to Damage Per Tick = 5 and Damage Tick Time = 0.1.

Now, if we run the game, stepping in the lava hurts! But it's a bit of an anticlimax, since nothing actually happens when our health gets down to 0. Let's make things a little more fatal. First, use a paint program to create a "You Died!" screen at 1920 x 1080 resolution. Add that image to the project. Under the Import Settings, set the Texture Type to Sprite (2D and UI). Then, from the Hierarchy, select Create > UI > Image. Make the size 1920 x 1080, and set the Source Image property to your new player-died sprite image.

Go back to your Player script and extend the code with death handling (included in the Player sketch below). The additions add a reference to the player-died screen, and code in the CheckDead function to check whether the player's health has reached 0, displaying the death screen if it has. The function also disables the FirstPersonController script when the player dies, so that the player can't continue to move around via keyboard/mouse input after dying.
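For reference, here is a minimal sketch of how such a Player script might look. The member names (maxHealth, currentHealthLabel, deadScreen, ChangeHealth, CheckDead) follow the description above, but the exact implementation details are assumptions:

using UnityEngine;
using UnityEngine.UI;
using UnityStandardAssets.Characters.FirstPerson;

public class Player : MonoBehaviour
{
    // starting health and the maximum health the player can ever have
    public int maxHealth;

    // UI Text element that displays the current health value
    public Text currentHealthLabel;

    // full-screen "You Died!" image, assumed to start inactive
    public GameObject deadScreen;

    private int currentHealth;

    void Start()
    {
        currentHealth = maxHealth;
        UpdateHealthLabel();
    }

    // positive amounts heal, negative amounts damage
    public void ChangeHealth(int amount)
    {
        currentHealth = Mathf.Clamp(currentHealth + amount, 0, maxHealth);
        UpdateHealthLabel();
        CheckDead();
    }

    void UpdateHealthLabel()
    {
        currentHealthLabel.text = currentHealth.ToString();
    }

    void CheckDead()
    {
        if (currentHealth <= 0)
        {
            // show the death screen and stop further movement input
            deadScreen.SetActive(true);
            GetComponent<FirstPersonController>().enabled = false;
        }
    }
}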
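And a matching sketch for the Lava script, with the behavior as described above (a list of players currently in the lava, damaged once per tick); the bookkeeping details are one possible implementation:

using System.Collections.Generic;
using UnityEngine;

public class Lava : MonoBehaviour
{
    // damage applied to each player on every tick
    public int damagePerTick = 5;

    // seconds between damage ticks
    public float damageTickTime = 0.1f;

    private List<Player> playersInLava = new List<Player>();
    private float tickTimer;

    void OnTriggerEnter(Collider other)
    {
        // called because LavaVolume's BoxCollider has Is Trigger checked
        Player player = other.GetComponent<Player>();
        if (player != null)
        {
            playersInLava.Add(player);
        }
    }

    void OnTriggerExit(Collider other)
    {
        Player player = other.GetComponent<Player>();
        if (player != null)
        {
            playersInLava.Remove(player);
        }
    }

    void Update()
    {
        if (playersInLava.Count == 0)
        {
            tickTimer = 0f;
            return;
        }
        tickTimer += Time.deltaTime;
        if (tickTimer >= damageTickTime)
        {
            tickTimer -= damageTickTime;
            foreach (Player player in playersInLava)
            {
                player.ChangeHealth(-damagePerTick);
            }
        }
    }
}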
Return to the Hierarchy view, and drag the player-died screen into the exposed Dead Screen property on the Player script. Now, if you run the game, stepping in lava will "kill" the player if they stay in it long enough. Better! But it's only fair to add a way for the Player to recover health, too. To do so, use a paint program to create a new "medkit" texture. Following the same procedure used to create the LavaVolume, create a new cube called HealthKit, give it a Material that uses this new medkit texture, and enable "Is Trigger" on the cube's BoxCollider. Create a new C# script called "Health Pickup", add it to the cube, and insert the pickup logic. This is simpler than the Lava script: an OnTriggerEnter that calls the colliding Player's health-change function with a positive amount, then disables the pickup object. Scale the HealthKit object until it looks about the right size for a health pack; then copy and paste a few of the packs across the islands.

Now, when you play, if you manage to extricate yourself from the lava after falling in, you can collect a health pack to restore your health! And that brings us to the end of the Simple Player Health tutorial. We have a deadly lava level with health pickups, just waiting for enemy characters to be added.

About the author

Gareth Fouche is a game developer. He can be found on GitHub at @GarethNN.
MapReduce on Amazon EMR with Node.js

Pedro Narciso
14 Dec 2016
8 min read
In this post, you will learn how to write a Node.js MapReduce application and how to run it on Amazon EMR. You don't need to be familiar with Hadoop or the EMR APIs. In order to run the examples, you will need a GitHub account, an Amazon AWS account, some money to spend at AWS, and Bash or an equivalent shell installed on your computer.

EMR, BigData, and MapReduce

We define BigData as those data sets too large or too complex to be processed by traditional processing applications. BigData is also a relative term: a data set can be too big for your Raspberry Pi, while being a piece of cake for your desktop.

What is MapReduce? MapReduce is a programming model that allows data sets to be processed in a parallel and distributed fashion. How does it work? You create a cluster and feed it with the data set. Then, you define a mapper and a reducer. MapReduce involves the following three steps:

Mapping step: This breaks down the input data into KeyValue pairs
Shuffling step: KeyValue pairs are grouped by Key
Reducing step: KeyValue pairs are processed by Key in parallel

It's guaranteed that all data belonging to a single key will be processed by a single reducer instance.

Our processing job project directory setup

Today, we will implement a very simple processing job: counting unique words from a set of text files. The code for this article is hosted here. Let's set up a new directory for our project:

$ mkdir -p emr-node/bin
$ cd emr-node
$ npm init --yes
$ git init

We also need some input data. In our case, we will download some books from Project Gutenberg as follows:

$ mkdir data
$ curl -Lo data/tmohah.txt http://www.gutenberg.org/ebooks/45315.txt.utf-8
$ curl -Lo data/mad.txt http://www.gutenberg.org/ebooks/5616.txt.utf-8

Mapper and Reducer

As we stated before, the mapper will break down its input into KeyValue pairs. Since we use the streaming API, we will read the input from stdin. We will then split each line into words, and for each word, we are going to print "word<TAB>1" to stdout. The TAB character is the expected field separator. We will see later the reason for setting "1" as the value. In plain JavaScript, our ./bin/mapper can be expressed as:

#!/usr/bin/env node
const readline = require('readline');
const rl = readline.createInterface({
    input : process.stdin
});
rl.on('line', function(line){
    line.trim().split(' ').forEach(function(word){
        console.log(`${word}\t1`);
    });
});

As you can see, we have used the readline module (a Node built-in module) to parse stdin. Each line is broken down into words, and each word is printed to stdout as we stated before.

Time to implement our reducer. The reducer expects a set of KeyValue pairs, sorted by key, as input, such as the following:

First<TAB>1
First<TAB>1
Second<TAB>1
Second<TAB>1
Second<TAB>1

We then expect the reducer to output the following:

First<TAB>2
Second<TAB>3

Reducer logic is very simple and can be expressed in pseudocode as:

IF !previous_key
    previous_key = current_key
    counter = value
IF previous_key equals current_key
    counter = counter + value
ELSE
    print previous_key<TAB>counter
    previous_key = current_key
    counter = value

The first statement is necessary to initialize the previous_key and counter variables.
Let's see the real JavaScript implementation of ./bin/reducer:

#!/usr/bin/env node
var previousKey, counter;
const readline = require('readline');
const rl = readline.createInterface({
    input : process.stdin
});

function print(){
    console.log(`${previousKey}\t${counter}`);
}

function countWord(line) {
    let [currentKey, value] = line.split('\t');
    value = +value;
    if(typeof previousKey === 'undefined'){
        previousKey = currentKey;
        counter = value;
        return;
    }
    if(previousKey === currentKey){
        counter = counter + value;
        return;
    }
    print();
    previousKey = currentKey;
    counter = value;
}

process.stdin.on('end', function(){
    print();
});

rl.on('line', countWord);

Again, we use the readline module to parse stdin line by line. The countWord function implements our reducer logic described before. The last thing we need to do is set execution permissions on those files:

chmod +x ./bin/mapper
chmod +x ./bin/reducer

How do I test it locally?

You have two ways to test your code:

Install Hadoop and run a job
Use a simple shell script

The second one is my preferred one for its simplicity:

./bin/mapper <<EOF | sort | ./bin/reducer
first second first first second first
EOF

It should print the following:

first<TAB>4
second<TAB>2

We are now ready to run our job on EMR!

Amazon environment setup

Before we run any processing job, we need to perform some setup on the AWS side. If you do not have an S3 bucket, you should create one now. Under that bucket, create the following directory structure:

<your bucket>
├── EMR
│   └── logs
├── bootstrap
├── input
└── output

Upload our previously downloaded books from Project Gutenberg to the input folder.

We also need the AWS CLI installed on the computer. You can install it with the Python package manager. If you do not have the AWS CLI installed on your computer, run:

$ sudo pip install awscli

awscli requires some configuration, so run the following and provide the requested data:

$ aws configure

You can find this data in your Amazon AWS web console. Be aware that usability is not Amazon's strongest point. If you do not have your IAM EMR roles yet, it is time to create them:

aws emr create-default-roles

Good. You are now ready to deploy your first cluster. Check out this (run-cluster.sh) script:

#!/bin/bash
MACHINE_TYPE='c1.medium'
BUCKET='pngr-emr-demo'
REGION='eu-west-1'
KEY_NAME='pedro@triffid'

aws emr create-cluster \
    --release-label 'emr-4.0.0' \
    --enable-debugging \
    --visible-to-all-users \
    --name PNGRDemo \
    --instance-groups InstanceCount=1,InstanceGroupType=CORE,InstanceType=$MACHINE_TYPE InstanceCount=1,InstanceGroupType=MASTER,InstanceType=$MACHINE_TYPE \
    --no-auto-terminate \
    --log-uri s3://$BUCKET/EMR/logs \
    --bootstrap-actions Path=s3://$BUCKET/bootstrap/bootstrap.sh,Name=Install \
    --ec2-attributes KeyName=$KEY_NAME,InstanceProfile=EMR_EC2_DefaultRole \
    --service-role EMR_DefaultRole \
    --region $REGION

The previous script will create a cluster with 1 master and 1 core node, which is big enough for now. You will need to update this script with your own bucket, region, and key name. Remember that your keys are listed at "AWS EC2 console/Key pairs". Running this script will print something like the following:

{
    "ClusterId": "j-1HHM1B0U5DGUM"
}

That is your cluster ID, and you will need it later. Please visit your Amazon AWS EMR console and switch to your region. Your cluster should be listed there. It is possible to add the processing steps with either the UI or the AWS CLI.
Let's use a shell script (add-step.sh):

#!/bin/bash
CLUSTER_ID=$1
BUCKET='pngr-emr-demo'
OUTPUT='output/1'

aws emr add-steps \
    --cluster-id $CLUSTER_ID \
    --steps Name=CountWords,Type=Streaming,Args=[-input,s3://$BUCKET/input,-output,s3://$BUCKET/$OUTPUT,-mapper,mapper,-reducer,reducer]

It is important to point out that our "OUTPUT" directory does not exist on S3 yet; otherwise, the job will fail. Call ./add-step.sh plus the cluster ID to add our CountWords step:

./add-step.sh j-1HHM1B0U5DGUM

Done! So go back to the Amazon UI, reload the cluster page, and check the steps. The "CountWords" step should be listed there. You can track job progress from the UI (reload the page) or from the command-line interface. Once the job is done, terminate the cluster. You will probably want to configure the cluster to terminate as soon as it finishes or when any step fails; termination behavior can be specified with "aws emr create-cluster".

Sometimes the bootstrap process can be difficult. You can SSH into the machines, but before that, you will need to modify their security groups, which are listed at "EC2 web console/security groups".

Where to go from here?

You can (and should) break down your processing jobs into smaller steps, because it will simplify your code and add more composability to your steps. You can compose more complex processing jobs by using the output of one step as the input for the next step. Imagine that you have run the "CountWords" processing job several times and now you want to sum the outputs. Well, for that particular case, you just add a new step with an "identity mapper" and your already-built reducer, and feed it with all of the previous outputs. Now you can see why we output "word<TAB>1" from the mapper.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.
From Web Developer to App Developer

Oliver Blumanski
12 Dec 2016
4 min read
As a web developer, you have to adapt to new technologies every year. In the last four years, the JavaScript world has exploded, and its toolsets are changing very fast. In this blog post, I will describe my experience of changing from a web developer to an app developer.

My start in the Mobile App World

My first attempt at creating a mobile app was a simple JavaScript one-page app, which was just a website designed for mobile devices. At the time, there was no React-Native or Ionic Framework. It was nice, but it wasn't great.

Ionic Framework

Later, I developed apps using the Ionic/Angular framework, which uses Cordova as a wrapper. Ionic apps run in a web view on the device. Working with Ionic was pretty easy, and its performance increased over time, so I found it to be a good toolset. If you need an app that runs on a broad spectrum of devices, Ionic is a good choice.

React-Native

A while ago, I made the change to React-Native. React-Native supported only iOS at the start, but it then added Android support as well, so I thought the time was right to switch to React-Native. The React-Native world is a bit different from the Ionic world. React-Native is still new-ish, and many modules are works in progress, so React-Native itself is released every two weeks with a new version. Working with React-Native is bleeding-edge development.

React-Native and Firebase are what I use right now. When I was working with Ionic, I used a SQLite database to cache on the device, and I used Ajax to get data from a remote API. For notifications, I used Google GCM and Pushwoosh, and for uploads, AWS S3. With React-Native, I chose the new Firebase v3, which came out earlier this year. Firebase offers a real-time database, authentication, cloud messaging, storage, analytics, offline data capability, and much more. Firebase can replace all of the third-party tools I have used before. For further information, check here.

Google Firebase supports three platforms: iOS, Android, and the Web. Unfortunately, the web platform does not support offline capabilities, notifications, and some other features. If you want to use all the features Firebase has to offer, there is a React-Native module that wraps the iOS and Android native platforms. The JavaScript API of this module is identical to the Firebase web platform JavaScript API, so you can use the Firebase web docs for it.

Developing with React-Native, you come into contact with a lot of different technologies and programming languages. You have to deal with Xcode, and on Android, you have to add/change Java code and deal with Gradle, permanent google-services upgrades, and many other things. It is fun to work with React-Native, but it can also be frustrating when you hit unfinished modules or outdated documentation on the web. It pushes you into new areas, so you learn Java, Objective-C, or both. So, why not?

Firebase V3 Features

Let's look at some of the Firebase v3 features.

Firebase Authentication

One of the great features that Firebase offers is authentication. It has, ready to go, Facebook login, Twitter login, Google login, GitHub login, anonymous login, and email/password sign-up. To get the Facebook login running, you will still need a third-party module; for Facebook login, I have recently used this module. And for Google login, I have recently used this module.

Firebase Cloud Messaging

You can receive notifications on the device, but the behavior differs depending on the state of the app.
For instance, whether the app is open or closed. Read up on it here.

Firebase Cloud Messaging Server

You may want to send messages to all or particular users/devices; you can do this via an FCM server. I use a Node.js script as the FCM server, and I use this module to do so. You can read more here.

Firebase Real-Time Database

You can subscribe to database queries, so as soon as the data changes, your app gets the new data without a reload. You can also fetch the data just once. The real-time database uses web sockets to deliver data.

Conclusion

As a developer, you have to evolve with technology and keep up with upcoming development tools. I think that mobile development is more exciting than web development these days, and this is the reason why I would like to focus more on app development.

About the author

Oliver Blumanski is a developer based out of Townsville, Australia. He has been a software developer since 2000, and can be found on GitHub at @blumanski.
Express Middleware

Pedro Narciso García Revington
09 Dec 2016
6 min read
This post provides you with an introduction to Express middleware functions, what they are, and how you can apply a composability principle to combine simple middleware functions to build more complicated ones. You can find the code of this article at github.com/revington/express-middleware-tutorial.

Before digging into Express and its middleware functions, let's examine the concept of a web server in Node. In Node, we do not create web applications as in PHP or ASP, but with web servers.

Hello world Node web server

Node web servers are created by exposing an HTTP handler to an instance of the HTTP class. This handler will have access to the request object, an instance of IncomingMessage; and the response object, an instance of ServerResponse. The following code listing (hello.js) implements this concept:

var http = require('http');

function handler(request, response){
    response.end('hello world');
}

http.createServer(handler).listen(3000);

You can run this server by saving the code to a file and running the following command in your terminal:

$ node hello.js

Open http://localhost:3000 in your favourite web browser to display our greeting. Exciting, isn't it? Of course, this is not the kind of web application we are going to get paid for. Before we can add our own business logic, we want some heavy lifting done for us, like cookie parsing, body parsing, routing, logging, and so on. This is where Express comes in. It will help you orchestrate all of this functionality, plus your business logic, and it will do it with two simple concepts: middleware functions and the middleware stack.

What are middleware functions?

In the context of an Express application, middleware functions are functions with access to the request, the response, and the next middleware in the pipeline. As you probably noticed, middleware functions are similar to our previous HTTP handler, but with two important differences:

Request and Response objects are augmented by Express to expose its own API
Middleware has access to the next middleware in the stack

The latter leads us to the middleware stack concept. Middleware functions can be "stacked", which means that, given two stacked middleware functions, A and B, an incoming request will be processed by A and then by B. In order to better understand these abstract concepts, we are going to implement a really simple middleware stack, shown here in the middleware-stack.js code listing:

'use strict';
const http = require('http');

function plusOne(req, res, next){
    // Set req.counter to 0 if and only if req.counter is not defined
    req.counter = req.counter || 0;
    req.counter++;
    return next();
}

function respond(req, res){
    res.end('req.counter value = ' + req.counter);
}

function createMiddlewareStack(/*a bunch of middlewares*/){
    var stack = arguments;
    return function middlewareStack(req, res){
        let i = 0;
        function next(){
            // pick the next middleware function from the stack and
            // increase the pointer
            let currentMiddleware = stack[i];
            if(!currentMiddleware){
                return;
            }
            i = i + 1;
            currentMiddleware(req, res, next);
        }
        // Call next for the first time
        next();
    }
}

var myMiddlewareStack = createMiddlewareStack(plusOne, plusOne, respond);

function httpHandler(req, res){
    myMiddlewareStack(req, res);
}

http.createServer(httpHandler).listen(3000);

You can run this server with the following command:

$ node middleware-stack.js

After reloading http://localhost:3000, you should read:

req.counter value = 2

Let's analyze the code.
We first define the plusOne function. This is our first middleware function and, as expected, it receives three arguments: req, res, and next. The function itself is pretty simple: it ensures that the req object is augmented with the counter property, increments that property by one, and then calls the provided next() function.

The respond middleware function has a slightly different signature: the next parameter is missing. We did not include next in the signature because the res.end() function terminates the request and, therefore, there is no need to call next(). While writing a middleware function, you can either terminate the request or call next(); otherwise, the request will hang and the client will get no response. Bear in mind that calling next or terminating the request more than once will turn into difficult-to-debug errors.

The createMiddlewareStack function is way more interesting than the previous ones and explains how the middleware stack works. The first statement creates a reference to the arguments object. In JavaScript, this is an Array-like object corresponding to the arguments passed to a function. Then, we define the next() function. On each next() call, we pick a reference to the i-th element of the middleware stack, which, of course, is the next middleware function on the stack. Then, we increment the value of i and pass control to the current middleware with req, res, and our recently created next function. We invoke next() immediately after its definition. The mechanism is simple: every time next() is invoked, the value of i is incremented and, therefore, on each call, we pass control to the next middleware in the stack.

Once our core functionality has been defined, the next steps are pretty straightforward. myMiddlewareStack is a middleware stack composed of plusOne and respond. Then, we define a very simple HTTP handler with just one responsibility: to transfer control to our middleware stack.

Now we have a good understanding of middleware functions and middleware stacks, and we are ready to rewrite our simple application with Express. Install Express by running the following command:

$ npm install express

Create the file express.js as follows:

'use strict';
const express = require('express'),
    app = express();

function plusOne(req, res, next){
    req.counter = req.counter || 0;
    req.counter++;
    return next();
}

function respond(req, res){
    res.end('Hello from express req.counter value = ' + req.counter);
}

app.use('/', plusOne, plusOne, respond);

app.listen(3000);

app.use mounts the specified middleware function(s) at the specified path, in our case "/". The path part can also be a pattern or a regular expression. Again, run this server with this command:

$ node express.js

After reloading http://localhost:3000, we should be able to read the following:

"Hello from express req.counter value = 2"

So far, we have seen how to:

Create middleware functions and mount them at a given path
Make changes to request/response objects
Terminate a request
Call the next middleware

Where to go from here

The Express repository is full of examples covering topics like auth, content negotiation, sessions, cookies, and so on. A simple RESTful API can be a good project to get yourself more familiarized with the Express API: middleware, routing, and views, among others; a starting sketch follows below.
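By way of illustration, here is a minimal sketch of such an API, built from the same building blocks covered above plus an Express Router. The /api/users routes and the in-memory data are invented for the example:

'use strict';
const express = require('express'),
    app = express(),
    router = express.Router();

// a tiny logging middleware, mounted for every request
app.use(function logger(req, res, next){
    console.log(new Date().toISOString(), req.method, req.url);
    return next();
});

// hypothetical resource: an in-memory list of users
const users = [{id: 1, name: 'Ada'}, {id: 2, name: 'Grace'}];

router.get('/users', function(req, res){
    res.json(users);
});

router.get('/users/:id', function(req, res){
    const user = users.find(u => u.id === Number(req.params.id));
    if(!user){
        // terminate the request with a 404; next() is never called
        return res.status(404).json({error: 'user not found'});
    }
    res.json(user);
});

// mount the router under /api
app.use('/api', router);

// fallback middleware: reached only if nothing above terminated the request
app.use(function(req, res){
    res.status(404).json({error: 'not found'});
});

app.listen(3000);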
About the author

Pedro Narciso García Revington is a senior full stack developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.