Tech Guides

852 Articles

5 Mistakes Web Developers Make When Working with MongoDB

Charanjit Singh
21 Oct 2016
5 min read
MongoDB is a popular document-based NoSQL database. In this post I am listing some mistakes I've seen developers make while working on MongoDB projects.

Database accessible from the Internet

Allowing your MongoDB database to be accessible from the Internet is the most common mistake I've found developers make in the wild. MongoDB's default configuration used to expose the database to the Internet; that is, you could connect to the database using the URL of the server it was running on. That default makes sense for beginners who might be deploying the database on a different machine, given that it is the path of least resistance. But in the real world it's a bad default that is often left unchanged. A database (whether Mongo or any other) should be accessible only to your app. It should be hidden in a private local network that provides access to your app's server only. Although this vulnerability has been fixed in newer versions of MongoDB, make sure you change the config if you're upgrading your database from a previous version, and that the new junior developer you hired didn't expose to the Internet the database that your application server connects to. If it's a requirement to have the database accessible from the open Internet, pay special attention to securing it. Restricting access to a whitelist of IP addresses is almost always a good idea.

Not having multiple database users with access roles

Another possible security risk is having a single MongoDB database user doing all of the work. This usually happens when developers with little knowledge, experience, or interest in databases handle the database management or setup. It happens when database management is treated as lesser work in smaller software shops (the kind I mostly get hired for). Well, it is not. A database is as important as the app itself; your app is most likely mainly providing an interface to the database. Having a single user to manage the database and using that same user in the application for accessing the database is almost never a good idea. Many times this exposes vulnerabilities that could have been avoided if the database user had limited access in the first place. NoSQL doesn't mean "secure" by default. Security should be considered when setting the database up, not left as something to be done "properly" after shipping.

Schema-less doesn't mean thoughtless

When someone asked Ronny why he chose MongoDB for his shiny new app, his response was that "it's schema-less, so it's more flexible". Schema-less can prove to be quite a useful feature, but with great power comes great responsibility. Often I have found teams struggling with apps because they didn't think through how they would structure their data when they started. MongoDB doesn't require you to have a schema, but that doesn't mean you shouldn't think properly about your data structure. Rushing in without putting much thought into how you're going to structure your documents is a sure recipe for disaster. Your app might be small, simple, and easy right now, but simple apps become complicated very quickly. You owe it to your future self to have a proper, well-thought-out database schema. Most programming languages that provide an interface to MongoDB have libraries to impose some kind of schema on it. Pick your favorite and use it religiously.

Premature Sharding

Sharding is an optimization, so doing it too soon is usually a bad idea.
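Before looking at when sharding actually is warranted, here is a minimal sketch of the first two points above, using pymongo; the host, credentials, and database names are illustrative assumptions, and the server itself is presumed to be bound to a private address rather than 0.0.0.0:

from pymongo import MongoClient

# Administrative connection over the private network only
# (host, user names, and passwords below are placeholders).
admin_client = MongoClient("mongodb://dbadmin:admin_pass@10.0.0.5:27017/admin")

# Create a dedicated application user that can only read and write
# the application's own database, instead of reusing the admin account.
admin_client["appdb"].command(
    "createUser", "app_user",
    pwd="app_pass",
    roles=[{"role": "readWrite", "db": "appdb"}],
)

# The application connects with the limited user.
app_client = MongoClient("mongodb://app_user:app_pass@10.0.0.5:27017/appdb")
orders = app_client["appdb"]["orders"]

The same idea extends to separate users for reporting, migrations, and backups, each with only the roles it needs.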
Many times a single replica set is enough to run a fast, smooth MongoDB deployment that meets all of your needs. Most of the time, a bad schema and bad indexing are the real performance bottlenecks that users try to solve with sharding. In such cases sharding can do more harm than good, because you end up with poorly tuned shards that don't perform well either. Sharding should be considered when a specific resource, like RAM or concurrency, becomes a performance bottleneck on a particular machine. As a general rule, if your database fits on a single server, sharding provides little benefit anyway. Most MongoDB setups work successfully without ever needing sharding.

Replicas as backup

Replicas are not backup. You need a proper backup system in place for your database; do not treat replicas as a backup mechanism. Consider what would happen if you deploy the wrong code and it ruins the database: the replicas will simply follow the primary and suffer the same damage. There are a variety of ways to back up and restore MongoDB, be it filesystem snapshots, mongodump, or a third-party service like MMS. Timely fire drills are also very important. You should be confident that the backups you're making can actually be used in a real-life scenario. Practice restoring your backups before you actually need them and verify that everything works as expected. A catastrophic failure in your production system should not be the first time you try to restore from backups (often only to find out you have been backing up corrupt data).

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking up Haskell/PureScript as his main professional languages.


So you want to be a DevOps engineer

Darrell Pratt
20 Oct 2016
5 min read
The DevOps movement has come about to accomplish the long-sought-after goal of removing the barriers between the traditional development and operations organizations. Historically, development teams have written code for an application and passed that code over to the operations team to both test and deploy onto the company's servers. This practice generates many mistakes and misunderstandings in the software development lifecycle, and it fosters a lack of ownership among developers, who end up owning little of the deployment pipeline and production responsibilities. The new DevOps teams that are appearing now start as blended groups of developers, system administrators, and release engineers. The thought is that the developers can assist the operations team members in building and more deeply understanding the applications, and the operations team members can shed light on the environments and deployment processes that they must master to keep the applications running. As these teams evolve, we are seeing the trend to specifically hire people into the role of the DevOps Engineer. What this role is, and what type of skills you might need to succeed as a DevOps engineer, is what we will cover in this article.

The Basics

Almost every job description you are going to find for a DevOps engineer requires some level of proficiency in the desired production operating systems, and Linux is probably the most common. You will need a very good understanding of how to administer and use a Linux-based machine. Words like grep, sed, awk, chmod, chown, ifconfig, netstat, and others should not scare you. In the role of DevOps engineer, you are the go-to person for developers when they have issues with the server or cloud. Make sure that you have a good understanding of where the failure points can be in these systems and the commands that can be used to pinpoint the issues. Learn the package manager systems for the various distributions of Linux to better understand the underpinnings of how they work. From RPM and Yum to Apt and Apk, the managers vary widely, but the common ideas are very similar in each. You should understand how to use the managers to script machine configurations and understand how modern containers are built.

Coding

The type of language you need for a DevOps role is going to depend quite a bit on the particular company. Java, C#, JavaScript, Ruby, and Python are all popular languages. If you are a devout Java follower, then choosing a .NET shop might not be your best choice. Use your discretion here, but the job is going to require a working knowledge of coding in one or more of the languages the shop focuses on. At a minimum, you will need to understand how the build chain of the language works, and you should be comfortable reading the system's error logging and understanding what those logs are telling you.

Cloud Management

Gone are the days of uploading a war file to a directory on the server. It's very likely that you are going to be responsible for getting applications up and running on a cloud provider. Amazon Web Services is the gorilla in the space, and a good level of hands-on experience with the various services that make up a standard AWS deployment is a much sought-after skill set. From standard AMIs to load balancing, CloudFormation, and security groups, AWS can be complicated, but luckily it is very inexpensive to experiment with, and there are many training classes on the different components.
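As a small, hedged illustration of that kind of hands-on AWS work, the sketch below uses boto3, the AWS SDK for Python, to list running EC2 instances along with their security groups; the region and filter values are assumptions for the example, not part of the article's own setup:

import boto3

# Region and filters are illustrative; point them at your own account.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        groups = [g["GroupName"] for g in instance["SecurityGroups"]]
        print(instance["InstanceId"], instance["InstanceType"], groups)

Being able to script this sort of inventory and audit work, rather than clicking through the console, is a big part of the day-to-day job.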
Source Code Control

Git is currently the tool of choice for source code control. It gives a team a decentralized SCM system that is built to handle branching and merging operations with ease. The workflows that teams use vary, but a good understanding of how to merge branches, rebase, and fix commit issues is required in the role. DevOps engineers are usually looked to for help in addressing "interesting" Git issues, so good hands-on experience is vital.

Automation Tooling

A new automation tool has probably been released in the time it takes to read this article. There will always be new tools and platforms in this part of the DevOps space, but the most common are Chef, Puppet, and Ansible. Each system provides a framework for treating the setup and maintenance of your infrastructure as code. Each has a slightly different take on how configurations are written and deployed, but the concepts are similar, and a good background in any one of them is more often than not a requirement for a DevOps role. Each of these systems requires a good understanding of either Ruby or Python, and these languages appear quite a bit in the various tools used in the DevOps space.

A desire to improve systems and processes

While not an exhaustive list, mastering this set of skills will accelerate anyone's journey towards becoming a DevOps engineer. If you can augment these skills with a strong desire to improve upon the systems and processes used in the development lifecycle, you will be an excellent DevOps engineer.

About the author

Darrell Pratt is the director of software development and delivery at Cars.com, where he is responsible for a wide range of technologies that drive the Cars.com website and mobile applications. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. You can find him on Twitter here: @darrellpratt.


5 Mistakes Developers Make When Working with HBase

Tess Hsu
19 Oct 2016
3 min read
Having worked with HBase for over six years, I want to share some common mistakes developers make when using HBase.

1. Using a PrefixFilter without setting a start row.

This has come up several times on the mailing list over the years. Here is the filter: Github. The use case is to find rows that have a given prefix, and some people complain that the scan is too slow using PrefixFilter. This is usually due to them not specifying a proper start row. Suppose there are 10K regions in the table, and the first row satisfying the prefix is in the 3000th region. Without a proper start row, the scan begins with the first region. In HBase 1.x, you can use the following method of Scan, which sets a start row for you:

public Scan setRowPrefixFilter(byte[] rowPrefix) {

2. Incurring low free HDFS space due to HBase snapshots hanging around.

In theory, you can have many HBase snapshots in your cluster. In practice, this places a considerable burden on HDFS, and the large number of hfiles may slow down the Namenode. Suppose you have a five-column-family table with 40K regions, and each column family has 6 hfiles before compaction kicks in. For this table, you may have 1.2 million hfiles. Take a snapshot, and it references those 1.2 million hfiles. After routine compactions, another snapshot is taken, so roughly a million more hfiles would be referenced. Prior hfiles stay around until the snapshots that reference them are deleted. This means that a practical schedule for cleaning up unneeded snapshots is a prerequisite for satisfactory cluster performance.

3. Retrieving the last N rows without using a reverse scan.

In some scenarios, you may need to retrieve the last N rows. Assuming salting of keys is not involved, you can use the following API of Scan:

public Scan setReversed(boolean reversed) {

On the client side, you can choose a proper data structure so that sorting is not needed; for example, use a LinkedList.

4. Running multiple region servers on the same host due to heap size considerations.

Some users run several region servers on the same machine to keep as much data in the block cache as possible while minimizing GC time. Compared to having one region server with a huge heap, GC tuning is a lot easier. Deployment has some pain points, though, because a lot of the start/stop scripts don't work out of the box. With the introduction of the bucket cache, GC activity comes down greatly, and there is no need for this trick. See here.

5. Receiving a NoNode zookeeper exception due to a misconfigured parent znode.

When the zookeeper.znode.parent config value on the client side doesn't match the one for your cluster, you may see the following exception:

Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/master
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
    at com.ngdata.sep.util.zookeeper.ZooKeeperImpl.getData(ZooKeeperImpl.java:238)

One possible scenario is that hbase-site.xml is not on the classpath of the client application, so the default value for zookeeper.znode.parent doesn't match the actual one for your cluster. When you get hbase-site.xml onto the classpath, the problem should be gone.

About the author

Ted Yu is a staff engineer at Hortonworks. He has also been an HBase committer/PMC for five years.
His work on HBase covers various components: security, backup/restore, load balancer, MOB, and so on. He has provided support for customers at eBay, Micron, PayPal, and JPMC. He is also a Spark contributor.


Strategies to Become a More Skillful Tech Pro

Cheryl Adams
18 Oct 2016
5 min read
Being a skillful technician has become a job in itself, as much as being employed as one. How so? The technical landscape is constantly changing. There once was a time when all you needed to do was master a few programming skills from the leading enterprise application vendors and one or more operating systems. Training was expensive and only available to licensed customers or employees of the company where a product was licensed. If you worked at the company but were not part of the development or application support team, you had no access. As a user, you only had user access and no real understanding of the administration or of how the tool really worked. Your scope of learning was limited to your role or job, and to what the company had licensed. Server-based tools and applications could only be used within that controlled setting. The process of becoming skillful was very controlled and contained. But the landscape of how and where we learn as skilled technicians is constantly changing. Both open source and enterprise technologies now have a robust library of applications and operating systems. The landscape has changed most noticeably in the enterprise arena. Enterprise technologies have opened their doors to students, non-customers, and independents. Large enterprise companies such as Oracle now make full versions of their software available for review in non-production environments. By creating a free account and agreeing to their terms, you can download most of their software into a test environment for development and review. This open-doors policy gives you access to basic forums for support and a fully functioning product to work with. Microsoft has also adopted this approach by making fully functioning products like Microsoft SQL Server available for download as a trial for at least six months. They also have SQL Server Express, a lighter version of SQL Server; it does not have all the functions of the full product, but the core ones are still in place. Although Microsoft disables or limits functionality after a trial, it still gives you reasonable time for review. Most users already have access to a licensed copy of a Microsoft operating system, and there is a large assortment of tools to try for the 64-bit platform. For those who don't, Microsoft clearly states the limitation for its server-level operating system: if you need more time to evaluate Windows Server 2008, the 60-day evaluation period may be reset (or re-armed) three times, extending the original 60-day evaluation period by up to 180 days for a total possible evaluation time of 240 days. After this time, you will need to uninstall the software or upgrade to a fully licensed version of Windows Server 2008. With these options in your arsenal, you have the foundation for maintaining your technology-based skills. As a technical professional, the goal is to create an environment that can be used without restriction. This can be accomplished by selecting tools and applications from both enterprise and open source offerings. Open source has always been just that: searching sites like SourceForge or the leading software companies in this space, you will find what is referred to as the community edition of their software. It does not have all the features of an enterprise release, but enough to build a good foundation for learning and development. In fact, some leading organizations find that the community edition is sufficient to run their production operations.
This places any technical resource in a valuable position, since the same application can be studied independently. Products like MySQL and SugarCRM both have community editions available to download and configure in your own environment. Some web-hosted environments already come with tools that are considered part of the stack, like Linux, Apache, MySQL, and PHP. These resources have large community support, and most questions are answered in a reasonable amount of time; for a quicker turnaround you would need to pay for a support service. Companies like Hortonworks and Cloudera offer sandbox versions of their products that you can download with an environment preconfigured and ready for use, and each site has enough documentation to kick-start your exploration of this space. Amazon even offers the AWS Loft, where you can meet up and work with AWS experts for free, and both AWS and Azure document the usage of their services and products on their sites. Programming languages like Python and Java have freely downloadable runtimes you can work with. There are also alternative database engines like MySQL, Postgres, Cassandra, and MongoDB. MongoDB has a free MongoDB University program with video-based instruction, testing, and a help forum to assist new developers. Microsoft has the Microsoft Virtual Academy, which offers videos and basic training given by its own subject matter experts. Having the programming tools, applications, and operating systems isn't much use without proper training. Training vendors like Udemy, Codecamp, and Lynda offer classes taught by subject matter experts. If you do a search, no doubt you will find a number of such offerings, some of which are free; it is always a good idea to review the comments to see if a class will suit your needs. Another suggestion is to subscribe to a useful tech newsletter to stay abreast of changes in the technology. Amid the tall grass of useless email solicitations, a useful newsletter can be highly beneficial: you will be able to read about emerging technologies from the leading vendors and perhaps find some useful e-book deliveries, and if your selection is not useful you can easily unsubscribe. Clearly, having a good strategy or plan is essential for becoming a skillful tech pro. With a wide selection to choose from, the possibilities for building your tech expertise are endless.


Tweeter 2.0 or How I Learned to Stop Worrying and Modify Amazon's Code

Jeremy Karbowski
18 Oct 2016
7 min read
Welcome back! If you made it through Part 1, then you have an Alexa Skill that tweets with your voice, and you're probably itching to share it with the world. But wait, there's a problem. You want to get your skill certified so that anyone can log in and tweet from their Twitter account, so you've set up a page for users to log in with, and that's all dandy. But Amazon's Account Linking flow requires you to support a different card type, specifically the "LinkAccount" card type, and the code we downloaded from Amazon in the first tutorial only supports the "Simple" card type. We'll have to do something about that. We're going to modify AlexaSkill.js to support the LinkAccount and Standard card types. Standard cards allow you to attach an image to your cards; although Amazon's requirements for where you host those images are a bit tight, it's nice to have the option. If you haven't had a chance to go over AlexaSkill.js in detail, you can find a decent breakdown of the code here. Flip over the code and get familiar with it first, because we're about to dive in and make some changes.

But First, a Word on Alexa's Authorization Flow, Twitter, and Your Account Linking Portal

First, we'll need a way for our skill to get our user's Twitter authorization token and secret. Amazon's Account Linking flow expects us to be using the OAuth 2.0 standard. Twitter, however, uses OAuth 1.0a. This means Amazon is expecting us to hand back a single string from our authorization URL. To get around this we can pass back the user's key and secret joined by a set of unique characters, and split it back into separate pieces in our skill. Actually building the login page is beyond the scope of this article, as there are countless options out there for hosting and web frameworks and the like. I've provided a sample login portal written with Node's Express framework, with some basic instructions for setting it up on Heroku, over at the tweeter-login repository on my GitHub profile. Feel free to use it as a base in your project and modify it; it should provide the basic needs for certification, such as pages for a privacy policy and terms of use. Amazon has a more in-depth certification submission checklist for those interested in publishing their skill.

Dive In to the LinkAccount Card

Now that we have our authorization tokens, we can set up our LinkAccount card in our skill to provide a card with a link to our login/authorization page to users who try to use the skill without logging in through Twitter. Our first stop is AlexaSkill.js. Near the bottom, in the function that builds Response.prototype, we find our core methods of the response object: tell, tellWithCard, ask, and askWithCard. It's here that we'll add our reject method to provide users with the LinkAccount card. At the bottom of the return, after the askWithCard function, add the following:

reject: function (speechOutput) {
    this._context.succeed(buildSpeechletResponse({
        session: this._session,
        output: speechOutput,
        cardType: "LinkAccount",
        shouldEndSession: true
    }));
}

This function will be used later to serve a LinkAccount card to users who haven't authorized the skill with their Twitter account. Notice we only need to provide the speech output; we can't control the content or title of the LinkAccount card, only set the card type as such. Near the top of the same Response.prototype function, in buildSpeechletResponse, we'll add a conditional to change the card type if the option is supplied.
Add it after the cardTitle and cardContent are set, so we can extend it again later to support images as well:

if (options.cardType) {
    alexaResponse.card = {
        type: options.cardType
    }
}

Around the middle of the file, in the AlexaSkill.prototype.execute function, we'll find a conditional that controls what parameters get passed into a new session. We're going to modify it to pass in a new Response object along with the request and session:

if (event.session.new) {
    this.eventHandlers.onSessionStarted(event.request, event.session, new Response(context, event.session));
}

Now we'll have access to our reject method when our session starts, so we can serve up those hot, fresh LinkAccount cards. Back near the top, in AlexaSkill.prototype.eventHandlers, we'll find the onSessionStarted function. This is the function we'll override in our skill to handle the LinkAccount card logic:

onSessionStarted: function (sessionStartedRequest, session, response) {
},

We've added the response parameter onto the end of the function so we can use the reject function in our skill, like so:

Tweeter.prototype.eventHandlers.onSessionStarted = function(sessionStartedRequest, session, response) {
    console.log('Twitter onSessionStarted requestId: ' + sessionStartedRequest.requestId + ', sessionId: ' + session.sessionId);
    if (session.user.accessToken) {
        var token = session.user.accessToken;
        var tokens = token.split(/*your token separator*/);
        userToken = tokens[0];
        userSecret = tokens[1];
    } else {
        var speechOutput = "You must have a Twitter account to use this skill. " +
            "Click on the card in the Alexa app to link your account now.";
        response.reject(speechOutput);
    }
};

This gives us access to the tokens in our oauth invocation in the TweetIntent function:

oauth.post(
    url,
    userToken,
    userSecret,
    null, // body
    function(err, data, res) {
        //
    }
)

Now we can share our app with anyone and provide them with a way to log in through their Twitter account.

Standard Cards (the ones with images)

Amazon so far has provided very little in the way of support for card markup. Currently the only tools available to developers looking to style or format cards for their skill are newlines and images. You can only include a single image on a card, so choose wisely. You'll need to provide a small image URL sized at 720 x 480 and a large image URL sized at 1200 x 800, in PNG or JPG, behind an HTTPS endpoint. To enable cards with images, head back to the methods returned by the Response.prototype function and add these two functions, tellWithCardImage and askWithCardImage:

tellWithCardImage: function (speechOutput, cardTitle, cardContent, images) {
    this._context.succeed(buildSpeechletResponse({
        session: this._session,
        output: speechOutput,
        cardTitle: cardTitle,
        cardContent: cardContent,
        cardImages: images,
        shouldEndSession: true
    }));
},

askWithCardImage: function (speechOutput, repromptSpeech, cardTitle, cardContent, images) {
    this._context.succeed(buildSpeechletResponse({
        session: this._session,
        output: speechOutput,
        reprompt: repromptSpeech,
        cardTitle: cardTitle,
        cardContent: cardContent,
        cardImages: images,
        shouldEndSession: false
    }));
},

These methods accept an array of two images: your small image URL and large image URL, respectively.
Then, underneath the conditional we added earlier to change the card type, add the following:

if (options.cardImages) {
    alexaResponse.card.type = "Standard";
    alexaResponse.card.images = {
        smallImageUrl: options.cardImages[0],
        largeImageUrl: options.cardImages[1]
    };
}

This will change the card type to Standard and set the images. If you've met Amazon's standards, you should be seeing an image along with those cards now. Thanks for reading :) Hopefully this has been an informative journey. Voice experiences can be an exciting challenge to build, and Amazon's skill marketplace is in need of cool new things that are actually worth using. Be the developer you wish to see in the world.

About the author

Jeremy Karbowski is a JavaScript developer with experience in WebGL and hardware platforms such as Alexa and Pebble. He participated in SpartaHack 2016, where he was in the top 10 and won Best Alexa Integration from Amazon. He can be found on GitHub at @jkarbows and on Twitter at @JeremyKarbowski.


Reactive, Functional and Beyond

Daniel Leping
17 Oct 2016
5 min read
Every second, three new humans are born. Sometimes it feels like buzzwords appear at the same or an even greater rate. In the last several years the word "reactive" became something everybody talks about, but it carries different meanings. Wikipedia defines reactive programming in the following way: reactive programming is a programming paradigm oriented around data flows and the propagation of change. This sounds cool, but it's hard to comprehend. The reactivemanifesto.org site defines reactive systems as ones that are:

Responsive
Resilient
Elastic
Message driven

Moreover, this is something backed by Lightbend (the company behind Scala and one of the most active evangelists of the Reactive concept). The concept of Reactive has become even harder to define lately because some UI-related frameworks claim to be Reactive: RxSwift or ReactiveCocoa, for instance. So why would I be writing about something I just called a buzzword? I believe that the Reactive approach is in its early stages, and we (the community) have not yet comprehended and structured it enough. In this post I will cover the most significant tools in Reactive programming. Right now I'm working on the Swift Express project, which I want to be fully compliant with the reactivemanifesto.org. To accomplish this, I had to spawn a fully featured implementation of a Reactive foundation for Swift: Reactive Swift. Furthermore, I want this project to be suitable for both the client and the server side, which is quite a challenge. The first thing I will describe is the RunLoop, which is a very basic low-level object, but one that is crucial for a deep understanding of asynchronous programming and, of course, for its implementation.

Run Loops

The traditional approach to IO and long-lasting operations is blocking. Of course, there are threads, you might say, but threads are not cheap, and thread management is usually quite a cumbersome task for a developer. Threads were invented to make our applications more responsive, right? OK, there are two main reasons apps don't become responsive:

Blocking IO
CPU-heavy tasks

The second reason is becoming less and less relevant as hardware evolves, and it's solvable with Reactive patterns in the same way as IO, so let's focus on IO operations. The traditional pseudocode for reading a file is as follows:

let file = open("filename") //here we block
let data = file.readAll()
//do something with data

In the era of console apps this was fine, since most of them were intended to perform single serial tasks (cat, grep, and so on). But if we do something like that on the main execution loop of a UI app, it will hang until the file is fully read. If we do it in a single-threaded server, it will block the accepting and serving of new requests. Thus, threads were made. The real problem, though, is that we just need the data to become available in the future without stopping other operations. In the C language you can find several implementations of this on different platforms, like epoll on Linux and kqueue on FreeBSD and OS X (all low-level and hard to work with). What is common about them is that they all are Run Loops in essence. Wikipedia defines Run Loops as follows: a run loop is a programming construct that waits for and dispatches events or messages in a program. In practice, it looks like this (Swift pseudocode):

let loop = RunLoop()
//here we block and start processing messages
loop.run()

Each operation on the loop should not last long, and if there is nothing to do, the loop sleeps.
The loop can process IO operations and can have operations scheduled for later execution. Here is how reading a file with an event loop would look:

// remember we already have a loop from the previous listing?
let file = open("filename", loop: loop) //here we do NOT block
file.readAll { data in
    //do something with data
}
//here we assume the loop is running, and the code above is executed inside it

The trick is very simple. We don't block for every single operation, but rather for all the operations together in the loop, until an event happens (data becomes available, a timer fires, an operation is due, and so on). Since the underlying OS is managing all of these states, it can unblock the loop when at least one event happens (think about the need to redraw the screen, accept a new connection, and so on) and tell the loop exactly which event has happened and what it should execute. This way we can simultaneously wait for user input, a new incoming connection, and a response from a server without hanging the app. This is called the asynchronous approach. It is not reactive programming yet, but it is the first step and an essential part of it (remember the "Message driven" item from reactivemanifesto.org's list?). If you want to see more practical examples, take a look at our RunLoop from the Reactive Swift foundation (sorry for the absence of a README). It's something we already use in all other parts of Reactive Swift, in the upcoming version (0.4) of Swift Express, and in our iOS apps. It has two underlying implementations and at the time of writing can work with either Dispatch or libuv (the same engine used by Node.js) as its core. I hope you enjoyed reading this.

About the author

Daniel Leping is the CEO of Crossroad Labs. He has been working with Swift since the early beta releases and continues to do so on the Swift Express project. His main interests are reactive and functional programming with Swift, Swift-based web technologies, and bringing the best of modern techniques to the Swift world. He can be found on GitHub.

Top 5 DevOps Tools to Increase Agility

Darrell Pratt
14 Oct 2016
7 min read
DevOps has been broadly defined as a movement that aims to remove the barriers between the development and operations teams within an organization. Agile practices have helped to increase speed and agility within development teams, but the old methodology of throwing code over the wall to an operations department, which then manages its deployment to production systems, still persists. The primary goal of adopting DevOps practices is to improve both the communication between disparate operations and development groups and the process by which they work. Several tools are being used across the industry to put this idea into practice. We will cover what I feel is the top set of those tools from the various areas of the DevOps pipeline, in no particular order.

Docker

"It worked on my machine…" Every operations or development manager has heard this at some point in their career. A developer commits their code and promptly breaks an important environment because their local machine isn't configured to be identical to the larger production or integration environment. Containerization has exploded onto the scene, and Docker is at the nexus of the change toward isolating code and systems into easily transferable modules. Docker is used in the DevOps suite of tools in a couple of ways. The quickest win is to use Docker to provide developers with easily usable containers that mimic the various systems within the stack. If a developer is working on a new RESTful service, they can check out the container that is set up to run Node.js or Spring Boot and write the code for the new service with confidence that the container will be identical to the service environment on the servers. With the success of using Docker in the development workflow, the next logical step is to use Docker in the build stage of the CI/CD pipeline. Docker can help to isolate the build environment's requirements across different portions of the larger application. By containerizing this step, it is easy to use one generic pipeline to build components as different as Ruby and Node.js on one end and Java and Go on the other.

Git & JFrog Artifactory

Source control and artifact management act as a funnel for the DevOps pipeline. The structure of an organization can dictate how it runs these tools, be it hosted or served locally. Git's decentralized source code management and high-performance merging features have helped it become the most popular version control system. Atlassian Bitbucket and GitHub both provide a good set of tooling around Git and are easy to use and to integrate with other systems. Source code control is vital to the pipeline, but the control and distribution of artifacts into the build and deployment chain is important as well.

(Figure: Branching in Git)

Artifactory is a one-stop shop for binary artifacts hosted within a single repository, which now supports Maven, Docker, Bower, Ruby gems, CocoaPods, RPM, Yum, and npm. As the codebase of an application grows and includes a broader set of technologies, the ability to control this complexity from a single point and integrate with a broad set of continuous integration tools cannot be stressed enough. Ensuring that the build scripts use the correct dependencies, both external and internal, and serving a local set of Docker containers reduces friction in the build chain and makes the lives of the technology team much easier.

Jenkins

There are several CI servers to choose from in the market.
The hosted tools such as Travis CI, Codeship, Wercker, and Circle CI are all very well suited to driving an integration pipeline, and each caters slightly better to an organization that is more cloud focused (for source control and hosting), with deep integrations with GitHub and cloud providers like AWS, Heroku, Google, and Azure. The older and less flashy system is Jenkins. Jenkins has continued to nurture a large community that is constantly adding new integrations and capabilities to the product. The Jenkins Pipeline feature provides a text-based DSL for creating complex workflows that can move code from the repository to the glass with any number of testing stages and intermediate environment deployments. The pipeline DSL can be created from code, and this enables a good scaffolding setup for new projects to be integrated into the larger stack's workflow.

(Figure: Pipeline example)

Hashicorp Terraform

At this point we have a system that can build and manage applications through the software development lifecycle. The code is hosted in Git, orchestrated through testing and compilation with Jenkins, and running in reliable containers, and we are storing and proxying dependencies in Artifactory. The deployment of the application is where the operations and development groups come together in DevOps. Terraform is an excellent tool to manage the infrastructure required for running the applications as code itself. There are several vendors in this space (Chef, Puppet, and Ansible, to name just a few). Terraform sits at a higher level than many of these tools by acting as more of a provisioning system than a configuration management system. It has plugins to incorporate many of the configuration tools, so any investment that an organization has made in one of those systems can be maintained.

(Figure: Load balancer and instance config)

Where Terraform excels is in its ability to easily provision arbitrarily complex multi-tiered systems, both locally and cloud hosted. The syntax is simple and declarative, and because it is text, it can be versioned alongside the code and other assets of an application. This delivers on "Infrastructure as Code."

Slack

A chat application was probably not what you were expecting in a DevOps article, but Slack has been a transformative application for many technology organizations. Slack provides an excellent platform for fostering communication between teams (text, voice, and video) and for integrating various systems. The DevOps movement stresses the removal of barriers between the teams and individuals who work together to build and deploy applications. Webhooks provide simple integration points for things such as build notifications, environment statuses, and deployment audits. There is a growing number of custom integrations for some of the tools we have covered in this article, and the bot space is rapidly expanding into AI-backed team members that answer questions about builds, deploy code, or troubleshoot issues in production. It's not a surprise that this space has gained its own name, ChatOps.
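As a small illustration of how simple those webhook integrations can be, the sketch below posts a build notification to a Slack channel through an incoming-webhook URL; the URL and message text are placeholders, not values from the article:

import requests

# The incoming-webhook URL is a placeholder; Slack generates a real one
# when you add the Incoming WebHooks integration to a channel.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

payload = {
    "text": "Build #142 passed on Jenkins and was deployed to staging."
}

response = requests.post(WEBHOOK_URL, json=payload, timeout=5)
response.raise_for_status()  # fail loudly if Slack rejects the notification

A one-off script like this is usually the first step; most teams quickly move the same call into their Jenkins pipeline or a chatbot.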
Articles covering the top 10 ChatOps strategies will surely follow.

Summary

In this article, we covered several of the tools that fit into the DevOps culture, and how those tools are used and are transforming all areas of the technology organization. While not an exhaustive list, the areas covered should give you an idea of the scope of the space and how these various systems can be integrated together.

About Darrell Pratt

Darrell Pratt is a technologist who is responsible for a range of technologies at Cars.com, where he is the director of software development and delivery. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. Find him on Twitter here: @darrellpratt.


The Basics of Information Architecture

Cheryl Adams
10 Oct 2016
5 min read
The truth is there is nothing basic about technology anymore. Simple terms like computers and servers have been replaced by virtualized machines and the cloud; storage has been replaced by data lakes and pools. The terminology often presents a challenge for even the most seasoned technical resource, and depending on whom you ask, some technology terms can be defined in different ways. Information Architecture is truly a search-and-discovery mission, with the goal of finding all of the necessary information and organizing it in a useful way, so that the finished product can be understood by the audience or recipient. A task completed by an Information Architect should be so well defined that it is not only scalable for growth but also repeatable based on the type of business. So what is Information Architecture? It depends on whom you ask. The Information Architecture Institute defines Information Architecture as the practice of deciding how to arrange the parts of something to be understandable. In the case of technology, this arrangement is based on known data, facts, or information. Let's take a closer look at the word "architecture." Merriam-Webster defines it as the manner in which the components of a computer or computer system are organized and integrated. This is really a two-fold task: organizing and integration. For example, you would not group internal employees with external customers; however, an integration tool may be needed if employees are assigned to work with external customers. To illustrate, consider an unfinished puzzle in a box. The object of the project is to sort the pieces in such a way that you can put together a complete picture. Hobbyists may sort out similarly colored pieces or pieces that appear to make up the frame of the picture. After some period of sorting and organizing, the pieces are placed together to make a complete picture. An Information Architect's role can be very similar. Let's consider another illustration by viewing this as a role in a project and walking through the responsibilities associated with it. An Architect has been placed on the project team for Company XYZ. The project consists of organizing existing or new information for a high-growth company. Although the project may sound daunting, we'll focus only on the Information Architect's tasks. Solid communication and writing skills are needed as an Architect; as the information provider, being clear and understandable is key. The Architect will use these skills to define how the information will be organized. As an Architect, tools will be selected for presentation, modeling, and workflow to define the user experience. The user experience (UX) is a person's entire experience using a particular product, system, or service, and the first requirement for a great user experience is to meet the exact needs for the usage of a product or service. These tasks are also handled by an Information Architect. An intermediate to advanced understanding of the technology field, including networking, is very important: you will need to be familiar with the different terms and components of the given environment, and you will be detailing the known information in such a way that it is easy to understand. You may also discover new information in this process that needs to be documented as well. This research can be conducted through vendor and software reviews as well as online research.
As we learned earlier, one of the key aspects of being an Architect is designing based on given or discovered information. A new method or workflow may need to be clarified or redefined for how an organization shares, manages, and monitors information. As with most information, it is not static, so the Architect will need to determine a workflow for making it repeatable or scalable. An Information Architect is instrumental in designing and defining how to best organize information across every channel and touchpoint throughout a company, with the goal of making the information easy to find, access, and leverage. By digging into this information technology, the puzzle pieces start to come together. These pieces consist of touchpoints that may include storage, security, servers, and applications. Some of the more challenging pieces in the box may be networking protocols and understanding how to gain access to various environments, applications, files, multiple project tools, and more. It is truly an art of organizing systems in such a way that the user experience has a solid workflow that is easy to understand and follow. The structure and content of the given system are shaped by a navigation and labeling structure that enhances the user experience. Thus, the finished product of an Information Architect is a well-defined system, project, or service.

About the author

Cheryl Adams is a senior cloud data and infrastructure architect. Her work includes supporting healthcare data for large government contracts; deploying production-based changes through scripting, monitoring, and troubleshooting; and monitoring environments using the latest tools for databases, web servers, web APIs, and storage.


Transfer Learning

Graham Annett
07 Oct 2016
7 min read
The premise of transfer learning is the idea that a model trained on a particular dataset can be used and applied to a different dataset. While the notion has been around for quite some time, very recently it has become useful, along with domain adaptation, as a way to use pre-trained neural networks for highly specific tasks (such as in Kaggle competitions) and in various fields.

Prerequisites

For this post, I will be using Keras 1.0.3 configured with TensorFlow 0.8.0.

Simple Overview and Example

Before using VGG-16 with pre-trained weights, let's first use a simple example on our own small net to see how it works. For this example we will build a small net around the CIFAR-10 data and then fine-tune the last layers so that it can predict on MNIST.

from keras.datasets import cifar10, mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils.np_utils import to_categorical
from scipy.misc import imresize
import numpy as np

def rgb_g(img):
    # channels-first RGB batch to grayscale
    grayscaled = 0.2989 * img[:,0,:,:] + 0.5870 * img[:,1,:,:] + 0.1140 * img[:,2,:,:]
    return grayscaled

(X, Y), (_, _) = cifar10.load_data()
nb_classes = len(np.unique(Y))
Y = to_categorical(Y, nb_classes)
X = X.astype('float32')/255.

# converts 3 channels to 1 and resizes the images
X = rgb_g(X)
X_tmp = []
for i in range(X.shape[0]):
    X_tmp.append(imresize(X[i], (28,28)))
X = np.array(X_tmp)
X = X.reshape(-1,1,28,28)

model = Sequential()
model.add(Convolution2D(32,3,3, border_mode='same', input_shape=(1,28,28)))
model.add(Activation('relu'))
model.add(Convolution2D(32,3,3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D((2,2)))
model.add(Dropout(.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')

One thing to notice is that the input for the neural net is 1x28x28. This is important because the data we feed in must match this dimension, and the MNIST and CIFAR datasets do not have images of the same size or number of color channels (MNIST is 1x28x28 while CIFAR-10 is 3x32x32, where the first number is the number of channels). There are a few ways to accommodate this, but generally you are working with what the prior weights and model were trained on and must resize and adjust your input accordingly (for instance, grayscaled images can be repeated from 1 channel into 3 channels to use with RGB-trained models). With this model we will now load data from MNIST and fit again, but only fine-tune the last few layers. First let's look at the model and some of its features.

> model.layers
[<keras.layers.convolutional.Convolution2D at 0x1368fe358>,
 <keras.layers.core.Activation at 0x1368fe3c8>,
 <keras.layers.convolutional.Convolution2D at 0x136905ba8>,
 <keras.layers.core.Activation at 0x136905898>,
 <keras.layers.convolutional.MaxPooling2D at 0x136930828>,
 <keras.layers.core.Dropout at 0x136930860>,
 <keras.layers.core.Flatten at 0x136947550>,
 <keras.layers.core.Dense at 0x136973240>,
 <keras.layers.core.Activation at 0x136973780>,
 <keras.layers.core.Dropout at 0x13697ef98>,
 <keras.layers.core.Dense at 0x136988a20>,
 <keras.layers.core.Activation at 0x136e29ef0>]

> model.layers[0].trainable
True

With Keras, we have the ability to specify whether we want a layer to be trainable or not.
A trainable layer is one whose weights will be updated when the model is fit. For this experiment we will be doing what is called fine-tuning on only the last layers, without changing the number of classes. We want to keep what the earlier layers have already learned, so we will set all the layers but the last two to be non-trainable, such that their learned weights stay the same:

for l in range(len(model.layers)-2):
    model.layers[l].trainable = False

model.compile(loss='categorical_crossentropy', optimizer='adadelta')

Note: we must also recompile every time we adjust the model's layers. This is oftentimes a tedious process with Theano, so it can be useful to use TensorFlow when initially experimenting. Now we can train a few epochs on the MNIST dataset and see how well the previously learned weights work.

(X_mnist, y_mnist), (_, _) = mnist.load_data()
y_mnist = to_categorical(y_mnist)
X_mnist = X_mnist.reshape(-1,1,28,28)

model.fit(X_mnist, y_mnist, batch_size=32, nb_epoch=5, validation_split=.2)

We can also train on the dataset but use different final layers in the model. If, for instance, you were interested in fine-tuning the model on some dataset with a single binary classification, you could do something like:

model.pop()
model.pop()
model.add(Dense(1))
model.add(Activation('sigmoid'))  # a single binary output calls for sigmoid rather than softmax
model.fit(x_train, y_train)

While this example is quite small and the weights are easily learned, networks whose weights took days or even weeks to learn are not uncommon. Also, having a large pre-trained network can be useful both to gauge your own network's results and to incorporate into other parts of your deep learning model.

Using VGG-16 for Transfer Learning

There are a few well-known pre-trained models and weights that you could plausibly train on your own computer, but the training time is often much too long and requires specialized hardware. VGG-16 is perhaps one of the better known of these, but there are many others, and Caffe has a nice listing of them. Using VGG-16 is quite simple and gives you a previously trained model that is quite adaptable without having to spend a large amount of time training. With this type of model, we are able to load the model and use its weights; then we can remove the final layers to switch to, for instance, a binary classification problem. Using these pre-trained networks usually takes capable hardware, and they may not work on all computers and GPUs. You need to download the pre-trained weights available here, and there is also a gist explaining the general use of it.
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import Adam

model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1000, activation='softmax'))

model.load_weights('vgg16_weights.h5')

# freeze everything except the layers we are about to replace
for l in model.layers[:-2]:
    l.trainable = False

model.layers.pop()
model.layers.pop()
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))  # single binary output, so sigmoid instead of softmax
model.compile(optimizer=Adam(),
              loss='binary_crossentropy',  # matches the single sigmoid output
              metrics=['accuracy'])

You should now be able to try out your own models to experience the benefits of transfer learning.

About the author

Graham Annett is an NLP engineer at Kip. He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras. He can be found on GitHub or here.


How Game Development Giants are Leading the VR Revolution

Casey Borders
28 Sep 2016
5 min read
We are still very much in the early days of virtual reality, with only two real choices for VR hardware: the HTC Vive and the Oculus Rift, both of which have been available for less than a year. As you would expect, a wide range of VR games and experiences are being built by indie developers, since they tend to be the earliest adopters, but there has also been a lot of investment and support from much larger names, most notably Valve.

Valve has been in the games industry for almost 20 years and is the powerhouse behind the HTC Vive. Starting back in 2012, they brainstormed and prototyped early versions of their VR headset. Teaming up with HTC in the spring of 2014 took the Vive hardware to the next level and turned it into what we see today. In addition to playing a huge role in the development of the Vive hardware, they also built SteamVR, the software platform that powers the Vive. In fact, SteamVR supports both the HTC Vive and the Oculus Rift, which allows developers to target either platform using the same SDK. Valve has always said that they won't lock down SteamVR because they believe that being restrictive at this early stage would be bad for the industry.

Valve has also produced a sizable amount of VR content. Just after Oculus released their first development kit, Valve put out versions of Team Fortress 2 and Half-Life 2 that supported the Rift, and they were amazing. For a lot of people, these games were their first experience with VR, and they set the bar for how immersive and compelling it can be. Valve has also built the definitive experience for the HTC Vive with a game called The Lab. It is actually a collection of small games and demos allowing players to have a wide range of highly polished, AAA-level experiences. Based on community feedback, they have released the source code for some of the more popular features so that other developers can use them in their own games.

Other big game developers are starting to follow Valve's lead into VR. This year at E3, Bethesda announced that they are going to release a VR version of their incredibly popular game, Fallout 4. Fallout 4 VR will target the Vive and make use of the Vive's controllers for shooting and managing the Pip-Boy. EA has said that they are targeting PlayStation VR with their Star Wars™ Battlefront: X-Wing VR Mission. Both of these games are sure to come with all of the polish that we expect from these studios and will be a huge step toward bringing full-length games into VR.

Even large companies from outside of the gaming industry are getting involved in VR. Oculus launched their Kickstarter project back in August of 2012 with a truly visionary take on what VR could become with modern hardware. Their Kickstarter went gangbusters: against a goal of $250,000, they ended up with $2.4 million from backers! They spent the next two years prototyping, iterating, and refining their idea, and along the way released two development kits. They built a name for themselves and started to get the attention of some of the biggest names in tech. To pretty much everyone's surprise, in the spring of 2014, Facebook bought them for $2 billion.

The release of the first Oculus Rift development kit sparked the creativity of some amazingly clever Google engineers who decided to see what they could come up with using almost no money. They figured out that you could turn $10 worth of cardboard and plastic lenses into a VR viewer powered by a smartphone.
Their incredibly cheap and simple solution did a surprising job of delivering on the VR experience. This year, at Google I/O 2016, Google announced that they were going to take a step up from Cardboard with a project called Daydream. Daydream is a VR platform that is going to be built directly into Android starting with version 7.0 (Nougat). It's still going to be powered by a mobile phone, but the viewer is going to be much sturdier and will come with a controller to allow users to interact with the virtual world.

VR is just beginning to pick up steam, and yet there are already a ton of great experiences from indie developers and mega studios alike. As lower-cost headsets become available and the price of existing hardware drops, I think we'll see more and more people bringing VR into their homes. This will create a much riper audience for larger AAA studios to take their first steps into the virtual world.

About the author

Casey Borders is an avid gamer and VR fan with over 10 years of experience in graphics development. He has worked on everything from military simulation to educational VR/AR experiences to game development. More recently, he has focused mainly on mobile development.

5 Amazing Packages for Working Better with R

Sam Wood
28 Sep 2016
2 min read
The R language is one of the top choices of data scientists, in no small part due to the great packages and projects created to support it. If you want to expand the power and functionality of your R code, consider one of these popular and amazing options.

ggplot2

ggplot2 is a data visualization package, and widely believed to be one of the best reasons to use R. ggplot2 produces some of the most fantastic graphics and data visualizations you can get. It's so popular that there's even a port to use it in Python, R's great rival. ggplot2 takes care of many of the fiddly details of plotting, leaving you to focus on interpreting your data insight.

Shiny

R has always been all about the data, but Shiny lets you take it onto the web! Without knowing any HTML or CSS, you can use Shiny to turn your R code into interactive web applications. From interactive visualizations to exceptional web interfaces, you'll be amazed what you can build with Shiny without a single line of JavaScript.

knitr

Use knitr to output your R analyses as dynamic reports in LaTeX, Markdown, and more. Developed in the spirit of literate programming for reproducible research, knitr is the perfect tool for ensuring that the documentation for your R analysis is clear and understandable to all.

packrat

Have you ever gotten frustrated trying to figure out dependency management in R? Packrat is there to help you in all those annoying situations where installing one library makes another piece of code stop working. It's great for ensuring that your R projects are more isolated, more reproducible, and more portable. How does it work? Packrat stores your package dependencies inside your project directory instead of your personal R library, and lets you snapshot the information packrat needs to recreate your set-up on another machine.

stringr

Strings aren't big or fancy, but they are vital for many data cleaning and preparation tasks (and who doesn't love clean data?). Strings can be hard to wrangle in R; stringr seeks to change that. It provides a simple, modern interface to common string operations, making them just as easy in R as in Python or Ruby.



My Experience with KeystoneJS

Jake Stockwin
16 Sep 2016
5 min read
Why KeystoneJS?

Learning a new language can be a daunting task for web developers, but there comes a time when you have to bite the bullet and do it. I'm a novice programmer, with my experience mostly limited to a couple of PHP websites using MySQL. I had a new project on the table and decided I wanted to learn Node.

Any self-respecting Node beginner has written the very basic "Hello World" node server:

var http = require('http');
var server = http.createServer(function(req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  res.end("Hello World");
});
server.listen(80);

Run node server.js and open localhost:80 in your web browser, and there it is. Great! It works, so maybe this isn't going to be so painful after all. Time to start writing the website!

I quickly figure out that there is quite a jump between outputting "Hello World" and writing a fully functioning website. Some more research points me to the express package, which I install and learn how to use. However, eventually I have quite the list of packages to install, and all of these need configuring in the correct way to interact with each other. At this stage, everything is starting to get a little too complicated, and my small project seems like it's going to take many hours of work to get to the final website. Maybe I should just write it in PHP, since I at least know how to use it.

Luckily, I was pointed toward KeystoneJS. I'm not going to explain how KeystoneJS works in this post, but by simply running yo keystone, my site was up and running. Keystone had configured all of those annoying modules for me, and I could concentrate on writing the code for my web pages. Adding new content types became as simple as adding a new Keystone "model" to the site, and then Keystone would automatically create all the database schemas for me and add this model to the admin UI. I was so impressed, and I had finished the whole website in just over an hour. KeystoneJS had definitely done 75% of the work for me, and I was incredibly pleased and impressed. I picked up Keystone so quickly, and I have used it for multiple projects since. It is without a doubt my go-to tool if I'm writing a website which has any kind of content management needs.

Open Source Software and the KeystoneJS Community

KeystoneJS is a completely open source project. You can see the source code on GitHub, and there is an active community of developers constantly improving and fixing bugs in KeystoneJS. It was developed by ThinkMill, a web design company. They use the software for their own work, so it benefits them to have a community helping to improve their software. Anyone can use KeystoneJS, and there is no need to give anything back, but a lot of people who find KeystoneJS really useful will want to help out. It also means that if I discover a bug, I am able to submit a pull request to fix it, and hopefully that will get merged into the code.

A few weeks ago, I found myself with some spare time and decided to get involved in the project, so I started to help out by adding some end-to-end (e2e) testing. Initially, the work I did was incorrect, but rather than my pull request just being rejected, the developers took the time to point me in the right direction. Eventually I worked out how everything worked, and my pull request was merged into the code. A few days later, I had written a few more tests. I'd quite often need to ask questions on how things should be done, but the developers were all very friendly and helpful.
Soon enough, I understood quite a bit about the testing and managed to add some more tests. It was not long before the project lead, Jed Watson, asked me if I would like to be a KeystoneJS member, which would give me access to push my changes straight into the code without having to make pull requests. For me, as a complete beginner, being able to say I was part of a project as big as this meant a lot. To begin with, I felt as though I was asking so many questions that I must just be annoying everyone and should probably stop. However, Jed and everyone else quickly changed that, and I felt like I was doing something useful.

Into the future

The entire team is very motivated to make KeystoneJS as good as it can be. Once version 0.4 is released, there will be many exciting additions in the pipeline. The admin UI is going to be made more customizable, and user permissions and roles will be implemented, among many other things. All of this is made possible by the community, who dedicate lots of their time for free to make this work. The fact that everyone is contributing because they want to, and not because it's what they're paid to do, makes a huge difference. People want to see these features added so that they can use them for their own projects, and so they are all very committed to making it happen.

On a personal note, I can't thank the community enough for all their help and support over the last few weeks, and I am very much looking forward to being part of Keystone's development.

About the author

Jake Stockwin is a third-year mathematics and statistics undergraduate at the University of Oxford, and a novice full-stack developer. He has a keen interest in programming, both in his academic studies and in his spare time. Next year, he plans to write his dissertation on reinforcement learning, an area of machine learning. Over the past few months, he has designed websites for various clients and has begun developing in Node.js.

5 New Features That Will Make Developers Love Android 7

Sam Wood
09 Sep 2016
3 min read
Android Nougat is here, and it's looking pretty tasty. We've been told about the benefits to end users, but what are some of the most exciting features for developers to dive into? We've got five that we think you'll love.

1. Data Saver

If your app is a hungry, hungry data devourer, then you could be losing users as you burn through their allowance of cellular data. Android 7's new Data Saver feature can help with that. It throttles background data usage and signals to foreground apps to use less data. Worried that will make your app less useful? Don't worry: users can whitelist applications to consume as much data as they desire.

2. Multi-tasking

The big flagship feature of Android 7 is the ability to run two apps on the screen at once. As phones keep getting bigger (and more and more people opt for Android tablets over an iPad), having the option to run two apps alongside each other makes a lot more sense. What does this mean for developers? Well, first, you'll want to tweak your app to make sure it's multi-window ready. But what's even more exciting is the potential for drag-and-drop functionality between apps, dragging text and images from one pane to another. Ever miss being able to just drag files to attach them to an email like on a desktop? With Android N, that's coming to mobile, and devs should consider updating accordingly.

3. Vulkan API

Nougat brings a new option to Android game developers in the form of the Vulkan graphics API. No longer restricted to just OpenGL ES, developers will find that Vulkan provides them with more direct control over hardware, which should lead to improved game performance. Vulkan can also be used across OSes, including Windows and SteamOS (Valve is a big backer). By adopting Vulkan, Google has really opened up the possibility for high-performance games to make it onto Android.

4. Just In Time Compiler

Android 7 has added a JIT (Just In Time) compiler, which will work to constantly improve the performance of Android apps as they run. The performance of your app will improve, but the device won't consume too much memory. Say goodbye to freezes and non-responsive devices, and hello to faster installation and updates! This means users installing more and more apps, which means more downloads for you.

5. Notification Enhancements

Android 7 changes the way notifications work on your device. Rather than just popping up at the top of the screen, notifications in Nougat will offer a direct reply without opening the app, will be bundled together with related notifications, and can even be shown as a 'heads-up' notification displayed to the user when the device is active. These heads-up notifications are also customizable by app developers, so you'd better start getting creative! How will this option affect your app's UX and UI?

There's plenty more...

These are just some of the features of Android 7 we're most excited about; there's plenty more to explore! So dive right into Android development, and start building for Nougat today!


The Future is Node

Owen Roberts
09 Sep 2016
2 min read
In the past few years, we've seen Node.js explode onto the tech scene and go from strength to strength. In fact, the rate of adoption has been so great that the Node Foundation has mentioned that in the last year alone, the number of developers using the server-side platform has grown by 100% to reach a staggering 3.5 million users. Early adopters of Node have included Netflix, PayPal, and even Walmart. The Node fanbase is constantly building new Node Package Manager (npm) packages to share among themselves. With React and Angular offering the perfect accompaniment to Node in modern web applications, along with a host of JavaScript tools like Gulp and Grunt that build on Node's best practices for easier development, Node has become an essential tool for the modern JavaScript developer, one that shows no signs of slowing down or being replaced.

Whether Node will be around a decade from now remains to be seen, but with a hungry user base, thousands of user-created npm packages already available, and full-stack JavaScript becoming the cornerstone of most web applications, it's definitely not going away anytime soon. For now, the future really is Node.

Want to get started learning Node? Or perhaps you're looking to give your skills the boost to ensure you stay on top? We've just released Node.js Blueprints, and if you're looking to see the true breadth of possibilities that Node offers, then there's no better way to discover how to apply this framework in new and unexpected ways. But why wait? With a free Mapt account, you can read the first 2 chapters for nothing at all! When you're ready to continue learning just what Node can do, sign up to Mapt to get unlimited access to every chapter in the book, along with the rest of our entire video and eBook range, at just $29.99 per month!