
Tech Guides


With Node.js, it’s easy to get things done

Packt Publishing
05 Sep 2016
4 min read
Luciano Mammino is the author (alongside Mario Casciaro) of the second edition of Node.js Design Patterns, released in July 2016. He was kind enough to speak to us about his life as a web developer and working with Node.js – as well as assessing Node's position within an exciting ecosystem of JavaScript libraries and frameworks. Follow Luciano on Twitter – he tweets from @loige.

1. Tell us about yourself – who are you and what do you do?

I'm an Italian software developer living in Dublin and working at Smartbox as a Senior Engineer in the Integration team. I'm a lover of JavaScript and Node.js, and I have a number of upcoming side projects that I am building with these amazing technologies.

2. Tell us what you do with Node.js. How does it fit into your wider development stack?

The Node.js platform is becoming ubiquitous; the range of problems that you can address with it is growing bigger and bigger. I've used Node.js on a Raspberry Pi, on desktop and laptop computers, and in the cloud quite successfully to build a variety of applications: command-line scripts, automation tools, APIs, and websites. With Node.js it's really easy to get things done. Most of the time I don't need to switch to other development environments or languages. This is probably the main reason why Node.js fits very well in my development stack.

3. What other tools and frameworks are you working with? Do they complement Node.js?

Some of the tools I love to use are RabbitMQ, MongoDB, Redis, and Elasticsearch. Thanks to the npm registry, Node.js has an amazing variety of libraries which makes integration with these technologies seamless. I was recently experimenting with ZeroMQ, and again I was surprised to see how easy it is to get started with a Node.js application.

4. Imagine life before you started using Node.js. What has its impact been on the way you work?

I started programming when I was very young, so I really lived "a life" as a programmer before having Node.js. Before Node.js came out I was using JavaScript a lot to program the frontend of web applications, but I had to use other languages for the backend. The context-switching between two environments is something that ends up eating a lot of time and energy. Luckily, today with Node.js we have the opportunity to use the same language, and even to share code across the whole web stack. I believe this is something that makes my daily work much easier and more enjoyable.

5. How important are design patterns when you use Node.js? Do they change how you use the tool?

I would say that design patterns are important in every language, and Node.js makes no difference. Furthermore, due to the intrinsically asynchronous nature of the platform, a good knowledge of design patterns becomes even more important in Node.js to avoid some of the most common pitfalls.

6. What does the future hold for Node.js? How can it remain a really relevant and valuable tool for developers?

I am sure Node.js has a pretty bright future ahead. Its popularity is growing dramatically, and it is starting to gain a lot of traction in enterprise environments that have typically been bound to other famous and well-known languages like Java. At the same time, Node.js is trying to keep pace with the main innovations in the JavaScript world. For instance, in its latest releases Node.js added support for almost all the new language features defined in the ECMAScript 2015 standard. This is something that makes programming with Node.js even more enjoyable, and I believe it's the right strategy to keep developers interested and the whole environment future-proof.

Thanks Luciano! Good luck for the future – we're looking forward to seeing how dramatically Node.js grows over the next 12 months.

Get to grips with Node.js – and the complete JavaScript development stack – by following our full-stack developer skill plan in Mapt. Simply sign up here.


Does Size Matter? The Tech Lives of Small and Large Businesses

Richard Gall
01 Sep 2016
9 min read
“Modern business is strange”

Modern business is strange. And this, really, is down to technology. Although the dot-com bubble may have burst almost twenty years ago, it's really the effects of the open source revolution of the past decade that we're feeling today. This was a period that really can be called innovative. It was a time of experimentation and invention, when the rigid structures of traditional IT (and the stereotypes of the traditional IT team that existed alongside them) all but disappeared. Everything seemed to fragment. And this includes business culture itself. In its place, the sharp and smart startup became a new signifier for 21st-century business. Gone was the stuffy boredom of the drab office (see Office Space for an expert example), in favour of something more creative and casual – something where technology rules and no longer simply serves.

Perhaps this is a bit of a caricature. But there's some truth to it – software culture, from the software itself to its project management methodologies, has had an impact on the way that even the largest companies see themselves. And why should we be surprised? When a multibillion-dollar company like Facebook started life in a college dorm, we're reminded that we're living in a new business landscape, where technology has flattened barriers to entry. Where size no longer seems to matter.

But does size really not matter anymore? We looked closely at the technological differences between large and small companies in our 2016 Skill Up report, published this July. The picture it paints is interesting. While it highlights that the software used by businesses does shift according to size, it also demonstrates that there is a lot of crossover. Size matters. Sort of. The concerns and challenges that many organizations are facing are perhaps more similar than we might have thought.

Small businesses

Let's first look at the tools being used by survey respondents working in smaller companies. It's worth noting that micro refers to organizations of 1-20 employees, small 21-99, medium 100-1000, and large any number exceeding that.

A large proportion of the data here has come from respondents working for small development startups – these are likely companies that wouldn't even exist without these useful tools and frameworks; they're essentially small teams that have come together to take advantage of what these tools are capable of. Unreal is there at the top – on the one hand, yes, this shows it's up there with Unity when it comes to game development tools, but it also suggests that game development is an industry that is thriving on small development teams. It's a huge industry, and perhaps one that's growing due to a combination of developer enthusiasm and entrepreneurialism.

It's also worth noting Blender's appearance, as well as Android Studio. Android Studio is not quite as popular with micro businesses (startup size), but overall it is a crucial tool for smaller businesses. This perhaps underlines the importance of mobile for modern businesses – whether these developers are working for software development companies specifically, or for companies that believe they need mobile solutions to meet customer needs, it's clear that Android Studio is proving to be incredibly popular. You can find our game and mobile development Skill Plans in Mapt here to learn more.

It's also notable that WordPress performs so well here. It's a tool that has expanded beyond the tech and developer sphere, becoming a mass-market tool for just about anyone who wants to write content for the web.

But what about the frameworks right at the heart of the web development world? It's hard to pick a clear narrative thread here – Meteor is pretty high on the list among developers in small businesses, a tool at the cutting edge of web development, but to confuse us PHP is right next to it, a language that's unlikely to be called 'cutting-edge'. It's likely that what we're seeing here is simply a fact of the fast-paced reality of web development, perhaps the one area of software where keeping up with change and new frameworks is most difficult. Front-end or full-stack development – we've got a range of carefully curated Skill Plans on some of the hottest frameworks being used today, including React and Angular 2.

One only has to glance at the diverse range of frameworks listed here to see that it's difficult to characterize small businesses in any other way than simply asserting that diversity is a fact of life – there is a huge range of tools available for performing similar tasks and building similar types of products, all offering different features, and all having different benefits depending on exactly what you need to deliver. One way of looking at it is to say that it's about marginal gains. The most popular software tools are the ones that make your life easier, that fit comfortably into your workflow, and that can help you build the type of products that you (or, indeed, your clients) want – depending on what you're currently using or have used before, and what the purpose of the product is going to be (is it an app? A game? Does it need to be dynamic? Or would something more static do a perfectly good job?).

Technology used in larger organizations

If, for smaller organizations, it's all about diversity and finding tools that fit comfortably into the way you work, what about larger companies? As you can see, the list does look a little different: the trend here, broadly speaking at least, is towards operational tools rather than specific frameworks. For the most part it's all about data, cloud, and virtualization – terms that have unfortunately become somewhat empty buzzwords for vendors trying to shift their clunky enterprise software.

But that's not to say that anything listed here should be characterized as clunky. Tableau, sitting pretty at the top of the table, makes data visualization pretty simple – it makes Business Intelligence accessible and easily manageable. It's worth noting that a similar proportion of respondents from micro-sized businesses are working with Tableau as from medium-sized businesses. This means that while it's a tool that remains in a tradition of enterprise analytics, its appeal is not limited to the world of corporate tech.

Elsewhere on the list there are clear nods to the ongoing importance of Big Data; Hadoop and Spark are clearly popular. But again, we'd be wrong to assume these are only of interest to those in large businesses. For software pros in medium and smaller organizations, these tools are playing an important role in business strategy too. Whatever you want to do with data, our data science Skill Plans are built around you. Whether you want to dive deeper into Machine Learning with Python or stay up to date in Big Data, simply sign in and find what you need here.

Virtualization and cloud tools also emerge as important tools for large businesses. This is evidence of an increasing need to take control of software infrastructure for reasons relating to both security and resource efficiency. This is perhaps of less concern to smaller organizations, for whom free tools can facilitate collaboration and resource sharing at no cost. But the impetus for these tools that can virtualize or 'abstract' resources comes from a similar place as the diversification we saw earlier. Fundamentally, it's all about finding solutions that fit around your problems. Arguably, these solutions may feel like new challenges for large organizations, but by spending some time reflecting on what's important for their business, this relative freedom can be essential to success. DevOps Engineers are crucial if we're to unlock the full potential of cloud and virtualization. Learn more in our DevOps Engineer Skill Plan.

Size matters when it comes to tech. Sort of…

Both graphics demonstrate some slight differences in focus between larger and smaller organizations – but the focus of both is ultimately on making those marginal gains, whether that's in terms of the right framework for the job or effective use of virtualization to use tight resources in a more intelligent way. But more than that, while we may be able to discern shifting focuses that correlate to company size, there is nevertheless a lot of crossover in what's important. Strangely, there appears to be more common ground between the 'micro'-sized businesses and the large ones. This suggests that when it comes to properly taking advantage of software, the organizations that are in the best position are those with the resources and time to invest in skill development and learning, or those that are streamlined and agile enough – maybe simply enthusiastic enough – to keep up to speed with what's new and what's important.

Stuck in the middle

The real challenge, then, is for those organizations in the middle. They might have ambitions to become an industry giant but lack the resources. They might even be attracted to the startup mentality but are burdened with legacy systems and cumbersome processes. To take advantage of some of these massive opportunities will require detailed, even critical, self-reflection, and a renewed focus on organizational purpose and strategic priorities. It also means these organizations will need to fashion a different tech-oriented culture.

Startup, small, medium or large business – with Mapt for Teams, you can ensure everyone has the resources they need to learn the skills most important for your projects – for today and tomorrow. Learn more here.


What You Should Learn

Owen Roberts
01 Sep 2016
5 min read
When it comes to the languages, frameworks, and skills that are available to us, what are the reasons we choose the ones we do? One of the key things that Skill Up 2016 looked to uncover was what motivated developers and IT professionals to pick up new tools. The results were interesting, and if you haven't looked at them yet, check out pages 23-25 of our Skill Up report here.

The key reasons people either picked up a new piece of software or decided to drop one came down to four simple points: What software offered the speed needed for the job? Which software did other programmers recommend? What was the most popular tech currently around? And finally, which piece of software was the best tool for the job in the end? Why would you want to learn a new piece of software for these reasons, though? Let's have a look at each reason in more detail!

The Need For Speed – Docker

Who doesn't like a fast application? How many times have you been waiting for a web page to load, annoyed that it's taking so long? When it comes to customer needs, speed is right up there with usability, and as a developer you have to walk a fine line without sacrificing either for the other. No wonder Docker has become the go-to option for developers looking to get as much speed as they can without having to optimize every single part of their applications. Docker is designed to be lean to the nth degree; containers typically start in milliseconds. Who doesn't want that sort of speed in their applications? Docker's widespread adoption for its speed is proof enough that developers have seen its potential in the wider world, and if you haven't jumped on it yet, then it's definitely something you should start looking into – faster applications mean that everything is better and everyone is happier, after all.

The One Everyone Recommends – Swift

At the end of the day, what do you trust more? The unproven language or software, or the one that everyone is running up to you trying to convince you to give a try and let it change your life? While we all like to try new things, in the end the real winner will be the one that everyone else has already tried, loved, and is now attempting to convince you to take the plunge on too, so you can join them in the revolution they're currently going through. Apple is pushing Swift hard. Swift users are pushing Swift even harder. Improving on the key flaws seen after years of Objective-C development, and designed from the ground up to revolutionize the world of iOS development, Swift has created a fanbase that swears by it. People who try Swift for the first time consistently find that it's just as good as they were told, so why not give it a try for yourself if you haven't already? After all, could tens of thousands of developers be wrong?

The Popular One – React.js

No one wants to be using tired old "uncool" software. Tech is the business of innovation, and when a big company like Facebook releases a new piece of software to the wider world, people are going to turn their heads, take notice, and immediately get to grips with the complete ins and outs of it. Adding Facebook's secret weapon to your toolkit makes you that much more appealing as a developer to companies when the time comes. React has brought one of the biggest buzzes to the JavaScript world since the original Angular was released. Bringing many of the latest changes in web design, like a component-based approach to JavaScript, without sacrificing usability and ease of writing, it's pretty obvious why React has taken the world by storm – and it's not showing any signs of slowing down either!

The Best Tool For The Job – AWS

Probably the easiest reason to start picking up a new skill: which one of us hasn't wanted to enter a job or task with the best tool for it? Having the best of the best available to us means that the job is so much easier in the long run – there is less chance of issues popping up during and after development, and we can have peace of mind that our customers will get the best possible product we can provide, without much cost to us in stress or unforeseen circumstances. AWS is tried and tested when it comes to providing everything that developers are looking for in a platform; it holds dozens of different tools for any situation, and when it comes to price, developers say it's one of the most competitive around. With Amazon handling the security side of things too, it's obvious that for those who swear by it, it's the development platform that should be your first choice when looking into a cloud-based solution.

Are you looking to dig deeper into these pieces of tech, maybe even to start learning a new one? Why not sign up to our subscription service Mapt to get all our titles and more at $29.99 per month? As an added bonus, we also have Skill Plans directly related to the above tech – letting you know exactly which titles you should be looking at to really build up your skills!



Services with Reactive Observation

Alejandro Perezpaya
22 Aug 2016
6 min read
When creating apps, it's a good practice to write your business logic and interaction layers as Service Objects or Interactors. This way, you can have them as modular components for your app, avoiding repetition of code; you can follow single responsibility principles and test the behavior in an isolated fashion.

The old-school delegate way

As an example, we are going to create a WalletService, which will have the responsibility of managing the current credits. The app is not reactive yet, so we are going to sketch this interactor with delegates and try to get the job done. We want a simple interactor with the following features:

- Notifications on updates
- An increase credits method
- A use credits method

Define the protocol for the service

You can do this by implementing the following:

```swift
public protocol WalletServiceDelegate {
    func creditsUpdated(credits: Int)
}
```

The first requirement is now defined.

Create the service

Now you can create the service:

```swift
public class WalletService {
    public var delegate: WalletServiceDelegate?

    // Notify the delegate whenever the credits change
    private(set) var credits: Int {
        didSet {
            delegate?.creditsUpdated(credits)
        }
    }

    public init(initialCredits: Int = 0) {
        self.credits = initialCredits
    }

    public func increase(quantity: Int) {
        credits += quantity
    }

    public func use(quantity: Int) {
        credits -= quantity
    }
}
```

With these few lines, our basic requirements have been met.

Ready for use

But we are using delegates, and hence we will need to create an object that conforms to our delegate protocol in order to use this interface. This will be sufficient for our needs at the moment:

```swift
class Foo: WalletServiceDelegate {
    func creditsUpdated(credits: Int) {
        print(credits)
    }
}

let service = WalletService()
let myDelegate = Foo()
service.delegate = myDelegate
```

Well, while this is working, we need to use WalletService in more parts of the project. That means rewriting the actual code to work as a program class with static vars and class functions. It also needs to support multiple delegate subscriptions (adding and removing them, too). That would mean really complex code for a really simple service, and this problem would be repeated all over your shared services. There's a framework for this, called RxSwift!

The RxSwift way

Our code will look like the following after removing the delegate dependency:

```swift
public class WalletService {
    private(set) var credits: Int

    init(initialCredits: Int = 0) {
        self.credits = initialCredits
    }

    public func increase(quantity: Int) {
        credits += quantity
    }

    public func use(quantity: Int) {
        credits -= quantity
    }
}
```

We want to operate this as a program class, so we will make a way for that with RxSwift.

Rewriting our code with RxSwift

When you dig into RxSwift's units, you realize that Variable is the unit that fits the requirements best: you can easily mutate its value and subscribe to changes. If you contain this unit in a public and static variable, you will be able to operate the service as a program class:

```swift
import RxSwift
import RxCocoa

public class WalletService {
    private(set) static var credits = Variable<Int>(0)

    public class func increase(quantity: Int) {
        credits.value += quantity
    }

    public class func use(quantity: Int) {
        credits.value -= quantity
    }
}
```

The result of this is great, simple code that is easy to operate, with no protocol and no instantiable classes.

Usage with RxSwift

Let's subscribe to credit changes. You need to use the Variable as a Driver or an Observable. We will use it as a driver:

```swift
let disposeBag = DisposeBag()

// First subscription
WalletService.credits.asDriver()
    .driveNext { print($0) }
    .addDisposableTo(disposeBag)

// Second subscription
WalletService.credits.asDriver()
    .driveNext { print("Second: \($0)") }
    .addDisposableTo(disposeBag)

WalletService.increase(10)
WalletService.use(5)
```

This clearly shows a lot of advantages. We don't depend on a class instance, and we can add as many subscriptions as we want! As you can see, RxSwift helps you make cleaner services/interactors with a simple API and higher functionality. With this pattern, you can have subscriptions all over the app to changing data, so we can navigate through the app and have all of the views updated with the latest changes, without forcing a redraw of the view for every update in the underlying data.

A higher complexity example

Dealing with geolocation is tough. That's a reality, but there are ways to make interoperability with it easier. To avoid future problems and repeated or messy code, create a wrapper around CoreLocation. With RxSwift you have easier access to subscribe to CoreLocation updates, but I consider that approach not good enough. In this case, I recommend making a class around CoreLocation, using it as a shared instance, to manage the geolocation updates with global interoperability, allowing you to pause or start updates without much code:

```swift
import RxSwift
import RxCocoa
import CoreLocation

public class GeolocationService {
    static let sharedInstance = GeolocationService()

    // Using implicitly unwrapped optionals is not a good practice,
    // but we are 100% sure we are setting a driver into these variables on init.
    // If we don't do that, the compiler won't pass, even though it works.
    // Apple might do something here in future versions of Swift.
    private(set) var authorizationStatus: Driver<CLAuthorizationStatus>!
    private(set) var location: Driver<CLLocation>!
    private(set) var heading: Driver<CLHeading>!

    private let locationManager = CLLocationManager()

    init() {
        locationManager.distanceFilter = kCLDistanceFilterNone
        locationManager.desiredAccuracy = kCLLocationAccuracyBestForNavigation

        authorizationStatus = bindAuthorizationStatus()
        heading = bindHeading()
        location = bindLocation()
    }

    public func requestAuthorization() {
        locationManager.requestAlwaysAuthorization()
    }

    public func startUpdatingLocation() {
        locationManager.startUpdatingLocation()
        locationManager.startUpdatingHeading()
    }

    public func stopUpdatingLocation() {
        locationManager.stopUpdatingLocation()
        locationManager.stopUpdatingHeading()
    }

    private func bindHeading() -> Driver<CLHeading> {
        return locationManager.rx_didUpdateHeading
            .asDriver(onErrorDriveWith: Driver.empty())
    }

    private func bindLocation() -> Driver<CLLocation> {
        return locationManager.rx_didUpdateLocations
            .asDriver(onErrorJustReturn: [])
            .flatMapLatest { $0.last.map(Driver.just) ?? Driver.empty() }
    }

    private func bindAuthorizationStatus() -> Driver<CLAuthorizationStatus> {
        weak var wLocationManager = self.locationManager
        return Observable.deferred {
            let status = CLLocationManager.authorizationStatus()
            guard let strongLocationManager = wLocationManager else {
                return Observable.just(status)
            }
            return strongLocationManager
                .rx_didChangeAuthorizationStatus
                .startWith(status)
        }.asDriver(onErrorJustReturn: CLAuthorizationStatus.NotDetermined)
    }
}
```

As a result of this abstraction, we now have easy operability over geolocation with just subscriptions to our exposed variables.

About the author

Alejandro Perezpaya has been writing code since he was 14. He has developed a multidisciplinary profile, working as an iOS developer but also as a backend and web developer for multiple companies. After working for years in Madrid and New York startups (Fever, Cabify, Ermes), a few months ago he started Triangle, a Madrid-based studio where he crafts high-quality software products.


Looking for an alternative to Celery? Try Huey

Bálint Csergő
09 Aug 2016
5 min read
Yes, Celery is amazing in its way; it is the most commonly used "distributed task queue" library, and this did not happen accidentally. But there are cases when you don't need the whole feature set offered by Celery, like multi-broker support. Let's say you just want to execute your tasks asynchronously, without adding a lot of extra bulk to your Python dependencies, and let's say you happen to have a Redis instance lying around. Then Huey might be the one for you.

What does Huey offer?

It's a lightweight task queue. Lightweight, you say? Yes: the only dependency is the Python Redis client. Yes, really. But compared to its skinny requirements list, it can do lots of amazing things, and believe me, you'll see that in this short article. We used Huey actively for our e-commerce startup Endticket, and it turned out to be a good design decision.

Defining your Huey instances

```python
from huey import RedisHuey

huey = RedisHuey('my-queue', host='your.redis.host')
```

Of course, you can define multiple instances in your code. You can run a Huey consumer per queue, giving you control over how many consumers you want per queue, and you can also organize tasks at the queue level. So, important tasks can be put into one queue and less important ones into a different one. Easy, right?

```python
# config.py
from huey import RedisHuey

importanthuey = RedisHuey('my-important-queue', host='your.redis.host')
lowpriohuey = RedisHuey('my-lowprio-queue', host='your.redis.host')
```

We have a Huey instance! But how do I define a task?

You just have to import your defined Huey instance and use its task decorator to decorate any regular function. When you apply the task decorator to a function, every time it gets called, instead of the function being executed directly, an instance of the QueueTask class is created.

```python
# tasks.py
from config import huey  # import the huey we instantiated in config.py

@huey.task()
def write_post(quality):
    print("I'm writing a(n) %s post" % quality)
    return True
```

Tasks are supposed to be called, right?

As I mentioned before, decorated functions won't get executed instantly when you call them; instead, a QueueTask object is created and the task gets enqueued for later execution. However, enqueueing a task involves no magic formulas or hard-to-remember syntax. You simply call the function like you would normally.

```python
# main.py
from tasks import write_post  # import our task
from config import huey

if __name__ == '__main__':
    quality = raw_input("What quality post are you writing?")

    # QueueTask instance created and enqueued;
    # it will be executed right away in the worker
    write_post(quality)

    # Task enqueued and scheduled to be executed 120 seconds later
    write_post.schedule(args=(quality,), delay=120)

    print('Enqueued job to write a(n) %s post' % quality)
```

Task execution in Huey

OK, you successfully managed to enqueue your task, and it's resting in the Redis queue waiting to be executed, but how will the system execute it? You just have to fire up a task executor:

$ huey_consumer.py main.huey

If you did everything well in the previous steps, the executor should be running and executing the tasks you enqueue. If you have multiple queues defined, you must run a consumer for every queue:

$ huey_consumer.py main.importanthuey
$ huey_consumer.py main.lowpriohuey

If your system is distributed and there are multiple nodes for processing async tasks, that's not a problem. You can run one executor on every host; Huey can handle multiple workers for a single queue without a problem.
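A single consumer process can also run several worker threads for one queue. As a sketch (the -w flag name is an assumption based on the consumer's options at the time; check huey_consumer.py --help for your version), this would start four workers on the same queue:

$ huey_consumer.py main.huey -w 4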
Periodic tasks: using Huey in a crontab-like manner

Yes, you can do this as well, and, to be honest, in a simple way. You just have to use the periodic_task decorator of your Huey instance, and the system does the rest.

```python
from huey import crontab
from config import huey  # import the huey we instantiated in config.py

@huey.periodic_task(crontab(minute='0', hour='3'))
def update_caches():
    update_all_cache_entries()
```

You just have to make sure you have at least one consumer running for the given queue, and your tasks will be executed periodically.

I want to see results, not promises!

When you enqueue a task, it returns an AsyncData object:

```python
# main.py
from tasks import write_post  # import our task
from huey.exceptions import DataStoreTimeout
from config import huey

if __name__ == '__main__':
    quality = raw_input("What quality post are you writing?")

    result = write_post.schedule(args=(quality,), delay=120)
    print(type(result))  # huey.api.AsyncData

    # Try to get the result: if the job has already finished, the result
    # is returned; otherwise you get None
    result.get()

    try:
        # You can also block until you get your result, and you can specify
        # a timeout. Be aware that if the timeout is reached and still no
        # result is available, a huey.exceptions.DataStoreTimeout is raised.
        result.get(blocking=True, timeout=120)
    except DataStoreTimeout:
        print("Too slow mate :(")
```

Running in production

We run Huey using Supervisord inside a Docker container. Your supervisor config should look something like this. If your queue consumer happens to die, you can see it in the supervisor logs, and it will also get auto-restarted by supervisor. Nice and easy.

```ini
[program:my-queue]
command=bash -c "echo starting main.huey && sleep $(( $RANDOM / 2500 + 4)) && exec huey_consumer.py main.huey"
environment=PYTHONPATH=%(here)s/..
numprocs=1
process_name=%(program_name)s-%(process_num)d
stopwaitsecs=5
stdout_logfile=%(here)s/huey.log
redirect_stderr=true
```

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.

Deep Learning with Microsoft CNTK

Sarvex Jatasra
05 Aug 2016
7 min read
“Deep learning (deep structured learning, hierarchical learning, or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple nonlinear transformations.” – Wikipedia

High Performance Computing is not a new concept, but only recently have technical advances, along with economies of scale, ensured that HPC is accessible to the masses with affordable yet powerful configurations. Anyone interested can buy commodity hardware and start working on Deep Learning, thus bringing a machine learning subset of artificial intelligence out of research labs and into garages. DeepLearning.net is a starting point for more information about Deep Learning. Nvidia's Parallel Forall is a nice resource for learning GPU-based Deep Learning (Core Concepts, History and Training, and Sequence Learning).

What is CNTK?

Microsoft Research released its Computational Network Toolkit in January this year. CNTK is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. CNTK supports the following:

- Feed-Forward Deep Neural Networks (DNN)
- Convolutional Neural Networks (CNN)
- Recurrent Neural Networks (RNN) / Long Short-Term Memory units (LSTM)
- Stochastic Gradient Descent (SGD) training

Why CNTK? Better scaling

When Microsoft CNTK was released, the stunning feature that it brought was distributed computing: a developer was not limited by the number of GPUs installed in a single machine. This was a significant breakthrough, because even the best of machines was limited to 4-way SLI, capping the total number of cores at 4 x 3072 = 12288. Such a configuration also put an extra load on the hardware budget, because it left very little room for upgrades. There is only one motherboard available that supports 4-way PCI-E Gen3 x16, and there are very few manufacturers who provide a good quality 1600 W power supply that can feed four Titans. This meant that developers were forced to pay a hefty premium for upgradability in terms of the motherboard and processor, settling for an older-generation processor.

Distributed computing in High Performance Computing is essential, since it allows scaling out as opposed to scaling up. Developers can build grids with cheaper nodes and the latest processors, lowering the hardware cost of entry. Microsoft Research demonstrated in December 2015 that distributed GPU computing is most efficient in CNTK. In comparison, Google TensorFlow, FAIR Torch, and Caffe did not allow scaling beyond a single machine, and Theano was the worst, as it did not even scale to multiple GPUs on the same machine.

Google Research, on April 13, released support for distributed computing. The claimed speed-up is 56x for 100 GPUs and 40x for 50 GPUs; the performance deceleration is sharp for any sizable distributed machine learning setup. I do not have any comparative performance figures for CNTK, but scaling with GPUs on a single machine for CNTK had very good numbers.

GPU performance

One of the shocking finds with my custom-built commodity hardware (2x Titan X) was the TFLOPS achieved under Ubuntu 14.04 LTS and Windows 10. With a fully updated OS and the latest drivers from Nvidia, I got double the TFLOPS under Windows compared to Ubuntu. I would like to rerun the samples with Ubuntu 16.04 LTS, but until then, I have a clear winner in performance with Windows.

CNTK works perfectly on Windows, but TensorFlow has a dependency on Bazel, which as of now does not build on Windows (Bug #947). Google could look into this and make TensorFlow work on Windows, or Ubuntu and Nvidia could achieve the same TFLOPS as Windows. But until that time, architects have two options: either settle for lower TFLOPS under Ubuntu with TensorFlow, or migrate to CNTK for the increased performance.

Getting started with CNTK

Let's see how to get started with CNTK.

Binary installation

Currently, the CNTK binary installation is the easiest way to get started with CNTK. Just follow the instructions. The only downside is that the currently available binaries are compiled with CUDA 7.0, rather than the latest CUDA 7.5 (released almost a year ago).

Codebase compilation

If you want to learn CNTK in detail, and if you are feeling adventurous, you should try compiling CNTK from source. Compile the code base even if you do not expect to use the generated binary, because the whole compilation process will be a good peek under the hood and will enhance your understanding of Deep Learning. The instructions for the Windows installation are available here, whereas the Linux installation instructions are available here. If you want to enable 1-bit Stochastic Gradient Descent (1bit-SGD), you should follow these instructions. 1bit-SGD is licensed more restrictively, and you have to understand the differences if you are looking at commercial deployments.

Windows compilation is characterized by older versions of libraries. Nvidia CUDA and cuDNN were recently updated to 7.5, whereas other dependencies such as Nvidia CUB, Boost, and OpenCV still require older versions. Kindly pay extra attention to the versions listed in the documentation to ensure smooth compilation. Nvidia has updated its Nsight support to Visual Studio 2015; however, Microsoft CNTK still supports only Visual Studio 2013.

Samples

To test the CNTK installation, here are some really great samples:

- Simple2d (Feed-Forward)
- Speech / AN4 (Feed-Forward & LSTM)
- Image / MNIST (CNN)
- Text / PennTreebank (RNN)

Alternative Deep Learning toolkits

Theano: Theano is possibly the oldest Deep Learning framework available. The latest release, 0.8, which came out on March 16, enables the much-awaited multi-GPU support (there are no indications of distributed computing support, though). cuDNN v5 and CNMeM are also supported. A detailed report is available here. Python bindings are available.

Caffe: Caffe is a deep learning framework primarily oriented towards image processing. Python bindings are available.

Google TensorFlow: TensorFlow is a deep learning framework written in C++ with Python API bindings. The computation graph is pure Python, making it slower than other frameworks, as demonstrated by benchmarks. Google has been pushing Go for a long time now, and it has even open sourced the language, but when it came to TensorFlow, Python was chosen over Go. There are concerns about Google supporting commercial implementations.

FAIR Torch: Facebook AI Research (FAIR) has released its extension to Torch7. Torch is a scientific computing framework with Lua as its primary language. Lua has certain advantages over Python (lower interpreter overhead, simpler integration with C code), which lend themselves to Torch. Moreover, multi-core support using OpenMP directives points to better performance.

Leaf: Leaf is the latest addition to the machine learning framework landscape. It is based on the Rust programming language (positioned as a replacement for C/C++). Leaf is a framework created by hackers for hackers rather than scientists, and it shows some nice performance improvements.

Conclusion

Deep Learning with GPUs is an emerging field, and much remains to be done to make good products out of machine learning. Every product needs to evaluate all of the possible alternatives (programming language, operating system, drivers, libraries, and frameworks) available for its specific use cases. Currently, there is no one-size-fits-all approach.

About the author

Sarvex Jatasra is a technology aficionado, exploring ways to apply technology to make lives easier. He is currently working as the Chief Technology Officer at 8Minutes. When not in touch with technology, he is involved in physical activities such as swimming and cycling.


Deployment done right – Teletraan

Bálint Csergő
03 Aug 2016
5 min read
Tell me, how do you deploy your code? If you still git pull on your servers, you are surely doing something wrong. How will you scale that process? How will you eliminate the chance of human failure? Let me help you; I really want to. Have you heard that Pinterest open sourced its awesome deployment system called Teletraan? If not, read this post. If yes, read it still, and maybe you can learn from the way we use it in production at Endticket.

What is Teletraan?

It is a deployment system that consists of three main pieces:

- The deploy service is a Dropwizard-based Java web service that provides the core deploy support. It's actually an API – the very heart and brain of this deployment service.
- The deploy board is a Django-based web UI used to perform day-to-day deployment work. Simply an amazing user interface for Teletraan.
- The deploy agent is the Python script that runs on every host and executes the deploy scripts.

Is installing it a pain?

Not really; the setup is simple. But if you're using Chef as your config management tool of choice, take these, since they might prove helpful: chef-teletraan-agent, chef-teletraan.

Registering your builds

Let the following snippet speak for itself. (The teletraan_host and build metadata variables here are assumed to come from your build pipeline.)

```python
import json

import requests

headers = {'Content-type': 'application/json'}
r = requests.post("%s/v1/builds" % teletraan_host,
                  data=json.dumps({'name': teletraan_name,
                                   'repo': name,
                                   'branch': branch,
                                   'commit': commit,
                                   'artifactUrl': artifact_base_url + '/' + artifact_name,
                                   'type': 'GitHub'}),
                  headers=headers)
```

I've got the system all set up. What now?

Basically, you have to make your system compatible with Teletraan. You must have an artifact repository available to store your builds, and add deploy scripts to your project. Create a directory called "teletraan" in your project root and add the following scripts to it:

- PRE_DOWNLOAD
- POST_DOWNLOAD
- PRE_RESTART
- RESTARTING
- POST_RESTART

Although referred to as Deploy Scripts, they can be written in any programming language, as long as they are executable. Sometimes the same build artifact can be used to run different services based on different configurations. In this case, create different directories under the top-level teletraan directory with the deploy environment names, and put the corresponding deploy scripts under the proper environment directories separately. For example:

teletraan/serverx/RESTARTING
teletraan/serverx/POST_RESTART
teletraan/servery/RESTARTING

What do these scripts do?

The host-level deploy cycle looks the following way:

UNKNOWN -> PRE_DOWNLOAD -> [DOWNLOADING] -> POST_DOWNLOAD -> [STAGING] -> PRE_RESTART -> RESTARTING -> POST_RESTART -> SERVING_BUILD
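Since the scripts only need to be executable, a minimal RESTARTING hook might look like the sketch below. Everything in it is hypothetical – the service name and restart command are placeholders – and it assumes the deploy agent treats a non-zero exit code as a failed stage:

```python
#!/usr/bin/env python
# teletraan/RESTARTING - hypothetical sketch of a deploy script.
# The deploy agent runs this on each host when the deploy reaches
# the RESTARTING stage of the cycle shown above.
import subprocess
import sys

SERVICE = "my-web-app"  # placeholder: your service's init script name


def main():
    # Restart the service and propagate its exit code, so a failure here
    # can trigger whichever failure policy the environment is configured with.
    sys.exit(subprocess.call(["service", SERVICE, "restart"]))


if __name__ == "__main__":
    main()
```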
Autodeploy?

You can define various environments. In our case, every successful master build ends up on the staging cluster automatically. This is powered by Teletraan's autodeploy feature, and it works nicely: whenever Teletraan detects a new build, it gets automatically pushed to the servers.

Manual control

We don't autodeploy code to the production cluster. Teletraan offers a feature called "promoting builds". Whenever a build proves to be valid on the staging cluster (some automated end-to-end testing and, of course, manual testing is involved in the process), the developer has the ability to promote a build to production.

Oh noes! Things went wrong. Is there a way to go back?

Yes, there is a way! Teletraan gives you the ability to roll back any build which happens to be failing, and this can happen instantly. Teletraan keeps a configurable number of builds on the server for every deployed project; an actual deploy is just a symlink being changed to point at the new release.

Rolling deployments, oh the automation!

Deploy scripts should always run flawlessly. But let's say they do actually fail. What happens then? You can define it. There are three policies in Teletraan:

1. Continue with the deployment. Move on to the next host as if nothing happened.
2. Roll back everything to the previous version. Making sure everything is fine is more important than a hasty release.
3. Remove the failing node from production. We have enough capacity left anyway, so let's just cut off the dead branches!

This gives you all the flexibility and security you need when deploying your code to any HA environment.

Artifacts?

Teletraan just aims to be a deployment system, and nothing more. And it serves its purpose amazingly well. You just have to notify it about builds, and you just have to make sure that your tarballs are available to every node where deploy agents are running.

Lessons learned from integrating Teletraan into our workflow

It was actually a pretty good experience, even when I was fiddling with the Chef part. We use Drone as our CI server, and there was no plugin available for Drone, so that had to be written too. Teletraan is a new kid on the block, so you might have to write some lines of code to make it a part of your existing pipeline. But I think that if you're willing to spend a day or two on integrating it, it will pay off.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.


Speed, Speed, Speed

Owen Roberts
26 Jul 2016
3 min read
We're currently in the middle of our Skill Up campaign, with all our titles at just $10 each! If you're late to the party, Skill Up is your opportunity to get the knowledge and skills you need to become the developer you want to be in 2016 and beyond. Along with the launch of our campaign, we've also released our 2016 Skill Up report; you can download it here if you haven't done so already. In the report, one particular piece of information really stood out to me: the link between salary and the programming language used. Take a look at the graph below:

The one thing linking the top-earning languages is the speed that each language is able to offer. Whether it's SQL's ease of fine-tuning or C's structured design created for simplicity, each is renowned for being faster than its peers and the alternatives. It should be no surprise that faster languages end up offering more pay. Scala's ability to handle the stress of big data applications and still crunch data fast has made it one of, if not the, biggest programming languages for any big data related issue. Even Perl, a language that has fallen by the wayside in the eyes of many since 2005, is a speed machine, often beating Python 3 when it comes to everyday tasks, which has led it to carve out its own niche in finance, bioinformatics, and other specialized sectors.

The benefits for a company of hiring developers who can create fast software are obvious – we all know how important it is to customers to have a fast product; the few seconds it takes to load can be the deciding factor as to whether your product is picked up or left in the dust. This is especially true for enterprise-level applications or big data crunchers. If the solution you're selling is too slow for the cost you're offering, then why should these customers stay with your product when there are so many potentially faster options on the market at a comparable price?

So this Skill Up, why not take the opportunity to speed up your applications? Whether it's by laying the foundations of a completely new language (it's never too late to change, after all) or checking out how to streamline your current apps for better performance and speed, there's no better time to ensure your programming skills are better than ever.


What Pokémon Go and Augmented Reality Mean For the Future of Building Apps

Sam Wood
22 Jul 2016
6 min read
Since its release, Pokémon Go has taken the world by storm. It's the must-have new mobile game to be playing (and it's also the must-blog new topic for almost any content site on the net). So what's made it so successful - and what can other app designers seek to learn from the Pokémon Go experience? The Pokémon Go World from the Packt Office In Packt's Skill Up 2016 Report we revealed the one topic almost all developers were convinced was going to be the next big thing - augmented reality. So is Pokémon Go a quick AR fad, or the shape of things to come? We think it's the latter, and here's some of the lessons app developers can learn from its success. Content Will Be Key The key to Pokémon Go's success is not its gameplay - it's that it's a Pokémon game. Imagine Pokémon Go with identical mechanics but some other original variety of small monster to hunt and battle with. There's no way it would be as successful (at least so soon after release). Partially, this is because Pokémon has had twenty years to become a classic IP. Partially, this is because Pokémon is a very good IP - imaginative and recognizable characters about whom it is fun and easy to create and tell stories. Pokémon is a highly successful piece of intellectual property, which has contributed enormously to Pokémon Go being a highly successful app. What does this mean for app design? Good content is key to success. It's not just enough to have a neat gameplay mechanic or a cool feature - you need a good story too. Very few developers are going to have the resources to be able to create or license something as popular as Pokémon. But Ingress (the other augmented reality game from Niantic) boasts over seven million players for a game with its own rich and entirely original story. Facebook thrives on its ability to serve us up relevant content. Content is vital now – and is only going to be even more vital in the future. It Will Run Its AR on Wearables Playing Pokémon Go is probably the first time your average member of the public has been properly disappointed that Google Glass failed. As an app, it is one of the first I have used where running it primarily through wearables rather than the phone device would be amazingly beneficial. Nintendo is way ahead of us here - one of the ways it's seeking to monetize Go is through the Pokémon Go Plus wearabale device. The device's function is simple: it vibrates when there's a Pokémon in the vicinity, saving you the need to always have your phone on hand with the app open. What does this mean for app design? Pokémon Go is the first app which really benefits from integration with wearables. This is heavily tied to the physicality of its gameplay. And sure, Pokémon Go is a game - but is it just a game? As Chris Dixon said way back in 2010, "the next big thing will start out looking like a toy". Pokémon Go shows us augmented reality on the common smartphone, and the experience is less than ideal. There will be better devices built for this new kind of AR app and those devices will be wearables. There Will Be Physical Benefits That Won't Be the Primary Reason For Use Did you know that Pokémon Go is actually an exercise app? Niantic head John Hanke has noted that one of the principle 'secret' goals of Pokémon Go is to encourage people to exercise more. In interview with Business Insider, he notes: "Pokémon Go" is designed to get you up and moving by promising you Pokémon as rewards, rather than placing pressure on you. 
Users are hailing the hidden benefits of Pokémon Go making them exercise, including the mood boost of getting outside. While it's not as good as a dedicated fitness app for those looking to get seriously in shape, people often feel better for exercising. Pokémon Go has not tried to gamify fitness - it's made the benefits of exercise and exploring the outdoors a subtle reward for engaging with its main game.

In this, Pokémon comes full circle. Popular legend claims that the original 90s video game was inspired by the creator's boyhood hobby of bug collecting - something he worried was no longer possible for kids in the modern world. He made a virtual alternative - Pokémon. Now, twenty years later, that virtual alternative is moving back into the physical realm once more.

What does this mean for app design? The future is not about virtual reality, but augmented reality - and the same is true for apps. The next generation of killer apps and games isn't going to be about replacing our real-world experiences; it's going to be about taking those virtual experiences back into the real world.

Social Will Be Social

The return of a virtual experience to the real world can be seen most clearly in the social communities that Pokémon Go has created. But these are not the virtual communities of friends lists and Twitter followers. These are real people meeting in real physical spaces to build their communities. Businesses are buying lures from the microtransaction store to attract customers to their premises - and apparently, it's working. Can Facebook advertising do that for your coffee shop or bar?

What does this mean for app design? We can only speculate how other apps might implement and expand on Pokémon Go's virtual/physical community crossover. However, we've already seen an integration of augmented reality and the Yelp app for a similar 'local business enhancing' experience. Whether it's accessing people's Facebook pages via facial recognition, or more games and apps that encourage physical closeness to other players, we can be sure we're going to see a lot more apps whose 'social' means real-world social interaction.

Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive

Casey Borders
22 Jul 2016
5 min read
Right now is a very exciting time in the virtual reality space! We've already seen what mobile VR platforms have to offer with Google Cardboard and Samsung Gear VR. Now we're close to a full commercial release of the two biggest desktop VR contenders. Oculus started taking pre-orders in January for the Rift and plans to start shipping by the end of March. HTC opened pre-orders on March 1st for the Vive, which will ship in April. Both companies are working to make it as easy as possible for developers to build games for their platforms.

Oculus has offered Unity integration since its first development kit (DK1), but stopped supporting OSX after the acquisition by Facebook, leaving the Mac runtime at version v0.5.0.1-beta. The Windows runtime is up to version v0.8.0.0-beta. You can download both of these, as well as a bunch of other tools, from their developer download site. HTC has teamed up with Valve, who is writing all of the software for the Vive. At the Vision AR/VR Summit in February, it was announced that official Unity integration would be coming soon.

Adding basic VR support to your game is amazingly easy with the Oculus Unity package. From the developer download page, look under the Engine Integration heading and download the "Oculus Utilities for Unity 5" bundle. Importing that into your Unity project will bring in everything you need to integrate VR into your game, as well as some sample scenes you can use for reference while you're getting started. Looking under OVR > Prefabs, you'll find an OVRCameraRig prefab that works as a drop-in replacement for the standard Unity camera. This prefab handles retrieving the sensor data from the head-mounted display (HMD) and rendering the stereoscopic output. This lets you go from downloading the Unity package to viewing your game in the Oculus Rift in just a few minutes!

Virtual Reality Games

Virtual reality opens up a whole new level of immersion in games. It can make the player truly feel like they are in another world. It also brings some unique obstacles that you'll need to consider when working on a VR game. The first and most obvious is that the player's vision is going to be completely blocked by the head-mounted display. This means you can't ask the player to type anything, and it's going to be extremely difficult and frustrating for them to use a large number of key bindings. Game controllers are a great way around this, since they have a limited number of buttons and are very tactile. If you are going to target PCs, then supporting a mouse and keyboard is a must; just try to keep the inputs to a reasonable number.

The User Interface

The second issue is the user interface. Screen-space UI is jarring in VR and can really damage a player's sense of immersion. Also, if it blocks out a large portion of the player's field of view, it can make them nauseous, since it remains static as they move their head around. A better way to handle this is to build the UI into your world. If you want to show the user how much ammo they have left, build a display into the gun. If your game requires users to follow a set path, try putting up signs along the way or painting the directions on the road. If you really need to keep some information visible at all times, try to make it fit the theme of your world. For example, maybe your player has a helmet that projects a HUD like Tony Stark's Iron Man suit.
Player Movement

The last big thing to keep in mind when making VR-friendly games is player movement. The current hardware offerings allow two levels of player movement. Google Cardboard, Samsung Gear VR and the Oculus Rift allow for mostly stationary interaction. The head-mounted display will give you its yaw, pitch and roll, but the player remains mostly still. The Rift DK2 and consumer version allow for some range of motion, as long as the head-mounted display stays within the field of view of its IR camera. This lets players lean in and get a closer look at things, but not much else. To allow the player to explore your game world, you'll still need to implement the same type of movement controls that you would for a regular, non-VR game.

The HTC Vive offers full room-scale VR, which means the player has a volume within which they have complete freedom to move around. The position and orientation of the head-mounted display and the controllers are given to you, so you can see where the player is and what they are trying to interact with. This comes with its own interesting problems, since the game world can be larger than the player's play space. And each person will have a different amount of room they can dedicate to VR, so the volume will be different for each player.

Above everything else, though, virtual reality is just a whole lot of fun! For developers, it offers a lot of new and interesting challenges, and for players, it allows you to explore worlds like you never could before. And with VR setups ranging from a few dollars of cardboard to an entire room-scale rig, there's something out there to satisfy just about anybody!

About the author

Casey Borders is an avid gamer and VR fan with over 10 years of experience in graphics development. He has worked on everything from military simulation to educational VR/AR experiences to game development. More recently, he has focused on mobile development.

Exploring R packages

Peter Shultz
20 Jul 2016
5 min read
In any given discipline, from business to academia, there is a need for data analysis. Among the most popular tools for analyzing datasets is R, a programming language that lets you easily perform statistical analyses and create data visualizations. In this post, I'm going to share with you the best tools for manipulating datasets in R so that they're easy to analyze. In addition, you'll be introduced to a wide variety of visualization tools that are sure to bring your data to life. If you haven't used R before, no problem! I'll get you set up with the proper software.

Downloading R and RStudio

If you haven't already downloaded R, you can do so at the following links for a Mac, for a PC, and for various flavors of Linux. Downloading R is all well and good, but doing significant work with the language is best done using an integrated development environment, or IDE. One of the most popular choices is RStudio, with support for Mac, Windows, Debian, Ubuntu, and RedHat. Downloading is painless: click this link and choose the proper download for your system and architecture. After installing, fire up RStudio. You're now ready to begin programming with R!

Learning R

Learning a new programming language is tough, especially if you haven't learned one before. Luckily, there are dozens of great resources available to teach you the ins and outs of R. While MOOCs are always an option, you might have better luck using sites like DataCamp or Code School. If you'd rather go old school, I recommend the PDF R for Beginners. The coolest option I've seen of late is a package called swirl, which lets you learn about R right within RStudio. If I were relearning the language, this would be my first stop.

Packages

R by itself can do quite a bit, but the real fun comes with packages. Put simply, packages extend the functionality of R to do just about anything users can dream of. Don't believe me? Check out all 8,153 of the packages currently available (as of 26/03/2016). At their core, R packages are just libraries of specially created R functions. Rather than making an R function and keeping it for the good of one, programmers in the R community share their functions by packaging them up and sharing them on the Comprehensive R Archive Network (CRAN). Not all packages will come in handy for beginners, so I've listed some that are integral to any work in R, whether you're a newcomer or a PhD-holding statistician.

Learning

swirl: You can learn R right within RStudio using this package. Lessons take 15-20 minutes, so you're guaranteed to walk away having learned something, even if only on a coffee break.

Manipulation

tidyr, AKA "Tidy R": Cleans up datasets. This package was made by the developers of RStudio. As RStudio describes, tidyr lets users easily manipulate datasets by categorizing the columns of your data. Performing statistical analysis on those columns then becomes a cinch.

dplyr: Goes hand in hand with tidyr. Easily creates data tables (think Excel tables), more frequently referred to by the R community as data frames.

Visualization

ggplot2: Considered one of the most important visualization packages in all of R. Its syntax can be a little scary, but once you see a couple of examples, it can be fully utilized to make great visualizations. This really is the R community's visual gold standard.

htmlwidgets: Allows users to make visualizations that can then be easily shared on the web. htmlwidgets is used by a bevy of other packages. You can see them all at this link.

shiny: Interested in making a web app from your analysis, but without skills in HTML, CSS, or JavaScript? If so, shiny is for you. Also from the makers of RStudio, shiny has developed quite a community. Its site is chock full of documentation to help get you started.

LightningR: An up-and-coming visualization tool that I've worked with in the past. Lightning's visualizations utilize the best technology in web graphics, and their gallery of visualizations speaks for itself.
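To give you a taste of how a couple of these packages fit together, here's a minimal sketch. It isn't from the original article: it uses R's built-in mtcars dataset so it runs anywhere, installing and loading dplyr and ggplot2, summarizing a data frame, and drawing a quick plot.

# Install once, then load at the start of each session
install.packages(c("dplyr", "ggplot2"))
library(dplyr)
library(ggplot2)

# Summarize the built-in mtcars data frame: average mpg per cylinder count
mtcars %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg))

# A simple scatter plot of car weight against fuel efficiency
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point()

If dplyr's %>% pipe looks odd at first, don't worry: it simply feeds the result of one expression into the next function, and it becomes second nature quickly.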
The R packages listed above are just a few of my favorites, and they're especially good for starting out. Doing anything with R for the first time can be challenging, so limiting the number of packages you use is important. Keep it simple!

Installation

Installing and utilizing packages is an easy three-step process:

1. Install
2. Include
3. Use

To install, enter the command install.packages("<package_name>"), where <package_name> is the name of your package. Next, load the package using the command library(<package_name>). At this point, any functions within the installed package are ready to use. Call a function by typing <function>(), where <function> is the function name. When it comes to utilizing packages, documentation is your best friend. Luckily, any package available on CRAN will have documentation, and perhaps even its own site!

About the Author

Peter Shultz is a student at the University of Michigan, studying computer science.

Employee or Freelance? What Skill Up 2016 reveals you should do

Sam Wood
19 Jul 2016
3 min read
At some point in their professional career, almost every developer has asked themselves the same question - corporate or freelance? Does it make more sense to work a salaried full-time position, or to strike off by oneself into the world of clients and being your own boss? We were wondering the same thing - so we took a dive into the Skill Up 2016 data to find out.

Go Freelance, or work for the Man?

To keep things simple, our analysis focused on results from the Anglosphere - in particular, the United Kingdom and the USA. We ran a comparison of the salaries of full-time workers versus freelancers, charted below as the cumulative distribution of stated salaries in the UK and the US - and it shows an interesting trend. Below the $100,000 mark it pays much better to work in a full-time position. However, in the upper range of salaries, the top 40% of freelancers and contractors get paid significantly more than their peers 'on payroll'.

Where's best to work freelance?

What industries are these highly paid freelancers working in to earn such great salaries? If you just look at the mean salary, the usual 'top three' industries of Insurance, Healthcare and Banking come out on top. But what do we see if we look at median income? Banking, Healthcare and Insurance are all still ranked - but it's Cyber Security that comes out on top by a fair margin. This no doubt reflects the massive demand for, and much greater importance given to, Cyber Security across industries in the last few years. If you're looking for a lucrative career in tech whilst still being your own boss, it looks like you might want to learn pentesting.

We also compared UK freelancers and full-time workers against the same in the US, to find out which country is more rewarding of freelance work. In the UK, it generally pays to work full time for a company - but in the US, higher salaries can be earned by taking the leap and going freelance.

How to find freelance and contracting work

So, how do successful freelancers and contractors find new clients and new jobs? We asked our respondents how they came across their contract work, and picked out the common themes. About 50% of freelancers find their work through personal networking (it always pays to have friends), while the other half make use of popular freelancing sites. When analysed by age, younger freelancers overwhelmingly favored finding work through online services such as Freelancer.com and Upwork. These services are a good way to develop the personal network they can then rely on later in their careers, as their more experienced peers do.

OpenCV and Android: Making Your Apps See

Raka Mahesa
07 Jul 2016
6 min read
Computer vision might sound like an exotic term, but it's actually a piece of technology that you can easily find in your daily life. You know how Facebook can automatically tag your friends in a photo? That's computer vision. Have you ever tried Google Image Search? That's computer vision too. Even the QR code reader app on your phone employs some sort of computer vision technology.

Fortunately, you don't have to conduct your own research to implement computer vision, since the technology is easily accessible in the form of SDKs and libraries. OpenCV is one of those libraries, and it's open source too. OpenCV focuses on real-time computer vision, so it feels very natural when the library is extended to Android, a device that usually has a camera built in. However, if you're looking to implement OpenCV in your app, you will find the official documentation for the Android version lagging a bit behind the ever-evolving Android development environment. But don't worry; this post will help you with that. Together we're going to add the OpenCV Android library to your app and use some of its basic functions.

Requirements

Before you get started, let's make sure you have all of the following:

Android Studio v1.2 or above
Android 4.4 (API 19) SDK or above
OpenCV for Android library v3.1 or above
An Android device with a camera

Importing the OpenCV Library

All right, let's get started. Once you have downloaded the OpenCV library, extract it and you will find a folder named "sdk" in it. This "sdk" folder should contain folders called "java" and "native". Remember the location of these two folders, since we will get back to them soon enough.

Now create a new project with a blank activity in Android Studio. Make sure to set the minimum required SDK to API 19, which is the lowest version compatible with the library. Next, import the OpenCV library. Open the File > New > Import Module... menu and point it to the "java" folder mentioned earlier, which will automatically copy the Java library to your project folder.

Now that you have added the library as a module, you need to link the Android project to the module. Open the File > Project Structure... menu and select app. On the Dependencies tab, press the + button, choose Module Dependency, and select the OpenCV module from the list that pops up.

Next, you need to make sure that the module will be built with the same settings as your app. Open the build.gradle scripts for both the app and the OpenCV module. Copy the SDK version and tools version values from the app's gradle script to the OpenCV gradle script. Once that's done, sync the gradle scripts and rebuild the project. Here are the values from my gradle script, but yours may differ based on the SDK version you use:

compileSdkVersion 23
buildToolsVersion "23.0.0 rc2"

defaultConfig {
    minSdkVersion 19
    targetSdkVersion 23
}

To finish importing OpenCV, you need to add the C++ libraries to the project. Remember the "native" folder mentioned earlier? There should be a folder named "libs" inside it. Copy the "libs" folder to the <project-name>/OpenCVLibrary/src/main folder and rename it to "jniLibs" so that Android Studio will know that the files inside that folder are C++ libraries. Sync the project again, and now OpenCV should be properly imported into your project.

Accessing the Camera

Now that you're done importing the library, it's time for the next step: accessing the device's camera.
The OpenCV library has its own camera UI that you can use to easily access the camera data, so let's use that. To do so, simply replace the layout XML file for your main activity with this one. Then you'll need to ask permission from the user to access the camera. Add the following line to the app manifest:

<uses-permission android:name="android.permission.CAMERA"/>

And if you're building for Android 6.0 (API 23), you will need to ask for permission inside the app. Add the following line to the onCreate() function of your main activity:

requestPermissions(new String[] { Manifest.permission.CAMERA }, 1);

There are two things to note about the camera UI from the library. First, by default it will not show anything unless it's activated in the app by calling the enableView() function. And second, in portrait orientation the camera will display a rotated view. Fixing the latter is quite a hassle, so let's just lock the app to landscape orientation.

Using the OpenCV Library

With the preparation out of the way, let's start actually using the library. Here's the code for the app's main activity if you want to see how the final version works. To use the library, initialize it by calling the OpenCVLoader.initAsync() method in the activity's onResume() method. This way the activity will check that the OpenCV library has been initialized every time the app is about to use it.

//Create callback
protected LoaderCallbackInterface mCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        //If not success, call base method
        if (status != LoaderCallbackInterface.SUCCESS)
            super.onManagerConnected(status);
        else {
            //Enable camera if connected to library
            if (mCamera != null)
                mCamera.enableView();
        }
    }
};

@Override
protected void onResume() {
    //Super
    super.onResume();

    //Try to init
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mCallback);
}

The initialization process checks whether your phone already has the full OpenCV library. If it doesn't, it will automatically open the Google Play page for the OpenCV Manager app and ask the user to install it. And if OpenCV has been initialized, it simply activates the camera for further use.

If you noticed, the activity implements the CvCameraViewListener2 interface. This interface gives you access to the onCameraFrame() method, a function that allows you to read the image the camera is capturing and to return the image the interface should display. Let's try some simple image processing and show the result on the screen.

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    //Get edge from the image
    Mat result = new Mat();
    Imgproc.Canny(inputFrame.rgba(), result, 70, 100);

    //Return result
    return result;
}

Imgproc.Canny() is an OpenCV function that performs Canny edge detection, a process that detects all the edges in a picture. As you can see, it's pretty simple: you put the image from the camera (inputFrame.rgba()) into the function, and it returns another image showing only the edges. Here's what the app's display will look like.

And that's it! You've implemented a pretty basic feature from the OpenCV library in an Android app. There are still many image processing features that the library has, so check out this exhaustive list of features for more.
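Swapping in a different Imgproc call is all it takes to change the effect. As a rough sketch (this variant is mine, not from the article, and the kernel size and sigma are arbitrary picks), you could blur a grayscale view of the frame instead:

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    //The frame also exposes a single-channel grayscale view
    Mat gray = inputFrame.gray();

    //Soften the image with a Gaussian blur (15x15 kernel, auto sigma)
    Mat result = new Mat();
    Imgproc.GaussianBlur(gray, result, new Size(15, 15), 0);

    //Return result
    return result;
}

This assumes org.opencv.core.Size is imported alongside the other OpenCV classes you're already using.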
Good luck!

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Containerized Data Science with Docker

Darwin Corn
03 Jul 2016
4 min read
So, you're itching to begin your journey into data science but you aren't sure where to start. Well, I'm glad you found this post, since I'll walk through, step by step, how I circumvented the unnecessarily large technological barrier to entry and got my feet wet, so to speak.

Containerization in general, and Docker in particular, has taken the IT world by storm in the last couple of years by making LXC containers more than just VM alternatives for the enterprising sysadmin. Even if you're coming at this post from a world devoid of IT, the odds are good that you've heard of Docker and its cute whale mascot. Of course, now that Microsoft is on board the containerization bandwagon and a consortium of bickering stakeholders has formed, you know that container tech is here to stay. Yes, FreeBSD has had the concept of 'jails' for almost two decades now. But thanks to Docker, container tech is now usable across the big three of Linux, Windows and Mac (if a bit hack-y in the case of the latter two), and today we're going to use its strengths in an exploration of the world of data science.

Now that I have your interest piqued, you're wondering where the two intersect. Well, if you're like me, you've looked at the footprint of RStudio and the nightmare maze of dependencies of IPython and 'noped' right out of there. Thanks to containers, these problems are solved! With Docker, you can limit the amount of memory available to the container, and the way containers are constructed ensures that you never have to troubleshoot broken dependencies after an update ever again.

So let's install Docker, which is as straightforward as using your package manager on Linux, or downloading Docker Toolbox and running the installer on a Mac or Windows PC. The instructions that follow are tailored to a Linux installation, but are easily adapted to Windows or Mac as well. On those two platforms you can even bypass these CLI commands and use Kitematic, or so I hear.

Now that you have Docker installed, let's look at some ways it can facilitate our journey into data science. First, we are going to pull the Jupyter Notebook container so that you can work with that language-agnostic tool.

# docker run --rm -it -p 8888:8888 -v "$(pwd):/notebooks" jupyter/notebook

The -v "$(pwd):/notebooks" flag mounts the current directory to the /notebooks directory in the container, allowing you to save your work outside the container. This matters because you're using the container as a temporary working environment: the --rm flag ensures that the container is destroyed when it exits, so if you rerun that command to get back to work after turning off your computer, for instance, the container will be replaced with an entirely new one. The volume mount gives the container access to a folder on the local filesystem, ensuring that your work survives the casually disposable nature of development containers.

Now go ahead and navigate to http://localhost:8888, and let's get to work. You did bring a dataset to analyze in a notebook, right? The actual nuts and bolts of data science are beyond the scope of this post, but for a quick intro to data and learning materials, I've found Kaggle to be a great resource.

While we're at it, let's look at that other issue I mentioned previously: the application footprint. Recently a friend of mine convinced me to use R, and I was enjoying working with the language until I got my hands on some real data and immediately felt the pain of an application not designed for endpoint use. I ran a regression and it locked up my computer for minutes! Fortunately, you can use a container to isolate R and feed it only limited resources, keeping the rest of the computer happy.

# docker run -m 1g -ti --rm r-base

This command drops you into an interactive R CLI that should keep even the leanest of modern computers humming along without a hiccup. Of course, you can also use the -c and --blkio-weight flags to restrict access to CPU and disk I/O resources respectively, if limiting the container to 1 GB of RAM wasn't enough.
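As a sketch of what that looks like (the specific numbers here are arbitrary picks, and -c is shorthand for --cpu-shares), combining all three limits gives you something like:

# docker run -m 1g -c 512 --blkio-weight 300 -ti --rm r-base

Here the container gets 1 GB of RAM, half the default CPU share weight of 1024, and a block I/O weight of 300 on Docker's 10-1000 scale (the default is 500).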
So: a program installation and a command or two (or a couple of clicks in the Kitematic GUI), and we're off and running with data science, with none of the typical headaches.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.