
Tech Guides


Testing with Unity

Travis Scott
14 Oct 2015
6 min read
What is Testing?

Traditionally in software development, testing plays an integral role in both the maintainability and quality of the product. Of course, in game development, user acceptance testing is performed frequently. Each time you play the game you check whether your newly added creation works the way you intended -- this is user acceptance testing. While this is great, game development testing frequently centers only on "user" centric testing principles. Two testing levels that are often left out are unit and integration testing. We're going to focus on unit testing, as that's frequently the first step of automated testing.

Unit testing, for our purposes, is an automated test which verifies the functionality of a specific section of code. There is some debate over whether this should be a method, or whether the test should have an even smaller scope. I understand both arguments, and personally agree with the second, yet usually find myself resorting to the first. The reason for this is the nature of a unit test. Our goal is to give our test an input, and expect a certain output or result. The simplest way to give an input and get an output? Methods.

So up to this point, why have games not utilized testing as much as enterprise-level software? Well, many large game companies have already taken this route, while small indie teams frequently don't have the budget, or the longevity, to do it. But by testing from the start of your project, and using TDD (Test-Driven Development) processes, you'll find testing becomes a natural starting point for a new feature. So how do we do this?

Unity Test Tools

Unity has released an official testing library. For the sake of this demo, we'll be using standard C# for development. The testing library can be found in the Asset Store, under Unity Test Tools. Once you have added the package to your project, you can begin writing and running your tests.

Example Test

To write the tests, we're going to start with a basic script. Let's make a new script and call it Cat. The file will have the following code:

    public class Cat {
        private int lives;

        public Cat() {
            this.lives = 9;
        }

        public void useLife() {
            // A life can only be used while the cat still has one
            if (lives > 0) {
                lives--;
            }
        }

        public int getLives() {
            return lives;
        }

        public bool IsAlive() {
            return lives > 0;
        }
    }

Nice and simple. We can use one of the cat's lives, get the current number of lives, and check if our cat is alive. Now, in traditional TDD, we should have written the test first. Since I'm assuming as a reader you may never have seen testing, I prefer you understand what we're testing first. Our goal here is to make sure our logic is working as expected. No matter how simple the logic may seem, we as programmers know that we still make many mistakes. Because we do make mistakes, writing the test before the logic can be a great benefit to us. That way, you won't let a small bug last in your code. The earlier we can catch a bug, the less of a problem it is.

So let's take a look at a test object. To create a test, create an Editor folder in your Assets. Inside the Editor folder create a C# file called CatTest.cs.
Inside, we will write:

    using NUnit.Framework;

    [TestFixture]
    [Category("Testing Cat")]
    public class CatTest {

        Cat cat;

        [SetUp]
        public void Init() {
            cat = new Cat();
        }

        [Test]
        public void DoesUsingLifeReduceTheCatsLivesByOne() {
            int currentLives = cat.getLives();
            cat.useLife();
            Assert.AreEqual(currentLives - 1, cat.getLives());
        }

        [Test]
        public void ReducingACatsLivesBelowZeroHasNoImpact() {
            EmptyCatLives();
            cat.useLife();
            Assert.AreEqual(0, cat.getLives());
        }

        [Test]
        public void CatIsAliveWhileItHasLives() {
            Assert.IsTrue(cat.IsAlive());
        }

        [Test]
        public void ReducingCatsLivesToZeroWillMakeIsAliveFalse() {
            EmptyCatLives();
            Assert.IsFalse(cat.IsAlive());
        }

        // Helper: drain the cat's lives until none remain
        public void EmptyCatLives() {
            while (cat.getLives() > 0) {
                cat.useLife();
            }
        }
    }

Finally, to run our tests, there should be a Unity Test Tools dropdown in the navigation bar at the top of the Unity pane. Select it, and choose Run Unit Tests. Once the small window opens, click play.

An important piece of any project's testing is its method names. These names are frequently used to describe each test's purpose. This is actually a great way to introduce a new team member to the project, as it gives new members a way to walk through your code step by step, understanding what the purpose of its functions is. In our test, the TestFixture and Category attributes identify and describe our script's tests. The [SetUp] method will be run before each test; we use it to initialize our Cat, so that we know each test starts off with a clean instantiation of Cat. The four methods marked [Test] are our tests. They verify that the code written in our script returns what we expect it to.

For example, assume a new team member comes to our project and believes that a cat truly loses two lives at a time. Out of excitement, they change your code without even consulting you. Assuming they run the tests as expected, you or this new team member will then be notified that something isn't right. The numbers aren't lining up to what they should be. You've caught this bug before it's even gotten dangerous.

In the enterprise world, it's the norm for many team members to come and go on projects. Games are no exception, and by having safeguards like unit and integration testing, you'll be able to catch those small bugs before they get too far. I recommend further reading on Unity tests, and testing in general, as this just scratched the surface!

About the Authors

Denny is a Mobile Application Developer at Canadian Tire Development Operations. While working, Denny regularly uses Unity to create in-store experiences, but also works with other technologies like Famous, Phaser.IO, LibGDX, and CreateJS when creating game-like apps. He also enjoys making non-game mobile apps, but who cares about that, am I right?

Travis is a Software Engineer, living in the bitter region of Winnipeg, Canada. His work and hobbies include game development with Unity or Phaser.IO, as well as mobile app development. He can enjoy a good video game or two, but only if he knows he'll win!


A Brief History of Python

Sam Wood
14 Oct 2015
4 min read
From data to web development, Python has come to stand as one of the most important and most popular open source programming languages in use today. But whilst some see it as almost a new kid on the block, Python is actually older than Java, R, and JavaScript. So what are the origins of our favorite open source language?

In the beginning...

Python's origins lie way back in distant December 1989, making it the same age as Taylor Swift. It was created by Guido van Rossum (the Python community's Benevolent Dictator for Life) as a hobby project to work on during the week around Christmas, and is famously named not after the constrictor snake but rather the British comedy troupe Monty Python's Flying Circus. (We're quite thankful for this at Packt - we have no idea what we'd put on the cover if we had to pick for 'Monty' programming books!)

Python was born out of the ABC language, a terminated project of the Dutch CWI research institute that van Rossum worked for, and the Amoeba distributed operating system. When Amoeba needed a scripting language, van Rossum created Python. One of the principal strengths of this new language was how easy it was to extend, and its support for multiple platforms - a vital innovation in the days of the first personal computers. Capable of communicating with libraries and differing file formats, Python quickly took off.

Computer Programming for Everybody

Python grew throughout the early nineties, acquiring lambda, reduce(), filter() and map() functional programming tools (supposedly courtesy of a Lisp hacker who missed them and thus submitted working patches), keyword arguments, and built-in support for complex numbers.

During this period, Python also served a central role in van Rossum's Computer Programming for Everybody initiative. The CP4E's goal was to make programming more accessible to the 'layman' and encourage a basic level of coding literacy as essential knowledge alongside English literacy and math skills. Because of Python's focus on clean syntax and accessibility, it played a key part in this. Although CP4E is now inactive, learning Python remains easy, and Python is one of the most common languages that new would-be programmers are pointed at to learn.

Going Open with 2.0

As Python grew in the nineties, one of the key issues in uptake was its continued dependence on van Rossum. 'What if Guido was hit by a bus?' Python users lamented, 'or if he dropped dead of exhaustion or if he is rubbed out by a member of a rival language following?'

In 2000, Python 2.0 was released by the BeOpen Python Labs team. The ethos of 2.0 was much more open and community-oriented in its development process, with much greater transparency. Python moved its repository to SourceForge, granting write access to its CVS tree to more people and offering an easy way to report bugs and submit patches. As the release notes stated, 'the most important change in Python 2.0 may not be to the code at all, but to how Python is developed'.

Python 2.7 is still used today - and will be supported until 2020. But the word from development is clear - there will be no 2.8. Instead, support remains focused upon 2.7's usurping younger brother - Python 3.

The Rise of Python 3

In 2008, Python 3 was released on an almost-unthinkable premise - a complete overhaul of the language, with no backwards compatibility. The decision was controversial, and born in part of the desire to clean house on Python.
There was a great emphasis on removing duplicative constructs and modules, to ensure that in Python 3 there was one - and only one - obvious way of doing things. Despite the introduction of tools such as '2to3' that could quickly identify what would need to be changed in Python 2 code to make it work in Python 3, many users stuck with their classic codebases. Even today, there is no assumption that Python programmers will be working with Python 3. Despite flame wars raging across the Python community, Python 3's future ascendancy was something of an inevitability. Python 2 remains a supported language (for now). But as much as it may still be the default choice of Python, Python 3 is the language's future.

The Future

Python's userbase is vast and growing - it's not going away any time soon. Utilized by the likes of Nokia, Google, and even NASA for its easy syntax, it looks to have a bright future ahead of it, supported by a huge community of OS developers. Its support for multiple programming paradigms, including object-oriented Python programming, functional Python programming, and parallel programming models, makes it a highly adaptive choice - and its uptake keeps growing.


Python’s new asynchronous statements and expressions

Daniel Arbuckle
12 Oct 2015
5 min read
As part of Packt's Python Week, Daniel Arbuckle, author of our Mastering Python video, explains Python's journey from generators, to schedulers, to cooperative multithreading and beyond.

My romance with cooperative multithreading in Python began in the December of 2001, with the release of Python version 2.2. That version of Python contained a fascinating new feature: generators. Fourteen years later, generators are old hat to Python programmers, but at the time, they represented a big conceptual improvement. While I was playing with generators, I noticed that they were in essence first-class objects that represented both the code and state of a function call. On top of that, the code could pause an execution and then later resume it. That meant that generators were practically coroutines! I immediately set out to write a scheduler to execute generators as lightweight threads. I wasn't the only one!

While the schedulers that I and others wrote worked, there were some significant limitations imposed on them by the language. For example, back then generators didn't have a send() method, so it was necessary to come up with some other way of getting data from one generator to another. My scheduler got set aside in favor of more productive projects.

Fortunately, that's not where the story ends. With Python 2.5, Guido van Rossum and Phillip J. Eby added the send() method to generators, turned yield into an expression (it had been a statement before), and made several other changes that made it easier and more practical to treat generators as coroutines, and combine them into cooperatively scheduled threads. Python 3.3 added yield from expressions, which didn't make much of a difference to end users of cooperative coroutine scheduling, but made the internals of the schedulers dramatically simpler.

The next step in the story is Python 3.4, which included the asyncio coroutine scheduler and asynchronous I/O package in the Python standard library. Cooperative multithreading wasn't just a clever trick anymore. It was a tool in everyone's box.

All of which brings us to the present, and the recent release of Python 3.5, which includes an explicit coroutine type, distinct from generators; new async def, async for, and async with statements; and an await expression that takes the place of yield from for coroutines.

So, why does Python need explicit coroutines and new syntax, if generator-based coroutines had gotten good enough for inclusion in the standard library? The short answer is that generators are primarily for iteration, so using them for something else -- no matter how well it works conceptually -- introduces ambiguities. For example, if you hand a generator to Python's for loop, it's not going to treat it as a coroutine; it's going to treat it as an iterable.

There's another problem, related to Python's special protocol methods, such as __enter__ and __exit__, which are called by the code in the Python interpreter, leaving the programmer with no opportunity to yield from them. That meant that generator-based coroutines were not compatible with various important bits of Python syntax, such as the with statement. A coroutine couldn't be called from anything that was called by a special method, whether directly or indirectly, nor was it possible to wait on a future value. The new changes to Python are meant to address these problems.

So, what exactly are these changes? async def is used to define a coroutine.
Apart from the async keyword, the syntax is almost identical to a normal def statement. The big differences are, first, that coroutines can contain await, async for, and async with syntax, and, second, that they are not generators, so they're not allowed to contain yield or yield from expressions. It's impossible for a single function to be both a generator and a coroutine.

await is used to pause the current coroutine until the requested coroutine returns. In other words, an await expression is just like a function call, except that the called function can pause, allowing the scheduler to run some other thread for a while. If you try to use an await expression outside of an async def, Python will raise a SyntaxError.

async with is used to interface with an asynchronous context manager, which is just like a normal context manager except that instead of __enter__ and __exit__ methods, it has __aenter__ and __aexit__ coroutine methods. Because they're coroutines, these methods can do things like wait for data to come in over the network, without blocking the whole program.

async for is used to get data from an asynchronous iterable. Asynchronous iterables have an __aiter__ method, which functions like the normal __iter__ method, but can participate in coroutine scheduling and asynchronous I/O. __aiter__ should return an object with an __anext__ coroutine method, which can participate in coroutine scheduling and asynchronous I/O before returning the next iterated value, or raising StopAsyncIteration.

This being Python, all of these new features, and the convenience they represent, are 100% compatible with the existing asyncio scheduler. Further, as long as you use the @asyncio.coroutine decorator, your existing asyncio code is also forward compatible with these features without any overhead. A short sketch of the new syntax in action follows below.
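To make the new statements concrete, here is a minimal, illustrative sketch (not taken from the article) showing async def, await, and async for working together under the asyncio scheduler. The Ticker class, the consume() coroutine, and the 0.1-second delay are invented purely for demonstration; note that modern Python (3.5.2 and later) expects __aiter__ to be a regular method returning the iterator, as written here.

    import asyncio

    class Ticker:
        """A hypothetical asynchronous iterable: counts up to `limit`,
        pausing between values the way a real data source might."""

        def __init__(self, limit):
            self.limit = limit
            self.current = 0

        def __aiter__(self):
            return self

        async def __anext__(self):
            if self.current >= self.limit:
                raise StopAsyncIteration
            await asyncio.sleep(0.1)  # await: hand control back to the scheduler
            self.current += 1
            return self.current

    async def consume(name):
        # async for drives the asynchronous iterator defined above
        async for value in Ticker(3):
            print(name, "got", value)

    loop = asyncio.get_event_loop()
    # Run two coroutines cooperatively on a single thread
    loop.run_until_complete(asyncio.gather(consume("a"), consume("b")))
    loop.close()

Each await asyncio.sleep() call is a point where the coroutine pauses and the event loop is free to advance the other consumer, which is exactly the cooperative scheduling described above.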


Application Flow With Generators

Wesley Cho
07 Oct 2015
5 min read
Oftentimes, developers like to fall back to using events to enforce the concept of a workflow: a rigid diagram of business logic that branches according to the application state and/or user choices (or, in pseudo-formal terms, a tree-like uni-directional flow graph). This graph may contain circular flows until the user meets the criteria to continue. One example of this is user authentication to access an application, where a natural circular logic arises of returning back to the login form until the user submits a correct user/password combination - upon login, the application may decide to display a bunch of welcome messages pointing to various pieces of functionality & giving quick explanations.

However, eventing has a problem - it doesn't centralize the high-level business logic with obvious branching, so in an application of moderate complexity, developers may scramble to figure out what callbacks are supposed to trigger when, and in what order. Even worse, the logic can be split across multiple files, creating a multifile spaghetti that makes it hard to find the meatball (or other preferred food of reader interest). It can be a taxing operation for developers, which in turn hampers productivity.

Enter Generators

Generators are an exciting new feature in ES6 that is mysterious to more inexperienced developers. They allow one to create a natural loop that blocks up until the yielded expression, which has some natural applications such as in pagination for handling infinite scrolling lists such as a Facebook newsfeed or Twitter feed. They are currently available in Chrome (and thus io.js, as well as Node.js via --harmony flags) & Firefox, but can be used in other browsers & io.js/Node.js through transpilation of JavaScript from ES6 to ES5 with excellent transpilers such as Traceur or Babel. If you want generator functionality, you can use regenerator.

One nice feature of generators is that they are a central source of logic, and thus naturally fit the control structure we would like for encapsulating high-level application logic. Here is one example of how one can use generators along with promises:

    var User = function () { … };

    User.authenticate = function* authenticate() {
      var self = this;
      while (!this.authenticated) {
        yield function login(user, password) {
          return self.api.login(user, password)
            .then(onSuccess, onFailure);
        };
      }
    };

    function* InitializeApp() {
      yield* User.authenticate();
      yield Router.go('home');
    }

    var App = InitializeApp();

    var Initialize = {
      next: function (step, data) {
        switch (step) {
          case 'login':
            return App.next();
          ...
        }
        ...
      }
    };

Here we have application logic that first tries to authenticate the user; then we can implement a login method elsewhere in the application:

    function login(user, password) {
      return App.next().value(user, password)
        .then(function success(data) {
          return Initialize.next('login', data);
        }, function failure(data) {
          return handleLoginError(data);
        });
    }

Note the role of the Initialize object - it has a next key whose value is a function that determines what to do next in tandem with App, the "instantiation" of a new generator. In addition, it makes use of the fact that we can choose what to yield from the second generator, which lets us yield a function that can be used to pass data in and attempt to log in a user. On success, it will set the authenticated flag to true, which in turn will break the user out of the User.authenticate part of the InitializeApp generator, and into the next step, which in this case is to route the user to the homepage.
In this case, we are blocking the user from normally navigating to the homepage upon application boot until they complete the authentication step.

Explanation of differences

The important piece in this code is the InitializeApp generator. Here we have a centralized control structure that clearly displays the high-level flow of the application logic. If one knows that there is a particular piece that needs to be modified due to a bug, such as in the authentication piece, it becomes obvious that one must start looking at User.authenticate, and any piece of code that is directly concerned with executing it. This allows the methods to be split off into the appropriate sectors of the codebase, similarly to event listeners & the callbacks that get fired on reception of the event, except we have the additional benefit of seeing what the high-level application state is. If you aren't interested in using generators, this can be replicated with promises as well.

There is a caveat to this approach though - using generators or promises hardcodes the high-level application flow. It can be modified, but these control structures are not as flexible as events. This does not negate the benefits of using events, but it gives another powerful tool to help make long-term maintenance easier when designing an application that potentially may be maintained by multiple developers who have no prior knowledge.

Conclusion

Generators have many use cases that most developers working with JavaScript are not familiar with currently, and it is worth taking the time to understand how they work. Combining them with existing concepts allows some unique patterns to arise that can be of immense use when architecting a codebase, especially with possibilities such as encapsulating branching logic into a more readable format. This should help reduce mental burden, and allow developers to focus on building out rich web applications without the overhead of worrying about potentially missing business logic.

About the Author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features & bug fixes and reported numerous issues to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, as well as authored several libraries.


The OpenStack guide that helps train CERN’s staff

Tim Bell
07 Oct 2015
4 min read
What is OpenStack?

OpenStack is one of the great success stories of open source technology. Born from Rackspace and NASA, it has been developed by scores of people across the globe as one of the most powerful and popular cloud operating stacks. Backed by some of the leading players in the cloud space today, OpenStack is widely deployed, from small businesses to huge multi-million research projects alike.

What is CERN?

At CERN, the European Organization for Nuclear Research, physicists and engineers are probing the fundamental structure of the universe. They use the world's largest and most complex scientific instruments to study the basic constituents of matter - the fundamental particles. The particles are made to collide together at close to the speed of light. The process gives the physicists clues about how the particles interact, and provides insights into the fundamental laws of nature.

The Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. It consists of a 27-kilometre ring of superconducting magnets with a number of structures to accelerate particles to close to the speed of light before they are made to collide. This produces 27 petabytes of data every year - all of which is recorded in the CERN data centre and analysed thanks to the Worldwide LHC Computing Grid.

OpenStack at CERN

In 2015 the LHC was upgraded to nearly double the collision energy. It was clear to us that further computing resources were needed. To provide the additional capacity and be more responsive to the users, a new approach was needed. In 2012, a small team at CERN started looking at OpenStack for creating computing clouds. It was a very promising technology with an enthusiastic community, but a significant level of complexity.

The OpenStack Cookbook - Accelerating training for CERN cloud administrators

Along with the code being very new, it was very early days for the OpenStack documentation and training. We wanted to educate people rapidly to start the project, and so looked for guides to get the new administrators productive. This was when we encountered the first edition of the OpenStack Cloud Computing Cookbook. It became the standard document for newcomers to the team to understand the concepts, set up their first clouds and then start work on the CERN cloud. As the cloud evolved and the OpenStack technology matured, we continued to use the guide as members of the team rotated, building small clouds to try out new concepts and investigate the flexibility of cloud computing.

Over the years, I have frequently met with book authors Kevin, Cody and Egle at OpenStack summits, which give the community an opportunity to meet and exchange experiences. With OpenStack evolving so rapidly, it also gives an opportunity to get the latest editions of the Cookbook, which they have continued to keep up to date.

The CERN cloud is now in production across two data centres in Geneva and Budapest, with over 4,000 servers running tens of thousands of virtual machines. With new staff members joining frequently, we continue to use the Cookbook as a key part of the team's training and look forward to the updates in the latest edition.

From 4th to the 10th April, save 50% on our very best cloud titles. Find them here!

Get the OpenStack Cloud Computing Cookbook Today

If you're interested in the practical, hands-on OpenStack guidance that has helped train CERN's engineers to work on the CERN cloud, you can pick up the latest edition now direct from Packt Publishing.
Available in both eBook and print, find out today why this guide is invaluable for setting up an OpenStack cloud for anything from a small enterprise to a huge research endeavour such as CERN.

About the Author

Tim Bell is infrastructure manager at CERN, an OpenStack Foundation board member, and author of the foreword to the OpenStack Cloud Computing Cookbook - Third Edition, from Packt Publishing.


Is OpenStack a ‘Jack of All Trades, Master of None’ Cloud Solution?

Richard Gall
06 Oct 2015
4 min read
OpenStack looks like all things to all people. Casually perusing their website you can see that it emphasises the cloud solution's high-level features – 'compute', 'storage' and 'networking'. Each one of these is aimed at different types of users, from development teams to sysadmins. Its multi-faceted nature makes it difficult to define OpenStack – is it, we might ask, Infrastructure or Software as a Service? And while OpenStack's scope might look impressive on paper, ticking the box marked 'innovative', when it comes to actually choosing a cloud platform and developing a relevant strategy, it begins to look like more of a risk. Surely we'd do well to remember the axiom 'jack of all trades, master of none'?

Maybe if you're living in 2005 – even 2010. But in 2015, if you're frightened of the innovation that OpenStack offers (and, for those of you really lagging behind, cloud in general) you're missing the bigger picture. You're ignoring the fact that true opportunities for innovation and growth don't simply lie in faster and more powerful tools, but instead in more efficient and integrated ways of working. Yes, this might require us to rethink the ways we understand job roles and how they fit together – but why shouldn't technology challenge us to be better, rather than just make us incrementally lazier?

OpenStack's multifaceted offering isn't simply an experiment that answers the somewhat masturbatory question 'how much can we fit into a single cloud solution', but is in fact informed (unconsciously, perhaps) by an agile philosophy. What's interesting about OpenStack – and why it might be said to lead the way when it comes to cloud – is what it makes possible. Small businesses and startups have already realized this (although this is likely through simple necessity as much as strategic decision making, considering how much cloud solutions can cost), but it's likely that larger enterprises will soon be coming to the same conclusion.

And why should we be surprised? Is the tiny startup really that different from the Fortune 500 corporation? Yes, larger organizations have legacy issues – with both technology and culture – but these are gradually being grown out, as we enter a new era where we expect something more dynamic from the tools and platforms we use. Which is where OpenStack fits in – no longer a curiosity or a 'science project', it is now the standard to which other cloud platforms are held. That it offers such impressive functionality for free (at a basic level) means that those enterprise solutions that once had a hold over the marketplace will now have to play catch up. It's likely they'll be using the innovations of OpenStack as a starting point for their own developments – which they will be hoping will keep them 'cutting-edge' in the eyes of customers.

If OpenStack is going to be the way forward, and the wider technical and business communities (whoever they are exactly) are to embrace it with open arms, there will need to be a cultural change in how we use it. OpenStack might well be the Jack of all trades and master of all when it comes to cloud, but it nevertheless places an emphasis on users to use it in the 'correct' way. That's not to say that there is a correct way – it's more about using it strategically and thinking about what you want from OpenStack. CloudPro articulates this in a good way, arguing that OpenStack needs a 'benevolent dictator'.
‘Are too many cooks spoiling the open-source broth?’ it asks, getting to the central problem with all open-source technologies – the existentially troubling fact that the possibilities are seemingly endless. This point doesn’t mean we need to step away from the collaborative potential of OpenStack; it emphasises that effective collaboration requires an effective and clear strategy. Orchestration is the aim of the game – whether you’re talking software or people. At this year’s OpenStack conference in Vancouver, COO Mark Collier described OpenStack as ‘an agnostic integration engine… one that puts users in the best position for success’. This is a great way to define OpenStack, and positions it as more than just a typical cloud platform. It is its agnosticism that is particularly crucial – it doesn’t take a position on what you do, it rather makes it possible for you to do what you think you need to do. Maybe, then, OpenStack is a jack of all trades that lets you become the master. For more OpenStack tutorials and extra content, visit our dedicated page. Find it here.

Keep Coding Creative! - Some Thoughts on National Coding Week

Richard Gall
24 Sep 2015
4 min read
Does coding need a national week? Just about everything has a day, week, or month dedicated to it, so why shouldn't one of the most important skills in today's economy have a week to promote itself? Yes, it's a great photo opportunity for a government obsessed with the bland mantra of 'innovation', but I'm not going to be cynical. Let's not let 'coding' be taken over by self-important entrepreneurs; instead, let's remind everyone of the immense possibilities and opportunities that learning to code opens up for everyone - from data analysts to designers.

There's a great story on Medium posted by the event's organizers on one of Codex DLD's (the team behind National Coding Week) success stories. Dwayne Murray explains how learning to code helped him through a difficult period, the skills he developed eventually landing him a job in the digital sector after being unemployed. While Dwayne's story has a nice personal dimension, as he talks about his difficulties finding a job after becoming unemployed while trying to support his partner and new baby, it's the broader picture of what he's doing that's particularly interesting. He says in the interview:

"We learnt how to write code in HTML & CSS, the basic fundamentals of design, digital marketing, social media and SEO. In fact, everything I learnt in those two weeks with Codex, I use every day in my current job. It was the perfect platform to start on my new career path. Even the website I designed and built during those two weeks is still live now! I'm actually thinking of developing it further in order to create a sideline business through affiliate marketing."

Coding is always about more than just code - it's about creating things that have a tangible, real-world value. For Dwayne, learning code has given him access to the rapidly expanding world of digital marketing. But we could also be talking about a whole range of jobs that require programming skills, from design to data modelling. If National Coding Week wants to accomplish anything, it needs to make people realise that coding is a creative activity, ultimately about producing new experiences and new ideas. It's about generating value.

This is essential, particularly if you want to get kids into coding. After all, if you want to inspire kids with what might simply appear to be nonsensical lines of text, you want to make it more like design, woodwork, and even cookery lessons - it certainly shouldn't be like my old IT lessons, where someone showed you how to use PowerPoint and navigate your way through spreadsheets. It might prepare you for the ennui of the modern workplace, but it doesn't show you what programming makes possible, and it certainly doesn't ignite enthusiasm.

But if you're already a developer, what relevance does National Coding Week actually have for you? It's great that people are talking about programming, but when you're dealing with design requests, tackling another deployment, debugging and fixing things you thought would work, the apparent glamour and even minor heroism with which your programming skills are presented in initiatives like these doesn't quite ring true. It might be different for kids - and indeed, anyone who hasn't ever coded before - but even if you're an accomplished and confident programmer, National Coding Week is a great reminder that there is always something new to learn - a new programming language perhaps, but even a new approach to a language, a new framework or library.
As an initiative for the uninitiated, National Coding Week promotes the wealth of opportunities that coding skills can bring and invites curiosity about the digital world; for programmers, it’s a time to rethink what you know about your skills, your job and what you create. Want to know what to learn next? Download and read our free Salary and Skills reports on Web Development & Design, App Development, Data Science and System Administration & Security. Find out what the future holds in your corner of tech – and how you can get ahead. National Coding Week runs from 21st to the 27th September 2015 in the UK. Learn more here, or follow @codingweek on Twitter.


What Tableau can do for your data

Joshua Milligan
10 Sep 2015
4 min read
Joshua Milligan is a Tableau Zen Master, trainer and senior data consultant at Teknion Data Solutions, a company dedicated to empowering people to leverage their data. He is also the author of the wildly successful Learning Tableau. In using Tableau, Joshua has gained great insight into all the different hats a data scientist needs to wear. Here he explains a bit about his craft and the range of skills you really need to master as a data scientist, becoming analyst, designer, architect, mentor and detective all at once, and how Tableau can help you be all of them.

Early in my career, when I was developing software, I remember one of my mentors telling me that everything was about the data. Lines of code were written to read data, move it, create it, and store it. I wrote a lot of lines of code and it really was all about the data. However, over the past decade, I've come to a new appreciation for the meaning of data and how being able to see and understand it opens up a whole new world of insight. Over the years I've done everything from data modeling, analysis, ETL, and data visualization. I've used a lot of different tools, systems, and platforms. But the tool that really ignited my passion for data is Tableau.

Tableau is a data visualization and analytics platform which allows me to build visualizations and interactive dashboards by dragging and dropping fields onto a canvas. The tool becomes transparent, and my attention can be focused on asking and answering questions through hands-on interaction with the data.

As a consultant, I'm asked to play a variety of roles and I use Tableau in almost all of them. Some of these roles include:

Data Detective: Tableau allows me to quickly connect to one or more data sources and rapidly explore. I'm especially interested in the quality of the data and discovering what kinds of questions can even be asked of a particular data set.

Data Analyst: I use Tableau's visualization and statistical capabilities to carefully examine data, looking for patterns, correlations, and outliers. I may start with an initial set of questions given to me by business users or executives. The analysis yields answers and additional questions for deeper understanding.

Dashboard Designer: Tableau allows me to turn my analysis into fully interactive dashboards. These can be anything, from simple KPI indicators allowing executives to make time-sensitive decisions to analytical dashboards allowing managers and team members to drill into the details.

Tableau Architect: Tableau truly gives everyone the ability to perform self-service business intelligence. I work with organizations to help craft strategies around Tableau so they can go from having individuals or departments who are overwhelmed with creating reports for the entire organization to instead opening up the data, in a secure and structured way, so that everyone can generate their own insights.

Trainer and Mentor: Tableau is so intuitive and transparent that business users can very quickly use it - often creating their first visualizations and dashboards in minutes. The greatest fulfillment in my career is sitting down with others, or teaching a class of people, to empower them to understand their data.

My experience with Tableau and my interaction with other users led me to write the book Learning Tableau.
My goal was to give readers, whether they were beginners or had been using Tableau for a few years, a solid foundation for understanding how Tableau works with data and how to use the tool to better understand their data.


Visit a 3D printing filament factory - 3dk.berlin

Michael Ang
02 Sep 2015
5 min read
Have you ever wondered where the filament for your 3D printer comes from and how it's made? I recently had the chance to visit 3dk.berlin, a local filament manufacturer in Berlin. 3dk.berlin distinguishes itself by offering a huge variety of colors for its filament. As a designer it's great to have a large palette of colors to choose from, and I chose 3dk filament for my Polygon Construction Kit workshop at Thingscon 2015 (they're sponsoring the workshop). Today we'll be looking at how one filament producer takes raw plastic and forms it into the colored filament you can use in your 3D printer.

[Image: Some of the many colors offered by 3dk.berlin]

3dk.berlin is located at the very edge of Berlin, in the area of Heiligensee, which is basically its own small town. 3dk is a family-owned business run by Volker Bernhardt as part of BERNHARDT Kunststoffverarbeitungs GmbH (that's German for "plastics processing company"). 3dk is focused on bringing BERNHARDT's experience with injection-moulded and extruded plastics to the new field of 3D printing.

Inside the factory, neutral-colored plastic pellets are mixed with colored "master batch" pellets and then extruded into filament. The extruding machine melts and mixes the pellets, then squeezes them through a nozzle, which determines the diameter of the extruded filament. The hot filament is run through a cool water bath and coiled onto large spools. Conceptually it's quite simple, but getting extremely consistent filament diameter, color and printing properties is demanding. Small details like air and moisture trapped inside the filament can lead to inconsistent prints. Bigger problems like material contamination can lead to a jammed nozzle in your printer. 3dk spent 1.5 years developing and fine-tuning their machine before they were satisfied with the results, to a German level of precision. They didn't let me take pictures of their extrusion machines, since some of their techniques are proprietary, but you can get a good view of a similar machine in this filament extrusion machine video.

[Image: Florian (no small guy himself) with a mega-spool from the extrusion machine]

The filament from the extrusion machine is wound onto 10kg spools - these are big! The filament from these large spools is then rewound onto smaller spools for sale to customers. 3dk tests their filament on a variety of printers in-house to ensure ongoing quality. Where we might do a small print of 20 grams to test a new filament, 3dk might do a "small" test of 2kg!

[Image: Test print with a full-size plant (about 4 feet tall)]

Why produce filament in Germany when cheaper filament is available from abroad? Florian Deurer from 3dk explained some of the benefits to me. 3dk gets their PLA base material directly from a supplier that does use additives. The same PLA is used by other manufacturers for items like food wrapping. The filament colorants come from a German supplier and are also "harmless for food". For the colorants in particular there might be the temptation for less scrupulous or regulated manufacturers to use toxic substances like heavy metals or other chemicals. Beyond safety and practical considerations like printing quality, using locally produced filament provides local jobs.

What really sets 3dk apart from other filament makers in an increasingly competitive field is the range of colors they produce. I asked Florian for some orange filament and he asked "which one?"
The colors on offer range from subtle (there's a whole selection of whites, for example) to more extreme bright colors and metallic effects. Designers will be happy to hear that they can order custom colors using the Pantone color standard (for orders of 5kg / 11lbs and up).

[Image: Which white would you like? Standard, milky, or pearl?]

Looking to the future of 3D printing, it will be great to see more environmentally friendly materials become available. The most popular material for home 3D printing right now is probably PLA plastic (the same material 3dk uses for most of their filament). PLA is usually derived from corn, which is an annually renewable crop. PLA is technically compostable, but this has to take place in industrial composting conditions at high temperature and humidity. People are making progress on recycling PLA and ABS plastic prints back into filament at home, but the machines to make this easy and more common are still being developed.

[Image: 100% recycled PLA print of Origamix_Rabbit by Mirice, printed on an i3 Berlin]

3dk offers a filament made from industrially recycled PLA. The color and texture of this material vary a little on the spool, but I found it to print very well in my first tests, and your object ends up a nice, slightly transparent olive green. I recently got a "sneak peek" at a filament 3dk is working on that is compostable under natural conditions. This filament is pre-production, so the specifications haven't been finalized, but Florian told me that the prints are stable under normal conditions but can break down when exposed to soil bacteria. The pigments also contain "nothing bad" and break down into minerals. The sample print I saw was flexible, with a nice surface finish and color. A future where we can manufacture objects at home and throw them onto our compost heap after giving them some good use sounds pretty bright to me!

[Image: A friendlier future for 3D printing? This print can naturally biodegrade]

About the Author

Michael Ang is a Berlin-based artist / engineer working at the intersection of technology and human experience. He is the creator of the Polygon Construction Kit, a toolkit for creating large physical polygons using small 3D-printed connectors. His Light Catchers project collects crowdsourced light recordings into a public light sculpture.


What is Docker and Why is it So Popular?

Julian Gindi
28 Aug 2015
6 min read
Docker is going to revolutionize how we approach application development and deployment; trust me. Unfortunately, it's difficult to get started, because the resources out there are either too technical or lacking in the intuition behind why Docker is amazing. I hope this post finds a nice balance between the two.

What is Docker?

Docker is a technology that wraps Linux's operating-system-level virtualization. This is an extremely verbose way of saying that Docker is a tool that quickly creates isolated environments for you to develop and deploy applications in. Another magical feature of Docker is that it allows you to run ANY Linux-based operating system within the container - offering even greater flexibility. If you need to run one application with CentOS and another with Ubuntu - no problem!

Containers and Images

Docker uses the term Containers to represent an actively running environment that can run almost any piece of software, whether it be a web application or a service daemon. Docker Images are the "building blocks" that act as the starting point for our containers. Images become containers, and containers can be made into images. Images are typically base operating system images (such as Ubuntu), but they can also be highly customized images containing the base OS plus any additional dependencies you install on it. Alright, enough talk. It's time to get our hands dirty.

Installation

You will need to be running either Linux or Mac to use Docker. Installation instructions can be found here. One thing to note: I am running Docker on Mac OS X. The only major difference between running Docker commands on a Linux machine versus a Mac machine is that you have to sudo your commands on Linux. On Mac (like you will see below), you do not have to. There is a bit more set-up involved in running Docker on a Mac (you actually run commands through a Linux VM), but once set up, they behave the same.

Hello Docker!

Let's start with a super basic Docker interaction and step through it piece by piece.

    $ docker run ubuntu:14.04 /bin/echo 'Hello Docker!'

This command does a number of things. The docker run takes a base image and a command to run inside a new container created from that image. Here we are using the "ubuntu" base image and specifying a tag of "14.04". Tags are a way to use a specific version of an image; for example, we could have run the above command with ubuntu:12.04. Docker images are downloaded if they do not exist on your local machine already, so the first time you run the above command it will take a bit longer. Finally, we specify the command to run in the newly instantiated container. In this simple case, we are just echoing Hello Docker.

If run successfully, you should see 'Hello Docker!' printed to your terminal. Congrats! You just created a fresh Docker container and ran a simple command inside... feel the power!

An 'Interactive' Look at Docker

Now let's get fancy and poke around inside of a container. Issue the following command, and you will be given a shell inside a new container.

    $ docker run -i -t ubuntu:14.04 /bin/bash

The -i and -t flags are a way of telling Docker that you want to "attach" to this container and directly interact with it. Once inside the container, you can run any command you would normally run on an Ubuntu machine. Play around a bit and explore. Try installing software via apt-get, or running a simple shell script. When you are done, you can type exit to leave.
One thing to note: containers are only active as long as they are running a service or command. In the case above, the minute we exited from our container, it stopped. I will show you how to keep containers running shortly. This does, however, highlight an important paradigm shift. Docker containers are really best used for small, discrete services. Each container should run one service or command and cease to exist when it has completed its task.

A Daemonized Docker Service

Now that you know how to run commands inside of a Docker container, let's create a small service that will run indefinitely in the background. This container will just print "Hello Docker" until you explicitly tell it to stop.

    $ docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo Hello Docker; sleep 1; done"

Let's step through this command. As you know by now, it runs a small shell command inside of a container running ubuntu 14.04. The new addition is the -d flag, which tells Docker to run this container in the background as a daemon. To verify this container is actually running, issue the command docker ps, which gives you some helpful information about our running containers.

    $ docker ps
    CONTAINER ID   IMAGE          COMMAND                CREATED         STATUS        PORTS   NAMES
    09e17400c7ee   ubuntu:14.04   /bin/sh -c 'while tr   2 minutes ago   Up 1 minute           sleepy_pare

You can view the output of your command by taking the NAME and issuing the following command:

    $ docker logs sleepy_pare
    Hello Docker
    Hello Docker
    Hello Docker

To stop the container and remove it, you can run docker rm -f sleepy_pare.

Congratulations! You have learned the basic concepts and usage of Docker. There is still a wealth of features to learn, but this should set a strong foundation for your own further exploration into the awesome world of Docker.

Want more Docker? Our dedicated Docker page has got you covered. Featuring our latest titles and even more free content, make sure you check it out!

From the 4th to the 10th of April save 50% on 20 of our very best cloud titles. From AWS to Azure, OpenStack and, of course, Docker, we're confident you'll find something useful. And if one's not enough, pick up any 5 featured titles for just $50! Find them here.

About the Author

Julian Gindi is a Washington DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at iStrategylabs, where he does everything from system administration to designing and building deployment systems. He is most passionate about operating system design and implementation, and in his free time contributes to the Linux Kernel.

Better Typography for the Web

Brian Hough
19 Aug 2015
8 min read
Despite living in a world dominated by streaming video and visual social networks, the web is still primarily a place for reading. This means there is a tremendous amount of value in having solid, readable typography on your site. With the advances in CSS over the past few years, we are finally able to tackle a variety of typographical issues that print has long solved. In addition, we are also able to address a lot of challenges that are unique to the web. Never have we had more control over web typography. Here are 6 quick snippets that can take yours to the next level.

Responsive Font Sizing

Not every screen is created equal. With a vast array of screen sizes, resolutions, and pixel densities, it is crucial that our typography adjusts itself to fit the user's screen. While we've had access to relative font measurements for a while, they have been cumbersome to work with. Now with rem we can have relative font sizing without all the headaches. Let's take a look at how easy it is to scale typography for different screens.

    html {
      font-size: 62.5%;
    }

    h1 {
      font-size: 2.1rem; /* Equals 21px */
    }

    p {
      font-size: 1.6rem; /* Equals 16px */
    }

Setting the root font-size to 62.5% allows us to use base 10 when setting our font-size in rem. This means a font set to 1.6rem is the same as setting it to 16px, which makes it easy to tell what size our text is actually going to be - something that is often an issue when using em. Browser support for rem is really good at this stage, so you shouldn't need a fallback. However, if you need to support older IE, it is as simple as creating a second font-size rule set in px after the line where you set it in rem.

All that is left is to scale our text based on screen size or resolution. By using media queries, we can keep the relative sizing of type elements the same without having to manually adjust each element for every breakpoint.

    /* Scale font based on screen resolution */
    @media only screen and (min-device-pixel-ratio: 2),
           only screen and (min-resolution: 192dpi) {
      html { font-size: 125%; }
    }

    /* Scale font based on screen size */
    @media only screen and (max-device-width: 667px) {
      html { font-size: 31.25%; }
    }

This will scale all the type on the page with just one property per breakpoint. There is now no excuse not to adjust font-size based on your users' screens.

Relative Line Height

Leading, or the space between baselines in a paragraph, is an important typographical attribute that directly affects the readability of your content. The right line height provides a distinction between lines of text, allowing a reader to scan a block of text quickly. An easy tip a lot of us miss is setting a unitless line-height. Set this way, line-height acts as a ratio to the size of your type. This scales your leading with your font-size, making it a perfect complement to using rem:

    p {
      font-size: 1.6rem;
      line-height: 1.4;
    }

This will set our line-height at a ratio of 1.4 times our font-size. Consequently, 1.4 is a good value to start with when tweaking your leading, but your ratio will ultimately depend on the font you are using.

Rendering Control

The way type renders on a web page is affected not only by the properties of a user's screen, but also by their operating system and browser. Different font-rendering implementations can mean the difference between your web fonts loading quickly and clearly or chugging along and rendering into a pixelated mess.
Font services like TypeKit recognize this problem and provide you a way to preview how a font will appear in different operating system/browser combinations. Luckily, though, you don't have to leave it completely up to chance. Some browser engines, like WebKit, give us some extra control over how a font renders.

text-rendering controls how a font is rendered by the browser. If your goal is optimal legibility, setting text-rendering to optimizeLegibility will make use of additional information provided in certain fonts to enhance kerning and make use of built-in ligatures.

    p.legibility {
      text-rendering: optimizeLegibility;
    }

While this sounds perfect, there are scenarios where you don't want to use it. It can crush rendering times on less powerful machines, especially mobile browsers. It is best to use it sparingly on your content, and not just apply it to every piece of text on a page. All browsers support this property except Internet Explorer.

This is not the only way you can optimize font rendering. WebKit browsers also allow you to adjust the type of anti-aliasing they use to render fonts. Chrome is notoriously polarizing in how fonts look, so this is a welcome addition. It is best to experiment with the different options, as it really comes down to the font you've chosen and your personal taste.

    p {
      -webkit-font-smoothing: none;
      -webkit-font-smoothing: antialiased;
      -webkit-font-smoothing: subpixel-antialiased;
    }

Lastly, if you find that the font-smoothing options aren't enough, you can add a bit of boldness to your fonts in WebKit with the following snippet. The result isn't for everyone, but if you find your font is rendering a bit on the light side, it does the trick.

    p {
      -webkit-text-stroke: 0.35px;
    }

Hanging Punctuation

Hanging punctuation is a typographical technique that keeps punctuation at the beginning of a paragraph from disrupting the flow of the text. By utilizing the left margin, punctuation like open quotes and list bullets can be displayed while still allowing text to be left justified. This makes it easier for the reader to scan a paragraph. We can achieve this effect by applying the following snippet to elements where we lead with punctuation or bullets.

    p:first-child {
      text-indent: -0.5rem;
    }

    ul {
      padding: 0px;
    }

    ul li {
      list-style-position: outside;
    }

One note with bulleted lists: make sure the list's container does not have overflow set to hidden, as this will hide the bullets when they are set to outside. If you want to be super forward-looking, work is being done on giving us even more control over hanging punctuation, including character detection and support for leading and trailing punctuation.

Proper Hyphenation

One of the most frustrating things on the web for typography purists is the ragged right edge on blocks of text. Books solve this through carefully justified text. This, unfortunately, is not an option for us yet on the web, and while a lot of libraries attempt to solve this with JavaScript, there are some things you can do to handle this with CSS alone.

    .hyphenation {
      -ms-word-break: break-all;
      word-break: break-all;
      /* Non-standard for WebKit */
      word-break: break-word;
      -webkit-hyphens: auto;
      -moz-hyphens: auto;
      hyphens: auto;
    }

    .no-hyphenation {
      -ms-word-break: none;
      word-break: none;
      /* Non-standard for WebKit */
      word-break: none;
      -webkit-hyphens: none;
      -moz-hyphens: none;
      hyphens: none;
    }

Browser support is pretty solid, with Chrome being the notable exception.
As the sketch above shows, you must set a language attribute on a parent element, because the browser leverages it to determine which hyphenation rules to apply. Also note that if you are using Autoprefixer, it will not add all the appropriate vendor variations of the hyphens property for you.

Descender-Aware Underlining

This is a newer trick I first noticed in iMessage on iOS. It makes underlined text a bit more readable by protecting descenders (the parts of a letter that drop below the baseline) from being obscured by the underline. This makes it an especially good fit for links.

.underline {
  text-decoration: none;
  text-shadow: .03rem 0 #FFFFFF, -.03rem 0 #FFFFFF, 0 .03rem #FFFFFF, 0 -.03rem #FFFFFF, .06rem 0 #FFFFFF, -.06rem 0 #FFFFFF, .09rem 0 #FFFFFF, -.09rem 0 #FFFFFF;
  color: #000000;
  background-image: linear-gradient(#FFFFFF, #FFFFFF), linear-gradient(#FFFFFF, #FFFFFF), linear-gradient(#000000, #000000);
  background-size: .05rem 1px, .05rem 1px, 1px 1px;
  background-repeat: no-repeat, no-repeat, repeat-x;
  background-position: 0 90%, 100% 90%, 0 90%;
}

First, we create a text-shadow in the same color as our background (in this case white) around the content we want underlined. The key is that the shadow is just thick enough to obscure what sits behind it without bleeding over other letters. Next, we use a background gradient to recreate our underline. The text-shadow alone is not enough, as a normal text-decoration: underline is actually drawn over the type. The gradient will appear just like a normal underline, but now the text-shadow will obscure it wherever the descenders overlap. Because everything is sized in rem, this effect also scales with our font-size.

Conclusion

Users still spend a tremendous amount of time reading online. Making that as frictionless as possible should be one of the top priorities for any site with written content. With just a little bit of CSS, we can drastically improve the readability of our content with little to no overhead.

About The Author

Brian is a Front-End Architect, Designer, and Product Manager at Piqora. By day, he is working to prove that the days of bad Enterprise User Experiences are a thing of the past. By night, he obsesses about ways to bring designers and developers together using technology. He blogs about his early stage startup experience at lostinpixelation.com, or you can read his general musings on twitter @b_hough.

article-image-real-tech-knowledge-gap-your-boss
Richard Gall
13 Aug 2015
6 min read
Save for later

The Real Tech Knowledge Gap - Your Boss

Richard Gall
13 Aug 2015
6 min read
The technology skills gap is, apparently, an abyss of economic despair and financial ruin. It has been haunting the pages of business websites and online tech magazines for some time now. It effectively follows a lazy but common line of thought – that young people aren't capable of dealing with the rigours of STEM subjects. That perspective ignores a similar, but even more problematic issue. It's not so much that we have a skills gap; it's more of a fundamental 'understanding gap'. And it's not the kids who are the problem – it's senior management.

There are plenty of memes that reflect this state of affairs on social media - 'When my boss tries to code' being a personal favourite. Anyone working in a technical role recognizes variations of these day-to-day workplace problems. From communicating design constraints on a website, to discussing how to understand and act on data, that gap between those with technical knowledge and skills and those with strategic control – and authority – is a common experience of the modern workplace.

Your boss, trying to code (Courtesy of devopsreactions.tumblr.com)

However, just because there's a knowledge gap among organizational leaders doesn't mean that they're not aware of it – if anything, they're more aware of it than ever. And this is where the problem really lies. Business leaders expect a lot from technology – a morning spent reading Forbes, Gartner, or any other business website can yield a long to-do list. Responsive website, Big Data strategy, cloud migration – these buzzwords, floated around the upper echelons of industry by the sort of people who use the phrase 'thought leader' sincerely, put pressure on those with real expertise. Often, they are asked to deliver something they simply cannot.

'Chief Whatever Officers' – A Symptom of Managerial Failure

One of the symptoms of this problem is the phenomenon of the 'Chief Whatever Officer' – someone leading a certain project or area of the business without any real authority within the organization. The Chief Data Officer is one of the most common examples – someone who can plan and implement, say, a Big Data strategy, and use data to improve the business in different ways. Effectively, this Chief Data Officer sits in lieu of senior management's knowledge. This isn't immediately a problem – after all, this is basically how any organization is built, with people doing different jobs based on their skillset. The problem happens when, to take this example, a data strategy is treated as a separate, siloed project within a business, rather than something that requires very real leadership and authority.

A lot of Big Data reports (you can find them pretty easily on any online business magazine looking for easy signups) and business think pieces, such as this one from Forbes and this from Read Write, talk a lot about the failure of Big Data projects. The Read Write one is particularly interesting, as it singles out 'corporate culture' as one of the biggest causes of failed Big Data projects. However, to be frank about it, that is ultimately the fault of senior managers – the very people responsible for cultivating an organizational culture. It goes on to argue for a 'cultural affinity for data', which isn't that helpful – the problem, as their own research highlights, isn't so much that there is no affinity for data, but that there is a fundamental lack of understanding of how it should be used.
Information Age recently published an article taking issue with the concept of the CWO:

It is an absurd notion that we need to define and hire someone as a 'chief' each time a challenge or opportunity arises that requires leadership attention and accountability. Isn't this what we pay the big bucks to the CEO and his or her team to do?

Whether or not you want to focus on the CWO is irrelevant – the important point is that we're seeing senior management neglect their responsibilities, instead delegating to project managers with trendy job titles. The article never quite follows through with the implications of its point, choosing instead to link it to vague trends in management philosophy in which a few straw-man gurus are held responsible. To put it another way, it's not simply that there's a knowledge gap, but that the very adjectives regularly espoused by the tech world – collaborative, agile, and innovative – are being used by business leaders to simultaneously relinquish and consolidate their position.

This might sound counter-intuitive, and to a certain extent it is – especially if you want to avoid the common growing pains of an expanding business. As technology has become more and more integral to businesses all around the world, it has come to be seen as an easy solution that can deliver results in a really simple way. It's effectively a form of hubris – the idea that you can solve things with just a click of your fingers (or a new hire).

Pernicious tech solutionism is stopping you from doing your job

The mistake being made here is that while technology – from the algorithms that predict customer behaviour to responsive websites that offer seamless and elegant user experiences – always targets points of friction and inefficiency, actually using that technology – integrating it and understanding how it can be used most effectively within an organization – is far from frictionless. Of course, it's easy for anyone to make that mistake – just about everyone has, at some point, thought that something was going to revolutionize their life or the way they work, only to be disappointed when it turns out they're still overworked and stressed. But it's those leading our businesses and organizations who are most prone to misunderstanding – and then overestimating – how different technologies could improve the organization. It's another form of what Evgeny Morozov calls 'tech solutionism'. It's particularly pernicious because it stops people who really can implement solutions from properly doing their job.

Adopting a More Holistic Approach

What can be done about this situation? Ultimately, over time, those with the best understanding of how to use these tools will take greater control of their companies. But we'll probably also need to see better integration of technology – a more holistic approach is required. In theory, this is how the CWO role should function – with chiefs reporting horizontally, so different departments and projects are in dialogue with one another, rather than simply reporting to a superior. But the managerial knowledge gap stops this from happening – essentially, those leading technical projects are expected to simply deliver impressive benefits without necessarily having the resources or the support they need. Perhaps it's not just a knowledge gap but also a cultural change – one which your boss loves, but doesn't fully understand. If it is, then maybe it's time to properly embrace it and try to really understand it.
That way we might not have to keep battling through failed projects, stretched budgets, and damaged egos.  

article-image-no-patch-human-stupidity-three-social-engineering-techniques-you-might-not-know-about
Sam Wood
12 Aug 2015
5 min read
Save for later

No Patch for Human Stupidity - Three Social Engineering Techniques You Might Not Know About

Sam Wood
12 Aug 2015
5 min read
There's a simple mantra beloved by pentesters and security specialists: "There's no patch for human stupidity!" Whether it's hiding a bunch of Greeks inside a wooden horse to breach the walls of Troy or hiding a worm inside the promise of a sexy picture of Anna Kournikova, the gullibility of our fellow humans has long been one of the most powerful weapons of anyone looking to breach security. In the penetration testing industry, working to exploit that human stupidity and naivety has a name - social engineering.

The idea that hacking involves cracking black ICE and de-encrypting the stand-alone protocol by splicing into the mainframe backdoor - all whilst wearing stylish black and pointless goggles - will always hold a special place in our collective imagination. In reality, though, some of the most successful hackers don't just rely on their impressive tech skills, but on their ability to defraud. We're wise to the suspicious and unsolicited phone call from 'Windows Support' telling us that they've detected a problem on our computer and need remote access to fix it. We've cottoned on that Bob Hackerman is not in fact the password inspector who needs to know our login details to make sure they're secure enough. But hackers are getting smarter. Do you think you'd fall for one of these three surprisingly common social engineering techniques?

1. Rogue Access Points - No Such Thing as Free WiFi

You've finally impressed your boss with your great ideas about the future of Wombat Farming. She thinks you've really got a chance to shine - so she's sent you to Wombat International, the biggest convention of Wombat Farmers in the world, to deliver a presentation and drum up some new investors. It's just an hour before you give the biggest speech of your life and you need to check the notes you've got saved in the cloud. Helpfully, though, the convention provides free WiFi! Happily, you connect to WomBatNet. 'In order to use this WiFi, you'll need to download our app,' the portal page tells you. Well, it's annoying - but you really need to check your notes! Pressed for time, you start the download.

Plot Twist: The app is malware. You've just infected your company computer. The 'free WiFi' is in fact a wireless hotspot set up by a hacker with less-than-noble intentions. You've just fallen victim to a Rogue Access Point attack.

2. The Honeypot - Seduced by Ice Cream

You love ice cream - who doesn't? So you get very excited when a man wearing a billboard turns up in front of your office handing out free samples of Ben and Jerry's. They're all out of Peanut Butter Cup - but it's okay! You've been given a flyer with a QR code that will let you download a Ben and Jerry's app for the chance to win Free Ice Cream for Life! What a great deal! The minute you're back in the office and linked up to your work WiFi, you start the download. You can almost taste that Peanut Butter Cup.

Plot Twist: The app is malware. Like Cold War spies seduced by sexy Russian agents, you've just fallen for the classic honeypot social engineering technique. At least you got a free ice cream out of it, right?

3. Road Apples - Why You Shouldn't Lick Things You Pick Up Off the Street

You spy a USB stick, clearly dropped on the sidewalk. It looks quite new - but you pick it up and pop it in your pocket. Later that day, you settle down to see what's on this thing - maybe you can find out who it belongs to and return it to them; maybe you're just curious for the opportunity to take a sneak peek into a small portion of a stranger's life. You plug the stick into your laptop and open up the first file, called 'Government Secrets'...

Plot Twist: It's not really much of a twist by now, is it? That USB stick is crawling with malware - and now it's in your computer. Earlier today, that pesky band of hackers went on a sowing spree, scattering their cheap flash drives all over the streets near your company in the hope of netting themselves a sucker. Once again, you've fallen victim - this time to the Road Apples attack.

What can you do?

The reason people keep using social engineering attacks is simple - they work. As humans, we're inclined to be innately trusting - and certainly there are more free hotspots, ice cream apps, and lost USB sticks that are genuine and innocent than ones that are insidious schemes of hackers. There may be no patch for human stupidity, but that doesn't mean you need to be careless - keep your wits about you and remember the security rules that you shouldn't break, no matter how innocuous the situation seems. And if you're a pentester or security professional? Keep on social engineering and make your life easy – the chink in almost any organisation's armour is going to be its people.

Find out more about internet security and what we can learn from attacks on the WEP protocol with this article. For more on modern infosec and penetration testing, check out our Pentesting page.
article-image-uncovering-secrets-bitcoin-blockchain
Alex Leishman
10 Aug 2015
5 min read
Save for later

Discover the Secrets of the Bitcoin Blockchain

Alex Leishman
10 Aug 2015
5 min read
In a previous post, Deploy Toshi Bitcoin Node with Docker on AWS, you learned how to set up and run Toshi, a Bitcoin node built with Ruby, PostgreSQL and Redis. We walked through how to provision our AWS server, run our Docker containers and access Toshi via the built-in web interface. As you might notice, it takes a very long time (~two weeks) for the blockchain to sync and for the Toshi database to be fully populated. However, Toshi sees new transactions in the Bitcoin network as soon as it starts running, which allows us to start playing with the data shortly after it is up. In this post we will walk through a few simple ways to analyze our new blockchain data, gain some insights and uncover a few mysteries of the blockchain.

First we need a way to access our database in the toshi_db Docker container. I've found that the easiest way to do that is to set up a new container running psql, the PostgreSQL interactive terminal. Run the command below, which will take you to the psql command line inside a new container.

toshi@ip-172-31-62-77:~$ sudo docker run -it --link toshi_db:postgres postgres sh -c 'exec psql -h "172.17.0.3" -p "5432" -U postgres'
psql (9.3.5)
Type "help" for help.
postgres=#

Remember to replace my Postgres IP with your own. Any commands run from here are actually being executed IN the psql container. Once we quit psql (\q), the container will be stopped. Let's explore the data!

schema.txt

There are 37 tables in Toshi. Let's find out what some of them have to offer:

1_query.txt

Here we queried the first two entries in the addresses table. We can see that Toshi stores the 'total_received' and 'total_sent' values for each address. This allows us to calculate the balance of any bitcoin address with ease, something not possible with bitcoind. We also queried the count of entries in the 'addresses' table; this value will continually change as your data syncs and new transactions are created in the network. Another interesting column in this table is 'address_type', which tells us whether an address is a traditional Pay to PubKey Hash (P2PKH) address or a Pay to Script Hash (P2SH) address. In the 'addresses' table, P2PKH addresses have 'address_type = 0' and P2SH addresses have 'address_type = 1'. Querying the number of P2SH addresses can help us estimate the percentage of Bitcoin users taking advantage of multi-signature addresses. To learn more about different address types, look here.

If we want to learn more about a specific table in the Toshi DB, we can use the same \d command followed by the table name. For example:

2_query.txt

This lists the columns, indices and relations of the 'unspent_outputs' table. Now, let's look for something a bit more fun – hidden messages in the blockchain. The op code OP_RETURN allows us to create bitcoin transactions with zero-value outputs used to store a string of data. The hexadecimal code for OP_RETURN is 0x6a. Therefore, we can search for OP_RETURN outputs using the following query:

3_query.txt

If we convert each of the script hex values to UTF-8 (online tool), we get some interesting results. Most of them are just noise (they may be encrypted), but there are a few decoded values that stand out:

(CaMarche!ޛKꈶ-誗�bَˆګ㟟ǒo⁏ˀ "OAu=https://cpr.sm/aNSAyIRJSr (DOCPROOFv㒶nۗ䧴꠿کRS:Cw-J䙻$:_䌽 (Test0004嬟zN畆oﱵ联؞¬⋈ǦQ<㺏֪怀

The first result includes the French expression "Ça marche", meaning "OK, that works". The second result includes a URL that leads to the JSON description of a colored coin asset.
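Stepping back to the query itself for a moment: the original 3_query.txt isn't reproduced in this article, but a query in the same spirit simply filters on that leading 0x6a byte. This is only a sketch – it assumes Toshi exposes an outputs table with the raw script in a bytea column named script, which you should confirm with \d outputs before running it:

-- Sketch only: table and column names are assumptions, not taken from the article's query files.
SELECT encode(script, 'hex') AS script_hex
FROM outputs
WHERE encode(script, 'hex') LIKE '6a%'   -- scripts that begin with OP_RETURN (0x6a)
LIMIT 10;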
To learn more about colored coins themselves, check out Coinprism. The third result includes the text 'DOCPROOF', which indicates that the output was used for proof of existence, allowing a user to cryptographically prove the existence of a document at a point in time. The last result looks like somebody just wanted to play around and test out OP_RETURN.

Lastly, if we want to export the results of a query from our container, we can copy them to STDOUT and then pull them out of the container log afterwards.

4_query.txt

If we quit psql (\q), we find the name of our previously used psql container (now stopped):

ubuntu@ip-172-31-29-91:~$ sudo docker ps -a
CONTAINER ID    IMAGE            COMMAND                 CREATED         STATUS                     PORTS    NAMES
c66179a35dbe    postgres:latest  "/docker-entrypoint.    2 minutes ago   Exited (0) 15 seconds ago           sad_shockley

Then we can export the log into a CSV file:

sudo docker logs sad_shockley > data.csv

You can now scp this CSV file back to your local machine for further analysis. Note that this file will include all commands and outputs from your psql container, so it may require some manual touch-ups. You can always start a new psql container for a fresh log.

So, in just a few minutes we were able to create a new psql Docker container, allowing us to explore blockchain data in ways that are impossible or very difficult with bitcoind alone. We discovered messages that people have left in the blockchain and learned how to export any queries we make into a CSV file. We have only scratched the surface; there are many insights yet to be discovered. Happy querying!

About the Author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com where he is helping to build the future of money.

article-image-sysadmin-security-salary-skills-report-video
Packt Publishing
05 Aug 2015
1 min read
Save for later

SysAdmin & Security - Salary & Skills Video

Packt Publishing
05 Aug 2015
1 min read
What do sys admins and security specialists need to earn the best salary they can in the world today? What skills are companies looking for their employees to have mastered? We surveyed over 2,000 sys admins and security specialists to find the latest trends so far this year, so you can take advantage of them. Which industries value admins most highly? Does Linux or Windows lead the way with superior tools? What role does Python play for the budding pentester in the community today? And what comes out on top – Puppet, Chef, Ansible, or Salt? With this animation of our Skill Up survey results, you can get the answers you need and more! View the full report here: www.packtpub.com/skillup/sys-admin-salary-report