
Tech Guides

Arduino Yún - Welcome to the Internet of Things

Michael Ang
26 Sep 2014
6 min read
Arduino is an open source electronics platform that makes it easy to interface with sensors, lights, motors, and much more from a small standalone board. Arduino Yún combines a standard Arduino microcontroller with a tiny Linux computer, all on the same board! The Arduino microcontroller is perfectly suited to interfacing with hardware like sensors and motors, and the Linux computer makes it easy to get online and perform more intensive tasks. The combination really is the best of both worlds. This post will introduce Arduino Yún and give you some ideas of the possibilities that it opens up.

The key to the Yún is that it has two separate processors on the board. The first provides the normal Arduino functions using an ATmega32u4 microcontroller. This processor is perfect for running "low-level" operations like driving timing-sensitive LED light strips or interfacing with sensors. The second processor is an Atheros AR9331 "system on a chip" that is typically used in WiFi access points and routers. The Atheros processor runs a version of Linux derived from OpenWRT and has built-in WiFi that lets it connect to a WiFi network or act as an access point. The Atheros is pretty wimpy by desktop standards (400MHz processor and 64MB RAM) but it has no problem downloading webpages or accessing an SD card, for example—two tasks that would otherwise require extra hardware and be challenging for a standard Arduino board.

One selling point for the Arduino Yún is that the integration between the two processors is quite good, and you program the Yún using the standard Arduino IDE (currently you need the latest beta version). You can program the Yún by connecting it to your computer with a USB cable, but much more exciting is to program it over the air, via WiFi! When you plug the Yún into a USB power supply it will create a WiFi network with a name like "Arduino Yun-90A2DAF3022E". Connect to this network with your computer and you will be connected to the Yún! You'll be able to access the Yún's configuration page by going to http://arduino.local in your web browser, and you should be able to reprogram the Yún from the Arduino IDE by selecting the network connection in Tools > Port.

There's a new access point in town

Being able to reprogram the board over WiFi is already worth the price of admission for certain projects. I made a sound-reactive hanging light sculpture and it was invaluable to adjust and "dial in" the program inside the sculpture while it was hanging in the air. Look ma, no wires!

Programming over the air

The Bridge library for Arduino Yún is used to communicate between the processors. A number of examples using Bridge are provided with the Arduino IDE. With Bridge you can do things like controlling the pins on the Arduino from a webpage. For example, loading http://myArduinoYun.local/arduino/digital/13/1 in your browser could turn on the built-in LED. You can also use Bridge to download web pages, or run custom scripts on the Linux processor. Since the Linux processor is a full-blown computer with an SD card reader and USB, this can be really powerful. For example, you can write a Python script that runs on the Linux processor, and trigger that script from your Arduino sketch.
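To make that plumbing concrete, here is a trimmed-down sketch in the spirit of the Bridge example bundled with the IDE (class names match the 2014-era beta; later releases renamed YunServer/YunClient to BridgeServer/BridgeClient). It answers URLs in the /arduino/digital/13/1 style mentioned above:

    #include <Bridge.h>
    #include <YunServer.h>
    #include <YunClient.h>

    // The Yún's onboard web server forwards requests for /arduino/* to
    // this server, which listens on localhost.
    YunServer server;

    void setup() {
      pinMode(13, OUTPUT);
      Bridge.begin();              // start talking to the Linux processor
      server.listenOnLocalhost();  // only accept forwarded requests
      server.begin();
    }

    void loop() {
      YunClient client = server.accept();  // poll for a forwarded request
      if (client) {
        // For /arduino/digital/13/1 the sketch sees "digital/13/1"
        String command = client.readStringUntil('/');
        if (command == "digital") {
          int pin = client.parseInt();     // 13
          int value = 0;
          if (client.read() == '/') {
            value = client.parseInt();     // 1
          }
          digitalWrite(pin, value);
          client.print("Pin ");
          client.print(pin);
          client.print(" set to ");
          client.print(value);
        }
        client.stop();
      }
      delay(50);  // poll roughly 20 times per second
    }

The same accept-and-parse pattern extends to analog reads, or to kicking off the Bridge library's Process class to run a script on the Linux side.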
The Yún is ideally suited for the "Internet of Things". Want to receive an e-mail when your cat comes home? Attach a switch to your pet door and have your Yún e-mail you when it sees the door open. Want to change the color of an LED based on the current weather? Just have the Linux processor download the current weather from Yahoo! Weather, and the ATmega microcontroller can handle driving the LEDs. Temboo provides library code and examples for connecting to a large variety of web services.

The Yún doesn't include audio hardware, but because the Linux processor supports USB peripherals, it's easy to attach a low-cost USB sound card. This tutorial has the details on adding a sound card and playing an mp3 file in response to a button press. I used this technique for my piece Forward Thinking Sound at the Art Hack Day in Paris, which used a Yún to play modem sounds while controlling an LED strip. With only 48 hours to complete a new work from scratch, being able to get an mp3 playing from the Yún in less than an hour was amazing!

Yún with USB sound card, speakers and LED strip. Forward Thinking Sound at Art Hack Day Paris.

The Arduino Yún is a different beast than the Raspberry Pi and BeagleBone Black. Where the other boards are best thought of as small computers (with video output, built-in audio, and so on), the Arduino Yún is best thought of as the combination of an Arduino board and a WiFi router that can run some basic scripts. The Yún is unfortunately quite a bit more expensive than a standard Arduino board, so you may not want to dedicate one to each project. The experience of programming the Yún is generally quite good—the Arduino IDE and Bridge library make it easy to use the Yún as a "regular" Arduino and ease into the network/Linux features as you need them. Once you can program your Arduino over WiFi and connect to the Internet, it's a little hard to go back!

About the author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.

Top 5 Newish JavaScript Libraries (That Aren't AngularJS...)

Ed Gordon
30 Jul 2014
5 min read
AngularJS is, like, so 2014. Already the rumblings have started that there are better ways of doing things. I thought it prudent to look into the future to see what libraries are on the horizon for web developers, now and in the future.

5. Famo.us

"Animation can explain whatever the mind of man can conceive." - Walt Disney

Famo.us is a clever library. It's designed to help developers create application user interfaces that perform well; as well, in fact, as native applications. In a moment of spectacular out-of-the-box thinking, Famo.us brings with it its own rendering engine to replace the engine that browsers supply. To get the increase in performance from HTML5 apps that they wanted, Famo.us looked at which tech does rendering best, namely game technologies such as Unity and Unreal Engine. CSS is moved into the framework and written in JavaScript instead, which makes transformations and animations quicker. It's a new way of thinking for web developers, so you'd best dust off the Unity rendering tutorials… Famo.us makes things running in the browser as sleek as they're likely to be over the next few years, and it's massively exciting for web developers.

4. Ractive

"The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed." - Carl Jung

Manipulating the Document Object Model (which ties together all the webpages we visit) has been the major foe of web developers for years. MooTools, YUI, jQuery, AngularJS, Famo.us, and everything between have offered developers productivity solutions to enable them to manipulate the DOM to their clients' needs in a more expedient manner. One of the latest libraries to help DOM manipulators at large is Ractive.js, developed by the team at The Guardian (well, mainly one guy – Rich Harris). Its focus remains on UI, so while it borrows heavily from Angular (it was initially called AngularBars), it's a simpler tool at heart. Or at least, it approaches the problems of DOM manipulation in as simple a way as possible. Ractive is part of the reactive programming direction that JavaScript (and programming generally) seems to be heading in at the moment.

3. DC.js

"A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected." ― Reif Larsen, The Selected Works of T.S. Spivet

DC.js, borrowing heavily from both D3 and Crossfilter, enables you to visualize linked data through reactive (a theme developing in this list) charts. I could try and explain the benefits in text, but sometimes it's worth just going to have a play around (after you've finished this post). It uses D3 for the visualization bit, so everything's in SVG, and uses Crossfilter to handle the underlying linkage of data. For a world of growing data, it provides users with immediate and actionable insight, and is well worth a look. This is the future of data visualization on the web.

2. Lo-dash

"The true crime fighter always carries everything he needs in his utility belt, Robin." - Batman

There's something appealing about a utility belt; something that has called to all walks of life, from builders to Batman, ever since man had more than one tool at his disposal. Lo-dash, and Underscore.js before it, are no different. It's a library of useful JavaScript functions that abstract away some of the pain of JS development, whilst boosting performance over Underscore.js.
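To give a feel for the utility-belt style, here is a minimal sketch assuming lodash ~2.x installed from npm (where pluck and implicit chaining via _() were still part of the API; Underscore would need _.chain instead):

    var _ = require('lodash'); // a drop-in replacement for underscore in most code

    var users = [
      { name: 'barney',  age: 36, active: true },
      { name: 'fred',    age: 40, active: false },
      { name: 'pebbles', age: 1,  active: true }
    ];

    // Chain small utility functions instead of hand-rolling loops:
    // keep the active users, sort them by age, and pull out their names.
    var names = _(users)
      .filter(function (u) { return u.active; })
      .sortBy(function (u) { return u.age; })
      .pluck('name')
      .value();

    console.log(names); // ['pebbles', 'barney']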
It's actually based around Underscore, which at the time of writing is the most depended-upon library in the Node ecosystem, but it builds on the good parts and gets rid of the not so good. Lo-dash will take over from Underscore in the near future. Watch this space.

1. Polymer

"We are dwarfs astride the shoulders of giants. We master their wisdom and move beyond it. Due to their wisdom we grow wise and are able to say all that we say, but not because we are greater than they." - Isaiah di Trani

As with a lot of things, rather than trying to reinvent solutions to existing problems, Google is trying to just reinvent the things that lead to the problem. Web Components is a W3C standard that's going to change the way we build web applications for the better, and Polymer is the framework that allows you to build these Web Components now. Web Components envision a world where, as a developer, you can select a component from the massive developer shelf of the Internet, call it, and use it without any issues. Polymer provides access to these components; UI components such as a clock – JavaScript that's beyond my ability to write, at least, and a time-sink for normal JS developers – can be called with:

<polymer-ui-clock></polymer-ui-clock>

This gives you a pretty clock that you can go and customize further if you want. Essentially, they put you in a dialog with the larger development world, no longer needing to craft solutions for your single project; you can use and reuse components that others have developed. It allows us to stand on the shoulders of giants. It's still some way off standardization, but it's going to redefine what application development means for a lot of people, and enable a wider range of applications to be created quickly and efficiently.

"There's always a bigger fish." - Qui-Gon Jinn

There will always be a new challenger, an older guard, and a bigger fish, but these libraries represent the continually changing face of web development. For now, at least!

DevOps: An Evolution and a Revolution

Julian Ursell
24 Jul 2014
4 min read
Are we DevOps now? The system-wide software development methodology that breaks down the problematic divide between development and operations is still at the stage where enterprises implementing the idea are probably asking that question, working out whether they've reached the endgame of effective collaboration between the two spheres and a systematic automation of their IT service infrastructure.

Considered to be the natural evolution of Agile development and practices, the idea and business implementation of DevOps is rapidly gaining traction and adoption in significant commercial enterprises, and we're very likely to see a saturation of DevOps implementations in serious businesses in the near future. The benefits of DevOps for scaling and automating the daily operations of businesses are wide-reaching and becoming more and more crucial, both from the perspective of enabling rapid software development and of delivering products to clients who demand and need more and more frequent releases of up-to-date applications. The movement towards DevOps systems runs in close synchronization with the growing demand for experiencing and accessing everything in real time, as it produces the level of infrastructural agility needed to roll out release after release with minimal delays. DevOps has been adopted prominently by big hitters such as Spotify, who embraced the DevOps culture throughout the formative years of their organization and still hold this philosophy now.

The idea that DevOps is an evolution is not a new one. However, there's also the argument to be made that the actual evolution from a non-DevOps system to a DevOps one entails a revolution in thinking. From a software perspective, an argument could be made that DevOps has inspired a minor technological revolution, with the spawning of multiple technologies geared towards enabling DevOps workflows. Docker, Chef, Puppet, Ansible, and Vagrant are all powerful key tools in this space and vastly increase the productivity of developers and engineers working with software at scale.

However, it is one thing to mobilize DevOps tools and implement them physically into a system (not easy in itself), but it is another thing entirely to bring the thinking of an organization around to a collaborative culture where developers and administrators live and breathe in the same DevOps atmosphere. As a way of thinking, it requires a substantial cultural overhaul and a breaking down of entrenched programming habits and the silo-ization of the two spheres. It's not easy to transform the day-to-day mindset of a developer so that they incorporate thinking in ops (monitoring, configuration, availability), or vice versa of a systems engineer so that they are thinking in terms of design and development. One can imagine it is difficult to cultivate this sort of culture and atmosphere within a large enterprise system with many large moving parts, as opposed to a startup which may have the "day zero" flexibility to employ a DevOps approach from the roots up.

To reach the "state" of DevOps is a long journey, and one that involves a revolution in thinking. From a systematic as well as cultural point of view, it takes a considerable degree of groundbreaking to shatter what is sometimes a monolithic wall between development and operations.
But for organizations that realize they need the responsiveness to adapt to clients on demand, and have the foresight to put in place system mechanics that allow them to scale their services in the future, the long-term benefits of a DevOps revolution are invaluable. Continuous and automated deployment, shorter testing times, consistent application monitoring and performance visibility, flexibility when scaling, and a greater margin for error all stem from a successful DevOps implementation. On top of that, a survey showed that engineers working in a DevOps environment spent less time firefighting and more productive time focusing on self-improvement, infrastructure improvement, and product quality.

Getting to a point where engineers can say "we're DevOps now!", however, is a bit of a misconception, because it's more than a matter of sharing common tools, and there will be times when keeping the bridge between devs and ops stable and productive is challenging. There is always the potential that new engineers joining an organization will dilute the DevOps culture, and also the fact that DevOps engineers don't grow overnight. It is an ongoing philosophy, and as much an evolution as it is a revolution worth having.

5 Go Libraries, Frameworks, and Tools You Need to Know

Julian Ursell
24 Jul 2014
4 min read
Golang is an exciting new language seeing rapid adoption in an increasing number of high-profile domains. Its flexibility, simplicity, and performance make it an attractive option for fields as diverse as web development, networking, cloud computing, and DevOps. Here are five great tools in the thriving ecosystem of Go libraries and frameworks.

Martini

Martini is a web framework that touts itself as "classy web development", offering neat, simplified web application development. It serves static files out of the box, injects existing services in the Go ecosystem smoothly, and is tightly compatible with the HTTP package in the native Go library. Its modular structure and support for dependency injection allow developers to add and remove functionality with ease, and make for extremely lightweight development. Out of all the web frameworks to appear in the community, Martini has made the biggest splash, and has already amassed a huge following of enthusiastic developers.
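The "classy" claim is easiest to judge from code. Here is a minimal hello-world in the spirit of the project's README; martini.Classic() wires up logging, panic recovery, and static file serving for you:

    package main

    import "github.com/go-martini/martini"

    func main() {
        m := martini.Classic() // logger, recovery, and a public/ static file server built in
        m.Get("/", func() string {
            return "Hello, Martini!" // return values are written straight to the response
        })
        m.Run() // serves on port 3000 by default
    }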
Gorilla

Gorilla is a toolkit for web development with Golang and offers several packages to implement all kinds of web functionality, including URL routing, options for cookie and filesystem sessions, and even an implementation of the WebSocket protocol, integrating it tightly with important web development standards.

groupcache

groupcache is a caching library developed as an alternative (or replacement) to memcached, unique to the Go language, which offers lightning-fast data access. It allows developers managing data access requests to vastly improve retrieval time by designating a group of its own peers to distribute cached data. Whereas memcached is prone to producing an overload of duplicate database loads from clients, groupcache coalesces those requests so that a single successful load can be multiplexed out to all waiting clients. Libraries such as groupcache have great value in the Big Data space, as they contribute greatly to the capacity to deliver data in real time anywhere in the world, while minimizing the access pitfalls associated with managing huge volumes of stored data.

Doozer

Doozer is another excellent tool in the sphere of system and network administration, which provides a highly available data store used for the coordination of distributed servers. It performs a similar function to coordination technologies such as ZooKeeper, and allows critical data and configurations to be shared seamlessly and in real time across multiple machines in distributed systems. Doozer allows the maintenance of consistent updates about the status of a system across clusters of physical machines, creating visibility about the role each machine plays and coordinating strategies for failover situations. Technologies like Doozer emphasize how effective the Go language is for developing valuable tools which alleviate complex problems within the realm of distributed system programming and Big Data, where enterprise infrastructures are modeled around the ability to store, harness, and protect mission-critical information.

GoLearn

GoLearn is a new library that enables basic machine learning methods. It currently features several fundamental methods and algorithms, including neural networks, K-Means clustering, naïve Bayesian classification, and linear, multivariate, and logistic regressions. The library is still in development, as are a number of standard packages being written to give Go programmers the ability to develop machine learning applications in the language, such as mlgo, bayesian, probab, and neural-go.

Go's continual expansion into new technological spaces such as machine learning demonstrates how powerful the language is for a variety of different use cases, and the community of Go programmers is starting to generate the kind of development drive seen in other popular general-purpose languages like Python. While libraries and packages are predominantly appearing for web development, we can see support growing for data-intensive tasks in the Big Data space. Adoption is already skyrocketing, and the next three years will be fascinating to observe as Golang is poised to conquer more and more key territories in the world of technology.

Things to Consider When Migrating to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
After the decision is made to make use of a cloud solution like Amazon Web Services or Microsoft Azure, there is one main question that needs to be answered – "What's next?" There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition.

Gather background information

Before getting started, it's important to have a clear picture of what is meant to be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration.

What are the reasons for moving to the cloud?

There are many benefits of moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important. Prices between vendors vary, as do the support services that are offered – that might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal. It will be important to ensure that the customization options are available at the desired level.

Which applications are being moved?

When beginning the migration process, it is important to make sure that the scope of the effort is clear. Consider the option of moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications.

What is the anticipated cost?

A cloud solution will have variable costs associated with it, but it is important to have some estimation of what is expected. This will help when selecting vendors, and it will allow for guidance in configuring the system.

What is the long-term plan?

Is the new environment intended to eventually replace the legacy system? To work alongside it? Begin to think about the plan beyond the initial migration. Ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services.

Determine your actual cloud needs

One important step in maximizing the benefits of the cloud is to ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs. Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that will impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor with packages that make sense for your current and future needs as much as possible, since transitioning between vendors is not typically desirable.

Implement security policies

Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine if data security is a concern.
Sensitive data such as PII or PCI data may be subject to regulations that impact data encryption rules, especially when being accessed through the cloud. Ensure that the selected vendor is reliable in order to safeguard this information properly. In some cases, applications that are being migrated will need to be refactored so that they will work in the cloud. Sometimes this means making adjustments to connection information or networking protocols. In other cases, this means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth.

Let's get to work!

Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to ensure that everything was successful by performing data validation and testing of data access policies. At this point, everything will be configured and any application-specific refactoring or testing can begin. In order to ensure the success of the project, consider hiring a consulting firm with cloud experience that can help guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered in order to ensure that the cloud services provide the solution necessary for the project. Once the initial migration is complete, the plan can be revised in order to facilitate the migration of additional datasets or processes into the cloud environment.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry where her focus is on designing and developing big data analytics for the Hadoop ecosystem.

3 Reasons Why "the Cloud" Is a Terrible Metaphor (and One Why It Isn't)

Sarah
01 Jul 2014
4 min read
I have a lot of feelings about "the cloud" as a metaphor for networked computing. All my indignation comes too late, of course. I've been having this rant for a solid four years, and that ship has long since sailed – the cloud is here to stay. As a figurative expression for how we compute these days, it's proven to have way more sticking power than, say, the "information superhighway". (Remember that one?)

Still, we should always be careful about the ways we use figurative language. Sure, you and I know we're really talking about odd labyrinths of blinking lights in giant refrigerator buildings. But does your CEO? I could talk a lot about the dangers of abstracting away our understanding of where our data actually is and who has the keys. But I won't, because I have even better arguments than that. Here are my three reasons why "the cloud" is a terrible metaphor:

1. Clouds are too easy to draw.

Anyone can draw a cloud. If you're really stuck you just draw a sheep and then erase the black bits. That means that you don't have to have the first clue about things like SaaS/PaaS/IaaS or local persistent storage to include "the cloud" in your PowerPoint presentation. If you have to give a talk in half an hour about the future of your business, clouds are even easier to draw than Venn diagrams about morale and productivity. Had we called it "Calabi–Yau Manifold Computing", the world would have saved hundreds of man-hours spent in nonsensical meetings. The only thing sparing us from a worse fate is the stalling confusion that comes from trying to combine slide one – "The Cloud" – and slide two – "BlueSky Thinking!".

2. Hundreds of Victorians died from this metaphor.

Well, okay, not exactly. But in the nineteenth century, the Victorians had their own cloud concept – the miasma. The basic tenet was that epidemic illnesses were caused by bad air in places too full of poor people wearing fingerless gloves (for crime). It wasn't until John Snow pointed to the infrastructure that people worked out where the disease was coming from. Snow mapped the pattern of pipes delivering water to infected areas and demonstrated that germs at one pump were causing the problem. I'm not saying our situation is exactly analogous. I'm just saying if we're going to do the cloud metaphor again, we'd better be careful of metaphorical cholera.

3. Clouds might actually be alive.

Some scientists reckon that the mechanism that lets clouds store and release precipitation is biological in nature. If this understanding becomes widespread, the whole metaphor's going to change underneath us. Kids in school who've managed to convince the teacher to let them watch a DVD instead of doing maths will get edu-tained about it. Then we're all going to start imagining clouds as moving colonies of tiny little cartoon critters. Do you want to think about that every time you save pictures of your drunken shenanigans to your Dropbox?

And one reason why it isn't a bad metaphor at all:

1. Actually, clouds are complex and fascinating.

Quick pop quiz – what's the difference between cirrus fibratus and cumulonimbus? If you know the answer to that, you're most likely either a meteorologist, or you're overpaid to sit at your desk googling the answers to rhetorical questions. In the latter case, you'll have noticed that the Wikipedia article on clouds is about seventeen thousand words long. That's a lot of metaphor. Meteorological study helps us to track clouds as they move from one geographic area to another, affecting climate, communications, and social behaviour.
Through careful analysis of their movements and composition, we can make all kinds of predictions about how our world will look tomorrow. The important point came when we stopped imagining chariots and thundergods, and started really looking at what lay behind the pictures we’d painted for ourselves.
Buying versus Renting: The Pros and Cons of Moving to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
Convenience

One major benefit of the IaaS model is the promise of elasticity to support unforeseen demand. This means that the cloud vendor will provide the ability to quickly and easily scale the provided resources up or down, based on the actual usage requirements. This typically means that an organization can plan for the "average" case instead of the "worst case" of usage, simultaneously saving on costs and preventing outages. Additionally, since the systems provided through cloud vendors are usually virtual machines running on the vendor's underlying hardware, the process of adding new machines, increasing the disk space, or subscribing to new services is usually just a change through a web UI, instead of a complicated hardware or software acquisition process. This flexibility is an appealing factor because it significantly reduces the waiting time required to support a new capability.

However, this automation benefit is sometimes a hindrance to administrators and developers who need to access the low-level configuration settings of certain software. Additionally, since the services are being offered through a virtualized system, continuity in the underlying environment can't be guaranteed. Some applications – for example, benchmarking tools – may not be suitable for that type of environment.

Cost

One appealing factor for the transition to the cloud is cost – but in certain situations, using the cloud may not actually be cheaper. Before making a decision, your organization should evaluate the following factors to make sure the transition will be beneficial. One major benefit is the impact on your organization's budget. If the costs are transitioned to the cloud, they will usually count as operational expenditures, as opposed to capital expenditures. In some situations, this might make a difference when trying to get the budget for the project approved. Additionally, some savings may come in the form of reduced maintenance and licensing fees. These expenditures will be absorbed into the monthly cost, rather than being an upfront requirement. When subscribing to the cloud, you can disable any unnecessary resources on demand, reducing costs. In the same situation with real hardware, the servers would be required to remain on 24/7 in order to provide the same access benefits.

On the other hand, consider the size of the data. Vendors have costs associated with moving data into or out of the cloud, in addition to the charge for storage. In some cases, the data transfer time alone would prohibit the transition. Also, the previously mentioned elasticity benefits that draw some people into the cloud – scaling up automatically to meet unexpected demand – can also have an unexpected impact on the monthly bill. These costs are sometimes difficult to predict, and since the cloud computing pricing model is based on usage, it is important to weigh the possibility of an unanticipated hefty bill against an initial hardware investment.

Reliability

Most cloud vendors typically guarantee service availability or access to customer support. This places that burden on the vendor, as opposed to being assumed by the project's IT department. Similarly, most cloud vendors provide backup and disaster recovery options either as add-ons or built in to the main offering. This can be a benefit for smaller projects that have the requirement, but do not have the resources to support two full clusters internally. However, even with these guarantees, vendors still need to perform routine maintenance on their hardware.
Some server-side issues will result in virtual machines being disabled or relocated – usually communicated with some advance notice. In certain cases this will cause interruptions and require manual interaction from the IT team.

Privacy

All data and services that get transitioned into the cloud will be accessible from anywhere via the web – for better or worse. Under this model, the technique of isolating the hardware on its own private network or behind a firewall is no longer possible. On the positive side, this means that everyone on the team will be able to work using any Internet-connected device. On the negative side, this means that every precaution needs to be taken so that the data stays safe from prying eyes. For some organizations, the privacy concerns alone are enough to keep projects out of the cloud. Even assuming that the cloud can be made completely secure, stories in the news about data loss and password leakage will continue to project a negative perception of inherent danger. It is important to document all precautions being taken to protect the data and make sure that all affected parties in the organization are comfortable moving to the cloud.

Conclusion

The decision of whether or not to move into the cloud is an important one for any project or organization. The benefits of flexible hardware requirements, built-in support, and general automation must be weighed against the drawbacks of decreased control over the environment and privacy.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry where her focus is on designing and developing big data analytics for the Hadoop ecosystem.

Why Gamification is Changing Everything

Julian Ursell
30 Jun 2014
5 min read
'Must keep streak going.' That is the sound of someone on a twenty-plus kill streak on Titanfall (probably me). It's also the sound of someone who's on a 20-day streak learning JavaScript on Codecademy. Gamification is becoming an increasingly popular concept as a way to structure, and make enjoyable, the way people engage in the realms of learning, business, and even the most routine aspects of daily life. Applications are being developed by prominent businesses which apply game mechanics and rules to a variety of different scenarios, building in incentives and rewards for users to strive toward, whether intrinsic or with a practical benefit in the real world.

A brilliant example of gamification is Codecademy, which teaches new coders how to learn a programming language through a motivational system of badges and streaks to keep learners hooked and incentivized to continue learning. I'm currently learning Spanish with the language learning app Duolingo, which uses gamification to measure and motivate learning progression using streaks, experience (XP) points, and 'checkpoints' to structure the experience and enhance retention. Learners can unlock bonus skills by acquiring hearts, which are earned by answering all of the questions correctly in lessons. When I'm on a streak, Duolingo will send notifications to my phone to keep it up, and compel me to reinforce my learning by taking refresher (called 'strengthen') lessons. It's extraordinarily effective as a fun, fulfilling educational experience, and I can positively say that I am retaining much of what I have learnt.

Gamification has been rolled out among several high-profile companies, including Nike, Starbucks, and Microsoft (who used gamification for staff appraisals!), and in recent years has increasingly been considered as a solution for a number of important business concerns, whether it's easing the pain of unpalatable training sessions or driving customer and community engagement with a product.

Nike+ is a shining example of gamification on a grand, successful scale. Built on the idea of Nike Fuel points, it rewards consistent exercise and activity with trophies and personal benchmarks, and offers the option to set individual challenges, as well as compete with friends on a community leaderboard. On the one, cynical, hand, it's powerful, effective marketing which engages users in Nike's virtual community and generates revenue through sales of the FuelBand (a wristband which tracks the wearer's movements), as well as running shoes (from 2006 to 2009 Nike increased its share of the running shoe market from 47% to 61%) and other merchandise, all without rewarding exercisers with anything of physical value (instead they're treated to celebratory animations). On the other hand, there are obvious benefits to accruing Fuel points, as it means undertaking consistent, healthy exercise, with the positive reinforcement of earning trophies, setting new performance goals, and recording statistics about calories burned, distance covered, and time spent exercising. All of this is integrated socially, as friends can see exactly what kind of activity you've undertaken, creating a socially connected sphere of collective competition. As a statistics junkie, the ability to constantly valorize my exercise and visualize the impact of the hours I'm putting in is even more reason to keep burning down the treads on my Nikes.
What gamified apps bring is a way to accommodate the seemingly natural inclination of humans to structure and conceptualise challenges according to game-like logic. Regardless of whether gamification is being employed for driving marketing and business, engineering customer and community participation, or encouraging learning, it has proved a versatile approach (I'm trying to avoid calling it a business methodology) to thinking about how to solve different problems in the real world, via games.

Gamification won't be for everyone, and we would assume that it is in part dependent on a degree of investment from the user (gamer) – we might also ask whether the user is engaging with the game or the actual subject at hand – but the beauty of it is that games are so appealing and intuitive to modern generations that users don't have to be coerced to engage with, and enjoy, gamified applications. It may have the charge of 'mandatory enjoyment' levelled at it, but I've never heard someone adamantly refuse to play a game. If anything, it makes the pill of dull company training much easier to swallow. Whatever scepticism some may have over the Gamification of Things, we should appreciate it as a validation of the value of games: it says something very positive about the way we can implement game mechanics in the real world, and businesses are treating it seriously as a strategy for application development.

Gamification is being tested and considered as a solution in a huge array of situations, from project management to the actual deployment of applications – for example, the cloud deployment platform Engine Yard introduced gamification in order to increase the contribution of users to the community, giving rewards to users for providing help to other customers. There are even gamification startups offering platforms (Gamification-as-a-Service?) for implementing game mechanics into new or existing applications in the enterprise – just imagine a gamified tech startup!

It won't be successful or suitable for every application or domain, but there has been enough demonstrable success to show that, when implemented correctly, gamified applications are hugely productive for both the user and the business which mobilises them. As long as developers are not building in gamification for gamification's sake, and the mechanics are intelligently thought out with clever incentive systems in place, we may see an even greater incorporation of it in the future. As with any great game, the focus needs to be on the gameplay as much as the outcome, so that applications benefit both the player and the business in equal measure.

A Maker's Journey into 3D printing

Travis Ripley
30 Jun 2014
14 min read
If you've visited any social media outlets, you've probably come across a never-ending list of new words and terms—the Internet of Things, technological dissonance, STEM, open source, tinkerer, maker culture, constructivism, DIY, fabrication, rapid prototyping, techshop, makerspace, 3D printers, Raspberry Pi, wearables, and more. These terms are typically used to describe a Maker, or they have something to do with Maker culture. Follow along to learn about my particular journey into the Maker culture, specifically in the 3D printing space.

The rise of the maker culture

Maker culture is on the rise. This is a culture that thrives at the intersection of technology and innovation at the informal, social, and peer-led level. The interactions of skilled people driven to share their knowledge with others, develop new pathways, and create solutions for current problems have built a new community. I am proud to say that I am a Maker-Tinkerer (or that I have some form of motivated ADHD that drives me to engage in engineering-oriented pursuits). My journey started at ground zero while studying 3D design and development.

A maker's journey

I knew there was more that I could do with my knowledge of rendering the three-dimensional surface of an object. Early on, however, I only thought about extending my knowledge for entertainment purposes, such as video games. I didn't understand the power of having this knowledge and the way it could help create real-world solutions. Then, I came across an issue of Make Magazine and it changed my mental state overnight—I had to create tangible things. Now that I had the information to send me in the right direction, I needed an outlet. An industry friend mentioned a local hackerspace, known as Deezmaker, which was holding informational workshops about 3D printing. So, I signed up for an introductory class. I had no clue what I was getting myself into as I crossed that first threshold, but by that evening, I was versed in topics that I thought were far beyond my mental capabilities. I was hooked. The workshop consisted of part lecture and part hands-on material. I learned that you can't just start using a 3D printer. You actually need to have some basic understanding of the manufacturing process, like understanding that layers of material need to be successfully laid down in order to move on to the next stage in the process. Being the curious, impatient, and overly enthusiastic man-child that I am, this was the most difficult part for me, as I couldn't wait to engage in this new world.

3D printing

Almost two years later, I am fully immersed in the world of 3D printing. I currently have a 3D printer at home (which is almost obsolete, by today's standards) and I have access to multiple printers at a local techshop/makerspace known as MakerPlace here in San Diego, CA. I use this technology regularly, since I have changed direction in my career as a 3D artist towards manufacturing engineering and rapid prototyping. I am currently attending a Machine Technology/Engineering program at San Diego City College (for more info on the best machining program in the country, visit http://www.JCbollinger.com). The benefit for me of using 3D printers is rapidly producing iterations of prototypes for my clientele, since most people feel more reassured in the process if they have tangible, solid objects and are more likely to trust you as a designer.
I feel that having access to this also helps me complete more jobs successfully, given that turnaround times for updates can be as little as a few hours, rather than days or weeks (depending on the size/scale). Currently I have a few recurring clients who want updates often, and by showing them my progress, the iterations are fewer and I can move on to the next project with no hesitation, given how we can see design updates rapidly and minimize the flaws and failures. I produce prototypes for all industries: toys, robotics, vehicles, and so on. Think of it as producing solutions, and how you can either make something better or simpler. Entertaining the idea of a challenge and solving these challenges has benefits, as with each new design job you have all these tangible objects to look at and examine.

As a hobbyist, the technology has made it easy to produce new or even obsolete items. For example, I love Transformers, but you know how plastic does two things very well: it breaks and gets lost. I came across a forum where guys were distributing the programs for the arm extrusions that break (no one likes gluing), so I printed the parts that had been missing for decades, rebuilt the armature that had for so long been displaced, and then, like magic, I felt like I was six years old again with a perfectly working Transformer.

Here are a few things that I've learned along the way. 3D printing is also known as additive manufacturing. It is the process of producing three-dimensional objects in which successive layers of varied material are extruded by computer-controlled equipment that is fed information from 3D models. These models are derived from a data source that processes the information into machine language. The plastic extrusion technology that is now slowly becoming more popular is known as Fused Deposition Modeling (FDM). This process was developed in the early 1990s for the application of job production, mass production, rapid prototyping, product development, and distributed manufacturing. The principle of FDM is that material is laid down in layers. There are many other processes, such as Selective Heat Sintering (SHS), Selective Laser Sintering (SLS), Stereolithography (SLA), and Plaster-Based 3D Printing (PP), to name a few. We will keep it simple here and go over the FDM process for now, as most of the printers at the hobbyist level use this process.

The FDM process significantly affected roles within the production and manufacturing industries (with one person wearing multiple hats as engineer, designer, and operator) as growth made the technology more affordable to an array of industrial fields. In contrast, CNC machining, which is a subtractive manufacturing process, has been incorporated naturally to work alongside it in this development. The influence of this technology on the industrial and manufacturing industries created exposure to new methods of production at exponential rates, for example automation. For the home-use and hobbyist market, the 3D printers produced by the open source/open hardware initiative can be traced directly or indirectly to the RepRap.org project, a free to low-cost desktop 3D printer that is self-replicating. That being said, you can thank them for starting this revolution. By getting involved in this community you are benefiting everyone by spreading the spark that will continue to create new developments in manufacturing and consumer technology.
The FDM process can be done with a multitude of materials; the two most popular options at this time are PLA (polylactic acid) and ABS (acrylonitrile butadiene styrene). Both PLA and ABS have pros and cons, depending upon your model structure. The future use of the print, client requests, and an understanding of the fundamental differences between the two can help you choose one over the other or, in the case of owning a printer with two extruders, decide how they can be combined. In some cases, PVA (polyvinyl alcohol) is also used as a support material (in the case of two extruders); unlike PLA or ABS, which if used as support material will require cleanup when finishing a print, PVA is water soluble, so you can soak your print in warm water and the support structures will dissolve away.

PLA (polylactic acid) is a strong biodegradable plastic that is derived from renewable resources: cornstarch and sugarcane. It is more resistant to UV rays than ABS (so you will not see fading with your prints). Also, it sticks better than any other material to the surface of your hotplate (minimal warping), which is a huge advantage. It prints at around 180°C; it can ooze, and if your nozzle is loaded it will drip, which also means that leaving a print in your car on a hot day may cause damage.

ABS (acrylonitrile butadiene styrene) is stronger than PLA, but is non-biodegradable; it is a synthetic polymer whose acrylonitrile component is produced from propylene and ammonia. This means it has more rigidity than PLA, but is also more flexible. It is a colorfast material (which means it will hold its color for years). It prints at around 220°C, and is amorphous and therefore has no true melting point, so a heated bed is needed, as warping can and will occur (usually because the bed is not hot enough—at least 80°C—or the Z axis is not calibrated correctly).

Printer options

For the hobbyist maker, there are a few 3D printer options to consider. Depending upon your skill level, your needs, budget, and commitments, there is a printer out there for you. The least expensive, smallest, and most straightforward printer available on the market is the Printrbot Simple Maker's 3D Printer. Retailing at $349.99, this printer comes in a kit that includes the bare necessities you need to get started. It is capable of printing a 4" cube. You can also purchase it already assembled for a little extra. The kit and PLA filament are available at www.makershed.com.

The 3D printer I started on, personally own, and recommend is the Afinia H480 3D printer. Retailing at $1299.99, this printer provides the easiest setup right out of the box. It comes fully assembled, with a heated platform to aid adhesion and reduce the chance of warping, and can print up to a 5" cube. It also comes loaded with its own native 3D software, where you can manipulate your .STL files. It has an automated utility to calibrate the printer's build platform with the printhead, and it automatically generates any support material and the "raft", which is the base support for your prints. There is so much more to it, but as I said, I recommend this for beginners; it is also available through www.makershed.com.

For the person who wants to print at the hobbyist and semi-professional level, consider the next generation in 3D printing, the MakerBot Replicator. It is quick and efficient.
Retailing at $2899.00, this machine has an extremely high layer resolution and an LCD display, and if you run out of filament (ABS/PLA), there is no need to start over; the machine will alert you via computer or smartphone that a replacement is needed. There are many types of 3D printers available, with options including open source, open hardware, filament types, delta-style mechanics, single/double extruders, and the list goes on. My main suggestion is to try before you buy, either at a local hackerspace or a local Maker Faire. It's a worthwhile investment that pays for itself.

Choosing your tools

Before you begin, it's also important to choose your design tools. There are many great open source tools to choose from; here are some of my favorites. When it comes to design tools, there is a multitude of cost-effective and free tools out there to get you started. First off, the 3D printing process has a required "tool chain" that must be followed in order to complete the process, roughly broken down into three parts:

CAD (Computer Aided Design): Tools used to design 3D parts for printing. There are very few interchangeable CAD file formats, which are sometimes referred to as parametric files. The most widely used interchangeable mesh file format is .STL (Stereolithography). This format is the most important, as it is used by CAM tools.

CAM (Computer Aided Manufacturing): Tools handling the intermediate step of translating CAD files into a machine-friendly format.

Firmware for electronics: This is what runs the onboard electronics of the printer, and is the closest to actual programming; the build process is known as cross compiling.

Here are my best picks in each category, known as FLOSS (free/libre/open source software). FLOSS CAD tools (for example OpenSCAD, FreeCAD, and HeeksCAD) for the most part create parametric files that usually represent parts or assemblies in terms of CSG (Constructive Solid Geometry), which basically represents a tree of Boolean operations performed on primitive shapes such as cubes, spheres, cylinders, and pyramids. These are modified numerically and with great precision; the geometry is a mathematical representation that holds up no matter how much you zoom in or out.

Another category of CAD tool represents the parts as a 3D polygon mesh and is for the most part used for special effects in movies or video games (CG). These tools are also a little more user friendly; examples would be Autodesk Maya and Autodesk 3ds Max (these choices are subscription/retail-based). There are also open source and free tools in this category, such as Autodesk 123D, Google SketchUp, and Blender; I suggest the latter options, since they are free and user friendly, and they are much easier to learn since their options are narrowed down strictly to producing 3D meshes. If you need more precision you should look at OpenSCAD (my favorite), as it was created directly for making physical objects rather than game design or animation.
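To show what that CSG style looks like in practice, here is a minimal OpenSCAD sketch (the dimensions are invented for illustration): a part built by subtracting a cylinder (the bolt hole) from a cube.

    // A simple bracket blank: subtract a cylinder from a cube.
    difference() {
        cube([40, 20, 10], center = true);                 // the base block, in mm
        cylinder(h = 12, r = 5, center = true, $fn = 64);  // the hole, slightly taller than the block
    }

Change a single number and the whole part regenerates exactly; that numeric precision is what makes these tools a better fit for physical parts than mesh modelers.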
Most of the slicing software available is open source. Some examples are Slic3r (the most popular, with an ease of use that makes it recommended for beginners), Skeinforge (dated, but still one of the best), Cura, and MatterSlice. There is also great closed source slicing software out there; one in particular is KISSlicer, whose pro version supports multi-extruder printing. The next stop after slicing is the software that actually drives the machine:

• A G-Code interpreter, which breaks down each line of the code into electronic signals.
• A G-Code sender, which sends the signals to the motors on the printer to tell them how to move.

This software is usually directly linked to an EMC (Electronic Machine Controller), which controls the printer directly. It can also be linked to an integrated hardware interface that has a G-Code interpreter built in and loads the G-Code directly from a memory card (SD card/USB).

The last stop is the firmware, which controls the electronics onboard the printer. For the most part, the CPUs that control these machines are simple microcontrollers, usually Arduino-based, and the firmware is compiled using the Arduino IDE. This may sound time consuming, but once you have gone through the tool chain a few times, it becomes second nature, just like driving a car with a manual transmission.

Where to go from here?

When I finished my first hackerspace workshop, I had been assimilated into a culture that I was not only benefiting from personally, but one that I could share my knowledge with and contribute to. I have received far more from my journey as a maker than from any previous endeavor. To anyone who is curious and mechanically inclined (or not), who believes they have a solution to a problem, I challenge you. I challenge you to make the leap into this culture: join a hackerspace, attend a Maker Faire, and enrich your life and the lives of others.

About the Author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the Maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.

What is ZeroVM?

Lars Butler
30 Jun 2014
6 min read
ZeroVM is a lightweight virtualization technology based on Google Native Client (NaCl). While it shares some similarities with traditional hypervisors and container technologies, it is unique in a number of respects. Unlike KVM and LXC, which provide an entire virtualized operating system environment, it isolates single processes and provides no operating system or kernel. This allows instances to start up in a very short time: about five milliseconds. Combined with a high level of security and zero execution overhead, ZeroVM is well suited to ephemeral processes running untrusted code in multi-tenant environments. There are of course some limitations inherent in the design. ZeroVM cannot be used as a drop-in replacement for something like KVM or LXC. These limitations, however, were deliberate design decisions, necessary in order to create a virtualization platform specifically for building cloud applications.

How ZeroVM is different from other virtualization tools

Blake Yeager and Camuel Gilyadov gave a talk at the 2014 OpenStack Summit in Atlanta which summed up nicely the main differences between hypervisor-based virtual machines (KVM, Xen, and so on), containers (LXC, Docker, and so on), and ZeroVM. Here are the key differences they outlined:

|              | Traditional VM | Container       | ZeroVM      |
|--------------|----------------|-----------------|-------------|
| Hardware     | Shared         | Shared          | Shared      |
| Kernel/OS    | Dedicated      | Shared          | None        |
| Overhead     | High           | Low             | Very low    |
| Startup time | Slow           | Fast            | Fast        |
| Security     | Very secure    | Somewhat secure | Very secure |

Traditional VMs and containers provide a way to partition and schedule shared server resources for multiple tenants. ZeroVM accomplishes the same goal using a different approach and with finer granularity. Instead of running one or more application processes in a traditional virtual machine, applications written for ZeroVM must be decomposed into microprocesses, and each one gets its own instance. The advantage in this case is that you avoid long-running VMs/processes which accumulate state (leading to memory leaks and cache problems). The disadvantage, however, is that it can be difficult to port existing applications. Each process running on ZeroVM is a single stateless unit of computation (much like a function in the “purely functional” sense; more on that to follow), and applications need to be structured specifically to fit this model. Some applications, such as long-running server applications, would arguably be impossible to re-implement entirely on ZeroVM, although some parts could be abstracted away to run inside ZeroVM instances. Applications that are predominantly parallel and involve many small units of computation are better suited to run on ZeroVM.

Determinism

ZeroVM provides a guarantee of functional determinism. What this means in practice is that with a given set of inputs (parameters, data, and so on), outputs are guaranteed to always be the same. This works because there are no sources of entropy. For example, the ZeroVM toolchain includes a port of glibc with a custom implementation of time functions, such that time advances in a deterministic way for CPU and I/O operations. No state is accumulated during execution and no instances can be reused. The ZeroVM Run-Time environment (ZRT) does provide an in-memory virtual file system which can be used to read/write files during execution, but all writes are discarded when the instance terminates unless an output “channel” is used to pipe data to the host OS or elsewhere.

Channels and I/O

“Channels” are the basic I/O abstraction for ZeroVM instances. All I/O between the host OS and ZeroVM must occur over channels, and channels must be declared explicitly in advance. On the host, a channel can map to a file, character device, pipe, or socket. Inside an instance, all channels are presented as files that can be written to/read from, including devices like stdin, stdout, and stderr. Channels can also be used to connect multiple instances together to create arbitrary multi-stage job pipelines. For example, a MapReduce-style search application with multiple filters could be implemented on ZeroVM by writing each filter as a separate application/script and piping data from one to the next.
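To make that concrete, here is a minimal sketch of what one such filter stage could look like in Python (which ZeroVM supports; see below). This is ordinary, hypothetical pipeline code rather than a ZeroVM-specific API: the point is simply that each stage reads from one channel (stdin), writes to another (stdout), and carries no state between runs.

```python
import sys

# One stateless filter stage in a pipeline: pass through only the
# lines that contain the search term. SEARCH_TERM is a hypothetical
# parameter; a real deployment would supply it via a channel or
# configuration rather than hard-coding it.
SEARCH_TERM = "error"

def main():
    for line in sys.stdin:
        if SEARCH_TERM in line:
            sys.stdout.write(line)

if __name__ == "__main__":
    main()
```

Chaining several such stages together, each in its own instance, gives you the multi-stage pipeline described above.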
Security

ZeroVM has two key security components: static binary validation and a limited system call API. Static validation occurs before “untrusted” user code is executed, to ensure that there are no accidental or malicious instructions that could break out of the sandbox and compromise the host system. Binary validation in ZeroVM is largely based on the NaCl validator. (For more information about NaCl and its validation, you can read the following whitepaper: http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/34913.pdf.) To further lock down the execution environment, ZeroVM only supports six system calls via a "trap" interface: pread, pwrite, jail, unjail, fork, and exit. By comparison, containers (LXC) expose the entire Linux system call API, which presents a larger attack surface and more potential for exploitation.

ZeroVM is lightweight

ZeroVM is very lightweight: it can start in about five milliseconds. After the initial validation, program code is executed directly on the hardware, without interpretation overhead or hardware virtualization.

It's easy to embed in existing systems

The security and lightweight nature of ZeroVM make it ideal to embed in existing systems. For example, it can be used for arbitrary data-local computation in any kind of data store, akin to stored procedures. In this scenario, untrusted code provided by any user with access to the system can be executed safely. Because inputs and outputs must be declared explicitly upfront, the only remaining concerns are data access rules and quotas for storage and computation. Contrasted with a traditional model, where storage and compute nodes are separate, data-local computing can be a more efficient model when the cost of transferring data over the network to/from compute nodes outweighs the actual computation time itself. ZeroVM has already been integrated with OpenStack Swift using ZeroCloud (middleware for Swift). This turns Swift into a “smart” data store, which can be used to scale parallel computations (such as multi-stage MapReduce jobs) across large collections of objects.

Language support

C and C++ applications can run on ZeroVM, provided that they are cross-compiled to NaCl using the provided toolchain. At present there is also support for Python 2.7 and Lua.

Licensing

All projects under the ZeroVM umbrella are licensed under Apache 2.0, which makes ZeroVM suitable for both commercial and non-commercial applications (the same as OpenStack).

Progression of a Maker

Travis Ripley
30 Jun 2014
6 min read
There’s a natural path for the education of a maker that takes place within techshops and makerspaces. It begins in the world of tools you may already know, like hand tools or power tools, and quickly creeps into an unknown world of machines suited to bring any desire to fruition. At first, taking any classes may seem like a huge investment, but the payback you receive from the knowledge is priceless. I can’t even put a price on the payback I’ve earned from developing these maker skills, but I can tell you that the number of opportunities is overflowing. I know it doesn’t sound like much, but the opportunities to grow and learn also increase your connections, and that’s what helps you to create an enterprise.

Your options for education all depend upon what is available to you locally. As maker culture has grown, it has influenced advances in open source and open hardware, and it has had a big impact on the trend of creating incubators, startups, techshops, and makerspaces on a global scale.

When I first began my education in the makerspace, I was worried that I’d never be able to learn it all. I started small by reading blogs and magazines, and eventually I decided to take a chance and sign up for a membership at our local makerspace: http://www.Makerplace.com. There I was given access to a variety of tools that would be too bulky and loud for my house and workspace, not to mention extremely out of my price range. When I first started at the Makerplace, I was overwhelmed by the amount of technology available to me, and daunted by the degree of difficulty involved in even using these machines. But you can only learn so much from videos and books; the real trial begins when you put that knowledge to work with hands-on experience. I was ready to get some experience under my belt.

The degree of difficulty varies, obviously, with experience and how well one grasps the concepts. I started by taking a class that offers a brief introduction to a topic and some guidance from an expert. After that, you learn on your own and will break things such as materials, end mills, electronic components, and lots of consumables (I do not condone breaking fingers, body parts, or huge expensive tools). This stage is key, because once you understand what can and will go wrong, you’ll undeniably want more training from an expert. And as the saying goes, “practice makes perfect”, which is the key to mastery. As you begin your education, it will become apparent to you which classes need to come next. The best place to start is learning the software necessary to develop your tangible goods. For those of you who are interested, I will list the tools in the suggested order I learned them, from ground zero.

I suggest the first tools to learn are the laser, waterjet, and plasma CNC cutters, as they can precisely cut shapes out of sheet-type material. The laser is the easiest to learn, and can be used not only to cut, but also to engrave wood, acrylics, metal, and other sheet-type materials. Most likely the makerspaces and hackerspaces you have access to will have one available. Access to the waterjet and plasma CNC machines will depend upon the workshop, since they require more room, along with vapor and fume containment equipment.

The next set of tools, which come with a bigger learning curve, are the multi-axis CNC mills and routers and the conventional mill and lathe.
CNC (Computer Numerical Control) is the automation of machine tools. These processes of controlled material removal are today collectively known as subtractive manufacturing: you take unfinished workpieces made of materials such as metal, plastic, ceramic, or wood and create 2D/3D shapes, which can be made into tools or finished as tangible objects. The CNC routers perform the same process, but they use sheet materials, such as plywood, MDF, and foam.

The first time I took a tour of the Makerplace, these machines looked so intimidating. They were big and loud, and I had no clue what they were used for. It wasn’t until I gained further insight into manufacturing that I understood how valuable these tools are. The learning curve is gradual, since there are multiple moving parts and operations happening at once. I took the CNC fundamentals class, which was required before operating any of these machines, and then completed the conventional mill and lathe classes before moving on to the CNC machines. I suggest taking the steps in this order, since understanding the conventional process plays an integral role in how you design your parts to be machined on the CNC machines. I found out the hard way why end mills are called consumables, as I scrapped many parts and broke many end mills. This is a great skill to have, as it directly complements the additive processes, such as 3D printing.

Once you have a grasp of the basics of automated machinery, the next step is to learn the welding and plasma cutting equipment and the metal forming tools. This skill opens many possibilities and opportunities to makers, such as making and customizing frames, chassis, and jigs. Along the way you will also learn how to use the metal forming tools to create and craft three-dimensional shapes from thin-gauge sheet metal.

And last but not least, depending on how far you want to take your learning, there are the tools driven by large air compressors, such as bead blasters and paint sprayers, along with the metal forming tools that require constant pressure. There is also high-temperature equipment, such as furnaces, ovens, and acrylic sheet benders, and my personal new favorite, the vacuum formers, which bend and form plastic into complex shapes.

With all of these new skills under my belt, a network of like-minded individuals, and a passion for knowledge in manufacturing and design, I was able to produce and create products at a pro level, which totally changed my career. Whatever your curious intentions may be, I encourage you to take on a new challenge, such as learning manufacturing skills, and you will be guaranteed a transformative look at the world around you, from consumer to maker.

About the Author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the Maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.

Icon Haz Hamburger

Ed Gordon
30 Jun 2014
7 min read
I was privileged enough recently to be at a preview of Chris Chabot’s talk on the future of mobile technology. It was a little high-level (conceptual), but it was great at getting the audience thinking about the implications that “mobile” will have in the coming decades: how it will impact our lives, how it will change our perceptions, and how it will change physically.

The problem, however, is that mobile user experience just isn’t ready to scale yet. The biggest challenge facing mobile isn’t its ability to handle an infinite increase in traffic; it’s how we navigate this new world of mobile experiences. Frameworks like Bootstrap et al have enabled designers to make content look great on any platform, but finding your way around, browsing, on mobile is still about as fun as punching yourself in the face. In a selection of dozens of applications, I’m in turns required to perform a ballet of different digital interface interactions: pressing, holding, sliding, swiping, pulling (but never pushing?!), and dragging my way to finding the article of choice.

The hamburger eats all

One of the biggest enablers of scalable user interface design is going to be icons, right? A picture paints a thousand words. An icon that can communicate “Touch me for more…” is more valuable in the spatio-prime real estate of the mobile web than a similarly wordy button. Of course, when the same pictures start meaning a different thousand words, everything starts getting messy. The best example of icons losing meaning is the humble hamburger icon. Used by so many sites and applications to achieve such different end goals, it is becoming unusable. Here are a few examples from fairly prominent sites:

• Google+: Opens a reveal menu, which I can also open by swiping left to right.
• SmashingMag: Takes me to the bottom of the page, with no faculty to get back up without scrolling manually. The reason for this remains largely unclear to me.
• eBay: Changes the view of listed items. Feels like the Wilhelm Scream of UI design.
• LinkedIn: Drop-down list of search options, no menu items.
• IGN: Reveal menu which I can only close by pressing a specific part of the “off” page. Can’t slide it open.

There’s an emerging theme here: the icon is normally related to content menus (or search), and it normally works by some form of CSS trickery that either drops down or reveals the “under” menu. But this is generally speaking. There’s no governance, and it introduces more friction to the cross-site browsing experience.

Compare the hamburger to the humble magnifying glass: How many people have used a magnifying glass? I haven’t. Despite this setback, through consistent use of the icon with consistent results, we’ve ended up with a standard pattern that increases the usability and user experience of a site. Want to find something? Click the magnifying glass.

The hamburger isn’t the only example of poorly implemented navigation; it’s just indicative of the distance we still have to cover to get to a point where mobile navigation is intuitive. The “Back”, “Forward”, and “Refresh” buttons have been a staple of browsers since Netscape Navigator; they have aided the navigation of the Web as we know it. As mobile continues to grow, designers need similarly scalable icons, with consistent meaning. This may be the hamburger in the future, but it’s not at that point yet.

Getting physical, or, where we discuss touch

Touch isn’t yet fully realized on mobile devices. What can I actually press?
Why won’t Google+ let me zoom in with the “pinch” function? Can I slide this carousel, or not? What about off-screen reveals? Navigating with touch at the moment really feels like being a beta tester for websites: trying things that you know work on other sites to see if they work here. This, as a consumer, isn’t the basis of a good user experience. Just yesterday, I realised I could switch tabs in Android Chrome by swiping the grey nav bar. I found that by accident.

The one interaction that has come out with some value is the “pull to refresh” action. It’s intuitive, in its own way, and it’s used as a standard way of refreshing content across Facebook, Twitter, and Google+: any site that has streamed content. People can use this function without thinking about it and without many visual prompts, now that it’s remained the standard for a few years. Things like off-screen reveals, carousel swiping, and even how we highlight text are still so in flux that it becomes difficult to know how to achieve a given action from one site to the next. There’s no cross-application consistency that allows me to navigate my own browsing experience with confidence. In cases such as Android Chrome, I’m actually losing functionality that developers have spent hours (days?) creating.

Keep it mobile, stupid

Mobile commerce is a great example of forgetting the “mobile” bit of browsing. Let’s take Amazon. If I want to find an Xbox 360 RPG, it takes me seven screens and four page loads to get there. I have to actually load up a list of every game, for every console, before I can limit it to the console I actually own. Just give me the option to limit my searches from the home page. That’s one page load and a great experience (cheques in the post please, Amazon).

As a user, there are some pretty clear axioms for mobile development:

• Browser > app. Don’t make me download an app if it’s going to require an Internet connection in the future. There’s no value in that application.
• Keep page calls to a minimum. Don’t trust my connection. I could be anywhere. I am mobile.
• Mobile is still browsing. I don’t often have a specific need; if I do, Google will solve that need. I’m at your site to browse your content.

Understanding that mobile is its own entity is an important step; thinking about connection and page calls is as important as screen size. Tools such as Hood.ie are doing a great job of getting developers and designers to think about “offline first”. It’s not ready yet, but it is one possible solution to under-the-hood navigation problems.

Adding context

A lack of governing design principles in the emergent space of mobile user experience is limiting its ability to scale to the place we know it’s headed. Every new site feels like a test, with nothing other than how to scroll up and down being hugely apparent. This isn’t to say all sites need to be the same, but for usability and accessibility not to be impacted, they should operate along a few established protocols. We need more progressive enhancement and collaboration in order to establish a navigational framework that the mobile web can operate in.

Designers work in the common language of signification, and they need to be aware that they all work in the same space. When designing that hip new product, remember that visitors aren’t arriving at your site in isolation; they bring with them the great burden of history, and all the hamburgers they’ve consumed since. T.S. Eliot said that “No poet, no artist of any art, has his complete meaning alone.
His significance, his appreciation is the appreciation of his relation to the dead poets and artists”. We don’t work alone. We’re all in this together.

Pixar for All: RenderMan Made Free to Everyone

Julian Ursell
30 Jun 2014
3 min read
Render me excited. Following the announcement that Pixar will be making available a non-commercial version of its 3D visual effects and rendering software, there’s a resounding buzz among the creative populace about the opportunity to play around with the RenderMan sandbox. Just reflect on that: you get to use the technology Pixar used to make Toy Story, Wall-E, and Monsters Inc., the technology responsible for the vast majority of the incredible visual trickery of modern cinema. The thought of being able to recreate the astonishing visual environments and landscapes produced by Pixar’s cutting-edge rendering software, recognized so unmistakeably around the world, is a mouth-watering prospect. I’m looking forward to messing around with the software and producing several poor man’s versions of Pixar’s most famous films. I’ll also make Cars into a good film. (Pixar is mobilizing lawyers right now to make us redact this.)

It’s not just the general availability that has people animated about RenderMan. Along with the free-to-all announcement came details of an overhaul of the software, which vastly enhances its rendering mechanics and capabilities. RIS is the fast new rendering architecture under the hood; it specifically enhances global illumination rendering and ray-traced scenes that work with heavy geometry. The classic rendering architecture, REYES, also remains available, giving artists the option to work with either. It’s a wonderful bonus that at the same time as being made freely available, RenderMan has also been supercharged, giving amateur (and professional) visual effects artists an immensely powerful palette with which to develop animation projects. If you’re someone who’s never used animation software before, it’s probably like being given the keys to a Ferrari without having a driving license. And let’s be honest, that’s not an opportunity anyone would pass up, right?

Threaded into this, the price of the commercial version of RenderMan has been slashed, which may tempt users who have developed animations with the free version to go a step further and purchase the paid product for the purposes of industrial distribution. If you have the inclination (and the expertise), you too could be producing the visual effects that have revolutionized cinema over the past 25 years or so. Okay, maybe I’m selling Pixar’s line for them, but this is a hugely progressive move for aspiring animators, who will jump at the opportunity to experiment with technology that is still blazing a trail. It’s potentially a gateway for people with creative talent and flair to showcase their abilities and get into the industry of animation and visual effects, whether that’s for cinema, television, or advertising.

I’ve registered for a free license for when it’s made available in August. I may not ever be a VFX wizard, but I am, like a vast number of others, nonetheless intrigued by the opportunity to try my hand with RenderMan, even if the end product is a mangled Ferrari.

Notes from a JavaScript Learner

Ed Gordon
30 Jun 2014
4 min read
When I started at Packt, I was an English grad with a passion for working with authors and editorial rule, who really wanted to get to work structuring great learning materials for consumers. I’d edited the largest Chinese-English dictionary ever compiled without speaking a word of Chinese, so what was tech but a means to an end that would allow me to work on my life’s ambition?

Fast forward two years, and hours of independent research and reading Hacker News, and I’m more or less able to engage in a high-level discussion about any technology in the world, from enterprise-class CMIS to big data platforms. I can identify their friends and enemies, who uses what, why they’re used, and what learning materials are available on the market. I can talk in a more nebulous way about their advantages, and how they ”revolutionized” that specific technology type. But, other than hacking CSS in WordPress, I can’t use these technologies. My specialization has always been in research, analysis, and editorial know-how.

In April, after deploying my first WordPress site (exploration-online.com), I decided to change this. Being pretty taken with Python, and having spent a lot of time researching why it’s awesome (mostly watching Monty Python YouTube clips), I decided to try it out on Codecademy. I loved the straightforward syntax, and was getting pretty handy at the simple things. Then Booleans started (a simple premise), and I realised that Python was far too data intensive. Here’s an example:

• Set bool_two equal to the result of -(-(-(-2))) == -2 and 4 >= 16**0.5
• Set bool_three equal to the result of 19 % 4 != 300 / 10 / 10 and False

This is meant to explain to a beginner how the Boolean operator “and” returns True when the statements on either side are true. This is a fairly simple thing to get, so I don’t really see why they need to use expressions that I can barely read, let alone compute...
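For the record, here’s how those expressions actually shake out; a quick worked sketch in plain Python, for anyone who, like me, can’t do this in their head:

```python
# The first exercise, one step at a time.
print(-(-(-(-2))))        # four minus signs on 2 give: -2, 2, -2, 2
print(-(-(-(-2))) == -2)  # 2 == -2 -> False
print(4 >= 16**0.5)       # 16**0.5 is 4.0, so 4 >= 4.0 -> True
bool_two = -(-(-(-2))) == -2 and 4 >= 16**0.5
print(bool_two)           # False and True -> False

# The second exercise.
print(19 % 4)             # 3
print(300 / 10 / 10)      # 3 (3.0 in Python 3)
bool_three = 19 % 4 != 300 / 10 / 10 and False
print(bool_three)         # (3 != 3) and False -> False
```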
I quickly decided Python wasn’t for me, and jumped ship to JavaScript. The first thing I realised was that all programming languages are pretty much the same. Variables are more or less the same. Functions do a thing. The syntax changes, but it isn’t like changing from English to Spanish; it’s more like changing from American English to British English. We’re all talking the same language, there are just slightly different rules.

The second thing I realized was that JavaScript is going to be entirely more useful to me in the future than Python. As the lingua franca of the Internet and the browser, it’s going to be more and more influential as adoption of browser apps over native apps increases. I’ve never been a particularly “mathsy” guy, so Python machine learning isn’t something I’m desperate to master. It also means that I can, in the future, work with all the awesome tools that I’ve spent time researching: MongoDB, Express, Angular, Node, and so on.

I bought Head First JavaScript Programming (Eric T. Freeman and Elisabeth Robson, O’Reilly Media), and aside from the 30 different fonts used, which are making my head ache, I’m finding the pace and learning narrative far better than the various free solutions I’ve used, and I actually feel I’m starting to progress. I can read things now and hack at stuff in W3Schools examples. I still don’t know what everything does, but I no longer feel like I’m standing reading a sign in a completely foreign language.

What I’ve found is that books are great at reducing the copy/paste mindset that creeps into online learning tools. C/P I think is fine when you actually know what it is you’re copying. To learn something, and be comfortable using it into the future, I want to be able to say that I can write it when needed. So far, I’ve learned how to log the entire “99 Bottles of Beer on the Wall” to the console. I’ve rewritten a 12-line code block in 6 lines (felt like a winner). I’ve made some boilerplate code that I’ve no doubt I’ll be using for the next dozen years. All in all, it feels like progress. It’s all come from books.

I’ll be updating this series regularly as I dip my toe into the hundreds of tools that JavaScript supports within the web developer’s workflow, but for now I’m going to crack on with the next chapter. For all things JavaScript, check out our dedicated page! Packed with more content, opinions, and tutorials, it’s the go-to place for fans of the leading language of the web.

The War on Data Science: Python versus R

Akram Hussain
30 Jun 2014
7 min read
Data science

The relatively new field of data science has taken the world of big data by storm. Data science gives valuable meaning to large sets of complex and unstructured data, with a focus on concepts like data analysis and visualization. Meanwhile, in the field of artificial intelligence, a valuable concept known as machine learning has been adopted by organizations and is becoming a core area for many data scientists to explore and implement. In order to fully appreciate and carry out these tasks, data scientists are required to use powerful languages. R and Python currently dominate this field, but which is better, and why?

The power of R

R offers a broad, flexible approach to data science. As a programming language, R focuses on allowing users to write algorithms and computational statistics for data analysis, and it can be very rewarding to those who are comfortable using it. One of the greatest benefits R brings is its ability to integrate with other languages like C++, Java, and C, and with tools such as SPSS, Stata, and Matlab. R’s rise to prominence as the most powerful language for data science has been supported by its strong community and the more than 5,600 packages available. However, R is very different from other languages; it’s not as easily applicable to general programming (not to say it can’t be done). R’s strength, its ability to communicate with every data analysis platform, also limits its usefulness outside that category. Game dev, web dev, and so on are all achievable, but there’s just no benefit to using R in these domains. As a language, R is also difficult to adopt, with a steep learning curve, even for those who have experience with statistical tools like SPSS and SAS.

The violent Python

Python is a high-level, multi-paradigm programming language. Python has emerged as one of the more promising languages of recent times thanks to its easy syntax and interoperability with a wide variety of ecosystems. More interestingly, Python has caught the attention of data scientists over the years; thanks to its object-oriented features and very powerful libraries, Python has become a go-to language for data science, with many arguing it has taken over from R. However, like R, Python has its flaws too. One of the drawbacks of using Python is its speed. Python is a slow language, and one of the fundamentals of data science is speed! And while Python is very good as a programming language, it’s a bit of a jack of all trades and master of none. Unlike R, it doesn’t purely focus on data analysis, though it has impressive libraries for carrying out such tasks.

The great battle begins

In comparing the two languages, we will go over four fundamental areas of data science and discuss which is better: data mining, data analysis, data visualization, and machine learning.

Data mining: As mentioned, one of the key components of data science is data mining. R seems to win this battle; in the 2013 Data Miners Survey, 70% of data miners (of the 1200 who participated in the survey) use R for data mining. However, it could be argued that you wouldn’t really use Python to “mine” data, but rather use the language and its libraries for data analysis and the development of data models.

Data analysis: R and Python boast impressive packages and libraries. Python’s NumPy, pandas, and SciPy libraries are very powerful for data analysis and scientific computing. R, on the other hand, is different in that it doesn’t just offer a few packages; the whole language is formed around analysis and computational statistics. An argument could be made for Python being faster than R for analysis, and it is cleaner for coding over sets of data. However, I have noticed that Python excels on the programming side of analysis, whereas for statistical and mathematical programming R is a lot stronger, thanks to its array-oriented syntax. The winner here is debatable; for mathematical analysis, R wins. But for general analysis and programming clean statistical code more related to machine learning, I would say Python wins.
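As a small taste of the Python side, here is a minimal sketch of the kind of analysis pandas makes clean. It assumes you have pandas installed; the CSV file and its column names are hypothetical.

```python
import pandas as pd

# Load a (hypothetical) CSV of sales records: region, product, revenue.
df = pd.read_csv("sales.csv")

# Mean and total revenue per region, largest totals first.
summary = (df.groupby("region")["revenue"]
             .agg(["mean", "sum"])
             .sort_values("sum", ascending=False))
print(summary)
```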
Data visualization: the “cool” part of data science. The phrase “a picture paints a thousand words” has never been truer than in this field. R boasts its ggplot2 package, which allows you to write impressively concise code that produces stunning visualizations. However, Python has Matplotlib, a 2D plotting library that is equally impressive, with which you can create anything from bar charts and pie charts to error charts and scatter plots. The overall consensus is that R’s ggplot2 gives data models a more professional look and feel. Another one for R.

Machine learning: it knows the things you like before you do. Machine learning is one of the hottest things to hit the world of data science. Companies such as Netflix, Amazon, and Facebook have all adopted the concept. Machine learning is about using complex algorithms and data patterns to predict user likes and dislikes, making it possible to generate recommendations based on a user’s behaviour. Python has a very impressive library, scikit-learn, to support machine learning. It covers everything from clustering and classification to building your very own recommendation systems. However, R has a whole ecosystem of packages specifically created to carry out machine learning tasks. Which is better for machine learning? I would say Python’s strong libraries and OOP syntax might have the edge here.
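To make that concrete, here is a minimal scikit-learn sketch (assuming a reasonably recent scikit-learn) that trains a classifier on the library’s bundled iris dataset. It stands in for the clustering, classification, and recommendation work described above; a real recommender would of course use your own user data.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Train a simple k-nearest-neighbours classifier on the iris dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
print("Test accuracy: %.2f" % model.score(X_test, y_test))
```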
One to rule them all

On the surface, the two languages seem equally matched on the majority of data science tasks; where they really differentiate depends on an individual’s needs and what they want to achieve. And there is nothing stopping data scientists from using both. One of the benefits of R is its compatibility with other languages and tools, as R’s rich packages can be used within a Python program using RPy (R from Python). For example, you might use the IPython environment to carry out data analysis tasks with NumPy and SciPy, yet decide to use the R ggplot2 package to visually represent the data: the best of both worlds. An interesting theory that has been floating around for some time is to integrate R into Python as a data science library; the benefit of such an approach would be one awesome place providing R’s strong data analysis and statistical packages alongside all of Python’s OOP benefits. Whether this will happen remains to be seen.

The dark horse

We have explored both Python and R and discussed their individual strengths and flaws in data science. As mentioned earlier, they are the two most popular and dominant languages in this field. However, a new emerging language called Julia might challenge both in the future. Julia is a high-performance language that essentially tries to solve the problem of speed for large-scale scientific computation. Julia is expressive and dynamic, it’s as fast as C, it can be used for general programming (though its focus is scientific computing), and the language is easy and clean to use. Sounds too good to be true, right?