
Tech Guides


The Mysteries of Big Data and the Orient … DB

Julian Ursell
30 Jun 2014
4 min read
Mapping the world of big data must be a lot like demystifying the antiquated concept of the Orient: trying to decipher a mass of unknowns. With the ever-multiplying expanse of data, and our natural desire to understand it as soon as possible and in real time, technology is continually evolving to let us make sense of it, draw connections within it, turn it into actionable insight, and act upon it in the real world. It's a huge enterprise, and you have to imagine that within the masses of data collated years ago on legacy database systems, without the capacity for the technological insight and analysis we have now, there are relationships that remain undefined: the known unknowns, the unknown knowns, and the known knowns (that Rumsfeld guy was making sense, you see?). It's fascinating to think what we might learn from the data we have already collected.

There is a burning need these days to break down the mysteries of big data, and developers are continually thinking of ways to interpret it, mapping data so that it is intuitive and understandable. The major way developers have reconceptualized data in order to make sense of it is as a network tightly connected by relationships. The obvious examples are Facebook and LinkedIn, which map out vast networks of people connected by shared properties such as education, location, interest, or profession.

One way of mapping highly connectable data is to structure it as a graph, a design that has emerged in recent years as databases have evolved. The main progenitor of this data structure is Neo4j, far and away the leader in the field of graph databases, mobilized by a huge number of enterprises working with big data. Neo4j has cornered the market, and it's not hard to see why: it offers a powerful solution with heavy commercial support for enterprise deployments.
In truth there aren't many alternatives out there, but alternatives exist. OrientDB is a hybrid graph-document database that offers the flexibility of modeling data as either documents or graphs, while incorporating object-oriented programming as a way of encapsulating relationships. Again, it's a great example of developers imagining ways to accommodate the myriad of different data types, and the relationships that connect them all together.

The real mystery of the Orient(DB), however, is the relatively low visible adoption of a database that offers both innovation and reputedly staggering performance (the claim is that it can store up to 150,000 records a second). The question isn't just why it hasn't managed to dent a market essentially owned by Neo4j, but why, on its own merits, more developers haven't opted for it. The answer may be vaguely related to commercial drivers: outside of Europe, OrientDB seems to have struggled to create the kind of traction that would push greater adoption. Or perhaps it is related to the considerable development and tuning the project needs for use in production, and, related to that, maybe OrientDB still has a way to go in terms of enterprise-grade support. It's hard to say what the deciding factor is. In many ways it's a simple reiteration of the difficulty facing startups and new technologies endeavoring to win adoption, and of the fact that the road to that goal is typically a long one.

Regardless, what both Neo4j and OrientDB are valuable for is adapting both familiar and unfamiliar programming concepts to reimagine the way we represent, model, and interpret connections in data, mapping the information of the world.
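The "network of relationships" idea at the heart of graph databases can be sketched in a few lines of plain Python. This is only an illustrative toy (the people and properties below are invented), not Neo4j's Cypher or OrientDB's SQL, but it shows the shape of the model: nodes carrying properties, edges carrying relationship labels.

```python
from collections import defaultdict

# A toy property graph. Real graph databases (Neo4j, OrientDB) expose far
# richer query languages; this only illustrates the data model itself.
class Graph:
    def __init__(self):
        self.nodes = {}                  # name -> properties dict
        self.edges = defaultdict(list)   # name -> [(label, neighbor), ...]

    def add_node(self, name, **props):
        self.nodes[name] = props

    def add_edge(self, a, label, b):
        # Treat relationships as undirected, like a "knows" link.
        self.edges[a].append((label, b))
        self.edges[b].append((label, a))

    def neighbors(self, name, label):
        return sorted(b for (l, b) in self.edges[name] if l == label)

g = Graph()
g.add_node("Alice", profession="engineer")
g.add_node("Bob", profession="engineer")
g.add_node("Carol", profession="designer")
g.add_edge("Alice", "knows", "Bob")
g.add_edge("Bob", "knows", "Carol")

print(g.neighbors("Bob", "knows"))   # ['Alice', 'Carol']
```

Traversing such relationships (friends of friends, people sharing a profession) is exactly the kind of query graph databases are optimized for.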


Soldering: Tips and Tricks for Makers

Clare Bowman
30 Jun 2014
5 min read
Although solderless breadboards provide makers with an easy way to build functioning circuits, the builds are only really reliable if they aren't handled too heavily. For example, in our first post we talked about building a Weather Cube as a sensory tool for occupational therapists. The breadboard circuit secured inside its foam cube might survive fairly well, but for any highly physical wearable application, it would be easy for a single wire to be pulled out of the circuit, causing it to fail at a vital moment. In this post, we will detail how we soldered our Weather Cube project, plus provide you with timesaving and pain-saving tips born of trial and error (and one burnt finger). If you have little or no experience working with stripboards, it could be worth practicing your skills before starting.

Important Safety Warning

Protective equipment such as safety glasses should always be worn. You should also have first aid equipment available whenever working with metal, including melting solder, hacksawing, and spot-cutting copper board.

Before you begin soldering your project, you will need the following:

1. A soldering iron (the iron becomes extremely hot, so take care not to touch the tip with your hands)
2. Solder (usually made of tin and lead)

Soldering a Stripboard for a Weather Cube

First, cut your stripboard (also called veroboard; it's the same thing). Do this by laying the stripboard horizontally, with the copper side facing you. Count 25 points in from the right-hand side of the stripboard and draw a line from top to bottom. Use a G-clamp to secure your stripboard to a solid surface, and then cut along the line with your junior hacksaw; starting with just downward strokes will help you keep on track initially. You could also cut the top two rails off if you want your project to be as small as possible, or color the top two rails to remind yourself not to count these holes.
Then, follow these steps:

1. Count six spaces from the right side, and draw a line from the top to the bottom of the board on the copper side.
2. Count seven spaces from the line you've just drawn, and draw a line from the top to the bottom again.
3. Count a further six spaces and once again draw a line from the top to the bottom.
4. Spot-cut these lines. Spot cutting involves twisting a dedicated spot cutter into parts of the copper where you want a gap in the copper rails.
5. Flip the stripboard over so that the copper side is facing down, and clip it onto the soldering station holder.

For convenience, we recommend using exactly the same component positions as the breadboard build. It's useful to keep a tested breadboard version of the layout nearby; you can use it as a reference for component positions on the stripboard version as you build, to help ensure you don't introduce errors.

Soldering a Piezo

A piezo is a small sensor device used by makers to convert pressure and force into an electrical charge. These sensors are very delicate and can easily come apart; if one does, you will have to re-solder it. To solder the piezo back together, follow these steps:

1. Strip approximately 4 mm of insulation from the end of the wire.
2. Twist the wire strands together to make one piece of wire.
3. Tin the wire by coating the exposed strands with a little solder.
4. Either push the wire into a hole on the same rail, or, if the wire has come detached at the piezo end, solder it back onto the piezo. Don't leave the soldering iron on the piezo element for too long, as you could damage it.

Conclusion

Soldering gives projects greater robustness, allowing them to be handled without easily falling apart. With these steps, we hope to have provided you with some of the tips and tricks you need to successfully solder your inventions.

About the authors

Clare Bowman enjoys hacking playful interactive installations and co-designing digitally fabricated consumer products. She has exhibited projects at Maker Faire UK, the Victoria and Albert Museum, FutureEverything, and Curiosity Collective gallery shows. Some recent work includes "Sands Everything", an interactive hourglass installation interpreting Shakespeare's Seven Ages of Man soliloquy through gravity-controlled animated grains, and more.

Cefn Hoile sculpts open source hardware and software, and supports others doing the same. Drawing on 10 years of experience in R&D for a multinational technology company, he works as a public domain inventor, an innovation catalyst, and an architect of bespoke digital installations and prototypes. He is a founder-member of the CuriosityCollective.org digital arts group and a regular contributor to open source projects and not-for-profits. Cefn is currently completing a PhD in Digital Innovation at Highwire, University of Lancaster, UK.


The Rise of Data Science

Akram Hussain
30 Jun 2014
5 min read
The rise of big data and business intelligence has been one of the hottest topics to hit the tech world. Everybody who's anybody has heard of the term business intelligence, yet very few can actually articulate what it means; nonetheless, it's something all organizations are demanding. But you must be wondering: how do you develop business intelligence? Enter data scientists!

The concept of data science was developed to work with large sets of structured and unstructured data. So what does this mean? Let me explain. Data science was introduced to explore and give meaning to the random sets of data floating around (we are talking about huge quantities here, that is, terabytes and petabytes), which are then analyzed to help identify areas of poor performance, areas of improvement, and areas to capitalize on. The concept was introduced for large data-driven organizations that required consultants and specialists to deal with complex sets of data. However, data science has been adopted very quickly by organizations of all shapes and sizes, so naturally an element of flexibility is required to fit data scientists into the modern workflow.

There seems to be a shortage of data scientists and an ever-increasing amount of data out there. The modern data scientist is one who can apply the necessary analytical skills to any organization, with or without large sets of data available. They are required to carry out data mining tasks to discover relevant, meaningful data. Yet smaller organizations may not have enough capital to pay for someone experienced enough to derive such results. Nonetheless, because of the need for information, they might instead turn to a general data analyst, help them move towards data science, and provide them with tools, processes, and frameworks that allow for the rapid prototyping of models instead.
The natural flow of work would suggest data analysis comes after data mining, and in my opinion analysis is at the heart of data science. Learning languages like R and Python is fundamental to a good data scientist's toolkit. However, would a data scientist with a background in mathematics and statistics and little to no knowledge of R and Python still be as efficient? The way I see it, data science is composed of four key topic areas crucial to achieving business intelligence: data mining, data analysis, data visualization, and machine learning.

Data analysis can be carried out in many forms; in simple terms, it's looking at data and understanding it in order to draw factual conclusions. A data scientist may choose to use Microsoft Excel and VBA to analyze their data. It wouldn't be as accurate, clean, or as in-depth as using Python or R, but it sure is useful as a quick win with smaller sets of data. Starting with something like Excel doesn't mean it doesn't count as data science; it's just a different form of it, and more importantly it gives a good foundation for progressing to tools like MySQL, R, Julia, and Python as, with time, business needs grow and so do expectations of the level of analysis. In my opinion, a good data scientist is not one who knows only one or two languages or tools, but one who is well versed in the majority of them and knows which language and skill set are best suited to the task at hand.

Data visualization is hugely important. Numbers themselves tell a story, but when it comes to presenting data to customers or investors, they're going to want to view all the different aspects of that data as quickly and easily as possible. Graphically representing complex data is one of the most desirable methods, though the way the data is represented varies depending on the tool used, for example R's ggplot2 or Python's Matplotlib.
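As a small taste of what analysis in a language looks like, here is a minimal sketch of a quick summary in plain Python using only the standard library. The sales figures are invented for illustration; a real workflow would likely reach for R or a dedicated library.

```python
import statistics

# A quick summary an analyst might otherwise produce in Excel.
# The monthly sales figures below are made up for illustration.
monthly_sales = [120, 135, 128, 150, 162, 158]

mean = statistics.mean(monthly_sales)      # average month
spread = statistics.stdev(monthly_sales)   # sample standard deviation
growth = (monthly_sales[-1] - monthly_sales[0]) / monthly_sales[0]

print(f"mean {mean:.1f}, stdev {spread:.1f}, growth {growth:.0%}")
```

A handful of lines like these scale from six numbers to six million, which is where a language starts to pull ahead of a spreadsheet.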
Whether you're working for a small organization or a huge data-driven company, data visualization is crucial.

The world of artificial intelligence introduced the concept of machine learning, which has exploded onto the scene and is now, to an extent, fundamental to large organizations. The opportunity for organizations to move forward by understanding consumers' behaviour and matching their expectations has never been so valuable. Data scientists are required to learn complex algorithms and core concepts such as classification, recommenders, neural networks, and supervised and unsupervised learning techniques. And this is just touching the edges of an exciting field that goes into much more depth, especially with emerging concepts such as deep learning.

To conclude, we covered the basic fundamentals of data science and what it means to be a data scientist. For all you R and Python developers (not forgetting the mathematical wizards out there), data science has been described as the 'sexiest job of the 21st century', as well as being handsomely rewarded. The rise in jobs for data scientists has without question exploded and will continue to do so; according to global management firm McKinsey & Company, there will be a shortage of 140,000 to 190,000 data scientists due to the continued rise of big data.
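To make the supervised-learning idea above concrete, here is a minimal nearest-neighbour classifier in pure Python. It is a sketch only: the data points are invented, and a real project would use a library such as scikit-learn rather than hand-rolling the algorithm.

```python
import math

# A minimal nearest-neighbour classifier: label a new point with the class
# of its closest training example. This is the simplest form of the
# "classification" task mentioned above, not a production algorithm.
def classify(point, training_data):
    def distance(a, b):
        return math.dist(a, b)   # Euclidean distance (Python 3.8+)
    nearest = min(training_data, key=lambda ex: distance(point, ex[0]))
    return nearest[1]

# (hours online, purchases) -> customer segment; entirely made up.
training = [
    ((1.0, 0.0), "browser"),
    ((1.5, 1.0), "browser"),
    ((8.0, 5.0), "buyer"),
    ((9.0, 6.0), "buyer"),
]

print(classify((7.5, 4.0), training))   # buyer
```

Supervised learning generalizes this pattern: learn from labeled examples, then predict labels for data you haven't seen.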

Frontend Frameworks: Bootstrapping for Beginners

Ed Gordon
30 Jun 2014
3 min read
I was on the WebKit.org site the other day, and it struck me that it was a fairly ugly site for the home page of such a well-known browser engine. Lime-green-to-white background transition, drop-shadow headers. It doesn't even respond; what? I don't want to take anything away from its functionality (it works perfectly well), but it did bring to mind the argument about frontend frameworks and the beautification of the Internet.

When the Internet started to become a staple of our daily computing, it was an ugly place. Let's not delude ourselves into thinking every site looked awesome. The BBC, my home page since I was about 14, looked like crap until about 2008. As professional design started improving, it left "home-brew" sites looking old, hacky, and unloved. Developers and bedroom hacks, not au fait with the whims of JavaScript or jQuery, were left with an Internet that still looked prehistoric. A gulf formed between the designers who were getting paid to make content look better and those who wanted to but didn't have the time. It was the haves and the have-nots.

Whilst the beautification of websites built by the "common man" is a consequence of the development of dozens of tools in the open source arena, I'd ascribe the flashpoint to Twitter Bootstrap. Yes, you can sniff out a Bootstrap site a mile off; yes, it loads a bit slower (except for the people who use Bootstrap, like me); and yes, some of the markup syntax is woeful. It remains, however, a genuine enabler of web design that doesn't suck.

The clamor of voices that have called out Bootstrap for the reasons mentioned above has, I think, really misunderstood who should be using this tool. I would be angry if I paid a developer to knock me up a hasty site in Bootstrap. Designers should only be using Bootstrap to knock up a proof of concept (rapid application development) before building a bespoke site and living fat off the commission.
If, however, someone asked me to make a site in my spare time, I'm only ever going to use Bootstrap (or, in fairness, Foundation), because it's quick, it's easy, and I'm just not that good with HTML, CSS, or JavaScript (though I'm learning!). Bootstrap, and tools like it, abstract away a lot of the pain that goes into web development (really, who cares if your button is the same as someone else's?) for people who just want to add their voice to the sphere and be heard. Having a million sites that look similar but nice is, to me, a better scenario than having a million sites that are all different and look like the love child of a chalkboard and MS Paint. What's clear is that it has home-brew developers contributing to the conversation about the presentation of content: layout, typography, iconography. Anyone who wants to moan can spend some time on the Wayback Machine.


Making for the Greater Good

Clare Bowman
30 Jun 2014
7 min read
Occupational therapists work with individuals to achieve increased participation in their desired occupations, be it work, self-care, or leisure activities. The cross-collaboration between OTs and the Maker community, a group of technology-based do-it-yourself hobbyists, is a space with much potential that should be explored further. In this blog post, we are going to explore one such collaboration: a "Weather Cube" case study.

The Weather Cube was originally built for individuals with severe learning difficulties in an environmental awareness group who experienced problems with their sensory integration (SI). If inefficient sensory processing is prevalent in an individual, it may result in sensory integration dysfunction. The cube stimulates the user's imagination and increases understanding of weather. Discussions can be started around the different weather elements, introducing stimuli: for example, a fan can give the impression of wind, or water can be dripped onto the service user's hands to convey the feeling of wetness. The sound files and images of the cube can be changed to suit different individuals and groups.

Building the Weather Cube

Each side of the large foam Weather Cube is stenciled with a different meteorological icon and associated with relevant weather sounds. By turning the face of the cube, the user can hear sounds and associate them with images. Each icon is assigned a unique sound file; as the cube is picked up, the sound file linked with the upward-facing plane is wirelessly triggered. Housed inside the Weather Cube is a Shrimp, which is a DIY circuit (see shrimping.it for further information).

Sourcing the Prototyping Materials

The hardware we sourced for the Weather Cube uses what we call 'Shrimping', a strategy for sourcing and openly documenting the interactive physical computing kits we create to support UK learners.
We call it Shrimping out of loyalty to our humble hometown of Morecambe, an area so famous for its shrimps that the soccer team is named after them! Shrimping is based on sourcing, testing, and documenting the cheapest possible components and modules direct from the manufacturers and wholesalers who serve professional electronics engineers and integrators. After prototyping a project, we provide free, easy-to-follow build graphics, instructions, and sourcing information online, enabling others to prepare their own project kits direct from wholesalers at prices substantially below retail, especially when purchased in volume. In this section we outline the benefits and problems of sourcing your own parts direct.

Make Circuits Like an Engineer

Wholesale component suppliers do not operate with the hobbyist in mind, but their products are incredibly cheap and, with just a bit of community-maintained documentation, can be used like Lego: brought together in different combinations to prototype and deploy a variety of educational and entertaining devices. Constructing devices on breadboards and stripboards helps makers develop substantial prototyping skills and understand the pathway that professional device inventors use. With these skills and materials you can personalize a circuit to meet your own specific needs, which is nearly impossible with a printed circuit board. Once complete, you can use your working circuits as a reference to move towards full-scale manufacturing of printed circuit boards.

However, for many people the main benefit of this approach is price. For hobbyists, Shrimping makes it cost-effective to deploy large numbers of experimental projects. For classrooms and hackspaces, it becomes feasible to donate the kits for learners to adopt and personalize, which would be prohibitive if using prefabricated microcontroller boards from hobbyist suppliers.

Shrimp vs. Arduino

The programs that run on the Arduino Uno microcontroller board will run on the Shrimp too. The Shrimp has the same full set of input and output pins as an Uno, meaning that makers can use the circuit to replicate the many thousands of community-documented Arduino projects. However, it is built from the bare minimum of components, making it roughly one tenth the cost of an official Arduino board.

In the Weather Cube, we decided to attach a Shrimp circuit to a Raspberry Pi. Relative to the Shrimp, the Raspberry Pi is more geared up for power- and processor-heavy multimedia and desktop applications, and for physical computing projects a Pi always needs some kind of interfacing circuit attached, which can itself be quite expensive. The Shrimp therefore has complementary strengths to the Raspberry Pi: its low cost, its ability to attach directly to sensors and actuators, its ability to run on low power, and its ability to run software in real time.

The computing capabilities of an official Arduino board come from the ATMEGA328 chip at its center, and the Shrimp is essentially the same as the reference circuit from the manufacturer's data sheet, laid out on a breadboard. Unfortunately, a special program called an Arduino bootloader needs to be copied to an ATMEGA to make it possible to program it from the Arduino IDE, which means you can't use wholesale ATMEGAs without an extra preparation step. Using online auction sites you can buy a chip with the Arduino bootloader already added, and once you have one Arduino-compatible chip, you can use it to bootload more chips using a special Arduino sketch called Optiloader.

Breakout Modules

In addition to the Shrimp on breadboard, we've used three breakout modules and another sensor component, a piezoelectric transducer.
The breakout modules needed are: a CP2102 USB-to-UART for wired programming and communication, an HC-06 module for wireless Bluetooth communication, and an ADXL345 accelerometer module for sensing the orientation of our wearable sensors. The codes CP2102, HC-06, and ADXL345 actually refer to small 'surface-mount' components that have tiny connections intended to be mounted industrially onto printed circuit boards. These components cannot be inserted into a breadboard or connected to for easy prototyping. For this reason, various suppliers provide 'breakout modules' which make the connections available as large pins with 0.1 inch (2.54 mm) separation, suitable for insertion into a breadboard or wiring with female header cables. The components themselves are quite cheap, and breakout boards are fairly simple to engineer, which keeps prices low. This also means different suppliers end up making similar-looking breakout boards, but with different pin sequences and labeling. Breakout boards have the same fundamental capabilities, because they 'break out' the same pins from the same component, so if you wire to the correctly labeled pins, changes to the layout should not normally make much difference.

One major exception, sadly, is the transmit and receive pins on UART modules. Some UARTs label their pins according to their role, describing whether they transmit (TX) or receive (RX) data. Others label their pins to describe which pins on the communicating device to attach to, so a transmitting pin is actually labeled RX, and a receiving pin TX.

As you can see, there is a lot of potential for the Maker community to collaborate with health professionals (and others) to design projects for the greater good. Also, by sourcing wholesale prototyping materials, makers are able to cheaply test and document their projects and invent personalized circuits. So, if you are a maker, we urge you to get out and partner with your community; your imagination is limitless.
About the authors

Clare Bowman enjoys hacking playful interactive installations and co-designing digitally fabricated consumer products. She has exhibited projects at Maker Faire UK, the Victoria and Albert Museum, FutureEverything, and Curiosity Collective gallery shows. Some recent work includes "Sands Everything", an interactive hourglass installation interpreting Shakespeare's Seven Ages of Man soliloquy through gravity-controlled animated grains, and more.

Cefn Hoile sculpts open source hardware and software, and supports others doing the same. Drawing on 10 years of experience in R&D for a multinational technology company, he works as a public domain inventor, an innovation catalyst, and an architect of bespoke digital installations and prototypes. He is a founder-member of the CuriosityCollective.org digital arts group and a regular contributor to open source projects and not-for-profits. Cefn is currently completing a PhD in Digital Innovation at Highwire, University of Lancaster, UK.


Aspiring Data Analyst, Meet Your New Best Friend: Excel

Akram Hussain
30 Jun 2014
4 min read
In general, people want to associate themselves with cool job titles, ones that indirectly say both that you're clever and that you get paid well, so what's better than telling someone you're a data analyst? Personally, as a graduate in Economics, I always thought my natural career progression would be into an analyst role at a banking organization, a private hedge fund, or an investment firm. I'm guessing at some point everyone with a background in maths or some form of statistics has envisaged becoming a hotshot investment banker, right? However, the story was very different for me; I was fortunate enough to fall into the tech world and develop a real interest in programming. What I found really interesting was that programming languages and data sets go hand in hand surprisingly well, which uncovered a field relatively new to me known as data science.

Here's how the story goes: I combined my academic skills with programming, which opened up a world of opportunity, allowing me to appreciate and explore data analysis on a whole new level. Nowadays I'm using languages like Python and R to mix background knowledge of statistical data with my new-found passion. Yet that's not how it started. It started with Excel.

If you want to eventually move into the field of data science, you have to become competent in data analysis, and I personally recommend Excel as a starting point. There are many reasons for this. One is that you don't have to be a technical wizard to get started; more importantly, Excel's data analysis functionality is more powerful than you would expect, quick and efficient at resolving queries, and able to visualize the results too. Excel has an inbuilt Data tab to get you started. The screenshot shows the basic analytical features available there, separate from any functions and sum calculations that could be used.
However, one useful and really handy plugin called Data Analysis is missing from that list. Click File | Options | Add-ins, choose Analysis ToolPak and Analysis ToolPak - VBA from the list, and select Go; you will be prompted with the following image. Once you select the add-ins (as shown above), you will find a new Data Analysis command on your Data tab. This allows you to run different methods of analysis on your data, anything from histograms, regressions, and correlations to t-tests. Personally, I found this saved me tons of time.

Excel also offers features such as pivot tables and functions like VLOOKUP, both extremely useful for data analysis, especially when you require multiple tables of information for large sets of data. A VLOOKUP function is very useful when trying to identify products in a database that share the same set of IDs but are difficult to find. An even more useful feature for analysis is the pivot table. One of the best things about a pivot table is that it saves so much time and effort when you have a large set of data from a database that you need to categorize and analyze quickly. Additionally, there's a visual option named a pivot chart, which allows you to visualize all the data in your pivot table. There are many useful tutorials and free training available online on pivot tables.

Overall, Excel provides a solid foundation for most analysts starting out. A general search on the job market for "Excel data" returns over 120,000 jobs, all specific to an analyst role. To conclude, I wouldn't underestimate Excel for learning the basics and getting valuable experience with large sets of data. From there, you can progress to learning a language like Python or R (and then head towards the exciting and super-cool field of data science). Given R's steep learning curve, Python is often recommended as the best place to start, especially for people with little or no background in programming.
But don't dismiss Excel: it's a powerful first step, and it can easily become your best friend when entering the world of data analysis.
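For readers curious what VLOOKUP and a pivot table correspond to once you do move to a language, here is a rough analogy in plain Python. The product and sales data are invented for illustration; this is only an analogy to the Excel features described above, not a replacement for them.

```python
from collections import defaultdict

# Invented sales records, like rows in a worksheet.
sales = [
    {"product_id": "P1", "region": "North", "amount": 120},
    {"product_id": "P2", "region": "North", "amount": 80},
    {"product_id": "P1", "region": "South", "amount": 200},
]

# "VLOOKUP": find a product's name by its ID in a lookup table.
products = {"P1": "Widget", "P2": "Gadget"}
print(products["P1"])          # Widget

# "Pivot table": total sales amount per region.
pivot = defaultdict(int)
for row in sales:
    pivot[row["region"]] += row["amount"]
print(dict(pivot))             # {'North': 200, 'South': 200}
```

The same two ideas, joining tables by a key and aggregating by category, underpin most everyday analysis whatever the tool.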

Virtual Reality and Social E-Commerce: a Rift between Worlds?

Julian Ursell
30 Jun 2014
4 min read
It's doubtful many remember Nintendo's failed games console, the Virtual Boy, one of the worst commercial nose-dives for a games console in the past 20 years. Commercial failure though it was, the concept of virtual reality, back then and up to the present day, is still intriguing to many people considering what technology that can properly leverage VR is capable of. The most significant landmark in this quarter of technology in the past six months is undoubtedly Facebook's acquisition of Oculus VR, manufacturer of the Oculus Rift VR headset.

Beyond using the technology purely for creating new and immersive gaming experiences (you can imagine it's pretty effective for horror games), there are plans at Facebook and other forward-thinking companies to mobilize the tech to transform the e-commerce experience into something far more interactive than the relatively passive browsing experience it is right now. Developers are re-imagining shopping through the gateway of virtual reality, in which a storefront becomes an interactive experience where shoppers can browse and manipulate the items they are looking to buy (this is how the company Chaotic Moon Studios imagines it), adding another dimension to the way we evaluate and make decisions on the items we are looking to purchase. On the surface there's a great benefit to drawing the user experience closer to the physical act of going out into the real world to shop, and one can imagine a whole array of integrated experiences extending from this (say, for example, inspecting the interior of the latest Ferrari). We might even be able to shop with others, making decisions collectively and suggesting items of interest to friends across social networks, creating a unified and massively integrated user experience.
Setting aside the push from the commercial bulldozer that is Facebook, is this kind of innovation something that people will get on board with? We can probably answer with some confidence that even with a finalized experience, people are not going to instantly “buy in” to virtual reality e-commerce, especially with the requirement of purchasing an Oculus Rift (or any other VR headset that emerges, such as Sony’s Morpheus headset) for this purpose. Factor in the considerable backlash against the Kickstarter-backed Oculus Rift after its buyout by Facebook and there’s an even steeper hill of users already averse to engaging with the idea. From a purely personal perspective, you might also ask whether wearing a headset is going to be anything like the annoying appendage of wearing 3D glasses at the cinema, on top of the substantial expense of acquiring the Rift headset. 3D cinema actually draws a close parallel – both 3D and VR are technology initiatives attempted and failed in years previous, both are predicated on higher user costs, and both are never too far away from being harnessed to that dismissive moniker of “gimmick”. From Facebook’s point of view we can see why incorporating VR is a big draw. In terms of keeping social networking fresh, there’s only so far that re-designing the interface and continually connecting applications (or the whole Internet) through Facebook will take them. Acquiring Oculus is one step towards trying to augment (reinvigorate?) the social media experience, orchestrating the user (consumer) journey for business and e-commerce in one massive virtual space. Thought about another way, it represents a form of opt-in user subscription, but one predicated on a strong degree of sustained investment from users in the idea of VR, which is something that is extremely difficult to engineer. 
It’s still too early to say whether the tech mash-up between VR, social networking, and e-commerce is one in which people will be ready to invest (and if they will ever be ready). You can’t fault the idea on the basis of sheer innovation, but at this point one would imagine that users aren’t going to plunge head first into a virtual reality world without hesitation. For the time being, perhaps, people would be more interested in more productive uses of immersive VR technology, say for example flying like a bird.


Micro Transactions: Gamer's Bane?

Ed Bowkett
23 Jun 2014
6 min read
I play a lot of games (who doesn't these days?). I've grown from my Nintendo 64, where I considered myself a pro at Mario Kart, through to the PS4, but my pride and joy is my PC: lovingly upgraded through the years, its Steam library getting steadily bigger (thanks to Gabe) through the endless sales. Yet, increasingly, I've found myself wanting to be transported back to the days of the Nintendo 64. The reason for this? Micro transactions in games. Now, as we get into the grittiness of this blog, a few confessions and a definition of what micro transactions are. Micro transactions are small sums of money, usually spent online and in-game, for the purchase of virtual goods. Now on to the confession part. I’ve partaken in micro transactions in the past. Back during university, when my gaming addiction was all about World of Tanks, gold ammo was all the rage. So I calmly handed over my hard-earned student loan money and paid for ammo, without really thinking about what I was doing. More recently, with the release of Hearthstone, I was intrigued by the legendary cards and how to build decks, and generally just wanted to bypass the whole process of slowly working my way up to awesome decks. So I purchased a few decks and boosted some of them. Do I consider these pay-to-win elements? With World of Tanks, I sympathised more with those who accused others of paying to win by using premium ammo. However, when you arrived at the higher levels, gold ammo became not only valuable but necessary. Everyone at this stage is using gold ammo, so ultimately you are doing yourself a disservice by not buying it. This isn’t really a good argument, though, as people shouldn’t feel it’s necessary to buy gold ammo, yet are forced to because others do. With Hearthstone, the assumption that it is pay to win is, in my view, wrong. You can easily obtain these cards freely by leveling up and winning packs through the game mode Arena. 
While you can “speed up” how many decks you can construct and how quickly you climb the levels in ranked modes, eventually others can obtain the cards freely. So there’s not really an element of pay to win, in my view. Coupled with the fact that you get quests every day (win five games with Mage, and so on), it’s quite feasible to earn enough gold for four and a half packs a week, which translates to 18 packs a month and 216 packs a year – a huge number of cards from which to create a deck. Another benefit of this system is the ability to disenchant cards and craft new ones to add to decks. Hearthstone can be considered pay to win in that the mechanism is there to pay for more cards, but paying is not the only way to win; there are basic cards that can take out the more difficult cards, so it’s a good balance in my view. While the person who paid for all the extra cards will have an initial advantage, the person who went through the daily quests and leveled up will eventually catch up – pay to win is not a permanent thing. And nor should it be. Pay to win, while being a cash cow for most game companies, does nothing but attract negative feedback to games. If it were permanent, the benefits of playing the game would be greatly diminished. One of the great pay-to-win protests I was involved with was in Eve Online during the monoclegate scandal of 2011. This opened my eyes to the number of companies that adopt micro transactions and an element of pay to win in their games. Don’t get me wrong: if you want to fork out money to make a character look aesthetically pleasing, that’s fine – it’s your money. But if a game sells damage-increasing ammo for $50, that’s a whole different kettle of fish. It diminishes both the enjoyment of the game and the value of strategy and effort. 
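The pack arithmetic above is easy to sanity-check (four and a half packs a week, treating a month as roughly four weeks – the rounding is mine, not Blizzard's official rate):

```javascript
// Back-of-the-envelope check of the free-to-play pack numbers quoted above.
var packsPerWeek = 4.5;
var packsPerMonth = packsPerWeek * 4;   // treat a month as ~4 weeks
var packsPerYear = packsPerMonth * 12;  // 12 months in a year

// packsPerMonth works out to 18 and packsPerYear to 216,
// matching the figures in the post.
```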
What made the 2011 revolt so great is that the entire universe of Eve became united against pay to win and changed the direction of the gaming company, CCP. For me, more players from across the gaming world need to do this, as ultimately it’s the gamers that companies should be focussed on, not the cash cow that micro transactions offer. The rise of micro transactions in games has also, in my opinion, come about alongside the rise of RMT or RWT (Real Money Trading or Real World Trading). This involves a website offering in-game currency (for example, in Eve it is ISK) for a set price, which goes against the EULA of the game. To combat this, CCP introduced PLEX, the Pilot's License Extension, an in-game item that you can both purchase on partner websites and trade for ISK. Runescape did a form of this too, and the amount of RWT decreased. So there is a way of using micro transactions sensibly that benefits everyone and goes some way to solving the problem of people trying to bend the system illegally. Ultimately, there are ways of having micro transactions in games without slanting the game towards pay to win. For me, Hearthstone has the balance just right. Sure, the initial purchase of decks gives a boost, but there are opportunities to catch up. With no card being really OP (overpowered) – there are always ways to counter the cards placed down – the pay-to-win element is greatly diminished, though not removed. After protests like the summer of 2011 in Eve, micro transactions will continue to be an issue, and it's really an area in which gaming companies need to tread carefully. While I appreciate that companies need to make money, they need to be aware that pay to win is not the way forward; there needs to be a balance and careful consideration of the consequences.


GDC 2014: A Synopsis

Ed Bowkett
22 Jun 2014
5 min read
The GDC of 2014 came and went with something of a miniature Big Bang for game developers. Whether it was new updates to game engines or new VR peripherals announced, this GDC had plenty of sparks, and I am truly excited for the future of game development. In this blog I’m going to cover the main announcements that took my attention and why I’m excited about them.

The clash of the Titans

Without a shadow of a doubt, the announcement out of GDC 14 that was most appealing to me was the arrival of updates to the three main game engines – Unity, Unreal, and CryEngine – all within a short timeframe. All three introduced unique price models, which will be covered in a separate blog post, but it was like having a second Christmas, particularly for me, with a strong interest in this area both from a hobbyist perspective and in my current role concerning game development books. All three offered a long list of changes and massive updates to various parts of their engines, and at some point in the future I hope to dabble in all three and provide insight on which I preferred and why.

The advancement of the hobbyist developer

Not to be outdone by the big three, smaller tools announced various new features: GameMaker announced a partnership to develop on the PlayStation 4, and Construct 2 announced a similar deal with the Wii U (admittedly before GDC). These are hugely significant for me. Support for the new consoles in tools that are primarily aimed at the hobbyist in us all opens up a massive market for potential indie developers and those just trying game development for fun, with the added benefit of the console ecosystem! It means my dream of the game studio I created in Game Dev Tycoon can finally come true.

Would you like a side of immersion with your games?

I might as well be honest here. VR and I don’t get along. 
Not in the sense that we broke up after a long relationship and are no longer speaking – more in the sense that I just don’t get it. It probably also has something to do with my motion sickness, but that’s less fun. In all seriousness, though, I have no doubt that VR will revolutionize gaming in a big way. From what we’ve seen with certain games such as EVE: Valkyrie, VR has a unique opportunity to take gaming beyond just the screen, and for the masses of people out there who love video games, this can only be a positive thing. With Sony announcing Project Morpheus, Oculus releasing a new Rift headset, and Microsoft expressing a strong interest in developing a headset in this area, the field will only continue to expand, and competition is not a bad thing. The one question I have is whether VR can go from the current gimmicky idea, with its large, bulky headset, to a tour de force in the gaming community.

Consoles reaching out to indie developers

GDC has always focussed on indie games and development in the past, and this year was no exception. But it wasn’t just the traditional PC love for indie games. Consoles are beginning to cotton on that indie games are much loved and indeed highly played, and as a result, 2014 was the year the main consoles announced efforts to release more indie games onto their platforms, while trying to drive more indie developers to their respective consoles. Sony, for example, introduced PhyreEngine at GDC 2013, and plans to extend support further through the partnerships, mentioned earlier in this article, with GameMaker: Studio and MonoGame. Through these tools and their promotion, Sony hopes to improve relations with indie developers and encourage them to use the Sony ecosystem. A similar announcement was made by Nintendo, which introduced the Nintendo Web Framework and promised that games built with it would be promoted and marketed properly. 
These announcements are both significant and positive for the future of game development. From my view, indie games are only going to increase in popularity, and having these ecosystems available on the popular consoles can only be a good thing; it will allow those who are not on a big budget or working for an AAA studio to create games and reach a wider audience. That’s the ambition of Sony and Nintendo, I believe. So there you have it: the big announcements that grabbed my attention at GDC. Whilst I could have mentioned Amazon Fire TV and further announcements by Valve, or gone into depth on specific peripherals, I felt an overview of what was announced at GDC was better; the analysis of these announcements can be covered more in depth at a later stage. However, what is evident from this blog, and from what came out of GDC 2014 in general, is that game development is an extremely healthy area that is continuously being pushed to the limits and constantly innovated. As an avid fan of games and a mere newbie at game development, this excites me and keeps me interested. How was GDC 2014 for you? Any issues that you thought I should have included? Let me know!

You Want a Job in Publishing? Please Set Fire to this Unicorn in C#.

Sarah
22 Jun 2014
5 min read
Total immersion in a tech puzzle that's over your head. That's part of the Packt induction policy. Even those in an editorial role are expected to do battle with some code to get a feel for why we make the books we make. When I joined the commissioning team I'd heard rumours that the current task involved frontend web development. Now, English major I may be, but I've been building sites since the first time my rural convent school took away our paper keyboards and let us loose on the real thing. (True story.) I apprenticed in frames and div.gifs thankfully lost in the Geocities extinction event. CSS or Java? Maybe Sass or jQuery? I was smug. I was on this. Assignment: "This is Unity. Build a game. You have four days." Hang on. What? The last time I wrote a computer game it was a text adventure in TADS. Turns out amateur game dev technology has moved on somewhat since then. There's nothing like an open brief in a technology you've never even installed before to make you feel cool and in control in a new job. But that was the point, of course. Four days to read, Google, innovate, or pray one's way out of what business types like to call a "pain point". So this is a quick précis of my 32-hour journey from mortifying ignorance to relative success. No, I didn't become the next Flappy Bird millionaire. But I wrote a game, I learned some C#, and I gained a new appreciation for how valuable the guidance of a good author can be as part of the vast toolkit we now have at our fingertips when learning new software. My completed game had a complicated narrative.

Day one: deciding what kind of game to make

"Make a game" is a really mean instruction. (Yes, boss. I'm standing by that.) "Make an FPS." "Make a pong clone." "Make a game where you're a Tetris block trapped in a Beckett play." All of these are problems to be solved. But "I want to make a game" is about as clear a motivation as "I want to be rich". There are a lot of missing steps there. 
And it can lead to delusions of scale. Four whole days? I'll write an MMO! I wasted a morning on daydreaming before panicking at lunch and deciding on a side-scroller, on the reasonable logic that those have a beginning, a middle, and an end. I knew from the start that I didn't just want to copy and paste, but I also knew that I couldn't afford to be too precious about my plans before learning the tool. The sheer volume of information on Unity out there is overwhelming. Eventually I started with a book on 2D games in Unity. By the end of the day, I had a basic game. It wasn't my game, but I'd learned enough along the way to start thinking about what I could do with Unity.

Day two: learning the code

By mid-morning of day two I'd hit a block. I don't know C#. I've never programmed in C. But if I wanted to do this properly I was going to have to write my own code. Terry Norton wrote us a book on learning C# with Unity. For me, a day spent working with one clear voice explaining the core concepts before I experimented on my own was exactly what I needed. Day two was given over to learning how to build a state machine from Norton's book. State machines give you strong and flexible control over the narrative of a game. If nothing else ever comes from this whole exercise, that is a genuinely cool thing to be able to do in Unity. Eight hours later I had a much better feel for what the engine could do and how the language worked.

Day three: everything is terrible

Day three is the Wednesday of which I do not speak. Where did the walls go? Everything is pink. Why don't you go left when I tell you to go left? And let's not even get started on my abortive attempts to simultaneously learn to model game objects in Maya. For one hour it looked like all I had to show for the week's work was a single grey sphere that would roll off the back of a platform and then fall through infinite virtual space until my countdown timer triggered a change of state and it all began again. 
This was an even worse game than the Beckett-Tetris idea.

Day four: bringing it together

Even though day three was a nightmare, there was a structure to the horror. Because I'd spent Monday learning the interface and Tuesday building a state machine, I had some idea of where the problems lay and what the solutions might look like, even if I couldn't solve them alone. That's where the brilliance of tech online communities comes in, and the Unity community is pretty special. To my astonishment, step by step, I fixed each element with the help of my books, the documentation, and Unity Answers. I ended up with a game that worked.

Day five: a lick of paint

I cheated, obviously. On Friday I came in early and stole a couple of extra hours to swap my polygons for some sketched sprites and add some splash pages. Now my game worked and it was pretty. Check it out: green orbs increase speed, red ones slow you down, and the waterfall douses the flame. Complex. It works. It's playable. If I have the time, there's room to extend it to more levels. I even incidentally learned some extra skills, like animating the sprites properly and adding particle streams for extra flair. Bursting with pride, I showed it to our Category Manager for Game Dev. He showed me Unicorn Dash. That game is better than my game. Well, you can't win 'em all.


Why Phaser is a Great Game Development Framework

Alvin Ourrad
17 Jun 2014
5 min read
You may have heard about the Phaser framework, which is fast becoming popular and is considered by many to be the best HTML5 game framework out there at the moment. Follow along in this post, where I will go into some detail about what makes it so unique.

Why Phaser?

Phaser is a free, open source HTML5 game framework that allows you to make fully fledged 2D games in a browser with little prior knowledge of either game development or JavaScript development for the browser in general. It was built and is maintained by a UK-based HTML5 game studio called Photon Storm, directed by Richard Davey, a very well-known Flash developer and now full-time HTML5 game developer. His company uses the framework for all of its games, so the framework is updated daily and thoroughly tested. The fact that the framework is updated daily might sound like a double-edged sword, but now that Phaser has reached its 2.0 version, there won't be any changes that break compatibility, only new features, meaning you can download Phaser and be pretty sure that your code will work in future versions of the framework.

Phaser is beginner friendly!

One of the main strengths of the framework is its ease of use, and this is probably one of the reasons why it has gained such momentum in such a short amount of time (the framework is just over a year old). In fact, Phaser abstracts away all of the complicated math that is usually required to make a game by providing you with more than just game components. It allows you to skip the part where you think about how to implement a given special feature and what level of calculus it requires. With Phaser, everything is simple. 
For instance, say you want to shoot something using a sprite or the mouse cursor. Whether it is for a space invader or a tower defense game, here is what you would normally have to do to your bullet object (the following example uses pseudo-code and is not tied to any framework):

var speed = 50;
var vectorX = mouseX - bullet.x;
var vectorY = mouseY - bullet.y;

// if you were to shoot a target, not the mouse
vectorX = targetSprite.x - bullet.x;
vectorY = targetSprite.y - bullet.y;

var angle = Math.atan2(vectorY, vectorX);
bullet.x += Math.cos(angle) * speed;
bullet.y += Math.sin(angle) * speed;

With Phaser, here is what you would have to do:

var speed = 50;
game.physics.arcade.moveToPointer(bullet, speed);
// if you were to shoot a target:
game.physics.arcade.moveToObject(bullet, target, speed);

The fact that the framework was used in a number of games during the latest Ludum Dare (a popular Internet game jam) highly reflects this ease of use. There were about 60 Phaser games at Ludum Dare, and you can have a look at them here. To get started with learning Phaser, take a look at the Phaser examples, where you’ll find over 350 playable examples. Each example includes a simple demo explaining how to do specific actions with the framework, such as creating particles, using the camera, tweening elements, animating sprites, using the physics engine, and so on. A lot of effort has been put into these examples, and they are all maintained, with new ones constantly added by either the creator or the community.

Phaser doesn't need any additional dependencies

When using a framework, you will usually need external libraries – one for math and physics calculations, a time management engine, and so on. With Phaser, everything is provided: there is a very exhaustive device class that you can use to detect the browser's capabilities, integrated into the framework and used extensively both internally and in games to manage scaling. 
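If you want to play with the math outside of Phaser, the pseudo-code above can be wrapped into a small standalone helper (the function name and the plain "body" object are my own illustration, not part of Phaser's API) – this is roughly the calculation that moveToPointer and moveToObject take care of for you:

```javascript
// Advance an object one step toward a target point at the given speed.
// This mirrors the manual atan2 math shown above; Phaser's arcade
// physics performs the equivalent (as a velocity) when you call
// moveToPointer or moveToObject.
function stepToward(body, targetX, targetY, speed) {
  var angle = Math.atan2(targetY - body.y, targetX - body.x);
  body.x += Math.cos(angle) * speed;
  body.y += Math.sin(angle) * speed;
  return body;
}

// A bullet at the origin stepping toward a target at (100, 0)
// with speed 50 lands at (50, 0) after one step.
var bullet = { x: 0, y: 0 };
stepToward(bullet, 100, 0, 50);
```

Note that in a real game Phaser sets a velocity on the physics body rather than teleporting it each frame, but the direction math is the same.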
Yeah, but I don't like the physics engine…

Physics engines are usually a major feature in a game framework, and that is a fair point, since physics engines often have their own vocabulary and their own ways of dealing with and measuring things, and it's not always easy to switch from one to another. The physics engines were a really important part of the Phaser 2.0 release. As of today, there are three physics engines fully integrated into Phaser's core, with the possibility to create a custom build of the framework in order to avoid bloated source code. A physics management module was also created for this release. It dramatically reduces the pain of making your own or an existing physics engine work with the framework. This was the main goal of the feature: to make the framework physics-agnostic.

Conclusion

Photon Storm has put a lot of effort into its framework, and as a result the framework has become widely used by both hobbyists and professional developers. The HTML5 game developers forum is always full of new topics, and the community is very helpful as a whole. I hope to see you there.


Internet of Things or Internet of Thieves

Oli Huggins
07 Jan 2014
3 min read
While the Internet of Things (IoT) sounds like some hipster start-up from the valley, it is in actual fact sweeping the technology world as the next big thing and is the topic of conversation (and perhaps development) among the majority of the major league tech titans. Simply put, the IoT is the umbrella term for IP-enabled everyday devices with the ability to communicate over the Internet. Whether that is your fridge transmitting temperature readings to your smartphone, or your doorbell texting you once it has been rung, anything with power (and even some things without) can be hooked up to the World Wide Web and be accessed anywhere, anytime. This will of course have a huge impact on consumer tech, with every device under the sun being designed to work with your smartphone or PC, but what’s worrying is how all this is going to be kept secure. While there are a large number of industry-leading brands we can all trust (sometimes), there is an even bigger number of companies shipping devices out of China at extremely low production (and quality) costs. This prompts the question: if the company’s mantra is low-cost products and mass sales, do they have the time, money (or care) to have an experienced security team and infrastructure to ensure these devices are secure? I’m sure you know the answer to that question. Unconvinced? How about the TRENDnet cams back in 2012? The basic gist was that, thanks to a flaw in the latest firmware, you could add /anony/mjpg.cgi to the end of one of the cams’ IP addresses and be left with a live stream from the IP camera. Scary stuff (and some funny stuff), but this was a huge mistake made by what seems to be a fairly legitimate company. Imagine this on a much larger scale, with many more devices, developed by much more dubious companies. Want a more up-to-date incident? How about a hacker gaining access to a Foscam IP camera that a couple was using to watch over their child, and the hacker screaming “Wake up, baby! 
Wake up, baby!” I’ll leave you to read more about that. With the suggestion that by 2020 anywhere between 26 and 212 billion devices will be connected to the Internet, this opens up an unimaginable number of attack vectors, which will be abused by the black hats among us. Luckily, chip developers such as Broadcom have seen the payoff here by developing chips with a security infrastructure designed for wearable tech and the IoT. The new BCM20737 SoC provides “Bluetooth, RSA encryption and decryption capabilities, and Apple’s iBeacon device detection technology”, adding another layer of security that will be of interest to most tech developers. Whether the cost of such technology will appeal to all, though, is another thing altogether – low-cost tech developers will just not bother. Now, the threat of someone hacking your toaster and burning your toast may not be something you would worry about, but imagine healthcare implants or house security being given the IoT treatment. Not sure I’d want someone taking control of my pacemaker or having a skeleton key to my house! Security is one of the major barriers to total adoption of the IoT, but it is also the only barrier that can be jumped over and forgotten about by less law-abiding companies. If I were to give anyone any advice before “connecting”, it would be to spend your money wisely, don’t go cheap, and avoid putting yourself in compromising situations around your IoT tech.