
Tech Guides

DevOps is not continuous delivery

Xavier Bruhiere
11 Apr 2017
5 min read
What is the difference between DevOps and continuous delivery? The tech world is full of buzzwords; DevOps is undoubtedly one of the best-known of the last few years. Essentially, DevOps is a concept that attempts to solve two key problems for modern IT departments and development teams: the complexity of a given infrastructure or service topology, and market agility. Or, to put it simply, there are lots of moving parts in modern software infrastructures, which makes changing and fixing things hard.

Project managers who know their agile inside out - not to mention customers too - need developers to:

- Quickly release new features based on client feedback
- Keep the service available, even during large deployments
- Have no lasting crashes, regressions, or interruptions

How do you do that? You cultivate a DevOps philosophy and build a continuous delivery pipeline. The key things to notice are the verbs, cultivate and build: DevOps is cultural, while continuous delivery is a process you construct as a team. But there's also more to it than that.

Why do people confuse DevOps and continuous delivery?

So, we've established that there's a lot of confusion around DevOps and continuous delivery (CD). Let's take a look at what the experts say.

DevOps is defined on AWS as: "The combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity."

Continuous delivery, as stated by Carl Caum from Puppet, "…is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring that business applications and services function as expected through rigorous automated testing."

So yes, both are about delivering code. Both try to enforce practices and tools to improve the velocity and reliability of software in production. Both want the IT release pipeline to be as cost effective and agile as possible. But if we're getting into the details, DevOps is focused on managing challenging time-to-market expectations, while continuous delivery is a process for managing greater service complexity - making sure the code you ship is solid, basically.

Human problems and coding problems

In its definition of DevOps, Atlassian puts forward a neat formulation: "DevOps doesn't solve tooling problems. It solves human problems." DevOps, according to this understanding, promotes the idea that development and operational teams should work seamlessly together. It argues that they should design tools and processes to ensure rapid and efficient development-to-production cycles.

Continuous delivery, on the other hand, narrows this scope to a single mantra: your code should always be able to be safely released. It means that any change goes through an automated pipeline of tests (unit, integration, end-to-end) before being promoted to production. Martin Fowler nicely sums up the immediate benefits of this sophisticated deployment routine: "reduced deployment risk, believable progress, and user feedback."
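To make that pipeline idea concrete, here is a minimal sketch of a deployment gate in Python. It is an illustration only, not a production CI system; the test commands, paths, and the deploy function are assumptions standing in for whatever your stack actually uses.

```python
import subprocess
import sys

# Hypothetical test suites; swap in whatever your stack actually runs.
STAGES = [
    ["pytest", "tests/unit"],
    ["pytest", "tests/integration"],
    ["pytest", "tests/end_to_end"],
]

def deploy() -> None:
    # Placeholder for the real promotion step (pushing an image,
    # notifying an orchestrator, and so on).
    print("All stages green - promoting the build to production.")

def main() -> None:
    for stage in STAGES:
        # Every change must pass every suite before promotion.
        if subprocess.run(stage).returncode != 0:
            sys.exit(f"Stage '{' '.join(stage)}' failed - release blocked.")
    deploy()

if __name__ == "__main__":
    main()
```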
You can't have continuous delivery without DevOps

Applying CD is difficult and requires advanced operational knowledge and enough resources to set up a pipeline that works for the team. Without a DevOps culture, your team won't communicate properly, and technical resources won't be properly designed. It will certainly hurt the most critical IT pipeline: longer release cycles, more unexpected behaviors in production, and a slow feedback loop. Developers and management might fear the deployment step and become less agile.

You can have DevOps without continuous delivery... but it's a waste of time

The reverse situation, DevOps without CD, is slightly less dangerous. But it is, unfortunately, pretty inefficient. While DevOps is a culture, or a philosophy, it is by no means supposed to remain theoretical. It's supposed to be put into practice. After all, the main aim isn't chin-stroking intellectualism; it's to help teams build better tools and develop processes to deliver code. The time (i.e. money) spent to bootstrap such a culture shouldn't be zeroed out by a lack of concrete actions. Continuous delivery is a powerful asset for projects trying to conquer a market in a lean fashion. It pays back the investment by keeping teams of developers focused on business problems and delivering tested solutions to clients as fast as they are ready.

Take DevOps and continuous delivery seriously

What we have, then, are two different, but related, concepts in how modern development teams understand operations. Ignoring either of them wastes resources and hurts engineering efficiency. However, it is important to remember that the scope of DevOps - involving an entire organizational culture, potentially - and the complexity of continuous delivery mean that adoption shouldn't be rushed or taken lightly. You need to make an effort to do it properly. It might need a mid- or long-term roadmap, and it will undoubtedly require buy-in from a range of stakeholders. So, keep communication channels open, consider using managed cloud services if required, understand the value of automated tests and feedback loops, and, most importantly, hire awesome people to take responsibility.

A final word of caution. As sophisticated as they are, DevOps and continuous delivery aren't magical methodologies. A service as critical as AWS S3 claims 99.999999999% durability, thanks to rigorous engineering methods, and yet, on February 28, 2017, it suffered a large service disruption. Code delivery is hard, so keep your processes sharp!

About the author

Xavier Bruhiere is a Senior Data Engineer at Kpler. He is a curious, sharp entrepreneur and engineer who has built many projects, broken most of them, and launched and scaled what was left, learning from them all.

6 ways businesses can become more digitally secure

Mary Gualtieri
11 Apr 2017
5 min read
Web security is a term we hear about constantly these days, especially in the news. We've seen an onslaught of high-profile hacks, most notably those surrounding the 2016 US presidential election. Web security will always be a hot topic because technology keeps developing and, as a society, we will continue to rely on it. Attacks happen for a number of reasons, but usually because of human error: a flaw in the code, an insecure network, and so on. These mistakes create holes for attackers to get in and cause damage.

This raises the question: what exactly is web security? In short, web security is the security of websites, web applications, and web services. Information sharing has increased enormously in recent years, especially through social networking and e-commerce, and with it the number of direct attacks. We are seeing web application attacks through XSS and SQL injection (usually the result of a flaw in the code), and we are also seeing more phishing attacks.

But what does this mean for businesses, and what can they do to help prevent these attacks? When a business sets up its website or web application, it should consider what type of information, if any, is sensitive. For instance, if you have a signup form and require a password, what security measures are you taking to make sure that the password cannot be stolen?
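One concrete preventive measure is to never store passwords in plain text; store a salted hash instead. Here is a minimal sketch using Python's standard library; the iteration count and the test password are illustrative assumptions, not a prescription.

```python
import hashlib
import secrets
from typing import Tuple

def hash_password(password: str) -> Tuple[bytes, bytes]:
    """Return a (salt, digest) pair to store instead of the password."""
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return secrets.compare_digest(candidate, digest)

# At signup, persist salt and digest; at login, verify the attempt.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```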
Businesses can take steps to be preventive rather than reactive and, in the end, save themselves a big headache. They can act to ensure the integrity of their data and use strategies to counteract an attack.

Establish the importance of security from the beginning with employees

It can be very easy to forget that an employee carries sensitive information within and outside the workplace. It should be emphasized from the beginning of the hiring process that this sensitive information must always be protected. As an employer, you can take preventive measures to ensure this is followed by having certain websites blocked on your network, making employees choose complex passwords, or setting an expiration on passwords so they must be renewed after a certain time.

Have a strong network

One of the most important security measures you can take is to have a strong network. This means you should have a proper firewall to capture bad data packets, and it should cover all employee-operated equipment such as computers, cellphones, and tablets. One solution is to establish a virtual private network, or VPN, which allows employees who work from home or have remote access to remain secure. A VPN protects your data through encryption and tunneling protocols, providing the integrity of security needed for sensitive data.

Train your employees

As an employer, you should take the time to invest in your employees, and this should include teaching them about the importance of security. Make sure an employee knows how to recognize a phishing email or attack, why clicking on a pop-up link is harmful to the company, and how to recognize a data breach.

Vendor compliance

Many times, businesses must use outside vendors to accomplish a certain goal. But what is that vendor doing to keep the integrity of your data safe and secure? When onboarding a new vendor, it should be part of your protocol to look at how they store your data and whether they comply with data protection regulations.

Monitor your employees

The best advice my dad ever gave me is this: no one is your friend. Employees are your employees. Some employees have access to sensitive information, and it is up to you to take protective measures to ensure that information always remains secure.

Run the occasional assessment

On occasion, you should run an assessment of where the security vulnerabilities in your network are and what you can do to rectify them. Seek an outside resource to perform the assessment, because they don't have a bias; they can clearly identify the loopholes and make recommendations to fix them.

Web security is going to be an ongoing topic in today's world. When it comes to businesses, it's no longer a matter of "if" an attack happens; it is a matter of "when" it will happen. Businesses can take preventative measures to help ensure they do not fall victim to an attack.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the Web and creating a pleasant user experience. Web development, specifically front-end development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.

Is VR overrated?

Raka Mahesa
11 Apr 2017
6 min read
This post gives a short overview of what you can do, and which tools to use, to maximize the reach of the VR content you publish. It will also outline some pitfalls.

Rule No. 1 of VR: No shortcuts.

Rule No. 2 of VR: Seriously, no shortcuts.

This is important. Stick to native SDKs and high-performance 3D rendering. Stay away from WebVR and JavaScript. WebVR will become a great standard one day, but right now we are months away from the sort of adoption and user-friendliness it needs to be suitable for the run-of-the-mill consumer. If you don't believe me or disagree, just head over to a Sketchfab 3D model (any one, really) and enter VR mode on your mobile phone. Unless you are on a latest-generation Android device, you'll see a distorted mess that runs well below what anybody would call smooth.

If you're absolutely in no position to use a native-code VR player, you can choose krpano (or Kolor Panotour, which is built on top of krpano) to wrap your 360 images and video. They have decent cross-platform support and some nifty workarounds in place to mitigate the most common browser-based VR pitfalls. For 3D content, use Unity 3D. There might be other tools but, especially if you are a beginner, Unity offers the quickest results.

Serve everyone

Figure out a way to serve everyone. As I have outlined in previous posts, there are many different VR device types out there and, all together, there is not yet a huge number of VR devices at all. Aiming to reach everyone will give you a decent audience in the end. If you set out and only publish your content on Oculus, or HTC Vive, or GearVR, you can be sure that you will exclude over 80 percent of the total audience.

You should also offer a non-VR mode to access your content. While VR is a good way to get an extra kick out of an experience, you should never offer only a VR mode; this will exclude anyone without a VR device from your potential audience, which will shrink the reach of your content dramatically. You also have to consider social situations in which people might access your content (for example, on the bus or waiting in line at the airport), where it is not feasible to enter a VR-based view mode.

Fragmentation and pitfalls

Fragmentation in VR is huge; it means that out of 100 users who access your VR content, only 10 might use the same type and generation of VR device at the same time. Unless you are creating a VR game (where it might be practical to target only one device type at a time), you should always aim to support all VR devices out there. Here are some pointers on how to make that feasible, and some common pitfalls.

If your main experience is desktop-based 3D content, you should offer a 360-image or video-based tour for mobile devices. This can easily be achieved by doing a 360 screen capture of your 3D content. The 360 images/videos might offer less interaction, but most of the time this format is enough to bring across your main story points. You can always hint to the user that there's a full-blown immersive experience available on another platform, which the user then might check out or recommend to a friend.

If your experience is video-based, make sure you offer multiple resolutions of your content, including a streaming version (you can host it via YouTube).
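One way to produce those renditions is with the open source tool ffmpeg, mentioned just below. Here is a minimal sketch that shells out to it from Python; the source file name and the resolution ladder are assumptions, not a recommendation.

```python
import subprocess

SOURCE = "experience_4k.mp4"        # hypothetical master file
HEIGHTS = [2160, 1080, 720]         # assumed ladder: download, desktop, mobile

for height in HEIGHTS:
    output = f"experience_{height}p.mp4"
    subprocess.run(
        [
            "ffmpeg", "-i", SOURCE,
            # Scale to the target height, keeping aspect ratio;
            # -2 rounds the width to an even number, as x264 requires.
            "-vf", f"scale=-2:{height}",
            "-c:v", "libx264", "-crf", "23",  # reasonable quality default
            "-c:a", "aac",
            output,
        ],
        check=True,
    )
```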
You should offer streamed variants, but also an option to download content to the device (as Netflix now does); this again helps with re-engagement in case a user first accesses the content in a setting where VR mode is not feasible, but wants to revisit it later to view it in VR. There are plenty of shareware video converters available to help you create all of those versions of your video, or you can use the free, open source tool ffmpeg. Bear in mind that while VR video is a great experience the first couple of times, it will quickly wear out its novelty, so make sure to create great content and distinguish it with cool features like spatial audio.

Image-based experiences are inherently very scalable across all the different types of VR devices. They load fast and most of the time are not a big strain on either PCs or mobile devices, even older ones.

Interactions, that is, hotspots or popups, need to be device-agnostic on all potential target devices. For this, your hotspots need to be able to react to gaze-based interaction as well as touch-based. You should stay away from motion-based gestures (like asking the user to nod his head to access a menu) because most of the time these work only somewhat reliably. The jury is still out on the usability of direction-based menus (for example, "look down to open the menu"). Personally, I think they are a good option in combination with gaze-based interaction but bear in mind: I have been using VR on a daily basis for a couple of years. I hardly qualify as a run-of-the-mill consumer anymore.

File size: VR assets tend to be pretty large, while at the same time the context of accessing this content is mobile. Don't expect anyone to download a standalone mobile app that is a couple of hundred megabytes in size because you baked a 4K video into it. Reduce file size and use streaming assets where possible. Provide some sort of offline fallback in case there's no active Internet connection (for example, a very low-res fallback video). You can keep file size down most efficiently by keeping your experiences short and sweet. VR is still not well suited to long sessions; a casual experience is best kept below 120 seconds.

Include sharing options and a feedback channel

Great content needs to be shared. Make sure your content can be shared, and include calls to action telling users what to do once they have consumed the main portion of your content. Direct them to related content, or to a website where they can learn more about what they have seen (outside VR).

Don't despair

Today, VR is still a challenge. It is a challenge to create great VR content, a challenge to publish and deploy it, and after that an even bigger one to reach 100 percent of your potential audience. Don't despair; everybody is dealing with the same issues and there's no magic solution yet. We are working on it, however.

About the author

Andreas is the founder and CEO of Vuframe. He's been working with augmented and virtual reality on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.

How are container technologies changing programming languages?

Xavier Bruhiere
11 Apr 2017
7 min read
In March 2013, Solomon Hykes presented Docker, which democratized access to Linux containers. The underlying technology, control groups, had already been incubating for a few years at Google. However, Docker abstracts away the complexity of the container lifecycle, and adoption skyrocketed among developers. In June 2016, Datadog published some compelling statistics about Docker adoption: the industry as a whole is increasingly adopting containers for production. Since everybody is talking about how to containerize everything, I would like to take a step back and study how it is influencing the development of our most fundamental medium: programming languages. The rise of Golang, the Java 8 release, Python 3.6 improvements - how do language development and the containerization market play together in 2017?

Scope of container technologies

Let's define the scope of what we call container technologies. Way back in 2006, two Google engineers started to work on a new technology for partitioning hierarchical groups of tasks. They called it cgroups and submitted the code to the Linux kernel. This lightweight approach to virtualization (sorry, Mike) was an opportunity for infrastructure-heavy companies, and Heroku and Google, among others, took advantage of it to orchestrate so-called containers. Put simply, they were now able to think of application deployment as the dynamic manipulation of these deterministic runtimes. Whatever the code or the business logic, it was encapsulated into a uniform execution format.

Cgroups are very low level, though, and tooling around the original primitives quickly emerged, like LXC, backed by Canonical. Then Solomon Hykes came in and made the technology widely accessible with Docker. The possibilities were endless and, indeed, developers and startups alike rushed in all directions. Lately, however, the hype seems to have cooled down. Docker's market share is being questioned while the company sorts out its business strategy.

At the end of the day, developers forget about vendors and technology and just want simple tooling for more efficient coding. Docker Compose, Red Hat Container Development Kit, Google Cloud Container Builder, and local Kubernetes are very sophisticated pieces of technology that hide the details of the underlying container mechanics. What they give engineers are powerful primitives for advanced development practices: development/production environment parity, transparent service replication, and predictable runtime configuration.

However, this is not just about development correctness or convenience, considering how containers are eating the IaaS landscape. It is also about deployment optimization and resilience. Tech giants who operate crazily large infrastructures have developed incredible frameworks, often in the open, to push how fast they can deploy auto-scalable, self-healing, zero-downtime fleets. Apache Mesos, backed by Microsoft, and Kubernetes, by Google, make at least two promises:

- Safe and agile deployments at the (micro-)service level
- Reliable orchestration with elegant service discovery, load-balancing, and failure management (because you have to accept that production always goes wrong at some point)

Containers enabled us to manage complexity with infrastructure design patterns like microservices or serverless. Behind the hype of these buzzwords, engineers try to improve team collaboration, safe and agile deployments, large project maintenance, and monitoring. However, we quickly came to realize it was sold with a DevOps tax.
Fortunately, the software industry has hard-won experience of such trade-offs, and we are starting to see it converge toward the most robust approaches. This container landscape overview hopefully provides the background to now study how containers have impacted the development of programming languages. We will first take a look at their ecosystems, and then we will dive into language designs themselves.

Language ecosystems and usages

Most developers are now aware of how invasive container technologies can be. They make their way into your development toolbox and into how your company manages its servers. Some will argue that the fundamentals of coding did not evolve much, but the trend is hard to ignore anyway. While we are free, of course, to stay away from Silicon Valley's latest fashions, I think containers tackle a problem most languages struggle with: dependencies and packaging.

Go, for example, got packaging right, but it's still trying to figure out how to handle dependency versioning and vendoring. JavaScript, on the other hand, has npm to manage fine-grained third-party code, but build tools are scattered all over GitHub. Containers won't spare you the pain of setting things up (they target runtimes, not build steps), but they can lower the bar of language adoption. Official images can run most standard language projects, and one can both give a language a try and deploy a basic hello world in no time. When you realize that Go 1.5+ needs Go 1.4 to be compiled, it can be a relief to just docker run your five-line-long main.go.

Growing a language community is a sure way to develop its tooling and libraries, but containers also influence how we design those components. They are the cloud counterparts of the current functional trend. We tend to embrace a world where both functions and servers are immutable and single-purpose. We want predictable, pure primitives (in the mathematical sense). All of that to match increasingly distributed and intensive workloads. I hope those approaches come from a product's need but, obviously, having the right technology at hand drives innovation. As software engineers in 2017, we also design libraries and tools with containers in mind: high-performance networking, distributed process management, data pipelines, and so on.

Language design

What about languages themselves? To get things straight, I don't think containers influence how Guido van Rossum designs Python. And that is the point of containers. They abstract the runtime to let you focus on your code™ (it is literally on every Docker-based PaaS landing page). You should be able to design whatever logic implementation you need, and containers will come in handy to help you run it when needed. I do believe, however, that the latest evolutions of languages and the rise of containers serve the same maturation of ideas in the tech community:

- Correctness at compile time: Python 3.6, Elm, and JavaScript ES7 are all bringing typing (back) to their languages (see Python's type hints, or TypeScript; a short sketch of type hints follows this list). An application running locally will launch just the same in production, and you can even run tests against multiple runtimes without complex scripts or heavy setup.
- Simplicity: Go won a lot of its market share thanks to its initial simplicity, taking a lot of decisions for you. Containers try their best to offer one unified way to run code, whatever the stack.
- Functional: Scala, JavaScript, and Elixir all encourage immutable state, function composition with support for lambda expressions, and function purity. It echoes the serverless trend that promotes function as a service. Most of the providers leverage some kind of container technology to bring the required agility to their platforms.
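As a quick illustration of that first point, here is a minimal sketch of Python type hints. The annotations do not change runtime behavior; a static checker such as mypy reads them and flags mismatches before the code ever ships. The function and values are made up for the example.

```python
from typing import List

def average(values: List[float]) -> float:
    """Arithmetic mean of a non-empty list of floats."""
    return sum(values) / len(values)

print(average([1.0, 2.5, 4.0]))  # 2.5

# A static checker catches misuse before the code ever runs:
# average("oops")  # mypy: Argument 1 to "average" has incompatible type "str"
```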
There is something elegant about having language features, programmatic design patterns, and infrastructure operations going hand in hand. While I don't think one of them influences the others directly, I certainly believe that the development of each smooths the innovations of the others.

Conclusion

Container technologies, and the fame around them, are finally starting to converge toward fewer and more robust usages. At the same time, infrastructure designs, new languages, and evolutions of existing ones seem to promote the same underlying patterns: simple, functional, decoupled components. I think this coincidence comes from industry maturity and openness more than, as I said, from one technology influencing the other.

Containers, however, are shaking up how we collaborate and design tools for the languages we love. They change the way we onboard developers learning a new language. They change how we set up local development environments with micro-replicas of the production topology. They change the way we package and deploy code. And, most importantly, they enable architectures like microservices or lambdas that influence how we design our programs.

In my opinion, programming language design should continue to evolve decoupled from containers. They serve different purposes and, given the pace of the tech industry, major languages should never depend on new shining tools. That being said, the evolution of a language now comes with the activity of its community - what they build, how they use it, and how they spread it in companies. Coping with containers is an opportunity to bring in new developers, improve production robustness, and accelerate both technical and human growth.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.

AI and the Raspberry Pi: Machine Learning and IoT, What's the Impact?

Raka Mahesa
11 Apr 2017
5 min read
Ah, Raspberry Pi, the little computer that could. On its initial release back in 2012, it quickly gained popularity among creators and hobbyists as a cheap and portable computer that could be the brain of their hardware projects. Fast forward to 2017: the Raspberry Pi is on its third generation and has been used in many more projects across various fields of study.

Tech giants have noticed this trend and have started to pay closer attention to the miniature computer. Microsoft, for example, released Windows 10 IoT Core, a variant of Windows 10 that can run on a Raspberry Pi. Recently, Google revealed plans to bring artificial intelligence tools to the Pi. And it's not just Google's AI; more and more AI libraries and tools are being ported to the Raspberry Pi every day.

But what does it all mean? Does it have any impact on the Raspberry Pi's usage? Does it change anything in the world of the Internet of Things? For starters, let's recap what the Raspberry Pi is and how it has been used so far. The Raspberry Pi, in short, is a super cheap computer (it costs only $35) the size of a credit card. However, despite its ability to be used as a normal, general-purpose computer, most people use the Raspberry Pi as the base of their hardware projects. These projects range from simple toy-like builds to complicated gadgets that do important work. They can be as simple as a media center for your TV or as complex as a house automation system. Keep in mind that these kinds of projects can always be built using desktop computers, but it's not really practical to do so without the low price and the small size of the Raspberry Pi.

Before we go on talking about having artificial intelligence on the Raspberry Pi, we need a shared understanding of AI. Artificial intelligence has a wide range of complexity. It can range from a complicated digital assistant like Siri, to a news-sorting program, to a simple face detection system of the kind found in many cameras. The more complicated the AI system, the bigger the computing power it requires. So, with the limited processing power of the Raspberry Pi, the types of AI that can run on that mini computer will be limited to the simple ones as well.

There's another aspect of AI called machine learning. It's the kind of technology that enables an AI to play and win against humans in a match of Go. The core of machine learning is to make a computer improve its own algorithm by processing a large amount of data. For example, if we feed a computer thousands of cat pictures, it will be able to define a pattern for 'cat' and use that pattern to find cats in other pictures.

There are two parts to machine learning. The first is the training part, where we let a computer find an algorithm that suits the problem. The second is the application part, where we apply the new algorithm to solve the actual problem. While the application part can usually run on a Raspberry Pi, the training part requires much more processing power. To make it work, the training is done on a high-performance computer elsewhere, and the Raspberry Pi only executes the training result.
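As a minimal sketch of that split, the snippet below trains a tiny classifier on a workstation, saves it, and then loads it for inference the way a Pi would. The article names no particular library; scikit-learn and joblib are assumptions chosen for brevity, and the data set and file name are placeholders.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# --- On the high-performance machine: the training part ---
X, y = load_iris(return_X_y=True)          # stand-in for real sensor data
model = LogisticRegression(max_iter=1000)
model.fit(X, y)                            # the expensive step
joblib.dump(model, "model.joblib")         # ship this file to the Pi

# --- On the Raspberry Pi: the application part ---
model = joblib.load("model.joblib")        # cheap to load
print(model.predict(X[:3]))                # inference is light enough for the Pi
```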
So, now we know that the Raspberry Pi can run simple AI. But what's the impact of this? Well, to put it simply, having AI will enable creators to build an entirely new class of gadgets on the Raspberry Pi. It will allow makers to create genuinely smart devices based on the small computer. Without AI, a so-called smart device can only follow a limited set of predefined rules. For example, we can develop a device that automatically turns off the lights at a specific time every day, but without AI we can't have the device detect whether there's anyone in the room.

With artificial intelligence, our devices will be able to adapt to unscripted changes in our environment. Imagine connecting a toy car to a Raspberry Pi and a webcam and having the car smartly map its path to a goal, or a device that automatically opens the garage door when it sees your car coming in. Having AI on the Raspberry Pi will enable the development of such smart devices.

There's another thing to consider. One of the Raspberry Pi's strong points is its versatility. With its USB ports and GPIO pins, the computer is able to interface with various digital sensors. The addition of AI will enable the Raspberry Pi to process even more inputs, like fingerprint readers or speech recognition with a microphone, further enhancing its flexibility.

All in all, artificial intelligence is a perfect addition to the Raspberry Pi. It enables the creation of even smarter devices based on the computer and unlocks the potential of the Internet of Things for every maker and tinkerer in the world.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

How do web development frameworks drive changes in programming languages?

Antonio Cucciniello
11 Apr 2017
5 min read
If you have been keeping up with technology lately through news sites like Hacker News and the open source community on GitHub, you've probably noticed that web development frameworks seem to be popping up left and right. It can be overwhelming to follow them all and understand how they are changing the technology world. As developers, we want to be able to catch a web development trend early and improve our skills with the technology that will be used in the future. Let's discuss the effects of web development frameworks and how they drive change in languages.

What is a web development framework?

Before we continue, let's make sure that everyone is on the same page regarding what exactly a web development framework is. It is defined as software made to aid the creation and development of web applications. These frameworks are usually made to speed up the common tasks that a web developer runs into. There are client-side frameworks, which help with speeding up front-end work and with creating dynamic web pages; some examples are React.js and Angular.js. There are also server-side web development frameworks that help with creating a server, handling routes, and much more. A few examples of this kind of framework are Express.js, Django, and Rails.
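To make "handling routes" concrete, here is a minimal sketch of the work a server-side framework does for you, using Flask - a Python micro-framework not named above, chosen purely as an illustration; the route and port are assumptions.

```python
from flask import Flask

app = Flask(__name__)

# The framework maps URLs to functions, parses parameters, and
# handles the HTTP plumbing you would otherwise hand-roll.
@app.route("/hello/<name>")
def hello(name: str) -> str:
    return f"Hello, {name}!"

if __name__ == "__main__":
    app.run(port=5000)  # assumed local development port
```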
Now we know what web development frameworks are, but how do they affect languages?

Effects on languages

Quantity

Stack Overflow recently released its Developer Survey for 2017, in which it interviewed 64,000 developers and asked them a wide variety of questions. One question was about their usage of various languages; the resulting graph compares the percentage of developers using each language in 2013 with 2017. Four years may not be a long period of time for other fields, but for software engineers, that is a century. We can see that the usage of languages such as C#, C, and C++ has been decreasing over these years, while Node.js and Python have increased in usage.

After doing some research on the number of prominent web development frameworks that have come out for these languages in the past few years, I noticed a couple of things: C had one framework that came out in 2015, and C++ had four web development frameworks over those years. On the other hand, Python had 17 frameworks aiding its development, and Node.js had nine frameworks in the past three years alone.

What does this all mean? It seems that the languages that received more web development frameworks - making development easier for engineers - saw an increase in the number of users, while the others, which did not receive as many, ended up being used less and less.

Verdict: There is at least a correlation between the number of web development frameworks and the corresponding language's usage.

Ease of use and quality

Not all of these frameworks are created equal. Some web development frameworks are extremely easy to use when starting out; with others, it can take longer to create your first web application. Time is a factor when learning a new framework (or migrating existing software to it), and if a framework is not easy to pick up, that can be a barrier to its usage. Another factor is previous experience with the language. One reason Node.js has an increasing user base is that it is written in JavaScript: if you have done any basic front-end development, you must have used some JavaScript to make your application. When learning a new web framework, having to also learn a new, obscure language that does not have many other uses slows down the transition. Lastly, if the framework itself does not actually make web development tasks easier for the engineer, or not much easier, then that language will end up being used less.

Verdict: If the framework is easy to use and speeds up development time, more people will use it and move toward it.

Conclusion

Overall, there are two clear factors of web development frameworks that drive changes in languages. If there are plenty of frameworks, people will be more likely to use the language those frameworks were created for. And if the frameworks are simple and really speed up development time, that also increases the language's usage. If you enjoyed this post, share it on Twitter! Leave a comment down below and let me know your thoughts on the subject.

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using voice. He loves building cool things with software and reading books on self-help and improvement, finance, and entrepreneurship. You can follow him on Twitter (@antocucciniello) and on GitHub.

IBM Machine Learning: What Is It, and What Does It Mean For Data Scientists?

Graham Annett
11 Apr 2017
5 min read
Like many cloud providers, IBM is invested and interested in developing and building out its machine learning offerings. Because more and more companies want to figure out what machine learning can currently do and what it is on the cusp of doing, a plethora of new services and companies are trying to either do something new or be better than the many competitors in the market.

While much of the machine learning world is based around cloud computing and the ability to scale horizontally during training, the reality is that not every company is cloud based. IBM has long had machine learning tools and services available on its various cloud platforms, and the IBM Machine Learning system seems to be an on-premise compromise version of many of the previously cloud-based APIs that IBM offered under its machine learning and Watson brands. I am sure many enterprise-level companies have large amounts of data that previously could not use these services, but I am unsure, and skeptical, whether these services will be any better or more useful than their cloud counterparts. They seem particularly aimed at companies in industries with many regulations or worries about hosting data outside of their private network, although this mentality seems to be slowly eroding and becoming outdated in many senses.

One of the great things about the IBM Machine Learning system (and many similar offerings) is the set of APIs that lets developers pick whatever language they would like and allows multiple frameworks, because there are so many available and interesting options at the moment. This is really important for something like deep learning, where there is a constant barrage of new architectures and ideas that iterate on prior ones, but require new architecture and developer implementations.

While I have not used IBM's new system and will most likely not be in any particular situation where it would be available, I was lucky enough to participate in IBM's Catalyst program, where I used a lot of their cloud services for a deep learning project and tested many of the available Bluemix offerings. While I thought the system was incredibly nice and simple to use compared to many other cloud providers, I found that the various machine learning APIs they offered either weren't that useful or seemed to provide worse results than the comparable Google, Azure, and other such services. Perhaps that has changed, or their new services are much better and will be available on this new system, but it is hard to find any definitive information that this is true.

One aspect of these machine learning tools that I am not excited about is the constant focus on using machine learning to create business cost-savings models, which the IBM Machine Learning press release touts and which was stressed at the IBM Machine Learning launch event. Companies may claim these savings are passed on to customers, but that rarely seems to be the truth. The ability of machine learning methodology to solve tedious and previously complicated problems is much more important and fascinating to me than simply saving money by optimizing business expenses.
While many of those problems are much harder to solve and we are far from solving them, the current applications of machine learning in business and marketing often provide fuel for the rhetoric that machine learning is exploitative and a toxic solution.

Along with this, while the IBM Machine Learning and Data Science products may seem to be aimed at someone in a data scientist role, I can never help but wonder to what extent data scientists actually use these tools beyond pure novelty, or beyond an extremely simple prototyping step in a more in-depth analytical pipeline. I personally think these tools are most powerful and useful in what they offer someone who is otherwise not interested in many aspects of traditional data science. Not all developers or analysts are interested in in-depth data exploration, creating pipelines, and other traditional data science skills, and having to learn those skills and that tooling can be a huge burden for those not particularly interested.

While true machine learning and data science skills are unlikely to be completely replaceable, many of the skills of a traditional data scientist will become less important as more people become capable of doing what previously was a quite technical or complicated process. The products and services in the IBM Machine Learning catalog reinforce that idea; they are an important step in allowing these services to be used regardless of data location or analytical background, and an important step forward for machine learning and data science in general.

About the author

Graham Annett is an NLP engineer at Kip (Kipthis.com). He has been interested in deep learning and has worked with and contributed to Keras. He can be found on GitHub or on his website.

Material Design Best Practices

Hari Vignesh Jayapalan
04 Apr 2017
5 min read
If you're an Android or hybrid app developer, you'll probably be using Material UI components in your app. For every design pattern, however, there are a few basic UX concepts; coupled with those concepts, the usability of the app and the user's experience will bloom and flourish. This article showcases a handful of UX best practices for Material Design components.

What is "Material"?

A material metaphor is the unifying theory of a rationalized space and a system of motion. The material is grounded in tactile reality, inspired by the study of paper and ink, yet technologically advanced and open to imagination and magic. Learn more about Material at Material.io.

Derivation of best practices

All of the best practices showcased here are derived by assuming that the user is a beginner with the smartphone. The core ideology is to never make the user think, even for a moment.

Components to focus on

There are more than 25 components under Material Design. We'll focus our best practices on two of them: RecyclerView and tab layouts.

Best practices for RecyclerView

RecyclerView is a flexible view for providing a limited window into a large data set. RecyclerView supports representing homogeneous and heterogeneous data in two ways: linear style (vertical and horizontal) and grid style (fixed and staggered).

The Peek-A-Boo problem

A Peek-A-Boo problem mostly occurs when listing things horizontally. When we represent data as a list, a typical question is: how will the user know that the content can be scrolled (right to left)? Or how will you tell the user that there is more content down the line? Consider a layout where all the list items fit within the viewport; for a moment, the user will wonder what to do next.

So how do we solve this problem? The solution is very simple. We need to display half, or a quarter, of the last item's content at the edge of the viewport. In short, the next item at the end of the viewport should be half visible. This visual cue automatically draws the eye to the half-visible content, and the user will instinctively drag the list to reveal it. To keep this responsive on all devices, we need a few calculations: measure the viewport width and dynamically set the width of each ViewHolder. This is just one way of solving the problem; the good news is that we have an alternate approach as well.

An alternate approach to the Peek-A-Boo problem

In our alternate approach, we switch our RecyclerView from linear style to grid style and let the user scroll in only one direction. If the user is scrolling vertically, let them do that alone, and vice versa. This might sound brutal but, trust me, it will benefit the user a lot. When you switch to a grid view, the visibility of items is clear and the user does not have to think further. Apart from switching to the grid view, you can also tweak the last grid item to notify the user that there are still more items to see.

The data overload problem

This problem is usually seen in news feeds, image listings, and chat interfaces. When there is so much data to view and process, the user gets confused.
Though each item in the list has a timestamp saying when the post or message was created or delivered, the problem is that the user has to notice the timestamp on each item while scrolling. To solve this problem, almost all top-notch apps use headers for identifying and sorting things out. Headers are indeed a beautiful solution to the data overload problem. A header speaks to the user like, "Hey user! You're now entering my cave. Until the next guy speaks, everything below belongs to me" - and that's good.

Take Google's Inbox app. Inbox uses headers effectively, but the header alone has problems as well. Imagine the items under a particular header are long and the user has gone away from the app; when he comes back, he will not remember which section he was in. To solve this problem, we have sticky headers. These headers hold the context throughout the section, and the user will have no trouble identifying the section.

TabLayout best practices

Tabs make it easy to explore and switch between different views. They enable content organization at a high level, such as switching between views, data sets, or functional aspects of an app. Present tabs as a single row above their associated content. Tab labels should succinctly describe the content within. Because swipe gestures are used for navigating between tabs, don't pair tabs with content that also supports swiping.

The nested tab problem

Even in a few best-selling apps, I have spotted the nested tabs issue. Initially, on seeing nested tabs, a majority of users get confused by the navigation, though they get used to it after a while. Even an old version of one of the Google apps had this issue; later, they changed their way of classifying things. The best way to solve the nested tab problem is to find an alternative way to categorize things; you can also couple a TabLayout with bottom navigation.

Hopefully you've gained a few best practices for designing better apps. If you've liked this article, please share it.

About the author

Hari Vignesh Jayapalan is a Google Certified Android App developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wanna-be entrepreneur.

MVP for Android

Hari Vignesh Jayapalan
04 Apr 2017
6 min read
The Android framework does not encourage any specific way to design an application. In a way, that makes the framework more powerful and more vulnerable at the same time. You may be asking yourself things like, "Why should I know about this? I'm provided with Activity, and I can write my entire implementation using a few Activities and Fragments, right?"

Based on my experience, I have realized that solving a problem or implementing a feature at a given point in time is not enough. Over time, our apps go through many change cycles and much feature management. Maintaining them over a period of time will create havoc in our application if it is not designed properly, with separation of concerns. That's why developers have come up with architectural design patterns for better code crafting.

How has it evolved?

Most developers started creating Android apps with an Activity at the center, capable of deciding what to do and how to fetch data. Activity code grew over time and became a collection of non-reusable components. Developers then started packaging those components so the Activity could use them through their exposed APIs. Then they took pride in breaking code into bits and pieces as much as possible - and found themselves in an ocean of components with hard-to-trace dependencies and usage. Later, we were introduced to the concept of testability and found that regression is much safer if it's covered by tests. The jumbled code produced by the above process is very tightly coupled to the Android APIs, preventing JVM tests and hindering the easy design of test cases. This is the classic MVC, with the Activity or Fragment acting as a Controller.

SOLID principles

SOLID principles are object-oriented design principles, thanks to dear Robert C. Martin. According to the SOLID article on Wikipedia, it stands for:

S (SRP): Single responsibility principle. A class must have only one responsibility and do only the task for which it has been designed. Otherwise, if our class assumes more than one responsibility, we will have high coupling, causing our code to be fragile in the face of any change.

O (OCP): Open/closed principle. A software entity must be easily extensible with new features without having to modify its existing code in use. Open for extension: new behavior can be added to satisfy new requirements. Closed for modification: extending with new behavior must not require modifying the existing code. If we apply this principle, we will get extensible systems that are less prone to errors whenever requirements change. We can use abstraction and polymorphism to help us apply this principle.

L (LSP): Liskov substitution principle. This principle, defined by Barbara Liskov, says that objects must be replaceable by instances of their subtypes without altering the correct functioning of our system. By applying this principle, we can validate that our abstractions are correct.

I (ISP): Interface segregation principle. A class should never implement an interface that it does not use. Failure to comply with this principle means that our implementations will depend on methods we do not need but are obliged to define. Therefore, implementing a specific interface is better than implementing a general-purpose one.
An interface is defined by the client that will use it, so it should not have methods that the client will not implement.

D (DIP): Dependency inversion principle. A particular class should not depend directly on another class, but on an abstraction (interface) of that class. When we apply this principle, we reduce dependency on specific implementations and make our code more reusable.

MVP tries to follow (though not 100% completely) all five of these principles. You can look up clean architecture for a pure SOLID implementation.

What is the MVP design pattern?

MVP is a set of guidelines that, if followed, decouples the code for reusability and testability. It divides the application's components based on their roles - a separation of concerns. MVP divides the application into three basic components:

Model: The Model represents a set of classes that describes the business logic and data. It also defines business rules for data, meaning how the data can be changed and manipulated. In other words, it is responsible for handling the data part of the application.

View: The View represents the UI components. It is only responsible for displaying the data received from the Presenter, transforming the model(s) into the UI. In other words, it is responsible for laying out the views with specific data on the screen.

Presenter: The Presenter is responsible for handling all UI events on behalf of the View. It receives input from users via the View, processes the users' data with the help of the Model, and passes the results back to the View. Unlike the view and controller in MVC, the View and Presenter are completely decoupled from each other and communicate through an interface. The Presenter also does not manage incoming request traffic the way a Controller does. In other words, it is a bridge that connects a Model and a View, and it acts as an instructor to the View.

MVP lays down a few ground rules for the above components:

- A View's sole responsibility is to draw the UI as instructed by the Presenter. It is the dumb part of the application.
- The View delegates all user interactions to its Presenter.
- The View never communicates with the Model directly.
- The Presenter is responsible for delegating the View's requirements to the Model and instructing the View with actions for specific events.
- The Model is responsible for fetching data from the server, database, and file system.

MVP projects for getting started

Every developer will have his or her own way of implementing MVP. I'm listing a few projects below. Migrating to MVP will not be quick and will take some time; please take your time and get your hands dirty with MVP:

- https://github.com/mmirhoseini/marvel
- https://github.com/saulmm/Material-Movies
- https://fernandocejas.com/2014/09/03/architecting-android-the-clean-way/

About the author

Hari Vignesh Jayapalan is a Google-certified Android app developer, IDF-certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur.

The Business Value of Existing 3D Data in the Age of Augmented and Virtual Reality - Part 2

Erich Renz
16 Mar 2017
5 min read
In Part 2 of this article, we will continue our discussion of business theories in relation to the realm of 3D, Augmented, and Virtual Reality.

Customer segments
Ask yourself, "Who are we creating value for?" or "Who are our most important customers?" Chances are, you are serving several customer segments. Producers of TVs or washing machines target a mass market that can be served with AR-enabled mobile apps that let customers visualize products. Manufacturers of niche market products, such as luxury watches or highly customized industrial goods, might like the idea of virtual showrooms in which they can present core product functionality on interactive touch tables connected to stationary computers.

Distribution channels
Customers want to be reached through a variety of channels. Especially in the phases before purchase (awareness and evaluation), Virtual Reality delivers experiences to customers at home, at trade shows, at public events, or during presentations, and takes the pain out of products or processes that are difficult to explain. When would it make sense to consider Virtual Reality as a content delivery vehicle to reach customers? Exactly when a product or process is so expensive, dangerous, impossible, or rare to produce, reach, or see that it makes sense to give a remote audience access to it.

Customer relationships
"What type of relationship does each of our customer segments expect us to establish and maintain with them?" If a particular customer group expects dedicated personal assistance, a key account manager can handle all its needs and questions with a consolidated sales tool, as mentioned in the section about getting the job done with AR/VR. Its ease of use allows the key account manager to connect text-based product specifications and information-heavy customer requirements with the visual data rendered in 3D. Used rightly, it lays the foundation of a deeper dedicated personal assistance and is truly helpful during customer acquisition and onboarding.

A client who can compile the information she needs on her own is best served with a self-service tool. Take an interior configurator, for example. A consumer who wants to furnish her apartment downloads an app on her mobile device, recreates the living room in the application, chooses between given interior designs, picks favorite furnishings, and walks through the newly decorated room in Virtual Reality. If she is interested in previewing armchairs and stools in her state-of-the-art recreation room, Augmented Reality will be her new planning and purchasing tool.

Revenue streams
Let's face the truth (also called "return on investment" in business terms). Before making an investment, any executive, marketer, or sales manager would ask, "If I invest one dollar in AR/VR technology, will I get the dollar back sooner or later, or not at all?" A marketing manager might say, "This technology will drive more users to the brand, and engagement time will rise due to the immersive character." He would then connect his analytics to the AR/VR app and measure his key performance indicators.

A sales manager would argue differently. Her world is volume dependent: the higher the conversion from an initial contact to a paying customer, the better. In particular, Augmented Reality used in tandem with other digital sales material, such as PDF brochures, product movies, product documentation, and image galleries of certain product lines, will pay off at a later stage.

Product designers and planners would argue differently again. AR/VR technologies give them the answers they need for rapid prototyping. "Can we build the room we intended, given all the information on paper?" Put on your VR headset and you will see whether you can, or what you need to change to make it habitable. "I want to see if the design of our newly invented elevator delivers what it promises, without incurring massive costs." Sure thing: preview the elevator in real-time 3D, spin it, change buttons, lighting, or material (steel to glass), and access it in VR.

Conclusion
It is no secret that AR/VR technology has the potential to become the next big computing platform, after the introduction of the PC in the 80s, the Internet in the 90s, and mobile in the 00s. The odds are in favor of AR/VR-related hardware and software. Goldman Sachs estimates $80bn in revenue by 2025 ($45bn in hardware, $35bn in software) and assumes that head-mounted displays will "gain popularity as technology improves, but adoption is limited by mobility and battery life."

When we look through the lens of jobs to be done, both technologies are fully capable of taking the circumstances of their users into consideration, whether those users are designing, producing, promoting, or selling a product, or are businesses and consumers interacting with these products. As the nature of their names implies, the value of virtual products will unfold its full power once virtual product demonstrations can be deeply integrated into the daily activities of our professional and private lives. Only when we hear people admit that without the help of AR/VR technologies there would truly be something missing in their lives are we done with our homework.

About the author
Erich Renz is Director of Product Management at Vuframe, an online platform for virtual product demonstrations with Augmented and Virtual Reality, where he is investigating and driving the development of AR, VR, 3D, and 360° applications for businesses and consumers.

Google Daydream

RakaMahesa
15 Mar 2017
5 min read
Google Cardboard, with more than 5 million users, is a success for Google. So it was not a big surprise when Google announced its next step into the world of virtual reality, the evolution of Google Cardboard: Google Daydream, a more robust and enhanced mobile VR platform.

So, what is Google Daydream? Is it just a better version of Google Cardboard? How does it differ from Google Cardboard?

Well, both are platforms for mobile VR apps that are viewed through a mobile headset. Unlike Cardboard, though, Google Daydream has a set of specifications that mobile devices and headsets must follow. This means developers know exactly what kind of input the users of their apps will have, something that wasn't possible on the Cardboard platform.

The biggest and most notable feature of Google Daydream compared with Cardboard, however, is the addition of a motion-based controller. Users can use this remote-like controller to point at and interact with the virtual world much more intuitively. With this controller, developers can build a better and more immersive VR experience.

The Daydream Controller offers four physical inputs to the user:

- Touchpad (the big circular pad)
- App button (the button with the line symbol)
- Home button (the button with the circle symbol)
- Volume buttons (the buttons on the side)

And since it's a motion-based controller, it comes with various sensors to detect the user's hand movement. Do note that the movement the controller can detect is mostly limited to rotational movement, unlike the fully position-tracked controllers on PC VR platforms.

Two more things to keep in mind: first, the home and volume buttons are not accessible to developers and are reserved for the platform's functionality. Second, the touchpad is only capable of detecting a single touch. And since the documentation doesn't mention multitouch being added in the future, it's safe to assume that the controller is designed for single touch and will stay that way for the foreseeable future.

All right, now that we know what the controller can do, let's dive deeper into the Google Daydream SDK and figure out how to use the Daydream Controller in our apps.

Before we go further, though, let's make sure we have all the requirements for developing Daydream apps:

- Unity 5.6 (with native Daydream support)
- Google VR SDK for Unity v1.2
- Daydream Controller, or an Android phone with a gyroscope

Yes, you don't have to own the controller to develop a controller-compatible app, so don't fret. Instead, we're going to emulate the Daydream Controller using an Android phone. To do that, all we need to do is install the controller emulator APK on our phone and run the emulator app. Then, to let the emulator be detected in the Unity Editor, we simply connect the phone to the computer with a USB cable.

Do note that we can't connect the actual Daydream Controller to our computer and will only be able to use the controller when it's paired with a mobile phone. So you may want to use the emulator for testing purposes even if you have the controller.

To start reading user input from the controller, we first must add the GvrControllerMain prefab to our scene. Afterwards, we can simply use the GvrController API to detect any user interaction with the device. The GvrController API behaves similarly to Unity's Input API, so you're in luck if you're already familiar with it.

Like the Unity Input API, there are three properties to use if we want to find out the state of the buttons on the controller. Use the GvrController.ClickButtonDown property to check if the touchpad was just clicked, the GvrController.ClickButtonUp property to check if the touchpad was just released, and the GvrController.ClickButton property to see if the user is holding down the touchpad click. Simply replace the "ClickButton" part with "AppButton" to detect the state of the app button on the controller.

The API for the controller's touchpad is similar to Unity's mouse input API as well. First, we need to find out if the touchpad is being touched by checking the GvrController.IsTouching property. Then, we can read the touch position with the GvrController.TouchPos property. There is no function for detecting swipes and other movements, but you should be able to create your own detector by reading the changes in touch position.

For traditional controllers, these properties would be enough to capture all the user input. However, the Daydream Controller is a controller for VR, so there's still another aspect to read: movement. Using the GvrController.Orientation property, we can get a rotational value of the controller's orientation in the real world. We can then apply that value to a GameObject in our scene and have it mirror the movement of the controller, as the sketch below shows.
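Putting these properties together, here is a minimal C# sketch of a MonoBehaviour that reads the controller each frame. It assumes the GvrControllerMain prefab is already in the scene; the class name and the controllerModel field are illustrative, with the Transform assigned in the Inspector:

```csharp
using UnityEngine;

// A minimal sketch of reading Daydream Controller input via the
// GvrController API (Google VR SDK for Unity v1.2, as described above).
public class DaydreamControllerReader : MonoBehaviour
{
    // Hypothetical object that will mirror the controller's movement.
    public Transform controllerModel;

    void Update()
    {
        // Button states, mirroring Unity's Input API naming.
        if (GvrController.ClickButtonDown)
            Debug.Log("Touchpad was just clicked");
        if (GvrController.AppButtonDown)
            Debug.Log("App button was just pressed");

        // Touchpad: check for a touch, then read its position.
        if (GvrController.IsTouching)
        {
            Vector2 touchPos = GvrController.TouchPos;
            Debug.Log("Touch position: " + touchPos);
        }

        // Apply the controller's real-world orientation to a GameObject.
        if (controllerModel != null)
            controllerModel.localRotation = GvrController.Orientation;
    }
}
```

Attach the script to any GameObject to try it out; a swipe detector could be layered on top by tracking how TouchPos changes between frames, as suggested above.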
And that's it for our introduction to the Daydream Controller. The world of virtual reality is still vast and unexplored, and every day new ways to interact with the VR world are being tried out. So, keep experimenting!

About the author
Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

The Value of Existing 3D Data in the Age of Augmented & Virtual Reality

Stefan Minack
14 Mar 2017
5 min read
In this article, we take a look at what is necessary, and what you can achieve with the data you might already possess (OBJ, FBX, Collada, and so on), to make it work in the context of virtual and augmented reality. As AR/VR technologies are expected to change the way human beings interact (as was the case with the advent of the PC, the Internet, and mobile), applying these technologies in practice will surely affect the way we convey messages to our customers, the way we do business together, and our day-to-day communication with co-workers. These technologies equip us with visualization, information, and playfulness at the same time, and they are ready at hand wherever we go.

Getting started
Working with real-time visualization can be rough. It is difficult because Virtual and Augmented Reality are nothing less than rendering images in real time, with plenty of sensor data, or image processing from the device's camera, on top of it. Luckily, there is enough software and hardware around that takes away the pain of dealing with the second part of that equation, processing the sensor data: software tools such as Vuforia and Wikitude, as well as the hardware and software embedded in devices such as mobile phones and tablets. Nonetheless, we should keep in mind that processing sensor data still uses some of the system's resources and limits the space and computation cycles we have left to render the visualization. We do have constraints we need to optimize the content for if it is to run as smoothly as possible. This means we need to take care of things like the limited memory and processing power of the device, something that is very common in designing games. For our use case, the 3D data can come in plenty of different formats and flavors; to make it feasible for our application, we need to do some work.

Two types of data
First, there is data that is created only to look as good as possible, without having to be accurate in any way. It is merely a visually correct representation of an object. If you are working in the field of creating high-end CGIs, it normally does not matter whether an image takes two or two and a half hours to compute. It also doesn't matter if it takes 30 or 40 gigabytes of RAM to draw those images to a hard disk. Working with animations brings the processing time down, but we are still far away from creating those images in real time.

Secondly, there is the kind of data created by engineers: data that is a virtual representation of an actual physical object, with all its mechanical properties. This can go down to the point where every nut and every bolt on a machine, every small detail, is present in the model. Most of the time, this kind of data is problematic in several ways. On the one hand, it is not possible to display it on, let's say, a mobile device. On the other hand, this level of detail is usually confidential data, something an engineer does not want floating around somewhere on the Internet.

In both cases there has to be some kind of data reduction. This could be achieved by manual labor, removing objects that are not necessary for the presentation. It could also be achieved by recreating models with the right amount of detail, or even by using automated reduction algorithms, and by recreating or mimicking the visual characteristics of the materials used on the object we want to present.

Conclusion and outlook
We are still talking about data that is already there: data that was already created in the process of designing and engineering a product, data we only have to take care of in the right and feasible way. While this real-time representation of a product cannot compete with the visual fidelity of a pre-rendered image or video, it can carry a lot of value by adding different levels of interaction and information to the 3D model.

Example of a 3D tour with interaction (Vuframe)

Practically speaking, it can be a deal breaker whether a consumer can see that an object (a couch, a TV, or a piano) will fit into her home. AR helps her bring immovable goods into her home. With VR she can do it the other way around and explore spaces that either do not exist yet or are hard to reach (future properties, or production centers on the other side of the globe). In short, we are all visual thinkers to a great extent, and AR/VR will become the leading technologies facilitating this aspect.

Sofa in life-size Augmented Reality based on CAD data (Vuframe)

For companies and freelancing professionals that create or work with 3D data, this is a huge step. They can utilize the data they have already created, optimize it, put it into a better context by adding interaction and different layers of information that can be displayed, and use it, for example, in sales and marketing activities or even employee education. And by keeping in mind that this is an option, it is possible to create the data in a way that skips most of the optimization process, which reduces costs and production time.

About the author
Stefan Minack is a 3D artist and head of content at Vuframe. He's been working in 3D content creation on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.

Get the most out of your VR content: Maximizing the reach of your immersive experience - Part 2

Andreas Zeitler
10 Mar 2017
6 min read
This post gives a short summary of the landscape of VR devices and tools available today, continuing with non-mobile devices (read about mobile devices in the first part). I will also outline some pitfalls. A follow-up post will talk about the tools used to create VR content and optimizing your content for maximum reach.

Dedicated VR devices
If you plug it into your PC, it's a dedicated VR device. The most popular ones are the Oculus Rift and the HTC Vive (developed by Valve but manufactured by HTC). Another one is the PlayStation VR set, which only works with a PlayStation, but is a dedicated VR device nonetheless. All dedicated VR devices include (or offer on top) sensors for positional head tracking and controllers for both hands that track the hands and arms in relation to the head of the user. While mobile VR devices are limited by the processing power of the smartphone inserted into the viewer, dedicated VR devices are much more powerful, although, again, dependent on the processing power of the PC used. Most manufacturers recommend using a recent PC with a very recent graphics card. Once that's taken care of, the much higher bottom line concerning cost can be felt across the board:

- Close to full-HD pixel resolution per eye at a 110-degree field of view (compared to 55 to 78 degrees, and half of the smartphone's screen resolution, for mobile devices).
- 90 Hz refresh rate, compared to 60 Hz on mobile devices.
- Tracks your head and hands in VR space, allowing for upper-body range of motion. The HTC Vive even tracks free-range motion in a 3-by-3-meter area via infrared sensors. This literally lets you walk around in VR space (until you bump into a wall or trip over the headset's wires).
- The high-powered GPU gives the user access to AAA-quality real-time 3D environments as we know them from AAA-quality computer games.

Positional tracking, head tracking, hand tracking
The tracking is great and surprisingly accurate, especially the HTC Vive's. However, it can be a pain to set up, won't work in just any setting, and can grow inaccurate even within a single user session. On top of that, a major pitfall is that positional user input is not something that can be handled casually without a lot of experience in at least 3D programming. A good developer and solid user experience design are a must to turn this into a usable end-user experience. All major SDKs (Oculus, Vive, PlayStation) take different approaches to implementing these controllers, which means double or triple the programming effort to roll out VR content using tracked user input on all devices at the same time. This is a good indication of where the VR ecosystem is as a whole: still at its first device generation, which is not at all standardized across devices and mainly targeted at tech-savvy users. That being said, about four times as many new VR devices are announced for each following year than were released the year before, so hardware and software quality are bound to improve and will become feasible for less tech-savvy users.

What about audience size?
Let's stick to established terminology: dedicated VR devices are just that, dedicated. They require an initial investment of close to 2,000 dollars to get you started and (especially the Vive) require a dedicated physical space where the motion trackers can be set up. The whole setup and operating process is very techy, and each system you want to support has its own ins and outs. These requirements cut down the potential audience size tremendously.

The number of units sold to date does not exceed 1.6 million devices, many of which are owned by gamers (more on that later). PlayStation VR has sold 0.75 million alone. This deserves special consideration because the PlayStation platform traditionally is very closed to developers, with strict requirements in place to gain access. On top of that, Sony has specifically stated that it wants game content for its users rather than media and social VR content.

Mobile vs. dedicated devices: what's the difference?
It is very telling that, through PlayStation VR, most dedicated units sold have been shipped to gamers. This means that content for those devices will go beyond mere VR photo and video. Instead, highly immersive real-time 3D environments, such as the IKEA VR Showroom and the Audi VR Showroom, are indicative of high production values and costs. Agencies and game studios work on these experiences for months at a time and then present them to a larger audience in a fixed setting, that is, at a store. This is "dedicated" VR in every aspect: hardware, software, use case, and setting. There's nothing casual about it. Content production is complicated and takes a lot of time and money, because any VR content for dedicated devices is going to be compared to triple-A games. There might be a niche for dedicated VR feature films but, so far, there are no case studies or white papers indicating that watching a 60+ minute film without much interaction in it is something a lot of users desire.

To summarize: if you want to create VR games and have a decent budget and a skilled team available, dedicated VR devices are just the place for you, even if the group of potential buyers of what you create is well below 1 million people; unless you happen to be a registered PlayStation developer, in which case it's 1.6 million people. Another field where dedicated devices make sense is advertising, where a still decent budget and a small team can create something that can keep up with a triple-A game, at least in a short session. For everybody else, mobile VR devices are where it is happening, and you can take comfort in the fact that your audience will also be there.

Read on about tools to create VR content and how you can go about optimizing your audience reach in the next post.

Sources:
[1] http://venturebeat.com/2017/02/04/superdata-vrs-breakout-2016-saw-6-3-million-headsets-shipped/
[2] http://www.digitaltrends.com/virtual-reality/oculus-rift-vs-htc-vive/

About the author
Andreas Zeitler is the founder and CEO at Vuframe. He's been working with Augmented & Virtual Reality on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.

The Business Value of Existing 3D Data in the Age of Augmented and Virtual Reality - Part 1

Erich Renz
09 Mar 2017
6 min read
This blog post covers the impact of emerging technologies on many of today's business models. It is driven by the hypothetical question "What if I use or reuse my existing 3D data to design, produce, promote, or sell a product?" and addresses executives, marketers, sales representatives, and public relations managers: those risk-takers and doers who make strategic decisions and decide how products can be marketed and sold to current and future clients in a new way. In this article, we will also refer to business theories and apply them to the realm of 3D, Augmented, and Virtual Reality.

Getting the job done with Augmented and Virtual Reality
In his latest book, Competing Against Luck, Harvard Business School professor Clayton M. Christensen and his co-writers build their theory about innovation and customer choice on the premise that "we all have jobs we need to do that arise in our day-to-day lives and when we do, we hire products or services to get these jobs done." Practically speaking, if you are a sales manager in a B2B company selling industrial gates, there are plenty of options for getting your job done. You could use print brochures to introduce a showpiece to an interested lead. Shortly after the intro, you might pull out documents with technical specifications. To round out the conversation, you show a short movie that demonstrates the functionality of your product in a fun, engaging way. In return, your client might tell you that this is more or less what he needs to get his job done: driving the truck smoothly into the garage, parking it safely, and not thinking about further maintenance costs or the safety of the truck while parked.

With new technologies on the rise, we can help both our sales manager and his customer make their working lives easier. Equipping the sales representative with a tablet and an app that contains either each product of a company's product line or a comprehensive product configurator, one that lets him choose between different models, present mechanical systems in real time, and assemble the demanded product in front of the interested lead, shows that customer needs are taken seriously, then and there, by the organization our sales manager represents in the field. Together with the lead, he will pick the product and preview it in Augmented Reality to see if it fits the conditions on-site, and readjust if need be. Once he has finished his visit, he can send out a report to the client and to his own back office for documentation. It is almost a no-brainer to mention that this data can be reused for installation purposes and to clarify issues before any legal actions are taken (due to wrong installation, missing mounting parts, or malfunctioning).

Here is another case. Imagine you were to market properties to prospective buyers. You could perform the job by compiling all the information you want to share on a website that includes floor plans, image galleries, and promotional videos. You could also advertise on one of the many platforms that specialize in selling properties. These are two possible approaches for getting your job done as a real estate marketer.

But what if you reuse the architect's 3D CAD model and present the future building, located in Madrid, to the family from Belgrade who is really keen on taking a virtual walk through the apartment? All they need is a tablet or a smartphone. We are not talking about expensive tech that has to be newly purchased.

Immersing your clients in a virtual environment that cannot be entered under normal circumstances (because it is expensive, dangerous, or impossible to get there, or the event is so rare that it is unlikely to repeat easily) is key to getting the job done with Virtual Reality in a sensible way, far from being gimmicky or unsustainable.

How do AR/VR technologies affect particular building blocks in a business model?
To continue the argument of how 3D will influence future business models, I will refer to Alexander Osterwalder's and Yves Pigneur's widely used Business Model Canvas. The canvas helps visualize how a company generates revenue and how a value proposition is created and delivered to specific customer groups. The value proposition and customer segments are at the heart of each and every business model. 3D data has one big advantage: as long as it sits there, it cannot lose any value. Anyone in research and development, sales, or marketing can use it. 3D lays the foundation of a common, shared language between employees and customers. If you are thinking about your first or next step towards digitization, start with 3D. Seriously.

Value proposition
Meeting the needs of customers is the supreme discipline. Ask yourself, "What value do we deliver to the customer?" or "Which of our customer's problems are we helping to solve?" Two big pluses that come with Augmented Reality are cost reduction, by lessening the number of returned goods, and product accessibility. Take, for example, a prospective customer trying to find out if a desired shelf fits into that small spot in her bedroom. Life-size product previews can become a substantial aid in making purchase decisions. If she can access the product any time on her phone and get all the product-specific information combined with the visual representation in real time, she can interact with the product before purchase and test it upfront. Thus, playfully consuming information with the benefits of AR has the potential to lower the rate of returned goods.

In the visual-driven age of Augmented and Virtual Reality, the buying process shifts from linear buying, based on a mixture of descriptions and product visuals, to a non-linear experience where the consumer accesses and selects the desired products and adapts them to her personal circumstances. That said, virtual product demonstrations bring the value proposition closer than ever into the homes of potential buyers. This is an outstanding opportunity for producers to engage with customers on an emotional level that is informative and informal in its essence.

As you can see, opportunity abounds. In part two of this article, we will take a look at customer segments, distribution channels, revenue streams, and more.

About the author
Erich Renz is Director of Product Management at Vuframe, an online platform for virtual product demonstrations with Augmented and Virtual Reality, where he is investigating and driving the development of AR, VR, 3D, and 360° applications for businesses and consumers.

Get the most out of your VR content: Maximizing the reach of your immersive experience - Part 1

Andreas Zeitler
08 Mar 2017
6 min read
This post gives a short summary of the landscape of VR devices and tools available today, starting with mobile devices. I will also outline some pitfalls. A follow-up post will deal with non-mobile devices, and a third part will talk about the tools used to create VR content and optimizing your content for maximum reach.

Mobile devices & viewers
The concept is simple: you use your existing smartphone and put it into a VR viewer. The smartphone then acts as a display and processing unit for the VR content at the same time. The VR content can be 360 media (images or video) or real-time 3D scenes rendered by a game engine. Both types of content usually feature interactions like hotspots or animations which can be triggered while in VR mode to make the content more engaging. The content arrives on your smartphone as part of an app or via the mobile web browser (more on that later). Before you can put the smartphone into the VR viewer, you need to enable Cardboard mode / VR mode / distortion mode / fisheye mode on it. This displays the content as two images on the screen, one for each eye.

Regular Mode vs. Fisheye/VR Mode for a real-time 3D scene

Popular mobile VR viewers are Google Cardboard, which comes in many, many versions, shapes, sizes, and finishes, and Samsung's GearVR, which comes in three distinct device generations.

Google Cardboard is a standard design which is publicly available and can be manufactured by anybody. Cardboards are thus available from many manufacturers, made either from actual cardboard or from plastic. There are different versions, ranging from simple ones to more advanced ones that include headphones and/or the ominous "button" (more on that later). Chances are that if you own a smartphone with at least a 4-inch screen, there will be a suitable Cardboard device available for you to purchase and use. The ones made from cardboard are often used as giveaways at events and trade shows. The user experience is rather complicated: you have to install a Google Cardboard-enabled app and scan the Cardboard QR code printed on the device itself with your phone. More often than not, this graphical code is missing on the device. As there are different versions of the Google Cardboard standard being manufactured, you need to tell the Cardboard-enabled app which version you are using; otherwise, distorted content is displayed which does not match the lenses in your Cardboard device. That makes the experience blurry, distorted, or un-viewable altogether. In a casual survey we do with our customers, we find that while 7 out of 10 have a Cardboard device available, only 1 out of 10 has used a Cardboard device successfully with good results, due to these limitations.

Google Cardboard with Smartphone inserted

Samsung GearVR uses the same concept but comes as a sturdy plastic viewer, which has some additional ports built in and actually connects to the inserted Android device via the micro USB connector at the bottom of the device. Through this, GearVR can address one issue Google Cardboard cannot: user input. As the smartphone is inserted into the VR viewer in both cases, it is not possible to touch the screen. Thus the only available forms of interaction are motion gestures (i.e., shaking the device) or focusing on elements in the virtual environment, which are activated after a certain amount of time (this is called gaze-based interaction).

While motion-gesture interaction is limited to only a few suitable use cases, gaze-based interaction has become the de facto standard in VR, even with apps targeted at the GearVR, which offers a D-pad and an additional button on its right side. Because of these dedicated user input controls, we at Vuframe call GearVR a "more dedicated" VR device than Google Cardboard, which we call a "casual VR device". Google Cardboard only features a very unsuccessful attempt to emulate this with "the button": on Cardboard devices that include this feature, the user can press a button (actually, it is more like a mechanical switch) on the side of the Cardboard viewer, which then moves a magnet. The magnet is "sensed" by the smartphone's compass sensor and interpreted as a click. It's as crazy as it sounds, and it does not really work even half the time.

An honorable mention in the category of mobile viewers and user input goes to Google's newest VR platform. It is called Daydream and comes with its own dedicated fabric-and-foam viewing device, which includes a motion-enabled remote for user input. This remote allows pointing in VR space and translates a second degree of motion, which can simulate the user's hand. It's a much better take on VR input, and it is definitely the interaction pattern we would want to see established as the new standard. Google Daydream is not yet showing any clear traction based on reliable sales figures.

In summary, you will reach the most people by sticking with a device-neutral, gaze-based way for the user to interact with your content.

What about audience size?
Google Cardboard is currently the device with the widest spread, with approximately 85 million devices in circulation at the end of 2016 (see [1] and [2]). Mind you, many of those devices have probably been branded giveaways; there are not really 85 million active Cardboard users out there, and many devices are just lying in a corner somewhere gathering dust. For GearVR, reliable figures put the number of units sold in 2016 at 5 million, with a liberal guess of 6.5 million at the time of writing this post. As mentioned above, for Google Daydream it's still very early days. As far as mobile viewers are concerned, you are covered by sticking with Cardboard and GearVR; together they will give you an approximate audience that will hit 100 million circulated devices sooner rather than later in the course of this year.

Read on about dedicated VR devices and workstation-based VR in the next post.

Sources:
[1] http://venturebeat.com/2017/02/04/superdata-vrs-breakout-2016-saw-6-3-million-headsets-shipped/
[2] http://www.hypergridbusiness.com/2016/11/report-98-of-vr-headsets-sold-this-year-are-for-mobile-phones/

About the author
Andreas Zeitler is founder and CEO at Vuframe. He's been working with Augmented and Virtual Reality on a daily basis for the past 8 years. Vuframe's mission is to democratize AR & VR by removing the tech barrier for everyone.