
Tech Guides


Why MXNet is a versatile Deep Learning framework

Aaron Lazar
05 Sep 2017
5 min read
Tools to perform deep learning tasks are in abundance. You have programming languages that are adapted for the job, or those specifically created to get the job done. Then you have several frameworks and libraries which allow data scientists to design systems that sift through tonnes of data and learn from it. But a major challenge for all these tools lies in tackling two primary issues: the size of the data and the speed of computation.

With petabytes and exabytes of data, it has become far more taxing for researchers to handle. Take image processing, for example: ImageNet is a massive dataset of over 14 million images across thousands of classes, and tackling that scale is a serious lip-biting affair. The speed at which researchers can get actionable insights from the data is just as important. Powerful hardware like multi-core GPUs, rumbling with raw power and begging to be tamed, has waltzed into the mosh pit of big data. You may try to humble these mean machines with old-school machine learning stacks like R, SciPy, or NumPy, but in vain. So the deep learning community developed several powerful libraries to solve this problem, and they succeeded to an extent. But one major problem remained: no framework solved for efficiency and flexibility together. This is where a one-of-a-kind, powerful, and flexible library like MXNet rises to the challenge and makes developers' lives a lot easier.

What is MXNet?

MXNet sits happy at over 10k stars on GitHub and has recently been accepted into the Apache Incubator. It focuses on accelerating the development and deployment of deep neural networks at scale. This means exploiting the full potential of multi-core GPUs to process tonnes of data at blazing fast speeds. We'll take a look at some of MXNet's most interesting features over the next few minutes.

Why is MXNet so good?

Efficient

MXNet is backed by a C++ backend which allows it to be extremely fast even on a single machine. It automatically parallelizes computation across devices and synchronizes that computation when multithreading is introduced. Its near-linear scaling applies not only to multiple GPUs within a machine: MXNet also supports heavily distributed computing by scaling across machines as well. On top of that, MXNet has a graph optimisation layer that sits on a dynamic dependency scheduler, which improves both memory efficiency and speed.

Extremely portable

MXNet can be programmed in umpteen languages, such as C++, R, Python, Julia, JavaScript, Go, and more. It is widely supported across operating systems like Linux, Windows, iOS, and Android, making it genuinely multi-platform, including low-level platforms. It also works well in cloud environments like AWS - one of the reasons AWS has officially adopted MXNet as its deep learning framework of choice. You can now run MXNet from the Deep Learning AMI.

Great for data visualization

MXNet ships with the mx.viz.plot_network method, which uses its in-built Graphviz support to render a neural network as a computation graph. Check Joseph Paul Cohen's blog for a great side-by-side visualisation of CNN architectures in MXNet. Alternatively, you could strip TensorBoard off TensorFlow and use it with MXNet.
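As a minimal sketch of that visualization workflow (assuming MXNet and Graphviz are installed; the network itself is a throwaway example, not one from the article):

```python
import mxnet as mx

# Declare a small symbolic network to visualize.
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=128)
act1 = mx.sym.Activation(data=fc1, name='relu1', act_type='relu')
fc2 = mx.sym.FullyConnected(data=act1, name='fc2', num_hidden=10)
net = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# plot_network returns a graphviz Digraph of the computation graph.
graph = mx.viz.plot_network(net, shape={'data': (1, 784)})
graph.render('mlp_graph')  # writes mlp_graph.pdf alongside the script
```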
Flexible

MXNet supports both imperative and declarative/symbolic styles of programming, allowing you to blend both styles for increased efficiency. Libraries like NumPy and Torch support plain imperative programming, while TensorFlow, Theano, and Caffe support plain declarative programming. You can get a closer look at what these styles actually are here. MXNet is the only framework so far that mixes both styles to maximise efficiency and productivity.

In-built profiler

MXNet comes packaged with an in-built profiler that lets you profile execution times in the network, layer by layer. While you may still want to use general profiling tools like gprof and nvprof to profile at the kernel, function, or instruction level, the in-built profiler is specifically tuned to provide detailed information at the symbol or operator level.

Limitations

While MXNet has a host of attractive features that explain why it has earned public admiration, it has its share of limitations, just like any other popular tool. One of the biggest issues encountered with MXNet is that it tends to give varied results when compile settings are modified - for example, a model might work well with cuDNN3 but not with cuDNN4. To overcome issues like this, you may have to spend some time on forums. Moreover, writing your own operators or layers in C++ for efficiency is a daunting task, although the official documentation notes that this has become easier as of v0.9. Finally, the documentation is introductory and is not organised well enough to support creating custom operators or performing other advanced tasks.

So, should I use MXNet?

MXNet is the new kid on the block that supports modern deep learning models like CNNs and LSTMs. It boasts immense speed, scalability, and flexibility, and consumes as little as 4 GB of memory when running deep networks with almost a thousand layers. The core library, including its dependencies, can be amalgamated into a single C++ source file, which can be compiled on both Android and iOS, as well as in a browser with the JavaScript extensions. But like all other libraries, it has its own hurdles - none of them serious enough to stop you from getting the job done, and done well. Is that enough to get you excited about MXNet? Go get working then! And don't forget to tell us your experiences of working with MXNet.
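To make the mixed-style claim concrete, here is a minimal sketch of both styles side by side using MXNet's classic NDArray and Symbol APIs (API details vary by version; treat this as illustrative):

```python
import mxnet as mx

# Imperative style: operations on NDArrays execute eagerly, like NumPy.
a = mx.nd.ones((2, 3))
b = a * 2 + 1
print(b.asnumpy())

# Symbolic style: declare the graph first, then bind shapes and run it.
x = mx.sym.Variable('x')
y = mx.sym.FullyConnected(data=x, name='fc', num_hidden=4)
executor = y.simple_bind(ctx=mx.cpu(), x=(2, 3))  # allocates memory, compiles graph
executor.forward(x=mx.nd.ones((2, 3)))
print(executor.outputs[0].asnumpy())
```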


TensorFire: Firing up Deep Neural Nets in your browsers

Sugandha Lahoti
04 Sep 2017
7 min read
Machine learning is a powerful tool with applications in a wide variety of areas, including image and object recognition, healthcare, and language translation. However, running ML tools has traditionally required complicated backends, complex architecture pipelines, and strict communication protocols. To overcome these obstacles, TensorFire, an in-browser deep learning library, is bringing the capabilities of machine learning to web browsers by running neural nets at blazingly fast speeds using GPU acceleration. It's one more step towards democratizing machine learning using hardware and software already available to most people.

How did in-browser deep learning libraries come to be?

Deep learning neural networks, a type of advanced machine learning, are probably one of the best approaches for predictive tasks. They are modular, can be tested efficiently, and can be trained online. However, since neural nets typically rely on supervised learning (learning fixed mappings from input to output), they are useful only when large quantities of labelled training data and a sufficient computational budget are available. They also require the installation of a variety of software, packages, and libraries, and running a neural net offers a suboptimal user experience, with a console window opened to show its execution. This called for an environment that could make these models more accessible, transparent, and easy to customize. Browsers were a perfect choice, as they are powerful, efficient, and have interactive UI frameworks. In-browser deep learning neural nets can be coded using JavaScript without any complex backend requirements. Once browsers came into play, in-browser deep learning libraries (ConvNetJS, CaffeJS, MXNetJS, and so on) grew in popularity. Many of these libraries work well, but they leave a lot to be desired in terms of speed and easy access. TensorFire is the latest contestant in this race, aiming to solve the problem of latency.

What is TensorFire?

TensorFire is a JavaScript library for executing neural networks in web browsers without any setup or installation. It differs from other in-browser libraries in that it leverages the GPUs built into most modern devices to perform exhaustive calculations at much faster rates - almost 100x faster. Like TensorFlow, TensorFire is used to swiftly run ML and DL models. However, unlike TensorFlow, which deploys ML models to one or more CPUs in a desktop, server, or mobile device, TensorFire utilizes GPUs irrespective of whether they support CUDA, eliminating the need for GPU-specific middleware. At its core, TensorFire is a JavaScript runtime and a DSL built on top of the WebGL shader language for accelerating neural networks. Since it runs in browsers, which are used by almost everyone, it brings machine and deep learning capabilities to the masses.

Why should you choose TensorFire?

TensorFire is highly advantageous for running machine learning capabilities in the browser for four main reasons:

1. Speed

TensorFire utilizes the powerful GPUs (both AMD and Nvidia) built into modern devices to speed up the execution of neural networks. The WebGL shader language is used to easily write fast vectorized routines that operate on four-dimensional tensors. Unlike pure JavaScript-based libraries such as ConvNetJS, TensorFire uses WebGL shaders to run in parallel the computations needed to generate predictions from TensorFlow models.
2. Ease of use

TensorFire avoids shuffling data between GPUs and CPUs by keeping as much data as possible on the GPU at a time, making it faster and easier to deploy. This means that even browsers that don't fully support WebGL API extensions (such as floating-point pixel types for textures) can be used to run deep neural networks. TensorFire makes use of low-precision quantized tensors, and this low-precision approach means smaller models can easily be deployed to the client, resulting in fast prediction capabilities.

3. Privacy

Instead of bringing data to the model, the model is delivered to users directly, thus maintaining their privacy: the website trains a network on the server end and then distributes the weights to the client. This is a great fit for applications where the data is on the client side and the deployed model is small. It also significantly improves latency and simplifies the code base on the server side, since most computations happen on the client side.

4. Portability

TensorFire eliminates the need for downloading, installing, and compiling anything, as a trained model can be deployed directly into a web browser and serve predictions locally. With no native apps to install and no expensive compute farms required, TensorFire-based apps can have better reach among users.

Is TensorFire really that good?

TensorFire has its limitations. Using built-in browser GPUs to accelerate speed is both its boon and its bane: since GPUs are also responsible for handling the computer's GUI, intensive GPU usage may render the browser unresponsive. Another issue is that although TensorFire speeds up execution, it does not improve compile time. Also, the TensorFire library is restricted to inference and as such cannot train models, though it allows importing models pre-trained with Keras or TensorFlow. TensorFire suits applications where the data is on the client side and the deployed model is small, or where the user doesn't want to supply data to the servers. However, when both the trained model and the data are already established on the cloud, TensorFire has no additional benefit to offer.

How is TensorFire being used in the real world?

TensorFire's low-level APIs can be used for general-purpose numerical computation, running algorithms like PageRank for calculating relevance or Gaussian elimination for inverting matrices in a fast and efficient way. Having fast neural networks in the browser allows for easy implementation of image recognition: TensorFire can perform real-time client-side image recognition, and it can run neural networks that apply the look and feel of one image to another while preserving the details of the original - Deep Photo Style Transfer is an example. Compared with TensorFlow, which required minutes to do the task, TensorFire took only a few seconds. TensorFire also paves the way for tools and applications that can quickly parse and summarize long articles and perform sentiment analysis on their text, and it enables running RNNs in browsers to generate text with a character-by-character recurrent model. With TensorFire, neural nets running in browsers can be used for gesture recognition, distinguishing images, detecting objects, and so on.
These techniques are generally employed using the SqueezeNet architecture - a small convolutional neural net that delivers highly accurate predictions with considerably fewer parameters. Neural networks in browsers can also be used for web-based games, or for user modelling, which involves modelling some aspects of user behavior, or the content of sites visited, to provide a customized user experience. As TensorFire is written in JavaScript, it is readily available on the server side too (via Node.js) and so can be used for server-based applications as well. Since TensorFire is relatively new, its applications are just beginning to catch fire. With a plethora of features and advantages under its belt, TensorFire is poised to become a default choice for running in-browser neural networks. Because TensorFlow natively supports only CUDA, TensorFire may even outperform TensorFlow on computers that have non-Nvidia GPUs.
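TensorFire's shader internals aren't public in this article, but the low-precision quantization idea mentioned above is easy to illustrate. The following is a minimal NumPy sketch of affine 8-bit quantization, a concept demonstration rather than TensorFire code:

```python
import numpy as np

def quantize_uint8(w):
    """Map float32 weights onto 0..255 with a scale and offset."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0  # avoid divide-by-zero for constant tensors
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(4, 4).astype(np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)
print(np.abs(w - w_hat).max())  # small reconstruction error, a quarter of the memory
```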


How to take a business-centric approach to security

Hari Vignesh
03 Sep 2017
6 min read
Today's enterprise is effectively borderless: customers and suppliers transact from anywhere in the world, and previously siloed systems are converging on the core network. The shift of services (and data) into the cloud, or many clouds, adds further complexity to the security model. Organizations that continue to invest in traditional information security approaches either fall prey to cyber threats or find themselves unprepared to deal with cyber crimes. I think it is about time for organizations to move their cyber security efforts away from traditional defensive approaches to a proactive approach aligned with the organization's business objectives. To illustrate and simplify, let's classify traditional information security approaches into three types.

IT infrastructure-centric approach

In this traditional model, organizations tend to augment their infrastructure with products of a particular vendor, which form the building blocks of that infrastructure. As IT infrastructure vendors extend their reach into security, they introduce a security portfolio to solve the problems their products generally introduce. Microsoft, IBM, and Oracle are examples of vendors with a complete range of products in the IT infrastructure space. In most such cases the decision maker is the CIO or infrastructure manager, with little involvement from the CISO or business representatives.

Security-centric approach

This is another traditional model, whereby security products and services are selected based on discrete needs and budgets. Generally, only analyst research reports are consulted and highly rated products are considered, with a "rip-and-replace" mentality rather than any type of long-term allegiance. Vendors like FireEye, Fortinet, Palo Alto Networks, Symantec, and Trend Micro fall into this category. Generally, the CISO or security team is involved, with little to no involvement from the CIO or business representatives.

Business-centric approach

This is an emerging approach wherein decisions affecting the cybersecurity of an organization are made jointly by corporate boards, CIOs, and CISOs. This new approach helps organizations plan an effective security program driven by business requirements, with a holistic scope that includes all business representatives, the CIO, the CISO, third parties, suppliers, and partners. This improves cybersecurity effectiveness and operational efficiency, and helps align security with enterprise goals and objectives.

The traditional approaches to cybersecurity are no longer working, because the critical link between the business and cybersecurity is missing. These approaches are generally governed by enterprise boundaries, which no longer exist with the advent of cloud computing, mobile, and social networking. Another limitation is that traditional approaches are very audit-centric and compliance-driven, which means the controls are limited by audit domain and driven largely by regulatory requirements.

Business-centric approach to security

Add in new breeds of threat that infiltrate corporate networks and it is clear that CIOs should be adopting a more business-centric security model. Security should be a business priority, not just an IT responsibility. So, what are the key components of a business-centric security approach?

Culture

Organizations must foster a security-conscious culture whereby every employee is aware of potential risks, such as malware propagated via email or corporate data saved to personal cloud services such as Dropbox.
This is particularly relevant for organizations that have a BYOD policy (and even more so for those that don't, and are therefore more likely to be at risk of shadow IT). According to a recent Deloitte survey, 70 per cent of organizations rate their employees' lack of security awareness as an 'average' or 'high' vulnerability. Today's tech-savvy employees access the corporate network from all sorts of devices, so educating them about the potential risks is critical.

Policy and procedures

As we learned from the Target data breach, the best technologies are worthless without incident response processes in place. The key outcome of effective policy and procedures is the ability to adapt to evolving threats; that is, to incorporate changes to the threat landscape in a cost-effective manner.

Controls

Security controls deliver policy enforcement and provide hooks for delivering security information to visibility and response platforms. In today's environment, business occurs across, inside, and outside the office footprint, and infrastructure connectivity is increasing. As a result, controls need to extend to wherever the business operates. Key emergent security controls include:

- Uniform application security controls (on mobile, corporate, and infrastructure platforms)
- Integrated systems for patch management
- Scalable environment segmentation (such as for PCI compliance)
- Enterprise Mobility Application Management for consumer devices
- Network architectures with edge-to-edge encryption

Monitoring and management

A 24x7 monitoring and response capability is critical. Larger enterprises tend to build their own Security Operations Centers, but the cost of around-the-clock staffing and of finding and retaining skilled security resources is prohibitive for the medium enterprise. Moreover, according to Verizon Enterprise Solutions, companies discover breaches through their own monitoring in only 31 per cent of cases. An outsourced solution is often the best option, as it enables organizations to employ sophisticated technologies and processes to detect security incidents in a cost-effective manner.

A shift in focus

It's never been more critical for organizations to have a robust security strategy. But despite the growing number of high-profile data breaches, too much information security spending is dedicated to the prevention of attacks, and not enough is going into improving (or establishing) policies and procedures, controls, and monitoring capabilities. A new approach to security is needed, where the focus is on securing information from the inside out, rather than protecting information from the outside in. There is still value in implementing endpoint security software as a preventative measure, but those steps now need to be part of a larger strategy that addresses the fact that so much information is outside the corporate network. The bottom line is that planning cybersecurity with a business-centric approach can lead to concrete gains in productivity, revenue, and customer retention. If your organization is among the majority of firms that don't yet take this approach, now would be a great time to start.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


What software stack does Netflix use?

Richard Gall
03 Sep 2017
5 min read
Netflix is a company that has grown at an incredible pace. In July 2017 it reached 100 million subscribers around the world - for a company that started life as a DVD subscription service, Netflix has proved to be adaptable, constantly one step ahead of changes in the market and changes in user behaviour. It's an organization that has been able to scale while maintaining a strong focus on user experience. This flexibility and adaptability has been driven - or at least enabled - by its approach to software. But what software does Netflix use, exactly? How and why has it made decisions about its software stack?

Netflix's front end development tools

User experience is critical for Netflix. That's why React is such a valuable framework for the engineering team. The team moved to React in 2014 as a means to completely overhaul their UI - essentially, to make it fit for purpose for the future. As they outline in this piece from January 2015, their core considerations were startup speed, runtime performance, and modularity - or, to summarize, managing scale in terms of users, content, and devices. The piece goes on to explain why React fit the bill, and mentions how important isomorphic JavaScript is to the way they work: "React enabled us to build JavaScript UI code that can be executed in both server (e.g. Node.js) and client contexts. To improve our start up times, we built a hybrid application where the initial markup is rendered server-side and the resulting UI elements are subsequently manipulated as done in a single-page application."

How Netflix manages microservices at scale

A lot has been written about Netflix and microservices (we recommend this blog post as a good overview of how microservices have been used by Netflix over the last decade). A big challenge for a platform that has grown like Netflix is managing those microservices. To do this, the team built their own tool, called Netflix Conductor: "In a microservices world, a lot of business process automations are driven by orchestrating across services. Conductor enables orchestration across services while providing control and visibility into their interactions." To get microservices to interact with one another, the team wrote shell and Python scripts - however, this ran into issues, largely due to scale. To combat this, the engineers developed a tool called Scriptflask, which "exposes the functionality of utilities as REST endpoints". It was developed using Flask (hence the name), which offered good interoperability with Python, a language used across a wide range of Netflix applications (a minimal sketch of this pattern appears at the end of this piece). The team also use Node.js with Docker to manage services - it's well worth watching this video from Node.js Interactive in December 2016, where Yunong Xiao, Principal Software Engineer at Netflix, talks about "slaying monoliths".

Netflix and AWS migration

Just as we saw with Airbnb, AWS has proven crucial to Netflix's success. In fact, for the last few years the company has been undergoing a huge migration project to move the bulk of its architecture into AWS, a migration that was finally completed at the start of 2016. "We chose Amazon Web Services (AWS) as our cloud provider because it provided us with the greatest scale and the broadest set of services and features," writes Yuri Izrailevsky, VP of cloud and platform engineering at Netflix.

Netflix and big data

The move to AWS has, of course, been driven by data-related challenges. In fact, the team use Amazon S3 as their data warehouse.
The scale of data is stunning - it's been said that the data warehouse is 60 petabytes. This post from InfoQ elaborates on the Netflix big data infrastructure.

How Netflix does DevOps

For a company that has proven itself so adaptable at a macro level, it's unsurprising that the way the engineering teams at Netflix build code is incredibly flexible and agile too. It's worth quoting this from the team - it says a lot about the culture: "The Netflix culture of freedom and responsibility empowers engineers to craft solutions using whatever tools they feel are best suited to the task. In our experience, for a tool to be widely accepted, it must be compelling, add tremendous value, and reduce the overall cognitive load for the majority of Netflix engineers." Clearly the toolchain that supports development teams is open-ended - it's constantly subject to revision and change. But we wanted to flag some of the key tools that help keep the culture running. First, there is Gradle - the team write that "Gradle was chosen because it was easy to write testable plugins, while reducing the size of a project's build file." It also makes sense given that Java makes up such a large proportion of the Netflix codebase. To provide additional support, the team developed something called Nebula - "an opinionated set of plugins for the Gradle build system, to help with the heavy lifting around building applications". When it comes to integration, Jenkins is essential for Netflix: "We started with a single massive Jenkins master in our datacenter and have evolved to running 25 Jenkins masters in AWS". This is just a snapshot of the tools used by the team to build and deploy code - for a much deeper exploration, we recommend this post on Medium.
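As promised above, here is a minimal sketch of the Scriptflask idea - exposing an operational utility as a REST endpoint with Flask. The endpoint and payload here are hypothetical illustrations, not Netflix's actual API:

```python
from flask import Flask, jsonify, request
import subprocess

app = Flask(__name__)

@app.route('/utils/ping', methods=['POST'])
def ping_host():
    # Wrap a small ops utility (a single ping) behind a REST endpoint.
    payload = request.get_json(silent=True) or {}
    host = payload.get('host', 'localhost')
    result = subprocess.run(['ping', '-c', '1', host],
                            capture_output=True, text=True)
    return jsonify({'host': host, 'reachable': result.returncode == 0})

if __name__ == '__main__':
    app.run(port=5000)
```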


4 Ways You Can Use Machine Learning for Enterprise Security

Kartikey Pandey
29 Aug 2017
6 min read
Cyber threats continue to cost companies money and reputation, yet security seems to be undervalued - or maybe it's just misunderstood. With a series of large-scale cyberattacks and the menace of ransomware - earlier WannaCry, and now Petya - continuing to affect millions globally, it's time you reimagined how your organization stays ahead of the game when it comes to software security. Fortunately, machine learning can help support a more robust, reliable, and efficient security initiative. Here are just four ways machine learning can support your software security strategy.

Revamp your company's endpoint protection with machine learning

We have seen in the past how a single gap in endpoint protection resulted in serious data breaches. In May this year, Mexican fast food giant Chipotle learned the hard way when cybercriminals exploited the company's point-of-sale systems to steal credit card information. The Chipotle incident was a very real reminder for retailers to patch critical endpoints on a regular basis. It is crucial to guard your company's endpoints, which are virtual front doors to your organization's precious information. Your cybersecurity strategy must consider a holistic endpoint protection strategy to secure against a variety of threats, both known and unknown. Traditional endpoint security approaches are proving ineffective and costing businesses millions in terms of poor detection and wasted time. The changing landscape of the cybersecurity market brings with it its own set of unique challenges (Palo Alto Networks have highlighted some of these challenges in their whitepaper). Sophisticated machine learning techniques can help fight back threats that aren't easy to defend against in traditional ways. One could achieve this by adopting any of three ML approaches: supervised learning, unsupervised learning, or reinforcement learning. Establishing the right approach entails a significant understanding of what you expect from the endpoint protection product; consider checking the speed, accuracy, and efficiency of the ML-based endpoint protection solution with the vendor to make an informed choice. We recommend a supervised learning approach for endpoint protection, as it's a proven way of detecting malware and delivers accurate results. The only catch is that these algorithms require relevant data in sufficient quantity to work on, and the training rounds need to be speedy and effective to guarantee efficient malware detection (a minimal sketch of this approach appears at the end of this piece). Some of the popular ML-based endpoint protection options on the market are Symantec Endpoint Protection 14, CrowdStrike, and Trend Micro's XGen.

Use machine learning techniques to predict security threats based on historical data

Predictive analytics is no longer restricted to data science. By adopting predictive analytics, you can take a proactive approach to cybersecurity too. Predictive analytics makes it possible not only to identify infections and threats after they have caused damage, but also to raise an alarm about future incidents or attacks. Predictive analytics is a crucial part of the learning process for the system: with sophisticated detection techniques, the system can monitor network activities and report real-time data. One incredibly effective technique organizations are now beginning to use is a combination of advanced predictive analytics with a red team approach.
This enables organizations to think like the enemy and model a broad range of threats. The process mines and captures large sets of data, which are then processed. The real value here is the ability to generate meaningful insights out of the large data set collected, and then letting the red team work on processing and identifying potential threats. The organization can then use this to evaluate its capabilities, prepare for future threats, and mitigate potential risks.

Harness the power of behavior analytics to detect security intrusions

Behavior analytics is a hot area in the cybersecurity space today. Traditional systems such as antiviruses are skilled at identifying attacks based on historical data and signature matching. Behavior analytics, on the other hand, detects anomalies by judging activity against what would be considered normal behaviour. As such, behavior analytics in enterprises is proving very effective at detecting intrusions that otherwise evade firewalls or antivirus software; it complements existing security measures such as firewalls and antivirus rather than replacing them. Behavior analytics works well within private clouds and infrastructures and is able to detect threats within internal networks. One popular example is the Enterprise Immune System, by the vendor Darktrace, which uses machine learning to detect abnormal behavior in the system. It helps IT staff narrow down their perimeter of search and look out for specific security events through a visual console. What's really promising is that because Darktrace uses machine learning, the system learns not just from events within internal systems, but from events happening globally as well.

Use machine learning to close down IoT vulnerabilities

If your company relies on the Internet of Things, manually managing the large amounts of data and logs generated by millions of IoT devices could be overwhelming. IoT devices are often directly connected to the network, which makes it fairly easy for attackers and hackers to take advantage of inadequately protected networks. It would therefore be next to impossible to build a secure IoT system if you set out to identify and fix vulnerabilities manually. Machine learning can help you analyze and make sense of millions of data logs generated by IoT-capable devices. Machine-learning-powered cybersecurity systems seated directly inside your system can learn about security events as they happen, monitor both incoming and outgoing IoT traffic in devices connected to the network, and generate profiles for appropriate and inappropriate behavior inside your IoT ecosystem. This way the security system can react to even the slightest irregularities and detect anomalies that were not experienced before. Currently, only a handful of tools use machine learning or artificial intelligence for IoT security, but we are already seeing development on this front by major security vendors such as Symantec. Surveys carried out on IoT continue to highlight security as a major barrier to adoption, and we are hopeful that machine learning will come to the rescue. Cyber crimes are evolving at breakneck speed, while businesses remain slow in adapting their IT security strategies to keep up with the times.
Machine learning can help businesses make that leap and proactively address cyber threats and attacks by:

- intelligently revamping your company's endpoint protection
- predicting threats based on historical data
- harnessing the power of behavior analytics to detect intrusions
- closing down IoT vulnerabilities

And that's just the beginning. Have you used machine learning in your organization to enhance cybersecurity? Share with us your best practices and tips for using machine learning in cybersecurity in the comments below!
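As flagged above, here is a minimal sketch of the supervised approach: training a classifier on labelled feature vectors (for instance, features extracted from files or endpoint telemetry) to flag malware. The features and labels below are synthetic placeholders, not a real dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 1000 samples, 20 numeric features, binary label
# (1 = malicious). In practice these come from labelled telemetry.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))
```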


How Blockchain can level up IoT Security

Savia Lobo
29 Aug 2017
4 min read
IoT encompasses a horde of sensors, vehicles, and other devices with embedded electronics that can communicate over the internet. These IoT-enabled devices generate tons of data every second, and with IoT edge analytics they are getting much smarter - they can start or stop a request without any human intervention. "25 billion 'things' will be connected to the internet by 2020." - Gartner Research. With so much data being generated by these devices, the question on everyone's mind is: will all this data be reliable and secure?

When brains meet brawn: Blockchain for IoT

Blockchain, an open distributed ledger, is highly secure and difficult for anyone connected over the network to manipulate or corrupt. It was initially designed for cryptocurrency-based financial transactions - Bitcoin is a famous example with Blockchain as its underlying technology. Blockchain has come a long way since then and can now be used to store anything of value. So why not store IoT data in it? That data would then be secure, just as every digital asset in a Blockchain is. Decentralized and secure, Blockchain is an ideal structure to form the underlying foundation for IoT data solutions. Current IoT devices and their data rely on a client-server architecture: all devices are identified, authenticated, and connected via cloud servers capable of storing ample amounts of data. But this requires huge, expensive infrastructure. Blockchain not only provides an economical alternative; because it works in a decentralized fashion, it also eliminates single points of failure, creating a much more secure and resilient network for IoT devices. Customers can therefore relax, knowing their information is in safe hands. Today, Blockchain's capabilities extend beyond processing financial transactions - it can track billions of connected devices, process transactions, and even coordinate between devices - a good fit for the IoT industry.

Why Blockchain is perfect for IoT

Inherently weak security features make IoT devices suspect. Blockchain, on the other hand, with its tamper-proof ledger, is hard to manipulate for malicious activities - making it the right infrastructure for IoT solutions.

Enhancing security through decentralization

Blockchain makes it hard for intruders to intervene, as it spans a network of secure blocks; a change at a single location therefore does not affect the other blocks. Data remains encrypted and is only visible to the person who encrypted it, using a private key. The cryptographic algorithms used in Blockchain technology ensure IoT data remains private, whether for an individual organization or for organizations connected in a network.

Simplicity through autonomous third-party-free transactions

Blockchain technology is already a star in the finance sector, thanks to the adoption of smart contracts, Bitcoin, and other cryptocurrencies. Apart from providing a secure medium for financial transactions, it eliminates the need for third-party brokers such as banks to guarantee peer-to-peer payment services. With Blockchain, IoT data can be treated in a similar manner: smart contracts can be made between devices to exchange messages and data. This type of autonomy is possible because each node in the blockchain network can verify the validity of a transaction without relying on a centralized authority.
Blockchain-backed IoT solutions will thus enable trustworthy message sharing. Business partners can easily access and exchange confidential information within the IoT without a centralized management or regulatory authority. This means quicker transactions, lower costs, and fewer opportunities for malicious intent such as data espionage.

Blockchain's immutability for predicting IoT security vulnerabilities

Blockchains maintain a history of all transactions made by smart devices connected within a particular network. This is possible because once you enter data in a Blockchain, it lives there forever in its immutable ledger. The possibilities for IoT solutions that leverage Blockchain's immutability are limitless. Some obvious use cases are more robust credit scores and preventive healthcare solutions that use data accumulated through wearables. For all the above reasons, we expect significant Blockchain adoption by IoT-based businesses in the near future.
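The immutability property is easy to demonstrate in miniature. Below is a toy hash chain in Python: each block stores the hash of its predecessor, so tampering with any block breaks every later link. This is an illustration of the principle only, not a production ledger:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    # The hash covers the block contents plus the previous block's hash.
    block = {'time': time.time(), 'data': data, 'prev': prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block['hash'] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({'device': 'sensor-1', 'temp': 21.5}, '0' * 64)]
chain.append(make_block({'device': 'sensor-2', 'temp': 19.8}, chain[-1]['hash']))

# Tamper with the first block: its recomputed hash no longer matches
# the 'prev' pointer stored in the second block.
chain[0]['data']['temp'] = 99.9
payload = json.dumps({k: chain[0][k] for k in ('time', 'data', 'prev')},
                     sort_keys=True).encode()
print(hashlib.sha256(payload).hexdigest() == chain[1]['prev'])  # False
```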

Is Python edging R out in the data science wars?

Amey Varangaonkar
28 Aug 2017
7 min read
When it comes to the 'lingua franca' of data science, there seems to be a face-off between R and Python. R has long been established as the language of researchers and statisticians, but Python has come up quickly as a bona fide challenger, helping embed analytics as a necessity for businesses and other organizations in 2017. If a tech war does exist between the two languages, it's a battle fought not so much on technical features but on wider changes within modern business and technology. R is a language purpose-built for statistics, for performing accurate and intensive analysis. So the fact that R is being challenged by Python - a language that is flexible, fast, and relatively easy to learn - suggests we are seeing a change in who's actually doing data science, where they're doing it, and what they're trying to achieve.

Python versus R - a closer look

Let's make a quick comparison of the two languages on aspects important to those working with data, and see what we can learn about the two worlds where R and Python operate.

Learning curve

Python is the easier language to learn. While R certainly isn't impenetrable, Python's syntax marks it as a great language to learn even if you're completely new to programming. The fact that such an easy language has come to rival R within data science indicates the pace at which the field is expanding: more and more people are taking on data-related roles, possibly without a great deal of programming knowledge, and Python makes the barrier to entry much lower than R does. That said, once you get to grips with the basics of R, it becomes relatively easy to learn the more advanced material - which is why statisticians and experienced programmers find R easier to use.

Packages and libraries

Much of R's statistical functionality comes built in, while Python depends on a range of external packages. This makes R much more efficient as a statistical tool out of the box; if you're using Python, you need to know exactly what you're trying to do and what external support you're going to need.

Data Visualization

R is well known for its excellent graphical capabilities, which make it easy to present and communicate data in varied forms. For statisticians and researchers, the importance of that is obvious: you can perform your analysis and present your work in a way that is relatively seamless. The ggplot2 package in R, for example, allows you to create complex and elegant plots with ease, and as a result its popularity in the R community has increased over the years. Python also offers a wide range of libraries for effective data storytelling, and the breadth of external packages available means the scope of what's possible is always expanding. Matplotlib has been a mainstay of Python data visualization, and it's also worth remarking on newer libraries like Seaborn - a neat little library that sits on top of Matplotlib, wrapping its functionality and giving you a neater API for specific applications. So, to sum up, you have sufficient options to perform your data visualization tasks effectively, whether you use R or Python.

Analytics and Machine Learning

Thanks to libraries like scikit-learn, Python helps you build machine learning systems with relative ease (see the sketch at the end of this piece). This takes us back to the point about barrier to entry: if machine learning is upending how we use and understand data, it makes sense that more people want a piece of the action without having to put in too much effort.
But Python also has another advantage: it's great for creating web services where data can be uploaded by different people. In a world where accessibility and data empowerment have never been more important (that is, where everyone takes an interest in data, not just the data team), this could prove crucial. With packages such as caret, MICE, and e1071, R also gives you the power to perform effective machine learning and extract crucial insights from your data. However, R falls short of Python here, thanks to the latter's superior libraries and more diverse use cases.

Deep Learning

Both R and Python have libraries for deep learning. It's much easier and more efficient with Python, though - most likely because the Python world changes much more quickly, with new libraries and tools springing up as fast as the data science world latches on to a new buzzword. Theano, and more recently Keras and TensorFlow, have made it relatively easy to build incredibly complex and sophisticated deep learning systems. If you're clued up and experienced with R, it shouldn't be too hard to do the same using libraries such as MXNetR, deepr, and H2O - that said, if you want to switch models, you may need to switch tools, which could be a bit of a headache.

Big Data

Both R and Python are equally good when it comes to working with big data: with Python you can write efficient MapReduce applications with ease, and an R program can be scaled on Hadoop to work with petabytes of data, since both integrate seamlessly with big data tools such as Apache Spark and Apache Hadoop, among many others. It's likely that this is the field in which we're going to see R moving more and more into industry as businesses look for a concise way to handle large datasets. This is true in industries such as bioinformatics, which have a close connection with the academic world and necessarily depend on a combination of size and accuracy when working with data.

So, where does this comparison leave us? Ultimately, we see two different languages offering great solutions to very different problems in data science. In Python, we have a flexible and adaptable language with a vibrant community of developers working on a huge range of problems and tasks, each trying to find more effective and more intelligent ways of doing things. In R, we have a purely statistical language with a large repository of over 8,000 packages for data analysis and visualization. While Python is production-ready and better suited to organizations looking to harness technical innovation to their advantage, R's analytical and data visualization capabilities can make your life as a statistician or data analyst easier. Recent surveys indicate that Python commands a higher salary than R - that is because it's a language that can be used across domains; a problem-solving language. That's not to say that R isn't valuable; rather, Python just seems to fit the times at the moment. In the end, it all boils down to your background and the kind of data problems you want to solve. If you come from a statistics or research background and your problems revolve around statistical analysis and visualization, then R best fits the bill. However, if you're a computer science graduate looking to build a general-purpose, enterprise-wide data model that integrates seamlessly with other business workflows, you will find Python easier to use. R and Python are two different animals.
Instead of comparing the two, maybe it's time we understood where and how each can best be used, and then harnessed their power to the fullest to solve our data problems. One thing is for sure, though: neither is going away anytime soon. Both R and Python occupy a large chunk of the data science market share today, and it will take a major disruption to take either of them out of the equation completely.
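As promised in the machine learning section, here is a minimal sketch of the low barrier to entry that scikit-learn gives Python - a complete train-and-evaluate loop in a handful of lines, using a bundled toy dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Load a small built-in dataset and cross-validate a classifier on it.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print('mean accuracy:', scores.mean())
```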


What software stack does Airbnb use?

Richard Gall
20 Aug 2017
4 min read
Airbnb is one of the most disruptive organizations of the last decade. Since its inception in 2008, the company has developed a platform that allows people to 'belong anywhere' (to quote their own mission statement), and in doing so it has changed the very nature of tourism. But what software does Airbnb use? What tools are enabling their level of innovation?

How Airbnb develops a dynamic front end

Let's start with the key challenge for Airbnb. Like many similar platforms, one of the central difficulties is handling data in a way that's incredibly dynamic, which means you need your JavaScript working hard for you without taking too much strain. That's where a reactive approach comes in: as an asynchronous paradigm, it manages how data moves from source to the components that react to it. But the paradigm can only do so much. With ReactJS, Airbnb have a library capable of giving them the necessary dynamism in the UI. The Airbnb team have written a lot about their love for ReactJS, making it their canonical front end framework in 2015. They've also built a large number of other tools around React to make life easier for their engineers. In this post, for example, the team discuss React Sketch.app, which 'allows you to write React components that render to Sketch documents'. Elsewhere, Ruby also forms an important part of the development stack; as with React, the team are committed to innovating with the tools at their disposal, and in this post they discuss how they built 'blazing fast thrift bindings for Ruby with C extensions'.

How Airbnb manages data

If managing data on the front end is a crucial part of their software thinking, what about the tools that actually manage and store data? The company use MySQL to manage core business data; this hasn't been without challenges, not least around scalability, but the team have found ways of making MySQL work to their advantage. Redis is also worth a mention here - read how Airbnb use Redis to monitor customer issues at scale. But Airbnb have always been a big data company at heart, which is why Hadoop is so important to their data infrastructure. A number of years ago, Airbnb ran Hadoop on Mesos, which allows you to deploy a single configuration on different servers; this worked for a while, but owing to a number of challenges (which you can read about here), the team moved away from Mesos and now run a more straightforward Hadoop infrastructure. Spark is also an important tool for Airbnb. The team built something called Airstream, a computational framework that sits on top of Spark Streaming and Spark SQL, allowing engineers and the data team to get quick insights (a plain Spark SQL sketch of the underlying idea appears at the end of this piece). Ultimately, for an organization that depends on predictions and machine learning, something like Spark - alongside other open source machine learning libraries - is crucial in the Airbnb stack.

Cloud - how Airbnb takes advantage of AWS

If you take a close look at how they work, the Airbnb team have a true hacker mentality: it's about playing, building, and creating new tools to tackle new challenges. This has arguably been enabled by the way they use AWS. It's perhaps no coincidence that Amazon's cloud offering was reaching maturity around the time Airbnb was picking up speed and establishing itself. Airbnb adopted a number of AWS services, such as S3 and EC2, early on. But the reason Airbnb have stuck with AWS comes down to cultural fit.
"For us, an investment in AWS is really about making sure our engineers are focused on the things that are uniquely core to our business. Everything that we do in engineering is ultimately about creating great matches between people," Kevin Rice, Director of Engineering, has said.

How Airbnb creates a DevOps culture

But there's more to it than AWS; there's a real DevOps culture inside Airbnb that further facilitates a mixture of agility and creativity. The tools used for DevOps are an interesting mix - some unsurprising, like GitHub and Nginx (which powers some of the busiest sites on the planet), and some slightly more surprising choices, such as Kibana, which the company uses alongside Elasticsearch to monitor data. When it comes to developing and provisioning environments, Airbnb use Vagrant and Chef. It's easy to see the benefits: they make setting up and configuring environments incredibly easy and fast. And if you're going to live by the principles of DevOps, this is essential - it's the foundation of everything you do.
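To ground the Spark point made earlier, here is a minimal PySpark sketch of the Spark SQL layer that a framework like Airstream builds on. This is plain Spark with made-up sample data, not Airbnb's Airstream API:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('bookings-demo').getOrCreate()

# Register a tiny DataFrame as a SQL view and query it.
df = spark.createDataFrame(
    [('Paris', 2), ('Tokyo', 5), ('Paris', 3)],
    ['city', 'nights'])
df.createOrReplaceTempView('bookings')

spark.sql("""
    SELECT city, SUM(nights) AS total_nights
    FROM bookings
    GROUP BY city
""").show()

spark.stop()
```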


Five Most Surprising Applications of IoT

Raka Mahesa
16 Aug 2017
5 min read
The Internet of Things has been growing for quite a while now. The promise of smart and connected gadgets has resulted in many, many applications of the Internet of Things. Some of these projects are useful, and some are not; some, like smart TVs, smartwatches, and smart homes, are expected, whereas others are not. Let's look at a few surprising applications that tap into the Internet of Things, starting with a project from Google.

1. Google's Project Jacquard

Simply put, Project Jacquard is a smart jacket - a literal piece of clothing that you can wear and that is connected to your smartphone. By tapping and swiping on the jacket sleeve, you can control the music player and map application on your smartphone. The project is a collaboration between Google and Levi's, where Google invented a fabric that can read touch input and Levi's applied the technology to a product people will actually want to wear. Even now, the idea of a fabric we can interact with boggles my mind. My biggest problem with wearables like smartwatches and smart bands is that they feel like yet another device we need to take care of; a jacket, meanwhile, is something we just wear, with its smart capability an added benefit. Not to mention that connected fabric allows more aspects of our daily life to be integrated with our digital life. That said, Project Jacquard is not the first smart clothing - there are other projects, like Athos, that embed sensors in their clothing - but it is the first that allows people to actually interact with their clothing.

2. Hapifork

Hapifork is actually one of the first smart gadgets I was aware of. As the name alludes, Hapifork is a smart fork with a capacitive sensor, a motion sensor, a vibration motor, and a micro USB port. You might wonder why a fork needs all those bells and whistles. Well, Hapifork uses those sensors to detect your eating motion and alerts you if you are eating too fast. After all, eating too fast can cause weight gain and other physical issues, so the fork tries to help you live a healthier life. While the idea has some merit, I'm still not sure an unwieldy smart fork is a good way to make us eat healthier - actually eating healthy food is a better way to do that. That said, the idea of smart eating utensils is fascinating. I would totally get a smart plate capable of counting the calories in my food.

3. Smart food maker

In 2016 a wave of smart food-making devices started and successfully completed their crowdfunding projects. These devices are designed to make it easier and quicker for people to prepare food - much easier than just using a microwave oven, that is. The problem is, these devices are pricey and each is only able to prepare a specific type of food. There is CHiP, which can bake various kinds of cookies from a set of dough, and there is Flatev, which bakes tortillas from a pod of dough. While the concept may initially sound weird, having a specific device to make a specific type of food is actually not that strange. After all, we already have a machine that only makes a cup of fresh coffee, so a machine that only makes a fresh plate of cookies could be the next natural step.

4. Smart tattoo

Of all the things that could be smart and connected, a tattoo is definitely not the one that comes to my mind. But apparently that's not the case with plenty of researchers from all over the world.
There have been a couple of bleeding-edge projects that have resulted in connected tattoos. L'Oreal has created tattoos that can detect ultraviolet exposure, and Microsoft and MIT have created tattoos that users can use to interact with smartphones. Late last year, a group of researchers created a tattoo with an accelerometer that can detect a user's heartbeat. So far, wearables have been smart accessories that you wear daily. Since you also wear your skin every day, does it count as a wearable too?

5. Oombrella

If you ever thought humans aren't creative creatures, just remember that it was also a human who invented the concept of a smart umbrella. Oombrella is a connected umbrella that will notify you when it's about to rain, and will also notify you if you leave it behind in a restaurant. These functionalities may sound passable at first, until you realize that the weather notification comes from your smartphone, so you really just need a weather app instead of a smart umbrella. That said, the project was successfully crowdfunded, so maybe people actually do want a smart umbrella.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

The best automation tools for sysadmins

Rick Blaisdell
09 Aug 2017
4 min read
Artificial intelligence and cognitive computing have made huge strides over the past couple of years. Today, software automation has become an important tool that gives businesses the assets they need to keep up with market competition. Just take a look at ATMs, which have replaced bank tellers, or smart apps that have brought airline boarding passes to your fingertips. Moreover, as some studies estimate, in the next couple of years around 45 percent of work activities could be replaced or affected by robotic process automation. However, that is not the focus of this post. Our focus here is how automation helps us keep up the pace and streamline our activities. So let's take a look at system administrators. Plenty of tasks performed by sysadmins could easily be automated. To make the job easier, here is a list of automation software that any system administrator will be interested in:

- WPKG – An automated software deployment, upgrade, and removal program that lets you build dependency trees of applications. The tool runs in the background and doesn't need any user interaction. WPKG can be used to automate Windows 8 deployment tasks, so it's good to have in any toolbox.
- AutoHotkey – An open-source scripting language for Microsoft Windows that allows you to create keyboard and mouse macros. One of its most useful features is the ability to compile any script into a stand-alone, fully executable .exe file that runs on other PCs.
- Puppet Open Source – I think every IT professional has heard about Puppet and how it has captured the market over the last couple of years. This tool allows you to automate your IT infrastructure from acquisition through provisioning and management. The advantages? Scalability and scope!

As I mentioned, automation has already started to change the way we do business, and it will continue doing so in the coming years. It can be a strong driver for improving the service you deliver to your end users. Let's dive into the benefits of automation:

- Reliability – This might be the biggest advantage automation can offer an organization. Take computer operations as an example: they require a professional with both technical skill and agility in pressing buttons and other physical operations, and we all know that human error is one of the most common problems in any business. Automation removes these errors.
- System performance – Every business wants better performance. Automation, thanks to its flexibility and agility, makes that possible.
- Productivity – Today we rely on computers and, most of the time, work on complex tasks. One of the perks automation has to offer is increased productivity through job-scheduling software, which eliminates the lag time between jobs while minimizing operator intervention.
- Availability – We all know how far cloud computing has come and how much even a few hours of unavailability can cost. Automation can help by delivering high availability. Take service interruptions: if your system crashes and automated backups are available, you have nothing to worry about. Automated recovery will always play a huge role in business continuity.

Automation will always be in our future, and it will continue to grow. The key to using it to its full potential is to understand how it works and how it can help your business, regardless of industry, as well as finding the best software to maximize efficiency.
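To make the job-scheduling point above concrete, here is a minimal sketch of a scheduled maintenance task in Python, using the third-party schedule package. The job body is a hypothetical placeholder; in practice it would invoke your actual backup or log-rotation tooling.

```python
import time
import schedule  # third-party package: pip install schedule


def nightly_backup():
    # Hypothetical placeholder: call your real backup script here,
    # e.g. via subprocess.run(["/usr/local/bin/backup.sh"]).
    print("running nightly backup...")


# Run the job every day at 02:00, with no operator intervention.
schedule.every().day.at("02:00").do(nightly_backup)

while True:
    schedule.run_pending()  # execute any jobs that are due
    time.sleep(60)          # check once a minute
```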
About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

What are the limits of self-service BI?

Graham Annett
09 Aug 2017
4 min read
While many of the newer self-service BI offerings are a progressive step forward, they still have limitations that are not so easily overcome. One of the biggest advantages of self-service business intelligence is that smaller companies, with limited revenue and headcount, can now utilize their data and the various software-as-a-service offerings that were previously restricted to enterprise companies with in-house developers and data architects. This is an incredible barrier to overcome, and it lessens the gap and burden that small and medium businesses have traditionally faced when hoping to use their collected data in a real and impactful way.

What is self-service BI?

Self-service BI is the idea that traditional business intelligence tasks can be automated, and pipelines created, so that actionable business insights no longer require enterprise-sized datasets and enterprise-quality engineering. The notion that such insights are out of reach has largely become outdated with the influx of easily integrated third-party services (such as Azure BI tools and IBM Cognos Analytics) and the push towards using a company's collected data at any scale for supervised or unsupervised learning (with automated model tuning and feature engineering).

Limitations

One limit of self-service BI services is that their capabilities are often so broad that they cannot provide the depth of insight needed to actually increase revenue. While these services might be useful for initial or exploratory visualization, current implementations cannot provide the expertise and thoroughness that an established data scientist can, and cannot dig into the minutiae that a boss may expect from someone with fine-tuned statistical knowledge of their models.

These insights are often limited in areas such as feature engineering, data warehousing, data pipeline integration, and the machine learning algorithms available, among a multitude of other aspects. While such features are slowly being incorporated into self-service BI platforms (as an early Azure ML user, I can say they have become incredibly adept and useful), they will always lag slightly behind the latest and greatest, simply because you depend on someone else to implement the newest ideas into the platform.

Another limitation of self-service platforms is lock-in: once you build a solution that suits your needs, you are subject to the platform's rising costs; the platform can change its API format, or the way data is integrated, at any moment and break your system; or, worse, the platform could simply cease to exist if the provider decides it is no longer worth pursuing. These issues are somewhat avoidable if you engineer with them in mind (know your data pipeline and how to translate it to another service, and what happens if the third-party service goes down), but they are real risks with potentially wide implications for a business, depending on how integral the platform is to the engineering stack. This is probably the most poignant limitation.
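Part of the appeal of stepping outside a platform is how little code a fully controlled first-pass baseline actually takes. As a rough sketch, assuming scikit-learn and a hypothetical CSV export of per-user metrics with a binary label, a data scientist might start with something like this:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical export: one row per user, numeric feature columns,
# plus a binary 'converted' label we want to predict.
df = pd.read_csv("user_metrics.csv")
X = df.drop(columns=["converted"])
y = df["converted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Every step here — features, algorithm, evaluation — is under your control and portable between environments, which is exactly what the hosted platforms trade away for convenience.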
All that said, the latest and greatest in BI may be unnecessary for most use cases, and a broad, simple approach may cover the applicable needs of most companies. The big cloud providers of self-service BI platforms are also unlikely to go down or disappear suddenly without notice. A hyper-specific pipeline that takes months to engineer, even when its optimizations would have a real impact, can be far outweighed by a simple approach that stays highly adaptable. And creating real insights is one of the best ways a business can start down the path of incorporating machine learning and data science into its core operations.

About the Author

Graham Annett is an NLP Engineer at Kip (Kipthis.com). He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras (https://github.com/fchollet/keras). He can be found on Github at http://github.com/grahamannett or via http://grahamannett.me.

5 common misconceptions about DevOps

Hari Vignesh
08 Aug 2017
4 min read
DevOps is a transformative operational concept designed to help development and production teams coordinate operations more effectively. In theory, DevOps focuses on the cultural changes that stimulate collaboration and efficiency, but in practice the focus often lands on everyday tasks, distracting organizations from the core principles and values that DevOps is built around. As a result, many technology professionals have developed misconceptions about DevOps, often because they have been part of deployments, or know people involved in DevOps plans, that strayed from the core principles of the movement. Let's discuss a few of these misconceptions.

1. We need to employ "DevOps"

DevOps is not a job title or a specific role. Your organization probably already has senior systems administrators and senior developers who have many of the traits needed to work the way DevOps promotes. With a bit of effort, and help from outside consultants, mailing lists, or conferences, you may well be able to restructure your business around these principles without employing new people — or losing old ones. Again, there is no such thing as a "DevOps person"; it is not a job title. Feel free to advertise for people who work with a DevOps mentality, but there are no DevOps job titles. Often, good people to consider as a bridge between teams are generalists, architects, and senior systems administrators and developers. Many companies in the past decade have employed a number of specialists (a dedicated DNS administrator is not unheard of). You can still have these roles, but you'll need some generalists with a good background in multiple technologies. They should be able to champion simple systems over complex ones and begin establishing automation and cooperation between teams.

2. Adopting tools makes you DevOps

Some who have recently caught wind of the DevOps movement believe they can instantly achieve this nirvana of software delivery simply by following a checklist of tools to implement within their team. Their assumption is that if they purchase and implement a configuration management tool like Chef, a monitoring service like Librato, or an incident management platform like VictorOps, then they've achieved DevOps. But that's not quite true. DevOps requires a cultural shift beyond simply implementing a new lineup of tools. Each department, technical or not, needs to understand the cultural shift behind DevOps — one that emphasizes empathy and better collaboration. It's about people more than tools.

3. DevOps emphasizes continuous change

There's no way around it: you will need to deal with more change and release tasks when integrating DevOps principles into your operations, since the focus is placed heavily on accelerating deployment through development and operations integration. But this perception comes out of DevOps' initial popularity among web app developers. Most businesses will not face change that frequent, and they do not need to worry about continuous change deployment just because they are adopting DevOps.

4. DevOps means developers managing production

DevOps means development and operations teams working together collaboratively, putting the operations requirements about stability, reliability, and performance into development practices, while at the same time bringing development into the management of the production environment (for example, by putting developers on call, or by leveraging their development skills to help automate key processes). It doesn't mean a return to the laissez-faire "anything goes" model, where developers have unfettered access to the production environment 24/7 and can change things as and when they like.

5. DevOps eliminates traditional IT roles

If, in your DevOps environment, your developers suddenly need to be good system admins, change managers, and database analysts, something went wrong. Treating DevOps as a movement that eliminates traditional IT roles puts too much strain on workers. The goal is to break down collaboration barriers, not to ask your developers to do everything. Specialized skills play a key role in supporting effective operations, and traditional roles remain valuable in DevOps.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Who are set to be the biggest players in IoT?

Raka Mahesa
07 Aug 2017
5 min read
The Internet of Things, also known as IoT, may sound like a technological buzzword, but in reality it's a phenomenon taking place right now, as more and more devices get connected to the Internet. It's an ecosystem that generated $6.7 billion in revenue in 2015 alone and is projected to grow even more in the future. So, with those kinds of numbers, who are the biggest players in such a high-value ecosystem?

Let's clear up one thing before we go further: how exactly do we define "the biggest players" in a technological ecosystem? After all, there are many, many ways to measure the size of an industry player. The quickest is probably to simply check their revenue or market share numbers. Another way, and the one we'll use in this post, is to look at how much influence a company has on the ecosystem.

Whatever action they take, the biggest players in an ecosystem have an impact that can be felt throughout the industry. For example, when Apple unveiled that the latest iPhone had no headphone jack, many smartphone manufacturers followed suit, and a lot of audio hardware vendors introduced new wireless headsets. Or imagine if Samsung, the company with the biggest smartphone market share, suddenly stopped using Android and switched to its own mobile platform; the impact would be massive. The bigger the player, the bigger the impact it has on the ecosystem.

IoT companies

So, with that cleared up, let's talk about IoT companies. Companies that dabble in the IoT ecosystem can be divided into two categories: those that focus on consumer products, like Amazon and Apple, and those that focus on enterprise products, like Cisco, Oracle, and Salesforce. Companies that offer solutions for both segments, like Samsung, tend to fall into the consumer-focused category.

Companies that focus on enterprise products are, with a few exceptions, driven more by their sales performance than by technological innovation. Because of that, they tend not to have as much impact on the ecosystem as their consumer-focused counterparts. That's why we'll focus on consumer-oriented companies when talking about the biggest players in IoT.

Big players: ARM and Amazon

Well, it's finally time for the big reveal: who are the biggest players in the Internet of Things? The IoT ecosystem is pretty interesting; it has so many components that it's quite difficult for a single company to tackle the whole thing. And it has not matured yet, which means there are still many segments with leading positions left empty, ready to be taken by any company that can rise to the challenge.

That said, there is actually one company that drives the whole ecosystem: ARM, the company whose chip designs became the basis of modern smartphone technology. If you have a smart device that can process information and do calculations, there is a high chance it's powered by an ARM-based chipset. With such widespread usage, any technological progress the company makes increases the capability of IoT technology as a whole.

While ARM has the market share advantage on the hardware side, it's Amazon that has the advantage on the software side with AWS. Similar to how Google has a hand in every aspect of the web, Amazon also seems to have a hand in every part of IoT. It provides the services to connect smart devices to the Internet, as well as the platform for developers to host their cloud apps.
And for mainstream consumers, Amazon directly sells smart devices like the Amazon Dash and Amazon Echo, the latter of which also serves as a platform for developers to create home applications. In short, wherever you look in the IoT ecosystem, Amazon usually has a part in it.

Wearables

If there is one segment of IoT that Amazon doesn't seem interested in, it is probably wearables. This segment of the market was predicted to be dominated by smartwatches, but instead the fitness trackers from Fitbit won the category. With wearable devices being much more personal than smartphones, if Fitbit can expand beyond fitness tracking, it could become the dominant force in the IoT ecosystem.

The smart home

Surprisingly, no one seems to have conquered the most obvious space for the Internet of Things: the smart home. The leading companies in this segment are Amazon, Apple, and Google, but none of them is the dominant force yet. Apple is playing with its HomeKit library and doesn't seem to be attracting much interest, though maybe it will have better luck with the Apple HomePod. Google is actually the one with the most potential here, with Google Home, the Google Cloud IoT service, and its embedded version of Android. However, other than Google Home, these projects are still in beta and not ready for launch yet.

Those are the biggest players in the still-evolving ecosystem of the Internet of Things. It's still early days, however; a lot of things can still change, and what is true right now may not be true in a couple of years. After all, before the iPhone, no one expected Apple to become the biggest player in the mobile phone industry.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Things to remember before building your first game

Raka Mahesa
07 Aug 2017
6 min read
I was 11 years old when I decided I wanted to make games. And no, it wasn't because there was a wonderful game that inspired me to make games; the reason was more childish than that. You see, the Sony PlayStation 2 had been released just a year earlier, and a lot of games were being made for the new console. The 11-year-old me, who only had a PlayStation 1, got annoyed because games were no longer being developed for the PS1. So, out of nowhere, I decided to just make those games myself. Two years later, I finally built my first game, and after a couple more years, I actually started developing games for a living.

It was childish thinking, but it gave me a goal very early in life and helped me choose a path to follow. And while I think my life turned out quite okay, there are still things I wish the younger me had known back then — things that would have helped me a lot when I was building my first game. And even though I can't go back and tell those things to myself, I can tell them to you. Hopefully, they will help you in your quest to build your first game.

Why do you want to build a game?

Let's start with the most important thing you need to understand when you want to build a game: yourself. Why do you want to build a game? What goal do you hope to achieve by developing one? There are multiple possible reasons for someone to start creating a game. You could develop a game because you want to learn a particular programming language or library, or because you want to make a living selling your own games, or maybe because you have free time and think building a game is a cool way to pass it.

Whatever it is, it's important to know your reasons, because they will help you decide what your game actually needs. For example, if you develop a game to learn programming, your game doesn't need fancy graphics. On the other hand, if you develop a game to commercialize it, having a visually appealing game is highly important.

One more thing to clarify before we go further. There are two phases people go through before they build their first game. The first is when someone has the desire to build a game but absolutely no idea how to achieve it; the other is when someone has both the desire and the knowledge needed but hasn't started yet. Both phases have their own set of problems, so I'll try to address both here, starting with the first.

Learn and master the tools

Naturally, one of the first questions that comes to mind when you want to create games is how to actually start the creation process. Fortunately, the answer is the same as for any kind of creative project: learn and master the tools. For game development, this means game creation tools, and they come in all shapes and sizes. There are those that don't need any kind of programming, like Twine and RPG Maker; those that require a tiny bit of programming, like Stencyl and GameMaker; and the professional ones that need a lot of coding, like Unity and Unreal Engine. Though, if you want to build games in order to learn about programming, there's no better way than to just use a programming language with a game-making library, like MonoGame.

With so many tools ready for you to use, does your choice of tool matter? If you're building your first game, then no, the tool you use will not matter that much. While it's true that you'll probably want a more capable tool in the future, at this stage what's important is learning the actual game development process, and that process is the same no matter what tool you use: read input, update the game state, draw the result, and repeat, as the sketch below shows.
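If you do go the programming route, it helps to see how small that core loop really is. Here is a minimal sketch written with Python's pygame library rather than MonoGame, purely for brevity; the structure is the same in any engine or library.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

running = True
while running:
    # 1. Read input
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # 2. Update the game state (nothing to update yet in this empty game)

    # 3. Draw the result
    screen.fill((30, 30, 30))  # dark grey background
    pygame.display.flip()

    clock.tick(60)  # cap the loop at 60 frames per second

pygame.quit()
```

Everything you ever add to a game — enemies, health bars, physics — plugs into one of those three steps.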
KISS: Keep It Simple, Sugar

So, now that you know how to build a game, what else should you be aware of before you start building it? Well, here's the one thing most people only realize after they actually start building their game: game development is hard. For every feature you want to add to a game, there will be dozens of cases you have to think about to make sure the feature works properly. And that's why one of the most effective mantras when you're starting game development is KISS: Keep It Simple, Sugar (a change may have been made to the original, slightly more insulting acronym). Are you sure you need to add enemies to your game? Does your game actually need a health bar, or would a health counter be enough? If developing a game is hard, then finishing one is even harder. Keeping the game simple increases your chance of finishing it, and you can always build a more complex game in the future. After all, a released game is better than a perfect but unreleased one.

That said, it's possible you're building a game you've been dreaming of since forever, and you'd never settle for less. If you're hell-bent on completing this dream game of yours, who am I to tell you not to pursue it? After all, there are successful games out there that were actually the developer's first project. If that's how it is, just remember that loss of motivation is your biggest enemy, and you need to actively combat it. Show your work to other people, or take a break when you've been working on it too much. Just make sure you don't lose your drive to complete the project. It'll be worth it in the end!

I hope this advice is helpful for you. Good luck with building your games!

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

How can a data scientist get into game development?

Graham Annett
07 Aug 2017
5 min read
One of the most interesting uses for data science lies in the processes around game development. While it's not immediately obvious that data science applies to game development, it is an increasingly enticing area, both from a user engagement perspective and as a source of data collection for deep learning and data science tasks.

Games and data collection

With the rise of reinforcement learning oriented deep learning tasks in the past few years, games have never been more attractive as a method of data collection (somewhat in parallel to collecting data on Mechanical Turk or various other crowdsourcing platforms). The main idea behind data collection for these tasks is capturing the graphical display at each time step and recording the user's input for that frame. From this data, it's possible to connect these inputs to some end result (such as the final score) that can later be used as an objective cost function to be minimized or maximized.

With this, it's possible to collect a large corpus of user data for deep learning algorithms to initially train on, after which the computer can play against itself (something akin to this was done for AlphaGo and various other game-related reinforcement learning bots). With the incredible influx of processing power now available, computers can play themselves thousands or millions of times, learning from their own shortcomings.

Deep learning uses

Practical uses of this type of deep learning that a data scientist may find interesting range from creating smarter AI systems that are more engaging to a player, to finding transferable algorithms and data sources that can be used elsewhere. For example, many of the OpenAI algorithms are intended to be trained in one game with the hope that they will transfer to another game and still do well (albeit with new parameters and a newly learned cost function). This is interesting from a data scientist's perspective because, rather than heavily optimizing each individual game or task, the aim is to find commonalities and generalizable methodologies that translate across systems and games.

Technical skills

Many of the technical skills needed to create data collection pipelines from games are more development-oriented than a traditional data scientist may be used to, and may require learning new skills. They span much broader, more traditional developer roles, from collecting and pipelining data out of the games, to scaling deep learning training and implementing new algorithms during training. These skills are increasingly vital to a data scientist, as the ability to both provide insight and build integrations into a product is becoming an essential part of the role.

Exploring projects and tools

A data scientist might get into this area by exploring projects and tools such as OpenAI's Gym and Facebook's MazeBase; as a taste of the data collection side, a minimal Gym loop is sketched below. These projects are very deep learning oriented, though, and may not be what a traditional data scientist first thinks of when they consider game development.
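As a rough sketch of the capture-and-record idea, here is a minimal data collection loop written against the classic OpenAI Gym API (reset returning an observation, step returning a four-tuple). The random policy stands in for a human player whose inputs you would record in a real setup.

```python
import gym

# Classic Gym API (pre-0.26 style): reset() -> obs, step() -> (obs, reward, done, info)
env = gym.make("CartPole-v1")

trajectory = []  # (observation, action, reward) tuples for later training
obs = env.reset()
done = False

while not done:
    action = env.action_space.sample()  # stand-in for recorded user input
    next_obs, reward, done, info = env.step(action)
    trajectory.append((obs, action, reward))
    obs = next_obs

print("collected", len(trajectory), "frames this episode")
```

A corpus built this way, over many episodes and many players, is exactly the kind of dataset an imitation learning or reinforcement learning model can bootstrap from.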
Data oriented/driven game design

Another approach is data-oriented, or data-driven, game design. While this is not a new concept by any means, it has become increasingly relevant as in-app purchases and subscription-based gaming plans have become a common theme on mobile and other gaming platforms. These data science tasks are much like normal data science projects, in that they seek to understand, from a statistical perspective, what is happening to users at specific points in a game. There is a big overlap between projects like this and projects that aim to understand, for instance, when a user abandons a cart during an online order. The data for games may be oriented around when a player gave up on a quest, or at what point users are willing to make an in-app purchase to achieve a goal more quickly. Since these are quantifiable, objective outcomes, they are an incredibly good fit for traditional supervised learning, and can be approached with traditional supervised learning baselines and algorithms.

The end result of these tasks may include making a quest or goal easier, or discounting an in-app purchase during a specific interval when the user is more inclined to buy (much as offering a coupon when a cart is abandoned during checkout often entices the user to come back and finish the purchase).

While both of these paths are game development oriented, they differ quite a lot: one is traditionally data-analytical, the other deep learning and engineering oriented. Both are highly interesting areas to explore from a professional standpoint, but data-driven game development may be somewhat limited from a hobbyist standpoint outside of Kaggle competitions (a quick search didn't turn up any previous competitions with this sort of data), since many companies would be hesitant to share such data when their entire business model is built around in-app purchases and recurring player revenue.

Overall, these are both incredibly enticing areas, great avenues to pursue, and they offer plenty of interesting problems that you may not encounter outside of game development.

About the Author

Graham Annett is an NLP Engineer at Kip (Kipthis.com). He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras (https://github.com/fchollet/keras). He can be found on Github at http://github.com/grahamannett or via http://grahamannett.me.