
Tech Guides

852 Articles

Why is data science important?

Richard Gall
24 Apr 2018
3 min read
Is data science important? It's a term that's talked about a lot but often misunderstood. Because it's a buzzword, it's easy to dismiss; but data science is important. Behind the term lies a very specific set of activities - and skills - that businesses can leverage to their advantage. Data science allows businesses to use the data at their disposal, whether that's customer data, financial data or otherwise, in an intelligent manner. Its results should be a key driver of growth.

However, although it's not wrong to see data science as a real game changer for business, that doesn't mean it's easy to do well. In fact, it's pretty easy to do data science badly. A number of reports suggest that a large proportion of analytics projects fail to deliver results, which means a huge number of organizations are doing data science wrong. Key to these failures is a misunderstanding of how to properly utilize data science. You see it so many times - buzzwords like data science are often like hammers: they make all your problems look like nails. Not properly understanding the business problems you're trying to solve is where things go wrong.

What is data science?

But what is data science exactly? Quite simply, it's about using data to solve problems. The scope of these problems is huge. Here are a few ways data science can be used:

  • Improving customer retention by finding out what the triggers of churn might be
  • Improving internal product development processes by looking at points where faults are most likely to happen
  • Targeting customers with the right sales messages at the right time
  • Informing product development by looking at how people use your products
  • Analyzing customer sentiment on social media
  • Financial modeling

As you can see, data science is a field that can impact every department. From marketing to product management to finance, data science isn't just a buzzword - it's a shift in mindset about how we work.
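The first use case above - spotting the triggers of churn - can be sketched without any special tooling. The customer records and field names below are invented for illustration; a real project would pull this data from a CRM or data warehouse:

```python
from collections import defaultdict

# Toy customer records; the fields are illustrative, not from any real schema.
customers = [
    {"id": 1, "support_tickets": 5, "churned": True},
    {"id": 2, "support_tickets": 0, "churned": False},
    {"id": 3, "support_tickets": 4, "churned": True},
    {"id": 4, "support_tickets": 1, "churned": False},
    {"id": 5, "support_tickets": 6, "churned": False},
    {"id": 6, "support_tickets": 0, "churned": False},
]

def churn_rate_by_segment(records, threshold=3):
    """Compare churn rates for heavy vs. light support-ticket users."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [churned, total]
    for r in records:
        seg = "heavy" if r["support_tickets"] >= threshold else "light"
        counts[seg][0] += r["churned"]
        counts[seg][1] += 1
    return {seg: churned / total for seg, (churned, total) in counts.items()}

rates = churn_rate_by_segment(customers)
```

Even this toy comparison surfaces a candidate churn trigger - heavy support-ticket users churn at a much higher rate - which is exactly the kind of finding a data scientist would then validate properly.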
Data science is about solving business problems

To anyone still asking whether data science is important, the answer is actually quite straightforward: it's important because it solves business problems. Once you - and management - recognise that fact, you're on the right track. Too often, businesses want machine learning or big data projects without thinking about what they're really trying to do. If you want your data scientists to be successful, present them with the problems and let them create the solutions. They won't want to be told to simply build a machine learning project. It's crucial to know what the end goal is. The saying "in God we trust... everyone else must bring data" - often attributed to W. Edwards Deming - predates the field. If data science had existed then, the advice could have been much simpler: trust your data scientists.


5 reasons why your next app should be a PWA (progressive web app)

Sugandha Lahoti
24 Apr 2018
3 min read
Progressive Web Apps (PWAs) are a progression of web apps towards native apps: you get the experience of a native app from a mobile browser. They take up less storage space (the pitfall of native apps) and reduce loading time (the main reason people leave a website). In the past few years, PWAs have taken everything that is great about a native app - the functionality, touch gestures, user experience, push notifications - and transferred it into a web browser. Antonio Cucciniello's article on 'What is PWA?' highlights some of their characteristics, with details on how they affect you as a developer.

Many companies have already started with their PWAs and are reaping the benefits of this reliable, fast, and engaging approach. Twitter, for instance, introduced Twitter Lite and achieved a 65% increase in pages per session, a 75% increase in Tweets sent, and a 20% decrease in bounce rate. Lancôme rebuilt their mobile website as a PWA and increased conversions by 17%. Most recently, George.com, a leading UK clothing brand, saw a 31% increase in mobile conversion after upgrading their site to a Progressive Web App. Still not convinced? Here are 5 reasons why your next app should be a Progressive Web App.

PWAs are device-agnostic

What this means is that PWAs can work with various systems without requiring any special adaptations. As PWAs are hosted online, they are hyper-connected and work across all devices. Developers no longer need to build multiple apps across multiple mobile platforms, meaning huge savings in app development time and effort.

PWAs have a seamless UX

The user experience provided by a PWA across different devices is the same. You could use a PWA on your phone, switch to a tablet, and notice no difference. They offer a complete full-screen experience with help from a web app manifest file.
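A web app manifest is just a small JSON file. The member names below (name, short_name, start_url, display, icons) come from the W3C Web App Manifest specification; the values are made-up examples, sketched here with Python's json module:

```python
import json

# A minimal web app manifest, expressed as a Python dict for illustration.
# The keys follow the W3C Web App Manifest spec; the values are invented.
manifest = {
    "name": "Example PWA",
    "short_name": "Example",
    "start_url": "/",
    "display": "fullscreen",   # requests the full-screen experience described above
    "background_color": "#ffffff",
    "theme_color": "#2196f3",
    "icons": [
        {"src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png"}
    ],
}

manifest_json = json.dumps(manifest, indent=2)
```

Linking a file like this from a page's HTML is what lets the browser offer to install the app and launch it full-screen, outside the normal browser chrome.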
Smooth animations, scrolling, and navigation keep the user experience fluid. Additionally, web push notifications show relevant and timely notifications even when the browser is closed, making it easy to re-engage users.

PWAs are frictionless

Native apps take a lot of time to download. With a PWA, users don't have to wait to install and download the app. PWAs use service workers - JavaScript that runs separately from the main thread - to let apps load near instantly and reliably, no matter the kind of network connection. Moreover, PWAs have improved cross-app functionality, so switching between apps and sharing information between them becomes less intrusive. The user experience is faster and more intuitive.

PWAs are secure

PWAs are served over HTTPS. HTTPS secures the connection between a PWA and its users by preventing intruders from actively tampering with the communications between your PWA and your users' browsers. It also prevents intruders from passively listening to those communications.

PWAs have better SEO

According to a report, over 60 percent of searches now occur on mobile devices. This makes PWAs very powerful from an SEO perspective, as a simple Google search might pull up your website and then launch the visitor into your PWA. PWAs can be indexed easily by search engines, making them more likely to appear in search results.

The PWA fever is on, and as long as PWAs increase app functionality and offer more offline capabilities, the list of reasons to consider them will only grow - perhaps even beyond native apps.

Read more:
  • Windows launches progressive web apps… that don't yet work on mobile
  • 5 things that will matter in web development in 2018


Why is Hadoop dying?

Aaron Lazar
23 Apr 2018
5 min read
Hadoop has been the definitive big data platform for some time - the name has practically been synonymous with the field. But while its ascent followed the trajectory of what was referred to as the 'big data revolution', Hadoop now seems to be in danger. The question is everywhere: is Hadoop dying out? And if it is, why? Is it because big data is no longer the buzzword it once was, or are there simply other ways of working with big data that have become more useful?

Hadoop was essential to the growth of big data

When Hadoop was open sourced in 2007, it opened the door to big data. It brought compute to data, as opposed to bringing data to compute. Organisations had the opportunity to scale their data without having to worry too much about the cost. It had initial hiccups with security, the complexity of querying, and querying speeds, but most of that was taken care of in the long run. Still, querying speeds remained quite a pain - yet that wasn't the real reason behind Hadoop dying (slowly).

As cloud grew, Hadoop started falling

One of the main reasons behind Hadoop's decline in popularity was the growth of cloud. The cloud vendor market was pretty crowded, and each vendor provided its own big data processing services. These services all basically did what Hadoop was doing, but in a more efficient and hassle-free way: customers didn't have to think about administration, security, or maintenance the way they had to with Hadoop.

One person's big data is another person's small data

Several organisations that adopted big data technologies without really gauging the amount of data they actually needed to process have suffered. Imagine sitting with 10TB Hadoop clusters when you don't have that much data. The two biggest organisations that built products on Hadoop, Hortonworks and Cloudera, saw a decline in revenue in 2015, owing to their heavy reliance on Hadoop.
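As an aside, the 'compute to data' model Hadoop popularised is MapReduce. The programming model can be sketched in a few lines of pure Python - an illustration only, not Hadoop's actual Java API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: each input split emits (word, 1) pairs, close to the data.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: the framework groups values by key between the two phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: combine each key's values; here, sum the word counts.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big compute", "data to compute"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
```

On a real cluster the map and reduce tasks run in parallel on the nodes that hold the data - that distribution, not the word counting, is what Hadoop provides.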
Customers weren't pleased with the nature of Hadoop's limitations.

Apache Hadoop vs Apache Spark

Hadoop is way behind in terms of processing speed. In 2014, Spark took the world by storm. I'm going to let you guess which line in the graph above might be Hadoop, and which might be Spark. Spark was a general-purpose, easy-to-use platform built after studying the pitfalls of Hadoop. Spark was not bound to HDFS (the Hadoop Distributed File System), which meant it could leverage storage systems like Cassandra and MongoDB as well. Spark 2.3 was also able to run on Kubernetes - a big leap for containerized big data processing in the cloud. Spark also brings along GraphX, which allows developers to view data in the form of graphs. Some of the major areas where Spark wins are iterative algorithms in machine learning, interactive data mining and data processing, stream processing, and sensor data processing.

Machine learning in Hadoop is not straightforward

Unlike MLlib in Spark, machine learning is not possible in Hadoop unless tied to a third-party library. Mahout used to be quite popular for doing ML on Hadoop, but its adoption has gone down in the past few years. Tools like RHadoop, a collection of three R packages, have grown for ML, but they are still nowhere comparable to the power of modern MLaaS offerings from cloud providers. All the more reason to move away from Hadoop, right? Maybe.

Hadoop is not only Hadoop

The general misconception is that Hadoop is quickly going to be extinct. On the contrary, the Hadoop family consists of YARN, HDFS, MapReduce, Hive, HBase, Spark, Kudu, Impala, and some 20 other products. While folks may be moving away from Hadoop as their choice for big data processing, they will still be using Hadoop in some form or other.
As for Cloudera and Hortonworks, though the market has seen a downward trend, they're in no way letting go of Hadoop anytime soon, although they have shifted part of their processing operations to Spark.

Is Hadoop dying? Perhaps not...

In the long run, it's not completely accurate to say that Hadoop is dying. December last year brought with it Hadoop 3.0, which is supposed to be a much improved version of the framework. Some of its most noteworthy features are an improved shell script, a more powerful YARN, and improved fault tolerance with erasure coding. Although that hasn't caused any major spike in adoption, there are still users who will adopt Hadoop based on their use case, or simply use an alternative like Spark alongside another framework from the Hadoop family. So, Hadoop's not going away anytime soon.

Read more:
  • Pandas is an effective tool to explore and analyze data - Interview insights


Top 7 modern Virtual Reality hardware systems

Sugandha Lahoti
20 Apr 2018
7 min read
Since its early inception, virtual reality has offered an escape. Donning a headset can transport you to a brand new world, full of wonderment and excitement. Or it can let you explore a location too dangerous for humans. Or it can simply present the real world to you in a new manner. And now that we have moved past the era of bulky goggles and clumsy helmets, the hardware is making the aim of unfettered escapism a reality.

In this article, we present a roundup of modern VR hardware systems. Each product is presented with an overview of the device and its price as of February 2018. Use this information to compare systems and find the device which best suits your needs.

There has been an explosion of VR hardware in the last three years, ranging from cheaply made housings around a pair of lenses to full headsets with embedded screens creating a 110-degree field of view. Each device offers distinct advantages and use cases. Many have dropped significantly in price over the past 12 months, making them accessible to a wider audience. Following is a brief overview of each device, ranked in terms of price and complexity.

Google Cardboard

Cardboard VR is compatible with a wide range of contemporary smartphones. Google Cardboard's biggest advantages are its low cost, broad hardware support, and portability. As a bonus, it is wireless. Using the phone's gyroscopes, VR applications can track the user in 360 degrees of rotation. While modern phones are very powerful, they are not as powerful as desktop PCs - but the user is untethered and the systems are lightweight.

Cost: $5-20 (plus an iOS or Android smartphone)

Check out this post to Build Virtual Reality Solar System in Unity for Google Cardboard.

Google Daydream

Rather than plastic, the Daydream is built from a fabric-like material and is bundled with a Wii-like motion controller with a trackpad and buttons.
It has superior optics compared to a Cardboard but is not as nice as the higher-end VR systems. Just as with the Gear VR, it works only with a very specific list of phones.

Cost: $79 (plus a Google or Android smartphone)

Gear VR

Gear VR is part of the Oculus ecosystem. While it still uses a smartphone (Samsung only), the Gear VR Head-Mounted Display (HMD) includes some of the same circuitry as the Oculus Rift PC solution. This results in far more responsive and superior tracking compared to Google Cardboard, although it still only tracks rotation.

Cost: $99 (plus a Samsung Android smartphone)

Oculus Rift

The Oculus Rift is the platform that reignited the VR renaissance through its successful Kickstarter campaign. The Rift uses a PC and external cameras that allow not only rotational but also positional tracking, giving the user a full VR experience. The Samsung relationship allows Oculus to use Samsung screens in its HMDs. While the Oculus no longer demands that the user remain seated, it does want the user to move within a smaller 3 m x 3 m area. The Rift HMD is wired to the PC. The user can interact with the VR world with the included Xbox gamepad, mouse and keyboard, a one-button clicker, or proprietary wireless controllers.

Cost: $399, plus $800 for a VR-ready PC

Vive

The HTC Vive from Valve uses smartphone panels from HTC. The Vive has its own proprietary wireless controllers, of a different design than Oculus (though it can also work with gamepads, joysticks, and mouse/keyboard). Its most distinguishing characteristic is that the Vive encourages users to explore and walk within a 4 m x 4 m, or larger, area.

Cost: $599, plus an $800 VR-ready PC

Sony PSVR

While there are persistent rumors of an Xbox VR HMD, Sony's PlayStation is currently the only video game console with a VR HMD. It is easier to install and set up than a PC-based VR system, and while the library of titles is much smaller, the quality of the titles is higher on average.
It is also the most affordable of the positional-tracking VR options. But it is also the only one that cannot be developed for by the average hobbyist developer.

Cost: $400, plus a Sony PlayStation 4 console

Microsoft's HoloLens

Microsoft's HoloLens provides a unique AR experience in several ways. The user is not blocked off from the real world; they can still see the world around them (other people, desks, chairs, and so on) through the HMD's semitransparent optics. The HoloLens scans the user's environment and creates a 3D representation of that space. This allows holograms from the HoloLens to interact with objects in the room: holographic characters can sit on couches, fish can avoid table legs, screens can be placed on walls, and so on.

The system is completely wireless - the only commercially available positional-tracking device that is. The computer is built into the HMD, with processing power that sits between a smartphone and a VR-ready PC. The user can walk, untethered, in areas as large as 30 m x 30 m. While an Xbox controller and a proprietary single-button controller can be used, the main interaction with the HoloLens is through voice commands and two gestures from the user's hand (Select and Go back). The final difference is that the holograms only appear in a relatively narrow field of view. Because the user can still see other people, whether sharing the same holographic projections or not, users can interact with each other in a more natural manner.

Cost: Development Edition: $3,000; Commercial Suite: $5,000

Headset costs and comparison across various features

The following chart is a sampling of VR headset prices, accurate as of February 1, 2018. VR/AR hardware is rapidly advancing, and prices and specs are going to change annually, sometimes quarterly.
As of now, the price of the Oculus has dropped by $200:

| Feature | Google Cardboard | Gear VR | Google Daydream | Oculus Rift | HTC Vive | Sony PS VR | HoloLens |
|---|---|---|---|---|---|---|---|
| Complete cost for HMD, trackers, default controllers | $5 | $99 | $79 | $399 | $599 | $299 | $3,000 |
| Total cost with CPU (phone, PC, PS4) | $200 | $650 | $650 | $1,400 | $1,500 | $600 | $3,000 |
| Built-in headphones | No | No | No | Yes | No | No | Yes |
| Platform | Apple / Android | Samsung Galaxy | Google Pixel | PC | PC | Sony PS4 | Proprietary PC |
| Enhanced rotational tracking | No | Yes | No | Yes | Yes | Yes | Yes |
| Positional tracking | No | No | No | Yes | Yes | Yes | Yes |
| Built-in touch panel | No* | Yes | No | No | No | No | No |
| Motion controls | No | No | No | Yes | Yes | Yes | Yes |
| Tracking system | No | No | No | Optical | Lighthouse | Optical | Laser |
| True 360 tracking | No | No | No | Yes | Yes | No | Yes |
| Gamepad support | No | Yes | No | Yes | Yes | Yes | Yes |
| Room scale and size | No | No | No | Yes, 2m x 2m | Yes, 4m x 4m | Yes, 3m x 3m | Yes, 10m x 10m |
| Remote | No | No | Yes | Yes | No | No | Yes |
| Resolution per eye | Varies | 1440 x 1280 | 1440 x 1280 | 1200 x 1080 | 1200 x 1080 | 1080 x 960 | 1268 x 720 |
| Field of view | Varies | 100 | 90 | 110 | 110 | 100 | 30 |
| Refresh rate | 60 Hz | 60 Hz | 60 Hz | 90 Hz | 90 Hz | 90-120 Hz | 60 Hz |
| Wireless | Yes | Yes | Yes | No | No | No | Yes |
| Optics adjustment | No | Focus | No | IPD | IPD | IPD | IPD |
| Operating system | iOS / Android | Android Oculus | Android Daydream | Win 10 Oculus | Win 10 Steam | Sony PS4 | Win 10 |
| Built-in camera | Yes | Yes | Yes* | No | Yes* | No | Yes |
| AR/VR | VR* | VR* | VR | VR | VR* | VR | AR |
| Natural user interface | No | No | No | No | No | No | Yes |

Choosing which HMD to support comes down to a wide range of issues: cost, access to hardware, use cases, image fidelity/processing power, and more. The chart above is provided to help the user understand the strengths and weaknesses of each platform. There are many HMDs not included in this overview. Some are not commercially available at the time of this writing (Magic Leap, the Win 10 HMDs licensed from Microsoft, the Starbreeze/IMAX HMD, and others), and some, such as Razer's open source HMD, are not yet widely available or differentiated enough.
You enjoyed an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create immersive 3D games and applications with Cardboard VR, Gear VR, Oculus VR, and HTC Vive.

Read more:
  • The hype behind Magic Leap's new augmented reality headsets
  • Create your first augmented reality experience using the programming language you already know


What is AIOps and why is it going to be important?

Aaron Lazar
19 Apr 2018
4 min read
Woah, woah, woah! Wait a minute! First there was that game Spec Ops that I usually sucked at, then there came ITOps and DevOps that took the world by storm, and now there's another something-Ops?? Well, believe it or not, there is, and they're calling it AIOps.

What does AIOps stand for?

AIOps basically means Artificial Intelligence for IT Operations: IT operations enhanced by using analytics and machine learning to analyze the data collected from various IT operations tools and devices. This helps in spotting and reacting to issues in real time. Coined by Gartner, the term has grown in popularity over the past year. Gartner believes that AIOps will be a major transformation for ITOps professionals, mainly because traditional IT operations cannot cope with the modern digital transformation.

Why is AIOps important?

With the massive and rapid shift towards cloud adoption, automation, and continuous improvement, AIOps is here to take care of the new entrants into the digital ecosystem: machine agents, artificial intelligence, IoT devices, and so on. These new entrants are impossible to service and maintain by humans alone, and with billions of devices connected together, the only way forward is to employ algorithms that tackle known problems. Some of the solutions it provides are maintaining high availability and monitoring performance, event correlation and analysis, automation, and IT service management.

How does AIOps work?

As depicted in Gartner's diagram, there are two primary components to AIOps:

  • Big data
  • Machine learning

Data is gathered from the enterprise. You then implement a comprehensive analytics and machine learning strategy alongside the combined IT data (monitoring data + job logs + tickets + incident logs). The processed data yields continuous insights, continuous improvements, and fixes.
It bridges three different IT disciplines to accomplish its goals:

  • Service management
  • Performance management
  • Automation

To put it simply, AIOps is a strategic focus. It argues for a new approach in a world where big data and machine learning have changed everything.

How to move from ITOps to AIOps

Machine learning: Most of AIOps will involve supervised learning, and professionals will need a good understanding of the underlying algorithms. Now don't get me wrong - they don't need to be full-blown data scientists to build the system, just knowledgeable enough to train it to pick up anomalies. Auditing these systems to ensure they're performing tasks as per the initial vision is necessary, and this will go hand in hand with scripting them.

Understanding modern application technologies: With the rise of Agile software development and other modern methodologies, AIOps professionals are expected to know all about microservices, APIs, CI/CD, containers, and the like. With the giant leaps that cloud development is taking, they are also expected to gain visibility into cloud deployments, with an emphasis on cost and performance.

Security: Security is critical. For example, it's important for personnel to understand how to respond to a denial of service attack or a ransomware attack, like the ones we've seen in the recent past. Training machines to detect and predict such events is pertinent to AIOps.

The key tools in AIOps

There is a wide variety of AIOps platforms available in the market that bring AI and intelligence to IT operations. One of the most noteworthy is Splunk, which has recently incorporated AI for intelligence-driven operations. Another is the Moogsoft AIOps platform, which is quite similar to Splunk. BMC has also entered the fray, launching TrueSight 11, their AIOps platform that promises to address use cases in performance and capacity management, the service desk, and application development.
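To make the idea of training a system to pick up anomalies concrete, here is a minimal sketch in pure Python using a z-score test on a metric stream. Real AIOps platforms use far richer models; the latency numbers below are invented:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# Toy response-time metric (ms); the spike at index 8 is the anomaly.
latencies = [100, 102, 98, 101, 99, 103, 97, 100, 500, 101]
anomalies = zscore_anomalies(latencies, threshold=2.0)
```

An AIOps pipeline would run checks like this continuously over monitoring data, correlate the flagged events with tickets and incident logs, and trigger an automated response or alert.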
Gartner has a handy list of top platforms; if you're planning the transition from ITOps, do check it out. Companies like Frankfurt Cargo Services and RevTrak have already added the AI to their Ops. So, are you going to make the transition? According to Gartner, 40% of large enterprises will have made the transition to AIOps by 2022. If you're one of them, I recommend you do it for the right reasons - and don't do it overnight. The transition needs to be gradual and well planned. The first thing you need to do is get your enterprise data together: if you don't have sufficient data that's worthy of analysis, AIOps isn't going to help you much.

Read more:
  • Bridging the gap between data science and DevOps with DataOps


AI on mobile: How AI is taking over the mobile devices marketspace

Sugandha Lahoti
19 Apr 2018
4 min read
If you look at current trends in the mobile market space, a lot of mobile phone manufacturers portray artificial intelligence as the chief feature of their phones. The total number of developers who build for mobile is expected to hit the 14m mark by 2020, according to an Evans Data survey. With this level of competition, developers have resorted to artificial intelligence to distinguish their apps and make their devices stand out. AI on mobile is the next big thing.

AI on mobile comes in multiple forms. It may be hardware-based, such as the AI chip in Apple's iPhone X, or software-based, such as Google's TensorFlow for Mobile. Let's look in detail at how smartphone manufacturers and mobile developers are leveraging the power of AI in both hardware and software.

Embedded chips and in-device AI

Mobile handsets nowadays are equipped with specialized AI chips. These chips are embedded alongside CPUs to handle heavy-lifting tasks in smartphones and bring AI on mobile. These built-in AI engines can not only respond to your commands but also lead the way and make decisions about what they believe is best for you. So, when you take a picture, the smartphone software, leveraging the AI hardware, correctly identifies the person, object, or location being photographed, and compensates for low-resolution images by predicting the pixels that are missing. When it comes to battery life, AI allocates power to relevant functions, eliminating unnecessary power use. Also, in-device AI reduces the dependency on cloud-based AI for data processing, saving energy, time, and associated costs.

The past few months have seen AI-based silicon popping up everywhere. The trend began with Apple's neural engine, part of the new A11 processor Apple developed to power the iPhone X.
This neural engine powers the machine learning algorithms that recognize faces and transfer facial expressions onto animated emoji. Competing head-on with Apple, Samsung revealed the Exynos 9 Series 9810, a chip featuring an upgraded processor with neural network capacity for AI-powered apps. Huawei also joined the party with the Kirin 970 processor and its dedicated Neural Network Processing Unit (NPU), which was able to process 2,000 images per minute in a benchmark image recognition test. Google announced the open beta of its second-generation Tensor Processing Unit. ARM announced its own AI hardware, Project Trillium, a mobile machine learning processor. Amazon is also working on a dedicated AI chip for its Echo smart speaker. And Google's Pixel 2 features a Visual Core co-processor for AI: it offers an AI song recognizer, superior imaging capabilities, and even helps the Google Assistant understand user commands and questions better.

The arrival of AI APIs for mobile

Apart from in-device hardware, smartphones have also witnessed the arrival of artificially intelligent APIs. These APIs add more power to a smartphone's capabilities by offering personalization, efficient searching, accurate video and image recognition, and advanced data mining. Let's look at a few powerful machine learning APIs and libraries targeted solely at mobile devices.

It all began with Facebook announcing Caffe2Go in 2016. This version of Caffe was designed for running deep learning models on mobile devices. It condensed the size of image and video processing AI models by 100x, to run neural networks with high efficiency on both iOS and Android. Caffe2Go became the core of Style Transfer, Facebook's real-time photo stylization tool. Then came Google's TensorFlow Lite in 2017, announced at the Google I/O conference. TensorFlow Lite is a feather-light version of TensorFlow for mobile and embedded devices.
It is designed to be lightweight, speedy, and cross-platform (the runtime is tailor-made to run on various platforms, starting with Android and iOS). TensorFlow Lite also supports the Android Neural Networks API, which can run computationally intensive machine learning operations on mobile devices. Following TensorFlow Lite came Apple's Core ML, a programming framework designed to make it easier to run machine learning models on iOS. Core ML supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML makes it easier for apps to process data locally using machine learning without sending user information to the cloud, and it optimizes models for Apple mobile devices, reducing RAM and power consumption.

Artificial intelligence is finding its way into every aspect of a mobile device, whether through hardware with dedicated AI chips or through APIs for running AI-enabled services on handheld devices. And this is just the beginning. In the near future, AI on mobile will play a decisive role in driving smartphone innovation, possibly becoming the key distinguishing factor consumers think of while buying a mobile device.
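One reason frameworks like TensorFlow Lite and Core ML can run on phones is that they shrink models using techniques such as 8-bit quantization. A sketch of the idea in pure Python - illustrative only, not any framework's actual implementation - maps float weights onto a 0-255 integer range:

```python
def quantize(weights):
    """Affine 8-bit quantization: map floats onto integers in [0, 255].

    Assumes the weights are not all identical (scale would be zero).
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    # Recover approximate float weights from the 8-bit representation.
    return [v * scale + lo for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
```

Storing one byte per weight instead of four is where much of the size reduction comes from, at the cost of a small, bounded rounding error per weight.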

5 reasons to choose AWS IoT Core for your next IoT project

Savia Lobo
19 Apr 2018
5 min read
Many cloud service providers have been marching towards adopting IoT (Internet of Things) services to attract more customers. This league includes top cloud merchants such as AWS, Microsoft Azure, IBM, and, more recently, Google. Among these, Amazon Web Services has been the most popular. Its AWS IoT Core service is a fully managed cloud platform that provides IoT devices with an easy and secure connection for interacting with cloud applications and other IoT devices. AWS IoT Core can keep track of billions of IoT devices and the messages travelling to and from them, processing and routing those messages to AWS endpoints and to other devices reliably and securely. This means that with AWS IoT Core you can keep track of all your devices and communicate with them in real time.

Undoubtedly, there is a lot of competition among cloud platforms hosting IoT services, and users end up bound to a specific platform for a varied set of reasons: a yearly subscription, personal choice, or otherwise. Here are 5 reasons to choose AWS IoT Core for your IoT projects.

Build applications on the platform of your choice with the AWS IoT Device SDK

The AWS IoT Device SDK is the primary mode of connection between your application and AWS IoT Core. It uses the MQTT, HTTP, or WebSockets protocols to connect and exchange messages with the service. The languages supported by the AWS IoT Device SDK include C, Arduino, and JavaScript. The SDK provides developers with mobile SDKs for Android and iOS, and a set of SDKs for Embedded C, Python, and many more. It also includes open-source libraries, developer guides with samples, and porting guides. With these features, developers can build novel IoT products and solutions on the hardware platform of their choice.

The AWS IoT Summit 2018, held recently in Sydney, shed light on cloud technologies and how they can help businesses lower costs, improve efficiency, and innovate at scale. It had sessions dedicated to IoT
(Intelligence of Things: IoT, AWS DeepLens, and Amazon SageMaker)

Handle the underlying infrastructure and protocol support with the Device Gateway

The device gateway acts as the entry point for IoT devices connecting to Amazon Web Services (AWS). It handles multiple protocols - MQTT, WebSockets, and HTTP 1.1 - which ensures a secure and effective connection between IoT devices and IoT Core. Also, with the device gateway, one does not have to worry about the infrastructure, as it automatically manages and scales to huge numbers of devices with ease.

Authentication and authorization are now easy with AWS methods of authentication

AWS IoT Core supports SigV4 (an AWS method of authentication), X.509 certificate-based authentication, and customer-created token-based authentication. The user can create, deploy and manage certificates and policies for devices from the console or using the API. AWS IoT Core also supports connections from users' mobile apps using Amazon Cognito, which creates a unique ID for app users and can be used to retrieve temporary, limited-privilege AWS credentials. AWS IoT Core can also issue temporary AWS credentials after a device has authenticated with an X.509 certificate, so that the device can more easily access other AWS services such as DynamoDB or S3.

Determine a device's current state automatically with Device Shadow

A device shadow is a JSON document that stores and retrieves the current state of a device. It provides persistent representations, such as the last reported state and the desired future state of a device, even when the device is offline. With Device Shadow, one can easily build applications that interact with devices through REST APIs. It lets applications set a device's desired future state without having to query the device's starting state. AWS IoT Core differentiates between the desired state and the last reported state.
It can further command the device to make up the difference.

Route messages both internally and externally using the AWS Rules Engine

The Rules Engine helps you build IoT applications without having to manage any infrastructure. Based on the rules you define, the Rules Engine evaluates all incoming messages within AWS IoT Core, transforms them, and delivers them to other devices or cloud services. One can author rules within the management console using a SQL-like syntax. The Rules Engine can also route messages to AWS endpoints such as AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Machine Learning, Amazon DynamoDB, Amazon CloudWatch, and Amazon Elasticsearch Service with built-in Kibana integration. It can also reach external endpoints using AWS Lambda, Amazon Kinesis, and Amazon Simple Notification Service (SNS). There are many other reasons to choose AWS IoT Core for your projects. However, the choice is ultimately yours, as many may already be using, or be bound to, other cloud services. Those who haven't yet started may choose AWS for the plethora of other cloud services it offers, which now includes AWS IoT Core.
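To make that desired-versus-reported "difference" concrete, here is a minimal Python sketch of how a shadow delta can be computed. This is illustrative only - the real service computes the delta server-side from the shadow's JSON document - and the field names used here are made up:

```python
def shadow_delta(desired, reported):
    """Fields in `desired` that differ from `reported` -- the
    'difference' the service asks the device to make up."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# A toy shadow document with hypothetical device fields.
shadow = {
    "state": {
        "desired": {"led": "on", "brightness": 80},
        "reported": {"led": "off", "brightness": 80},
    }
}

delta = shadow_delta(shadow["state"]["desired"], shadow["state"]["reported"])
print(delta)  # {'led': 'on'}
```

Only the `led` field differs, so only that field would be sent down to the device to act on.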

How machine learning as a service is transforming cloud

Vijin Boricha
18 Apr 2018
4 min read
Machine learning as a service (MLaaS) is an innovation growing out of two of the most important tech trends - cloud and machine learning. It's significant because it enhances both. It makes cloud an even more compelling proposition for businesses. That's because cloud typically has three major operations: computing, networking and storage. When you bring machine learning into the picture, the data that cloud stores and processes can be used in radically different ways, solving a range of business problems.

What is machine learning as a service?

Cloud platforms have always competed to be the first or the best to provide new services. This includes platform as a service (PaaS), infrastructure as a service (IaaS) and software as a service (SaaS) solutions. In essence, cloud providers like AWS and Azure provide sets of software to do different things so their customers don't have to. Machine learning as a service is simply another instance of the services offered by cloud providers. It can include a wide range of features, from data visualization to predictive analytics and natural language processing. It makes running machine learning models easy, effectively automating some of the work that might typically have been done manually by a data engineering team. Here are the biggest cloud providers who offer machine learning as a service:

- Google Cloud Platform
- Amazon Web Services
- Microsoft Azure
- IBM Cloud

Every platform provides a different suite of services and features, so which one you choose will ultimately depend on what's most important to you. Let's take a look now at the key differences between these cloud providers' machine learning as a service offerings.

Comparing the leading MLaaS products

Google Cloud AI

Google Cloud Platform has always provided its own services to help businesses grow. It provides modern machine learning services, with pre-trained models and a service to generate your own tailored models.
The majority of Google applications, like Photos (image search), the Google app (voice search), and Inbox (Smart Reply), have been built using the same services that Google provides to its users.

Pros:
- Cheaper in comparison to other cloud providers
- Provides IaaS and PaaS solutions

Cons:
- The Google Prediction API is going to be discontinued (May 1st, 2018)
- Lacks a visual interface
- You'll need to know TensorFlow

Amazon Machine Learning

Amazon Machine Learning provides services for building ML models and generating predictions, helping users develop robust, scalable, and cost-effective smart applications. With the help of Amazon Machine Learning you are able to use powerful machine learning technology without any prior experience in machine learning algorithms and techniques.

Pros:
- Provides versatile automated solutions
- It's accessible - users don't need to be machine learning experts

Cons:
- The more you use it, the more expensive it gets

Azure Machine Learning Studio

Microsoft Azure provides you with Machine Learning Studio - a simple browser-based, drag-and-drop environment which functions without any kind of coding. You are provided with fully-managed cloud services that enable you to easily build, deploy and share predictive analytics solutions. You are also provided with a platform (the Gallery) to share solutions and contribute to the community.

Pros:
- The most versatile toolset for MLaaS
- You can contribute to and reuse machine learning solutions from the community

Cons:
- Comparatively expensive
- A lot of manual work is required

Watson Machine Learning

Similar to the above platforms, IBM Watson Machine Learning is a service that helps users create, train, and deploy self-learning models to integrate predictive capabilities within their applications. The platform provides automated and collaborative workflows to grow intelligent business applications.
Pros:
- Automated workflows
- Data science skills are not necessary

Cons:
- Comparatively limited APIs and services
- Lacks streaming analytics

Selecting the machine learning as a service solution that's right for you

There are so many machine learning as a service solutions out there that it's easy to get confused. The crucial step to take before you decide to purchase anything is to plan your business requirements. Think carefully not only about what you want to achieve, but also about what you already do. You want your MLaaS solution to integrate easily into the way you currently work, and you don't want it to replicate any work you're currently doing that you're pretty happy with. It gets repeated so much, but it remains as true as ever - make sure your software decisions are fully aligned with your business needs. It's easy to be seduced by the promise of innovative new tools, but without the right alignment they're not going to help you at all.
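To ground what "generating predictions" means at the smallest possible scale, here is a toy, dependency-free Python sketch of the kind of model fitting an MLaaS platform automates for you - a least-squares line fitted to made-up historical data:

```python
# Toy version of what an MLaaS platform automates: fit a simple
# least-squares line to historical data, then generate predictions.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# e.g. monthly ad spend vs. sales (entirely made-up numbers)
model = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(predict(model, 5))  # 11.0
```

A real MLaaS product wraps far more sophisticated models behind an API, but the workflow is the same: feed in historical data, get back a model, ask it for predictions.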

IBM Think 2018: 6 key takeaways for developers

Amey Varangaonkar
17 Apr 2018
5 min read
This year, IBM Think 2018 was hosted in Las Vegas from March 20 to 22. It was one of the most anticipated IBM events of 2018, with over 40,000 developers, technology leaders and business leaders in attendance. Considered IBM's flagship conference, Think 2018 combined previous conferences such as IBM InterConnect and World of Watson.

IBM Think 2018: Key Takeaways

- IBM Watson Studio announced - a platform where data professionals in different roles can come together and build end-to-end Artificial Intelligence workflows
- Integration of IBM Watson with Apple's Core ML, for incorporating custom machine learning models into iOS apps
- IBM Blockchain Platform announced, for Blockchain developers to build enterprise-grade decentralized applications
- Deep Learning as a Service announced as a part of the Watson Studio, allowing you to train deep learning models more efficiently
- Fabric for Deep Learning open-sourced, so that you can use the open source deep learning framework to train your models and then integrate them with the Watson Studio
- Neural Network Modeler announced for Watson Studio, a GUI tool to design neural networks efficiently, without a lot of manual coding
- IBM Watson Assistant announced, an AI-powered digital assistant for automotive vehicles and hospitality

Here are some of the announcements and key takeaways which have excited us, as well as developers all around the world!

IBM Watson Studio announced

One of the biggest announcements of the event was the IBM Watson Studio - a premier tool that brings together data scientists, developers and data engineers to collaborate on, build and deploy end-to-end data workflows. Right from accessing your data source to deploying accurate and high-performance models, this platform does it all. It is just what enterprises need today to leverage Artificial Intelligence in order to accelerate research and get intuitive insights from their data.
IBM Watson Studio's Lead Product Manager, Armand Ruiz, gives a sneak peek into what we can expect from Watson Studio.

Collaboration with Apple Core ML

IBM took its relationship with Apple to another level by announcing a collaboration to develop smarter iOS applications. IBM Watson's Visual Recognition service can be used to train custom Core ML machine learning models, which can be used directly by iOS apps. The latest announcement at IBM Think 2018 comes as no surprise to us, considering IBM had already released new developer tools for enterprise development using the Swift language.

IBM Watson Assistant announced

IBM Think 2018 also announced the evolution of Watson Conversation into Watson Assistant, introducing new features and capabilities to deliver a more engaging and personalized customer experience. With this, IBM plans to take the concept of AI assistants for businesses to a new level. Currently in the beta program, there are 2 domain-specific solutions available on top of Watson Assistant - namely Watson Assistant for Automotive and Watson Assistant for Hospitality.

IBM Blockchain Platform

Per Juniper Research, more than half of the world's big corporations are considering adopting Blockchain technology or are already in the process of doing so. This presents a serious opportunity for a developer-centric platform that can be used to build custom decentralized networks. IBM, unsurprisingly, has identified this opportunity and come up with a Blockchain development platform of its own - the IBM Blockchain Platform. Recently launched as a beta, this platform offers a pay-as-you-use option for Blockchain developers to develop their own enterprise-grade Blockchain solutions without any hassle.

Deep Learning as a Service

Training a deep learning model is quite tricky, as it requires you to design the right kind of neural network along with choosing the right hyperparameters.
This is a significant pain point for data scientists and machine learning engineers. To tackle this problem, IBM announced the release of Deep Learning as a Service as part of the Watson Studio. It includes the Neural Network Modeler (explained in detail below) to simplify the process of designing and training neural networks. Alternatively, using this service, you can leverage popular deep learning libraries and frameworks such as PyTorch, TensorFlow, Caffe and Keras to train your neural networks manually. In the process, IBM also open sourced the core functionality of Deep Learning as a Service as a separate project - namely Fabric for Deep Learning. This allows models to be trained using different open source frameworks on Kubernetes containers, making use of GPU processing power. These models can then be integrated into the Watson Studio.

Accelerating deep learning with the Neural Network Modeler

In a bid to reduce the complexity and the manual work that go into designing and training neural networks, IBM introduced a beta release of the Neural Network Modeler within the Watson Studio. This new feature allows you to design and model standardized neural network models without going into a lot of technical detail, thanks to its intuitive GUI. With this announcement, IBM aims to accelerate the overall process of deep learning, so that data scientists and machine learning developers can focus more on thinking than on the operational side of things. At Think 2018, we also saw the IBM Research team present their annual '5 in 5' predictions. This session highlighted the 5 key innovations that are currently in research and are expected to change our lives in the near future. With these announcements, it's quite clear that IBM is well in sync with the two hottest trends in the tech space today - namely Artificial Intelligence and Blockchain.
IBM seems to be taking every possible step to ensure it's right up there as the preferred choice of tool for data scientists and machine learning developers. We only expect the aforementioned services to get better and gain more mainstream adoption with time, as most of them are currently in the beta stage. Not just that, there's scope for more improvements and the addition of newer functionality as IBM develops these platforms. What did you think of these announcements by IBM? Do let us know!

What is the Reactive Manifesto?

Packt Editorial Staff
17 Apr 2018
3 min read
The Reactive Manifesto is a document that defines the core principles of reactive programming. It was first released in 2013 by a group of developers led by Jonas Boner (you can find him on Twitter: @jboner). Jonas explained the reasons behind the manifesto in a blog post: "Application requirements have changed dramatically in recent years. Both from a runtime environment perspective, with multicore and cloud computing architectures nowadays being the norm, as well as from a user requirements perspective, with tighter SLAs in terms of lower latency, higher throughput, availability and close to linear scalability. This all demands writing applications in a fundamentally different way than what most programmers are used to." A number of high-profile programmers signed the Reactive Manifesto. Some of the names behind it include Erik Meijer, Martin Odersky, Greg Young, Martin Thompson, and Roland Kuhn. A second, updated version of the Reactive Manifesto was released in 2014 - to date, more than 22,000 people have signed it.

The Reactive Manifesto underpins the principles of reactive programming

You can think of it as the map to the treasure of reactive programming, or the bible for programmers of the reactive programming religion. Everyone starting with reactive programming should read the manifesto to understand what reactive programming is all about and what its principles are.

The 4 principles of the Reactive Manifesto

Reactive systems must be responsive: The system should respond in a timely manner. Responsive systems focus on providing rapid and consistent response times, so they deliver a consistent quality of service.

Reactive systems must be resilient: Even if the system faces failure, it should stay responsive. Resilience is achieved by replication, containment, isolation, and delegation.
Failures are contained within each component, isolating components from each other, so that when a failure occurs in one component, it does not affect the other components or the system as a whole.

Reactive systems must be elastic: Reactive systems can react to changes and stay responsive under varying workload. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.

Reactive systems must be message driven: In order to support resilience, reactive systems need to establish a boundary between components by relying on asynchronous message passing.

Those are the core principles behind reactive programming put forward by the manifesto. But there's something else that supports the thinking behind reactive programming: the standard specification on Reactive Streams.

Reactive Streams standard specifications

Everything in the reactive world is accomplished with the help of Reactive Streams. In 2013, Netflix, Pivotal, and Lightbend (previously known as Typesafe) felt the need for a standards specification for Reactive Streams, as reactive programming was beginning to spread and more frameworks for reactive programming were starting to emerge. They started the initiative that resulted in the Reactive Streams standard specification, which is now being implemented across various frameworks and platforms. You can take a look at the Reactive Streams standard specification here. This post has been adapted from Reactive Programming in Kotlin. Find it on the Packt store here.
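The Reactive Streams specification itself boils down to four interfaces - Publisher, Subscriber, Subscription, and Processor - with backpressure expressed through `Subscription.request(n)`: a publisher may only emit as many items as the subscriber has asked for. The spec targets the JVM, but its shape can be sketched in a few lines of Python (an illustrative toy, not a compliant implementation):

```python
class Subscription:
    """Links one subscriber to one publisher; carries backpressure."""
    def __init__(self, subscriber, items):
        self._subscriber, self._items, self._pos = subscriber, items, 0

    def request(self, n):
        # The subscriber signals demand: emit at most n more items.
        for _ in range(n):
            if self._pos >= len(self._items):
                self._subscriber.on_complete()
                return
            self._subscriber.on_next(self._items[self._pos])
            self._pos += 1


class ListPublisher:
    """Publishes a fixed list, but only as fast as demand arrives."""
    def __init__(self, items):
        self._items = items

    def subscribe(self, subscriber):
        subscriber.on_subscribe(Subscription(subscriber, self._items))


class CollectingSubscriber:
    def __init__(self, initial_demand):
        self.received, self.done = [], False
        self._initial_demand = initial_demand

    def on_subscribe(self, subscription):
        self.subscription = subscription
        subscription.request(self._initial_demand)

    def on_next(self, item):
        self.received.append(item)

    def on_complete(self):
        self.done = True


sub = CollectingSubscriber(initial_demand=2)
ListPublisher(["a", "b", "c"]).subscribe(sub)
print(sub.received)          # ['a', 'b'] -- only what was requested
sub.subscription.request(5)  # ask for the rest
print(sub.received, sub.done)  # ['a', 'b', 'c'] True
```

The key point is that the subscriber, not the publisher, controls the rate of flow - which is exactly how message-driven reactive systems stay responsive under load.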
AWS Fargate makes Container infrastructure management a piece of cake

Savia Lobo
17 Apr 2018
3 min read
Containers, such as Docker and FreeBSD Jails, have become a staple way for developers to develop and deploy their applications. With container orchestration solutions such as Amazon ECS and EKS (Kubernetes), developers can easily manage and scale these containers, freeing them up for other work. However, in spite of these management solutions, one still has to take account of infrastructure maintenance, availability, capacity and so on, which are added tasks. AWS Fargate eases these tasks and streamlines all deployments for you, resulting in faster completion of deliverables. At re:Invent in November 2017, AWS launched Fargate, a technology which enables one to manage containers without having to worry about the container infrastructure underneath. It is an easy way to deploy your containers on AWS. One can start using Fargate on ECS or EKS, try out processes and workloads, and later migrate workloads to Fargate. It eliminates most of the management that containers require, such as the placement of resources, scheduling and scaling. All you have to do is:

- Build your container image
- Specify the CPU and memory requirements
- Define your networking and IAM policies
- Launch your container application

Some key benefits of AWS Fargate

It allows developers to focus on the design, development, and deployment of applications, eliminating the need to manage a cluster of Amazon EC2 instances. One can easily scale applications using Fargate: once the application requirements such as CPU and memory are defined, Fargate manages the scaling and infrastructure needed to keep containers highly available. One can launch thousands of containers in no time and easily scale them to run even mission-critical applications. AWS Fargate is integrated with Amazon ECS and EKS.
Fargate launches and manages your containers once you have defined the CPU and memory they need and the IAM policies they require, and uploaded the image to Amazon ECS. With Fargate, one gets flexible configuration options to match one's application needs, and billing is at per-second granularity. Adoption of container management as a trend is steadily increasing. Kubernetes, at present, is one of the most popular containerized application management platforms. However, users and developers are often confused about which the best Kubernetes provider is. Microsoft and Google have their own managed Kubernetes services, but AWS Fargate provides an added ease to Amazon's EKS (Elastic Container Service for Kubernetes) by eliminating the hassle of container infrastructure management. Read more about AWS Fargate on AWS' official website.
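In practice, the build-specify-define-launch steps translate into an ECS task definition. Here is a sketch of what that payload might look like in Python with boto3 (the AWS SDK for Python); the family name, image, account ID and role ARN below are placeholders, not real resources:

```python
# Sketch of a Fargate task definition. All names below (family, image,
# role ARN, account id) are hypothetical placeholder values.
task_definition = {
    "family": "my-web-app",                  # hypothetical task family
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2
    "networkMode": "awsvpc",                 # required network mode for Fargate
    "cpu": "256",                            # CPU units for the task
    "memory": "512",                         # memory in MiB
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "my-registry/my-web-app:latest",  # your built image
            "portMappings": [{"containerPort": 80}],
        }
    ],
}

# With AWS credentials configured, this payload would be registered with:
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_definition)
print(task_definition["requiresCompatibilities"])  # ['FARGATE']
```

Notice that nothing here mentions EC2 instances or clusters of hosts - only the CPU, memory, networking and IAM settings that the article lists as the developer's remaining responsibilities.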

What your organisation needs to know about GDPR

Aaron Lazar
16 Apr 2018
5 min read
GDPR is an acronym that has been doing the rounds for a couple of years now. It's become even more visible in the last few weeks, thanks to the Facebook and Cambridge Analytica data hijacking scandal. And with the deadline looming - 25 May 2018 - every organization on the planet needs to make sure they're on top of things. But what is GDPR exactly? And how is it going to affect you?

What is GDPR?

Before April 2016, a data protection directive enforced in 1995 was in place. This governed all organisations that dealt with collecting, storing and processing data. The directive became outdated with rapidly evolving technological trends, which meant a revised framework was needed. In April 2016, the European Union drew up the General Data Protection Regulation. It has been specifically created to protect the personal data and privacy of European citizens. It's important to note at this point that the regulation doesn't just apply to EU organizations - it applies to anyone who deals with data on EU citizens. A relatively new genre of crime involving stealing data has cropped up over the past decade. Data is so powerful that its misuse could be devastating. GDPR aims to set a new benchmark for the protection of consumer data rights by making organisations more accountable. Governed by GDPR, organisations will now be responsible for guarding every quantum of information that is connected to an individual, including IP addresses and web cookies! Read more: Why GDPR is good for everyone.

Why should organizations bother with GDPR?

In December 2017, RSA, one of the first cryptosystems and security organisations, surveyed 7,500 customers in France, Italy, Germany, the UK and the US, and the results were interesting. When asked what their main concern was, customers responded that lost passwords, banking information, passports and other important documents were their major concerns.
The more interesting part was that over 60% of the respondents said that, in the event of a breach, they would blame the organisation that lost their data rather than the hacker. If you work for or own a company that deals with the data of EU citizens, you'll need GDPR on your radar. If you don't comply, you'll face a hefty fine - more on that below.

What kind of data are we talking about?

The GDPR aims to protect data related to identity, like name, physical address, sexual orientation and more. It also covers ID numbers; IP addresses, cookies and RFID tags; genetic and health-related data; biometric data like fingerprints and retina scans; racial or ethnic data; and political opinions.

Who must comply with GDPR?

You'll be governed by GDPR if:

- You're a company located in the EU
- You're not located in the EU but you still process data of EU citizens
- You have more than 250 employees
- You have fewer than 250 employees but process data that could impact the rights and freedom of EU citizens

When does GDPR come into force?

In case you missed it in the first paragraph, GDPR comes into effect on 25 May 2018. If you're not ready yet, now is the time to scramble to get things right and make sure you comply with GDPR regulations.

What if you don't make the date?

Unlike an invitation to a birthday party, if you miss the date to comply with GDPR, you're likely to be fined to the tune of €20 million or 4% of your company's worldwide turnover, whichever is higher. A lower tier of fines - €10 million or 2% of worldwide turnover - applies to failures such as not reporting a data breach, not incorporating privacy by design, and not ensuring that data protection is applied at the initial stage of a project. It also covers failure to hire a Data Protection Officer/Chief Data Officer with professional experience and knowledge of data protection laws proportionate to the processing the organisation carries out.
If it makes you feel any better, you're not the only one. A report from Ovum states that more than 50% of companies feel they're likely to be fined for non-compliance.

How do you prepare for GDPR?

Well, here are a few honest steps that you could take to ensure successful compliance:

- Prepare to shell out between $1 million and $10 million to meet GDPR requirements
- Hire a DPO or a CDO who's capable of handling all your data policies and migration
- Fully understand GDPR and its requirements
- Perform a risk assessment: understand what kind of data you store and what implications it might have
- Strategize to mitigate that risk
- Review or create your data protection plan
- Plan for a 72-hour incident response system
- Implement internal plans and policies, and ensure employees follow them

For the third time then - time is running out! It's imperative that you ensure your organisation complies with GDPR before the 25th of May, 2018. We'll follow up with some more thoughts to help you make the shift, as well as give you more insight into this game-changing regulation. If you own or are part of an organisation that has migrated to comply with GDPR, please share some tips in the comments section below to help others still in the midst of the transition.

What we learnt from IBM Research's '5 in 5' predictions presented at Think 2018

Amey Varangaonkar
16 Apr 2018
4 min read
IBM's mission has always been to innovate and, in the process, change the way the world operates. With this objective in mind, IBM Research started a conversation termed '5 in 5' way back in 2012, giving its top 5 predictions every year - this year at IBM Think 2018 - on how technology would change the world. These predictions usually drive IBM's research and innovation, which eventually delivers efficient solutions to the problems they describe. Here are the 5 predictions made by IBM Research for 2018:

More secure Blockchain products: In order to avoid counterfeit Blockchain products, the technology will be coupled with cryptographic solutions to develop decentralized solutions. Digital transactions are often subject to fraud, and securing them with crypto-anchors is seen as the way forward. Want to know how this can be achieved? You might want to check out IBM's blog on crypto-anchors and their real-world applications. If you are like me, you'd rather watch IBM researcher Andres Kind explain what crypto-anchors are in a fast-paced science slam session.

Sophisticated cyber attacks will continue to happen: Cyber attacks resulting in data leaks or the theft of confidential data are not news to us. The bigger worry, though, is that the current methodologies to prevent these attacks are not proving good enough. IBM predicts this is only going to get worse, with more advanced and sophisticated cyber attacks breaking into current secure systems with ease. IBM Research also predicted the rise of 'lattice cryptography', a new security mechanism offering a more sophisticated layer of protection for these systems. You can read more about lattice cryptography on IBM's official blog. Or, you can watch IBM researcher Cecilia Boschini explain lattice cryptography in 5 minutes in one of IBM's famous science slam sessions.
Artificial Intelligence-powered bots will help clean the oceans: Our marine ecosystem seems to be going from bad to worse, mainly due to the pollution and toxic waste being dumped into it. IBM predicts that AI-powered autonomous bots, deployed and controlled from the cloud, can help relieve this situation by monitoring water bodies for water quality and pollution levels. You can learn more about how these autonomous bots will help save the seas in this interesting talk by Tom Zimmerman.

An unbiased AI system: Artificially designed systems are only as good as the data used to build them. This data may be impure, or may contain flaws or biases pertaining to color, race, gender and so on. Going forward, new models which mitigate these biases and ensure more standard, bias-free predictions will be designed. With these models, certain human values and principles will be considered for effective decision-making. IBM researcher Francesca Rossi talks about bias in AI and the importance of building fair systems that help us make better decisions.

Quantum Computing will go mainstream: IBM predicts that quantum computing will get out of research labs and gain mainstream adoption in the next 5 years. Problems considered difficult or unsolvable today due to their sheer scale or complexity can be tackled with the help of quantum computing. To know more, let IBM researcher Talia Gershon take you through the different aspects of quantum computing and why it is expected to be a massive hit.

Amazingly, most of the predictions from past years have turned out to be true. For instance, IBM predicted the rise of computer vision technology in 2012, where computers would be able to not only process images, but also understand their 'features'. It remains to be seen how true this year's predictions will turn out to be.
However, considering the rate at which research on AI and other tech domains is progressing and being put to practical use, we won't be surprised if they all become a reality soon. What do you think?
GDPR is good for everyone: businesses, developers, customers

Richard Gall
14 Apr 2018
5 min read
Concern around GDPR is palpable, but time is running out. It appears that many businesses don't really know what they're doing. At his Congressional hearing, Mark Zuckerberg's notes read "don't say we're already GDPR compliant" - if Facebook aren't ready yet, how could the average medium-sized business be? But the truth is that GDPR couldn't have come at a better time. Thanks in part to the Facebook and Cambridge Analytica scandal, the question of user data and online privacy has never been so audible within public discourse. That level of public interest wasn't around a year ago. Ultimately, GDPR is the best way to tackle these issues. It forces businesses to adopt a different level of focus - and care - towards their users. It forces everyone to ask: what counts as personal data? Who has access to it? Who is accountable for the security of that data? These aren't just points of interest for EU bureaucrats. They are fundamental questions about how businesses own and manage relationships with customers.

GDPR is good news for web developers

In turn, this means GDPR is good news for those working in development too. If you work in web development or UX, it's likely that you've experienced frustration when working against the requirements and feedback of senior stakeholders. Often, the needs of users are misunderstood or even ignored at the expense of what the business needs. This is especially true when management lacks technical knowledge and makes too many assumptions. At its worst, it can lead down the path of 'dark patterns', where UX is designed in such a way as to 'trick' customers into behaving in a certain way. But even when intentions aren't that evil, a mindset that refuses to take user privacy - and simple user desires - seriously can be damaging. Ironically, the problems this sort of negligence causes aren't just legal. It's also bad for business.
That’s because when you engineer everything around what’s best for the business in a crude and thoughtless way, you make life hard for users and customers. This means:

Customers simply have a bad experience and could get a better one elsewhere
Customers lose trust
Your brand is damaged

GDPR will change bad habits in businesses

What GDPR does, then, is force businesses to get out of the habit of lazy thinking. It makes issues around UX and data protection so much more important than they otherwise would be. It also forces businesses to start taking the way software is built and managed much more seriously. This could mean a change in the way that developers work within their businesses in the future. Silos won’t just be inefficient; they might just lead to a legal crisis. Development teams will have to work closely with legal, management and data teams to ensure that the software they are developing is GDPR compliant. Of course, this will also require a good deal of developer training so that teams are fully briefed on the new landscape. It also means we might see new roles like Chief Data Officer becoming more prominent. But it’s worth remembering that for non-developers, GDPR is also going to require much more technical understanding. If recent scandals have shown us anything, it’s that a lot of people don’t fully understand the capabilities that even the smallest organizations have at their disposal. GDPR will force the non-technical to become more informed about how software and data interact - and, most importantly, how software can sometimes exploit or protect users.

GDPR will give developers a renewed focus on the code they write

Equally, for developers, GDPR forces a renewed focus on the code they write.
Discussions around standards have been a central point of contention in the open source world for some time. There has always been an unavoidable, quiet tension between innovation and standards compliance. Writing in Smashing Magazine, digital law expert Heather Burns has some very good advice on this: "Your coding standards must be preventive as well. You should disable unsafe or unnecessary modules, particularly in APIs and third-party libraries. An audit of what constitutes an unsafe module should be about privacy by design, such as the unnecessary capture and retention of personal data, as well as security vulnerabilities. Likewise, code reviews should include an audit for privacy by design principles, including mapping where data is physically and virtually stored, and how that data is protected, encrypted, and sandboxed." Sure, all of this seems like a headache, but it should make life better for users and customers. And while it might seem frustrating to not be able to track users in the way that we might have in the old world, by forcing everyone to focus on what users really want - not what we want them to want - we’ll ultimately get to a place that’s better for everyone.
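Burns' point about privacy by design doesn't have to stay abstract: data minimisation and log redaction can be ordinary, reviewable code. Here is a minimal sketch; the field names and the redaction policy are illustrative assumptions, not anything mandated by GDPR itself.

```python
import re

# Hypothetical example: fields this application treats as personal data.
# Which fields count is a design decision you make per application.
PERSONAL_DATA_FIELDS = {"name", "email", "ip_address", "date_of_birth"}

def minimise_record(record: dict) -> dict:
    """Data minimisation: drop personal data fields the application
    doesn't need, keeping only non-identifying keys."""
    return {k: v for k, v in record.items() if k not in PERSONAL_DATA_FIELDS}

def redact_emails(text: str) -> str:
    """Redact email addresses before free text is written to logs."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[redacted email]", text)

record = {"name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
print(minimise_record(record))                    # {'plan': 'pro'}
print(redact_emails("contact jane@example.com"))  # contact [redacted email]
```

The point is that a code review can then audit these functions directly - where personal data enters, where it is stripped - rather than hunting for leaks after the fact.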
Data science on Windows is a big no

Aaron Lazar
13 Apr 2018
5 min read
I read a post from a LinkedIn connection about a week ago. It read: “The first step in becoming a data scientist: forget about Windows.” Even if you’re not a programmer, that's pretty controversial. The first nerdy thought I had was: that’s not true. The first step to data science is not choosing an OS, it’s statistics! Anyway, I kept wondering what’s wrong with doing data science on Windows, exactly. Why is the legacy product (Windows), created by one of the leaders in data science and artificial intelligence, not suitable to support the very thing it is driving? As a publishing professional who has worked with a lot of authors, one of the main issues I’ve faced while collaborating with them is the compatibility of platforms, especially when it comes to sharing documents, working with code, etc. At least 80 percent of the authors I’ve worked with have been using something other than Windows. They are extremely particular about the platform they’re working on, and have usually chosen Linux. I don’t know if they consider it a punishable offence, but I’ve been using Windows since I was 12, even though I have played around with Macs and machines running Linux/Unix. I’ve never been affectionately drawn towards those machines as much as my beloved laptop that is happily rolling on Windows 10 Pro.

Why is data science on Windows a bad idea?

When Microsoft created Windows, its main idea was to make the platform as user friendly as possible, and it focused every ounce of energy on that, and voila! It created one of the simplest operating systems anyone could ever use. Microsoft wanted to make computing easy for everyone - teachers, housewives, kids, business professionals. However, it did not cater to the developer community as much as to its everyday users. Now, that’s not to say that you can’t use a Windows machine to code. Of course you can run Python or R programs. But you’re likely to face issues with compatibility and speed.
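To give one concrete example of that compatibility friction: path separators differ between Windows and Unix-like systems, and hard-coded POSIX paths are a classic source of breakage when a data pipeline moves between machines. Python's standard pathlib module sidesteps this entirely - a minimal sketch:

```python
from pathlib import Path

# Hard-coding separators breaks across OSes:
# "data/raw/train.csv" on Linux vs "data\\raw\\train.csv" on Windows.
# Building the path with pathlib uses whatever separator the host OS expects.
data_file = Path("data") / "raw" / "train.csv"

print(data_file.name)    # train.csv
print(data_file.suffix)  # .csv
print(data_file.parts)   # ('data', 'raw', 'train.csv')
```

Habits like this won't fix every Windows quirk, but they remove one of the most common ones for free.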
If you’re using the command line and something goes wrong, it’s a real PITA to debug on Windows. Also, if you’re doing cluster computing with other Linux/Mac machines, it’s better to have one of them yourself. Many would agree that Windows is more likely to suffer a BSoD (Blue Screen of Death) than a Mac or a Unix machine, messing up an algorithm that’s been running for a long time. (Check out our most-read post, 15 useful Python libraries to make your data science tasks easier.)

Is it all that bad?

Well, not really. In fact, if you need to pump in a couple more gigs of RAM, you can’t think of doing that on a Mac. And although you might still encounter some weird behavior like that mentioned above on a Windows PC, you can always Google a workaround. Don’t beat yourself up if you own a PC. You can always set up a dual boot, running a Linux distribution in parallel. You might want to check out Vagrant for this. Also, you’ll be surprised to learn that if you’re a Mac owner planning some heavy-duty deep learning on a GPU, you can’t really run CUDA without messing things up: CUDA will only work well with NVIDIA's GPUs in a PC. In Joey Tribbiani's words, “This is a moo point.” To me, data science is really OS agnostic. For instance, with Docker you don’t really have to worry much about which OS you’re running - so from that perspective, data science on Windows may work for you.

Still feel for Windows?

Well, there are obviously drawbacks. You’ll still keep living with the fear of isolation that Microsoft tries to create in the minds of customers. Moreover, you’ll be faced with “slowdom”, if that’s a word, what with all the background processes eating away your computing power! You’ll be defying everything that modern computing is defined by - KISS, open source, Agile, etc. Another important thing to keep in mind is that when you’re working with so much data, you really don’t wanna get hacked!
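If you do end up straddling OSes, the practical workaround - short of containerizing everything with Docker - is often just to write code that detects the platform and adapts. A small illustrative sketch using Python's standard platform module; the pager-picking logic here is a made-up example, not a recommendation:

```python
import platform
import shutil

# platform.system() reports the host OS: 'Windows', 'Linux' or 'Darwin' (macOS).
system = platform.system()

# Made-up example of adapting to the host: pick a pager command
# that actually exists on this platform.
if system == "Windows":
    pager = "more"
else:
    pager = "less" if shutil.which("less") else "more"

print(f"Running on {system}, using pager: {pager}")
```

Branches like this are exactly the boilerplate that containers let you delete, which is why the "moo point" argument has some force.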
Last but not least, if you’re intending to dabble with AI and blockchain, your best bet is not going to be Windows. All said and done, if you’re a budding data scientist looking to buy some new equipment, you might want to consider a few things before you invest in your machine. Think about what you’ll be working with and what tools you might want to use. If you want to play safe, it’s best to go with a Linux system. If you have the money and want to flaunt it, while still enjoying support from most tools, think about a Mac. And finally, if you’re brave and not worried about having two OSes running on your system, go in for a Windows PC. So the next time someone decides to gift you a Windows PC, don’t politely decline right away. Grab it and swiftly install a Linux distro! Happy coding! :) *I will put an asterisk here, for the thoughts put in this article are completely my personal opinion, and they might differ from person to person. Go ahead and share your thoughts in the comments section below.