
Tech Guides

852 Articles

Decoding the Chief Robotics Officer (CRO) Role

Aaron Lazar
20 Dec 2017
7 min read
The world is moving swiftly towards an automated culture, and this means more and more machines will enter the fray. There have been umpteen debates on whether this is a good or a bad thing - talk of how fortune tellers might be overtaken by Artificial Intelligence, and so on. From the progress mankind has made benefiting from these machines, we can safely say, for now, that it has only been a boon. With this explosion of moving and thinking metal, there's a strong need for some governance at the top level. It now looks like we need to shift a bit to make more room at the C-level table, because we've got the Master of the Machines arriving! Well, "Master of the Machines" does sound cool, although not many companies would appreciate the "professionalism" of the term. The point is, the rise of a brand new C-level role, the Chief Robotics Officer, seems to be just on the horizon. Like we did in the Chief Data Officer article, we're going to tell you more about this role and its accompanying responsibilities, and by the end, you'll be able to understand whether your organisation needs a CRO.

Facts and Figures

As far as I can remember, one of the first Chief Robotics Officers (CRO) was John Connor (#challengeme). Jokes apart, the role was introduced at the Chief Robotics Officer (CRO) Summit after having been talked about quite a lot in 2015. You've probably heard about this role by another name - Chief Autonomy Officer. Gartner predicts that 10% of large enterprises in supply-chain industries will have created a CRO position by 2020. Cisco states that as many as 60% of industries like logistics, healthcare, manufacturing, energy, etc., will have a CRO by 2025. The next generation of AI and robots will affect the workforce, business models, operations and competitive position of leading organisations. It's therefore not surprising that the Boston Consulting Group projects that the market for robots will reach $67 billion by 2025.

Why all the fuss?

It's quite evident that robots and smart machines will soon take over, or at least redefine, the way a lot of jobs are currently performed by humans. This means that robots will be working alongside humans, and as such there's a need for the development of principles, processes, and disciplines that govern or manage this collaboration. According to Myria Research, "The CROs (and their teams) will be at the forefront of technology, to translate technology fluency into clear business advantages, and to maintain Robotics and Intelligent Operational Systems (RIOS) capabilities that are intimately linked to customer-facing activities, and ultimately, to company performance". With companies like Amazon, Adidas, Crowne Plaza Hotels and Walmart already deploying robots worth millions in research to move in the direction of automation, there is clearly a need for a CRO.

What might the Chief Robotics Officer's responsibilities be?

If you search for job listings for the role, you probably won't succeed, because the role is still in the making and there are no properly defined responsibilities. Still, if we were to think of what the CRO's responsibilities might be, here's what we could expect:

Piece the puzzle together: CROs will be responsible for bringing business functions like Engineering, HR, IT, and others together, implementing and maintaining automation technologies within the technical, social and financial contexts of a company.
Manage the robotics life cycle: The CRO would be responsible for defining and managing different aspects of the robotics life cycle. They will need to identify ways and means to improve the way robots function and boost productivity.
Code of conduct: CROs will need to design and develop the principles, processes and disciplines that will manage robots and smart machines, enabling them to collaborate seamlessly with a human workforce.
Define integration: CROs will define robotic environments and integration touch points with other business functions such as supply chain, manufacturing, agriculture, etc.
Brand guardians: With the addition of non-humans to the workforce, CROs will be responsible for brand health and any violations caused by their robots.
Define management techniques: CROs will bridge the gap between machines and humans and will develop techniques that humans can use to manage robotic workers.

On a broader level, these responsibilities look quite similar to those of a Chief Information Officer, Chief Digital Information Officer or even a Director of IT.

Key CRO Skills

Well, with robots in place, people management skills might seem less necessary - or not. You might think that a CRO is expected to possess only technical skills because of the nature of the job. However, they will still have to interact with humans and manage their collaboration with the machines. This brings in the challenge of managing change. Not everyone is comfortable working with machines, and a certain amount of understanding and skill will need to be developed. With brand management and other strategic goals involved, the CRO must be on their toes, moulding the technological side of the business to achieve short and long term goals. IT managers, those in charge of automation, and directors who are skilled in robotics will be interested in scaling up to the position. On another note, over 35% of robotics jobs might be left vacant by 2020, owing to the rapid growth of the field.

Futuristic Challenges

Some of the major challenges we expect to see are managing change and managing an environment where humans and bots work together. The European Union has been thinking of considering robots as "electronic persons" with rights in the near future. This will result in complications about who is right and who is wrong. Moreover, there are plans for rewarding and penalising bots based on their performance. How do you penalise a bot? Maybe penalising would come in the form of not charging the bot for a few days, or formatting its memory if it's been naughty! Or rewards could be in the form of a software update, or debugging it more frequently? These probably sound silly at the moment, but you never know what the future might have in store.

The Million $ Question: Do we need a CRO?

So far, no companies have publicly announced hiring a CRO, although many manufacturing companies already have senior roles related to robotics, such as Vice President of Advanced Automation and Robotics, or Vice President of Advanced Engineering. However, these roles are purely technical and not strategic. It's clear that there needs to be someone at the high table calling the shots and setting the strategy for a collaborative future, a world where robots and machines will work in harmony. Remy Glaisner of Myria Research predicts that CROs will occupy a strategic role on a par with CIOs within the next five to eight years. CIOs might even be replaced by CROs in the long run. You never know - in the future the CRO might work with a bot themselves, the bot helping to make decisions at an organisational and strategic level.

The sky's the limit! In the end, small, medium or even large-sized businesses that are already planning to hire a CRO to drive automation are on the right track. A careful evaluation of the benefits of having one in your organisation to lead your strategy will help you decide whether to take the CRO path or not. With automation bound to increase in importance in the coming years, it looks as though strategic representation will be inevitable for people with skills in the field.


6 ways you can give an Artificially Intelligent Makeover to Web Development

Sugandha Lahoti
19 Dec 2017
8 min read
The web is an ever-changing world! Users are always seeking personalized content and richer experiences - websites which can do whatever users want, however they want, and exactly as they want. In other words, end users expect smarter applications with self-learning capabilities and hyper-customized user experiences. This poses a major challenge for developers - how do they design websites that deliver new and personalized content every time? Traditional approaches to web development are not the answer. They can, in fact, be problematic. Why, you ask? Here are some clues. Building basic layouts and designing the website alone takes time, let alone customizing for dynamic content. Web app testing is a tedious, time-intensive process prone to errors. Even mundane web development decisions depend on the developer, slowing down the time taken to go live.

Automating the web development process, starting with the more structured, repetitive and clearly defined tasks, can help developers pay less attention to cumbersome details and focus on the more value-adding aspects of development, such as formulating design strategy, planning the ultimate user experience and other such activities. Artificial Intelligence can help not just with this kind of intelligent automation, but with a lot more - from assisting with design conceptualization and website implementation to web analytics. This human-machine collaboration has the potential to transform the web as we know it.

How AI can improve web development

Through the lens of a typical software development lifecycle, let's look at some ways in which AI is transforming web development.

1. Automating complex requirement gathering and analysis

Using an AI-powered chatbot or voice assistant, for instance, one can automate the process of collecting client requirements and end-user stories without human intervention. It can also prepare a detailed description of the gathered data and use data extraction tools to generate insights that then drive the web design and development strategy. This is possible through a carefully constructed system that employs NLP, ML, computer vision and image recognition algorithms and tools, among others. Kore.ai is one such platform, which empowers decision makers with the insights they need to drive business outcomes with data-driven analytics.

2. Simplifying web design with AI virtual assistants

Designing basic layouts and templates of web pages is a tedious job for all developers. AI tools such as virtual assistants, like The Grid's Molly, can help here by simplifying the design process. By prompting questions to the user (in this case the web owner or even the developer), and extracting content from their answers, AI assistants can create personalized content with the exact combination of branding, layout, design, and content required by that user. Take Adobe Sensei for instance - it can automatically analyze inputs and recommend design elements to the user. These range from automating basic photo editing tasks, such as cropping using image recognition techniques, to creating elements in images which didn't exist before by studying the neighbouring pixels. Developers now need only focus on training a machine to think and perform like a designer.

3. Redefining web programming with self-learning algorithms

AI can also facilitate programming. AI programs can perform basic tasks like updating and adding records to a database, predict which bits of code are most likely to be used to solve a problem, and then utilize those predictions to prompt developers to adopt a particular solution. An interesting example is Pix2Code, which aims at automating front-end development. In fact, AI algorithms can also be utilized to create self-modifying code right from scratch - think of it as a fully functional piece of code written without human involvement. Developers can therefore build smarter apps and bots using AI tech at much faster rates than before. They would, however, need to train these machines and feed them good datasets to start with. The smarter the design and the more comprehensive the training, the better the results these systems produce. This is where the developers' skills make a crucial difference.

4. Putting testing and quality assurance on auto-pilot

AI algorithms can help an application test itself, with little or no human input. They can predict the key parameters of software testing processes based on historical data. They can also detect failure patterns and amplify failure prediction at a much higher efficiency than traditional QA approaches. Thus, bug identification and remediation will no longer be a long and slow process. As we speak, Microsoft is readying the release of an AI bug finder for developers, Microsoft Security Risk Detection. In this new AI-powered QA environment, developers can discover more effective ways of testing, identify outliers faster, and work on effective code coverage techniques, all with no or basic testing experience. In simple terms, developers can just focus on perfecting the build while the AI handles complex test cases and the resulting bugs automatically.

5. Harnessing web analytics for SEO with AI

SEO strategies rely heavily on number crunching. Many web analytics tools are good; however, their potential is currently limited by the processing capabilities of the humans who interpret that data for their websites. With AI-backed data mining and analytics, one can maximise the usefulness of a website's metadata and other user-generated data. Predictive engines built using AI technologies can generate insights which point developers to irregularities in their website architecture or highlight content that is ill-fitted from an SEO point of view. Using such insights, AI can list out better ways to design websites and develop web content that connects with the target audience. Market Brew is one such artificially intelligent SEO platform, which uses AI to help developers react to and plan the content for their websites in ways that search engines might perceive them.

6. Providing a superior end-user experience with chatbots

AI-powered chatbots can take customer support and interaction to the next level. A simple rule-based chatbot responds only to specific preset commands. An AI-infused chatbot, on the other hand, can simulate an actual conversation by learning something new from every conversation and tailoring its answers and actions accordingly. Such chatbots can automate routine tasks and provide relevant information and services. Imagine the possibilities here - these chatbots can enhance visitor engagement by responding to queries and to comments on blog posts, and provide real-time assistance and personalization. eBay's AI-powered ShopBot is one such chatbot, built on Facebook Messenger, which can help consumers narrow down the best deals from eBay's entire listings and answer customer-centric queries.
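To make the rule-based vs. learning-based distinction concrete, here is a minimal, hypothetical sketch in Python. It is not the implementation behind ShopBot or any product mentioned above; it simply contrasts keyword matching with a small scikit-learn intent classifier trained on a handful of made-up example phrases.

```python
# A minimal sketch contrasting a rule-based bot with a learned intent classifier.
# The training phrases and intents below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def rule_based_reply(message: str) -> str:
    # Responds only to preset keywords; anything else falls through.
    rules = {"price": "Here is the price list.", "refund": "Refunds take 5-7 days."}
    for keyword, reply in rules.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I did not understand that."

# A tiny intent classifier: maps free-form text to an intent label.
train_texts = [
    "how much does this cost", "what's the price", "is there a discount",
    "I want my money back", "can I return this", "refund please",
    "where is my order", "track my package", "has it shipped yet",
]
train_intents = ["pricing"] * 3 + ["refund"] * 3 + ["order_status"] * 3

intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
intent_model.fit(train_texts, train_intents)

def learned_reply(message: str) -> str:
    # Generalizes to phrasings that never appeared verbatim in the rules.
    intent = intent_model.predict([message])[0]
    responses = {
        "pricing": "Here is the price list.",
        "refund": "Refunds take 5-7 days.",
        "order_status": "Let me look up your order.",
    }
    return responses[intent]

if __name__ == "__main__":
    msg = "what is the cost of the blue one"
    print(rule_based_reply(msg))   # falls back: no preset keyword matched
    print(learned_reply(msg))      # maps the unseen phrasing to the 'pricing' intent
```

The point is not the specific model but the contrast: the learned bot handles phrasings it has never seen, while the rule-based bot only matches exact keywords.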
Skill up, developers!

With the rise of AI, it is clear that developers will play a different role from the traditional one of programmer-cum-designer. Developers will need to adapt their technical skillsets to rise above and complement the web development work that AI is capable of taking over. For example, they will now need to focus more on training AI algorithms to learn web and mobile usage patterns for better recommendations. A data-driven approach is now required, with more focus on curating the data, taking the software through the process of learning by itself, and writing scripts to interact with the software. To do this, web developers need to get up to speed with the basics of machine learning, natural language processing, deep learning, and so on, and apply the tools and techniques related to AI in their web development workflow.

The Future Perfect

Artificial Intelligence has found its way into everything imaginable. Within web development, this translates to automated web design, intelligent application development, highly proficient recommendation engines and much more. Today, the use of AI in web development is nice-to-have ammunition in a web developer's toolkit. Soon, AI will make itself indispensable to the web development ecosystem, ushering in the intelligent web. Developers will be a hybrid of designers, programmers and ML engineers who have a good grasp of user experience, are comfortable thinking in abstracts and algorithms, and are equally well versed in translating them into AI-assisted, elegant code. The next milestone for AI in web development is building self-improving apps which can think beyond the confines of human thought. Such apps would have the ability to perceive connections between data points that have not been previously considered, or are out of the reach of human intelligence. The ultimate goal of such machines, on the web and otherwise, would be to gain clairvoyance on aspects humans are missing or are oblivious to. Here's hoping that when such a revolutionary AI hits the market, it impacts society for the greater good.


Behind the scenes: Deep learning evolution and core concepts

Shoaib Dabir
19 Dec 2017
6 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book will help you build and analyze various deep learning models and apply them to real-world problems.[/box] This article will take you through the history of Deep learning and how it has grown over time. It will walk you through some of the core concepts of Deep Learning like sigmoid activation, rectified linear unit(ReLU), etc. Evolution of deep learning A lot of the important work on neural networks happened in the 80's and in the 90's, but back then computers were slow and datasets very tiny. The research didn't really find many applications in the real world. As a result, in the first decade of the 21st century, neural networks have completely disappeared from the world of machine learning. It's only in the last few years, first seeing speech recognition around 2009, and then in computer vision around 2012, that neural networks made a big comeback with (LeNet, AlexNet). What changed? Lots of data (big data) and cheap, fast GPU's. Today, neural networks are everywhere. So, if you're doing anything with data, analytics, or prediction, deep learning is definitely something that you want to get familiar with. See the following figure: Deep learning is an exciting branch of machine learning that uses data, lots of data, to teach computers how to do things only humans were capable of before, such as recognizing what's in an image, what people are saying when they are talking on their phone, translating a document into another language, helping robots explore the world and interact with it. Deep learning has emerged as a central tool to solve perception problems and it's the state of the art with computer vision and speech recognition. Today many companies have made deep learning a central part of their machine learning toolkit—Facebook, Baidu, Amazon, Microsoft, and Google are all using deep learning in their products because deep learning shines wherever there is lots of data and complex problems to solve. Deep learning is the name we often use for "deep neural networks" composed of several layers. Each layer is made of nodes. The computation happens in the node, where it combines input data with a set of parameters or weights, that either amplify or dampen that input. These input-weight products are then summed and the sum is passed through activation function, to determine what extent the value should progress through the network to affect the final prediction, such as an act of classification. A layer consists of row of nodes that that turn on or off as the input is fed through the network based. The input of the first layer becomes the input of the second layer and so on. Here's a diagram of what neural network might look like: Let's get familiarize with some deep neural network concepts and terminology. Sigmoid activation Sigmoid activation function used in neural network has an output boundary of (0, 1), and α is the offset parameter to set the value at which the sigmoid evaluates to 0. Sigmoid function often works fine for gradient descent as long as input data x is kept within a limit. For large values of x, y is constant. Hence, the derivatives dy/dx (the gradient) equates to 0, which is often termed as the vanishing gradient problem. This is a problem because when the gradient is 0, multiplying it with the loss (actual value - predicted value) also gives us 0 and ultimately networks stops learning. 
Rectified Linear Unit (ReLU)

A neural network can be built by combining linear classifiers with non-linear functions. The Rectified Linear Unit (ReLU) has become very popular in the last few years. It computes the function f(x) = max(0, x); in other words, the activation is simply thresholded at zero. Unfortunately, ReLU units can be fragile during training and can die: a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again, and so the gradient flowing through the unit will forever be zero from that point on. To overcome this problem, a leaky ReLU function has a small negative slope (of 0.01, or so) instead of zero when x < 0:

f(x) = 1(x<0)(αx) + 1(x>=0)(x), where α is a small constant.

Exponential Linear Unit (ELU)

The mean of the ReLU activation is not zero, which sometimes makes learning difficult for the network. The Exponential Linear Unit (ELU) is similar to the ReLU activation function when the input x is positive, but for negative values it is bounded by a fixed value, -1 for α = 1 (the hyperparameter α controls the value to which an ELU saturates for negative inputs). This behavior helps push the mean activation of neurons closer to zero, which helps the network learn representations that are more robust to noise.

Stochastic Gradient Descent (SGD)

Scaling batch gradient descent is cumbersome because it has to compute a lot when the dataset is big. As a rule of thumb, if computing your loss takes n floating point operations, computing its gradient takes about three times that compute. In practice we want to be able to train on lots of data, because on real problems we will always get more gains the more data we use. And because gradient descent is iterative, it has to do that for many steps: to update the parameters in a single step, it has to go through all the data samples, and then repeat this iteration over the data tens or hundreds of times. Instead of computing the loss over the entire set of samples for every step, we can compute the average loss for a very small random fraction of the training data - think between 1 and 1000 training samples each time. This technique is called Stochastic Gradient Descent (SGD) and is at the core of deep learning, because SGD scales well with both data and model size. SGD has a reputation for being black magic, as it has lots of hyperparameters to play with and tune, such as initialization parameters, learning rate, decay and momentum, and you have to get them right.

Deep learning has emerged over time through the evolution of neural networks within machine learning. It is an intriguing segment of machine learning that uses huge amounts of data to teach computers how to do things that only humans were capable of. This article highlighted some of the key players who adopted the concept at a very early stage - Facebook, Baidu, Amazon, Microsoft, and Google - and the different concepts and layers through which deep learning is executed. If deep learning has got you hooked, wait till you learn what GANs are from the book Learning Generative Adversarial Networks.
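The activation formulas above are compact enough to code directly. The sketch below (an illustrative numpy example, not taken from the book) implements ReLU, leaky ReLU and ELU, and shows a bare-bones mini-batch SGD loop for a linear model; the batch size, learning rate and synthetic data are arbitrary choices.

```python
# Illustrative sketch: the activations discussed above, plus a minimal mini-batch SGD loop.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                               # f(x) = max(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x < 0, alpha * x, x)                    # small slope alpha for x < 0

def elu(x, alpha=1.0):
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)    # saturates at -alpha for x << 0

# Mini-batch SGD on a toy linear regression problem (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
learning_rate, batch_size = 0.1, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)       # small random fraction of the data
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)       # gradient of the mean squared error
    w -= learning_rate * grad                            # one SGD update

print("learned weights:", np.round(w, 2))   # close to [ 2. -1.  0.5]
```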


3 Natural Language Processing Models every ML Engineer should know

Amey Varangaonkar
18 Dec 2017
5 min read
[box type="note" align="" class="" width=""]This interesting excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. The book gives an advanced view of different natural language processing techniques and their implementation in R. [/box] Our article given below aims to introduce to the concept of language models and their relevance to natural language processing. In terms of natural language processing, language models generate output strings that help to assess the likelihood of a bunch of strings to be a sentence in a specific language. If we discard the sequence of words in all sentences of a text corpus and basically treat it like a bag of words, then the efficiency of different language models can be estimated by how accurately a model restored the order of strings in sentences. Which sentence is more likely: I am learning text mining or I text mining learning am? Which word is more likely to follow “I am…”? Basically, a language model assigns the probability of a sentence being in a correct order. The probability is assigned over the sequence of terms by using conditional probability. Let us define a simple language modeling problem. Assume a bag of words contains words W1, W2,………………….,Wn. A language model can be defined to compute any of the following: Estimate the probability of a sentence S1: P (S1) = P (W1, W2, W3, W4, W5) Estimate the probability of the next word in a sentence or set of strings: P (W3|W2, W1) How to compute the probability? We will use chain law, by decomposing the sentence probability as a product of smaller string probabilities: P(W1W2W3W4) = P(W1)P(W2|W1)P(W3|W1W2)P(W4|W1W2W3) N-gram models N-grams are important for a wide range of applications. They can be used to build simple language models. Let's consider a text T with W tokens. Let SW be a sliding window. If the sliding window consists of one cell (wi wi wi wi) then the collection of strings is called a unigram; if the sliding window consists of two cells, the output Is (w1 , w2)(w3 , w4)(w5 w5 , w5)(w1 , w2)(w3 , w4)(w5 , w5), this is called a bigram .Using conditional probability, we can define the probability of a word having seen the previous word; this is known as bigram probability. So the conditional probability of an element, given the previous element (wi -1), is: Extending the sliding window, we can generalize that n-gram probability as the conditional probability of an element given previous n-1 element: The most common bigrams in any corpus are not very interesting, such as on the, can be, in it, it is. In order to get more meaningful bigrams, we can run the corpus through a part-of-speech (POS) tagger. This would filter the bigrams to more content-related pairs such as infrastructure development, agricultural subsidies, banking rates; this can be one way of filtering less meaningful bigrams. A better way to approach this problem is to take into account collocations; a collocation is the string created when two or more words co-occur in a language more frequently. One way to do this over a corpus is pointwise mutual information (PMI). The concept behind PMI is for two words, A and B, we would like to know how much one word tells us about the other. For example, given an occurrence of A, a, and an occurrence of B, b, how much does their joint probability differ from the expected value of assuming that they are independent. 
This can be expressed as follows:

PMI(a, b) = log( P(a, b) / (P(a) P(b)) )

Unigram model: P_unigram(W1 W2 W3 W4) = P(W1) P(W2) P(W3) P(W4)
Bigram model: P_bigram(W1 W2 W3 W4) = P(W1) P(W2|W1) P(W3|W2) P(W4|W3)

More generally, the chain rule gives P(w1 w2 ... wn) = ∏ P(wi | w1 w2 ... wi-1). Applying the chain rule over long contexts is difficult to estimate; the Markov assumption is applied to handle such situations.

Markov Assumption

If we assume that the current string is independent of some word string far in the past, we can drop that string to simplify the probability. Say the history consists of three words, Wi, Wi-1, Wi-2; instead of estimating the probability P(Wi+1 | Wi, Wi-1, Wi-2), we can directly apply P(Wi+1 | Wi, Wi-1).

Hidden Markov Models

Markov chains are used to study systems that are subject to random influences. Markov chains model systems that move from one state to another in steps governed by probabilities. The same set of outcomes in a sequence of trials is called the states. Knowing the probabilities of the states is called the state distribution, and the state distribution in which the system starts is the initial state distribution. The probability of going from one state to another is called the transition probability. A Markov chain consists of a collection of states along with transition probabilities. The study of Markov chains is useful for understanding the long-term behavior of a system. Each arc is associated with a certain probability value, and all arcs coming out of each node must form a probability distribution. In simple terms, there is a probability associated with every transition between states.

Hidden Markov models are non-deterministic Markov chains. They are an extension of Markov models in which the output symbol is not the same as the state.

Language models are widely used in machine translation, spelling correction, speech recognition, text summarization, questionnaires, and many more real-world use cases. If you would like to learn more about how to implement the above language models, check out the book Mastering Text Mining with R. This book will help you leverage language models using popular packages in R for effective text mining.
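To make the unigram/bigram arithmetic concrete, here is a short illustrative Python sketch (not from the book, which uses R) that estimates bigram probabilities from a toy corpus by simple counting, then scores two word orders; the corpus and test sentences are invented for the example.

```python
# Illustrative sketch: maximum-likelihood bigram probabilities from raw counts.
from collections import Counter

corpus = [
    "i am learning text mining",
    "i am learning r",
    "text mining is fun",
]
# Flatten into one token stream, marking sentence starts with <s>.
tokens = [w for sentence in corpus for w in ("<s> " + sentence).split()]

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))

def bigram_prob(prev, word):
    # P(word | prev) = count(prev, word) / count(prev); returns 0 for unseen bigrams.
    return bigram_counts[(prev, word)] / unigram_counts[prev]

def sentence_prob(sentence):
    words = ("<s> " + sentence).split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sentence_prob("i am learning text mining"))   # > 0: word order seen in the corpus
print(sentence_prob("i text mining learning am"))   # 0.0: contains unseen bigrams
```

A real model would add smoothing for unseen bigrams and handle sentence boundaries more carefully; the point here is only the counting and the conditional probability.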


What can UX designers teach Machine Learning Engineers? To start with: Model Interpretability

Sugandha Lahoti
18 Dec 2017
7 min read
Machine Learning is driving many major innovations happening around the world. But while complex algorithms drive some of the most exciting inventions, it's important to remember that these algorithms are always designed. This is why incorporating UX into machine learning engineering could offer a way to build even better machine learning systems that put users first.

Why we need UX design in machine learning

Machine learning systems can be complex. They require pre-trained data and depend on a variety of variables to allow the algorithm to make 'decisions'. This means transparency can be difficult - and when things go wrong, it isn't easy to fix. Consider the ways that machine learning systems can be biased against certain people - that's down to problems in the training set and, subsequently, how the algorithm is learning. If machine learning engineers took a more user-centric approach to building machine learning systems - borrowing some core principles from UX design - they could begin to solve these problems and minimize the risk of algorithmic bias. After all, every machine learning model has an end user. Whether it's for recommending products, financial forecasting, or driving a car, the model is always serving a purpose for someone.

How UX designers can support machine learning engineers

By and large, machine learning today is mostly focused on the availability of data and on improving model performance by increasing learning capability. In this, however, a user-centric approach may be compromised. A tight interplay between UX design practices and machine learning is therefore essential to make ML discernible to all and to achieve model interpretability. UX designers can contribute to a series of tasks that can improve algorithmic clarity.

Most designers create a wireframe, which is a rough guide for the layout of a website or an app. The principles behind wireframing can be useful for machine learning engineers as they prototype their algorithms, providing a space to make considerations about what's important from a user perspective. User testing is also useful in the context of machine learning. Just as UX designers perform user testing for applications, going through a similar process for machine learning systems makes sense. This is most clear in the way companies test driverless cars, but anywhere that machine learning systems require or necessitate human interaction should go through some period of user testing.

A UX design approach can also help in building ML algorithms for different contexts and different audiences. For example, take the case of an emergency room in a hospital. Often the data required for building a decision support system for emergency patient cases is quite sparse. Machine learning can help mine relevant datasets and divide them into subgroups of patients, and UX design here can play the role of designing a particular part of the decision support system. UX professionals bring a human-centered design perspective to ML components, which means they also consider the user's perspective while integrating those components. Machine learning models generally tend to take control entirely away from the user. For instance, in a driverless vehicle, the car determines the route, speed, and other decisions. Designers also include user controls so that users do not lose their voice in the automated system.

Machine learning developers may at times unintentionally introduce implicit biases into their systems, which can have serious or negative side effects. A recent example of this was Microsoft's Tay, a Twitter bot that started tweeting racist comments after spending just a few hours on Twitter. UX designers plan for these biases on a project-by-project level as well as on a larger level, advocating for a broad range of voices. They also keep an eye on the social impact of ML systems by keeping a check on the input (as was the case with Microsoft Tay), to ensure that uncontrolled input does not lead to unintended output.

What are the benefits of bringing UX design into machine learning?

All machine learning systems and practitioners can benefit from incorporating UX design practice as a standard. Some benefits of this collaboration are:

Results generated from UX-enabled ML algorithms are transparent and easy to understand.
It helps end users understand how the product functions and visualize the results better.
A better understanding of algorithm results builds the user's trust in the system. This is important if the consequences of incorrect results are detrimental to the user.
It helps data scientists better analyse the results of an algorithm so as to subsequently make better predictions.
It aids in understanding the different stages of model building: from design, to development, to final deployment.

UX designers focus on building transparent ML systems by defining the problem through a storyboard rather than through constraints placed by data and other aspects. They become aware of and catch biases, ensuring an unbiased machine learning system. All of this ultimately results in better product development and improved user experience.

How companies leverage UX design with ML

Top-notch companies are looking at combining the benefits of UX design with machine learning to build systems which balance the back-end work (performance and usability) with the front end (user-friendly outputs).

Take Facebook, for example. Their News Feed ranking algorithm, an amalgamation of ML and UX design, works towards two goals. The first is showing the right content at the right time, which involves machine learning capabilities. The other is enhancing user interaction by displaying posts more prominently, so as to create more customer engagement and increase user dwell time.

Google's UX community has combined UX design with machine learning in an initiative known as human-centered machine learning (HCML). In this project, UX designers work in sync with ML developers to help them create unique machine learning products catering to human understanding. ML developers are in turn taught how to integrate UX into ML algorithms for a better user experience.

Airbnb created an algorithm to dynamically set and alter prices for their customers' units. However, on interacting with their customers, they found that users were hesitant to give full control to the system. Hence the UX design team altered the design to add minimum and maximum rent settings, and created a setting that allowed customers to set the general frequency of rentals. Thus, they approached the machine learning project with user experience in mind.

Salesforce has a Lightning Design System which includes a centralized design systems team of researchers, accessibility specialists, lead product designers, prototypers, and UX engineers. They work towards documenting visual systems and abstracting design patterns to assist ML developers.

Netflix has also plunged into this venture by offering their customers personalized recommendations as well as personalized visuals. They adjust the artwork or imagery representing their titles to capture the attention of a particular user. This, in turn, acts as a gateway into that title and gives users a visual perception of why a TV show or a movie is good for them, helping Netflix achieve user engagement as well as user retention.

The road ahead

In future, we will see most organizations having a blend of UX designers and data scientists in their teams to create user-friendly products. UX designers will work closely with developers to find unique ways of incorporating design ethics and abilities into machine learning findings and predictions. This will lead to new and better job opportunities for both designers and developers, with further expansion of their skill sets. In fact, it could give rise to a hybrid language, where algorithmic implementations are consolidated with design to make ML frameworks simpler for the clients.


Handpicked for your Weekend Reading - 15th Dec, 2017

Aarthi Kumaraswamy
16 Dec 2017
2 min read
As you gear up for the holiday season and the year-end celebrations, make a resolution to spend a fraction of your weekends in self-reflection and in honing your skills for the coming year. Here is the best of the DataHub for your reading this weekend. Watch out for our year-end special edition in the last week of 2017!

NIPS Special Coverage
A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh
How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey
6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel
For the complete coverage, visit here.

Experts in Focus
Ganapati Hegde and Kaushik Solanki, Qlik experts from Predoole Analytics, on how Qlik Sense is driving self-service Business Intelligence

3 things you should know that happened this week
Generative Adversarial Networks: Google open sources TensorFlow-GAN (TFGAN)
"The future is quantum" - Are you excited to write your first quantum computing code using Microsoft's Q#?
"The Blockchain to Fix All Blockchains" - Overledger, the meta blockchain, will connect all existing blockchains

Try learning/exploring these tutorials this weekend
Implementing a simple Generative Adversarial Network (GAN)
How Google's MapReduce works and why it matters for Big Data projects
How to write effective Stored Procedures in PostgreSQL
How to build a cold-start friendly content-based recommender using Apache Spark SQL

Do you agree with these insights/opinions?
Deep Learning is all set to revolutionize the music industry
5 reasons to learn Generative Adversarial Networks (GANs) in 2018
CapsNet: Are Capsule networks the antidote for CNNs kryptonite?
How AI is transforming the manufacturing Industry

How AI is transforming the manufacturing Industry

Kunal Parikh
15 Dec 2017
7 min read
After more than five decades of incremental innovation, the manufacturing sector is finally ready to pivot, thanks to Industry 4.0. Self-awareness in technology plays a key role in ushering in Industry 4.0, and AI in the manufacturing sector aims to achieve just that: the creation of systems that can perceive their environment and take action accordingly. One of the prominent minds in AI, Andrew Ng, believes factories to be AI's next frontier! Andrew is on a mission to AI-ify manufacturing with his new start-up, Landing.AI. For this initiative he has partnered with Foxconn, the world's largest contract manufacturer and maker of Apple iPhones. Together they aim to develop a wide range of AI transformation programs, from the introduction of new technologies and operational processes to automated quality control and much more.

In this AI-powered industrial revolution, machines are becoming smarter and interconnected. Manufacturers are using the embedded intelligence of machines to collect and analyze data and generate meaningful insights. These are then used to run equipment efficiently and optimize workflows across operations and supply chains, among other things. Thus, AI is leaving an indelible impact across the manufacturing cycle. Further, a new wave of automation is transforming the role of the human workforce, with AI-driven robots enabling production 24 hours a day. This is helping industrial environments gear up for the shift towards the smart factory. Below are some ways AI is revolutionizing manufacturing.

Predictive analytics for increased production output

Smart manufacturing systems are leveraging the power of predictive analysis and machine learning algorithms to enhance production capacity. Predictive analytics derives its power from the data collected by the devices or sensors embedded in a manufacturer's industrial equipment. These sensors become part of the IoT (Internet of Things), which collects and shares data with data scientists in the cloud. This setup is helping manufacturing industries move from a repair-and-replace to a predict-and-fix maintenance model, by enabling these businesses to retrieve the right information at the right time to make the right decisions. For instance, in a pump manufacturing company, data scientists could collect, store and analyze sensor data on machine attributes like heat, vibration and noise. This data can be stored in the cloud, allowing an array of analyses to be performed, from understanding machine performance to predicting and monitoring disruption in processes and equipment remotely. Further, syncing up production schedules with parts availability can ensure enhanced production output.
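As a concrete (and entirely hypothetical) illustration of the predict-and-fix idea described above, the sketch below trains a small scikit-learn classifier on made-up sensor readings (heat, vibration, noise) to flag machines likely to fail; real deployments would use far richer historical data and domain-specific features.

```python
# Illustrative sketch: predicting equipment failure from synthetic sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic sensor data: columns are heat (C), vibration (mm/s), noise (dB).
healthy = rng.normal(loc=[60, 2.0, 70], scale=[5, 0.5, 4], size=(n, 3))
failing = rng.normal(loc=[85, 5.0, 82], scale=[8, 1.2, 5], size=(n // 4, 3))
X = np.vstack([healthy, failing])
y = np.array([0] * n + [1] * (n // 4))        # 1 = machine failed soon after this reading

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))

# Score a new reading streaming in from a sensor: probability of imminent failure.
new_reading = np.array([[82.0, 4.6, 80.0]])
print("failure probability:", round(model.predict_proba(new_reading)[0, 1], 2))
```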
Enhancing product and service quality with machine and deep learning algorithms

Manufacturers can deploy supervised and unsupervised ML, DL and reinforcement learning algorithms to monitor quality issues in the manufacturing process. For instance, researchers at Lappeenranta University of Technology in Finland have developed an innovative welding system for high-strength steel. They used unsupervised learning to allow the system to mimic a human's ability to self-explore and self-correct. This welding system detects imperfections and self-corrects during the welding process using a new kind of sensor system controlled by a neural network program. Further, it also calculates other faults that may arise during the entire process.

Visual inspection technology in an industrial environment identifies both functional and cosmetic defects. IBM has developed a new offering for manufacturing clients to automate visual quality inspections. Rooted in deep learning, a centralized 'learning service' collects images of all products, normal and abnormal, and builds analytical models to recognize and classify different characteristics of machine parts and components as OK or NG: characteristics that meet quality specifications are considered OK, while those that don't are classified as NG.

Predictive maintenance for enhanced MRO (Maintenance, Repair and Overhaul) performance

Manufacturing industries strive for excellence throughout the production process. To ensure this, machinery embedded with sensors generates real-time performance and workload data, which helps in diagnosing faults and in predicting the need for equipment maintenance. For instance, a machine may break down due to lack of maintenance in the long run, incurring losses for the business. With predictive maintenance, businesses can be better equipped to handle equipment malfunction by identifying significant causal factors like weather, temperature and so on. Targeted predictive maintenance generates critical information, such as which machine parts will need replacing and when. This helps in reducing equipment downtime, lowering maintenance costs and pre-emptively addressing aging equipment.

Reinforcement learning for managing warehouses

Large warehouses face challenging times in streamlining space, managing inventories and reducing transit time. Manufacturing industries are employing reinforcement learning for efficient warehouse management. The RL approach uses trial-and-error iterations within an environment to achieve a particular goal. Imagine what a breeze warehousing could be, and the associated cost savings, if robots could pick up the right products from various lots and move them to the right destinations with great precision. Here, reinforcement learning based algorithms can improve the efficiency of such intelligent warehouses with multi-robot systems by addressing task scheduling and path planning issues. Fanuc, a Tokyo-based company, employs robots with reinforcement learning capabilities to perform such tasks with great agility and precision.

AI in supply chain management

AI is helping manufacturers gain an in-depth understanding of the complex variables at play in the supply chain and predict future scenarios. To enable seamless insight generation, businesses are opting for more flexible and efficient cyber-physical systems. These intelligent systems are self-configuring and self-optimizing structures that can predict problems and minimize losses, helping businesses innovate rapidly by reducing time to market, foreseeing uncertainties and dealing with them promptly. Siemens, for example, is creating a self-organizing factory that aims to automate the entire supply chain by generating work orders using demand and order information.

Implications of AI in Industry 4.0

Industry 4.0 is the new way of manufacturing using automation, devices connected to the IoT, cloud and cognitive computing. It propagates the concept of the 'smart factory', in which cyber-physical systems observe the physical processes of the factory and make discrete decisions accordingly. As AI finds its application in Industry 4.0, computers will merge with robotics to automate and maximize the efficiency of industrial processes.
Powered by machine learning algorithms, these computer systems can control robots with minimum human intervention. For instance, in a manufacturing setup, AI can work alongside systems like SCADA to control industrial processes efficiently. These systems can monitor, collect and process real-time data by directly interacting with devices such as sensors, pumps and motors through human-machine interface (HMI) software. Such machine-to-machine communication systems give new direction to the potential of human-machine collaboration, changing the way we see workforce management. Industry 4.0 will favor those who can build software, hardware, and firmware, those who can adapt and maintain new equipment, and those who can design automation and robotics.

Within Industry 4.0, augmented reality and virtual reality are other cutting-edge, production-ready technologies that are making the idea of a smart factory a reality. The recent relaunch of Google Glass, especially designed for the factory floor, is worth a mention here. The Wi-Fi-enabled glasses allow factory workers, mechanics, and other technicians to view instructional videos, manuals, training videos and more, all in their line of sight. This helps in maintaining higher standards of work while ensuring safety and agility.

In Conclusion

Manufacturing industries are gearing up to harness AI along with IoT and AR/VR to create an agile manufacturing environment and to make smarter, real-time decisions. AI is helping realize the full potential of the Industrial Internet of Things (IIoT) by applying machine learning, deep learning and other evolutionary algorithms to sensor data. Human-machine collaboration is transforming the scene at fulfillment centers, creating a win-win situation for both humans and robots: robots equipped with motion sensors move across the field of QR codes with precision and agility, without running into each other, creating a fascinating view. Imagine a real-life JARVIS from the movie Iron Man managing entire supply chains or factory spaces. The day is not far away when we will see a JARVIS-like advanced virtual assistant that uses sensors to collect real-time data, AI to process it, and blockchain to securely transmit the information, while using AR to interact with us visually. It could take care of system and mechanical failures remotely while taking control of the factory for efficient energy management. Manufacturers could go save the world or unveil new products, Iron Man style!


Glancing at the Fintech growth story - Powered by ML, AI & APIs

Kartikey Pandey
14 Dec 2017
4 min read
When MyBucks, a Luxembourg-based Fintech firm, started scaling up their business in other countries, they faced a daunting challenge: reducing the timeline for processing credit requests from over a week to just a few minutes. Any financial institution dealing with lending can relate to the challenges associated with giving credit - checking credit history, tracking past fraudulent activities, and so on. This automatically makes the lending process tedious and time-consuming. To add to this, MyBucks also aimed to make their entire lending process extremely simple and attractive to customers. MyBucks' promise to its customers: no more visiting branches and seeking approvals - simply log in from your mobile phone and apply for a loan; we will handle the rest in a matter of minutes.

Machine Learning has triggered a whole new segment in the Fintech industry: automated lending platforms. MyBucks is one such player; some other players in this field are OnDeck, Kabbage, and LendUp. What might appear transformational in MyBucks' case is just one of many examples of how Machine Learning is empowering a large number of finance companies to deliver disruptive products and services. So what makes Machine Learning so attractive to Fintech, and how has Machine Learning fueled this industry's phenomenal growth? Read on.

Quicker and more efficient credit approvals

Long before Machine Learning was established in large industries as it is today, it was quite commonly used to solve fraud detection problems. This primarily involved building a self-learning model that used a training dataset to begin with and expanded its learning based on incoming data. This way, the system could distinguish a fraudulent activity from a non-fraudulent one. Modern-day Machine Learning systems are no different: they use the very same predictive models that rely on segmentation algorithms and methods. Fintech companies are investing in big data analytics and machine learning algorithms to make credit approvals quicker and more efficient. These systems are designed to pull data from several sources online, develop a good understanding of transactional behaviour, purchasing patterns, and social media behaviour, and accordingly decide creditworthiness.

Robust fraud prevention and error detection methods

Machine Learning is empowering banking institutions and finance service providers to embrace artificial intelligence and combat what they fear most: fraudulent activities. Faster and more accurate processing of transactions has always been a fundamental requirement in the finance industry. An increasing number of startups are now developing Machine Learning and Artificial Intelligence systems to combat the challenges around fraudulent transactions, or even instances of incorrectly reported transactions. BillGuard is one such company: it uses big data analytics to make sense of the millions of consumers who report billing complaints. The AI system then builds its intelligence using this crowd-sourced data and reports incorrect charges back to consumers, thereby helping them get their money back.

Reinventing banking solutions with the powerful combination of APIs and Machine Learning

Innovation is key to survival in the finance industry. The 2017 PwC global fintech report suggests that incumbent finance players are worried about advances in the Fintech industry that pose direct competition to banks. But the way ahead for banks definitely goes through Fintech, which is evolving every day. In addition to Machine Learning, the API is the other strong pillar driving innovation in Fintech. Developments in Machine Learning and AI are reinventing the traditional lending industry, and APIs are acting as the bridge between classic banking problems and future possibilities. Established banks are now taking the API (Application Programming Interface) route to tie up with innovative Fintech players in their endeavour to deliver modern solutions to customers. Fintech players are also able to reap the benefits of working with the old guard, the banks, in a world where APIs have suddenly become the new common language.

So what is this new equation all about? API solutions are helping bridge the gap between the old and the new, by enabling collaboration in newer ways to solve traditional banking problems. This impact can be seen far and wide within the industry, and Fintech as an industry isn't limited to lending tech and everyday banking alone. Several verticals within the industry now feel the increased impact of Machine Learning: payments, wealth management, capital markets, insurance, blockchain, and now even chatbots for customer service, to name a few. So where do you think this partnership is headed? Please leave your comments below and let us know.
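As a purely illustrative aside (not how MyBucks, BillGuard or any firm mentioned above actually works), the sketch below shows the general shape of the fraud-detection approach described earlier: an unsupervised anomaly detector fitted on synthetic transaction features, flagging unusual transactions for review.

```python
# Illustrative sketch: flagging anomalous transactions with an unsupervised detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic transaction features: amount (USD), hour of day, distance from home (km).
normal_tx = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),   # typical amounts around $30-60
    rng.integers(8, 22, size=5000),                  # daytime and evening purchases
    rng.exponential(scale=5.0, size=5000),           # usually close to home
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_tx)

# Incoming transactions: one ordinary, one large late-night purchase far from home.
incoming = np.array([
    [45.0, 14, 3.2],
    [2400.0, 3, 850.0],
])
print(detector.predict(incoming))   # 1 = looks normal, -1 = flagged for review
```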


Why training a Generative Adversarial Network (GAN) Model is no piece of cake

Shoaib Dabir
14 Dec 2017
5 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book gives a complete coverage of Generative adversarial networks. [/box] The article highlights some of the common challenges that a developer might face while using GAN models. Common challenges faced while working with GAN models Training a GAN is basically about two networks, generator G(z) and discriminator D(z) trying to race against each other and trying to reach an optimum, more specifically a nash equilibrium. The definition of nash equilibrium as per Wikipedia: (in economics and game theory) a stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged. 1. Setting up failure and bad initialization If you think about it, this is exactly what a GAN is trying to do; the generator and discriminator reach a state where they cannot improve further given the other is kept unchanged. Now the setup of gradient descent is to take a step in a direction that reduces the loss measure defined on the problem—but we are by no means enforcing the networks to reach Nash equilibrium in GAN, which have non-convex objective with continuous high dimensional parameters. The networks try to take successive steps to minimize a non-convex objective and end up in an oscillating process rather than decreasing the underlying true objective. In most cases, when your discriminator attains a loss very close to zero, then right away you can figure out something is wrong with your model. But the biggest pain-point is figuring out what is wrong. Another practical thing done during the training of GAN is to purposefully make one of the networks stall or learn slower, so that the other network can catch up. And in most scenarios, it's the generator that lags behind so we usually let the discriminator wait. This might be fine to some extent, but remember that for the generator to get better, it requires a good discriminator and vice versa. Ideally the system would want both the networks to learn at a rate where both get better over time. The ideal minimum loss for the discriminator is close to 0.5— this is where the generated images are indistinguishable from the real images from the perspective of the discriminator. 2. Mode collapse One of the main failure modes with training a generative adversarial network is called mode collapse or sometimes the helvetica scenario. The basic idea is that the generator can accidentally start to produce several copies of exactly the same image, so the reason is related to the game theory setup we can think of the way that we train generative adversarial networks as first maximizing with respect to the discriminator and then minimizing with respect to the generator. If we fully maximize with respect to the discriminator before we start to minimize with respect to the generator everything works out just fine. But if we go the other way around and we minimize with respect to the generator and then maximize with respect to the discriminator, everything will actually break and the reason is that if we hold the discriminator constant it will describe a single region in space as being the point that is most likely to be real rather than fake and then the generator will choose to map all noise input values to that same most likely to be real point. 3. 
Problem with counting GANs can sometimes be far-sighted and fail to differentiate the number of particular objects that should occur at a location. As we can see, it gives more numbers of eyes in the head than originally present: 4. Problems with perspective GANs sometime are not capable of differentiating between front and back view and hence fail to adapt well with 3D objects while generating 2D representation from it as follows: 5. Problems with global structures GANs do not understand a holistic structure similar to problems with perspective. For example, in the bottom left image, it generates an image of a quadruple cow, that is, a cow standing on its hind legs and simultaneously on all four legs. That is definitely unrealistic and not possible in real life! It is very important when its comes to train GAN models towards the execution and there would be some common challenges that can come ahead. The major challenge that arises is the failure of the setup and also the one that is mainly faced in training GAN model is mode collapse or sometimes the helvetica scenario. It highlights some of the common problems like with counting, perspective or be global structure. The above listings are some of the major issues faced while training a GAN model. To read more on solutions with real world examples, you will need to check out this book Learning Generative Adversarial Networks.  
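To make the alternating optimization concrete, here is a minimal, hypothetical TensorFlow 2 sketch of a GAN training loop on toy 1-D data. It is not taken from the book; the model sizes, learning rates, and data distribution are illustrative assumptions. Watching the discriminator loss during training - and noticing when it collapses toward zero - is one practical way to catch the failure modes described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical toy setup: "real" data is 1-D samples from N(3, 1); latent size 8.
LATENT_DIM = 8
BATCH_SIZE = 64

generator = tf.keras.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])

discriminator = tf.keras.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),  # raw logits
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
# Letting one network learn a little slower than the other is a common heuristic;
# these learning rates are illustrative, not prescriptive.
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(4e-4)

@tf.function
def train_step(real_batch):
    noise = tf.random.normal([BATCH_SIZE, LATENT_DIM])
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_batch = generator(noise, training=True)
        real_logits = discriminator(real_batch, training=True)
        fake_logits = discriminator(fake_batch, training=True)
        # Discriminator wants real -> 1 and fake -> 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator wants the discriminator to label fakes as real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss

for step in range(2001):
    real = tf.random.normal([BATCH_SIZE, 1], mean=3.0, stddev=1.0)
    d_loss, g_loss = train_step(real)
    if step % 500 == 0:
        # A discriminator loss collapsing toward zero is an early warning sign
        # that the generator is no longer learning anything useful.
        print(f"step {step}: d_loss={float(d_loss):.3f}, g_loss={float(g_loss):.3f}")
```

In practice you would swap the toy data for real images and use convolutional networks, but the alternating update structure - and the habit of monitoring both losses - stays the same.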

CapsNet: Are Capsule networks the antidote for CNNs kryptonite?

Savia Lobo
13 Dec 2017
5 min read
Convolutional Neural Networks (CNNs) are a family of neural networks that have excelled in areas such as image recognition and classification. They are among the most popular neural network models and provide state-of-the-art results in nearly all image recognition tasks. However, CNNs have drawbacks, which are discussed later in this article. To address these issues, Geoffrey Hinton, popularly known as the Godfather of Deep Learning, recently published a research paper along with two other researchers, Sara Sabour and Nicholas Frosst. In this paper, they introduced CapsNet, or Capsule Network - a neural network based on a multi-layer capsule system. Let's explore the issue with CNNs and how CapsNet advances on them.

What is the issue with CNNs?

Convolutional Neural Networks are known to handle image classification tasks seamlessly. They are experts at learning at a granular level: the lower layers detect edges and the shape of an object, and the higher layers detect the image as a whole. However, CNNs perform poorly when an image has a slightly different orientation (a rotation or a tilt), because they compare every image against the ones seen during training. For instance, when detecting a face, a CNN checks for facial features such as a nose, two eyes, a mouth, and eyebrows, irrespective of their placement. This means a CNN may identify a face incorrectly when the placement of an eye and the nose is not as conventionally expected, for example in a profile view. In other words, the orientation of, and spatial relationships between, the objects within an image are not considered by a CNN.

To make CNNs understand orientation and spatial relationships, they were trained exhaustively with images taken from all possible angles. Unfortunately, this greatly increased training time, and performance did not improve much. Pooling was also introduced at each layer of the CNN model for two reasons: first, to reduce the time invested in training, and second, to bring positional invariance into CNNs. The result was false positives - an object would be detected in an image without its orientation being checked, and the image would be incorrectly declared a match. Positional invariance thus made CNNs susceptible to even minute changes in viewpoint. Instead of invariance, what CNNs need is equivariance - a property that lets the network adapt to changes in rotation or proportion within an image. This equivariance is now possible via the Capsule Network.

The solution: Capsule Network

CapsNet, or Capsule Network, is an encapsulation of nested neural network layers. A traditional neural network contains multiple layers, whereas a capsule network contains multiple layers within a single capsule. CNNs go deeper in terms of height, whereas a capsule network deepens in terms of nesting, or internal structure. Such a model is highly robust to the geometric distortions and transformations that result from non-ideal camera angles, and so it handles orientations, rotations, and so on exceptionally well.

CapsNet architecture (Source: https://arxiv.org/pdf/1710.09829.pdf)

Key features

Layer-based squashing

In a typical Convolutional Neural Network, a squashing function is applied at each layer of the model. A squashing function compresses its input towards one end of a small interval, introducing the nonlinearity that makes the network effective. In a Capsule Network, by contrast, the squashing function proposed by Hinton in the research paper is applied to the vector output of each capsule: instead of applying nonlinearity to each neuron, it applies it to a group of neurons, i.e. the capsule. The squashing function shrinks short vectors towards zero length and limits long vectors to a length just below 1 (a small sketch of it appears at the end of this article).

Dynamic routing

The dynamic routing algorithm in CapsNet replaces the scalar-output feature detectors of a CNN with vector-output capsules. The max pooling of CNNs, which led to positional invariance, is replaced with "routing by agreement": as data is forward propagated, it is sent to the most relevant capsule in the layer above. Although dynamic routing adds extra computational cost to the capsule network, it has proved advantageous, making the network more scalable and adaptable.

Training the Capsule Network

The capsule network is trained on MNIST, a dataset of more than 60,000 handwritten digit images that is commonly used to test machine learning algorithms. The capsule model is trained for 50 epochs with a batch size of 128, where each epoch is a complete run through the training dataset. A TensorFlow implementation of CapsNet based on Hinton's research paper is available in a GitHub repository, and CapsNet can also be implemented using other deep learning frameworks such as Keras, PyTorch, and MXNet.

CapsNet is a recent breakthrough in the field of deep learning and promises to benefit organizations with accurate image recognition tasks. Implementations of CapsNet are slowly catching up and are expected to reach parity with CNNs, but so far they have been trained on a very simplistic dataset, MNIST, and still have to prove themselves on other datasets. As time advances and CapsNet is trained within different domains, it will be exciting to see how it moulds itself into a faster and more efficient training technique for deep learning models.
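For reference, here is a small NumPy sketch of the squashing nonlinearity as defined in the CapsNet paper, v_j = (||s_j||^2 / (1 + ||s_j||^2)) * (s_j / ||s_j||). The array shapes and the epsilon constant below are illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash capsule vectors: short vectors shrink towards length 0,
    long vectors approach (but never exceed) length 1; direction is preserved."""
    squared_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    norm = np.sqrt(squared_norm + eps)           # eps avoids division by zero
    scale = squared_norm / (1.0 + squared_norm)  # length factor in [0, 1)
    return scale * s / norm

# Toy example: a batch of 2 capsules, each an 8-dimensional output vector.
capsules = np.array([
    0.05 * np.ones(8),   # short vector -> squashed close to zero length
    5.00 * np.ones(8),   # long vector  -> squashed to length just under 1
])
squashed = squash(capsules)
print(np.linalg.norm(squashed, axis=-1))  # roughly [0.02, 0.99]
```

In a real implementation the same formula would be applied to TensorFlow or PyTorch tensors inside the capsule layer, but the behaviour is identical.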

5 application development tools that will matter in 2018

Richard Gall
13 Dec 2017
3 min read
2017 has been a hectic year, not least in application development. But it's time to look ahead to 2018. You can read what 'things' we think are going to matter here, but these are the key tools we think will define the next 12 months in the area.

1. Kotlin

Kotlin has been one of the most notable languages of 2017. Its adoption has grown dramatically over the last 12 months, and it signals significant changes in what engineers want and need from a programming language. We think it's likely to challenge Java's dominance throughout 2018 as more and more people adopt it. If you want a rundown of the key reasons why you should start using Kotlin, you could do a lot worse than this post on Medium. Learn Kotlin. Explore Kotlin eBooks and videos.

2. Kubernetes

Kubernetes is a tool that has been following in Docker's slipstream. It has been a core part of the growth of containerization, and we're likely to see it go from strength to strength in 2018 as the technology matures and container deployments continue to grow in size and complexity. Kubernetes' success and importance were underlined earlier this year when Docker announced that its enterprise edition would support Kubernetes. Clearly, if Docker paved the way for the container revolution, Kubernetes is consolidating it and helping teams take the next step with containerization. Find Packt's Kubernetes eBooks and videos here.

3. Spring Cloud

This isn't a hugely well-known tool, but 2018 might just be the year the world starts to pay it more attention. In many respects Spring Cloud is a thoroughly modern software project, perfect for a world where microservices reign supreme. Following the core principles of Spring Boot, it essentially enables you to develop distributed systems in a really efficient and clean way. Spring is interesting because it represents the way Java is responding to the growth of open source software and the decline of the more traditional enterprise system.

4. Java 9

This leads us nicely on to Java 9. Here we have a language that is thinking differently about itself and moving in a direction heavily influenced by a software culture quite distinct from where it belonged 5-10 years ago. The new features are enough to excite anyone who has worked with Java before. They have all been developed to help reduce the complexity of modern development, modeled around the needs of developers in 2017 - and 2018. And they all help to radically improve the development experience - which, if you've been reading up, you'll know is going to really matter for everyone in 2018. Explore Java 9 eBooks and videos here.

5. ASP.NET Core

Microsoft doesn't always get enough attention. But it should, because a lot has changed over the last two years. Similar to Java, the organization and its wider software ecosystem have developed in a way that moves quickly and responds to developer and market needs impressively. ASP.NET Core is evidence of that. A step forward from the formidable ASP.NET, this cross-platform framework has been created to fully meet the needs of today's cloud-based, fully connected applications that run on microservices. It's worth comparing it with Spring Cloud above - both will help developers build a new generation of applications, and both represent software's old-guard establishment embracing the future and pushing things forward. Discover ASP.NET Core eBooks and videos.

5 web development tools that will matter in 2018

Richard Gall
12 Dec 2017
4 min read
It's been a year of change and innovation in web development. We've seen Angular shifting quickly, React rising to dominance, and the surprising success of Vue.js. We've discussed what 'things' will matter in web development in 2018 here, but let's get down to the key tools you might be using or learning. Read the 5 trends and issues we think will matter in web development in 2018 here.

1. Vue.js

If you remember back to 2016, the JavaScript framework debate centred on React and Angular. Which one was better? You didn't have to look hard to find Quora and Reddit threads, or Medium posts, comparing and contrasting the virtues of one or the other. But in 2017, Vue has picked up enough pace to enter the running as a real competitor to the two hyped tools. What's most notable about Vue.js is simply how much people enjoy using it. The State of Vue.js report found that 96% of users would use it for their next project. While it's clearly pointless to say that one tool is 'better' than another, the developer experience offered by Vue says a lot about what's important to developers - and it's only likely to become more popular in 2018. Explore Vue eBooks and videos.

2. Webpack

Webpack is a tool that's been around for a number of years but has recently seen its popularity grow. Again, this is likely down to the increased emphasis on improving the development experience - making development easier and more enjoyable. Webpack, quite simply, brings all the assets you need in front-end development - JavaScript, fonts, images, and so on - together in one place. This is particularly useful if you're developing complicated front ends. So, if you're looking for something that's going to make complexity more manageable in 2018, we certainly recommend spending some time with Webpack. Learn Webpack with Deploying Web Applications with Webpack.

3. React

Okay, you were probably expecting to see React. But why not include it? It's gone from strength to strength throughout 2017 and is only going to continue to grow in popularity throughout 2018. It's important, though, that we don't get caught up in the hype - that, after all, is one of the primary reasons we've seen JavaScript fatigue dominate the conversation. Instead, React's success depends on how we integrate it within our wider tech stacks - with tools like webpack, for example. Ultimately, if React continues to let developers build incredible UIs in a way that's relatively stress-free, it won't be disappearing any time soon. Discover React content here.

4. GraphQL

GraphQL might seem a little left field, but this tool built by Facebook has quietly been infiltrating development toolchains since it was made public back in 2015. It's seen by some as software that's going to transform the way we build APIs. This article explains everything you need to know about GraphQL incredibly well, but to put it simply, GraphQL "is about designing your APIs more effectively and getting very specific about how clients access your data". Being built by Facebook, it's a tool that integrates very well with React - if you're interested, this case study by the New York Times explains how GraphQL and React played a part in their website redesign in 2017. Learn GraphQL with React and Relay. Download or stream our video.

5. WebAssembly

While we don't want to get sucked into the depths of the hype cycle, WebAssembly is one of the newest and most exciting things in web development. WebAssembly is, according to the project site, "a new portable size- and load-time-efficient format suitable for the web". The most important thing you need to know is that it's fast - faster than JavaScript. "Unlike other approaches that require plug-ins to achieve near-native performance in the browser, WebAssembly runs entirely within the Web Platform. This means that developers can integrate WebAssembly libraries for CPU-intensive calculations (e.g. compression, face detection, physics) into existing web apps that use JavaScript for less intensive work," explains Mozilla fellow David Bryant in this Medium post. We think 2018 will be the year WebAssembly finally breaks through and makes it big - perhaps even offering a way to move past conversations around JavaScript fatigue.

5 reasons to learn Generative Adversarial Networks (GANs) in 2018

Savia Lobo
12 Dec 2017
5 min read
Generative Adversarial Networks (GANs) are a prominent branch of machine learning research today. Deep neural networks require a lot of data to train on and perform poorly when the data provided is insufficient. GANs can overcome this problem by generating new, realistic data, without resorting to tricks like data augmentation. As the application of GANs in industry is still in its infancy, they are considered a highly desirable niche skill. Hands-on experience with them raises the bar in the job market: it can fetch you higher pay than your colleagues and can be the feature that makes your resume stand apart.

(Source: Gartner's Hype Cycle 2017)

GANs, along with CNNs and RNNs, are part of the deep neural network experience that is in demand in the industry. Here are five reasons why you should learn GANs today, and how Kuntal Ganguly's book, Learning Generative Adversarial Networks, helps you do just that. Kuntal is a big data analytics engineer at Amazon Web Services. He has around 7 years of experience building large-scale, data-driven systems using big data frameworks and machine learning, and has designed, developed, and deployed several large-scale distributed applications single-handedly. Kuntal is a seasoned author with books ranging across the data science spectrum, from machine learning and deep learning to Generative Adversarial Networks. The book shows how to implement GANs in your machine learning models in a quick and easy format, with plenty of real-world examples and hands-on tutorials.

1. Unsupervised learning now a cakewalk with GANs

A major challenge of unsupervised learning is the massive amount of unlabelled data one needs to work through as part of data preparation. In traditional neural networks, labelling data is both costly and time-consuming. Generative Adversarial Networks make a creative aspect of deep learning possible: neural networks capable of generating realistic images from real-world datasets (such as MNIST and CIFAR). GANs provide an easy way to train DL algorithms by slashing the amount of data required to train the neural network models, with no labelling of data required. The book uses a semi-supervised approach to solve the problem of unsupervised learning for classifying images, an approach that can easily be carried over into your own problem domain.

2. GANs help you change a horse into a zebra using image style transfer

https://www.youtube.com/watch?v=9reHvktowLY

Turning an apple into an orange is magic, and GANs can do this magic without casting a spell. In image-to-image style transfer, the styling of one image is applied to another. GANs can perform image-to-image translations across various domains - such as changing an apple into an orange, or a horse into a zebra - using Cycle-Consistent Generative Adversarial Networks (CycleGANs); a minimal sketch of the cycle-consistency idea appears at the end of this article. Detailed examples of how to turn an image of an apple into an orange using TensorFlow, and how to turn an image of a horse into a zebra using a GAN model, are given in the book.

3. GANs take your text as input and output an image

Generative Adversarial Networks can also be utilized for text-to-image synthesis - for example, generating a photo-realistic image from a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of image and text in a multimodal space for both the generator and the discriminator. Kuntal showcases the technique of stacking multiple generative networks to generate realistic images from textual information using StackGANs. The book then goes on to explain the coupling of two generative networks to automatically discover relationships among various domains (for example, the relationship between shoes and handbags, or between an actor and an actress) using DiscoGANs.

4. GANs + transfer learning = no more building models from scratch

(Source: Learning Generative Adversarial Networks)

Data is the basis for training any machine learning model, and a scarcity of it can lead to a poorly trained model with high chances of failure. Some real-life scenarios may not have sufficient data, hardware, or resources to train bigger networks to the desired accuracy. So is training from scratch a must? A well-known deep learning technique that adapts an existing trained model to a task similar to the one it was trained for is known as transfer learning. The book showcases transfer learning through hands-on examples, and then combines transfer learning and GANs to generate high-resolution realistic images with facial datasets. You will also learn how to create artistic hallucinations on images beyond GANs.

5. GANs help you take machine learning models to production

Most machine learning tutorials, video courses, and books explain the training and evaluation of models. But how do we take a trained model to production, put it to use, and make it available to customers? In the book, the author takes an example - a facial correction system built on the LFW dataset - to automatically correct corrupted images using your trained GAN model. The book also covers several techniques for deploying machine learning or deep learning models in production, both in data centers and in the cloud, with microservice-based containerized environments. You will also learn how to run deep models in a serverless environment and with managed cloud services.

This article just scratches the surface of what is possible with GANs and why learning them will change how you think about deep neural networks. To know more, grab your copy of Kuntal Ganguly's book on GANs: Learning Generative Adversarial Networks.
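The "cycle consistency" that gives CycleGAN its name is easy to state in code: translating an image to the other domain and back should reproduce the original image. Below is a minimal, hypothetical TensorFlow sketch of that loss term; it is not code from the book, and the generator stand-ins, tensor shapes, and weight of 10 are illustrative assumptions.

```python
import tensorflow as tf

def cycle_consistency_loss(real_x, real_y, G, F, weight=10.0):
    """CycleGAN-style cycle loss: G maps domain X -> Y, F maps Y -> X.
    Translating X -> Y -> X (and Y -> X -> Y) should return the original images."""
    reconstructed_x = F(G(real_x))
    reconstructed_y = G(F(real_y))
    loss = (tf.reduce_mean(tf.abs(real_x - reconstructed_x)) +
            tf.reduce_mean(tf.abs(real_y - reconstructed_y)))
    return weight * loss

# Toy usage with identity functions standing in for the two real generator networks:
x = tf.random.normal([4, 64, 64, 3])  # e.g. a batch of "horse" images
y = tf.random.normal([4, 64, 64, 3])  # e.g. a batch of "zebra" images
print(float(cycle_consistency_loss(x, y, G=lambda t: t, F=lambda t: t)))  # 0.0
```

In a full CycleGAN this term is added to the usual adversarial losses of both generator/discriminator pairs, which is what keeps the translations from drifting arbitrarily far from the input.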

5 data science tools that will matter in 2018

Richard Gall
12 Dec 2017
3 min read
We know your time is valuable. That's why what matters is important. We've written about the trends and issues that are going to matter in data science, but here you can find 5 data science tools that you need to pay attention to in 2018. Read our 5 things that matter in data science in 2018 here.

1. TensorFlow

Google's TensorFlow has been one of the biggest library hits of 2017. It has arguably done a lot to make machine learning more accessible than ever before. That means more people actually building machine learning and deep learning algorithms, and the technology moving beyond the domain of data professionals and into other fields. So, if TensorFlow has passed you by, we recommend you spend some time exploring it. It might just give your skill set the boost you're looking for. Explore TensorFlow content here.

2. Jupyter

Jupyter isn't a new tool, sure. But it's so crucial to the way data science is done that its importance can't be overstated, and that will only grow as pressure is placed on data scientists and analysts to communicate and share data in ways that empower stakeholders in a diverse range of roles and departments. It's also worth mentioning its relationship with Python: we've seen Python go from strength to strength throughout 2017, with no signs of letting up, and the close relationship between the two will only make Jupyter more popular across the data science world. Discover Jupyter eBooks and videos here.

3. Keras

In a year when deep learning has captured the imagination, it makes sense to include both libraries helping to power it. It's a close call between Keras and TensorFlow as to which deep learning framework is 'better' - ultimately, like everything, it's about what you're trying to do. This post explores the difference between Keras and TensorFlow very well; the conclusion is that while TensorFlow offers more 'control', Keras is the library you want if you simply need to get up and running. Both libraries have had a huge impact in 2017, and we're only going to be seeing more of them in 2018. Learn Keras. Read Deep Learning with Keras.

4. Auto-sklearn

Automated machine learning is going to become incredibly important in 2018. As pressure mounts on engineers and analysts to do more with less, tools like auto-sklearn will be vital in reducing some of the 'manual labour' of algorithm selection and tuning.

5. Dask

This one might be a little unexpected. We know just how popular Apache Spark is when it comes to distributed and parallel computing, but Dask represents an interesting competitor that's worth watching throughout 2018. Its high-level API integrates exceptionally well with Python libraries like NumPy and pandas; it's also much more lightweight than Spark, so it could be a good option if you want to avoid building out a weighty big data tech stack. Explore Dask in the latest edition of Python High Performance.
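As a quick illustration of that NumPy-style API, here is a minimal Dask sketch (the array size and chunking below are arbitrary assumptions): the expression is built lazily and only evaluated, chunk by chunk and in parallel, when compute() is called.

```python
import dask.array as da

# A 10,000 x 1,000 random array, split into 1,000 x 1,000 chunks.
x = da.random.random((10_000, 1_000), chunks=(1_000, 1_000))

# Familiar NumPy-style expression; nothing is computed yet.
col_std = (x - x.mean(axis=0)).std(axis=0)

# compute() triggers the actual work, scheduled across the chunks in parallel.
print(col_std[:5].compute())
```

The same lazy, chunked approach carries over to dask.dataframe for pandas-style workloads, which is a large part of why it feels lighter-weight than standing up a Spark cluster.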

5 things that will matter in application development in 2018

Richard Gall
11 Dec 2017
4 min read
Things change quickly in application development. Over the past few years we've seen it merge with other fields: with the web becoming more app-like, DevOps turning everyone into a part-time sysadmin (well, sort of), and the full-stack trend shifting expectations about the modern programmer skill set, the field has become incredibly fluid and open. That means 2018 will present a wealth of challenges for application developers - but of course there will also be plenty of opportunities for the curious and enterprising. So what's going to be most important in 2018? What's really going to matter? Take a look below at our list of 5 things that will matter in application development in 2018.

1. Versatile languages that can be used on both client and server

Versatility is key to being a successful programmer today. That doesn't mean the age of specialists is over, but rather that you need to be a specialist in everything. And when versatility is important to your skill set, it also becomes important for the languages we use. It's for that reason that we're starting to see the increasing popularity of languages like Kotlin and Go, and why Python continues to be popular - it's just so versatile. This is important when you're thinking about how to invest your learning time. Of course everyone is different, but learning languages that can help you do multiple things and solve different problems can be hugely valuable. Investing your energy in the most versatile languages will be well worth your time in 2018.

2. The new six-month Java release cycle

This will be essential for Java programmers in 2018. Following the release of Java 9 in late 2017, the new cycle will kick in from early 2018. This might mean there's a little more for developers to pay attention to, but it should make life easier, as Oracle will be able to update and add new features to the language more effectively than ever before. From a more symbolic point of view, this move hints at the deepening of open source culture in 2018, with Oracle aiming to satisfy developers working on smaller systems, keen to constantly innovate, as much as its established enterprise clients.

3. Developing usable and useful conversational UI

Conversational UI has been a 'thing' for some time now, but it hasn't quite captured the imagination of users. This is likely because it simply hasn't proved that useful yet - like 3D film, it feels like too much of a gimmick, maybe even too much of a hassle. It's crucial - if only to satisfy the hype - that developers finally find a way to make conversational UI work. To really make it work we're ultimately going to need to join the dots between exceptionally good artificial intelligence and a brilliant user experience - building algorithms that 'understand' user needs and can adapt to what people want.

4. Microservices

Microservices certainly won't be new in 2018, but they are going to play a huge part in how software is built. Put simply, if they're not important to you yet, they will be. We're going to see more organizations moving away from monolithic architectures, looking to engineering teams to produce software in ways that are much more dynamic and much more agile. Yes, these conversations have been happening for a number of years; but like everything in tech, change happens at different speeds. It's only now, as technologies mature, developer skill sets change, and management focus shifts, that broader changes take place.

5. Taking advantage of virtual and augmented reality

Augmented Reality (AR) and Virtual Reality (VR) have driven huge innovations in fields like game development. But in 2018, we're going to see both expand beyond gaming and into other fields. It's already happening in many areas, such as healthcare, and for engineers and product developers/managers it's going to be an interesting 12 months to see how the market changes.