Tech Guides

852 Articles

Open Data Institute: Jacob Ohrvik on digital regulation, internet regulators, and office for responsible technology

Natasha Mathur
02 Apr 2019
6 min read
The Open Data Institute posted a video titled "Regulating for responsible technology – is the UK getting it right?" as part of its ODI Fridays series last week. The talk features Jacob Ohrvik-Scott, a researcher at Doteveryone, a UK-based think tank that promotes ideas on responsible tech. In the video, Ohrvik talks about the state of digital regulation, the systemic challenges faced by independent regulators, and the need for an Office for Responsible Technology, an independent regulatory body, in the UK. Let's look at the key takeaways from the video.

Ohrvik started off the video talking about responsible tech and the three main factors that fall under it: the unintended consequences of its applications, the kind of value that flows to and from the technology, and the kind of societal context in which it operates. Ohrvik states that many people in the UK have been calling for an internet regulator to carry out different digital-safety related responsibilities. For instance, the NSPCC (National Society for the Prevention of Cruelty to Children) called for an internet regulator to make sure that children are safe online. Similarly, the Digital, Culture, Media and Sport Committee called for an ethical code of practice to be implemented for social media platforms and big search engines.

Given that many people were talking about an independent internet regulatory body, Doteveryone decided to come out with its own set of proposals. It had previously carried out a survey that observed public attitudes towards, and understanding of, digital technologies. As per the survey results, one of the main things that people emphasized was greater accountability from tech companies. People were also supportive of the idea of an independent internet regulator. "We spoke to lots of people, we did some of our own thinking and we were trying to imagine what this independent internet regulator might look like. But..we uncovered some more sort of deep-rooted systemic challenges that a single internet regulator couldn't really tackle," said Ohrvik.

Systemic challenges faced by an independent internet regulator

The systemic challenges presented by Ohrvik are the need for better digital capabilities, society's need for an agency, and the need for evidence.

Better digital capabilities

Ohrvik cites the example of Christopher Wylie, a whistleblower in the Cambridge Analytica scandal. As per Wylie, one of the weak points of the system is the lack of tech knowledge. The fact that he was asked a lot of basic questions by the Information Commissioner's Office (the UK's data regulator) that wouldn't normally be asked of a database engineer is indicative of the overall challenges faced by the regulatory system.

Tech awareness among the public is important

The second challenge is that society needs an agency that can help bring back its trust in tech. Ohrvik states that in the survey Doteveryone conducted, when people were asked for their views on reading terms and conditions, 58% said that they don't read terms and conditions, 47% said they feel they have no choice but to accept terms and conditions on the internet, and 43% said that there's no point in reading terms and conditions because tech companies will do what they want anyway. This last group of respondents especially signals a wider trend today, where the public feels disempowered and cynical towards tech.
This is also one of the main reasons why Ohrvik believes that a regulatory system is needed to "re-energize" the public and give them "more power".

Everybody needs evidence

Ohrvik states that it's hard to get evidence around online harms and some of the opportunities that arise from digital technologies. This is because: a) you need a rigorous and longitudinal evidence base; b) getting access to the data for that evidence is quite difficult (especially when a large private multinational company doesn't want to engage with government); and c) it's hard to look under the bonnet of digital technologies, meaning that the thousands of algorithms and their complexity make it hard to make sense of what's really happening. Ohrvik then discussed the importance of having a separate office for responsible technology to counteract the systemic challenges listed above.

Having an Office for Responsible Technology

Ohrvik states that the office for responsible tech would do three broad things, namely: empowering regulators, informing policymakers and the public, and supporting people to seek redress.

Empowering regulators

This would include analyzing the processes that regulators have in place to ensure they are up to date, and recommending the necessary changes to the government to effectively put the right plan in action. Another main requirement is building up the digital capabilities of regulators. This would be done in a way where regulators are able to pay for tech talent across the whole regulatory system, which in turn would help them understand the challenges related to digital technologies.

ODI: Regulating for responsible technology

Empowering regulators would also help shift the role of regulators from being reactive and slow towards being more proactive and fast-moving.

Informing policymakers and the public

This would involve communicating with the public and policymakers about developments related to tech regulation. It would also offer guidance and make longer-term engagements to promote positive long-term change in the public's relationship with digital technologies.

ODI: Regulating for responsible technology

For instance, a long-term campaign centered around media literacy could be conducted to tackle misinformation. Similarly, a long-term campaign around helping people better understand their data rights could also be implemented.

Supporting people to seek redress

This is aimed at addressing the power imbalance between the public and tech companies. It can be done by auditing the processes, procedures, and technologies that tech companies have in place to protect the public from harm.

ODI: Regulating for responsible technology

For instance, spot checks can be carried out on algorithms or artificial intelligence to spot harmful content. While spot-checking, handling and moderation processes can also be checked to make sure they're working well, so that if certain processes don't work for the public, this can be easily redressed. This approach of spotting harms at an early stage can further help people and make the regulatory system stronger. In all, an office for responsible tech is quite indispensable to promote the responsible design of technologies and to predict their digital impact on society.
By working with regulators to come up with approaches that support responsible innovation, an office for responsible tech can foster a healthy digital space for everyone.

Read next
Microsoft, Adobe, and SAP share new details about the Open Data Initiative
Congress passes 'OPEN Government Data Act' to make open data part of the US Code
Open Government Data Act makes non-sensitive public data publicly available in open and machine readable formats

Four 2018 Facebook patents to battle fake news and improve news feed

Sugandha Lahoti
18 Aug 2018
7 min read
The past few months saw Facebook struggling to maintain its integrity, given the number of fake news and data scandals linked to it: Alex Jones, accusations of discriminatory advertising, and more. Not to mention, Facebook stocks fell $120 billion in market value after the Q2 2018 earnings call. Amidst these allegations of providing fake news and allowing discriminatory content on its news feed, Facebook patented its news feed filter tool last week to provide more relevant news to its users. In the past too, Facebook has filed several interesting patents to enhance its news feed algorithm in order to curb fake news. This made us look into what other recent patents Facebook has been granted around news feeds and fake news.

Facebook's News Feed has always been one of its signature features. The news feed is generated algorithmically (instead of chronologically), with a mix of status updates, page updates, and app updates that Facebook believes are interesting and relevant to you. Facebook officially patented its News Feed in 2012, after filing for it in 2006. The patent gave the company a stronghold on the ability to let users see not only status messages, pictures, and links to videos of online friends, but also the actions those friends take.

Note: According to the United States Patent and Trademark Office (USPTO), a patent is an exclusive right to an invention and "the right to exclude others from making, using, offering for sale, or selling the invention in the United States or 'importing' the invention into the United States".

Here are four Facebook patents in 2018 pertaining to news feeds that we found interesting.

Dynamically providing a feed of stories

Date of Patent: April 10, 2018
Filed: December 10, 2015
Features: Facebook filed this patent to present its news feed in a more dynamic manner, suited to a particular person. Facebook's news feed automatically generates a display that contains information relevant to a user about another user. This patent is titled Dynamically providing a feed of stories about a user of a social networking system. As per the patent application, social networking websites have recently developed systems for tailoring connections between various users. Typically, however, these news items are disparate and disorganized. The proposed method generates news items regarding activities associated with a user. It attaches an informational link associated with at least one of the activities to at least one of the news items. The method limits access to the news items to a predetermined set of viewers and assigns an order to the news items.

Source: USPTO

This patent is a viable solution to limit access to news items which a particular section of users may find obscene. For instance, Facebook users below the age of 18 may be restricted from viewing graphic content. The patent received criticism, with people ridiculing it for seeming to go against everything that the patent system is supposed to do. They say that such automatically generated news feeds are found in all sorts of systems and social networks these days. But now Facebook may have the right to prevent others from doing what other social networks are inherently supposed to do.

Generating a feed of content items from multiple sources

Date of Patent: July 3, 2018
Filed: June 6, 2014
Features: Facebook filed a patent allowing a feed of content items associated with a topic to be generated from multiple content sources.
Per the Facebook patent, the newsfeed generation system receives content items from one or more content sources. It matches the content items to topics based on a measure of each content item's affinity for one or more objects. These objects form a database that is associated with various topics. The feed associated with the topic is communicated to a user, allowing the user to readily identify content items associated with the topic.

Source: USPTO

Let us consider the example of sports. A sports database will contain an ontology defining relationships between objects such as teams, athletes, and coaches. The news feed for a particular user interested in sports (an athlete, a coach, or a player) will cover all content items associated with sports.

Selecting organic content and advertisements based on user engagement

Date of Patent: July 3, 2018
Filed: June 6, 2014
Features: Facebook wants to dynamically adjust the organic content items and advertisements presented to a user by modifying a ranking. Partial engagement scores are generated for organic content items based on an expected amount of user interaction with each organic content item. Advertisement scores are generated based on expected user interaction and the bid amounts associated with each advertisement. These advertisement and partial engagement scores are then used to determine two separate engagement scores measuring the user's estimated interaction with a content feed: one for a feed of organic content items with advertisements, and one without them. The difference between these two scores modifies a conversion factor used to combine expected user interaction and bid amounts to generate advertisement scores. This mechanism has been patented by Facebook as Selecting organic content and advertisements for presentation to social networking system users based on user engagement.

For example, if a large number of advertisements are presented to a user, the user may become frustrated with the increased difficulty in viewing stories and interact less with the social networking system. However, advertisements also generate additional revenue for the social networking system, so a balance is necessary. If the engagement score is greater than the additional engagement score by at least a threshold amount, the conversion factor is modified (e.g., decreased) to increase the number of organic content items included in the feed. If the engagement score is greater than the additional engagement score but by less than the threshold amount, the conversion factor is modified (e.g., increased) to decrease the number of organic content items included in the feed. (An illustrative sketch of this logic appears at the end of this article.)

Source: USPTO

Displaying news ticker content in a social networking system

Date of Patent: January 9, 2018
Filed: February 10, 2016
Features: Facebook has also patented Displaying news ticker content in a social networking system. This patent describes a system that displays stories about a user's friends in a news ticker as the friends perform actions. The system monitors in real time for actions associated with users connected to the target user. The news ticker is updated so that stories including the identified actions and the associated connected users are displayed within a news ticker interface. The news ticker interface may be a dedicated portion of the website's interface, for example a column next to the newsfeed. Additional information related to a selected story may be displayed in a separate interface.
Source: USPTO

For example, a user may select a story displayed in the news ticker; let's say movies. In response, additional information associated with movies (such as actors, directors, songs, and so on) may be displayed in an additional interface. The additional information can also depend on the movies liked by the friends of the target user.

These patents say a lot about how Facebook is trying to repair its image and amend its news feed algorithms to curb fake and biased news. The dynamic algorithm may restrict content, the news ticker content and multiple-source extraction will keep the feed relevant, and the balance between organic content and advertisements could lure users to stay on the site. There are currently no details on when or if these features will hit the Facebook feed, but once implemented they could bring Zuckerberg's vision of "bringing the world closer together" closer to reality.

Read Next
Four IBM facial recognition patents in 2018, we found intriguing
Facebook patents its news feed filter tool to provide more relevant news to its users
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
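To make the conversion-factor rule described under "Selecting organic content and advertisements based on user engagement" more concrete, here is a purely illustrative Python sketch. The function, the 0.10 threshold, and the 0.05 step size are invented for the example; the patent does not specify concrete values or code, and this is only one reading of its description.

```python
def adjust_conversion_factor(engagement_score: float,
                             additional_engagement_score: float,
                             conversion_factor: float,
                             threshold: float = 0.10,
                             step: float = 0.05) -> float:
    """Illustrative reading of the patent's organic-vs-ads trade-off rule."""
    gap = engagement_score - additional_engagement_score
    if gap >= threshold:
        # Ads cost too much engagement: lower the factor so more organic items rank in.
        return conversion_factor - step
    if gap > 0:
        # The engagement dip is tolerable: raise the factor so more ads rank in.
        return conversion_factor + step
    return conversion_factor


# Example: a noticeable engagement gap nudges the feed back towards organic content.
print(adjust_conversion_factor(0.82, 0.70, conversion_factor=1.0))  # 0.95
```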

DeepVariant: Using Artificial Intelligence into Human Genome Sequencing

Abhishek Jha
05 Dec 2017
5 min read
In 2003, when The New York Times announced that the Human Genome Project had been successfully completed two years ahead of schedule (leave aside the conspiracy theory that the genome was never 'completely' sequenced), it heralded a new dawn in the history of modern science. The challenge thereafter was to make sense of the staggering data that became available. High-throughput sequencing technology came to revolutionize the processing of genomic data in a way, but it had its own limitations (such as the high rate of erroneous base calls produced).

Google has now launched an artificial intelligence tool, DeepVariant, to analyze the huge amounts of data resulting from the sequencing of the genome. It took two years of research for Google to build DeepVariant. It's a combined effort from Google's Brain team, a group that focuses on developing and applying AI techniques, and Verily Life Sciences, another Alphabet subsidiary that is focused on the life sciences.

How does DeepVariant make sense of your genome?

DeepVariant uses the latest deep learning techniques to turn high-throughput sequencing readouts into a picture of a full genome. It automatically identifies small insertion and deletion mutations and single-base-pair mutations in sequencing data. Ever since high-throughput sequencing made genome sequencing more accessible, the data produced has at best offered an error-prone snapshot of a full genome. Researchers have found it challenging to distinguish small mutations from random errors generated during the sequencing process, especially in repetitive portions of a genome. A number of tools and methods (both publicly and privately funded) have come out to interpret these readouts, but all of them have used simpler statistical and machine learning approaches to identify mutations. Google claims DeepVariant offers significantly greater accuracy than all previous classical methods.

DeepVariant transforms the task of variant calling (the process of identifying variants from sequence data) into an image classification problem well suited to Google's existing technology and expertise. Google's team collected millions of high-throughput reads and fully sequenced genomes from the Genome in a Bottle (GIAB) project, and fed the data to a deep learning system that interpreted sequenced data with a high level of accuracy. "Using multiple replicates of GIAB reference genomes, we produced tens of millions of training examples in the form of multi-channel tensors encoding the HTS instrument data, and then trained a TensorFlow-based image classification model to identify the true genome sequence from the experimental data produced by the instruments," Google said.

The result has been remarkable. Within a year, DeepVariant went on to win first place in the PrecisionFDA Truth Challenge, outperforming all state-of-the-art methods in accurate genetic sequencing. "Since then, we've further reduced the error rate by more than 50%," the team claims.

Image Source: research.googleblog.com

"The success of DeepVariant is important because it demonstrates that in genomics, deep learning can be used to automatically train systems that perform better than complicated hand-engineered systems," says Brendan Frey, CEO of Deep Genomics, one of several companies using AI on genomics for potential drugs.

DeepVariant is 'open' for all

The best thing about DeepVariant is that it has been launched as open source software.
This will encourage enthusiastic researchers to collaborate and possibly accelerate its adoption to solve real-world problems. "To further this goal, we partnered with Google Cloud Platform (GCP) to deploy DeepVariant workflows on GCP, available today, in configurations optimized for low-cost and fast turnarounds using scalable GCP technologies like the Pipelines API," Google said. This paired set of releases could facilitate a scalable, cloud-based solution to handle even the largest genomics datasets.

The road ahead: what DeepVariant means for the future

According to Google, DeepVariant is the first of "what we hope will be many contributions that leverage Google's computing infrastructure and Machine learning expertise" to better understand the genome and provide deep learning-based genomics tools to the community. This is, in fact, all part of a "broader goal" to apply Google technologies to healthcare and other scientific applications. As AI propels different branches of medicine to take big leaps forward in the coming years, there is a whole lot of medical data to mine and drive insights from. But with genomic medicine, the scale is huge. We are talking about an unprecedented set of data that is equally complex. "For the first time in history, our ability to measure our biology, and even to act on it, has far surpassed our ability to understand it," says Frey. "The only technology we have for interpreting and acting on these vast amounts of data is AI. That's going to completely change the future of medicine."

These are exciting times for medical research. In 1990, when the Human Genome Project was initiated, it met with a lot of skepticism from many people, scientists and non-scientists alike. But today, we have completely worked out each A, T, C, and G that makes up the DNA of all 23 pairs of human chromosomes. After high-throughput sequencing made genomic data accessible, Google's DeepVariant could just be the next big thing to take genetic sequencing to a whole new level.
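As a rough illustration of the "variant calling as image classification" idea described above, here is a minimal tf.keras sketch. The pileup-tensor shape (100, 221, 6) and the three genotype classes are assumptions made for the example; DeepVariant's actual pipeline and architecture are considerably more involved.

```python
import tensorflow as tf

# Each candidate variant site is encoded as a multi-channel "pileup image"; the
# classifier predicts a genotype: homozygous-reference, heterozygous, or homozygous-alt.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(100, 221, 6)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```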

5 Mistakes Web Developers Make When Working with MongoDB

Charanjit Singh
21 Oct 2016
5 min read
MongoDB is a popular document-based NoSQL database. In this post, I am listing some mistakes that I've found developers make while working on MongoDB projects.

Database accessible from the Internet

Allowing your MongoDB database to be accessible from the Internet is the most common mistake I've found developers make in the wild. MongoDB's default configuration used to expose the database to the Internet; that is, you could connect to the database using the URL of the server it was running on. It makes perfect sense for starters who might be deploying a database on a different machine, given that it is the path of least resistance. But in the real world, it's a bad default value that is often ignored. A database (whether Mongo or any other) should be accessible only to your app. It should be hidden in a private local network that provides access to your app's server only. Although this vulnerability has been fixed in newer versions of MongoDB, make sure you change the config if you're upgrading your database from a previous version, and that the new junior developer you hired didn't expose to the Internet the database your application server connects to. If it's a requirement to have a database accessible from the open Internet, pay special attention to securing it. Whitelisting the IP addresses that are allowed to access the database is almost always a good idea.

Not having multiple database users with access roles

Another possible security risk is having a single MongoDB database user doing all of the work. This usually happens when developers with little knowledge, experience, or interest in databases handle the database management or setup. It happens when database management is treated as lesser work in smaller software shops (the kind I get hired for mostly). Well, it is not. A database is as important as the app itself. Your app is most likely mainly providing an interface to the database. Having a single user to manage the database and using the same user in the application for accessing the database is almost never a good idea. Many times this exposes vulnerabilities that could have been avoided if the database user had limited access in the first place. NoSQL doesn't mean "secure" by default. Security should be considered when setting up the database, and not left as something to be done "properly" after shipping.

Schema-less doesn't mean thoughtless

When someone asked Ronny why he chose MongoDB for his new shiny app, his response was that "it's schema-less, so it's more flexible". Schema-less can prove to be quite a useful feature, but with great power comes great responsibility. Oftentimes, I have found teams struggling with apps because they didn't think through the structure for storing their data when they started. MongoDB doesn't require you to have a schema, but that doesn't mean you shouldn't properly think about your data structure. Rushing in without putting much thought into how you're going to structure your documents is a sure recipe for disaster. Your app might be small, simple, and easy right now, but simple apps become complicated very quickly. You owe it to your future self to have a proper, well-thought-out database schema. Most programming languages that provide an interface to MongoDB have libraries to impose some kind of database schema on MongoDB. Pick your favorite and use it religiously. (A minimal sketch illustrating restricted users and server-side validation appears at the end of this article.)

Premature sharding

Sharding is an optimization, so doing it too soon is usually a bad idea.
Many times a single replica set is enough to run a fast, smooth MongoDB deployment that meets all of your needs. Most of the time a bad schema and (bad) indexing are the performance bottlenecks many users try to solve with sharding. In such cases sharding might do more harm, because you end up with poorly tuned shards that don't perform well either. Sharding should be considered when a specific resource, like RAM or concurrency, becomes a performance bottleneck on some particular machine. As a general rule, if your database fits on a single server, sharding provides little benefit anyway. Most MongoDB setups work successfully without ever needing sharding.

Replicas as backup

Replicas are not backups. You need to have a proper backup system in place for your database and not treat replicas as a backup mechanism. Consider what would happen if you deploy the wrong code and it ruins the database: in this case, the replicas will simply follow the master and suffer the same damage. There are a variety of ways to back up and restore your MongoDB, be it filesystem snapshots, mongodump, or a third-party service like MMS. Having proper, timely fire drills is also very important. You should be confident that the backups you're making can actually be used in a real-life scenario. Practice restoring your backups before you actually need them and verify everything works as expected. A catastrophic failure in your production system should not be the first time you try to restore from backups (often only to find out you're backing up corrupt data).

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking on Haskell/PureScript as his main professional languages.
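To make the first few points concrete, here is a minimal Python/pymongo sketch: a dedicated application user restricted to its own database, plus server-side document validation even without a formal schema library. The host, database name, user names, and passwords are placeholders invented for the example.

```python
from pymongo import MongoClient

# Connect as an admin over the private network only; never expose port 27017 publicly.
client = MongoClient("mongodb://admin:change-me@10.0.0.5:27017/?authSource=admin")
db = client["blogApp"]

# A dedicated application user that can only read and write its own database.
db.command("createUser", "blogAppUser",
           pwd="change-me-too",
           roles=["readWrite"])

# Schema-less doesn't mean thoughtless: enforce minimal document structure server-side.
db.create_collection("posts", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["title", "author", "createdAt"],
        "properties": {
            "title": {"bsonType": "string"},
            "author": {"bsonType": "string"},
            "createdAt": {"bsonType": "date"},
        },
    }
})
```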

Beating jQuery: Making a Web Framework Worth its Weight in Code

Erik Kappelman
20 Apr 2016
5 min read
Let me give you a quick disclaimer. This is a bit of a manifesto. Last year I started a little technology company with some friends of mine. We were lucky enough to get a solid client for web development right away. He was an author in need of a blogging app to communicate with the fans of his upcoming book. In another post I have detailed how I used Angular.js, among other tools, to build this responsive, dynamic web app. Using Angular.js is a wonderful experience and I would recommend it to anyone. However, Angular.js really only looks good by comparison. By this I mean, if we allow any web framework to exist in a vacuum and not simply rank them against one another, they are all pretty bad. Before you gather your pitchforks and torches to defend your favorite flavor, let me explain myself.

What I am arguing in this post is that many of the frameworks we use are not worth their weight in code. In other words, we add a whole lot of code to our apps when we import the frameworks, and then in practice using the framework is only a little bit better than using jQuery, or even pure JavaScript. And yes, I know that using jQuery means including a whole bunch of code into your web app, but frameworks like Angular.js are many times built on top of jQuery anyway. So, the weight of jQuery seems to be a necessary evil.

Let's start with a simple HTTP request for information from the backend. This is what it looks like in Angular.js:

$http.get('/dataSource').success(function(data) { $scope.pageData = data; });

Here is a similar request using Ember.js:

App.DataRoute = Ember.Route.extend({ model: function(params) { return this.store.find('data', params.data_id); } });

Here is a similar jQuery request:

$.get( "ajax/stuff.html", function( data ) { $( ".result" ).html( data ); alert( "Load was performed." ); });

It's important for readers to remember that I am a front-end web developer. By this, I mean I am sure there are complicated, technical, and valid reasons why Ember.js and Angular.js are far superior to using jQuery. But, as a front-end developer, I am interested in speed and simplicity. When I look at these HTTP requests and see that they are overwhelmingly similar, I begin to wonder if these frameworks are actually getting any better. One of the big draws to Angular.js and Ember.js is the use of handlebars to ease the creation of dynamic content. Angular.js using handlebars looks something like this:

<h1> {{ dynamicStuff }} </h1>

This is great because I can go into my controller and make changes to the dynamicStuff variable and it shows up on my page. However, the following accomplishes a similar task using jQuery:

$(function () { var dynamicStuff = "This is dog"; $('h1').html( dynamicStuff ); });

I admit that there are many ways in which Angular.js or Ember.js make developing easier. DOM manipulation definitely takes less code and overall the development process is faster. However, there are many times that the limitations of the framework drive the development process. This means that developers sacrifice or change functionality simply to fit the framework. Of course, this is somewhat expected. What I am trying to say with this post is that if we are going to sacrifice load times and constrict our development methods in order to use the framework of our choice, can they at least be simpler to use? So, just for the sake of advancement, let's think about what the perfect web framework would be able to do. First of all, there needs to be less setup.
The brevity and simplicity of the HTTP request in Angular.js is great, but it requires injecting the correct dependencies in multiple files. This adds stress, opportunities to make mistakes, and development time. So, instead of requiring the developer to grab each specific tool for each specific implementation, what if the framework took care of that for you? By this I mean, what if I were to make an HTTP request like this:

http('targetURL', get, data)

and, when the source is compiled or interpreted, the needed dependencies for this HTTP request are dynamically brought into the mix? This way we can make a simpler HTTP request and we can avoid the hassle of setting up the dependencies. As far as DOM manipulation goes, the handlebars seem to be about as good as it gets. However, there need to be better ways to target individual instances of a repeated element, such as the <p> tags holding the captions in a photo gallery. The current solutions for problems like this one are overly complex, especially when the issue involves one of the most common things on the internet: a photo gallery.

About the Author

As you can see, I am more of a critic than a problem solver. I believe the issues I bring up here are valid. As we all become more and more entrenched in the Internet of Things, it would be nice if the development process caught up with the standards of ease that end users demand.

5 New Features That Will Make Developers Love Android 7

Sam Wood
09 Sep 2016
3 min read
Android Nougat is here, and it's looking pretty tasty. We've been told about the benefits to end users, but what are some of the most exciting features for developers to dive into? We've got five that we think you'll love.

1. Data Saver

If your app is a hungry, hungry data devourer then you could be losing users as you burn through their allowance of cellular data. Android 7's new Data Saver feature can help with that. It throttles background data usage, and signals to foreground apps to use less data. Worried that will make your app less useful? Don't worry: users can 'whitelist' applications to consume their full data desires.

2. Multi-tasking

It's the big flagship feature of Android 7: the ability to run two apps on the screen at once. As phones keep getting bigger (and more and more people opt for Android tablets over an iPad), having the option to run two apps alongside each other makes a lot more sense. What does this mean for developers? Well, first, you'll want to tweak your app to make sure it's multi-window ready. But what's even more exciting is the potential for drag-and-drop functionality between apps, dragging text and images from one pane to another. Ever miss being able to just drag files to attach them to an email like on a desktop? With Android N, that's coming to mobile, and devs should consider updating accordingly.

3. Vulkan API

Nougat brings a new option to Android game developers in the form of the Vulkan graphics API. No longer restricted to just OpenGL ES, developers will find that Vulkan provides them with more direct control over hardware, which should lead to improved game performance. Vulkan can also be used across OSes, including Windows and SteamOS (Valve is a big backer). By adopting Vulkan, Google has really opened up the possibility for high-performance games to make it onto Android.

4. Just In Time Compiler

Android 7 has added a JIT (Just In Time) compiler, which will work to constantly improve the performance of Android apps as they run. The performance of your app will improve, but the device won't consume too much memory. Say goodbye to freezes and non-responsive devices, and hello to faster installation and updates! This means users installing more and more apps, which means more downloads for you!

5. Notification Enhancements

Android 7 changes the way notifications work on your device. Rather than just popping up at the top of the device, notifications in Nougat will have the option for a direct reply without opening the app, will be bundled together with related notifications, and can even be viewed as a 'heads-up' notification displayed to the user when the device is active. These heads-up notifications are also customizable by app developers, so you'd better start getting creative! How will this option affect your app's UX and UI?

There's plenty more...

These are just some of the features of Android 7 we're most excited about; there's plenty more to explore! So dive right in to Android development, and start building for Nougat today!

Behind the scenes: Deep learning evolution and core concepts

Shoaib Dabir
19 Dec 2017
6 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book will help you build and analyze various deep learning models and apply them to real-world problems.[/box] This article will take you through the history of Deep learning and how it has grown over time. It will walk you through some of the core concepts of Deep Learning like sigmoid activation, rectified linear unit(ReLU), etc. Evolution of deep learning A lot of the important work on neural networks happened in the 80's and in the 90's, but back then computers were slow and datasets very tiny. The research didn't really find many applications in the real world. As a result, in the first decade of the 21st century, neural networks have completely disappeared from the world of machine learning. It's only in the last few years, first seeing speech recognition around 2009, and then in computer vision around 2012, that neural networks made a big comeback with (LeNet, AlexNet). What changed? Lots of data (big data) and cheap, fast GPU's. Today, neural networks are everywhere. So, if you're doing anything with data, analytics, or prediction, deep learning is definitely something that you want to get familiar with. See the following figure: Deep learning is an exciting branch of machine learning that uses data, lots of data, to teach computers how to do things only humans were capable of before, such as recognizing what's in an image, what people are saying when they are talking on their phone, translating a document into another language, helping robots explore the world and interact with it. Deep learning has emerged as a central tool to solve perception problems and it's the state of the art with computer vision and speech recognition. Today many companies have made deep learning a central part of their machine learning toolkit—Facebook, Baidu, Amazon, Microsoft, and Google are all using deep learning in their products because deep learning shines wherever there is lots of data and complex problems to solve. Deep learning is the name we often use for "deep neural networks" composed of several layers. Each layer is made of nodes. The computation happens in the node, where it combines input data with a set of parameters or weights, that either amplify or dampen that input. These input-weight products are then summed and the sum is passed through activation function, to determine what extent the value should progress through the network to affect the final prediction, such as an act of classification. A layer consists of row of nodes that that turn on or off as the input is fed through the network based. The input of the first layer becomes the input of the second layer and so on. Here's a diagram of what neural network might look like: Let's get familiarize with some deep neural network concepts and terminology. Sigmoid activation Sigmoid activation function used in neural network has an output boundary of (0, 1), and α is the offset parameter to set the value at which the sigmoid evaluates to 0. Sigmoid function often works fine for gradient descent as long as input data x is kept within a limit. For large values of x, y is constant. Hence, the derivatives dy/dx (the gradient) equates to 0, which is often termed as the vanishing gradient problem. This is a problem because when the gradient is 0, multiplying it with the loss (actual value - predicted value) also gives us 0 and ultimately networks stops learning. 
Rectified Linear Unit (ReLU)

A neural network can be built by combining linear classifiers with some non-linear function. The Rectified Linear Unit (ReLU) has become very popular in the last few years. It computes the function f(x) = max(0, x); in other words, the activation is simply thresholded at zero. Unfortunately, ReLU units can be fragile during training and can die, as a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again, and so the gradient flowing through the unit will forever be zero from that point on. To overcome this problem, a leaky ReLU function has a small negative slope (of 0.01, or so) instead of zero when x < 0:

f(x) = 1(x < 0)(αx) + 1(x >= 0)(x)

where α is a small constant.

Exponential Linear Unit (ELU)

The mean of the ReLU activation is not zero, which sometimes makes learning difficult for the network. The Exponential Linear Unit (ELU) is similar to the ReLU activation function when the input x is positive, but for negative values it is bounded by a fixed value, -1 for α = 1 (the hyperparameter α controls the value to which an ELU saturates for negative inputs). This behavior helps to push the mean activation of neurons closer to zero, which helps the network learn representations that are more robust to noise.

Stochastic Gradient Descent (SGD)

Scaling batch gradient descent is cumbersome because it has to compute a lot if the dataset is big. As a rule of thumb, if computing your loss takes n floating-point operations, computing its gradient takes about three times that compute. In practice we want to be able to train on lots of data, because on real problems we will always get more gains the more data we use. And because gradient descent is iterative, it has to do that for many steps. That means that in order to update the parameters in a single step, it has to go through all the data samples, and then do this iteration over the data tens or hundreds of times. Instead of computing the loss over the entire set of data samples for every step, we can compute the average loss for a very small random fraction of the training data; think between 1 and 1000 training samples each time. This technique is called Stochastic Gradient Descent (SGD) and is at the core of deep learning. That's because SGD scales well with both data and model size. SGD gets its reputation for being black magic as it has lots of hyperparameters to play with and tune, such as initialization parameters, learning rate parameters, decay, and momentum, and you have to get them right. (A minimal SGD sketch follows at the end of this article.)

Deep learning has emerged over time through its evolution from neural networks within machine learning. It is an intriguing segment of machine learning that uses huge amounts of data to teach computers how to do things that only humans were capable of. Some of the key players who adopted this concept at a very early stage are Facebook, Baidu, Amazon, Microsoft, and Google. This article showed the different concept layers through which deep learning is executed. If deep learning has got you hooked, wait till you learn what GANs are from the book Learning Generative Adversarial Networks.
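Here is a minimal, self-contained SGD sketch in NumPy for a toy linear model, illustrating the "small random fraction of the data per step" idea from the section above. The dataset, learning rate, and batch size are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5, 3.0, 0.0])
X = rng.normal(size=(10_000, 5))                      # toy dataset
y = X @ true_w + rng.normal(scale=0.1, size=10_000)   # noisy targets

w = np.zeros(5)
lr, batch_size = 0.05, 32                             # hyperparameters you have to tune

for step in range(2_000):
    idx = rng.integers(0, len(X), size=batch_size)    # small random fraction of the data
    xb, yb = X[idx], y[idx]
    grad = 2.0 * xb.T @ (xb @ w - yb) / batch_size    # gradient of the mean squared error
    w -= lr * grad                                    # one cheap parameter update

print(np.round(w, 2))   # close to true_w after a few thousand cheap steps
```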

Minecraft Modding Experiences and Starter Advice

Martijn Woudstra
18 Mar 2015
6 min read
For three years now, I have tried to help a lot of people enjoy Minecraft in a different way. One specific thing I want to talk about today is add-ons for Minecraft. This article covers my personal experience, as well as some advice and tips for people who want to start developing and bring their own creativity to life in Minecraft.

We all know the game Minecraft, where you live in a blocky world, and you can create literally everything you can imagine. So what could possibly be more fun than making the Empire State Building? Inventing new machines, formulating new spells, and designing factories and automatic builders, for starters! However, as most of you probably know already, these features are not present in the Minecraft world, and this is where people like me jump in. We are mod developers. We are the people who bring these awesome things to life, and change the game in ways that make Minecraft more enjoyable. Although all of this might seem easy to do, it actually takes a lot of effort and creativity. Let me walk you through the process.

Community feedback is priority number one. You can't have a fun mod if nobody else enjoys it. Sometimes I read an article on a forum about a really good idea. I then get to work! However, just like traditional game development, a small idea posted on a forum must be fully thought through. People who come up with the ideas usually don't think of everything when they post them. You must think about things such as how to balance a given idea with vanilla Minecraft. What do you want your creation to look like? Do you need to ask for help from other authors? All of these things are essential steps for making a good modification to the amazing Minecraft experience that you crave. You should start by writing down all of your ideas and concepts. A workflow chart helps to make sure the most essential things are done first, and the details happen later. Usually I keep all of my files in Google Drive, so I can share them with others.

In my opinion, the actual modding phase is the coolest part, but it takes the longest amount of time. If you want to create something that is new and innovative, you might soon realize it is something that you've never worked with before, which can be hard to create. For example, for a simple feature such as making a block spin automatically, you could easily work for two hours just to create the basic movements. This is where experience kicks in. When you make your first modification, you might bump into the smallest problems. These little problems kept me down for quite a long time. It can be quite a stressful process, but don't give up! Luckily for me, there were a lot of people in the Minecraft modding community who were kind enough to help me out through the early stages of my development career. At this moment I have reached a point where my experience allows most problems to be easily solved. Therefore, mod development has become a lot more fun. I even decided to join a modding team, and I took over as lead on that project. Our final mod turned out to be amazing. A little later, I started a tutorial series together with a good friend of mine, for people who wanted to start with the amazing art of making Minecraft mods. This tutorial series was quite a success, with 7000 views on the website, and almost 2000 views on YouTube.
I do my best to help people take their first steps into this amazing community by making tutorials, writing articles about my experiences, and describing my ideas on how to get into modding. What I noticed right away is that people tend to go too fast in the beginning. Minecraft is written in Java, a programming language. I have spoken to some people who didn't even know this, and yet were trying to make a mod. Unfortunately, life doesn't work like that. You need to know the basics of the language before you can use it properly. Therefore, my first piece of advice to you is to learn the basics of Java. There are hundreds of tutorials online that can teach you what you need to know. Personally, that's how I learned Java too!

Next up is to get involved in the community: Minecraft Forge is basically a bridge between the standard Minecraft experience and the limitless possibilities of a modded Minecraft game. Minecraft Forge has a wide range of modders, who definitely do not mind giving you some advice, or helping out with problems. Another good way to learn quickly is to team up with someone. Ask around on the forums for a teacher, or someone just as dedicated as you, and work together on a project you both want to truly bring to life. Start making a dummy mod, and help each other when you get stuck. Not a single person has the same way of tackling a task, and perhaps you can absorb some good habits from your teammate. When I did this, I learned a thousand new ways to write pieces of code I would have never thought of on my own. The last and most important thing I want to mention in this post is to always have fun doing what you're doing. If you're having a hard time enjoying modding, take a break. Modding will pull you back if you really want it again. And I am speaking from personal experience.

About this author

Martijn Woudstra lives in a city called Almelo in the Netherlands. Right now he is studying Clinical Medicine at the University of Twente. He learned Java programming about 3 years ago. For over a year he has been making Minecraft mods, which required him to learn how to translate Java into the API used to make the mods. He enjoys teaching others how to mod in Minecraft, and along with his friend Wuppy, has created the Orange Tutorial site (http://orangetutorial.com). The site contains tutorials with high-quality videos and understandable code. This is a must-see resource if you are interested in crafting Minecraft mods.

BeyondCorp is transforming enterprise security

Richard Gall
16 May 2018
3 min read
What is BeyondCorp?

BeyondCorp is an approach to cloud security developed by Google. It is a zero trust security framework that not only tackles many of today's cyber security challenges, it also helps to improve accessibility for employees. As remote, multi-device working shifts the way we work, it's a framework that might just be future proof. The principle behind it is a pragmatic one: dispensing with the traditional notion of a workplace network and using a public network instead. By moving away from the concept of a software perimeter, BeyondCorp makes it much more difficult for malicious attackers to penetrate your network. You're no longer inside or outside the network; there are different permissions for different services. While these are accessible to those that have the relevant permissions, the lack of a perimeter makes life very difficult for cyber criminals.

Read now: Google employees quit over company's continued Artificial Intelligence ties with the Pentagon

How does BeyondCorp work?

BeyondCorp works by focusing on users and devices rather than networks and locations. It works through a device inventory service, which essentially logs information about the user accessing the service, who they are, and what device they're using. Google explained the concept in detail back in 2016: "Unlike the conventional perimeter security model, BeyondCorp doesn't gate access to services and tools based on a user's physical location or the originating network; instead, access policies are based on information about a device, its state, and its associated user." (A toy sketch of this access model appears at the end of this article.) Of course, BeyondCorp encompasses a whole range of security practices. Implementation requires a good deal of alignment and effective internal communication. That was one of the challenges the Google team had when implementing the framework: getting the communication and buy-in from the whole organization without radically disrupting how people work.

Is BeyondCorp being widely adopted by enterprises?

Google has been developing BeyondCorp for some time. In fact, the concept was a response to the Operation Aurora cyber attack back in 2009. This isn't a new approach to system security, but it is only recently becoming more accessible to other organizations. We're starting to see a number of software companies offering what you might call BeyondCorp-as-a-Service. Duo is one such service: "Reliable, secure application access begins with trust, or a lack thereof" goes the (somewhat clunky) copy on their homepage. Elsewhere, ScaleFT also offers BeyondCorp services. Services like those offered by Duo and ScaleFT highlight that there is clearly demand for this type of security framework. But it is a nascent trend. Despite having been within Google for almost a decade, ThoughtWorks' Radar first picked up on BeyondCorp in May 2018. Even then, ThoughtWorks placed it in the 'assess' stage. That means it is still too early to adopt; it should simply be explored as a potential security option in the near future.

Read next
Amazon S3 Security access and policies
IoT Forensics: Security in an always connected world where things talk
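Here is a purely illustrative Python sketch of the zero-trust access decision described above: access depends on who the user is and the recorded state of their device, never on which network the request came from. The attributes, service names, and rules are invented for the example and are not Google's actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class Device:
    inventory_id: str
    os_patched: bool
    disk_encrypted: bool

@dataclass
class User:
    username: str
    groups: set

def allow_access(user: User, device: Device, service: str) -> bool:
    # The request's source network is deliberately not an input to the decision.
    device_trusted = device.os_patched and device.disk_encrypted
    if service == "source-code":
        return device_trusted and "engineering" in user.groups
    if service == "payroll":
        return device_trusted and "hr" in user.groups
    return False

# Example: an engineer on a patched, encrypted laptop reaches the source-code service.
laptop = Device("laptop-1432", os_patched=True, disk_encrypted=True)
alice = User("alice", {"engineering"})
print(allow_access(alice, laptop, "source-code"))  # True
```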

Google ARCore is pushing immersive computing forward

Sugandha Lahoti
26 Apr 2018
7 min read
Immersive computing has been touted as a crucial innovation that is going to transform the way we interact with software in the future. But like every trend, there are a set of core technologies that lie at the center, helping to drive it forward. In the context of immersive computing Google ARCore is one of these technologies. Of course, it's no surprise to see Google somewhere at the heart of one of the most exciting developments in tech. But what is Google ARCore, exactly? And how is it going to help drive immersive computing into the mainstream? But first, let's take a look at exactly what immersive computing is. After that, we'll explore how Google ARCore is helping to drive it forward, and some examples of how to put it into practice with some motion tracking and light estimation projects. What is Immersive Computing? Immersive computing is a term used to describe applications that provide an immersive experience for the user. This may come in the form of an augmented or virtual reality experience. In order to better understand the spectrum of immersive computing, let's take a look at this diagram: The Immersive Computing Spectrum The preceding diagram illustrates how the level of immersion affects the user experience, with the left-hand side of the diagram representing more traditional applications with little or no immersion, and the right representing fully immersive virtual reality applications. For us, we will stay in the middle sweet spot and work on developing augmented reality applications. Why use Google ARCore for Augmented Reality? Augmented reality applications are unique in that they annotate or augment the reality of the user. This is typically done visually by having the AR app overlay a view of the real world with computer graphics. Google ARCore is designed primarily for providing this type of visual annotation for the user. An example of a demo ARCore application is shown here: The screenshot is even more impressive when you realize that it was rendered real time on a mobile device. It isn't the result of painstaking hours of using Photoshop or other media effects libraries. What you see in that image is the entire superposition of a virtual object, the lion, into the user's reality. More impressive still is the quality of immersion. Note the details, such as the lighting and shadows on the lion, the shadows on the ground, and the way the object maintains position in reality even though it isn't really there. Without those visual enhancements, all you would see is a floating lion superimposed on the screen. It is those visual details that provide the immersion. Google developed ARCore as a way to help developers incorporate those visual enhancements in building AR applications. Google developed ARCore for Android as a way to compete against Apple's ARKit for iOS. The fact that two of the biggest tech giants today are vying for position in AR indicates the push to build new and innovative immersive applications. Google ARCore has its origins in Tango, which is/was a more advanced AR toolkit that used special sensors built into the device. In order to make AR more accessible and mainstream, Google developed ARCore as an AR toolkit designed for Android devices not equipped with any special sensors. Where Tango depended on special sensors, ARCore uses software to try and accomplish the same core enhancements. 
For ARCore, Google has identified three core areas to address with this toolkit, and they are as follows: Motion tracking Environmental understanding Light estimation In the next three sections, we will go through each of those core areas in more detail and understand how they enhance the user experience. Motion tracking Tracking a user's motion and ultimately their position in 2D and 3D space is fundamental to any AR application. Google ARCore allows you to track position changes by identifying and tracking visual feature points from the device's camera image. An example of how this works is shown in this figure: In the figure, we can see how the user's position is tracked in relation to the feature points identified on the real couch. Previously, in order to successfully track motion (position), we needed to pre-register or pre-train our feature points. If you have ever used the Vuforia AR tools, you will be very familiar with having to train images or target markers. Now, ARCore does all this automatically for us, in real time, without any training. However, this tracking technology is very new and has several limitations. Environmental understanding The better an AR application understands the user's reality or the environment around them, the more successful the immersion. We already saw how Google ARCore uses feature identification in order to track a user's motion. Tracking motion is only the first part. What we need is a way to identify physical objects or surfaces in the user's reality. ARCore does this using a technique called meshing. This is what meshing looks like in action: What we see happening in the preceding image is an AR application that has identified a real-world surface through meshing. The plane is identified by the white dots. In the background, we can see how the user has already placed various virtual objects on the surface. Environmental understanding and meshing are essential for creating the illusion of blended realities. Where motion tracking uses identified features to track the user's position, environmental understanding uses meshing to track the virtual objects in the user's reality. Light estimation Magicians work to be masters of trickery and visual illusion. They understand that perspective and good lighting are everything in a great illusion, and, with developing great AR apps, this is no exception. Take a second and flip back to the scene with the virtual lion. Note the lighting and detail in the shadows on the lion and ground. Did you note that the lion is casting a shadow on the ground, even though it's not really there? That extra level of lighting detail is only made possible by combining the tracking of the user's position with the environmental understanding of the virtual object's position and a way to read light levels. Fortunately, Google ARCore provides us with a way to read or estimate the light in a scene. We can then use this lighting information in order to light and shadow virtual AR objects. Here's an image of an ARCore demo app showing subdued lighting on an AR object: The effects of lighting, or lack thereof, will become more obvious as we start developing our startup applications. To summarize, we took a very quick look at what immersive computing and AR is all about. We learned about augmented reality covering the middle ground of the immersive computing spectrum, and AR is a careful blend of illusions used to trick the user into believing that their reality has been combined with a virtual one. 
To summarize, we took a very quick look at what immersive computing and AR are all about. We learned that augmented reality covers the middle ground of the immersive computing spectrum, and that AR is a careful blend of illusions used to trick the user into believing that their reality has been combined with a virtual one. Google developed ARCore as a way to provide a better set of tools for constructing those illusions and to keep Android competitive in the AR market. We then looked at each of the core concepts ARCore was designed to address: motion tracking, environmental understanding, and light estimation.

This has been taken from Learn ARCore - Fundamentals of Google ARCore. Find it here.

Read More
Getting started with building an ARCore application for Android
Types of Augmented Reality targets

Deep Learning in games - Neural Networks set to design virtual worlds

Amey Varangaonkar
28 Mar 2018
4 min read
Games these days are closer to reality than ever. Life-like graphics, smart gameplay and realistic machine-human interactions have led major game studios to up the ante when it comes to adopting the latest tech for developing games. In fact, not so long ago, we shared with you a few interesting ways in which Artificial Intelligence is transforming the gaming industry. The inclusion of deep learning in games has emerged as one popular way to make games smarter. Deep learning can be used to enhance the realism and excitement in games by teaching game agents how to behave more accurately and in a more life-like manner.

We recently came across this interesting implementation of deep learning to play the game of FIFA 18, and we were quite impressed! Using just 2 layers of neural networks and a limited amount of training, the bot that was developed managed to learn the basic rules of football (soccer). Not just that, it was also able to perform the basic movements and tasks in the game correctly. To achieve this, 2 neural networks were developed: a Convolutional Neural Network to detect objects within the game, and a second layer of LSTM (Long Short Term Memory) network to specify the movements accordingly (a minimal sketch of this kind of architecture appears below).

The same user also managed to leverage deep learning to improve the in-game graphics of FIFA 18. Using the deepfakes algorithm, he managed to swap the in-game face of one of the players with the player's real-life face. The reason? The in-game faces, although quite realistic, could be even more realistic. The experiment was a success: the resultant face was near perfect. How did he do it? After gathering some training data, basically images of players scraped off Google, the user trained two autoencoders which learnt the distinction between the in-game face and the real-world face. Then, using the deepfakes algorithm, the inputs were reversed, recreating the real-world face in the game itself. The difference is quite astonishing.

Apart from improving the gameplay and the in-game character graphics, deep learning can also be used to enhance the way opponents and adversaries interact with the player in the game. If we take the example of the FIFA game mentioned before, deep learning can be used to enhance the behaviour and appearance of the in-game crowd, who can react or cheer for their team better as per their team's performance.

How can Deep Learning benefit video games?

The following are some of the clear advantages of implementing deep learning techniques in games:

Highly accurate results can be achieved with more and more training data
Manual intervention is minimal
Game developers can focus on effective storytelling rather than on the in-game graphics

Another obvious question comes to mind at this stage, however: what are the drawbacks of implementing deep learning for games? A few come to mind immediately:

Complexity of the training models can be quite high
Images in games need to be generated in real time, which is quite a challenge
The computation time can be quite significant
The training dataset for achieving accurate results can be quite humongous

With advancements in technology and better, faster hardware, many of the current limitations in developing smarter games can be overcome. Fast generative models can address the real-time generation of images, while faster graphics cards can take care of the model computation issue.
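For a flavour of the CNN + LSTM combination described above, here is a minimal, hypothetical Keras sketch. The layer sizes, sequence length and action set are assumptions made for illustration; they are not the architecture of the original bot.

```python
# A minimal sketch of a CNN + LSTM policy: a small convolutional feature
# extractor is applied to each game frame, and an LSTM maps a short sequence
# of frames to an action. All sizes and the action set are illustrative.
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 8, 96, 96, 3      # 8 frames of 96x96 RGB (assumed)
NUM_ACTIONS = 4                      # e.g. move, pass, shoot, cross (assumed)

frame_encoder = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    # Apply the same CNN to every frame in the sequence
    layers.TimeDistributed(frame_encoder, input_shape=(SEQ_LEN, H, W, C)),
    layers.LSTM(64),                 # temporal reasoning over the frame sequence
    layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy forward pass on random data, just to show the expected shapes
dummy_frames = np.random.rand(1, SEQ_LEN, H, W, C).astype("float32")
print(model.predict(dummy_frames).shape)   # (1, NUM_ACTIONS)
```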
All in all, dabbling with deep learning in games seems worth the punt, and one game studios should definitely think about taking. What do you think? Is incorporating deep learning techniques in games a scalable idea?

How 5G Mobile Data will propel Artificial Intelligence (AI) progress

Neil Aitken
02 Aug 2018
7 min read
Like it’s predecessors, 3G and 4G, 5G refers to the latest ‘G’ – Generation – of mobile technology. 5G will give us very fast - effectively infinitely fast - mobile data download bandwidth. Downloading a TV show to your phone over 5G, in its entirety, in HD, will take less than a second, for example. A podcast will be downloaded within a fraction of a second of you requesting it. Scratch the surface of 5G, however, and there is a great deal more to see than just fast mobile data speeds.  5G is the backbone on which a lot of emerging technologies such as AI, blockchains, IoT among others will reach mainstream adoption. Today, we look at how 5G will accelerate AI growth and adoption. 5G will create the data AI needs to thrive One feature of 5G with ramifications beyond data speed is ‘Latency.’ 5G offers virtually ‘Zero Latency’ as a service. Latency is the time needed to transmit a packet of data from one device to another. It includes the period of time between when the request was made, to the time the response is completed. [caption id="attachment_21251" align="aligncenter" width="580"] 5G will be superfast – but will also benefit from near zero ‘latency’[/caption] Source: Economist At the moment, we keep files (music, pictures or films) in our phones’ memory permanently. We have plenty of processing power on our devices. In fact, the main upgrade between phone generations these days is a faster processor. In a 5G world, we will be able to use cheap parts in our devices – processors and memory in our new phones. Data downloads will be so fast, that we can get them immediately when we need them. We won’t need to store information on the phone unless we want to.  Even if the files are downloaded from the cloud, because the network has zero latency – he or she feels like the files are on the phone. In other words, you are guaranteed a seamless user experience in a 5G world. The upshot of all this is that the majority of any new data which is generated from mobile products will move to the cloud for storage. At their most fundamental level, AI algorithms are pattern matching tools. The bigger the data trove, the faster and better performing the results of AI analysis is. These new structured data sets, created by 5G, will be available from the place where it is easiest to extract and manipulate (‘Analyze’) it – the cloud. There will be 100 billion 5G devices connected to cellular networks by 2025, according to Huawei. 5G is going to generate data from those devices, and all the smartphones in the world and send it all back to the cloud. That data is the source of the incredible power AI gives businesses. 5G driving AI in autonomous vehicles 5G’s features and this Cloud / Connected Device future, will manifest itself in many ways. One very visible example is how 5G will supercharge the contribution, especially to reliability and safety, that AI can make to self driving cars. A great deal of the AI processing that is required to keep a self driving car operating safely, will be done by computers on board the vehicle. However, 5G’s facilities to communicate large amounts of data quickly will mean that any unusual inputs (for example, the car is entering or in a crash situation) can be sent to bigger computing equipment on the cloud which could perform more serious processing. 
5G driving AI in autonomous vehicles

5G's features, and this cloud and connected-device future, will manifest themselves in many ways. One very visible example is how 5G will supercharge the contribution, especially to reliability and safety, that AI can make to self-driving cars. A great deal of the AI processing required to keep a self-driving car operating safely will be done by computers on board the vehicle. However, 5G's capacity to communicate large amounts of data quickly means that any unusual inputs (for example, the car is entering or in a crash situation) can be sent to bigger computing equipment in the cloud, which could perform more serious processing. Zero latency is important in these situations for commands which might come from a centralized accident computer designed to increase safety – for example, issuing the command 'brake'. In fact, according to manufacturers, it's likely that, ultimately, groups of cars will be coordinated by AI using 5G to control the vehicles, in a model known as swarm computing.

5G will make AI much more useful with 'context' - Intel

5G will power AI by providing location information which can be considered in establishing the context of questions asked of the tool, according to Intel's Data Center Group. For example, asking your digital assistant where the tablets are means something different depending on whether you're in a pharmacy or an electronics store. The nature of 5G is that it's a mobile service: location information is both key to context and an inherent element of the information sent over a 5G connection. By communicating where they are, 5G sensors will help AI-based digital assistants solve our everyday problems.

5G phones will enable AI calculations on 'Edge' network devices - ARM

5G will push some processing to the 'edge' of the network, to be handled by a growing range of AI chips on the processors of phones. In this regard, smartphones, like any Internet of Things connected processor 'in the field', are simply an 'AI platform'. Handset manufacturers are including new software features in their phones that customers love to use, including AI-based search interfaces which allow them to search for images containing 'heads' and see an accurate list.

[Image: Arm is designing new types of chips targeted at AI calculations on 'Edge' network devices. Source: Arm's Project Trillium]

ARM, one of the world's largest CPU producers, is creating specific, dedicated AI chipsets, often derived from the technology behind its graphics processing units. These chips already process AI-based calculations up to 50 times faster than standard microprocessors, and their performance is set to improve 50x over the next 3 years, according to the company.

AI is part of 5G networks - Huawei

Huawei describes itself as an AI company (as well as a number of other things, including a handset manufacturer). It is one of the biggest electronics manufacturers in China and is currently selling networking products to the world's telecommunications companies as they prepare to roll out their 5G networks. Based on the insight that 70% of network system downtime comes from human error, Huawei is now removing humans from the network management component of its work, to the degree that it can. Instead, it is implementing automated, AI-based predictive maintenance systems to increase data throughput across the network and reduce downtime.

The way we use cellular networks is changing. Different applications require different backend traffic to be routed across the network, depending on the customer need. Someone watching video, for example, has a far lower tolerance for a disruption to data throughput (the 'stuttering Netflix' effect) than a connected IoT sensor which is trying to communicate the temperature reading of a thermometer. Huawei's network maintenance AI software optimizes for these different packet needs, maintaining the near-zero latency the standard demands, at a lower cost.
AI-based network maintenance completes a virtuous loop in which 5G devices on new cellular networks give AI the raw data it needs, including valuable context information, and AI in turn helps data flow across the 5G network better.

Bringing it all together

5G and artificial intelligence (AI) are revolutionary technologies that will evolve alongside each other. 5G isn't just fast data; it's one of the most important technologies ever devised. Just as the smartphone did, it will fundamentally change how we relate to information, partly because it will link us to thousands of newly connected devices on the Internet of Things. Ultimately, it could be the secondary effects of 5G, the network's almost zero latency, which provide the largest benefit – by creating structured data sets from billions of connected devices in an easily accessible place, the cloud, which can be used to fuel the AI algorithms that run on them. Networking equipment makers, chip manufacturers and governments have all connected the importance of AI with the potential of 5G. Commercial sales of 5G start in the US, UK and Australia in 2019.

7 Popular Applications of Artificial Intelligence in Healthcare
Top languages for Artificial Intelligence development
Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT

GDPR is good for everyone: businesses, developers, customers

Richard Gall
14 Apr 2018
5 min read
Concern around GDPR is palpable, but time is running out. It appears that many businesses don't really know what they're doing. At his Congressional hearing, Mark Zuckerberg's notes read "don't say we're already GDPR compliant" - if Facebook isn't ready yet, how could the average medium-sized business be?

But the truth is that GDPR couldn't have come at a better time. Thanks in part to the Facebook and Cambridge Analytica scandal, the question of user data and online privacy has never been so audible within public discourse. That level of public interest wasn't around a year ago. Ultimately, GDPR is the best way to tackle these issues. It forces businesses to adopt a different level of focus - and care - towards their users. It forces everyone to ask: what counts as personal data? Who has access to it? Who is accountable for the security of that data? These aren't just points of interest for EU bureaucrats. They are fundamental questions about how businesses own and manage relationships with customers.

GDPR is good news for web developers

In turn, this means GDPR is good news for those working in development too. If you work in web development or UX, it's likely that you've experienced frustration when working against the requirements and feedback of senior stakeholders. Often, the needs of users are misunderstood or even ignored in favour of what the business needs. This is especially true when management lacks technical knowledge and makes too many assumptions. At its worst, it can lead down the path of 'dark patterns', where UX is designed in such a way as to 'trick' customers into behaving in a certain way. But even when intentions aren't that evil, a mindset that refuses to take user privacy - and simple user desires - seriously can be damaging.

Ironically, the problems this sort of negligence causes aren't just legal ones. It's also bad for business. That's because when you engineer everything around what's best for the business in a crude and thoughtless way, you make life hard for users and customers. This means:

Customers simply have a bad experience and could get a better one elsewhere
Customers lose trust
Your brand is damaged

GDPR will change bad habits in businesses

What GDPR does, then, is force businesses to get out of the habit of lazy thinking. It makes issues around UX and data protection so much more important than they otherwise would be. It also forces businesses to start taking the way software is built and managed much more seriously. This could mean a change in the way that developers work within their businesses in the future. Siloes won't just be inefficient; they might lead to a legal crisis. Development teams will have to work closely with legal, management and data teams to ensure that the software they are developing is GDPR compliant. Of course, this will also require a good deal of developer training so that teams are fully briefed on the new landscape. It also means we might see new roles like Chief Data Officer becoming more prominent. But it's worth remembering that, for non-developers, GDPR is also going to require much more technical understanding.
If recent scandals have shown us anything, it's that a lot of people don't fully understand the capabilities that even the smallest organizations have at their disposal. GDPR will force the non-technical to become more informed about how software and data interact - and, most importantly, how software can sometimes exploit or protect users.

GDPR will give developers a renewed focus on the code they write

Equally, for developers, GDPR forces a renewed focus on the code they write. Discussions around standards have been a central point of contention in the open source world for some time; there has always been an unavoidable, quiet tension between innovation and standards compliance. Writing in Smashing Magazine, digital law expert Heather Burns has some very useful advice on this:

"Your coding standards must be preventive as well. You should disable unsafe or unnecessary modules, particularly in APIs and third-party libraries. An audit of what constitutes an unsafe module should be about privacy by design, such as the unnecessary capture and retention of personal data, as well as security vulnerabilities. Likewise, code reviews should include an audit for privacy by design principles, including mapping where data is physically and virtually stored, and how that data is protected, encrypted, and sandboxed."

Sure, all of this seems like a headache, but it should make life better for users and customers. And while it might seem frustrating not to be able to track users in the way that we might have in the old world, by forcing everyone to focus on what users really want - not what we want them to want - we'll ultimately get to a place that's better for everyone.
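To ground Burns' point about unnecessary capture and retention, here is a minimal, hypothetical Python sketch of privacy-by-design data minimization: keep only the fields a feature actually needs, and pseudonymize the identifier before storage. The field names and the allowed-field list are invented for the example.

```python
# A minimal, hypothetical illustration of data minimization: drop the fields
# this feature does not need, and replace the email with a salted hash so the
# stored record no longer contains a direct identifier. Field names are
# invented for the example.
import hashlib

ALLOWED_FIELDS = {"country", "signup_date"}   # assumption: all this feature needs

def minimize(record: dict, salt: bytes) -> dict:
    """Keep only allowed fields and pseudonymize the user identifier."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    kept["user_ref"] = hashlib.sha256(salt + record["email"].encode()).hexdigest()
    return kept

print(minimize({"email": "a@example.com", "country": "UK",
                "signup_date": "2018-04-14", "phone": "07700 900000"},
               salt=b"per-deployment-secret"))
```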

Quantum A.I. : An intelligent mix of Quantum+A.I.

Sugandha Lahoti
29 Nov 2017
6 min read
“Mixed reality, Artificial intelligence and Quantum computing are the three path-breaking technologies that will shape the world in the coming years.” - Satya Nadella, CEO, Microsoft

The biggest scientific and technological revolution of the decade, Artificial Intelligence, has the potential to help human civilization flourish like never before. At the surface level, it seems to be all about automated functioning and intelligent coding. But at the core, algorithms require huge amounts of data, quality training, and complex models, and processing those algorithmic computations needs hardware. Presently, digital computers operate on classical Boolean logic. Quantum computing is the next-gen hardware and software technology, based on the laws of quantum mechanics: quantum computers use qubits instead of Boolean bits in order to speed up calculations. The combination of these two path-breaking technologies, AI and quantum computing, is said to be the future of technology. Quantum A.I. is all about applying the fast computation capabilities of quantum computers to artificial intelligence based applications.

Understanding Quantum Computing

Before we jump into Quantum A.I., let us first understand quantum computing in more detail. In physics terminology, quantum mechanics is the study of nature at the atomic and subatomic level, in contrast to classical physics, which describes nature at the macroscopic level. At the quantum level, particles may take on more than one state at the same time, and quantum computing utilizes this fundamental quantum phenomenon to process information.

A quantum computer stores information in the form of quantum bits, known as qubits, analogous to the binary bits used by digital computers. However, the state of a qubit is not fixed: it can encode information as both 1s and 0s with the help of the quantum mechanical principles of superposition, entanglement, and tunneling. The use of quantum logic enables a quantum computer to solve certain problems exponentially faster than present-day computers, and physicists and researchers consider quantum computers powerful enough to eventually outperform present processors.

Quantum Computing for Artificial Intelligence

However smart AI algorithms are, high-performance hardware is essential for them to function. Current GPUs allow algorithms to run at a workable speed, but that is a fraction of what quantum computing promises. A quantum computing approach could give AI algorithms exponential speedups over existing digital computers, easing problems related to machine learning, clustering, classification and finding useful patterns in large quantities of data. Quantum learning combined with AI could speed up ML and AI algorithms in order to develop systems which can better interpret, improve, and understand large data sets.

Specific use cases in the area of Quantum AI:

Random Number Generation

Classical, digital computers are only able to generate pseudo-random numbers, which rely on computational difficulty for their unpredictability, making them crackable using quantum computers. Certain machine learning algorithms require pure random numbers to generate ideal results, particularly for financial applications. Quantum systems have a mechanism to generate pure random numbers as required by such machine learning applications. QRNG (Quantum Random Number Generator) is a quantum computer by Certes Networks, used for generating high-level random numbers for secure encryption key generation.
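The underlying idea is easy to see in toy form. The following Python snippet classically simulates a single qubit put into an equal superposition and then measured. A real QRNG draws its randomness from quantum hardware rather than from a pseudo-random generator, so this is an illustration of the principle only.

```python
# A toy, classical simulation of the idea behind quantum random number
# generation: apply a Hadamard gate to |0> to create an equal superposition,
# then "measure" it. Real QRNG hardware gets its randomness from physics,
# not from numpy's pseudo-random sampling used here for illustration.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])                    # the |0> state

state = H @ ket0                               # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                     # Born rule: [0.5, 0.5]

bits = np.random.choice([0, 1], size=16, p=probs)
print(probs, "".join(map(str, bits)))
```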
Quantum-enhanced Reinforcement Learning

Reinforcement learning is an area of artificial intelligence in which agents learn about an environment and take actions to achieve rewards. The initial training process, and choosing an optimal path, is usually time consuming. With the help of a quantum agent, the training time reduces dramatically. Additionally, a quantum agent ends each learning process with a thorough description of the environment, which is marked as an advancement over the classical approach, where reinforcement learning schemes are typically model-free.

Quantum-Inspired Neural Nets

Quantum neural networks leverage ideas from quantum theory for a fuzzy logic based neural network implementation. Current neural networks for big data applications are generally difficult to train, as they use a feedback loop to update parameters during the training phase. In quantum computers, quantum effects such as interference and entanglement can be used to update parameters quickly during the training phase, easing the entire training process.

Big Data Analytics

Quantum computers have the ability to handle the huge amounts of data being generated, which will continue to grow at an exponential rate. Using quantum computing techniques for big data analytics, useful insights would be within every individual's reach. This could lead to better portfolio management, optimal routing for navigation, the best possible treatments, personalized medication, and so on. Empowering big data analytics with quantum computing will ease the sampling, optimizing, and analyzing of large quantities of data, giving businesses and consumers better decision-making ability.

These are a few examples of Quantum AI's capabilities. Quantum computers powered by artificial intelligence are set to have a tremendous impact in the fields of science and engineering.

Ongoing Research and Implementation

Google plans to build a 49-qubit quantum chip by the end of 2017. Microsoft's CEO, during his keynote session at Microsoft Ignite, announced a new programming language designed to work on a quantum simulator as well as a quantum computer. In this rat race, IBM successfully built and measured a 50-qubit quantum computer. Additionally, Google is collaborating with NASA to release a number of research papers in the Quantum A.I. domain. Rigetti Computing plans to devise a computer that will leverage quantum physics for applications in artificial intelligence and chemistry simulations, offering a cloud-based service, along the lines of Google and Microsoft, for remote usage. Volkswagen, the German automaker, plans to collaborate with Google's quantum AI team to develop new-age digital features for cars and an intelligent traffic-management system. It is also contemplating building AI systems for autonomous cars.

Future Scope

In the near future, high-level quantum computers will help in the development of complex AI models with ease. Such quantum-enhanced AI algorithms will influence application development in fields such as finance, security, healthcare, molecular science, automobiles and manufacturing. Artificial intelligence married to quantum computing is said to be the key to a brighter, more tech-oriented future. A future that will take intelligent information processing to a whole new altitude.

Is Facebook planning to spy on you through your mobile’s microphones?

Amarabha Banerjee
16 Jul 2018
3 min read
You must have heard about the recent Cambridge Analytica scandal involving Facebook and user data theft. In its aftermath, many have become cautious about using Facebook, and are wondering how safe their personal data is going to be. Now, Facebook has filed a patent for a technology that would allow an ambient audio signal to activate your mobile phone's microphone remotely and record without you even knowing. This news definitely comes as a shock, especially after Facebook's Senate hearing earlier this year and its apologetic messages regarding the Cambridge Analytica scandal. If you weren't taking your data privacy seriously, it's high time you did.

According to Facebook, this is how the patent-pending tech would work:

Smartphones can detect signals outside the range of human perception, meaning we can neither hear nor see those signals.
Advertisements on TV or on other devices would be preloaded with such signals.
When your smartphone detects one of these hidden signals from an advert or any other commercial, it would automatically activate the phone's microphone and start recording ambient noise and sounds.
The sound recorded would include everything in the background, from your normal conversations to the ambient noise of the programme or any other kind of noise.
This would be stored online and sent back to Facebook for analysis.

Facebook claims it would only look at the user's reaction to the advert. For example, if the ambient advert is heard in the background, it means the users moved away from it after seeing it. If they change channels, that means they are not interested either in the advert or in the product. If the ambient sound is direct, that means the users stayed on the couch as the ad was playing. This would give Facebook a rich set of data on which ads people are more interested in watching, and also a count of the people watching a particular ad. This data in turn would help Facebook place the right kind of ads for its users, with prior knowledge of their interest in them.

All of this is explained from the point of view of Facebook, which at the moment sounds very idealistic. Do we really believe that Facebook is applying for this patent with such naive intentions, to save our time from unwanted ads and show us the ads that matter to us? Or is there something more devious involved? The capability to listen to our private conversations, record them unknowingly and then save them online with our identities attached sounds more like a plot from a Hollywood espionage movie. The patent was filed back in 2016 but has resurfaced in discussions now. The only slightly comforting factor is that Facebook is not actively pursuing this patent. Does that mean a change of heart? Or is it a temporary pause that will end once the current tensions are doused?

The Cambridge Analytica scandal and ethics in data science
Alarming ways governments are using surveillance tech to watch you
F8 AR Announcements