
Tech Guides

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

Amey Varangaonkar
30 Mar 2018
4 min read
Unity has undoubtedly been one of the leaders in cross-platform development, going from strength to strength in building visually stimulating 2D and 3D games and simulations. With Artificial Intelligence revolutionizing the way games are developed, Unity has recognized the power of Machine Learning and introduced Unity Machine Learning Agents. With this, it plans to empower game developers and researchers in their quest to develop intelligent games, robotics and simulations.

What are Unity Machine Learning Agents?

Traditionally, game developers have hard-coded the behaviour of game agents. Although effective, this is a tedious task and it also limits the intelligence of the agents. Simply put, the agents are not smart enough. To overcome this obstacle, Unity has simplified the training process for game developers and researchers by introducing Unity Machine Learning Agents (ML-Agents for short). Through a simple Python API, game agents can now be trained with deep reinforcement learning, an advanced form of machine learning, to learn from their actions and modify their behaviour accordingly. These agents can then be used to dynamically modify the difficulty of the game.

How do they work?

As mentioned earlier, Unity ML-Agents are designed around deep reinforcement learning, a branch of machine learning in which agents learn from their own actions. (Figure: the reinforcement learning training cycle.) The learning environment configured for the ML-Agents consists of three primary objects:

- Agent: Every agent has a unique set of states, observations and actions within the environment, and is assigned rewards for particular events.
- Brain: A brain decides what action an agent should take in a particular scenario. Think of it as a regular human brain, which controls the bodily functions.
- Academy: This object contains all the brains within the environment.

To train the agents, a variety of scenarios are made possible by varying how the components described above are connected: single agents, simultaneous single agents, cooperative and competitive multi-agents, and more. You can read more about these possibilities on the official Unity blog. Apart from the way these agents are trained, Unity is also adding some new features to the ML-Agents, including:

- Monitoring the agents' decision-making to make it more accurate
- Curriculum learning, in which the complexity of the tasks is gradually increased to aid more effective learning
- Imitation learning, a newly introduced feature in which the agents simply mimic the actions we want them to perform, rather than learning on their own

What next for Unity Machine Learning Agents?

Unity recently announced the release of the v0.3 beta SDK of ML-Agents, and has been making significant progress in developing smarter, more intelligent game agents for use with the Unity game engine. Still very much in the research phase, these agents can also serve as an example for academic researchers studying the behaviour of trained models in different environments and scenarios, where the variables associated with the in-game physics and visual appearance can be altered.
Going forward, these agents could also be used by enterprises for large-scale simulations, in robotics, and in the development of autonomous vehicles. These are interesting times for game developers, and for Unity in particular, in the quest to develop smarter, cutting-edge games. The inclusion of machine learning in its game development strategy is a terrific move, although it will take some time for this to be perfected and incorporated seamlessly. Nonetheless, all the research and innovation being put in this direction certainly seems well worth it!
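To make the training loop concrete, here is a minimal, hedged sketch of the observe-act-reward cycle that deep reinforcement learning relies on. The environment class and its method names below are hypothetical stand-ins rather than the actual ML-Agents Python interface (which has changed across releases), so treat this purely as an illustration of the interaction loop between an agent, its brain (the policy), and the environment.

```python
import numpy as np

class ToyEnv:
    """Hypothetical stand-in for a game environment exposed over a Python API."""
    def reset(self):
        return np.zeros(4)                      # initial observation
    def step(self, action):
        obs = np.random.randn(4)                # next observation from the game
        reward = 1.0 if action == 1 else 0.0    # reward signal for the event
        done = np.random.rand() < 0.05          # episode termination flag
        return obs, reward, done

def policy(obs, weights):
    """A trivial linear 'brain': pick the action with the higher score."""
    return int(np.argmax(weights @ obs))

env = ToyEnv()
weights = np.random.randn(2, 4) * 0.1

for episode in range(10):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(obs, weights)           # the brain decides the action
        obs, reward, done = env.step(action)    # the agent acts, the environment responds
        total += reward
        # A real trainer (e.g. PPO) would update `weights` from (obs, action, reward) here.
    print(f"episode {episode}: total reward {total:.1f}")
```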

Slow down to learn how to code faster

Packt Editorial Staff
29 Mar 2018
6 min read
Nowadays, it seems like everyone wants to do things faster. We want to pay without taking out a credit card or cash. Social media lets us share images and videos from our lives in a split second. And we get frustrated if Netflix takes more than 3 seconds to start streaming our latest TV show binge. However, if you want to learn how to code faster, I'm going to present an odd idea: go slower. This has been taken from Skill Up: A Software Developer's Guide to Life and Career by Jordan Hudgens.

This may seem like a counterintuitive concept. After all, don't coding bootcamps, even DevCamp where I teach, tell you how you can learn how to code in a few months? Well yes, and research shows that 8 weeks is a powerful number when it comes to learning. The Navy SEAL training program specifically chose 8 weeks as its timeframe for conditioning candidates. And if you search the web for the phrase "8 week training programs", you'll find courses ranging from running 10ks to speaking Spanish fluently. So yes, I'm a huge believer that individuals can learn an incredible amount of information in a short period of time. But what I'm talking about here is becoming more deliberate when it comes to learning new information.

Learn how to code faster

If you're like me, when you learn a new topic the first thing you'll do is either move on to the next topic or repeat the concept as quickly as humanly possible. For example, when I learn a new Ruby or Scala programming method I'll usually jump right into using it in as many different situations as possible. However, I've discovered that this may not be the best approach, because it's very short-sighted.

Your default mind is stopping you from coding faster

When it comes to learning how to code faster, one of the most challenging requirements is moving knowledge from our short-term memory to our long-term memory. Remember the last time you learned a programming technique. Do you remember how easy it felt when you repeated what the instructor taught? The syntax seemed straightforward and it probably seemed like there was no way you would forget how to implement the feature. But after a few days, if you try to rebuild the component, is it easy or hard? If you're like me, the concept that seemed incredibly easy only a few days ago now causes you to draw a blank. But don't worry. This doesn't mean that we're incompetent. Instead, it means that this piece of knowledge wasn't given the chance to move from our short-term to our long-term memory.

Hacking the mind

So, if our default mindset is to forget what we've learned after a few days (or a few minutes), how can we learn anything? This is where our brain's default programming comes into play and where we can hack the way that we learn. I'm currently teaching myself the TypeScript programming language. TypeScript is the language that's recommended for Angular 2 development, so I thought it would be a good next language to learn. However, instead of taking my default approach, which is to slam through training guides and tutorials, I'm taking a more methodical approach.

Slowing it down

To learn TypeScript, I'm going through a number of different books and videos. And as I follow along with the guides, as soon as I learn a new topic I completely stop. I'll stand up, write the new component on one of my whiteboards, and actually write the program out by hand. After that, I type the program out on the keyboard... very slowly. So slowly that I know I could go around 4-5x faster.
But by taking this approach I'm forcing my mind to think about the new concept instead of rushing through it. When it comes to working with our long-term memory, this approach is more effective than simply flying through a concept, because it forces our minds to think through each keystroke. That means when it comes to actually writing code, it will come much more naturally to you.

Bend it like Beethoven

I didn't learn this technique from another developer. Instead, I heard about how one of the most successful classical music institutions in the world, the Meadowmount School of Music in New York, teaches students new music compositions. As a game, the school gives out portions of the sheet music. Where most schools will give each student the full song, Meadowmount splits the music up into pieces. From there, it hands each student a single piece to focus on. From that point, the student will only learn to play that single piece of music. They will start out very slowly. They won't rush through notes, because they don't even know how the notes fit into the song. This approach teaches them how to concentrate on learning a new song one note at a time. From that point, the students trade note cards and then focus on learning another piece of the song. They continue trading cards until each student has been able to work through the entire set. By forcing the students to break a song into pieces, the school ensures they no longer have any weak points in the song. Instead, the students will have focused on the notes themselves. From this point, it's trivial for all the students in the class to combine their knowledge and learn how to play the song all the way through.

From classical music to coding

So, can this approach help you learn how to code faster? I think so. The research shows that by slowing down and breaking concepts into small pieces, it's easier for students to transfer information from short-term to long-term memory. So, the next time you are learning a coding concept, take a step back. Instead of simply copying what the instructor is teaching, write it down on a piece of paper. Walk through exactly what is happening in the program. If you take this approach, you will discover that you're no longer simply following a teacher's set of steps, but that you'll actually learn how the concepts work. And if you get to the stage of understanding, you will be ready to transfer that knowledge to your long-term memory and remember it for good.

How we improved search with Algolia

Johti Vashisht
29 Mar 2018
2 min read
Packt prides itself on using the latest technology to deliver products and services to our customers effortlessly. That's why we've updated the search tool across our website to provide a more efficient and intuitive search experience with Algolia. We've loved building it and we're pretty confident you'll love using it too. Explore our content using our new search tool now.

What is Algolia and why do we love it?

Algolia is an incredibly powerful search platform. We love it because it's incredibly reliable and scalable. It's also a great platform for our development team to work with, which means a lot when we're working on multiple projects at the same time with tight deadlines.

Why you will love our new search

Our new search tool is fast and responsive, which means you'll be able to find the content you want quickly and easily. With the range of products on our website we know that can sometimes be a challenge - now you can go straight to the products best suited to your needs.

How we integrated Algolia with the Packt website

Back-end Algolia integration: We built a Node.js function, deployed as an AWS Lambda, that can read from multiple sources and then push data into DynamoDB. A trigger within DynamoDB then pushes the data into a transformation and finally into Algolia. This means we only update Algolia when the data has changed.

Search setup: We have 4 indices: the main index with relevance sorting, and replicas to sort by title, price and release date. The results have been tuned to show conceptually matching items first, such as relevant Tech Pages and Live Courses.

Front-end Algolia integration: To allow rapid development and deployment, we used CircleCI 2.0 to build and deploy our project into an AWS S3 bucket that sits behind a CloudFront CDN. The site is built using HTML, SCSS and plain JavaScript, together with Webpack for bundling and minification. We are using Algolia's InstantSearch.js library to show different widgets on the screen and Bootstrap for quickly implementing the design, which allowed us to put together the bulk of the site in a single day.
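As a rough illustration of the back-end flow described above (source record, transformation, then a push into an Algolia index), here is a hedged Python sketch. Packt's actual pipeline is a Node.js Lambda, and the index name, record shape and transformation below are invented for the example; the client calls assume the official algoliasearch Python package, not Packt's real configuration.

```python
from algoliasearch.search_client import SearchClient

# Placeholder credentials and index name -- not Packt's real values.
client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")
index = client.init_index("products")

def transform(record):
    """Shape a raw product record into the document we want to search on."""
    return {
        "objectID": record["sku"],            # stable ID so re-pushes update rather than duplicate
        "title": record["title"],
        "price": record["price"],
        "releaseDate": record["release_date"],
    }

def push_changed_records(raw_records):
    """Push only records flagged as changed, mirroring the 'update on change' idea."""
    docs = [transform(r) for r in raw_records if r.get("changed")]
    if docs:
        index.save_objects(docs)              # batched upsert into the Algolia index

push_changed_records([
    {"sku": "B01234", "title": "Example Book", "price": 27.99,
     "release_date": "2018-03-01", "changed": True},
])
```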

Should you move to Python 3? 7 Python experts' opinions

Richard Gall
29 Mar 2018
9 min read
Python is one of the most used programming languages on the planet. But when something is so established and popular across a number of technical domains, the pace of change slows. Moving to Python 3 appears to be a challenge for many development teams and organizations. So, is switching to Python 3 worth the financial investment, the training and the stress? Mike Driscoll spoke to a number of Python experts about whether developers should move to Python 3 for Python Interviews, a book that features 20 interviews with leading Python programmers and community contributors.

The transition to Python 3 can be done gradually

Brett Cannon (@brettsky), Python core developer and Principal Software Developer at Microsoft:

As someone who helped to make Python 3 come about, I'm not exactly an unbiased person to ask about this. I obviously think people should make the switch to Python 3 immediately, to gain the benefits of what has been added to the language since Python 3.0 first came out. I hope people realize that the transition to Python 3 can be done gradually, so the switch doesn't have to be abrupt or especially painful. Instagram switched in nine months, while continuing to develop new features, which shows that it can be done.

Anyone starting out with Python should learn Python 3

Steve Holden (@HoldenWeb), CTO of Global Stress Index and former chairman and director of The PSF:

Only when they need to. There will inevitably be systems written in 2.7 that won't get migrated. I hope that their operators will collectively form an industry-wide support group, to extend the lifetimes of those systems beyond the 2020 deadline for Python-Dev support. However, anyone starting out with Python should clearly learn Python 3, and that is increasingly the case.

Python 3 resolves a lot of inconsistencies

Glyph Lefkowitz (@glyph), founder of Twisted, a Python network programming framework, awarded The PSF's Community Service Award in 2017:

I'm on Python 3 in my day job now and I love it. After much blood, sweat and tears, I think it actually is a better programming language than Python 2 was. I think that it resolves a lot of inconsistencies. Most improvements should mirror quality-of-life issues, and the really interesting stuff going on in Python is all in the ecosystem. I absolutely cannot wait for a PyPy 3.5, because one of the real downsides of using Python 3 at work is that I now have to deal with the fact that all of my code is 20 times slower.

When I do stuff for the Twisted ecosystem, and I run stuff on Twisted's infrastructure, we use Python 2.7 as a language everywhere, but we use PyPy as the runtime. It is just unbelievably fast! If you're running services, then they can run with a tenth of the resources. A PyPy process will take 80 MB of memory, but once you're running that, it will actually take more memory per interpreter, but less memory per object. So if you're doing any Python stuff at scale, I think PyPy is super interesting.

One of my continued bits of confusion about the Python community is that there's this thing out there which, for Python 2 anyway, just makes all of your code 20 times faster. This wasn't really super popular; in fact, PyPy download stats still show that it's not as popular as Python 3, and Python 3 is really experiencing a huge uptick in popularity. I do think that, given that the uptake in popularity has happened, the lack of a viable Python 3 implementation for PyPy is starting to hurt it quite a bit. But it was around and very fast for a long time before Python 3 had even hit 10% of PyPy's downloads. So I keep wanting to predict that this is the year of PyPy on the desktop, but it just never seems to happen.

Most actively maintained libraries support Python 3

Doug Hellmann (@doughellmann), the man behind Python Module of the Week and a fellow of The PSF:

The long lifetime for Python 2.7 recognizes the reality that rewriting functional software based on backwards-incompatible upstream changes isn't a high priority for most companies. I encourage people to use the latest version of Python 3 that is available on their deployment platform for all new projects. I also advise them to carefully reconsider porting their remaining legacy applications, now that most actively maintained libraries support Python 3.

Migration from Python 2 to 3 is difficult

Massimo Di Pierro (@mdipierro), Professor at the School of Computing at DePaul University in Chicago and creator of web2py, an open source web application framework written in Python:

Python 3 is a better language than Python 2, but I think that migration from Python 2 to Python 3 is difficult. It cannot be completely automated and often it requires understanding the code. People do not want to touch things that currently work. For example, the str function in Python 2 converts to a string of bytes, but in Python 3, it converts to Unicode. So this makes it impossible to switch from Python 2 to Python 3 without actually going through the code and understanding what type of input is being passed to the function, and what kind of output is expected. A naïve conversion may work very well as long as you don't have any strange characters in your input (like byte sequences that do not map into Unicode). When that happens, you don't know if the code is doing what it was supposed to do originally or not. Consider banks, for example. They have huge codebases in Python, which have been developed and tested over many years. They are not going to switch easily, because it is difficult to justify that cost. Consider this: some banks still use COBOL. There are tools to help with the transition from Python 2 to Python 3. I'm not really an expert on those tools, so a lot of the problems I see may have a solution that I'm not aware of. But I still found that each time I had to convert code, the process was not as straightforward as I would like.

The divide between the worlds of Python 2 and 3 will exist well beyond 2020

Marc-Andre Lemburg (@malemburg), co-founder of The PSF and CEO of eGenix:

Yes, you should, but you have to consider the amount of work which has to go into a port from Python 2.7 to 3.x. Many companies have huge code bases written for Python 2.x, including my own company eGenix. Commercially, it doesn't always make sense to port to Python 3.x, so the divide between the two worlds will continue to exist well beyond 2020. Python 2.7 does have its advantages, because it became the LTS version of Python. Corporate users generally like these long-term support versions, since they reduce porting efforts from one version to the next. I believe that Python will have to come up with an LTS 3.x version as well, to be able to sustain success in the corporate world. Once we settle on such a version, this will also make a more viable case for a Python 2.7 port, since the investment will then be secured for a good number of years.
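Di Pierro's point about str is easy to see side by side. The snippet below is a minimal illustration (runnable under Python 3, with the Python 2 behaviour shown in comments) of why a mechanical 2-to-3 conversion can silently change what a function receives and returns.

```python
# Python 2: str was a byte string, and text literals were bytes by default.
#   >>> type("café")        # Python 2
#   <type 'str'>            # really a sequence of encoded bytes
#
# Python 3: str is Unicode text; bytes is a separate type.
text = "café"
data = text.encode("utf-8")          # explicit conversion to bytes

print(type(text))                    # <class 'str'>  (Unicode text)
print(type(data))                    # <class 'bytes'>
print(len(text), len(data))          # 4 characters vs 5 bytes

# Mixing the two types, which Python 2 tolerated, is now an explicit error:
try:
    text + data
except TypeError as err:
    print("TypeError:", err)
```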
Python 3 has tons of amazing new features

Barry Warsaw (@pumpichank), member of the Python Foundation team at LinkedIn, former project leader of GNU Mailman:

We all know that we've got to get on Python 3, so Python 2's life is limited. I made it a mission inside of Ubuntu to try to get people onto Python 3. Similarly, within LinkedIn, I'm really psyched, because all of my projects are on Python 3 now. Python 3 is so much more compelling than Python 2. You don't even realize all of the features that you have in Python 3. One of the features that I think is really awesome is the async I/O library, asyncio. I'm using it in a lot of things and think it is a very compelling new feature that started with Python 3.4. Even with Python 3.5, with the new async keywords for I/O-based applications, asyncio was just amazing. There are tons of these features that, once you start to use them, mean you just can't go back to Python 2. It feels so primitive.

I love Python 3 and use it exclusively in all of my personal open source projects. I find that dropping back to Python 2.7 is often a chore, because so many of the cool things you depend on are just missing, although some libraries are available in Python 2 compatible backports. I firmly believe that it's well past the time to fully embrace Python 3. I wouldn't write a line of new code that doesn't support it, although there can be business reasons to continue to support existing Python 2 code. It's almost never that difficult to convert to Python 3, although there are still a handful of dependencies that don't support it, often because those dependencies have been abandoned. It does require resources and careful planning though, but any organization that routinely addresses technical debt should have conversion to Python 3 in its plans.

That said, the long life of Python 2.7 has been great. It's provided two important benefits, I think. The first is that it provided a very stable version of Python, almost a long-term support release, so folks didn't have to even think about changes in Python every 18 months (the typical length of time new versions are in development). Python 2.7's long life also allowed the rest of the ecosystem to catch up with Python 3. So the folks who were very motivated to support it could sand down the sharp edges and make it much easier for others to follow. I think we now have very good tools, experience, and expertise in how to switch to Python 3 with the greatest chance of success. I think we reached the tipping point somewhere around the Python 3.5 release. Regardless of what the numbers say, we're well past the point where there's any debate about choosing Python 3, especially for new code. Python 2.7 will end its life in mid-2020 and that's about right, although not soon enough for me! At some point, it's just more fun to develop in and on Python 3. That's where you are seeing the most energy and enthusiasm from Python developers.
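Warsaw singles out asyncio and the async/await keywords. As a hedged, minimal sketch of why they feel compelling for I/O-bound work, the example below fires off several simulated network calls concurrently on a single thread; the sleeps stand in for real I/O, and the names are invented for illustration.

```python
import asyncio
import time

async def fetch(name, delay):
    """Simulate an I/O-bound call (e.g. an HTTP request) with a sleep."""
    await asyncio.sleep(delay)          # yields control while 'waiting on the network'
    return f"{name}: done after {delay}s"

async def main():
    start = time.perf_counter()
    # Run three 'requests' concurrently; total time is roughly the slowest one, not the sum.
    results = await asyncio.gather(
        fetch("profile", 1.0),
        fetch("orders", 1.5),
        fetch("recommendations", 0.5),
    )
    for line in results:
        print(line)
    print(f"elapsed: {time.perf_counter() - start:.1f}s")

asyncio.run(main())                     # asyncio.run is available from Python 3.7 onwards
```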

The evolution of cybercrime

Packt Editorial Staff
29 Mar 2018
4 min read
A history of cybercrime

As computer systems have become integral to the daily functioning of businesses, organizations, governments, and individuals, we have learned to put a tremendous amount of trust in these systems. As a result, we have placed incredibly important and valuable information on them. History has shown that things of value will always be a target for a criminal, and cybercrime is no different. As people flood their personal computers, phones, and so on with valuable data, they put a target on that information for the criminal to aim for, in order to gain some form of profit from the activity. In the past, for a criminal to gain access to an individual's valuables, they would have to conduct a robbery in some shape or form. In the case of data theft, the criminal would need to break into a building and sift through files looking for the information of greatest value and profit. In our modern world, the criminal can attack their victims from a distance, and due to the nature of the internet, these acts are likely never to meet retribution.

Cybercrime in the 70s and 80s

In the 70s, we saw criminals taking advantage of the tone system used on phone networks. The attack was called phreaking, and involved the attacker reverse-engineering the tones used by the telephone companies to make long-distance calls. In 1988, the first computer worm made its debut on the internet and caused a great deal of destruction to organizations. This first worm was called the Morris worm, after its creator Robert Morris. While this worm was not originally intended to be malicious, it still caused a great deal of damage. The U.S. Government Accountability Office estimated that the damage could have been as high as $10,000,000.

1989 brought us the first known ransomware attack, which targeted the healthcare industry. Ransomware is a type of malicious software that locks a user's data until a ransom is paid, at which point a cryptographic unlock key is issued. In this attack, an evolutionary biologist named Joseph Popp distributed 20,000 floppy disks across 90 countries, claiming the disks contained software that could be used to analyze an individual's risk factors for contracting the AIDS virus. The disks, however, contained a malware program that, when executed, displayed a message requiring the user to pay for a software license. Ransomware attacks have evolved greatly over the years, with the healthcare field still being a very large target.

The birth of the web and a new dawn for cybercrime

The 90s brought the web browser and email to the masses, which meant new tools for cybercriminals to exploit. This allowed the cybercriminal to greatly expand their reach. Up until this time, the cybercriminal needed to initiate a physical transaction, such as providing a floppy disk. Now cybercriminals could transmit virus code over the internet in these new, highly vulnerable web browsers. Cybercriminals took what they had learned previously and modified it to operate over the internet, with devastating results. They were also able to reach out and con people from a distance with phishing attacks. No longer was it necessary to engage with individuals directly. You could attempt to trick millions of users simultaneously, and even if only a small percentage of people took the bait, you stood to make a lot of money as a cybercriminal.

The 2000s brought us social media and saw the rise of identity theft. A bullseye was painted for cybercriminals with the creation of databases containing millions of users' personally identifiable information (PII), making identity theft the new financial piggy bank for criminal organizations around the world. This information, coupled with a lack of cybersecurity awareness from the general public, allowed cybercriminals to commit all types of financial fraud, such as opening bank accounts and credit cards in the name of others.

Cybercrime in a fast-paced technology landscape

Today we see that cybercriminal activity has only gotten worse. As computer systems have become faster and more complex, the cybercriminal has become more sophisticated and harder to catch. Today we have botnets: networks of private computers infected with malicious software that allow the criminal element to control millions of infected computer systems across the globe. These botnets let criminals overload organizational networks and hide their origin:

- We see constant ransomware attacks across all sectors of the economy
- People are constantly on the lookout for identity theft and financial fraud
- There are continuous news reports regarding the latest point-of-sale attack against major retailers and hospitality organizations

This is an extract from Information Security Handbook by Darren Death. Follow Darren on Twitter: @DarrenDeath.

Deep Learning in games - Neural Networks set to design virtual worlds

Amey Varangaonkar
28 Mar 2018
4 min read
Games these days are closer to reality than ever. Life-like graphics, smart gameplay and realistic machine-human interactions have led major game studios to up the ante when it comes to adopting the latest and most up-to-date tech for developing games. In fact, not so long ago, we shared with you a few interesting ways in which Artificial Intelligence is transforming the gaming industry. The inclusion of deep learning in games has emerged as one popular way to make games smarter. Deep learning can be used to enhance the realism and excitement in games by teaching the game agents how to behave more accurately, and in a more life-like manner.

We recently came across this interesting implementation of deep learning to play the game of FIFA 18, and we were quite impressed! Using just 2 layers of neural networks and with a limited amount of training, the bot that was developed managed to learn the basic rules of football (soccer). Not just that, it was also able to perform the basic movements and tasks in the game correctly. To achieve this, 2 neural networks were developed: a Convolutional Neural Network to detect objects within the game, and a second layer of LSTM (Long Short-Term Memory) network to specify the movements accordingly.

The same user also managed to leverage deep learning to improve the in-game graphics of FIFA 18. Using the deepfakes algorithm, he managed to swap the in-game face of one of the players with the player's real-life face. The reason? The in-game faces, although quite realistic, could be better and more realistic. The experiment ended up being nearly perfect, with the resulting face almost indistinguishable from the real one.

How did he do it?

After gathering some training data, which was basically images of players scraped off Google, the user trained two autoencoders that learnt the distinction between the in-game face and the real-world face. Then, using the deepfakes algorithm, the inputs were reversed, recreating the real-world face in the game itself. The difference is quite astonishing.

Apart from improving the gameplay and the in-game character graphics, deep learning can also be used to enhance the way opponents and adversaries interact with the player. Taking the FIFA example again, deep learning could be used to enhance the behaviour and appearance of the in-game crowd, who could react or cheer better according to their team's performance.

How can Deep Learning benefit video games?

The following are some of the clear advantages of implementing deep learning techniques in games:

- Highly accurate results can be achieved with more and more training data
- Manual intervention is minimal
- Game developers can focus on effective storytelling rather than on the in-game graphics

Another obvious question comes to mind at this stage, however: what are the drawbacks of implementing deep learning for games? A few come to mind immediately:

- The complexity of the training models can be quite high
- Images in games need to be generated in real time, which is quite a challenge
- The computation time can be quite significant
- The training dataset needed for accurate results can be quite humongous

With advancements in technology and better, faster hardware, many of the current limitations in developing smarter games can be overcome. Fast generative models can address the real-time generation of images, while faster graphics cards can take care of the model computation issue.
All in all, experimenting with deep learning in games seems like a punt well worth taking for game studios. What do you think? Is incorporating deep learning techniques in games a scalable idea?
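To make the two-network idea concrete, here is a hedged Keras sketch of a convolutional feature extractor feeding an LSTM that outputs game actions. The layer sizes, action count and input resolution are invented for illustration; the FIFA 18 bot described above is not reproduced here, so treat this as the general CNN-plus-LSTM pattern rather than that project's actual architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

NUM_ACTIONS = 8            # hypothetical set of in-game moves (pass, shoot, sprint, ...)
SEQ_LEN, H, W = 4, 96, 96  # a short sequence of downscaled game frames

# CNN applied to every frame in the sequence to extract visual features.
frame_encoder = models.Sequential([
    layers.Conv2D(16, 5, strides=2, activation="relu", input_shape=(H, W, 3)),
    layers.Conv2D(32, 3, strides=2, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.TimeDistributed(frame_encoder, input_shape=(SEQ_LEN, H, W, 3)),
    layers.LSTM(64),                                   # temporal context across frames
    layers.Dense(NUM_ACTIONS, activation="softmax"),   # probability over possible moves
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy batch: 2 frame sequences and the action chosen for each sequence.
frames = np.random.rand(2, SEQ_LEN, H, W, 3).astype("float32")
actions = np.array([1, 3])
model.fit(frames, actions, epochs=1, verbose=0)
print(model.predict(frames, verbose=0).shape)          # (2, NUM_ACTIONS)
```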

4 surprising things from Stack Overflow's 2018 survey

Richard Gall
27 Mar 2018
3 min read
This year's Stack Overflow survey features a wealth of insights on developers around the world. Some takeaways are worth noting and open the door to wider investigation. Here are 4 Stack Overflow survey highlights we think merit further discussion.

25% of developers think a regulatory body should be responsible for AI ethics

The developers who believed a regulatory body should be responsible for AI ethics were a minority - more believed developers themselves should be responsible for ethical decisions around the artificial intelligence that they help to build. However, the fact that 1 in 4 of Stack Overflow's survey respondents believe we need a regulatory body to monitor ethics in AI is not to be ignored - even if, for the most part, developers believe they are best placed to make ethical decisions, that feeling is far from unanimous. This means there is some unease about ethics and artificial intelligence that is, at the very least, worth talking about in more detail.

The ethics of code remains a gray area

There were a number of interesting questions around writing code for ethical purposes in this year's survey. 58.5% of respondents said they wouldn't write unethical code if they were asked to, 4.8% said they would, and 35.6% said it depends on what it is. Clearly, the notion of ethical code remains something that needs to be properly addressed within the developer and tech community. The recent Facebook and Cambridge Analytica scandal has only served to emphasize this. Equally interesting were the responses to the question around responsibility for unethical code: 57.5% said upper management was responsible, but 22.8% said it was 'the person who came up with the idea' and 19.7% said 'the developer who wrote it'.

Hackathons and coding competitions are a crucial part of developer learning

26% of respondents said they learned new skills in hackathons. When you compare that to the 35% of people who say they're getting on-the-job training, it's easy to see just how important a role hackathons play in the professional development of developers. A similar proportion (24.3%) said coding competitions were also an important part of their technical education. When you put the two together, there's clear evidence that software learning is happening in the community more than in the workplace. Arguably, today's organizations are growing and innovating on the back of developer curiosity and ingenuity.

Transgender and non-binary programmers contribute to open source at high rates

This will probably go largely unnoticed, but it's worth underlining. It was, in fact, one of the Stack Overflow survey's highlights: "developers who identify as transgender and non-binary contribute to open source at higher rates (58% and 60%, respectively) than developers who identify as men or women overall (45% and 33%)." This is a great statistic and one that's important to recognize amid the diversity problems within technology. It is, perhaps, a positive signal that things are changing.

The future of Python: 3 experts' views

Richard Gall
27 Mar 2018
7 min read
Python is the fastest growing programming language on the planet. This year's Stack Overflow survey produces clear evidence that it is growing at an impressive rate. And it's not really that surprising - versatile, dynamic, and actually pretty easy to learn, it's a language that is accessible and powerful enough to solve problems in a range of fields, from statistics to building APIs. But what does the future hold for Python? How will it evolve to meet the needs of its growing community of engineers and analysts? Read the insights from 3 Python experts on what the future might hold for the programming language, taken from Python Interviews, a book that features 20 conversations with leading figures from the Python community.

In the future, Python will spawn other more specialized languages

Steve Holden (@HoldenWeb), CTO of Global Stress Index and former chairman and director of The PSF:

I'm not really sure where the language is going. You hear loose talk of Python 4. To my mind though, Python is now at the stage where it's complex enough. Python hasn't bloated in the same way that I think the Java environment has. At that maturity level, I think it's rather more likely that Python's ideas will spawn other, perhaps more specialized, languages aimed at particular areas of application. I see this as fundamentally healthy and I have no wish to make all programmers use Python for everything; language choices should be made on pragmatic grounds. I've never been much of a one for pushing for change. Enough smart people are thinking about that already. So mostly I lurk on Python-Dev and occasionally interject a view from the consumer side, when I think that things are becoming a little too esoteric.

The needs of the Python community are going to influence where the language goes in future

Carol Willing (@WillingCarol), former director of The PSF, core developer of CPython, and Research Software Engineer at Project Jupyter:

I think we're going to continue to see growth in the scientific programming part of Python. So things that support the performance of Python as a language and async stability are going to continue to evolve. Beyond that, I think that Python is a pretty powerful and solid language. Even if you stopped development today, Python is a darn good language. I think that the needs of the Python community are going to feed back into Python and influence where the language goes. It's great that we have more representation from different groups within the core development team. Smarter minds than mine could provide a better answer to your question. I'm sure that Guido has some things in mind for where he wants to see Python go.

Mobile development has been an Achilles' heel for Python for a long time. I'm hoping that some of the BeeWare stuff is going to help with the cross-compilation. A better story in mobile is definitely needed. But you know, if there's a need then Python will get there. I think that the language is going to continue to move towards the stuff that's in Python 3. Some big code bases, like Instagram's, have now transitioned from Python 2 to 3. While there is much Python 2.7 code still in production, great strides have been made by Instagram, as they shared in their PyCon 2017 keynote. There's more tooling around Python 3 and more testing tools, so it's less risky for companies to move some of their legacy code to Python 3, where it makes business sense to.
It will vary by company, but at some point, business needs, such as security and maintainability, will start driving greater migration to Python 3. If you're starting a new project, then Python 3 is the best choice. New projects, especially those looking at microservices and AI, will further drive people to Python 3.

Organizations that are building very large Python codebases are adopting type annotations to help new developers

Barry Warsaw (@pumpichank), member of the Python Foundation team at LinkedIn, former project leader of GNU Mailman:

In some ways it's hard to predict where Python is going. I've been involved in Python for 23 years, and there was no way I could have predicted in 1994 what the computing world was going to look like today. I look at phones, IoT (Internet of Things) devices, and just the whole landscape of what computing looks like today, with the cloud and containers. It's just amazing to look around and see all of that stuff. So there's no real way to predict what Python is going to look like even five years from now, and certainly not ten or fifteen years from now. I do think Python's future is still very bright, but I think Python, and especially CPython, which is the implementation of Python in C, has challenges. Any language that's been around for that long is going to have some challenges. Python was invented to solve problems in the 90s, and the computing world is different now and is going to become different still. I think the challenges for Python include things like performance and multi-core or multi-threading applications. There are definitely people working on that stuff, and other implementations of Python may spring up, like PyPy, Jython, or IronPython.

Aside from the challenges that the various implementations have, one thing that Python has as a language, and I think this is its real strength, is that it scales along with the human scale. For example, you can have one person write up some scripts on their laptop to solve a particular problem that they have. Python's great for that. Python also scales to, let's say, a small open source project with maybe 10 or 15 people contributing. Python scales to hundreds of people working on a fairly large project, or thousands of people working on massive software projects. Another amazing strength of Python as a language is that new developers can come in and learn it easily and be productive very quickly. They can pull down completely new Python source code for a project that they've never seen before and dive in and learn it very easily and quickly.

There are some challenges as Python scales on the human scale, but I feel like those are being solved by things like type annotations, for example. On very large Python projects, where you have a mix of junior and senior developers, it can be a lot of effort for junior developers to understand how to use an existing library or application, because they're coming from a more statically-typed language. So a lot of organizations that are building very large Python codebases are adopting type annotations, maybe not so much to help with the performance of the applications, but to help with the onboarding of new developers. I think that's going a long way in helping Python to continue to scale on a human scale. To me, the language's scaling capacity and the welcoming nature of the Python community are the two things that make Python still compelling even after 23 years, and will continue to make Python compelling in the future.
I think if we address some of those technical limitations, which are completely doable, then we're really setting Python up for another 20 years of success and growth.
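Warsaw's point about type annotations easing onboarding is simple to illustrate. The function below is a minimal, invented example: the annotations don't change runtime behaviour, but they document the expected types for a newcomer and let tools such as mypy check call sites.

```python
from typing import Optional

def discounted_price(price: float, discount_pct: float, cap: Optional[float] = None) -> float:
    """Apply a percentage discount, optionally capping the amount taken off."""
    off = price * discount_pct / 100
    if cap is not None:
        off = min(off, cap)
    return round(price - off, 2)

print(discounted_price(39.99, 25))            # 29.99
print(discounted_price(39.99, 25, cap=5.0))   # 34.99
# A type checker (e.g. mypy) would flag a call like: discounted_price("39.99", 25)
```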

What is the difference between declarative and imperative programming?

Antonio Cucciniello
10 Mar 2018
4 min read
Declarative programming and imperative programming are two different approaches that offer different ways of working on a given project or application. But what is the difference between declarative and imperative programming? And when should you use one over the other?

What is declarative programming?

Let us first start with declarative programming. This is the form or style of programming in which we are most concerned with what we want as the answer, or what would be returned. Here, we as developers are not concerned with how we get there, simply with the answer that is received.

What is imperative programming?

Next, let's take a look at imperative programming. This is the form and style of programming in which we care about how we get to an answer, step by step. We want the same result ultimately, but we are telling the compiler to do things a certain way in order to achieve the correct answer we are looking for. (A short example contrasting the two styles appears at the end of this article.)

An analogy

If you are still confused, hopefully this analogy will clear things up for you. The analogy compares learning to do a task yourself with outsourcing the work to have someone else do it. If you are outsourcing work, you might not care about how the work is completed; rather, you care just what the final product or result of the work looks like. This is like declarative programming. You know exactly what you want and you program a function to give you just that. Let us say you did not want to outsource work for your business or project. Instead, you tell yourself that you want to learn this skill for the long term. Now, when attempting to complete the task, you care about how it is actually done. You need to know the individual steps along the way in order to get it working properly. This is similar to imperative programming.

Why you should use declarative programming

Reusability: Since the way the result is achieved does not necessarily matter here, the functions you build can be more general and could potentially be used for multiple purposes, not just one. Not rewriting code can speed up the program you are currently writing and any others that use the same functionality in the future.

Reducing errors: Given that in declarative programming you tend to write functions that do not change state, as you would in functional programming, the chances of errors arising are smaller and your application becomes more stable. The removal of side effects from your functions means you know exactly what comes in and what comes out, allowing for a more predictable program.

Potential drawbacks of declarative programming

Lack of control: In declarative programming, you may use functions that someone else created in order to achieve the desired results. But you may need specific things to happen behind the scenes to make your result come out properly. You do not have this control in declarative programming as you would in imperative programming.

Inefficiency: When the implementation is controlled by something else, you may have problems making your code efficient. In applications where there is a time constraint, you may need to program the individual steps yourself to make sure your program runs as efficiently as possible.

There are benefits and disadvantages to both forms. Overall, it is entirely up to you, the programmer, to decide which implementation you would like to follow in your code. If you are solely focused on the data, perhaps consider using the declarative programming style.
If you care more about the implementation and how something works, maybe stick to an imperative programming approach. More importantly, you can have a mix of both styles. It is extremely flexible for you. You are in charge here.
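As a minimal illustration of the contrast described above, here is the same task, summing the squares of the even numbers in a list, written both ways. The imperative version spells out every step; the declarative version states what result we want and leaves the iteration to the language.

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: describe *how*, step by step, mutating an accumulator.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n
print(total)  # 56

# Declarative: describe *what* we want; no explicit loop bookkeeping or mutation.
print(sum(n * n for n in numbers if n % 2 == 0))  # 56
```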

What is React.js and how does it work?

Packt
05 Mar 2018
9 min read
What is React.js?

React.js is one of the most talked-about JavaScript web frameworks in years. Alongside Angular, and more recently Vue, React is a critical tool that has had a big impact on the way we build web applications. But it's hard to find a better description of React.js than the single sentence on the project's home page: "A JavaScript library for building user interfaces."

It's a library. For building user interfaces. This is perfect because, more often than not, this is all we want. The best part about this description is that it highlights React's simplicity. It's not a mega framework. It's not a full-stack solution that's going to handle everything from the database to real-time updates over web socket connections. We don't actually want most of these pre-packaged solutions, because in the end, they usually cause more problems than they solve. Facebook sure did listen to what we want. This is an extract from React and React Native by Adam Boduch.

React.js is just the view. That's it.

React.js is generally thought of as the view layer in an application. You might have used a library like Handlebars or jQuery in the past. Just as jQuery manipulates UI elements, or Handlebars templates are inserted onto the page, React components change what the user sees. The following diagram illustrates where React fits in our frontend code. This is literally all there is to React. We want to render this data to the UI, so we pass it to a React component, which handles the job of getting the HTML into the page. You might be wondering what the big deal is. On the surface, React appears to be just another rendering technology. But it's much more than that. It can make application development incredibly simple. That's why it's become so popular.

React.js is simple

React doesn't have many moving parts for us to learn about and understand. The advantage of having a small API to work with is that you can spend more time familiarizing yourself with it, experimenting with it, and so on. The opposite is true of large frameworks, where all your time is devoted to figuring out how everything works. The following diagram gives a rough idea of the APIs that we have to think about when programming with React. React is divided into two major APIs. First, there's React DOM. This is the API that's used to perform the actual rendering on a web page. Second, there's the React component API. These are the parts of the page that are actually rendered by React DOM. Within a React component, we have the following areas to think about:

- Data: This is data that comes from somewhere (the component doesn't care where) and is rendered by the component.
- Lifecycle: These are methods that we implement to respond to changes in the lifecycle of the component. For example, the component is about to be rendered.
- Events: This is code that we write for responding to user interactions.
- JSX: This is the syntax of React components, used to describe UI structures.

Don't fixate on what these different areas of the React API represent just yet. The takeaway here is that React is simple. Just look at how little there is to figure out! This means that we don't have to spend a ton of time going through API details here. Instead, once you pick up the basics, you can spend more time on nuanced React usage patterns.

React has a declarative UI structure

React newcomers have a hard time coming to grips with the idea that components mix markup in with their JavaScript.
If you've looked at React examples and had the same adverse reaction, don't worry. Initially, we're all skeptical of this approach, and I think the reason is that we've been conditioned for decades by the separation of concerns principle. Now, whenever we see things mixed together, we automatically assume that this is bad and shouldn't happen. The syntax used by React components is called JSX (JavaScript XML). The idea is actually quite simple. A component renders content by returning some JSX. The JSX itself is usually HTML markup, mixed with custom tags for the React components.

What's absolutely groundbreaking here is that we don't have to perform little micro-operations to change the content of a component. For example, think about using something like jQuery to build your application. You have a page with some content on it, and you want to add a class to a paragraph when a button is clicked. Performing these steps is easy enough, but the challenge is that there are steps to perform at all. This is called imperative programming, and it's problematic for UI development. While this example of changing the class of an element in response to an event is simple, real applications tend to involve more than 3 or 4 steps to make something happen.

Read more: 5 reasons to learn React

React components don't require executing steps in an imperative way to render content. This is why JSX is so central to React components. The XML-style syntax makes it easy to describe what the UI should look like. That is, what are the HTML elements that this component is going to render? This is called declarative programming, and it is very well suited to UI development.

Time and data

Another area that's difficult for React newcomers to grasp is the idea that JSX is like a static string, representing a chunk of rendered output. Are we just supposed to keep rendering this same view? This is where time and data come into play. React components rely on data being passed into them. This data represents the dynamic aspects of the UI. For example, a UI element that's rendered based on a Boolean value could change the next time the component is rendered. Each time the React component is rendered, it's like taking a snapshot of the JSX at that exact moment in time. As our application moves forward through time, we have an ordered collection of rendered user interface components. In addition to declaratively describing what a UI should be, re-rendering the same JSX content makes things much easier for developers. The challenge is making sure that React can handle the performance demands of this approach.

Performance matters with React

Using React to build user interfaces means that we can declare the structure of the UI with JSX. This is less error-prone than the imperative approach of assembling the UI piece by piece. However, the declarative approach does present us with one challenge: performance. For example, having a declarative UI structure is fine for the initial rendering, because there's nothing on the page yet. So the React renderer can look at the structure declared in JSX, and render it into the browser DOM. On the initial render, React components and their JSX are no different from other template libraries. For instance, Handlebars will render a template to HTML markup as a string, which is then inserted into the browser DOM. Where React differs from libraries like Handlebars is when data changes, and we need to re-render the component.
Handlebars will just rebuild the entire HTML string, the same way it did on the initial render. Since this is problematic for performance, we often end up implementing imperative workarounds that manually update tiny bits of the DOM. What we end up with is a tangled mess of declarative templates and imperative code to handle the dynamic aspects of the UI. We don't do this in React. This is what sets React apart from other view libraries. Components are declarative for the initial render, and they stay this way even as they're re-rendered. It's what React does under the hood that makes re-rendering declarative UI structures possible.

React has something called the virtual DOM, which is used to keep a representation of the real DOM elements in memory. It does this so that each time we re-render a component, it can compare the new content to the content that's already displayed on the page. Based on the difference, the virtual DOM can execute the imperative steps necessary to make the changes. So not only do we get to keep our declarative code when we need to update the UI, React will also make sure that it's done in a performant way. When you read about React, you'll often see words like diffing and patching. Diffing means comparing old content with new content to figure out what's changed. Patching means executing the necessary DOM operations to render the new content.

React.js has the right level of abstraction

React.js doesn't have a great deal of abstraction, but the abstractions the framework does implement are crucial to its success. In the preceding section, you saw how JSX syntax translates to the low-level operations that we have no interest in maintaining. The more important way to look at how React translates our declarative UI components is the fact that we don't necessarily care what the render target is. The render target happens to be the browser DOM with React. But this is changing. We're only just starting to see this with React Native, but the possibilities are endless. I personally will not be surprised when React Toast becomes a thing, targeting toasters that can singe the rendered output of JSX onto bread. The abstraction level with React is at the right level, and it's in the right place. The following diagram gives you an idea of how React can target more than just the browser. From left to right, we have React Web (just plain React), React Native, React Desktop, and React Toast. As you can see, to target something new, the same pattern applies:

- Implement components specific to the target
- Implement a React renderer that can perform the platform-specific operations under the hood
- Profit

This is obviously an oversimplification of what's actually implemented for any given React environment. But the details aren't so important to us. What's important is that we can use our React knowledge to focus on describing the structure of our user interface on any platform. Disclaimer: React Toast will probably never be a thing, unfortunately.
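To give a feel for the diffing-and-patching idea without depending on React itself, here is a hedged, conceptual sketch in Python (chosen to match the other examples on this page). It models a UI as a nested dict, compares an old and a new tree, and emits the minimal "patch" operations a renderer would apply. React's real reconciler is far more sophisticated, so treat this purely as an illustration of the concept, not React's algorithm.

```python
def diff(old, new, path="root"):
    """Compare two lightweight 'virtual DOM' nodes and return patch operations."""
    if old["tag"] != new["tag"]:
        return [("REPLACE", path, new)]          # different element type: swap the subtree
    patches = []
    if old.get("text") != new.get("text"):
        patches.append(("SET_TEXT", path, new.get("text")))
    if old.get("props") != new.get("props"):
        patches.append(("SET_PROPS", path, new.get("props", {})))
    old_kids = old.get("children", [])
    new_kids = new.get("children", [])
    for i, (o, n) in enumerate(zip(old_kids, new_kids)):
        patches.extend(diff(o, n, f"{path}/{i}"))
    for i in range(len(old_kids), len(new_kids)):
        patches.append(("APPEND", path, new_kids[i]))
    for i in range(len(new_kids), len(old_kids)):
        patches.append(("REMOVE", f"{path}/{i}", None))
    return patches

old_tree = {"tag": "p", "props": {"class": ""}, "text": "Hello"}
new_tree = {"tag": "p", "props": {"class": "highlight"}, "text": "Hello, world"}

for op in diff(old_tree, new_tree):
    print(op)   # only the text and the class change; nothing else is touched
```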

FAT* 2018 Conference Session 1 Summary: Online Discrimination and Privacy

Aarthi Kumaraswamy
26 Feb 2018
5 min read
The FAT* 2018 Conference on Fairness, Accountability, and Transparency is a first-of-its-kind international and interdisciplinary peer-reviewed conference that seeks to publish and present work examining the fairness, accountability, and transparency of algorithmic systems. This article covers the research papers presented in Session 1, on online discrimination and privacy.

FAT* hosted research from a wide variety of disciplines, including computer science, statistics, the social sciences, and law. It took place on February 23 and 24, 2018, at the New York University Law School, in cooperation with its Technology Law and Policy Clinic. The conference brought together over 450 attendees, including academic researchers, policymakers, and machine learning practitioners, and featured 17 research papers, 6 tutorials, and 2 keynote presentations from leading experts in the field.

Session 1 explored ways in which online discrimination can happen and privacy can be compromised, and the papers presented look for novel and practical solutions to some of the problems identified. Below, we introduce the papers presented at FAT* 2018 in this area, summarising the key challenges and questions explored by leading minds on the topic and their proposed answers to those issues.

Session Chair: Joshua Kroll (University of California, Berkeley)

Paper 1: Potential for Discrimination in Online Targeted Advertising

Problems identified in the paper: Much recent work has focused on detecting instances of discrimination in online services, ranging from discriminatory pricing on e-commerce and travel sites like Staples (Mikians et al., 2012) and Hotels.com (Hannák et al., 2014) to discriminatory prioritization of service requests and offerings from certain users over others in crowdsourcing and social networking sites like TaskRabbit (Hannák et al., 2017). This paper focuses on the potential for discrimination in online advertising, which underpins much of the Internet's economy; specifically, on targeted advertising, where ads are shown only to the subset of users that have attributes (features) selected by the advertiser.

Key Takeaways:
A malicious advertiser can create highly discriminatory ads without using sensitive attributes such as gender or race, and the current methods used to counter the problem are insufficient.
The potential for discrimination in targeted advertising arises from the ability of an advertiser to use the extensive personal (demographic, behavioral, and interest) data that ad platforms gather about their users to target their ads.
Facebook offers several targeting methods: attribute-based targeting, PII-based (custom audience) targeting, and look-alike audience targeting.
There are three basic approaches to quantifying discrimination, each with its own tradeoffs: based on the advertiser's intent, based on the ad targeting process, and based on the targeted audience (outcomes).

Paper 2: Discrimination in Online Personalization: A Multidisciplinary Inquiry

The authors explore ways in which discrimination may arise in the targeting of job-related advertising, noting the potential for multiple parties to contribute to its occurrence. They then examine the statutes and case law interpreting the prohibition on advertisements that indicate a preference based on protected class and consider its application to online advertising.
This paper provides a legal analysis of a real case, which found that simulated users selecting a gender in Google's Ad Settings were shown employment-related advertisements at differing rates along gender lines despite identical web browsing patterns.

Key Takeaways:
The authors' analysis of existing case law concludes that Section 230 may not immunize advertising platforms from liability under the FHA for algorithmic targeting of advertisements that indicate a preference for or against a protected class.
Possible causes of the ad targeting: the targeting was fully a product of the advertiser selecting gender segmentation; fully a product of machine learning (Google alone selecting gender); fully a product of the advertiser selecting keywords; or fully a product of the advertiser being outbid for women.
Given the limited scope of Title VII, the authors conclude that Google is unlikely to face liability on the facts presented by Datta et al. Thus, the advertising prohibition of Title VII, like the prohibitions on discriminatory employment practices, is ill-equipped to advance the aims of equal treatment in a world where algorithms play an increasing role in decision making.

Paper 3: Privacy for All: Ensuring Fair and Equitable Privacy Protections

In this position paper, the authors argue for applying recent research on ensuring that sociotechnical systems are fair and non-discriminatory to the privacy protections those systems may provide. Just as algorithmic decision-making systems may have discriminatory outcomes even without explicit or deliberate discrimination, so also privacy regimes may disproportionately fail to protect vulnerable members of their target population, resulting in disparate impact with respect to the effectiveness of privacy protections.

Key Takeaways:
Research questions posed: Are technical or non-technical privacy protection schemes fair? When and how do privacy protection technologies or policies improve or impede the fairness of the systems they affect? When and how do fairness-enhancing technologies or policies enhance or reduce the privacy protections of the people involved?
Data linking can lead to deanonymization, and live recommenders can also be attacked to leak information.
The authors propose a new definition of a fair privacy scheme: a privacy scheme is (group-)fair if the probability of failure and the expected risk are statistically independent of the subject's membership in a protected class.

If you have missed Session 2, Session 3, Session 4, and Session 5 of the FAT* 2018 Conference, we have got you covered.
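As a rough illustration of that last definition, here is a minimal Python sketch of how one might probe whether a privacy scheme's failures are independent of protected-class membership. The counts are synthetic, and the chi-squared test is just one possible check; the paper itself does not prescribe this procedure:

import numpy as np
from scipy.stats import chi2_contingency

# Synthetic counts of privacy failures (e.g. successful re-identification)
# broken down by membership in a protected class.
#                   failures   no failures
counts = np.array([[12,        488],        # protected group
                   [10,        490]])       # everyone else

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
# A very small p-value would indicate that failure rates depend on group
# membership, i.e. the scheme would not be (group-)fair under this definition.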

My friend, the robot: Artificial Intelligence needs Emotional Intelligence

Aaron Lazar
21 Feb 2018
8 min read
Tommy's a brilliant young man who loves programming. He's so occupied with computers that he hardly has any time for friends. Tommy programs a very intelligent robot called Polly, using Artificial Intelligence, so that he has someone to talk to. One day, Tommy gets hurt badly by something and needs someone to talk to. He rushes home to talk to Polly and pours out his emotions to her. To his disappointment, Polly starts giving him advice like she does for anything else. She doesn't understand that he needs someone to "feel" what he's feeling rather than lecture him on what he should or shouldn't be doing. He naturally feels disconnected from Polly.

My Friend doesn't get me

Have you ever wondered what it would be like to have a robot as a friend? I'm thinking something along the lines of Siri. Siri's pretty good at holding conversations and is quick-witted too. But Siri can't understand your feelings or emotions, and neither can "she" feel anything herself. Are we missing that "personality" in the artificial beings we're creating? Even with chatbots, although we gain convenience, we lose the emotional aspect, especially at a time when expressive communication matters most.

Do we really need it?

I remember watching The Terminator, where Arnie asks John, "Why do you cry?" John finds it difficult to explain why humans cry. The fact is, though, that the machine actually understood there was something wrong with the human, thanks to the visual cues associated with crying. We've also seen instances of robots or AI analysing sentiment through text processing. But how accurate is this? How would a machine know when a human is actually using sarcasm? What if John was faking it and could cry at the drop of a hat, or he just happened to be chopping onions? That's food for thought.

On the other hand, you might wonder: do we really want our machines to start analysing our emotions? What if they take advantage of our emotional state? That's a bit of a far-fetched thought; what we need to understand is that it's necessary for robots to gauge a bit of our emotions to enhance the experience of interacting with them. There are several wonderful applications for such a technology. For instance, marketing organisations could use applications that detect users' facial expressions when they look at a new commercial, to gauge their "interest". It could also be used by law enforcement as a replacement for the polygraph. Another interesting use case would be helping individuals affected by autism understand the emotions of others better. The combination of AI and EI could find a tonne of applications, from cars that can sense if the driver is tired or sleepy and prevent an accident by pulling over, to a fridge that can detect if you're stressed and lock itself to prevent you from binge eating!

Recent Developments in Emotional Intelligence

Several developments have taken place over the past few years in building systems that understand emotions. Pepper, a Japanese robot, for instance, can tell feelings such as joy, sadness, and anger, and respond by playing you a song. A couple of years ago, Microsoft released a tool, the Emotion API, that could break down a person's emotions based only on their picture. Physiologists, neurologists, and psychologists have collaborated with engineers to find measurable indicators of human emotion that can be taught to computers to look out for.
There are projects that have attempted to decode facial expressions, the pitch of our voices, biometric data such as heart rate, and even our body language and muscle movements. Bronwyn van der Merwe, General Manager of Fjord in the Asia Pacific region, revealed that big companies like Amazon, Google, and Microsoft are hiring comedians and scriptwriters in order to harness the human-like aspect of AI by building personality into their technologies. Jerry, Ellen, Chris, Russell... are you all listening?

How it works

Almost 40% of our emotions are conveyed through tone of voice, and the rest is read through facial expressions and the gestures we make. An enormous amount of data is collected from media content and other sources and used as training data for algorithms to learn human facial expressions and speech.

One type of learning used is Active Learning, or human-assisted machine learning. This is a kind of supervised learning where the learning algorithm is able to interactively query the user to obtain new data points or an output. Situations exist where unlabeled data is plentiful but manually labeling it is expensive. In such a scenario, learning algorithms can query the user for labels. Since the algorithm chooses the examples, the number of examples needed to learn a concept turns out to be lower than what is required for ordinary supervised learning.

Another approach is Transfer Learning, a method that focuses on storing the knowledge gained while solving one problem and then applying it to a different but related problem. For example, knowledge gained while learning to recognize fruits could apply when trying to recognize vegetables. Here, this works by analysing a video for facial expressions and then transferring that learning to label the speech modality.

What's under the hood of these machines?

Powerful robots that are capable of understanding emotions would most certainly be running neural nets under the hood. Complementing the power of these neural nets are beefy CPUs and GPUs, the likes of the Nvidia Titan X GPU and the Intel Nervana chip. Last year at NIPS, amongst controversial body shots and loads of humour-filled interactions, Kory Mathewson and Piotr Mirowski entertained audiences with A.L.Ex and Pyggy, two AI robots that have played alongside humans in over 30 shows. These robots introduce audiences to the "comedy of speech recognition errors" by blabbering away to each other as well as to humans. Built around a Recurrent Neural Network trained on dialogue from thousands of films, A.L.Ex communicates with human performers, audience participants, and spectators through speech recognition, voice synthesis, and video projection. A.L.Ex is written in Torch and Lua, has a vocabulary of 50,000 words extracted from 102,916 movies, and is built on an RNN with a Long Short-Term Memory architecture and 512-dimensional layers.
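For readers who want a feel for what such a model looks like in code, here is a minimal sketch of a word-level LSTM language model in the same spirit as the published description of A.L.Ex (50,000-word vocabulary, 512-unit LSTM layers). The actual system is written in Torch and Lua; this PyTorch version, and details such as the number of layers, are illustrative assumptions only:

import torch
import torch.nn as nn

class WordLSTM(nn.Module):
    def __init__(self, vocab_size=50_000, hidden_size=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)              # (batch, time) word ids -> embeddings
        h, state = self.lstm(x, state)
        return self.out(h), state           # logits over the next word at each step

model = WordLSTM()
prompt = torch.randint(0, 50_000, (1, 12))  # a 12-word prompt as word ids
logits, state = model(prompt)
# Sampling the next word from logits[:, -1] and feeding it back in, one word
# at a time, is how such a model keeps a "conversation" going.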
The unconquered challenges today

The way I see it, there are broadly three challenge areas that AI-powered robots face today:

Rationality and emotions: AI robots need to be fed initial logic by humans; failing that, they cannot learn on their own. They may never have the level of rationality or the breadth of emotions to take decisions the way humans do.

Intuition, strategic thinking, and emotions: Machines are incapable of thinking into the future and taking decisions the way humans can. For example, not very far into the future, we might have an AI-powered dating application that measures a subscriber's interest level while they chat with someone. It might rate the interest level lower if the person is in a bad mood for some other reason; it wouldn't consider the reason behind the emotion and whether it was actually linked to the ongoing conversation.

Spontaneity, empathy, and emotions: It may be years before robots are capable of coming up with a plan B the way humans do. Having a contingency plan and implementing it in an emotional crisis is something that AI fails to accomplish. For example, if you're angry at something and just want to be left alone, your companion robot might simply follow what you say without understanding your underlying emotion, whereas an actual human would instantly empathise with your situation and try to be there for you.

Bronwyn van der Merwe said, "As human beings, we have contextual understanding and we have empathy, and right now there isn't a lot of that built into AI. We do believe that in the future, the companies that are going to succeed will be those that can build into their technology that kind of an understanding."

What's in store for the future

If you ask me, right now we're on the highway to something really great. Yes, there are several aspects that are unclear about AI and robots making our lives easier versus disrupting them, but as time passes, science is fitting the pieces of the puzzle together to bring about positive changes in our lives. AI is improving on the emotional front as I write, although there are clearly miles to go. Companies like Affectiva are pioneering emotion recognition technology and working hard to improve the way AI understands human emotions. Biggies like Microsoft have been working on bringing emotional intelligence into their AI since before 2015 and have come a long way since then. Perhaps, in the next Terminator movie, Arnie might just comfort a weeping Sarah Connor, saying, "Don't cry, Sarah dear, he's not worth it", or something of the sort. As a parting note, and just for funsies, here's a final question for you: can you imagine a point in the future when robots have such high levels of EQ that some of us might consider choosing them as a partner over humans?

The pets and cattle analogy demonstrates how serverless fits into the software infrastructure landscape

Russ McKendrick
20 Feb 2018
8 min read
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers. This can be quite a valid conclusion if you are using a public cloud service like AWS, but when it comes to running in your own environment, you can't avoid having to run on a server of some sort. This blog post is an extract from Kubernetes for Serverless Applications by Russ McKendrick.

Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot, as it is quite an easy way to explain the differences between modern cloud infrastructures and a more traditional approach.

The pets, cattle, chickens, insects, and snowflakes analogy

I first came across the pets versus cattle analogy back in 2012, from a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave at the Cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker, who at the time was an engineer at Microsoft. The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.

Pets: the bare metal servers and virtual machines

Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:

You name each server as you would a pet. For example, app-server01.domain.com and database-server01.domain.com.
When your pets are ill, you take them to the vet. In much the same way, as a system administrator, you reboot a server, check logs, and replace faulty components to ensure that it is running healthily.
You pay close attention to your pets for years, much like a server. You monitor for issues, patch them, back them up, and ensure they are fully documented.

There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them. This may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred.

Cattle: the sort of instances you run on public clouds

Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled.

You have so many cattle in your herd that you don't name them; instead, they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name, so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
When a member of your herd gets sick, you shoot it, and if your herd requires it, you replace it. In much the same way, if an instance in your cluster starts to have issues, it is automatically terminated and replaced with a replica.
You do not expect the cattle in your herd to live as long as a pet typically would; likewise, you do not expect your instances to have an uptime measured in years.
Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster.
If your cluster requires additional resources, you launch more instances, and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.

Chickens: an analogy for containers

In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:

Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster, as you can launch multiple containers per instance.
Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances; they take seconds to launch and can be configured to consume less CPU and RAM.
Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.

Insects: an analogy for serverless

Keeping in line with the animal theme, Eric Johnson wrote a blog post for Rackspace which introduced insects. This term was introduced to describe serverless and Functions as a Service. Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service, as these have a lifespan of seconds.

Snowflakes

Around the time Randy Bias gave his talk which mentioned pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare:

Every snowflake is unique and impossible to reproduce. Just like that one server in the office that was built and not documented by that one guy who left several years ago.
Snowflakes are delicate. Again, just like that one server: you dread having to log in to it to diagnose a problem, and you would never dream of rebooting it as it may never come back up.

Bringing the pets, cattle, chickens, insects, and snowflakes analogy together

When I explain the analogy to people, I usually sum up by saying something like this: organizations who have pets are slowly moving their infrastructure to be more like cattle. Those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources. Those running chickens are going to be looking at how much work is involved in moving their application to run as insects, by completely decoupling their application into individually executable components. But the most important takeaway is this: no one wants to, or should be, running snowflakes.

Serverless and insects

As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model. When executing this model, you, as the end user, do not need to worry about which server your code is executed on, as all of the decisions on placement, server management, and capacity are abstracted away from you. It does not mean that you literally do not need any servers.
Now, there are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services, and where the cloud provider manages the compute resources needed to execute your code. Typically these services, which we will look at in the next section, are billed for the resources used to execute your code, in per-second increments.

So how does that explanation fit in with the insect analogy? Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded, they are cropped, creating several different sizes which will be used to display as thumbnails and mobile-optimized versions on the site.

In the pets and cattle world, this would be handled by a server which is powered on 24/7, waiting for users to upload images. Now, this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, this will cause load issues on the server where this function is being executed.

We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well; they will be watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.

Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where it was executed, so long as you have your processed images at the end of it.
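To make that concrete, here is a minimal sketch of such a short-lived function in Python. The handler signature, the save_fn callback, and the chosen sizes are placeholders; a real FaaS platform such as AWS Lambda or OpenFaaS supplies its own event format and storage SDK:

from io import BytesIO
from PIL import Image

SIZES = {"thumbnail": (150, 150), "mobile": (640, 640)}  # illustrative sizes

def handle_upload(image_bytes, save_fn):
    """Runs once per uploaded image, writes resized copies, then exits."""
    original = Image.open(BytesIO(image_bytes)).convert("RGB")
    for name, size in SIZES.items():
        copy = original.copy()
        copy.thumbnail(size)                        # resize, preserving aspect ratio
        buffer = BytesIO()
        copy.save(buffer, format="JPEG")
        save_fn(f"{name}.jpg", buffer.getvalue())   # e.g. write to object storage

# Local smoke test: write the resized copies next to the script.
# handle_upload(open("photo.jpg", "rb").read(),
#               lambda key, data: open(key, "wb").write(data))

The important property is that nothing runs until an upload event arrives, and the process exits as soon as the resized copies are saved.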

Ian Goodfellow et al on better text generation via filling in the blanks using MaskGANs

Savia Lobo
19 Feb 2018
5 min read
In the paper "MaskGAN: Better Text Generation via Filling in the ______", Ian Goodfellow, along with William Fedus and Andrew M. Dai, has proposed a way to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high-quality samples and have shown a lot of success in image generation.

Ian Goodfellow is a research scientist at Google Brain. His research interests lie in the fields of deep learning, machine learning security and privacy, and particularly generative models. Ian Goodfellow is known as the father of Generative Adversarial Networks. He runs the Self-Organizing Conference on Machine Learning, which was founded at OpenAI in 2016.

Generative Adversarial Networks (GANs) are an architecture for training generative models in an adversarial setup, with a generator producing images to try to fool a discriminator that is trained to distinguish between real and synthetic images. GANs have had a lot of success in producing more realistic images than other approaches, but they have seen only limited use for text sequences. They were originally designed to output differentiable values, so discrete language generation is challenging for them. The researchers introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. The paper also shows that this GAN produces more realistic text samples compared to a maximum-likelihood trained model.

MaskGAN: Better Text Generation via Filling in the _______

What problem is the paper attempting to solve?

This paper highlights how text generation has traditionally been done using Recurrent Neural Network models, by sampling from a distribution that is conditioned on the previous word and a hidden state consisting of a representation of the words generated so far. These are typically trained with maximum likelihood in an approach known as teacher forcing. However, this method causes problems during sample generation: the model is often forced to condition on sequences that were never conditioned on at training time, which leads to unpredictable dynamics in the hidden state of the RNN. Methods such as Professor Forcing and Scheduled Sampling have been proposed to solve this issue. These work indirectly, by either causing the hidden state dynamics to become predictable (Professor Forcing) or by randomly conditioning on sampled words at training time; however, they do not directly specify a cost function on the output of the RNN that encourages high sample quality. The method proposed in the paper tries to solve the problem of text generation with GANs, by a sensible combination of novel approaches.

Paper summary

This paper proposes to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high-quality samples. The model is trained on a text fill-in-the-blank, or in-filling, task. In this task, portions of a body of text are deleted or redacted. The goal of the model is then to infill the missing portions of text so that they are indistinguishable from the original data. While in-filling text, the model operates autoregressively over the tokens it has filled in so far, as in standard language modeling, while conditioning on the true known context. If the entire body of text is redacted, this reduces to language modeling.
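As a rough illustration of the in-filling setup, here is a minimal Python sketch that redacts a contiguous span of tokens and keeps the original span as the reconstruction target. The mask token, the mask ratio, and the span-selection policy are assumptions for illustration, not the paper's exact configuration:

import random

MASK = "<m>"

def mask_contiguous(tokens, mask_ratio=0.5, seed=None):
    """Redact a contiguous span of tokens and return (masked input, target span, start)."""
    rng = random.Random(seed)
    n = max(1, int(len(tokens) * mask_ratio))
    start = rng.randint(0, len(tokens) - n)
    masked = tokens[:start] + [MASK] * n + tokens[start + n:]
    targets = tokens[start:start + n]
    return masked, targets, start

sentence = "the movie was surprisingly good and the cast was excellent".split()
masked, targets, start = mask_contiguous(sentence, seed=0)
print(masked)    # the sentence with a contiguous span replaced by <m> tokens
print(targets)   # the redacted words the generator must reconstruct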
The paper also shows, qualitatively and quantitatively, evidence that this newly proposed method produces more realistic text samples compared to a maximum-likelihood trained model.

Key Takeaways

This paper gives a good sense of what MaskGAN is, as it introduces a text generation model trained on in-filling (MaskGAN).
The paper considers the actor-critic architecture in extremely large action spaces, new evaluation metrics, and the generation of synthetic training data.
The proposed contiguous in-filling task, i.e. MaskGAN, is a good approach to reducing mode collapse and helping with training stability for textual GANs.
The paper shows that MaskGAN samples on a larger dataset (IMDB reviews) are significantly better than those of the corresponding tuned MaskMLE model, as shown by human evaluation.
One can produce high-quality samples despite the MaskGAN model having much higher perplexity on the ground-truth test set.

Reviewer feedback summary/takeaways

Overall Score: 21/30. Average Score: 7/10.

Reviewers liked the overall idea behind the paper. They appreciated the benefit gained from context (left context and right context) by solving a "fill-in-the-blank" task at training time and translating this into text generation at test time. One reviewer stated that the experiments were well carried through and very thorough. Another commented that the importance of the MaskGAN mechanism has been highlighted and the description of the reinforcement learning training part has been clarified.

Alongside the pros, the paper also received some criticisms:
There is a lot of pre-training required for the proposed architecture.
Generated texts are generally locally valid but not always valid globally.
It was not made very clear whether the discriminator also conditions on the unmasked sequence.

A reviewer also noted some unanswered questions, such as: Was pre-training done for the baseline as well? How was the masking done? How did you decide on the words to mask? Was this at random? Is it actually usable in place of ordinary LSTM (or RNN)-based generation?

Yoshua Bengio et al on Twin Networks

Savia Lobo
16 Feb 2018
5 min read
The paper "Twin Networks: Matching the Future for Sequence Generation" is written by Yoshua Bengio in collaboration with Dmitriy Serdyuk, Nan Rosemary Ke, Alessandro Sordoni, Adam Trischler, and Chris Pal. It proposes a simple technique for encouraging generative RNNs to plan ahead. To achieve this, the authors present a simple RNN model which contains two separate networks, a forward one and a backward one, that run in opposite directions during training. The main motive for training them in opposite directions is the hypothesis that the states of the forward model should be able to predict the entire future sequence.

Yoshua Bengio is a Canadian computer scientist. He is known for his work on artificial neural networks and deep learning. His main research ambition is to understand principles of learning that yield intelligence. Yoshua has been co-organizing the Learning Workshop since 1999 with Yann LeCun, with whom he has also created the International Conference on Learning Representations (ICLR). Yoshua has also organized or co-organized numerous other events, principally the deep learning workshops and symposia at NIPS and ICML since 2007.

The article discusses TwinNet, i.e. Twin Networks, a method for training RNN architectures to better model the future in their internal states, supervised by another RNN modelling the future in reverse order.

Twin Networks: Matching the Future for Sequence Generation

What problem is the paper attempting to solve?

Recurrent Neural Networks (RNNs) are the basis of state-of-the-art models for generating sequential data such as text and speech, and are usually trained by teacher forcing, which corresponds to optimizing one-step-ahead prediction. Because there is at present no explicit bias toward planning in the training objective, the model may prefer to focus on the most recent tokens instead of capturing subtle long-term dependencies that could contribute to global coherence. Local correlations are usually stronger than long-term dependencies and thus end up dominating the learning signal. As a result, samples from RNNs tend to exhibit local coherence but lack meaningful global structure. Recent efforts to address this problem have involved augmenting RNNs with external memory, with unitary or hierarchical architectures, or with explicit planning mechanisms. Parallel efforts aim to prevent overfitting on strong local correlations by regularizing the states of the network, by applying dropout or penalizing various statistics. To address this, the paper proposes TwinNet, a simple method for regularizing a recurrent neural network that encourages it to model those aspects of the past that are predictive of the long-term future.

Paper summary

This paper presents a simple technique which enables generative recurrent neural networks to plan ahead. A backward RNN is trained to generate a given sequence in reverse order, and the states of the forward model are implicitly forced to predict cotemporal states of the backward model. The paper empirically shows that this approach achieves a 9% relative improvement on a speech recognition task, and also achieves a significant improvement on a COCO caption generation task. Overall, the model is driven by two intuitions: (a) the backward hidden states contain a summary of the future of the sequence, and (b) to predict the future more accurately, the model will have to form a better representation of the past.
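To give a feel for the idea, here is a minimal PyTorch sketch of the twin regularization: a forward RNN and a backward RNN read the same sequence in opposite directions, and an extra penalty pulls each forward state towards the cotemporal backward state. The exact time alignment, the affine projection, the choice to block gradients into the backward network, and the penalty weight are simplifications of what the paper describes:

import torch
import torch.nn as nn

class TwinRNN(nn.Module):
    def __init__(self, vocab=10_000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.fwd = nn.GRU(dim, dim, batch_first=True)   # reads left to right
        self.bwd = nn.GRU(dim, dim, batch_first=True)   # reads right to left
        self.proj = nn.Linear(dim, dim)                 # maps forward states onto backward states
        self.out_f = nn.Linear(dim, vocab)
        self.out_b = nn.Linear(dim, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)                          # (batch, time, dim)
        h_f, _ = self.fwd(x)
        h_b, _ = self.bwd(torch.flip(x, dims=[1]))
        h_b = torch.flip(h_b, dims=[1])                 # re-align so step t faces step t
        # Twin penalty: the forward state should predict the cotemporal backward
        # state; gradients are blocked from flowing into the backward network.
        twin_cost = ((self.proj(h_f) - h_b.detach()) ** 2).mean()
        return self.out_f(h_f), self.out_b(h_b), twin_cost

model = TwinRNN()
logits_f, logits_b, twin_cost = model(torch.randint(0, 10_000, (2, 20)))
# Training would add twin_cost (scaled by a small weight) to the usual forward
# and backward next-token cross-entropy losses.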
The paper also demonstrates the success of the TwinNet approach experimentally, through several conditional and unconditional generation tasks that include speech recognition, image captioning, language modelling, and sequential image generation.

Key Takeaways

The paper introduces a simple method for training generative recurrent networks that regularizes the hidden states of the network to anticipate future states.
The paper also provides an extensive evaluation of the proposed model on multiple tasks and concludes that it helps training and regularization for conditioned generation (speech recognition, image captioning) and for the unconditioned case (sequential MNIST, language modelling).
As a deeper analysis, the paper includes a visualization of the introduced cost and observes that it negatively correlates with word frequency.

Reviewer feedback summary

Overall Score: 21/30. Average Score: 7/10.

The reviewers stated that the paper presents a novel approach to regularizing RNNs and gives results on different datasets indicating a wide range of applications. However, they said that further experimentation and an extensive hyperparameter search are needed. Overall, the paper is detailed and simple to implement, and positive empirical results support the described approach.

The reviewers also pointed out a few limitations, which include:
A major downside of the approach is the cost in terms of resources: the twin model requires more memory and takes longer to train (roughly 2-4 times) while providing little improvement over the baseline.
During evaluation, the attention twin model gives results like "a woman at table a with cake a", where it forces the output to also read like a sentence from the back. This might be the reason for the low metric values observed in the soft-attention twin net model.
The effect of the twin net as a regularizer could be examined against other regularization strategies for comparison purposes.