
Tech Guides


4 myths about Git and GitHub you should know about

Savia Lobo
07 Oct 2018
3 min read
With an aim to replace BitKeeper, Linus Torvalds created Git in 2005 to support the development of the Linux kernel. However, Git isn't limited to code: any product or project with multiple contributors, or that requires release management and versioning, stands to gain an improved workflow through Git. Just as every solution or tool has its own positives and negatives, Git is also surrounded by myths. Alex Magana and Joseph Muli, the authors of the Introduction to Git and GitHub course, discuss some of these myths about Git and GitHub in this post.

Git is GitHub

Because Git and GitHub are so often used together as a complete version control toolkit, adopters of the two tools misconceive Git and GitHub as interchangeable. Git is a tool that tracks changes to the files that constitute a project; it offers the utility used to monitor changes and persist them. GitHub, on the other hand, is akin to a website hosting service, except that the hosted content is a repository. The repository can then be accessed from this central point and the codebase shared.

Backups are equivalent to version control

This myth emanates from a misunderstanding of what version control is, and by extension what Git achieves when it's incorporated into the development workflow. Contrary to archives created under a team's backup policy, Git tracks the changes made to files and maintains snapshots of the repository at given points in time.

Git is only suitable for teams

With the use of hosting services such as GitHub, the element of sharing and collaboration may be perceived as the preserve of teams. But Git offers gains beyond source control: it lends itself to the delivery of a feature or product from the point of development to deployment. Git is a tool for delivery, and it can therefore be used to roll out functionality and manage changes to source code for teams and individuals alike.

To effectively use Git, you need to learn every command

When working as an individual or in a team, the common commands required to contribute to a repository cover initiating tracking of specific files, persisting changes made to tracked files, reverting unwanted changes, and incorporating changes introduced by other developers working on the same project; a sketch of this core loop follows the links below.

The four myths addressed by the authors clarify both Git and GitHub and their uses. If you found this post useful, do check out the course titled Introduction to Git and GitHub by Alex and Joseph.

GitHub addresses technical debt, now runs on Rails 5.2.1
GitLab 11.3 released with support for Maven repositories, protected environments and more
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
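To make that last myth concrete, here is a minimal, hypothetical sketch of the everyday Git loop, driven from Python's standard subprocess module. It assumes git is installed and configured with a user identity, and the my-project directory name is illustrative only.

```python
import subprocess
from pathlib import Path

def git(*args, cwd):
    """Run a git command in `cwd` and return its output."""
    return subprocess.run(
        ["git", *args], cwd=cwd, check=True,
        capture_output=True, text=True,
    ).stdout

repo = Path("my-project")                    # hypothetical project directory
repo.mkdir(exist_ok=True)
(repo / "README.md").write_text("Hello, Git\n")

git("init", cwd=repo)                        # start tracking the project
git("add", "README.md", cwd=repo)            # track a specific file
git("commit", "-m", "Add README", cwd=repo)  # persist the change as a snapshot
(repo / "README.md").write_text("Oops\n")
git("checkout", "--", "README.md", cwd=repo) # revert an unwanted edit
# git("pull", "origin", "main", cwd=repo)    # incorporate collaborators' work (needs a remote)
```

The same handful of commands (init, add, commit, checkout, pull) covers most individual and team workflows; everything else can be learned as the need arises.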


What is Digital Forensics?

Savia Lobo
02 May 2018
5 min read
Who here hasn't watched the American TV show Mr. Robot? For the uninitiated, Mr. Robot is a digital crime thriller featuring the protagonist Elliot, a brilliant cybersecurity engineer and hacktivist who identifies potential suspects and evidence for crimes that are hard to solve. He does this by hacking into people's digital devices such as smartphones, computers, machines, printers, and so on. The science of identifying, preserving, and analyzing evidence on digital media or storage devices in order to trace a crime is digital forensics. A real-world example of digital forensics helping to solve a crime is the case of the floppy disk that helped investigators solve the BTK serial killer case in 2005. The killer had eluded police capture since 1974 and had claimed the lives of at least 10 victims before he was caught.

Types of digital forensics

The digital world is vast. There are countless ways one can perform illegal or corrupt activities and go undetected. Digital forensics lends a helping hand in detecting such activities. However, because there are many kinds of digital media, the forensics carried out for each is also different. The following are some types of forensics conducted over different digital pathways.

Computer forensics refers to the branch of forensics that obtains evidence from computer systems such as hard drives, mobile phones, personal digital assistants (PDAs), compact disks (CDs), and so on. The digital police can also trace a suspect's email or text communication logs, their internet browsing history, system or file transfers, hidden or deleted files, documents and spreadsheets, and so on.

Mobile device forensics recovers or gathers evidence from call logs, text messages, and other data stored on mobile devices. Tracing someone's location via the inbuilt GPS system, cell site logs, or in-app communication from apps such as WhatsApp, Skype, and so on is also possible.

Network forensics monitors and analyzes computer network traffic, both LAN/WAN and internet traffic. The aim of network forensics is to gather information, collect evidence, and detect and determine the extent of intrusions and the amount of data that has been compromised.

Database forensics is the forensic study of databases and their metadata. The information from database contents, log files, and in-RAM data can be used to create timelines or recover pertinent information during a forensic investigation.

Challenges faced in digital forensics

Data storage and extraction
Storing data has always been tricky and expensive, and the explosion in the volume of data generated has only aggravated the situation. Data now comes from many pathways, such as social media, the web, and IoT, and the real-time analysis of data from IoT devices and other networks adds further to the heap. As a result, investigators find it difficult to store and process data to extract clues, detect incidents, or track the necessary traffic.

Data gathering over scattered mediums
Investigators face great difficulty because evidence may be scattered over social networks, cloud resources, and personal physical storage. More tools, expertise, and time are therefore required to fully and accurately reconstruct the evidence, and partially automating these tasks may degrade the quality of the investigation.

Investigations that preserve privacy
At times, investigators collect information to reconstruct and locate an attack, and this can violate user privacy. When information has to be collected from the cloud, there are further hurdles, such as accessing the evidence in logs and dealing with volatile data.

Carrying out legitimate investigations only
Modern infrastructures are complex and virtualized, often shifting their complexity to the border (as in fog computing) or delegating some duties to third parties (as in platform-as-a-service frameworks). An important challenge for modern digital forensics lies in executing investigations legally, for instance without violating laws in borderless scenarios.

Anti-forensics techniques on the rise
Defensive measures against digital forensics comprise encryption, obfuscation, and cloaking techniques, including information hiding. New forensics tools therefore need to be engineered to support heterogeneous investigations, preserve privacy, and offer scalability.

The ubiquity of digital media and electronics is a leading cause of the rise of digital forensics, and with digital media growing at this pace, digital forensics is here to stay. Investigation firms, including CYFOR and Pyramid CyberSecurity, strive to offer solutions to complex cases in the digital world. One can also seek employment or specialize in this field by building the skills needed for a career in digital forensics. If you are interested in digital forensics, check out our product portfolio on cyber security, or subscribe today to a learning path for forensic analysts on MAPT, our digital library.

How cybersecurity can help us secure cyberspace
Top 5 penetration testing tools for ethical hackers
What Blockchain Means for Security
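As a concrete taste of the timeline work mentioned under computer and database forensics, here is a minimal, hypothetical sketch that orders file-activity events recovered from filesystem metadata. The evidence_image directory is a placeholder, and a real investigation would use write-blockers and dedicated tooling rather than a live filesystem walk.

```python
from datetime import datetime, timezone
from pathlib import Path

def file_timeline(root: str):
    """Collect (timestamp, event, path) tuples for every file under `root`."""
    events = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            stat = path.stat()
            # st_mtime = last content change, st_atime = last access.
            for ts, label in ((stat.st_mtime, "modified"), (stat.st_atime, "accessed")):
                when = datetime.fromtimestamp(ts, tz=timezone.utc)
                events.append((when, label, str(path)))
    return sorted(events)  # chronological, oldest event first

for when, label, path in file_timeline("evidence_image"):  # hypothetical mount point
    print(f"{when:%Y-%m-%d %H:%M:%S} {label:8} {path}")
```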


6 reasons why Google open-sourced TensorFlow

Kunal Parikh
13 Sep 2017
7 min read
On November 9, 2015, a storm loomed over the SF Bay Area, creating major outages. At Mountain View, California, Google engineers were busy creating a storm of their own. That day, Sundar Pichai announced to the world that TensorFlow, their machine learning system, was going open source. He said:

"...today we're also open-sourcing TensorFlow. We hope this will let the machine learning community—everyone from academic researchers, to engineers, to hobbyists—exchange ideas much more quickly, through working code rather than just research papers."

That day the tech world may not have fully grasped the gravity of the announcement, but those in the know recognized a pivotal moment in Google's transformational journey into an AI-first world.

How did TensorFlow begin?

TensorFlow grew out of a former Google system called DistBelief. DistBelief powered a program called DeepDream, which was built so that scientists and engineers could visualize how deep neural networks process images. As fate would have it, the algorithm went viral and everyone started creating abstract and psychedelic art with it. Although people were having fun playing with image forms, many were unaware of the technology that powered those images (neural networks and deep learning), the very things TensorFlow was built for. TensorFlow is a machine learning platform that allows one to run a wide range of algorithms, like the aforementioned neural networks and deep learning based projects. With its flexibility, high performance, portability, and production-readiness, TensorFlow is changing the landscape of artificial intelligence and machine learning. Be it face recognition, music and art creation, or detecting clickbait headlines for blogs, the use cases are immense. With Google open sourcing TensorFlow, the platform that powers Google search and other smart Google products is now accessible to everyone: researchers, scientists, machine learning experts, students, and others.

So why did Google open source TensorFlow?

Yes, Google made a world of difference to the machine learning community at large by open sourcing TensorFlow. But what was in it for Google? As it turns out, a whole lot. Let's look at a few reasons.

Google is feeling the heat from rival deep learning frameworks

Major deep learning frameworks like Theano and Keras were already open source. Keeping a framework proprietary was becoming a strategic disadvantage, as most core deep learning users (scientists, engineers, and academics) prefer using open source software for their work. "Pure" researchers and aspiring PhDs are key groups that file major patents in the world of AI. By open sourcing TensorFlow, Google gave this community access to a platform it backs to power their research, making it theoretically possible to migrate the world's algorithms from other deep learning tools onto TensorFlow. AI as a trend is clearly here to stay, and Google wants a platform that leads this trend.

An open source TensorFlow can better support the Google Brain project

Behind all the PR, Google does not speak much about its pet project, Google Brain. When Sundar Pichai talks of Google's transformation from Search to AI, this project is doing the work behind the scenes. Google Brain is headed by some of the best minds in the industry, like Jeff Dean, Geoffrey Hinton, and Andrew Ng, among many others. They developed TensorFlow, and they might still have some state-of-the-art features up their sleeves, privy only to them. After all, they have done a plethora of stunning research in areas like parallel computing, machine intelligence, and natural language processing. With TensorFlow now open sourced, this team can accelerate the development of the platform and make significant inroads into the areas they are currently researching. This research can then develop into future products for Google, allowing it to expand its AI and cloud clout, especially in the enterprise market.

Tapping into the collective wisdom of the academic intelligentsia

Most innovations and breakthroughs come from universities before they go mainstream and become major products in enterprises. AI is still making this transition and will need a lot of investment in research, and to work on difficult algorithms, researchers need access to sophisticated ML frameworks. Selling TensorFlow to universities is an old-school way to solve the problem; that is why we no longer hear about products like LabVIEW. Instead, by open sourcing TensorFlow, the team at Google now has the world's best minds working on difficult AI problems on their platform for free. As these researchers write papers on AI using TensorFlow, they keep adding to the existing body of knowledge, and Google has access to bleeding-edge algorithms that are not yet available on the market. Its engineers can simply pick what they like and start developing commercially ready services.

Google wants to develop TensorFlow as a platform-as-a-service for AI application development

An advantage of open sourcing a tool is that it accelerates time to build and test through collaborative app development. This means most of the basic infrastructure and modules needed to build a variety of TensorFlow-based applications will already exist on the platform. TensorFlow developers can develop and ship interesting modular products by mixing and matching code and providing a further layer of customization or abstraction. What Amazon did for storage with AWS, Google can do for AI with TensorFlow. It won't come as a surprise if Google comes up with its own integrated AI ecosystem, with TensorFlow on the Google Cloud, promising all the AI resources your company would need. Suppose you want a voice-based search function on your e-commerce mobile application. Instead of completely reinventing the wheel, you could buy TensorFlow-powered services provided by Google; with easy APIs, you can get voice-based search and save substantial developer cost and time.

Open sourcing TensorFlow will help Google extend its talent pipeline in a competitive Silicon Valley jobs market

Hiring for AI development is competitive in Silicon Valley, as all the major companies vie for attention from the same niche talent pool. With TensorFlow made freely available, Google's HR team can quickly reach out to a talent pool already well versed in the technology, and save on training costs. Just look at the interest TensorFlow has generated on a forum like Stack Overflow: a growing number of users are asking and inquiring about TensorFlow, and some of them will become power users whom the Google HR team can tap. A developer pool at this scale would never have been possible with a proprietary tool.

Replicating the success of, and learning from, Android

Agreed, a direct comparison with Android is not possible. However, the size of the mobile market and Google's strategic goal of mobile-first when it introduced Android bear striking similarity to the nascent AI ecosystem we have today and Google's current AI-first rhetoric. In just a decade since its launch, Android now owns more than 85% of the smartphone mobile OS market. Piggybacking on Android's success, Google now has control of mobile search (96.19%), services (Google Play), a strong connection with the mobile developer community, and even a viable entry into the mobile hardware market. Open sourcing Android did not stop Google from making money; it monetized through other channels like mobile search, mobile advertisements, Google Play, devices like the Nexus, and mobile payments. Google did not have all this infrastructure planned and ready before Android was open sourced; it innovated, improvised, and created along the way. In the future, we can expect Google to take key learnings from its Android growth story and apply them to TensorFlow's market expansion strategy. We can also expect supporting infrastructure and models for commercializing TensorFlow to emerge for enterprise developers.

The road to AI world domination for Google is on the back of an open sourced TensorFlow platform. It appears not just exciting but also promises exponential growth, crowdsourced innovation, and learnings drawn from other highly successful Google products and services. The storm that started two years ago is surely morphing into a hurricane. As Professor Michael Guerzhoy of the University of Toronto says in Business Insider: "Ten years ago, it took me months to do something that for my students takes a few days with TensorFlow."
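To show the kind of "working code" Pichai's announcement pointed to, here is a minimal, hypothetical TensorFlow sketch using the Keras API bundled with modern TensorFlow releases. The data is random and the layer sizes are illustrative only.

```python
import numpy as np
import tensorflow as tf

# Toy data: 256 samples with 8 features each, and a binary label.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("float32")

# A small feed-forward neural network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(x, y, epochs=5, batch_size=32, verbose=0)  # train
loss, acc = model.evaluate(x, y, verbose=0)          # evaluate on the toy data
print(f"toy accuracy: {acc:.2f}")
```

A few runnable lines like these, accessible to any researcher, hobbyist, or student, are exactly what the open sourcing made possible.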


Top 5 automated testing frameworks

Sugandha Lahoti
11 Jul 2018
6 min read
The world is abuzz with automation. It is everywhere today and is becoming an integral part of organizations and processes. Software testing, an intrinsic part of website/app/software development, has also been taken over by test automation tools. However, as happens in many software markets, a surplus of tools complicates the selection process. We have identified the top 5 testing frameworks used by most developers for automating the testing process. These automation testing frameworks cover a broad range of devices and support different scripting languages. Each framework has its own unique pros, cons, and learning approach.

Selenium

Creator: Jason Huggins. Language: Java. Current version: 3.11.0. Popularity: 11,031 stars on GitHub.

Selenium is probably the most popular test automation framework, primarily used for testing web apps. However, Selenium can also be used in cloud-based services, load-testing services, and for monitoring, quality assurance, test architecture, regression testing, performance analysis, and mobile testing. It is open source, meaning the source code can be altered and modified if you want to customize it for your testing purposes. It is flexible enough for you to write your own scripts and add functionality to test scripts and the framework. The Selenium suite consists of four different tools: Selenium IDE, Selenium Grid, Selenium RC, and Selenium WebDriver. It also supports a wide range of programming languages such as C#, Java, Python, PHP, Ruby, Groovy, and Perl. Selenium is portable, so it can be run anywhere, eliminating the need to configure it specifically for a particular machine. It is quite handy when you are working in varied environments and platforms, supporting various operating systems (Windows, Mac, Linux) and browsers (Chrome, Firefox, IE, and headless browsers). Most importantly, Selenium has a great community, which means more forums, more resources, examples, and solved problems.

Appium

Creator: Dan Cuellar. Language: C#. Current version: 1.8.1. Popularity: 7,432 stars on GitHub.

Appium is an open source test automation framework for testing native, hybrid, and mobile web applications. It allows you to run automated tests on actual devices, emulators (Android), and simulators (iOS). It provides cross-platform solutions for native and hybrid mobile apps, which means that the same test cases will work on multiple platforms (iOS, Android, Windows, Mac). Appium also allows you to talk to other Android apps that are integrated with the App Under Test (AUT). Appium has a client-server architecture. It extends the WebDriver client libraries, which are already written in the most popular programming languages, so you are free to use any programming language to write your automation test scripts. With Appium, you can also run your test scripts in the cloud using services such as Sauce Labs and Testdroid. Appium is available on GitHub with documentation and tutorials to learn all that is needed. The Appium team is alive, active, and highly responsive as far as solving issues is concerned; developers can expect a reply within 36 hours of an issue being opened. The community around Appium is also pretty large and growing every month.

Katalon Studio

Creator: Katalon LLC. Language: Groovy. Current version: 5.4.2.

Katalon Studio is another test automation solution for web applications, mobile, and web services. Katalon Studio uses Groovy, a language built on top of Java, and is itself built on top of the Selenium and Appium frameworks, taking advantage of both for integrated web and mobile test automation. Unlike Appium and Selenium, which are more suitable for testers with good programming skills, Katalon Studio can be used by testers with limited technical knowledge. Katalon Studio has an interactive UI with drag-and-drop features, letting testers select keywords and test objects to form test steps. It has a manual mode for technically strong users and a scripting mode that supports development facilities like syntax highlighting, code suggestion, and debugging. On the downside, Katalon has to load many extra libraries for parsing test data and test objects, and for logging; it may therefore be a bit slower for long test cases compared to other testing frameworks that use Java.

Robot Framework

Creators: Pekka Klärck, Janne Härkönen, et al. Language: Python. Current version: 3.0.4. Popularity: 2,393 stars on GitHub.

Robot Framework is a Python-based, keyword-driven acceptance test automation framework. It is a general-purpose test automation framework primarily used for acceptance testing, streamlining it into mainstream development and thus giving rise to the concept of acceptance test-driven development (ATDD). It was created by Pekka Klärck as part of his master's thesis and was developed within Nokia Siemens Networks in 2005. Its core framework is written in Python, but it also supports IronPython (.NET), Jython (JVM), and PyPy. The keyword-driven approach simplifies tests and makes them readable, and there is provision for creating reusable higher-level keywords from existing ones. Robot Framework stands out from other testing tools by working on easy-to-use tabular test files that provide different approaches towards test creation. Its extensible nature is what makes it so versatile: it can be adjusted to different scenarios and used with different software backends, such as Python and Java libraries, and also via different APIs.

Watir

Creators: Bret Pettichord, Charley Baker, and more. Language: Ruby. Current version: 6.7.2. Popularity: 1,126 stars on GitHub.

Watir is a powerful test automation tool based on a family of Ruby libraries. It stands for Web Application Testing In Ruby. Thanks to Ruby, Watir can connect to databases, export XML, structure code as reusable libraries, and read data files and spreadsheets. It supports cross-browser and data-driven testing, and the tests are easy to read and maintain. It also integrates with BDD tools such as Cucumber and Test/Unit, with BrowserStack or Sauce Labs for cross-browser testing, and with Applitools for visual testing. While Watir itself supports only Internet Explorer on Windows, Watir-WebDriver, the modern version of the Watir API based on Selenium, supports Chrome, Firefox, Internet Explorer, and Opera, and can also run in headless mode (HTMLUnit).

All the frameworks discussed above offer unique benefits based on their target platforms and respective audiences. One should avoid selecting a framework based solely on technical requirements. Instead, it is important to identify what is suitable for the developers, their team, and the project. For instance, even though general-purpose frameworks cover a broad range of devices, they often lack hardware support; and frameworks that are device-specific often lack support for different scripting languages and approaches. Work with whatever suits your project and your team's requirements best.

Selenium and data-driven testing: An interview with Carl Cocchiaro
3 best practices to develop effective test automation with Selenium
Writing Your First Cucumber Appium Test
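To give a feel for what a script written against one of these frameworks looks like, here is a minimal, hypothetical example using Selenium's Python bindings. The URL and locator are placeholders, and it assumes Chrome plus a matching chromedriver on your PATH.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # launch the browser under test
try:
    driver.get("https://example.com")   # open the page
    assert "Example" in driver.title    # a simple sanity check

    link = driver.find_element(By.TAG_NAME, "a")  # locate an element
    link.click()                                  # interact with it
    print("Landed on:", driver.current_url)
finally:
    driver.quit()  # always release the browser session
```

Equivalent scripts in Java, C#, Ruby, and the other supported languages follow the same open-navigate-locate-assert pattern.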


A UX strategy is worthless without a solid usability test plan

Sugandha Lahoti
15 Jun 2018
48 min read
A UX practitioner's primary goal is to provide every user with the best possible experience of a product. The only way to do this is to connect repeatedly with users and make sure that the product is being designed according to their needs. At the beginning of a project, this research tends to be more exploratory; towards the end, it tends to be more about testing the product. In this article, we explore in detail one of the most common methods of testing a product with users: the usability test. We will describe the steps to plan and conduct usability tests, which provide insights into how to practically plan, conduct, and analyze any user research. This article is an excerpt from the book UX for the Web by Marli Ritter and Cara Winterbottom. The book teaches you how UX and design thinking can make your site stand out from the rest of the sites on the internet.

Tips to maximize the value of user testing

Testing with users is not only about making their experience better; it is also about getting more people to use your product. People will not use a product that they do not find useful, and they will choose the product that is most enjoyable and usable if they have options. This is especially the case with the web: people leave websites if they can't find or do the things they want. Unlike with other products, they will not take time to work it out. Research by organizations such as the Nielsen Norman Group generally shows that a website has between 5 and 10 seconds to show value to a visitor. User testing is one of the main methods available to ensure that we make websites that are useful, enjoyable, and usable. However, to be effective it must be done properly. Jared Spool, a usability expert, identified seven typical mistakes that people make when testing with users, which lessen its value. The following list addresses how not to make those mistakes:

Know why you're testing: What are the goals of your test? Make sure that you specify the test goals clearly and concretely so that you choose the right method. Are you observing people's behavior (usability test), finding out whether they like your design (focus group or sentiment analysis), or finding out how many people do something on your website (web analytics)? Posing specific questions will help to formulate the goals clearly. For example: will the new content reduce calls to the service center? Or, what percentage of users return to the website within a week?

Design the right tasks: If your testing involves tasks, design scenarios that correspond to tasks users would actually perform. Consider what would motivate someone to spend time on your website, and use this to create tasks. Provide participants with the information they would have to complete the tasks in a real-life situation; no more and no less. For example, do not specify tasks using terms from your website interface; otherwise participants will simply be following instructions when they complete the tasks, rather than using their own mental models to work out what to do.

Recruit the right users: If you design and conduct a test perfectly, but test on people who are not like your users, the results will not be valid. If participants know too much or too little about the product, subject area, or technology, they will not behave like your users would and will not experience the same problems. When recruiting participants, ask what qualities define your users, and what qualities make one person experience the website differently from another. Then recruit on these qualities. In addition, recruit the right number of users for your method. Ongoing research by the Nielsen Norman Group and others indicates that usability tests typically require about five people per test, while A/B tests require about 40 people, and card sorting requires about 15 people. These numbers have been calculated to maximize the return on investment of testing. For example, five users in a usability test have been shown by the Nielsen Norman Group (and confirmed repeatedly by other researchers) to find about 85% of the serious problems in an interface; with a per-user discovery rate of roughly 31%, five users find about 1 - (1 - 0.31)^5 ≈ 85% of the problems. Adding more users improves the percentage marginally but increases the costs significantly. If you use the wrong numbers, your results will not be valid, or the amount of data you need to analyze will be unmanageable for the time and resources you have available.

Get the team and stakeholders involved: If user testing is seen as an outside activity, most of the team will not pay attention, as it is not part of their job and is easy to ignore. When team members are involved, they gain insights into their own work and its effectiveness. Try to get team members to attend some of the testing if possible. Otherwise, make sure everyone is involved in preparing the goals and tasks (if appropriate) for the test. Share the results in a workshop afterward, so everyone can be involved in reflecting on the results and their implications.

Facilitate the test well: Facilitating a test well is a difficult task. A good facilitator makes users feel comfortable so they act more naturally. At the same time, the facilitator must control the flow of the test so that everything is accomplished in the available time, without giving participants hints about what to do or say. Make sure that facilitators have plenty of practice and constructive feedback from the team to improve their skills.

Plan how to share the results: It takes time and skill to create an effective user testing report that communicates the test and results well. Even if you have the time and skill, most team members will probably not read the report. Find other ways to share results with those who need them. For example, create a bug list for developers using project management software or a shared online document; hold a workshop with the team and stakeholders and present the test and results to them; have review sessions immediately after test days.

Iterate: Most user testing is most effective if performed regularly and iteratively: for testing different aspects or parts of the design; for testing solutions based on previous tests; for finding new problems or ideas introduced by the new solutions; for tracking changes to results based on time, seasonality, or maturity of the product or user base; or for uncovering problems that were previously hidden by larger problems. Many organizations only make provision to test with users once at the end of design, if at all. It is better to split your budget into multiple tests if possible.

As we explore usability testing, each of these guidelines will be addressed more concretely.

Planning and conducting usability tests

Before starting, let's look at what we mean by a usability test, and describe the different types. Usability testing involves watching a representative set of users attempt realistic tasks, and collecting data about what they do and say. Essentially, a usability test is about watching a user interact with a product. This is what makes it a core UX method: it persuades stakeholders about the importance of designing for and testing with their users. Team members who watch participants struggle to use their product are often shocked that they had not noticed the glaringly obvious design problems that are revealed. In later iterations, usability tests should reveal fewer or more minor problems, which provides proof of the success of a design before launch.

Apart from glaring problems, how do we know what makes a design successful? The International Organization for Standardization (ISO) defines usability as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." This definition shows us the kinds of things that make a successful design. From this definition, usability comprises:

Effectiveness: How completely and accurately the required tasks can be accomplished.
Efficiency: How quickly tasks can be performed.
Satisfaction: How pleasant and enjoyable the task is. This can become delight if a design pleases us in unexpected ways.

Three additional points arise from the preceding ones:

Discoverability: How easy it is to find out how to use the product the first time.
Learnability: How easy it is to continue to improve at using the product, and to remember how to use it.
Error proneness: How well the product prevents errors and helps users recover. This equates to the number and severity of errors that users experience while doing tasks.

These six points provide guidance on the kinds of tasks we should be designing and the kinds of observations we should be making when planning a usability test.

There are three ways of gathering data in a usability test: using metrics to guide quantitative measurement, observing, and asking questions. The most important is observation. Metrics allow comparison, guide observation, and help us design tasks, but they are not as important as why things happen. We discover why by observing interactions and emotional responses during task performance. In addition, we must be very careful when assigning meaning to quantitative metrics because of the small numbers of users involved in usability tests. Typically, usability tests are conducted with about five participants. This number has been repeatedly shown to be most effective when weighing testing costs against the number of problems uncovered. However, it is too small for statistical significance testing, so any numbers must be reported carefully.

If we consider observation against asking questions, usability tests are about doing things, not discussing them. We may ask users to talk aloud while doing tasks to help us understand what they are thinking, but we need the context of what they are doing.

"To design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior." - Jakob Nielsen

This means that usability tests trump questionnaires and surveys: people are notoriously bad at remembering what they did or imagining what they will do. It does not mean that we never listen to what users say, as there is a lot of value to be gained from a well-formed question asked at the right time. We must just be careful about how we understand it. We need to interpret what people say within the context of how they say it, what they are doing when they say it, and what their biases might be. For example, users tend to tell researchers what they think we want to hear, so any value judgment will likely be more positive than it should be. This is called experimenter bias.

Despite the preceding cautions, all three methods are useful and increase the value of a test. While observation is core, the most effective usability tests include tasks carefully designed around metrics, and begin and end with a contextual interview with the user. The interviews help us to understand the user's previous and current experiences, and the context in which they might use the website in their own lives.

Planning a usability test can seem like a daunting task. There are many details to work out and organize, and they all need to come together on the day(s) of the test. The usability test process can be pictured as a flowchart in which each box represents a different area that must be considered or organized. By using these areas to break the task down into logical steps and keeping a checklist, the task becomes manageable.

Planning usability tests

In designing and planning a usability test, you need to consider five broad questions:

What: What are the objectives, scope, and focus of the test? What fidelity are you testing?
How: How will you realize the objectives? Do you need permissions and sign-off? What metrics, tasks, and questions are needed? What are the hardware and software requirements? Do you need a prototype? What other materials will you need? How will you conduct the test?
Who: How many participants, and who will they be? How will you recruit them? What roles are needed? Will team members, clients, and stakeholders attend? Will there be a facilitator and/or a notetaker?
Where: What venue will you use? Is the test conducted in an internal or external lab, on the streets or in coffee shops, or in users' homes or workplaces?
When: What is the date of the test? What will the schedule be? What is the timing of each part?

Documenting these questions and their answers forms your test plan.

It is important to remember that no matter how carefully you plan usability testing, it can all go horribly wrong. Therefore, have backup plans wherever you can. For example: for participants who cancel late or do not arrive, have a couple of spares ready; for power cuts, be prepared with screenshots so you can at least simulate some tasks on paper; for testing live sites when the internet connection fails, have a portable wireless router or cache pages beforehand.

Designing the test - formulating goals and structure

The first thing to consider when planning a usability test is its goal. This will dictate the test's scope, focus, tasks, and questions. For example, if your goal is a general usability test of the whole website, the tasks will be based on the business reasons for the site, since these are the most important user interactions, and you will ask questions about general impressions of the site. However, if your goal is to test the search and filtering options, your tasks will involve finding things on the website, and you will ask questions about the difficulty of finding things. If you are not sure what the specific goal of the usability test might be, think about the following three points:

Scope: Do you want to test part of the design, or the whole website?
Focus: Which area of the website will you focus on? Even if you want to test the whole website, there will be areas that are more important; for example, the checkout versus the contact page.
Behavioral questions: Are there questions about how users behave, or how different designs might impact user behavior, that are being asked within the organization?

Thinking about these questions will help you refine your test goals. Once you have the goals, you can design the structure of the test and create a high-level test plan. When deciding on how many tests to conduct in a day and how long each test should be, remember moderator and user fatigue. A test environment is a stressful situation. Even if you are testing with users in their own home, you are asking them to perform unfamiliar tasks with an unfamiliar system. If users become too tired, this will affect test results negatively. Likewise, facilitating a test is tiring, as the moderator must observe and question the user carefully while monitoring things like the time, their own language, and the script. Here are details to consider when creating a schedule for test sessions:

Test length: Typically, each test should be between 60 and 90 minutes long.
Number of tests: You should not be facilitating more than 5-6 tests in one day. When counting the hours, leave at least half an hour of cushioning space between tests. This gives you time to save the recording, make extra notes if necessary, and communicate with any observers, and it provides flexibility if participants arrive late or tests run longer than they should.
Number of tasks: This is roughly the number of tasks you hope to include in the test. In a 60-minute test, you will probably have about 40-45 minutes for tasks; the rest of the time will be taken up with welcoming the participant, the initial interview, and closing questions at the end. In 45 minutes, you can fit about 5-8 tasks, depending on their nature. It is important to remember that less is more in a test: you want to give participants time to explore the website and think about their options, not rush them on to the next task.

The last thing to consider is moderating technique. This is how you interact with the participant and ask for their input. There are two aspects: thinking aloud and probing. Thinking aloud is asking participants to talk about what they are thinking and doing, so you can understand what is in their heads. Probing is asking participants ad hoc questions about interesting things that they do. You can do both concurrently or retrospectively:

Concurrent thinking aloud and probing: The participant talks while they do tasks and look at the interface, and the facilitator asks questions as they come up, while the participant is doing tasks. Concurrent probing interferes with metrics such as time on task and accuracy, as you might distract users. However, it also takes less test time and can deliver more accurate insights, as participants do not have to remember their thoughts and feelings; these are shared as they happen.

Retrospective thinking aloud and probing: This involves retracing the test or task after it is finished and asking participants to describe what they were thinking in retrospect. The facilitator may note down questions during tasks and ask them later. While retrospective techniques simulate natural interaction more closely, they take longer because tasks are retraced. This means that the test must be longer, or there will be fewer tasks and interview questions. Retrospective techniques also require participants to remember what they were thinking previously, which can be faulty.

Concurrent moderating techniques are preferable because of the close alignment between users acting and talking about those actions. Retrospective techniques should only be used if timing metrics are very important; even then, concurrent thinking aloud can be combined with retrospective probing. Thinking aloud concurrently generally interferes very little with task times and accuracy, as users are ideally just verbalizing ideas already in their heads.

At each stage of test planning, share the ideas with the team and stakeholders and ask for feedback. You may need permission to go forward with test objectives and tasks. However, even if you do not need sign-off, sharing details with the team gets everyone involved in the testing. This is a good way to share and promote design values. It also benefits the test, as team members will probably have good ideas about tasks to include or elements of the website to test that you have not considered.

Designing tasks and metrics

As stated previously, usability testing is about watching users interacting with a product. Tasks direct the interactions that you want to see. Therefore, they should cover the focus area of the test, or all important interactions if the whole website is being tested. To make the test more natural, create scenarios or user stories that link the tasks together, if possible, so that participants perform a logical sequence of activities. If you have scenarios or task analyses from previous research, choose those that relate to your test goals and focus, and use them to guide your task design. If not, create brief scenarios that cover your goals. You can do this from a top-down or bottom-up perspective:

Top down: What events or conditions in their world would motivate people to use this design? For example, if the website is a used-goods marketplace, a potential user might have an item they want to get rid of easily while making some money, or they might need an item and try to get it cheaply secondhand. Then ask: what tasks accomplish these goals?

Bottom up: What are the common tasks that people do on the website? In the marketplace example, common tasks are searching for specific items; browsing through categories of items; and adding an item to the site to sell, which might include uploading photographs or videos and adding contact details and item descriptions. Then create scenarios around these tasks to tie them together.

Tasks can be exploratory and open-ended, or specific and directed. A test should have both. For example, you can begin with an open-ended task, such as examining the home page and exploring the links that are interesting. Then you can move on to more directed tasks, such as finding a particular color, size, and brand of shoe and adding it to the checkout cart. It is always good to begin with exploratory tasks, but these can be open-ended or directed. For example, to gather first impressions of a website, you could ask users to explore as they prefer from the home page and give their impressions as they work; or you could ask users to look at each page for five seconds and then write down everything they remember seeing. The second option is much more controlled, which may be necessary if you want a more direct comparison between participants, or are testing with a prototype where only parts of the website are available.
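As a small illustration of keeping tasks and scenarios organized for a session, here is a hypothetical sketch of how they might be encoded for later note-taking and analysis; the field names and marketplace tasks are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    scenario: str     # the real-world motivation given to the participant
    instruction: str  # what we ask them to do, avoiding interface wording
    directed: bool    # True for specific tasks, False for exploratory ones
    minutes: int      # time budgeted for this task in the session

test_tasks = [
    Task("You have just heard about this site from a friend.",
         "Explore the home page and tell us what you can do here.",
         directed=False, minutes=5),
    Task("You have an old bicycle you want to sell without much hassle.",
         "Put your bicycle up for sale on the site.",
         directed=True, minutes=8),
]

total = sum(t.minutes for t in test_tasks)
print(f"{len(test_tasks)} tasks, {total} of the 40-45 minutes budgeted for tasks")
```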
Metrics are needed for task, observation, and interview analysis, so that we can evaluate the success of the design we are testing. They guide how we examine the results of a usability test. They are based on the definition of usability, and so relate to effectiveness, efficiency, satisfaction, discoverability, learnability, and error proneness. Metrics can be qualitative or quantitative. Qualitative metrics aim to encode the data so that we can detect patterns and trends in it, and compare the success of participants, tasks, or tests; for example, noting expressions of delight or frustration during a task. Quantitative metrics collect numbers that we can manipulate and compare against each other or against benchmarks; for example, the number of errors each participant makes in a task. We must be careful how we use and assign meaning to quantitative metrics because of the small sample sizes. Here are some typical metrics:

Task success or completion rate: This measures effectiveness and should always be captured as a base. It relates most closely to conversion, the primary business goal for a website, whether that is converting browsers to buyers or visitors to registered users. You may just note success or failure, but it is more revealing to capture the degree of task success. For example, you can specify whether the task is completed easily, with some struggle, with help, or not completed at all.
Time on task: A measure of efficiency: how long it takes to complete tasks.
Errors per task: A measure of error proneness: the number and severity of errors per task, especially noting critical errors where participants may not even realize they have made a mistake.
Steps per task: A measure of efficiency: the number of steps or pages needed to complete each task, often measured against a known minimum.
First click: A measure of discoverability. Noting the first click made to accomplish each task reports on the findability of items on the web page. This can also be used in more exploratory tasks to judge what attracts the user's attention first.

When you have designed tasks, check them against the definition of usability, matching each metric to the component of usability it measures, to make sure that you have covered everything that you need or want to cover.

A valid criticism of usability testing is that it only tests first-time use of a product, as participants do not have time to become familiar with the system. There are ways around this problem. For example, certain types of task, such as search and browsing, can be repeated with different items; in later tasks, participants will be more familiar with the controls, and the facilitator can use observation or metrics such as task time and accuracy to judge the effect of familiarity. A more complicated method is to conduct longitudinal tests, where participants are asked to return a few days or a week later and perform similar tasks. This is only worth spending time and money on if learnability is an important metric.

Planning questions and observation

The interview questions asked at the beginning and end of a test provide valuable context for user actions and reactions, such as the user's background, their experiences with similar websites or the subject area, and their relationship to technology. They also help the facilitator to establish rapport with the user. Other questions provide valuable qualitative information about the user's emotional reaction to the website and the tasks they are doing. A combination of observation and questions provides data on aspects such as ease of use, usefulness, satisfaction, delight, and frustration. For the initial interview, questions should cover:

Welcome: These set the participant at ease, and can include questions about the participant's lifestyle, job, and family. These details help UX practitioners to present test participants as real people with normal lives when reporting on the test.
Domain: These ask about the participant's experience with the domain of the website. For example, if the website is in the domain of financial services, questions might be about the participant's banking, investments, and loans, and their experiences with other financial websites. As part of this, you might investigate their feelings about security and privacy.
Tech: These questions ask about the participant's usage of and experience with technology. For example, when testing a website on a computer, you might want to know how often the participant uses the internet or social media each day, what kinds of things they do on the internet, and whether they buy things online. If you are testing mobile usage, you might inquire about how often the participant uses the internet on their phone each day, and what kinds of sites they visit on mobile versus desktop.

Like tasks, questions can be open-ended or closed. An example of an open-ended question is: "Tell me about how you use your phone throughout a normal workday, beginning with waking up in the morning and ending with going to sleep at night." The facilitator would then prompt the participant for further details suggested during the reply. A closed question might be: "What is your job?" These generate simple responses, but can be used as springboards into deeper answers. For example, if the answer is "fireman", the facilitator might say: "That's interesting. Tell me more about that. What do you do as a fireman?"

Questions asked at the end of the test or during the test are more about the specific experience of the website and the tasks. These are often made more quantifiable by using a rating scale to structure the answer. A typical example is a Likert scale, where participants specify their agreement or disagreement with a statement on a 5- to 7-point scale. For example, a statement might be: "I can find what I want easily using this website", where 1 is labeled Strongly Agree and 7 is labeled Strongly Disagree. Participants choose the number that corresponds to the strength of their agreement or disagreement. You can then compare responses between participants or across different tests. Examples of typical questions include:

Ease of use (after every task): On a scale of 1-7, where 1 is really hard and 7 is really easy, how difficult or easy did you find this task?
Ease of use (at the end): On a scale of 1-7, how easy or difficult did you find working on this website?
Usefulness: On a scale of 1-7, how useful do you think this website would be for doing your job?
Recommendation: On a scale of 1-7, how likely are you to recommend this website to a friend?

It is important to always combine these kinds of questions with observation and task performance, and to ask why afterwards. People tend to self-report very positively, so often you will pay less attention to the number they give and more to how they talk about their answer afterwards.
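Because only about five participants are involved, summaries of such ratings need care. Here is a minimal, hypothetical sketch that tallies post-task ease-of-use ratings using medians and ranges rather than means, since five ordinal ratings support little more; the task names and scores are invented.

```python
from statistics import median

# Hypothetical 1-7 ease-of-use ratings (1 = really hard, 7 = really easy),
# one rating per participant (n = 5) for each task.
ratings = {
    "Find a specific pair of shoes": [6, 5, 7, 6, 4],
    "Add the item to the cart":      [3, 2, 4, 3, 5],
}

for task, scores in ratings.items():
    # Report median and range; n = 5 is far too small for significance testing.
    print(f"{task}: median {median(scores)}, range {min(scores)}-{max(scores)}")
```

Low medians flag tasks worth re-examining in the observation notes, which remain the primary evidence.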
The final questions you ask provide closure for the test and end it gracefully. These can be more general and conversational. They might deliver useful data, but that is not the priority. For example, What did you think of the website? or Is there anything else you'd like to say about the website? Questions during the test often arise ad hoc because you do not understand why the participant does an action, or what they are thinking about if they stare at a page of the website for a while. You might also want to ask participants what they expect to find before they select a menu item or look at a page. In preparing for observation, it is helpful to make a list of the kinds of things you especially want to observe during the test. Typical options are: Reactions to each new page of the website First reactions when they open the Home page The variety of steps used to complete each task Expressions of delight or frustration Reactions to specific elements of the website First clicks for each task First click off the Home page Much of what you want to observe will be guided by the usability test objectives and the nature of the website. Preparing the script Once you have designed all the elements of the usability test, you can put them together in a script. This is a core document in usability testing, as it acts as the facilitator's guide during each test. There are different ideas about what to include in a script. Here, we describe a comprehensive script that describes the whole test procedure. This includes, in rough order: The information that must be told to the participant in the welcome speech. The welcome speech is very important, as it is the participant's first experience of the facilitator. It is where the rapport will first be established. The following information may need to be included: Introduction to the facilitator, client, and product. What will happen during the test, including the length. The idea that the website is being tested, not the participant, and that any problems are the fault of the product. This means the participant is valuable and helpful to the team developing a great website. Asking the participant to think aloud as much as possible, and to be honest and blunt about what they think. Asking them to imagine that they are at home in a natural situation, exploring the website. If there are observers, indication that people may be watching and that they should be ignored. Asking permission to record the session, and telling the participant why. Assuring them of their privacy and the limited usage of the recordings to analysis and internal reporting. A list of any documents that the participant must look at or sign first, for example, an NDA. Instructions on when to switch on and off any recording devices. The questions to ask in thematic sections, for example, welcome, domain, and technology. These can include potential follow - on questions, to delve for more information if necessary. A task section, that has several parts: An introduction to the prototype if necessary. If you are testing with a prototype, there will probably be unfinished areas that are not clickable. It is worth alerting participants so they know what to expect while doing tasks and talking aloud. Instructions on how to use the technology if necessary. Ideally your participants should be familiar with the technology, but if this is not the case, you want to be testing the website, not the technology. 
For example, you might need such instructions if you are testing with a particular screen reader that the participant has not used before, or if you are using eye-tracking technology.
- An introduction to the tasks, including any scenarios provided to the participant.
- The description of each task. Be careful not to use words from the website interface when describing tasks, so you do not guide the participant too much. For example, instead of: How would you add this item to your cart?, say: How would you buy this item?
- Questions to include after each task. For example, the ease of use question.
- Questions to prompt the participant if they are not thinking aloud when they should be, especially for each new page of the website or prototype. For example: What do you see here? What can you do here? What do you think these options mean?
- Final questions to finish off the test and give the participant a chance to emphasize any of their experiences.
- A list of any documents the participant must sign at the end, and instructions to give the incentive if appropriate.

Once the script is created, timing is added to each task and section, to help the facilitator make sure that the tests do not run over time. This will be refined as the usability test is practiced. The script provides a structure to take notes in during the test, either on paper or digitally:

- Create a spreadsheet with rows for each question and task
- Use the first column for the script, from the welcome questions onwards
- Capture notes in subsequent columns for the user
- Use a separate spreadsheet for each participant during the test
- After all the tests, combine the results into one spreadsheet so you can easily analyze and compare

The following is a diagram showing sections of the script for notetaking, with sample questions and tasks, for a radio station website:

Securing a venue and inviting clients and team members

If you are testing at an external venue, this is one of the first things you will need to organize for a usability test, as these venues typically need to be booked about one to two months in advance. Even if you are testing in your own offices, you will still need to book space for the testing. When considering a test venue, you should be looking for the following:

- A quiet, dedicated space where the facilitator, participant, and potentially a notetaker can sit. This needs surfaces for all the equipment that will be used during the test, and comfortable space for the participant. Consider the lighting in the test room. It might cause glare if you are testing on mobile phones, so think about how best to handle that glare: for example, where the best place is for the participant to sit, and whether you can use indirect lighting of some kind.
- A reception room where participants can wait for their testing session. This should be comfortable. You may want to provide refreshments for participants here.
- Ideally, an observation room for people to watch the usability tests. Observers should never be in the same space as the testing, as this will distract participants and probably make them uncomfortable. The observation room should be linked to the test room, either with cables or wirelessly, so observers can see what is happening on the participant's screen, and hear (and ideally see) the participant during the test. Some observation rooms have two-way mirrors into the test room, so observers can watch the facilitator and participant directly. Refreshments should be available for the observers.

We have discussed various testing roles previously.
Here, we describe them formally:

- Facilitator: This is the person who conducts the test with the participant. They sit with the participant, instruct them in the tasks, ask questions, and take notes. This is the most important role during the test. We will discuss it further in the Conducting usability tests section.
- Participant: This is the person who is doing the test. We will discuss recruiting test participants in the next section.
- Notetaker: This is an optional role. It can be worth having a separate notetaker so the facilitator does not have to take notes during the test. This is especially the case if the facilitator is inexperienced. If there is a notetaker, they sit quietly in the test room and do not engage with the participant, except when introduced by the facilitator.
- Receptionist: Someone must act as receptionist for the participants who arrive. This cannot be the facilitator, as they will be in the sessions. Ask a team member or the office receptionist to take this role.
- Observers: Everyone else is an observer. These can be other team members and/or clients. Observers should be given guidelines for their behavior. For example, they should not interact with test participants or interrupt the test. They watch from a separate room, and should not be so noisy that they can be heard in the test room (often these rooms are close to each other).

The facilitator should discuss the tests with observers between sessions, to check whether they have any questions they would like added to the test, and to discuss observations. It is worth organizing a debriefing immediately after the tests, or the next day if possible, for the observers and facilitator to discuss the tests and observations. It is important that as many stakeholders as possible are persuaded to watch at least some of the usability testing. Watching people using your designs is always enlightening, and really helps to bring a team together. Remember to invite clients and team members early, and send reminders closer to the day.

Recruiting participants

When recruiting participants for usability tests, make sure that they are as close as possible to your target audience. If your website is live and you have a pool of existing users, then your job is much easier. However, if you do not have a user pool, or you want to test with people who have not used your site, then you need to create a specification for appropriate users that you can give to a recruiter or use yourself. To specify your target audience, consider what kinds of people use your website, and what attributes will cause them to behave differently from other users. If you have created personas during previous research, use these to help identify target user characteristics. If you are designing a website for a client, work with them to identify their target users. It is important to be specific, as it is difficult to look for people who fulfill abstract qualities. For example, instead of asking for tech-savvy people, consider what kinds of technology such people are likely to use, and what their activities are likely to be. Then ask for people who use the technology in the ways you have identified. Consider the behaviors that result from certain types of beliefs, attitudes, and lifestyle choices. The following are examples of general areas you should consider:

- Experience with technology: You may want users who are comfortable with technology or who have used specific technology, for example, the latest smartphones or screen readers.
Consider the properties that will identify these people. For example, you can specify that all participants must own a specific type or generation of mobile device, and must have owned it for at least two months.
- Online experience: You may want users with a certain level and frequency of internet usage. To capture this, you can specify that you want people who have bought items online within the last few months, or who do online banking, or who have never done these things.
- Social media presence: Often, you want people who have a certain amount of social media interaction, potentially on specific platforms. In this case, you would specify that they must regularly post to or read social media such as Facebook, Twitter, Instagram, Snapchat, or more hobbyist platforms such as Pinterest and/or Flickr.
- Experience with the domain: Participants should not know too much or too little about the domain. For example, if you are testing banking software, you may want to exclude bank employees, as they are familiar with how things work internally.
- Demographics: Unless your target audience is very skewed, you probably want to recruit a variety of people demographically. For example, a range of ages, genders, ethnicities, and economic and education levels.

There may be other characteristics you need in your usability test participants. The previous characteristics should give you an idea of how to specify such people. For example, you may want hobbyist photographers. In this case, you would recruit people who regularly take photographs and share them with friends in some way. Do not use people who have taken part in your previous tests, unless you specifically need people like this, as they will be familiar with your tests and procedures, which will interfere with results.

Recruiting takes time and is difficult to do well. There are various ways of recruiting people for user testing, depending on your business. You may be able to use people or organizations associated with your business, or target audience members, to recruit people using the screening questions and incentives that you give them. You can set up social media lists of people who follow your business and are willing to participate. You can also use professional recruiters, who will get you exactly the kinds of people you need, but will charge you for it. For most tests, an incentive is usually given to thank participants for their time. This is often money, but it can also be a gift, such as stationery or gift certificates.

A recruitment brief is the document that you give to recruiters. The following are the details you need to include:

- Day of the test, the test length, and the ideal schedule. This should state the times at which the first and last participants may be scheduled, how long each test will take, and the buffer period that should be left between sessions.
- The venue. This should include an address, maps, parking, and travel information.
- Contact details for the team members who will oversee the testing and recruitment.
- A description of the test that can be given to participants.
- The incentives that will be provided.
- The list of qualities you need in participants, or screening questions to check for these.

This document can be modified to share with less formal recruitment associates. The benefit of recruiters is that they handle the whole recruitment process. If you and your team recruit participants yourselves, you will need to remind them a week before the test and the day before the test, usually by messaging or emailing them.
On the day of the test, phone participants to confirm that they will be arriving and that they know how to get to the venue. Participants still often do not attend tests, even with all the reminders. This is the nature of testing with real people. Ideally you will be given some notice, so try to recruit an extra couple of possible participants who you can call in a pinch on the day.

Setting up the hardware, software, and test materials

Depending on the usability test, you will have to prepare different hardware, software, and test materials. These include screen recording software and hardware, notetaking hardware, the prototype to test, screen sharing options, and so on. The first thing to consider is the prototype, as this will have implications for hardware and software. Are you testing a live website, an electronic prototype, or a paper prototype?

- Live website: Set up any accounts or passwords that may be necessary. Make sure you have reliable access to the internet, or a way to cache the website on your machine if necessary.
- Electronic prototype: Make sure the prototype works the way it is supposed to, and that all the parts that are accessed during the tasks can be interacted with, if required. Try not to make it too obvious which parts work and which do not, as this may guide participants to the correct actions during the test. Be prepared to talk participants through parts of the prototype that do not work, so they have context for the tasks. Have a safe copy of the prototype in case this copy becomes corrupted in some way.
- Paper prototype: Make sure that you have sketches or printouts of all the screens that you need to complete the tasks. With paper prototype testing, the facilitator takes the role of the computer, shows the results of the actions that the participant proposes, and talks participants through the screens. Make sure that you are prepared for this and know the order of the screens. Have multiple copies of the paper prototype in case parts get lost or destroyed.

For any of the three, make sure the team goes through the test tasks to check that everything is working the way it should be. For hardware and other software, keep an equipment list, so you can confirm you have all the necessary hardware with you. You may need to include:

- Hardware for the participant to interact with the prototype or live site: This may be a desktop, laptop, or mobile device. If testing on a mobile device, you can ask participants to use their own familiar phones instead of an unfamiliar test device. However, participants may have privacy concerns about using their own phones, and you will not be able to test the prototype or live site on the phone beforehand. If you provide a laptop, include a separate mouse, as people often have difficulty with unfamiliar mouse pads.
- Recording the screen and audio: This is usually screen capture software. There are many options, such as Lookback, an inexpensive option for iOS and Android, and CamStudio, a free option for the PC. Specialist software that handles multiple camera inputs allows you to record face and screen at the same time. Examples are iSpy, free CCTV software; Silverback, an inexpensive option for the Mac; and Morae, an expensive but impressive option for the PC.
- Mobile recording alternative: You can also record mobile video with an external camera that captures the participant's fingers on the screen.
This means you do not have to install additional software on the phone, which might cause performance problems. In this case, you would use a document camera attached to the table, or a portable rig holding the phone, with a camera attached to a nearby PC. The video will include hesitations and hovering gestures, which are useful for understanding user behavior, but fingers might occlude the screen. In addition, rigs may interfere with natural usage of the mobile phone, as participants must hold the rig as well as the phone.
- Observer viewing screen: This is needed if there are observers. The venue might have screen sharing set up; if not, you will have to bring your own hardware and software. This could be an external monitor and extension cables to connect to a laptop in the interview room. It could also be screen sharing software, for example, join.me.
- Capturing notes: You will need a method to capture notes. Even if you are screen recording, notes will help you to review the recordings more efficiently, and remind you about parts of the recording you wanted to pay special attention to. One method is using a tablet or laptop and a spreadsheet. Typing is fast, and the electronic notes are easy to put together after the tests. An alternative is paper and pencil. The benefit of this is that it is least disruptive to the participant. However, these notes must then be captured electronically.
- Camera for the participant's face: Capturing the participant's face is not crucial. However, it provides good insight into their feelings about tasks and questions. If you don't record the face, you will only have tone of voice and the notes that were taken to remind you. Possible methods are using a webcam attached to the computer doing the screen recording, or using built-in software such as Hangouts, Skype, or FaceTime for Apple devices.
- Microphone: Often sound quality is not great in screen capturing software, because of feedback from computer equipment. Using an external microphone improves the quality of the sound.
- Wireless router: A portable wireless router in case of internet problems (if you are using the internet).
- Extra extension cables and chargers for all devices.

You will also need to make sure that you have multiple copies of all documents needed for the testing. These might include:

- Consent form: When you are testing people, they typically need to give their permission to be tested. You also typically need proof that the incentive has been received by the participant. These are usually combined into a form that the participant signs to give their permission and acknowledge receipt of the incentive.
- Non-disclosure agreement (NDA): Many businesses require test participants to sign NDAs before viewing the prototype. This must be signed before the test begins.
- Test materials: Any documents that provide details to the participants for the test.
- Checklists: It is worth printing out your checklists for things to do and equipment, so that you can check items off as you complete actions, and be sure that you have done everything by the time it needs to be done.

The following figure shows a basic sample checklist for planning a usability test. For a more detailed checklist, add in timing and break the tasks down further. These refinements will depend on the specific usability test. Where you are uncertain about how long something will take, overestimate. Remember that once you have fixed the day, everything must be ready by then.
Checklist for usability test preparation

Conducting usability tests

On the day(s) of the usability test, if you have planned properly, all you should have to worry about are the tests themselves and interacting with the participants. Here is a list of things to double-check on the day of each test:

Before the first test:
- Set up and check equipment and rooms.
- Have a list of participants and their order.
- Make sure there are refreshments for participants and observers.
- Make sure you have a receptionist to welcome participants.
- Make sure that the prototype is installed, or that the website is accessible via the internet and working.
- Test all equipment, for example, recording software, screen sharing, and audio in the observation room.
- Turn off anything on the test computer or device that might interfere with the test, for example, email, instant messaging, virus scans, and so on.
- Create bookmarks for any web pages you need to open.

Before each test:
- Have the script ready to capture notes from a new participant.
- Have the screen recorder ready.
- Have the browser open in a neutral position, for example, Google search.
- Have sign sheets and the incentive ready.
- Start screen sharing.
- Reload sample data if necessary, and clear the browser history from the last test.

During each test:
- Follow the script, including when the participant must sign forms and receive the incentive.
- Press record on the screen recorder.
- Give the microphone to the participant if appropriate.

After each test:
- Stop recording and save the video.
- Save the script.
- End screen sharing.
- Note extra details that you did not have time for during the session.

Once you have all the details organized, the test session is in the hands of the facilitator.

Best practices for facilitating usability sessions

The facilitator should be welcoming and friendly, but relatively ordinary and not overly talkative. The participant and website should be the focus of the interview and test, not the facilitator. To create rapport with the participant, the facilitator should be an ally. A good way to do this is to make fun of the situation and reassure participants that their experiences in the test will be helpful. Another good technique is to ask questions more like an apprentice than an expert, so that the participant answers them, for example: Can you tell me more about how this works? and What happens next?

Since you want participants to feel as natural and comfortable as possible in their interactions, the facilitator should foster natural exploration and help satisfy participant curiosity as much as possible. However, they need to remain aware of the script and goals of the test, so that the participant covers what is needed. Participants often struggle to talk aloud. They forget to do so while doing tasks. Therefore, the facilitator often needs to nudge participants to talk aloud or to share information. Here are some useful questions and comments:

- What are you thinking?
- What do you think about that?
- Describe the steps you're doing here.
- What's going on here?
- What do you think will happen next?
- Is that what you expected to happen?
- Can you show me how you would do that?

When you are asking questions, you want to be sure that you help participants to be as honest and accurate as possible. We've previously stated that people are notoriously bad at projecting what they will do or remembering what they did. This does not mean that you cannot ask about what people do. You must just be careful about how you ask, and always try to keep it concrete.
The priorities in asking questions are:

- Now: Participants talking aloud about what they are doing and thinking now.
- Retrospective: Participants talking about what they have done or thought in the past.
- Never prospective: Never ask participants what they would do in the future. Rather, ask about what they have done in similar situations in the past.

Here are some other techniques for ensuring you get the best out of the participants, and do not lead them too much yourself:

- Ask probing questions such as why and how to get to the real reasons for actions. Do not assume you know what participants are going to say.
- Check or paraphrase if you are not sure what they said or why they said it. For example: So are you saying the text on the left is hard to read? or You're not sure about what? or That picture is weird? How?
- Do not ask leading questions, as people will give positive answers to please you. For example, do not say: Does that make sense?, Do you like that? or Was that easy? Rather say: Can you explain how this works?, What do you think of that? and How did you find doing that task?
- Do not tell participants what they are looking at. You are trying to find out what they think. For example, instead of: Here is the product page, say: Tell me what you see here, or Tell me what this page is about.
- Return the question to the participant if they ask what to do or what will happen: I can't tell you because we need to find out what you would do if you were alone at home. What would you normally do? or What do you think will happen?
- Ask one question at a time, and make time for silence. Don't overload the participants. Give them a chance to reply. People will often try to fill the silence, so you may get more responses if you don't rush to fill it yourself.
- Encourage action, but do not tell them what to do. For example: Give it a try.
- Use acknowledgment tokens to encourage action and talking aloud. For example: OK, uh huh, mm hmm.

A good facilitator makes participants feel comfortable and guides them through the tasks without leading, while observing carefully and asking questions where necessary. It takes practice to accomplish this well. The facilitator (and the notetaker, if there is one) must also think about the analysis that will be done. Analysis is time-consuming; think about what can be done beforehand to make it easier. Here are some pointers:

- Taking notes on a common spreadsheet with a script is helpful because the results are ready to be combined easily.
- If you are gathering quantitative results, such as timing tasks or counting steps to accomplish activities, prepare spaces to note these on the spreadsheet before the test, so all the numbers are easily accessible afterward.
- If you are rating task completion, note a preliminary rating as each task is completed. This can be as simple as selecting appropriate cell colors beforehand and coloring each cell as the task is completed. This may change during analysis, but you will have initial guidance.
- Listen for useful and illustrative quotes or video segment opportunities. Note down the quote, or roughly note the timestamp, so you know where to look in the recording.
- In general, have a timer at hand, and note the timestamp of any important moments in each test. This will make reviewing the recordings easier and less time-consuming.

We have examined how to plan, organize, and conduct a usability test. As part of this, we have discussed how to design a test with goals, tasks, metrics, and questions using the definition of usability.
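If you capture notes in one spreadsheet per participant, as suggested earlier, combining them afterward can be automated. The following is a rough sketch, assuming each participant's notes are saved as a CSV file (p1.csv, p2.csv, and so on) in a notes folder, with the script item in the first column and the notes in the second; the file names and layout are illustrative, not a prescribed format.

```python
import csv
from pathlib import Path

combined = {}
files = sorted(Path("notes").glob("p*.csv"))  # one file per participant
for f in files:
    with f.open(newline="") as fh:
        for row in csv.reader(fh):
            if row:  # skip blank lines
                combined.setdefault(row[0], []).append(row[1] if len(row) > 1 else "")

# Write one row per script item, one column per participant.
with open("combined.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Script"] + [f.stem for f in files])
    for item, notes in combined.items():
        writer.writerow([item] + notes)
```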
If you liked this article, be sure to check out the book UX for the Web, which covers how to make a web app fully accessible from both a development and a design perspective.

3 best practices to develop effective test automation with Selenium
Unit Testing in .NET Core with Visual Studio 2017 for better code quality
Unit Testing and End-To-End Testing
How not to get hacked by state-sponsored actors

Guest Contributor
19 Jun 2019
11 min read
News about Russian hackers creating chaos in the European Union or Chinese infiltration of a US company has almost become routine. In March 2019, a Russian hacking group was discovered operating on Czech soil by Czech intelligence agencies. Details are still unclear; however, speculation suggests that the group is part of a wider international network based out of multiple EU countries and was operating under Russian diplomatic cover.

The cybercriminal underground is complex, multifaceted, and, by its nature, difficult to detect. On top of this, hackers are incentivized not to put their best foot forward in order to evade detection. One of the most common tactics is to disguise an attack so that it looks like the work of another group. These hackers frequently prefer to use the most basic hacking software available, because it avoids the unique touches of more sophisticated software. Both of these practices make it more difficult to trace a hack back to its source. Tracing high-level hacking is not impossible, however; there are some clear signs investigators use to determine the origin of a hacking group. Different hacker groups have distinct motivations, codes of conduct, tactics, and payment methods. Though we will be using Russian and Chinese hacking as our main examples, the tips we give can be applied to protecting yourself from any state-sponsored attack.

Chinese and Russian hacking – knowing the difference

Russian-speaking hacker forums are being exposed with increasing frequency, revealing not just the content shared in their underground network, but the culture that members have built up. They first gained notoriety during the 90s, when massive economic changes saw the emergence of vast criminal networks, both online and offline. These days, Russian hacks typically have two different motivations: geopolitical and financial. Geopolitical attacks are generally designed to create confusion. The role of Russian hackers in the 2016 US election was one of the stories most covered by international media. However, these attacks are most effective and most common in countries with weak government institutions. Many of them are also former Soviet territories where Russia has a pre-existing geopolitical interest. For example, the Caucasus region and the Baltic states have long been targeted by state-sponsored hackers. The tactics of these "active measures" are varied and highly complex. Hacking and other digital attacks are just one arm of this hybrid war.

However, the hacks that affect average web users the most tend to be financially motivated. Russian-language forums on the dark web have vast sections devoted to "carder" communities. Carder forums are where hackers go to buy and sell everything from identity details and credit card numbers to data dumps and any other stolen information. For hackers looking to make a quick buck, carder forums are bread and butter. These forums and subforums include detailed tutorials on how to spoof a credit card number. The easiest way to steal from unsuspecting people is to buy a fake card. However, card scanners that steal a person's credit card number and credentials are becoming increasingly popular. Unlike geopolitical hacks, financial attacks are not necessarily state-sponsored. Though individual Western hackers may be more skilled when it comes to infiltrating complex systems, Russian hackers have several distinct advantages.
Unlike in Western countries, Russian authorities tend to turn a blind eye to hacking that targets either Western countries or former Soviet states. This allows hackers to work together in groups, something they're discouraged from doing in countries that crack down on cyber attacks. This means Russian hackers can target more people at a greater speed than individual bad actors working in other countries.

Why do the Chinese do it?

There are a number of distinct differences when it comes to Chinese hacking projects. The goal of state-sponsored Chinese attacks is to catch up to the US and European level of technological expertise in fields ranging from AI, biomedicine, and alternative energy to robotics and space technology. These goals were outlined in Xi Jinping's Made in China 2025 announcement. This means the main target for Chinese hackers is economic and intellectual property, which can be corporate or government. In the public sector, targeting US defense forces yields profitable designs for state-of-the-art technology. The F-22 and F-35, two fighter aircraft developed for the US military, were copied and produced almost identically by China's People's Liberation Army. In the private sector, Chinese agents target large-scale industries that use and develop innovative technology, like oil and gas companies. For example, a group might attack an oil firm to get details about exploration and steal geological assessments. This information would then be used to underbid their US competitor.

After a bilateral no-hacking agreement between the US and Chinese leaders was signed in 2016, attacks dropped significantly. However, since mid-2018, these attacks have begun to increase again. The impact of these new Chinese-sponsored cyber attacks has been farther-reaching than initially expected. Chinese hacking groups aren't simply taking advantage of system vulnerabilities in order to steal corporate secrets. Many top tech companies believed they were compromised by a possible supply chain attack that saw Chinese microchips secretly inserted into servers. Though Chinese and Russian hackers may have different motivations, one thing is certain: they have the numbers on their side. So how can you protect yourself from these specific hacking schemes?

How to stay safe – tips for everyday online security

Cyber threats are a part of life connected to the internet. While there's not a lot you can do to stop someone else from launching an attack, there are steps you can take to protect yourself as much as possible. Of course, no method is 100% foolproof, but with the right habits you can be well protected. Hackers look for vulnerabilities and flaws to exploit. Unless you are the sole gatekeeper of a top-secret and lucrative information package that you've placed under heavy security, you may find yourself the target of a hacking scheme at some point or another. Nevertheless, if a hacker tries to infiltrate your network or device and finds it too difficult, they will probably move on to an easier target. There are some easy steps you can take to bolster your safety online. This is not an exhaustive list. Rather, it's a round-up of some of the best tools available to bolster your security and make yourself a difficult – and therefore unattractive – target.

Make use of security and scanning tools

The search tool Have I Been Pwned is a great resource for checking whether your accounts have been caught up in a recent data breach. You can enter your email address, or a password, to see whether either has been exposed. You can also set up notifications on your accounts or domains that will tell you immediately if they are caught in a data breach. This kind of service can be especially helpful for small business networks, which are more likely to find themselves on the receiving end of a Chinese hack. Hackers know that small businesses have fewer resources than large corporations, which can make attacks on them even more devastating.

Read Also: 'Have I Been Pwned' up for acquisition; Troy Hunt code names this campaign 'Project Svalbard'
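Have I Been Pwned also exposes its password data through the free Pwned Passwords range API, which uses k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine. Here is a small Python sketch of such a check; treat it as an illustration rather than official client code.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]  # only the prefix is sent over the wire
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # any non-zero count means: never use it
```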
Manage your passwords

One of the most common security mistakes is also one of the most dangerous. You should use a unique, complicated password for each one of your accounts. The best way to manage a lot of complicated passwords is with a password manager. There are browser extensions, but they have an obvious drawback if you lose your device; it's best to use a separate application. Use a passphrase, rather than a password, to access your password manager. A passphrase is exactly what it sounds like: rather than trusting that hackers won't be able to figure out a single word, you use multiple words to create a full phrase, which is both easier to remember and harder to crack. If your device offers biometric access (like a fingerprint), switch it on. Many financial apps also offer an additional layer of biometric security before you send money.
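To show how little effort a good passphrase takes, here is a minimal sketch using Python's secrets module, which is designed for cryptographic randomness. The tiny word list is purely illustrative; in practice you would draw from a large list, such as the EFF diceware list of roughly 7,776 words.

```python
import secrets

# Illustrative mini word list; a real one should contain thousands of words.
WORDS = ["orbit", "velvet", "cactus", "thunder", "maple", "quartz",
         "lantern", "pebble", "harbor", "crimson", "walnut", "glacier"]

def passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Pick words with a CSPRNG (secrets), not the predictable random module."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # for example: maple-harbor-orbit-quartz-velvet
```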
Use a VPN

A VPN encrypts your traffic, making it unreadable to outsiders. It also spoofs your IP address, which conceals your true location. This prevents sensitive information from falling into the hands of unscrupulous users, and prevents your location details being used to identify you. Some premium VPNs integrate advanced security features into their applications. For example, malware blockers will protect your device from malware and spyware. Some also contain ad-blockers.

Read Also: How to protect your VPN from Data Leaks

Keep in mind that free VPNs can themselves be a threat to your online privacy. In fact, some free VPNs have been used by the Chinese government to spy on their citizens. That's why you should only use a high-quality VPN like CyberGhost to protect yourself from hackers and online trackers. If you're looking for the fastest VPN on the market, ExpressVPN has consistently been the best competitor in speed tests. NordVPN is our pick for best overall VPN when comparing price, security, and speed. VPNs are an important tool for both individuals and businesses. However, because Russian hackers prefer individual targets, using a VPN while dealing with any sensitive data, such as banking, will help keep your money in your own account.

Learn to identify and deal with phishing

Phishing for passwords is one of the most common and most effective ways to extract sensitive information from a target. Russian hackers were famously able to sabotage Hillary Clinton's presidential campaign when they leaked emails from campaign manager John Podesta. Thousands of emails on that server were stolen via a phishing scam. Phishing scams are an especially easy way for hackers to infiltrate companies. Many times, employee names and email addresses are easy to find online. Hackers then use those names for fake email accounts, which trick coworkers into opening an email that contains a malware file. The malware then opens a direct line into the company's system.

Crucially, phishing emails will ask for your passwords or sensitive information. Reputable companies would never do that. One of the best ways to prevent a phishing attack is to properly train yourself, and everyone in your company, on how to detect a phishing email. Typically, but not always, phishing emails use badly translated English with grammatical errors. Logos and icons may also appear ill-defined. Another good practice is to simply hover your mouse over the sender address in the email, which will generally reveal the actual sender. Check the hosting platform and the spelling of the company name, as these are both techniques used by hackers to confuse unwitting employees. You can also use client-based anti-phishing software, like that from avast! or Kaspersky Labs, which will flag suspicious emails. VPNs with an anti-malware feature also offer reliable protection against phishing scams.

Read Also: Using machine learning for phishing domain detection [Tutorial]

Keep your apps and devices up to date

Hackers commonly take advantage of flaws in old systems. Usually, when an update is released, it fixes these vulnerabilities. Make a habit of installing each update to keep your devices protected.

Disable Flash

Flash is a famously insecure piece of software that hackers can infiltrate easily. Most websites have moved away from Flash, but just to be sure, you should disable it in your browser. If you need it later, you can give Flash permission to run for one video at a time.

What to do if you have been hacked

If you do get a notice that your accounts have been breached, don't panic. Follow the steps given below:

- Notify your workplace
- Notify your bank
- Order credit reports to keep track of any activity
- Get identity theft insurance
- Place a credit freeze on your accounts or a fraud alert

Chinese and Russian hackers may seem impossible to avoid, but the truth is, we are probably not protecting ourselves as well as we should be. Though individuals are less likely to find themselves the target of Chinese hacks, most hackers are out for financial gain above all else. That makes it all the more crucial to protect our private data. The simple tips provided above are a great baseline to secure your devices and protect your privacy, whether you want to protect against state-sponsored hacking or individual actors.

Author Bio

Ariel Hochstadt is a successful international speaker and author of 3 published books on computers and the internet. He's an ex-Googler, where he was the Global Gmail Marketing Manager, and today he is the co-founder of vpnMentor and an advocate of online privacy. He's also very passionate about traveling around the world with his wife and three kids.

Google's Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
How to beat Cyber Interference in an Election process
The most asked questions on Big Data, Privacy, and Democracy in last month's international hearing by Canada Standing Committee

Are Recurrent Neural Networks capable of warping time?

Savia Lobo
07 May 2018
2 min read
'Can Recurrent neural networks warp time?' is authored by Corentin Tallec and Yann Ollivier, to be presented at ICLR 2018. The paper shows that plain RNNs cannot account for warpings, leaky RNNs can account for uniform time scalings but not irregular warpings, and gated RNNs can adapt to irregular warpings. In other words, it connects the gating mechanism of LSTMs (and GRUs) to time invariance and warping.

What problem is the paper trying to solve?

In this paper, the authors prove that learnable gates in a recurrent model formally provide quasi-invariance to general time transformations in the input data. Further, the authors recover part of the LSTM architecture from a simple axiomatic approach. This leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve learning of long-term dependencies, with minimal implementation effort.

Paper summary

The authors derive the self-loop feedback gating mechanism of recurrent networks from first principles, via a postulate of invariance to time warpings. Gated connections appear to regulate the local time constants in recurrent models. With this in mind, the chrono initialization, a principled way of initializing gate biases in LSTMs, is introduced. Experimentally, chrono initialization is shown to bring notable benefits when facing long-term dependencies.

Key takeaways

- The authors show that postulating invariance to time transformations in the data (taking invariance to time warping as an axiom) necessarily leads to a gate-like mechanism in recurrent models.
- The paper provides precise prescriptions on how to initialize gate biases depending on the range of time dependencies to be captured.
- The empirical benefits of the new initialization have been tested on both synthetic and real-world data. The authors observed a substantial improvement with long-term dependencies, and slight gains or no change when short-term dependencies dominate.

Reviewer comments summary

Overall score: 25/30. Average score: 8. According to one reviewer, the core insight of the paper is the link between recurrent network design and its effect on how the network reacts to time transformations. This insight is simple, elegant, and valuable, as per the reviewer. A minor complaint highlighted is that there is an unnecessarily large number of paragraph breaks, which makes reading slightly jarring.
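For readers who want to experiment with the idea, here is a minimal sketch of chrono initialization for an LSTM in PyTorch. It assumes PyTorch's [input, forget, cell, output] gate-bias layout and follows the paper's prescription of b_f ~ log(U(1, T_max − 1)) and b_i = −b_f; treat it as an illustration under those assumptions rather than the authors' reference code.

```python
import torch
import torch.nn as nn

def chrono_init(lstm: nn.LSTM, t_max: float) -> None:
    """Chrono initialization (Tallec & Ollivier, 2018):
    forget-gate bias b_f ~ log(U(1, t_max - 1)), input-gate bias b_i = -b_f."""
    h = lstm.hidden_size
    with torch.no_grad():
        for name, bias in lstm.named_parameters():
            if name.startswith("bias_ih"):
                bias.zero_()  # keep the *total* gate bias equal to the values below
            elif name.startswith("bias_hh"):
                bias.zero_()
                u = torch.empty(h).uniform_(1.0, t_max - 1.0)
                bias[h:2 * h] = torch.log(u)   # forget gate
                bias[:h] = -bias[h:2 * h]      # input gate

lstm = nn.LSTM(input_size=10, hidden_size=32)
chrono_init(lstm, t_max=100)  # t_max ~ longest dependency you expect to capture
```

Recurrent neural networks and the LSTM architecture
Build a generative chatbot using recurrent neural networks (LSTM RNNs)
How to recognize Patterns with Neural Networks in Java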
Why you should analyze user-behavior data before developing a mobile app?

Guest Contributor
16 Jan 2019
6 min read
What is the first thing that comes to your mind when we say "mobile app"? If you are a user, you are probably thinking of something convenient that eases your life. In a business context, however, it is an idea that can be converted into an app model and help boost your profitability. When successful entrepreneurs launch their original idea, they do not just design and develop it for the market; they research, understand the market minutely, and, more importantly, study the users in depth. One part of what leads you to success is a complete understanding of the user. Here, we will try to understand why user behavior analysis is important and how you can best deliver it.

Why analyze user behavior?

Instead of asking this question, let's ask the most important question: who are you designing the app for? The users of the app will be members of the target audience, and technically it is for them that you are planning the app layout and coding the all-new idea. In this case, you need to ensure it is usable for them, and that they find the app convenient. You need to understand every aspect of user behavior, ranging from how they use the app to what engages them. Analysis of user behavior will help you design the UX accordingly, and allows you to deliver effective app solutions. For this, we need to identify the different ways in which you can identify user behavior and what you need to consider in order to deliver a perfect app solution.

4 effective ways to analyze user behavior data

Here are four effective ways that will help you to analyze user behavior data and design and develop a mobile app accordingly:

- The app goal: Whenever users use an app, they do it with a specific goal in mind. For instance, when you use Uber, you are choosing travel convenience and avoiding haggling with the driver over the fare. The Uber app allows its users to book their ride with ease and know the amount for the ride beforehand. When you are designing for users, you need to understand the goal they are attempting to achieve with the app, and how best you can help them achieve it in the simplest way possible.
- The mobile usage: While designing an app for users, you need to understand how they use their mobile phone. What is most convenient for them? For instance, 79% of people use their left hand instead of their right hand to cradle their phone or use apps. Have you considered them while designing the app? Most people prefer portrait mode for certain apps; however, when they are viewing videos, they prefer to hold the phone in landscape mode. If your app does not change the view according to these preferences, then you are likely to lose customers. Do users use their thumb to access the buttons on the screen, or do they use a finger? How do they navigate through the screens? Do they hold the phone in one hand or cradle it? When you are able to answer these questions, you have nailed the design strategy. You would know just where to place the buttons and how to design the interaction. There are places within the mobile screen which have been marked as inaccessible. If you place buttons or other clickable elements in that part of the screen, then you are blocking access to the mobile app.
- Acknowledge feedback: What do users like the most about the mobile app in general, and what are the aspects that frustrate them? For instance, there are mobile app designs that don't connect well with the user.
An app that takes more than three seconds to load can be frustrating, and if images don't load quickly, the app can be discarded immediately. This is especially true for e-commerce apps, as there are lots of images, and people tend to expect an immediate response from these apps. When users give you their feedback, make sure you incorporate it into the app.
- The motivations: Finally, you need to take into account the motivations of users toward using the app and completing an action. What makes them want to click on the Buy Now or other action button in your app? Study your users. For some users, safety is the predominant motivator, while for others, the motivating factor is the value for money the app delivers. Along with the motivators, there are barriers too, which you need to consider in order to design the best user-centric app for the business idea.

Having identified different ways to understand user behavior, let's now talk about two simple methods that can be used to analyze user behavior data:

- Questionnaire: Prepare a questionnaire including questions like: What do you like best about our app? Which other apps would you use as an alternative to ours? What do you want us to improve? The possible questions are endless, but make sure they give you insights into your users. Spread this questionnaire among a group of people, and based on their answers, you can derive user behavior data and develop a mobile app accordingly.
- Mobile app analytics platforms: Another method is mobile app analytics platforms. Prepare a navigation flow, a flowchart of all the app screens, and submit it to a mobile app analytics platform to identify how users go from one screen to another. Through this navigation flow of your app, you can identify how users interact with screens and how they move through your app. This data will help you understand user behavior and make data-driven changes, as the short sketch below illustrates.
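Once a platform lets you export per-session screen sequences, even a few lines of analysis reveal the most common paths. Here is a tiny Python sketch; the screen names and sessions are invented for illustration.

```python
from collections import Counter

# Hypothetical per-session screen sequences exported from an analytics platform.
sessions = [
    ["Home", "Search", "Product", "Cart"],
    ["Home", "Product", "Cart", "Checkout"],
    ["Home", "Search", "Product"],
]

# Count every screen-to-screen transition across all sessions.
transitions = Counter(
    (src, dst)
    for screens in sessions
    for src, dst in zip(screens, screens[1:])
)

for (src, dst), n in transitions.most_common():
    print(f"{src} -> {dst}: {n}")
```

Conclusion

Analyzing user behavior must always be a high priority for businesses that want to make a successful app and grow over time. When it comes to analyzing user behavior, top companies and brands like Uber, Airbnb, Pinterest, and Starbucks are using AI (artificial intelligence) to provide a personalized experience to their users. Through AI and machine learning, businesses can learn about customer or user behavior on a deeper level and get help in delivering a better application. The possibilities are endless. The point is: are you utilizing already existing data to optimize the overall process?

Author Bio

Yuvrajsinh is a Marketing Manager at Space-O Technologies, a firm having expertise in developing Uber-like apps. He spends most of his time researching mobile app and startup trends. He is a regular contributor to popular publications like Entrepreneur, YourStory, and Upwork. If you have any confusion or questions, or need any consultation regarding the mobile app development process, feel free to contact him.

5 UX design tips for building a great e-commerce mobile app
4 key benefits of using Firebase for mobile app development
9 reasons to choose Agile Methodology for Mobile App Development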
Top 5 open source static site generators

Sugandha Lahoti
21 May 2018
6 min read
Static sites are back and stronger than ever. A large number of businesses have realized the importance of sticking to trendy, beautiful, static websites, which involve less hassle with server maintenance and fewer security exploits. For example, Nest and MailChimp, prominent design companies, use static site generators for their primary websites. Vox Media has built an entire publishing system around the Middleman static site generator. A static website contains web pages coded in HTML with fixed content, so they look the same to every user. These websites are made using static site generators, which automate the process of creating websites, with minimal coding required from developers. If you're looking to implement static sites in your next business project, we have compiled a list of the top 5 static site generators to help you design interactive and fast websites. Before we dive in, let's first understand when choosing static sites makes sense.

Why choose static sites?

Static sites typically take the content stored in flat files, as opposed to dynamic sites where databases serve as content stores. This content is applied against templates and is used to generate a structure of static HTML files. These static files function as the website for the users. Agreed, they lack real-time content and have limited functionality. But static sites come in really handy when you want to avoid the hassle of server maintenance while also keeping your pocket light. Not to mention, they are the best option available when your product doesn't require frequent updates.

Another important factor that contributes to their popularity is the ability to be indexed easily by Google search engines. Since Google has indicated site speed to be one of the signals used to rank pages, static sites have truly shone. With pure HTML static websites, you have total control over your SEO, and the HTML and CSS are fully understood by search engines. Unlike with dynamic websites, you don't need a special plugin to manage your SEO or to optimize page load time. Static sites are fast loading, secure, and, most importantly, well prepared for traffic surges. This is why their popularity is only surging with the growth of publishing content online.

Now that I have ignited your interest in making your next website static, let's look at some frameworks for building these sites. Static site generators have exploded in popularity in recent years, with a total of more than 100,000 stars for static website generator repositories. Hence, navigating the wide range of choices can be difficult. Here are my top five picks to get you started.

Jekyll: The most mature player

Jekyll is perhaps the most mature and popular static site generator (quite obvious from the GitHub stars). It is built with Ruby and is typically used for transforming plain text into static websites and blogs. It takes a directory filled with text files, renders that content with Markdown and Liquid templates, and generates a publish-ready static website. Jekyll comes with the big bonus of being natively supported by GitHub Pages, so you can easily deploy your site using GitHub for absolutely free. It also has a huge community and a wide array of plugins, making it easier for WordPress and Drupal developers to import content.

Hugo: The fastest player

Blazingly fast, Hugo is a static HTML and CSS website generator built around Google's Go programming language. It is optimized for speed, ease of use, and configurability.
As with Jekyll, Hugo takes a directory of text files and templates, albeit written in Go, and generates them into a full HTML website:

- It is extremely fast, with build times of less than 1 ms per page.
- It is cross-platform, with easy installation on macOS, Linux, Windows, and more.
- It renders changes on the fly with LiveReload as you develop.
- It provides full i18n support for multi-language sites.

Hexo: The one-command player

Hexo is a powerful framework built with Node.js. It offers super fast rendering, even for extremely large sites. Hexo is highly extensible, as it offers support for GitHub Flavored Markdown and most Octopress plugins. It has one-command deployment to GitHub Pages, Heroku, and other sites. Hexo also features a powerful plugin system. You can install more plugins for Jade and CoffeeScript, and many Jekyll plugins work with minor adjustments.

Gatsby: The multi-tasker

Gatsby is a static site generator for React. It is optimized for speed, as it loads only critical parts for fast loading. Once loaded, Gatsby prefetches resources for other pages, so that clicking around the site feels incredibly fast. Gatsby.js can also be used to generate static Progressive Web Apps. It does automatic routing based on the directory structure; the HTML code is generated server-side, and additional code need not be included for configuring the router. Gatsby has a pre-configured Webpack-based build system and allows easy data integration from CMSs, SaaS services, APIs, databases, and file systems.

VuePress: The new player

VuePress, the new player in town, is a minimalistic static site generator powered by Vue.js. VuePress creates a single-page application with pre-rendered static HTML from a Markdown file. This Markdown file is powered by Vue, VueRouter, and Webpack. It is composed of two parts: a theming system, and a default theme optimized for writing technical documentation. This default theme has header-based search, a customizable navbar and sidebar, an optional homepage, auto-generated GitHub links, and page edit links. VuePress also comes with integrated Google Analytics and multi-language support.

Here's a short table summarizing all five static site generators:

| Static site generator | GitHub stars | Language | Templates | Features |
| --- | --- | --- | --- | --- |
| Jekyll | 34k+ | Ruby | Liquid | Most mature and popular; supported by GitHub Pages; a wide array of plugins |
| Hugo | 25k+ | Go | Go | Extremely fast; cross-platform; renders changes on the fly; full i18n support for multi-language sites |
| Hexo | 22k+ | JavaScript | EJS, Pug | Highly extensible; one-command deployment; powerful plugin system |
| Gatsby | 21k+ | JavaScript | React | Optimized for speed; generates static PWAs; pre-configured Webpack-based build system |
| VuePress | 7k+ (growing fast) | JavaScript | Vue | Theming system; default theme; Google Analytics support; multi-language support |

Apart from these, you also have Next, GitBook, Nuxt, and Pelican, among others, as alternative static site generators to choose from. Before going with your choice of static site generator, you first need to make an informed decision on whether or not a static site is right for your next project. Consider your website needs and the kind of business you're running. If your website has too much going on, it may be killing your traffic. In such cases, having a fast, secure, and beautiful static site is much more beneficial than a massive, unwieldy dynamic website.
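To make the core idea concrete, the heart of every generator above is the same loop: read content files, pour them into a template, write HTML. Here is a deliberately tiny Python sketch of that loop (not how any of the five tools is actually implemented); the file layout and template are invented for illustration.

```python
from pathlib import Path

TEMPLATE = "<html><head><title>{title}</title></head><body><h1>{title}</h1>{body}</body></html>"

def build(src: str = "content", out: str = "public") -> None:
    """Render every .txt file in `src` (first line = title) into `out`."""
    Path(out).mkdir(exist_ok=True)
    for page in Path(src).glob("*.txt"):
        title, _, body = page.read_text().partition("\n")
        # Crude paragraph handling: blank lines become paragraph breaks.
        html = TEMPLATE.format(title=title, body=body.replace("\n\n", "<p>"))
        Path(out, page.stem + ".html").write_text(html)

build()
```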
Firefox 60 arrives with exciting updates for web developers: Quantum CSS engine, new Web APIs and more [news]
Get ready for Bootstrap v4.1; Web developers to strap up their boots [news]
How to create a generic reusable section for a single page based website [tutorial]
3 cybersecurity lessons for e-commerce website administrators

Guest Contributor
25 Jun 2019
8 min read
In large part, the security of an ecommerce company is the responsibility of its technical support team and ecommerce software vendors. In reality, cybercriminals often exploit the security illiteracy of staff to hit a company. Of the whole ecommerce team, web administrators are most often targeted by hacker attacks, as they control access to the admin panel with its wealth of sensitive data. Having broken into the admin panel, criminals can take over an online store, disrupt its operation, retrieve customers' confidential data, steal credit card information, transfer payments to their own account, and do more harm to business owners and customers. Online retailers contribute greatly to the security of their company when they educate web administrators about where security threats can come from and what measures they can take to prevent breaches. We have summarized some key lessons below. It's time for a quick cybersecurity class!

Lesson 1. Mind password policy

Starting with the basics of cybersecurity, we will proceed to more sophisticated rules in the lessons that follow. While the importance of a secure password policy may seem obvious, it's still shocking how careless people can be when choosing a password. In ecommerce, web administrators set the credentials for accessing the admin panel, and they can "help" cybercriminals greatly if they neglect basic password rules.

Never use similar or identical passwords to log in to different systems. In general, sticking to the same patterns when creating passwords (for example, using a date of birth) is risky. Typically, people have a number of personal profiles on social networks and email services. If they use identical passwords for all of them, cybercriminals can steal the credentials to just one social media profile to crack the others. If employees are that negligent about accessing corporate systems, they endanger the security of the company. Let's outline the worst-case scenario. Criminals take advantage of the leaked database of 167 million LinkedIn accounts to hack a large online store. As soon as they see the password of its web administrator (the employment information is stated in the profile, just for hackers' convenience), they try to apply that password to get access to the admin panel. What luck! The way into this web store was all too easy.

Use strong and impersonalized passwords. We need to introduce the notion of doxing to fully explain the importance of this rule. Doxing is the process of collecting pieces of information from social accounts to ultimately create a virtual profile of a person. Cybercriminals engage in doxing to crack a password to an ecommerce platform by using the admin's personal information in it. Therefore, a strong password shouldn't contain personal details (like dates, names, age, and so on) and must consist of eight or more characters featuring a mix of letters, numbers, and unique symbols.
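As a worked example of these rules, here is a short Python sketch that rejects passwords that are too short, that lack a mix of character classes, or that contain known personal details. The sample details are invented for illustration.

```python
import re

def is_strong(password: str, personal_details: list) -> bool:
    """8+ characters, mixed character classes, no personal details."""
    if len(password) < 8:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    if not all(re.search(c, password) for c in classes):
        return False
    lowered = password.lower()
    return not any(detail.lower() in lowered for detail in personal_details if detail)

print(is_strong("Summer1990!", ["anna", "1990"]))  # False: contains a birth year
print(is_strong("kV7#mpx2Qz", ["anna", "1990"]))   # True
```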
Lesson 2. Watch out for phishing attacks

With the wealth of employment information people leave in social accounts, hackers hold all the cards for implementing targeted, rather than bulk, phishing attacks. When planning a malicious attack on an ecommerce business, criminals can search for profiles of employees, check their positions and responsibilities, and conclude what company information they have access to. In such an easy way, hackers get to know a web store administrator and follow up with a series of phishing attacks. Here are two possible scenarios of attack:

When hackers target a personal computer. Having found a web administrator's LinkedIn profile and obtained their personal email address, hackers can bombard them with disguised messages, for example, from a bank or tax authorities. If the admin lets their guard down and clicks a malicious link, malware installs itself on their personal computer. Should they then log in to the admin panel remotely, hackers steal their credentials and immediately set a new password. From this moment, they take control of the web store. Hackers can also go a different way. They target the web administrator's personal email with a phishing attack and succeed in taking it over. Let's say they have already found out the URL of the admin panel by that time. All they have to do now is request a password change for the panel, click the confirmation link from the admin's email, and set a new password. In the described scenario, the web administrator has made three security mistakes: using a personal email for work purposes, not changing the default admin URL, and taking the bait of a phishing email.

When hackers target a work computer. Here is how a cyberattack may unfold if web administrators have been reckless enough to disclose a work email online. This time, hackers create a targeted malicious email related to work activities. Let's say the admin gets a legitimate-looking email from FedEx reporting delivery problems. Not alarmed, they open the email, click the link to learn the details, and compromise the security of the web store by giving away the credentials to the admin panel.

The main mistake in dealing with phishing attacks is to expect a fraudulent email to look suspicious. However, phishers imitate emails from real companies, so it is easy to fall into the trap. Here are recommendations for ecommerce web administrators to follow (the hover check is automated in the sketch after this list):

Don't use personal emails to log in to the admin panel.
Don't make your work email publicly available.
Don't use your work email for personal purposes (e.g., for registration in social networks).
Watch out for links and downloads in emails. Always hover over a link prior to clicking it – in malicious emails, the destination URL doesn't match the expected destination website.
Remember that legitimate companies never ask for your credentials, credit card details, or any other sensitive information in emails.
Be wary of emails with urgent notifications and deadlines – hackers often try to allay suspicions by provoking anxiety and panic among their victims.
Enable two-step verification for the ecommerce admin panel.
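The "hover before you click" habit can even be backed up with a little tooling. Below is a hedged sketch, in standard-library Python, that flags links whose visible text claims one destination while the underlying href points somewhere else; the sample email snippet is invented for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def flag_mismatched_links(html_body: str) -> None:
    auditor = LinkAuditor()
    auditor.feed(html_body)
    for href, text in auditor.links:
        # If the visible text looks like a URL, its host should match the real target.
        if text.startswith("http") and urlparse(text).netloc != urlparse(href).netloc:
            print(f"Suspicious: displays {text!r} but points to {href!r}")

flag_mismatched_links(
    '<a href="http://phish.example.net/reset">https://www.fedex.com/track</a>'
)
```

A real mail gateway does far more than this, but the mismatch between displayed and actual destinations is exactly what the manual hover check looks for.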
Lesson 3. Stay alert while communicating with a hosting provider

Web administrators of companies that have chosen a hosted ecommerce platform for their e-shop will need to contact the technical support of their hosting provider now and then. Here, a cybersecurity threat can come unexpectedly. If hackers have compromised the security of the web hosting company, they can target its clients (ecommerce websites) as well. Admins are in serious danger if the hosting company stores their credentials unencrypted; in this case, hackers can get direct access to the admin panel of a web store. Otherwise, more sophisticated attacks are devised: cybercriminals can mislead web administrators by posing as tech support agents. When communicating with their hosting provider, web administrators should mind several rules to protect their confidential data and the web store from hacking.

Use a unique email and password to log in to your web hosting account. Reusing credentials across different work services or systems can lead to a company-wide security breach if the hosting company is hacked.

Never reveal any credentials on request of tech support agents. Once web administrators have shared their password to the admin panel, it can no longer serve to authenticate them.

Track your company's communication with tech support. Web administrators can set email notifications to track requests from team members to tech support and control what information is shared.

Time for an exam

As a rule, ecommerce software vendors and retailers do their best to secure ecommerce businesses. Software vendors take the major role in providing for the security of SaaS ecommerce solutions (like Shopify or Salesforce Commerce Cloud), including the security of servers, databases, and the application itself. In IaaS solutions (like Magento), retailers need to put more effort into maintaining the security of the environment and system, staying current on security updates, conducting regular audits, and more (you can see the full list of Magento security measures as an example). Still, cybercriminals often target company employees to hack an online store. Retailers are responsible for educating their team on which security rules are compulsory to follow and how to identify malicious intent. In this article, we have outlined the fundamental security lessons for web administrators to learn in order to protect a web store against illicit access. In short, they should be careful with the personal information they publish online (in their social media profiles) and use unique credentials for different services and systems. There are no grades in our lessons – rather, an admin's contribution to the security of their company is the true measure of the knowledge they have gained.

About the Author

Tanya Yablonskaya is an Ecommerce Industry Analyst at ScienceSoft, an IT consulting and software development company headquartered in McKinney, Texas. After 2+ years of exploring the cryptocurrency and blockchain sphere, she has shifted her focus to the ecommerce industry. Delving into this enormous world, Tanya covers key challenges online retailers face and unveils a wealth of tools they can use to outpace competitors.

The US launched a cyber attack on Iran to disable its rocket launch systems; Iran calls it unsuccessful
All Docker versions are now vulnerable to a symlink race attack
12,000+ unsecured MongoDB databases deleted by Unistellar attackers

How is Artificial Intelligence changing the mobile developer role?

Bhagyashree R
15 Oct 2018
10 min read
Last year, at Google I/O, Sundar Pichai, the CEO of Google, said: "We are moving from a mobile-first world to an AI-first world." Is this only applicable to Google? Not really. In the recent past, we have seen several advancements in Artificial Intelligence and, in parallel, a plethora of intelligent apps coming into the market. These advancements enable developers to take their apps to the next level by integrating recommendation services, image recognition, speech recognition, voice translation, and many more cool capabilities.

Artificial Intelligence is becoming a potent tool for mobile developers to experiment and innovate with. The Artificial Intelligence components that are integral to mobile experiences, such as voice-based assistants and location-based services, increasingly require mobile developers to have a basic understanding of Artificial Intelligence to be effective. Of course, you don't have to be an Artificial Intelligence expert to include intelligent components in your app. But you should definitely understand something about what you're building into your app and why. After all, AI in mobile is not just about calling an API, is it? There's more to it, and in this article we will explore how Artificial Intelligence will shape the mobile developer role in the immediate future.

Read also: AI on mobile: How AI is taking over the mobile devices marketspace

What is changing in the mobile developer role?

Focus shifting to data

With Artificial Intelligence becoming more and more accessible, intelligent apps are becoming the new norm for businesses. Artificial Intelligence strengthens the relationship between brands and customers, inspiring developers to build smart apps that increase user retention. This also means that developers have to direct their focus to data. They have to answer questions like: How will the data be collected? How will the data be fed to the machines, and how often will new data be needed? When nearly 1 in 4 people abandon an app after its first use, as a mobile app developer, you need to rethink how you drive in-app personalization and engagement.

Explore a "humanized" way of user-app interaction

With so many assistants such as Siri and Google Assistant coming into the market, we can see that "humanizing" the interaction between the user and the app is becoming mainstream. "Humanizing" is the process by which the app becomes relatable to the user, and the more effectively it is done, the more the end user will interact with the app. Users now expect easy navigation and search, and Artificial Intelligence fits perfectly into this scenario. The advances in technologies like text-to-speech, speech-to-text, Natural Language Processing, and cloud services in general have contributed to the mass adoption of these types of interfaces.

Companies are increasingly expecting mobile developers to be comfortable working with AI functionalities

Artificial Intelligence is the future. Companies now expect their mobile developers to know how to handle the huge amount of data generated every day and how to use it.
Here is an example of what Google wants its engineers to do: "We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day." This open-ended requirement list shows that now is the right time to learn and embrace Artificial Intelligence.

What skills do you need to build intelligent apps?

Ideally, data scientists are the ones who conceptualize mathematical models, and machine learning engineers are the ones who translate them into code and train the models. But when you are working in a resource-tight environment, for example in a start-up, you will be responsible for the end-to-end job. It is not as scary as it sounds, because you have several resources to get started with!

Taking your first steps with machine learning as a service

Learning anything starts with motivating yourself. Diving directly into the maths and coding parts of machine learning might exhaust and bore you. That's why it's a good idea to know what the end goal of your entire learning process is going to be and what types of solutions are possible using machine learning. There are many products available that you can try to get started quickly, such as Google Cloud AutoML (Beta), Firebase ML Kit (Beta), and the Fritz Mobile SDK, among others.

Read also: Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Getting your hands dirty

After this "warm-up", the next step involves creating and training your own model. This is where you'll be introduced to TensorFlow Lite, which is going to be your best friend throughout your journey as a machine learning mobile developer. There are many other machine learning tools coming onto the market that you can make use of. These tools make building AI into mobile easier. For instance, you can use Dialogflow, a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can then integrate it with Alexa, Cortana, Facebook Messenger, and other platforms your users are on.

Read also: 7 Artificial Intelligence tools mobile developers need to know

For practice, you can leverage an amazing codelab by Google, TensorFlow For Poets. It guides you through creating and training a custom image classification model. Through this codelab you will learn the basics of data collection, model optimization, and the other key components involved in creating your own model. The codelab is divided into two parts. The first part covers creating and training the model, and the second part focuses on TensorFlow Lite, the mobile version of TensorFlow that allows you to run the same model on a mobile device.
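To give a flavor of where the codelab ends up, here is a minimal sketch of running an already-converted TensorFlow Lite classifier from Python; `graph.tflite` is a placeholder path, and the random array stands in for a real preprocessed image:

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite image-classification model (placeholder path).
interpreter = tf.lite.Interpreter(model_path="graph.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fabricate one input tensor matching the model's expected shape.
input_shape = input_details[0]["shape"]
dummy_image = np.random.random_sample(input_shape).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()

probabilities = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class index:", int(np.argmax(probabilities)))
```

On Android or iOS the same model file is loaded through the platform's TensorFlow Lite interpreter, so a desktop-side check like this is a cheap way to validate a model before shipping it in an app.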
Mathematics is the foundation of machine learning

Love it or hate it, machine learning and Artificial Intelligence are built on mathematical principles like calculus, linear algebra, probability, statistics, and optimization. You need to learn some essential foundational concepts and the notation used to express them. There are many reasons why learning mathematics for machine learning is important. It will help you select the right algorithm, which involves weighing accuracy, training time, model complexity, the number of parameters, and the number of features. Maths is also needed when choosing parameter settings and validation strategies, and when identifying underfitting and overfitting via the bias-variance tradeoff.

Read also: Bias-Variance tradeoff: How to choose between bias and variance for your machine learning model [Tutorial]
Read also: What is Statistical Analysis and why does it matter?

What are the key aspects of Artificial Intelligence for mobile to keep in mind?

Understanding the problem

Your number one priority should be the user problem you are trying to solve. Instead of randomly integrating a machine learning model into an application, developers should understand how the model applies to the particular application or use case. This is important because you might end up building a great machine learning model with an excellent accuracy rate, but if it does not solve any real problem, it ends up redundant. You must also understand that while many business problems require machine learning approaches, not all of them do. Most business problems can be solved through simple analytics or a baseline approach.

Data is your best friend

Machine learning is dependent on data; the data that you use, and how you use it, will define the success of your machine learning model. You can make use of the thousands of open source datasets available online. Google recently launched a dataset search tool, Google Dataset Search, which makes it easier for you to find the right dataset for your problem. Typically, there's no shortage of data; however, the abundance of data does not mean that the data is clean, reliable, or can be used as intended. Data cleanliness is a huge issue. For example, a typical company will have multiple customer records for a single individual, all of which differ slightly (the sketch after this section shows this problem in miniature). If the data isn't clean, it isn't reliable. The bottom line is that it's bad practice to just grab data and use it without considering its origin.

Read also: Best Machine Learning Datasets for beginners
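As a toy illustration of the duplicate-record problem mentioned above, here is a hedged sketch in Python with pandas; the customer table and the normalization rules are invented, and real record linkage usually needs fuzzier matching than exact keys:

```python
import pandas as pd

# Toy customer table: the same person appears twice with slight variations.
customers = pd.DataFrame({
    "name":  ["Jane Doe", "jane doe ", "John Roe"],
    "email": ["jane@example.com", "JANE@EXAMPLE.COM", "john@example.com"],
})

# Normalize the fields we match on before dropping duplicates.
customers["name_key"] = customers["name"].str.strip().str.lower()
customers["email_key"] = customers["email"].str.strip().str.lower()
deduplicated = customers.drop_duplicates(subset=["name_key", "email_key"])

print(deduplicated[["name", "email"]])  # Jane Doe now appears only once
```

Cleaning steps like this belong in the data pipeline itself, so that every retraining run sees the same sanitized view of the data.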
Decide which model to choose

A machine learning algorithm is trained on data, and the artifact created by the training process is called the machine learning model. An ML model is used to find patterns in data without the developer having to explicitly program those patterns. We cannot look through such huge amounts of data and understand the patterns ourselves. Think of the model as your helper, which will look through all those terabytes of data and extract knowledge and insights from them. You have two choices here: either create your own model or use a pre-built one. While there are several pre-built models available, your business-specific use cases may require specialized models to yield the desired results. These off-the-shelf models may also need some fine-tuning or modification to deliver the value the app is intended to provide.

Read also: 10 machine learning algorithms every engineer needs to know

Thinking about resource utilization is important

Artificial Intelligence-powered apps, or apps in general, should be developed with resource utilization in mind. Though companies are working towards improving mobile hardware, it currently cannot match what we can accomplish with GPU clusters in the cloud. Therefore, developers need to consider how the models they intend to use would affect resources, including battery power and memory usage. In terms of computational resources, inferencing, or making predictions, is less costly than training. Inferencing on the device means that the model needs to be loaded into RAM, and it also requires significant computational time on the GPU or CPU. In scenarios that involve continuous inferencing, such as audio and image data that can chew up bandwidth quickly, on-device inferencing is a good choice.

Learning never stops

Maintenance is important, and to do it you need to establish a feedback loop and have a process and culture of continuous evaluation and improvement. A change in consumer behavior or a market trend can have a negative impact on the model. Eventually, something will break or no longer work as intended, which is another reason why developers need to understand the basics of what it is they're adding to an app. You need to have some knowledge of how the Artificial Intelligence component that you just put together is working, or how it could be made to run faster.

Wrapping up

Before falling for the Artificial Intelligence and machine learning hype, it's important to understand and analyze the problem you are trying to solve. You should examine whether applying machine learning can improve the quality of the service, and decide if this improvement justifies the effort of deploying a machine learning model. If you just want a simple API endpoint and don't want to dedicate much time to deploying a model, cloud-based web services are the best option for you. Tools like ML Kit for Firebase look promising and seem like a good choice for startups or developers just starting out. TensorFlow Lite and Core ML are good options if you have mobile developers on your team or if you're willing to get your hands a little dirty. Artificial Intelligence is influencing the app development process by providing a data-driven approach to solving user problems. It wouldn't be surprising if, in the near future, Artificial Intelligence becomes a defining factor of app developers' expertise and creativity.

10 useful Google Cloud Artificial Intelligence services for your next machine learning project [Tutorial]
How Artificial Intelligence is going to transform the Data Center
How Serverless computing is making Artificial Intelligence development easier


How are Mobile apps transforming the healthcare industry?

Guest Contributor
15 Jan 2019
5 min read
Mobile app development has taken over and completely rewritten the healthcare industry. According to Healthcare Mobility Solutions reports, the mobile healthcare application market is expected to be worth more than $84 million by the year 2020. These mobile applications are not just limited to use by patients but are also used massively by doctors and nurses. As technology evolves, it simultaneously opens up the possibility of being used in multiple ways. Healthcare mobile app development has followed a similar journey: it originated in the latest technology trends and has made its way to being an industry in itself. The technological trends that have helped build mobile apps for the healthcare industry are:

Blockchain

You probably know blockchain technology, thanks to all the cryptocurrency rage in recent years. A blockchain is basically a peer-to-peer database that keeps a verified record of all transactions, or any other information that one needs to track and have accessible to a large community. The healthcare industry can use a technology that allows it to record the medical history of patients and store it electronically, in an encrypted form that cannot be altered or hacked into. Blockchain succeeds where a lot of health applications fail: in the secure retention of patient data.

The Internet of Things

The Internet of Things (IoT) is all about connectivity. It is a way of interconnecting electronic devices, software, applications, and so on, to ensure easy access and management across platforms. The IoT will assist medical professionals in gaining access to valuable patient information so that doctors can monitor the progress of their patients. This makes treatment of the patient easier and more closely monitored, as doctors can access the patient's current profile anywhere and suggest treatment, medicines, and dosages.

Augmented Reality

From the video gaming industry, Augmented Reality has made its way to the medical sector. AR refers to the creation of an interactive experience of a real-world environment through the superimposition of computer-generated perceptual information. AR is increasingly used to develop mobile applications that can be used by doctors and surgeons as a training experience. It simulates a real-world experience of diagnosis and surgery, and by doing so enhances the knowledge and its practical application that all doctors must necessarily possess. This form of training is not limited in nature and can therefore train a large number of medical practitioners simultaneously.

Big Data Analytics

Big Data has the potential to provide comprehensive statistical information that can only be accessed and processed through sophisticated software. Big Data Analytics becomes extremely useful when it comes to managing a hospital's resources and records in an efficient manner. Aside from this, it is used in the development of mobile applications that store all patient data, thus again eliminating the need for excessive paperwork. This allows medical professionals to focus more on attending to and treating patients, rather than managing databases.

These technological trends have led to the development of a diverse variety of mobile applications to be used for multiple purposes in the healthcare industry. Listed below are the benefits of the mobile apps deploying these technological trends, for professionals and patients alike.

Telemedicine

Mobile applications can potentially play a crucial role in making medical services available to the masses.
An example is an on-call physician on telemedicine duty. A mobile application allows the physician to be available for a patient consult without having to operate via a PC. This makes doctors more accessible and brings quality treatment to patients quickly.

Enhanced Patient Engagement

There are mobile applications that place all patient data – from past medical history to performance metrics, patient feedback, and changes in treatment patterns and schedules – at the push of a button on the smartphone application, for the medical professional to consider and make decisions on the go. Since all data is recorded in real time, it is easy for doctors to change shifts without having to explain the patient's condition to the next doctor in person. The mobile application has all the data the supervisors or nurses need.

Easy Access to Medical Facilities

There are a number of mobile applications that allow patients to search for medical professionals in their area, read reviews and feedback from other patients, and then make an online appointment if they are satisfied with the information they find. Apart from this, they can also download and store their medical lab reports, and order medicines online at affordable prices.

Easy Payment of Bills

As in every other sector, mobile applications in healthcare have made monetary transactions extremely easy. Patients or their family members no longer need to spend hours waiting in line to pay bills. They can instantly pick a payment plan and pay bills immediately, or add reminders to be notified when a bill is due.

Therefore, it can safely be said that the revolution the healthcare industry is undergoing has worked in favor of all the parties involved: medical professionals, patients, hospital management, and mobile app developers.

Author's Bio

Ritesh Patil is the co-founder of Mobisoft Infotech, which helps startups and enterprises with mobile technology. He's an avid blogger and writes on mobile application development. He has developed innovative mobile applications across various fields such as finance, insurance, health, entertainment, productivity, social causes, and education, and has bagged numerous awards for them. Social Media – Twitter, LinkedIn

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0
7 Popular Applications of Artificial Intelligence in Healthcare


5 reasons to learn Generative Adversarial Networks (GANs) in 2018

Savia Lobo
12 Dec 2017
5 min read
Generative Adversarial Networks (GANs) are a prominent branch of machine learning research today. As deep neural networks require a lot of data to train on, they perform poorly if the data provided is insufficient. GANs can overcome this problem by generating new, realistic data, without resorting to tricks like data augmentation. As the application of GANs in the machine learning industry is still in its infancy, it is considered a highly desirable niche skill. Added hands-on experience raises the bar in the job market: it can fetch you higher pay than your colleagues and can be the feature that sets your resume apart.

Source: Gartner's Hype Cycle 2017

GANs, along with CNNs and RNNs, are part of the in-demand deep neural network experience in the industry. Here are five reasons why you should learn GANs today, and how Kuntal Ganguly's book, Learning Generative Adversarial Networks, helps you do just that. Kuntal is a big data analytics engineer at Amazon Web Services. He has around 7 years of experience building large-scale, data-driven systems using big data frameworks and machine learning, and has designed, developed, and deployed several large-scale distributed applications single-handedly. Kuntal is a seasoned author with books ranging across the data science spectrum, from machine learning and deep learning to Generative Adversarial Networks. The book shows how to implement GANs in your machine learning models in a quick and easy format, with plenty of real-world examples and hands-on tutorials.

1. Unsupervised Learning now a cakewalk with GANs

A major challenge of unsupervised learning is the massive amount of unlabelled data one needs to work through as part of data preparation. In traditional neural networks, this labeling of data is both costly and time-consuming. A creative aspect of deep learning is now possible using Generative Adversarial Networks: the neural networks are capable of generating realistic images from real-world datasets (such as MNIST and CIFAR). GANs provide an easy way to train deep learning algorithms by slashing the amount of data required to train the neural network models, with no labeling of data required. This book uses a semi-supervised approach to solve the problem of unsupervised learning for classifying images, and this approach can easily be leveraged in a developer's own problem domain.
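For readers who have never seen one, here is a deliberately tiny sketch of the two-player training loop behind every GAN, written with tf.keras. The architectures are toy-sized and the random array stands in for a real dataset such as MNIST; this is the classic pattern rather than code from the book:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim = 64

# Generator: maps random noise to a flattened 28x28 "image".
generator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(784, activation="sigmoid"),
])

# Discriminator: scores an image's probability of being real.
discriminator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model: trains the generator to fool the (frozen) discriminator.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real_images = np.random.rand(256, 784).astype("float32")  # stand-in data

for step in range(100):
    # 1) Train the discriminator on a real batch and a generated batch.
    noise = np.random.normal(size=(32, latent_dim))
    fake = generator.predict(noise, verbose=0)
    real = real_images[np.random.randint(0, 256, 32)]
    discriminator.train_on_batch(real, np.ones((32, 1)))
    discriminator.train_on_batch(fake, np.zeros((32, 1)))
    # 2) Train the generator to make the discriminator answer "real".
    noise = np.random.normal(size=(32, latent_dim))
    gan.train_on_batch(noise, np.ones((32, 1)))
```

Every application below, from style transfer to text-to-image synthesis, is a variation on this adversarial game between generator and discriminator.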
2. GANs help you change a horse into a zebra using image style transfer

https://www.youtube.com/watch?v=9reHvktowLY

Turning an apple into an orange is magic!! GANs can do this magic without casting a spell, through image-to-image style transfer, where the styling of one image is applied to another. GANs can perform image-to-image translations across various domains (such as changing an apple to an orange or a horse to a zebra) using Cycle-Consistent Generative Adversarial Networks (CycleGANs). Detailed examples of how to turn the image of an apple into an orange using TensorFlow, and how to turn an image of a horse into a zebra using a GAN model, are given in this book.

3. GANs input your text and output an image

Generative Adversarial Networks can also be utilized for text-to-image synthesis, for example generating a photo-realistic image based on a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of both in a multimodal space for both the generator and the discriminator. In this book, Kuntal showcases the technique of stacking multiple generative networks to generate realistic images from textual information using StackGANs. Further, the book goes on to explain the coupling of two generative networks to automatically discover relationships among various domains (such as the relationship between shoes and handbags, or actors and actresses) using DiscoGANs.

4. GANs + Transfer Learning = no more model generation from scratch

Source: Learning Generative Adversarial Networks

Data is the basis for training any machine learning model, and its scarcity can lead to a poorly-trained model with high chances of failure. Some real-life scenarios may not have sufficient data, hardware, or resources to train bigger networks in order to achieve the desired accuracy. So, is training from scratch a must for training models? A well-known technique in deep learning that adapts an existing trained model to a task similar to the task at hand is known as Transfer Learning. This book showcases Transfer Learning with hands-on examples, and further explains the combination of Transfer Learning and GANs to generate high-resolution realistic images from facial datasets. You will also understand how to create artistic hallucinations on images beyond GANs.

5. GANs help you take machine learning models to production

Most machine learning tutorials, video courses, and books explain the training and the evaluation of models. But how do we take a trained model to production, put it to use, and make it available to customers? In this book, the author takes an example – developing a facial correction system using the LFW dataset – to automatically correct corrupted images using your trained GAN model. The book also covers several techniques for deploying machine learning or deep learning models in production, both in data centers and on clouds with microservice-based containerized environments. You will also learn how to run deep models in a serverless environment and with managed cloud services.

This article just scratches the surface of what is possible with GANs and why learning them will change your thinking about deep neural networks. To know more, grab your copy of Kuntal Ganguly's book on GANs: Learning Generative Adversarial Networks.

The rise of machine learning in the investment industry

Natasha Mathur
15 Feb 2019
13 min read
The investment industry has evolved dramatically over the last several decades and continues to do so amid increased competition, technological advances, and a challenging economic environment. In this article, we will review several key trends that have shaped the investment environment in general, and the context for algorithmic trading more specifically. This article is an excerpt taken from the book 'Hands on Machine Learning for algorithmic trading' written by Stefan Jansen. The book explores the strategic perspective, conceptual understanding, and practical tools to add value from applying ML to the trading and investment process.

The trends that have propelled algorithmic trading and ML to current prominence include:

Changes in the market microstructure, such as the spread of electronic trading and the integration of markets across asset classes and geographies
The development of investment strategies framed in terms of risk-factor exposure, as opposed to asset classes
The revolutions in computing power, data generation and management, and analytic methods
The outperformance of the pioneers in algorithmic trading relative to human, discretionary investors

In addition, the financial crises of 2001 and 2008 have affected how investors approach diversification and risk management, and have given rise to low-cost passive investment vehicles in the form of exchange-traded funds (ETFs). Amid low yield and low volatility after the 2008 crisis, cost-conscious investors shifted $2 trillion from actively-managed mutual funds into passively managed ETFs. Competitive pressure is also reflected in lower hedge fund fees, which dropped from the traditional 2% annual management fee and 20% take of profits to an average of 1.48% and 17.4%, respectively, in 2017. Let's have a look at how ML has come to play a strategic role in algorithmic trading.

Factor investing and smart beta funds

The return provided by an asset is a function of the uncertainty or risk associated with the financial investment. An equity investment implies, for example, assuming a company's business risk, and a bond investment implies assuming default risk. To the extent that specific risk characteristics predict returns, identifying and forecasting the behavior of these risk factors becomes a primary focus when designing an investment strategy. It yields valuable trading signals and is the key to superior active-management results. The industry's understanding of risk factors has evolved very substantially over time and has impacted how ML is used for algorithmic trading.

Modern Portfolio Theory (MPT) introduced the distinction between idiosyncratic and systematic sources of risk for a given asset. Idiosyncratic risk can be eliminated through diversification, but systematic risk cannot. In the early 1960s, the Capital Asset Pricing Model (CAPM) identified a single factor driving all asset returns: the return on the market portfolio in excess of T-bills. The market portfolio consisted of all tradable securities, weighted by their market value. The systematic exposure of an asset to the market is measured by beta, the sensitivity of the asset's returns to the returns of the market portfolio (their covariance scaled by the variance of market returns). The recognition that the risk of an asset does not depend on the asset in isolation, but rather on how it moves relative to other assets and the market as a whole, was a major conceptual breakthrough. In other words, assets do not earn a risk premium because of their specific, idiosyncratic characteristics, but because of their exposure to underlying factor risks.
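To make the beta definition concrete, here is a minimal sketch of estimating CAPM beta in Python; the return series are simulated rather than real market data, and the "true" beta of 1.2 is an assumption baked into the simulation:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulated daily excess returns; in practice these come from market data.
market = rng.normal(0.0005, 0.010, 250)            # market portfolio
asset = 1.2 * market + rng.normal(0, 0.008, 250)   # asset with true beta ~1.2

# CAPM beta: covariance of asset and market returns, scaled by market variance.
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
print(f"Estimated beta: {beta:.2f}")  # close to 1.2
```

The same number falls out of an ordinary least squares regression of asset returns on market returns, which is how beta is usually estimated in practice.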
However, a large body of academic literature and long investing experience have disproved the CAPM prediction that asset risk premiums depend only on their exposure to a single factor measured by the asset's beta. Instead, numerous additional risk factors have since been discovered. A factor is a quantifiable signal, attribute, or any variable that has historically correlated with future stock returns and is expected to remain correlated in the future.

These risk factors were labeled anomalies, since they contradicted the Efficient Market Hypothesis (EMH), which held that market equilibrium would always price securities according to the CAPM, so that no other factors should have predictive power. The economic theory behind factors can be either rational, where factor risk premiums compensate for low returns during bad times, or behavioral, where agents fail to arbitrage away excess returns.

Well-known anomalies include the value, size, and momentum effects that help predict returns while controlling for the CAPM market factor. The size effect rests on small firms systematically outperforming large firms, as discovered by Banz (1981) and Reinganum (1981). The value effect (Basu 1982) states that firms with low valuation metrics outperform. It suggests that firms with low price multiples, such as the price-to-earnings or the price-to-book ratios, perform better than their more expensive peers (as suggested by the inventors of value investing, Benjamin Graham and David Dodd, and popularized by Warren Buffet). The momentum effect, discovered in the late 1980s by, among others, Clifford Asness, the founding partner of AQR, states that stocks with good momentum, in terms of recent 6-12 month returns, have higher returns going forward than poor momentum stocks with similar market risk.

Researchers also found that value and momentum factors explain returns for stocks outside the US, as well as for other asset classes, such as bonds, currencies, and commodities, along with additional risk factors. In fixed income, the value strategy is called riding the yield curve and is a form of the duration premium. In commodities, it is called the roll return, with a positive return for an upward-sloping futures curve and a negative return otherwise. In foreign exchange, the value strategy is called carry. There is also an illiquidity premium: securities that are more illiquid trade at low prices and have high average excess returns relative to their more liquid counterparts. Bonds with higher default risk tend to have higher returns on average, reflecting a credit risk premium. Since investors are willing to pay for insurance against high volatility when returns tend to crash, sellers of volatility protection in options markets tend to earn high returns.

Multifactor models define risks in broader and more diverse terms than just the market portfolio. In 1976, Stephen Ross proposed arbitrage pricing theory, which asserted that investors are compensated for multiple systematic sources of risk that cannot be diversified away. The three most important macro factors are growth, inflation, and volatility, in addition to productivity, demographic, and political risk. In 1992, Eugene Fama and Kenneth French combined the equity risk factors of size and value with a market factor into a single model that better explained cross-sectional stock returns. They later added a model that also included bond risk factors to simultaneously explain returns for both asset classes.
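A hedged sketch of what a Fama-French-style estimation looks like in code: the factor series below are simulated stand-ins (practitioners would use the published factor data), and the stock's loadings of 1.1, 0.4, and -0.2 are assumptions of the simulation:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 250  # trading days

# Simulated factor returns: market excess return, size (SMB), value (HML).
mkt = rng.normal(0.0004, 0.010, n)
smb = rng.normal(0.0001, 0.005, n)
hml = rng.normal(0.0001, 0.005, n)

# A stock with assumed factor loadings of 1.1, 0.4, and -0.2, plus noise.
stock = 1.1 * mkt + 0.4 * smb - 0.2 * hml + rng.normal(0, 0.008, n)

# Ordinary least squares: regress stock excess returns on the three factors.
X = np.column_stack([np.ones(n), mkt, smb, hml])
alpha, b_mkt, b_smb, b_hml = np.linalg.lstsq(X, stock, rcond=None)[0]
print(f"alpha={alpha:.5f}, mkt={b_mkt:.2f}, smb={b_smb:.2f}, hml={b_hml:.2f}")
```

The estimated loadings recover the simulated exposures, and a persistent non-zero alpha would be exactly the kind of unexplained return that motivates the search for further factors.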
A particularly attractive aspect of risk factors is their low or negative correlation. Value and momentum risk factors, for instance, are negatively correlated, reducing risk and increasing risk-adjusted returns above and beyond the benefit implied by the risk factors. Furthermore, using leverage and long-short strategies, factor strategies can be combined into market-neutral approaches. The combination of long positions in securities exposed to positive risks with underweight or short positions in securities exposed to negative risks allows for the collection of dynamic risk premiums.

As a result, the factors that explained returns above and beyond the CAPM were incorporated into investment styles that tilt portfolios in favor of one or more factors, and assets began to migrate into factor-based portfolios. The 2008 financial crisis underlined how asset-class labels could be highly misleading and create a false sense of diversification when investors do not look at the underlying factor risks, as asset classes came crashing down together.

Over the past several decades, quantitative factor investing has evolved from a simple approach based on two or three styles to multifactor smart or exotic beta products. Smart beta funds crossed $1 trillion AUM in 2017, testifying to the popularity of the hybrid investment strategy that combines active and passive management. Smart beta funds take a passive strategy but modify it according to one or more factors, such as favoring cheaper stocks or screening them according to dividend payouts, to generate better returns. This growth has coincided with increasing criticism of the high fees charged by traditional active managers, as well as heightened scrutiny of their performance. The ongoing discovery and successful forecasting of risk factors that, either individually or in combination with other risk factors, significantly impact future asset returns across asset classes is a key driver of the surge of ML in the investment industry.

Algorithmic pioneers outperform humans at scale

The track record and the growth of Assets Under Management (AUM) of firms that spearheaded algorithmic trading have played a key role in generating investor interest and subsequent industry efforts to replicate their success. Systematic funds differ from HFT in that trades may be held significantly longer while seeking to exploit arbitrage opportunities, as opposed to advantages from sheer speed. Systematic strategies that mostly or exclusively rely on algorithmic decision-making were most famously introduced by the mathematician James Simons, who founded Renaissance Technologies in 1982 and built it into the premier quant firm. Its secretive Medallion Fund, which is closed to outsiders, has earned an estimated annualized return of 35% since 1982.

DE Shaw, Citadel, and Two Sigma, three of the most prominent quantitative hedge funds that use systematic strategies based on algorithms, rose to the all-time top 20 performers for the first time in 2017, in terms of total dollars earned for investors, after fees, and since inception. DE Shaw, founded in 1988 and with $47 billion AUM in 2018, joined the list at number 3. Citadel, started in 1990 by Kenneth Griffin, manages $29 billion and ranks 5, and Two Sigma, started only in 2001 by DE Shaw alumni John Overdeck and David Siegel, has grown from $8 billion AUM in 2011 to $52 billion in 2018.
Bridgewater, started in 1975 and with over $150 billion AUM, continues to lead due to its Pure Alpha Fund, which also incorporates systematic strategies. Similarly, on the Institutional Investors 2017 Hedge Fund 100 list, five of the top six firms rely largely or completely on computers and trading algorithms to make investment decisions, and all of them have been growing their assets in an otherwise challenging environment. Several quantitatively-focused firms climbed several ranks and, in some cases, grew their assets by double-digit percentages. Number 2-ranked Applied Quantitative Research (AQR) grew its hedge fund assets 48% in 2017, to $69.7 billion, and managed $187.6 billion firm-wide.

Among all hedge funds, ranked by compounded performance over the last three years, the quant-based funds run by Renaissance Technologies achieved ranks 6 and 24, Two Sigma rank 11, DE Shaw ranks 18 and 32, and Citadel ranks 30 and 37. Beyond the top performers, algorithmic strategies have worked well in the last several years: in the past five years, quant-focused hedge funds gained about 5.1% per year, while the average hedge fund rose 4.3% per year over the same period.

ML-driven funds attract $1 trillion AUM

The familiar three revolutions in computing power, data, and ML methods have made the adoption of systematic, data-driven strategies not only more compelling and cost-effective but a key source of competitive advantage. As a result, algorithmic approaches are not only finding wider application in the hedge-fund industry that pioneered these strategies but across a broader range of asset managers and even passively-managed vehicles such as ETFs. In particular, predictive analytics using machine learning and algorithmic automation play an increasingly prominent role in all steps of the investment process across asset classes, from idea generation and research to strategy formulation and portfolio construction, trade execution, and risk management.

Estimates of industry size vary because there is no objective definition of a quantitative or algorithmic fund, and many traditional hedge funds or even mutual funds and ETFs are introducing computer-driven strategies or integrating them into a discretionary environment in a human-plus-machine approach. Morgan Stanley estimated in 2017 that algorithmic strategies have grown at 15% per year over the past six years and control about $1.5 trillion between hedge funds, mutual funds, and smart beta ETFs. Other reports suggest the quantitative hedge fund industry was about to exceed $1 trillion AUM, nearly doubling its size since 2010 amid outflows from traditional hedge funds. In contrast, total hedge fund industry capital hit $3.21 trillion, according to the latest global Hedge Fund Research report.

The market research firm Preqin estimates that almost 1,500 hedge funds make a majority of their trades with help from computer models. Quantitative hedge funds are now responsible for 27% of all US stock trades by investors, up from 14% in 2013. But many employ data scientists – or quants – who, in turn, use machines to build large statistical models (WSJ). In recent years, however, funds have moved toward true ML, where artificially-intelligent systems can analyze large amounts of data at speed and improve themselves through such analyses. Recent examples include Rebellion Research, Sentient, and Aidyia, which rely on evolutionary algorithms and deep learning to devise fully-automatic Artificial Intelligence (AI)-driven investment platforms.
From the core hedge fund industry, the adoption of algorithmic strategies has spread to mutual funds and even passively-managed exchange-traded funds in the form of smart beta funds, and to discretionary funds in the form of quantamental approaches.

The emergence of quantamental funds

Two distinct approaches have evolved in active investment management: systematic (or quant) and discretionary investing. Systematic approaches rely on algorithms for a repeatable and data-driven approach to identify investment opportunities across many securities; in contrast, a discretionary approach involves an in-depth analysis of a smaller number of securities. These two approaches are becoming more similar as fundamental managers take more data-science-driven approaches. Even fundamental traders now arm themselves with quantitative techniques, accounting for $55 billion of systematic assets, according to Barclays. Agnostic to specific companies, quantitative funds trade patterns and dynamics across a wide swath of securities. Quants now account for about 17% of total hedge fund assets, data compiled by Barclays shows. Point72 Asset Management, with $12 billion in assets, has been shifting about half of its portfolio managers to a man-plus-machine approach. Point72 is also investing tens of millions of dollars into a group that analyzes large amounts of alternative data and passes the results on to traders.

Investments in strategic capabilities

Rising investments in related capabilities – technology, data, and, most importantly, skilled humans – highlight how significant algorithmic trading using ML has become for competitive advantage, especially in light of the rising popularity of passive, indexed investment vehicles, such as ETFs, since the 2008 financial crisis. Morgan Stanley noted that only 23% of its quant clients say they are not considering using, or not already using, ML, down from 44% in 2016. Guggenheim Partners LLC built what it calls a supercomputing cluster for $1 million at the Lawrence Berkeley National Laboratory in California to help crunch numbers for Guggenheim's quant investment funds. Electricity for the computers costs another $1 million a year.

AQR is a quantitative investment group that relies on academic research to identify and systematically trade factors that have, over time, proven to beat the broader market. The firm used to eschew the purely computer-powered strategies of quant peers such as Renaissance Technologies or DE Shaw. More recently, however, AQR has begun to seek profitable patterns in markets using ML to parse through novel datasets, such as satellite pictures of shadows cast by oil wells and tankers. The leading firm BlackRock, with over $5 trillion AUM, also bets on algorithms to beat discretionary fund managers by heavily investing in SAE, a systematic trading firm it acquired during the financial crisis. Franklin Templeton bought Random Forest Capital, a debt-focused, data-led investment company, for an undisclosed amount, hoping that its technology can support the wider asset manager.

We looked at how ML plays a role in different industry trends around algorithmic trading. If you want to learn more about the design and execution of algorithmic trading strategies, and use cases of ML in algorithmic trading, be sure to check out the book 'Hands on Machine Learning for algorithmic trading'.

Using machine learning for phishing domain detection [Tutorial]
Anatomy of an automated machine learning algorithm (AutoML)
10 machine learning algorithms every engineer needs to know


How Blockchain can level up IoT Security

Savia Lobo
29 Aug 2017
4 min read
The IoT comprises hordes of sensors, vehicles, and other devices with embedded electronics that can communicate over the Internet. These IoT-enabled devices generate tons of data every second. And with IoT edge analytics, these devices are getting much smarter - they can start or stop a request without any human intervention.

"25 billion "things" will be connected to the internet by 2020." - Gartner Research

With so much data being generated by these devices, the question on everyone's mind is: will all this data be reliable and secure?

When Brains meet Brawn: Blockchain for IoT

Blockchain, an open distributed ledger, is highly secure and difficult for anyone connected over the network to manipulate or corrupt. It was initially designed for cryptocurrency-based financial transactions; Bitcoin is a famous example that has Blockchain as its underlying technology. Blockchain has come a long way since then and can now be used to store anything of value. So why not save data in it? That data will be just as secure as every other digital asset in a Blockchain.

Blockchain, decentralized and secure, is an ideal structure to form the underlying foundation of IoT data solutions. Current IoT devices and their data rely on a client-server architecture: all devices are identified, authenticated, and connected via cloud servers, which are capable of storing ample amounts of data. But this requires huge, and correspondingly expensive, infrastructure. Blockchain not only provides an economical alternative; because it works in a decentralized fashion, it also eliminates single points of failure, creating a more secure and resilient network for IoT devices. This makes IoT more secure and reliable, and customers can relax knowing their information is in safe hands.

Today, Blockchain's capabilities extend beyond processing financial transactions - it can now track billions of connected devices, process transactions, and even coordinate between devices - a good fit for the IoT industry.

Why Blockchain is perfect for IoT

Inherently weak security features make IoT devices suspect. Blockchain, on the other hand, with its tamper-proof ledger, is hard to manipulate for malicious activities - making it the right infrastructure for IoT solutions.

Enhancing security through decentralization

Blockchain makes it hard for intruders to intervene, as it spans a network of secure blocks; a change at a single location therefore does not affect the other blocks. The data, or any value, remains encrypted and is only visible to the person who encrypted it using a private key. The cryptographic algorithms used in Blockchain technology ensure that IoT data remains private, either for an individual organization or for the organizations connected in a network.

Simplicity through autonomous, third-party-free transactions

Blockchain technology is already a star in the finance sector, thanks to the adoption of smart contracts, Bitcoin, and other cryptocurrencies. Apart from providing a secure medium for financial transactions, it eliminates the need for third-party brokers, such as banks, to guarantee peer-to-peer payment services. With Blockchain, IoT data can be treated in a similar manner: smart contracts can be made between devices to exchange messages and data. This type of autonomy is possible because each node in the blockchain network can verify the validity of a transaction without relying on a centralized authority.
Blockchain-backed IoT solutions will thus enable trustworthy message sharing. Business partners can easily access and exchange confidential information within the IoT without a centralized management or regulatory authority. This means quicker transactions, lower costs, and fewer opportunities for malicious intent such as data espionage.

Blockchain's immutability for predicting IoT security vulnerabilities

Blockchains maintain a history of all transactions made by smart devices connected within a particular network. This is possible because once you enter data into a Blockchain, it lives there forever in its immutable ledger (the sketch below shows this tamper-evidence mechanism in miniature). The possibilities for IoT solutions that leverage Blockchain's immutability are limitless. Some obvious use cases are more robust credit scores and preventive healthcare solutions that use data accumulated through wearables.

For all the above reasons, we expect to see significant Blockchain adoption by IoT-based businesses in the near future.
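To make the immutability argument concrete, here is a minimal sketch, in Python using only the standard library, of a hash-chained ledger of IoT sensor readings. The block layout and the sensor payloads are invented for illustration; a production blockchain adds consensus, networking, and much more:

```python
import hashlib
import json
import time

def make_block(data: dict, previous_hash: str) -> dict:
    """Create a block whose hash commits to its data and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any tampered block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "data", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"sensor": "thermostat-1", "reading": 21.5}, "0" * 64)
chain = [genesis,
         make_block({"sensor": "thermostat-1", "reading": 22.1}, genesis["hash"])]

print(verify_chain(chain))          # True
chain[0]["data"]["reading"] = 99.9  # tamper with recorded history...
print(verify_chain(chain))          # False: the tampering is detected
```

Because every block's hash commits to the previous block, rewriting even one old sensor reading invalidates everything after it, which is the property the immutability claims above rest on.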