
Tech Guides


Serverless computing wars: AWS Lambdas vs Azure Functions

Vijin Boricha
03 May 2018
5 min read
In recent times, local servers and on-premises computers have come to be seen as old school. Users and organisations have shifted their focus to the cloud to store, manage, and process data. Cloud computing has evolved to the point where DevOps teams can focus on improving code and processes rather than on provisioning, scaling, and maintaining servers. This means we have now entered the serverless era, and the big players of this era are AWS Lambda and Azure Functions. If you are a developer, you no longer need to worry about low-level infrastructure decisions. Which brings us to the bigger question.

What is Serverless Computing / Function-as-a-Service?

Function-as-a-Service (FaaS) is a form of serverless computing in which applications depend on third-party services to manage the server-side logic. This means application developers can concentrate on building their applications rather than thinking about servers. If you want to build any type of application or backend service, you can simply go ahead with it, as everything required to run and scale your application is already handled for you. The following are popular platforms that support FaaS:

AWS Lambda

Azure Functions

Cloud Functions

Iron.io

Webtask.io

Benefits of Serverless Computing

Serverless applications and architectures are gaining momentum and are increasingly being used by companies of all sizes. Serverless technology rapidly reduces production time and minimizes costs, while you still have the freedom to customize your code without hindering functionality. For good reason: serverless-based software takes care of many of the problems developers face when running systems and servers, such as fault tolerance, centralized logging, horizontal scalability, and deployments, to name a few. Additionally, the serverless pay-per-invocation model can result in drastic cost savings. Since AWS Lambda and Azure Functions are the most popular and widely used serverless computing platforms, we will discuss these two services further.

AWS Lambda

AWS is recognized as one of the largest market leaders in cloud computing. One of the more recent services under the AWS umbrella that has gained a lot of traction is AWS Lambda. It is the part of Amazon Web Services that lets you run your code without provisioning or managing servers. AWS Lambda is a compute service that enables you to deploy applications and back-end services with zero upfront cost and no system administration. Although seemingly simple and easy to use, Lambda is a highly effective and scalable compute service that provides developers with a powerful platform to design and develop serverless, event-driven systems and applications (a minimal handler sketch follows the pros and cons below).

Pros:

Supports automatic scaling

Supports an unlimited number of functions

First 1 million requests per month are free, then $0.20 per 1 million invocations, plus $0.00001667 per GB-second of compute time

Cons:

Limited concurrent executions (1,000 per account)

Supports fewer languages than Azure (JavaScript, Java, C#, and Python)
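To make the "no server management" idea concrete, here is a minimal sketch of what a Lambda function can look like in Python. The handler signature is Lambda's standard Python convention, but the event fields used here are illustrative assumptions, not a fixed contract: you upload just this function, and the platform runs and scales it per invocation.

```python
# A minimal AWS Lambda handler sketch in Python. The "name" field in the
# event payload is an illustrative assumption for this example.
import json

def lambda_handler(event, context):
    # 'event' carries the invocation payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }
```

There is no server to patch or scale in that picture; billing and scaling happen per invocation, which is exactly the pay-per-invocation model described above.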
Azure Functions

Microsoft provides a solution you can use to easily run small segments of code in the cloud: Azure Functions. It provides solutions for processing data, integrating systems, and building simple APIs and microservices. Azure Functions helps you run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify its input and output.

Pros:

Supports unlimited concurrent executions

Supports C#, JavaScript, F#, Python, Batch, PHP, and PowerShell

Supports an unlimited number of functions

First 1 million requests per month are free, then $0.20 per 1 million invocations, plus $0.000016 per GB-second of compute time

Cons:

Manual scaling (on the App Service plan)

Conclusion

Compared with the traditional client-server approach, serverless architecture saves a lot of effort and proves cost-effective for many organisations, no matter their size. The most important aspect of choosing the right platform is understanding which platform benefits your organisation the most. AWS Lambda has been around for a while, with extensive support for Linux-based platforms, but Azure Functions is not far behind in supporting the Windows-based suite, despite entering the serverless market more recently. If you adopt AWS, you will be able to make the most of its openness to open source integration, its pay-as-you-go model, and its high-performance computing environment. Azure, on the other hand, is easier to use if you are already on the Windows platform. It also supports a precise pricing model, charging by the minute, and it has extended support for macOS and Linux. So if you are looking for a clear winner here, you shouldn't be surprised that AWS and Azure are similar in many ways; picking one as better or worse than the other would end in a tie. This battle will always be heated, and experts will keep placing their bets on who wins the race. In the end, the entire discussion drills down to what your business needs. After all, the mission is always to grow your business at marginal cost.

The Lambda programming model

How to Run Code in the Cloud with AWS Lambda

Download Microsoft Azure serverless computing e-book for free


Align your product experience strategy with business needs

Packt Editorial Staff
02 May 2018
10 min read
Build a product experience strategy around the needs of stakeholders

Product experience strategists need to conduct thorough research to ensure that the products being developed and launched align with the goals and needs of the business. Alignment is a bit of a buzzword that you're likely to see in HBR and other publications, but don't dismiss it: it isn't a trivial thing, and it certainly isn't an abstract thing. One of the pitfalls of product experience strategy, and product management more generally, is that understanding the needs of the business isn't actually that straightforward. There are lots of moving parts and lots of stakeholders. And while everyone should be on the same page, even subtle differences can make life difficult. This is why product experience strategists do detailed internal research. It helps designers understand the company's vision and objectives for the product, and allows them to understand what's at stake. Based on this, they work with stakeholders to align product objectives and reach a shared understanding of the goals of design. Once organizational alignment is achieved, the strategist uses research insights to develop a product experience strategy. The research is simply a way of validating and supporting that strategy. The research activities include:

Stakeholder and subject-matter expert (SME) interviews

Document reviews

Competitive research

Expert product reviews

Talk to key stakeholders

Stakeholders are typically senior executives who have a direct responsibility for, or influence on, the product. They include product managers, who manage the planning and day-to-day activities associated with their product and have direct decision-making authority over its development. In projects that are important to the company, it is not uncommon for the executive leadership, from the chief executive down, to be among the stakeholders, given their influence and authority to direct the overall product strategy. The purpose of stakeholder interviews is to gather and understand the perspective of each individual stakeholder, and to align the perspectives of all stakeholders around a unified vision of the scope, purpose, outcomes, opportunities, and obstacles involved in undertaking a new product development project. Gaps among stakeholders on fundamental project objectives and priorities will lead to serious trouble down the road. It is best to surface such deviations as early as possible and help stakeholders reach a productive alignment.

The purpose of subject-matter expert (SME) interviews is to balance the strategic, high-level thinking provided by stakeholders with the detailed insights of experienced employees who are recognized for their deep domain expertise. Sales, customer service, and technical support employees have a wealth of operational knowledge of products and customers, which makes them invaluable when analyzing current processes and challenges. Prior to the interviews, the experience strategist prepares an interview guide. The purpose of the guide is to ensure the following:

All stakeholders can respond to the same questions

All research topics are covered even if interviews are conducted by different interviewers

Interviews make the best use of stakeholders' valuable time

Some of the questions in the guide are general and directed at all participants; others are more specific and focus on each stakeholder's particular areas of responsibility. Similar guides are developed for SME interviews.
In-person interviews are best, because they take place at the onset of the project and provide a good opportunity to build rapport and trust between the designer and the interviewee. After a formal introduction regarding the purpose of the interview and general questions regarding the person's role and professional experience, the person is asked for their personal assessment of, and opinions on, various topics. Here is a sample of different topics:

Objectives and obstacles: prioritized goals for the project; what success looks like; what kind of obstacles the project is facing, and suggestions to overcome them

Competition: who the top competitors are; strengths and weaknesses relative to the competition

Product features and functionality: which features are missing; differentiating features; features to avoid

The interviews are designed to last no more than an hour and are documented with notes and, if possible, audio recordings. The answers are compiled and analyzed, and the results are presented in a report. The report suggests a unified list of prioritized objectives and highlights gaps and other risks that have been reported. The report is one of the inputs into the development of the overall product experience strategy.

Experts understand product experience better than anyone

Product expert reviews, sometimes referred to as heuristic evaluations, are professional assessments of a current product, performed by design experts for the purpose of identifying usability and user experience issues. The thinking behind the expert review technique is very practical: experience designers have the expertise to assess the experience quality of a product in a systematic way, using a set of accepted heuristics. A heuristic is a rule of thumb for assessing products. For example, the error prevention heuristic deals with how well the evaluated product prevents the user from making errors. The word heuristic often raises questions about its meaning, and the method has been criticized for its inherent weaknesses, namely:

Subjectivity of the evaluator

Expertise and domain knowledge of the evaluator

Cultural and demographic background of the evaluator

These weaknesses increase the probability that the outcome of an expert evaluation will reflect the biases and preferences of the evaluator, resulting in potentially different conclusions about the same product. Still, expert evaluations, especially when conducted by two evaluators whose findings are aligned, have proven to be an effective tool for experience practitioners who need a fast and cost-effective assessment of a product, particularly for digital interfaces. Jakob Nielsen developed the method in the early 1990s. Although there are other sets of heuristics, Nielsen's are probably the best known and most commonly used. His initial set of heuristics was first published in his book, Usability Engineering, and is reproduced here verbatim, as there is no need for modification:

Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

Match between system and the real world: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

Flexibility and efficiency of use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

Every product experience strategy needs solid competitor research

Most companies operate in a competitive marketplace, and having a deep understanding of the competition is critical to success and survival. Here are a few of the questions that competitive research helps address:

How does a product or service compare to the competition?

What are the strengths and weaknesses of competing offerings?

What alternatives and choices does the target audience have?

Experience strategists use several methods to collect and analyze competitive information. From interviews with stakeholders and SMEs, they know who the direct competition is. In some product categories, such as automobiles and consumer products, companies can reverse-engineer competitive products and try to match or surpass their capabilities. Designers can also develop extensive experience analyses of such competitive products, because they can have first-hand experience with them. With some hi-tech products, however, some capabilities are cocooned within proprietary software or secret production processes. In these cases, designers can glean the capabilities from indirect evidence of use. The internet is a major source of competitive information, from direct access to a product online, to help manuals, user guides, bulletin boards, reviews, and analysis in trade publications. Occasionally, unauthorized photos or documents are leaked to the public domain, and they provide clues, sometimes real and sometimes bogus, about a secret upcoming product.
Social media, too, is an important source of competitive data, in the form of customer reviews on Yelp, Amazon, or Facebook. With the wealth of this information, a practical strategy for surpassing the competition and delivering a better experience can be developed. For example, Uber has been a favorite car-hailing service for a while. The service has also generated public controversy and has had dissatisfied riders and drivers who are unhappy with its policies, including its resistance to tipping. By design, a tipping function is not available in the app, which is the primary transaction method between the rider, the company, and the driver. Research indicates, however, that tipping for service is a common social norm and that most people tip because it makes them feel better. Not being able to tip places riders in an uncomfortable social setting and stirs negative emotions against Uber. The evidence of dissatisfaction can easily be collected from numerous web sources and from interviewing actual riders and drivers. For Uber competitors such as Lyft and Curb, making tipping an integrated part of their apps provides an immediate competitive edge: it improves the experience of riders, who have the option to reward the driver for good service, and of drivers, who benefit from increased income. This, along with additional improvements over the inferior Uber experience, becomes part of an overall experience strategy focused on improving the likelihood that riders and drivers will dump Uber in their favor.

Note: You just read an extract from the book Exploring Experience Design, written by Ezra Schwartz. This book will help you unify customer experience, user experience, and more to shape lasting customer engagement in a world of rapid change.

10 tools that will improve your web development workflow

5 things to consider when developing an eCommerce website

RESTful Java Web Services Design


Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

Savia Lobo
02 May 2018
5 min read
Arduino Uno and Raspberry Pi 3 are the go-to options for IoT projects. They're tiny computers that can make a big impact on how we connect devices to each other, and to the internet. But they can also be a lot of fun: at their best, they do both. For example, Arduino Uno and Raspberry Pi were used to make a custom underwater camera solution for filming the Netflix documentary Chasing Coral. They were also behind the autonomous racing robot. But how are the two microcomputers different? If you're confused about which one you should start using, here's a look at the key features of both the Arduino Uno and the Raspberry Pi 3. This should give you a clearer view of what fits your project well, or maybe just help you decide what to include on your birthday wishlist.

Comparing the Arduino Uno and Raspberry Pi 3

The Raspberry Pi 3 has a Broadcom BCM2837 SoC, with which it can handle multiple tasks at one time. It is a Single Board Computer (SBC), which means it is a fully functional computer with a dedicated processor and memory, capable of running an OS; the Raspberry Pi 3 runs on Linux. It can run multiple programs, as it has its own USB ports, audio outputs, and a graphics driver for HDMI output. One can also install other operating systems on it, such as Android, Windows 10, or Firefox OS.

The Arduino Uno is a microcontroller board based on the ATmega328, an 8-bit microcontroller with 32 KB of flash memory and 2 KB of RAM, which is not as powerful as an SBC. However, microcontrollers are a great choice for quick setups. They are a good pick for controlling small devices such as LEDs, motors, and several different types of sensors, but they cannot run a full operating system. The Arduino Uno runs one program at a time.

Let's look at the features and how one stands out against the other.

Speed

The Raspberry Pi 3 (1.2 GHz) is much faster than the Arduino (16 MHz). This means it can complete day-to-day tasks, such as web surfing and playing videos, with greater ease. From this perspective, the Raspberry Pi is the go-to choice for media-centered applications.

Winner: Raspberry Pi 3

Ease of interfacing

The Arduino Uno offers a simplified approach to project building: interfacing with analog sensors, motors, and other components is easy. By contrast, the Raspberry Pi 3 takes a more complicated route when you want to set up projects. For example, to take sensor readings you'll need to install libraries and connect a monitor, keyboard, and mouse (a short GPIO sketch follows this comparison).

Winner: Arduino Uno

Bluetooth/Internet connectivity

The Raspberry Pi 3 connects to Bluetooth devices and the internet directly, using Ethernet or Wi-Fi. The Arduino Uno can do that only with the help of a Shield that adds internet or Bluetooth connectivity. HATs (Hardware Attached on Top) and Shields can be used on the respective devices to give them additional functionality. For example, HATs are used on the Raspberry Pi 3 to control an RGB matrix, add a touchscreen, or even create an arcade system. Shields that can be used on the Arduino Uno include a Relay Shield, a Touchscreen Shield, and a Bluetooth Shield. There are hundreds of Shields and HATs that provide the functionality you regularly need.

Winner: Raspberry Pi 3

Supporting ports

The Raspberry Pi 3 has an HDMI port, an audio port, 4 USB ports, a camera port, and an LCD port, which is ideal for media applications. The Arduino Uno does not have any of these ports on the board, although some of them can be added with the help of Shields. The Arduino Uno has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button.

Winner: Raspberry Pi 3
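To show what "installing libraries" on the Pi looks like in practice, here is a minimal sketch that blinks an LED from Python. It assumes the commonly used RPi.GPIO package is installed and that an LED is wired to BCM pin 18; both the package choice and the pin number are illustrative assumptions, not requirements of either board.

```python
# Minimal Raspberry Pi GPIO sketch: blink an LED five times.
# Assumes the RPi.GPIO package and an LED wired to BCM pin 18.
import time

import RPi.GPIO as GPIO

LED_PIN = 18  # BCM numbering; use whichever pin your wiring targets

GPIO.setmode(GPIO.BCM)         # select Broadcom pin numbering
GPIO.setup(LED_PIN, GPIO.OUT)  # configure the pin as an output

try:
    for _ in range(5):
        GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pins on exit
```

On an Arduino Uno, the equivalent behaviour is a few lines of a sketch uploaded over USB with no OS or library installation, which is exactly the "ease of interfacing" trade-off described above.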
Other features

Set-up time

The Raspberry Pi 3 takes longer to set up. You'll probably also need additional components, such as an HDMI cable, a monitor, and a keyboard and mouse. The Arduino Uno you simply plug in; the code then runs immediately.

Winner: Arduino Uno

Affordable price

The Arduino Uno is much cheaper: around $20, compared to around $35 for the Raspberry Pi 3. It's important to note that this excludes the cost of cables, keyboards, mice, and other additional hardware. As mentioned above, you don't need those extras with the Arduino Uno.

Winner: Arduino Uno

Both the Arduino Uno and the Raspberry Pi 3 are great in their individual offerings. The Arduino Uno is an ideal board if you want to get started with electronics and begin building fun, engaging, hands-on projects. It's great for learning the basics of how sensors and actuators work, and an essential tool for your rapid prototyping needs. The Raspberry Pi 3, on the other hand, is great for projects that need an online connection and have multiple operations running at the same time. Pick as per your need!

You can also check out some of our exciting books on the Arduino Uno and the Raspberry Pi:

Raspberry Pi 3 Home Automation Projects: Bringing your home to life using Raspberry Pi 3, Arduino, and ESP8266

Build Supercomputers with Raspberry Pi 3

Internet of Things with Arduino Cookbook

How to build a sensor application to measure Ambient Light

5 reasons to choose AWS IoT Core for your next IoT project

Build your first Raspberry Pi project


What is Digital Forensics?

Savia Lobo
02 May 2018
5 min read
Who here hasn't watched the American TV show Mr. Robot? For the uninitiated, Mr. Robot is a digital crime thriller featuring the protagonist Elliot, a brilliant cyber security engineer and hacktivist who identifies potential suspects and evidence in crimes that are hard to solve. He does this by hacking into people's digital devices, such as smartphones, computers, machines, printers, and so on. The science of identifying, preserving, and analyzing evidence on digital media or storage media devices in order to trace a crime is digital forensics. A real-world example of digital forensics helping to solve a crime is the case of the floppy disk that helped investigators solve the BTK serial killer case in 2005. The killer had eluded police capture since 1974 and had claimed the lives of at least 10 victims before he was caught.

Types of digital forensics

The digital world is vast. There are countless ways one can perform illegal or corrupt activities and go undetected, and digital forensics lends a helping hand in detecting such activities. However, because there are many kinds of digital media, the forensics carried out for each is also different. The following are some types of forensics conducted over different digital pathways.

Computer forensics refers to the branch of forensics that obtains evidence from computer systems, such as computer hard drives, mobile phones, personal digital assistants (PDAs), compact disks (CDs), and so on. Digital investigators can also trace a suspect's email or text communication logs, internet browsing history, system or file transfers, hidden or deleted files, documents, spreadsheets, and so on.

Mobile device forensics recovers or gathers evidence from call logs, text messages, and other data stored on mobile devices. Tracing a person's location via the inbuilt GPS system, cell site logs, or in-app communication from apps such as WhatsApp and Skype is also possible.

Network forensics monitors and analyzes computer network traffic: LAN/WAN and internet traffic. The aim of network forensics is to gather information, collect evidence, detect and determine the extent of intrusions, and establish the amount of data that has been compromised.

Database forensics is the forensic study of databases and their metadata. The information from database contents, log files, and in-RAM data can be used to create timelines or recover pertinent information during a forensic investigation.

Challenges faced in digital forensics

Data storage and extraction

Storing data has always been tricky and expensive, and an explosion in the volume of data generated has only aggravated the situation. Data now comes from many pathways, such as social media, the web, IoT, and more. The real-time analysis of data from IoT devices and other networks also contributes to the data heap. Because of this, investigators find it difficult to store and process data in order to extract clues, detect incidents, or track the necessary traffic.

Data gathering over scattered mediums

Investigators face a lot of difficulty because evidence might be scattered over social networks, cloud resources, and personal physical storage. More tools, expertise, and time are therefore required to fully and accurately reconstruct the evidence. Partially automating these tasks may lead to a deterioration in the quality of the investigation.

Investigations that preserve privacy

At times, investigators collect information to reconstruct and locate an attack. This can violate user privacy. Also, when information has to be collected from the cloud, there are other hurdles, such as accessing the evidence in logs and the presence of volatile data.
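Before the remaining challenges, here is a small, concrete piece of the "preserving" step mentioned at the start: computing a cryptographic digest of an evidence file at acquisition time, so that any later modification is detectable. This is a minimal sketch using Python's standard library; the file path is a placeholder, not a reference to any real case data.

```python
# Minimal evidence-integrity sketch: compute a SHA-256 digest of a file
# so it can be re-verified later. The path below is a placeholder.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Hash the file in chunks so large disk images need not fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of_file("evidence/disk_image.dd"))
```

Recording such a digest when evidence is collected, and re-checking it before analysis, is one standard way of demonstrating that stored evidence has not been altered.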
Carrying out only legitimate investigations

Modern infrastructures are complex and virtualized, often shifting their complexity to the border (as in fog computing) or delegating some duties to third parties (as in platform-as-a-service frameworks). An important challenge for modern digital forensics lies in executing investigations legally, for instance without violating laws in borderless scenarios.

Anti-forensics techniques on the rise

Defensive measures against digital forensics comprise encryption, obfuscation, and cloaking techniques, including information hiding. New forensics tools should therefore be engineered to support heterogeneous investigations, preserve privacy, and offer scalability.

The ubiquity of digital media and electronics is a leading cause of the rise of digital forensics, and with digital media growing at this pace, digital forensics is here to stay. Many investigators, including CYFOR and Pyramid CyberSecurity, strive to offer solutions to complex cases in the digital world. One can also seek employment or specialize in this field by improving the skills needed for a career in digital forensics. If you are interested in digital forensics, check out our product portfolio on cyber security, or subscribe today to a learning path for forensic analysts on MAPT, our digital library.

How cybersecurity can help us secure cyberspace

Top 5 penetration testing tools for ethical hackers

What Blockchain Means for Security


5 reasons you should learn to code

Richard Gall
02 May 2018
4 min read
If you're on the Packt Hub, it's likely that you already know how to code. But perhaps you don't; and if you don't, it's our job to let you know why you should learn to code. And even if you do know how, evangelizing and explaining to others why they should learn is incredibly important. People often think of writing code as something very specialized and technically demanding. There's sometimes even a perception of people who write code as a certain kind of person. But this is, luckily, demonstrably untrue. Writing code is something that can be done by anyone. While there are some incredibly specialized roles that require incredibly detailed levels of programming knowledge, in actual fact a huge number of roles today use software in creative and analytical ways. That's why learning how to code is so essential. Let's take a look at 5 reasons why you should learn how to write code.

Learn to code to better understand how software works

Code is everywhere. Learning how to write it is a way of understanding how so much of the world works. Perhaps you're a writer. Or maybe a designer. Maybe you work in marketing. Whatever you're doing, it's likely that you're working in a digital environment. Understanding how these environments are built and how they work puts you at an advantage, in that you have an extra dimension of knowledge of how things work 'under the hood'.

Learning to code will help you better manipulate and use data

The world runs on data, and code can help you better understand and manage it. Excel is a sad part of business life; you don't have to look hard to find someone complaining about spreadsheets. But if you've ever been impressed with someone's dexterity in Excel, that's really just a type of code. In the world of data today there are far more sophisticated systems for processing and managing data. If you're working with large datasets in any capacity, learning how to write SQL scripts, for example, is a great place to begin your coding adventure (there's a tiny first-taste example at the end of this post).

Learn to code simply to have fun

Coding is fun and creative! Before talk of spreadsheets and databases turns you off, it's important to note that writing code and learning to program is fun. If you're just starting out, seeing the effect of your code as you build your first web page is a really neat experience. A lot of people think coding is purely technical, but the technical part is just a small aspect of it. Really, it's about solving problems and being creative. Even if you're not sure how learning to code might help you professionally right now, having a creative outlet might just inspire your future.

Coding isn't as hard as you think

Writing code isn't really that hard. It might take some getting used to, but once you're over those first steps you'll feel like an expert. True, learning how to write code always involves some ups and downs, and if you're getting deeper into more complex programming concepts and tasks, it will be challenging. But that doesn't mean it's beyond you. It's a valuable skill you can actually develop quickly.

You can learn to code without spending lots of money

Learning how to write code could increase your earning power. Even if it doesn't transform your career, it could still give you the push to get to the next step. And while development often requires expensive training and certifications that your employer might pay for, learning the basics of coding is something you can do without spending very much money at all. There are lots of free resources out there, and even the very best resources won't put you out of pocket.
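And here is that promised first taste of SQL, runnable with nothing but Python's standard library. The table and rows are invented purely for illustration:

```python
# A first taste of SQL using Python's built-in sqlite3 module.
# The "sales" table and its rows are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 75.5), ("north", 30.25)],
)

# The kind of question a spreadsheet answers with clicks, asked in SQL:
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
):
    print(region, total)

conn.close()
```

Ten lines of code, and you've created a database, loaded data, and summarized it: exactly the kind of spreadsheet dexterity described above, expressed as code.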
Slow down to learn how to code faster


IoT Forensics: Security in an always connected world where things talk

Vijin Boricha
01 May 2018
3 min read
Connected physical devices, home automation appliances, and wearable devices are all part of the Internet of Things (IoT). All of these have two major things in common: seamless connectivity and massive data transfer. This also brings with it plenty of opportunities for massive data breaches and related cyber security threats. The purpose of digital forensics is to identify, collect, analyse, and present digital evidence collected from various mediums in a cybercrime incident. The multiplication of IoT devices and the increased number of cyber security incidents have given birth to IoT forensics.

IoT forensics is a branch of digital forensics which deals with IoT-related cybercrimes and includes the investigation of connected devices, sensors, and the data stored on all possible platforms. If you look at the bigger picture, IoT forensics is a lot more complex, multifaceted, and multidisciplinary in approach than traditional forensics. With such versatile IoT devices, there is no single method of IoT forensics that can be applied broadly, so identifying valuable sources of evidence is a major challenge. The entire investigation depends on the nature of the connected or smart device in place. For example, evidence could be collected from fixed home automation sensors, moving automobile sensors, wearable devices, or data stored in the cloud.

Compared to standard digital forensic techniques, IoT forensics presents multiple challenges, depending on the versatility and complexity of the IoT devices involved. The following are some challenges that one may face in an investigation:

Variance of the IoT devices

Proprietary hardware and software

Data present across multiple devices and platforms

Data that can be updated, modified, or lost

Proprietary jurisdictions for data stored in the cloud or in a different geography

As such, IoT forensics requires a multifaceted approach in which evidence can be collected from various sources. We can categorize sources of evidence into three broad groups:

Smart devices and sensors: gadgets present at the crime scene (smartwatches, home automation appliances, weather control devices, and more)

Hardware and software: the communication link between smart devices and the external world (computers, mobiles, IPS, and firewalls)

External resources: areas outside the network under investigation (the cloud, social networks, ISPs, and mobile network providers)

Once the evidence is successfully collected from an IoT device, no matter the file system, operating system, or platform it is based on, it should be logged and monitored. The main reason for this is that IoT device data is mostly stored in the cloud, due to its scalability and accessibility, and there is a real possibility that data in the cloud can be altered, which would lead to the failure of an investigation. Cloud forensics can undoubtedly play an important role here, but strengthening cyber security best practices should be the primary goal. With ever-evolving IoT devices, there will always be a need for unique practice methods and techniques to break through in an investigation. Cybercrime keeps evolving and getting bolder by the day. Forensics experts will have to develop the skill sets to deal with the variety and complexity of IoT devices to keep up with this evolution. No matter the challenges one faces, there is always a solution to a complex problem. There will always be a need for unique, intelligent, and adaptable techniques to investigate IoT-related crimes, and an even greater need for people displaying these capabilities.
To learn more about IoT security, you can get your hands on a few of our books: IoT Penetration Testing Cookbook and Practical Internet of Things Security.

Why Metadata is so important for IoT

Why the Industrial Internet of Things (IIoT) needs Architects

5 reasons to choose AWS IoT Core for your next IoT project

5 reasons to choose Kotlin over Java

Richa Tripathi
30 Apr 2018
3 min read
Java has been a master of all trades in almost every field of application development, leaving Java developers little reason to wander off in search of other languages. However, things have changed with the steady evolution of Kotlin. Kotlin, no longer just "the other JVM language", has even surpassed Java in prominence. So, what makes this language stand out, and why is it growing in adoption for application development? What are the benefits of Kotlin vs Java, and how can it help developers? In this article, we're going to look at the top 5 reasons why Kotlin takes a superior stand over Java, and why it will work best for your next development project.

Kotlin is more concise

Kotlin is far more concise than Java in many cases, solving the same problems with fewer lines of code. This improves code maintainability and readability, meaning engineers can write, read, and change code more effectively and efficiently. Kotlin-exclusive features such as type inference, smart casts, data classes, and properties help achieve this conciseness.

Kotlin's null-safety is great

NullPointerExceptions are a huge source of frustration for Java developers. Java allows you to assign null to any variable, but if you try to use an object reference that has a null value, brace yourself for a NullPointerException! Kotlin's type system is designed to eliminate NullPointerExceptions from your code: it helps avoid null pointer exceptions by simply refusing to compile code that tries to assign or return null.

Combine the best of functional and procedural programming

Each programming paradigm has its own set of pros and cons, and combining the power of functional and procedural programming leads to better development and output. Kotlin offers many useful constructs for this, including higher-order functions, lambda expressions, operator overloading, lazy evaluation, and much more. By drawing on the strengths and weaknesses of both styles, Kotlin offers an inexpensive and intuitive coding style.

The power of Kotlin's extension functions

Kotlin's extension functions are very useful because they allow developers to add methods to classes without making changes to their source code. You can add methods to classes on a case-by-case basis, which allows users to extend the functionality of existing classes without inheriting functions and properties from other classes.

Interoperability with Java

When debating Kotlin vs Java, there is always a third option: use them both. Despite all the differences, Kotlin and Java are 100% interoperable; you can literally continue work on your old Java projects using Kotlin. You can call Kotlin code from Java, and you can call Java code from Kotlin. So it's possible to have Kotlin and Java classes side by side within the same project, and everything will still compile.

Undoubtedly, Kotlin has made many positive changes to the long-established and widely used Java ecosystem. It helps you write safer code, because with less work it's possible to write more reliable code, making the life of programmers a lot easier. Kotlin is a genuinely good replacement for Java. With time, more and more advanced features will be added to Kotlin's ecosystem, helping its popularity grow towards its apex and making the developer world more promising.

Also read

Why are Android developers switching from Java to Kotlin?

Getting started with Kotlin programming


Tech hype cycles: do they deserve your attention?

Richard Gall
30 Apr 2018
6 min read
Hype cycles are an integral aspect of modern technology. They tell us the story of a specific technology and how it fits into a given context. This context is usually professional, but it is sometimes social and cultural. They are also able to show us how the use of something has changed: they illustrate when something was adopted, when it grew, and perhaps when it began to decline. True, this might seem superfluous or superficial, but that explains why we often fail to pay much attention to them. Instead of focusing on the cycle, and the wider context of how and why something is being used, we get distracted by the details of whatever is being hyped.

"Hype cycles allow us to see past hype."

But hype cycles, or hype curves, can help us make better sense of the technology at our disposal. They allow you to see past the hype. That means rather than following the trends or buzzwords that fashion places on a pedestal at any given moment, you're always able to see those trends and buzzwords in context. For example, instead of simply moving from big data to AI, or from cloud to edge, you can see how different technologies and trends fit together. You can begin to observe how things impact one another. Hype cycles allow you to see how software changes trends, and then how trends change industries. It's not always easy to see how the code you're writing fits into the big picture, but hype cycles are a good way of getting a better sense of it.

The history of the tech hype cycle

According to this Wired article from 2012, the term 'hype cycle' has been around since 1995. But the idea of the hype cycle was taken up by the research organization Gartner and became central to the way it presented changes across the tech landscape. The first Gartner hype cycle report was released in 1999. Written by Alexander Drobik, the report predicted the end of the dot com bubble at the beginning of the new millennium. However, it's important to note that Drobik hadn't simply predicted the end of a trend; what he had predicted was a period of disillusionment within the 'hype cycle' of, well, the internet (perhaps the ultimate hype cycle). Let's look at what the cycle looks like in detail.

What does the tech hype cycle look like?

Of course, Gartner is the organization that popularized the concept of the hype cycle, but we've created our own example of what it looks like. Let's break down each of the points in the hype cycle in a bit more detail.

Technology trigger

This is the initial breakthrough. It's an exciting time when researchers or engineers discover a new way of doing something. It's more the possibility of disruption than actual disruption. This is often the time when the press, and investors, get excited.

Peak of inflated expectations

This is when everyone gets really excited about the possibility of disruption. This period can be characterized by the sentence "This changes everything." It's the period when everyone talks about transformation but nothing has yet really transformed. True, the new technology might have worked somewhere, but there are lots of projects that never even hit the ground, and a few that have simply failed.

Trough of disillusionment

This is the hangover everyone goes through after getting drunk on inflated expectations. It begins with 'Why X isn't working' pieces in the press, which gradually develop into silence. Technologies or trends seem to disappear into relative insignificance.
Slope of enlightenment

Now the hype has died down, technologies are applied with more serious consideration. Arguably the period of disillusionment is an important period of reflection about what works and what doesn't. This allows businesses and organizations to apply technologies in a more effective way during this 'enlightened' period. In essence, this time is about experimentation and learning. True, there might be some humility here, which is probably a good thing after the earlier inflated expectations.

Plateau of productivity

This is where enlightenment turns into stability. Ways of using a particular technology become established within an industry. It becomes mainstream. Perhaps the benefits to customers are now being felt more readily, which makes it easier to calculate just how valuable something might be.

The hype cycle is a framework that explains how technologies become popular and gradually more mainstream. Of course, there are some technologies that don't quite follow this trajectory: what happens, for example, when things simply never take off? Some technologies get stuck in the trough of disillusionment. If hype cycles can never really give us the full picture, are they actually nothing more than a load of hype themselves?

Are hype cycles just a load of hype?

Although hype cycles are useful in outlining how technologies are adopted and mature, they do, of course, have some limitations. Gartner has a stake in actually selling the concept to you. Its business is based on being an authoritative and invaluable source of tech insight. This means Gartner needs you (or maybe your boss) to think that hype cycles are a recurring pattern of all technology. Similarly, the people who write about technology and sell it have a vested interest in hype cycles. They might not realize it, but the need to 'tell a story' about how or why something is important - why something is 'transformative' - feeds into the concept that Gartner has successfully monetized.

But that doesn't mean tech hype cycles should simply be ignored. They might well be artificial and lacking in quantitative rigour, but we ignore the hype cycle at our peril. This is because the way we - the press, industry leaders, and tech communities - talk about technology plays an important part in how technologies and trends are adopted. We need to take a somewhat ironic approach to hype cycles. That means recognising that while part of it is a bit of a charade, it's a charade that is pretty much inescapable. Trends and technology can't exist outside of these systems. Things only ever become popular when they're visible and when they're being talked about. Hype cycles give us a framework for understanding how technology is talked about.

Read next: What is AIOps and why is it going to be important?


Top 5 penetration testing tools for ethical hackers

Vijin Boricha
27 Apr 2018
5 min read
Software systems are vulnerable. That's down to a range of things, from the constant changes our software systems undergo to the number of opportunities criminals have to take advantage of the gaps and vulnerabilities within those systems. Fortunately, penetration testers - or ethical hackers - are a vital line of defence. Yes, you need to properly understand the nature of cyber security threats before you take steps to tackle them, but penetration testing tools are the next step towards securing your software. There's a famous saying from Stephane Nappo that sums up cyber security today: it takes 20 years to build a reputation and a few minutes of cyber-incident to ruin it. So, make sure you have the right people with the right penetration testing tools to protect not only your software but your reputation too.

The most popular penetration testing tools

Kali Linux

Kali Linux is a Linux distro designed for digital forensics and penetration testing. The successor of BackTrack, it has grown in adoption to become one of the most widely used penetration testing tools. Kali Linux is based on Debian; most of its packages are imported from Debian repositories. Kali includes more than 500 preinstalled penetration testing programs, which make it possible to exploit wired, wireless, and ARM devices. The recent release of Kali Linux 2018.1 supports cloud penetration testing: Kali has collaborated with some of the planet's leading cloud platforms, such as AWS and Azure, helping to change the way we approach cloud security.

Metasploit

Metasploit is another popular penetration testing framework. It was created in 2003 using Perl and was acquired by Rapid7 in 2009, by which time it had been completely rewritten in Ruby. The Metasploit Project is a collaboration between the open source community and Rapid7, and is well known for its anti-forensic and evasion tools. At the heart of Metasploit is the concept of an 'exploit': code capable of bypassing security measures and entering a vulnerable system. Once through the security defences, it runs a 'payload', code that performs operations on the target machine, creating an ideal framework for penetration testing.

Wireshark

Wireshark is one of the world's foremost network protocol analyzers, also known as packet analyzers. It was initially released as Ethereal back in 1998 and, due to trademark issues, was renamed Wireshark in 2006. Users typically use Wireshark for network analysis, troubleshooting, and software and communication protocol development. Wireshark operates on the second to seventh layers of the network protocol stack, and the analysis it produces is presented in a human-readable form. Security Operations Center analysts and network forensics investigators use this protocol analysis technique to examine the bits and bytes flowing through a network. Its easy-to-use functionality, and the fact that it is open source, make Wireshark one of the most popular packet analyzers among security professionals and network administrators.

Burp Suite

Threats to web applications have grown in recent years. Ransomware and cryptojacking have become increasingly common techniques used by cybercriminals to attack users in the browser. Burp, or Burp Suite, is a widely used graphical tool for testing web application security.
Because Burp is a commercial application security tool, there are two versions: a paid version that includes all the functionality, and a free version that comes with a few important features. The free version ships with the basic functionality you need for web application security checks. If you are looking to get into web penetration testing, this should definitely be your first choice, as it works on Linux, Mac, and Windows alike.

Nmap

Nmap, also known as Network Mapper, is a security scanner. As the name suggests, it builds a map of the network to discover hosts and services on a computer network. Nmap works by sending crafted packets to the target host and then analyzing the responses. It was initially released in 1997, and since then it has added a variety of features to detect vulnerabilities and network glitches. A major reason to opt for Nmap is that it is capable of adapting to network conditions, such as network delay and network congestion, during a scan.
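To make the scanning idea concrete, here is a minimal sketch of the most basic technique behind tools like Nmap: a TCP connect scan, written in plain Python. This is not Nmap itself, just the underlying idea; the host and port range are placeholders, and you should only ever scan machines you are authorized to test.

```python
# Minimal TCP connect scan: the simplest technique behind tools like Nmap.
# Host and port range are placeholders; scan only authorized targets.
import socket

def scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds,
            # meaning a service is listening on that port
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan("127.0.0.1", range(20, 1025)))
```

Real scanners like Nmap go far beyond this, with SYN scans, service and OS fingerprinting, and timing adaptation, but every one of those features builds on the same question this sketch asks: which ports answer?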
To keep your environment protected from security threats, you should take the necessary measures. There are any number of penetration testing tools out there with exceptional capabilities. The most important thing is to choose a tool based on your environment's requirements. You can pick and choose from the tools mentioned above: they were shortlisted because they are effective, well supported, easy to understand, and, most importantly, open source.

Learn some of the most important penetration testing tools in cyber security:

Kali Linux - An Ethical Hacker's Cookbook

Metasploit Penetration Testing Cookbook - Third Edition

Network Analysis using Wireshark 2 Cookbook - Second Edition

For a complete list of books and videos on this topic, check out our penetration testing products.


Active Learning: An approach to training machine learning models efficiently

Savia Lobo
27 Apr 2018
4 min read
Training a machine learning model to give accurate results requires crunching huge amounts of labelled data. Since data is naturally unlabelled, 'experts' are needed who can scan through the data and tag it with correct labels. Topic-specific data labelling, for example classifying diseases based on their type, would require a doctor or someone with a medical background. Getting such topic-specific experts to label data can be difficult and quite expensive, and doing this for many machine learning projects is impractical. Active learning can help here.

What is active learning?

Active learning is a type of semi-supervised machine learning which helps reduce the amount of labelled data required to train a model. In active learning, the model focuses only on the data it is confused about and asks the experts to label it. The model then trains a bit more on the small amount of newly labelled data, and repeats the process for the next batch of confusing samples. Active learning, in short, prioritizes the confusing samples that need labelling. This enables models to learn faster, allows experts to skip labelling data that is not a priority, and gives the model the most useful information about the confusing samples. This, in turn, can yield great machine learning models, since active learning reduces the number of labels that need to be collected from experts.

Types of active learning

An active learning environment includes a learner (the model being trained), a huge amount of raw, unlabelled data, and an expert (the person or system labelling the data). The role of the learner is to choose which instances or examples should be labelled; the learner's goal is to reduce the number of labelled examples needed for an ML model to learn. The expert, on receiving the data to be labelled, analyzes it to determine the appropriate labels. There are three types of active learning scenarios:

Query synthesis: the learner constructs examples, which are then sent to the expert for labelling.

Stream-based active learning: from a stream of unlabelled data, the learner decides, instance by instance, whether to request a label or discard the example.

Pool-based active learning: the most common scenario. The learner chooses only the most informative instances from a pool of unlabelled data and forwards them to the expert for labelling (see the sketch after this list).
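Here is a minimal sketch of pool-based active learning with uncertainty sampling, using scikit-learn. The dataset, model, seed size, and query budget are all illustrative assumptions chosen to keep the example self-contained; in a real project the "expert" step would be a human annotator rather than a lookup into known labels.

```python
# Pool-based active learning via uncertainty (least-confidence) sampling.
# Dataset, model, and budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]  # unlabelled pool

model = LogisticRegression(max_iter=1000)
for _ in range(10):  # query budget: 10 rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Least-confident sample: lowest top-class probability = most "confusing"
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)  # the "expert" supplies y[query] here
    pool.remove(query)

print("accuracy on remaining pool:", model.score(X[pool], y[pool]))
```

The key line is the argmin over the model's top-class probabilities: instead of labelling random samples, the learner spends the expert's time only on the examples it is least sure about, which is exactly the prioritization described above.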
Some real-life applications of Active learning

Natural Language Processing (NLP): Many NLP applications, such as POS (part-of-speech) tagging and NER (named entity recognition), require large amounts of labelled data, and labelling that data is costly. Active learning can cut down the amount of data that has to be labelled.

Scene understanding in self-driving cars: Active learning can be used to detect objects, such as pedestrians, from a video camera mounted on a moving car - a key area for ensuring safety in autonomous vehicles. It can deliver high detection accuracy against complex and variable backgrounds.

Drug design: Drugs are biological or chemical compounds that interact with specific 'targets' in the body (usually proteins, RNA, or DNA) with the aim of modifying their activity. The goal of drug design is to find which compounds bind to a particular target. The candidate data comes from large collections of compounds: vendor catalogs, corporate collections, and combinatorial chemistry. With active learning, the learner can identify which compounds are active (bind to the target) and which are inactive.

Active learning is still being researched with different deep learning algorithms, such as CNNs and LSTMs acting as learners, in order to improve their efficiency. GANs (Generative Adversarial Networks) are also being implemented in the active learning framework, and some research papers try to learn active learning strategies themselves using meta-learning.

Read more:
Why is Python so good for AI and Machine Learning? 5 Python Experts Explain
AWS Greengrass brings machine learning to the edge
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

Google ARCore is pushing immersive computing forward

Sugandha Lahoti
26 Apr 2018
7 min read
Immersive computing has been touted as a crucial innovation that is going to transform the way we interact with software in the future. But like every trend, there is a set of core technologies at its center driving it forward, and in the context of immersive computing Google ARCore is one of them. Of course, it's no surprise to see Google somewhere at the heart of one of the most exciting developments in tech. But what is Google ARCore, exactly? And how is it going to help drive immersive computing into the mainstream? First, let's take a look at exactly what immersive computing is. After that, we'll explore how Google ARCore is helping to drive it forward, with a closer look at motion tracking and light estimation.

What is Immersive Computing?

Immersive computing is a term used to describe applications that provide an immersive experience for the user. This may come in the form of an augmented or virtual reality experience. The spectrum of immersive computing is best understood as a scale of immersion: one end represents traditional applications with little or no immersion, the other fully immersive virtual reality applications, and the level of immersion shapes the user experience. Augmented reality occupies the middle of that spectrum, and that sweet spot is where we will work on developing applications.

Why use Google ARCore for Augmented Reality?

Augmented reality applications are unique in that they annotate or augment the reality of the user. This is typically done visually, by having the AR app overlay a view of the real world with computer graphics. Google ARCore is designed primarily for providing this type of visual annotation. A typical ARCore demo renders, in real time on a mobile device, a virtual object - say, a lion - superimposed into the user's reality; it isn't the result of painstaking hours in Photoshop or other media effects libraries. More impressive still is the quality of immersion: details such as the lighting and shadows on the lion, the shadow it casts on the ground, and the way the object holds its position in the real world even though it isn't really there. Without those visual enhancements, all you would see is a floating lion superimposed on the screen. It is those visual details that provide the immersion, and Google developed ARCore as a way to help developers incorporate them when building AR applications.

Google developed ARCore for Android as a way to compete against Apple's ARKit for iOS. The fact that two of the biggest tech giants are vying for position in AR indicates the push to build new and innovative immersive applications. Google ARCore has its origins in Tango, which is/was a more advanced AR toolkit that used special sensors built into the device. In order to make AR more accessible and mainstream, Google developed ARCore as an AR toolkit designed for Android devices not equipped with any special sensors; where Tango depended on special hardware, ARCore uses software to accomplish the same core enhancements. Google has identified three core areas for this toolkit to address: motion tracking, environmental understanding, and light estimation. In the next three sections, we will go through each of these core areas in more detail and understand how they enhance the user experience.

Motion tracking

Tracking a user's motion, and ultimately their position in 2D and 3D space, is fundamental to any AR application. Google ARCore tracks position changes by identifying and tracking visual feature points from the device's camera image; for example, the user's position can be tracked in relation to feature points identified on a real couch. Previously, in order to successfully track motion (position), we needed to pre-register or pre-train our feature points - if you have ever used the Vuforia AR tools, you will be very familiar with having to train images or target markers. ARCore does all this automatically, in real time, without any training. However, this tracking technology is very new and still has several limitations.
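To make the idea of feature points concrete, here is a small, hypothetical Python sketch using OpenCV's ORB detector. This is not how ARCore is implemented - ARCore does this natively on the device - but it illustrates what "identifying visual feature points" in a camera image means; the file names are assumptions.

```python
# A toy illustration of the feature-point idea behind motion tracking,
# assuming OpenCV is installed and 'frame.jpg' is a saved camera frame.
import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)      # ORB feature detector
keypoints, descriptors = orb.detectAndCompute(frame, None)

# Each keypoint is a visually distinctive corner or blob; tracking how
# these points move between frames is what lets a system estimate the
# camera's own motion.
print(f"Found {len(keypoints)} feature points")
annotated = cv2.drawKeypoints(frame, keypoints, None, color=(0, 255, 0))
cv2.imwrite("frame_features.jpg", annotated)
```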
Environmental understanding

The better an AR application understands the user's reality - the environment around them - the more successful the immersion. We have already seen how Google ARCore uses feature identification to track a user's motion, but tracking motion is only the first part; we also need a way to identify physical objects or surfaces in the user's reality. ARCore does this using a technique called meshing. In a meshing demo, the application identifies a real-world surface as a plane, marked out by a field of white dots, on which the user can then place virtual objects. Environmental understanding and meshing are essential for creating the illusion of blended realities: where motion tracking uses identified features to track the user's position, environmental understanding uses meshing to anchor virtual objects in the user's reality.

Light estimation

Magicians work to be masters of trickery and visual illusion. They understand that perspective and good lighting are everything in a great illusion, and developing great AR apps is no exception. Take a second and flip back to the scene with the virtual lion: the lion casts a shadow on the ground, even though it's not really there. That extra level of lighting detail is only made possible by combining the tracking of the user's position with the environmental understanding of the virtual object's position and a way to read light levels. Fortunately, Google ARCore provides us with a way to read, or estimate, the light in a scene. We can then use this lighting information to light and shadow virtual AR objects - for instance, rendering subdued lighting on an AR object in a dim room. The effects of lighting, or the lack thereof, become more obvious as you start developing your own applications.
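Conceptually, light estimation boils down to deriving a brightness value from the camera image and applying it to your virtual objects; ARCore exposes a comparable ambient-intensity value through its light estimation API. The hypothetical Python sketch below only illustrates the idea - the file name, base colour, and scaling are invented for illustration.

```python
# A conceptual sketch of light estimation: derive a scene brightness value
# from a camera frame and use it to dim or brighten a virtual object.
# NOT ARCore's API; inputs and values are made up for illustration.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Mean pixel intensity, normalized to [0, 1], is a crude estimate of
# the ambient light in the scene.
ambient = frame.mean() / 255.0

# Tint a virtual object's base colour by the estimated ambient light so
# the object looks like it belongs in the scene.
base_colour = np.array([200, 180, 150], dtype=np.float32)   # RGB
lit_colour = np.clip(base_colour * ambient, 0, 255).astype(np.uint8)
print(f"Ambient estimate: {ambient:.2f}, lit colour: {lit_colour}")
```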
To summarize, we took a very quick look at what immersive computing and AR are all about. We learned that augmented reality covers the middle ground of the immersive computing spectrum, and that AR is a careful blend of illusions used to trick the user into believing that their reality has been combined with a virtual one. Google developed ARCore to provide a better set of tools for constructing those illusions and to keep Android competitive in the AR market. Finally, we looked at each of the core concepts ARCore was designed to address - motion tracking, environmental understanding, and light estimation - in a little more detail. This has been taken from Learn ARCore - Fundamentals of Google ARCore. Find it here.

Read More:
Getting started with building an ARCore application for Android
Types of Augmented Reality targets


Hybrid Mobile apps: What you need to know

Sugandha Lahoti
26 Apr 2018
4 min read
Hybrid mobile apps have been around for quite some time now, but advances in mobile development software and changes in user behavior have allowed them to grow. Today, users expect hybrid apps, even if they wouldn't know what a 'hybrid app' actually is.

What is a Hybrid mobile app?

A hybrid app is essentially a web application that acts like a native app - or a native app that acts like a web application. That means it can do everything HTML5 does while also incorporating native app features, like access to a phone's camera. Hybrid mobile apps consist of two parts. The first is the application code itself, built using web technologies such as HTML, CSS, and JavaScript. The second is a native shell that loads that code using a WebView.

Advantages of hybrid mobile apps

Hybrid apps are much easier to build than native apps, because they are built with HTML, CSS, and JavaScript - technologies that typically run in the browser. They also have a faster development cycle than native apps, because you only maintain a single JavaScript codebase. It is, however, important to note that hybrid mobile apps require third-party tools such as Apache Cordova to bridge communication between the web view and the native platform. Noteworthy hybrid apps include MarketWatch, Untappd, and Sworkit. Hybrid mobile apps can run on both Android and iOS devices (the two most prominent mobile operating systems). This is great for developers, as it means less work for them: code can be reused for progressive web applications and desktop applications with only minor tweaking.

Disadvantages of hybrid mobile apps

Although they're extremely versatile, hybrid apps have certain disadvantages. They're often a little more expensive than standard web apps because you have to work with the native wrapper, and being dependent on a third-party platform can itself be a drawback. Compared to native apps, hybrid apps aren't quite as interactive and are often a bit slower, since the app depends on resources from the web. Hybrid mobile apps also generally follow a standard template; any deep customization will take you away from the hybrid model, at which point you may as well go native.

Hybrid mobile app frameworks

There is a good range of hybrid mobile application frameworks out there for mobile developers at the moment. Let's take a look at some of the best.

React Native
Facebook's React Native is a mobile framework for writing code once and running it on multiple platforms. It compiles to native app components, letting you build native mobile applications for iOS, Android, and Windows in JavaScript. React Native's library includes Flexbox CSS styling, inline styling, and debugging, and it supports deploying to either the App Store or Google Play.

Ionic
Ionic Framework is an open-source SDK for hybrid mobile app development, licensed under MIT. It is built on top of Angular.js and Apache Cordova. Ionic provides tools and services for developing hybrid mobile apps using web technologies like CSS, HTML5, and Sass. Apps built using Ionic can be distributed through native app stores and installed on devices by using Cordova.

Xamarin
Microsoft's Xamarin development platform allows developers to write native Android, iOS, and Windows apps with a shared C# codebase, and to share that code across multiple platforms using Xamarin's tools.

PhoneGap
Adobe's PhoneGap framework is an open source distribution of the Apache Cordova framework.
With PhoneGap, hybrid applications are built with HTML5 and CSS3 (for rendering) and JavaScript (for logic), and can be used across multiple platforms.

Hybrid mobile apps are great for users

Hybrid mobile apps are particularly effective when you want to build and deploy an app efficiently, and they are also useful for building prototype applications. The key thing to remember, however, is that many users today expect the kind of experience hybrid apps deliver: the old distinction between browser and native experiences has almost disappeared. A well-written hybrid app does not behave or look any different from its native equivalent - and that, really, is what users want.

Also, check out:
React Native Cookbook
React and React Native
Learning Ionic - Second Edition
Ionic 2 Cookbook - Second Edition
Mastering Xamarin UI Development


Top 7 DevOps tools in 2018

Vijin Boricha
25 Apr 2018
5 min read
DevOps is a methodology, or even a philosophy: a way of reducing the friction between development and operations. But while we could talk about what DevOps is and isn't for decades (and people probably will), there is a range of DevOps tools that are integral to putting its principles into practice. So, while it's true that adopting a DevOps mindset will make the way you build software more efficient, it's pretty hard to put DevOps into practice without the right tools. Let's take a look at some of the best DevOps tools out there in 2018. You might not use all of them, but you're sure to find something useful in at least one of them - probably in a combination of them.

DevOps tools that help put the DevOps mindset into practice

Docker
Docker is software that performs OS-level virtualization, also known as containerization. Docker uses containers to package up all the requirements and dependencies of an application, making it shippable to on-premises devices, data center VMs, or even the cloud. It was developed by Docker, Inc. back in 2013, with complete support for Linux and limited support for Windows; by 2016 Microsoft had announced integration of Docker with Windows 10 and Windows Server 2016. As a result, Docker enables developers to easily pack, ship, and run any application as a lightweight, portable container that can run virtually anywhere (see the sketch below).
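As a small taste of working with Docker programmatically, here is a minimal, hypothetical sketch using the official Docker SDK for Python (installed with pip install docker). It assumes a local Docker daemon is running; the image and command are arbitrary examples.

```python
# Run a throwaway container from Python using the Docker SDK:
# pull the image if needed, execute a command, capture its output,
# and remove the container afterwards.
import docker

client = docker.from_env()   # connect to the local Docker daemon

output = client.containers.run(
    "alpine:latest",                     # small example image
    ["echo", "hello from a container"],  # example command to run
    remove=True,                         # clean up the container afterwards
)
print(output.decode().strip())
```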
Jenkins
Jenkins is an open source continuous integration server written in Java. When it comes to integrating DevOps processes, continuous integration plays the most important part, and this is where Jenkins comes into the picture. Released in 2011, it helps developers integrate the stages of DevOps with a wide variety of built-in plugins. Jenkins is one of those prominent tools that helps developers find and fix code bugs quickly, and it automates the testing of their builds.

Ansible
Ansible was developed by the Ansible community back in 2012 to automate network configuration, software provisioning, development environments, and application deployment. In a nutshell, it delivers simple IT automation that puts a stop to repetitive tasks, which helps DevOps teams focus on more strategic work. Ansible is completely agentless, uses syntax written in YAML, and follows a master-slave architecture.

Puppet
Puppet is an open source configuration management tool, written in C++ and Clojure, used to deploy, configure, and manage servers. It was released back in 2005 under the GNU General Public License (GPL) until version 2.7.0, and has been licensed under the Apache License 2.0 since. Puppet uses a master-slave architecture in which the master and slaves communicate over secure encrypted channels. Puppet runs on any platform that supports Ruby, for example CentOS, Windows Server, Oracle Enterprise Linux, and more.

Git
Git is a version control system that lets you track file changes, which in turn helps team members coordinate their work on those files. Git was released in 2005, originally for Linux kernel development, and its primary use case is source code management in software development. Git is a distributed version control system, where every contributor can create a local repository by cloning the entire main repository. The main advantage of this model is that contributors can update their local repositories without any interference to the main repository.

Vagrant
Vagrant is an open source tool, released in 2010 by HashiCorp, used to build and maintain virtual environments. It provides a simple command-line interface for managing virtual machines with custom configurations, so that DevOps team members share an identical development environment. While Vagrant is written in Ruby, it supports development in all major languages, and it works seamlessly on Mac, Windows, and all popular Linux distributions. If you are considering building and configuring a portable, scalable, and lightweight environment, Vagrant is your solution.

Chef
Chef is a powerful configuration management tool used to transform infrastructure into code. It was released back in 2009 and is written in Ruby and Erlang. Chef uses a pure-Ruby domain-specific language (DSL) to write system configuration 'recipes', which are grouped into 'cookbooks' for easier management. Unlike Puppet's master-slave architecture, Chef uses a client-server architecture. Chef supports multiple cloud environments, which makes it easy to manage data centers and maintain high availability.

Think carefully about the DevOps tools you use

To increase efficiency and productivity, the right tool is key. In a fast-paced world where DevOps engineers and their teams do extensive work, it is hard to find the one tool that fits your environment perfectly. Your best bet is to choose your tools based on the methodology you are going to adopt, and before making a hard decision it is worth taking a step back to analyze what would work best for your team's productivity and efficiency. The tools above have been shortlisted based on current market adoption; we hope you find one in this list that saves you time in choosing the right one.

Learning resources

Here is a small selection of books and videos from our DevOps portfolio to help you and your team master the DevOps tools that fit your requirements:
Mastering Docker (Second Edition)
Mastering DevOps [Video]
Mastering Docker [Video]
Ansible 2 for Beginners [Video]
Learning Continuous Integration with Jenkins (Second Edition)
Mastering Ansible (Second Edition)
Puppet 5 Beginner's Guide (Third Edition)
Effective DevOps with AWS

20 ways to describe programming in 5 words

Richard Gall
25 Apr 2018
3 min read
How would you describe programming? Can you describe programming in 5 words? It's pretty difficult. Even explaining it in a basic and straightforward way can be challenging. You type stuff... and then it turns into something else or makes something happen. Or, as is often the case, something doesn't happen. Twitter account @abstractionscon asked its followers "what 5 words best describe programming?" The results didn't disappoint. There was a mix of funny, slightly tragic, and even poetic evocations and descriptions of what programming is and what it feels like. It turns out that more often than not, it simply feels frustrating. Things go wrong a lot.

One of the most interesting aspects of the conversation was how it brings to light just how challenging it is to put programming into language. That's reflected in many of the responses to the original tweet. One of the conclusions we can probably draw from this is that not only is describing programming pretty hard, it's also pretty funny. And from that, perhaps it's also true that programming is generally a pretty funny thing to do. But then why would that be surprising? You learn from an early age that getting a computer to do what you want is difficult, so why should writing software be any different?

Take a look at some of the best attempts to describe programming below. Which is your favourite? And how would you describe programming?

https://twitter.com/alicegoldfuss/status/988818057219854336
https://twitter.com/jennschiffer/status/988849269552578560
https://twitter.com/lindseybieda/status/988941397544890368
https://twitter.com/sarahmei/status/988600171075268608
https://twitter.com/tef_ebooks/status/988752549552578560
https://twitter.com/jckarter/status/988828156386684928
https://twitter.com/cassidoo/status/988920470907961344
https://twitter.com/kelseyhightower/status/988646191679209472
https://twitter.com/francesc/status/988653691669446658
https://twitter.com/shanselman/status/988919759377915904
https://twitter.com/chriseng/status/988674723516207104
https://twitter.com/EricaJoy/status/988649667914186755
https://twitter.com/brianleroux/status/988628362355773440
https://twitter.com/ftrain/status/988759827731148800
https://twitter.com/jbeda/status/988634633087545344
https://twitter.com/kamal/status/988749873347375104
https://twitter.com/fatih/status/988695353171030016
https://twitter.com/innesmck/status/989067129432498176
https://twitter.com/franckverrot/status/988611564168036352
https://twitter.com/dewitt/status/988609620536053760

Thank you Twitter for your insights and jokes. It does make you feel better to know that there are millions of people out there with the same frustrations and software-induced high blood pressure. The next time something goes wrong, remember: you're really just meat teaching sand to think. Hopefully that should put everything into perspective.

Read more: Slow down to learn how to code faster


What is Mob Programming?

Pavan Ramchandani
24 Apr 2018
4 min read
Mob Programming is a programming paradigm that extends Pair Programming. The difference is actually quite straightforward: where in Pair Programming engineers work in pairs, in Mob Programming a whole 'mob' of engineers works together. That mob might even include project managers and DevOps engineers. Like any good mob, it can get rowdy, but it can also get things done when everyone is focused on the same thing.

What is Mob programming?

The most common definition of this approach, given by Woody Zuill (the self-proclaimed father of Mob programming), is as follows:

"All the team members working on the same thing, at the same time, in the same space, and on the same computer."

Here are the key principles of Mob Programming:

The team comes together in a meeting room with a set task due for the day. This group working together is called the mob.
The entire code is developed on a single system.
Only one member is allowed to operate the system; only this Driver can write or change the code.
The other members are called Navigators, and the expert among them for the problem at hand guides the Driver in writing the code.
Everyone keeps switching roles, meaning no one person is at the system all the time (a toy sketch of this rotation appears at the end of this article).
The session ends when all aspects of the task are successfully completed.

The Mob Programming strategy

The success of mob programming depends on the collaborative nature of the developers coming together to form the mob. A group of 5-6 members makes a good mob. For a productive session, each member needs to be familiar with software development concepts like testing, design patterns, and the software development life cycle, among others. A project manager can encourage the team to take the mob programming approach in order to make the early stage of software development stress-free, and anyone stuck at a point in the problem will have Navigators who can bring in their expertise and keep the project moving.

The advantages of Mob Programming

Mob programming might make you nervous about performing in a group, but in practice it tends to make work stress-free and almost error-free, since multiple opinions are brought to bear. The ground rules mean that no single person can stay at the keyboard, writing code, longer than the others. This reduces the grunt work and provides the opportunity to switch to a different role in the mob, which challenges and intrigues individuals to contribute to the project using their creativity.

Criticisms of Mob Programming

Mob programming is about cutting the communication barriers within a team. However, when the dynamics between members are off, a session can turn into a few dominant members dictating terms for the task at hand. Many developers out there are set in their own ways, and when asked to work on a task or project at the same time, conflicts of interest can occur. Some developers might not participate at full capacity, which can leave the work sub-standard.

To do Mob Programming well, you need a good mob

Mob programming is a modern approach to software development and comes with its own set of pros and cons. The productivity and fruitfulness of the approach lie in the credibility and dynamics of the members, not in the nature of the problem at hand. Its potential is therefore best leveraged for solving difficult problems, given the right mob to deal with them.
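The role switching at the heart of mob programming is often enforced with a simple timer, and dedicated mob timer tools exist for exactly this. Here is a toy Python sketch of the idea; the names, interval, and number of turns are invented for illustration.

```python
# A toy driver-rotation timer for a mob programming session. Names,
# rotation interval, and number of turns are invented for illustration.
import itertools
import time

mob = ["Asha", "Ben", "Chen", "Dana", "Eli"]   # hypothetical mob members
ROTATION_MINUTES = 10                          # one common cadence

# Cycle the Driver role through the mob; everyone else navigates.
for turn, driver in enumerate(itertools.islice(itertools.cycle(mob), 12), 1):
    print(f"Turn {turn}: {driver} drives, the rest of the mob navigates.")
    time.sleep(ROTATION_MINUTES * 60)          # swap seats when this fires
```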
More on programming paradigms:
What is functional reactive programming?
What is the difference between functional and object oriented programming?