
How-To Tutorials


How to implement data validation with Xamarin.Forms

Packt Editorial Staff
03 Feb 2020
8 min read
In software, data validation is a process that ensures the validity and integrity of user input, and usually involves checking that the data is in the correct format and contains an acceptable value. In this Xamarin tutorial, you'll learn how to implement it with Xamarin.Forms.

This article is an excerpt from the book Mastering Xamarin.Forms, Third Edition by Ed Snider. The book walks you through the creation of a simple app, explaining at every step why you're doing the things you're doing, so that you gain the skills you need to use Xamarin.Forms to create your own high-quality apps.

Types of data validation in mobile application development

There are typically two types of validation when building apps: server-side and client-side. Both play an important role in the lifecycle of an app's data. Server-side validation is critical when it comes to security, making sure malicious data or code doesn't make its way into the server or backend infrastructure. Client-side validation is usually more about user experience than security.

A mobile app should always validate its data before sending it to a backend (such as a web API) for several reasons, including the following:

- To provide real-time feedback to the user about any issues instead of waiting on a response from the backend.
- To support saving data in offline scenarios where the backend is not available.
- To prevent encoding issues when sending the data to the backend.

Just as a backend server should never assume all incoming data has been validated by the client side before being received, a mobile app should also never assume the backend will do its own server-side validation, even though it's a good security practice. For this reason, mobile apps should perform as much client-side validation as possible.

When adding validation to a mobile app, the actual validation logic can go in a few areas of the app architecture. It could go directly in the UI code (the View layer of a Model-View-ViewModel (MVVM) architecture), in the business logic or controller code (the ViewModel layer of an MVVM architecture), or even in the HTTP code. In most cases, when implementing the MVVM pattern, it makes the most sense to include validation in the ViewModels, for the following reasons:

- The validation rules can be checked as the individual properties of the ViewModel are changed.
- The validation rules are often part of, or dependent on, business logic that already exists in the ViewModel.
- Most importantly, having the validation rules implemented in the ViewModel makes them easy to test.

Adding a base validation ViewModel in Xamarin.Forms

Since validation makes the most sense in the ViewModel, we will start by creating a new base ViewModel that provides some base-level methods, properties, and events for subclassed ViewModels to leverage. This new base ViewModel will be called BaseValidationViewModel and will subclass BaseViewModel. It will also implement an interface called INotifyDataErrorInfo from the System.ComponentModel namespace. INotifyDataErrorInfo works a lot like INotifyPropertyChanged: it specifies some properties about what errors have occurred, as well as an event for when the error state of a particular property changes.
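For reference, this is the shape of the interface we are about to implement, as it is defined in the System.ComponentModel namespace (reproduced here for orientation; it is not part of the book's step-by-step listing):

```csharp
// Defined in System.ComponentModel (shown for reference):
public interface INotifyDataErrorInfo
{
    // True if the object currently has any validation errors.
    bool HasErrors { get; }

    // Raised whenever the error state of a property changes.
    event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    // Returns the errors for a given property (or all errors if null/empty).
    IEnumerable GetErrors(string propertyName);
}
```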
1. Create a new class in the ViewModels folder named BaseValidationViewModel that subclasses BaseViewModel:

```csharp
public class BaseValidationViewModel : BaseViewModel
{
    public BaseValidationViewModel(INavService navService)
        : base(navService)
    {
    }
}
```

2. Update BaseValidationViewModel to implement INotifyDataErrorInfo, as follows:

```csharp
public class BaseValidationViewModel : BaseViewModel, INotifyDataErrorInfo
{
    readonly IDictionary<string, List<string>> _errors =
        new Dictionary<string, List<string>>();

    public BaseValidationViewModel(INavService navService)
        : base(navService)
    {
    }

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    public bool HasErrors
        => _errors?.Any(x => x.Value?.Any() == true) == true;

    public IEnumerable GetErrors(string propertyName)
    {
        if (string.IsNullOrWhiteSpace(propertyName))
        {
            return _errors.SelectMany(x => x.Value);
        }

        if (_errors.ContainsKey(propertyName) && _errors[propertyName].Any())
        {
            return _errors[propertyName];
        }

        return new List<string>();
    }
}
```

3. In addition to implementing the required members of INotifyDataErrorInfo – ErrorsChanged, HasErrors, and GetErrors() – we also need to add a method that actually handles validating ViewModel properties. This method needs a validation rule parameter in the form of a Func<bool> and an error message to be used if the validation rule fails. Add a protected method named Validate to BaseValidationViewModel, as follows:

```csharp
public class BaseValidationViewModel : BaseViewModel, INotifyDataErrorInfo
{
    // ...

    protected void Validate(Func<bool> rule, string error,
        [CallerMemberName] string propertyName = "")
    {
        if (string.IsNullOrWhiteSpace(propertyName))
            return;

        if (_errors.ContainsKey(propertyName))
        {
            _errors.Remove(propertyName);
        }

        if (rule() == false)
        {
            _errors.Add(propertyName, new List<string> { error });
        }

        OnPropertyChanged(nameof(HasErrors));
        ErrorsChanged?.Invoke(this,
            new DataErrorsChangedEventArgs(propertyName));
    }
}
```

If the validation rule Func<bool> returns false, the provided error message is added to a private list of errors (used by HasErrors and GetErrors()), mapped to the specific property that called into the Validate() method. Lastly, the Validate() method invokes the ErrorsChanged event with the caller property's name included in the event arguments.

Now any ViewModel that needs to perform validation can subclass BaseValidationViewModel and call the Validate() method to check whether individual properties are valid. In the next section, we will use BaseValidationViewModel to add validation to the new entry page and its supporting ViewModel.

Adding validation to the new entry page in Xamarin.Forms

In this section, we will add some simple client-side validation to a couple of the entry fields on the new entry page.

1. First, update NewEntryViewModel to subclass BaseValidationViewModel instead of BaseViewModel:

```csharp
public class NewEntryViewModel : BaseValidationViewModel
{
    // ...
}
```

Because BaseValidationViewModel subclasses BaseViewModel, NewEntryViewModel is still able to leverage everything in BaseViewModel as well.

2. Next, add a call to Validate() in the Title property setter that includes a validation rule specifying that the field cannot be left blank:

```csharp
public string Title
{
    get => _title;
    set
    {
        _title = value;
        Validate(() => !string.IsNullOrWhiteSpace(_title),
            "Title must be provided.");
        OnPropertyChanged();
        SaveCommand.ChangeCanExecute();
    }
}
```
3. Next, add a call to Validate() in the Rating property setter that includes a validation rule specifying that the field's value must be between 1 and 5:

```csharp
public int Rating
{
    get => _rating;
    set
    {
        _rating = value;
        Validate(() => _rating >= 1 && _rating <= 5,
            "Rating must be between 1 and 5.");
        OnPropertyChanged();
        SaveCommand.ChangeCanExecute();
    }
}
```

Notice that we also added SaveCommand.ChangeCanExecute() to this setter as well. This is because we want to update the SaveCommand's canExecute value when this value has changed, since it will now impact the return value of CanSave(), which we will update in the next step.

4. Next, update CanSave() – the method used for the SaveCommand's canExecute function – to prevent saving if the ViewModel has any errors:

```csharp
bool CanSave() => !string.IsNullOrWhiteSpace(Title) && !HasErrors;
```

5. Finally, update the new entry page to reflect any errors by highlighting the field's label color in red:

```xml
<!-- NewEntryPage.xaml -->
<EntryCell x:Name="title"
           Label="Title"
           Text="{Binding Title}" />
<!-- ... -->
<EntryCell x:Name="rating"
           Label="Rating"
           Keyboard="Numeric"
           Text="{Binding Rating}" />
```

```csharp
// NewEntryPage.xaml.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using Xamarin.Forms;
using TripLog.ViewModels;

public partial class NewEntryPage : ContentPage
{
    NewEntryViewModel ViewModel => BindingContext as NewEntryViewModel;

    public NewEntryPage()
    {
        InitializeComponent();

        BindingContextChanged += Page_BindingContextChanged;

        BindingContext = new NewEntryViewModel();
    }

    void Page_BindingContextChanged(object sender, EventArgs e)
    {
        ViewModel.ErrorsChanged += ViewModel_ErrorsChanged;
    }

    void ViewModel_ErrorsChanged(object sender, DataErrorsChangedEventArgs e)
    {
        var propHasErrors =
            (ViewModel.GetErrors(e.PropertyName) as List<string>)?.Any() == true;

        switch (e.PropertyName)
        {
            case nameof(ViewModel.Title):
                title.LabelColor = propHasErrors ? Color.Red : Color.Black;
                break;
            case nameof(ViewModel.Rating):
                rating.LabelColor = propHasErrors ? Color.Red : Color.Black;
                break;
            default:
                break;
        }
    }
}
```

Now when we run the app, we will see the following: [Figure: The TripLog new entry page with client-side validation]

If we navigate to the new entry page and enter an invalid value in either the Title or Rating field, the field label turns red and the Save button is disabled. Once the error has been corrected, the field label color returns to black and the Save button is re-enabled.

Learn more about mobile application development with Xamarin and the open source Xamarin.Forms toolkit with the third edition of Mastering Xamarin.Forms.

About Ed Snider

Ed Snider is a senior software developer, speaker, author, and Microsoft MVP based in the Washington D.C./Northern Virginia area. He has a passion for mobile design and development and regularly speaks about Xamarin and Windows app development in the community. Ed works at InfernoRed Technology, where his primary role is working with clients and partners to build mobile and media focused products on iOS, Android, and Windows. He started working with .NET in 2005 when .NET 2.0 came out and has been building mobile apps with .NET since 2011. Ed was recognized as a Xamarin MVP in 2015 and as a Microsoft MVP in 2017. Find him on Twitter: @edsnider


Data Visualization with ggplot2

Janu Verma
14 Nov 2016
6 min read
In this post, we will learn about data visualization using ggplot2. ggplot2 is an R package for data exploration and visualization. It produces amazing graphics that are easy to interpret. The main use of ggplot2 is in exploratory analysis, and it is an important element of a data scientist's toolkit. The ease with which complex graphs can be plotted using ggplot2 is probably its most attractive feature. It also allows you to slice and dice data in many different ways. ggplot2 is an implementation of A Layered Grammar of Graphics by Hadley Wickham, who is certainly the strongest R programmer out there.

Installation

Installing packages in R is very easy. Just type the following command at the R prompt:

```r
install.packages("ggplot2")
```

Import the package in your R code:

```r
library(ggplot2)
```

qplot

We will start with the function qplot(). qplot is the simplest plotting function in ggplot2, and it is very similar to the generic plot() function in basic R. We will learn how to plot basic statistical and exploratory plots using qplot. We will use the Iris dataset that comes with the base R package (and with every other data mining package that I know of). The Iris data consists of observations of phenotypic traits of three species of iris. In R, the iris data is provided as a data frame of 150 rows and 5 columns. The head command will print the first 6 rows of the data:

```r
head(iris)
```

The general syntax for the qplot function is:

```r
qplot(x, y, data = data.frame)
```

We will plot sepal length against petal length:

```r
qplot(Sepal.Length, Petal.Length, data = iris)
```

We can color each point by adding a color argument. We will color each point by the species it belongs to:

```r
qplot(Sepal.Length, Petal.Length, data = iris, color = Species)
```

An observant reader would notice that this coloring scheme provides a way to visualize clustering. Also, we can change the size of each point by adding a size argument. Let the size correspond to the petal width:

```r
qplot(Sepal.Length, Petal.Length, data = iris, color = Species, size = Petal.Width)
```

Thus we have a visualization for four-dimensional data. The alpha argument can be used to increase the transparency of the circles:

```r
qplot(Sepal.Length, Petal.Length, data = iris, color = Species,
      size = Petal.Width, alpha = I(0.7))
```

This reduces the over-plotting of the data. Label the graph using the xlab, ylab, and main arguments:

```r
qplot(Sepal.Length, Petal.Length, data = iris, color = Species,
      xlab = "Sepal Length", ylab = "Petal Length",
      main = "Sepal vs. Petal Length in Fisher's Iris data")
```

All the above graphs were scatterplots. We can use the geom argument to draw other types of graphs.

Histogram

```r
qplot(Sepal.Length, data = iris, geom = "bar")
```

Line chart

```r
qplot(Sepal.Length, Petal.Length, data = iris, geom = "line", color = Species)
```

ggplot

Now we'll move to the ggplot() function, which has a much broader range of graphing techniques. We'll start with basic plots similar to what we did with qplot(). First things first, load the library:

```r
library(ggplot2)
```

As before, we will use the iris dataset. For ggplot(), we generate aesthetic mappings that describe how variables in the data are mapped to visual properties. This is specified by the aes function.

Scatterplot

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length)) + geom_point()
```

This is exactly what we got with qplot(). The syntax is a bit unintuitive, but it is very consistent. The basic structure is:

```r
ggplot(data.frame, aes(x = , y = , ...)) + geom_*(...) + ...
```

Add colors in the scatterplot:
```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) + geom_point()
```

The geom argument

We can use other geoms to create different types of graphs, for example, a line chart:

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) +
  geom_line() +
  ggtitle("Plot of sepal length vs. petal length")
```

Histogram

```r
ggplot(iris, aes(x = Sepal.Length)) + geom_histogram(binwidth = .2)
```

For a histogram with color, use the fill argument:

```r
ggplot(iris, aes(x = Sepal.Length, fill = Species)) + geom_histogram(binwidth = .2)
```

The position argument can fine-tune positioning to achieve useful effects; for example, we can adjust the position by dodging overlaps to the side:

```r
ggplot(iris, aes(x = Sepal.Length, fill = Species)) +
  geom_histogram(binwidth = .2, position = "dodge")
```

Labeling

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) +
  geom_point() +
  ggtitle("Plot of sepal length vs. petal length")
```

The size and alpha arguments

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length,
                 color = Species, size = Petal.Width)) +
  geom_point(alpha = 0.7) +
  ggtitle("Plot of sepal length vs. petal length")
```

We can also transform variables directly in the ggplot call:

```r
ggplot(iris, aes(x = log(Sepal.Length), y = Petal.Length / Petal.Width,
                 color = Species)) +
  geom_point()
```

ggplot also allows slicing of data. A way to split up the way we look at data is with the facets argument. Facets break the plot into multiple plots:

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) +
  geom_point() +
  facet_wrap(~Species) +
  ggtitle("Plot of sepal length vs. petal length")
```

Themes

We can use a whole range of themes for ggplot using the R package ggthemes:

```r
install.packages('ggthemes', dependencies = TRUE)
library(ggthemes)
```

Essentially, you add the theme_*() function to the ggplot call.

The Economist theme: For someone like me who reads The Economist regularly and might work there one day (ask for my resume if you know someone!), it would be fun and useful to try to reproduce some of the graphs they publish. We may not have the data available, but we have the theme:

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) +
  geom_point() +
  theme_economist()
```

Five Thirty Eight theme: Nate Silver is probably the most famous statistician (data scientist) around, and his company, Five Thirty Eight, is a must-read for any data scientist. The good folks at ggthemes have created a 538 theme as well:

```r
ggplot(iris, aes(x = Sepal.Length, y = Petal.Length, color = Species)) +
  geom_point() +
  theme_fivethirtyeight()
```

About the author

Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE Vis, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journals, and so on. His current focus is on the development of visual analytics systems for prediction and understanding. He advises start-ups and companies on data science and machine learning in the Delhi-NCR area.


The 5 most popular programming languages in 2018

Fatema Patrawala
19 Jun 2018
11 min read
Whether you're new to software engineering or have years of experience under your belt, knowing what to learn can be difficult. Which programming language should you learn first? Which programming language should you learn next? There are hundreds of programming languages in widespread use. Each one has different features, and many have very different applications - though, of course, many overlap too - so knowing which one to use can be tough. Making those decisions is as important a part of being a developer as writing code. Just as English is the international language for most businesses and French is the language of love, different programming languages are better suited to different purposes. Let us take a look at what developers have chosen as the top programming languages for 2018 in this year's Packt Skill Up Survey.

Source: Packt Skill Up Survey 2018

Java reigns supreme

Java, the accessible and ever-present programming language, continues to be widespread year on year. It is a 22-year-old language, which, put into human perspective, is incredible: it would be old enough to have finished college, have a celebratory alcoholic drink, gamble in Iowa, and get married without parental consent in Mississippi! With time and age, Java has proven its consistency as a reliable programming language for engineers and developers. Our Skill Up 2018 survey data reveals it's still the most popular programming language.

Perhaps one of the reasons for this is Java's flexibility and adaptability. Programs can run on several different types of machines; as long as the computer has a Java Runtime Environment (JRE) installed, a Java program can run on it. Most types of computers are compatible with a JRE: PCs running Windows, Macs, Unix or Linux computers, even large mainframe computers and mobile phones.

Another reason for Java's popularity is that it is object oriented. Java code is robust because Java objects contain no references to data external to themselves. This is why Java is so widely used across industry: it's reliable and secure.

When it's time for mobile developers to build Android apps, the first and most popular option is Java. Java is the official language for Android development, which means it has great support from Google, and most apps on the Play Store are built with Java.

If one talks about job opportunities in the field of Java, there are plenty, such as 'Java UI Developer', 'Android Developer', and many others. Hence, there are numerous job opportunities available in Java and J2EE combined with other new technologies, and these are among the highest-paid jobs in the IT industry today. According to Payscale, the average Java developer salary in the USA is around $102,000, with salaries for job postings nationwide being 77% higher than average salaries. Some of the widely known domains where Java is used extensively are financial services, banking, the stock market, retail, and the scientific and research communities.

Finally, it's worth noting that the demand for Java developers is pretty high, given it's the language required in many engineering roles. It does make sense to start learning it if you don't yet know it.

Start learning Java. Explore Packt's latest Java eBooks and videos.

Java Tutorials: Design a RESTful web API with Java

JavaScript retains the runner-up spot

JavaScript has for years featured in the list of top programming languages, and this time it is 2nd, just after Java.
JavaScript has continued to be everywhere, from front-end web pages to mobile web apps and everything in between. A web developer can add personality to websites by using JavaScript: it is the native language of the browser. If you want to build single-page web apps, there is really only one language option for building client-side single-page apps, and that is JavaScript. It is supported by all popular browsers, such as Microsoft Internet Explorer (beginning with version 3.0), Firefox, Safari, Opera, and Google Chrome.

JavaScript has been the most versatile and popular language among developers because it is simple to learn, gives extended functionality to web pages, and is an inexpensive language. In other words, it does not require any special compilers or text editors to run the script, it is simple to implement, and it is relatively fast for end users.

After the release of Node.js in 2009, the "JavaScript everywhere" paradigm has become a reality. This server-side JavaScript framework allows developers to unify web application development around a single programming language, rather than relying on different languages for the server and the client. npm, Node.js's package manager, is the largest ecosystem of open source libraries in the world. Node.js is also being adopted in IoT projects due to its speed, number of plugins, and scalability convenience. Another open source JavaScript library, React Native, lets you build cross-platform mobile applications, making for a smooth transition from web to mobile.

According to Daxx, JavaScript developers earn some of the highest tech salaries, with an average of $96,000 in the US. There are plenty more sources that name JavaScript as one of the most sought-after skills in 2017. ITJobsWatch ranked JavaScript as the second most in-demand programming language in the UK, a conclusion based on the number of job ads posted over the last three months.

Read More: 5 JavaScript Machine learning libraries you need to know

Python on the rise

Python is a general-purpose language, often described as utilitarian - a word which makes it sound a little boring, when it is in fact anything but. Why is it considered among the top programming languages? Simple: it is a truly universal language, applicable to a range of problems and areas. If programmers begin working with Python for one job or career, they can easily jump to another, even if it's in an unrelated industry.

Google uses Python in a number of applications. Additionally, it has a developer portal devoted to Python, featuring free classes so employees can learn more about the language.

Here are just a few reasons why Python is so popular:

- Python comes with plenty of documentation, guides, and tutorials, and a developer community which is incredibly active when timely help and support is needed.
- Python has Google as one of its corporate sponsors. Google contributes to the growing list of documentation and support tools, and effectively acts as a high-profile evangelist for Python.
- Python is one of the most popular languages for data science. It's used to build machine learning and AI systems, and it comes with excellent sets of task-specific libraries, from NumPy and SciPy for scientific computing to Django for web development.
- Ask any Python developer and they will agree: Python is speedy, reliable, and efficient. You can deploy Python applications in any environment, and there is little to no performance loss no matter which platform.
- Python is easy to learn, probably because it lets you think like a programmer: it is easily readable and almost looks like everyday English. Its learning curve is much more gradual than those of other programming languages, which tend to be quite steep.
- Python contains everything from data structures and tools to community support and the Python Software Foundation, and everything combined makes it a favorite among developers.

According to Gooroo, a platform that provides tech skill and salary analytics, Python is one of the highest-paying programming languages in the USA. In fact, at $103,492 per year, Python developers are on average the second best paid in the country. The language is used for system operations, web development, server and administrative tools, deployment, scientific modeling, and now for building AI and machine learning systems, among other things.

Read More: What are professionals planning to learn this year? Python, deep learning, yes. But also...

C# is not left behind

Since the introduction of C# in 2002 by Microsoft, C#'s popularity has remained relatively high. It is a general-purpose language designed for developing apps on the Microsoft platform. C# can be used to create almost anything, but it's particularly strong at building Windows desktop applications and games. It can also be used to develop web applications and has become increasingly popular for mobile development too. Cross-platform tools such as Xamarin allow apps written in C# to be used on almost any mobile device. C# is widely used to create games using the Unity game engine, which is the most popular game engine today. Although C#'s syntax is more logical and consistent than C++'s, it is a complex language, and mastering it may take more time than languages like Python.

The average pay for a C# developer, according to Payscale, is $69,006 per year. With C# you have solid prospects, as big finance corporations are using C# as their language of choice.

In the News: Exciting New Features in C# 8.0

C# Tutorial: Behavior Scripting in C# and Javascript for game developers

SQL remains strong

The acronym SQL stands for Structured Query Language. This essentially means "a programming language that is used to communicate with a database in a structured way". Much like how JavaScript is necessary for making websites exciting and more than just a static page, SQL is one of the only two languages widely used to communicate with databases.

SQL is one of the very few languages where you describe what you want, not how to get it. That means the database can be much more intelligent about how it decides to build its response, and is robust to changes in the computing environment it runs on. It is grounded in set theory, which forces you to think very clearly about what it is you want, and to express that in a precise way. More importantly, it gives you a vocabulary and set of tools for thinking about the problem you're trying to solve without reference to the specific idioms of your application.
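To make the "describe what you want" point concrete, here is a tiny illustrative query (the employees table is hypothetical, not from the survey): you state the result you want, and the database engine decides how to fetch it.

```sql
-- Declarative: describe the result, not the steps to compute it.
SELECT name, salary
FROM employees                 -- hypothetical table
WHERE department = 'Marketing'
ORDER BY salary DESC;          -- the engine picks the access path and sort strategy
```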
SQL is a critical skill for many data-related roles. Data scientists and analysts will need to know SQL, for example. But as data reaches different parts of modern business, such as marketing and product management, it's becoming a useful language for non-developers to learn as well.

Source: Packt Skill Up Survey 2018

By far, MySQL is still the most commonly used database for web-based applications in 2018, according to the report. It's freeware, but it is frequently updated with features and security improvements. It offers a lot of functionality, even for a free database engine, there are a variety of user interfaces that can be implemented, and it can be made to work with other databases, including DB2 and Oracle. For organizations that need a robust database management tool but are on a budget, MySQL is a perfect choice.

The average salary for a Senior SQL Developer, according to Glassdoor, is $100,271 in the US. Unlike other areas of the IT industry, the job options and growth criteria for a SQL developer are quite different: you may work as a database administrator, systems manager, or SQL professional, depending entirely on the functional knowledge and experience you have gained.

Read More: 12 most common MySQL errors you should be aware of

C++ also among favorites

C++, the general-purpose language for systems programming, was designed by Bjarne Stroustrup, with work beginning in 1979. Despite competition from other powerful languages such as Java, Python, and SQL, it is still prevalent among developers. People have been predicting its demise for more than 20 years, but it's still growing. Basically, nothing that can handle complexity runs as fast as C++.

C++ is designed for fairly hardcore applications, and it's generally used together with some scripting language or other. It is for building systems that need high performance, high reliability, a small footprint, and low energy consumption. That's why you see telecom and financial applications built on C++. C++ is also considered to be one of the best solutions for creating applications that process music and film, and there is an extensive list of websites and tools created based on C++. Financial analysts predict that earnings in this specialization will reach at least $102,000 per year.

C++ Tutorials: Getting started with C++ Features

Read More: Working with Shaders in C++ to create 3D games

Other programming languages, like C, PHP, Swift, and Go, were also on developers' choice lists. The creation of supporting technologies in the last few years has given rise to speculation that programming languages are converging in terms of their capabilities. For example, Node.js enables JavaScript developers to create back-end functionality, removing the need to learn PHP or even to hire a separate back-end developer altogether. As of now, we can only speculate on future tech trends, but it will definitely be worth keeping an eye out for which horse to bet on!

20 ways to describe programming in 5 words

A really basic guide to batch file programming

What is Mob Programming?


Connecting Arduino to the Web

Packt
27 Sep 2016
6 min read
In this article by Marco Schwartz, author of Internet of Things with Arduino Cookbook, we will focus on getting you started by connecting an Arduino board to the web. This will really be the foundation of everything that follows, so make sure to carefully follow the instructions so you are ready to complete the exciting projects we'll see later on.

You will first learn how to set up the Arduino IDE development environment and add Internet connectivity to your Arduino board. After that, we'll see how to connect a sensor and a relay to the Arduino board, so that you understand the basics of the Arduino platform. Then, we are actually going to connect an Arduino board to the web, and use it to grab content from the web and to store data online.

Note that all the projects in this article use the Arduino MKR1000 board. This is an Arduino board released in 2016 that has an on-board Wi-Fi connection. You can build all the projects in the article with other Arduino boards, but you might have to change parts of the code.

Setting up the Arduino development environment

In this first recipe, we are going to see how to completely set up the Arduino IDE development environment, so that you can later use it to program your Arduino board and build Internet of Things projects.

How to do it…

The first thing you need to do is to download the latest version of the Arduino IDE for your operating system from the following address: https://www.arduino.cc/en/Main/Software

You can now install the Arduino IDE and open it on your computer. The Arduino IDE will be used throughout the article for several tasks: we will use it to write all the code, but also to configure the Arduino boards and to read debug information back from those boards using the Arduino IDE Serial Monitor.

What we need to install now is the board definition for the MKR1000 board that we are going to use in this article. To do that, open the Arduino boards manager by going to Tools | Board | Boards Manager. In there, search for SAMD boards. To install the board definition, just click on the little Install button next to it. You should now be able to select the Arduino/Genuino MKR1000 board inside the Arduino IDE.

You are now completely set up to develop Arduino projects using the Arduino IDE and the MKR1000 board. You can, for example, try to open an example sketch inside the IDE:
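For instance, the classic Blink sketch (under File | Examples | 01.Basics | Blink) is a good first test that your setup works. A minimal version of it, as it ships with the IDE, looks like this:

```cpp
// Blink: toggle the board's built-in LED once per second.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);      // configure the LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);   // LED on
  delay(1000);                       // wait one second
  digitalWrite(LED_BUILTIN, LOW);    // LED off
  delay(1000);
}
```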
How it works...

The Arduino IDE is the best tool to program a wide range of boards, including the MKR1000 board that we are going to use in this article. We will see that it is a great tool for developing Internet of Things projects with Arduino. As we saw in this recipe, the Boards Manager makes it really easy to use new boards inside the IDE.

See also

These are really the basics of the Arduino framework that we are going to use throughout the article to develop IoT projects.

Options for Internet connectivity with Arduino

Most of the boards made by Arduino don't come with Internet connectivity, which is something that we really need to build Internet of Things projects. We are now going to review all the options that are available to us with the Arduino platform, and see which one is the best for building IoT projects.

How to do it…

The first option, which has been available since the advent of the Arduino platform, is to use a shield. A shield is basically an extension board that can be placed on top of the Arduino board. There are many shields available for Arduino. Inside the official collection of shields, you will find motor shields, prototyping shields, audio shields, and so on. Some shields add Internet connectivity to Arduino boards, for example the Ethernet shield or the Wi-Fi shield.

The other option is to use an external component, for example a Wi-Fi chip mounted on a breakout board, and then connect this component to the Arduino. There are many Wi-Fi chips available on the market. For example, Texas Instruments has a chip called the CC3000 that is really easy to connect to Arduino via a breakout board.

Finally, there is the possibility of using one of the few Arduino boards that have an onboard Wi-Fi chip or Ethernet connectivity. The first board of this type introduced by Arduino was the Arduino Yun. It is a really powerful board, with an onboard Linux machine, but it is also a bit complex to use compared to other Arduino boards. Then Arduino introduced the MKR1000, a board that integrates a powerful ARM Cortex-M0+ processor and a Wi-Fi chip on the same board, all in a small form factor.

What to choose?

All the solutions above would work for building powerful IoT projects using Arduino. However, as we want to easily build those projects and possibly integrate them into projects that are battery-powered, I chose to use the MKR1000 board for all the projects in this article. This board is really simple to use and powerful, and it doesn't require any external connections to hook it up to a Wi-Fi chip. Therefore, I believe this is the perfect board for IoT projects with Arduino.

There's more...

Of course, there are other options for connecting Arduino boards to the web. One option that's becoming more and more popular is to use 3G or LTE to connect your Arduino projects to the web, again using either shields or breakout boards. This solution has the advantage of not requiring an active Internet connection like a Wi-Fi router, and can be used anywhere, for example outdoors.

See also

Now that we have chosen a board to use in our IoT projects with Arduino, you can move on to the next recipe to actually learn how to use it.

Resources for Article:

Further resources on this subject:

- Building a Cloud Spy Camera and Creating a GPS Tracker [article]
- Internet Connected Smart Water Meter [article]
- Getting Started with Arduino [article]


How to perform event handling in React [Tutorial]

Bhagyashree R
31 Dec 2018
11 min read
React has a unique approach to handling events: declaring event handlers in JSX. The differentiating factor with event handling in React components is that it's declarative. Contrast this with something like jQuery, where you have to write imperative code that selects the relevant DOM elements and attaches event handler functions to them. The advantage of the declarative approach to event handlers in JSX markup is that they're part of the UI structure. Not having to track down the code that assigns event handlers is mentally liberating.

This article is taken from the book React and React Native - Second Edition by Adam Boduch. The book guides you through building applications for web and native mobile platforms with React, JSX, Redux, and GraphQL. To follow along with the examples implemented in this article, you can find the code in the GitHub repository of the book.

In this tutorial, we will look at how event handlers for particular elements are declared in JSX. It will walk you through the implementation of inline and higher-order event handler functions. Then you'll learn how React actually maps event handlers to DOM elements under the hood. Finally, you'll learn about the synthetic events that React passes to event handler functions, and how they're pooled for performance purposes.

Declaring event handlers

In this section, you'll write a basic event handler, so that you can get a feel for the declarative event handling syntax found in React applications. Then, we will see how to use generic event handler functions.

Declaring handler functions

Let's take a look at a basic component that declares an event handler for the click event of an element. Find the code for this section on GitHub. The event handler function, this.onClick(), is passed to the onClick property of the <button> element. By looking at this markup, it's clear what code is going to run when the button is clicked.

Multiple event handlers

What I really like about the declarative event handler syntax is that it's easy to read when there's more than one handler assigned to an element. Sometimes, for example, there are two or three handlers for an element. Imperative code is difficult to work with for a single event handler, let alone several of them. When an element needs more handlers, it's just another JSX attribute. This scales well from a code maintainability perspective. Find the code for this section on GitHub. The input element in that example could have several more event handlers, and the code would be just as readable.

As you keep adding more event handlers to your components, you'll notice that a lot of them do the same thing. Next, you'll learn how to share generic handler functions across components.

Importing generic handlers

Any React application is likely to share the same event handling logic across different components. For example, in response to a button click, the component should sort a list of items. It's these types of generic behaviors that belong in their own modules so that several components can share them. Let's implement a component that uses a generic event handler function. Find the code for this section on GitHub.

Let's walk through what's going on, starting with the imports. You're importing a function called reverse(). This is the generic event handler function that you're using with your <button> element. When it's clicked, the list should reverse its order. The onReverseClick method actually calls the generic reverse() function.
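The book's actual listing lives in its GitHub repository; as a rough sketch of the idea (the component name, module path, and item values here are assumptions, not the book's exact code), it might look like this:

```javascript
// generic.js: a shared, generic event handler. It assumes the component
// it's bound to keeps an "items" array in its state.
export function reverse() {
  this.setState(state => ({ items: [...state.items].reverse() }));
}
```

```javascript
// SortableList.js: a component that imports the generic handler and binds it.
import React, { Component } from 'react';
import { reverse } from './generic';

export default class SortableList extends Component {
  constructor(props) {
    super(props);
    this.state = { items: ['First', 'Second', 'Third'] };

    // Bind the generic function's context to this component instance.
    this.onReverseClick = reverse.bind(this);
  }

  render() {
    return (
      <section>
        <button onClick={this.onReverseClick}>Reverse</button>
        <ul>
          {this.state.items.map(item => (
            <li key={item}>{item}</li>
          ))}
        </ul>
      </section>
    );
  }
}
```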
The handler is created using bind() to bind the context of the generic function to this component instance. Finally, looking at the JSX markup, you can see that the onReverseClick() function is used as the handler for the button click.

So how does this work, exactly? Do you have a generic function that somehow changes the state of this component because you bound its context to it? Well, pretty much, yes, that's it. Let's look at the generic function implementation now. Find the code for this section on GitHub. This function depends on a this.state property and an items array within the state. The key is that the state is generic; an application could have many components with an items array in their state. As expected, clicking the button causes the list to sort, using your generic reverse() event handler. Next, you'll learn how to bind the context and the argument values of event handler functions.

Event handler context and parameters

In this section, you'll learn about React components that bind their event handler contexts and how you can pass data into event handlers. Having the right context is important for React event handler functions because they usually need access to component properties or state. Being able to parameterize event handlers is also important because they don't pull data out of DOM elements.

Getting component data

In this section, you'll learn about scenarios where the handler needs access to component properties, as well as argument values. You'll render a custom list component that has a click event handler for each item in the list. The component is passed an array of values, as follows. Find the code for this section on GitHub. Each item in the list has an id property, used to identify the item. You'll need to be able to access this ID when the item is clicked in the UI so that the event handler can work with the item. Here's what the MyList component implementation looks like. Find the code for this section on GitHub.

You have to bind the event handler context, which is done in the constructor. If you look at the onClick() event handler, you can see that it needs access to the component so that it can look up the clicked item in this.props.items. Also, the onClick() handler is expecting an id parameter. If you take a look at the JSX content of this component, you can see that calling bind() supplies the argument value for each item in the list. This means that when the handler is called in response to a click event, the id of the item is already provided.

Higher-order event handlers

A higher-order function is a function that returns a new function. Sometimes, higher-order functions take functions as arguments too. In the preceding example, you used bind() to bind the context and argument values of your event handler functions. Higher-order functions that return event handler functions are another technique. The main advantage of this technique is that you don't call bind() several times; instead, you just call the function where you want to bind parameters to it. Let's look at an example component. Find the code for this section on GitHub. This component renders three buttons and has three pieces of state: a counter for each button. The onClick() function is automatically bound to the component context because it's defined as an arrow function. It takes a name argument and returns a new function. The function that is returned uses this name value when called.
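Again, the exact listing is in the book's repository; a sketch of such a component (names and labels assumed) might look like this:

```javascript
import React, { Component } from 'react';

export default class Counters extends Component {
  state = { first: 0, second: 0, third: 0 };

  // Higher-order handler: takes a name and returns the actual click handler.
  // The arrow function keeps "this" bound to the component instance.
  onClick = name => () => {
    this.setState(state => ({ [name]: state[name] + 1 }));
  };

  render() {
    const { first, second, third } = this.state;
    return (
      <section>
        <button onClick={this.onClick('first')}>First {first}</button>
        <button onClick={this.onClick('second')}>Second {second}</button>
        <button onClick={this.onClick('third')}>Third {third}</button>
      </section>
    );
  }
}
```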
The returned function uses computed property syntax (variables inside []) to increment the state value for the given name, and the component re-renders with the new counts after each button is clicked.

Inline event handlers

The typical approach to assigning handler functions to JSX properties is to use a named function. However, sometimes you might want to use an inline function. This is done by assigning an arrow function directly to the event property in the JSX markup. Find the code for this section on GitHub.
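A minimal sketch of such an inline handler:

```javascript
import React from 'react';

// An inline arrow-function handler, declared directly in the JSX markup.
export default () => (
  <button onClick={() => console.log('clicked')}>Click Me</button>
);
```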
The main use of inlining event handlers like this is when you have a static parameter value that you want to pass to another function. In this example, you're calling console.log() with the string clicked. You could have set up a special function for this purpose outside of the JSX markup by creating a new function using bind(), or by using a higher-order function. But then you would have to think of yet another name for yet another function. Inlining is just easier sometimes.

Binding handlers to elements

When you assign an event handler function to an element in JSX, React doesn't actually attach an event listener to the underlying DOM element. Instead, it adds the function to an internal mapping of functions. There's a single event listener on the document for the page. As events bubble up through the DOM tree to the document, the React handler checks to see whether any components have matching handlers.

Why does React go to all of this trouble, you might ask? To keep the declarative UI structures separated from the DOM as much as possible. For example, when a new component is rendered, its event handler functions are simply added to the internal mapping maintained by React. When an event is triggered and it hits the document object, React maps the event to the handlers. If a match is found, it calls the handler. Finally, when the React component is removed, the handler is simply removed from the list of handlers. None of these operations actually touch the DOM; it's all abstracted by a single event listener. This is good for performance and the overall architecture (it keeps the render target separate from the application code).

Synthetic event objects

When you attach an event handler function to a DOM element using the native addEventListener() function, the callback gets an event argument passed to it. Event handler functions in React are also passed an event argument, but it's not the standard Event instance. It's called SyntheticEvent, and it's a simple wrapper for native event instances. Synthetic events serve two purposes in React:

- They provide a consistent event interface, normalizing browser inconsistencies.
- They contain information that's necessary for propagation to work.

Event pooling

One challenge with wrapping native event instances is that it can cause performance issues: every synthetic event wrapper that's created will also need to be garbage collected at some point, which can be expensive in terms of CPU time. If your application only handles a few events, this wouldn't matter much. But even by modest standards, applications respond to many events, even if the handlers don't actually do anything with them. This is problematic if React constantly has to allocate new synthetic event instances.

React deals with this problem by allocating a synthetic instance pool. Whenever an event is triggered, it takes an instance from the pool and populates its properties. When the event handler has finished running, the synthetic event instance is released back into the pool. This prevents the garbage collector from running frequently when a lot of events are triggered. The pool keeps a reference to the synthetic event instances, so they're never eligible for garbage collection, and React never has to allocate new instances either.

However, there is one gotcha that you need to be aware of. It involves accessing the synthetic event instances from asynchronous code in your event handlers. This is an issue because, as soon as the handler has finished running, the instance goes back into the pool, and when it goes back into the pool, all of its properties are cleared. Here's an example that shows how this can go wrong. Find the code for this section on GitHub. The second call to console.log() attempts to access a synthetic event property from an asynchronous callback that doesn't run until the event handler completes, by which point the event has emptied its properties. This results in a warning and an undefined value.

This tutorial introduced you to event handling in React. The key differentiator between React and other approaches to event handling is that handlers are declared in JSX markup. This makes tracking down which elements handle which events much simpler. We learned that it's a good idea to share event handling functions that handle generic behavior. We saw the various ways to bind the event handler function context and parameter values. Then, we discussed inline event handler functions and their potential use, as well as how React actually binds a single DOM event handler to the document object. Synthetic events are an abstraction that wraps the native event, and you learned why they're necessary and how they're pooled for efficient memory consumption.

If you found this post useful, do check out the book, React and React Native - Second Edition. This book guides you through building applications for web and native mobile platforms with React, JSX, Redux, and GraphQL.

JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

React introduces Hooks, a JavaScript function to allow using React without classes

React 16.6.0 releases with a new way of code splitting, and more!


The Cambridge Analytica scandal and ethics in data science

Richard Gall
20 Mar 2018
5 min read
Earlier this month, Stack Overflow published the results of its 2018 developer survey. In it, there was an interesting set of questions around the concept of 'ethical code'. The main takeaway was ultimately that the issue remains a gray area. The Cambridge Analytica scandal, however, has given the issue of 'ethical code' a renewed urgency in the last couple of days. The data analytics company is alleged not only to have been involved in votes in the UK and US, but also to have harvested copious amounts of data from Facebook (illegally). For whistleblower Christopher Wylie, the issue of ethical code is particularly pronounced. "I created Steve Bannon's psychological mindfuck tool," he told Carole Cadwalladr in an interview in the Guardian.

Cambridge Analytica: psyops or just market research?

Wylie is a data scientist whose experience over the last half a decade or so has been impressive. It's worth noting, however, that Wylie's career didn't begin in politics: his academic career was focused primarily on fashion forecasting. That might all seem a little prosaic, but it underlines the fact that data science never happens in a vacuum. Data scientists always operate within a given field. It might be tempting to view the world purely through the prism of impersonal data and cold statistics. To a certain extent you have to if you're a data scientist or a statistician. But at the very least this can be unhelpful; at worst, it's a potential threat to global democracy.

At one point in the interview Wylie remarks that "...it's normal for a market research company to amass data on domestic populations. And if you're working in some country and there's an auxiliary benefit to a current client with aligned interests, well that's just a bonus."

This is potentially the most frightening thing. Cambridge Analytica's ostensible role in elections and referenda isn't actually that remarkable. For all the vested interests and meetings between investors, researchers, and entrepreneurs, the scandal is really just the extension of data mining and marketing tactics employed by just about every organization with a digital presence on the planet.

Data scientists are always going to be in a difficult position. True, we're not all going to end up working alongside Steve Bannon. But your skills are always being deployed with a very specific end in mind. It's not always easy to see the effects and impact of your work until later, but it's still essential for data scientists and analysts to be aware of whose data is being collected and used, how it's being used, and why.

Who is responsible for the ethics around data and code?

There was another interesting question in the Stack Overflow survey that's relevant to all of this. The survey asked respondents who was ultimately most responsible for code that accomplishes something unethical: 57.5% claimed upper management were responsible, 22.8% said the person who came up with the idea, and 19.7% said it was the responsibility of the developer themselves.

Clearly the question is complex; the truth lies somewhere between all three. Management makes decisions about what's required from an organizational perspective, but the engineers themselves are, of course, a part of the wider organizational dynamic. They should be in a position where they are able to communicate any personal misgivings or broader legal issues with the work they are being asked to do. The case of Wylie and Cambridge Analytica is unique, however.
But it does highlight that data science can be deployed in ways that are difficult to predict. And without proper channels of escalation and the right degree of transparency, it's easy for things to remain secretive, hidden in small meetings, email threads, and paper trails.

That's another thing that data scientists need to remember. Office politics might be a fact of life, but when you're a data scientist you're sitting at the apex of legal, strategic, and political issues. To refuse to be aware of this would be naive.

What the Cambridge Analytica story can teach data scientists

But there's something else worth noting. This story also illustrates something more about the world in which data scientists are operating. This is a world where traditional infrastructure is being dismantled, and where privatization and outsourcing are viewed as the route towards efficiency and 'value for money'. Whether you think that's a good or bad thing isn't really the point here. What's important is that it makes the way we use data, even the code we write, more problematic than ever, because it's not always easy to see how it's being used.

Arguably, Wylie was naive. His curiosity and desire to apply his data science skills to intriguing and complex problems led him towards people who knew just how valuable he could be. Wylie has evidently developed greater self-awareness, which is perhaps the main reason why he has come forward with his version of events.

But as this saga unfolds, it's worth remembering the value of data scientists in the modern world, for a range of organizations. It has made the concept of the 'citizen data scientist' take on an even more urgent and literal meaning. Yes, data science can help to empower the economy and possibly even toy with democracy. But it can also be used to empower people and improve transparency in politics and business. If anything, the Cambridge Analytica saga proves that data science is a dangerous field - not only the sexiest job of the twenty-first century, but one of the most influential in shaping the kind of world we're going to live in. That's frightening, but it's also pretty exciting.

Why does the C programming language refuse to die?

Kunal Chaudhari
23 Oct 2018
8 min read
As a technology research analyst, I try to keep pace with the changing world of technology. It seems like every single day there is a new programming language, framework, or tool emerging out of nowhere. In order to keep up, I regularly have a peek at the listicles on TIOBE, PyPL, and Stack Overflow, along with some Twitter handles and popular blogs, which keeps my FOMO (fear of missing out) in check.

So here I was, strolling through the TIOBE index, to see if a new programming language was making the rounds or if any old-timer language was facing its doomsday in the lower half of the table. The first thing that caught my attention was Python, which interestingly broke into the top 3 for the first time since it was ranked by TIOBE. I never cared to look at Java, since it has been claiming the throne ever since it became popular. But with my pupils dilated, I saw something which I would have never expected, especially with the likes of Python, C#, Swift, and JavaScript around. There it was, the language which everyone seemed to have forgotten about, C, sitting at the second position, like an old tower among the modern skyscrapers of New York.

A quick scroll down shocked me even more: C was only recently named the language of 2017 by TIOBE. The reason it won was its impressive yearly growth of 1.69% and its consistency - C has been featured in the top 3 list for almost four decades now. This result was in stark contrast to many news sources (including Packt's own research) that regularly place languages like Python and JavaScript on top of their polls. But surely this was an indicator of something. Why would a language which is almost 50 years old still hold its ground against the ranks of newer programming languages?

C has a design philosophy for the ages

A solution to the challenges of UNIX and Assembly

The 70s was a historic decade for computing. Many notable inventions and developments, particularly in the areas of networking, programming, and file systems, took place. UNIX was one such revolutionary milestone, but the biggest problem with UNIX was that it was programmed in Assembly language. Assembly was fine for machines, but difficult for humans.

Watch now: Learn and Master C Programming For Absolute Beginners

So the team working on UNIX, namely Dennis Ritchie, Ken Thompson, and Brian Kernighan, decided to develop a language which could understand data types and supported data structures. They wanted C to be as fast as Assembly but with the features of a high-level language. And that's how C came into existence, almost out of necessity. But the principles on which the C programming language was built were not coincidental. It compelled programmers to write better code and strive for efficiency, rather than being productive by providing a lot of abstractions. Let's discuss some features which make C a language to behold.

Portability leads to true ubiquity

When you search for the biggest feature of C, almost instantly you are bombarded with articles on portability, which makes you wonder what it is about portability that makes C relevant in the modern world of computing. Well, portability can be defined as the measure of how easily software can be transferred from one computer environment or architecture to another. One can also argue that portability is directly proportional to how flexible your software is.
Applications or software developed using C are considered to be extremely flexible because you can find a C compiler for almost every possible platform available today. So if you develop your application by simply exercising some discipline to write portable code, you have yourself an application which runs on virtually every major platform. Programmer-driven memory management It is universally accepted that C is a high-performance language. The primary reason for this is that it works very close to the machine, almost like an Assembly language. But very few people realize that versatile features like explicit memory management make C one of the better-performing languages out there. Memory management allows programmers to scale down a program to run with a small amount of memory. This feature was important in the early days because the computers, or terminals as they were called then, were not as powerful as they are today. But the advent of mobile devices and embedded systems has renewed the interest of programmers in the C language, because these mobile devices demand that programmers keep memory requirements to a minimum. Many of the programming languages today provide functionalities like garbage collection that take care of memory allocation. But C calls programmers’ bluff by asking them to be very specific. This makes their programs memory efficient and inherently fast. Manual memory management makes C one of the most suitable languages for developing other programming languages. This is because even a garbage collector needs someone to take care of memory allocation - and that infrastructure is provided by C. Structure is all I got As discussed before, Assembly was difficult to work with, particularly when dealing with large chunks of code. C has a structured approach in its design which allows programmers to break down the program into multiple blocks of code for execution, often called procedures or functions. There are, of course, multiple ways in which software development can be approached. Structured programming is one such approach that is effective when you need to break down a problem into its component pieces and then convert it into application code. Although it might not be quite as in vogue as object-oriented programming is today, this approach is well suited to tasks like database scripting or developing small programs with logical sequences to carry out a specific set of tasks. As one of the best languages for structured programming, it’s easy to see how C has remained popular, especially in the context of embedded systems and kernel development. Applications that stand the test of time If Beyoncé were a programmer, she might well have sung “Who runs the world? C developers”. And she would have been right. If you’re using a digital alarm clock, a microwave, or a car with anti-lock brakes, chances are that they have been programmed using C. Though it was never developed specifically for embedded systems, C has become the de facto programming language for embedded developers, systems programmers, and kernel development. C: the backbone of our operating systems We already know that the world-famous UNIX system was developed in C, but is it the only popular application that has been developed using C? You’ll be astonished to see the list of applications that follows: The desktop operating system market is dominated by three major operating systems: Windows, macOS, and Linux.
The kernels of all these OSes have been developed using the C programming language. Similarly, Android, iOS, and Windows are some of the popular mobile operating systems whose kernels were developed in C. Just like UNIX, the development of Oracle Database began in Assembly and then switched to C. It’s still widely regarded as one of the best database systems in the world. Not only Oracle, but MySQL and PostgreSQL have also been developed using C - the list goes on and on. What does the future hold for C? So far, we have discussed the high points of C programming, its design principles, and the applications that were developed using it. But the bigger question is what its future might hold. The answer to this question is tricky, but there are several indicators which show positive signs. IoT is one such domain where the C programming language shines. Whether or not beginner programmers should learn C has been a topic of debate everywhere. The general consensus says that learning C is always a good thing, as it builds up your fundamental knowledge of programming and it looks good on the resume. But the rapid growth of the IoT industry provides another reason to learn C. We have already seen the massive number of applications built on C, whose codebases are still maintained in it. Switching to a different language means increased cost for the company. Since it is used by numerous enterprises across the globe, the demand for C programmers is unlikely to vanish anytime soon. Read Next Rust as a Game Programming Language: Is it any good? Google releases Oboe, a C++ library to build high-performance Android audio apps Will Rust Replace C++?


Getting Started with Metasploitable2 and Kali Linux

Packt
16 Mar 2017
8 min read
In this article, by Michael Hixon, the author of the book, Kali Linux Network Scanning Cookbook - Second Edition, we will be covering: Installing Metasploitable2 Installing Kali Linux Managing Kali services (For more resources related to this topic, see here.) Introduction We need to first configure a security lab environment using VMware Player (Windows) or VMware Fusion (macOS), and then install Ubuntu server and Windows server on the VMware Player. Installing Metasploitable2 Metasploitable2 is an intentionally vulnerable Linux distribution and is also a highly effective security training tool. It comes fully loaded with a large number of vulnerable network services and also includes several vulnerable web applications. Getting ready Prior to installing Metasploitable2 in your virtual security lab, you will first need to download it from the Web. There are many mirrors and torrents available for this. One relatively easy method to acquire Metasploitable is to download it from SourceForge at the following URL: http://sourceforge.net/projects/metasploitable/files/Metasploitable2/. How to do it… Installing Metasploitable2 is likely to be one of the easiest installations that you will perform in your security lab. This is because it is already prepared as a VMware virtual machine when it is downloaded from SourceForge. Once the ZIP file has been downloaded, you can easily extract the contents of this file in Windows or macOS by double-clicking on it in Explorer or Finder respectively. Have a look at the following screenshot: Once extracted, the ZIP file will return a directory with five additional files inside. Included among these files is the VMware VMX file. To use Metasploitable in VMware, just click on the File drop-down menu and click on Open. Then, browse to the directory created from the ZIP extraction process and open Metasploitable.vmx as shown in the following screenshot: Once the VMX file has been opened, it should be included in your virtual machine library. Select it from the library and click on Run to start the VM and get the following screen: After the VM loads, the splash screen will appear and request login credentials. The default credentials to log in are msfadmin for both the username and password. This machine can also be accessed via SSH. How it works… Metasploitable was built with the idea of security testing education in mind. This is a highly effective tool, but it must be handled with care. The Metasploitable system should never be exposed to any untrusted networks. It should never be assigned a publicly routable IP address, and port forwarding should not be used to make services accessible over the Network Address Translation (NAT) interface. Installing Kali Linux Kali Linux is known as one of the best hacking distributions, providing an entire arsenal of penetration testing tools. The developers recently released Kali Linux 2016.2, which solidified their efforts in making it a rolling distribution. Different desktop environments have been released alongside GNOME in this release, such as e17, LXDE, Xfce, MATE, and KDE. Kali Linux will be kept up to date with the latest improvements and tools via weekly updated ISOs. We will be using Kali Linux 2016.2 with GNOME as our development environment for many of the scanning scripts. Getting ready Prior to installing Kali Linux in your virtual security testing lab, you will need to acquire the ISO file (image file) from a trusted source.
The Kali Linux ISO can be downloaded at http://www.kali.org/downloads/. How to do it… After selecting the Kali Linux .iso file, you will be asked what operating system you are installing. Currently, Kali Linux is built on Debian 8.x; choose this and click Continue. You will see a finish screen, but let's customize the settings first. Kali Linux requires at least 15 GB of hard disk space and a minimum of 512 MB of RAM. After booting from the Kali Linux image file, you will be presented with the initial boot menu. Here, scroll down to the sixth option, Install, and press Enter to start the installation process: Once started, you will be guided through a series of questions to complete the installation process. Initially, you will be asked to provide your location (country) and language. You will then be provided with an option to manually select your keyboard configuration or use a guided detection process. The next step will request that you provide a hostname for the system. If the system will be joined to a domain, ensure that the hostname is unique, as shown in the following screenshot: Next, you will need to set the password for the root account. It is recommended that this be a fairly complex password that will not be easily compromised. Have a look at the following screenshot: Next, you will be asked to provide the time zone you are located in. The system will use IP geolocation to provide its best guess of your location. If this is not correct, manually select the correct time zone: To set up your disk partition, using the default method and partitioning scheme should be sufficient for lab purposes: It is recommended that you use a mirror to ensure that your software in Kali Linux is kept up to date: Next, you will be asked to provide an HTTP proxy address. An external HTTP proxy is not required for any of the exercises, so this can be left blank: Finally, choose Yes to install the GRUB boot loader and then press Enter to complete the installation process. When the system loads, you can log in with the root account and the password provided during the installation: How it works… Kali Linux is a Debian Linux distribution that has a large number of preinstalled, third-party penetration tools. While all of these tools could be acquired and installed independently, the organization and implementation that Kali Linux provides makes it a useful tool for any serious penetration tester. Managing Kali services Having certain services start automatically can be useful in Kali Linux. For example, let's say I want to be able to SSH to my Kali Linux distribution. By default, the SSH server does not start on Kali, so I would need to log in to the virtual machine, open a terminal, and run the command to start the service. Getting ready Prior to modifying the Kali Linux configuration, you will need to have installed the operating system on a virtual machine. How to do it… We begin by logging into our Kali Linux distribution and opening a terminal window. Type in the following command: apt-get install openssh-server More than likely it is already installed and you will see a message saying so. So now that we know it is installed, let us see if the service is running. From the terminal, type: service ssh status If the SSH server is not running, you will see the service reported as inactive. Press Ctrl + C to get back to the prompt.
Now let's start the service and check its status again by typing the following commands: service ssh start service ssh status You should now see the service reported as active and running. So now the service is running - great - but if we reboot, we will see that the service does not start automatically. To get the service to start every time we boot, we need to make a few configuration changes. Kali Linux puts in extra measures to make sure you do not have services starting automatically. Specifically, it has a service whitelist and blacklist file. So to get SSH to start at boot, we will need to remove the SSH service from the blacklist. To do this, open the blacklist file in a terminal editor: nano /usr/sbin/update-rc.d Navigate down to the section labeled List of blacklisted init scripts and find ssh. Now we will just add a # symbol to the beginning of that line, save the file, and exit. The file should look similar to the following screenshot: Now that we have removed the blacklist policy, all we need to do is enable ssh at boot. To do this, run the following command from your terminal: update-rc.d -f ssh enable That's it! Now when you reboot, the service will begin automatically. You can use this same procedure to start other services automatically at boot time. How it works… The rc.local file is executed after all the normal Linux services have started. It can be used to start services you want available after you boot your machine. Summary In this article, we learned about Metasploitable2 and its installation. We also covered what Kali Linux is, how it is installed, and how its services are managed. The organization and implementation that Kali Linux provides make it a useful tool for any serious penetration tester. Resources for Article: Further resources on this subject: Revisiting Linux Network Basics [article] Fundamental SELinux Concepts [article] Creating a VM using VirtualBox - Ubuntu Linux [article]
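A quick note for readers on newer builds: on current, systemd-based Kali releases, the whitelist/blacklist mechanism described above has been retired, and the same result can be achieved with systemd alone (a minimal sketch, assuming the OpenSSH server package is installed):

# Start the SSH service now
systemctl start ssh
# Enable the SSH service at every boot
systemctl enable ssh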


How to Install VirtualBox Guest Additions

Packt
28 Apr 2010
6 min read
In this article by Alfonso V. Romero, author of VirtualBox 3.1: Beginner's Guide, you will learn what the Guest Additions are and how to install them on Windows, Linux, and OpenSolaris virtual machines. Introducing Guest Additions Ok, you have been playing with a couple of virtual machines by now, and I know it feels great to be capable of running two different operating systems on the same machine, but as I said at the beginning of this article, what's the use if you can't share information between your host and guest systems, or if you can't maximize your guest screen? Well, that's what Guest Additions are for. With the Guest Additions installed in a virtual machine, you'll be able to enjoy all of these: Full keyboard and mouse integration between your host and your guest operating systems: This means you won't need to use the capture/uncapture feature anymore! Enhanced video support in your guest virtual machine: You will be able to use 3D and 2D video acceleration features, and if you resize your virtual machine's screen, its video resolution will adjust automatically. Say hello to full screen! Better time synchronization between host and guest: A virtual machine doesn't know it's running inside another computer, so it expects to have 100% of the CPU and all the other resources without any interference. Since the host computer needs to use those resources too, sometimes it can get messy, especially if both host and guest are running several applications at the same time, as would be the case in most situations. But don't worry about it! Guest Additions re-synchronize your virtual machine's time regularly to avoid any serious problems. Shared folders: This is one of my favorite Guest Addition features! You can designate one or more folders to share files easily between your host and your guest, as if they were network shares. Seamless windows: This is another amazing feature that lets you use any application in your guest as if you were running it directly from your host PC. For example, if you have a Linux host and a Windows virtual machine, you'll be able to use MS Word or MS Excel as if you were running it directly from your Linux machine! Shared clipboard: This is a feature I couldn't live without because it lets you copy and paste information between your host and guest applications seamlessly. Automated Windows guest logons: The Guest Additions for Windows provide modules to automate the logon process in a Windows virtual machine. Now that I've shown you that life isn't worth living without the Guest Additions installed in your virtual machines, let's see how to install them on Windows, Linux, and OpenSolaris guests. Installing Guest Additions for Windows "Hmmm… I still can't see how you plan to get everyone in this office using VirtualBox… you can't even use that darned virtual machine in full screen!" your boss says in a mocking tone of voice. "Well boss, if you stick with me during this article's exercises, you'll get your two cents' worth!" you respond, feeling like the new kid in town… Time for action – installing Guest Additions on a Windows XP virtual machine In this exercise, I'll show you how to install Guest Additions on your Windows XP virtual machine so that your boss can stop whining about not being able to use the Windows VM as a real PC... Open VirtualBox, and start your WinXP virtual machine.
Press F8 when Windows XP is booting to enter the Windows Advanced Options Menu, select the Safe Mode option, and press Enter to continue: Wait for Windows XP to boot, and then log in with your administrator account. The Windows is running in safe mode dialog will show up. Click on Yes to continue, and wait for Windows to finish booting up. Then select the Devices option from VirtualBox's menu bar, and click on the Install Guest Additions option: VirtualBox will mount the Guest Additions ISO file on your virtual machine's CD/DVD-ROM drive, and the Guest Additions installer will start automatically: Click on Next to continue. The License Agreement dialog will appear next. Click on the I Agree button to accept the agreement and continue. Leave the default destination folder on the Choose Install Location dialog, and click on Next to continue. The Choose Components dialog will appear next. Select the Direct 3D Support option to enable it, and click on Install to continue: Guest Additions will begin to install in your virtual machine. The next dialog will inform you that the Guest Additions setup program replaced some Windows system files, and if you receive a warning dialog from the Windows File Protection mechanism, you need to click on the Cancel button of that warning dialog to avoid restoring the original files. Click on OK to continue. The setup program will ask if you want to reboot your virtual machine to complete the installation process. Make sure the Reboot now option is enabled, and click on Finish to continue. Your WinXP virtual machine will reboot automatically, and once it has finished booting up, a VirtualBox - Information dialog will appear, to tell you that your guest operating system now supports mouse pointer integration. Enable the Do not show this message again option, and click on the OK button to continue. Now you'll be able to move your mouse freely between your virtual machine's area and your host machine's area! You'll also be able to resize your virtual machine's screen, and it will adjust automatically! What just happened? Hey, it was pretty simple and neat, huh? Now you can start to get the real juice out of your Windows virtual machines! And your boss will never complain again about your wonderful idea of virtualizing your office environment with VirtualBox! The Guest Additions installation process on Windows virtual machines is a little bit more complicated than on its Linux or OpenSolaris counterparts, because Windows has a file protection mechanism that interferes with some system files VirtualBox needs to replace. By using Safe Mode, VirtualBox can override the file protection mechanism, and the Guest Additions software can be installed successfully. But if you don't want or need 3D guest support, you can install Guest Additions in Windows' normal mode.
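Incidentally, if you'd rather script this kind of setup than click through the GUI, VirtualBox ships with the VBoxManage command-line tool, which can configure features such as the shared folders described above directly. A small sketch follows; the VM name "WinXP" and the host path are illustrative assumptions:

# Attach a host folder to the VM as a shared folder (run while the VM is powered off)
VBoxManage sharedfolder add "WinXP" --name labshare --hostpath /home/user/labshare --automount
# Verify the VM's configuration, including its shared folders
VBoxManage showvminfo "WinXP"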


Getting started with OpenGL ES 3.0 Using GLSL 3.0

Packt
04 Jun 2015
27 min read
In this article by Parminder Singh, author of OpenGL ES 3.0 Cookbook, we will program shaders in OpenGL ES shading language 3.0, load and compile a shader program, link a shader program, check errors in OpenGL ES 3.0, use the per-vertex attribute to send data to a shader, use uniform variables to send data to a shader, and program the OpenGL ES 3.0 Hello World Triangle. (For more resources related to this topic, see here.) OpenGL ES 3.0 stands for Open Graphics Library for embedded systems version 3.0. It is a set of standard API specifications established by the Khronos Group. The Khronos Group is an association of members and organizations that are focused on producing open standards for royalty-free APIs. The OpenGL ES 3.0 specifications were publicly released in August 2012. These specifications are backward compatible with OpenGL ES 2.0, which is a well-known de facto standard for embedded systems to render 2D and 3D graphics. Embedded operating systems such as Android, iOS, BlackBerry, Bada, Windows, and many others support OpenGL ES. OpenGL ES 3.0 is a programmable pipeline. A pipeline is a set of events that occur in a predefined fixed sequence, from the moment input data is given to the graphics engine until the output data for rendering the frame is generated. A frame refers to an image produced as an output on the screen by the graphics engine. This article covers OpenGL ES 3.0 development using C/C++; you can refer to the book OpenGL ES 3.0 Cookbook for more information on building OpenGL ES 3.0 applications on Android and iOS platforms. We will begin this article by understanding the basic programming of OpenGL ES 3.0 with the help of a simple example to render a triangle on the screen. You will learn how to set up and create your first application on both platforms step by step. Understanding EGL: The OpenGL ES APIs require EGL as a prerequisite before they can effectively be used on the hardware devices. EGL provides an interface between the OpenGL ES APIs and the underlying native windowing system. Different OS vendors have their own ways to manage the creation of drawing surfaces, communication with hardware devices, and other configurations to manage the rendering context. EGL abstracts how the underlying system needs to be implemented, in a platform-independent way. EGL provides two important things to the OpenGL ES APIs: Rendering context: This stores the data structures and important OpenGL ES states that are essentially required for rendering purposes Drawing surface: This provides the drawing surface to render primitives The following screenshot shows the OpenGL ES 3.0 programmable pipeline architecture. EGL has the following responsibilities: Checking the available configurations to create a rendering context for the device windowing system Creating the OpenGL rendering surface for drawing Compatibility and interfacing with other graphics APIs such as OpenVG, OpenAL, and so on Managing resources such as texture mapping Programming shaders in OpenGL ES shading language 3.0 OpenGL ES shading language 3.0 (also called GLSL) is a C-like language that allows us to write shaders for programmable processors in the OpenGL ES processing pipeline. Shaders are small programs that run on the GPU in parallel. OpenGL ES 3.0 supports two types of shaders: the vertex shader and the fragment shader. Each shader has specific responsibilities.
For example, the vertex shader is used to process geometric vertices; however, the fragment shader processes the pixel or fragment color information. More specifically, the vertex shader processes the vertex information by applying 2D/3D transformations. The output of the vertex shader goes to the rasterizer, where the fragments are produced. The fragments are processed by the fragment shader, which is responsible for coloring them. The order of execution of the shaders is fixed; the vertex shader is always executed first, followed by the fragment shader. Each shader can share its processed data with the next stage in the pipeline. Getting ready There are two types of processors in the OpenGL ES 3.0 processing pipeline to execute the vertex shader and fragment shader executables; these are called programmable processing units: Vertex processor: The vertex processor is a programmable unit that operates on the incoming vertices and related data. It uses the vertex shader executable and runs it on the vertex processor. The vertex shader needs to be programmed, compiled, and linked first in order to generate an executable, which can then be run on the vertex processor. Fragment processor: The fragment processor uses the fragment shader executable to process fragment or pixel data. The fragment processor is responsible for calculating the colors of the fragments. It cannot change the position of the fragments, and it cannot access neighboring fragments either. However, it can discard pixels. The computed color values from this shader are used to update the framebuffer memory and texture memory. How to do it... Here are the sample codes for the vertex and fragment shaders: Program the following vertex shader and store it into the vertexShader character type array variable: #version 300 es in vec4 VertexPosition, VertexColor; uniform float RadianAngle; out vec4 TriangleColor; mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle), -sin(RadianAngle), cos(RadianAngle)); void main() { gl_Position = mat4(rotation)*VertexPosition; TriangleColor = VertexColor; } Program the following fragment shader and store it into another character array type variable called fragmentShader: #version 300 es precision mediump float; in vec4 TriangleColor; out vec4 FragColor; void main() { FragColor = TriangleColor; } How it works... Like most languages, a shader program also starts its control from the main() function. In both shader programs, the first line, #version 300 es, specifies the GLES shading language version number, which is 3.0 in the present case. The vertex shader receives a per-vertex input variable VertexPosition. The data type of this variable is vec4, which is one of the inbuilt data types provided by the OpenGL ES Shading Language. The in keyword at the beginning of the variable specifies that it is an incoming variable and it receives some data outside the scope of our current shader program. Similarly, the out keyword specifies that the variable is used to send some data value to the next stage of the shader. Similarly, the color information data is received in VertexColor. This color information is passed to TriangleColor, which sends this information to the fragment shader, the next stage of the processing pipeline. The RadianAngle is a uniform type of variable that contains the rotation angle. This angle is used to calculate the rotation matrix to make the rendered triangle revolve.
The input values received by VertexPosition are multiplied by the rotation matrix, which will rotate the geometry of our triangle. This value is assigned to gl_Position. The gl_Position is an inbuilt variable of the vertex shader. The vertex position in homogeneous form is supposed to be written to this variable. This value can be used by any of the fixed functionality stages, such as primitive assembly, rasterization, culling, and so on. In the fragment shader, the precision keyword specifies the default precision of all floating types (and aggregates, such as mat4 and vec4) to be mediump. The acceptable values of such declared types need to fall within the range specified by the declared precision. The OpenGL ES Shading Language supports three types of precision: lowp, mediump, and highp. Specifying the precision in the fragment shader is compulsory. However, for the vertex shader, if the precision is not specified, it is considered to be the highest (highp). The FragColor is an out variable, which sends the calculated color values for each fragment to the next stage. It accepts the value in the RGBA color format. There's more… As mentioned, there are three types of precision qualifiers; the range and precision of these precision qualifiers are shown in the following table: Loading and compiling a shader program The shader program created needs to be loaded and compiled into a binary form. This section will be helpful in understanding the procedure of loading and compiling a shader program. Getting ready Compiling and linking a shader is necessary so that these programs are understandable and executable by the underlying graphics hardware/platform (that is, the vertex and fragment processors). How to do it... In order to load and compile the shader source, use the following steps: Create a NativeTemplate.h/NativeTemplate.cpp and define a function named loadAndCompileShader in it. Use the following code, and proceed to the next step for detailed information about this function: GLuint loadAndCompileShader(GLenum shaderType, const char* sourceCode) { GLuint shader = glCreateShader(shaderType); // Create the shader if ( shader ) { // Pass the shader source code glShaderSource(shader, 1, &sourceCode, NULL); glCompileShader(shader); // Compile the shader source code // Check the status of compilation GLint compiled = 0; glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled); if (!compiled) { GLint infoLen = 0; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen); if (infoLen) { char* buf = (char*) malloc(infoLen); if (buf) { glGetShaderInfoLog(shader, infoLen, NULL, buf); printf("Could not compile shader:\n%s\n", buf); free(buf); } glDeleteShader(shader); // Delete the shader object shader = 0; } } } return shader; } This function is responsible for loading and compiling a shader source. The argument shaderType accepts the type of shader that needs to be loaded and compiled; it can be GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. The sourceCode specifies the source program of the corresponding shader. Create an empty shader object using the glCreateShader OpenGL ES 3.0 API. This API returns a non-zero value if the object is successfully created. This value is used as a handle to reference this object. On failure, this function returns 0. The shaderType argument specifies the type of the shader to be created.
It must be either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER: GLuint shader = glCreateShader(shaderType); Unlike in C++, where object creation is transparent, in OpenGL ES, the objects are created behind the curtains. You can access, use, and delete the objects as and when required. All the objects are identified by a unique identifier, which can be used for programming purposes. The created empty shader object (shader) needs to be bound first with the shader source in order to compile it. This binding is performed by using the glShaderSource API: // Load the shader source code glShaderSource(shader, 1, &sourceCode, NULL); The API sets the shader code string in the shader object, shader. The source string is simply copied into the shader object; it is not parsed or scanned. Compile the shader using the glCompileShader API. It accepts a shader object handle shader: glCompileShader(shader); // Compile the shader The compilation status of the shader is stored as a state of the shader object. This state can be retrieved using the glGetShaderiv OpenGL ES API: GLint compiled = 0; // Check compilation status glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled); The glGetShaderiv API accepts the handle of the shader and GL_COMPILE_STATUS as an argument to check the status of the compilation. It retrieves the status in the compiled variable. The compiled variable returns GL_TRUE if the last compilation was successful. Otherwise, it returns GL_FALSE. Use glGetShaderInfoLog to get the error report. The shader is deleted if the shader source cannot be compiled. Delete the shader object using the glDeleteShader API. Return the shader object ID if the shader is compiled successfully: return shader; // Return the shader object ID How it works... The loadAndCompileShader function first creates an empty shader object. This empty object is referenced by the shader variable. This object is bound with the source code of the corresponding shader. The source code is compiled through a shader object using the glCompileShader API. If the compilation is successful, the shader object handle is returned successfully. Otherwise, the function returns 0 and the shader object needs to be deleted explicitly using glDeleteShader. The status of the compilation can be checked using glGetShaderiv with GL_COMPILE_STATUS. There's more... In order to differentiate among various versions of OpenGL ES and the GL shading language, it is useful to get this information from the current driver of your device. This will be helpful to make the program robust and manageable by avoiding errors caused by version upgrades or by the application being installed on older versions of OpenGL ES and GLSL. Other vital information can be queried from the current driver, such as the vendor, renderer, and available extensions supported by the device driver. This information can be queried using the glGetString API. This API accepts a symbolic constant and returns the queried system metric in string form. The printGLString wrapper function in our program helps in printing device metrics: static void printGLString(const char *name, GLenum s) { printf("GL %s = %s\n", name, (const char *) glGetString(s)); } Linking a shader program Linking is a process of aggregating a set (vertex and fragment) of shaders into one program that maps to the entirety of the programmable phases of the OpenGL ES 3.0 graphics pipeline. The shaders are compiled using shader objects.
These objects are used to create special objects called program objects, to link them to the OpenGL ES 3.0 pipeline. How to do it... The following instructions provide a step-by-step procedure to link a shader: Create a new function, linkShader, in NativeTemplate.cpp. This will be the wrapper function to link a shader program to the OpenGL ES 3.0 pipeline. Follow these steps to understand this program in detail: GLuint linkShader(GLuint vertShaderID, GLuint fragShaderID){ if (!vertShaderID || !fragShaderID){ // Fails! return return 0; } // Create an empty program object GLuint program = glCreateProgram(); if (program) { // Attach vertex and fragment shader to it glAttachShader(program, vertShaderID); glAttachShader(program, fragShaderID); // Link the program glLinkProgram(program); GLint linkStatus = GL_FALSE; glGetProgramiv(program, GL_LINK_STATUS, &linkStatus); if (linkStatus != GL_TRUE) { GLint bufLength = 0; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength); if (bufLength) { char* buf = (char*) malloc(bufLength); if(buf) { glGetProgramInfoLog(program, bufLength, NULL, buf); printf("Could not link program:\n%s\n", buf); free(buf); } } glDeleteProgram(program); program = 0; } } return program; } Create a program object with glCreateProgram. This API creates an empty program object, with which the shader objects will be linked: GLuint program = glCreateProgram(); //Create shader program Attach shader objects to the program object using the glAttachShader API. It is necessary to attach the shaders to the program object in order to create the program executable: glAttachShader(program, vertShaderID); glAttachShader(program, fragShaderID); How it works... The linkShader wrapper function links the shader. It accepts two parameters: vertShaderID and fragShaderID. They are identifiers of the compiled shader objects. The glCreateProgram function creates a program object. It is another OpenGL ES object to which shader objects are attached using glAttachShader. The shader objects can be detached from the program object if they are no longer needed. The program object is responsible for creating the executable program that runs on the programmable processor. A program in OpenGL ES is an executable in the OpenGL ES 3.0 pipeline that runs on the vertex and fragment processors. The program object is linked using glLinkProgram. If the linking fails, the program object must be deleted using glDeleteProgram. When a program object is deleted, it automatically detaches the shader objects associated with it. The shader objects need to be deleted explicitly. If a program object is requested for deletion, it will not be deleted until it is no longer in use by some other rendering context in the current OpenGL ES state. If the program object links successfully, then one or more executables will be created, depending on the number of shaders attached to the program. The executable can be used at runtime with the help of the glUseProgram API. It makes the executable a part of the current OpenGL ES state. Checking errors in OpenGL ES 3.0 While programming, it is very common to get unexpected results or errors in the programmed source code. It's important to make sure that the program does not generate any errors. In such a case, you would want to handle the error gracefully. OpenGL ES 3.0 allows us to check for errors using a simple routine called glGetError.
The following wrapper function prints all the error messages that occurred in the program: static void checkGlError(const char* op) { for(GLint error = glGetError(); error; error = glGetError()){ printf("after %s() glError (0x%x)\n", op, error); } } Here are a few examples of code that produce OpenGL ES errors: glEnable(GL_TRIANGLES);   // Gives a GL_INVALID_ENUM error   // Gives a GL_INVALID_VALUE when attribID >= GL_MAX_VERTEX_ATTRIBS glEnableVertexAttribArray(attribID); How it works... When OpenGL ES detects an error, it records the error into an error flag. Each error has a unique numeric code and symbolic name. OpenGL ES does not track each time an error has occurred; reporting errors eagerly would degrade the rendering performance, so the error flag is not checked until the glGetError routine is called. If no error has been detected, this routine will always return GL_NO_ERROR. In a distributed environment, there may be several error flags; therefore, it is advisable to call the glGetError routine in a loop, as this routine can record multiple error flags. Using the per-vertex attribute to send data to a shader The per-vertex attribute in shader programming helps the vertex shader receive data from the OpenGL ES program for each unique vertex attribute. The received data value is not shared among the vertices. The vertex coordinates, normal coordinates, texture coordinates, color information, and so on are examples of per-vertex attributes. The per-vertex attributes are meant for vertex shaders only; they cannot be directly available to the fragment shader. Instead, they are shared via the vertex shader through out variables. Typically, the shaders are executed on the GPU, which allows parallel processing of several vertices at the same time using multicore processors. In order to process the vertex information in the vertex shader, we need some mechanism that sends the data residing on the client side (CPU) to the shader on the server side (GPU). This section will be helpful to understand the use of per-vertex attributes to communicate with shaders. Getting ready The vertex shader contains two per-vertex attributes named VertexPosition and VertexColor: // Incoming vertex info from program to vertex shader in vec4 VertexPosition; in vec4 VertexColor; The VertexPosition contains the 3D coordinates of the triangle that defines the shape of the object that we intend to draw on the screen. The VertexColor contains the color information on each vertex of this geometry. In the vertex shader, a non-negative attribute location ID uniquely identifies each vertex attribute. This attribute location is assigned at compile time if not specified in the vertex shader program. Basically, the logic of sending data to the shader is very simple. It's a two-step process: Query attribute: Query the vertex attribute location ID from the shader. Attach data to the attribute: Attach this ID to the data. This will create a bridge between the data and the per-vertex attribute specified using the ID. The OpenGL ES processing pipeline takes care of sending data. How to do it...
Follow this procedure to send data to a shader using the per-vertex attribute: Declare two global variables in NativeTemplate.cpp to store the queried attribute location IDs of VertexPosition and VertexColor: GLuint positionAttribHandle; GLuint colorAttribHandle; Query the vertex attribute location using the glGetAttribLocation API: positionAttribHandle = glGetAttribLocation (programID, "VertexPosition"); colorAttribHandle    = glGetAttribLocation (programID, "VertexColor"); This API provides a convenient way to query an attribute location from a shader. The return value must be greater than or equal to 0 in order to ensure that an attribute with the given name exists. Send the data to the shader using the glVertexAttribPointer OpenGL ES API: // Send data to shader using queried attrib location glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices); glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors); The data associated with the geometry is passed in the form of an array using the generic vertex attribute with the help of the glVertexAttribPointer API. It's important to enable the attribute location. This allows us to access the data on the shader side. By default, the vertex attributes are disabled. Similarly, an attribute can be disabled using glDisableVertexAttribArray. This API has the same syntax as that of glEnableVertexAttribArray. Store the incoming per-vertex attribute color VertexColor into the outgoing attribute TriangleColor in order to send it to the next stage (the fragment shader): in vec4 VertexColor; // Incoming data from CPU out vec4 TriangleColor; // Outgoing to next stage void main() { . . . TriangleColor = VertexColor; } Receive the color information from the vertex shader and set the fragment color: in vec4 TriangleColor; // Incoming from vertex shader out vec4 FragColor; // The fragment color void main() { FragColor = TriangleColor; } How it works... The per-vertex attribute variables VertexPosition and VertexColor defined in the vertex shader are the lifelines of the vertex shader. These lifelines constantly provide the data information from the client side (OpenGL ES program or CPU) to the server side (GPU). Each per-vertex attribute has a unique attribute location available in the shader that can be queried using glGetAttribLocation. The queried per-vertex attribute locations are stored in positionAttribHandle and colorAttribHandle; they must be bound with the data using glVertexAttribPointer. This API establishes a logical connection between the client and server sides. Now, the data is ready to flow from our data structures to the shader. The last important thing is the enabling of the attributes on the shader side for optimization purposes. By default, all the attributes are disabled. Therefore, even if the data is supplied from the client side, it is not visible on the server side. The glEnableVertexAttribArray API allows us to enable the per-vertex attributes on the shader side. Using uniform variables to send data to a shader The uniform variables contain data values that are global. They are shared by all vertices and fragments in the vertex and fragment shaders. Generally, information that is not per-vertex specific is treated in the form of uniform variables. Uniform variables can exist in both the vertex and fragment shaders. Getting ready The vertex shader we programmed in the Programming shaders in OpenGL ES shading language 3.0 section contains a uniform variable, RadianAngle.
This variable is used to rotate the rendered triangle: // Uniform variable for rotating triangle uniform float RadianAngle; This variable will be updated on the client side (CPU) and sent to the shader on the server side (GPU) using special OpenGL ES 3.0 APIs. As with per-vertex attributes, for uniform variables we need to query and bind data in order to make it available in the shader. How to do it... Follow these steps to send data to a shader using uniform variables: Declare a global variable in NativeTemplate.cpp to store the queried uniform location ID of RadianAngle: GLuint radianAngle; Query the uniform variable location using the glGetUniformLocation API: radianAngle=glGetUniformLocation(programID,"RadianAngle"); Send the updated radian value to the shader using the glUniform1f API: float degree = 0; // Global degree variable float radian; // Global radian variable radian = degree++/57.2957795; // Update angle and convert it into radian glUniform1f(radianAngle, radian); // Send updated data in the vertex shader uniform Apply a general form of 2D rotation to the incoming vertex coordinates: . . . . uniform float RadianAngle; mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle), -sin(RadianAngle),cos(RadianAngle)); void main() { gl_Position = mat4(rotation)*VertexPosition; . . . . . } How it works... The uniform variable RadianAngle defined in the vertex shader is used to apply a rotation transformation on the incoming per-vertex attribute VertexPosition. On the client side, this uniform variable is queried using glGetUniformLocation. This API returns the index of the uniform variable and stores it in radianAngle. This index is used to bind the updated data stored in radian using the glUniform1f OpenGL ES 3.0 API. Finally, the updated data reaches the vertex shader executable, where the general form of the Euler rotation is calculated: mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle), -sin(RadianAngle),cos(RadianAngle)); The rotation transformation is calculated in the form of a 2 x 2 rotation matrix, which is promoted to a 4 x 4 matrix when multiplied with VertexPosition. The resultant vertices cause the triangle to rotate in 2D space. Programming the OpenGL ES 3.0 Hello World Triangle The NativeTemplate.h/cpp file contains OpenGL ES 3.0 code, which demonstrates a rotating colored triangle. The output of this file is not an executable on its own. It needs a host application that provides the necessary OpenGL ES 3.0 prerequisites to render this program on a device screen: Developing Android OpenGL ES 3.0 application Developing iOS OpenGL ES 3.0 application This will provide all the necessary prerequisites that are required to set up OpenGL ES, rendering and querying necessary attributes from shaders to render our OpenGL ES 3.0 "Hello World Triangle" program. In this program, we will render a simple colored triangle on the screen. Getting ready OpenGL ES requires a physical size (in pixels) to define a 2D rendering surface called a viewport. This is used to define the OpenGL ES framebuffer size. A buffer in OpenGL ES is a 2D array in memory that represents pixels in the viewport region. OpenGL ES has three types of buffers: the color buffer, depth buffer, and stencil buffer. These buffers are collectively known as the framebuffer. All the drawing commands affect the information in the framebuffer.
The life cycle of the program is broadly divided into three states: Initialization: Shaders are compiled and linked to create program objects Resizing: This state defines the viewport size of the rendering surface Rendering: This state uses the shader program object to render geometry on the screen How to do it... Follow these steps to program this: Use the NativeTemplate.cpp file and create a createProgramExec function. This is a high-level function to load, compile, and link a shader program. This function will return the program object ID after successful execution: GLuint createProgramExec(const char* VS, const char* FS) { GLuint vsID = loadAndCompileShader(GL_VERTEX_SHADER, VS); GLuint fsID = loadAndCompileShader(GL_FRAGMENT_SHADER, FS); return linkShader(vsID, fsID); } Refer to the Loading and compiling a shader program and Linking a shader program sections for more information on the working of loadAndCompileShader and linkShader. In NativeTemplate.cpp, create a function GraphicsInit and create the shader program object by calling createProgramExec: GLuint programID; // Global shader program handler bool GraphicsInit(){ printOpenGLESInfo(); // Print GLES3.0 system metrics // Create program object and cache the ID programID = createProgramExec(vertexShader, fragmentShader); if (!programID) { // Failure !!! printf("Could not create program."); return false; } checkGlError("GraphicsInit"); // Check for errors return true; } Create a new function GraphicsResize. This will set the viewport region: bool GraphicsResize( int width, int height ){ glViewport(0, 0, width, height); return true; } The viewport determines the portion of the OpenGL ES surface window on which the rendering of the primitives will be performed. The viewport in OpenGL ES is set using the glViewport API. Create the gTriangleVertices global variable that contains the vertices of the triangle: GLfloat gTriangleVertices[] = { 0.0f, 0.5f, -0.5f, -0.5f, 0.5f, -0.5f }; // Three 2D vertices Create the GraphicsRender renderer function. This function is responsible for rendering the scene. Add the following code in it and perform the following steps to understand this function: bool GraphicsRender(){ glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set the clear color to black glClear( GL_COLOR_BUFFER_BIT ); // Which buffer to clear? - the color buffer glUseProgram( programID ); // Use shader program and apply radian = degree++/57.2957795; // Update angle and convert it into radian // Query and send the uniform variable radianAngle = glGetUniformLocation(programID, "RadianAngle"); glUniform1f(radianAngle, radian); // Query 'VertexPosition' from vertex shader positionAttribHandle = glGetAttribLocation (programID, "VertexPosition"); colorAttribHandle = glGetAttribLocation (programID, "VertexColor"); // Send data to shader using queried attribute glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices); glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors); glEnableVertexAttribArray(positionAttribHandle); // Enable vertex position attribute glEnableVertexAttribArray(colorAttribHandle); glDrawArrays(GL_TRIANGLES, 0, 3); // Draw 3 triangle vertices from 0th index return true; } Choose the appropriate buffers from the framebuffer (color, depth, and stencil) that we want to clear each time the frame is rendered, using the glClear API. In this case, we want to clear the color buffer. The glClear API can be used to select the buffers that need to be cleared. This API accepts a bitwise OR mask argument that can be used to set any combination of buffers.
Query the VertexPosition generic vertex attribute location ID from the vertex shader into positionAttribHandle using glGetAttribLocation. This location will be used to send the triangle vertex data that is stored in gTriangleVertices to the shader using glVertexAttribPointer. Follow the same instructions in order to get the handle of VertexColor into colorAttribHandle: positionAttribHandle = glGetAttribLocation (programID, "VertexPosition"); colorAttribHandle = glGetAttribLocation (programID, "VertexColor"); glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices); glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors); Enable the generic vertex attribute location using positionAttribHandle before the rendering call and render the triangle geometry. Similarly, for the per-vertex color information, use colorAttribHandle: glEnableVertexAttribArray(positionAttribHandle); glDrawArrays(GL_TRIANGLES, 0, 3); How it works... When the application starts, the control begins with GraphicsInit, where the system metrics are printed out to make sure that the device supports OpenGL ES 3.0. The OpenGL ES programmable pipeline requires vertex shader and fragment shader program executables in the rendering pipeline. The program object contains one or more executables after attaching the compiled shader objects and linking them to the program. In the createProgramExec function, the vertex and fragment shaders are compiled and linked in order to generate the program object. The GraphicsResize function generates the viewport of the given dimensions. This is used internally by OpenGL ES 3.0 to maintain the framebuffer. In our current application, it is used to manage the color buffer. Finally, the rendering of the scene is performed by GraphicsRender; this function clears the color buffer to a black background and renders the triangle on the screen. It uses the shader program object and sets it as the current rendering state using the glUseProgram API. Each time a frame is rendered, data is sent from the client side (CPU) to the shader executable on the server side (GPU) using glVertexAttribPointer. This function uses the queried generic vertex attributes to bind the data with the OpenGL ES pipeline. There's more... There are other buffers also available in OpenGL ES 3.0: Depth buffer: This is used to prevent background pixels from rendering if there is a closer pixel available. The rules for discarding pixels can be controlled using special depth rules provided by OpenGL ES 3.0. Stencil buffer: The stencil buffer stores per-pixel information and is used to limit the area of rendering. The OpenGL ES API allows us to control each buffer separately. These buffers can be enabled and disabled as per the requirements of the rendering. OpenGL ES can use any of these buffers (including the color buffer) directly to act differently. These buffers can be set to preset values by using OpenGL ES APIs, such as glClearColor, glClearDepthf, and glClearStencil. Summary This article covered different aspects of OpenGL ES 3.0. Resources for Article: Further resources on this subject: OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects [article] OpenGL 4.0: Building a C++ Shader Program Class [article] Introduction to Modern OpenGL [article]
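As noted above, NativeTemplate.cpp needs a host application to supply the EGL rendering context and drawing surface. The exact glue code is platform-specific and not part of this excerpt, but a bare-bones sketch of the EGL calls involved looks like the following; the nativeWindow handle is an assumption here, since it comes from the host windowing system (for example, an ANativeWindow on Android):

EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, NULL, NULL); // Initialize EGL on the default display

// Ask for a window-renderable, OpenGL ES 3.0-capable configuration
// (EGL_OPENGL_ES3_BIT is core in EGL 1.5; older stacks expose EGL_OPENGL_ES3_BIT_KHR)
const EGLint configAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES3_BIT, EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_NONE };
EGLConfig config; EGLint numConfigs;
eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);

// nativeWindow is supplied by the host platform (assumption)
EGLSurface surface = eglCreateWindowSurface(display, config, nativeWindow, NULL); // Drawing surface
const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs); // Rendering context
eglMakeCurrent(display, surface, surface, context); // Bind the context and surface to this thread

GraphicsInit(); // From here on, OpenGL ES 3.0 calls are legal
// Per frame: GraphicsRender(); eglSwapBuffers(display, surface);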

How to build a live interactive visual dashboard in Power BI with Azure Stream

Sugandha Lahoti
17 Apr 2018
4 min read
Azure Stream Analytics is a managed, interactive complex event processing data engine. Through its built-in output connector, it offers the facility of building live, interactive, intelligent BI charts and graphics using Microsoft's cloud-based Business Intelligence tool, Power BI. In this tutorial, we implement a data architecture pipeline by designing a visual dashboard using Microsoft Power BI and Stream Analytics. Prerequisites of building an interactive visual live dashboard in Power BI with Stream Analytics: Azure subscription Power BI Office365 account (the account email ID should be the same for both Azure and Power BI). It can be a work or school account Integrating Power BI as an output job connector for Stream Analytics To start with connecting the Power BI portal as an output of an existing Stream Analytics job, follow the given steps:  First, select Outputs in the Azure portal under JOB TOPOLOGY:  After clicking on Outputs, click on +Add in the top left corner of the job window, as shown in the following screenshot:  After selecting +Add, you will be prompted to enter the new output connector details of the job. Provide details such as the job Output name/alias; under Sink, choose Power BI from the drop-down menu.  On choosing Power BI as the streaming job output Sink, it will automatically prompt you to authorize the Power BI work/personal account with Azure. Additionally, you may create a new Power BI account by clicking on Signup. By authorizing, you are granting access to the Stream Analytics output permanently in the Power BI dashboard. You can also revoke the access by changing the password of the Power BI account or deleting the output/job.  After the successful authorization of the Power BI account with Azure, there will be options to select the Group Workspace, which is the Power BI tenant workspace where you may create the particular dataset to configure processed Stream Analytics events. Furthermore, you also need to define the Table Name as the data output. Lastly, click on the Create button to integrate the Power BI data connector for real-time data visuals:   Note: If you don't have any custom workspace defined in the Power BI tenant, the default workspace is My Workspace. If you define a dataset and table name that already exists in another Stream Analytics job/output, it will be overwritten. It is also recommended that you just define the dataset and table name under the specific tenant workspace in the job portal and not explicitly create them in Power BI tenants, as Stream Analytics automatically creates them once the job starts and output events start to push into the Power BI dashboard.   Once the streaming job starts producing output events, the Power BI dataset will appear under the Datasets tab of the selected workspace. The dataset can contain a maximum of 200,000 rows, and supports both real-time streaming events and historical BI report visuals: Further Power BI dashboards and reports can be implemented using the streaming dataset.
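For context, the dataset and table are populated by whatever query the Stream Analytics job runs. A minimal sketch of such a query follows; the input alias streaminput, the output alias powerbioutput, and the field names are illustrative assumptions, not values from this walkthrough:

SELECT AVG(speed) AS avgspeed, System.Timestamp AS windowend
INTO powerbioutput
FROM streaminput TIMESTAMP BY eventtime
GROUP BY TumblingWindow(second, 10)

Every 10 seconds, this emits one aggregated row into the Power BI output alias, which is what keeps the streaming dataset (and any tiles built on it) refreshing in near real time.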
Alternatively, you may also create tiles in custom dashboards by selecting CUSTOM STREAMING DATA under REAL-TIME DATA, as shown in the following screenshot:

On selecting Next, choose the streaming dataset, and then the visual type, the respective fields, the axis, or the legends can be defined:

Thus, a complete, interactive, near real-time Power BI visual dashboard can be implemented with analyzed streamed data from Stream Analytics, as shown in the following screenshot from the real-world Connected Car-Vehicle Telemetry analytics dashboard:

In this article, we saw a step-by-step implementation of a real-time visual dashboard using Microsoft Power BI with processed data from Azure Stream Analytics as the output data connector.

This article is an excerpt from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. To learn more about designing and managing Stream Analytics jobs using reference data and utilizing a petabyte-scale enterprise data store with Azure Data Lake Store, you may refer to this book.

Unlocking the secrets of Microsoft Power BI
Ride the third wave of BI with Microsoft Power BI

article-image-post-production-activities-for-ensuring-and-enhancing-it-reliability-tutorial
Savia Lobo
13 Jan 2019
15 min read
Save for later

Post-production activities for ensuring and enhancing IT reliability [Tutorial]

Savia Lobo
13 Jan 2019
15 min read
Evolving business expectations are being duly automated through a host of delectable developments in the IT space. These improvements elegantly empower business houses to deliver newer and premium business offerings fast, and businesses are insisting on reliable business operations. IT pundits and professors are therefore striving hard and stretching further to bring forth viable methods and mechanisms toward reliable IT. Site Reliability Engineering (SRE) is a promising engineering discipline, and its key goals include significantly enhancing and ensuring the reliability of IT.

In this tutorial, we will focus on the various ways and means of raising the reliability assurance factor by embarking on some unique activities in the post-production/deployment phase. Monitoring, measuring, and managing the various operational and behavioral data is the first and foremost step toward reliable IT infrastructures and applications.

This tutorial is an excerpt from a book titled Practical Site Reliability Engineering, written by Pethuru Raj Chelliah, Shreyash Naithani, and Shailender Singh. This book will teach you to create, deploy, and manage applications at scale using Site Reliability Engineering (SRE) principles. All the code files for this book can be found at GitHub.

Monitoring clouds, clusters, and containers

Cloud centers are being increasingly containerized and managed; that is, there are going to be well-entrenched containerized clouds soon. The formation and management of containerized clouds are simplified through a host of container orchestration and management tools, both open source and commercial-grade. Kubernetes is emerging as the leading container orchestration and management platform. Thus, by leveraging the aforementioned toolsets, the process of setting up and sustaining containerized clouds becomes accelerated, risk-free, and rewarding.

The tool-assisted monitoring of cloud resources (both coarse-grained and fine-grained) and applications in production environments is crucial for scaling the applications and providing resilient services. In a Kubernetes cluster, application performance can be examined at many different levels: containers, pods, services, and clusters. Through a single pane of glass, the operational team can provide users with details of the running applications and their resource utilization. These give users the right insights into how the applications are performing, where application bottlenecks may be found, if any, and how to surmount any deviations and deficiencies. In short, application performance, security, scalability constraints, and other pertinent information can be captured and acted upon.

Cloud infrastructure and application monitoring

The cloud idea has disrupted, innovated, and transformed the IT world. Yet the various cloud infrastructures, resources, and applications ought to be minutely monitored and measured through automated tools. The aspect of automation is gathering momentum in the cloud era: a slew of flexibilities in the form of customization, configuration, and composition are being enacted through cloud automation tools, and a bevy of manual and semi-automated tasks are being fully automated through a series of advancements in the IT space. In this section, we will look at infrastructure monitoring as a step toward infrastructure optimization and automation.
Enterprise-scale and mission-critical applications are being cloud-enabled to be deployed in various cloud environments (private, public, community, and hybrid). Furthermore, applications are being meticulously developed and deployed directly on cloud platforms using microservices architecture (MSA). Thus, besides cloud infrastructures, there are cloud-based IT platforms and middleware, business applications, and database management systems. The total IT estate is accordingly being modernized to be cloud-ready, and it is very important to precisely monitor and measure every asset and aspect of cloud environments.

Organizations need the capability to precisely monitor the usage of the participating cloud resources. If there is any deviation, the monitoring feature triggers an alert so that the concerned team can decide on the next course of action. The monitoring capability includes viable tools for monitoring CPU usage per computing resource, the varying ratio between system activity and user activity, and the CPU usage of specific job tasks. Organizations also have to have an intrinsic capability for predictive analytics that allows them to capture trending data on memory utilization and filesystem growth. These details help the operational team to proactively plan the needed changes to computing/storage/network resources before they encounter service availability issues. Timely action is essential for ensuring business continuity.

Not only infrastructures but also applications' performance levels have to be closely monitored in order to embark on fine-tuning the application code, as well as the infrastructure architecture. Typically, organizations find it easier to monitor the performance of applications that are hosted on a single server, as opposed to the performance of composite applications that leverage several server resources. This becomes more tedious and tough when the underlying compute resources are spread across multiple data centers. The major worry here is that the team loses visibility and controllability of third-party data center resources. Enterprises, for different valid reasons, prefer a multi-cloud strategy for hosting their applications and data.

There are several IT infrastructure management tools, practices, and principles, but these traditional toolsets become obsolete in the cloud era. There are a number of distinct characteristics associated with software-defined cloud environments, and it is expected that any cloud application innately fulfills non-functional requirements (NFRs) such as scalability, availability, performance, flexibility, and reliability. Research reports say that organizations across the globe enjoy significant cost savings and increased flexibility of management by modernizing and moving their applications into cloud environments.

The monitoring tool capabilities

It is paramount to deploy monitoring and management tools to effectively and efficiently run cloud environments, wherein thousands of computing, storage, and network solutions are running. The key characteristics of such a tool are vividly illustrated through the following diagram:

Here are some of the key features and capabilities we need in order to properly monitor modern cloud-based applications and infrastructures. Firstly, the ability to capture and query events and traces, in addition to data aggregation, is essential. When a customer buys something online, the buying process generates a lot of HTTP requests.
For proper end-to-end cloud monitoring, we need to see the exact set of HTTP requests the customer makes while completing the purchase. Any monitoring system has to have the capability to quickly identify bottlenecks and understand the relationships among different components. The solution has to give the exact response time of each component for each transaction. Critical metadata such as error traces and custom attributes ought to be made available to enhance trace and event data. By segmenting the data via user and business-specific attributes, it is possible to prioritize improvements and sprint plans to optimize for those customers. Secondly, the monitoring system has to be able to monitor a wide variety of cloud environments (private, public, and hybrid). Thirdly, the monitoring solution has to scale for any emergency.

The benefits

Organizations that use the right mix of technology solutions for IT infrastructure and business application monitoring in the cloud stand to gain the following benefits:

Performance engineering and enhancement
On-demand computing
Affordability
Prognostic, predictive, and prescriptive analytics

Any operational environment needs data analytics and machine learning capabilities in order to be intelligent in its everyday actions and reactions. As data centers and server farms evolve and embrace new technologies (virtualization and containerization), it becomes more difficult to determine what impacts these changes have on server, storage, and network performance. By using proper analytics, system administrators and IT managers can easily identify and even predict potential choke points and errors before they create problems. To know more about prognostic, predictive, and prescriptive analytics, head over to our book Practical Site Reliability Engineering.

Log analytics

Every software and hardware system generates a lot of log data (big data), and it is essential to do real-time log analytics to quickly understand whether there is any deviation or deficiency. This extracted knowledge helps administrators to consider countermeasures in time. Log analytics, if done systematically, facilitates preventive, predictive, and prescriptive maintenance. Workloads, IT platforms, middleware, databases, and hardware solutions all create a lot of log data when they work together to complete business functionality. There are several log analytics tools on the market.

Open source log analytics platforms

If there is a need to handle all log data in one place, then ELK is being touted as the best-in-class open source log analytics solution. There are application as well as system logs, and logs typically record errors, warnings, and exceptions. ELK is a combination of three different products, namely Elasticsearch, Logstash, and Kibana (ELK). The macro-level ELK architecture is given as follows:

Elasticsearch is a search engine based on Lucene for storing and retrieving data. Elasticsearch is, in a way, a NoSQL database; that is, it stores multi-structured data and does not support SQL as the query language. Elasticsearch exposes a REST API, which uses standard HTTP verbs such as GET, PUT, and POST to index and retrieve data. If you want real-time processing of big data, then Elasticsearch is the way forward. Increasingly, Elasticsearch is being primed for real-time and affordable log analytics.
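As a quick illustration of that REST API, the following minimal sketch indexes a log event and then searches for it with plain HTTP calls, written here in Java 11 using java.net.http.HttpClient. The app-logs index name, the field names, and the localhost:9200 endpoint of a local test cluster are our own assumptions, not part of the ELK products themselves:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EsLogDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Index a single log event into a hypothetical 'app-logs' index.
        String logEvent = "{\"level\":\"ERROR\",\"message\":\"payment timeout\",\"service\":\"checkout\"}";
        HttpRequest index = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/app-logs/_doc"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(logEvent))
                .build();
        System.out.println(client.send(index, HttpResponse.BodyHandlers.ofString()).body());

        // Query the index for ERROR entries using the _search API.
        String query = "{\"query\":{\"match\":{\"level\":\"ERROR\"}}}";
        HttpRequest search = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/app-logs/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();
        System.out.println(client.send(search, HttpResponse.BodyHandlers.ofString()).body());
    }
}

In a real ELK deployment, of course, Logstash (described next) would do the ingestion instead of hand-written HTTP calls; the sketch merely shows how approachable the underlying API is.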
Logstash is an open source, server-side data processing pipeline that ingests data from a variety of data sources simultaneously, transforms it, and sends it to a preferred destination. Logstash also handles unstructured data with ease. Logstash has more than 200 plugins built in, and it is easy to create your own.

Kibana is the last module of the famous ELK toolset and is an open source data visualization and exploration tool mainly used for performing log and time-series analytics, application monitoring, and IT operational analytics (ITOA). Kibana is gaining a lot of market and mind share, as it makes it easy to build histograms, line graphs, pie charts, and heat maps.

Logz.io, the commercialized version of the ELK platform, builds on the world's most popular open source log analysis platform. It is made available as an enterprise-grade service in the cloud, and it assures high availability, unbreakable security, and scalability.

Cloud-based log analytics platforms

The log analytics capability is also offered as a cloud-based, value-added service by various cloud service providers (CSPs). The Microsoft Azure cloud provides the Log Analytics service to its users/subscribers by constantly monitoring both cloud and on-premises environments so that correct decisions can be taken to ensure their availability and performance. The Azure cloud has its own monitoring mechanism in place through Azure Monitor, which collects and meticulously analyzes log data emitted by various Azure resources. The Log Analytics feature of the Azure cloud takes the monitoring data and correlates it with other relevant data to supply additional insights. The same capability is also made available for private cloud environments. It can collect all types of log data through various tools from multiple sources and consolidate them into a single, centralized repository. Then, the suite of analysis tools in Log Analytics, such as log searches and views, collaborate with one another to provide you with centralized insights across your entire environment. The macro-level architecture is given here:

This kind of service is also offered by other cloud service providers; AWS is one of the well-known providers amongst many others. The paramount contributions of log analytics tools include the following:

Infrastructure monitoring: Log analytics platforms easily and quickly analyze logs from bare metal (BM) servers and network solutions, such as firewalls, load balancers, application delivery controllers, CDN appliances, storage systems, virtual machines, and containers.
Application performance monitoring: The analytics platform captures application logs, which are streamed live, and takes the assigned performance metrics for real-time analysis and debugging.
Security and compliance: The service provides immutable log storage, centralization, and reporting to meet compliance requirements. It has deeper monitoring and decisive collaboration for extricating useful and usable insights.

AI-enabled log analytics platforms

Algorithmic IT Operations (AIOps) leverages proven and potential AI algorithms to help organizations smooth the path toward their digital transformation goals. AIOps is being touted as the way forward to substantially reduce IT operational costs. AIOps automates the process of analyzing IT infrastructures and business workloads to give right and relevant details to administrators about their functioning and performance levels.
AIOps minutely monitors each of the participating resources and applications and then intelligently formulates the various steps to be considered for their continuous well-being. AIOps helps to realize the goals of preventive and predictive maintenance of IT and business systems, and it also comes out with prescriptive details for resolving issues with all clarity and confidence. Furthermore, AIOps lets IT teams conduct root-cause analysis by identifying and correlating issues.

Loom

Loom is a leading provider of AIOps solutions. Loom's AIOps platform consistently leverages competent machine-learning algorithms to easily and quickly automate the log analysis process. The real-time analytics capability of the ML algorithms enables organizations to arrive at correct resolutions for issues and to complete the resolution tasks in an accelerated fashion. Loom delivers an AI-powered log analysis platform to predict all kinds of impending issues and prescribe the resolution steps. Outliers and anomalies are rapidly detected, and strategically sound solutions are formulated with the assistance of this AI-centric log analytics platform.

IT operational analytics

Operational analytics helps with the following:

Extricating operational insights
Reducing IT costs and complexity
Improving employee productivity
Identifying and fixing service problems for an enhanced user experience
Gaining end-to-end insights critical to business operations, offerings, and outputs

To facilitate operational analytics, there are integrated platforms, and their contributions are given as follows:

Troubleshoot applications, investigate security incidents, and facilitate compliance requirements in minutes instead of hours or days
Analyze various performance indicators to enhance system performance
Use report-generation capabilities to present the various trends in preferred formats (maps, charts, and graphs), and much more

Thus, the operational analytics capability comes in handy for capturing operational data (real-time and batch) and crunching it to produce actionable insights that enable autonomic systems. Also, operational team members, IT experts, and business decision-makers can get useful information for working out correct countermeasures if necessary. The operational insights gained also convey what needs to be done to empower the systems under investigation to attain their optimal performance.

IT performance and scalability analytics

There are typically big gaps between the theoretical and practical performance limits. The challenge is how to enable systems to attain their theoretical performance level under any circumstance. The required performance level can suffer for various reasons, such as poor system design, bugs in software, network bandwidth, third-party dependencies, and I/O access. Middleware solutions can also contribute to unexpected performance degradation. The system's performance has to be maintained under any load (user, message, and data). Performance testing is one way of recognizing performance bottlenecks and adequately addressing them; this testing is performed in the pre-production phase. Besides system performance, application scalability and infrastructure elasticity are other prominent requirements. There are two scalability options, indicated as follows:

Scale up, for fully utilizing SMP hardware
Scale out, for fully utilizing distributed processors

It is also possible to have both at the same time.
That is, to scale up and out is to combine the two scalability choices.

IT security analytics

IT infrastructure security, application security, and data security (at rest, in transit, and in use) are the top three security challenges, and there are security solutions approaching these issues at different levels and layers. Access-control mechanisms, cryptography, hashing, digests, digital signatures, watermarking, and steganography are the well-known and widely used means of ensuring impenetrable and unbreakable security. There is also security testing, and ethical hacking, for identifying any security risk factors and eliminating them at the budding stage itself. All kinds of security holes, vulnerabilities, and threats are meticulously unearthed in order to deploy defect-free, safety-critical, and secure software applications. During the post-production phase, security-related data is extracted out of both software and hardware products to precisely and painstakingly spit out security insights, which in turn go a long way in empowering security experts and architects to bring forth viable solutions that ensure the utmost security and safety of IT infrastructures and software applications.

The importance of root-cause analysis

The cost of service downtime keeps going up. There are reliable reports stating that the cost of downtime ranges from $72,000 to $100,000 per minute. Identifying the root cause (the mean-time-to-identification, or MTTI) generally takes hours; for a complex situation, the process may run into days. OverOps analyzes code in staging and production to automatically detect and deliver the root causes of all errors, with no dependency on logging. OverOps shows you a stack trace for every error and exception. However, it also shows you the complete source code, objects, variables, and values that caused that error or exception to be thrown. This assists in identifying the root cause when your code breaks. OverOps injects a hyperlink into each exception, so you'll be able to jump directly into the source code and the actual variable state that caused it. OverOps can co-exist in production alongside all the major APM agents and profilers. Using OverOps with your APM allows you to monitor server slowdowns and errors, along with the ability to drill down into the real root cause of each issue.

Summary

There are several activities being strategically planned and executed to enhance the resiliency, robustness, and versatility of enterprise, edge, and embedded IT. This tutorial described the various post-production data analytics that allow you to gain a deeper understanding of applications, middleware solutions, databases, and IT infrastructures in order to manage them effectively and efficiently. To gain experience working with SRE concepts and to be able to deliver highly reliable apps and services, check out the book Practical Site Reliability Engineering.

Site reliability engineering: Nat Welch on what it is and why we need it [Interview]
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
5 ways artificial intelligence is upgrading software engineering

article-image-unity-3x-scripting-character-controller-versus-rigidbody
Packt
22 Jun 2012
16 min read
Save for later

Unity 3.x Scripting-Character Controller versus Rigidbody

Packt
22 Jun 2012
16 min read
Creating a controllable character

There are two ways to create a controllable character in Unity: by using the Character Controller component or a physical Rigidbody. Both of them have their pros and cons, and the choice of one or the other is usually based on the needs of the project. For instance, if we want to create a basic role-playing game, where a character is expected to be able to walk, fight, run, and interact with treasure chests, we would recommend the Character Controller component. The character is not going to be affected by physical forces, and the Character Controller component gives us the ability to go up slopes and stairs without the need for extra code. Sounds amazing, doesn't it?

There is one caveat. The Character Controller component becomes useless if we decide to make our character non-humanoid. If our character is a dragon, spaceship, ball, or a piece of gum, the Character Controller component won't know what to do with it. It isn't programmed for those entities and their behavior. So, if we want our character to swing across the pit with his whip and dodge traps by rolling over his shoulder, the Character Controller component will cause us many problems.

In this article, we will look into the creation of a character that is greatly affected by physical forces; therefore, we will create a custom character controller with Rigidbody, as shown in the preceding screenshot.

Custom Character Controller

In this section, we will write a script that will take control of basic character manipulations. It will register a player's input and translate it into movement. We will talk about vectors and vector arithmetic, try out raycasting, make a character obey our controls and see different ways to register input, describe the purpose of the FixedUpdate function, and learn to control Rigidbody.

We shall start by teaching our character to walk in all directions, but before we start coding, there is a bit of theory that we need to know behind character movement. Most game engines, if not all, use vectors to control the movement of objects. Vectors simply represent direction and magnitude, and they are usually used to define an object's position (specifically its pivot point) in 3D space. A vector is a structure that consists of three variables: X, Y, and Z. In Unity, this structure is called Vector3:

To make an object move, knowing its vector is not enough. The length of a vector is known as its magnitude. In physics, speed is a pure scalar, or something with a magnitude but no direction. To give an object a direction, we use vectors. A greater magnitude means greater speed. By controlling vectors and magnitude, we can easily change our direction or increase speed at any time we want. Vectors are very important to understand if we want to create any movement in a game. Through the examples in this article, we will explain some basic vector manipulations and describe their influence on the character. It is recommended that you study extra material about vectors to be able to perfect a character controller based on your game's needs.

Setting up the project

To start this section, we need an example scene. Perform the following steps:

1. Select the Chapter 2 folder from the book assets, and click on the Unity_chapter2 scene inside the custom_scene folder.
2. In the Custom scripts folder, create a new JavaScript file.
Call it CH_Controller (we will reference this script in the future, so try to remember its name if you choose a different one).

3. In the Hierarchy view, click on the object called robot. Move the mouse over the Scene view and press F; the camera will focus on a funny-looking character that we will teach to walk, run, jump, and behave like a character from a video game.

Creating movement

The following is the theory of what needs to be done to make a character move:

1. Register the player's input.
2. Store the information in a vector variable.
3. Use it to move the character.

Sounds like a simple task, doesn't it? However, when it comes to moving a player-controlled character, there are a lot of things that we need to keep in mind, such as vector manipulation, registering input from the user, raycasting, Character Controller component manipulation, and so on. All these things are simple on their own, but when it comes to putting them all together, they might bring a few problems. To make sure that none of these problems catches us by surprise, we will go through each of them step by step.

Manipulating the character vector

By receiving input from the player, we will be able to manipulate character movement. The following is the list of actions that we need to perform in Unity:

1. Open the CH_Controller script.
2. Declare the public variables Speed and MoveDirection of types float and Vector3, respectively. Speed is self-explanatory: it determines the speed at which our character moves. MoveDirection is a vector that contains information about the direction in which our character is moving.
3. Declare a new function called Movement. It will check the horizontal and vertical input from the player. Finally, we will use this information to apply movement to the character.

An example of the code is as follows:

public var Speed : float = 5.0;
public var MoveDirection : Vector3 = Vector3.zero;

function Movement (){
  if (Input.GetAxis("Horizontal") || Input.GetAxis("Vertical"))
    MoveDirection = Vector3(Input.GetAxisRaw("Horizontal"), MoveDirection.y, Input.GetAxisRaw("Vertical"));
  this.transform.Translate(MoveDirection);
}

Registering input from the user

In order to move the character, we need to register input from the user. To do that, we will use the Input.GetAxis function. It registers input and returns values from -1 to 1 from the keyboard and joystick. Input.GetAxis can only register input that has been defined by passing a string parameter to it. To find out which options are available, go to Edit | Project Settings | Input. In the Inspector view, we will see the Input Manager. Click on the Axes drop-down menu and you will be able to see all the available input information that can be passed to the Input.GetAxis function.

Alternatively, we can use Input.GetAxisRaw. The only difference is that it skips Unity's built-in smoothing and returns the input data as it is, which allows us to have greater control over character movement. To create your own input axes, simply increase the size of the array by 1 and specify your preferences (later, we will look into a better way of registering input for different buttons).

this.transform is an access point to the transformation of this particular object. transform contains all the information about the translation, rotation, scale, and children of this object. Translate is a function inside Unity that translates a GameObject in a specific direction based on a given vector. If we simply leave it as it is, our character will move at the speed of light.
That happens because translation is applied to the character every frame. Relying on the frame rate when dealing with translation is very risky: each computer has different processing power, and the execution of our function will vary based on performance. To solve this problem, we will tell it to apply movement based on a common factor: time.

this.transform.Translate(MoveDirection * Time.deltaTime);

This will make our character move one Unity unit every second, which is still a bit too slow. Therefore, we will multiply our movement by the Speed variable:

this.transform.Translate((MoveDirection * Speed) * Time.deltaTime);

Now that the Movement function is written, we need to call it from Update. A word of warning, though: controlling a GameObject or Rigidbody from the usual Update function is not recommended since, as mentioned previously, the frame rate is unreliable. Thankfully, there is a FixedUpdate function that will help us by applying movement every fixed frame. Simply change the Update function to FixedUpdate and call the Movement function from there:

function FixedUpdate (){
  Movement();
}

The Rigidbody component

Now that our character is moving, take a closer look at the Rigidbody component that we have attached to it. Under the Constraints drop-down menu, we will notice that Freeze Rotation for the X and Z axes is checked, as shown in the following screenshot:

If we uncheck those boxes and try to move our character, we will notice that it starts to fall in the direction of the movement. Why is this happening? Well, remember, we talked about Rigidbody being affected by the physics laws in the engine? That applies to friction as well. To prevent the force of friction from affecting our character, we froze its rotation along all axes but Y. We will use the Y axis to rotate our character from left to right in the future.

Another problem that we will see when moving our character around is a significant increase in speed when walking in a diagonal direction. This is not an unusual bug, but the expected behavior of the MoveDirection vector. It happens because we use the vertical and horizontal vectors for directional movement. As a result, we have a vector that inherits magnitude from both; in other words, its magnitude is equal to the sum of the vertical and horizontal vectors. To prevent that from happening, we need to set the magnitude of the new vector to 1. This operation is called vector normalization. With normalization and a speed multiplier, we can always make sure to control our magnitude:

this.transform.Translate((MoveDirection.normalized * Speed) * Time.deltaTime);

Jumping

Jumping is not as hard as it seems. Thanks to Rigidbody, our character is already affected by gravity, so the only thing we need to do is send it up in the air. Jump force is different from the speed that we applied to movement; to make a decent jump, we need to set it to 500.0. For this specific example, we don't want our character to be controllable in the air (as in real life, that is physically impossible). Instead, we will make sure that he preserves his translation velocity when jumping, to be able to jump in different directions. But, for now, let's limit our movement in the air by declaring a separate vector for jumping.

User input verification

In order to make a jump, we need to be sure that we are on the ground and not floating in the air. To check that, we will declare three variables: isGrounded, Jumping, and inAir, of type boolean. isGrounded will check whether we are grounded.
Jumping will determine whether we pressed the jump button to perform a jump. inAir will help us to deal with a jump if we jumped off a platform without pressing the jump button. In this case, we don't want our character to fly at the same speed as he walks; we need to add an airControl variable that will smooth our fall. Just as we did with movement, we need to register whether the player pressed the jump button. To achieve this, we will perform a check right after registering the Vertical and Horizontal inputs:

public var jumpSpeed : float = 500.0;
public var jumpDirection : Vector3 = Vector3.zero;
public var isGrounded : boolean = false;
public var Jumping : boolean = false;
public var inAir : boolean = false;
public var airControl : float = 0.5;

function Movement(){
  if (Input.GetAxis("Horizontal") || Input.GetAxis("Vertical")) {
    MoveDirection = Vector3(Input.GetAxisRaw("Horizontal"), MoveDirection.y, Input.GetAxisRaw("Vertical"));
  }
  if (Input.GetButtonDown("Jump") && isGrounded) {}
}

GetButtonDown determines whether we pressed a specific button (in this case, the Space bar), as specified in the Input Manager. We also need to check that our character is grounded to make a jump. We will apply a vertical force to the rigidbody by using the AddForce function, which takes a vector as a parameter and pushes the rigidbody in the specified direction. We will also toggle the Jumping boolean to true, as we pressed the jump button, and preserve velocity in jumpDirection:

if (Input.GetButtonDown("Jump") && isGrounded){
  Jumping = true;
  jumpDirection = MoveDirection;
  rigidbody.AddForce((transform.up) * jumpSpeed);
}
if (isGrounded)
  this.transform.Translate((MoveDirection.normalized * Speed) * Time.deltaTime);
else if (Jumping || inAir)
  this.transform.Translate((jumpDirection * Speed * airControl) * Time.deltaTime);

To make sure that our character doesn't float in space, we need to restrict its movement and apply translation with MoveDirection only when our character is on the ground; otherwise, we will use jumpDirection.

Raycasting

The jumping functionality is almost written; we now need to determine whether our character is grounded. The easiest way to check that is to apply raycasting. Raycasting simply casts a ray of a specified direction and length, and reports whether it hits any collider on its way (the collider of the object that the ray is cast from is ignored):

To perform a raycast, we need to specify a starting position, a direction (vector), and the length of the ray. In return, we receive true if the ray hits something, or false if it doesn't:

function FixedUpdate () {
  if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 2)){
    isGrounded = true;
    Jumping = false;
    inAir = false;
  }
  else if (!inAir){
    inAir = true;
    jumpDirection = MoveDirection;
  }
  Movement();
}

As we have already mentioned, we use transform.position to specify the starting position of the ray as the center of our collider. -transform.up is a vector pointing downwards, and collider.height is the height of the attached collider. We use half of the height, as the starting position is located in the middle of the collider, and extend the ray by two units to make sure that it will hit the ground. The rest of the code simply toggles the state booleans.

Improving efficiency in raycasting

But what if the ray didn't hit anything? That can happen in two cases: if we walk off a cliff, or while performing a jump. In either case, we have to check for it.
If the ray didn't hit a collider, then obviously we are in the air and need to specify that. As this is our first check, we need to preserve our current velocity to ensure that our character doesn't drop down instantly.

Raycasting is a very handy tool and is used in many games. However, you should not rely on it too often: it is very expensive and can dramatically drop your frame rate. Right now, we are casting rays every frame, which is extremely inefficient. To improve performance, we only need to cast rays when performing a jump, but never when grounded. To ensure this, we will wrap the raycasting section in FixedUpdate so that it only fires when the character is not grounded:

function FixedUpdate (){
  if (!isGrounded){
    if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 0.2)){
      isGrounded = true;
      Jumping = false;
      inAir = false;
    }
    else if (!inAir){
      inAir = true;
      jumpDirection = MoveDirection;
    }
  }
  Movement();
}

function OnCollisionExit(collisionInfo : Collision){
  isGrounded = false;
}

To determine that our character is not on the ground, we will use a built-in function, OnCollisionExit(). Unlike OnControllerColliderHit(), which is used with the Character Controller, this function is only for colliders and rigidbodies. So, whenever our character is not touching any collider or rigidbody, we expect to be in the air and, therefore, not grounded. Let's hit Play and see our character jump on our command.

Additional jump functionality

Now that we have our character jumping, there are a few issues that should be resolved. First of all, if we decide to jump on the sharp edge of a platform, we will see that our collider penetrates other colliders. Thus, our collider ends up stuck in the wall without a chance of getting out:

A quick patch to this problem is to push the character away from the contact point while jumping. We will use the OnCollisionStay() function, which is called at every frame while we are colliding with an object. This function receives collision contact information that can help us determine who we are colliding with, its velocity, its name, whether it has a Rigidbody, and so on. In our case, we are interested in the contact points. Perform the following steps:

1. Declare a new private variable contact of the ContactPoint type, which describes a collision point of the colliding objects.
2. Declare the OnCollisionStay function. Inside this function, we will take the first point of contact with the collider and assign it to our private variable (contacts is an array of all contact points).
3. Move away from that contact point by reversing our velocity: add force at the contact position, but only if the character is not on the ground. The AddForceAtPosition function will help us here. It is similar to the one that we used for jumping; however, this one applies force at a specified position (the contact point).
4. Declare a new variable and call it jumpClimax, of boolean type; we will need it in a moment.

public var jumpClimax : boolean = false;
...
function OnCollisionStay(collisionInfo : Collision){
  contact = collisionInfo.contacts[0];
  if (inAir || Jumping)
    rigidbody.AddForceAtPosition(-rigidbody.velocity, contact.point);
}

The next patch will aid us later in this article, when we add animation to our character. To make sure that our jumping animation runs smoothly, we need to know when our character reaches the jumping climax; in other words, when it stops going up and starts falling.
In the FixedUpdate function, right after the last else if statement, put the following code snippet:

else if (inAir && rigidbody.velocity.y == 0.0) {
  jumpClimax = true;
}

Nothing complex here. In theory, the moment we stop going up is the climax of our jump; that's why we check whether we are in the air (obviously, we can't reach the jump climax on the ground) and whether the vertical velocity of the rigidbody is 0.0. The last part is to set our jumping climax back to false. We'll do that at the moment we touch the ground:

if (Physics.Raycast(transform.position, -transform.up, collider.height/2 + 2)){
  isGrounded = true;
  Jumping = false;
  inAir = false;
  jumpClimax = false;
}

Running

We taught our character to walk, jump, and stand aimlessly on the same spot. The next logical step is to teach him running. From a technical point of view, there is nothing too hard: running is simply the same thing as walking, but at a greater speed. Perform the following steps:

1. Declare a new variable isRunning of type boolean, which will be used to determine whether our character has been told to run or not.
2. Inside the Movement function, at the very top, check whether the player is pressing the left or right Shift key, and assign the appropriate value to isRunning:

public var isRunning : boolean = false;
...
function Movement() {
  if (Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift))
    isRunning = true;
  else
    isRunning = false;
  ...
}

Another way to get input from the user is to use KeyCode. It is an enumeration of all the physical keys on a keyboard. Look at the KeyCode script reference for a complete list of available keys on the official website: http://unity3d.com/support/documentation/ScriptReference/KeyCode. We will return to running later, in the animation section.
article-image-import-add-background-music-audacity
Packt
03 Mar 2010
2 min read
Save for later

Importing and Adding Background Music with Audacity 1.3

Packt
03 Mar 2010
2 min read
Audacity is commonly used to import music into your project, convert different audio files from one format to another, bring in multiple files and convert them, and more. In this article, we will learn how to add background music to your podcast, overdub, and fade in and out. We will also discuss some additional information about importing music from CDs, cassette tapes, and vinyl records. This Audacity tutorial has been taken from Getting Started with Audacity 1.3.

Importing digital music into Audacity

Before you can add background music to any Audacity project, you'll first have to import a digital music file into the project itself.

Importing WAV, AIFF, MP3, and MP4/M4A files

Audacity can import a number of music file formats. WAV, AIFF, and MP3 are the most common, but it can also import MP4/M4A files, as long as they are not rights-managed or copy-protected (like some songs purchased through stores such as iTunes). To import a song into Audacity:

1. Open your sample project.
2. From the main menu, select File, Import, and then Audio. The audio selection window is displayed.
3. Choose the music file from your computer, and then click on Open.

A new track is added to your project at the very bottom of the project window.

Importing music from iTunes

Your iTunes library can contain protected and unprotected music files. The main difference is that the protected files were typically purchased from the iTunes store and can't be played outside of that software. There is no easy way to determine visually which music tracks are protected or unprotected, so you can try both of the methods outlined next to import them into Audacity. However, remember that there are copyright laws for songs written and recorded by popular artists, so you need to investigate how to use music legally, whether for your own use or for distribution through a podcast.

Unprotected files from iTunes

If the songs that you want to import from iTunes aren't copy-protected, importing them is easy. Click-and-drag the song from the iTunes window and drop it into your Audacity project window (with your project open, of course). Within a few moments, the music track is shown at the bottom of your project window. The music track is now ready to be edited in as an introduction, or however you desire, into the main podcast file.

article-image-creating-a-simple-modular-application-in-java-11-tutorial
Prasad Ramesh
20 Feb 2019
11 min read
Save for later

Creating a simple modular application in Java 11 [Tutorial]

Prasad Ramesh
20 Feb 2019
11 min read
Modular programming enables one to organize code into independent, cohesive modules, which can be combined to achieve the desired functionality. This article is an excerpt from a book written by Nick Samoylov and Mohamed Sanaulla titled Java 11 Cookbook - Second Edition. In this book, you will learn how to implement object-oriented designs using classes and interfaces in Java 11. The complete code for the examples shown in this tutorial can be found on GitHub.

You should be wondering what this modularity is all about, and how to create a modular application in Java. In this article, we will try to clear up the confusion around creating modular applications in Java by walking you through a simple example. Our goal is to show you how to create a modular application; hence, we picked a simple example so as to focus on our goal. Our example is a simple advanced calculator, which checks whether a number is prime, calculates the sum of prime numbers, checks whether a number is even, and calculates the sum of even and odd numbers.

Getting ready

We will divide our application into two modules:

The math.util module, which contains the APIs for performing the mathematical calculations
The calculator module, which launches an advanced calculator

How to do it

Let's implement the APIs in the com.packt.math.MathUtil class, starting with the isPrime(Integer number) API:

public static Boolean isPrime(Integer number){
  if ( number == 1 ) { return false; }
  return IntStream.range(2, number).noneMatch(i -> number % i == 0);
}

Implement the sumOfFirstNPrimes(Integer count) API:

public static Integer sumOfFirstNPrimes(Integer count){
  return IntStream.iterate(1, i -> i+1)
                  .filter(j -> isPrime(j))
                  .limit(count).sum();
}

Let's write a function to check whether a number is even:

public static Boolean isEven(Integer number){
  return number % 2 == 0;
}

The negation of isEven tells us whether the number is odd. We can have functions to find the sum of the first N even numbers and the first N odd numbers, as shown here:

public static Integer sumOfFirstNEvens(Integer count){
  return IntStream.iterate(1, i -> i+1)
                  .filter(j -> isEven(j))
                  .limit(count).sum();
}

public static Integer sumOfFirstNOdds(Integer count){
  return IntStream.iterate(1, i -> i+1)
                  .filter(j -> !isEven(j))
                  .limit(count).sum();
}

We can see in the preceding APIs that the following operations are repeated:

An infinite sequence of numbers starting from 1
Filtering the numbers based on some condition
Limiting the stream of numbers to a given count
Finding the sum of the numbers thus obtained

Based on our observation, we can refactor the preceding APIs and extract these operations into a method, as follows (it is declared static so that the static APIs can call it; IntPredicate comes from the java.util.function package and IntStream from java.util.stream):

static Integer computeFirstNSum(Integer count, IntPredicate filter){
  return IntStream.iterate(1, i -> i+1)
                  .filter(filter)
                  .limit(count).sum();
}

Here, count is the limit of numbers we need to find the sum of, and filter is the condition for picking the numbers for summing. Let's rewrite the APIs based on the refactoring we just did:

public static Integer sumOfFirstNPrimes(Integer count){
  return computeFirstNSum(count, (i -> isPrime(i)));
}

public static Integer sumOfFirstNEvens(Integer count){
  return computeFirstNSum(count, (i -> isEven(i)));
}

public static Integer sumOfFirstNOdds(Integer count){
  return computeFirstNSum(count, (i -> !isEven(i)));
}

So far, we have seen a few APIs around mathematical computations. These APIs are part of our com.packt.math.MathUtil class.
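Before packaging these APIs into a module, a quick throwaway driver can confirm that they behave as expected. This Demo class is our own sketch, not part of the book's code; we assume it sits alongside MathUtil in the com.packt.math package:

package com.packt.math;

public class Demo {
  public static void main(String[] args) {
    System.out.println(MathUtil.isPrime(11));            // true
    System.out.println(MathUtil.isEven(11));             // false
    System.out.println(MathUtil.sumOfFirstNPrimes(5));   // 2+3+5+7+11 = 28
    System.out.println(MathUtil.sumOfFirstNEvens(3));    // 2+4+6 = 12
    System.out.println(MathUtil.sumOfFirstNOdds(3));     // 1+3+5 = 9
  }
}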
The complete code for this class can be found at Chapter03/2_simple-modular-math-util/math.util/com/packt/math, in the codebase downloaded for this book. Let's make this small utility class part of a module named math.util. The following are some conventions we use to create a module:

1. Place all the code related to the module under a directory named math.util and treat this as our module root directory.
2. In the root folder, insert a file named module-info.java.
3. Place the packages and the code files under the root directory.

What does module-info.java contain? The following:

The name of the module
The packages it exports, that is, the ones it makes available for other modules to use
The modules it depends on
The services it uses
The service for which it provides an implementation

Our math.util module doesn't depend on any other module (except, of course, the java.base module). However, it makes its API available to other modules (if not, then this module's existence is questionable). Let's go ahead and put this statement into code:

module math.util{
  exports com.packt.math;
}

We are telling the Java compiler and runtime that our math.util module is exporting the code in the com.packt.math package to any module that depends on math.util. The code for this module can be found at Chapter03/2_simple-modular-math-util/math.util.

Now, let's create another module, calculator, that uses the math.util module. This module has a Calculator class whose work is to accept the user's choice of which mathematical operation to execute, and then the input required to execute that operation. The user can choose from five available mathematical operations:

Prime number check
Even number check
Sum of N primes
Sum of N evens
Sum of N odds

Let's see this in code:

private static Integer acceptChoice(Scanner reader){
  System.out.println("************Advanced Calculator************");
  System.out.println("1. Prime Number check");
  System.out.println("2. Even Number check");
  System.out.println("3. Sum of N Primes");
  System.out.println("4. Sum of N Evens");
  System.out.println("5. Sum of N Odds");
  System.out.println("6. Exit");
  System.out.println("Enter the number to choose operation");
  return reader.nextInt();
}

Then, for each of the choices, we accept the required input and invoke the corresponding MathUtil API, as follows (each case body is wrapped in braces so that the local variables number and count stay scoped to their own case and do not clash):

switch(choice){
  case 1: {
    System.out.println("Enter the number");
    Integer number = reader.nextInt();
    if (MathUtil.isPrime(number)){
      System.out.println("The number " + number + " is prime");
    }else{
      System.out.println("The number " + number + " is not prime");
    }
    break;
  }
  case 2: {
    System.out.println("Enter the number");
    Integer number = reader.nextInt();
    if (MathUtil.isEven(number)){
      System.out.println("The number " + number + " is even");
    }
    break;
  }
  case 3: {
    System.out.println("How many primes?");
    Integer count = reader.nextInt();
    System.out.println(String.format("Sum of %d primes is %d",
        count, MathUtil.sumOfFirstNPrimes(count)));
    break;
  }
  case 4: {
    System.out.println("How many evens?");
    Integer count = reader.nextInt();
    System.out.println(String.format("Sum of %d evens is %d",
        count, MathUtil.sumOfFirstNEvens(count)));
    break;
  }
  case 5: {
    System.out.println("How many odds?");
    Integer count = reader.nextInt();
    System.out.println(String.format("Sum of %d odds is %d",
        count, MathUtil.sumOfFirstNOdds(count)));
    break;
  }
}

The complete code for the Calculator class can be found at Chapter03/2_simple-modular-math-util/calculator/com/packt/calculator/Calculator.java.
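The glue around acceptChoice and the switch block is a simple read-and-dispatch loop. The following is a minimal sketch of that wiring, not necessarily the exact code in the book's repository; the executeChoice helper is our own assumption, shown with only the first case inlined:

package com.packt.calculator;

import java.util.Scanner;
import com.packt.math.MathUtil;

public class Calculator {

  private static Integer acceptChoice(Scanner reader) {
    // The menu method shown earlier; shortened here for brevity.
    System.out.println("Enter the number to choose operation (6 to exit)");
    return reader.nextInt();
  }

  private static void executeChoice(Integer choice, Scanner reader) {
    // Assumed helper wrapping the switch block shown above; one case inlined.
    if (choice == 1) {
      System.out.println("Enter the number");
      Integer number = reader.nextInt();
      System.out.println("Prime? " + MathUtil.isPrime(number));
    }
    // ... cases 2-5 dispatch to the other MathUtil APIs, as in the switch above ...
  }

  public static void main(String[] args) {
    Scanner reader = new Scanner(System.in);
    Integer choice = acceptChoice(reader);
    while (choice != 6) {              // 6 is the Exit option
      executeChoice(choice, reader);
      choice = acceptChoice(reader);
    }
    System.out.println("Goodbye!");
    reader.close();
  }
}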
Let's create the module definition for our calculator module in the same way we created it for the math.util module:

module calculator{
  requires math.util;
}

In the preceding module definition, we specified that the calculator module depends on the math.util module by using the requires keyword. The code for this module can be found at Chapter03/2_simple-modular-math-util/calculator. Let's compile the code:

javac -d mods --module-source-path . $(find . -name "*.java")

The preceding command has to be executed from Chapter03/2_simple-modular-math-util. After it runs, you should have the compiled code from both modules, math.util and calculator, in the mods directory. Just a single command, and everything, including the dependency between the modules, is taken care of by the compiler. We didn't require build tools such as ant to manage the compilation of modules. The --module-source-path option is the new command-line option for javac, specifying the location of our module source code.

Let's execute the preceding code:

java --module-path mods -m calculator/com.packt.calculator.Calculator

The --module-path option, similar to --classpath, is the new command-line option for java, specifying the location of the compiled modules. After running the preceding command, you will see the calculator in action:

Congratulations! With this, we have a simple modular application up and running. We have provided scripts to test out the code on both Windows and Linux platforms. Please use run.bat for Windows and run.sh for Linux.

How it works

Now that you have been through the example, we will look at how to generalize it so that we can apply the same pattern in all our modules. We followed a particular convention to create the modules:

|application_root_directory
|--module1_root
|----module-info.java
|----com
|------packt
|--------sample
|----------MyClass.java
|--module2_root
|----module-info.java
|----com
|------packt
|--------test
|----------MyAnotherClass.java

We place the module-specific code within its folders, with a corresponding module-info.java file at the root of the folder. This way, the code is organized well. Let's look into what module-info.java can contain. From the Java language specification (http://cr.openjdk.java.net/~mr/jigsaw/spec/lang-vm.html), a module declaration is of the following form:

{Annotation} [open] module ModuleName { {ModuleStatement} }

Here's the syntax, explained:

{Annotation}: This is any annotation of the form @Annotation.
open: This keyword is optional. An open module makes all its components accessible at runtime via reflection. However, at compile time and runtime, only those components that are explicitly exported are accessible.
module: This is the keyword used to declare a module.
ModuleName: This is the name of the module, which is a valid Java identifier with a permissible dot (.) between the identifier names, similar to math.util.
{ModuleStatement}: This is a collection of the permissible statements within a module definition. Let's expand this next.

A module statement is of the following form:

ModuleStatement:
  requires {RequiresModifier} ModuleName ;
  exports PackageName [to ModuleName {, ModuleName}] ;
  opens PackageName [to ModuleName {, ModuleName}] ;
  uses TypeName ;
  provides TypeName with TypeName {, TypeName} ;

The module statement is decoded here:

requires: This is used to declare a dependency on a module. {RequiresModifier} can be transitive, static, or both.
Transitive means that any module that depends on the given module also implicitly depends on the module that is required by the given module transitively. Static means that the module dependence is mandatory at compile time, but optional at runtime. Some examples are requires math.util, requires transitive math.util, and requires static math.util.

exports: This is used to make the given packages accessible to the dependent modules. Optionally, we can restrict the package's accessibility to specific modules by specifying the module name, such as exports com.packt.math to calculator.

opens: This is used to open a specific package. We saw earlier that we can open a whole module by specifying the open keyword in the module declaration, but that is less restrictive. To make it more restrictive, we can open a specific package for reflective access at runtime by using the opens keyword: opens com.packt.math.

uses: This is used to declare a dependency on a service interface that is accessible via java.util.ServiceLoader. The service interface can be in the current module or in any module that the current module depends on.

provides: This is used to declare a service interface and provide it with at least one implementation. The service interface can be declared in the current module or in any other dependent module. However, the service implementation must be provided in the same module; otherwise, a compile-time error will occur.

We will look at the uses and provides clauses in more detail in the Using services to create loose coupling between the consumer and provider modules recipe.

The module source of all modules can be compiled at once using the --module-source-path command-line option. This way, all the modules will be compiled and placed in their corresponding directories under the directory provided by the -d option. For example, javac -d mods --module-source-path . $(find . -name "*.java") compiles the code in the current directory into a mods directory.

Running the code is equally simple. We specify the path where all our modules are compiled using the command-line option --module-path. Then, we mention the module name along with the fully qualified main class name using the command-line option -m, for example, java --module-path mods -m calculator/com.packt.calculator.Calculator.

In this tutorial, we learned to create a simple modular Java application. To learn more Java 11 recipes, check out the book Java 11 Cookbook - Second Edition.

Brian Goetz on Java futures at FOSDEM 2019
7 things Java programmers need to watch for in 2019
Clojure 1.10 released with Prepl, improved error reporting and Java compatibility