
Play With Functions

Packt
21 Feb 2018
6 min read
This article by Igor Wojda and Marcin Moskala, authors of the book Android Development with Kotlin, introduces functions in Kotlin, together with different ways of calling them.

Single-expression functions

In typical programming, many functions contain only one expression. Here is an example of this kind of function:

    fun square(x: Int): Int {
        return x * x
    }

Here is another one, often found in Android projects. It is a pattern used in an Activity to define methods that simply get text from some view, or provide other data from the view, so that the presenter can access it:

    fun getEmail(): String {
        return emailView.text.toString()
    }

Both functions are defined to return the result of a single expression. In the first example it is the result of the x * x multiplication, and in the second it is the result of the expression emailView.text.toString(). Functions like these are used all around Android projects. Here are some common use cases:

- Extracting small operations
- Using polymorphism to provide values specific to a class
- Functions that only create an object
- Functions that pass data between architecture layers (as in the preceding example, where the Activity passes data from the view to the presenter)
- Functional-style functions based on recursion

Such functions are used so often that Kotlin has a dedicated notation for them. When a function returns a single expression, the curly braces and the body of the function can be omitted; we specify the expression directly after an equality sign. Functions defined this way are called single-expression functions. Let's update our square function and define it as a single-expression function:

    fun square(x: Int): Int = x * x

As we can see, a single-expression function has an expression body instead of a block body. This notation is shorter, but the whole body needs to be a single expression. In single-expression functions, declaring the return type is optional, because it can be inferred by the compiler from the type of the expression. This is why we can simplify the square function further and define it this way:

    fun square(x: Int) = x * x

There are many places inside an Android application where we can utilize single-expression functions. Let's consider a RecyclerView adapter that provides a layout ID and creates a ViewHolder:

    class AddressAdapter : ItemAdapter<AddressAdapter.ViewHolder>() {
        override fun getLayoutId() = R.layout.choose_address_view
        override fun onCreateViewHolder(itemView: View) = ViewHolder(itemView)
        // Rest of methods
    }

In this example, we achieve high readability thanks to single-expression functions. Single-expression functions are also very popular in the functional world, and the notation pairs well with the when structure. Here is an example of their combination, used to get specific data from an object according to a key (a use case from a big Kotlin project):

    fun valueFromBooking(key: String, booking: Booking?) = when(key) {
        "patient.nin" -> booking?.patient?.nin
        "patient.email" -> booking?.patient?.email
        "patient.phone" -> booking?.patient?.phone
        "comment" -> booking?.comment
        else -> null
    }

We don't need to declare a return type, because it is inferred from the when expression.
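The Booking and Patient types referenced above are not defined in this excerpt. Here is a minimal sketch of what they might look like, just so the example compiles; the field names are inferred from the keys used above, and everything else is an assumption:

```kotlin
// Hypothetical data model backing valueFromBooking; the real classes
// from the project are not shown in this excerpt.
data class Patient(
    val nin: String? = null,
    val email: String? = null,
    val phone: String? = null
)

data class Booking(
    val patient: Patient? = null,
    val comment: String? = null
)
```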
Another common Android example is combining a when expression with the Activity method onOptionsItemSelected, which handles top bar menu clicks:

    override fun onOptionsItemSelected(item: MenuItem): Boolean = when {
        item.itemId == android.R.id.home -> {
            onBackPressed()
            true
        }
        else -> super.onOptionsItemSelected(item)
    }

As we can see, single-expression functions can make our code more concise and improve readability. They are commonly used in Kotlin Android projects, and they are really popular in functional programming. As an example, let's suppose that we need to filter all the odd values from the following list:

    val list = listOf(1, 2, 3, 4, 5)

We will use the following helper function, which returns true if the argument is odd and false otherwise:

    fun isOdd(i: Int) = i % 2 == 1

In the imperative programming style, we specify the steps of processing, which are tied to the execution process (iterate through the list, check whether a value is odd, add the value to a new list if it is). Here is an implementation of this functionality that is typical for the imperative style:

    var oddList = emptyList<Int>()
    for (i in list) {
        if (isOdd(i)) {
            oddList += i
        }
    }

In the declarative programming style, the way of thinking about code is different: we state what result is required and simply use functions that give us this result. The Kotlin standard library provides a lot of functions supporting the declarative style. Here is how we could implement the same functionality using one of them, called filter:

    var oddList = list.filter(::isOdd)

filter is a function that keeps only the elements for which a predicate returns true. Here, the function isOdd is used as the predicate.

Different ways of calling a function

Sometimes we need to call a function and provide only selected arguments. In Java we could create multiple overloads of the same method, but this solution has some limitations. The first problem is that the number of possible method permutations grows very quickly (2^n), making them very difficult to maintain. The second problem is that overloads must be distinguishable from each other so that the compiler knows which one to call, so when a method defines several parameters of the same type, we can't define all possible overloads. That's why in Java we often need to pass multiple null values to a method:

    // Java
    printValue("abc", null, null, "!");

Multiple null parameters add boilerplate and greatly decrease method readability. In Kotlin there is no such problem, because Kotlin has default arguments and named argument syntax.

Default argument values

Default arguments are mostly known from C++, one of the oldest languages supporting them. A default argument provides a value for a parameter in case it is not supplied during the method call. Each function parameter can have a default value, which may be any value matching the specified type, including null.
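The definition of printValue appeared as a screenshot in the original article and is missing from this excerpt. Here is a plausible reconstruction, inferred from the example calls that follow; the parameter names and the exact output formatting are assumptions:

```kotlin
// Hypothetical reconstruction of the printValue function used below.
// Parameter order and defaults are inferred from the example calls.
fun printValue(
    value: String,
    inBracket: Boolean = true,
    prefix: String = "",
    suffix: String = ""
) {
    if (inBracket) {
        println("$prefix($value)$suffix")
    } else {
        println("$prefix$value$suffix")
    }
}
```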
This way we can simply define functions that can be called in multiple ways. We can use such a function the same way as a normal function (a function without default argument values), by providing values for all parameters:

    printValue("str", true, "", "") // Prints: (str)

Thanks to default argument values, we can call the function by providing arguments only for the parameters without default values:

    printValue("str") // Prints: (str)

We can also provide all parameters without default values, plus only some of those that have a default value:

    printValue("str", false) // Prints: str

Named argument syntax

Sometimes we want to pass a value only for the last argument. Let's suppose that we want to define a value for suffix, but not for prefix and inBracket (which are declared before suffix). Normally we would have to provide values for all the previous parameters, including the default parameter values:

    printValue("str", true, "", "!") // Prints: (str)!

By using named argument syntax, we can pass a specific argument using the argument name:

    printValue("str", suffix = "!") // Prints: (str)!

We can also use named argument syntax together with the classic call. The only restriction is that once we start using named syntax, we cannot use the classic one for the arguments that follow:

    printValue("str", true, "")
    printValue("str", true, prefix = "")
    printValue("str", inBracket = true, prefix = "")

Summary

In this article, we learned about single-expression functions as a way of defining functions in application development. We also briefly explained default argument values and named argument syntax.

Further resources on this subject:
- Getting started with Android Development [article]
- Android Game Development with Unity3D [article]
- Kotlin Basics [article]

API Gateway and its Need

Packt
21 Feb 2018
9 min read
In this article by Umesh R Sharma, author of the book Practical Microservices, we will cover the API Gateway and the need for it, with simple and short examples.

Dynamic websites need to show a lot of information on a single page. A common example is the order success summary page, which shows the cart details and the customer address. For this, the frontend has to fire separate queries to the customer detail service and the order detail service. This is a very simple example of having multiple services behind a single page. Because a single microservice deals with only one concern, showing a lot of information on a page results in many API calls for that same page. So, a website or mobile page can be very chatty in terms of displaying data.

Another problem is that, sometimes, a microservice talks over a protocol other than HTTP, such as a Thrift call. Outside consumers can't deal directly with a microservice over such a protocol.

As a mobile screen is smaller than a web page, the data required by a mobile API call differs from that of a desktop call. A developer may want to return less data to the mobile API, or maintain different versions of the API calls for mobile and desktop. So you can face a problem like this: each client calls different web services and has to keep track of them, and developers have to maintain backward compatibility because API URLs are embedded in clients, as in a mobile app.

Why do we need the API Gateway?

All of these preceding problems can be addressed with an API Gateway in place. The API Gateway acts as a proxy between the API consumer and the API servers. To address the first problem in that scenario, there will be only one call, such as /successOrderSummary, to the API Gateway. The API Gateway, on behalf of the consumer, calls the order and user detail services, then combines the results and serves them to the client. So basically, it acts as a facade for API calls, which may internally call many APIs. The API Gateway serves many purposes, some of which are as follows.

Authentication

The API Gateway can take on the overhead of authenticating an API call from outside. After that, all the internal calls can skip the security check. If the request comes from inside the VPC, removing the security check decreases network latency a bit and lets developers focus more on business logic than on security concerns.

Different protocols

Sometimes, microservices internally use different protocols to talk to each other; these can be Thrift calls, TCP, UDP, RMI, SOAP, and so on. For clients, there can be a single REST-based HTTP call. Clients hit the API Gateway over HTTP, and the API Gateway can make the internal calls in the required protocols and combine the results from all the web services in the end. It can respond to the client in the required protocol; in most cases, that protocol will be HTTP.

Load balancing

The API Gateway can work as a load balancer to handle requests in the most efficient manner. It can keep track of the request load it has sent to different nodes of a particular service, and it should be intelligent enough to balance the load between the nodes of that service. With NGINX Plus coming into the picture, NGINX can be a good candidate for the API Gateway; it has many features to address the problems usually handled by an API Gateway.
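To make the authentication role described above concrete, here is a minimal sketch of a gateway-side check written as a pre-filter for Zuul (the gateway used in the example later in this article). The header name and the validation logic are illustrative assumptions, not part of the original article:

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;

// Hypothetical pre-filter: rejects calls without an API token before
// they are routed to any internal microservice.
public class AuthPreFilter extends ZuulFilter {

    @Override
    public String filterType() { return "pre"; }

    @Override
    public int filterOrder() { return 1; }

    @Override
    public boolean shouldFilter() { return true; }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        String token = request.getHeader("X-Api-Token"); // assumed header name
        if (token == null || !isValid(token)) {
            ctx.setSendZuulResponse(false);   // do not forward to services
            ctx.setResponseStatusCode(401);
            ctx.setResponseBody("Unauthorized");
        }
        return null;
    }

    private boolean isValid(String token) {
        // Placeholder for a real token check (JWT validation, lookup, and so on)
        return !token.isEmpty();
    }
}
```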
Request dispatching (including service discovery)

One main feature of the gateway is to reduce the communication between the client and the microservices. It initiates parallel calls to the microservices if that is what the client requires; from the client side, there is only one hit. The gateway hits all the required services, waits for the results from all of them, then combines the results and sends them back to the client. Reactive microservice design can help you achieve this.

Working with service discovery adds many extra features. Discovery can indicate which node of a service is the master and which is the slave. The same goes for a DB: write requests can go to the master and read requests to a slave. This is the basic rule, but users can apply many more rules on the basis of the meta information available to the API Gateway. The gateway can record the basic response time of each node of a service instance; higher priority API calls can then be routed to the fastest-responding node. Again, the rules that can be defined depend on the API Gateway you are using and how it is implemented.

Response transformation

Being the first and single point of entry for all API calls, the API Gateway knows which type of client is calling: mobile, web client, or another external consumer. It can make the internal calls on behalf of the client and serve the data to different clients as per their needs and configuration.

Circuit breaker

To handle partial failure, the API Gateway uses a technique called the circuit breaker pattern. A failure in one service can cause cascading failures in all the service calls up the stack. The API Gateway can watch a threshold for any microservice; if a service passes that threshold, it marks that API as an open circuit and decides not to make the call for a configured time. Hystrix (by Netflix) serves this purpose efficiently; the default here is 20 request failures in 5 seconds. Developers can also provide a fallback for an open circuit; this fallback can be a dummy service. Once the API starts giving results as expected, the gateway marks the circuit as closed again.

Pros and cons of the API Gateway

Using the API Gateway has its own pros and cons. The previous sections already described its advantages; I will still list them as the pros of the API Gateway.

Pros:
- Microservices can focus on business logic
- Clients can get all the data in a single hit
- Authentication, logging, and monitoring can be handled by the API Gateway
- It gives the flexibility to use completely independent protocols between clients and microservices
- It can give tailor-made results, as per the client's needs
- It can handle partial failure

In addition to the preceding pros, there are also some trade-offs to using this pattern.

Cons:
- It can cause performance degradation due to everything happening at the API Gateway
- A discovery service has to be implemented alongside it
- Sometimes it becomes a single point of failure
- Managing routing is an overhead of the pattern
- It adds an additional network hop to the call
- Overall, it increases the complexity of the system
- Too much logic implemented in this gateway will lead to another dependency problem

So, both aspects should be considered before using the API Gateway. Deciding to include the API Gateway in the system increases the cost as well.
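To illustrate the open/closed logic from the circuit breaker section above, here is a minimal, hand-rolled sketch in Java. The thresholds and names are illustrative; a production gateway would use a library such as Hystrix rather than this:

```java
import java.time.Duration;
import java.time.Instant;

// Toy circuit breaker: opens after a number of failures and stays open
// for a cool-down period before allowing calls through again.
public class SimpleCircuitBreaker {

    private final int failureThreshold;   // e.g. 20 failures
    private final Duration openDuration;  // e.g. 5 seconds
    private int failureCount = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    public synchronized boolean allowRequest() {
        if (openedAt == null) {
            return true; // circuit closed
        }
        if (Instant.now().isAfter(openedAt.plus(openDuration))) {
            openedAt = null;  // cool-down elapsed: try the service again
            failureCount = 0;
            return true;
        }
        return false;         // circuit open: caller should use a fallback
    }

    public synchronized void recordSuccess() {
        failureCount = 0;
        openedAt = null;      // mark the circuit closed again
    }

    public synchronized void recordFailure() {
        failureCount++;
        if (failureCount >= failureThreshold) {
            openedAt = Instant.now(); // trip the circuit
        }
    }
}
```

A gateway would call allowRequest() before dispatching to a service, and return the configured fallback whenever it returns false.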
Before putting effort, cost, and management into this pattern, it is recommended to analyze how much you can gain from it.

Example of API Gateway

In this example, we will show only a sample product page that fetches data from the product detail service to give information about the product. This example could be extended in many directions; our focus here is only to show how the API Gateway pattern works, so we will keep it simple and small.

This example uses Zuul from Netflix as the API Gateway. Spring also has an implementation of Zuul in it, so we are creating this example with Spring Boot. For a sample API Gateway implementation, we will use http://start.spring.io/ to generate an initial template of our code. Spring Initializr is the project from Spring that helps beginners generate basic Spring Boot code. A user has to set a minimum configuration and can hit the Generate Project button. If a user wants to set more specific details regarding the project, they can see all the configuration settings by clicking on the Switch to the full version button, as shown in the following screenshot:

Let's create a controller in the same package as the main application class and put the following code in the file:

    @SpringBootApplication
    @RestController
    public class ProductDetailController {

        @Resource
        ProductDetailService pdService;

        @RequestMapping(value = "/product/{id}")
        public ProductDetail getAllProduct(@PathVariable("id") String id) {
            return pdService.getProductDetailById(id);
        }
    }

In the preceding code, the assumption is that the pdService bean interacts with a Spring Data repository for product details and fetches the result for the required product ID. Another assumption is that this service is running on port 10000. Just to make sure everything is running, hitting a URL such as http://localhost:10000/product/1 should return some JSON as a response.

For the API Gateway, we will create another Spring Boot application with Zuul support. Zuul can be activated by just adding the simple @EnableZuulProxy annotation. The following is a simple piece of code to start the Zuul proxy:

    @SpringBootApplication
    @EnableZuulProxy
    public class ApiGatewayExampleInSpring {
        public static void main(String[] args) {
            SpringApplication.run(ApiGatewayExampleInSpring.class, args);
        }
    }

Everything else is managed in configuration. In the application.properties file of the API Gateway, the content will be something like this:

    zuul.routes.product.path=/product/**
    zuul.routes.product.url=http://localhost:10000
    ribbon.eureka.enabled=false
    server.port=8080

With this configuration, we are defining a rule like this: for any request to a URL such as /product/xxx, pass the request on to http://localhost:10000. To the outside world, the URL will look like http://localhost:8080/product/1, which is internally forwarded to port 10000. If we had defined a spring.application.name variable with the value product in the product detail microservice, then we wouldn't need to define the path property here (zuul.routes.product.path=/product/**), as Zuul will, by default, map the route to URL/product.

The example taken here for an API Gateway is not very intelligent, but Zuul is a very capable API Gateway. Depending on the routes, filters, and caching defined in Zuul's properties, one can make a very powerful API Gateway.
Summary

In this article, you learned about the API Gateway, the need for it, and its pros and cons, with a code example.

Further resources on this subject:
- What are Microservices? [article]
- Microservices and Service Oriented Architecture [article]
- Breaking into Microservices Architecture [article]

How to use Standard Macro in Workflows

Sunith Shetty
21 Feb 2018
6 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book written by Renato Baruti titled Learning Alteryx. In this book, you will learn how to perform self-service analytics and create interactive dashboards using various tools in Alteryx.[/box]

Today we will learn about the Standard Macro, which will provide you with a foundation for building enhanced workflows. The CSV file required for this tutorial is available to download here.

Standard Macro

Before getting into the Standard Macro, let's define what a macro is. A macro is a collection of workflow tools that are grouped together into one tool. Using a range of different Interface tools, a macro can be developed and used within a workflow. Any workflow can be turned into a macro, and a repeatable element of a workflow is what is commonly converted into a macro.

There are a couple of ways you can turn your workflow into a Standard Macro. The first is to go to the canvas configuration pane and navigate to the Workflow tab. This is where you select what type of workflow you want. If you select Macro, Standard Macro should then be selected automatically. Now, when you save this workflow, it will save as a macro. You'll then be able to add it to another workflow and run the process created within the macro itself. The second method is simply to add a Macro Input tool from the Interface tool section onto the canvas; the workflow will then automatically change to a Standard Macro. The following screenshot shows the selection of a Standard Macro under the Workflow tab:

Let's go through an example of creating and deploying a Standard Macro.

Standard Macro Example #1: Create a macro that allows the user to input a number used as a multiplier. Use the multiplier for the DataValueAlt field. The following steps demonstrate this process:

Step 1: Select the Macro Input tool from the Interface tool palette and add the tool onto the canvas. The workflow will automatically change to a Standard Macro.
Step 2: Select the Text Input option and then Edit Data within the Macro Input tool configuration.
Step 3: Create a field called Number and enter the values 155, 243, 128, 352, and 357 in each row, as shown in the following image:
Step 4: Rename the Input Name to Input and set the Anchor Abbreviation to I, as shown in the following image:
Step 5: Select the Formula tool from the Preparation tool palette. Connect the Formula tool to the Macro Input tool.
Step 6: Select the + Add Column option in the Select Column drop-down within the Formula tool configuration. Name the field Result.
Step 7: Add the following expression to the expression window: [Number]*0.50
Step 8: Select the Macro Output tool from the Interface tool palette and add the tool onto the canvas. Connect the Macro Output tool to the Formula tool.
Step 9: Rename the Output Name to Output and set the Anchor Abbreviation to O:

The Standard Macro has now been created. It can be saved and used as a multiplier, calculating the five numbers added within the Macro Input tool multiplied by 0.50. This is great; however, let's take it a step further and make it dynamic and flexible by allowing the user to enter the multiplier. For instance, currently the multiplier is set to 0.50, but what if a user wants to change it to 0.25 or 0.10 to determine 25% or 10% of a field's value? Let's continue building out the Standard Macro to make this possible.

Step 1: Select the Text Box tool from the Interface tool palette and drag it onto the canvas.
Connect the Text Box tool to the Formula tool on the lightning bolt (the macro indicator). The Action tool will automatically be added to the canvas, as it automatically updates the configuration of a workflow with the values provided by interface questions when the workflow is run as an app or macro.
Step 2: Configure the Action tool so that it automatically updates the expression by replacing a specific string. Select Formula | FormulaFields | FormulaField | @expression - value="[Number]*0.50". Select the Replace a specific string: option and enter 0.50. This is where the automation happens, updating the 0.50 to any number the user enters. You will see how this happens in the following steps:
Step 3: In the Enter the text or question to be displayed text box, within the Text Box tool configuration, enter: Please enter a number:
Step 4: Save the workflow as Standard Macro.yxmc. The .yxmc file type indicates that it is a macro workflow, as shown in the following image:
Step 5: Open a new workflow.
Step 6: Select the Input Data tool from the In/Out tool palette and connect to the U.S. Chronic Disease Indicators.csv file.
Step 7: Select the Select tool from the Preparation tool palette and drag it onto the canvas. Connect the Select tool to the Input Data tool.
Step 8: Change the Data Type for the DataValueAlt field to Double.
Step 9: Right-click on the canvas and select Insert | Macro | Standard Macro.
Step 10: Connect the Standard Macro to the Select tool.
Step 11: There will be questions to answer within the Standard Macro tool configuration. Select DataValueAlt (Double) as the Choose Field option and enter 0.25 in the Please enter a number text box:
Step 12: Add a Browse tool to the Standard Macro tool.
Step 13: Run the workflow:

The goal of creating this Standard Macro was to allow the user to select what they would like the multiplier to be, rather than using a static number. Let's recap what has been created and deployed using a Standard Macro. First, Standard Macro.yxmc was developed using Interface tools. The Macro Input (I) was used to enter sample text data for the Number field. This Number field is what gets multiplied by the given multiplier, in this case the static multiplier 0.50. The Formula tool was used to create the expression stating that the Number field will be multiplied by 0.50. The Macro Output (O) was used to output the macro so that it can be used in another workflow. The Text Box tool is where the question Please enter a number is displayed, along with the Action tool that updates the specific value being replaced. The current multiplier, 0.50, is replaced by 0.25, as entered in Step 11, through a dynamic input with which the user can set the multiplier.

Notice that, in the Browse tool output, the Result field has been added, multiplying the values of the DataValueAlt field by the multiplier 0.25. Change the value in the macro to 0.10 and run the workflow: the Result field is updated to multiply the values of the DataValueAlt field by 0.10. This is a great use case for a Standard Macro and demonstrates how versatile the Interface tools are.

We learned about macros and their dynamic use within workflows. We saw how a Standard Macro was developed to allow the end user to specify what they want the multiplier to be. This is a great way to implement interactivity within a workflow.

To know more about high-quality interactive dashboards and efficient self-service data analytics, do check out the book Learning Alteryx.

VMware vSphere storage, datastores, snapshots

Packt
21 Feb 2018
9 min read
In this article, by Abhilash G B, author of the book VMware vSphere 6.5 Cookbook - Third Edition, we will cover the following:

- Managing VMFS volumes detected as snapshots
- Creating NFSv4.1 datastores with Kerberos authentication
- Enabling Storage I/O Control

Introduction

Storage is an integral part of any infrastructure. It is used to store the files backing your virtual machines. The most common way to refer to a type of storage presented to a VMware environment is based on the protocol used and the connection type. NFS storage solutions can leverage the existing TCP/IP network infrastructure; hence, they are referred to as IP-based storage. Storage I/O Control (SIOC) is one of the mechanisms used to ensure a fair share of storage bandwidth allocation to all virtual machines running on shared storage, regardless of the ESXi host the virtual machines are running on.

Managing VMFS volumes detected as snapshots

Some environments maintain copies of production LUNs as a backup by replicating them. These replicas are exact copies of LUNs that were already presented to the ESXi hosts. If, for any reason, a replicated LUN is presented to an ESXi host, the host will not mount the VMFS volume on the LUN. This is a precaution to prevent data corruption.

ESXi identifies each VMFS volume using its signature, denoted by a Universally Unique Identifier (UUID). The UUID is generated when the volume is first created or resignatured, and it is stored in the LVM header of the VMFS volume. When an ESXi host scans for new LUN devices and the VMFS volumes on them, it compares the physical device ID (NAA ID) of the LUN with the device ID (NAA ID) value stored in the VMFS volume's LVM header. If it finds a mismatch, it flags the volume as a snapshot volume.

Volumes detected as snapshots are not mounted by default. There are two ways to mount such volumes/datastores:

- Mount by keeping the existing signature intact: This is used when you are attempting to temporarily mount the snapshot volume on an ESXi host that doesn't see the original volume. If you were to attempt mounting the VMFS volume by keeping the existing signature while the host sees the original volume, you would not be allowed to mount the volume and would be warned about the presence of another VMFS volume with the same UUID.
- Mount by generating a new VMFS signature: This has to be used if you are mounting a clone or a snapshot of an existing VMFS datastore to the same host(s). The process of assigning a new signature updates not only the LVM header with the newly generated UUID, but also the physical device ID (NAA ID) of the snapshot LUN. Here, the VMFS volume/datastore is renamed by prefixing the word snap, followed by a random number and the name of the original datastore.

Getting ready

Make sure that the original datastore and its LUN are no longer seen by the ESXi host the snapshot is being mounted to.
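If you prefer the command line, the same detection and mount operations can also be performed from the ESXi shell. A brief sketch follows; the volume label is a placeholder for your own datastore name:

```shell
# List VMFS volumes the host has detected as snapshots (unresolved volumes)
esxcli storage vmfs snapshot list

# Mount a snapshot volume while keeping its existing signature
esxcli storage vmfs snapshot mount -l "Datastore01"

# Or mount it by assigning a new signature (resignature)
esxcli storage vmfs snapshot resignature -l "Datastore01"
```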
How to do it...

The following procedure will help you mount a VMFS volume from a LUN detected as a snapshot:

1. Log in to the vCenter Server using the vSphere Web Client and use the key combination Ctrl+Alt+2 to switch to the Hosts and Clusters view.
2. Right-click on the ESXi host the snapshot LUN is mapped to and go to Storage | New Datastore.
3. On the New Datastore wizard, select VMFS as the filesystem type and click Next to continue.
4. On the Name and Device selection screen, select the LUN detected as a snapshot and click Next to continue.
5. On the Mount Option screen, choose to either mount by assigning a new signature or by keeping the existing signature, and click Next to continue.
6. On the Ready to Complete screen, review the settings and click Finish to initiate the operation.

Creating NFSv4.1 datastores with Kerberos authentication

VMware introduced support for NFS 4.1 with vSphere 6.0, and vSphere 6.5 added several enhancements:

- It now supports AES encryption
- Support for IP version 6
- Support for Kerberos's integrity checking mechanism

Here, we will learn how to create NFS 4.1 datastores. Although the procedure is similar to NFSv3, there are a few additional steps that need to be performed.

Getting ready

- For Kerberos authentication to work, you need to make sure that the ESXi hosts and the NFS server are joined to the Active Directory domain
- Create a new AD user, or select an existing one, for NFS Kerberos authentication
- Configure the NFS server/share to allow access to the AD user chosen for NFS Kerberos authentication

How to do it...

The following procedure will help you mount an NFS datastore using the NFSv4.1 client with Kerberos authentication enabled:

1. Log in to the vCenter Server using the vSphere Web Client and use the key combination Ctrl+Alt+2 to switch to the Hosts and Clusters view. Select the desired ESXi host, navigate to its Configure | System | Authentication Services section, and supply the credentials of the Active Directory user chosen for NFS Kerberos authentication.
2. Right-click on the desired ESXi host and go to Storage | New Datastore to bring up the Add Storage wizard.
3. On the New Datastore wizard, select NFS as the type and click Next to continue.
4. On the Select NFS version screen, select NFS 4.1 and click Next to continue. Keep in mind that it is not recommended to mount an NFS export using both the NFS 3 and NFS 4.1 clients.
5. On the Name and Configuration screen, supply a name for the datastore, the NFS export's folder path, and the NFS server's IP address or FQDN. You can also choose to mount the share as read-only if desired.
6. On the Configure Kerberos Authentication screen, check the Enable Kerberos-based authentication box, choose the type of authentication required, and click Next to continue.
7. On the Ready to Complete screen, review the settings and click Finish to mount the NFS export.

Enabling Storage I/O Control

The use of disk shares works just fine as long as the datastore is seen by a single ESXi host. Unfortunately, that is not a common case: datastores are often shared among multiple ESXi hosts. When datastores are shared, you bring more than one local host scheduler into the process of balancing the I/O among the virtual machines. However, these local host schedulers cannot talk to each other, and their visibility is limited to the ESXi hosts they are running on. This easily contributes to a serious problem called the noisy neighbor situation.
The job of SIOC is to enable some form of communication between the local host schedulers so that I/O can be balanced between virtual machines running on separate hosts.

How to do it...

The following procedure will help you enable SIOC on a datastore:

1. Connect to the vCenter Server using the Web Client and switch to the Storage view using the key combination Ctrl+Alt+4.
2. Right-click on the desired datastore and go to Configure Storage I/O Control.
3. On the Configure Storage I/O Control window, select the checkbox Enable Storage I/O Control, set a custom congestion threshold (only if needed), and click OK to confirm the settings.
4. With the virtual machine selected from the inventory, navigate to its Configure | General tab and review its datastore capability settings to ensure that SIOC is enabled.

How it works...

As mentioned earlier, SIOC enables communication between these local host schedulers so that I/O can be balanced between virtual machines running on separate hosts. It does so by maintaining a shared file in the datastore that all hosts can read, write, and update. When SIOC is enabled on a datastore, it starts monitoring the device latency on the LUN backing the datastore. If the latency crosses the threshold, it throttles the LUN's queue depth on each of the ESXi hosts in an attempt to distribute a fair share of access to the LUN among all the virtual machines issuing I/O.

The local scheduler on each ESXi host maintains an iostats file to keep its companion hosts aware of the device I/O statistics observed on the LUN. The file is placed in a directory (naa.xxxxxxxxx) on the same datastore.

For example, suppose there are six virtual machines running on three different ESXi hosts, accessing a shared LUN. Among the six VMs, four have a normal share value of 1000 and the remaining two have a high share value (2000) set on them. These virtual machines have only a single VMDK attached to them. VM-C on host ESX-02 is issuing a large number of I/O operations. Since it is the only VM accessing the shared LUN from that host, it gets the entire queue's bandwidth. This can induce latency on the I/O operations performed by the other VMs on ESX-01 and ESX-03. If SIOC detects a latency value greater than the dynamic threshold, it will start throttling the queue depth.

The throttled DQLEN for a VM is calculated as follows:

    DQLEN for the VM = (VM's percentage of shares) of (queue depth)
    Example: 12.5% of 64 → (12.5 * 64) / 100 = 8

The throttled DQLEN per host is calculated as follows:

    DQLEN of the host = sum of the DQLEN values of the VMs on it
    Example: VM-A (8) + VM-B (16) = 24

The following diagram shows the effect of SIOC throttling the queue depth:

Summary

In this article, we learned how to mount a VMFS volume from a LUN detected as a snapshot, how to mount an NFS datastore using the NFSv4.1 client with Kerberos authentication enabled, and how to enable SIOC on a datastore.

Further resources on this subject:
- Essentials of VMware vSphere [article]
- Working with VMware Infrastructure [article]
- Network Virtualization and vSphere [article]

Exchange Management Shell Common Tasks

Packt
21 Feb 2018
11 min read
In this article by Jonas Andersson, Nuno Mota, and Michael Pfeiffer, the authors of the book Microsoft Exchange Server 2016 PowerShell Cookbook, we will cover:

- Manually configuring remote PowerShell connections
- Using explicit credentials with PowerShell cmdlets

Microsoft introduced some radical architectural changes in Exchange 2007, including a brand-new set of management tools. PowerShell, along with an additional set of Exchange Server specific cmdlets, finally gave administrators an interface that could be used to manage the entire product from a command-line shell. This was an interesting move, and at that time the entire graphical management console was built on top of this technology. The same architecture still existed in Exchange 2010, and PowerShell was even more tightly integrated with that product. Exchange 2010 used PowerShell v2, which relied heavily on its new remoting infrastructure. This provides seamless administrative capabilities from a single seat with the Exchange Management Tools, whether your servers are on-premises or in the cloud.

When Exchange 2013 was initially released, it used version 4 of PowerShell, and during its life cycle it could be updated to version 5, with a lot of new cmdlets, core functionality changes, and even more integration with cloud services. Now, with Exchange 2016, we have even more cmdlets and even more cloud-related integration and services. During the initial work on this book, we had 839 cmdlets with Cumulative Update 4, which was released in December 2016. This can be compared with the previous book, where at that stage we had 806 cmdlets based on Service Pack 1 and Cumulative Update 7. This gives us the impression that Microsoft is working heavily on the integrations and that development of the on-premises product is still ongoing. It demonstrates that more features and functionality have been added over time, and it will most likely continue like this in the future.

In this article, we'll cover some of the most common topics, as well as common tasks, that will allow you to effectively write scripts with this latest release. We'll also take a look at some general tasks, such as scheduling scripts, sending emails, generating reports, and more.

Performing some basic steps

To work with the code samples in this article, follow these steps to launch the Exchange Management Shell:

1. Log on to a workstation or server with the Exchange Management Tools installed.
2. You can connect using remote PowerShell if, for some reason, you don't have the Exchange Management Tools installed. Use the following command:

    $Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://tlex01/PowerShell/ `
    -Authentication Kerberos
    Import-PSSession $Session

3. Open the Exchange Management Shell by clicking the Windows button and going to Microsoft Exchange Server 2016 | Exchange Management Shell.

Remember to start the Exchange Management Shell using Run as Administrator to avoid permission problems. Notice that, in the cmdlet examples in this article, the backtick (`) character is used to break long commands into multiple lines to make them easier to read. The backticks are not required and should only be used if needed. Also notice that the Exchange variables, such as $exscripts, are not available when using the preceding method.
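As a quick sanity check after importing the session, you can run one of the imported cmdlets and then clean up. A brief sketch, assuming the $Session variable from the preceding command:

```powershell
# Verify that implicit remoting works by running an imported cmdlet
Get-Mailbox -ResultSize 5

# Remove the session when you are done to free up the connection
Remove-PSSession $Session
```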
Manually configuring remote PowerShell connections

Just like Exchange 2013, Exchange 2016 relies heavily on remote PowerShell, both on-premises and in cloud services. When you double-click the Exchange Management Shell shortcut on a server or workstation with the Exchange Management Tools installed, you are connected to an Exchange server using a remote PowerShell session. PowerShell remoting also allows you to remotely manage your Exchange servers from a workstation or a server even when the Exchange Management Tools are not installed. In this recipe, we'll create a manual remote shell connection to an Exchange server using a standard PowerShell console.

Getting ready

To complete the steps in this recipe, you'll need to log on to a workstation or a server and launch Windows PowerShell.

How to do it...

1. First, create a credential object using the Get-Credential cmdlet. When running this command, you'll be prompted with a Windows authentication dialog box. Enter a username and password for an account that has administrative access to your Exchange organization. Make sure you enter your username in DOMAIN\USERNAME or UPN format:

    $credential = Get-Credential

2. Next, create a new session object and store it in a variable. In this example, the Exchange server we are connecting to is specified using the -ConnectionUri parameter. Replace the server FQDN in the following example with one of your own Exchange servers:

    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://tlex01.testlabs.se/PowerShell/ `
    -Credential $credential

3. Finally, import the session object:

    Import-PSSession $session -AllowClobber

After you execute the preceding command, the Exchange Management Shell cmdlets will be imported into your current Windows PowerShell session, as shown in the following screenshot:

How it works...

Each server runs IIS and supports remote PowerShell sessions through HTTP. Exchange servers host a PowerShell virtual directory in IIS. This contains several modules that perform authentication checks and determine which cmdlets and parameters are assigned to the user making the connection. This happens both when running the Exchange Management Shell with the tools installed and when creating a manual remote connection. The IIS virtual directory used for connecting is shown in the following screenshot:

The IIS virtual directories can also be retrieved using PowerShell with the Get-WebVirtualDirectory cmdlet. For information about the web applications, use the Get-WebApplication cmdlet.

Remote PowerShell connections to Exchange 2016 servers are made in almost the same way as in Exchange 2013. This is called implicit remoting, and it allows us to import remote commands into the local shell session. With this feature, we can use the Exchange PowerShell cmdlets installed on the Exchange server and load them into our local PowerShell session without installing any management tools. However, the detailed behavior for establishing a remote PowerShell session changed in Exchange 2013 CU11. What happens now, when a user or admin tries to establish the PowerShell session, is that it first tries to connect to the user's or admin's mailbox (the anchor mailbox), if there is one. If the user doesn't have an existing mailbox, the PowerShell request is redirected to the organization arbitration mailbox named SystemMailbox{bb558c35-97f1-4cb9-8ff7-d53741dc928c}.
You may be curious as to why Exchange uses remote PowerShell even when the tools are installed and the shell is run from the server. There are a couple of reasons for this, but one of the main factors is permissions. The Exchange 2010, 2013, and 2016 permissions model has been completely transformed in these latest versions and uses a feature called Role Based Access Control (RBAC), which defines what administrators can and cannot do. When you make a remote PowerShell connection to an Exchange 2016 server, the RBAC authorization module in IIS determines which cmdlets and parameters you have access to. Once this information is obtained, only the cmdlets and parameters that have been assigned to your account through an RBAC role are loaded into your PowerShell session using implicit remoting.

There's more...

In the previous example, we explicitly set the credentials used to create the remote shell connection. This is optional, and it is not required if the account you are currently logged on with has the appropriate Exchange permissions assigned. To create a remote shell session using your currently logged-on credentials, use the following syntax to create the session object:

    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://tlex01.testlabs.se/PowerShell/

Once again, import the session:

    Import-PSSession $session

When the tasks have been completed, remove the session:

    Remove-PSSession $session

You can see here that the commands are almost identical to the previous example, except that this time we've removed the -Credential parameter and the assigned credential object. After this is done, you can simply import the session, and the commands will be imported into your current session using implicit remoting.

In addition to implicit remoting, Exchange 2016 servers running PowerShell v5 or above can also be managed using fan-out remoting. This is accomplished using the Invoke-Command cmdlet, and it allows you to execute a script block on multiple computers in parallel. For more details, run Get-Help Invoke-Command and Get-Help about_remoting.

Since Exchange Online is commonly used by Microsoft customers nowadays, let's take a look at an example of how to connect to it as well. It's very similar to connecting to remote PowerShell on-premises. The following prerequisites are required: .NET Framework 4.5 or 4.5.1, and then either Windows Management Framework 3.0 or 4.0.

Create a variable with the credentials:

    $UserCredential = Get-Credential

Create a session variable:

    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://outlook.office365.com/powershell-liveid/ `
    -Credential $UserCredential -Authentication Basic `
    -AllowRedirection

Finally, import the session:

    Import-PSSession $session -AllowClobber

Perform the tasks you want to do:

    Get-Mailbox

The Exchange Online mailboxes are shown in the following screenshot:

When the tasks have been completed, remove the session:

    Remove-PSSession $session

Using explicit credentials with PowerShell cmdlets

There are several PowerShell and Exchange Management Shell cmdlets that provide a credential parameter, which allows you to use an alternate set of credentials when running a command. You may need to use alternate credentials when making manual remote shell connections, sending email messages, working in cross-forest scenarios, and more.
In this recipe, we'll take a look at how you can create a credential object that can be used with commands that support the -Credential parameter.

How to do it...

To create a credential object, we can use the Get-Credential cmdlet. In this example, we store the credential object in a variable that can be used by the Get-Mailbox cmdlet:

    $credential = Get-Credential
    Get-Mailbox -Credential $credential

How it works...

When you run the Get-Credential cmdlet, you are presented with a Windows authentication dialog box requesting your username and password. In the previous example, we assigned the output of the Get-Credential cmdlet to the $credential variable. After typing your username and password into the authentication dialog box, the credentials are saved as an object that can then be assigned to the -Credential parameter of a cmdlet. The cmdlet that utilizes the credential object will then run using the credentials of the specified user.

Supplying credentials to a command doesn't have to be an interactive process. You can programmatically create a credential object within your script without using the Get-Credential cmdlet:

    $user = "testlabs\administrator"
    $pass = ConvertTo-SecureString -AsPlainText P@ssw0rd01 -Force
    $credential = New-Object `
    System.Management.Automation.PSCredential `
    -ArgumentList $user,$pass

You can see here that we've created a credential object from scratch without using the Get-Credential cmdlet. In order to create a credential object, we need to supply the password as a secure string type. The ConvertTo-SecureString cmdlet can be used to create a secure string object. We then use the New-Object cmdlet to create a credential object, specifying the desired username and password as arguments.

If you need to prompt a user for their credentials but do not want to invoke the Windows authentication dialog box, you can use this alternative syntax to prompt the user in the shell for their credentials:

    $user = Read-Host "Please enter your username"
    $pass = Read-Host "Please enter your password" -AsSecureString
    $credential = New-Object `
    System.Management.Automation.PSCredential `
    -ArgumentList $user,$pass

This syntax uses the Read-Host cmdlet to prompt the user for both their username and password. Notice that when creating the $pass object, we use Read-Host with the -AsSecureString parameter to ensure that the object is stored as a secure string.

There's more...

After you've created a credential object, you may need to access its properties to retrieve the username and password. We can access the username and password properties of the $credential object created previously using the following commands:

    $credential.UserName
    $credential.GetNetworkCredential().Password

You can see here that we can simply grab the username stored in the object by accessing the UserName property of the credential object. Since the Password property is stored as a secure string, we need to use the GetNetworkCredential method to convert the credential to a NetworkCredential object, which exposes the Password property as a simple string.

Another powerful method for managing passwords for scripts is to encrypt them and store them in a text file. This can be easily done using the following example.
The password is stored in a variable:

    $secureString = Read-Host -AsSecureString "Enter a secret password"

The variable is converted from a SecureString and saved to a text file:

    $secureString | ConvertFrom-SecureString | Out-File .\storedPassword.txt

The content of the text file is retrieved and converted back into a SecureString value:

    $secureString = Get-Content .\storedPassword.txt | ConvertTo-SecureString

Summary

In this article, we covered how to manually set up remote PowerShell connections and how to work with PowerShell cmdlets.

Further resources on this subject:
- Exploring Windows PowerShell 5.0 [article]
- Working with PowerShell [article]
- How to use PowerShell Web Access to manage Windows Server [article]

Installing TensorFlow in Windows, Ubuntu and Mac OS

Amarabha Banerjee
21 Feb 2018
7 min read
[box type="note" align="" class="" width=""]This article is taken from the book Machine Learning with TensorFlow 1.x, written by Quan Hua, Shams Ul Azeem and Saif Ahmed. This book will help you tackle common commercial machine learning problems with Google's TensorFlow 1.x library.[/box]

Today, we shall explore the basics of getting started with TensorFlow: its installation and configuration process.

The proliferation of large public datasets, inexpensive GPUs, and an open-minded developer culture has revolutionized machine learning efforts in recent years. Training data, the lifeblood of machine learning, has become widely available and easily consumable. Computing power has made the required horsepower available to small businesses and even individuals. The current decade is incredibly exciting for data scientists.

Some of the top platforms used in the industry include Caffe, Theano, and Torch. While these underlying platforms are actively developed and openly shared, usage is limited largely to machine learning practitioners, due to difficult installations, non-obvious configurations, and difficulty with productionizing solutions. TensorFlow has one of the easiest installations of any platform, bringing machine learning capabilities squarely into the realm of casual tinkerers and novice programmers. Meanwhile, high-performance features, such as multi-GPU support, make the platform exciting for experienced data scientists and industrial use as well. TensorFlow also provides a reimagined process and multiple user-friendly utilities, such as TensorBoard, to manage machine learning efforts. Finally, the platform has significant backing and community support from the world's largest machine learning powerhouse, Google. All this is before even considering the compelling underlying technical advantages, which we'll dive into later.

Installing TensorFlow

TensorFlow conveniently offers several types of installation and operates on multiple operating systems. The basic installation is CPU-only, while more advanced installations unleash serious horsepower by pushing calculations onto the graphics card, or even onto multiple graphics cards. We recommend starting with a basic CPU installation at first. More complex GPU and CUDA installations are discussed in Appendix, Advanced Installation.

Even with just a basic CPU installation, TensorFlow offers multiple options, which are as follows:

- A basic Python pip installation
- A segregated Python installation via Virtualenv
- A fully segregated container-based installation via Docker

Ubuntu installation

Ubuntu is one of the best Linux distributions for working with TensorFlow. We highly recommend that you use an Ubuntu machine, especially if you want to work with GPUs. We will do most of our work on the Ubuntu terminal. We will begin by installing python-pip and python-dev via the following command:

    sudo apt-get install python-pip python-dev

A successful installation will appear as follows:

If you find missing packages, you can correct them via the following command:

    sudo apt-get update --fix-missing

Then, you can continue with the python and pip installation. We are now ready to install TensorFlow. The CPU installation is initiated via the following command:

    sudo pip install tensorflow

A successful installation will appear as follows:

macOS installation

If you use Python, you will probably already have the Python package installer, pip. However, if not, you can easily install it using the easy_install pip command.
You'll note that we actually executed sudo easy_install pip; the sudo prefix was required because the installation requires administrative rights. We will make the fair assumption that you already have the basic package installer, easy_install, available; if not, you can install it from https://pypi.python.org/pypi/setuptools. A successful installation will appear as shown in the following screenshot:

Next, we will install the six package:

    sudo easy_install --upgrade six

A successful installation will appear as shown in the following screenshot:

Surprisingly, those are the only two prerequisites for TensorFlow, and we can now install the core platform. We will use the pip package installer mentioned earlier and install TensorFlow directly from Google's site. The most recent version at the time of writing this book is v1.3, but you should change this to the latest version you wish to use:

    sudo pip install tensorflow

The pip installer will automatically gather all the other required dependencies. You will see each individual download and installation until the software is fully installed. A successful installation will appear as shown in the following screenshot:

That's it! If you were able to get to this point, you can start to train and run your first model. Skip to Chapter 2, Your First Classifier, to train your first model. macOS users wishing to completely segregate their installation can use a VM instead, as described in the Windows installation.

Windows installation

As we mentioned earlier, TensorFlow with Python 2.7 does not function natively on Windows. In this section, we will guide you through installing TensorFlow with Python 3.5, and through setting up a VM with Linux if you want to use TensorFlow with Python 2.7. First, we need to install Python 3.5.x or 3.6.x 64-bit from one of the following links:

- https://www.python.org/downloads/release/python-352/
- https://www.python.org/downloads/release/python-362/

Make sure that you download the 64-bit version of Python, where the name of the installer contains amd64, such as python-3.6.2-amd64.exe. The Python 3.6.2 installation looks like this:

We will select Add Python 3.6 to PATH and click Install Now. The installation process will complete with the following screen:

We will click Disable path length limit and then click Close to finish the Python installation. Now, let's open the Windows PowerShell application from the Windows menu. We will install the CPU-only version of TensorFlow with the following command:

    pip3 install tensorflow

The result of the installation will look like this:

Congratulations, you can now use TensorFlow on Windows with Python 3.5.x or 3.6.x support. In the next section, we will show you how to set up a VM to use TensorFlow with Python 2.7. However, you can skip to the Test installation section of Chapter 2, Your First Classifier, if you don't need Python 2.7.

Now, we will show you how to set up a VM with Linux to use TensorFlow with Python 2.7. We recommend the free VirtualBox system, available at https://www.virtualbox.org/wiki/Downloads. The latest stable version at the time of writing is v5.1.28, available at the following URL: http://download.virtualbox.org/virtualbox/5.1.28/VirtualBox-5.1.28-117968-Win.exe

A successful installation will allow you to run the Oracle VM VirtualBox Manager dashboard, which looks like this:

Testing the installation

In this section, we will use TensorFlow to compute a simple math operation.
First, open your terminal on Linux/macOS, or Windows PowerShell on Windows. Now, we need to run python to use TensorFlow, with the following command:

python

Enter the following program in the Python shell:

import tensorflow as tf

# Define two constant nodes and an addition node in the default graph
a = tf.constant(1.0)
b = tf.constant(2.0)
c = a + b

# Nothing is computed until the graph is run inside a session
sess = tf.Session()
print(sess.run(c))

The program prints 3.0 at the end.

We covered TensorFlow installation on the three major operating systems, so that you are up and running with the platform. Windows users faced an extra challenge, as TensorFlow on Windows only supports the Python 3.5.x or Python 3.6.x 64-bit versions. However, even Windows users should now be up and running. If you liked this article, be sure to check out Machine Learning with TensorFlow 1.x, which will help you take up any challenge you may face while implementing TensorFlow 1.x in your machine learning environment.
Is React Native really a Native framework?

Packt
21 Feb 2018
11 min read
This article by Vladimir Novick, author of the book React Native - Building Mobile Apps with JavaScript, introduces how React Native works, its information flow and architecture, whether it can really be called a Native framework, and its benefits. (For more resources related to this topic, see here.)

Introduction

So how is React Native different? Well, it doesn't fall under the hybrid category, because the approach is different. While hybrid apps try to make platform-specific features reusable between platforms, React Native has platform-independent features, but also lots of platform-specific implementations. This means that on iOS and on Android the code will look different, but somewhere between 70 and 90 percent of the code will be reused. Also, React Native does not depend on HTML or CSS. You write in JavaScript, but this JavaScript is compiled to platform-specific Native code using the React Native bridge. This happens all the time, but it is optimized so that the application runs smoothly at 60 fps. So, to summarize: React Native is not really a Native framework, but it is much closer to Native code than hybrid apps are. Now let's dive a bit deeper and understand how JavaScript gets converted into Native code.

How does the React Native bridge from the JavaScript to the Native world work?

Let's dive a bit deeper and understand how React Native works under the hood, which will help us understand how JavaScript is compiled to Native code and how the whole process works. It's crucial to know how the whole process works, so that if you have performance issues one day, you will understand where they originate.

Information flow

We've talked about the React concepts that power React Native, and one of them is that UI is a function of data. You change the state, and React knows what to update. Let's visualize how information flows through a common React app:

We have a React component, which passes data to three child components
Under the hood, a Virtual DOM tree is created, representing this component hierarchy
When the state of the parent component is updated, React knows how to pass information to the children
Since the children are basically a representation of the UI, React figures out how to batch Browser DOM updates and executes them

Now let's remove the Browser DOM and imagine that, instead of batching Browser DOM updates, React Native does the same with calls to Native modules. So what about passing information down to the Native modules? It can be done in two ways:

Shared mutable data
Serializable messages exchanged between JavaScript and Native modules

React Native goes with the second approach. Instead of mutating data on shared objects, it passes asynchronous, serialized, batched messages to the React Native bridge. The bridge is the layer that is responsible for gluing together the Native and JavaScript environments.

Architecture

The React Native architecture is structured in three layers: Native, Bridge, and JavaScript. The Native layer sits at the bottom, because it is the layer closest to the device itself. The Bridge is the layer that connects JavaScript and the Native modules; it is basically a transport layer that moves asynchronous, serialized, batched messages between JavaScript and the Native modules. When an event is executed on the Native layer (a touch, a timer, a network request; basically any event involving the device's Native modules), its data is collected and sent to the Bridge as a serialized message. The Bridge passes this message to the JavaScript layer. The JavaScript layer is an event loop. Once the Bridge passes the serialized payload to JavaScript, the event is processed and your application logic comes into play. If you update the state, triggering your UI to re-render, for example, React Native will batch the UI updates and send them to the Bridge. The Bridge passes this serialized batched response to the Native layer, which processes all the commands it can distinguish from the serialized batched response and updates the UI accordingly.
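To make the message-passing idea concrete, here is a tiny, language-agnostic sketch (written in Python purely for illustration; it is not part of React Native itself) of how events can be serialized, batched, and drained by an event loop rather than shared as mutable state:

import json
from collections import deque

bridge_queue = deque()

def native_event(kind, payload):
    # "Native" side: collect the event and enqueue it as a serialized message
    bridge_queue.append(json.dumps({'kind': kind, 'payload': payload}))

def drain_batch():
    # "JavaScript" side: pull the whole batch off the bridge in one go
    batch = [json.loads(message) for message in bridge_queue]
    bridge_queue.clear()
    return batch

native_event('touch', {'x': 10, 'y': 20})
native_event('timer', {'id': 1})
print(drain_batch())  # both events arrive together as one serialized batch

The real bridge is far more sophisticated, but the key design choice is visible even here: because messages are serialized and batched, neither side ever blocks the other by holding a lock on shared data.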
Threading model

So far we've seen that there is a lot going on under the hood of React Native. It's important to know that everything is done on three main threads:

UI (the application's main thread)
Native modules
JavaScript runtime

The UI thread is the main Native thread where Native-level rendering occurs. It is here that your platform of choice, iOS or Android, does measuring, layout, and drawing.

If your application accesses any Native APIs, it's done on a separate Native modules thread: for example, accessing the camera, geolocation, photos, or any other Native API. Panning and gestures in general are also handled on this thread.

The JavaScript runtime thread is the thread where all your JavaScript code runs. It's slower than the UI thread since it's based on the JavaScript event loop, so if you do complex calculations in your application that lead to lots of UI changes, performance can suffer. The rule of thumb is that if a frame takes longer than 16.67 ms to render, the UI will appear sluggish.

What are the benefits of React Native?

React Native brings lots of advantages to mobile development. We covered some of them briefly before, but let's go over them now in more detail. These advantages are what have made React Native so popular and trending nowadays. Most of all, it gives web developers a way to start developing Native apps with a relatively short learning curve, compared to the overhead of learning Objective-C and Java.

Developer experience

One of the amazing changes React Native brings to the mobile development world is an enhanced developer experience. From the point of view of a web developer, it's awesome; for a mobile developer, it's something every mobile developer has dreamt of. Let's go over some of the features React Native brings us out of the box.

Chrome DevTools debugging

Every web developer is familiar with Chrome Developer Tools. These tools give us an amazing experience debugging web applications. Debugging mobile applications, by contrast, can be hard, and it's really dependent on your target platform; no mobile application debugging technique comes anywhere near the web development experience. We already know that in React Native the JavaScript event loop runs on a separate thread, and it can be connected to Chrome DevTools. By pressing Ctrl/Cmd + D in the application simulator, we can attach our JavaScript code to Chrome DevTools and bring web debugging to the mobile world. In the React Native debug tools, clicking Debug JS Remotely opens a separate Google Chrome window where you can debug your application by setting breakpoints, profiling CPU and memory usage, and much more. The Elements tab in Chrome Developer Tools won't be relevant, though. For that we have a different option.
Let’s take a look at what we will get with Chrome Developer tools Remote debugger. Currently Chrome developer tools are focused on Sources tab. You can notice that JavaScript is written in ECMAScript 2015 syntax. For those of you who are not familiar with React JSX, you see weird XML like syntax. Don’t worry, this syntax will be also covered in the book in the context of React Native.  If you put debugger inside your JavaScript code, or a breakpoint in your Chrome development tools, the app will pause on this breakpoint or debugger and you will be able to debug your application while it’s running. Live reload As you can see in React Native debugging menu, the third row says Live Reload. If you enable this option, whenever you change your code and save, the application will be automatically reloaded. This ability to Live reload is something mobile developers only dreamt of. No need to recompile application after each minor code change. Just save and the application will reload itself in simulator. This greatly speed up application development and make it much more fun and easy than conventional mobile development. The workflow for every platform is different while in React Native the experience is the same. Does not matter for which platform you develop. Hot reload Sometimes you develop part of the application which requires several user interactions to get to. Think of, for example logging in, opening menu and choosing some option. When we change our code and save, while live reload is enabled, our application is reloaded and we need to once again do these steps. But it does not have to be like that. React Native gives us amazing experience of hot reloading. If you enable this option in React Native development tools and if you change your React Native component, only the component will be reloaded while you stay on the same screen you were before. This speeds up the development process even more. Component hierarchy inspections I’ve said before, that we cannot use elements panel in Chrome development tools, but how you inspect your component structure in React Native apps? React Native gives us built in option in development tools called Show Inspector. When clicking it, you will get the following window: After inspector is opened, you can select any component on the screen and inspect it. You will get the full hierarchy of your components as well as their styling: In this example I’ve selected Welcome to React Native! text. In the opened pane I can see it’s dimensions, padding margin as well as component hierarchy. As you can see it’s IntroApp/Text/RCTText. RCTText is not a React Native JavaScript component, but a Native text component, connected to React Native bridge. In that way you also can see that component is connected to a Native text component. There are even more dev tools available in React Native, that I will cover later on, but we all can agree, that development experience is outstanding. Web inspired layout techniques Styling for Native mobile apps can be really painful sometimes. Also it’s really different between iOS and Android. React Native brings another solution. As you may’ve seen before the whole concept of React Native is bringing web development experience to mobile app development. That’s also the case for creating layouts. Modern way of creating layout for the web is by using flexbox. React Native decided to adopt this modern technique for web and bring it also to the mobile world with small differences. 
In addition to layout, all styling in React Native is very similar to using inline styles in HTML. Let's take a look at an example:

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#F5FCFF',
  },
});

As you can see in this example, several flexbox properties are used, as well as a background color. This is really reminiscent of CSS properties; however, instead of background-color, justify-content, and align-items, the properties are named in a camel case manner. To apply these styles to a Text component, for example, it's enough to pass them as follows:

<Text style={styles.container}>Welcome to React Native</Text>

Styling is discussed further in the book; however, as you can see from the example, the styling techniques are similar to the web. They are not dependent on any platform, and they are the same for both iOS and Android.

Code reusability across applications

In terms of code reuse, if an application is properly architected (something we will also learn in this book), around 80% to 90% of the code can be reused between iOS and Android. This means that in terms of development speed, React Native beats conventional mobile development. Sometimes even code used for the web can be reused in a React Native environment with small changes. This really brings React Native to the top of the list of the best frameworks for developing Native mobile apps.

Summary

In this article, we learned how React Native works, its information flow and architecture, whether it can really be called a Native framework, and its benefits.

Resources for Article:

Further resources on this subject:

Building Mobile Apps [article]
Web Development with React and Bootstrap [article]
Introduction to JavaScript [article]
How to handle missing data in IBM SPSS Modeler

Amey Varangaonkar
21 Feb 2018
8 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book IBM SPSS Modeler Essentials written by Keith McCormick and Jesus Salcedo. This book gets you up and running with the fundamentals of SPSS Modeler, a premium tool for data mining and predictive analytics.[/box] In today’s tutorial we will demonstrate how easy it is to work with missing values in a dataset using the SPSS Modeler. Missing data is different than other topics in data modeling that you cannot choose to ignore . This is because failing to make a choice just means you are using the default option for a procedure, which most of the time is not optimal. In fact, it is important to remember that every model deals with missing data in a certain way, and some modeling techniques handle missing data better than others. In SPSS Modeler, there are four types of missing data: Type of missing data Definition $Null$ value Applies only to numeric fields. This is a cell that is empty or has an illegal value White space Applies only to string fields. This is a cell that is empty or has spaces. Empty string Applies only to string fields. This is a cell that is empty. Empty string is a subset of white space Blank value This is predefined code, and it applies to any type of field The first step in dealing with missing data is to assess the type and amount of missing data for each field. Consider whether there is a pattern as to why data might be missing. This can help determine if missing values could have affected responses. Only then can we decide how to handle it. There are two problems associated with missing data, and these affect the quantity and quality of the data: Missing data reduces sample size (quantity) Responders may be different from non-responders (quality—there could be biased results) Ways to address missing data There are three ways to address missing data: Remove fields Remove cases Impute missing values It can be necessary at times to remove fields with a large proportion of missing values. The easiest way to remove fields is to use a Filter node (discussed later in the book), however you can also use the Data Audit node to do this. [box type="info" align="" class="" width=""]Note that in some cases missing data can be predictive of behavior, so it is important to assess the importance of a variable before removing a field.[/box] In some situations, it may be necessary to remove cases instead of fields. For example, you may be developing a predictive model to predict customers' purchasing behavior and you simply do not have enough information concerning new customers. The easiest way to remove cases would be to use a Select node (discussed in the next chapter); however, you can also use the Data Audit node to do this. Imputing missing values implies replacing values for fields. However, some people do not estimate values for categorical fields because it does not seem right. In general, it is easier to estimate missing values for numeric fields, such as age, where often analysts will use the mean, median, or mode. [box type="info" align="" class="" width=""]Note that it is not a good idea to estimate missing data if you are missing a large percentage of information for that field, because estimates will not be accurate. 
Typically, we try not to impute more than 5% of values.[/box]

To close out of the Data Audit node, click OK to return to the stream canvas.

Defining missing values in the Type node

When working with missing data, the first thing you need to do is define the missing data so that Modeler knows there is missing data; otherwise, Modeler will think that the missing data is just another value for a field (which, in some situations, it is, as in our dataset, but quite often this is not the case). Although the Data Audit node provides a report of missing values, blank values need to be defined within a Type node (or the Type tab of a source node) in order for these to be identified by the Data Audit node. The Type tab (or node) is the only place where users can define missing values (the Missing column).

[box type="info" align="" class="" width=""]Note that in the Type node, blank values and $null$ values are not shown; however, empty strings and white space are depicted by "" or " ".[/box]

To define blank values:

1. Edit the Var.File node.
2. Click on the Types tab.
3. Click on the Missing cell for the field Region.
4. Select Specify in the Missing column.
5. Click Define blanks.

Selecting Define blanks chooses Null and White space (remember, Empty String is a subset of White space, so it is also selected), and in this way these types of missing data are specified. To specify a predefined code, or blank value, you can add each individual value to a separate cell in the Missing values area, or you can enter a range of numeric values if they are consecutive.

6. Type "Not applicable" in the first Missing values cell.
7. Hit Enter.

We have now specified that "Not applicable" is a code for missing data for the field Region.

8. Click OK.

In our dataset, we will only define one field as having missing data.

9. Click on the Clear Values button.
10. Click on the Read Values button.

An asterisk indicates that missing values have been defined for the field Region. Now Not applicable is no longer considered a valid value for the field Region, but it will still be shown in graphs and other output. However, models will now treat the category Not applicable as a missing value.

11. Click OK.

Imputing missing values with the Data Audit node

As we have seen, the Data Audit node allows you to identify missing values so that you can get a sense of how much missing data you have. However, the Data Audit node also allows you to remove fields or cases that have missing data, as well as providing several options for data imputation:

1. Rerun the Data Audit node. Note that the field Region only has 15,774 valid cases now, because we have correctly identified that the Not applicable category was a predefined code for missing data.
2. Click on the Quality tab.

We are not going to impute any missing values in this example because it is not necessary, but we are going to show you some of the options, since these will be useful in other situations. To impute missing values, you first need to specify when you want to impute missing values. For example:

3. Click in the Impute when cell for the field Region.
4. Select Blank & Null Values.

Now you need to specify how the missing values will be imputed.

5. Click in the Impute Method cell for the field Region.
6. Select Specify.

In this dialog box, you can specify which imputation method you want to use, and once you have chosen a method, you can then further specify details about the imputation. There are several imputation methods:

Fixed uses the same value for all cases.
This fixed value can be a constant, the mode, the mean, or the midpoint of the range (the options will vary depending on the measurement level of the field).
Random uses a random (different) value based on a normal or uniform distribution. This allows there to be variation in the field with imputed values.
Expression allows you to create your own equation to specify missing values.
Algorithm uses a value predicted by a C&R Tree model.

We are not going to impute any values now, so click Cancel. If we had selected an imputation method, we would then:

1. Click on the field Region to select it.
2. Click on the Generate menu.

The Generate menu of the Data Audit node allows you to remove fields, remove cases, or impute missing values, explained as follows:

Missing Values Filter Node: This removes fields with too much missing data, or keeps fields with missing data so that you can investigate them further
Missing Values Select Node: This removes cases with missing data, or keeps cases with missing data so that you can investigate them further
Missing Values SuperNode: This imputes missing values

If we were going to impute values, we would then click Missing Values SuperNode. In this way you can impute missing values using SPSS Modeler, and it makes your analysis a lot easier.

If you found our post useful, make sure to check out our book IBM SPSS Modeler Essentials, for more information on data mining and generating hidden insights using the popular SPSS Modeler tool.
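As a closing aside for readers who also work in code: the Fixed and Random imputation ideas above are not specific to Modeler. The following small Python/pandas sketch is our own illustration, not part of the book's Modeler workflow, and uses a hypothetical age column:

import numpy as np
import pandas as pd

df = pd.DataFrame({'age': [25, 31, np.nan, 40, np.nan]})

# Fixed: replace every missing value with a single value, here the mean
df['age_fixed'] = df['age'].fillna(df['age'].mean())

# Random: replace missing values with draws from a normal distribution
# fitted to the observed values, preserving some variation
rng = np.random.default_rng(0)
df['age_random'] = df['age']
mask = df['age_random'].isna()
df.loc[mask, 'age_random'] = rng.normal(df['age'].mean(), df['age'].std(), mask.sum())

print(df)

Note how the Random approach keeps some spread in the imputed values, which is exactly why an analyst might prefer it over a single fixed constant.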
Open and Proprietary Next Generation Networks

Packt
21 Feb 2018
29 min read
In this article by Steven Noble, the author of the book Building Modern Networks, we will discuss networking concepts such as hyper-scale networking, software-defined networking, and network hardware and software design, along with a litany of network design ideas utilized in NGNs. (For more resources related to this topic, see here.)

The term Next Generation Network (NGN) has been around for over 20 years and refers to the current state-of-the-art network equipment, protocols, and features. A big driver of NGN is the constant stream of newer, better, faster forwarding ASICs coming out of companies like Barefoot, Broadcom, Cavium, Nephos (MediaTek), and others. The advent of commodity networking chips has shortened the development time for generic switches, allowing hyper-scale networking end users to build equipment upgrades into their network designs.

At the time of writing, multiple companies have announced 6.4 Tbps switching chips. In layman's terms, a 6.4 Tbps switching chip can handle 64x100GbE of evenly distributed network traffic without losing any packets. To put the number in perspective, the entire internet in 2004 was about 4 Tbps, so all of the internet traffic in 2004 could have crossed this one switching chip without issue. (Internet Traffic 1.3 EB/month http://blogs.cisco.com/sp/the-history-and-future-of-internet-traffic)

A hyper-scale network is one that is operated by companies such as Facebook, Google, Twitter, and other companies that add hundreds if not thousands of new systems a month to keep up with demand.

Examples of next generation networking

At the start of the commercial internet age (1994), software routers running on minicomputers, such as BBN's PDP-11-based IP routers designed in the 1970s, were still in use, and hubs were simply dumb hardware devices that broadcast traffic everywhere. At that time, the state of the art in networking was the Cisco 7000 series router, introduced in 1993. The next generation router was the Cisco 7500 (1995), while the Cisco 12000 series (gigabit) routers and the Juniper M40 were only concepts. When we say next generation, we are speaking of the current state of the art and the near future of networking equipment and software. For example, 100 Gb Ethernet is the current state of the art, while 400 Gb Ethernet is in the pipeline.

The definition of a modern network is a network that contains one or more of the following concepts:

Software-defined Networking (SDN)
Network design concepts
Next generation hardware
Hyper-scale networking
Open networking hardware and software
Network Function Virtualization (NFV)
Highly configurable traffic management

Both open and closed network hardware vendors have been innovating at a high rate of speed, helped and driven by hyper-scale companies like Google, Facebook, and others who need next generation high-speed network devices. This provides the network architect with a reasonable pipeline of equipment to be used in designs. Google and Facebook are both companies with hyper-scale networks, where the data stored, transferred, and updated on the network grows exponentially. Hyper-scale companies deploy new equipment, software, and configurations weekly or even daily to support the needs of their customers.
These companies have needs that cannot be met by the networking equipment normally available, so they must innovate by building their own next generation network devices, designing multi-tiered networks (like a three stage Clos network), and automating the installation and configuration of those devices. The need of hyper-scalers is well summed up by Google's Amin Vahdat in a 2014 Wired article: "We couldn't buy the hardware we needed to build a network of the size and speed we needed to build".

Terms and concepts in networking

Here you will find the definitions of some terms that are important in networking. They have been broken into groups of similar concepts.

Routing and switching concepts

In network devices and network designs there are many important concepts to understand. Here we begin with the way data is handled. The easiest way to discuss networking is to look at the OSI model and point out where each device sits. The OSI layers with respect to routers and switches are:

Layer 1 (Physical): Layer 1 includes cables, hub, and switch ports. This is how all of the devices connect to each other, including copper cables (CatX), fiber optics, and Direct Attach Cables (DAC), which connect SFP ports without fiber.
Layer 2 (Data link layer): Layer 2 includes the raw data sent over the links and manages the Media Access Control (MAC) addresses for Ethernet.
Layer 3 (Network layer): Layer 3 includes packets that have more than just Layer 2 data, such as IP, IPX (Novell Networks protocol), and AFP (Apple's protocol).

Routers and switches

In a network you will have equipment that switches and/or routes traffic. A switch is a networking device that connects multiple devices such as servers, provides local connectivity, and provides an uplink to the core network. A router is a network device that computes paths to remote and local devices, providing connectivity to devices across a network. Both switches and routers can use copper and fiber connections to interconnect.

There are a few parts to a networking device: the forwarding chip, the TCAM, and the network processor. Some newer switches have Baseboard Management Controllers (BMCs), which manage the power, fans, and other hardware, lessening the burden on the NOS to manage these devices.

Currently, routers and switches are very similar, as there are many Layer 3 forwarding capable switches and some Layer 2 forwarding capable routers. Making a switch Layer 3 capable is less of an issue than making a router capable of Layer 2 forwarding: the switch is already doing Layer 2, so adding Layer 3 is straightforward, while a router does not generally do Layer 2 forwarding, so it has to be modified to allow ports to switch rather than route.

Control plane

The control plane is where all of the information about how packets should be handled is kept. Routing protocols live in the control plane and constantly scan the information received to determine the best path for traffic to flow. This data is then packed into a simple table and pushed down to the data plane.

Data plane

The data plane is where forwarding happens. In a software router, this is done in the device's CPU; in a hardware router, this is done using the forwarding chip and its associated memories.

VLAN/VXLAN

A Virtual Local Area Network (VLAN) is a way of creating separate logical networks within a physical network. VLANs are generally used to separate or combine different users, or network elements such as phones, servers, workstations, and so on. You can have up to 4,096 VLANs on a network segment.
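These limits fall directly out of the header formats: the 802.1Q VLAN tag carries a 12-bit VLAN ID, while the VXLAN header (covered next) carries a 24-bit VXLAN Network Identifier (VNI). A quick Python sketch, ours and for illustration only, shows where the 4,096 and 16 million figures come from:

VLAN_ID_BITS = 12    # 802.1Q VLAN ID field width
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier (VNI) field width

print(2 ** VLAN_ID_BITS)    # 4096 possible VLAN IDs
print(2 ** VXLAN_VNI_BITS)  # 16777216, roughly 16 million VXLAN segments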
A Virtual Extensible LAN (VXLAN) was created to allow for large, dynamic, isolated logical networks for virtualized and multiple tenant networks. You can have up to 16 million VXLANs on a network segment. A VXLAN Tunnel Endpoint (VTEP) is a set of two logical interfaces: an inbound interface that encapsulates incoming traffic into VXLANs, and an outbound interface that removes the encapsulation from outgoing traffic, returning it to its original state.

Network design concepts

Network design requires knowledge of the physical structure of the network so that the proper design choices are made. For example, in a data center you would have a local area network; if you have multiple data centers near each other, they would be considered a metro area network.

LAN

A Local Area Network (LAN) is generally considered to be within the same building. These networks can be bridged (switched) or routed. In general, LANs are segmented into areas to avoid large broadcast domains.

MAN

A Metro Area Network (MAN) is generally defined as multiple sites in the same geographic area or city, that is, a metropolitan area. A MAN generally runs at the same speed as a LAN but is able to cover larger distances.

WAN

A Wide Area Network (WAN): essentially, everything that is not a LAN or MAN is a WAN. WANs generally use fiber optic cables to transmit data from one location to another. WAN circuits can be provided via multiple connections and data encapsulations, including MPLS, ATM, and Ethernet. Most large network providers utilize Dense Wavelength Division Multiplexing (DWDM) to put more bits on their fiber networks. DWDM puts multiple colors of light onto the fiber, allowing up to 128 different wavelengths to be sent down a single fiber. DWDM has just entered open networking with the introduction of Facebook's Voyager system.

Leaf-Spine design

In a Leaf-Spine network design, there are Leaf switches (the switches that connect to the servers), sometimes called Top of Rack (ToR) switches, connected to a set of Spine switches (the switches that connect leafs together), sometimes called End of Rack (EoR) switches.

Clos network

A Clos network is one of the ways to design a multi-stage network. Based on the switching network design by Charles Clos in 1952, a three stage Clos is the smallest version of a Clos network. It has an ingress, a middle, and an egress stage. Some hyper-scale networks use a five stage Clos, where the middle is replaced with another three stage Clos. In a five stage Clos there is an ingress, a middle ingress, a middle, a middle egress, and an egress stage. Every stage is connected to all of its neighboring stage's switches: in a design with four middle stages, Ingress 1 is connected to all four of the middle stages, just as Egress 1 is connected to all four of the middle stages. A Clos network can be built with odd numbers of stages starting at three, so a 5-stage, 7-stage, and so on, Clos is possible. For even-numbered designs, Benes designs are usable.

Benes network

A Benes design is a non-blocking Clos design where the middle stage is 2x2 instead of NxN. A Benes network can have an even number of stages, such as a four stage design. The short sketch following this section illustrates the port arithmetic behind these non-blocking fabrics.
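As promised above, here is a short Python sketch, our own illustration with made-up but typical port counts, of the oversubscription arithmetic behind a leaf-spine fabric. A ratio of 1.0 is the non-blocking (1:1) case that comes up again later in this article:

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    # Ratio of server-facing bandwidth to uplink bandwidth on one leaf switch.
    # 1.0 means non-blocking (1:1); higher values mean oversubscribed.
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# A typical 48x10G + 6x40G leaf: 480G down versus 240G up
print(oversubscription(48, 10, 6, 40))    # 2.0 -> a 2:1 oversubscribed leaf

# A 32x100G leaf split 16 ports down / 16 ports up is non-blocking
print(oversubscription(16, 100, 16, 100)) # 1.0 -> 1:1, non-blocking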
The controller is responsible for managing all of the control plane data and deciding what should be sent down to the data plane. Generally, a controller will have a Command-line Interface (CLI) and more recently a web configuration interface. Some controllers will even have an Application Programming Interface (API). OpenFlow controller An OpenFlow controller, as it sounds is a controller that uses the OpenFlow protocol to communicate with network devices. The most common OpenFlow controllers that people hear about are OpenDaylight and ONOS. People who are working with OpenFlow would also know of Floodlight and RYU. Supervisor module A route processor is a computer that sits inside of the chassis of the network device you are managing. Sometimes the route processor is built in to the system, while other times it is a module that can be replaced/upgraded. Many vendor multi-slot systems have multiple route processors for redundancy. An example of a removable route processor is the Cisco 9500 series Supervisor module. There are multiple versions available including revision A, with a 4 core processor and 16 GB of RAM and revision B with a 6 core processor and 24 GB of RAM. Previous systems such as the Cisco Catalyst 7600 had options such as the SUP720 (Supervisor Module 720) of which they offered multiple versions. The standard SUP720 had a limited number of routes that it could support (256k) versus the SUP720 XL which could support up to 1M routes. Juniper Route Engine In Juniper terminology, the controller is called a Route Engine. They are similar to the Cisco Route Processor/Supervisor modules. Unlike Cisco Supervisor modules which utilize special CPUS, Juniper's REs generally use common x86 CPUs. Like Cisco, Juniper multi-slot systems can have redundant processors. Juniper has recently released the information about the NG-REs or Next Generation Route Engines. One example is the new RE-S-X6-64G, a 6-core x86 CPU based routing engine with 64 GB DRAM and 2x 64 GB SSD storage available for the MX240/MX480/MX960. These NG-REs allow for containers and other virtual machines to be run directly. Built in processor When looking at single rack unit (1 RU) or pizza box design switches there are some important design considerations. Most 1 RU switches do not have redundant processors, or field replaceable route processors. In general the field replaceable units (FRUs) that the customer can replace are power supplies and fans. If the failure is outside of the available FRUs, the entire switch must be replaced in the event of a failure. With white box switches this can be a simple process as white box switches can be used in multiple locations of your network including the customer edge, provider edge and core. Sparing (keeping a spare switch) is easy when you have the same hardware in multiple parts of the network. Recently commodity switch fabric chips have come with built-in low power ARM CPUs that can be used to manage the entire system, leading to cheaper and less power hungry designs. Facebook Wedge microserver The Facebook Wedge is different from most white box switches as it has its controller as an add in module, the same board that is used in some of the OCP servers. By separating the controller board from the switch, different boards can be put in place, such as higher memory, faster CPUs, different CPU types, and so on. Routing protocols A routing protocol is a daemon that runs on a controller and communicates with other network devices to exchange route information. 
For this section we will use common words to demonstrate the way each routing protocol works; these should not be construed as the actual way that the protocols talk.

BGP

Border Gateway Protocol (BGP) is a path-vector-based Exterior Gateway Protocol (EGP) that makes routing decisions based on paths, network policies, or rules (route-maps on Cisco). Though designed as an EGP, BGP can be used as both an interior (iBGP) and exterior (eBGP) routing protocol. BGP uses keepalive packets ("are you there?") to confirm that neighbors are still accessible.

BGP is the protocol that is utilized to route traffic across the internet, exchanging routing information between different Autonomous Systems (ASes). An AS is all of the connected networks under the control of a single entity, such as Level 3 (AS1) or Sprint (AS1239). When two different ASes interconnect, BGP peering sessions are set up between two or more network devices that have direct connections to each other. In an eBGP scenario, AS1 and AS1239 would set up BGP peering sessions that allow traffic to route between their ASes. In an iBGP scenario, routers within the same AS peer with each other and transfer the routes that are defined on the system. iBGP is used internally in most networks, and in large corporate networks in particular, because other Interior Gateway Protocols (IGPs) may not scale.

Examples: iBGP next hop self

In this scenario, AS1 and AS2 are peered with each other and exchanging one prefix each. AS1 advertises 192.168.1.0/24 and AS2 advertises 192.168.2.0/24. Each network has two routers: one border router, which connects to other ASes, and one internal router, which gets its routes from the border router. The routes are advertised internally with the next hop set to the border router. This is a standard scenario when you are not running an IGP inside to distribute the routes for the border router's external interfaces. The conversation goes like this:

AS1 -> AS2: Hi AS2, I am AS1
AS2 -> AS1: Hi AS1, I am AS2
AS1 -> AS2: I have the following route, 192.168.1.0/24
AS2 -> AS1: I have received the route, I have 192.168.2.0/24
AS1 -> AS2: I have received the route
AS1 -> Internal Router AS1: I have this route, 192.168.2.0/24, you can reach it through me at 10.1.1.1
AS2 -> Internal Router AS2: I have this route, 192.168.1.0/24, you can reach it through me at 10.1.1.1

iBGP next-hop unmodified

In the next scenario, the border routers are the same, but the internal routers are given a next hop of the external (other AS) border router. The last scenario is where you peer with a route server, a system that handles peering, filtering the routes based on what you have specified you send. The routes are then forwarded on to your peers with your IP as the next hop.

OSPF

Open Shortest Path First (OSPF) is a relatively simple protocol. Different links on the same router are put into the same or different areas. For example, you would use area 1 for the interconnects between campuses, but you would use another area, such as area 10, for the campus itself. By separating areas, you can reduce the amount of cross talk that happens between devices. There are two versions of OSPF, v2 and v3. The main difference between v2 and v3 is that v2 is for IPv4 networks and v3 is for IPv6 networks. When there are multiple paths that can be taken, the cost of the links must be taken into account. Consider two paths between the same pair of routers: one has a total cost of 20 (5 + 5 + 10), the other 16 (8 + 8), so the traffic will take the lowest cost path, as the short sketch below illustrates.
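The cost comparison can be reproduced with a few lines of Python. This shortest-path sketch is our own illustration; the router names A through E are made up, but the link costs are taken from the example above:

import heapq

graph = {
    'A': [('B', 5), ('E', 8)],
    'B': [('C', 5)],
    'C': [('D', 10)],   # path A-B-C-D costs 5 + 5 + 10 = 20
    'E': [('D', 8)],    # path A-E-D costs 8 + 8 = 16
    'D': [],
}

def lowest_cost(graph, src, dst):
    # Standard Dijkstra: always expand the cheapest known path first
    heap = [(0, src)]
    seen = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in graph[node]:
            heapq.heappush(heap, (cost + link_cost, neighbor))

print(lowest_cost(graph, 'A', 'D'))  # 16 -- the lower-cost path wins

This is the same Dijkstra-style shortest-path-first calculation that link-state protocols run against their topology databases.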
IS-IS

IS-IS is a link-state routing protocol that operates by flooding link-state information throughout a network of routers, each identified by a Network Entity Title (NET). Each IS-IS router has its own database of the network topology, built by aggregating the flooded network information. IS-IS is used by companies who are looking for fast convergence, scalability, and rapid flooding of new information. IS-IS uses the concept of levels instead of areas, as in OSPF. There are two levels in IS-IS: Level 1 (area) and Level 2 (backbone). A Level 1 Intermediate System (IS) keeps track of the destinations within its area, while a Level 2 IS keeps track of paths to the Level 1 areas.

EIGRP

Enhanced Interior Gateway Routing Protocol (EIGRP) is Cisco's proprietary routing protocol. It is hardly ever seen in current networks, but if you see it in yours, then you need to plan accordingly. Replacing EIGRP with OSPF is suggested so that you can interoperate with non-Cisco devices.

RIP

If Routing Information Protocol (RIP) is being used in your network, it must be replaced during the design. Most newer routing stacks do not support RIP. RIP is one of the original routing protocols, using the number of hops (routed ports) between the device and the remote location to determine the optimal path. RIP sends its entire routing database out every 30 seconds. When routing tables were small, many years ago, RIP worked fine. With larger tables, the traffic bursts and the resulting re-computation by other routers in the network cause routers to run at almost 100 percent CPU all the time.

Cables

Here we will review the major types of cables.

Copper

Copper cables have been around for a very long time; originally, network devices were connected together using coax cable (the same cable used for television antennas and cable TV). These days there are a few standard cables that are used.

RJ45 Cables

Cat5 - A 100Mb capable cable, used for both 10Mb and 100Mb connections
Cat5E - A 1GbE capable cable, but not suggested for 1GbE networks (Cat6 is better and the price difference is nominal)
Cat6 - A 1GbE capable cable; can be used for any speed at or below 1GbE, including 100Mb and 10Mb

SFPs

SFP - Small Form-factor Pluggable port. Capable of up to 1GbE connections
SFP+ - Same size as the SFP, capable of up to 10Gb connections
SFP28 - Same size as the SFP, capable of up to 25Gb connections
QSFP - Quad Small Form-factor Pluggable - A bit wider than the SFP but capable of multiple GbE connections
QSFP+ - Same size as the QSFP - capable of 40GbE as 4x10GbE on the same cable
QSFP28 - Same size as the QSFP - capable of 100GbE
DAC - A direct attach cable that fits into an SFP or QSFP port

Fiber/Hot pluggable Breakout Cables

As routers and switches continue to become more dense, to the point where the number of ports can no longer fit on the front of the device, manufacturers have moved to what we call breakout cables. For example, if you have a switch that can handle 3.2 Tbps of traffic, you need to provide 3,200 Gbps of port capacity. The easiest way to do that is to use 32 100Gb ports, which will fit on the front of a 1U device. You cannot fit 128 10Gb ports without using either a breakout patch panel (which will then use up another few rack units (RUs)) or a breakout cable. For a period of time in the 1990s, Cisco used RJ21 connectors to provide up to 96 Ethernet ports per slot. Network engineers would then create breakout cables to go from RJ21 to RJ45. These days, we have both DAC (Direct Attach Cable) and fiber breakout cables.
A 1x4 breakout cable, for example, provides four 10G or 25G ports from a single 40G or 100G port.

If you build a LAN network that only includes switches that provide Layer 2 connectivity, any devices you want to connect together need to be in the same IP block. If you have a router in your network, it can route traffic between IP blocks.

Part 1: What defines a modern network

There is a litany of concepts that define a modern network, from simple principles to full feature sets. In general, a next-generation data center design enables you to move to a widely distributed non-blocking fabric with uniform chipset, bandwidth, and buffering characteristics in a simple architecture. In one example, to support these requirements, you would begin with a true three-tier Clos switching architecture with Top of Rack (ToR), spine, and fabric layers to build a data center network. Each ToR would have access to multiple fabrics and have the ability to select a desired path based on application requirements or network availability. Following the definition of a modern network from the introduction, here we lay out the general definition of the parts.

Modern network pieces

Here we will discuss the concepts that build a Next Generation Network (NGN).

Software Defined Networks

Software defined networks can be defined in multiple ways. The general definition of a software defined network is one which can be controlled as a singular unit instead of on a system-by-system basis. The control plane, which would normally live in the device and use routing protocols, is replaced with a controller. Software defined networks can be built using many different technologies, including OpenFlow, overlay networks, and automation tools.

Next generation networking and hyper-scale networks

As we mentioned in the introduction, twenty years ago NGN hardware would have been the Cisco GSR (officially introduced in 1997) or the Juniper M40 (officially released in 1998). Large Cisco and Juniper customers would have been working with the companies to help come up with the specifications and to determine how to deploy the devices (possibly Alpha or Beta versions) in their networks. Today we can look at the hyper-scale networking companies to see what a modern network looks like. A hyper-scale network is one where the data stored, transferred, and updated on the network grows exponentially. Technologies such as 100Gb Ethernet, software defined networking, and open networking equipment and software are being deployed by hyper-scale companies.

Open networking hardware overview

Open hardware has been around for about 10 years, first in the consumer space and more recently in the enterprise space. Enterprise open networking hardware companies such as Quanta and Accton provide a significant amount of the hardware currently utilized in networks today. Companies such as Google and Facebook have been building their own hardware for many years. Facebook's routers, such as the Wedge 100 and Backpack, are available publicly for end users to utilize. Some examples of open networking hardware are:

The Dell S6000-ON - a 32x40G switch with 32 QSFP ports on the front
The Quanta LY8 - a 48x10G + 6x40G switch with 48 SFP+ ports and 6 QSFP ports
The Facebook Wedge 100 - a 32x100G switch with 32 QSFP28 ports on the front

Open networking software overview

To use open networking hardware, you need an operating system. The operating system manages the system devices such as fans, power, LEDs, and temperature.
On top of the operating system you will run a forwarding agent; examples of forwarding agents are Indigo, the open source OpenFlow daemon, and Quagga, an open source routing agent.

Closed networking hardware overview

Cisco and Juniper are the leaders in the closed hardware and software space. Cisco produces switches like the Nexus series (3000, 7000, 9000), with the 9000 programmable by ACI. Juniper provides the MX series (480, 960, 2020), with the 2020 being the highest-end forwarding system they sell.

Closed networking software overview

Cisco has multiple network operating systems, including IOS, NX-OS, and IOS-XR. All Cisco NOSs are closed source and proprietary to the system that they run on. Cisco has what the industry calls an "industry standard CLI", which is emulated by many other companies. Juniper ships a single NOS, JunOS, which can be installed on multiple different systems. JunOS is a closed source, BSD-based NOS. The JunOS CLI is significantly different from IOS and is more focused on engineers who program.

Network Virtualization

Not to be confused with Network Function Virtualization (NFV), network virtualization is the concept of re-creating, in software, the hardware interfaces that exist in a traditional network. By creating a software counterpart to the hardware interfaces, you decouple the network forwarding from the hardware. There are a few companies and software projects that allow the end user to enable network virtualization. The first is NSX, which comes from Nicira, the same team that developed OvS (Open vSwitch); Nicira was acquired by VMware in 2012. Another project is Big Switch Networks' Big Cloud Fabric, which utilizes a heavily modified version of Indigo, the OpenFlow agent mentioned earlier.

Network Function Virtualization

Network Function Virtualization can be summed up by the statement that: "Due to recent network focused advancements in PC hardware, any service able to be delivered on proprietary, application specific hardware should be able to be done on a virtual machine". Essentially: routers, firewalls, load balancers, and other network devices, all running virtualized on commodity hardware.

Traffic Engineering

Traffic engineering is a method of optimizing the performance of a telecommunications network by dynamically analyzing, predicting, and regulating the behavior of data transmitted over that network.

Part 2: Next generation networking examples

In my 25 or so years of networking, I have dealt with a lot of different networking technologies, each iteration (supposedly) better than the last: starting with Thin Net (10BASE2), moving through ArcNet, 10BASE-T, Token Ring, ATM to the desktop, FDDI, and onwards. Generally, the technology improved for each system until it was swapped out. A good example is the change from a literal ring for Token Ring to a switching design where devices hung off of a hub (as in 10BASE-T). ATM to the desktop was a novel idea, providing up to 25Mbps to connected devices, but the complexity of configuring and managing it was not worth the gain. Today almost everything is Ethernet, as shown by the Facebook Voyager DWDM system, which uses Ethernet over both traditional SFP ports and the DWDM interfaces. Ethernet is simple, well supported, and easy to manage.

Example 1 - Migration from FDDI to 100Base-T

In late 1996 and early 1997, the Exodus network used FDDI (Fiber Distributed Data Interface) rings to connect the main routers together at 100Mbps.
As the network grew, we had to decide between two competing technologies, FDDI switches and Fast Ethernet (100Base-T), both providing 100 Mbps. FDDI switches from companies like DEC (the FDDI Gigaswitch) were used in most of the Internet Exchange Points (IXPs) and worked reasonably well, with one minor issue: head of line blocking (HOLB), which also impacted other technologies. Head of line blocking occurs when a packet is destined for an interface that is already full, so a queue builds up; if the interface continues to be full, eventually the queue will be dropped. While we were testing the DEC FDDI Gigaswitches, we were also in deep discussions with Cisco about the availability of Fast Ethernet (FE) and working on designs. Because FE was new, there were concerns about how it would perform and how we would be able to build a redundant network design. In the end, we decided to use FE, connect the main routers in a full mesh, and use routing protocols to manage failover.

Example 2 - NGN Failure - LANE (LAN Emulation)

During the high growth period at Exodus Communications, there was a request to connect a new data center to the original one and allow customers to put servers in both locations using the same address space. To do this, we chose LAN Emulation, or LANE, which allows an ATM network to be used like a LAN. On paper, LANE looked like a great idea: the ability to extend the LAN so that customers could use the same IP space in two different locations. In reality, it was very different. For hardware, we were using Cisco 5513 switches, which provided a combination of Ethernet and ATM ports. There were multiple issues with this design. First, the customer is provided with an Ethernet interface, which runs over an ATM optical interface; any error on the physical connection between switches or on the ATM layer would cause errors on the Ethernet layer. Second, monitoring was very hard: when there were network issues, you had to look in multiple locations to determine where the errors were happening. After a few weeks, we did a midnight swap, putting in Cisco 7500 routers to replace the 5500 switches and moving customers onto new blocks for the new data center.

Part 3: Designing a modern network

When designing a new network, some of the following might be important to you:

Simple, focused, yet non-blocking IP fabric
Multistage parallel fabrics based on the Clos network concept
Simple merchant silicon
Distributed control plane with some centralized controls
Wide multi-path (ECMP)
Uniform chipset, bandwidth, and buffering
1:1 oversubscription (non-blocking fabric)
Minimize the hardware necessary to carry east-west traffic
Ability to support a large number of bare metal servers without adding an additional layer
Limit the fabric to a 5 stage Clos within the data center to minimize lookups and switching latency
Support host attachment at 10G, 25G, 50G, and 100G Ethernet

Traffic management

In a modern network, one of the first decisions is whether you will use a centralized controller or not. If you use a centralized controller, you will be able to see and control the entire network from one location. If you do not use a centralized controller, you will need to manage each system either directly or via automation. There is a middle space where you can use some software defined network pieces to manage parts of the network, such as an OpenFlow controller for the WAN or VMware NSX for your virtualized workloads.
Once you know what the general management goal is, the next decision is whether to use open, proprietary, or a combination of both open and proprietary networking equipment. Open networking equipment is a concept that has been around less than a decade and started when very large network operators decided that they wanted better control of the cost and features of the equipment in their networks. Google is a good example: Google wanted to build a high-speed backbone, but was not looking to pay the prices that the incumbent proprietary vendors such as Cisco and Juniper wanted. Google set a price per port (1G/10G/40G) that they wanted to hit and designed equipment around that. Later, companies like Facebook decided to go in the same direction and contracted with commodity manufacturers to build network switches that met their needs. Facebook, for example, used both its own hardware (6-Pack/Backpack) and legacy vendor hardware for interoperability and performance testing.

Proprietary vendors can offer the same level of performance or better, using their massive teams of engineers to design and optimize hardware. This distinction even applies on the software side, where companies like VMware and Cisco have created software defined networking tools such as NSX and ACI.

With the large amount of networking gear available, designing and building a modern network can appear to be a complex undertaking. Designing a modern network requires research and a good understanding of networking equipment. While complex, the task is not hard if you follow the guidelines. These are a few of the stages of planning that need to be followed before the modern network design is started:

The first step is to understand the scope of the project (single site, multi-site, multi-continent, multi-planet).
The second step is to determine if the project is a green field (new) or brown field deployment (how many of the sites already exist and will/will not be upgraded).
The third step is to determine if there will be any software defined networking (SDN), next generation networking (NGN), or open networking pieces.
Finally, it is key that the equipment to be used is assembled and tested to determine if it meets the needs of the network.

Summary

In this article, we have discussed many different concepts that tie NGNs together. The term NGN refers to the latest and near-term networking equipment and designs. We looked at networking concepts such as local, metro, and wide area networks, network controllers, routers, and switches, and routing protocols such as BGP, IS-IS, OSPF, and RIP. Then we discussed many pieces that are used, either singly or together, to create a modern network. In the end, we also learned some guidelines that should be followed while designing a network.

Resources for Article:

Further resources on this subject:

Analyzing Social Networks with Facebook [article]
Social Networks [article]
Point-to-Point Networks [article]
Stack Wars: The epic struggle for control of the tech stack

Dave Maclean
20 Feb 2018
4 min read
The choice of tech stack for a project, team or organisation is an ongoing struggle between competing forces. Each of the players has their own logic, beliefs and drivers, and where you stand and what side you are on totally determines the way you see the struggle. Packt is on the developer team. This is how we see the struggle we're all part of:

Technology vendors are the Empire

Any organisation that is selling tools, technologies or platform services is either already behaving like the Empire or will, eventually, become the Empire. Vendors want the stack to include their tech, and if the vendor has a full stack, like IBM, MS, or Oracle, then they want you to live in their world - to be completely Blue or Red Stack. The economics driving this are relentless. The biggest cost for large software vendors is acquiring customers. Once you have a customer, it makes sense to keep expanding your product portfolio to sell more to each customer. The end game is when the Empire captures whole planets from the Alliance and enslaves the occupants, in a move called Large Outsourcing Deals.

Businesses and IT departments are the Rebel Alliance

Companies and organisations build systems to try to serve their users and customers. Their underlying intentions are good. They are trying to do the right thing, and they do the best they can. They have to manage within a structured organisation, co-ordinating different groups and teams. They sometimes have some cool new stuff, but often they are struggling with outdated kit, against overwhelming odds. Companies sometimes achieve great things in specific battles, with heroic individuals and teams, but they also have to keep the whole show on the road. The Empire's vendors are constantly trying to bring them into their captive stack-universe, to make life "easier" with the comforting myth of the one-stop shop. The Alliance gets new weapons and allies in the form of insurgent vendors who start out fighting the Empire, like GitHub, Jira and AWS. However, these can be dangerous alliances. The iron law of the cost of customer acquisition will drive even the insurgent vendors to continually expand their product offer and then - BAM! - another empire wanting to lock you in. They call this the 'Land and Expand' strategy, and every vendor has it, overtly or secretly. Even the currently much-beloved Slack will eventually try to turn itself into the Facebook of the office, and will gobble up the app ecosystem just like Facebook. They all cross over to the dark side eventually.

Developers are the Jedi

Devs have a deep understanding of how technologies really work in action because they have to actually build things. This knowledge can appear mystical to outsiders. It is hard to express and articulate the intuitive skills gained from actual development experience. The very best devs are 10, 100, 1000 times more productive than the implementation teams from the vendors. Devs know what vendor tools are really like under the hood, when the action starts. They know that even the Death Star has hidden yet fatal vulnerabilities, no matter how great it looks from a distance. Over the years, devs have evolved their own special ways of working that are hard for outsiders to understand. These go by the names of Agile and Open Source. Agile is a semi-mysterious Way: trusting the process to move towards success, without really being able to say what that is until we realise we have got there.
Open Source is the shared network that binds developers together into a powerful network of shared power, on platforms like GitHub. Devs have two forces driving them. The first is to get the very best tech stack for each project, based on their unique technical insight into how it really works. Devs always want to choose best of breed for this problem, here and now. But devs also have personal weapons of choice, over which they have mastery, and they will try to use these wherever possible. Laser swords can do a lot more than you think, but there are other, better weapons in certain circumstances. Stack Wars are never going to end. There will be more and more episodes of this eternal struggle. The Empire can never be completely defeated, any more than the Jedi can die out. The story needs all three, and it ebbs and flows over time in a pattern that repeats itself, but in new and different ways.

Getting to know Generative Models and their types

Sunith Shetty
20 Feb 2018
9 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book written by Rajdeep Dua and Manpreet Singh Ghotra titled Neural Network Programming with Tensorflow. In this book, you will use TensorFlow to build and train neural networks of varying complexities, without any hassle.[/box] In today’s tutorial, we will learn about generative models, and their types. We will also look into how discriminative models differs from generative models. Introduction to Generative models Generative models are the family of machine learning models that are used to describe how data is generated. To train a generative model we first accumulate a vast amount of data in any domain and later train a model to create or generate data like it. In other words, these are the models that can learn to create data that is similar to data that we give them. One such approach is using Generative Adversarial Networks (GANs). There are two kinds of machine learning models: generative models and discriminative models. Let's examine the following list of classifiers: decision trees, neural networks, random forests, generalized boosted models, logistic regression, naive bayes, and Support Vector Machine (SVM). Most of these are classifiers and ensemble models. The odd one out here is Naive Bayes. It's the only generative model in the list. The others are examples of discriminative models. The fundamental difference between generative and discriminative models lies in the underlying probability inference structure. Let's go through some of the key differences between generative and discriminative models. Discriminative versus generative models Discriminative models learn P(Y|X), which is the conditional relationship between the target variable Y and features X. This is how least squares regression works, and it is the kind of inference pattern that gets used. It is an approach to sort out the relationship among variables. Generative models aim for a complete probabilistic description of the dataset. With generative models, the goal is to develop the joint probability distribution P(X, Y), either directly or by computing P(Y | X) and P(X) and then inferring the conditional probabilities required to classify newer data. This method requires more solid probabilistic thought than regression demands, but it provides a complete model of the probabilistic structure of the data. Knowing the joint distribution enables you to generate the data; hence, Naive Bayes is a generative model. Suppose we have a supervised learning task, where xi is the given features of the data points and yi is the corresponding labels. One way to predict y on future x is to learn a function f() from (xi,yi) that takes in x and outputs the most likely y. Such models fall in the category of discriminative models, as you are learning how to discriminate between x's from different classes. Methods like SVMs and neural networks fall into this category. Even if you're able to classify the data very accurately, you have no notion of how the data might have been generated. The second approach is to model how the data might have been generated and learn a function f(x,y) that gives a score to the configuration determined by x and y together. Then you can predict y for a new x by finding the y for which the score f(x,y) is maximum. A canonical example of this is Gaussian mixture models. Another example of this is: you can imagine x to be an image and y to be a kind of object like a dog, namely in the image. 
The probability written as p(y|x) tells us how much the model believes that there is a dog, given an input image, compared to all the possibilities it knows about. Algorithms that try to model this probability map directly are called discriminative models. Generative models, on the other hand, try to learn a function called the joint probability p(y, x). We can read this as how much the model believes that x is an image and there is a dog y in it at the same time. These two probabilities are related, which can be written as p(y, x) = p(x) p(y|x), with p(x) being how likely it is that the input x is an image. The p(x) probability is usually called a density function in the literature.

The main reason to call these models generative ultimately connects to the fact that the model has access to the probability of both input and output at the same time. Using this, we can generate images of animals by sampling animal kinds y and new images x from p(y, x). We can also learn the density function p(x) alone, which depends only on the input space. Both kinds of models are useful; however, generative models have an interesting advantage over discriminative models, namely, they have the potential to understand and explain the underlying structure of the input data even when there are no labels available. This is very desirable when working in the real world.

Types of generative models

Discriminative models have been at the forefront of the recent success in the field of machine learning. These models make predictions that depend on a given input, although they are not able to generate new samples or data. The idea behind the recent progress of generative modeling is to convert the generation problem into a prediction one and use deep learning algorithms to learn such a problem.

Autoencoders

One way to convert a generative problem into a discriminative one is to learn a mapping from the input space to itself. For example, we want to learn an identity map that, for each image x, would ideally predict the same image, namely x = f(x), where f is the predictive model. This model may not be of use in its current form, but from it we can create a generative model. Here, we create a model formed of two main components: an encoder model q(h|x) that maps the input to another space, referred to as the hidden or latent space and represented by h, and a decoder model q(x|h) that learns the opposite mapping from the hidden space back to the input space. These components, encoder and decoder, are connected together to create an end-to-end trainable model. Both the encoder and decoder models are neural networks of possibly different architectures, for example, RNNs and attention nets, chosen to suit the desired outcomes. Once the model is trained, we can detach the decoder from the encoder and use them separately. To generate a new data sample, we can first generate a sample from the latent space and then feed that to the decoder to create a new sample in the output space.

GAN

As seen with autoencoders, we can think of a general concept for creating networks that work together in a relationship, where training them helps us learn latent spaces that allow us to generate new data samples. Another type of generative network is the GAN, where we have a generator model q(x|h) that maps a small-dimensional latent space h (usually represented as noise samples from a simple distribution) to the input space of x. This is quite similar to the role of decoders in autoencoders.
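The generation recipe shared by an autoencoder's decoder and a GAN's generator can be stated in a couple of lines of code. The sketch below is purely illustrative: an untrained random linear map stands in for the learned network q(x|h); sample h from a simple prior, then map it into the data space.

import numpy as np

rng = np.random.default_rng(42)
latent_dim, data_dim = 8, 64

# Stand-in decoder/generator: a random linear map plus a squashing
# non-linearity. In practice this would be a trained neural network.
W = rng.normal(size=(latent_dim, data_dim))
def decode(h):
    return np.tanh(h @ W)

h = rng.normal(size=(1, latent_dim))  # h ~ N(0, I), the latent prior
x = decode(h)                         # a new sample in the data space
print(x.shape)                        # (1, 64)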
The idea is now to introduce a discriminative model p(y|x), which tries to associate an input instance x with a yes/no binary answer y about whether the input was generated by the generator model or was a genuine sample from the dataset we were training on. Let's use the image example from earlier. Assume that the generator model creates a new image, and we also have a real image from our actual dataset. If the generator model is good, the discriminator model will not be able to distinguish between the two images easily. If the generator model is poor, it will be very simple to tell which one is a fake and which one is real. When both these models are coupled, we can train them end to end by ensuring that the generator model gets better over time at fooling the discriminator model, while the discriminator model is trained to work on the harder problem of detecting frauds. Finally, we desire a generator model whose outputs are indistinguishable from the real data that we used for the training. Through the initial parts of the training, the discriminator model can easily detect the samples coming from the actual dataset versus the ones generated synthetically by the generator model, which is just beginning to learn. As the generator gets better at modeling the dataset, we begin to see more and more generated samples that look similar to the dataset. The following example depicts the generated images of a GAN model learning over time:

Sequence models

If the data is temporal in nature, then we can use specialized algorithms called sequence models. These models can learn a probability of the form p(y|x_n, ..., x_1), where i is an index signifying the location in the sequence and x_i is the ith input sample. As an example, we can consider each word as a series of characters, each sentence as a series of words, and each paragraph as a series of sentences. The output y could be the sentiment of the sentence. Using a similar trick to the one from autoencoders, we can replace y with the next item in the series or sequence, namely y = x_{n+1}, allowing the model to learn.

To summarize, generative models are a fast-advancing area of study and research. As we continue to advance these models and grow the training datasets, we can expect to generate data examples that depict completely believable images. This can be used in several applications such as image denoising, inpainting, structured prediction, and exploration in reinforcement learning.

To know more about how to build and optimize neural networks using TensorFlow, do check out this book Neural Network Programming with Tensorflow.

Your First Swift Program

Packt
20 Feb 2018
4 min read
In this article by Keith Moon, author of the book Swift 4 Programming Cookbook, we will learn how to write your first Swift program. (For more resources related to this topic, see here.)

Your first Swift program

In this first recipe, we will get up and running with Swift using a Swift playground, and run our first piece of Swift code.

Getting ready

To run our first Swift program, we first need to download and install our IDE. During the beta of Apple's Xcode 9, it is available as a direct download from Apple's developer website at http://developer.apple.com/download; access to this beta requires a free Apple developer account. Once the beta has ended and Xcode 9 is publicly available, it will also be available from the Mac App Store. By obtaining it from the Mac App Store, you will automatically be informed of updates, so this is the preferred route once Xcode 9 is out of beta.

Xcode from the Mac App Store

Open up the Mac App Store, either from the dock or via Spotlight.
Search for xcode.
Click Install.

Xcode is a large download (over 4 GB), so depending on your internet connection, this could take a while! Progress can be monitored from Launchpad.

Xcode as a direct download

Go to the Apple Developer download page at http://developer.apple.com/download.
Click the Download button to download Xcode within a .xip file.
Double-click the downloaded file to unpack the Xcode application.
Drag the Xcode application into your Applications folder.

How to do it...

With Xcode downloaded, let's create our first Swift playground:

Launch Xcode from the icon in your dock.
From the welcome screen, choose Get started with a playground.
From the template chooser, select the blank template from the iOS tab.
Choose a name for your playground and a location to save it.

Xcode playgrounds can be based on one of three different Apple platforms: iOS, tvOS, and macOS (the operating system formerly known as OS X). Playgrounds provide full access to the frameworks available in either iOS, tvOS, or macOS, depending on which you choose. An iOS playground will be assumed for the entirety of this chapter, chiefly because this is the platform of choice of the author. Where recipes do have UI components, the iOS platform will be used until otherwise stated. You are now presented with a view that looks like this:

Let's replace the word "playground" with "Swift!". Press the blue play button in the bottom left-hand corner of the window to execute the code in the playground.

Congratulations! You have just run some Swift code. On the right-hand side of the window, you will see the output of each line of code in the playground. We can see our line of code has output "Hello, Swift!":

There's more...

If you put your cursor over the output on the right-hand side, you will see two buttons, one that looks like an eye and another that is a square:

Click on the eye button and you get a Quick Look box of the output. This isn't that useful for just a string, but it can be useful for more visual output like colors and views.

Click on the square button, and a box will be added in-line, under your code, showing the output of the code. This can be really useful if you want to see how the output changes as you change the code.

Summary

In this article, we learned how to run our first Swift program.

Resources for Article:

Further resources on this subject: Your First Swift App [article] Exploring Swift [article] Functions in Swift [article]

How to develop a stock price predictive model using Reinforcement Learning and TensorFlow

Aaron Lazar
20 Feb 2018
12 min read
[box type="note" align="" class="" width=""]This article is an extract from the book Predictive Analytics with TensorFlow, authored by Md. Rezaul Karim. This book helps you build, tune, and deploy predictive models with TensorFlow.[/box] In this article we’ll show you how to create a predictive model to predict stock prices, using TensorFlow and Reinforcement Learning. An emerging area for applying Reinforcement Learning is the stock market trading, where a trader acts like a reinforcement agent since buying and selling (that is, action) particular stock changes the state of the trader by generating profit or loss, that is, reward. The following figure shows some of the most active stocks on July 15, 2017 (for an example): Now, we want to develop an intelligent agent that will predict stock prices such that a trader will buy at a low price and sell at a high price. However, this type of prediction is not so easy and is dependent on several parameters such as the current number of stocks, recent historical prices, and most importantly, on the available budget to be invested for buying and selling. The states in this situation are a vector containing information about the current budget, current number of stocks, and a recent history of stock prices (the last 200 stock prices). So each state is a 202-dimensional vector. For simplicity, there are only three actions to be performed by a stock market agent: buy, sell, and hold. So, we have the state and action, what else do you need? Policy, right? Yes, we should have a good policy, so based on that an action will be performed in a state. A simple policy can consist of the following rules: Buying (that is, action) a stock at the current stock price (that is, state) decreases the budget while incrementing the current stock count Selling a stock trades it in for money at the current share price Holding does neither, and performing this action simply waits for a particular time period and yields no reward To find the stock prices, we can use the yahoo_finance library in Python. A general warning you might experience is "HTTPError: HTTP Error 400: Bad Request". But keep trying. Now, let's try to get familiar with this module: >>> from yahoo_finance import Share >>> msoft = Share('MSFT') >>> print(msoft.get_open()) 72.24= >>> print(msoft.get_price()) 72.78 >>> print(msoft.get_trade_datetime()) 2017-07-14 20:00:00 UTC+0000 >>> So as of July 14, 2017, the stock price of Microsoft Inc. went higher, from 72.24 to 72.78, which means about a 7.5% increase. However, this small and just one-day data doesn't give us any significant information. But, at least we got to know the present state for this particular stock or instrument. To install yahoo_finance, issue the following command: $ sudo pip3 install yahoo_finance Now it would be worth looking at the historical data. The following function helps us get the historical data for Microsoft Inc: def get_prices(share_symbol, start_date, end_date, cache_filename): try: stock_prices = np.load(cache_filename) except IOError: share = Share(share_symbol) stock_hist = share.get_historical(start_date, end_date) stock_prices = [stock_price['Open'] for stock_price in stock_ hist] np.save(cache_filename, stock_prices) return stock_prices The get_prices() method takes several parameters such as the share symbol of an instrument in the stock market, the opening date, and the end date. You will also like to specify and cache the historical data to avoid repeated downloading. 
Once you have downloaded the data, it's time to plot it to get some insights. The following function helps us to plot the prices:

def plot_prices(prices):
    plt.title('Opening stock prices')
    plt.xlabel('day')
    plt.ylabel('price ($)')
    plt.plot(prices)
    plt.savefig('prices.png')

Now we can call these two functions with real arguments as follows:

if __name__ == '__main__':
    prices = get_prices('MSFT', '2000-07-01', '2017-07-01', 'historical_stock_prices.npy')
    plot_prices(prices)

Here I have chosen a wide range of 17 years of historical data to get better insights. Now, let's take a look at the output of this data:

The goal is to learn a policy that gains the maximum net worth from trading in the stock market. So what will a trading agent achieve in the end? Figure 8 gives you some clue:

Well, figure 8 shows that if the agent buys a certain instrument at a price of $20 and sells at a peak price, say $180, it will be able to make a $160 reward, that is, profit. So, implementing such an intelligent agent using RL algorithms is a cool idea. From the previous example, we have seen that a successful RL agent needs two operations well defined, which are as follows:

How to select an action
How to improve the utility Q-function

To be more specific, given a state, the decision policy calculates the next action to take. On the other hand, the Q-function is improved from the new experience of taking an action. Also, most reinforcement learning algorithms boil down to just three main steps: infer, perform, and learn. During the first step, the algorithm selects the best action (a) given a state (s) using the knowledge it has so far. Next, it performs the action to find out the reward (r) as well as the next state (s'). Then, it improves its understanding of the world using the newly acquired knowledge (s, r, a, s'), as shown in the following figure:

Now, let's start implementing the decision policy, based on which an action will be taken for buying, selling, or holding a stock item. Again, we will do it in an incremental way. At first, we will create a random decision policy and evaluate the agent's performance. But before that, let's create an abstract class so that we can implement it accordingly:

class DecisionPolicy:
    def select_action(self, current_state, step):
        pass

    def update_q(self, state, action, reward, next_state):
        pass

The next task is to inherit from this superclass to implement a random decision policy:

class RandomDecisionPolicy(DecisionPolicy):
    def __init__(self, actions):
        self.actions = actions

    def select_action(self, current_state, step):
        action = self.actions[random.randint(0, len(self.actions) - 1)]
        return action

The previous class does nothing except define a function named select_action(), which randomly picks an action without even looking at the state. Now, if you would like to use this policy, you can run it on the real-world stock price data. This function takes care of exploration and exploitation at each interval of time, as shown in the following figure with states S1, S2, and S3. The policy suggests an action to be taken, which we may either choose to exploit or otherwise randomly explore another action. As we get rewards for performing an action, we can update the policy function over time:

Fantastic! So we have the policy, and now it's time to utilize it to make decisions and return the performance.
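Before plugging the random policy into a full simulation, you can exercise it on its own; this quick sanity check (not part of the book's listing) shows that it picks actions without looking at the state:

import random

# Assumes DecisionPolicy and RandomDecisionPolicy are defined as above.
policy = RandomDecisionPolicy(['Buy', 'Sell', 'Hold'])
for step in range(5):
    # current_state is ignored by the random policy, so None suffices here.
    print(policy.select_action(None, step))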
Now, imagine a real scenario: suppose you're trading on a Forex or ForTrade platform; you will recall that you also need to compute the portfolio and the current profit or loss, that is, the reward. Typically, these can be calculated as follows:

portfolio = budget + number of stocks * share value
reward = new_portfolio - current_portfolio

At first, we initialize values that depend on computing the net worth of a portfolio, where the state is a (hist + 2)-dimensional vector; in our case, it is 202-dimensional. Then we define the range to iterate over: the length of the prices selected by the user query minus (hist + 1), since indexing starts from 0. We then calculate the updated value of the portfolio, and from the portfolio we can calculate the value of the reward, that is, the profit. We have already defined our random policy, so we can select an action from the current policy. We repeatedly update the portfolio values based on the action in each iteration, and the new portfolio value after taking the action can be calculated. Then we compute the reward from taking an action at a state. Nevertheless, we also need to update the policy after experiencing a new action. Finally, we compute the final portfolio worth:

def run_simulation(policy, initial_budget, initial_num_stocks, prices, hist, debug=False):
    budget = initial_budget
    num_stocks = initial_num_stocks
    share_value = 0
    transitions = list()
    for i in range(len(prices) - hist - 1):
        if i % 100 == 0:
            print('progress {:.2f}%'.format(float(100 * i) / (len(prices) - hist - 1)))
        current_state = np.asmatrix(np.hstack((prices[i:i+hist], budget, num_stocks)))
        current_portfolio = budget + num_stocks * share_value
        action = policy.select_action(current_state, i)
        share_value = float(prices[i + hist + 1])
        if action == 'Buy' and budget >= share_value:
            budget -= share_value
            num_stocks += 1
        elif action == 'Sell' and num_stocks > 0:
            budget += share_value
            num_stocks -= 1
        else:
            action = 'Hold'
        new_portfolio = budget + num_stocks * share_value
        reward = new_portfolio - current_portfolio
        next_state = np.asmatrix(np.hstack((prices[i+1:i+hist+1], budget, num_stocks)))
        transitions.append((current_state, action, reward, next_state))
        policy.update_q(current_state, action, reward, next_state)
    portfolio = budget + num_stocks * share_value
    if debug:
        print('${}\t{} shares'.format(budget, num_stocks))
    return portfolio

The previous simulation predicts a somewhat good result; however, it produces random results too often. Thus, to obtain a more robust measurement of success, let's run the simulation a number of times and average the results. Doing so may take a while to complete, say for 100 runs, but the results will be more reliable:

def run_simulations(policy, budget, num_stocks, prices, hist):
    num_tries = 100
    final_portfolios = list()
    for i in range(num_tries):
        final_portfolio = run_simulation(policy, budget, num_stocks, prices, hist)
        final_portfolios.append(final_portfolio)
    avg, std = np.mean(final_portfolios), np.std(final_portfolios)
    return avg, std

The previous function computes the average portfolio and the standard deviation by iterating the previous simulation function 100 times. Now, it's time to evaluate the previous agent. As already stated, there are three possible actions to be taken by the stock trading agent: buy, sell, and hold. We have a state vector of 202 dimensions and a budget of only $1,000.
Then, the evaluation goes as follows:

actions = ['Buy', 'Sell', 'Hold']
hist = 200
policy = RandomDecisionPolicy(actions)
budget = 1000.0
num_stocks = 0
avg, std = run_simulations(policy, budget, num_stocks, prices, hist)
print(avg, std)
>>> 1512.87102405 682.427384814

The first number is the mean and the second one is the standard deviation of the final portfolio. So, our stock prediction agent predicts that as a trader you could make a profit of about $513. Not bad. However, the problem is that since we have utilized a random decision policy, the result is not so reliable. To be more specific, a second execution will definitely produce a different result:

>>> 1518.12039077 603.15350649

Therefore, we should develop a more robust decision policy. Here comes the use of neural network-based Q-learning for the decision policy. Next, we introduce a new hyperparameter, epsilon, to keep the solution from getting stuck when applying the same action over and over. The lesser its value, the more often the policy will randomly explore new actions.

Next, I am going to write a class containing the following functions:

Constructor: This sets the hyperparameters for the Q-function and the number of hidden nodes in the neural network. Once we have these, it defines the input and output tensors and the structure of the neural network, along with the operations to compute the utility. Then, it uses an optimizer to update the model parameters to minimize the loss, and it sets up the session and initializes the variables.
select_action: This function exploits the best option with probability epsilon (ramped up via the step-based threshold in the code below); otherwise, it picks a random action.
update_q: This updates the Q-function by updating its model parameters.

Refer to the following code:

class QLearningDecisionPolicy(DecisionPolicy):
    def __init__(self, actions, input_dim):
        self.epsilon = 0.9
        self.gamma = 0.001
        self.actions = actions
        output_dim = len(actions)
        h1_dim = 200
        self.x = tf.placeholder(tf.float32, [None, input_dim])
        self.y = tf.placeholder(tf.float32, [output_dim])
        W1 = tf.Variable(tf.random_normal([input_dim, h1_dim]))
        b1 = tf.Variable(tf.constant(0.1, shape=[h1_dim]))
        h1 = tf.nn.relu(tf.matmul(self.x, W1) + b1)
        W2 = tf.Variable(tf.random_normal([h1_dim, output_dim]))
        b2 = tf.Variable(tf.constant(0.1, shape=[output_dim]))
        self.q = tf.nn.relu(tf.matmul(h1, W2) + b2)
        loss = tf.square(self.y - self.q)
        self.train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
        self.sess = tf.Session()
        self.sess.run(tf.initialize_all_variables())

    def select_action(self, current_state, step):
        threshold = min(self.epsilon, step / 1000.)
        if random.random() < threshold:
            # Exploit best option with probability epsilon
            action_q_vals = self.sess.run(self.q, feed_dict={self.x: current_state})
            action_idx = np.argmax(action_q_vals)
            action = self.actions[action_idx]
        else:
            # Random option with probability 1 - epsilon
            action = self.actions[random.randint(0, len(self.actions) - 1)]
        return action

    def update_q(self, state, action, reward, next_state):
        action_q_vals = self.sess.run(self.q, feed_dict={self.x: state})
        next_action_q_vals = self.sess.run(self.q, feed_dict={self.x: next_state})
        next_action_idx = np.argmax(next_action_q_vals)
        action_q_vals[0, next_action_idx] = reward + self.gamma * next_action_q_vals[0, next_action_idx]
        action_q_vals = np.squeeze(np.asarray(action_q_vals))
        self.sess.run(self.train_op, feed_dict={self.x: state, self.y: action_q_vals})

There you go! We have a stock price predictive model running, and we've built it using Reinforcement Learning and TensorFlow.
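To swap the learned policy into the earlier simulation, a plausible wiring (following the names used in this article's listings; the input dimension hist + 2 matches the 202-dimensional state described earlier) is:

actions = ['Buy', 'Sell', 'Hold']
hist = 200
# State = hist recent prices + budget + number of stocks = hist + 2 values.
policy = QLearningDecisionPolicy(actions, hist + 2)
budget = 1000.0
num_stocks = 0
avg, std = run_simulations(policy, budget, num_stocks, prices, hist)
print(avg, std)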
If you found this tutorial interesting and would like to learn more, head over to grab this book, Predictive Analytics with TensorFlow, by Md. Rezaul Karim.    

Installing and Configuring X-pack on Elasticsearch and Kibana

Pravin Dhandre
20 Feb 2018
6 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book written by Pranav Shukla and Sharath Kumar M N titled Learning Elastic Stack 6.0. This book provides detailed coverage on fundamentals of Elastic Stack, making it easy to search, analyze and visualize data across different sources in real-time.[/box] In this short tutorial, we will show step-by-step installation and configuration of X-pack components in Elastic Stack to extend the functionalities of Elasticsearch and Kibana. As X-Pack is an extension of Elastic Stack, prior to installing X-Pack, you need to have both Elasticsearch and Kibana installed. You must run the version of X-Pack that matches the version of Elasticsearch and Kibana. Installing X-Pack on Elasticsearch X-Pack is installed just like any plugin to extend Elasticsearch. These are the steps to install X-Pack in Elasticsearch: Navigate to the ES_HOME folder. Install X-Pack using the following command: $ ES_HOME> bin/elasticsearch-plugin install x-pack During installation, it will ask you to grant extra permissions to X-Pack, which are required by Watcher to send email alerts and also to enable Elasticsearch to launch the machine learning analytical engine. Specify y to continue the installation or N to abort the installation. You should get the following logs/prompts during installation: -> Downloading x-pack from elastic [=================================================] 100% @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: plugin requires additional permissions @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ * java.io.FilePermission .pipe* read,write * java.lang.RuntimePermissionaccessClassInPackage.com.sun.activation.registries * java.lang.RuntimePermission getClassLoader * java.lang.RuntimePermission setContextClassLoader * java.lang.RuntimePermission setFactory * java.net.SocketPermission * connect,accept,resolve * java.security.SecurityPermission createPolicy.JavaPolicy * java.security.SecurityPermission getPolicy * java.security.SecurityPermission putProviderProperty.BC * java.security.SecurityPermission setPolicy * java.util.PropertyPermission * read,write * java.util.PropertyPermission sun.nio.ch.bugLevel write See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html for descriptions of what these permissions allow and the associated Risks. Continue with installation? [y/N]y @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: plugin forks a native controller @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ This plugin launches a native controller that is not subject to the Java security manager nor to system call filters. Continue with installation? [y/N]y Elasticsearch keystore is required by plugin [x-pack], creating... -> Installed x-pack Restart Elasticsearch: $ ES_HOME> bin/elasticsearch Generate the passwords for the default/reserved users—elastic, kibana, and logstash_system—by executing this command: $ ES_HOME>bin/x-pack/setup-passwords interactive You should get the following logs/prompts to enter the password for the reserved/default users: Initiating the setup of reserved user elastic,kibana,logstash_system passwords. You will be prompted to enter passwords as the process progresses. 
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]: elastic
Reenter password for [elastic]: elastic
Enter password for [kibana]: kibana
Reenter password for [kibana]: kibana
Enter password for [logstash_system]: logstash
Reenter password for [logstash_system]: logstash
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [elastic]

Please make a note of the passwords set for the reserved/default users. You can choose any password of your liking. We have chosen the passwords elastic, kibana, and logstash for the elastic, kibana, and logstash_system users, respectively, and we will be using them throughout this chapter.

To verify the X-Pack installation and the enforcement of security, point your web browser to http://localhost:9200/ to open Elasticsearch. You should be prompted to log in to Elasticsearch. To log in, you can use the built-in elastic user and the password elastic. Upon a successful login, you should see the following response:

{
  "name": "fwDdHSI",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "08wSPsjSQCmeRaxF4iHizw",
  "version": {
    "number": "6.0.0",
    "build_hash": "8f0685b",
    "build_date": "2017-11-10T18:41:22.859Z",
    "build_snapshot": false,
    "lucene_version": "7.0.1",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

A typical cluster in Elasticsearch is made up of multiple nodes, and X-Pack needs to be installed on each node belonging to the cluster. To skip the install prompt, use the --batch parameter during installation: $ES_HOME>bin/elasticsearch-plugin install x-pack --batch

Your installation of X-Pack will have created folders named x-pack in the bin, config, and plugins folders found under ES_HOME. We shall explore these in later sections of the chapter.

Installing X-Pack on Kibana

X-Pack is installed just like any plugin to extend Kibana. The following are the steps to install X-Pack in Kibana:

Navigate to the KIBANA_HOME folder.
Install X-Pack using the following command:

$KIBANA_HOME>bin/kibana-plugin install x-pack

You should get the following logs/prompts during installation:

Attempting to transfer from x-pack
Attempting to transfer from https://artifacts.elastic.co/downloads/kibana-plugins/x-pack/x-pack-6.0.0.zip
Transferring 120307264 bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete

Add the following credentials in the kibana.yml file found under $KIBANA_HOME/config and save it:

elasticsearch.username: "kibana"
elasticsearch.password: "kibana"

If you have chosen a different password for the kibana user during the password setup, use that value for the elasticsearch.password property.

Start Kibana:

$KIBANA_HOME>bin/kibana

To verify the X-Pack installation, go to http://localhost:5601/ to open Kibana. You should be prompted to log in to Kibana. To log in, you can use the built-in elastic user and the password elastic. Your installation of X-Pack will have created a folder named x-pack in the plugins folder found under KIBANA_HOME. You can also optionally install X-Pack on Logstash; however, X-Pack currently supports only the monitoring of Logstash.

Uninstalling X-Pack

To uninstall X-Pack:

1. Stop Elasticsearch.
2. Remove X-Pack from Elasticsearch: $ES_HOME>bin/elasticsearch-plugin remove x-pack
3. Restart Elasticsearch and stop Kibana.
4. Remove X-Pack from Kibana: $KIBANA_HOME>bin/kibana-plugin remove x-pack
5. Restart Kibana.

Configuring X-Pack

X-Pack comes bundled with security, alerting, monitoring, reporting, machine learning, and graph capabilities. By default, all of these features are enabled. However, one might not be interested in all the features it provides, and one can selectively enable and disable the features of interest from the elasticsearch.yml and kibana.yml configuration files. Elasticsearch supports per-feature settings in the elasticsearch.yml file; the table here (omitted in this extract) lists flags such as xpack.security.enabled, xpack.monitoring.enabled, xpack.watcher.enabled, xpack.ml.enabled, and xpack.graph.enabled. Kibana supports the corresponding xpack.*.enabled settings in the kibana.yml file (that table is likewise omitted here). If X-Pack is installed on Logstash, you can disable monitoring by setting the xpack.monitoring.enabled property to false in the logstash.yml configuration file.

With this, we successfully explored how to install and configure the X-Pack components in order to bundle the different capabilities of X-Pack into one package for Elasticsearch and Kibana. If you found this tutorial useful, do check out the book Learning Elastic Stack 6.0 to examine the fundamentals of Elastic Stack in detail and start developing solutions for problems like logging, site search, app search, metrics, and more.

Decision Trees

Packt
20 Feb 2018
17 min read
In this article by David Toth, the author of the book Data Science Algorithms in a Week, we will cover the following topics:

Concepts
Analysis

Concepts

A decision tree is an arrangement of the data in a tree structure where, at each node, data is separated into different branches according to the value of the attribute at the node.

Analysis

To construct a decision tree, we will use the standard ID3 learning algorithm, which chooses the attribute that classifies the data samples in the best possible way by maximizing the information gain, a measure based on information entropy.

Information entropy

The information entropy of the given data measures the least amount of information necessary to represent a data item from the given data. The units of information entropy are familiar ones: a bit, and by extension a byte, a kilobyte, and so on. The lower the information entropy, the more regular the data is, the more patterns occur in it, and thus the less information is necessary to represent it. That is why compression tools on a computer can take large text files and compress them to a much smaller size: because words and word expressions keep reoccurring, forming a pattern.

Coin flipping

Imagine we flip an unbiased coin. We would like to know whether the result is head or tail. How much information do we need to represent the result? Both words, head and tail, consist of 4 characters, and if we represent one character with one byte (8 bits), as is standard in the ASCII table, then we would need 4 bytes, or 32 bits, to represent the result. But the information entropy is the least amount of data necessary to represent the result. We know that there are only two possible results: head or tail. If we agree to represent head with 0 and tail with 1, then 1 bit would be sufficient to communicate the result efficiently. Here the data is the space of the possibilities of the result of the coin throw. It is the set {head, tail}, which can be represented as the set {0, 1}. The actual result is a data item from this set. It turns out that the entropy of the set is 1. This is because the probabilities of head and tail are both 50%.

Now imagine that the coin is biased and throws head 25% of the time and tail 75% of the time. What would be the entropy of the probability space {0, 1} this time? We could certainly represent the result with 1 bit of information. But can we do better? One bit is, of course, indivisible, but maybe we could generalize the concept of information to non-discrete amounts. In the previous example, we know nothing about the result of the coin flip unless we look at the coin. But in the example with the biased coin, we know that the result tail is more likely to happen. If we recorded n results of coin flips in a file, representing heads with 0 and tails with 1, then about 75% of the bits there would have the value 1 and 25% of them would have the value 0. The size of such a file would be n bits. But since it is more regular (the pattern of 1s prevails in it), a good compression tool should be able to compress it to fewer than n bits.

To learn the theoretical bound on compression and the amount of information necessary to represent a data item, we define information entropy precisely.

Definition of information entropy

Suppose that we are given a probability space S with the elements 1, 2, …, n. The probability that element i is chosen from the probability space is pi.
Then the information entropy of the probability space is defined as:

E(S) = -p1 * log2(p1) - … - pn * log2(pn)

where log2 is a binary logarithm. The information entropy of the probability space of unbiased coin throws is therefore:

E = -0.5 * log2(0.5) - 0.5 * log2(0.5) = 0.5 + 0.5 = 1

When the coin is biased, with a 25% chance of a head and a 75% chance of a tail, then the information entropy of such a space is:

E = -0.25 * log2(0.25) - 0.75 * log2(0.75) = 0.81127812445

which is less than 1. Thus, for example, if we had a large file with about 25% of 0 bits and 75% of 1 bits, a good compression tool should be able to compress it down to about 81.12% of its size.

Information gain

The information gain is the amount of information entropy gained as a result of a certain procedure. For example, if we would like to know the results of 3 fair coin flips, the information entropy is 3. But if we could look at the 3rd coin, then the information entropy of the result for the remaining 2 coins would be 2. Thus, by looking at the 3rd coin, we gained 1 bit of information, so the information gain is 1.

We may also gain information entropy by dividing the whole set S into subsets, grouping the elements by a similar pattern. If we group elements by their value of an attribute A, then we define the information gain as:

IG(S, A) = E(S) - Sum over v in values(A) of [(|Sv|/|S|) * E(Sv)]

where Sv is the set of the elements of S that have the value v for the attribute A.

For example, let us calculate the information gain for the 6 rows in the swimming example by taking swimming suit as the attribute. Because we are interested in whether a given row of data would be classified as no or yes for the question of whether one should swim, we use swim preference to calculate the entropy and information gain. We partition the set S by the attribute swimming suit:

Snone = {(none,cold,no), (none,warm,no)}
Ssmall = {(small,cold,no), (small,warm,no)}
Sgood = {(good,cold,no), (good,warm,yes)}

The information entropy of S is:

E(S) = -(1/6)*log2(1/6) - (5/6)*log2(5/6) ~ 0.65002242164

The information entropies of the partitions are:

E(Snone) = -(2/2)*log2(2/2) = -log2(1) = 0, since all instances have the class no
E(Ssmall) = 0, for a similar reason
E(Sgood) = -(1/2)*log2(1/2) - (1/2)*log2(1/2) = 1

Therefore, the information gain is:

IG(S, swimming suit) = E(S) - [(2/6)*E(Snone) + (2/6)*E(Ssmall) + (2/6)*E(Sgood)] = 0.65002242164 - (1/3) = 0.3166890883

If we chose the attribute water temperature to partition the set S, what would be the information gain IG(S, water temperature)? The water temperature partitions the set S into the following sets:

Scold = {(none,cold,no), (small,cold,no), (good,cold,no)}
Swarm = {(none,warm,no), (small,warm,no), (good,warm,yes)}

Their entropies are:

E(Scold) = 0, as all instances are classified as no
E(Swarm) = -(2/3)*log2(2/3) - (1/3)*log2(1/3) ~ 0.91829583405

Therefore, the information gain from partitioning by water temperature is:

IG(S, water temperature) = E(S) - [(3/6)*E(Scold) + (3/6)*E(Swarm)] = 0.65002242164 - 0.45914791703 ~ 0.19087450461

which is less than IG(S, swimming suit). Therefore, we can gain more information about the set S (the classification of its instances) by partitioning it by the attribute swimming suit instead of the attribute water temperature. This finding will be the basis of the ID3 algorithm constructing a decision tree in the next paragraph.
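The figures above are easy to verify numerically. The short Python sketch below (not part of the original text) recomputes the entropies and information gains for the swimming example:

import math

def entropy(probs):
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

E_S = entropy([1/6.0, 5/6.0])      # ~0.6500224216
E_warm = entropy([2/3.0, 1/3.0])   # ~0.9182958341
ig_suit = E_S - (2/6.0) * 0 - (2/6.0) * 0 - (2/6.0) * entropy([0.5, 0.5])
ig_temp = E_S - (3/6.0) * 0 - (3/6.0) * E_warm
print(ig_suit, ig_temp)  # ~0.3166890883 and ~0.1908745046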
ID3 algorithm

If each element in the set S has attributes A1, …, Am, then we can partition the set S according to any of these attributes. The ID3 algorithm partitions the set S according to the attribute that yields the highest information gain. Suppose that this attribute is A1. Then for the set S we have the partitions Sv1, …, Svn, where A1 has the possible values {v1, …, vn}.

Since we have not constructed any tree yet, we first place a root node in the tree. For every partition of S, we place a new branch from the root. Every branch represents one value of the selected attribute; a branch has the data samples with the same value for that attribute. For every new branch, we can define a new node that will have the data samples from its ancestor branch.

Once we have defined a new node, we choose another of the remaining attributes with the highest information gain for the data at that node to partition the data at that node further, and then define new branches and nodes. This process can be repeated until we run out of attributes for the nodes, or earlier, once all the data at a node has the same class of interest. In the case of the swimming example, there are only two possible classes for swimming preference: class no and class yes. The last node is called a leaf node and decides the class of a data item.

Tree construction by the ID3 algorithm

Here we describe, step by step, how the ID3 algorithm would construct a decision tree from the given data samples in the swimming example. The initial set consists of 6 data samples:

S = {(none,cold,no), (small,cold,no), (good,cold,no), (none,warm,no), (small,warm,no), (good,warm,yes)}

In the previous sections, we calculated the information gains for the only two non-classifying attributes, swimming suit and water temperature:

IG(S, swimming suit) = 0.3166890883
IG(S, water temperature) = 0.19087450461

Hence, we would choose the attribute swimming suit, as it has the higher information gain. There is no tree drawn yet, so we start from the root node. As the attribute swimming suit has 3 possible values {none, small, good}, we draw 3 branches out of it, one for each value. Each branch will have one partition from the partitioned set S: Snone, Ssmall, and Sgood. We add nodes to the ends of the branches. The Snone data samples all have the same class swimming preference = no, so we do not need to branch that node by a further attribute and partition the set; thus, the node with the data Snone is already a leaf node. The same is true for the node with the data Ssmall. But the node with the data Sgood has two possible classes for swimming preference, so we will branch that node further. There is only one non-classifying attribute left, water temperature, so there is no need to calculate the information gain for that attribute with the data Sgood. From the node Sgood we will have 2 branches, each with a partition of the set Sgood. One branch will have the set of data samples Sgood,cold = {(good,cold,no)}; the other branch will have the partition Sgood,warm = {(good,warm,yes)}. Each of these 2 branches will end with a node, and each node will be a leaf node, because each has data samples with the same value for the classifying attribute swimming preference. The resulting decision tree has 4 leaf nodes and is the tree shown in the picture of the decision tree for the swimming preference example.
Deciding with a decision tree

Once we have constructed a decision tree from the data with the attributes A1, …, Am and the classes {c1, …, ck}, we can use this decision tree to classify a new data item with the attributes A1, …, Am into one of the classes {c1, …, ck}. Given a new data item that we would like to classify, we can think of each node, including the root, as a question about the data sample: What value does this data sample have for the selected attribute Ai? Based on the answer, we select a branch of the decision tree and move on to the next node. Then another question about the data sample is answered, and another, until the data sample reaches a leaf node. A leaf node has one of the classes {c1, …, ck} associated with it, say ci; the decision tree algorithm then classifies the data sample into the class ci.

Deciding a data sample with the swimming preference decision tree

Let us use the decision tree constructed for the swimming preference example with the ID3 algorithm. Consider the data sample (good,cold,?); we would like to use the constructed decision tree to decide into which class it should belong.

Start with the data sample at the root of the tree. The first attribute that branches from the root is swimming suit, so we ask for the value of the attribute swimming suit for the sample (good,cold,?). We learn that swimming suit = good; therefore, we move down the rightmost branch with that value for its data samples. We arrive at the node with the attribute water temperature and ask: what is the value of the attribute water temperature for the data sample (good,cold,?)? We learn that for this data sample water temperature = cold; therefore, we move down the left branch into a leaf node. This leaf is associated with the class swimming preference = no. Therefore, the decision tree classifies the data sample (good,cold,?) into that class, i.e. it completes it to the data sample (good,cold,no). The decision tree therefore says that if one has a good swimming suit but the water temperature is cold, then one would still not want to swim, based on the data collected in the table.

Implementation

decision_tree.py:

import math
import imp
import sys
# The anytree module is used to visualize the decision tree constructed by this ID3 algorithm.
from anytree import Node, RenderTree
import common

# Node for the construction of a decision tree.
class TreeNode:
    def __init__(self, var=None, val=None):
        self.children = []
        self.var = var
        self.val = val

    def add_child(self, child):
        self.children.append(child)

    def get_children(self):
        return self.children

    def get_var(self):
        return self.var

    def is_root(self):
        return self.var == None and self.val == None

    def is_leaf(self):
        return len(self.children) == 0

    def name(self):
        if self.is_root():
            return "[root]"
        return "[" + self.var + "=" + self.val + "]"

# Constructs a decision tree where heading is the heading of the table with the data, i.e. the names of the attributes.
# complete_data are data samples with a known value for every attribute.
# enquired_column is the index of the column (starting from zero) which holds the classifying attribute.
def construct_decision_tree(heading, complete_data, enquired_column):
    available_columns = []
    for col in range(0, len(heading)):
        if col != enquired_column:
            available_columns.append(col)
    tree = TreeNode()
    add_children_to_node(tree, heading, complete_data, available_columns, enquired_column)
    return tree

# Splits the data samples into groups, each having a different value for the attribute at the column col.
def split_data_by_col(data, col):
    data_groups = {}
    for data_item in data:
        if data_groups.get(data_item[col]) == None:
            data_groups[data_item[col]] = []
        data_groups[data_item[col]].append(data_item)
    return data_groups

# Adds a leaf node to node.
def add_leaf(node, heading, complete_data, enquired_column):
    node.add_child(TreeNode(heading[enquired_column], complete_data[0][enquired_column]))

# Adds all the descendants to the node.
def add_children_to_node(node, heading, complete_data, available_columns, enquired_column):
    if len(available_columns) == 0:
        add_leaf(node, heading, complete_data, enquired_column)
        return -1
    selected_col = select_col(complete_data, available_columns, enquired_column)
    for i in range(0, len(available_columns)):
        if available_columns[i] == selected_col:
            available_columns.pop(i)
            break
    data_groups = split_data_by_col(complete_data, selected_col)
    if len(data_groups.items()) == 1:
        add_leaf(node, heading, complete_data, enquired_column)
        return -1
    for child_group, child_data in data_groups.items():
        child = TreeNode(heading[selected_col], child_group)
        add_children_to_node(child, heading, child_data, list(available_columns), enquired_column)
        node.add_child(child)

# Selects an available column/attribute with the highest information gain.
def select_col(complete_data, available_columns, enquired_column):
    selected_col = -1
    selected_col_information_gain = -1
    for col in available_columns:
        current_information_gain = col_information_gain(complete_data, col, enquired_column)
        if current_information_gain > selected_col_information_gain:
            selected_col = col
            selected_col_information_gain = current_information_gain
    return selected_col

# Calculates the information gain when partitioning complete_data according to the attribute at the column col and classifying by the attribute at enquired_column.
def col_information_gain(complete_data, col, enquired_column):
    data_groups = split_data_by_col(complete_data, col)
    information_gain = entropy(complete_data, enquired_column)
    for _, data_group in data_groups.items():
        information_gain -= (float(len(data_group)) / len(complete_data)) * entropy(data_group, enquired_column)
    return information_gain

# Calculates the entropy of the data classified by the attribute at the enquired_column.
def entropy(data, enquired_column):
    value_counts = {}
    for data_item in data:
        if value_counts.get(data_item[enquired_column]) == None:
            value_counts[data_item[enquired_column]] = 0
        value_counts[data_item[enquired_column]] += 1
    entropy = 0
    for _, count in value_counts.items():
        probability = float(count) / len(data)
        entropy -= probability * math.log(probability, 2)
    return entropy

# A visual output of a tree using the text characters.
def display_tree(tree):
    anytree = convert_tree_to_anytree(tree)
    for pre, fill, node in RenderTree(anytree):
        pre = pre.encode(encoding='UTF-8', errors='strict')
        print("%s%s" % (pre, node.name))

# A simple textual output of a tree without the visualization.
def display_tree_simple(tree):
    print('***Tree structure***')
    display_node(tree)
    sys.stdout.flush()

# A simple textual output of a node in a tree.
def display_node(node):
    if node.is_leaf():
        print('The node ' + node.name() + ' is a leaf node.')
        return
    sys.stdout.write('The node ' + node.name() + ' has children: ')
    for child in node.get_children():
        sys.stdout.write(child.name() + ' ')
    print('')
    for child in node.get_children():
        display_node(child)

# Convert a decision tree into the anytree module tree format to make it ready for rendering.
def convert_tree_to_anytree(tree):
    anytree = Node("Root")
    attach_children(tree, anytree)
    return anytree

# Attach the children from the decision tree into the anytree tree format.
def attach_children(parent_node, parent_anytree_node):
    for child_node in parent_node.get_children():
        child_anytree_node = Node(child_node.name(), parent=parent_anytree_node)
        attach_children(child_node, child_anytree_node)

### PROGRAM START ###
if len(sys.argv) < 2:
    sys.exit('Please, input as an argument the name of the CSV file.')

csv_file_name = sys.argv[1]
(heading, complete_data, incomplete_data, enquired_column) = common.csv_file_to_ordered_data(csv_file_name)
tree = construct_decision_tree(heading, complete_data, enquired_column)
display_tree(tree)

common.py:

import csv

# Reads the csv file into a table and then separates the table into the heading, the complete data, and the incomplete data, and also produces the index of the column that is not complete, i.e. contains a question mark.
def csv_file_to_ordered_data(csv_file_name):
    with open(csv_file_name, 'rb') as f:
        reader = csv.reader(f)
        data = list(reader)
    return order_csv_data(data)

def order_csv_data(csv_data):
    # The first row in the CSV file is the heading of the data table.
    heading = csv_data.pop(0)
    complete_data = []
    incomplete_data = []
    # Let enquired_column be the column of the variable whose conditional probability should be calculated. Here, set that column to be the last one.
    enquired_column = len(heading) - 1
    # Divide the data into the complete and the incomplete data. An incomplete row is one that has a question mark in the enquired_column. The question mark will be replaced by the calculated Bayesian probabilities from the complete data.
    for data_item in csv_data:
        if is_complete(data_item, enquired_column):
            complete_data.append(data_item)
        else:
            incomplete_data.append(data_item)
    return (heading, complete_data, incomplete_data, enquired_column)

Program input

swim.csv:

swimming_suit,water_temperature,swim
None,Cold,No
None,Warm,No
Small,Cold,No
Small,Warm,No
Good,Cold,No
Good,Warm,Yes

Program output

$ python decision_tree.py swim.csv
Root
├── [swimming_suit=Small]
│   ├── [water_temperature=Cold]
│   │   └── [swim=No]
│   └── [water_temperature=Warm]
│       └── [swim=No]
├── [swimming_suit=None]
│   ├── [water_temperature=Cold]
│   │   └── [swim=No]
│   └── [water_temperature=Warm]
│       └── [swim=No]
└── [swimming_suit=Good]
    ├── [water_temperature=Cold]
    │   └── [swim=No]
    └── [water_temperature=Warm]
        └── [swim=Yes]

Summary

In this article, we have learned the concept of a decision tree, its analysis using the ID3 algorithm, and an implementation in Python.

Resources for Article:

Further resources on this subject: Working with Data – Exploratory Data Analysis [article] Introduction to Data Analysis and Libraries [article] Data Analysis Using R [article]