
5 engines to build games without coding

Sam Wood
17 Feb 2016
4 min read
Let's start with a disclaimer: if you want to make a video game the best it can be, you're going to need to learn how to code. But you can build games without coding. So, if the prospect of grinding C++ in order to make the next Minecraft doesn't quite appeal to you, here are five accessible tools to help you get into game development without writing a single line of code.

GameMaker - drag and drop game development

GameMaker is one of the premier engines that offers users the chance to make complete mobile games using just a drag-and-drop interface. Specifically designed so that novice programmers can make games without much programming knowledge, it's an excellent choice for anyone looking to make a cross-platform game app without reams of code. In addition, GameMaker boasts its own language for when you want to add extra custom features and refine your game experience.

Unreal Engine - AAA game development without writing code

Unreal Engine is a AAA engine, used to make some of the biggest names out there. If you're just getting into game development and are unsure about coding, you might be surprised to see it on this list. What UE4 offers to beginners and non-coders, though, is the power of its Blueprints visual scripting. With Blueprints, you can create (reasonably) complex games without typing a single line of C++. Built around a node-based interface, Blueprints gives non-programmers access to gameplay elements including camera control, player input, items and triggers, and more.

Unity - the definitive game engine and a good place to begin

Unity is the tool of pro game developers; in our 2015 Skill Up Survey, it was revealed as the tech most important for earning the top salaries in the industry. Unity has no built-in visual scripting like Unreal - but what it does have is a massive community and a huge supply of code snippets and assets available for almost every requirement. You can do a lot in its editor just by dragging scripts onto in-game objects. Whilst (like Unreal) you'll want to pick up some coding skills once your games become more complex, you can get quite a way standing on the shoulders of your fellow developers. If that's not working for you, though, why not check out PlayMaker? This visual scripting plugin for Unity offers a whole new range of options for when you want something a little more custom.

GameSalad - an amazing behavior library

Much like GameMaker, GameSalad is an intuitive drag-and-drop game creator. What makes GameSalad stand out from the crowd, though, is its behavior library. This library lets developers implement really complex behaviors, of a kind that would be challenging or even impossible to muddle through without a knowledge of coding. There are thousands of successful games on Google Play and the App Store built with GameSalad - why not add yours to their number?

Lumberyard

Okay, so I lied a bit in the title of this blog - currently, there's nothing to suggest that Amazon's new game engine will be particularly friendly to non-coders. So what is Lumberyard? Why is it on here? Lumberyard is Amazon's new game engine, derived from CryENGINE. It's built to get people deploying their games to Amazon Web Services (AWS) but is otherwise free to use. What's interesting about Lumberyard is its visual scripting tool, supposedly made for designers and engineers with little to no backend experience to add cloud-connected features to a game. These features can include a "community news feed, daily gifts, or server-side combat resolution" - added within minutes through drag-and-drop visual scripting. Lumberyard is still super new, so we'll have to wait and see if it delivers on its promises - but we may well find ourselves with a serious contender to the likes of Unity and Unreal.

Check out other related posts:
- Construct Game Development: Platformer Revisited, a 2D Shooter
- C++, SFML, Visual Studio, and Starting the first game


Why Ruby developers like Elixir

Guest Contributor
26 Apr 2019
7 min read
Learning a new technology stack requires time and effort, and some developers prefer to stick with their habitual ways. This is one of the major reasons why developers stick with Ruby. Ruby libraries are very mature, making it a very productive language used by many developers worldwide. However, more and more experienced Ruby coders are turning to Elixir. Why is that? Let's find out all the ins and outs of Elixir and what makes it so special for Ruby developers.

What is Elixir?

Elixir is a vibrant and practical functional programming language created for developing scalable and maintainable applications. It leverages the Erlang VM, which is famous for running low-latency, distributed, and fault-tolerant systems. Elixir is currently being used successfully in web development. This general-purpose programming language first appeared back in 2011. It was created by José Valim, one of the major authors of Ruby on Rails, and grew out of Valim's efforts to solve the concurrency problems that Ruby on Rails has.

Phoenix Framework

If you are familiar with Elixir, you have probably heard of Phoenix as well. Phoenix is an Elixir-powered web framework, the one most frequently used by Elixir developers. It incorporates some of the best Ruby solutions while taking them to the next level, allowing developers to enjoy speed and maintainability at the same time.

Core features of Elixir

Over time, Elixir evolved into a dynamic language that numerous programmers around the world use for their projects. Below are the core features that make Elixir so appealing to web developers.

- Scalability. Elixir code executes inside small, isolated processes, and all information is transferred via messages. If an application has many users or is growing actively, Elixir is a perfect choice because it can cope with high loads without the need for extra servers.
- Functionality. Elixir is built to make coding easier and faster. The language is well designed for writing fast, short code that can be maintained easily.
- Extensibility and DSLs. Elixir is an extensible language that allows coders to extend it naturally into specific domains, increasing their productivity significantly.
- Interactivity. With tools like IEx, Elixir's interactive shell, developers can use auto-complete, debug, reload code, and format their documentation.
- Error resistance. Elixir is one of the strongest systems in terms of fault tolerance. Elixir supervisors describe how to take the needed action when a failure occurs to achieve complete recovery. Supervisors carry different strategies to create a hierarchical process structure, also referred to as a supervision tree. This guarantees the smooth performance of applications that are tolerant of errors.
- Handy tools. Elixir gives developers a wide range of handy tools like Hex and Mix, which help programmers to improve software resources in terms of discovery, quality, and sustainability.
- Compatibility with Erlang. Elixir developers have full access to the Erlang ecosystem, because Elixir code executes on the Erlang VM.

Disadvantages of Elixir

The Elixir ecosystem isn't perfect and complete yet. Chances are, there isn't a library to integrate with a service you are working on, and when coding in Elixir you may sometimes have to build your own libraries. The reason is that the Elixir community isn't as huge as the communities of well-established languages like Ruby. Some developers believe that Elixir is a niche language and is difficult to get used to.

- Functional programming. This feature of Elixir is both an advantage and a disadvantage. Most coding languages are object-oriented, so it might be hard for a developer to switch to a functional language.
- Limited talent pool. Elixir is still quite new, and it's harder to find professional coders who have a lot of experience with this language compared to others. Yet, as the language gains more traction, companies and individual developers show more interest in it.

As you can see, there are some downsides to using Elixir as your programming language. However, due to the advantages it offers, some Ruby developers think that it is worth a try. Let's find out why.

Why Elixir is popular among Ruby developers

As you probably know, Ruby and Ruby on Rails are technologies that contribute a lot to programmers' happiness. There are many reasons for developers to love them, but are there any with respect to Elixir? If you analyze what makes programmers happy, you will end up with a list of a few important points. Let's name them and consider whether Elixir meets them.

- Productive technologies. Elixir is extremely productive. With it, it is possible to grow and scale apps quickly.
- Helpful frameworks, tools, and services. Though there are not many libraries in Elixir, their number is continuously growing thanks to the work of its team and contributors. Phoenix and Elixir's extensive toolset are its strong side for now.
- Speed of building new features. Due to the clean syntax of Elixir, features can be implemented with fewer lines of code.
- Active community. Though the Elixir community is still not massive, it is friendly, active, and growing at a fast pace.
- Comfort and satisfaction from development. Elixir programmers enjoy the fact that the language is good at both performance and development speed; they don't need to compromise on either of these important aspects.

As you can see, Elixir still has room for improvement, but it is progressing swiftly. In addition to the overall experience, there are other technical reasons that get Ruby developers hooked on Elixir:

- Elixir solves the concurrency issue that Ruby currently has. As Elixir runs on the Erlang VM, it can handle distributed systems much more effectively than Ruby.
- Elixir runs fast. In fact, it is faster than Ruby in terms of response and compilation times.
- Fits decentralized systems perfectly. Unlike Ruby, Elixir uses message passing to convey commands, which makes it perfect for building fault-tolerant, decentralized systems.
- Scalability. Applications can be scaled easily with Elixir. If you expect the code of your project to be very large and the website you are building to get a lot of traffic, it's a good idea to choose Elixir. Thanks to built-in tools like umbrella projects, you can break the code into chunks that are easier to manage.
- Elixir is the first programming language after Ruby that considers code aesthetics and language UX, and it also cares about its libraries and the whole ecosystem.
- Elixir is one of the most practical functional programming languages. In addition to being efficient, it has a modern-looking syntax similar to Ruby.
- Clear and direct code representation. The language is nearly homoiconic.
- Open Telecom Platform (OTP). OTP gives Elixir its fault tolerance and concurrency capabilities.
- Quick response. Elixir response times can be under 100ms, so there's no waste of time and you can handle numerous requests with the same hardware.
- Zero downtime. With Elixir, you can reach 100% uptime without having to stop for updates; you can deliver updates to production without interfering with its performance.
- No reinventing the wheel. Developers can use existing coding patterns and libraries for their projects.
- Exhaustive documentation. Elixir has instructive documentation that is easy to comprehend.

Being quite a young programming language, Elixir has already attracted a lot of devoted followers thanks to all the features described above. It has the potential to make programming easier, more fun, and in line with the demands of modern businesses. Choosing Elixir is definitely worth it for all the benefits the language offers. We believe that clean and comprehensible syntax, fast performance, high stability, and error tolerance give Elixir a successful future. Technological giants like Discord, Bleacher Report, Pinterest, and Moz have been using Elixir for a while now, enjoying all the competitive advantages it has to offer.

Author Bio

Maria Redka is a Technology Writer at MLSDev, a web and mobile app development company in Ukraine. She has been writing content professionally for more than 3 years.


Rust as a Game Programming Language: Is it any good?

Amarabha Banerjee
22 Sep 2018
4 min read
We have moved light years away from the handheld gaming days. The good old Tetris and Mario games were easy to use and low on graphics, but super difficult to program in spite of their apparently simple appearance. Although it's difficult to trace back the language in which all of these games were written, many of them were written in the C family of languages, which contributed to the difficulty in programming them. Rust has been touted as one of the successors of C, which in turn brings the question back: if C was difficult to code in, then how exactly is Rust going to be different?

The answer to this question lies in Rust's approach. Rust was designed primarily as a systems programming language by the Mozilla Foundation. The primary game development languages over the past 20 years have been C and C++. Rust brings a fresh change in approach - from object oriented to data oriented. The problem with object-oriented programming was summarized nicely by Catherine West from Chucklefish. According to her, treating game elements like NPCs and game worlds as objects might work well at a small scale. But when you are trying to create your own game engine, treating game elements as objects implies creating a lot of super-sized objects with complex layers of dependencies. The Rust approach, on the other hand, is data oriented: every element is treated as data. This simplifies the process of creating mid-sized game engines a lot. With Chucklefish being a significant name in 2D game development, this statement from Catherine West comes as a major boost for developers who want to use Rust for developing 2D games. She has, however, expressed her doubts about using Rust for 3D game development.

Another important personality who has recently come out in support of Rust is Andrea Pessino, CTO of Ready at Dawn. Ready at Dawn is a well-established game studio known for games such as The Order: 1886, Daxter, and various God of War titles. He recently tweeted his support for Rust, which is another feather in Rust's cap for game development.

The present state of game development in Rust is quite encouraging. There are quite a few low-level graphics libraries like GFX. GFX is a low-level abstraction layer over platform-specific graphics interfaces (OpenGL, Metal, Vulkan). It offers handy wrappers over windowing backends (glutin on the Rust side, wrappers around Vulkan, GLFW, and more). GFX is still at a very early stage of development, with the present version being 0.17.

Although major game engines like Unity and Unreal are yet to support Rust for game development, there are a few complete game engines which allow you to create complete games with Rust using their framework.

The first one is Piston. It is the oldest game engine for Rust. It is also the most stable and the one with the best documentation. However, many people find Piston confusing and hard to use as it is super-modular by design. Sometimes it is even hard to understand which module to load to achieve a certain goal or build a certain component of a game.

Amethyst is a more recent game engine/framework inspired by commercial monolithic game engines. It comes with all the necessary dependencies in its package. However, it is evolving quickly, and hence the present documentation is already outdated. There is, though, a vibrant community looking to bring more and more developers into its fold, which gives new developers an opportunity to get into game development with Rust and get involved with a game engine as well.

GGEZ is a simple 2D game engine inspired by the LÖVE engine. This library is more suited to creating simple 2D games for hobbyists. GGEZ is also very new and changes quickly. Its design simplicity is an incentive for indie developers and hobbyists to start creating games with it.

Some other popular libraries include:
- noise-rs: a noise generator
- rlua: high-level bindings between Rust and Lua
- sfxr: a reimplementation of DrPetter's "sfxr" sound effect generator as a Rust library

The conclusion that we can draw from here is that Rust has a lot of promise when it comes to game development. With the data-oriented approach, easy memory management, and access to low-level performance enhancement techniques, Rust can become a full-fledged game development language in the near future.

Related posts:
- Best game engines for Artificial Intelligence game development
- Implementing Unity game engine and assets for 2D game development
- How to use arrays, lists, and dictionaries in Unity for 3D game development


Cloud pricing comparison: AWS vs Azure

Guest Contributor
02 Feb 2019
11 min read
On average, businesses waste about 35% of their cloud spend by using their cloud resources inefficiently. This amounts to more than $10 billion in wasted cloud spend across just the top three public cloud providers. Although the unmatched compute power, data storage options, and efficient content delivery systems of the leading public cloud providers can support incredible business growth, they can also cause some hubris. It's easy to lose control of costs when your cloud provider appears to be keeping things running smoothly. To stop this from happening, it's essential to adopt a new approach to how we manage - and optimize - cloud spend. It's not an easy thing to do, as pricing structures can be complicated. However, in this post, we'll look at how both AWS and Azure structure their pricing, and how you can best determine what's right for you.

Different types of cloud pricing schemes

Broadly, cloud pricing models range from a pure subscription-based model, where services are charged based on a cloud catalog and users are billed per month, per mailbox, or per app license ordered. In this case, subscribers are billed for all the resources to which they are subscribed, irrespective of whether they are used or not. The other option is pay-as-you-go, where subscribers begin with a billing amount set at zero, which then grows with the services and resources they use. Amazon uses the pay-as-you-go model, charging a predetermined price for every hour of virtual machine resources used; such a model is also used by other leading cloud service providers, including Microsoft Azure and Google Cloud Platform. Another variant of cloud pricing is an enterprise billing service, based on the number of active users assigned to a particular cloud subscription; Microsoft Azure is a leading cloud provider that offers cloud subscriptions for its customers. Most cloud providers offer varying combinations of these three models with attractive discount options built in, discussed in the sections below.

What free tier services do AWS and Azure offer?

Both AWS and Azure offer a 'free tier' service for new and initial subscribers, so that potential long-time subscribers can test out the service before committing for the long run. Amazon allows subscribers to try out most of AWS' services free for a year, including RDS, S3, EC2, Elastic Block Store (EBS), Elastic Load Balancing (ELB), and other AWS services. For example, you can use EC2 and EBS on the free tier to host a website for a whole year. EBS pricing will be zero unless your usage exceeds the limit of 30GB of storage, and the free tier for EC2 includes 730 hours of a t2.micro instance. Azure offers similar deals for new users. Azure services like App Service, Virtual Machines, Azure SQL Database, Blob Storage, and Azure Kubernetes Service (AKS) are free for an initial period of 12 months. Additionally, Azure provides the 'Functions' compute service (for serverless) at 1 million requests free every month throughout the subscription, which is useful if you want to give serverless a try.

AWS and Azure's pay-as-you-go, on-demand pricing models

Under the pay-as-you-go model, AWS and Azure offer subscribers the option to simply settle their bills at the end of every month without any upfront investment. This is a good option if you want to avoid a long-term and binding contract. Most resources are available on demand and charged on a per-hour basis, with costs calculated based on the number of hours the resource was used. For data storage and data transfer, the rates are generally calculated per gigabyte. Subscribers are notified 30 days in advance of any changes in the pay-as-you-go rates, as well as when new services are added periodically to the platform.

Reserve-and-pay-less pricing model

In addition to the on-demand pricing model, Amazon AWS has an alternate scheme called Reserved Instances (RI) that allows the subscriber to reserve capacity for specific products. RI offers discounted hourly rates and capacity reservation for its EC2 and RDS services. A subscriber can reserve a resource and save up to 75% of total billing costs in the long run. These discounted rates are automatically applied to the subscriber's AWS bills, and subscribers can reserve instances for either a 1-year or a 3-year term. Microsoft Azure offers to help subscribers save up to 72% of their billing costs compared to its pay-as-you-go model when they sign up for one- to three-year terms for Windows and Linux virtual machines (VMs). Microsoft also allows for added flexibility: if your business needs change, you can cancel your Azure RI subscription at any time and return the remaining unused RI to Microsoft for an early termination fee.

Use-more-and-pay-less pricing model

In addition to the above payment options, AWS offers subscribers one additional option. When it comes to data transfer and data storage services, AWS gives discounts based on the subscriber's usage. These volume-based discounts help subscribers realize critical savings as their usage increases, benefiting from economies of scale so the business can grow while costs are kept relatively under control. AWS also gives subscribers the option to sign up for services that help their growing business. As an example, AWS' storage services offer subscribers opportunities to lower pricing based on how frequently data is accessed and the performance needed in the retrieval process. For EC2, you can get a discount of up to 10% if you reserve more. The image below demonstrates the pricing of the AWS S3 bucket based on usage.

Comparing Cloud Pricing on Azure and AWS

The major cloud service providers - Amazon Web Services, Azure, Google Cloud Platform, and IBM - continually decrease the prices of cloud instances, provide new and innovative discount options, add instance types, and drop billing increments. In some cases, especially Microsoft Azure, per-second billing has also been introduced. However, as costs decrease, the complexity increases, and it is paramount for subscribers to understand and efficiently navigate this complexity. We take a crack at it here.

Reserved Instance Pricing

Azure, AWS, and GCP have all introduced publicly available discounts, some reaching up to 75%, in exchange for signing up to use the services of the particular cloud service provider for a one- to three-year period. We've briefly covered this in the section above. Before signing up, however, subscribers need to understand the amount of usage they are committing to and how much usage to leave as an 'on-demand' option. To do this, subscribers need to consider many different factors:
- Historical usage, by region, instance type, and so on
- Steady-state vs. part-time usage
- An estimate of usage growth or decline
- The probability of switching cloud service providers
- The possibility of choosing alternative computing models like serverless or containers
On-Demand Instance Pricing

On-demand instances work best for applications that have short-term, irregular workloads that are nonetheless too critical to be interrupted. For instance, if you're running cron jobs on a periodic basis that last for a few hours, you can move them to on-demand instances. Each on-demand instance is billed per instance hour from the time it is launched until it is stopped or terminated; if partial instance hours are used, these are rounded up to the full hour during billing. On-demand instances are most useful during the testing or development phase of applications. They are available in many levels of computing power, designed for different tasks executed within the cloud environment, and they carry no binding contractual commitments, so they can be used as and when required. Generally, on-demand instances are among the most expensive purchasing options. The chart below shows the on-demand price per hour for AWS and Azure cloud services, along with the hourly price per GB of RAM.

VM Type                       | AWS OD Hourly | Azure OD Hourly | AWS OD / GB RAM | Azure OD / GB RAM
Standard 2 vCPU w Local SSD   | $0.133        | $0.100          | $0.018          | $0.013
Standard 2 vCPU no local disk | $0.100        | $0.100          | $0.013          | $0.013
Highmem 2 vCPU w Local SSD    | $0.166        | $0.133          | $0.011          | $0.008
Highmem 2 vCPU no local disk  | $0.133        | $0.133          | $0.009          | $0.008
Highcpu 2 vCPU w Local SSD    | $0.105        | $0.085          | $0.028          | $0.021
Highcpu 2 vCPU no local disk  | $0.085        | $0.085          | $0.021          | $0.021

The on-demand price of Azure instances is cheaper compared to AWS for certain VM types; the price difference is most evident for instances with local SSD.

Discounted Cloud Instance Pricing

When it comes to discounted cloud pricing, it is important to remember that this comes with a lock-in period of 1 to 3 years. It therefore works best for organizations that are stable, have a good idea of their historical cloud usage, and can fairly accurately predict what cloud services they will require over the next 12-month period. In the table below, we have looked at the annual costs of both AWS and Azure.

VM Type                       | AWS 1Y RI Annual | Azure 1Y RI Annual | AWS 1Y RI Annual / GB RAM | Azure 1Y RI Annual / GB RAM
Standard 2 vCPU w Local SSD   | $867             | $508               | $116                      | $64
Standard 2 vCPU no local disk | $622             | $508               | $78                       | $64
Highmem 2 vCPU w Local SSD    | $946             | $683               | $63                       | $43
Highmem 2 vCPU no local disk  | $850             | $683               | $56                       | $43
Highcpu 2 vCPU w Local SSD    | $666             | $543               | $178                      | $136
Highcpu 2 vCPU no local disk  | $543             | $543               | $136                      | $136

Azure's rates are clearly better than Amazon's pricing, and by a good margin. Azure offers better discounted rates for Standard, Highmem, and Highcpu compute instances.

Optimizing Cloud Pricing

Subscribers need to move beyond short-term, one-time fixes and make use of automation to continuously monitor their spend, raise alerts for over- or under-use of services, and take automated action based on predetermined conditions. Here are some of the ways you can optimize your cloud spending.

Cloud Pricing Calculators

Cloud pricing tools enable you to list the different parameters for your AWS or Azure subscriptions and calculate an approximate monthly cost that would likely be incurred. You can try the official cloud pricing calculators, such as the AWS Simple Monthly Calculator, or a third-party pricing calculator. Calculators help you to optimize your pricing based on your requirements.
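As a rough illustration, here is a minimal Python sketch of the kind of on-demand vs. reserved-instance comparison such calculators automate, using the illustrative "Standard 2 vCPU w Local SSD" figures from the tables above; the utilization levels and break-even logic are assumptions made for this example, not the output of any official calculator.

```python
# Rough on-demand vs. 1-year reserved-instance comparison, the kind of
# arithmetic a pricing calculator automates. Rates are the illustrative
# "Standard 2 vCPU w Local SSD" figures from the tables above; real
# quotes vary by region, instance family, and current price lists.
HOURS_PER_YEAR = 8760

aws_od_hourly, azure_od_hourly = 0.133, 0.100    # $/hour, on demand
aws_ri_annual, azure_ri_annual = 867, 508        # $/year, 1-year reserved

def annual_cost(od_hourly, ri_annual, utilization):
    """Return (on-demand cost, RI cost) for the fraction of the year the VM runs."""
    on_demand = od_hourly * HOURS_PER_YEAR * utilization
    return on_demand, ri_annual                  # the RI is paid regardless of usage

for utilization in (0.25, 0.50, 1.00):
    aws_od, aws_ri = annual_cost(aws_od_hourly, aws_ri_annual, utilization)
    az_od, az_ri = annual_cost(azure_od_hourly, azure_ri_annual, utilization)
    print(f"{utilization:>4.0%} utilization: "
          f"AWS on-demand ${aws_od:,.0f} vs RI ${aws_ri}, "
          f"Azure on-demand ${az_od:,.0f} vs RI ${az_ri}")
```

At low utilization the on-demand option wins; once an instance runs for most of the year, the reserved instance is clearly cheaper. Surfacing exactly this trade-off is what the calculators below are for.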
If you have a long-term requirement for running instances and are currently running them on on-demand pricing schemes, cloud calculators can offer better insight into reserved-instance schemes and other ways to improve your cloud expenditure. For instance, this Azure calculator by NetApp offers more price optimization options, including options to tier less frequently used data to storage objects like Azure Blob and to customize snapshot creation and storage efficiency. Zerto is another popular calculator for Azure and AWS with a simpler interface. Note, however, that the estimated cost is based on current pricing and is subject to change.

Price List API

Historically, narrowing down the final usage cost involved a considerable amount of manual rate checking: collecting price points, then checking and cross-referencing them manually. In the case of AWS, the Price List API offers programmatic access, which is especially beneficial to designers who can now query the AWS price list instead of searching manually through the web. To make matters easier, the queries can be constructed in simple code in any language. Azure offers a similar billing API to gain insights into your Azure usage programmatically.

Summary

Understanding and optimizing cloud pricing is somewhat challenging with AWS and Azure, partly because they offer hundreds of features with different pricing options, and new features are added to the pipeline every week. To address some of these complexities, we've covered some of the popular ways to tackle pricing in AWS and Azure. Here's a list of things we've covered:
- How cloud pricing works and the different pricing schemes in AWS and Azure
- A comparison of different instance pricing options in AWS and Azure, including reserved instances, on-demand instances, and discounted instances
- Third-party tools, like calculators, for optimizing price
- The Price List API for AWS and Azure

If you have any thoughts to share, feel free to post them in the comments.

About the author

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia. Gilad is a 3-time winner of international technical communication awards, including the STC Trans-European Merit Award and the STC Silicon Valley Award of Excellence. Over the past 7 years, Gilad has headed Agile SEO, which performs strategic search marketing for leading technology brands. Together with his team, Gilad has done market research, developer relations, and content strategy in 39 technology markets, lending him a broad perspective on trends, approaches, and ecosystems across the tech industry.

Related posts:
- Cloud computing trends in 2019
- The 10 best cloud and infrastructure conferences happening in 2019
- Bo Weaver on Cloud security, skills gap, and software development in 2019


How is Node.js Changing Web Development?

Antonio Cucciniello
05 Jul 2017
5 min read
If you have been paying even remote attention to what is going on in the web development space, you know that Node.js has become extremely popular and is many developers' choice of backend technology. It all started in 2009 with Ryan Dahl. Node.js is a JavaScript runtime built on Google Chrome's V8 JavaScript engine. Over the past couple of years, more and more engineers have moved towards Node.js in many of their web applications. With plenty of people using it now, how has Node.js changed web development?

Scalability

Scalability is the one thing that makes Node.js so popular. Node.js runs everything in a single thread. This single thread is event driven (due to JavaScript being the language it is written with) and non-blocking. When you spin up a server in your Node web app, every time a new user connects to the server, that launches an event. That event gets handled concurrently with the other events that are occurring or users that are connecting to the server. In web applications built with other technologies, this would slow down the server after a large number of users. In contrast, a Node application, with its non-blocking, event-driven nature, allows for highly scalable applications. Companies that are attempting to scale can build their apps with Node, which prevents the slowdowns they might otherwise have had. It also means they do not have to purchase as much server space as someone whose web app was not developed with Node.

Ease of Use

As previously mentioned, Node.js is written with JavaScript. JavaScript was always used to add functionality to the frontend of applications, but with the addition of Node.js, you can now write the entire application in JavaScript. This makes it much easier to be a frontend developer who can edit some backend code, or a backend engineer who can play around with some frontend code; in turn, it makes it much easier to become a full stack engineer. You do not really need to know anything new except the basic concepts of how things work in the backend. As a result, we have recently seen the rise of the full stack JavaScript developer. This also reduces the complexity of working with multiple languages; it minimizes any confusion that might arise when you have to switch from JavaScript on the frontend to whatever language would have been used on the backend.

Open Source Community

When Node was released, NPM, the Node package manager, was also given to the public. The Node package manager does exactly what it says on the tin: it allows developers to quickly add and use third-party libraries and frameworks in their code. If you have used Node, then you can vouch for me here when I say there is almost always a package that you can use in your application to make development easier or to automate a larger task. There are packages to help create HTTP servers, help with image processing, and help with unit testing. If you need it, it's probably been made. The even better part about this community is that it's growing by the day, and people are extremely active, contributing many open source packages to help developers with various needs. This increases the productivity of all developers using Node, because they can shift their focus from the less important parts of their application to its main purpose.

Aid in Frontend Development

The release of Node did not only benefit the backend side of development; it also benefitted the frontend. New frameworks that can be used on the frontend, such as React.js or virtual-dom, are all installed using NPM. With packages like browserify you can also use Node's require on the frontend for packages that would normally be used on the backend, so you can be even more productive and develop things faster on the frontend as well.

Conclusion

Node.js is definitely changing web development for the better. It is making engineers more productive with the use of one language across the entire stack. So, my question to you is: if you have not tried out Node in your application, what are you waiting for? Do you not like being more productive? If you enjoyed this post, tweet about your opinion of how Node.js changed web development. If you dislike Node.js, I would love to hear your opinion as well!

About the author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.


What are generative adversarial networks (GANs) and how do they work? [Video]

Richard Gall
11 Sep 2018
3 min read
Generative adversarial networks, or GANs, are a powerful type of neural network used for unsupervised machine learning. Made up of two models that run in competition with one another, GANs are able to capture and copy variations within a dataset. They're great for image manipulation and generation, but they can also be deployed for tasks like understanding risk and recovery in healthcare and pharmacology.

GANs are actually pretty new - they were first introduced by Ian Goodfellow in 2014. Goodfellow developed them to tackle some of the issues with similar neural networks, including the Boltzmann machine and autoencoders, which rely on Markov chains and so carry a pretty high computational cost. Avoiding that cost gives engineers significant gains - which you need if you're working at the cutting edge of artificial intelligence.

How do Generative Adversarial Networks work?

Let's start with a simple analogy. You have a painting - say the Mona Lisa - and a master forger who wants to create a duplicate. The forger does this by learning how the original painter - Leonardo Da Vinci - produced the painting. Meanwhile, you have an investigator trying to catch the forger and 'second guess' the rules the forger is learning. To map this onto the architecture of a GAN, the forger is the generator network, which learns the distribution of classes, while the investigator is the discriminator network, which learns the boundaries between those classes - the formal 'shape' of the dataset.

Applications of GANs

Generative adversarial networks are used for a number of different applications. One of the best examples is a Google Brain project back in 2016, in which researchers used GANs to develop a method of encryption. This project used three neural networks - Alice, Bob, and Eve. Alice's job was to send an encrypted message to Bob. Bob's job was to decode that message, while Eve's job was to intercept it. To begin with, Alice's messages were easily intercepted by Eve. However, thanks to Eve's adversarial work, Alice began to develop its own encryption strategy - it took 15,000 runs for Alice to successfully encrypt a message that Bob could decipher but Eve couldn't. Elsewhere, GANs are also being used in fields such as drug research: networks can be trained on existing drugs and suggest new synthetic chemical structures that improve on drugs that already exist.

Generative adversarial networks: the cutting edge of artificial intelligence

As we've seen, GANs offer some really exciting opportunities in artificial intelligence. There are two key advantages to remember: GANs solve the problem of generating data when you don't have enough to begin with, and they require no human supervision. This is crucial when you think about the cutting edge of artificial intelligence, both in terms of the efficiency of running the models and the real-world data we want to use - which could be poor quality or have privacy and confidentiality issues, as much healthcare data does.
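The adversarial loop described above is easier to see in code. Below is a minimal, illustrative PyTorch sketch (not the code from the video) in which a generator learns to mimic a simple 1-D distribution while a discriminator learns to tell real samples from fakes; all layer sizes, learning rates, and the toy data distribution are arbitrary choices made for the example.

```python
# Minimal GAN training sketch (illustrative only).
# The "forger" (generator) learns to mimic a 1-D Gaussian; the
# "investigator" (discriminator) learns to tell real samples from fakes.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                       # outputs a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),         # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator: push real toward 1 and fake toward 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

Each step trains the discriminator on real and generated samples, then trains the generator to fool the updated discriminator - the forger-versus-investigator game from the analogy above.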

How will AI impact job roles in Cybersecurity

Melisha Dsouza
25 Sep 2018
7 min read
"If you want a job for the next few years, work in technology. If you want a job for life, work in cybersecurity." - Aaron Levie, chief executive of cloud storage vendor Box

The field of cybersecurity faces some dire, but somewhat conflicting, predictions about the availability of qualified cybersecurity professionals over the next four or five years. The Global Information Security Workforce Study from the Center for Cyber Safety and Education predicts that the cybersecurity workforce gap will hit 1.8 million by 2022. On the flip side, the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures, highlights that there will be 3.5 million cybersecurity job openings by 2021, with cybercrime more than tripling the number of job openings over the next 5 years. Living in the midst of a digital revolution driven by AI, we can safely say that AI will be central to the dilemma of what will become of human jobs in cybersecurity. Tech enthusiasts believe that we will see a new generation of robots that can work alongside humans and complement or, maybe, replace them in ways not envisioned previously. AI will not only make jobs easier to accomplish, but also bring about new job roles for the masses. Let's find out how.

Will AI destroy or create jobs in cybersecurity?

AI-driven systems have started to replace humans in numerous industries. However, that doesn't appear to be the case in cybersecurity. While automation can sometimes reduce operational errors and make it easier to scale tasks, using AI alone to spot cyberattacks isn't completely practical, because such systems yield a large number of false positives. They lack contextual awareness, which can lead to attacks being wrongly identified or missed completely. As anyone who's ever tried to automate something knows, automated machines aren't great at dealing with exceptions that fall outside of the parameters to which they have been programmed. Eventually, human expertise is needed to analyze potential risks or breaches and make critical decisions. It's also worth noting that completely relying on artificial intelligence to manage security only leads to more vulnerabilities - attacks could, for example, exploit the machine element in automation.

Automation can support cybersecurity professionals - but shouldn't replace them

Supported by the right tools, humans can do more. They can focus on critical tasks where an automated machine or algorithm is inappropriate. In the context of cybersecurity, artificial intelligence can do much of the 'legwork' at scale in processing and analyzing data, to help inform human decision making. Ultimately, this isn't a zero-sum game - humans and AI can work hand in hand to great effect.

AI2

Take, for instance, the project led by experts at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab. AI2 (Artificial Intelligence + Analyst Intuition) is a system that combines the capabilities of AI with the intelligence of human analysts to create an adaptive cybersecurity solution that improves over time. The system uses the PatternEx machine learning platform and combs through data looking for meaningful, predefined patterns. For instance, a sudden spike in postback events on a webpage might indicate an attempt at staging a SQL injection attack. The top results are then presented to a human analyst, who separates the false positives and flags legitimate threats. The information is then fed into a virtual analyst that uses the human input to learn and improve the system's detection rates. On future iterations, a more refined dataset is presented to the human analyst, who goes through the results and once again "teaches" the system to make better decisions. AI2 is a perfect example that man and machine can complement each other's strengths to create something even more effective. It's worth remembering that in any company using AI for cybersecurity, automated tools and techniques require significant algorithm training and data markup.

New cybersecurity job roles and the evolution of the job market

The bottom line of this discussion is that AI will not destroy cybersecurity jobs, but it will drastically change them. The primary focus of many cybersecurity jobs today is going through the hundreds of security tools available and determining which tools and techniques are most appropriate for the organization's needs. As systems move to the cloud, many of these decisions will already have been made, because cloud providers will offer built-in security solutions. This means that the number of companies that will need a full staff of cybersecurity experts will be drastically reduced. Instead, companies will need more individuals who understand issues like the potential business impact and risk of different projects and architectural decisions. This demands a very different set of skills and knowledge compared to the typical current cybersecurity role - it is less directly technical and will require more integration with other key business decision makers. AI can provide assistance, but it can't offer easy answers.

Humans and AI working together

Companies concerned with cybersecurity legal compliance and effective real-world solutions should note that cybersecurity and information technology professionals are best suited for tasks such as risk analysis, policy formulation, and cyberattack response. Human intervention can help AI systems learn and evolve. Take the example of the Spain-based antivirus company Panda Security, which once had a number of people reverse-engineering malicious code and writing signatures. Today, to keep pace with overflowing amounts of data, the company would need hundreds of thousands of engineers to deal with malicious code manually; with AI, only a small team of engineers is required to look at more than 200,000 new malware samples per day.

Is AI going to steal cybersecurity engineers' jobs?

So what about the former employees who used to perform this job? Have they been laid off? The answer is a straight no - but they will need to upgrade their skill set. In the world of cybersecurity, AI is going to create new jobs, as it throws up new problems to be analyzed and solved. It's going to create what are being called "new collar" jobs - something that IBM's hiring strategy has already taken into account. Once graduates enter the IBM workforce, AI enters the equation to help them get a fast start. Even junior analysts can investigate new malware infecting employees' mobile phones: AI quickly researches the new malware, identifies the characteristics reported by others, and provides a recommended course of action. This relieves analysts from the manual work of going through reams of data and lines of code - in theory, it should make their job more interesting and more fun.
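As a rough illustration of the analyst-in-the-loop cycle that AI2 popularized, here is a small, hypothetical Python sketch (it is not the AI2 or PatternEx code): a classifier scores events, the most suspicious ones are sent to an analyst, and the analyst's verdicts are folded back into the training set each round. The event features, the hidden "ground truth" rule, and all numbers are made up for the example.

```python
# Hypothetical analyst-in-the-loop detection cycle, loosely modeled on the
# AI2 idea described above (not the real system).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy "security events": 8 numeric features per event (rates, counts, ...).
X = rng.normal(size=(5000, 8))
# Hidden rule standing in for the ground truth only the analyst can see.
truth = (X[:, 0] + X[:, 3] > 2.0).astype(int)

labeled = list(rng.choice(5000, size=100, replace=False))   # initial labeled sample
model = RandomForestClassifier(n_estimators=100, random_state=0)

for round_no in range(5):
    # 1) Train on everything the analyst has labeled so far.
    model.fit(X[labeled], truth[labeled])

    # 2) Score the remaining events and surface the most suspicious ones.
    rest = np.setdiff1d(np.arange(5000), labeled)
    scores = model.predict_proba(X[rest])[:, 1]
    top = rest[np.argsort(scores)[-20:]]

    # 3) The analyst reviews the top alerts (simulated here by the hidden rule)
    #    and the verdicts are added to the training set for the next round.
    labeled.extend(top.tolist())
    print(f"round {round_no}: analyst reviewed 20 events, "
          f"{truth[top].sum()} confirmed malicious")
```

Each round the model is retrained on a slightly larger labeled set, which is the sense in which the detection rate can improve over successive iterations.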
Artificial intelligence and the human workforce, then, aren't in conflict when it comes to cybersecurity. Instead, they can complement each other to create new job opportunities that will test the skills of the upcoming generation and lead experienced professionals in new, and maybe more interesting, directions. It will be interesting to see how the cybersecurity workforce makes use of AI in the future.

Related posts:
- Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
- 15 million jobs in Britain at stake with AI robots set to replace humans at workforce
- 5 ways artificial intelligence is upgrading software engineering


Essential skills for penetration testing

Hari Vignesh
11 Jun 2017
6 min read
Cybercriminals are continually developing new and more sophisticated ways to exploit software vulnerabilities, making it increasingly difficult to defend our systems. Today, then, we need to be proactive in how we protect our digital properties. That's why penetration testers are so in demand. Although risk analysis can easily be done by internal security teams, support from skilled penetration testers can be the difference between security and vulnerability. These highly trained professionals can "think like the enemy" and employ creative ways to identify problems before they occur, going beyond the use of automated tools. Pen testers can perform technological offensives, but also simulate spear phishing campaigns to identify weak links in a company's security posture and pinpoint training needs. The human element is essential to simulate a realistic attack and uncover all of the infrastructure's critical weaknesses.

Being a pen tester can be financially rewarding, because trained and skilled testers can normally secure good wages. Employers are willing to pay top dollar to attract and retain talent, and most pen testers enjoy sizable salaries depending on where they live and their level of experience and training. According to a PayScale salary survey, the average salary is approximately $78K annually, ranging from $44K to $124K on the higher end.

To become a better pen tester, you need to master your craft in certain areas. The following skills will make you stand out in the crowd and make you a more effective pen tester. I know what you're thinking: this seems like an awful lot of work to learn penetration testing, right? Wrong. You can still learn how to penetration test and become a penetration tester without these things, but learning all of them will make it easier and help you understand both how and why things are done a certain way. Bad pen testers know that things are vulnerable. Good pen testers know how things are vulnerable. Great pen testers know why things are vulnerable.

Mastering command-line

Notice that even in modern hacker films and series, the hackers always have a little black box on the screen with text going everywhere. It's a cliché, but it's based in reality: hackers and penetration testers alike use the command line a lot, and most of the tools are command-line based. It's not showing off; it's just the most efficient way to do our jobs. If you want to become a penetration tester you need to be, at the very least, comfortable with a DOS or PowerShell prompt or a terminal. The best way to develop this sort of skill set is to learn how to write DOS Batch or PowerShell scripts. There are various command-line tools that make the life of a pen tester easy, so learning to use and master those tools will enable you to pen test your environment efficiently.

Mastering OS concepts

If you look at penetration testing or hacking sites and tutorials, there's a strong tendency to use Linux. If you start with something like Ubuntu, Mint, Fedora, or Kali as a main OS and spend some time tinkering under the hood, it'll help you become more familiar with the environment. Setting up a VM to install and break into a Linux server is a great way to learn. You wouldn't expect to comfortably find and exploit file permission weaknesses if you don't understand how Linux file permissions work, nor should you expect to exploit the latest vulnerabilities comfortably and effectively without understanding how they affect a system. A basic understanding of Unix file permissions, processes, shell scripting, and sockets will go a long way.

Mastering networking and protocols to the packet level

TCP/IP seems really scary at first, but the basics can be learned in a day or two. While breaking in, you can use a packet sniffing tool called Wireshark to see what's really going on when you send traffic to a target, instead of blindly accepting documented behavior without understanding what's happening. You'll also need to know not only how HTTP works over the wire, but also the Document Object Model (DOM) and enough about how backends work to understand how web-based vulnerabilities occur.

Mastering programming

If you can't program, then you're at risk of losing out to candidates who can. At best, you're possibly going to lose money from that starting salary. Why? You need sufficient knowledge of a programming language to understand source code and find vulnerabilities in it. For instance, only if you know PHP and how it interacts with a database will you be able to exploit SQL injection. Your prospective employer is going to need to give you time to learn these things if they're going to get the most out of you. So don't steal money from your own career: learn to program. It's not hard. Being able to program means you can write tools, automate activities, and be far more efficient. Aside from basic scripting, you should ideally become at least semi-comfortable with one programming language and cover the basics in another. Web people like Ruby. Python is popular amongst reverse engineers. Perl is particularly popular amongst hardcore Unix users. You don't need to be a great programmer, but being able to program is worth its weight in gold, and most languages have online tutorials to get you started.

Final thoughts

Employers will hire a bad junior tester if they have to, and a good junior tester if there's no one better, but they'll usually hire a potentially great junior pen tester in a heartbeat. If you don't spend time learning the basics to make yourself a great pen tester, you're stealing from your own potential salary. If you're missing some or all of the things above, don't be upset. You can still work towards getting a job in penetration testing, and you don't need to be an expert in any of these things. They're simply technical qualities that make you a much better (and probably better paid) candidate from a hiring manager's and supporting interviewer's perspective.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


Common big data design patterns

Sugandha Lahoti
08 Jul 2018
17 min read
Design patterns have provided many ways to simplify the development of software applications. Now that organizations are beginning to tackle applications that leverage new sources and types of big data, design patterns for big data are needed. These big data design patterns aim to reduce complexity, boost the performance of integration, and improve the results of working with new and larger forms of data. This article introduces readers to the common big data design patterns across the various data layers: the data sources and ingestion layer, the data storage layer, and the data access layer. This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian. In this book, you will learn the importance of architectural and design patterns in business-critical applications.

Data sources and ingestion layer

Enterprise big data systems face a variety of data sources carrying non-relevant information (noise) alongside relevant (signal) data. The noise ratio is very high compared to the signal, so filtering the noise from the pertinent information, handling high volumes, and coping with the velocity of data are significant concerns. This is the responsibility of the ingestion layer. The common challenges in the ingestion layer are as follows:
- Multiple data source load and prioritization
- Ingested data indexing and tagging
- Data validation and cleansing
- Data transformation and compression

The preceding diagram depicts the building blocks of the ingestion layer and its various components. We need patterns that address the challenges of data source to ingestion layer communication while taking care of performance, scalability, and availability requirements. In this section, we will discuss the following ingestion and streaming patterns and how they help to address the challenges in ingestion layers, and we will also touch upon some common workload patterns:
- Multisource extractor
- Multidestination
- Protocol converter
- Just-in-time (JIT) transformation
- Real-time streaming pattern

Multisource extractor

An approach to ingesting multiple data types from multiple data sources efficiently is termed a multisource extractor. Efficiency covers many factors, such as data velocity, data size, data frequency, and managing various data formats over an unreliable network, mixed network bandwidth, different technologies, and different systems. The multisource extractor system ensures high availability and distribution. It also ensures that the vast volume of data gets segregated into multiple batches across different nodes. A single-node implementation is still helpful for lower volumes from a handful of clients, and, of course, for a significant amount of data from multiple clients processed in batches; partitioning into small volumes in clusters produces excellent results. Data enrichers help to do initial data aggregation and data cleansing: enrichers ensure file transfer reliability, validation, noise reduction, compression, and transformation from native formats to standard formats. Collection agent nodes represent intermediary cluster systems, which help with final data processing and data loading to the destination systems. A toy sketch of this flow appears below, followed by the benefits and impacts of the pattern.
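The following minimal, in-memory Python sketch illustrates the multisource extractor flow: several source readers push raw records onto a shared queue, an enricher validates and normalizes them, and a collection agent batches the standardized records to the destination. The queues, thread-based components, and record formats are illustrative stand-ins for the distributed nodes, enrichers, and collection agents described above, not an implementation from the book.

```python
# Toy sketch of the multisource extractor flow: source readers feed a shared
# queue, an enricher cleanses and standardizes records, and a collection
# agent batches them to the destination. All names are illustrative.
import queue
import threading

raw_events = queue.Queue()
clean_events = queue.Queue()

def source_reader(source_name, records):
    """One reader per data source pushes raw records onto the shared queue."""
    for record in records:
        raw_events.put({"source": source_name, "payload": record})

def enricher():
    """Validate, cleanse, and convert native formats to a standard format."""
    while True:
        event = raw_events.get()
        if event is None:                 # end-of-input marker
            clean_events.put(None)
            return
        payload = str(event["payload"]).strip().lower()
        if payload:                       # drop empty/noisy records
            clean_events.put({"source": event["source"], "payload": payload})

def collection_agent(batch_size=3):
    """Batch standardized records and 'load' them to the destination system."""
    batch = []
    while True:
        event = clean_events.get()
        if event is None:
            break
        batch.append(event)
        if len(batch) >= batch_size:
            print("loading batch:", batch)
            batch = []
    if batch:
        print("loading final batch:", batch)

workers = [threading.Thread(target=enricher), threading.Thread(target=collection_agent)]
for w in workers:
    w.start()

source_reader("web_logs", ["GET /home ", "", "POST /cart"])
source_reader("sensor_feed", ["TEMP:21.5", "TEMP:21.7"])
raw_events.put(None)                      # signal end of input

for w in workers:
    w.join()
```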
The following are the benefits of the multisource extractor:
- Provides reasonable speed for storing and consuming the data
- Better data prioritization and processing
- Drives improved business decisions
- Decouples data production from data consumption
- Data semantics and detection of changed data
- Scalable and fault-tolerant system
The following are the impacts of the multisource extractor:
- Difficult or impossible to achieve near real-time data processing
- Multiple copies need to be maintained in enrichers and collection agents, leading to data redundancy and a mammoth data volume in each node
- High availability is traded off against the high cost of managing system capacity growth
- Infrastructure and configuration complexity increases to maintain batch processing

Multidestination pattern
In multisourcing, we saw raw data being ingested into HDFS, but in most common cases the enterprise needs to ingest raw data not only into new HDFS systems but also into its existing traditional data storage, such as Informatica or other analytics platforms. In such cases, the additional number of data streams leads to many challenges, such as storage overflow, data errors (also known as data regret), an increase in the time to transfer and process data, and so on. The multidestination pattern is considered a better approach to overcome all of the challenges mentioned previously. This pattern is very similar to multisourcing until it is ready to integrate with multiple destinations (refer to the following diagram). The router publishes the improved data and then broadcasts it to the subscriber destinations (already registered with a publishing agent on the router). Enrichers can act as publishers as well as subscribers; a minimal routing sketch appears after this section. Deploying routers in the cluster environment is also recommended for high volumes and a large number of subscribers.
The following are the benefits of the multidestination pattern:
- Highly scalable, flexible, fast, resilient to data failure, and cost-effective
- The organization can start to ingest data into multiple data stores, including its existing RDBMS as well as NoSQL data stores
- Allows you to use simple query languages, such as Hive and Pig, along with traditional analytics
- Provides the ability to partition the data for flexible access and decentralized processing
- Possibility of decentralized computation in the data nodes
- Due to replication on HDFS nodes, there are no data regrets
- Self-reliant data nodes can add more nodes without any delay
The following are the impacts of the multidestination pattern:
- Needs complex or additional infrastructure to manage distributed nodes
- Needs to manage distributed data in secured networks to ensure data security
- Needs enforcement, governance, and stringent practices to manage the integrity and consistency of data
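To make the router's publish-and-broadcast behavior more concrete, here is a minimal, illustrative Python sketch of a router that fans enriched records out to registered subscriber destinations. It is not from the book; the class and destination names are hypothetical, and a production implementation would sit on a real messaging layer (Kafka, JMS, and so on) rather than in-process callbacks.

```python
from typing import Callable, Dict, List

Record = Dict[str, str]

class Router:
    """Broadcasts enriched records to every registered subscriber destination."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[Record], None]] = []

    def register(self, destination: Callable[[Record], None]) -> None:
        # A destination is any callable that accepts a record (HDFS sink, RDBMS sink, ...).
        self._subscribers.append(destination)

    def publish(self, record: Record) -> None:
        # Broadcast the same improved record to all registered destinations.
        for destination in self._subscribers:
            destination(record)

# Hypothetical destinations standing in for HDFS and an existing warehouse.
def hdfs_sink(record: Record) -> None:
    print(f"HDFS <- {record}")

def warehouse_sink(record: Record) -> None:
    print(f"Warehouse <- {record}")

if __name__ == "__main__":
    router = Router()
    router.register(hdfs_sink)
    router.register(warehouse_sink)
    router.publish({"event": "page_view", "user": "42"})
```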
Protocol converter
This is a mediatory approach that provides an abstraction for the incoming data of various systems. The protocol converter pattern provides an efficient way to ingest a variety of unstructured data from multiple data sources and different protocols. The message exchanger handles synchronous and asynchronous messages from various protocols and handlers, as represented in the following diagram. It performs various mediator functions, such as file handling, web services message handling, stream handling, serialization, and so on. In the protocol converter pattern, the ingestion layer holds responsibilities such as identifying the various channels of incoming events, determining incoming data structures, providing a mediated service for multiple protocols into suitable sinks, providing one standard way of representing incoming messages, providing handlers to manage various request types, and providing abstraction from the incoming protocol layers.

Just-in-time (JIT) transformation pattern
The JIT transformation pattern is the best fit in situations where raw data needs to be preloaded into the data stores before transformation and processing can happen. In this kind of business case, the pattern runs independent preprocessing batch jobs that clean, validate, correlate, and transform the data, and then store the transformed information in the same data store (HDFS/NoSQL); that is, it can coexist with the raw data. The preceding diagram depicts the data store with raw data storage along with transformed datasets. Please note that the data enricher of the multisource pattern is absent in this pattern, and more than one batch job can run in parallel to transform the data as required in the big data storage, such as HDFS, MongoDB, and so on.

Real-time streaming pattern
Most modern businesses need continuous and real-time processing of unstructured data for their enterprise big data applications. Real-time streaming implementations need to have the following characteristics (a minimal event-processor sketch follows this section):
- Minimize latency by using a large in-memory capacity
- Event processors are atomic and independent of each other, and so are easily scalable
- Provide an API for parsing the real-time information
- Independently deployable scripts for any node, with no centralized master node implementation
The real-time streaming pattern suggests introducing an optimum number of event processing nodes to consume different input data from the various data sources, and introducing listeners to process the events generated by those nodes in the event processing engine. Event processing engines (event processors) have a sizeable in-memory capacity, and the event processors get triggered by a specific event. The trigger or alert is responsible for publishing the results of the in-memory big data analytics to the enterprise business process engines, which in turn redirect them to various publishing channels (mobile, CIO dashboards, and so on).
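The following is a minimal, illustrative Python sketch of the event-processor idea: an independent processor keeps recent events in memory and a listener publishes an alert when a threshold is crossed. It is not from the book, the names and thresholds are hypothetical, and a real deployment would use a streaming engine such as Storm or an in-memory data grid instead of Python queues.

```python
import queue
import threading
import time
from collections import deque

class EventProcessor(threading.Thread):
    """An atomic, independent processor that keeps a sliding window of events in memory."""

    def __init__(self, source, alert, window_size=1000, error_threshold=10):
        super().__init__(daemon=True)
        self.source = source                     # input queue fed by a data source
        self.alert = alert                       # listener callback that publishes results
        self.window = deque(maxlen=window_size)  # in-memory sliding window
        self.error_threshold = error_threshold

    def run(self):
        while True:
            event = self.source.get()
            self.window.append(event)
            errors = sum(1 for e in self.window if e.get("status") == "error")
            if errors >= self.error_threshold:
                self.alert({"errors_in_window": errors})

def publish_to_dashboard(result):
    # Stand-in for pushing analytics results to business process engines or dashboards.
    print(f"ALERT -> {result}")

if __name__ == "__main__":
    events = queue.Queue()
    EventProcessor(events, publish_to_dashboard, error_threshold=3).start()
    for status in ["ok", "error", "error", "error"]:
        events.put({"status": status})
    time.sleep(0.5)  # give the processor thread a moment to drain the queue
```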
Big data workload patterns
Workload patterns help to address data workload challenges associated with different domains and business cases efficiently. The big data design pattern manifests itself in the solution construct, so the workload challenges can be mapped to the right architectural constructs, which then service the workload. The following diagram depicts a snapshot of the most common workload patterns and their associated architectural constructs. Workload design patterns help to simplify and decompose business use cases into workloads. Those workloads can then be methodically mapped to the various building blocks of the big data solution architecture.

Data storage layer
The data storage layer is responsible for acquiring all the data gathered from various data sources, and it is also liable for converting (if needed) the collected data to a format that can be analyzed. The following sections discuss data storage layer patterns.

ACID versus BASE versus CAP
A traditional RDBMS follows atomicity, consistency, isolation, and durability (ACID) to provide reliability for any user of the database. However, searching high volumes of big data and retrieving data from those volumes consumes an enormous amount of time if the storage enforces ACID rules. So, big data follows basically available, soft state, eventually consistent (BASE), a phenomenon for undertaking any search in big data space. Database theory suggests that a NoSQL big database may predominantly satisfy two properties and relax standards on the third, and those properties are consistency, availability, and partition tolerance (CAP). With the ACID, BASE, and CAP paradigms, the big data storage design patterns have gained momentum and purpose. We will look at those patterns in some detail in this section. The patterns are:
- Façade pattern
- NoSQL pattern
- Polyglot pattern

Façade pattern
This pattern provides a way to use existing or traditional data warehouses along with big data storage (such as Hadoop). It can act as a façade for the enterprise data warehouses and business intelligence tools. In the façade pattern, the data from the different data sources gets aggregated into HDFS before any transformation, or even before loading to the traditional existing data warehouses. The façade pattern allows structured data storage even after ingestion to HDFS, in the form of structured storage in an RDBMS, in NoSQL databases, or in a memory cache. The façade pattern ensures reduced data size, as only the necessary data resides in the structured storage, as well as faster access from the storage.

NoSQL pattern
This pattern entails using NoSQL alternatives in place of a traditional RDBMS to facilitate the rapid access and querying of big data. The NoSQL database stores data in a columnar, non-relational style. It can store data on local disks as well as in HDFS, as it is HDFS aware. Thus, data can be distributed across data nodes and fetched very quickly. Let's look at four types of NoSQL databases in brief (a short key-value sketch follows this list):
- Column-oriented DBMS: Simply called a columnar store or big table data store, it has a massive number of columns for each tuple. Each column has a column key. Column family qualifiers represent related columns so that the columns and the qualifiers are retrievable, as each column has a column key as well. These data stores are suitable for fast writes.
- Key-value pair database: A key-value database is a data store that, when presented with a simple string (key), returns an arbitrarily large piece of data (value). The key is bound to the value until it gets a new value assigned into or from a database. The key-value data store does not need to have a query language; it provides a way to add and remove key-value pairs. A key-value store is a dictionary kind of data store, where it has a list of words and each word represents one or more definitions.
- Graph database: This is a representation of a system that contains a sequence of nodes and relationships that creates a graph when combined. A graph represents three data fields: nodes, relationships, and properties. Some types of graph store are referred to as triple stores because of their node-relationship-node structure. You may be familiar with applications that provide evaluations of similar or likely characteristics as part of a search (for example, "a user who bought this item also bought..." is a good illustration of graph store implementations).
- Document database: We can represent a document data store as a tree structure. Document trees have a single root element or sometimes even multiple root elements. Note that there is a sequence of branches, sub-branches, and values beneath the root element. Each branch can have an expression or relative path to determine the traversal path from the origin node (root) to any given branch, sub-branch, or value. Each branch may have a value associated with it. Sometimes the existence of a branch of the tree has a specific meaning, and sometimes a branch must have a given value to be interpreted correctly.
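As a concrete illustration of the key-value style described above, here is a minimal sketch using the redis-py client. The host, key names, and values are hypothetical; any key-value store from the table that follows (Oracle NoSQL DB, Dynamo, Cassandra, and so on) would expose equivalent put/get/delete operations through its own client.

```python
import redis  # pip install redis

# Hypothetical local Redis instance; swap in your own host and port.
store = redis.Redis(host="localhost", port=6379, db=0)

# Add, read, and remove key-value pairs, with no query language required.
store.set("user:42:last_login", "2018-07-08T10:15:00Z")
print(store.get("user:42:last_login"))  # b'2018-07-08T10:15:00Z'
store.delete("user:42:last_login")
```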
The following table summarizes some of the NoSQL use cases, providers, tools, and scenarios that might call for the NoSQL pattern. Most of these implementations are already part of various vendor offerings and come as out-of-the-box, plug-and-play components, so any enterprise can start leveraging them quickly.

| NoSQL DB to use | Scenario | Vendor / Application / Tools |
| --- | --- | --- |
| Columnar database | Applications that need to fetch an entire related column family based on a given string: for example, search engines | SAP HANA / IBM DB2 BLU / ExtremeDB / EXASOL / IBM Informix / MS SQL Server / MonetDB |
| Key-value pair database | Needle-in-a-haystack applications (refer to the big data workload patterns given in this section) | Redis / Oracle NoSQL DB / Linux DBM / Dynamo / Cassandra |
| Graph database | Recommendation engines: applications that provide evaluations of "similar to" / "like": for example, a user that bought this item also bought... | ArangoDB / Cayley / DataStax / Neo4j / Oracle Spatial and Graph / Apache OrientDB / Teradata Aster |
| Document database | Applications that evaluate churn management of social media data or non-enterprise data | CouchDB / Apache Elasticsearch / Informix / Jackrabbit / MongoDB / Apache Solr |

Polyglot pattern
Traditional storage (RDBMS) and multiple other storage types (files, CMS, and so on) coexist with big data types (NoSQL/HDFS) to solve business problems. Most modern business cases need the coexistence of legacy databases while adopting the latest big data techniques at the same time; replacing the entire system is not viable and is also impractical. The polyglot pattern provides an efficient way to combine and use multiple types of storage mechanisms, such as Hadoop and RDBMS, with big data appliances coexisting in a storage solution. The preceding diagram represents the polyglot way of storing data in different storage types, such as RDBMS, key-value stores, NoSQL databases, CMS systems, and so on. Unlike the traditional way of storing all the information in one single data source, polyglot storage routes data coming from all applications across multiple sources (RDBMS, CMS, Hadoop, and so on) into different storage mechanisms, such as in-memory, RDBMS, HDFS, CMS, and so on.

Data access layer
Data access in traditional databases involves JDBC connections and HTTP access for documents. However, in big data, conventional data access takes too much time to fetch data even with cache implementations, as the volume of the data is so high. So we need a mechanism to fetch the data efficiently and quickly, with a reduced development life cycle, lower maintenance cost, and so on.
Data access patterns mainly focus on accessing big data resources of two primary types:
- End-to-end user-driven API (access through simple queries)
- Developer API (access provision through API methods)
In this section, we will discuss the following data access patterns, which help with efficient data access, improved performance, reduced development life cycles, and low maintenance costs for broader data access:
- Connector pattern
- Lightweight stateless pattern
- Service locator pattern
- Near real-time pattern
- Stage transform pattern
The preceding diagram represents the big data architecture layouts where the big data access patterns help with data access. We discuss the whole of that mechanism in detail in the following sections.

Connector pattern
The developer API approach entails fast data transfer and data access services through APIs. It creates optimized data sets for efficient loading and analysis. Some big data appliances abstract data in NoSQL DBs even though the underlying data is in HDFS or a custom filesystem implementation, so that data access is very efficient and fast. The connector pattern entails providing a developer API and a SQL-like query language to access the data, and so gain significantly reduced development time. As we saw in the earlier diagram, big data appliances come with a connector pattern implementation. The big data appliance itself is a complete big data ecosystem and supports virtualization, redundancy, and replication using protocols (RAID), and some appliances host NoSQL databases as well. The preceding diagram shows a sample connector implementation for Oracle big data appliances. The data connector can connect to Hadoop and to the big data appliance as well. It is an example of the custom implementation described earlier, which facilitates faster data access with less development time.

Lightweight stateless pattern
This pattern entails providing data access through web services, and so it is independent of platform or language implementations. The data is fetched through RESTful HTTP calls, making this pattern the most sought after in cloud deployments. WebHDFS and HttpFS are examples of lightweight stateless pattern implementations for HDFS HTTP access; they use the HTTP REST protocol. The HDFS system exposes the REST API (web services) for consumers who analyze big data. This pattern reduces the cost of ownership (pay-as-you-go) for the enterprise, as the implementations can be part of an integration Platform as a Service (iPaaS). The preceding diagram depicts a sample implementation for HDFS storage that exposes HTTP access through the HTTP web interface.
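To make the WebHDFS idea concrete, here is a minimal sketch that lists a directory and reads a file over the WebHDFS REST API using Python's requests library. The host, port, path, and user are hypothetical placeholders; the operation names (LISTSTATUS, OPEN) are part of the standard WebHDFS API.

```python
import requests

# Hypothetical NameNode host/port and HDFS paths; adjust for your cluster.
BASE = "http://namenode.example.com:9870/webhdfs/v1"
USER = "analyst"

# List a directory.
listing = requests.get(f"{BASE}/data/events",
                       params={"op": "LISTSTATUS", "user.name": USER})
for entry in listing.json()["FileStatuses"]["FileStatus"]:
    print(entry["pathSuffix"], entry["length"])

# Read a file (WebHDFS redirects to a DataNode; requests follows the redirect).
content = requests.get(f"{BASE}/data/events/part-00000.json",
                       params={"op": "OPEN", "user.name": USER})
print(content.text[:200])
```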
Near real-time pattern
For any enterprise to implement real-time or near real-time data access, the key challenges to be addressed are:
- Rapid determination of data: Ensure rapid determination of data and make swift decisions (within a few seconds, not minutes) before the data becomes meaningless
- Rapid analysis: The ability to analyze the data in real time, spot anomalies and relate them to business events, provide visualization, and generate alerts at the moment the data arrives
Some examples of systems that need real-time data analysis are:
- Radar systems
- Customer service applications
- ATMs
- Social media platforms
- Intrusion detection systems
Storm and in-memory applications such as Oracle Coherence, Hazelcast IMDG, SAP HANA, TIBCO, Software AG (Terracotta), VMware, and Pivotal GemFire XD are some of the in-memory computing vendor/technology platforms that can implement near real-time data access pattern applications. As shown in the preceding diagram, with a multi-cache implementation at the ingestion phase, and with filtered, sorted data in multiple storage destinations (here one of the destinations is a cache), one can achieve near real-time access. The cache can be a NoSQL database, or it can be any in-memory implementation tool, as mentioned earlier. The preceding diagram depicts a typical implementation of a log search with Solr as a search engine.

Stage transform pattern
In the big data world, a massive volume of data can get into the data store. However, not all of the data is required or meaningful in every business case. The stage transform pattern provides a mechanism for reducing the data scanned and fetching only the relevant data. HDFS holds the raw data, while business-specific data in a NoSQL database can provide application-oriented structures and fetch only the relevant data in the required format. Combining the stage transform pattern and the NoSQL pattern is the recommended approach in cases where a reduced data scan is the primary requirement. The preceding diagram depicts one such case for a recommendation engine, where we need a significant reduction in the amount of data scanned for an improved customer experience. The implementation of the virtualization of data from HDFS to a NoSQL database, integrated with a big data appliance, is a highly recommended mechanism for rapid or accelerated data fetches.

We discussed big data design patterns by layer: the data sources and ingestion layer, the data storage layer, and the data access layer. To know more about patterns associated with object-oriented, component-based, client-server, and cloud architectures, read our book Architectural Patterns.
Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
Guest Contributor
24 Aug 2019
8 min read
Artificial intelligence is a hot topic for many industries. When it comes to sales, the situation gets complicated. According to the latest Salesforce State of Sales report, just 21% of organizations use AI in sales today, while its adoption in sales is expected to grow 155% by 2020. Let's explore what keeps sales teams from implementing AI and how to overcome these challenges to unlock new opportunities.

Why do so few teams adopt AI in sales?
There are a few reasons behind such a low rate of AI adoption in sales. First, some teams don't feel they are prepared to integrate AI into their existing strategies. Second, AI technologies are often applied in a hectic way: many businesses have high expectations of AI and concentrate mostly on its benefits rather than contemplating possible difficulties upfront. Such an approach rarely results in positive business transformation. Here are some common challenges that businesses need to overcome to turn their sales AI projects into success stories.

Businesses don't know how to apply AI in their workflow
Problem: Different industries call for different uses of AI. Still, companies tend to buy AI platforms and use them for the same few popular tasks, like predictions based on historical data or automatic data logging. In reality, the business type and direction should dictate what AI solution will best fit the needs of an organization. For example, in e-commerce, AI can serve dynamic product recommendations on the basis of the customer's previous purchases or views. Teams relying on email marketing can use AI to serve personalized email content as well as optimize send times.
Solution: Let the sales team participate in AI onboarding. Prior to setup, gain insight into your sales reps' daily routine, needs, and pains. Then, get their feedback continuously during the actual AI implementation. Such a strategy will ensure the sales team benefits from a tailored, rather than a generic, AI system.

AI requires data businesses don't have
Problem: AI is most efficient when fed with huge amounts of data. It's true that a company with a few hundred leads per week will train AI for better predictions than a company with the same number of leads per month. Frequently, companies assume they don't have that much data, or that they cannot present it in a suitable format to train an AI algorithm.
Solution: In reality, AI can be trained with incomplete and imperfect data. Instead of trying to integrate the whole set of data prior to implementing AI, it's possible to start with data subsets, like historical purchase data or promotional campaign analytics. Plus, AI can improve the quality of data by predicting missing elements or identifying possible errors.

Businesses lack the skills to manage AI platforms
Problem: AI is a sophisticated technology that requires special skills to implement and use. Sales teams therefore need to be augmented with specialized knowledge in data management, software optimization, and integration. Otherwise, AI tools can be used incorrectly and thus provide little value.
Solution: There are two ways of solving this problem. First, it's possible to create a new team of big data, machine learning, and analytics experts to run the AI implementation and coordinate it with the sales team. This option is rather time-consuming. Second, it's possible to buy an AI-driven platform, like Salesforce, for example, that includes both out-of-the-box features and plenty of customization opportunities.
Instead of hiring new specialists to manage the platform, you can reach out to Salesforce consultants who will help you select the best-fit plan and configure and implement it. If your requirements go beyond the features available by default, then it's possible to add custom functionality.

How AI can change the sales of tomorrow
When you have a clear vision of the AI implementation challenges and understand how to overcome them, it's time to make use of the benefits AI provides. A core benefit of any AI system is its ability to analyze large amounts of data across multiple platforms and then connect the dots, that is, draw actionable conclusions. To illustrate these AI opportunities, let's take Salesforce, one of the most popular solutions in this domain today, and see how its AI technology, Einstein, can enhance a sales workflow.

Time saving and a productivity boost
Administrative work eats up time that sales reps could spend selling. That's why many administrative tasks should be automated. Salesforce Einstein can save time usually wasted on manual data entry by:
- Automating contact creation and updates
- Logging activity
- Generating lead status reports
- Syncing emails and calendars
- Scheduling meetings

Efficient lead management
When it comes to leads, sales reps tend to base their lead management strategies on gut feeling. In spite of its importance, intuition cannot be the only means of assessing leads; the approach should be more holistic. AI has unmatched abilities to analyze large amounts of information from different sources to help score and prioritize leads. In combination with sales reps' intuition, such data can bring lead management to a new level. For example, Einstein AI can help with:
- Scoring leads based on historical data and performance metrics of the best customers
- Classifying opportunities in terms of their readiness to convert
- Tracking reengaged opportunities and nurturing them

Predictive forecasting
AI is well known for its predictive capabilities, which help sales teams make smarter decisions without running endless what-if scenarios. AI forecasting builds sales models using historical data. Such models anticipate possible outcomes of multiple scenarios common in sales reps' work. Salesforce Einstein, for example, can give the following predictions:
- Prospects most likely to convert
- Deals most likely to close
- Prospects or deals to target
- New leads
- Opportunities to upsell or cross-sell
The same algorithm can be used to forecast sales team performance during a specified period of time and to take proactive steps based on those predictions. What's more, sales intelligence is shifting from predictive to prescriptive, where prescriptive AI does not merely recommend but prescribes the exact actions to be taken by sales reps to achieve a particular outcome.

Watching out for the pitfalls of AI in sales
While AI promises to fulfil sales reps' advanced requests, there are still some fears and doubts around it. First of all, as a rising technology, AI still carries ethical issues related to its safe and legitimate use in the workplace, such as the integrity of autonomous AI-driven decisions and the legitimate origin of the data fed to algorithms. While a full-fledged legal framework is yet to be worked out, governments have already stepped in. For example, the High-Level Expert Group on AI of the European Commission came up with the Ethics Guidelines for Trustworthy Artificial Intelligence, covering every aspect from human oversight and technical robustness to data privacy and non-discrimination.
In particular, non-discrimination relates to potential bias, such as algorithmic bias that comes from human bias when sourcing data, or bias that arises when correlation is mistaken for causation. Thus, AI-driven analysis should be incorporated into decision-making cautiously, as just one of many sources of insight. AI won't replace the human mind; the data still needs to be processed critically. When it comes to sales, another common concern is that AI will take sales reps' jobs. Yes, some tasks that are deemed monotonous and time-consuming are indeed taken over by AI automation. However, this is actually a blessing, as AI does not replace jobs but augments them. This way, sales reps have more time on their hands to complete more creative and critical tasks. It's true, however, that employers will need people who know how to work with AI technologies, which means either ongoing training or new hires, and that can be rather costly. The stakes are high, though. To keep up with the fast-changing world, one has to find a way around current limitations and challenges.

In a nutshell
AI is key to boosting sales team performance. However, successful AI integration into sales and marketing strategies requires teams to overcome the challenges posed by sophisticated AI technologies. Popular AI-driven platforms like Salesforce help sales reps get hold of AI's potential and enjoy vast opportunities for saving time and increasing productivity.

Author Bio
Valerie Nechay is MarTech and CX Observer at Iflexion, a Denver-based custom software development provider. Using her writing powers, she translates complex technologies into fascinating topics and shares them with the world. Her current focus is on Salesforce implementation how-tos, challenges, insights, and shortcuts, as well as broader applications of enterprise tech for business development.
IBM halt sales of Watson AI tool for drug discovery amid tepid growth: STAT report
Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library
How to create sales analysis app in Qlik Sense using DAR method [Tutorial]

6 artificial intelligence cybersecurity tools you need to know
Savia Lobo
25 Aug 2018
7 min read
Recently, organizations got a severe scare from DeepLocker, an undetectable malware that secretly evades even stringent cybersecurity mechanisms. DeepLocker leverages an AI model to attack the target host using indicators such as facial recognition, geolocation, and voice recognition. This incident speaks volumes about the big role AI plays in the cybersecurity domain. In fact, some may even go on to say that AI for cybersecurity is no longer a nice-to-have but a necessity. Organizations large and small, and even startups, are investing heavily in building AI systems to analyze huge data troves and, in turn, help their cybersecurity professionals identify possible threats and take precautions or immediate action to resolve them. If AI can be used to protect systems, it can also be used to harm them. How? Hackers and intruders can use it to launch a much smarter attack, which would be difficult to combat. Phishing, one of the most common and simple social engineering cyber attacks, is now easy for attackers to master, and there is a plethora of tools on the dark web that can help anyone get their hands on phishing. In such trying conditions, it is imperative that organizations take the necessary precautions to guard their information castles. What better than AI?

How 6 tools are using artificial intelligence for cybersecurity

Symantec's Targeted Attack Analytics (TAA) tool
This tool was developed by Symantec and is used to uncover stealthy and targeted attacks. It applies AI and machine learning to the processes, knowledge, and capabilities of Symantec's security experts and researchers. The TAA tool was used by Symantec to counter the Dragonfly 2.0 attack last year. This attack targeted multiple energy companies and tried to gain access to operational networks. Eric Chien, Technical Director of Symantec Security, says, "With TAA, we're taking the intelligence generated from our leading research teams and uniting it with the power of advanced machine learning to help customers automatically identify these dangerous threats and take action." The TAA tools analyze incidents within the network against the incidents found in Symantec's threat data lake. TAA unveils suspicious activity on individual endpoints and collates that information to determine whether each action indicates hidden malicious activity. The TAA tools are now available for Symantec Advanced Threat Protection (ATP) customers.

Sophos' Intercept X tool
Sophos is a British security software and hardware company. Its tool, Intercept X, uses a deep learning neural network that works similarly to a human brain. In 2010, the US Defense Advanced Research Projects Agency (DARPA) created its first Cyber Genome Program to uncover the 'DNA' of malware and other cyber threats, which led to the creation of the algorithm present in Intercept X. Before a file executes, Intercept X is able to extract millions of features from it, conduct a deep analysis, and determine whether the file is benign or malicious in 20 milliseconds. The model is trained on real-world feedback and bi-directional sharing of threat intelligence via access to millions of samples provided by data scientists. This results in a high accuracy rate for both existing and zero-day malware, and a lower false positive rate. Intercept X utilizes behavioral analysis to restrict new ransomware and boot-record attacks.
Intercept X has been tested by several third parties, such as NSS Labs, and has received high scores. It has also been proven on VirusTotal since August 2016. Maik Morgenstern, CTO of AV-TEST, said, "One of the best performance scores we have ever seen in our tests."

Darktrace Antigena
Darktrace Antigena is Darktrace's active self-defense product. Antigena expands Darktrace's core capabilities to detect and replicate the function of digital antibodies that identify and neutralize threats and viruses. Antigena makes use of Darktrace's Enterprise Immune System to identify suspicious activity and respond to it in real time, depending on the severity of the threat. With the help of the underlying machine learning technology, Darktrace Antigena identifies and protects against unknown threats as they develop. It does this without the need for human intervention, prior knowledge of attacks, rules, or signatures. With such automated response capability, organizations can respond to threats quickly, without disrupting the normal pattern of business activity. Darktrace Antigena modules help to regulate user and machine access to the internet, message protocols, and machine and network connectivity via various products such as Antigena Internet, Antigena Communication, and Antigena Network.

IBM QRadar Advisor
IBM's QRadar Advisor uses IBM Watson technology to fight cyber attacks. It uses AI to auto-investigate indicators of any compromise or exploit. QRadar Advisor uses cognitive reasoning to give critical insights and further accelerate the response cycle. With the help of IBM's QRadar Advisor, security analysts can assess threat incidents and reduce the risk of missing them. Features of the IBM QRadar Advisor:
- Automatic investigation of incidents: QRadar Advisor with Watson investigates threat incidents by mining local data, using observables in the incident to gather broader local context. It then quickly assesses whether the threats have bypassed layered defenses or were blocked.
- Intelligent reasoning: QRadar identifies the likely threat by applying cognitive reasoning. It connects threat entities related to the original incident, such as malicious files, suspicious IP addresses, and rogue entities, to draw relationships among them.
- Identification of high-priority risks: With this tool, one can get critical insights on an incident, such as whether or not a piece of malware has executed, with supporting evidence, so you can focus your time on the higher-risk threats and quickly decide on the best response method for your business.
- Key insights on users and critical assets: IBM's QRadar can detect suspicious behavior from insiders through integration with the User Behavior Analytics (UBA) app, and understands how certain activities or profiles impact systems.

Vectra's Cognito
Vectra's Cognito platform uses AI to detect attackers in real time. It automates threat detection and hunts for covert attackers. Cognito uses behavioral detection algorithms to collect network metadata, logs, and cloud events. It further analyzes these events and stores them to reveal hidden attackers in workloads and user/IoT devices. The Cognito platform consists of Cognito Detect and Cognito Recall. Cognito Detect reveals hidden attackers in real time using machine learning, data science, and behavioral analytics. It automatically triggers responses from existing security enforcement points by driving dynamic incident response rules. Cognito Recall determines exploits that exist in historical data.
It further speeds up incident investigations with actionable context about compromised devices and workloads over time, and makes it quick and easy to find all devices or workloads accessed by compromised accounts and to identify files involved in exfiltration. Just as diamond cuts diamond, AI cuts AI. By using AI both to attack and to defend, AI systems will learn different and newer patterns and flag unique deviations to security analysts. This allows organizations to resolve an attack on its way, well before it reaches the core. Given the rate at which AI and machine learning are expanding, the days when AI will redefine the entire cybersecurity ecosystem are not that far away.
DeepMind AI can spot over 50 sight-threatening eye diseases with expert accuracy
IBM's DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
7 Black Hat USA 2018 conference cybersecurity training highlights
Top 5 cybersecurity trends you should be aware of in 2018

Is Dart programming dead already?
Amarabha Banerjee
26 Jul 2018
3 min read
Dart is an open-source, object-oriented, general-purpose programming language developed by Google in 2011. Dart uses a C-style syntax and optionally transcompiles into JavaScript. It is used for both client-side and server-side web development, and also for native and cross-platform mobile development. In spite of all the capabilities that Dart possesses, there are rumors that Dart is heading nowhere. Is there any truth to them? Through this article, we will see how Dart can turn the tide around with some key new projects and application areas that have been explored recently. But first, let's talk about its recent TIOBE ranking, the de facto standard for gauging the popularity of programming languages. The latest rankings show Dart at the 24th spot out of the 50 languages that TIOBE tracks, behind languages like R, Delphi, and even Swift, which was released in 2014, three years after Dart was introduced. Even languages like Delphi and R have seen a recent spike in implementation across various application domains. Codementor ranks Dart at #1 in the list of programming languages one should not learn in 2018. They looked at community engagement, growth, and the job market to arrive at this conclusion.

Source: Codementor.io

The job trends for Dart have also been quite stagnant for some time. After a minor dip between 2015 and 2016, the job trends are now back at the 2014 mark; there has been neither growth nor decline. The major cause of concern for Dart has been that very few companies use it in their development stack. Other than Google-backed products such as AdWords, Google Fiber, and Flutter, only a few other major companies like Workiva, Adobe, and Blossom have actively implemented Dart. Job listings are similar in number to 2014, and although developer salaries are pretty high, the lack of sustained growth balances that advantage out. Google had also shifted to TypeScript as the official language for its most popular front-end development framework, Angular, which was seen as a minor setback for Dart. When compared to other late entrants like Swift, is this a sign of worry for Dart?

Source: ITJobswatch.uk

The answer is definitely 'No'. The reason is Dart's ease of use, lack of boilerplate, and extremely lightweight nature. Developers have termed it a language for the long run. These predictions got a major boost when Google recently announced its latest cross-platform mobile development framework, Flutter, which is written in Dart. The reviews of Flutter have also been encouraging, since it paves the way for simpler and native-like mobile apps. The popularity of Flutter could mean a revival of Dart in the mobile development scenario. But Google might have even bigger aspirations for Dart. Google might be thinking of a possible replacement for its flagship Android operating system, which is plagued by irregular update cycles due to the multiple instances of it running on different devices. Google is developing an operating system called Fuchsia, which is also being written in Dart. All of this points in one direction: Dart is here to stay. If everything goes as per Google's vision, Fuchsia will bring Dart to the forefront of mobile development.

**Edited for better clarity on Dart's comparison to other languages like Swift, Delphi, and R, and elaborated on Dart's real-world use cases.

Read Next
Why Google Dart Will Never Win The Battle For The Browser
Building Games with HTML5 and Dart
Google Fuchsia: What's all the fuss about?

How everyone at Netflix uses Jupyter notebooks from data scientists, machine learning engineers, to data analysts
Bhagyashree R
18 Aug 2018
4 min read
Netflix uses a variety of tools to do data analysis. One of the big ways that data scientists and engineers at Netflix interact with their data is through Jupyter notebooks. In addition to providing execution environments to users, Netflix invests in various parts of the Jupyter ecosystem and tooling. They are "reimagining what a notebook can be, who can use it, and what they can do with it." Netflix aims to provide personalized content to its 130 million viewers, and to this end more than 1 trillion events are written into a streaming ingestion pipeline every day. To support this, they've built an industry-leading data platform that is flexible, powerful, and complex. The platform has many diverse users, such as analytics engineers, data engineers, and data scientists, each requiring different sets of tools and languages. To help the platform scale, they wanted to minimize the number of tools, and the solution was the open-source tool: Jupyter notebooks.

Why is the Jupyter notebook so compelling for Netflix?
These are the functionalities provided by notebooks that benefit Netflix's data scientists and engineers:
- Standard messaging API: The Jupyter protocol provides a standard messaging API to the kernels that act as computational engines. It separates where the content is written from where the content is executed, which makes it language agnostic.
- Editable file format: It provides an editable file format that stores the code and results together.
- Web-based UI: It is web-based, which helps with interactively writing and running code as well as visualizing outputs.

How does Netflix use Jupyter notebooks?
The following are some of the use cases they use Jupyter notebooks for:
- Data access: Notebooks were first introduced for workflows, and their adoption grew among the data scientists. Seeing this, Netflix decided to leverage their versatility and architecture for general data access. Notebooks provide a user-friendly interface for interactively running code, exploring the outputs, and visualizing data, all from a single cloud-based development environment.
- Notebook templates: They introduced parameterized notebooks, which allow the use of parameters in the code and take values as input at runtime (a short parameterized-notebook sketch follows this section). These templates help data scientists run an experiment with different coefficients and summarize the results, data engineers execute data quality audits, data analysts share prepared queries and visualizations, and software engineers email the results of a troubleshooting script.
- Scheduling notebooks: Next, they are using notebooks to create a unifying layer for scheduling workflows. Notebooks are used for interactive work and allow a smooth move to scheduling that work to run recurrently. Many users create an entire workflow in a notebook and just copy/paste it into separate files for scheduling when they're ready to deploy it.
- Notebook infrastructure: The three fundamental components of the infrastructure are storage, compute, and interface (Source: Netflix Tech Blog).
  - Storage: The Netflix Data Platform is made up of Amazon S3 and EFS for cloud storage, which notebooks treat as virtual filesystems. Each user has a home directory on EFS containing a personal workspace for notebooks. This workspace stores any notebook created or uploaded by a user, and when a user launches a notebook interactively, all the reading and writing happens in the workspace.
  - Compute: All the jobs on the data platform run on containers, including queries, pipelines, and notebooks. A container with reasonable default resources is provisioned when a user launches a notebook, and users can request more resources if they find that the provided resources are not enough. A unified execution environment with a prepared container image is provided, with common libraries and an array of default kernels preinstalled. The orchestration and environments are managed with Titus, their container management platform.
  - Interface: They are using nteract, a React-based frontend for Jupyter notebooks, which emphasizes simplicity and composability as core design principles. They're also introducing native support for parameterization, which makes it easier to schedule notebooks and create reusable templates.
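As an illustration of the parameterized-notebook idea described under notebook templates, here is a minimal sketch using papermill, an open-source library from the nteract ecosystem for executing notebooks with injected parameters. The notebook paths and parameter names below are hypothetical; only the execute_notebook call reflects papermill's actual API.

```python
import papermill as pm  # pip install papermill

# Hypothetical template and output paths. The template notebook declares default
# values in a cell tagged "parameters"; papermill injects the values below at runtime
# and writes a fully executed copy, results included, to the output path.
pm.execute_notebook(
    "templates/data_quality_audit.ipynb",
    "runs/data_quality_audit_2018-08-18.ipynb",
    parameters={"table": "playback_events", "run_date": "2018-08-18"},
)
```

The same template can then be re-run on a schedule with different parameter values, which is essentially how a notebook doubles as a reusable, schedulable job.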
Netflix is planning to make investments in both the frontend and backend to improve the overall notebook experience. This year they are also sponsoring JupyterCon. To read more about how Jupyter is offering value to Netflix, read Netflix's original post on Medium.
10 reasons why data scientists love Jupyter notebooks
What's new in Jupyter Notebook 5.3.0
Netflix open sources Zuul 2 cloud gateway

How to build secure microservices
Rick Blaisdell
13 Jul 2017
4 min read
A few years back, everybody was looking for an architecture that would make web and mobile application development more flexible, reliable, efficient, and scalable. In 2014, we found the answer when an innovative architectural solution was developed: the microservice. The fastest growing companies are built around microservices. What makes the microservice architecture fascinating is its characteristics:
- Microservices are organized around competencies, like recommendations, front-end, and user interface.
- You can implement them using various programming languages, databases, software, and environments.
- The services lend themselves to a continuous delivery software development process; any change in the application requires only a few changes in a service.
- They are easy to replace with other microservices.
- These services are independently deployable, autonomously developed, and messaging enabled.
So, it's easy to understand why a microservice architecture is a perfect way to accelerate both web and mobile application development. However, one needs to understand how to build secure microservices. Security is the top priority for every business. Designing a safe microservices architecture can be simple if you follow these guidelines:
- Define access control and authorization – This is one of the crucial steps in reaching a higher level of security. It's important to understand first how each microservice could be compromised and what damage could be done. This will make it much easier for you to develop a strategy that safeguards against these incidents.
- Map communications – Outlining all the communication methods between microservices will give you valuable insights into any vulnerability that might eventually be exploited in case of a malicious attack.
- Use centralized security or configuration policies – Human error is one of the most common reasons why platforms, devices, or networks get hacked or damaged. It's a fact! Employing a centralized security or configuration policy will reduce human interaction with the microservices and build the long-desired consistency.
- Establish common, repeatable coding standards – Repeatable coding standards must be set up right from the development stage. They will reduce divergences that might lead to exploitable vulnerabilities.
- Use 'defense in depth' to authorize vital services – From our experience, we know that a single firewall is not strong enough to protect our entire software. Thus, enabling multi-factor authentication, which places multiple layers of security controls, is an effective way to ensure a robust security level.
- Use automatic security updates – This is crucial and easy to set up.
- Review microservices code – Having multiple experts review the code is a great way of making sure that errors have not slipped through the cracks.
- Deploy an API gateway – If you expose one or more APIs for external access, then deploying an API gateway could reduce security risks. Moreover, you need to make sure that all API traffic is encrypted using TLS. In fact, TLS should be used for all internal communications right from the beginning to ensure the security of your systems.
- Use intrusion tools and request fuzzers – We all know that it is better to find issues before an attacker does. 'Fuzzing' is a technique that finds code vulnerabilities by sending large quantities of random data to the systems. This approach will ultimately highlight whether the code could be compromised and what could cause it to fail (see the sketch after this list).
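Here is a minimal, illustrative Python sketch of the fuzzing idea: firing random payloads at an endpoint and flagging unexpected failures. The URL and payload sizes are hypothetical, and a real assessment would rely on a dedicated fuzzer rather than this toy loop. Only ever point it at a test instance, never at production.

```python
import os
import random
import requests

# Hypothetical internal test endpoint for one microservice.
TARGET = "https://orders.test.example.com/api/orders"

for attempt in range(100):
    payload = os.urandom(random.randint(1, 4096))  # random bytes of random length
    try:
        response = requests.post(TARGET, data=payload, timeout=5)
        # 5xx responses suggest the service failed to handle malformed input gracefully.
        if response.status_code >= 500:
            print(f"attempt {attempt}: server error {response.status_code}")
    except requests.RequestException as exc:
        print(f"attempt {attempt}: request failed: {exc}")
```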
Now that we're all set with the security measures required for building microservices, I would like to give a quick overview of the benefits this innovative architecture has to offer:
- Fewer dependencies between teams
- Run multiple initiatives in parallel
- Support for various technologies, frameworks, or languages
- Ease of innovation through disposable code
Besides the tangible advantages named above, microservices deliver increased value to your business, such as agility, comprehensibility of the software systems, independent deployability of components, and organizational alignment of services. I hope that this article will help you build a secure microservices architecture that adds value to your business.

About the Author
Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

Python, Tensorflow, Excel and more - Data professionals reveal their top tools
Amey Varangaonkar
06 Jun 2018
4 min read
Data professionals are constantly on the lookout for the best tools to simplify their data science tasks, be it data acquisition, machine learning, or visualizing the results of an analysis. With so much on their plate already, having robust, efficient tools in the arsenal helps a lot in reducing procedural complexities, and the time taken to do these tasks is considerably reduced as well. But what tools do data professionals rely on to make their lives easier? Thanks to the Skill Up 2018 survey that we recently conducted, we have some interesting observations to share with you! Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

Key takeaways
- Python is the programming language most widely used by data professionals
- Python finds wide adoption across all spectrums of data science, including data analysis, machine learning, deep learning, and data visualization
- Excel continues to be favored by data professionals because of its effectiveness and simplicity
- R is slowly falling behind Python in the race to data science supremacy
Now, let's look at these observations in more depth.

Python continues its ascension as the top dog
Python's rise in popularity as well as adoption over the last three years has been quite staggering, to say the least. Python's ease of use and powerful analytical and machine learning capabilities, as well as its applications outside of data science, make it quite a popular language in the tech community. It thus comes as no surprise that it stood out from the others and was the undisputed choice of language for the data pros. R, on the other hand, seems to be finding it difficult to play catch-up to Python, with less than half the number of votes, despite being the tool of choice for many statisticians and researchers. Is the paradigm shift well and truly on? Is Python edging R out for good?

Source: Packt Skill-Up Survey 2018

It is interesting to see SQL at number 2, but considering the number of people working with databases these days, it doesn't come as a surprise. Also, JavaScript is preferred more than Java, indicating the rising need for web-based dashboards for effective business intelligence.

Data professionals still love Excel, but Python libraries are taking over
Microsoft Excel has traditionally been a highly popular tool for data analysis, especially when dealing with data with hundreds and thousands of records. Excel's perfect setting for data manipulation and charting continues to be the reason why people still use it for basic-level data analysis, as indicated by our survey. Almost 53% of the respondents prefer having Excel in their analysis toolkit for their day-to-day tasks.

Top libraries, tools and frameworks used by data professionals (Source: Packt Skill-Up Survey 2018)

The survey also indicated Python's rising dominance in the data science domain, with 8 out of the 10 most-used tools for data analysis being Python-based. Python's offerings for data wrangling, scientific computing, machine learning, and deep learning make its libraries the obvious choice for data professionals. Here's a quick look at 15 useful Python libraries to make the above-mentioned data science tasks easier.

Tensorflow and PyTorch are in demand
AI's popularity is soaring with every passing day as it finds applications across all types of industries and business domains.
In our survey, we found machine learning and deep learning to be two of the most valuable skills for any data scientist, as can be seen from the word cloud below.

Word cloud for the most valued skills by data professionals (Source: Packt Skill-Up Survey)

Python's two popular deep learning frameworks, Tensorflow and PyTorch, have thus gained a lot of attention and adoption in recent times. Along with Keras, another Python library, these are the frameworks most used by data scientists and ML developers for building efficient machine learning and deep learning models. Which languages and libraries do you use for your everyday data science tasks? Do you agree with your peers' choice of tools? Feel free to let us know!
Read more
Data cleaning is the worst part of data analysis, say data scientists
30 common data science terms explained
Top 10 deep learning frameworks