
Tech Guides


IoT Forensics: Security in an always-connected world where things talk

Vijin Boricha
01 May 2018
3 min read
Connected physical devices, home automation appliances, and wearable devices are all part of the Internet of Things (IoT). All of these have two major things in common: seamless connectivity and massive data transfer. This also brings with it plenty of opportunities for massive data breaches and allied cyber security threats.

The motive of digital forensics is to identify, collect, analyse, and present digital evidence collected from various mediums in a cybercrime incident. The multiplication of IoT devices and the increased number of cyber security incidents have given birth to IoT forensics. IoT forensics is a branch of digital forensics which deals with IoT-related cybercrimes and includes the investigation of connected devices, sensors, and the data stored on all possible platforms.

If you look at the bigger picture, IoT forensics is far more complex, multifaceted, and multidisciplinary in its approach than traditional forensics. With such versatile IoT devices, there is no single method of IoT forensics that can be applied broadly, so identifying valuable sources of evidence is a major challenge. The entire investigation will depend on the nature of the connected or smart device in place. For example, evidence could be collected from fixed home automation sensors, moving automobile sensors, wearable devices, or data stored in the cloud.

Compared with standard digital forensic techniques, IoT forensics presents multiple challenges depending on the versatility and complexity of the devices involved. The following are some challenges that one may face in an investigation:

- Variance of the IoT devices
- Proprietary hardware and software
- Data present across multiple devices and platforms
- Data that can be updated, modified, or lost
- Jurisdiction issues when data is stored in the cloud or in a different geography

As such, IoT forensics requires a multi-faceted approach where evidence can be collected from various sources. We can categorize sources of evidence into three broad groups:

- Smart devices and sensors: gadgets present at the crime scene (smartwatches, home automation appliances, weather control devices, and more)
- Hardware and software: the communication link between smart devices and the external world (computers, mobiles, IPS devices, and firewalls)
- External resources: areas outside the network under investigation (the cloud, social networks, ISPs, and mobile network providers)

Once evidence is successfully collected from an IoT device, no matter the file system, operating system, or platform it is based on, it should be logged and monitored (a simple integrity-logging sketch follows below). The main reason is that IoT device data is stored mostly in the cloud, owing to its scalability and accessibility, and there is a real possibility that data in the cloud can be altered, which would cause an investigation to fail. Cloud forensics can certainly play an important role here, but strengthening cyber security best practices should be the primary motive.

With ever-evolving IoT devices, there will always be a need for new methods and techniques to carry an investigation through. Cybercrime keeps evolving and getting bolder by the day, and forensics experts will have to develop the skill sets to deal with the variety and complexity of IoT devices to keep up with this evolution. No matter the challenges one faces, there is always a solution to complex problems. There will always be a need for unique, intelligent, and adaptable techniques to investigate IoT-related crimes, and an even greater need for people displaying these capabilities.
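To make the point about logging collected evidence concrete, here is a minimal Python sketch of integrity logging at collection time. It is our own illustration, not from the article: the file paths, log format, and function names are hypothetical.

```python
import hashlib
import json
import time
from pathlib import Path

def log_evidence(path: str, log_file: str = "evidence_log.jsonl") -> str:
    """Hash a collected evidence file and append an entry to an append-only log."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    entry = {"file": path, "sha256": digest, "logged_at": time.time()}
    with open(log_file, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

def verify_evidence(path: str, expected_digest: str) -> bool:
    """Re-hash the file later; a mismatch means the evidence was altered."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest() == expected_digest
```

Re-hashing cloud-stored copies against the digests recorded at collection time gives the investigator a way to demonstrate that evidence has not been tampered with since it was gathered.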
To learn more on IoT security, you can get your hands on a few of our books: IoT Penetration Testing Cookbook and Practical Internet of Things Security.

Why Metadata is so important for IoT
Why the Industrial Internet of Things (IIoT) needs Architects
5 reasons to choose AWS IoT Core for your next IoT project


Top 7 tools for virtual reality game developers

Natasha Mathur
31 Oct 2018
12 min read
According to Statista, the virtual reality software market is booming. It is projected to reach a value of around 24.5 billion U.S. dollars by 2020. The estimated revenue of the virtual reality market in 2021 is 3.56 billion U.S. dollars, a big increase from a very respectable 3.06 billion U.S. dollars back in 2016.

This makes virtual reality a potentially lucrative opportunity if you're a game developer. But it's also one that's a lot of fun, with plenty of creative opportunities, and which doesn't require a load of money up front. Thanks to technological advancements in the VR space, it's now easier than ever to build a VR game from scratch. But with so many virtual reality tools out there, it can be hard to know where to start; you are left with plenty of options but no sense of direction. To help you out, we've consolidated a list of what we think are the top 7 tools to help you get started.

1. Unity 3D: the leading game engine at the cutting edge of the industry

Developer: Unity Technologies
Release date: 2005

Why choose Unity for virtual reality game development? In a nutshell: it is the easiest way to get started with virtual reality development and doesn't compromise on the quality of the developed game.

Unity offers a huge 3D asset store, an online marketplace run by Unity. In this store, you can easily find 2D and 3D models, SDKs, templates, and different virtual reality tools that you can download and import directly into your game. One of the most popular tools in the asset store is the VR toolkit. For the times when you don't want to spend time building a character model from scratch, you can simply pick one from the asset store, which helps jump-start the game development process. Some of these assets are free, and for others you pay a one-time fee.

Moreover, Unity's documentation features vivid examples (e.g., Introduction to VR best practices), video tutorials, and live training sessions (e.g., the VR essentials pack demo). This is great news not only for experienced game developers but for newbies too, as Unity makes it easy to quickly learn to build games, including AAA-quality virtual reality games. It also has an ever-growing community, so for the times when you get stuck during development, a solid community will be there to offer advice on resolving a wide range of issues.

Languages supported: Unity supports three development languages, namely C#, Boo, and UnityScript.

Platforms supported: Unity supports all the major platforms: mobile, PC, web, and consoles. The free version supports Mac OS X, Android, iOS, and Windows, among other platforms; the paid version further supports Nintendo Wii, Xbox 360, and PlayStation. The free version, however, is more than enough to dive right into the development process. Unity also supports all the major HMDs, such as Oculus Rift, SteamVR/Vive, PlayStation VR, Gear VR, Microsoft HoloLens, and Google's Daydream View.

Price: Unity comes in three versions: Personal, Plus, and Pro. The Personal version is completely free, Plus is $35 per seat per month, and Pro is $125 per seat per month.

Learning curve: Unity 3D has a flat learning curve and can be used with ease by beginners and professionals alike.

Learning resources:
Unity Virtual Reality Projects - Second Edition
Unity Virtual Reality - Volume 1 [Video]
Unity Virtual Reality - Volume 2 [Video]

2. Unreal Engine 4: a free game engine with exceptional graphics and capabilities for virtual reality

Developer: Epic Games
Release date: 1998

Why choose Unreal Engine for virtual reality gaming? Unreal Engine has powered games with some of the most exceptional graphics and features in the industry, so it naturally comes with features catered towards advanced game development. For virtual reality, Unreal Engine offers an advanced cinematics system, advanced lighting capabilities, a rendering pipeline offering 90 Hz stereo framerate or faster at high resolutions, and tools that scale from simple to detailed scenes, environments, and characters.

Similar to Unity, Unreal Engine 4 also comes with an asset store, an online marketplace offering animations, blueprints, code plugins, props, environments, and architectural visualization. Again, just like Unity's asset store, some assets are paid and some are free. The documentation provided by Unreal Engine is not as rich as Unity's, consisting of basic guides and live training streams on virtual reality development. Unreal Engine 4 also has a strong community to guide you through your game development journey.

Languages supported: Unreal Engine 4 offers only C++ as a development language.

Platforms supported: UE4 supports all the latest HMDs, such as Oculus Rift, HTC Vive, Samsung Gear VR, Google VR, and Leap Motion, among others. Unreal Engine 4 lets you deploy your VR game projects to Windows PC, PlayStation 4, Xbox One, Mac OS X, iOS, Android, AR, VR, Linux, SteamOS, and HTML5. You can run the Unreal Editor on Windows, Mac OS X, and Linux. Moreover, Xbox One, PlayStation 4, and Nintendo Switch console tools and code are available at no additional cost to developers registered for the respective platforms.

Price: The great thing about UE4 is that it is very cost-effective: it is free to use, with a 5% royalty on gross product revenue after the first $3,000 per game per calendar quarter from commercial products. For example, on $10,000 of gross revenue in a quarter, the royalty owed would be 5% of the $7,000 above the threshold, or $350.

Learning curve: Unreal Engine 4 has a steep learning curve and is suited mostly to professionals.

Learning resources:
Exploring Unreal Engine 4 VR Editor and Essentials of VR [Video]
Unreal Engine 4: The Complete Beginner's Course [Video]

3. CryEngine: a game engine with a powerful range of assets for virtual reality games

Developer: Crytek
Release date: 2002

Why choose CryEngine for virtual reality game development? Like Unity and Unreal Engine, CryEngine offers an asset store with tools and assets across different domains such as 3D modeling, scripts, sounds, and animations. The documentation offered by CryEngine is not as rich as Unity's, which makes it harder for beginners to approach; however, it does have an online forum which can guide experienced developers through their virtual reality game development journey. CryEngine also includes the CE# Framework, a new Sandbox Editor, improved profiling, a reworked low-overhead renderer, DirectX 12 support, an advanced volumetric cloud system, a new particle system, FMOD Studio support, and Visual Studio 2015 support, all of which collectively can amp up the virtual reality game development process.

Languages supported: It supports C++, Flash, ActionScript, and Lua.

Platforms supported: CryEngine supports Windows, Linux, PlayStation 4, Xbox One, Oculus Rift, OSVR, PSVR, and HTC Vive. Mobile support is currently under development.

Price: CryEngine is free, but takes five percent of the revenue generated by each game built with it, after the revenue has passed $5,000.

Learning curve: CryEngine has a steep learning curve: for anything other than basic games, you need a strong command of languages such as C++, Flash, ActionScript, and Lua.

Learning resources:
CryENGINE Game Programming with C++, C#, and Lua
CryENGINE SDK Game Programming Essentials [Video]

4. Blender: an accessible tool for building exceptional graphics and animations

Developer: Blender Foundation
Release date: 1998

Why choose Blender for virtual reality? Blender, a modern 3D graphics package, is not only great for 3D modeling but supports the entirety of the 3D pipeline: rigging, animation, simulation, rendering, motion tracking, video editing, and game creation. It comes with a powerful built-in path-tracing engine called Cycles that offers stunning, ultra-realistic rendering, real-time viewport preview, PBR shaders and HDR lighting support, as well as VR rendering support. It also has a solid community of developers and offers tutorials, workshops, and courses on character modeling, character animation, and Blender fundamentals.

Blender comes with VR add-ons such as BlenderVR, which supports CAVE/VideoWall, head-mounted displays (HMDs), and external rendering modality engines. It helps with the cross-platform development of virtual reality applications and with porting scenes from one VR platform configuration to another without any need to edit the actual scene.

Platforms supported: Blender supports Windows, Mac OS, and Linux.

Price: Blender is free to use.

Learning curve: Blender has a flat learning curve and can be used with ease by beginners and professionals alike.

Learning resources:
Building a Character using Blender 3D [Video]
Blender 3D Basics

5. Amazon Lumberyard: an accessible and fast tool for building virtual reality games

Developer: Amazon
Release date: 2015

Why choose Amazon Lumberyard for virtual reality game development? Based on CryEngine's architecture, Amazon Lumberyard is a powerful cross-platform game engine comprising tools that help you create the highest-quality games, connect your games to the vast storage of the AWS Cloud, and engage fans on Twitch. Lumberyard's professional tools, such as its virtual reality system, use Lumberyard's Gems: self-contained packages of assets and features that can be added to your game. These Gems act as templates for building your own Gems, and they support all VR devices without requiring any engine code editing. Lumberyard is also integrated with Amazon GameLift, an AWS service for deploying, operating, and scaling dedicated game servers for session-based multiplayer games.

Lumberyard also speeds up virtual reality development with the new VR Preview function, available right in the editor, which you can click to see your work in VR straight away. This lets game developers make VR-specific adjustments and level designs right in the editor, which is quite convenient and saves a lot of time.

Platforms supported: Lumberyard supports HMDs such as Oculus Rift, HTC Vive, and Open Source Virtual Reality (OSVR). It offers support for PC, Xbox One, PlayStation 4, iOS (iPhone 5S and later, iOS 7.0+), and Android (Nexus 5 and equivalents, with support for OpenGL 3.0+). Lumberyard also supports dedicated servers on Windows and Linux.

Price: Amazon Lumberyard is free, with no seat licenses, royalties, or subscriptions required. You only pay the standard AWS fees for the AWS services you choose to use.

Learning curve: Lumberyard has a flat learning curve and is easy to use for novices and professionals alike.

Learning resources:
Learning AWS Lumberyard Game Development

6. AppGameKit-VR (AGK): an easy way to build games for beginners

Developer: The Game Creators
Release date: 2017

Why choose AppGameKit-VR for virtual reality game development? AppGameKit-VR lets anyone quickly code and build apps for multiple platforms with the help of AGK's BASIC scripting system. It adds easy-to-use VR commands to the core AppGameKit script language, which delivers immersive VR experiences. It also allows full development control for SteamVR-supported head-mounted displays, touch devices, and Leap Motion hand tracking. AGK does the majority of the work for you, making it super easy to code, compile, and export apps to each platform; you mainly need to focus on your game or app idea.

AGK-VR offers 60 VR commands, ranging from diagnostic checks on the hardware and SteamVR, to initialising the HMD, creating standing or seated VR experiences, and rendering a 3D scene to the HMD. AGK also offers demos on how to get started with using these commands in your games. It has an online forum where you can ask questions, learn, and interact with other users, and the AGK script is fully documented.

Platforms supported: AGK-VR offers support for Windows, Mac, Linux, iOS, Android (including Google, Amazon & Ouya), HTML5, and Raspberry Pi (free from the TGC website).

Price: AGK is available for $29.9.

Learning curve: AppGameKit-VR has a flat learning curve, which is ideal for beginners and makes VR game development quick for the experienced.

7. Oculus Medium 2.0: software designed with virtual reality in mind

Developer: Oculus VR
Release date: 2016

Why choose Oculus Medium for building virtual reality games? Oculus Medium is a great tool that brings sculpting, modeling, painting, and creating objects for the virtual reality world together in a single package. It's a very handy tool to have during the character design process: it lets you sculpt and create a variety of 3D objects to include in your VR game using the Oculus Touch controllers alongside the Oculus Rift. It comes with features such as grid snapping, an increased layer limit, multiple lights, and 300 prefabricated stamps. It is quite simple to use, and anyone, be it a newbie or an experienced game developer, can pick it up.

The rendering engine in Oculus Medium uses Vulkan, which results in smoother frame rates and better memory management when building higher-resolution sculpts. Beyond that, Oculus Medium offers tutorials to help you quickly get the hang of its different features, and an online forum where VR artists and developers share tips, information, and videos.

Price: Oculus Medium 2.0 is available for $30, which is quite affordable for novices and professionals alike.

Learning curve: Oculus Medium has a flat learning curve, as it's pretty approachable for novices as well as professionals.

Each of the tools mentioned above brings something unique in its abilities and features. However, keep in mind that selecting a tool solely on the basis of its technical features is not the best idea. Rather, figure out what works best for you, depending on your experience and requirements. So which tool (or tools) are you planning to use for VR game development? Is there any tool we missed? Let us know!

Game developers say Virtual Reality is here to stay
What's new in VR Haptics?
Top 7 modern Virtual Reality hardware systems


Forget C and Java. Learn Kotlin: the next universal programming language

Sugandha Lahoti
11 May 2018
14 min read
Kotlin is fast moving towards becoming the universal programming language. What is a universal programming language? From a simplistic view, the expectation could be that one language is used for all types of programming. While that may be far-fetched in today's complex world, the expectation could be adjusted to one language becoming the dominant programming language. Most certainly, it is the single most important language to master.

This article is an excerpt from the book Kotlin Blueprints, written by Ashish Belagali, Hardik Trivedi, and Akshay Chordiya. With this book, you will learn how to design and prototype professional-grade applications using various features of Kotlin.

Historically, different languages have used strategies appropriate to their times to become the universal programming language:

- In the 1970s, C became the universal programming language. Prior to C, the programming languages of the world were divided between low-level and high-level languages: the former were close to machine code, while the latter were more concise and better suited to human understanding. The C programming language was developed as a single language that could work as both a low-level and a high-level language. The Unix operating system was showcased as one built from the ground up entirely in C, without needing another low-level language.
- In the 1990s, Java became the universal programming language with its Write Once, Run Anywhere strategy. Prior to Java, developers needed to create different programs to run on different platforms (different operating systems running on different hardware needed different programs). With Java, however, programs could be written targeting a single platform, namely the Java Virtual Machine (JVM). The JVM is available on all the popular platforms and takes care of all platform-specific nuances. Java became the universal language by being the language in which to write programs for the JVM.

Another two decades have passed, and the stage is all set to welcome the next universal language. Let's examine Kotlin's strategy to become that. Why can Kotlin be described as a better Java than any other language? How does Kotlin address areas beyond the Java world? What is Kotlin's winning strategy? What does this all mean for a smart developer?

Why Kotlin vs Java?

Why is being a better Java important for a language? For over a decade, Java has consistently been the world's most widely used programming language. Therefore, a language that gets crowned as a better Java should automatically attract the attention of the world's single largest community of programmers: the Java programmers.

The TIOBE index is widely referred to as a gauge of the popularity of programming languages. As of August 2017, it showed an interesting trend: while Java has been the #1 programming language in the world for the last 15 years or so, it has been in a steady state of decline for many years. Many new languages have kept coming, and existing ones have kept improving, chipping steadily into Java's developer base; however, none of them have managed to take the #1 position from Java so far. Today, Kotlin is poised to become the most serious challenger for the better-Java crown, and subsequently to take first place, for reasons that we will see shortly.
Presently at 41st place, Kotlin is marching ahead at a fast pace. In May 2017, Google announced Kotlin as an officially supported language for Android development, alongside Java. This turned out to be a major boost for Kotlin, and the rate of its adoption has accelerated ever since.

Why not other languages?

Many languages before Kotlin have tried to become a better Java. Let's see why they never became one. Every language attracts the programmer community by giving them the ability to do something that was cumbersome before. Adoption is directly driven by how much value the promise holds for programmers and how much faith the community can put in that promise. All the languages and frameworks that claimed to be a better Java and offered something worthwhile beyond what Java offers also took something away in turn. Here are a few examples:

- The .NET framework has been the longtime rival of Java and has supported multiple languages from day one. Based on the lessons learned from Java, the .NET designers came up with better language constructs. However, the biggest hurdle for .NET was that it was a proprietary technology, and that created an impediment to its adoption. .NET was also more aggressive in adding newer language constructs: while the framework evolved quickly as a result, it broke backward compatibility many times.
- Ruby (and Python) offered shortened code, enticing programming constructs, and greater expressiveness as opposed to the boring Java; however, they took away static typing support (which helps to make robust programs) and made programs slower.
- Scala offered shortened code and advanced programming constructs without sacrificing type safety. However, Scala is complex and has a substantially high learning curve. It supports multiple coding styles, so there is a danger that Scala code written by one developer may not be easily understood by another. These are risk factors for any project that involves a team of developers and an application that is expected to be supported over a long period, which is true of most applications anyway.

Why Kotlin?

Unlike other languages, Kotlin offers a lot of power over Java while not taking anything away. Kotlin is interoperable with Java: it is possible to write applications containing both Java and Kotlin code, calling one from the other. Calling Java code from Kotlin is simpler than the other way around, but the former will be the case most of the time anyway, with new Kotlin code added on top of legacy Java code. Kotlin can use all the Java libraries and legacy code without any code conversion, so it is possible to introduce Kotlin into a Java project without boiling the ocean.

Concise yet expressive code

While being interoperable, Kotlin code is far superior to Java code. Like Scala, Kotlin uses type inference to cut down on a lot of boilerplate code and make it concise. (Type inference is a better feature than dynamic typing, as it reduces the code without sacrificing the robustness of the end product.) However, unlike Scala, Kotlin code is easy to read and understand, even for someone who may not know Kotlin.
Kotlin's data class construct is the most prominent example of this conciseness:

```kotlin
data class Employee(val id: Long, var name: String)
```

Compared to its Java counterpart, the preceding line packs in the class definition, member variables, constructor, getter-setter methods, and utility methods such as equals() and hashCode(). This would easily take 15-20 lines of Java code. And the data class construct is not an isolated example; there are many others where the syntax is concise and expressive. Consider the following as additional examples (both appear in the sketch at the end of this section):

- Kotlin's default values for function parameters save the need to overload functions
- Kotlin's extension functions can be used to add domain-specific functionality to existing classes, making the code easy for someone from the domain to understand

Enhanced robustness

Statically typed languages have a built-in safety net, thanks to the assurance that the compiler will catch any incorrect type cast. Both Java and Kotlin support static typing, and with the generics introduced in Java 1.5, both fare better than the Java releases prior to 1.5. However, Kotlin takes a big step further in addressing the null pointer error. This error causes a lot of checks in Java programs:

```java
String s = someOperation();
if (s != null) {
    ...
}
```

One can see that the null check is not needed if someOperation() never returns null. On the other hand, it is possible for a programmer to omit the null check even when someOperation() returning null is a valid case. With Kotlin, the definition of someOperation() itself declares whether it returns String or String?, and there are implications for the subsequent code, so the developer simply cannot go wrong:

```kotlin
fun someOperation(): String   // not nullable
fun someOperation(): String?  // nullable

val s = someOperation()
if (s != null) {  // null check not needed - editor warning
    ...
}

val s = someOperation()
n = s.length        // error, null check imposed
n = s?.length ?: 0  // handling the null condition
```

One may point out that Java developers can use the @Nullable and @NotNull annotations, or the Optional class; however, these were added quite late, most developers are not aware of them, and they can always get away with not using them, as the code does not break. Finally, they are not as elegant as a question mark. There is also a subtle point here: if Kotlin developers are careless, they will write just the type name, which automatically becomes a non-nullable declaration. To make it nullable, they have to key in that extra question mark deliberately. Thus, the language keeps you on the side of caution as far as code robustness is concerned.

Another example of this robustness is found in val/var declarations. Seasoned programmers know that most variables get a value assigned to them only once. In Kotlin, you declare such a variable with val. At the time of declaration, the programmer has to choose between val and var, and so puts some thought into it. In Java, on the other hand, you can get away with just declaring the type and name, and you will rarely find Java code that marks a variable with the final keyword, Java's way of declaring that a variable can be assigned a value only once. Basically, given programmers of the same maturity level, you can expect more robust code in Kotlin than in Java, and that's a big win from the business perspective.
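To tie the conciseness and null-safety points together, here is a small, self-contained Kotlin sketch. It is our own illustration rather than an excerpt from the book; the names (connect, initials, and so on) are hypothetical.

```kotlin
// Packs constructor, getters/setters, equals(), hashCode(), and toString() into one line.
data class Employee(val id: Long, var name: String)

// Default parameter values replace a family of Java-style overloads.
fun connect(host: String, port: Int = 80, secure: Boolean = false): String =
    "${if (secure) "https" else "http"}://$host:$port"

// An extension function adds domain-specific behaviour to an existing class.
fun Employee.initials(): String =
    name.split(" ").filter { it.isNotEmpty() }
        .joinToString("") { it.take(1).uppercase() }

fun main() {
    val e: Employee? = Employee(1L, "Ada Lovelace")  // nullable on purpose
    println(connect("example.com"))                  // http://example.com:80
    println(connect("example.com", port = 443, secure = true))
    println(e?.initials() ?: "??")                   // safe call with a default
}
```

Note how the compiler forces the safe-call operator on e because it was declared nullable; the equivalent Java would need explicit null checks and several overloads of connect().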
Excellent IDE support from day one

Kotlin comes from JetBrains, who also develop a well-known Java integrated development environment (IDE): IntelliJ IDEA. JetBrains developers made sure that Kotlin has first-class support in IDEA. Not only that, they also developed a Kotlin plugin for Eclipse, the most widely used Java IDE.

Contrast this with the situation when Java appeared on the scene roughly two decades ago. There was no good IDE support; programmers were asked to use simple text editors. Coding Java was hard, with no safety net provided by an IDE, until the Eclipse editor was open-sourced. In Kotlin's case, the editor's suggestions have been available from day one, meaning developers can learn the language faster, make fewer mistakes, and write good-quality compilable code with relative ease. Clearly, Kotlin does not want to waste any time climbing the ladder of popularity.

Beyond being a better Java

We saw that, on the JVM platform, Kotlin is neat and quite superior. However, Kotlin has set its sights beyond the JVM. Its strategy is to win based on its superior and modern feature set. Before we go ahead, let's list the top five appeals of Kotlin:

- Static typing (as in C or Java) means built-in type safety. The compiler catches any incorrect type assignments, which makes programs robust.
- Kotlin is concise and expressive. Being concise implies there is less to read and maintain; being expressive implies better maintainability.
- Being a JVM language, Kotlin programs can take advantage of the features built into the JVM, such as its cross-platform nature, memory management, high performance, and sandbox security.
- Kotlin has inbuilt null-safety. Null references are famous as the billion-dollar mistake, as admitted by their inventor Tony Hoare, and cost a great deal of unnecessary null checks in programs. Kotlin eliminates those and makes programs more robust.
- Kotlin is easy to learn, especially for Java developers. Its syntax is clean and therefore easy to understand.

Because of all this, Kotlin programs are fun for developers to code and easy for their peers to understand and enhance. From a business angle, they are more robust and easier to maintain.

Kotlin is in the winning camp

Kotlin's feature set gets good validation when one considers that other languages with similar features are also growing in popularity:

- The Crystal language attracts Ruby programmers by adding static typing support. Similarly, TypeScript adds static typing support to JavaScript and has become the preferred language for some JavaScript frameworks.
- Scala and F# add functional programming support to traditional non-functional paradigms without sacrificing type safety, and hence are more attractive. Kotlin uses functional programming just enough to ease programming in a lot of cases, but not so much that it becomes complex.
- Like Kotlin, Swift and Rust also have inbuilt null-safety. Kotlin and Swift are often compared, as their syntaxes resemble each other a lot.
- For server-side languages designed after the emergence of parallel computing, inbuilt constructs that ease the programmer's work became one of the chief requirements. One can find these in both Kotlin (coroutines) and Rust.

Go-native strategy

The Kotlin developers figured that the same strategy used on the JVM platform could be used on other platforms too.
On no platform does Kotlin disrupt the platform's existing technology:

- The JVM works with Java bytecode, and Kotlin gives an alternative to Java that generates the same bytecode. (Kotlin is by no means the first alternative; there are already 200+ languages that work with the JVM, but it is the most elegant one for all the reasons we have seen previously.)
- On modern browsers, where JavaScript is the de facto standard, Kotlin works by transpiling to JavaScript. Again, this means that Kotlin is friendly with existing browsers without making any special effort.
- On the Node.js platform, where JavaScript is used on the server side, Kotlin code transpiles to JavaScript, so no changes are needed in the Node.js framework for Kotlin to run.
- In a similar way, Kotlin/Native plans to work with other technologies in a native way.

Since the platform's technology is not disrupted, zero changes are needed at the platform level to adopt Kotlin, and Kotlin's compatibility with a given platform can be taken for granted from day one. This eliminates a big business risk.

Kotlin's winning strategy

Kotlin's winning strategy is the sum of the various factors we have seen previously. It is a two-pronged strategy: win over developers with the coolness of the language and the ease of working with it, and win over business users with its business benefits. The other benefits include:

- The growing popularity of the language
- Endorsement from Google, making Kotlin an officially supported language in May 2017
- Kotlin-specific development frameworks emerging
- Leading Java frameworks, such as Spring, offering Kotlin-specific improvements
- The growing number of applications being tried out with Kotlin
- The user groups spread across Kotlin developer hubs
- The growing number of technology companies using Kotlin

With this in mind, the winning strategy for smart programmers is to master Kotlin and learn to work with Kotlin on various platforms. Being ahead of the curve, as opposed to following the world once Kotlin is already big, will be a quick path to being recognized as a leader. Further chapters of this book will help you in exactly this mission. Apart from going through this book, we strongly suggest you join the community:

- Join the Kotlin weekly mailing list at http://kotlinweekly.net
- Join the nearest Kotlin user group at http://kotlinlang.org/community/user-groups.html
- Kotlin's community on Slack is at https://kotlinlang.slack.com/

We saw how Kotlin is well positioned to take off as the universal programming language, offering an opportunity for smart programmers to establish themselves at the forefront of this rising tide. This article was taken from the book Kotlin Blueprints. If you liked reading this piece, check out the book to build comprehensive applications using Kotlin's features.

Getting started with Kotlin programming
Build your first Android app with Kotlin
How to convert Java code into Kotlin


Why TensorFlow always tops machine learning and artificial intelligence tool surveys

Sunith Shetty
23 Aug 2018
9 min read
TensorFlow is an open source machine learning framework for carrying out high-performance numerical computations. It provides excellent architectural support which allows easy deployment of computations across a variety of platforms, ranging from desktops to clusters of servers, mobiles, and edge devices.

Have you ever wondered why TensorFlow has become so popular in such a short span of time? What made TensorFlow so special that we are seeing a huge surge of developers and researchers opting for it? Interestingly, when it comes to artificial intelligence framework showdowns, you will find TensorFlow emerging as a clear winner most of the time. Major credit goes to its soaring popularity and contributions across various forums such as GitHub, Stack Overflow, and Quora. The fact is, TensorFlow is used in over 6,000 open source repositories, showing its roots in many real-world research projects and applications.

How TensorFlow came to be

The library was developed by a group of researchers and engineers from the Google Brain team within Google's AI organization. They wanted a library that provides strong support for machine learning, deep learning, and advanced numerical computations across different scientific domains. Since Google open sourced the framework in 2015, TensorFlow has grown in popularity, with more than 1,500 project mentions on GitHub.

The constant updates made to the TensorFlow ecosystem are the real cherry on the cake. They have ensured that the new challenges developers and researchers face are addressed, easing complex computations and providing newer features, promises, and performance improvements with the support of high-level APIs. By open sourcing the library, the Google research team has received the benefit of a huge set of contributors outside its existing core team. The idea was to make TensorFlow popular by open sourcing it, ensuring all new research ideas are implemented in TensorFlow first and allowing Google to productize those ideas.

Read Also: 6 reasons why Google open sourced TensorFlow

What makes TensorFlow different from the rest?

With more and more research and real-life use cases going mainstream, we can see a big trend among programmers and developers flocking towards TensorFlow. Its popularity is quite evident, with big names adopting it for carrying out artificial intelligence tasks. Many popular companies, such as NVIDIA, Twitter, Snapchat, and Uber, are using TensorFlow for major operations and research areas.

On one hand, one can argue that TensorFlow's popularity is based on its origins and legacy: being developed in the house of Google, TensorFlow enjoys the reputation of a household name, and there's no doubt it has been better marketed than some of its competitors. However, that's not the full story. There are many other compelling reasons why small-scale to large-scale companies prefer TensorFlow over other machine learning tools.

TensorFlow key functionalities

TensorFlow provides an accessible and readable syntax, which is essential for making these programming resources easier to use. Complex syntax is the last thing developers need, given machine learning's advanced nature. TensorFlow provides excellent functionalities and services when compared to other popular deep learning frameworks.
These high-level operations are essential for carrying out complex parallel computations and for building advanced neural network models. At the same time, TensorFlow is a low-level library which provides more flexibility: you can define your own functionalities or services for your models. This is a very important parameter for researchers, because it allows them to change a model based on changing requirements (a minimal sketch of this low-level style follows at the end of this section). TensorFlow also provides more network control, allowing developers and researchers to understand how operations are implemented across the network and to keep track of changes over time.

Distributed training

The trend of distributed deep learning began in 2017, when Facebook released a paper showing a set of methods to reduce the training time of a convolutional neural network model. The test was done with the ResNet-50 model on the ImageNet dataset, which took one hour to train instead of two weeks, using 256 GPUs spread over 32 servers. This revolutionary test opened the gates for a lot of research work that has massively reduced experimentation time by running many tasks in parallel on multiple GPUs.

Google's distributed TensorFlow has allowed researchers and developers to scale out complex distributed training using built-in methods and operations that optimize distributed deep learning among servers. The distributed TensorFlow engine, which is part of the regular TensorFlow repository, works exceptionally well with existing TensorFlow operations and functionalities, and it has enabled exploration of two of the most important distributed methods:

- Distributing the training of a neural network model over many servers to reduce training time
- Searching for good hyperparameters by running parallel experiments over multiple servers

Google has given the distributed TensorFlow engine the power needed to steal the market share acquired by other distributed projects such as Microsoft's CNTK, AMPLab's SparkNet, and CaffeOnSpark. Even though the competition is tough, Google has still managed to become more popular than the other alternatives in the market.

From research to production

Google has, in some ways, democratized deep learning. The key reason is TensorFlow's high-level APIs, which make deep learning accessible to everyone. TensorFlow provides pre-built functions and advanced operations to ease the task of building different neural network models. It provides the required infrastructure and hardware support, which makes it one of the leading libraries used extensively by researchers and students in the deep learning domain.

In addition to research tools, TensorFlow extends its services by bringing models to production with TensorFlow Serving. Specifically designed for production environments, it provides a flexible, high-performance serving system for machine learning models. It offers all the functionality needed to deploy new algorithms and experiments easily as requirements and preferences change, along with out-of-the-box integration with TensorFlow models that can be extended to serve other types of models and data. TensorFlow's API is a complete package which is easy to use and read, and provides helpful operators, debugging and monitoring tools, and deployment features.
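As a concrete taste of the low-level flexibility described above, here is a minimal sketch written against the 1.x graph-and-session API that was current when this article was published. It is our own illustration, not from the article: every operation is an explicit node in a graph, and gradients of any node can be requested directly.

```python
import tensorflow as tf  # TensorFlow 1.x-era API

# Build a tiny computation graph: y = W*x + b and a squared-error loss.
x = tf.placeholder(tf.float32, shape=[None], name="x")
W = tf.Variable(2.0, name="W")
b = tf.Variable(0.5, name="b")
y = W * x + b
loss = tf.reduce_mean(tf.square(y - 7.0))

# Low-level control: ask the graph for gradients of the loss directly.
grads = tf.gradients(loss, [W, b])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    y_val, grad_vals = sess.run([y, grads], feed_dict={x: [1.0, 2.0, 3.0]})
    print(y_val, grad_vals)
```

In TensorFlow 2.0, which the article anticipates below, eager execution makes the same computation run imperatively, without the explicit session.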
These capabilities have led to the growing use of TensorFlow as a complete package within the ecosystem by an emerging body of students, researchers, developers, and production engineers from various fields who are gravitating towards artificial intelligence.

There is a TensorFlow for web, mobile, edge, embedded, and more

TensorFlow provides a range of services and modules within its ecosystem, making it one of the groundbreaking end-to-end tools for state-of-the-art deep learning:

- TensorFlow.js for machine learning on the web: a JavaScript library for training and deploying machine learning models in the browser. It provides flexible and intuitive APIs to build and train new and pre-existing models from scratch, right in the browser or under Node.js.
- TensorFlow Lite for mobile and embedded ML: a lightweight TensorFlow solution for mobile and embedded devices. It is fast, enabling on-device machine learning inference with low latency, and it supports hardware acceleration with the Android Neural Networks API. Future releases of TensorFlow Lite will bring more built-in operators, performance improvements, and support for more models, to simplify the developer experience of bringing machine learning to mobile devices.
- TensorFlow Hub for reusable machine learning: a library used extensively to reuse machine learning models, letting you do transfer learning by reusing parts of existing models.
- TensorBoard for visual debugging: while training a complex neural network model, the computations you use in TensorFlow can be very confusing. TensorBoard makes it easy to understand and debug your TensorFlow programs through visualizations, letting you inspect and understand your TensorFlow runs and graphs.
- Sonnet: a DeepMind library, built on top of TensorFlow, that is used extensively to build complex neural network models.

All of these factors have made the TensorFlow library immensely appealing for building a wide spectrum of machine learning and deep learning projects. The tool has become a preferred choice for everyone from the space research giant NASA and other government agencies to an impressive roster of private sector giants.

Road ahead for TensorFlow

TensorFlow is no doubt better marketed than the other deep learning frameworks, and the community appears to be moving very fast: in any given hour, approximately 10 people around the world are contributing to or improving the TensorFlow project on GitHub. TensorFlow dominates the field with the largest active community, and it will be interesting to see what new advances TensorFlow and other utilities make possible for the future of our digital world.

Continuing the recent trend of rapid updates, the TensorFlow team is making sure to address the current, active challenges faced by contributors and developers building machine learning and deep learning models. TensorFlow 2.0 will be a major update; we can expect the release candidate by early March next year, with a preview of this major milestone expected later this year. The major focus will be on ease of use and additional support for more platforms and languages, and eager execution will be the central feature of TensorFlow 2.0. This breakthrough version will add more functionalities and operations to handle current research areas such as reinforcement learning and GANs, and to build advanced neural network models more efficiently.
Google will continue to invest in and upgrade the existing TensorFlow ecosystem. According to Google's CEO, Sundar Pichai, "artificial intelligence is more important than electricity or fire." TensorFlow is the solution Google has come up with to bring artificial intelligence into reality and provide a stepping stone to revolutionize humankind.

Read more:

The 5 biggest announcements from TensorFlow Developer Summit 2018
The Deep Learning Framework Showdown: TensorFlow vs CNTK
Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence


12 common malware types you should know

Savia Lobo
24 May 2018
14 min read
Malware is software with malicious intent that changes a system without the knowledge of its user. Malware uses the same technologies that are used by genuine software, but the intent is bad. The following are some examples:

- Software such as TrueCrypt uses algorithms and techniques to encrypt a file to protect privacy; ransomware uses the same algorithms to encrypt files in order to extort the user.
- Firefox uses the HTTP protocol to browse the web, while malware uses the HTTP protocol to post stolen data to its command and control (C&C) server.

In this article we will focus on the different types of malware. Malware can be categorized into types based on the damage it causes to a system, and it does not necessarily use a single method to cause damage; it can employ several. We will look at some known malware types:

- Backdoor
- Downloader
- Virus or file infector
- Worm
- Botnet
- Remote Access Tool (RAT)
- Hacktool
- Keylogger and password stealer
- Banking malware
- POS malware
- Ransomware
- Exploit and exploit kits

To be clear, a piece of malware can act as a backdoor as well as a password stealer, or can be a combination of any of these. Some of the definitions are simple enough to understand in one line, while others need detailed explanation. This article is an excerpt taken from the book Preventing Ransomware, written by Abhijit Mohanta, Mounir Hahad, and Kumaraguru Velmurugan.

Backdoor

A backdoor can be a simple piece of functionality for malware: it opens a port on the victim machine so that the hacker can log in without the victim's knowledge and carry out their work. A piece of backdoor malware can create a new process of itself, or inject malicious code that opens a port into legitimate code executing on the system. Backdoor activity has usually been part of other malware; most RAT tools have a backdoor module that opens a port on the victim machine for the hacker to get in.

Downloader

A downloader is a piece of malicious software that downloads other malware. It carries a URL for the malware to be fetched and, when executed, downloads it. Bedep was mostly known for downloading CryptoLockers; Upatre was another popular downloader.

Virus or file infector

File-infecting malware piggybacks its code onto clean software: it alters an executable file on disk in such a way that the malware code is executed before or after the clean code in the file. A file infector is often termed a virus in the security industry, and a lot of antivirus products tag it as such. In the context of Windows PE executables, a file infector can work in the following manner:

1. The malware adds malicious code at the end of a clean executable file.
2. It changes the entry point of the file to point to the malicious code located at the end.
3. When the executable is double-clicked, the malware code executes first. The malicious code keeps the address of the clean code that was previously the entry point and, after completing its malicious activity, transfers control to the clean code.

A virus can infect a file in several ways, placing its code at different locations in the target program. File infection is a way to spread within a system; many file infectors infect every system file on Windows, so the malware code executes regardless of whether you start Internet Explorer or a calculator program. Some very famous PE file infectors are Virut, Sality, XPAJ, and Xpiro.

Worm

A worm spreads through a system by various mechanisms. File infection can also be considered worm-like behavior.
A worm can spread in several ways:

- To other computers on the network, by brute-forcing the default usernames and passwords of network shares or other machines
- By exploiting vulnerabilities in network protocols
- Using pen drives: when an autorun worm is executed, it looks for a pen drive attached to the system, creates a copy of itself on the drive, and adds an autorun.inf file to it. When the infected pen drive is inserted into a new machine, autorun.inf is executed by Windows, which in turn executes the copied .exe, and that executable can then copy itself to different locations on the new machine.

Botnet

A botnet is a piece of malware based on the client-server model. A victim machine infected with the malware is called a bot. The hacker, also called a bot herder, controls the bots by using a C&C server, which can issue commands to them. If a large number of computers are infected with bots, they can be used to direct a lot of traffic toward any server; if the server is not secure enough and is incapable of handling such traffic, it can shut down. This is usually called a denial of service (DoS) attack. A bot can use internet protocols or custom protocols to communicate with its C&C server. ZeroAccess and GameOver are famous botnets of the recent past.

Keylogger and password stealer

Keyloggers have been well known for a long time. They monitor keystrokes and log them to a file, which can be transferred to the hacker later on. A password stealer is a similar thing: it can steal usernames and passwords from locations such as the following:

- Browsers, which store passwords for social networking, movie, and music sites, email, and gaming sites
- FTP clients such as FileZilla and SmartFTP, used by companies and individuals to save data on FTP servers
- Email clients such as Thunderbird and Outlook, used to access email easily
- Database clients, used mostly by engineers and students
- Banking applications
- Password managers such as LastPass and KeePass, where users store passwords so that they don't have to remember them

Hackers can use these credentials to steal more data, to access somebody's private information, or to try to access military installations. They can target executives with this kind of malware to steal confidential information. Zeus and Citadel are famous password stealers.

Banking malware

Banking malware is financial malware that can include keylogging and password-stealing functionality aimed at the browser. Banks have come up with virtual keyboards, which are a major blow to keyloggers, so most banking malware now uses a man-in-the-middle (MITM) attack, in which the malware intercepts the conversation between the victim and the banking site. There are two popular MITM mechanisms used by banking malware these days: form grabbing and browser injects (web injects).

In form grabbing, the malware hooks the browser's APIs and sends the intercepted data to its C&C server, while simultaneously passing the same data on to the bank's website.

A web inject works in the following manner:

1. The malware performs API hooking in the browser to intercept a web page requested by the victim's browser. The original web page is a form in which the victim needs to input various things, such as the amount they need to transfer, their credentials, and so on.
2. The malware modifies the intercepted web page, injecting extra fields such as CVV number, PIN, and OTP, which are used for additional authentication. These additional fields are injected using an HTML form, which varies based on the bank; the malware keeps a configuration file that tells it which form needs to be injected into the pages of which banking site.
3. After modifying the web page, the malware sends the data on to the victim's browser. The victim thus sees the page with the extra fields added by the malware, and the malware is able to steal the additional parameters needed for authentication.

Tinba, Shifu, Carberp, and Zeus are some famous pieces of banking malware.

POS malware

The way money changes hands is changing, and cash transactions in shops are giving way to point-of-sale (POS) devices, which are installed in a lot of shops these days. Windows has a POS edition of its operating system for these kinds of devices. The POS software on such a device reads credit card information when a card is swiped. If malware infects a POS device, it scans the memory of the POS software for credit card patterns: since credit card numbers are 16 digits, the malware scans for 16-digit patterns in memory to identify and then steal card numbers. BlackPOS, Dexter, JackPOS, and BackOff are famous pieces of POS malware.

Hacktool

Hacktools are often used to retrieve passwords from browsers, operating systems, or other applications, working by brute force or by identifying patterns. Cain and Abel, John the Ripper, and Rainbow Crack are old hacktools. Mimikatz is one of the latest, associated with top ransomware such as WannaCry and NotPetya, where it is used to decode and steal the credentials of the victim.

RAT

A RAT acts as a remote control, as the name suggests, and can be used with both good and bad intentions. RATs can be used by system administrators to solve their clients' issues by accessing the client's machine remotely. But since RATs usually give full access to the person sitting remotely, they can be misused by hackers. RATs have been used in sophisticated hacks many times, for purposes such as the following:

- Monitoring keystrokes with keyloggers
- Stealing credentials and data from the victim machine
- Wiping all data from a remote machine
- Creating a backdoor so that a hacker can log in

Gh0st RAT, Poison Ivy, Back Orifice, Prorat, and NjRat are well-known RATs.

Exploit

Software is written by humans and, obviously, it will have bugs. Hackers take advantage of some of these bugs to compromise a system in an unauthorized manner; we call such bugs vulnerabilities. Vulnerabilities occur for various reasons, but mostly due to imperfect programming: if programmers have not considered certain scenarios while writing the software, the result can be a vulnerability.

Consider a simple C program that uses the function strcpy() to copy a string from a source buffer to a destination buffer, where the programmer has failed to notice that the destination is only 10 characters (11 bytes including the terminating NUL) while the source is 23 bytes. When strcpy() copies the source into the destination, the copied string goes beyond the memory allocated to the destination. The memory beyond the destination can hold important data related to the program, which gets overwritten. This kind of vulnerability is called a buffer overflow. (The program appeared as a screenshot in the original article; a reconstruction follows.)
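The following reconstruction is based on the sizes given in the text; since the original was a screenshot, the variable names and the string literal here are our own.

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    char *source = malloc(23);       /* 22 characters + terminating NUL */
    char *destination = malloc(11);  /* room for only 10 characters + NUL */

    strcpy(source, "this string is 22 char");

    /* Copies all 23 bytes into an 11-byte buffer: the write runs past the
       end of 'destination' and overwrites whatever lies beyond it. */
    strcpy(destination, source);

    printf("%s\n", destination);
    return 0;
}
```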
Stack overflow and heap overflow are the commonly known forms of the buffer overflow vulnerability. There are other vulnerabilities too, such as use-after-free, in which an object is used after it has been freed (we won't go into this in depth, as it requires an understanding of C++ programming concepts and assembly language). A program that takes advantage of these vulnerabilities for a malicious purpose is called an exploit. To explain an exploit, we will walk through a stack overflow case; readers are recommended to read up on C programs to understand this. Exploit writing is a complex process that requires knowledge of assembly language, debuggers, and computer architecture, but we will try to explain the concept as simply as possible. Consider a small C program - not a complete one, only meant to illustrate the concept (a sketch consistent with the line numbers referenced below appears at the end of this walkthrough). The main() function takes input from the user (argv[1]) and passes it on to the vulnerable function, vulnerable_function. Since main calls the vulnerable function, after executing the vulnerable function the CPU should come back to main (that is, line 15). So the CPU should execute the program in this order: line 14 | line 4 | line 5 | line 6 | line 15. Now, when the CPU is at line 6, how does it know that it has to return to line 15 afterward? The secret lies in the stack. Before jumping from line 14 into line 4, the CPU saves the address of line 15 on the stack; we can call the address of line 15 the return address. The stack is also used for storing local variables - in this case, buffer is a local variable in vulnerable_function. While the CPU is executing the vulnerable_function code, the stack holds the local buffer, with the return address (the address of line 15) placed above it. Now, the size of the buffer is only 16 bytes (see the program). When the user provides an input (argv[1]) that is larger than 16 bytes, the extra length of the input overwrites the return address when strcpy() is executed. This is a classic example of stack overflow. An exploit for a program like this will deliberately overwrite the return address, so that after executing line 6, the CPU jumps to whatever address has replaced it. The attacker therefore creates a specially crafted input (argv[1]) longer than 16 bytes, containing three parts: the address of the buffer, NOPs, and shellcode. The address of the buffer is the virtual memory address of the variable buffer. NOP stands for no-operation instruction; as the name implies, it does nothing when executed. Shellcode is an extremely small piece of code that can fit in a very small space. Shellcode is capable of doing the following:

Opening a backdoor port in the vulnerable software
Downloading another piece of malware
Spawning a command prompt for the remote hacker, who can then access the victim's system
Elevating the victim's privileges so the hacker has access to more areas and functions in the system

After the specially crafted input is supplied to the program, the return address on the stack is overwritten with the address of the buffer. So, instead of line 15, the CPU goes to the address of the buffer, slides through the NOPs, and then executes the shellcode.
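For reference, here is a minimal sketch consistent with the line numbers used in the walkthrough - vulnerable_function around lines 4 to 6, the call at line 14, and the return target at line 15. The exact layout is an assumption reconstructed from the description:

```c
#include <string.h>                        /* line 1 */

void vulnerable_function(char *input)      /* line 4 */
{
    char buffer[16];                       /* line 5 */
    strcpy(buffer, input);                 /* line 6: no bounds check */
}

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;
    vulnerable_function(argv[1]);          /* line 14: return address saved here */
    return 0;                              /* line 15: where the CPU should return */
}
```

Any argv[1] long enough to run past the 16-byte buffer (and the saved frame data above it) reaches the stored return address, which is exactly what the crafted input overwrites.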
The conclusion: by providing suitable input to a vulnerable program, an exploit is able to execute shellcode, which can open up a backdoor or download malware. The input can take many forms:

An HTTP request is input for a web server
An HTML page is input for a web browser
A PDF is input for Adobe Reader

And so on - the list is endless. You can explore these using the keywords provided, as the topic cannot be explained in a few lines and goes beyond the scope of this book. We often see vulnerabilities mentioned in blogs, usually with a CVE number. One can find the list of vulnerabilities at http://www.cvedetails.com/. The WannaCry ransomware used CVE-2017-0144: 2017 is the year the vulnerability was discovered, and 0144 denotes that it was the 144th vulnerability cataloged that year. Microsoft also issues advisories for vulnerabilities in Microsoft software. https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-0144 gives the details of this vulnerability: the description tells us that the bug lies in the SMBv1 server software installed in some Microsoft operating system versions. The page also links to some of the related exploits. Now that you know what types of malware exist, do check out the book, Preventing Ransomware, to learn more about techniques to prevent malware and perform effective malware analysis.

IoT Forensics: Security in an always-connected world where things talk
Top 5 penetration testing tools for ethical hackers
Top 5 cloud security threats to look out for in 2018


Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Sugandha Lahoti
04 Sep 2019
5 min read
Progressive web apps are already being deployed at massive scale, as evidenced by their presence on most websites now. But what's next for PWAs? Alex Kehayis, a developer at Stripe, thinks it's the merging of WebAssembly and PWAs. According to him, the adoption of WebAssembly and the ease of distribution on the web create compelling new opportunities for application development. He has created what he calls Progressive WebAssembly Applications (PWAAs), built entirely using Rust. In his talk at the WebAssembly San Francisco Meetup, Alex walks through the creation of Woz, a PWA toolchain for Rust. Woz is a progressive WebAssembly app generator (PWAA) for Rust, and it makes distributing your app as simple as sharing a hyperlink. Read Also: Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Web content has become efficient

Alex begins his talk by pointing out how massively efficient web content has become; this is because it solves three problems:

Distribution: Actually serving content to your users
Unification: Write once and run it everywhere
Experience: Consume content in a low-friction environment

Mobile applications vs web applications

Applications are an elevated form of content: they tend to be more experiential, dynamic, and interactive. Alex cites the definition of 'application' from Wikipedia, which states that applications are software designed to perform a group of coordinated functions, tasks, and activities for the benefit of users. Despite all this progress, mobile apps are still hugely inefficient to create, distribute, and use. Their distribution is largely in the hands of the duopoly, Apple and Google. Unification is generally handled through third-party frameworks such as React Native or Xamarin. The user experience of mobile apps, although performant, carries high friction: a user generally has to switch between apps and wait for installs and loads. Web-based applications, on the other hand, are quite efficient to create, distribute, and use. Anybody with an internet connection and a browser can reach a web application. For web applications, unification happens through standards rather than frameworks, which is more efficient. The user experience is also dynamic and fast: you jump right in and don't necessarily have to install anything.

Should everybody just use web apps instead of mobile apps?

Although mobile applications are somewhat inefficient, they bring certain advantages:

Native applications perform better than web-based apps
Encapsulation (e.g. home screen, self-contained experience)
Mobile apps are offline by default
Mobile apps can use hardware/sensors
Native apps typically consume less battery than web apps

To get the best of both worlds, Alex suggests the following steps:

Bring web applications to mobile. This has already been done: the result is progressive web applications.
Improve performance and access. Alex says that WebAssembly is a viable choice for achieving this, since WebAssembly is highly performant when paired with a language like Rust.

Woz, a Progressive WebAssembly Application generator

Alex then introduces Woz, a progressive WebAssembly application generator. It combines the good parts of a PWA and WebAssembly, and works as a toolchain for building and deploying performant mobile apps with Rust. You can distribute your app as simply as sharing a hyperlink.
Woz brings distribution via browsers, unification via web standards, and experience via hyperlinks. Woz uses wasm-bindgen to generate the interop calls between WebAssembly and JavaScript, which allows you to write the entire application in Rust - including rendering to the DOM. It will soon come with 'managed charging' for your apps, and even provide multiple copies your users can share, all with a hyperlink. Unlike all the things you need for a PWA (an SSL certificate, a PWA manifest, a splash screen, home screen icons, and a service worker), PWAAs only require JS bindings to WebAssembly and code to fetch, compile, and run the wasm. His talk also covered some popular Rust-based frontend frameworks:

Yew: "Yew is a modern Rust framework inspired by Elm and React for creating multi-threaded frontend apps with WebAssembly."
Sauron: "Sauron is an html web framework for building web-apps. It is heavily inspired by elm."
Percy: "A modular toolkit for building isomorphic web apps with Rust + WebAssembly"
Seed: "A Rust framework for creating web apps"

Read Also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer Josh Triplett

With Woz, the goal, Alex says, was to stay in Rust and create a PWA that can be installed to your home screen. The sample app he created weighs only about 300 KB. Alex says, "In order to actually write the app, you really only need one entry point - it's a public method render that's decorated wasm_bindgen. The rest will kind of figure itself out. You don't necessarily need to go create your own JavaScript file." He then shows a quick demo of what it looks like.

What's next?

WebAssembly will continue to evolve, and more languages and ecosystems will target it. Progressive web apps will also continue to evolve. PWAAs are an interesting proposition: as Alex puts it, we should really be liberating mobile apps and bringing them to the web, and WebAssembly is a missing link to some of these things. Watch Alex Kehayis's full talk on YouTube. Slides are available here. https://www.youtube.com/watch?v=0ySua0-c4jg

Other news in Tech

Wasmer's first Postgres extension to run WebAssembly is here!
Mozilla proposes WebAssembly Interface Types to enable language interoperability
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

What’s new in VR Haptics?

Natasha Mathur
16 Jul 2018
8 min read
Virtual reality is evolving at a staggering rate, and some of humankind's most exciting tools and technologies are coming to the virtual reality space. One technology that is taking over the VR world and making it more powerful is VR haptics. VR haptics technology adds an extra dimension to the VR world by letting users feel the virtual environment via the sense of touch, in addition to visual and aural perception. It makes you feel truly immersed in the artificial world. Imagine yourself in a desert, seeing the sand and feeling it glide under your feet as you walk. Haptics uses external devices like gloves, shoes, and joysticks, via which users receive feedback in the form of vibrations from computer applications. This feedback provides physical sensations in the hand or other parts of the body, along with a realistic simulation of movements and behaviors similar to those experienced in the real world.

VR Haptics: a growing domain

VR haptics technology is growing beyond creating vibrations in game controllers. In the near future, you might be able to cuddle a dog and feel it licking your face in the VR world, which speaks volumes about the pace at which haptic technology is growing. One famous example that discusses modern VR is the popular sci-fi novel Ready Player One. It illustrates the future possibilities of haptic technology: the novel follows a protagonist as he sets foot into a virtual reality simulator (OASIS), using a headset and a pair of gloves to maneuver around the virtual world. Apart from the gloves, a lot of future concept products are also covered in the novel that make the illusion of immersion easier to picture, such as towers emitting smells in the VR world and wind/temperature generators that mimic real life. Haptics came about just as head-mounted displays (HMDs) came to light in the 2010s. HMDs allowed people to see virtual reality, while haptic feedback gave people the opportunity to experience the virtual world and to act within it. Texture, temperature, pressure, taste, smell, and other non-visual sensory inputs became real in VR. Apart from virtual reality games and apps, haptic feedback is widely used in personal computers, mobile devices, robots, and more; in this article, though, we'll stick to the use of haptic technology in the VR space. Usually, most VR users use touch controllers for haptic feedback, but recently a lot of third-party companies have been coming out with products such as gloves for systems like the Oculus Rift and HTC Vive. Here is a list of recent developments in haptic technology for the VR world.

Super affordable VR haptic gloves by Plexus

Most of the currently available options in the VR haptics field are somewhat pricey, but earlier this month Plexus announced their new product, a VR haptic and sensor glove.
https://vimeo.com/276517370 Source: Plexus

Key features

Plexus VR haptic gloves offer a fully modular tracking solution capable of tracking up to 0.01 degrees of precision. The gloves are capable of individual finger tracking as well as tracking each joint on the finger, thereby offering higher precision in the VR world. They are compatible with the HTC Vive, Oculus Rift, and Windows Mixed Reality devices, and also come with additional adapter plates. The development kit version of the Plexus haptic gloves, priced at $249 per glove pair, can be pre-ordered on the official Plexus website.
The company will begin shipping in August 2018, but at the moment shipping is only available to the USA, Europe, Canada, and Australia.

Kaaya Tech's full-body tracking HoloSuit

Kaaya came out with a motion capture (MoCap) suit called HoloSuit last month, offering motion capture as well as haptic feedback. HoloSuit is billed as the world's first affordable, wireless, easy-to-use, bi-directional, full-body motion capture suit. The suit captures the user's entire body movement data and uses haptic feedback to send information back to the user.
https://www.youtube.com/watch?v=SEQsDR32gII&t=122s Source: HoloSuit
It can be used in various areas such as sports, healthcare, education, entertainment, or industrial operations.

Key features

The HoloSuit consists of 36 embedded sensors in the pro version and 26 in the less complex version. The embedded sensors carry out all the work of capturing body motion, which is necessary for world-scale tracking. It also has 9 haptic feedback devices and 6 embedded firing buttons (buttons that govern specific tasks such as saving the game, pausing, and so on), dispersed across both arms, the legs, and all ten fingers. It delivers data wirelessly, through either Wi-Fi or Bluetooth LE, to a VR setup using Unity or a Wi-Fi SDK. The HoloSuit doesn't come with an external camera tracking option. It supports all the major platforms: Windows, macOS, iOS, and Android devices. A complete HoloSuit is quite expensive, starting at a regular price of $999; a jacket and jersey are priced at $499, a jersey or track pants at $399, and a pair of gloves at $799. The HoloSuit Pro is priced at $1,599. Shipping for the full-body VR haptic HoloSuit will start this November.

Disney's VR haptic "Force Jacket"

Disney came out with its VR haptic jacket, the "Force Jacket," back in April. It provides users with precisely directed force along with high-frequency vibrations, felt against the user's upper body in sync with the visual medium. The prototype is made out of a converted life jacket fitted with 26 airbags.
https://www.youtube.com/watch?v=5BOFHEow608 Source: DisneyResearchHub
The Force Jacket was created by engineers at Disney Research, MIT, and Carnegie Mellon University.

Key features

The haptic jacket uses an air compressor and a vacuum pump. The air compartments in the jacket can be inflated to exert force on the user's body, measured by force-sensitive resistors. The 26 air compartments are activated by microcontrollers for pressure feedback, vibrotactile feedback, or both, and controllers are used to activate the solenoid valves connected to the vacuum. Jacket inflation parameters like speed, force, and duration are specified using a haptic effects editor, and the jacket uses a motion interface to sequentially inflate compartments, simulating motion across the body. Each airbag can be driven to mimic sensations such as being hit in the chest by a snowball, getting tapped on the shoulder, lime dripping down the back, getting punched in the side, or a snake coiling its body around the user. The jacket is aimed mainly at the entertainment and gaming industry and is not available on the consumer market, but it seems to have great potential for other applications in the future.

VR gloves by HaptX

HaptX announced a pair of VR gloves back in November of last year.
The gloves use micro-pneumatic technology for detailed haptics and force feedback in the fingers (the ability to restrict your fingers' movement to simulate holding objects).
https://www.youtube.com/watch?v=2C2_kbjtjRU Source: HaptX

Key features

The gloves provide 100 points of tactile displacement feedback. They offer up to five pounds of resistance per finger and come with sub-millimeter-precision motion tracking. The gloves use an SDK of HaptX's own design, built on Unreal Engine's physics system; this tells the glove when and where to apply haptic effects, as well as when and how to engage the force feedback.

No information on pricing or worldwide availability has been released by the company yet, but it is rumored to launch the VR gloves for the consumer market sometime later this year. Apart from these products, other smaller advancements keep happening in the VR haptics space. For example, Heather Culbertson, an assistant professor in USC's computer science department, recently created a haptic armband capable of mimicking the sensation of a human touch. VR aims to provide an environment where you feel truly immersed and can feel objects as in the real world, and these products bring the VR world a step closer to achieving richer levels of immersive experience. Gone are the days when haptic feedback was limited to vibrating controllers and joysticks. As the technology advances, a whole new world of VR haptic devices is here to make your VR experience as seamlessly immersive as possible. In fact, some people even believe that without haptics, VR is nothing but a picture and a sound.

Game developers say Virtual Reality is here to stay
CTA announces its first AR/VR Standard terminology
Top 7 modern Virtual Reality hardware systems


ERP tool in focus: Odoo 11

Sugandha Lahoti
22 May 2018
3 min read
What is Odoo?

Odoo is an all-in-one management software suite that offers a range of business applications, forming a complete suite of enterprise management applications for companies of all sizes. It is versatile in the sense that it can be used across multiple categories, including CRM, website, e-commerce, billing, accounting, manufacturing, warehouse and project management, and inventory. The community version is free of charge and can be installed with ease. Odoo is one of the fastest-growing open source business application development products available. With the announcement of version 11, many new features have been added, and the face of business application development with Odoo has changed. In Odoo 11, the online installation documentation continues to improve, and there are now options for Docker installations. In addition, Odoo 11 uses Python 3 instead of Python 2.7. This will not change the steps you take to install Odoo, but it will change the specific libraries that get installed. While much of the process is the same as in previous versions, there have been some pricing changes in Odoo 11: there are now only two free users, and you pay for additional users. There is one free application that you can install for an unlimited number of users, but as soon as you have more than one application, you must pay $25 for each user, including the first. If you have thought about developing in Odoo, now is the best time to start. Before I convince you of why Odoo is great, let's take a step back and revisit our fundamentals.

What is an ERP?

ERP is an acronym for enterprise resource planning. An ERP gives a global, real-time view of data that can enable companies to address concerns and drive improvements. It automates core business operations, such as the order-to-fulfillment and procure-to-pay processes. It also reduces risk for companies and enhances customer service by providing a single source for billing and relationship tracking.

Why Odoo?

Odoo is extensible and easy to customize. Odoo's framework was built with extensibility in mind: extensions and modifications can be implemented as modules, applied over the module whose feature is being changed without actually changing it. This yields clean, easy-to-control, customized applications (a minimal module sketch appears at the end of this article).

You get integrated information. Instead of distributing data throughout several separate databases, Odoo maintains a single location for all data. Moreover, the data remains consistent and up to date.

Single reporting system. Odoo has a single, unified reporting system for analyzing and tracking status. Users can also run their own reports without any help from IT. Single reporting systems, such as the one provided by Odoo ERP software, make reporting easier and customizable.

Built around Python. Odoo is built using the Python programming language, one of the most popular languages among developers.

Large community. The capability to combine several modules into feature-rich applications, along with the open source nature of Odoo, is probably among the most important factors explaining the community that has grown around Odoo. In fact, there are thousands of community modules available for Odoo, covering virtually every topic, and the number of people getting involved has been growing steadily every year.
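To make this modularity concrete, here is a minimal sketch of the manifest file that declares an Odoo 11 add-on module; the module name and contents are illustrative assumptions, not taken from the article:

```python
# __manifest__.py - Odoo discovers an add-on module through this file
{
    'name': 'Library Management',   # illustrative module name
    'version': '11.0.1.0.0',
    'depends': ['base'],            # build on existing modules rather than editing them
    'data': [],                     # XML views, security rules, and so on
    'installable': True,
}
```

A module declared this way can add new models or extend existing ones, which is what keeps customizations cleanly separated from Odoo's core code.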
Go through our video, Odoo 11 Development Essentials, to learn to scaffold a new module, create new models, and use the proper functions that make Odoo 11 the best ERP out there.

Top 5 free Business Intelligence tools
How to build a live interactive visual dashboard in Power BI with Azure Stream
Tableau 2018.1 brings new features to help organizations easily scale analytics


What is Quantum Entanglement?

Amarabha Banerjee
05 Aug 2018
3 min read
Einstein described it as "spooky action at a distance". Quantum entanglement is a phenomenon, observed in particles such as photons, where particles share information about their state even when separated by a huge distance, and this state sharing happens almost instantaneously. Quantum particles can be in any possible state until their state is measured by an observer; the measurable states are called eigenstates, with associated eigenvalues. In the case of quantum entanglement, two particles separated by many miles, when observed, are found to be in the same correlated state. Quantum entanglement is hugely important for modern computation tasks: experiments suggest that any hypothetical signal carrying this state correlation between photons would have to travel at something like 10,000 times the speed of light. If that could be harnessed in physical systems, like quantum computers, it would be a huge boost.

Source: picoquant

One important concept for understanding this idea is the qubit. What is a qubit? It's the unit of information in quantum computing, like the bit in conventional computers. A bit can be in one of two states: '0' or '1'. Qubits are also like bits, but they are governed by the weirder rules of quantum computing. Qubits don't just hold the pure states |0⟩ and |1⟩; a pair of qubits can occupy joint states such as |0⟩|1⟩, |1⟩|0⟩, |0⟩|0⟩, and |1⟩|1⟩ - or superpositions of these. This style of writing particle states is called Dirac notation. Because of these superpositions of states, quantum particles become entangled and share their state-related information.

A recent research experiment by a Chinese group claims to have packed 18 qubits of information into just 6 entangled photons. This is revolutionary: if one bit can pack in three times the information it carries today, our computers would become three times faster and smoother to work with. The reasons this is a great start toward faster, practical quantum computers are:

It is very difficult to entangle so many particles.
There are instances of more than 18 qubits being packed into a larger number of photons; however, the degree of entanglement has been much simpler.
Entangling each new particle takes increasingly more computer simulation time, since introducing each new qubit creates a separate simulation that takes up more processing time.

The likely reason this experiment worked can be credited to the multiple degrees of freedom that photons can have. The experiment was performed using photons in a networking system, and the fact that such a system allows the photon multiple degrees of freedom means this result is specific to this particular quantum system; it would be difficult to replicate the results in other systems, such as a superconducting network. Still, this result means a great deal for the progress of quantum computing systems and how they can evolve into a practical solution rather than remaining theory forever. Quantum computing is poised to take a quantum leap, with industries and governments on board.

PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!
Q# 101: Getting to know the basics of Microsoft's new quantum computing language


6 Tips to Prevent Social Engineering

Guest Contributor
03 Oct 2019
10 min read
Social engineering is a tactic where the attacker influences the victim to obtain valuable information. Office employees are targeted to reveal confidential data about a corporation, while non-specialists can come under the radar to disclose their credit card information. One might also be threatened that the attacker will hack his/her system if the demanded material isn't provided. In this method, the perpetrator can take any form of disguise, but most of the time he/she poses as tech support or as a bank representative. This isn't always the case, although the objective is the same: they sniff out the information you conceal from everybody by gaining your trust. Social engineering succeeds when the wrongdoer learns the victim's weaknesses and then manipulates his trust. Often, the victim shares his private information without paying much heed to the one who contacts him; later, the victim is blackmailed with his own sensitive data under threat of unlawful consequences.

Examples of social engineering attacks

As defined above, the attacker can take any form of disguise, but the most common approaches are described here. Wrongdoers update their methods daily to penetrate your system, and you should be equally wary of your online security: always stay alert when providing someone with your private credentials. The listed examples are variations of one another; there are many others as well, but the most common are described. The purpose of all of them is to manipulate you. As the name states, social engineering is simply how an individual can be tricked into giving up everything to a person who gains his trust.

Phishing attack

Phishing is a malicious attempt to access a person's personal and sensitive information, such as financial credentials. The attacker behind a phishing attack pretends to be an authentic identity or source to fool an individual. This social engineering technique mainly involves email spoofing or instant messaging to the victim. It may also steer people into inserting their sensitive details into a fraudulent website designed to look exactly like a legitimate site.

Unwanted tech support

Tech support scams are becoming widespread and can have an industry-wide effect. This tactic involves fraudulent attempts to scare people into thinking there is something wrong with their device. Attackers behind this scam try to gain money by tricking an individual into paying for an issue that never existed. Offenders usually send you emails or call you to solve issues with your system; mostly, they tell you that an update is needed. If you are not wary of this bogus approach, you can land yourself in danger: the attacker might ask you to run a command on your system that leaves it unresponsive. This belongs to the branch of social engineering known as scareware. Scareware uses fear and curiosity against humans to either steal information or sell you useless pieces of software; sometimes it can be harsher, keeping your data hostage unless you pay a hefty amount.

Clickbait technique

The term clickbait refers to the technique of trapping individuals via a fraudulent link with tempting headlines. Cybercriminals take advantage of the fact that most legitimate sites and content also use similar techniques to attract readers or viewers. In this method, the attacker sends you enticing ads related to games, movies, and so on. Clickbait is most often seen on peer-to-peer networking systems with enticing ads.
If you click on a certain clickbait, an executable command or a suspicious virus can be installed on your system, leading to it being hacked.

Fake email from a trusted person

Another tactic the offender utilizes is sending you an email from your friend's or relative's email address, claiming he/she is in danger. That email ID will have been hacked, and because of this, it's most likely you will fall for the attack. The email will ask for information you should give in order to release your contact from the threat.

Pretexting attack

Pretexting is also a common form of social engineering used for gaining sensitive and non-sensitive information. The attackers pretext themselves as an authentic entity so that they can access user information. Unlike phishing, which relies on fear and urgency, pretexting creates a false sense of trust with the victim through invented stories. In some cases, the attack can become more intense, such as when the attacker manipulates the victim into carrying out a task that enables the attacker to exploit the structural weaknesses of a firm or organization. An example of this is an attacker masquerading as an employee of your bank to cross-check your credentials. This is by far the most frequent tactic used by offenders.

Sending content to download

The attacker sends you files containing music, movies, games, or documents that appear to be just fine. A newbie on the internet will think how lucky his day is that he got his wanted stuff without asking. Little does he know that the files he just downloaded are virus-embedded.

Tips to prevent social engineering

After understanding the most common examples of social engineering, let us look at how you can protect yourself from being manipulated.

1) Don't give up your private information

Would you ever surrender your secret information to a person you don't know? Obviously not. Therefore, do not spill your sensitive information on the web unnecessarily. If you do not recognize the sender of an email, discard it. If you are buying stuff online, only provide your credit card information over a secure HTTPS connection. When an unknown person calls or emails you, think before you submit your data: attackers want you to speak first and realize later. Remain skeptical whenever a conversation starts digging into your sensitive information, and always think of the consequences before submitting your credentials to an unauthorized person.

2) Enable spam filters

Most email service providers come with spam filters: any email deemed suspicious is automatically moved to the spam folder. Credible email services detect suspicious links and files that might be harmful and warn users that they download them at their own risk, and files with certain extensions are barred from downloading. By enabling the spam feature, you spare yourself from categorizing emails and are relieved of the horrendous task of detecting mistrustful messages. The perpetrators of social engineering will have no door through which to reach you, and your sensitive data will be shielded from attackers.

3) Stay cautious with your passwords

A pro tip: never use the same password on the different platforms you log on to. Leave no traces behind, and delete all sessions after you are done surfing and browsing.
Use social media wisely and stay cautious about the people you tag and the information you provide, since an attacker might be lurking there. This matters because if your social media account gets hacked and you use the same password for different websites, your data can be thoroughly breached, and you can be blackmailed into paying a ransom to prevent your details from being leaked on the internet. Perpetrators can get your passwords pretty quickly, but what happens if you get infected with ransomware? All your files will be encrypted, and you will be forced to pay the ransom with no guarantee of getting your data back - which is why the best countermeasure against this attack is to prevent it from happening in the first place.

4) Keep software up to date

Always apply your system's software patches. Maintain your drivers and keep a close eye on your network firewall. Stay alert when an unknown person connects to your Wi-Fi network, and keep your antivirus updated accordingly. Download content from legitimate sources only and be mindful of the dangers. Hacks often take place when the software the victim is using is out of date: when vulnerabilities are exposed, offenders exploit the system and gain access to it. Regularly updating your software can safeguard you from a ton of dangers, leaving no backdoors for hackers to abuse.

5) Pay attention to what you do online

Think of the time you got self-replicating files on your PC after you clicked on a particular ad. Don't want that to happen again? Train yourself not to click on clickbait and scam advertisements. Know that most lotteries you find online are fake, and never provide your financial details there. Carefully inspect the URL of any website you land on. Most scammers make a copy of a website's front page and change the link slightly, done so efficiently that the average eye cannot detect the change in the URL, so the user opens the website and enters his credentials. Therefore, stay alert.

6) Remain skeptical

The solution to most problems is to remain skeptical online. Do not click on spam links and do not open suspicious emails. Furthermore, pay no heed to messages stating that you have won a lottery or have been granted a check for a thousand grand. Remain skeptical to the utmost degree. With this strategy, a hacker has nothing to lure you with, since you aren't paying attention to him. This tactic has helped many people stay safe online without ever being intercepted by hackers. Because you aren't drawn in by suspicious content, you are saved from social engineering.

Final words

All the tips described above boil down to this: doubt is vital for your digital secrecy. If you remain doubtful in your online presence, you are far better protected from online manipulation. Your credit card information and other essential details remain shielded as well, since you never handed them to anyone in the first place. All of this is achieved by questioning what occurs online: you inspect the links you visit and discard suspicious emails, and thus you stay secure. With these actions taken, you have prevented social engineering from succeeding.

Author bio

Peter Buttler is a Cybersecurity Journalist and Tech Reporter, currently employed as a Senior Editor at PrivacyEnd.
He contributes to a number of online publications, including Infosecurity-magazine, SC Magazine UK, Tripwire, Globalsign, and CSO Australia, among others. Peter covers topics related to online security, big data, IoT, and artificial intelligence. With more than seven years of IT experience, he also holds a Master's degree in cybersecurity and technology. @peter_buttlr

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
How has ethical hacking benefited the software industry
10 times ethical hackers spotted a software vulnerability and averted a crisis

React vs. Vue: JavaScript framework wars

Amarabha Banerjee
17 Jul 2018
4 min read
Before we begin, one thing needs to be established: we are comparing two different JavaScript tools - React and Vue - which clearly differ in popularity and usage. ReactJS is a JavaScript library, great for building huge web applications where data is updated regularly, while Vue.js is a JavaScript framework, fit for creating highly adaptable user interfaces and sophisticated single-page applications. On the other hand, what we have all mostly come across is how similar React and Vue are in their fundamental approach: both take a virtual DOM-based approach, both have a component-based structure, and both are reactive in their architecture. Both also tend to work around a core library, with all other tasks delegated to additional libraries. That said, per the npm trends report, React stands well ahead in monthly downloads at 2.4 million, whereas Vue stands joint second with Angular at around 239k downloads. Now that we have established the popularity of these front-end web development frameworks, it's time to talk about what works and what doesn't in React and Vue.js.

Comparing React.js and Vue.js

Template vs JSX

While everything in React is JavaScript code written in JSX syntax, Vue depends largely on its templates, which are HTML5- and CSS3-based. If you are a front-end developer, how can this possibly affect you? It depends on your preferred way of working. If you want to write code on your own and control every aspect of your application, the React way will suit you better. But if you want to start from a ready-made template and then add features as you go, Vue should be your best choice.

React is a better framework if you want to scale up

Make no mistake about this: the size and scalability of your application play a determining role in your choice of framework. The fact that React gives you more control over your application architecture is the single biggest reason it is easier to scale up an application in React. Because Vue depends so heavily on its templating structure, building an industrial-grade application with Vue becomes tough, as changing the template can be difficult.

It's easier to update data with Vue than React

Updating data in Vue is much simpler. The middle stage of transpiling is not needed in Vue, as it renders directly in the browser, which makes the process faster. In React, the data is analyzed, then stored, then the Document Object Model (DOM) is invoked, and only then does the change take place - a time-consuming process.

React has a bigger community than Vue

React is backed by Facebook, and there are many React-like libraries, such as Preact, that lend support to React, making its community larger than Vue's. With larger communities, developers can expect faster resolution of issues and regular community support with timely updates.

Building for mobile with React and Vue

The capabilities of all modern frameworks are often judged by how well they let developers build for mobile. React has a world-class companion in this domain: React Native. React Native is very similar to React in terms of its component structure, and presents a fairly short learning curve for anyone already using React.
The introduction of Vue Native, which offers a way of writing mobile applications with NativeScript using Vue, has made it easier for developers to build mobile apps with Vue. The Chinese tech company Alibaba has also created a cross-platform framework called Weex. Weex supports Vue, and while it doesn't yet have the capabilities of React Native, it could be a mobile framework to watch.

Which is better? React or Vue?

To summarize, both Vue and React have aspects that are useful and developer-friendly. If you intend to judge them on your own, you are better off first assessing your development needs. How big an issue is scalability going to be? Will you need something for the mobile platform too? Once you have answered these questions, the choice should be easy.

Read Next:
What is React.js and how does it work
Is React Native really a Native framework
Using React Router for Client Side Redirecting


5 best practices to perform data wrangling with Python

Savia Lobo
18 Oct 2018
5 min read
Data wrangling is the process of cleaning and structuring complex data sets for easy analysis and speedy decision-making. Thanks to the internet explosion and the huge trove of IoT devices, massive amounts of data are available at present. However, this data is most often in raw form and includes a lot of noise - unnecessary data, broken data, and so on. Cleaning up this data is essential before organizations can use it for analysis, and data wrangling plays a very important role here, making the data fit for analysis. The Python language, moreover, has built-in features for applying wrangling methods to various data sets. Here are 5 best practices that will help you in your data wrangling journey with Python; follow them, and at the end you'll have clean, ready-to-use data for your business needs.

5 best practices for data wrangling with Python

1. Learn the data structures in Python really well

Designed to be a very high-level language, Python offers an array of amazing data structures with great built-in methods. Having a solid grasp of all their capabilities will be a potent weapon in your repertoire for handling data wrangling tasks. For example, the Python dictionary can act almost like a mini in-memory database with key-value pairs, supporting extremely fast retrieval and search by utilizing a hash table underneath. Explore other built-in libraries related to these data structures, e.g. OrderedDict, or the string library for advanced functions. Build your own versions of essential data structures like stacks, queues, heaps, and trees, using classes and basic structures, and keep them handy for quick data retrieval and traversal.

2. Learn and practice file and OS handling in Python

How to open and manipulate files
How to manipulate and navigate directory structures

3. Have a solid understanding of the core data types and capabilities of NumPy and Pandas

Learn how to create, access, sort, and search a NumPy array. Always ask whether you can replace a conventional list traversal (for loop) with a vectorized operation; this will increase the speed of your data operations (see the sketch at the end of this article). Explore special file types like .npy (NumPy's native storage) to access/read large data sets with much higher speed than usual lists. Know in detail all the file types you can read using built-in Pandas methods; this will greatly simplify your data scraping. Almost all of these methods have great data cleaning and other checks built in - try to use such optimized routines instead of writing your own, to speed up the process.

4. Build a good understanding of basic statistical tests and a panache for visualization

Running some standard statistical tests can quickly give you an idea about the quality of the data you need to wrangle. Plot data often, even if it is multi-dimensional. Do not try to create fancy 3D plots; learn to explore a simple set of pairwise scatter plots. Use boxplots often to see the spread and range of the data and to detect outliers. For time-series data, learn the basic concepts of ARIMA modeling to check the sanity of the data.

5. Apart from Python, if you want to master one language, go for SQL

As a data engineer, you will inevitably run across situations where you have to read from large, conventional database storage. Even if you use a Python interface to access such a database, it is always a good idea to know the basic concepts of database management and relational algebra. That knowledge will help you build further and move into the world of Big Data and massive data mining (technologies like Hadoop/Pig/Hive/Impala) with ease, and your basic data wrangling knowledge will surely help you deal with such scenarios.
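To make the vectorization advice in practice 3 concrete, here is a minimal sketch (not from the original article) contrasting a plain Python loop with its NumPy equivalent:

```python
import numpy as np

values = np.random.rand(1_000_000)   # one million random floats

# Conventional list traversal: a Python-level loop over every element
squared_loop = [v * v for v in values]

# Vectorized equivalent: one call, executed in optimized native code
squared_vec = values ** 2
```

On arrays of this size, the vectorized form typically runs orders of magnitude faster, which is exactly why the practice above recommends reaching for it first.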
Although data wrangling may be the most time-consuming process, it is the most important part of data management. Data collected by businesses on a daily basis can help them make decisions on the latest information available. It also allows businesses to find hidden insights, use them in their decision-making processes, and gain new analytic initiatives, improved reporting efficiency, and much more.

About the authors

Dr. Tirthajyoti Sarkar works in the San Francisco Bay Area as a senior semiconductor technologist, where he designs state-of-the-art power management products and applies cutting-edge data science/machine learning techniques for design automation and predictive analytics. He has 15+ years of R&D experience and is a senior member of the IEEE.

Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cybersecurity startup, where he applies state-of-the-art computer vision and data engineering algorithms and tools to develop a cutting-edge product.

Data cleaning is the worst part of data analysis, say data scientists
Python, Tensorflow, Excel and more - Data professionals reveal their top tools
Manipulating text data using Python Regular Expressions (regex)


MVP for Android

HariVigneshJayapalan
04 Apr 2017
6 min read
The Android framework does not encourage any specific way to design an application. In a way, that makes the framework more powerful and more vulnerable at the same time. You may be asking yourself things like, "Why should I know about this? I'm provided with Activity, and I can write my entire implementation using a few Activities and Fragments, right?" Based on my experience, I have realized that solving a problem or implementing a feature at a given point in time is not enough. Over time, our apps go through many change cycles and much feature management, and maintaining these over a period of time will wreak havoc in our application if it is not designed properly with separation of concerns. That's why developers have come up with architectural design patterns for better code crafting.

How has it evolved?

Most developers started creating Android apps with Activity at the center, capable of deciding what to do and how to fetch data. Activity code grew over time and became a collection of non-reusable components. Then developers started packaging those components so that the Activity could use them through their exposed APIs. Then they took pride in breaking code into bits and pieces as much as possible - and found themselves in an ocean of components with hard-to-trace dependencies and usage. Later, we were introduced to the concept of testability and found that regression is much safer if code is written with tests. Developers realized that the jumbled code developed in the above process is very tightly coupled with the Android APIs, preventing JVM tests and hindering the easy design of test cases. This is the classic MVC, with the Activity or Fragment acting as a Controller.

SOLID principles

SOLID principles are object-oriented design principles, thanks to dear Robert C. Martin. According to the SOLID article on Wikipedia, the acronym stands for:

S (SRP): Single responsibility principle

This principle means that a class must have only one responsibility and do only the task for which it has been designed. Otherwise, if our class assumes more than one responsibility, we will have high coupling, causing our code to be fragile in the face of any change.

O (OCP): Open/closed principle

According to this principle, a software entity must be easily extensible with new features without having to modify its existing code in use. Open for extension: new behavior can be added to satisfy new requirements. Closed for modification: adding the new behavior must not require modifying the existing code. If we apply this principle, we will get extensible systems that are less prone to errors whenever requirements change. We can use abstraction and polymorphism to help us apply this principle.

L (LSP): Liskov substitution principle

This principle, defined by Barbara Liskov, says that objects must be replaceable by instances of their subtypes without altering the correct functioning of our system. Applying this principle, we can validate that our abstractions are correct.

I (ISP): Interface segregation principle

This principle states that a class should never implement an interface it does not use. Failure to comply with this principle means that our implementations will have dependencies on methods we do not need but are obliged to define. Therefore, implementing a specific interface is better than implementing a general-purpose one.
An interface is defined by the client that will use it, so it should not have methods that the client will not implement.

D (DIP): Dependency inversion principle

The dependency inversion principle means that a particular class should not depend directly on another class, but on an abstraction (interface) of that class. Applying this principle reduces dependency on specific implementations and makes our code more reusable.

MVP tries to follow (though not 100% completely) all five of these principles. You can look up clean architecture for a purer SOLID implementation.

What is the MVP design pattern?

The MVP design pattern is a set of guidelines that, if followed, decouples the code for reusability and testability. It divides the application components based on their roles - a practice called separation of concerns. MVP divides the application into three basic components:

Model: The Model represents a set of classes that describe the business logic and data. It also defines business rules for the data, meaning how the data can be changed and manipulated. In other words, it is responsible for handling the data part of the application.

View: The View represents the UI components. It is only responsible for displaying the data received from the Presenter as the result, transforming the model(s) into UI. In other words, it is responsible for laying out the views with specific data on the screen.

Presenter: The Presenter is responsible for handling all UI events on behalf of the View. It receives input from users via the View, processes the user's data with the help of the Model, and passes the results back to the View. Unlike the view and controller, the View and Presenter are completely decoupled from each other and communicate through an interface. The Presenter also does not manage incoming request traffic, as a Controller does. In other words, it is a bridge that connects the Model and the View, and it also acts as an instructor to the View.

MVP lays down a few ground rules for the above components, as listed below:

A View's sole responsibility is to draw the UI as instructed by the Presenter. It is the dumb part of the application.
The View delegates all user interactions to its Presenter.
The View never communicates with the Model directly.
The Presenter is responsible for delegating the View's requirements to the Model and instructing the View with actions for specific events.
The Model is responsible for fetching data from the server, database, and file system.

MVP projects for getting started

Every developer will have his/her own way of implementing MVP; a few projects are listed below. Migrating to MVP will not be quick, and it will take some time, so take your time and get your hands dirty with MVP:

https://github.com/mmirhoseini/marvel
https://github.com/saulmm/Material-Movies
https://fernandocejas.com/2014/09/03/architecting-android-the-clean-way/

About the author

HariVigneshJayapalan is a Google-certified Android app developer, IDF-certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur.

5 web development tools will matter in 2018

Richard Gall
12 Dec 2017
4 min read
It's been a year of change and innovation in web development. We've seen Angular shifting quickly, React rising to dominance, and the surprising success of Vue.js. We've discussed what 'things' will matter in web development in 2018 here, but let's get down to the key tools you might be using or learning. Read what 5 trends and issues we think will matter in 2018 in web development here.

1. Vue.js

If you think back to 2016, the JavaScript framework debate centred on React and Angular. Which one was better? You didn't have to look hard to find Quora and Reddit threads, or Medium posts, comparing and contrasting the virtues of one or the other. But in 2017, Vue picked up pace and entered the running as a real competitor to those two hyped tools. What's most notable about Vue.js is simply how much people enjoy using it: the State of Vue.js report found that 96% of users would use it for their next project. While it's clearly pointless to say that one tool is 'better' than another, the developer experience offered by Vue says a lot about what's important to developers - and it's only likely to become more popular in 2018. Explore Vue eBooks and videos.

2. Webpack

Webpack is a tool that's been around for a number of years but has recently seen its popularity grow - again, this is likely down to the increased emphasis on improving the development experience, making development easier and more enjoyable. Webpack, quite simply, brings all the assets you need in front-end development - JavaScript, fonts, images, and more - together in one place. This is particularly useful if you're developing complicated front ends. So, if you're looking for something that's going to make complexity more manageable in 2018, we certainly recommend spending some time with Webpack. Learn Webpack with Deploying Web Applications with Webpack.

3. React

Okay, you were probably expecting to see React - but why not include it? It's gone from strength to strength throughout 2017 and is only going to continue to grow in popularity throughout 2018. It's important, though, that we don't get caught up in the hype; that, after all, is one of the primary reasons we've seen JavaScript fatigue dominate the conversation. Instead, React's success depends on how we integrate it within our wider tech stacks - with tools like Webpack, for example. Ultimately, if React continues to let developers build incredible UIs in a way that's relatively stress-free, it won't be disappearing any time soon. Discover React content here.

4. GraphQL

GraphQL might seem a little left-field, but this tool built by Facebook has been quietly infiltrating development toolchains since it was made public back in 2015. It's seen by some as software that's going to transform the way we build APIs. This article explains everything you need to know about GraphQL incredibly well, but to put it simply, GraphQL "is about designing your APIs more effectively and getting very specific about how clients access your data". Being built by Facebook, it's a tool that integrates very well with React - if you're interested, this case study by the New York Times explains how GraphQL and React played a part in their website redesign in 2017. Learn GraphQL with React and Relay. Download or stream our video.

5. WebAssembly

While we don't want to get sucked into the depths of the hype cycle, WebAssembly is one of the newest and most exciting things in web development.
5. WebAssembly

While we don't want to get sucked into the depths of the hype cycle, WebAssembly is one of the newest and most exciting things in web development. WebAssembly is, according to the project site, "a new portable size- and load-time-efficient format suitable for the web". The most important thing you need to know is that it's fast - faster than JavaScript. "Unlike other approaches that require plug-ins to achieve near-native performance in the browser, WebAssembly runs entirely within the Web Platform. This means that developers can integrate WebAssembly libraries for CPU-intensive calculations (e.g. compression, face detection, physics) into existing web apps that use JavaScript for less intensive work," explains Mozilla fellow David Bryant in this Medium post. We think 2018 will be the year WebAssembly finally breaks through and makes it big - and perhaps offers a way to move past conversations around JavaScript fatigue.
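As a rough sketch of what that integration looks like from the JavaScript side, the snippet below fetches a compiled module, instantiates it, and calls an exported function. The physics.wasm file and its multiply export are hypothetical - in practice the module would come out of a toolchain that compiles C, C++, or Rust for the web.

```javascript
// A minimal sketch of calling into WebAssembly from JavaScript.
// 'physics.wasm' and its exported `multiply` function are hypothetical.
async function runWasm() {
  const response = await fetch('physics.wasm');
  // Compile and instantiate the module straight from the network stream.
  const { instance } = await WebAssembly.instantiateStreaming(response);
  // Exported functions are called like ordinary JavaScript functions,
  // but the work itself runs as near-native WebAssembly code.
  console.log(instance.exports.multiply(6, 7)); // 42
}

runWasm();
```

The division of labour is exactly the one Bryant describes: the CPU-intensive work lives in the module, while JavaScript stays in place as the glue that orchestrates it.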

What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface

Fatema Patrawala
05 Nov 2019
6 min read
Salesforce has always been proactive in developing and bringing to market new features and functionality across its products. Throughout the lifetime of the Salesforce CRM product, there have been several upgrades to the user interface. In 2015, Salesforce began promoting its new platform - Salesforce Lightning. Although long-time users and Salesforce developers may have grown accustomed to the classic user interface, Salesforce Lightning may just convert them. It brings a modern UI with new features, increased productivity, faster deployments, and a seamless transition across desktop and mobile environments. Recently, Salesforce has been actively encouraging its developers, admins and users to migrate from the classic Salesforce user interface to the new Lightning Experience.

Andrew Fawcett, currently VP Product Management and a Salesforce Certified Platform Developer II at Salesforce, writes in his book, Salesforce Lightning Enterprise Architecture, "One of the great things about developing applications on the Salesforce Lightning Platform is the support you get from the platform beyond the core engineering phase of the production process." This book is a comprehensive guide filled with best practices and tailor-made examples developed in Salesforce Lightning. It is a must-read for all Lightning Platform architects!

Why should you consider migrating to Salesforce Lightning

Earlier this year, Forrester Consulting published a study quantifying the total economic impact and benefits of Salesforce Lightning for Service Cloud. In the study, Forrester found that a composite service organization deploying Lightning Experience obtained a return on investment (ROI) of 475% over 3 years. Among the other potential benefits, Forrester found that over 3 years organizations using the Lightning platform:

- Saved more than $2.5 million by reducing support handling time;
- Saved $1.1 million by avoiding documentation time; and
- Increased customer satisfaction by 8.5%

Apart from this, the Salesforce Lightning platform allows organizations to leverage the latest cloud-based features. It includes responsive and visually attractive user interfaces which are not available within the Classic themes. Salesforce Lightning provides stupendous business process improvements and new technological advances over Classic for Salesforce developers.

What does the Salesforce Lightning architecture look like

While using the Salesforce Lightning platform, developers and users interact with a user interface backed by a robust application layer. This layer runs on the Lightning Component Framework, which comprises services like navigation, Lightning Data Service, and the Lightning Design System.

Source: Salesforce website

As part of this application layer, Base Components and Standard Components are the building blocks that enable Salesforce developers to configure their user interfaces via the App Builder and Community Builder. Standard Components are typically built up from one or more Base Components, which are also known as Lightning Components. Developers can build Lightning Components using two programming models: the Lightning Web Components model and the Aura Components model (a minimal sketch of a Lightning web component follows the list below).

The Lightning platform is critical for a range of services and experiences in Salesforce:

- Navigation Service: The navigation service is supported for Lightning Experience and the Salesforce app. Built with extensive routing, deep linking, and login redirection, Salesforce's navigation service powers app navigation, state changes, and refreshes.
- Lightning Data Service: Lightning Data Service is built on top of the User Interface API. It enables developers to load, create, edit, or delete a record in a component without requiring Apex code. Lightning Data Service improves performance and data consistency across components.
- Lightning Design System: With the Lightning Design System, developers can easily build user interfaces using its component blueprints, markup, CSS, icons, and fonts.
- Base Lightning Components: Base Lightning Components are the building blocks for all UI across the platform. Components range from a simple button to a highly functional data table and can be written as an Aura component or a Lightning web component.
- Standard Components: Lightning pages are made up of Standard Components, which in turn are composed of Base Lightning Components. Salesforce developers or admins can drag and drop Standard Components in tools like Lightning App Builder and Community Builder.
- Lightning App Builder: Lightning App Builder lets developers build and customize interfaces for Lightning Experience, the Salesforce app, Outlook integration, and Gmail integration.
- Community Builder: For Communities, developers can use the Community Builder to build and customize communities easily.

Apart from the above, there are other services available within the Salesforce Lightning platform, like the Lightning security measures and record detail pages on the platform and Salesforce app.
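To make the Lightning Web Components model and Lightning Data Service a little more concrete, here is a minimal sketch of a component that displays an account's name through the wire service, with no Apex involved. The imports are standard platform modules, but the component itself is illustrative: recordId is assumed to be supplied by a record page, and the markup that renders the name is omitted.

```javascript
// accountName.js - a minimal Lightning web component sketch that reads a
// record via Lightning Data Service (no Apex required). Illustrative only.
import { LightningElement, api, wire } from 'lwc';
import { getRecord, getFieldValue } from 'lightning/uiRecordApi';
import NAME_FIELD from '@salesforce/schema/Account.Name';

export default class AccountName extends LightningElement {
    // Populated automatically when the component sits on a record page.
    @api recordId;

    // Lightning Data Service loads and caches the record for us.
    @wire(getRecord, { recordId: '$recordId', fields: [NAME_FIELD] })
    account;

    get name() {
        return this.account.data
            ? getFieldValue(this.account.data, NAME_FIELD)
            : '';
    }
}
```

Because the record comes through Lightning Data Service rather than bespoke Apex, any other component on the page showing the same record stays consistent with it automatically.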
How to plan transitioning from Classic to Lightning Experience

As Salesforce admins and developers prepare for the transition to Lightning Experience, they will need to evaluate three things: how the change benefits the company, what work is needed to prepare for it, and how much it will cost. This is the stage to make the case for moving to Lightning Experience by calculating the return on investment for the company and defining what a Lightning Experience implementation will look like.

First, they will need to analyze how prepared the organization is for the transition to Lightning Experience. Salesforce admins and developers can use the Lightning Experience Readiness Check, a tool that produces a personalized Readiness Report, shows which users will benefit right away, and explains how to adjust the implementation for Lightning Experience.

Further, Salesforce developers and admins can make the case to their leadership team by showing how migrating to Lightning Experience can realize business goals and improve the company's bottom line.

Finally, by using the results of the activities carried out to assess the impact of the migration, understand the level of change required and decide on a suitable approach. If the changes required are relatively small, consider migrating all users and all areas of functionality at the same time. However, if the Salesforce environment is more complex and the amount of change is far greater, consider implementing the migration in phases or as an initial pilot to start with.

Overall, the Salesforce Lightning Platform is being increasingly adopted by admins, business analysts, consultants, architects, and especially Salesforce developers. If you want to deliver packaged applications using Salesforce Lightning that cater to enterprise business needs, read Salesforce Lightning Platform Enterprise Architecture, written by Andrew Fawcett.
This book will take you through the architecture of building an application on the Lightning platform and help you understand its features and best practices. It will also help you ensure that your app keeps up with increasing customer and business requirements.

Read next:

- What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
- Salesforce open sources ‘Lightning Web Components framework’
- “Facebook is the new Cigarettes”, says Marc Benioff, Salesforce Co-CEO
- Build a custom Admin Home page in Salesforce CRM Lightning Experience
- How to create and prepare your first dataset in Salesforce Einstein