
Tech Guides

852 Articles

DevOps might be the key to your Big Data project success

Ashwin Nair
11 Oct 2017
5 min read
So, you probably believe in the power of Big Data and its potential to change the world. Your company may already have invested in a Big Data project, or be planning to. That's great! But what if I told you that only 15% of businesses successfully deploy their Big Data projects to production? That can't be a good sign. Now, don't go freeing up your Big Data budget. Not yet.

Big Data's Big Challenges

For all the hype around Big Data, research suggests that many organizations are failing to leverage its opportunities properly. A recent survey by NewVantage Partners, for example, explored the challenges facing organizations currently running their own Big Data projects or trying to adopt them. Here's what they had to say:

"In spite of the successes, executives still see lingering cultural impediments as a barrier to realizing the full value and full business adoption of Big Data in the corporate world. 52.5% of executives report that organizational impediments prevent realization of broad business adoption of Big Data initiatives. Impediments include lack of organizational alignment, business and/or technology resistance, and lack of middle management adoption as the most common factors. 18% cite lack of a coherent data strategy."

Clearly, even some of the most successful organizations are struggling to get a handle on Big Data. Interestingly, it's not so much gaps in technology or even skills, but rather a lack of culture and organizational alignment that's making life difficult. This isn't actually that surprising. The problem of managing the effects of technological change goes far beyond Big Data - it's impacting the modern workplace in just about every department, from how people work together to how you communicate and sell to customers.

DevOps Distilled

It's out of this scenario that we've seen the irresistible rise of DevOps. DevOps, for the uninitiated, is an agile methodology that aims to improve the relationship between development and operations. It aims to ensure fluid collaboration between teams, with a focus on automating and streamlining monotonous, repetitive tasks within the development lifecycle, thus reducing friction and saving time. We can perhaps begin to see, then, that this approach - usually applied in typical software development scenarios - might actually offer a solution to some of the problems faced in Big Data.

A typical Big Data project

Like a software development project, a Big Data project will have multiple teams working on it in isolation. For example, a Big Data architect will look into the project requirements and design a strategy and roadmap for implementation, while the data storage and admin team will be dedicated to setting up a data cluster and provisioning infrastructure. Finally, you'll probably find data analysts who process, analyze, and visualize data to gain insights. Depending on the scope and complexity of your project, more teams may be brought in - say, data scientists roped in to train and build custom machine learning models.

DevOps for Big Data: A match made in heaven

Clearly, there are a lot of moving parts in a typical Big Data project, with each role performing considerably complex tasks. By adopting DevOps, you'll reduce the silos that exist between these roles, breaking down internal barriers and embedding Big Data within a cross-functional team. It's also worth noting that this move doesn't just give you an operational efficiency advantage - it also gives you much more control and oversight over strategy. By building a cross-functional team, rather than asking teams to collaborate across functions (which sounds good in theory but always proves challenging), there is a much more acute sense of a shared vision or goal. Problems can be solved together, and discussions can take place constantly and effectively. With operational problems minimized, everyone can focus on the interesting stuff.

By bringing DevOps thinking into Big Data, you also set the foundation for what's called continuous analytics. Taking the principle of continuous integration - fundamental to effective DevOps practice, whereby code is integrated into a shared repository after every task or change to ensure complete alignment - continuous analytics streamlines the data science lifecycle by ensuring a fully integrated approach to analytics, in which as much as possible is automated through algorithms. This takes away the boring stuff, once again ensuring that everyone on the project team can focus on what's important.

We've come a long way from Big Data being a buzzword - today, it's the new normal. If you've got a lot of data to work with, analyze, and understand, you'd better make sure you have the right environment set up to make the most of it. That means there's no longer an excuse for Big Data projects to fail, and certainly no excuse not to get one up and running. If it takes DevOps to make Big Data work for businesses, then it's a mindset worth cultivating and running with.


What is Mar-Tech?

Hari Vignesh
10 Oct 2017
6 min read
Blending the marketing and software worlds together

Marketing is rapidly becoming one of the most technology-dependent departments within companies. It is now a key driver of IT purchasing, and this trend is only expected to grow. Marketing has evolved drastically over the last few years. Today's marketers own more customer data and touch points than ever before - more than any other department. Marketing has become a tech-powered force, and technical capabilities are slowly being ingrained into marketing DNA. This rapid shift has forged close relationships between marketing and IT departments; CMOs have never been more likely to attend meetings alongside the CIO. Marketing is becoming a technology-driven discipline, where code and data are fundamental.

Nowadays, in the digital world, software is marketing's eyes, ears, and hands. We can no longer afford to do something just because we think it may increase sales; we base our decisions on data and use powerful software to execute our marketing initiatives efficiently. Marketing software helps us simplify our day-to-day lives and save time on regular manual tasks, giving us an opportunity to focus on new campaigns and strategies. It helps us avoid repetitive tasks, instead allowing time for innovation, creativity, brainstorming, and putting some soul into our products and services. With marketing software, we can easily identify the tactics that are driving new customers, converting leads, and so on. And that means better ROI, and happier project managers.

New digital channels and devices - search engines, social media, mobile, and so on - have complicated our customers' journeys. There is now such a vast amount of information that it is no longer possible to manually sift through it all to separate the essential data from the rest. We live in a very complicated and stressful world, but if we use marketing software to identify what information is really key for us, we can get ten steps closer to our target. For example, it is no longer unreasonable to assume that by clicking on a t-shirt we like, we may well get recommendations based on our purchase history, preferences, and location. Could we have expected that even five years ago? Now, ease of purchase is a norm regularly implemented by marketing teams, because in the current climate everything is done fast. We are in the era of short-term gratification, folks, and if we are to meet the ever-increasing expectations of clients and customers, software will give us the time we need to not only meet demand, but also continue to innovate and grow for future challenges.

Defining Mar-Tech

Marketing technologies provide the tools that enable marketers to... well... market. They automate difficult, time-consuming, and repetitive manual tasks to surface customer insight. Built by technologists, used by marketers. Marketing technology should aim to remove, or significantly reduce, the need for IT involvement. In short, it strives to keep marketing within marketing.

Divisions of Mar-Tech

Internal technology - what we use to manage and analyze marketing operations, such as SEO, competitive analysis, and social media monitoring.
External technology - what we use to reach our target audience and deliver our content: websites, ads, landing pages, email campaigns, apps, and so on.
Product technology - the features we add to our products and services, and how they impact the marketing ecosystem: for example, social sharing features; location features with GPS, RFID, and participation in the IoT; or digital products with viral capabilities.

Useful Mar-Techs

Analytics

Marketing is at an inflection point where the performance of channels, technologies, ads, offers - everything - is trackable like never before. Over a century ago, retail and advertising pioneer John Wanamaker said, "Half the money I spend on advertising is wasted; the trouble is I don't know which half." Today, smart marketers do know which half isn't working. But to do that efficiently, you need to have web analytics programs set up, and people on the marketing team who know how to use and interpret data.

Conversion Optimization

Conversion optimization is the practice of getting the people who come to your website (or wherever you are engaging with them) to do what you want them to do as often as possible. Usually, that involves filling out a form, so that at the very least you have their email address.

Email Marketing

Email marketing is the 800-pound gorilla of digital marketing. And I'm not talking about spamming people by buying lists that are being sold to your competitors as well. I'm talking about getting people to give you permission to email them additional information, and then sending only valuable content tailored to that person's interests.

Search Engine Marketing

Search engine marketing includes both paid search ads, like Google AdWords, and search engine optimization (SEO), which tries to earn high organic search listings for your website content. Since most people - even B2B buyers of big-ticket items - use search as part of their work, you need to be there when they are searching for what you're selling.

Remarketing

You've experienced remarketing: you go to a website and then, when you leave that site, its ads appear on other sites you visit. It's really easy to set up and incredibly cost-effective, because you're only advertising to people who have already expressed enough interest in you to come to your site.

Mobile Friendly

Half of all emails are now opened on smartphones, and soon half of searches will be done on them too, so all websites need to be mobile friendly. But today, less than a third of them are. Simply put, you need a site that is easy to read and use on a phone. If you don't have one, Google penalizes you with lower mobile search rankings.

Marketing Automation

Marketing automation brings it all together. It is a terrific technology that includes analytics, online forms, tracking people's activity on your website, personalizing website content, managing email campaigns, facilitating the alignment of sales and marketing through lead scoring and automated alerts to salespeople, informing these activities with data from your CRM and third-party sources, and more.

Forecast for the next few years in Mar-Tech

Huge amounts of data about buyers, channels, and competitors are now available to CMOs, and that opens up endless opportunities. Companies working in this field become unicorns in months, not years. If you compare the number of companies dedicated to this subject a year ago with the number now, you will see that it has more than doubled. The best of these companies use machine learning and data science to deliver market insights and capabilities. This is especially valuable for B2B companies, where lead times are longer and purchase decisions are more considered. Companies utilizing Mar-Tech are most likely to be there at the right time with the right service for the customer!

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


What's the difference between a data scientist and a data analyst?

Erik Kappelman
10 Oct 2017
5 min read
It sounds like a fairly pedantic question to ask what the difference between a data scientist and a data analyst is. But it isn't - in fact, it's a great question that illustrates the way data-related roles have evolved in businesses today. It's pretty easy to confuse the two job roles - there's certainly a lot of misunderstanding about the difference between a data scientist and a data analyst, even within a managerial environment.

Comparing data analysts and data scientists

Data analysts deal with the kind of data you might remember from your statistics classes. This data might come from survey results, lab experiments of various sorts, longitudinal studies, or another form of social observation. Data may also come from observation of natural or created phenomena, but its form would still be similar. Data scientists, on the other hand, are going to be looking at things like metadata from billions of phone calls, data scraped from around the Internet to forecast Bitcoin prices, or data related to Internet searches before and after some important event. So their data is often different, but is that all?

The tools and skill set required for each are actually quite different as well. Data science is much more entwined with the field of computer science than data analysis. A good data analyst should have working knowledge of how computers, networks, and the Internet function, but they don't need to be an expert in any of these things. Data analysts really just need to know a good scripting language for handling data, like Python or R, and maybe a more mathematically advanced tool like MATLAB or Mathematica for more advanced modeling procedures. A data analyst could have a fruitful career knowing only about that much in the realm of technology. Data scientists, however, need to know a lot about how networks and the Internet work. Most data scientists will need to have mastered HTTP, HTML, XML, and SQL, as well as scripting languages like Ruby or Python, and object-oriented languages like Java or C++. This is because data scientists spend a lot more time capturing, manipulating, storing, and moving data than a data analyst would. These tasks require a different skill set.

Data analysts and data scientists have different forms of conceptual understanding

There is also likely to be a difference in conceptual understanding between a data analyst and a data scientist. If you were to ask both a data scientist and a data analyst to derive and twice differentiate the log-likelihood function of the binomial logistic regression model, it is more likely the data analyst would be able to do it. I would expect data analysts to have a better theoretical understanding of statistics than data scientists, because data scientists don't really need much theoretical understanding to be effective. A data scientist would be better served by learning more about capturing and analyzing streams of data than theoretical statistics.

The differences are not limited to knowledge or skill set; how data scientists and data analysts approach their work also differs. Data analysts generally know what they are looking for as they begin their analysis. By this I mean a data analyst may be given the results of a study of a new drug, and the researcher may ask the analyst to explore and, hopefully, quantify the impact of that drug. A data analyst would have no problem performing this task. A data scientist, on the other hand, could be given the task of analyzing the locations of phone calls and finding any patterns that might exist. For the data scientist, the goal is often less defined than it is for a data analyst. In fact, I think this is the crux of the entire difference: data scientists perform far more exploratory data analysis than their data analyst cousins.
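For reference, the statistical exercise mentioned above runs as follows. With binary outcomes $y_i \in \{0,1\}$, covariate vectors $x_i$, and predicted probabilities $p_i = \sigma(x_i^\top \beta)$, the binomial logistic regression log-likelihood and its first two derivatives are:

```latex
\ell(\beta) = \sum_{i=1}^{n} \Big[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \Big],
\qquad p_i = \frac{1}{1 + e^{-x_i^\top \beta}}

\frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{n} (y_i - p_i)\, x_i

\frac{\partial^2 \ell}{\partial \beta \, \partial \beta^\top} = -\sum_{i=1}^{n} p_i (1 - p_i)\, x_i x_i^\top
\```

The negative-definite Hessian in the last line is why Newton-style fitting (iteratively reweighted least squares) converges reliably for this model - exactly the kind of theory an analyst is expected to have at their fingertips.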
This difference in approach really explains the difference in skill sets. Data scientists' skills are primarily geared toward extracting, storing, and finding uses for data, while data analysts primarily analyze data, and their skill set reflects this. Just to add one more little wrinkle: while calling a data scientist a data analyst is basically correct, calling a data analyst a data scientist is probably not. This is because a data scientist will have a handle on more of the skills required of a data analyst than a data analyst would of a data scientist. This is another reason there is so much confusion around the subject.

Clearing up the difference between a data scientist and data analyst

So now, hopefully, you can tell the difference between a data scientist and a data analyst. I don't believe either field is superior to the other. If you are choosing which field you would like to pursue, what's important is that you choose the one that best complements your skill set. Luckily, it's hard to go wrong, because both data scientists and data analysts usually have interesting and rewarding careers.


What you need to know about IoT product development

Raka Mahesa
10 Oct 2017
5 min read
Software is eating the world. It's a famous statement made by Marc Andreessen back in 2011 about the rise of software companies and how software would disrupt many, many industries. Today, as we live among devices that run on smart software, that statement couldn't be more true. We are surrounded by dozens of devices connected to each other, as the Internet of Things slowly spreads throughout our world. Each year, a batch of new smart devices is introduced to the market, hoping to find a place in our connected lives.

Have you ever wondered, though, how these smart devices are made? Are they a software project, or actually a hardware project? What considerations do we need to think about when developing these products? With those questions in mind, let's take a closer look at product development for the Internet of Things.

Before we go on, let's clarify the kind of product we will be discussing. For this article, a product is a software or hardware project that was not made for personal use. The scale and complexity of the product doesn't really matter: it could be a simple connected camera network, a brand new type of device the world has never seen before, or simply an analytical tool added to an existing device.

Working with hardware is expensive

Now that we have that cleared up, let's start with the first and most important thing you need to know about IoT product development: working with hardware is not only different from developing software, it's also more difficult and more expensive. In fact, the reason so many startup companies are popping up these days is that starting a software business is much cheaper than starting a hardware business. Before software was prevalent, it was much harder and more costly to start a technology business.

Unlike software, hardware isn't easy to change. Once you're set to manufacture a particular piece of hardware, there's no changing the end result, even if there's a mistake in your initial design. And even if your design is flawless, there could still be a problem with the material you're working with, or even with the manufacturer. So, when working with hardware, you need to be extra careful, because a single mistake could end up being exceptionally costly.

Fortunately, these days there are solutions that can alleviate those issues, like 3D printing. With 3D printing, we can cheaply produce our hardware design, quickly evaluate the look, and detect any issues with the design without going back and forth with the manufacturer. Do keep in mind that even with 3D printing, we still need to test our hardware with the actual, final material and manufacturing method.

Requirements and functionality are important

Another thing you need to know about IoT product development is that you need to figure out the full requirements and functionality of your product very early on. Yes, when you're developing software you also need to establish the requirements at the beginning, but it's a bit different with IoT, because they affect everything in the project.

With software development, your toolkit is meant to be general and capable of dealing with most problems. For example, if you want to build a web application, then most of the time the framework and language you choose will be able to build the application you want. The development environment for IoT doesn't work that way; it is much more specific. A given IoT toolkit is meant to solve problems under certain conditions. Coupled with the fact that IoT products have additional factors to consider, such as power consumption, choosing the right platform for the right project is a must. For example, if later in the project you find that you need more processing power than your hardware platform provides, you will need to retool plenty of things.

Consider UI

User interaction is another big thing to consider in IoT product development. A lot of devices don't have a screen or any complicated input method, so you need to figure out early how users will interact with your product. Should the user be able to interact directly with the device? Or should the user interact with the device using their phone? Should the user be able to access the device remotely? These are all questions you need to answer before you can determine the components your product requires.

Consider connectivity

Speaking of remote access, connectivity is another factor to consider in IoT product development. While there are many ways for your product to connect to the Internet, you should also ask whether an Internet connection makes sense for your product at all. Maybe your product will be placed in a spot where wireless connections don't reach. Maybe, instead of using the Internet, your product should transfer its data and logs whenever a storage device is connected to it.

There are a lot of things to consider when you are developing products for the Internet of Things. The topics we discussed should provide you with a good place to start.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


What is Kotlin?

Hari Vignesh
09 Oct 2017
5 min read
Kotlin is a statically typed programming language for the JVM, Android, and the browser. It is a new programming language from JetBrains, the maker of some of the world's best IDEs, and it's now an official language for Android app development.

Why Kotlin?

Before we highlight the brilliant features of Kotlin, we need to understand how it originated and evolved. We already have many programming languages; how did Kotlin emerge to capture programmers' hearts? A 2013 study showed that language features matter little compared to ecosystem issues when developers evaluate programming languages.

Kotlin compiles to JVM bytecode or JavaScript. It is not a language you will write a kernel in. It is of greatest interest to people who work with Java today, although it could appeal to all programmers who use a garbage-collected runtime, including people who currently use Scala, Go, Python, Ruby, and JavaScript.

Kotlin comes from industry, not academia. It solves problems faced by working programmers and developers today. As an example, the type system helps you avoid null pointer exceptions. Research languages tend not to have null at all, but that is of no use to people working with large codebases and APIs that do.

Kotlin costs nothing to adopt! It's open source, but that's not the point. The point is that there's a high-quality, one-click Java-to-Kotlin converter tool (available in Android Studio) and a strong focus on Java binary compatibility. You can convert an existing Java project one file at a time and everything will still compile, even for complex programs that run to millions of lines of code. Kotlin programs can use all existing Java frameworks and libraries, even advanced frameworks that rely on annotation processing. The interop is seamless and does not require wrappers or adapter layers. It integrates with Maven, Gradle, and other build systems. Finally, it is approachable: it can be learned in a few hours by simply reading the language reference.
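To see how the type system helps you avoid null pointer exceptions, here is a minimal sketch (variable names are my own, purely illustrative):

```kotlin
fun main() {
    // Non-null type: the compiler rejects any attempt to assign null.
    var name: String = "Kotlin"
    // name = null  // compile-time error: null is not a value of a non-null type

    // Nullable type: declared with '?', and must be handled before use.
    val nickname: String? = null

    // The safe-call operator '?.' returns null instead of throwing,
    // and the Elvis operator '?:' supplies a fallback value.
    val length: Int = nickname?.length ?: 0

    println("$name, nickname length $length")  // prints: Kotlin, nickname length 0
}
```

The potential NPE is caught at compile time rather than at runtime, which is the whole point of the design.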
The syntax is clean and intuitive. Kotlin looks a lot like Scala, but it's simpler. The language balances terseness and readability well, and it enforces no particular philosophy of programming, such as an overly functional or OOP style.

Kotlin Features

Let me summarize why it's the right time to jump from Java to Kotlin:

Concise: Drastically reduce the amount of boilerplate code you need to write.
Safe: Avoid entire classes of errors, such as null pointer exceptions.
Versatile: Build server-side applications, Android apps, or front-end code running in the browser.
Interoperable: Leverage existing frameworks and libraries of the JVM with 100 percent Java interoperability.

Brief discussion

Let's discuss a few important features in detail.

Functional programming support

Functional programming is not easy, at least in the beginning - that is, until it becomes fun. Kotlin supports it with zero-overhead lambdas and the ability to map, fold, and so on over standard Java collections. The Kotlin type system also distinguishes between mutable and immutable views over collections.

1. Function purity: The concept of a pure function (a function that does not have side effects) is the most important functional concept. It allows us to greatly reduce code complexity and get rid of most mutable state.

2. Higher-order functions: Higher-order functions take functions as parameters, return functions, or both - and they are everywhere. You just pass functions to collections to make code easy to read: titles.map { it.toUpperCase() } reads like plain English. Isn't it beautiful?

3. Immutability: Immutability makes it easier to write, use, and reason about code (a class invariant is established once and then never changed). The internal state of your app components will be more consistent. Kotlin encourages immutability through the val keyword, as well as Kotlin collections, which are immutable by default. Once a val or a collection is initialized, you can be sure about its validity.
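A minimal sketch of higher-order functions and val-based immutability together (the example values are my own, not from the article):

```kotlin
fun main() {
    // val + listOf: an immutable reference to a read-only list.
    val titles = listOf("kotlin", "android", "jvm")

    // map is a higher-order function: it takes a lambda as a parameter.
    val shouting = titles.map { it.toUpperCase() }
    println(shouting)  // prints: [KOTLIN, ANDROID, JVM]

    // fold is another higher-order function: it accumulates one result.
    val totalChars = titles.fold(0) { acc, title -> acc + title.length }
    println(totalChars)  // prints: 16
}
```

Note that map and fold leave the original list untouched; each call returns a new value, which is exactly the mutable-state reduction the section describes.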
Null Safety

Kotlin's type system is aimed at eliminating the danger of null references from code, also known as "The Billion Dollar Mistake." One of the most common pitfalls in many programming languages, including Java, is accessing a member of a null reference, resulting in a null reference exception - in Java, a NullPointerException, or NPE for short. In Kotlin, the type system distinguishes between references that can hold null (nullable references) and those that cannot (non-null references). For example, a regular variable of type String can't hold null.

How to migrate effectively to Kotlin?

Migration is one of the last things any developer or organization wants. There are a lot of advantages to migrating from Java to Kotlin, but the bottom line is that it will make the developer's job easier, which in turn reduces bugs, improves code quality, and so on. There are many routes to migrating effectively, but my advice would be to first convince management that you need to migrate (if you're a developer). Then start by writing test cases, to get familiar with the language. Finally, because Kotlin is fully interoperable with Java, you can change one file or module at a time.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief

Packt
09 Oct 2017
2 min read
More than $350 worth of cryptocurrency guides offered in support of GlobalGiving.org

During Cybersecurity Month, Packt is partnering with Humble Bundle and three other technology publishers - Apress, John Wiley & Sons, and No Starch Press - for the Humble Book Bundle: Bitcoin & Cryptocurrency, a starter eBook library of blockchain programming guides offered for as little as $1, with each purchase supporting hurricane disaster relief efforts through the nonprofit GlobalGiving.org.

Packed with over $350 worth of valuable developer information, the bundle offers coding instruction and business insights at every level, from beginner to advanced. Readers can learn how to code with Ethereum while keeping up with the latest developments in cryptocurrency and emerging business uses of blockchain programming. As with all Humble Bundles, customers can choose how their purchase dollars are allocated between the publishers and charity, and can even "gift" a bundle purchase to others as their donation. Donations of as little as $1 USD can support hurricane relief. The online magazine retailer Zinio will also be offering a limited-time promotion of some of its best tech magazines. You can find the special cryptocurrency package here.

"It's very unusual for tech publishers who normally would compete to come together to do good work for a good cause," said Kelley Allen, Director of Books at Humble Bundle. "Humble Books is really pleased to be able to support their efforts by offering this collection of eBooks about such a timely and cutting-edge subject of Cryptocurrency."

The package of 15 eBooks includes the recent titles Bitcoin for Dummies, The Bitcoin Big Bang, Blockchain Basics, Bitcoin for the Befuddled, Mastering Blockchain, and the eBook bestseller, Introducing Ethereum and Solidity. The promotional bundles are being released globally in English, and are available in PDF, ePub, and Mobi formats. The offer runs October 9 through October 23, 2017.
What is an innovation strategy?

Hari Vignesh
08 Oct 2017
5 min read
Despite massive investments of management time and money, innovation remains a frustrating pursuit in many companies. Innovation initiatives frequently fail, and successful innovators have a hard time sustaining their performance — as Polaroid, Nokia, Sun Microsystems, Yahoo, Hewlett-Packard, and countless others have found. Why is it so hard to build and maintain the capacity to innovate? The reasons go much deeper than the commonly cited cause: a failure to execute. The problem with innovation improvement efforts is rooted in the lack of an innovation strategy.

Innovation strategy — definition

An innovation strategy can be defined as a plan made by an organization to encourage advancements in technology or services, usually by investing in research and development activities. For example, an innovation strategy developed by a high-technology business might entail the use of new management or production procedures and the invention of technology not previously used by competitors.

Innovation strategy — a short description

An innovation strategy is a plan to grow market share or profits through product and service innovation. When looking at innovation strategy through a jobs-to-be-done lens, we see that an effective strategy must correctly inform which job executor, job, and segment to target to achieve the most growth, and which unmet needs must be targeted to help customers get the job done better. When it comes to creating the solution, an innovation strategy must also indicate whether a product improvement, or a disruptive or breakthrough innovation approach, is best. Unfortunately, most innovation strategies fail in these regards, which is why innovation success rates are anemic.

Myths that mislead

Innovation strategy is not about selecting activities to pursue that are different from those of competitors. This is the myth that misleads. Selecting activities is not a strategy.
An innovation strategy is about creating winning products, which means products that are in an attractive market, target a profitable customer segment, address the right unmet needs, and help customers get a job done better than any competing solution. Only after a company produces a winning product or service should it consider what activities are needed to deliver that product or service.

Tactics for innovation strategy

Global competition and a weak economy have made growth more challenging than ever. Yet some organizations, such as Apple, Amazon, and Starbucks, seem to defy the laws of economic gravity. The most successful growth companies adopt at least four best practices.

Find the next S-Curve

Nothing grows forever. The best products, markets, and business models go through a predictable cycle of growth and maturity, often depicted as an S-curve. Diminishing returns set in as the most attractive customers are reached, price competition emerges, the current product loses its luster, customer support challenges emerge, new operating skills are required, and so on. Unfortunately, growth company leaders are often blindsided by this predictable speed bump. Once the reality of the S-curve becomes apparent, it may be too late to design the next growth strategy. The time to innovate — the innovation window — is when the first growth curve hits an inflection point. How do you know when you’re hitting the inflection point? You never know. So the best companies are forever paranoid and make innovation a continuous process.

Lean on customers

Successful growth companies have a deep understanding of their customers’ problems. Many are embracing tools such as the customer empathy map to uncover new opportunities to create value. This customer insight is the foundation for their lean approach to product innovation: rapid prototyping, design partnerships with lead users, and pivoting to improve their product and business model.
Think like a designer

Managers are trained to make choices, but they don’t always have good options. Innovation involves creating new options. This is where designers excel. Apple’s exceptional user experiences were largely the creation of Jonathan Ive, a professional designer and Steve Jobs’ right-hand man.

Lead the way

Unless the CEO makes innovation a priority, it won’t happen. Innovation requires a level of risk-taking and failure that’s impossible without executive air cover. The best growth companies create a culture of innovation:

- Howard Schultz decided Starbucks had lost its way. He flew in every store manager from around the world to help redesign its café experience.
- Google encourages employees to spend a day per week on new ideas.
- P&G tracks the percentage of revenues from new products and services.
- Gray Advertising gives a Heroic Failure Award to the riskiest ideas… that fail!

Final thoughts

Finally, without an innovation strategy, different parts of an organization can easily wind up pursuing conflicting priorities — even if there’s a clear business strategy. Sales representatives hear daily about the pressing needs of the biggest customers. Marketing may see opportunities to leverage the brand through complementary products or to expand market share through new distribution channels. Business unit heads are focused on their target markets and their particular P&L pressures. R&D scientists and engineers tend to see opportunities in new technologies. Diverse perspectives are critical to successful innovation. But without a strategy to integrate and align those perspectives around common priorities, the power of diversity is blunted or, worse, becomes self-defeating.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
What is a micro frontend?

Amit Kothari
08 Oct 2017
6 min read
The microservice architecture enables us to write scalable and agile backend systems. Writing independent, self-contained services gives us the flexibility to quickly add a new feature or easily change an existing one without affecting the whole system. Independently deployable services also allow us to scale our services as per demand. We will show you how you can use a similar approach for frontend applications. You will learn about micro frontend architecture, its benefits, and a strategy to break down a monolith web app into micro frontends.

What is micro frontend architecture?

Micro frontend architecture is an approach to developing a web application as a composition of small frontend apps. Instead of writing a large monolith frontend application, the application is broken down into domain-specific micro frontends, which are self-contained and can be developed and deployed independently.

Advantages of using micro frontends

Micro frontends bring the concept and benefits of microservices to frontend applications. Each micro frontend is self-contained, which allows faster delivery, as multiple teams can work on different parts of the application without affecting each other. This also gives each team the freedom to choose a different technology as required. Since the micro frontends are highly decoupled, they have a lower impact on other parts of the application and can be enhanced and deployed independently.

Design considerations

Let's say we want to build an online shopping website using micro frontend architecture. Instead of developing the site as one large application, we can split the website into micro frontends. For example, the pages to display lists of products and product details can be one micro frontend, and the pages to show the order history of a user can be another micro frontend. The user interface is made up of multiple micro frontends, but we do not want our users to feel that different pages are part of different apps.
Here are some of the practices we can use to decompose a frontend application into smaller micro frontends, without compromising user experience.

Single responsibility

The first thing to consider is how to split an application into smaller apps so that each app can be developed and deployed independently. When teams are working on the different micro frontends, we want the apps to be highly decoupled so that a change in one app does not affect the others. This can be achieved by building domain-specific micro frontends with a single responsibility and a well-defined bounded context. Just like our code, we want our micro frontends to have high cohesion and low coupling, i.e. all the related code should be close together and less dependent on other modules. If we take the example of our online shopping site again, we want all the product-related UI components in the product micro frontend and all the order-related functionality in the order micro frontend. Let's say we have a user dashboard screen where users can see information from different domains: they can see their pending orders and also products which are on special. Instead of creating a dashboard micro frontend, it is recommended to have the pending-order UI component as part of the order micro frontend and the product-related components as part of the product micro frontend. This will allow us to split our system vertically and have domain-specific frontend and backend services.

Common interface for communication and data exchange

For micro frontends to work harmoniously as a single web application, they need a common and consistent way to communicate with each other. Even if they are highly independent, they still need to talk to each other. One common approach is to have an application that works as an integration layer. This app can work as a container to render different micro frontends and also facilitate communication between them.
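One way to realize such an integration layer is a small publish/subscribe event bus owned by the container app, so micro frontends signal each other through named events instead of importing each other directly. The sketch below is a hypothetical, framework-free illustration; the `EventBus` class and the `"order:submitted"` event name are assumptions for the example, not part of any standard or library.

```typescript
// Hypothetical sketch of an integration-layer event bus. Micro
// frontends publish and subscribe to named events; the container
// app reacts to them (e.g. by navigating). All names here are
// illustrative assumptions.

type Handler = (payload: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  publish(event: string, payload: unknown): void {
    for (const handler of this.handlers.get(event) ?? []) {
      handler(payload);
    }
  }
}

// The container owns navigation; micro frontends only emit events.
const bus = new EventBus();
let currentRoute = "/cart";

bus.subscribe("order:submitted", () => {
  currentRoute = "/orders";
});

// The shopping cart micro frontend announces the submission...
bus.publish("order:submitted", { orderId: "A-123" });
// ...and the container navigates to the order micro frontend.
console.log(currentRoute); // prints "/orders"
```

Because neither micro frontend knows about the other, either one can be rewritten or redeployed without touching the rest; only the event contract is shared.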
For example, in our online shopping website, once a user submits an order through the shopping cart micro frontend, we want to take the user to their order list screen. Since both the order and shopping cart micro frontends are highly decoupled and do not know about each other, we can use the container app as the orchestration layer. On receiving an order submission event from the shopping cart micro frontend, the container app will navigate the user to the order micro frontend. The container app can also be used to handle cross-cutting concerns like user session management, analytics, etc. This approach works well with existing monolith frontends, where the existing monolith application can work as the container and any new feature can be independently developed as a micro frontend and integrated into the existing app. Existing functionality can also be extracted and rewritten as micro frontends as required.

Consistent look and feel

Although our user interface is divided into multiple micro frontends, we still want our users to feel as if they are interacting with a single application. We want our apps to have a consistent look and feel, and also the ability to make UI changes easily across multiple apps. For example, we should be able to change the font or the primary colors across multiple micro frontends. This can be done by sharing CSS and assets like images, fonts, icons, etc. We also want the apps to use the same UI components; for example, if we have a date picker on multiple screens, we want all the date pickers to look the same. This can be achieved by creating a common library of UI components, which can be shared by micro frontends. Using shared assets and a UI component library will allow us to make changes easily, instead of having to update multiple micro frontends.

In this post, we discussed micro frontends, their benefits, and things to consider before migrating to a micro frontend architecture.
To deliver faster, we want the ability to build, test, and deploy features independently, and this can be achieved by using micro frontends and microservices. Implementing micro frontends may present its own challenges, and there will be technical hurdles to overcome, but the benefits outweigh the complexity. If you are using micro frontend architecture, please share your experience with us.

About the author

Amit Kothari is a full stack software developer based in Melbourne, Australia. He has 10+ years of experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks like React and AngularJS, and backend microservices/REST APIs in Java. He is passionate about lean software development and continuous delivery.
What we learned from Oracle OpenWorld 2017

Amey Varangaonkar
06 Oct 2017
5 min read
“Amazon’s lead is over.” These famous words from Oracle CTO Larry Ellison at Oracle OpenWorld 2016 garnered a lot of attention, as Oracle promised their customers an extensive suite of cloud offerings and offered a closer look at their second-generation IaaS data centers. In the recently concluded OpenWorld 2017, Oracle continued their quest to take on AWS and other major cloud vendors by unveiling a host of cloud-based products and services. Not just that, they have juiced these offerings up with Artificial Intelligence-based features, in line with all the buzz surrounding AI.

Key highlights from the Oracle OpenWorld 2017

Autonomous Database

Oracle announced a totally automated, self-driving database that would require no human intervention for managing or fine-tuning the database. Using machine learning and AI to eliminate human error, the new database guarantees 99.995% availability. While taking another shot at AWS, Ellison promised in his keynote that customers moving from Amazon’s Redshift to Oracle’s database can expect a 50% cost reduction. Likely to be named Oracle 18c, this new database is expected to be shipped across the world by December 2017.

Oracle Blockchain Cloud Service

Oracle joined IBM in the race to dominate the Blockchain space by unveiling its new cloud-based Blockchain service. Built on top of the Hyperledger Fabric project, the service promises to transform the way business is done by offering secure, transparent, and efficient transactions. Other enterprise-critical features such as provisioning, monitoring, backup, and recovery are also among the standard features this service will offer to its customers.

“There are not a lot of production-ready capabilities around Blockchain for the enterprise. There [hasn’t been] a fully end-to-end, distributed and secure blockchain as a service,” said Amit Zavery, Senior VP at Oracle Cloud.
It is also worth remembering that Oracle joined the Hyperledger consortium just two months ago, and the signs of them releasing their own service were already there.

Improvements to Business Management Services

The new features and enhancements introduced for the business management services were among the key highlights of the OpenWorld 2017. These features now empower businesses to manage their customers better and plan for the future with better organization of resources. Some important announcements in this area were:

- Adding AI capabilities to its cloud services - the Oracle Adaptive Intelligent Apps will now make use of AI capabilities to improve services for any kind of business
- Developers can now create their own AI-powered Oracle applications, making use of deep learning
- Oracle introduced AI-powered chatbots for better customer and employee engagement
- New features such as an enhanced user experience in the Oracle ERP cloud and improved recruiting in the HR cloud services were introduced

Key Takeaways from Oracle OpenWorld 2017

With these announcements, Oracle have given a clear signal that they’re to be taken seriously. They’re already buoyed by a strong Q1 result, which saw their revenue from cloud platforms hit $1.5 billion, indicating a growth of 51% as compared to Q1 2016. Here are some key takeaways from the OpenWorld 2017, which are underlined by the aforementioned announcements:

- Oracle undoubtedly see cloud as the future, and have placed a lot of focus on the performance of their cloud platform. They’re betting on the fact that their familiarity with the traditional enterprise workload will help them win a lot more customers - something Amazon cannot claim.
- Oracle are riding the AI wave and are trying to make their products as autonomous as possible - to reduce human intervention and, to some extent, human error. With enterprises looking to cut costs wherever possible, this could be a smart move to attract more customers.
- The autonomous database will require Oracle to automatically fine-tune, patch, and upgrade its database without causing any downtime. It will be interesting to see if the database can live up to its promise of 99.995% availability.
- Is the role of Oracle DBAs at risk due to the automation? While it is doubtful that they will be out of jobs, there is bound to be a significant shift in their day-to-day operations. It is speculated that DBAs will spend less time on traditional administration tasks such as fine-tuning, patching, and upgrading, and will instead focus on efficient database design, setting data policies, and securing the data.
- Cybersecurity was a key theme in Ellison’s keynote and the OpenWorld 2017 in general. As enterprise Blockchain adoption grows, so does the need for a secure, efficient digital transaction system. Oracle seem to have identified this opportunity, and it will be interesting to see how they compete with the likes of IBM and SAP to gain major market share.

Oracle’s CEO Mark Hurd has predicted that Oracle can win the cloud wars, overcoming the likes of Amazon, Microsoft, and Google. Judging by the announcements at the OpenWorld 2017, it seems like they may have a plan in place to actually pull it off. You can watch highlights from the Oracle OpenWorld 2017 on demand here. Don’t forget to check out our highly popular book Oracle Business Intelligence Enterprise Edition 12c, your one-stop guide to building an effective Oracle BI 12c system.
Say hello to Streaming Analytics

Amey Varangaonkar
05 Oct 2017
5 min read
In this data-driven age, businesses want fast, accurate insights from their huge data repositories in the shortest time span — and in real time when possible. These insights are essential — they help businesses understand relevant trends, improve their existing processes, enhance customer satisfaction, improve their bottom line, and most importantly, build and sustain their competitive advantage in the market.

Doing all of this is quite an ask - one that is becoming increasingly difficult to achieve using just the traditional data processing systems, where analytics is limited to the back end. There is now a burning need for a newer kind of system, where larger, more complex data can be processed and analyzed on the go.

Enter: Streaming Analytics

Streaming analytics, also referred to as real-time event processing, is the processing and analysis of large streams of data in real time. These streams are basically events that occur as a result of some action: a transaction, a system failure, or a trigger that changes the state of a system at any point in time. Even something as minor or granular as a click would constitute an event, depending upon the context.

Consider this scenario: you are the CTO of an organization that deals with sensor data from wearables. Your organization would have to deal with terabytes of data coming in on a daily basis from thousands of sensors. One of your biggest challenges as CTO would be to implement a system that processes and analyzes the data from these sensors as it enters the system. Here’s where streaming analytics can help you, by giving you the ability to derive insights from your data on the go.
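To make the wearable-sensor scenario concrete, here is a minimal, hypothetical sketch of what "analyzing data as it enters the system" can look like: each reading is processed the moment it arrives, against a rolling time window, with no intermediate batch store. The `SensorReading` and `StreamProcessor` names and the anomaly rule are illustrative assumptions, not taken from any particular streaming platform.

```typescript
// Hypothetical sketch: per-event processing over a rolling time
// window, flagging readings that spike far above the rolling average.

interface SensorReading {
  deviceId: string;
  value: number;
  timestamp: number; // milliseconds
}

class StreamProcessor {
  private window: SensorReading[] = [];

  constructor(
    private windowMs: number,
    private onAlert: (r: SensorReading, avg: number) => void,
  ) {}

  // Called once per event, as it occurs -- no batch step.
  ingest(reading: SensorReading): void {
    // Evict readings that have fallen out of the time window.
    const cutoff = reading.timestamp - this.windowMs;
    this.window = this.window.filter((r) => r.timestamp >= cutoff);
    this.window.push(reading);

    const avg =
      this.window.reduce((sum, r) => sum + r.value, 0) / this.window.length;

    // Flag readings far above the rolling average.
    if (this.window.length > 1 && reading.value > 2 * avg) {
      this.onAlert(reading, avg);
    }
  }
}

// Usage: feed readings one by one, as a streaming source would.
const alerts: SensorReading[] = [];
const processor = new StreamProcessor(10_000, (r) => alerts.push(r));

processor.ingest({ deviceId: "hr-1", value: 72, timestamp: 1_000 });
processor.ingest({ deviceId: "hr-1", value: 75, timestamp: 2_000 });
processor.ingest({ deviceId: "hr-1", value: 400, timestamp: 3_000 }); // spike
console.log(alerts.length); // prints 1
```

A production system would distribute this across many nodes and persist results downstream, but the core idea is the same: the insight (the alert) is produced before the data ever registers in a database.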
According to IBM, a streaming system demonstrates the following qualities:

- It can handle large volumes of data
- It can handle a variety of data and analyze it efficiently — be it structured or unstructured — and identifies relevant patterns accordingly
- It can process every event as it occurs, unlike traditional analytics systems that rely on batch processing

Why is Streaming Analytics important?

The humongous volume of data that companies have to deal with today is almost unimaginable. Add to that the varied nature of data that these companies must handle, and the urgency with which value needs to be extracted from this data - it all makes for a pretty tricky proposition. In such scenarios, choosing a solution that integrates seamlessly with different data sources, is fine-tuned for performance, is fast and reliable, and most importantly is flexible to changes in technology, is critical. Streaming analytics offers all these features - thereby empowering organizations to gain a significant edge over their competition.

Another significant argument in favour of streaming analytics is the speed at which one can derive insights from the data. Data in a real-time streaming system is processed and analyzed before it registers in a database. This is in stark contrast to analytics on traditional systems, where information is gathered, stored, and then analyzed. Thus, streaming analytics supports much faster decision-making than traditional data analytics systems.

Is Streaming Analytics right for my business?

Not all organizations need streaming analytics, especially those that deal with static data or data that hardly changes over longer intervals of time, or those that do not require real-time insights for decision-making. For instance, consider the HR unit of a call centre. It is sufficient and efficient to use a traditional analytics solution to analyze thousands of past employee records, rather than run them through a streaming analytics system.
On the other hand, the same call centre can find real value in implementing streaming analytics for something like a real-time customer log monitoring system: a system where customer interactions and context-sensitive information are processed on the go. This can help the organization find opportunities to provide unique customer experiences and improve their customer satisfaction score, alongside a whole host of other benefits.

Streaming analytics is slowly finding adoption in a variety of domains where companies are looking to get that crucial competitive advantage - sensor data analytics, mobile analytics, and business activity monitoring being some of them. With the rise of the Internet of Things, data from IoT devices is also increasing exponentially. Streaming analytics is the way to go here as well. In short, streaming analytics is ideal for businesses dealing with time-critical missions and those working with continuous streams of incoming data, where decision-making has to be instantaneous. Companies that obsess about real-time monitoring of their businesses will also find streaming analytics useful - just integrate your dashboards with your streaming analytics platform!

What next?

It is safe to say that with time, the amount of information businesses manage is going to rise exponentially, and so will the variety of that information. As a result, it will get increasingly difficult to process volumes of unstructured data and gain insights from them using just the traditional analytics systems. Adopting streaming analytics into the business workflow will therefore become a necessity for many businesses. Apache Flink, Spark Streaming, Microsoft's Azure Stream Analytics, SQLstream Blaze, Oracle Stream Analytics, and SAS Event Processing are all good places to begin your journey through the fleeting world of streaming analytics. You can browse through this list of learning resources from Packt to know more.
- Learning Apache Flink
- Learning Real Time processing with Spark Streaming
- Real Time Streaming using Apache Spark Streaming (video)
- Real Time Analytics with SAP Hana
- Real-Time Big Data Analytics
Top 5 misconceptions about data science

Erik Kappelman
02 Oct 2017
6 min read
Data science is a well-defined, serious field of study and work. But the term ‘data science’ has become a bit of a buzzword. Yes, 'data scientists’ have become increasingly important to many different types of organizations, but it has also become a trend term in tech recruitment. The fact that these words are thrown around so casually has led to a lot of confusion about what data science and data scientists actually are. I would formerly have included myself in this group. When I first heard the word data scientist, I assumed that data science was actually just statistics in a fancy hat. Turns out I was quite wrong. So here are the top 5 misconceptions about data science.

Data science is statistics and vice versa

I fell prey to this particular misconception myself. What I have come to find out is that statistical methods are used in data science, but conflating the two is really inaccurate. This would be somewhat like saying psychology is statistics because research psychologists use statistical tools in studies and experiments. So what's the difference? I am of the mind that the primary difference lies in the level of understanding of computing required to succeed in each discipline. While many statisticians have an excellent understanding of things like database design, one could be a statistician and know nothing about database design. To succeed as a statistician, all the way up to the doctoral level, you really only need to master basic modeling tools like R, Python, and MATLAB. A data scientist needs to be able to mine data from the Internet, create machine learning algorithms, design, build, and query databases, and so on.

Data science is really computer science

This is the other half of the first misconception. While it is tempting to lump data science in with computer science, the two are quite different.
For one thing, computer science is technically a field of mathematics focused on algorithms and optimization, and data science is definitely not that. Data science requires many skills that overlap with those of computer scientists, but data scientists aren’t going to need to know anything about computer hardware, kernels, and the like. A data scientist ought to have some understanding of network protocols, but even here, the level of understanding required for data science is nothing like that held by the average computer scientist.

Data scientists are here to replace statisticians

In this case, nothing could be further from the truth. One way to keep this straight is that statisticians are in the business of researching existing statistical tools as well as trying to develop new ones. These tools are then used by data scientists and many others. Data scientists are usually more focused on applied solutions to real problems and less interested in what many might regard as pure research.

Data science is primarily focused on big data

This is an understandable misconception. Just so we’re clear, Wikipedia defines big data as “a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them.” Big data, then, is really just the study of how to deal with, well, big datasets. Data science absolutely has a lot to contribute in this area. Data scientists usually have skills that work really well when it comes to analyzing big data. Skills related to databases, machine learning, and how data is transferred around a local network or the internet are skills most data scientists have, and they are very helpful when dealing with big data. But data science is actually very broad in scope. Big data is a hot topic right now and is receiving a lot of attention. Research into the field is receiving a lot of private and public funding.
In any situation like this, many different types of people working in a diverse range of areas are going to try to get in on the action. As a result, talking up data science's connection to big data makes sense if you're a data scientist - it's really about effective marketing. So, you might work with big data if you're a data scientist - but data science is also much, much more than just big data.

Data scientists can easily find a job

I thought I would include this one to add a different perspective. While there are many more misconceptions about what data science is or what data scientists do, I think this is actually a really damaging misconception and should be discussed. I hear a lot of complaints these days from people with sought-after skill sets who are not able to find gainful employment. Data science is like any other field, and there is always going to be a whole bunch of people that are better at it than you. Don’t become a data scientist because you’re sure to get a job - you’re not. The industries related to data science are absolutely growing right now, and will continue to do so for the foreseeable future. But that doesn’t mean people who can call themselves data scientists automatically get jobs. You have to have the talent, but you also need to network and do all the same things you need to do to get on in any other industry. The point is, it's not easy to get a job no matter what your field is; study and practice data science because it's awesome, not because you heard it’s a sure way to get a job.

Misconceptions abound, but data science is a wonderful field of research, study, and practice. If you are interested in pursuing a career or degree related to data science, I encourage you to do so; however, make sure you have the right idea about what you’re getting yourself into.

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner.
He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.
The Difference Between Working in Indie and AAA Game Development

Raka Mahesa
02 Oct 2017
5 min read
Let's say we have two groups of video games. In the first group, we have games like The Witcher 3, Civilization VI, and Overwatch. And in the second group, we have games like Super Meat Boy, Braid, and Stardew Valley. Can you tell the difference between these two groups? Is one group of games better than the other? No, they are all good games that have achieved both critical and financial success. Are the games in the first group sequels, while games in the second group are new? No, Overwatch is a new, original IP. Are the games in the first group more expensive than the second group? Now we're getting closer. The truth is, the first group of games comes from searching Google for "popular AAA games," while the second group comes from searching for "popular indie games." In short, the games in the first group are AAA games, and in the second group are indie games.

Indie vs. AAA game development

Now that we've seen the difference between the two groups, why do people separate these games into two different groups? What makes these two groups of games different from each other? Some would say that they are priced differently, but there are actually AAA games with low pricing as well as indie games with expensive pricing. How about the scale of the games? Again, there are indie games with big, massive worlds, and there are also AAA games set in short, small worlds. From my perspective, the key difference between the two groups of games is the size of the company developing the games. Indie games are usually made by companies with fewer than 30 people, and some are even made by fewer than five people. On the other hand, AAA games are made by much bigger companies, usually with hundreds of employees.

Game development teams: size matters

Earlier, I mentioned that company size is the key difference between indie games and AAA games. So it's not surprising that it's also the main difference between indie and AAA game development.
In fact, the difference in team or company size leads to every difference between the two development processes. Let's start with something personal: your role or position in the development team. Big teams usually have every position they need already filled. If they need someone to work on the game engine, they already have an engine programmer there. If they need someone to design a level, they already have a level designer working on it. In a big team, your role is already determined from the start, and you will rarely work on any task outside of your job description.

If AAA game development values specialists, then indie game development values generalists who can fill multiple roles. It's not weird at all in a small development team if a programmer is asked to deal with both networking and enemy AI. Small teams usually aren't able to cover all the needed positions individually, so they turn to people who are able to work on a variety of tasks.

Funding across the games industry

Let's move to another difference, this time from the funding aspect. A large team requires a large amount of funding, simply because it has more people that need to be paid. And, if you look at the bigger picture, it also means that video games made by a large team have a large development cost. The opposite rings true as well: indie games have much smaller development costs because they have smaller teams.

Because every project has a chance of failure, the large development cost of AAA games becomes a big problem. If you're only spending a little money, maybe you're fine with a small chance of failure, but if you're spending a large sum of money, you definitely want to reduce that risk as much as possible. This ends up making AAA game development much more risk-averse. When there's a decision that needs to be made, the team will try to make sure that they don't make the wrong choice.
They will do extensive market research and see what is trending in the market. They want to reach as large an audience as possible, so if there's any design that would exclude a significant number of customers, it will be cut out. On the other hand, indie game development doesn't spend that much money. With a smaller development cost, indie games don't need a massive amount of sales to recoup their costs. Because of that, indie developers are willing to take risks with experimental and unorthodox design, giving the team creative freedom without the need for market research.

That said, indie game development harbors a different kind of risk. Unlike their bigger counterparts, indie game developers tend to live from one game to the next. That is, they use the revenue from their current game to fund the development of their next game. So if any of their games don't perform well, they could immediately close down. And that's another difference between the two development processes: AAA game development tends to be more financially stable compared to indie development.

There are more differences between indie and AAA game development, but the ones listed above are definitely some of the most prominent. All in all, one development process isn't better than the other, and it falls to you to decide which one is better suited for you.

Raka Mahesa is a game developer at Chocoarts, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

What Blockchain Means for Security

Lauren Stephanian
02 Oct 2017
5 min read
It is estimated that hacks and flaws in security have cost the US over $445B every year. It is clear at this point that the cost of hacking attacks and ransomware has increased and will continue to increase year by year. Therefore, industries, especially those that handle large amounts of important data, will need to invest in technologies to become more secure.

By design, Blockchain is theoretically a secure means of storing data. Each transaction is detailed on an immutable ledger, which serves to prevent and detect any form of tampering. Besides this, Blockchain also eliminates the need for verification from trusted third parties, which can come at high costs. But is this a promise that the technology has yet to fulfill, or is it part of the security revolution of the future we so desperately need?

How Blockchain is resolving security issues

One security issue that can be resolved by Blockchain relates to the fact that many industries rely heavily on "cloud and on-demand services, where our data is accessed and processed by untrusted third parties." There are also many situations where parties may want to jointly work on data without revealing their portions to untrusted entities. Blockchain can be used to create a system where users can jointly store data and also remain anonymous. In this case, Blockchain can be used to record time-stamped events that can't be removed, so in the case of a cyber attack, it is easy to see where it came from. The Enigma Project, originally developed at MIT, is a good example of this use case.

Another issue that Blockchain can improve is data tampering. There have been a number of cyber attacks where the attackers don't delete or steal data, but alter it. One infamous example of this is the Stuxnet malware, which severely and physically damaged Iran's nuclear program.
If this data were stored on the Blockchain, any altered transactions would be marked and could not be changed or covered up, so hackers would not be able to hide their tracks.

Blockchain's security vulnerabilities

The inalterability of Blockchain and its decentralization clearly have many advantages; however, they do not entirely remove the possibility of data being altered. It is possible to introduce data unrelated to transactions to the Blockchain, and therefore this Blockchain data could be exposed to malware. The extent to which malware could impact the entire Blockchain and all its data is not yet known; however, there have been some instances of proven vulnerabilities. One such proven vulnerability is Vitaly Kamluk's proof-of-concept software that could take information from a hacker's Bitcoin address and essentially pull malicious data and store it on the Blockchain.

Private vs. public Blockchain implementations

When assessing security risks in Blockchain technology, it is also important to understand the difference between private and public implementations. On public Blockchains, anyone can read or write transactions, and anyone can aggregate those transactions and publish them if they are able to solve a cryptographic puzzle. Solving these puzzles takes a lot of computing power, and therefore a high amount of energy. This leads to a market where most of the transaction aggregation and puzzle solving is done in countries where energy is cheapest, which in turn leads to centralization and potential collusion. Private Blockchains, in comparison, give the network operator control over who can read and write to the ledger. In the case of Bitcoin in particular, ownership is proven through a private key linked to a transaction, and just like physical money, these keys can easily be lost or stolen. One estimate puts the value of lost Bitcoins at $950M.
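The tamper-evidence property described above can be illustrated with a toy hash-chained ledger. This is a deliberately minimal sketch, not a real Blockchain implementation: the block structure and function names are invented for illustration, and there is no consensus or proof-of-work step.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a time-stamped block linked to the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Return True only if no block has been tampered with."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("timestamp", "data", "prev_hash")},
            sort_keys=True,
        ).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False  # block contents no longer match their hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

# Build a small chain, then tamper with a past "transaction."
chain = [make_block("genesis", "0")]
chain.append(make_block("transfer A->B", chain[-1]["hash"]))
chain.append(make_block("transfer B->C", chain[-1]["hash"]))
assert verify_chain(chain)

chain[1]["data"] = "transfer A->Mallory"  # alter a recorded transaction
assert not verify_chain(chain)            # the tampering is detected
```

Because each block's hash covers its contents and each block stores the previous block's hash, changing any past record invalidates everything after it, which is exactly why tampering leaves visible tracks.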
There are many pros and cons that should be considered when deciding whether or not to use Blockchain. It is important to note that the most important thing Blockchain provides is the ability to track who committed a particular transaction, for good or for bad, and when. It would certainly help a great deal with some security measures, especially when it comes to tracking what information was breached, altered, or stolen. However, it is not an end-all-be-all when it comes to keeping data secure. If Blockchain is to be used to store important data, such as financial information or client health records, it should be wrapped in a layer of other cyber security software.

Lauren Stephanian is a software developer by training and an analyst for the structured notes trading desk at Bank of America Merrill Lynch. She is passionate about staying on top of the latest technologies and understanding their place in society. When she is not working, programming, or writing, she is playing tennis, traveling, or hanging out with her good friends in Manhattan or Brooklyn. You can follow her on Twitter or Medium at @lstephanian or via her website.
How to Succeed in Game Development

Raka Mahesa
02 Oct 2017
5 min read
Create a game that people want and make sure people know about the game. While that statement is certainly not wrong, it's a gross oversimplification of the real situation. After all, most game developers are creating games that they think people would want to play, and they are also trying their best to tell other people about their games. Yet we still see video games fail to reach success plenty of times. So what exactly went wrong? Is the mantra above not enough?

To figure out how to achieve success in game development, let's take a look at that statement and examine it further. "Create a game that people want and make sure people know about the game." This sentence contains the three ingredients for success:

1. Game creation
2. People's demand
3. Game discovery

The last component is much more related to marketing and is not in the scope of this post, so we won't cover it here. But what about the other two? Game creation is about developing a functioning game that actually works as intended. A video game is a complex product that combines various fields of study, so getting the whole package to work together is already a feat in itself. Meanwhile, generating demand is about the entertainment factor of the game and how much people enjoy the product. Unlike the other aspect, this one is much more subjective, but that doesn't mean we can't be methodical about it.

Creating a game: Flexibility and Efficiency

Let's start with the creation aspect, mainly the coding part of game development. Game development is a branch of software development, so most of the best practices of software development can also be applied when we develop a video game. These practices include things like keeping the code simple, following the DRY (Don't Repeat Yourself) principle, or separating the user interface from the application logic. That said, the goal of those development practices is to create working software with as few errors as possible.
While we do want our games to be bug-free as well, there are other things we want our game software to be: flexible and efficient.

A game project is a bit different from your usual software project. In a software project, the basic features of the software usually won't deviate too much from the initial design. However, in a game project, it is very common to add a brand new feature or drastically alter an existing feature because the game was found to not be fun during a test play. Having a codebase that can easily accommodate these changes is a really big boon to a game project.

A lot of the time, software developers don't really have to worry about their end users not having enough processing power or memory to run the software. Video games, meanwhile, usually need to squeeze every last bit of power from the user's device to make sure the game can run without a hitch. While this doesn't mean that every single calculation needs to be optimized, performance is something that needs to be constantly considered when you are developing video games.

People's demand: Entertainment Value

Now let's switch to the other aspect of achieving success in game development: entertainment value. Do keep in mind that "entertainment" cannot simply be equated with "fun" or "happy," because being challenged or stressed can also be entertaining, especially if we get to overcome that challenge later on. However, for people who can't find the fun in being stressed, those kinds of games may hold no entertainment value whatsoever. And therein lies the biggest problem of creating fun games: different people have different ideas of what counts as fun. What the developer thinks will be fun may not actually be fun at all for the players. Of course, you can always run some play tests and figure out for yourself whether your game is fun or not. But being so close to the development can sometimes impair your judgement and cause you to miss an actual issue with the game.
That's why you should play test your game with third parties, and the sooner the play test is done, the better. After all, it is very important to figure out whether your game is fun or not as soon as possible, because if you find out that your game isn't really fun for players, you will have to change features to make it more entertaining. Just remember that changing features when your game is fully coded can be a nightmare, because doing so could easily break or alter other features. And that's another reason for doing your play test early: if you need to make changes to the game, you can do it while the codebase is still simple.

There are many factors that you must consider to succeed in game development. So far we've discussed coding and the fun factor, but there are other aspects that determine the success of your game as well, including art, writing, and sound design. For now, though, the first two factors provide a good starting point. Good luck with your game development!

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

What is coding as a service?

Antonio Cucciniello
02 Oct 2017
4 min read
If you want to know what coding as a service is, you have to start with artificial intelligence. Put simply, coding as a service is using AI to build websites: using your machine to write code so you don't have to.

The challenges facing engineers and programmers today

In order to give you a solid understanding of what coding as a service is, you must understand where we are today. Typically, we have programs that are made by software developers or engineers. These programs are usually created to automate a task or make tasks easier; think of things that speed up processing or automate a repetitive task. This is, and has been, extremely beneficial. The productivity gained from automated applications and tasks allows us, as humans and workers, to spend more time creating important things and coming up with more groundbreaking ideas. This is where artificial intelligence and machine learning come into the picture.

Artificial intelligence and coding as a service

Recently, with the gains in computing power that have come with time and breakthroughs, computers have become more and more powerful, allowing AI applications to arise in more common practice. Today, there are applications that can detect objects in images and videos in real time, translate speech to text, and even determine the emotions in text sent by someone else. For an example of an artificial intelligence application in use today, you may have used an Amazon Alexa or Echo device. You talk to it, it understands your speech, and it then completes a task based on what you said. Previously, understanding speech was a task only humans could perform. Now, with these advances, Alexa is capable of understanding everything you say, given that it is "trained" to understand it. This capability, previously expected only of humans, is now being filtered through to technology.
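The same idea, a program taking action from a plain-English request, can be sketched with a deliberately tiny rule-based toy. The function and the single regex rule below are invented for illustration; real systems like Alexa rely on trained language models, not hand-written patterns.

```python
import re

def apply_edit(html, request):
    """Toy 'understand and act' step: map an English request to an HTML edit."""
    match = re.search(r"change the title to ['\"](.+?)['\"]", request, re.I)
    if match:
        # Replace the page's <h1> heading with the requested title.
        return re.sub(r"<h1>.*?</h1>", f"<h1>{match.group(1)}</h1>", html)
    return html  # request not understood: leave the page unchanged

page = "<h1>Welcome</h1><p>Hello!</p>"
page = apply_edit(page, 'Change the title to "My Bakery"')
print(page)  # → <h1>My Bakery</h1><p>Hello!</p>
```

A single hard-coded rule obviously doesn't scale, which is precisely the gap that natural language processing and machine learning are meant to close.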
How coding as a service will automate boring tasks

Today, we have programmers who write applications for many uses and make things such as websites for businesses. As things become more and more automated, programmers' efficiency will increase, reducing the need for additional manpower. Coding as a service, otherwise known as CaaS, will result in even fewer programmers being needed. It mixes the efficiencies we already have with artificial intelligence to do programming tasks for a user. Using natural language processing to understand exactly what the user or customer is saying and means, it will be able to make edits to websites and applications on the fly. Not only will it be able to make edits, but combined with machine learning, CaaS can also come up with recommendations from past data and make edits on its own. Efficiency-wise, it is cheaper to own a computer than it is to pay a human, especially when a computer will work around the clock for you and never get tired. Imagine paying an extremely low price, perhaps lower than what you might already pay to get a website made, for getting your website built or your small application created.

Conclusion

Every new technology comes with pros and cons. Overall, the number of software developers may decrease; or, as a developer, this may free up your time from menial tasks and enable you to further specialize and broaden your horizons. Artificial intelligence programs such as coding as a service could do plenty of the underlying work, leaving the heavier lifting to human programmers. You just need to use the positives to your advantage!