
Tech Guides

How IBM Watson is paving the road for Healthcare 3.0

Kunal Parikh
29 Sep 2017
5 min read
Matt Kowalski (in Gravity): "Houston, in the blind."

Being an oncologist is a difficult job. Every year, 50,000 research papers are published on oncology alone. If an oncologist were to read every one of them, it would take nearly 29 hours of reading every workday to stay on top of this plethora of information. Added to this is the challenge of dealing with nearly 1,000 patients every year. Needless to say, a modern-day physician is bombarded with information that doubles every three years. The gap between the availability of information and the ability to access it in a practically useful way keeps getting wider. No wonder doctors and other medical practitioners can feel overwhelmed and, sometimes, lost in space!

Mission Control: "Shariff, what's your status?" Shariff: "Nearly there."

Advances in Big Data and cognitive computing are helping to solve these kinds of pressing problems facing the healthcare industry. IBM Watson is at the forefront of such efforts, and as time goes by the system will only become more robust.

"From a strict technological standpoint, the new applications of Watson are impressive and groundbreaking: The system is capable of combing through 600,000 pieces of medical evidence, 2 million pages of text from 42 medical journals and clinical trials in the area of oncology research, and 1.5 million patient records to provide on-the-spot treatment recommendations to health care providers. According to IBM, more than 90 percent of the nurses who have worked with Watson follow the guidance the system gives to them." - Infoworld

Watson, who?

IBM Watson is an interactive expert system that uses cognitive computing, natural language processing, and evidence-based learning to arrive at answers to questions posed to it by its users in plain English. Watson doesn't stop at generating hypotheses; it goes on to propose a list of recommendations to the user.

Let's pause and consider what this means for a healthcare professional. Imagine a doctor typing into their iPad, "A cyst found in the under-arm of the patient and biopsy suggesting non-Hodgkin's lymphoma." With so many cancers, and so many alternative treatments available to treat them, zeroing in on the right cure at the right time is a tough job for an oncologist. IBM Watson taps into the collective wisdom of oncology experts - practitioners, researchers, and academicians across the globe - to understand the latest advances in the rapidly evolving field of oncology. It then culls out the information most relevant to the patient's particular situation after considering their medical history. Within minutes, Watson comes up with various tailored approaches the doctor can adopt to treat the patient.

Watson can help healthcare professionals narrow down the right diagnosis, take informed and timely decisions, and put treatment plans in place for their patients. All the doctor has to do is ask a question while mentioning the symptoms a patient is experiencing. This question-answer format is revolutionary in that it can completely reshape how healthcare works.

How is IBM Watson redefining healthcare?

As more and more information is fed into IBM Watson, doctors will get highly customised recommendations to treat their patients. The impact on patient care and hospital costs can be tremendous.
For healthcare professionals, Watson can:

- Reduce or eliminate the time spent mining insights from an ever-growing body of research
- Provide a list of recommended treatment options, each with a confidence score attached
- Design treatment plans based on the option chosen

In short, it can act as a highly effective personal assistant to this group. These professionals become more competent and more successful, and have the time and energy to make deep personal connections with their patients, elevating patient care to a whole different level.

For patients, Watson can:

- Act as an interactive interface, answering their queries and connecting them with their healthcare professionals
- Provide at-home diagnostics and healthcare advice
- Keep their patient records updated and synced with their hospitals

Thus, Watson can help patients make informed medical choices, take better care of themselves, and alleviate the stress and anxiety induced by a chaotic and opaque hospital environment.

For the healthcare industry, it means a reduction in overall hospital costs, reduced investment in post-treatment patient care, higher rates of success, and fewer errors due to oversight, misdiagnosis, and other human mistakes. This can indirectly improve key administrative metrics, lower employee burnout and churn, improve morale, and yield other intangible benefits. The implications of such a transformation are not limited to healthcare alone.

What about insurance, Watson?

IBM Watson can have a substantial impact on insurance companies too. Insurance, a fiercely debated topic, is a major cost for healthcare. Increasing revenue potential, improving customer relationships, and reducing cost are some areas where Watson will start disrupting medical insurance. But that's just the beginning. Tighter integration with hospitals, more data on patient care, and more information on newer remedies will provide ground-breaking insights to insurance companies. These insights will help them figure out the right premiums and underwriting frameworks.

Moreover, this is not a scene set in some distant future. In Japan, the insurance company Fukoku Mutual Life Insurance replaced 34 employees and deployed IBM Watson. Customers of Fukoku can now settle payments by talking directly to an AI system instead of a human being. Fukoku paid IBM a one-time fee of $170,000 for Watson's services, along with yearly maintenance of $128,000. They plan to recover this cost by replacing their team of sales personnel, insurance agents, and customer care personnel - potentially saving nearly a million dollars a year.

These are interesting times, and some may even call them anxiety-inducing.

Shariff: "No, no, no, Houston, don't be anxious. Anxiety is bad for the heart."

How to move from server to serverless in 10 steps

Erik Kappelman
27 Sep 2017
7 min read
If serverless computing sounds a little contrived to you, you're right, it is. Serverless computing isn't really serverless, well not yet anyway. It would be more accurate to call it serverless development. If you are a backend boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows applications to consist of chunks of code that do things in response to stimulus. What makes this different than other development is that the chunks of code don't need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated backend configurations. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving.

How AWS Lambda supports serverless computing

We will discuss Amazon Web Services (AWS) Lambda, Amazon's serverless computing offering. We are going to walk through one of Amazon's use cases to better understand the value of serverless computing, and how someone can get started.

1. Have an application, build an application, or have an idea for an application. This could also be step zero, but you can't really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you'll need a project.
2. Create an AWS account, if you don't already have one, and set up the AWS Command Line Interface on your machine. Quick note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install, but the bash command never seemed to end up in the right place. Instead I used Homebrew and then it worked fine.
3. Navigate to S3 on AWS and create two buckets for testing purposes. One is going to be used for uploading, and the other is going to receive the transformed pictures from the first bucket. The bucket that receives the transformed pictures should have a name of the form "other bucket's name" + "resized". The code we are using requires this format in order to work. If you really don't like that, you can modify the code to use a different format.
4. Navigate to the AWS Lambda Management Console and choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol in order to create a trigger. Choose S3, then specify the bucket that the pictures are going to be initially uploaded into. Under the event type choose Object Created (All). Leave the trigger disabled and press the Next button. Give your function a name, and for now, we are done with the console.
5. On your local machine, set up a workspace by creating a root directory for the project with a node_modules folder. Then install the async and gm libraries.
6. Create a JavaScript file named index.js and copy and paste the code from the end of the blog into the file. It needs to be named index.js for this example to work; there are settings that determine the function's entry point, which can be changed to look for a different filename. The code we are using comes from an example on AWS located here. I recommend you check out their documentation.

If we look at the code that we are pasting into our editor we can learn a few things about using Lambda. We can see that there is an aws-sdk in use and that we use that dependency to create an S3 object.
We get the information about the source bucket from the event object that is passed into the main function. This is why we named our buckets the way we did. We can get our uploaded picture using the getObject method of our S3 object, since the S3 file information we want is carried in the event object passed into the main function. The code grabs that file, puts it into a buffer, uses the gm library to resize the image, and then uses the same S3 object, specifying the destination bucket this time, to upload the file.

7. Now we are ready to ZIP up the root folder and deploy this function to the Lambda instance we created. Quick note: While using OSX I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder. For some reason the upload doesn't work unless the zipping is done this way, at least on OSX.
8. Upload the archive using the Lambda Management Console; if you're fancy you can use the AWS Command Line Interface instead. Go to the management console and choose Upload a .ZIP File, click the upload button, specify your ZIP file, and then press the Save button.
9. Now we will test our work. Click the Actions drop down and choose the Configure test event option. Choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload and, if everything goes according to plan, your function should pass.
10. Profit!

I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything. I find that the absolute worst part of any development project is the back-end. That being said, I don't think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we're going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function(err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.

How to develop a tech strategy

Hari Vignesh
26 Sep 2017
5 min read
Technology has never been so fundamental, so strategic, and so important as it is in the digital age. It is being used to create new business models, products, and services, enhance existing offerings, and create deeper, more rewarding customer experiences. As such, businesses need to develop the right technology and IT strategy for success.

What is a tech strategy?

Technology strategy (information technology strategy or IT strategy) is the overall plan consisting of the objectives, principles, and tactics relating to the use of technology within a particular organization. Such strategies primarily focus on the technologies themselves and, in some cases, the people who directly manage those technologies. The strategy can be implied from the organization's behavior toward technology decisions, and may be written down in a document. In other words, technology strategy is the task of building, maintaining, and exploiting a company's technological assets.

Why do I need a tech strategy?

To compete in the new world of dynamic and disrupted digital markets, organizations need to be able to operate at the speed of digital; they need to be able to respond quickly and easily to changing market conditions, customer preferences, or competitor activity.

The traditional approach to IT strategy

The traditional approach to developing a new technology strategy involves a fairly structured, sequential process that produces a long-term view of the organization's technology requirements together with a plan for meeting those needs. The main steps of the classic approach are:

1. Identify the business capabilities that will be needed over the next 3-5 years to support the organization's strategy and realize its vision.
2. Assess the gap between the organization's current maturity against each capability and the level required to realize the vision.
3. Identify how technology can be used to address any gaps between the current and required maturity level of each business capability.
4. Design the target technology architecture that will support the required business capabilities.
5. Assess the gap between the organization's current and target technology architecture.
6. Develop a prioritized roadmap for building the target technology architecture.

The Agile approach to tech strategy

The agile approach to technology strategy is based on many of the same activities as the classic approach, but with some key differences that take into account the need for speed and flexibility. Typical steps include:

1. Identify the business capabilities that will be needed over the period covered by the organization's current strategy and vision.
2. Develop a high-level technology vision that describes the key features or characteristics that the organization's technology platform will need in order to support the organization's strategy.
3. Agree the planning horizon to be covered by the technology strategy (organizations faced with fast-changing markets may need to work on a 6-12 month horizon, whereas companies in more stable markets may select a 12-24 month planning period).
4. Determine the business capabilities that will take priority during the agreed planning horizon and assess the gaps between the current and required level of each business capability.
5. Identify and prioritize the technology initiatives required to address any gaps between the current and required level of the priority business capabilities.
6. Develop a roadmap showing those initiatives that will be delivered during the agreed planning period.
Repeat steps 3-6 towards the end of the current planning horizon, and repeat steps 1-6 whenever the organization's vision and strategy are updated.

When the business is the tech strategy

In cases where technology is used as the starting point for a new business model, or to create completely new products or services, the business strategy will itself be based on technology. There is an argument that, in such instances, there is no need for a separate technology strategy, as the technology initiatives, investments, and priorities are an integral part of the business strategy, and the CIO and the IT function will be key players in its definition. As with the agile approach, this "no separate strategy" case still depends on the IT function developing and maintaining key architectural artifacts to support the business strategy, and to shape and guide technology decisions.

How you can develop an effective IT strategy

For a strategy to be effective, it should answer questions of how to create value, deliver value, and capture value:

- To create value, you need to trace the technology's trajectory and forecast how the technology will evolve, how market penetration will change, and how to organize effectively.
- To capture value, you should know how to compete to gain a competitive advantage and sustain it, and how to compete in cases where technology standards are important.
- The final step is delivering the value, where firms define how to execute the strategy, make strategic decisions, and take decisive action.

In short, whether it's a pure IT business or an IT-dependent business, tech strategy plays a key role in shaping the organization's future. If you don't have a strategy yet, it's high time to craft one using any of these approaches.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Why is everyone talking about JavaScript fatigue?

Erik Kappelman
21 Sep 2017
4 min read
To answer this question, let's start by defining what exactly JavaScript fatigue is. JavaScript fatigue is best described as viewing the onslaught of new JavaScript tools, frameworks, and packages as a relentless stream of shaggy dog stories rather than an endless stream of creativity and enhanced productivity. I must admit, I myself have a serious case of JavaScript fatigue.

Anyone who is plugged into the tech world knows that JavaScript has been having a moment since the release of Node.js in 2009. Obviously, JavaScript was not new in 2009. Its monopoly on web scripting had already made it an old hand in the development world, but with the advent of Node.js, JavaScript began to creep out of web browsers into desktop applications and mobile apps. Pretty soon there was the MEAN stack, a web app architecture that allows a developer to run a web app end-to-end with only JavaScript, and tools like PhoneGap allowing developers to create mobile apps with good old fashioned HTML, CSS and, you guessed it, JavaScript. I think JavaScript fatigue asks the question: should we really be excited about the emergence of 'new' tech based on, or built for, a scripting language that has been in use for more than two decades?

How did JavaScript fatigue happen?

Before I answer the title question, let's discuss how this happened. Obviously, the creation and emergence of Node.js alone cannot be considered the complete explanation of JavaScript fatigue. But when you consider that JavaScript happens to be a relatively 'easy' language, and the language that many people start their development journeys with, a new platform that extended the functionality of such a language (Node.js) easily became a catalyst for the JavaScript wave that has been rolling for the last few years.

So, the really simple answer is that JavaScript is easy, so a bunch of people are using it. But who cares? Why is a bunch of people using a language that most of us already know a bad thing? To me that sounds a lot like a good thing. The reason this is problematic actually has nothing to do with JavaScript. There is a difference between using a common language because it is productively advantageous and using a common language out of laziness. Many developers are guilty of the latter. And when a developer is lazy about one thing, they're probably lazy about all the other things as well.

Is it fair to blame JavaScript?

So why are there so many lazily created frameworks, APIs, web apps and desktop applications created in JavaScript? Is it really fair to blame the language? No, it is not fair. People are not really fed up with JavaScript, they're fed up with lazy developers, and that is nothing new. Outside of literal laziness in the writing of JS code, there is a laziness around picking the tools to solve problems. I've heard it said that web development, or any development for that matter, is really not about development tools or process, it's about the results. Regular people don't care what technologies Amazon uses on their website, while everybody cares about using Amazon to buy things or stream videos. There has been a lot of use of JavaScript for the sake of using JavaScript. This is probably the most specific reason people are talking about JavaScript fatigue. When hammering a nail into a board, a carpenter doesn't choose a screwdriver because the screwdriver is the newest tool in their toolbox; they choose a hammer, because it's the right tool.
Sure, you could use the handle of the screwdriver to bang in that nail, and it would basically work, and then you would get to use your new tool. This is clearly a stupid way to operate. Unfortunately, many of the choices made in the development world today are centered on finding the newest JavaScript tool to solve a problem instead of finding the best tool to solve a problem. If developers eat up new tools like candy, other developers are going to keep creating them. This is the downward spiral we find ourselves in.

Using technology to solve problems

So, why is everyone talking about JavaScript fatigue? Because it is a real problem, and it's getting real annoying. As has been the case before, many developers have become Narcissus, admiring their code in the reflective pool of the Internet, until they turn to stone. Let's keep an eye on the prize: using technology to solve problems. If JavaScript is used in this way, nobody would have any qualms with the current JavaScript renaissance. It's when we start developing for the sake of developing that things get a little weird.

How has Python remained so popular?

Antonio Cucciniello
21 Sep 2017
4 min read
In 1991, the Python programming language was created. It is a dynamically typed, object oriented language that is often used for scripting and web applications today. It is usually paired with frameworks such as Django or Flask on the backend. Since its creation, it has remained extremely relevant and is one of the most widely used programming languages in the world. But why is this the case? Today we will look at the reasons why Python has remained so popular over the years.

Used by bigger companies

Python is widely used by bigger technology companies. When bigger tech companies (think companies such as Google) use Python, the engineers that work there also use it. If developers use Python at their jobs, they will take it to their next job and pass the knowledge on. In addition, Python continues to spread organically when these developers use their knowledge of the language in their personal projects as well, further spreading its usage.

Significant whitespace, less clutter

In Python, whitespace is significant, whereas in other languages such as JavaScript and C++ it is not. The whitespace is used to dictate the scope of the statements in that indent. Making whitespace significant reduces the need for things like braces and semicolons in your code. That reduction alone can make your code look simpler and cleaner. People are always more willing to try a language that looks and feels cleaner, because it seems psychologically easier to learn.

Variety of libraries and third-party support

Being around as long as it has, Python has plenty of built-in functionality. It has an extremely large standard library with plenty of things that you can use in your code. On top of that, it has plenty of third-party support libraries that make things even easier. All of this functionality allows programmers to focus on the logic that is vital to their application's core functionality. This makes programmers more efficient, and who doesn't like efficiency?

Object oriented

As mentioned earlier, Python is an object oriented programming language. More people are likely to adopt it because object oriented programming allows developers to model their code closely on real world behavior.

Built-in testing

Python ships with a package called unittest. This package is a full unit testing suite with setup and teardown functions. Because it is built in, developers have a stable, ready-made way to test their applications.

Readability and learnability

As mentioned earlier, whitespace is significant, so we do not need brackets and semicolons. Also, Python is dynamically typed, so it is easier to create and use variables without really having to worry about their types. These are exactly the topics that can be difficult for new programmers to learn. Python makes things easier by removing some of the difficult parts and having nicer looking code. That reduction in difficulty leads people to choose Python as their first programming language more often than others. (It was my first programming language.)

Well documented

Building on the standard library and the vast number of third-party packages, the code in those libraries is usually well documented. They tend to have plenty of helpful comments and tons of additional documentation to explain what is happening. From a developer's standpoint this is crucial: great documentation can make or break a language's usage for me.
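As an illustration of the built-in testing support described above, here is a minimal sketch using the standard library's unittest module. The Calculator class and its add method are hypothetical stand-ins for whatever application code you want to test.

import unittest

# Hypothetical code under test: a tiny calculator "application."
class Calculator:
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    def setUp(self):
        # Runs before every test method.
        self.calc = Calculator()

    def test_add_returns_sum(self):
        self.assertEqual(self.calc.add(2, 3), 5)

    def tearDown(self):
        # Runs after every test method; nothing to clean up here.
        self.calc = None

if __name__ == "__main__":
    unittest.main()

Running the file with the python interpreter executes the tests and reports the results, with no third-party test runner required.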
Multiple applications

To top it off, Python can be used in many kinds of applications. It can be used to develop games for fun, web applications to aid businesses, and data science applications. This wide variety of uses attracts more and more people to Python, because when you learn the language you gain the power of versatility. The scope of applications is vast.

With all of these benefits, who wouldn't consider Python as their next language of choice? There are many options out there, but Python tends to be superior when it comes to readability, support, documentation, and its wide range of applications.

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.

How to assess your tech team’s skills

Hari Vignesh
20 Sep 2017
5 min read
For those of us who manage others, effectiveness is largely driven by the skills and motivation of those who report to us. So whether you are a CIO, an IT division leader, or a front-line manager, you need to spend the time to assess the current skills, abilities, and career aspirations of your staff and help them put in place the plans that can support their development. And yet, you need to do this in a way that still supports the overall near-term objectives of the organization, and properly balances the need for professional development against the day-to-day needs of the organization.

There are certifications for competence in many different products. Having such certifications is very valuable and gives a sense of an individual's skill-set. But how do you assess someone as a journeyman programmer, tester or systems engineer, or perhaps as a master in their chosen discipline? That kind of evaluation is overly subjective and places too much emphasis on "book knowledge" rather than the practical application of that knowledge to develop new, innovative solutions or approaches that the organization truly needs. In other words, how do you assess the knowledge, skills and abilities (KSAs) of a person to perform their job role? This assessment problem is two-fold:

- For a specific IT discipline, you need a comprehensive framework by which to understand the types of skills and knowledge expected at each level, from novice to expert.
- For each discipline, you also need a way to accurately assess the current ability of your technical staff members, to create the baseline from which you can develop their skills toward higher levels of proficiency. This not only helps the individual develop a realistic and achievable plan, but also gives you insight into where you have significant skills gaps in your organization.

Skills Framework for the Information Age (SFIA)

In 2003, a non-profit organization called the Skills Framework for the Information Age (SFIA) was founded; it provides a comprehensive framework of skills in IT technologies and disciplines based on a broad industry "body of knowledge." SFIA currently covers 97 professional skills required by professionals in roles involving information and communications technology. These skills are organized into six categories:

- Strategy and Architecture
- Change and Transformation
- Development and Implementation
- Delivery and Operation
- Skills and Quality
- Relationships and Engagement

Each skill is described at one or more of SFIA's seven levels of attainment, from novice to expert. Find out more about this framework here. Although the framework helps define the competencies you need, it doesn't tell you whether your workers have the skills that match them.

Building your own effective framework

Accurately assessing the current ability level of your technical staff members creates the baseline from which you can develop their skills to higher levels of proficiency. So, the best way to progress is to identify the goals of the team or organization and then build your own framework. How do we proceed?

List the roles within your team

To start, you need a list of the role types within your team. This isn't the same thing as listing every position on your org chart. You want to simplify the process by grouping like roles together.

List the skills needed for each role

Now that you've created a list of role types, the next step is to list the skills needed for each of these roles.
What do the skills look like? They could be behavioral, like "Listens to customer needs carefully to determine requirements," or they could be more technical, like this sample list of engineering skills:

- Writing quality code
- Design skills
- Writing optimal code
- Programming patterns

Once you have this list, it's a valuable resource in itself.

Create a survey

It's ideal if you can find out all of the relevant skills a person has, not just those for their current role. To do this, create a survey that makes it easy for your people to respond. This essentially means you need to keep it short and not ask the same question twice. To achieve this, the survey should group together each of the major role types. Use the list you created in step 2 as your starting point. Let's say you have an engineering group within your organization. It may have a number of different role types within it, but there are probably common skills across many of them. For example, many of the role types may require people to be skilled at programming. Rather than listing such skills more than once under each relevant role type, list them once under a common group heading.

Survey your workforce

With the survey designed, you are now ready to ask your workforce to respond to it. The size of your team and the number of roles will determine how you go about doing this. It's good practice to communicate with survey participants to explain why you are asking for their response and what will happen with the information.

Analyze the data

You can now reap the rewards of your skills audit process. You can analyze:

- The skill gaps in specific roles
- Skill gaps within teams or organization groups
- Potential successors for certain roles
- The number of people who have critical skills
- Future skill requirements

This assessment not only helps employees create realistic and achievable individual development plans, but also gives you insight into where you have significant skills gaps in your team or organization.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Raspberry Pi v Arduino - which one's right for me?

Raka Mahesa
20 Sep 2017
5 min read
Okay, so you’ve decided to be a maker and you’ve created a little electronic project for yourself. Maybe an automatic garage door opener, or maybe a simple media server for your home theatre. As you learn your way further into the DIY world, you realize that you need to decide on the hardware that will be the basis of your project. You’ve checked the Internet for help, and found the two popular hardware choices for DIY projects: the Raspberry Pi and the Arduino.

Since you're just starting out, it seems both choices serve the same purpose. They both can run the program needed for your project, and they both have a big community that can help you. So, which hardware should you choose? Before we can make that decision, we need to understand what each board actually is.

Let's start with the Raspberry Pi. To put it simply, the Raspberry Pi is a computer with a very, very small physical size. Despite its small size, the Raspberry Pi is actually a full-fledged computer capable of running an operating system and executing various programs. By connecting the mini-computer to a screen via an HDMI cable, and to an input device like a keyboard or a mouse, people are able to use the Raspberry Pi just like any other computer out there. The latest version even has wireless connectivity built right into the device, making it very easy for the hardware to be connected to the Internet.

So, what about the Arduino? The Arduino is a microcontroller board: an integrated circuit with a computing chipset capable of running a simple program. If smart devices are run by computer processors, then "dumb devices" are run by microcontrollers. These dumb devices include things like a TV remote, an air conditioner, a calculator, and other simple devices.

Okay, now that we have completed our crash course on both platforms, let's actually compare them, starting with the hardware. The Raspberry Pi is a full-blown computer, so it has most of the things you'd expect from a computer system. It has a quad-core ARM-based CPU running at 1,200 MHz, 1 GB of RAM, a microSD card slot for storage, 4 USB 2.0 ports, and it even has a GPU to drive the display output via an HDMI port. The Raspberry Pi is also equipped with a variety of modules that enable the hardware to easily connect to other devices like cameras and touchscreens. Meanwhile, the Arduino is a simple microcontroller board. It has a processor running at 16 MHz, a built-in LED, and a bunch of digital and analog pins to interface with other devices. The hardware also has a USB port that's used to upload a custom program onto the board.

Just from the hardware specifications alone we can see that the two are on totally different levels. The Raspberry Pi has a processor running at a 1,200 MHz clock, roughly similar to a low-end smartphone, whereas the processor in the Arduino only runs at 16 MHz. This means an Arduino board is only capable of running a simple program, while a Raspberry Pi can handle a much more complex one.

So far it seems that the Raspberry Pi is a much better choice for DIY projects. But we all know that a smartphone is also much more limited and slower than a desktop computer, yet no one would say that a smartphone is useless. To understand the strength of the Arduino, we need to look at and compare the software running each piece of hardware. Since the Raspberry Pi is a computer, the device requires an operating system to be able to function.
An operating system offers many benefits, like a built-in file system and multitasking, but it also has disadvantages: it needs to be booted up first, and programs require additional configuration to run automatically. On the other hand, an Arduino runs its own firmware that executes a custom, user-uploaded program as soon as the device is turned on. The software on an Arduino is much more limited, but that also means using it is pretty simple and straightforward.

This theme of simplicity versus complexity also extends to software development for both platforms. Developing software for the Raspberry Pi is complex, just like developing any computer software. Meanwhile, Arduino provides a development tool that allows you to quickly develop a program on your desktop computer and easily upload it to the Arduino board via a USB cable.

So with all that said, which hardware platform is the right choice? Well, it depends on your project. If your project simply reads sensor data and processes it, then the simplicity of the Arduino will help the development of your project immensely. If your project includes a lot of tasks and processes, like uploading data to the Internet, sending you emails, reading image data, and other things, then the power of the Raspberry Pi will help your project complete all those tasks successfully. And if you're just starting out and haven't really decided on your future project, I'd suggest you go with an Arduino. The simplicity and ease of use of an Arduino board makes it a really great learning tool, where you can focus on making new things instead of making your things work together.

About the Author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99

How to develop a game concept

Raka Mahesa
18 Sep 2017
5 min read
You may have an idea or a concept for a game, and you may want to make a full game based on that concept. Congratulations, you're now taking the first step in the game development process. But you may be unsure of what to do next with your game concept. Fortunately, that's what we're here to discuss.

How to find inspiration for a game idea

A game idea or concept can come from a variety of places. You may be inspired by another medium, such as a film or a book; you may have had an exciting experience and want to share it with others; you may be playing another game and think you can do better; or you may just have a sudden flash of inspiration out of nowhere. Because ideas can come from so many sources, they can take on a number of different forms and levels of robustness. So it's important to take a step back and have another look at this idea of yours.

How to create a game prototype

What should you do after your game concept has been fleshed out? Well, the next step is to create a simple prototype based on your game concept to see if it is viable and actually fun to play.

Wait, what if this is your first foray into game development and you barely have any programming skill? Fortunately, developing a game prototype is a good entry into the world of programming. There are many game development tools out there, like GameMaker, Stencyl, and Construct 2, that can help you quickly create a prototype without having to write too many lines of code. These tools are so useful that even seasoned programmers use them to quickly build a prototype.

Should I use a game engine to prototype?

Should you use full-featured, professional game engines for making a prototype? That's completely up to you, but one of the purposes of making a prototype is to be able to test out your ideas easily, so that when an idea doesn't work out, you can tweak it quickly. With a full-featured game engine, even though it's powerful, it may take longer to complete simple tasks, and you may end up not being able to iterate on your game quickly enough.

That's also why most game prototypes are made with just simple shapes or very simple graphics. Creating those kinds of graphics doesn't take a lot of time and allows you to iterate on your game concept quickly. Imagine you're testing out a game concept and find out that enemies that just randomly hop around aren't fun, so you decide to make those enemies simply run on the ground. If you're just using a red square for your hopping enemies, you can use the same square for running enemies. But if you're using, say, frog images for those enemies, you will have to switch to a different image when you want the enemies to run.

Why is prototyping so important in game development?

You may wonder why the emphasis is on creating a prototype instead of building the actual game. After all, isn't fleshing out a game concept supposed to make sure the game is fun to play? Unfortunately, what seems fun in theory may not actually be fun in practice. Maybe you thought that having jump stamina would make things more exciting for the player, but after prototyping such a system, you may discover that it actually slows things down and makes the game less fun.

Also, prototyping is not just useful for measuring a game's fun; it's also useful for making sure the player has the kind of experience the game concept is meant to deliver. Maybe you have an idea for a game where the hero fights many enemies at once so the player can experience an epic battle.
But after you prototype it, you find out that the game feels chaotic instead of epic. Fortunately, with a prototype you can quickly tweak the variables of your enemies to make the game feel more epic and less chaotic.

Using simple graphics

Using simple graphics is important for a game prototype. If players can have a good experience with a prototype that uses simple graphics, imagine the fun they'll have with the final graphics. Simple graphics are good because the experience the player feels comes from the game's functions, not from how the game looks.

Next steps

After you're done building the prototype and have proven that your game concept is fun to play, you can move on to the next step in the game development process. Your next step depends on the sort of game you want to make. If it's a massive game with many systems, you might want to create a proper game design document that includes how you want to expand the mechanics of your game. But if the game is on the small side with simple mechanics, you can start building the final product and assets.

Good luck on your game development journey!

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Ethereum Programming Update

Packt Publishing
17 Sep 2017
1 min read
18th September 2017

A book entitled "Ethereum Programming", purporting to be by Alex Leverington, was erroneously advertised for pre-order on several websites (including Amazon and our own website). This was our mistake. The book does not exist, and we take full responsibility for suggesting that it does and that it would be launched on 4 August 2017. We sincerely apologise for any disappointment and inconvenience caused by this unintentional oversight. Followers of Alex Leverington are directed to https://nessence.net/about/, where they can find information about his work, including any forthcoming titles.

Sincerely,
Packt Publishing Ltd

What is the difference between functional and object oriented programming?

Antonio Cucciniello
17 Sep 2017
5 min read
There are two very popular programming paradigms in software development that developers design and program to: object oriented programming and functional programming. You've probably heard these terms before, but what exactly are they, and what is the difference between functional and object oriented programming? Let's take a look.

What is object oriented programming?

Object oriented programming is a programming paradigm in which you program using objects to represent the things you are programming about (sometimes real world things). These objects could be data structures. The objects hold data about themselves in attributes, and those attributes are manipulated through methods or functions that are given to the object. For instance, we might have a Person object that represents all of the data a person would have: weight, height, skin color, hair color, hair length, and so on. Those would be the attributes. The Person object would also have things that it can do, such as: pick a box up, put a box down, eat, sleep, and so on. These would be the functions that work with the data the object stores.

Engineers who program using object oriented design say that it is a style of programming that lets you model real world scenarios much more simply. This allows for a good transition from requirements to code that works the way the customer or user wants it to. Some examples of object oriented languages include C++, Java, Python, C#, Objective-C, and Swift. Want to learn object oriented programming? We recommend you start with Learning Object Oriented Programming.

What is functional programming?

Functional programming is a form of programming that attempts to avoid changing state and mutable data. In a functional program, the output of a function should always be the same given the same exact inputs to the function. This is because the output of a function in functional programming relies purely on the arguments of the function, and there is no magic happening behind the scenes. This is called eliminating side effects in your code. For example, if you call a function getSum() that calculates the sum of two inputs and returns the sum, then given the same inputs for x and y, we will always get the same output for the sum. This makes the behavior of a program extremely predictable. Each small function does its part and only its part. It allows for very modular and clean code that all works together in harmony, and it also makes unit testing easier. Some examples of functional programming languages include Lisp, Clojure, and F#.

Problems with object oriented programming

There are a few problems with object oriented programming. Firstly, it is known to be less reusable: because some of your functions depend on the class that is using them, it is hard to use those functions with another class. It is also known to be typically less efficient and more complex to deal with. Plenty of times, object oriented designs are made to model large architectures and can become extremely complicated.

Problems with functional programming

Functional programming is not without its flaws either. It really takes a different mindset to approach your code from a functional standpoint. It's easy to think in object oriented terms, because it is similar to how the thing being modeled happens in the real world. Functional programming is all about data manipulation, and converting a real world scenario to just data can take some extra thinking.
Due to the difficulty of learning to program this way, fewer people program in this style, which can make it harder to collaborate with someone else or to learn from others, because there is naturally less information on the topic.

A comparison between functional and object oriented programming

Both paradigms share the goal of creating easily understandable programs that are free of bugs and can be developed quickly, but they differ in how data is stored and manipulated. In object oriented programming, you store data in the attributes of objects and have functions that belong to the object and do the manipulation. In functional programming, we view everything as a data transformation: data is not stored in objects, it is transformed by creating new versions of it and manipulating it using one of the many functions.

I hope you now have a clearer picture of the difference between functional and object oriented programming. They can be used separately, or mixed to some degree to suit your needs. Ultimately you should take into consideration the advantages and disadvantages of both before making that decision.

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here.
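To make the contrast above concrete, here is a minimal Python sketch. The Person class and the get_sum function are hypothetical illustrations of the examples mentioned in the article, not code from any particular library.

# Object oriented style: data lives in the object's attributes,
# and methods operate on (and may mutate) that state.
class Person:
    def __init__(self, height_cm, weight_kg):
        self.height_cm = height_cm
        self.weight_kg = weight_kg
        self.holding_box = False

    def pick_box_up(self):
        self.holding_box = True   # changes the object's state

# Functional style: the output depends only on the arguments,
# with no hidden state and no side effects.
def get_sum(x, y):
    return x + y

person = Person(180, 75)
person.pick_box_up()
print(person.holding_box)   # True
print(get_sum(2, 3))        # always 5 for the inputs 2 and 3

The object oriented version models the real world thing directly, while the functional version is just a predictable transformation of its inputs, which is what makes it so easy to unit test.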

Is Facebook-backed PyTorch better than Google's TensorFlow?

Savia Lobo
15 Sep 2017
6 min read
The rapid rise of tools and techniques in Artificial Intelligence and Machine Learning of late has been astounding. Deep Learning, or "machine learning on steroids" as some say, is one area where data scientists and machine learning experts are spoilt for choice in terms of the libraries and frameworks available. Two libraries are starting to emerge as frontrunners: TensorFlow is the best in class, but PyTorch is a new entrant in the field that could compete. So, PyTorch vs TensorFlow, which one is better? How do the two deep learning libraries compare to one another?

TensorFlow and PyTorch: the basics

Google's TensorFlow is a widely used machine learning and deep learning framework. Open sourced in 2015 and backed by a huge community of machine learning experts, TensorFlow has quickly grown to be THE framework of choice of many organizations for their machine learning and deep learning needs. PyTorch, on the other hand, a recently developed Python package by Facebook for training neural networks, is adapted from the Lua-based deep learning library Torch. PyTorch is one of the few available DL frameworks that uses a tape-based autograd system to allow building dynamic neural networks in a fast and flexible manner.

PyTorch vs TensorFlow

Let's get into the details - let the PyTorch vs TensorFlow match up begin...

What programming languages support PyTorch and TensorFlow?

Although primarily written in C++ and CUDA, TensorFlow contains a Python API sitting over the core engine, making it easier for Pythonistas to use. Additional APIs for C++, Haskell, Java, Go, and Rust are also included, which means developers can code in their preferred language. Although PyTorch is a Python package, there's provision for you to code using the basic C/C++ languages using the APIs provided. If you are comfortable using the Lua programming language, you can code neural network models in PyTorch using the Torch API.

How easy are PyTorch and TensorFlow to use?

TensorFlow can be a bit complex to use as a standalone framework, and can pose some difficulty in training deep learning models. To reduce this complexity, one can use the Keras wrapper, which sits on top of TensorFlow's complex engine and simplifies the development and training of deep learning models. TensorFlow also supports distributed training, which PyTorch currently doesn't. Due to the inclusion of the Python API, TensorFlow is also production-ready, i.e., it can be used to train and deploy enterprise-level deep learning models.

PyTorch was rewritten in Python due to the complexities of Torch. This makes PyTorch more native to developers. It has an easy to use framework that provides maximum flexibility and speed. It also allows quick changes within the code during training without hampering performance. If you already have some experience with deep learning and have used Torch before, you will like PyTorch even more, because of its speed, efficiency, and ease of use. PyTorch includes a custom-made GPU allocator, which makes deep learning models highly memory efficient. Because of this, training large deep learning models becomes easier. Hence, large organizations such as Facebook, Twitter, Salesforce, and many more are embracing PyTorch. In this round of PyTorch vs TensorFlow, PyTorch wins out in terms of ease of use.

Training Deep Learning models with PyTorch and TensorFlow

Both TensorFlow and PyTorch are used to build and train neural network models.
TensorFlow works with a static computational graph (SCG): the graph is defined in full before the model starts executing. Once execution starts, the only way to feed in new values or tweak the model is through tf.Session runs and tf.placeholder tensors. PyTorch is well suited to training recursive neural networks (RNNs), which run faster in PyTorch than in TensorFlow. It works with a dynamic computational graph (DCG), so you can define and change the model on the go. In a DCG, each block can be debugged separately, which makes training neural networks easier. TensorFlow has recently come up with TensorFlow Fold, a library designed to create TensorFlow models that work on structured data. Like PyTorch, it implements dynamic computational graphs, and it gives massive computational speedups of up to 10x on CPU and more than 100x on GPU. With the help of dynamic batching, you can now implement deep learning models that vary in size as well as structure.
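To make the dynamic-graph idea concrete, here is a minimal, hypothetical sketch (the class name, sizes, and data are invented for illustration): the forward pass loops over however many time steps the input happens to have, using plain Python control flow - something a purely static graph can only express through dedicated constructs such as tf.while_loop.

```python
import torch
import torch.nn as nn

class TinyRecurrentScorer(nn.Module):
    """Scores a variable-length sequence one time step at a time."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.Linear(dim * 2, dim)   # combines the previous state with the next step
        self.readout = nn.Linear(dim, 1)      # maps the final state to a single score

    def forward(self, sequence):              # sequence: (length, dim), any length
        state = torch.zeros(sequence.size(1))
        for step in sequence:                 # ordinary Python loop over time steps
            state = torch.tanh(self.cell(torch.cat([state, step])))
        return self.readout(state)

model = TinyRecurrentScorer(dim=8)
short_seq = torch.randn(3, 8)                 # 3 time steps
long_seq = torch.randn(11, 8)                 # 11 time steps - same model, no graph rebuilding
print(model(short_seq).shape, model(long_seq).shape)
```

Because the graph is recorded anew on every forward pass, inputs of different lengths and structures need no special handling - the kind of flexibility that dynamic batching in TensorFlow Fold tries to recover for static graphs.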
Comparing GPU and CPU optimizations

TensorFlow has faster compile times than PyTorch and provides flexibility for building real-world applications. It can run on virtually any kind of processor, from CPUs, GPUs, and TPUs to mobile devices and even a Raspberry Pi (IoT devices). PyTorch, on the other hand, includes tensor computations that can speed up deep neural network models by 50x or more using GPUs. These tensors can live on the CPU or the GPU, and the CPU and GPU backends are written as independent libraries, making PyTorch efficient to use irrespective of the size of the neural network.

Community Support

TensorFlow is one of the most popular deep learning frameworks today, and with that comes huge community support. It has great documentation and an extensive set of online tutorials. TensorFlow also includes numerous pre-trained models, hosted and available on GitHub, which give developers and researchers keen to work with TensorFlow some ready-made material to save time and effort. PyTorch, by comparison, has a relatively small community, since it was developed fairly recently. Its documentation isn't as extensive, and ready-made code samples are harder to find. However, PyTorch does allow individuals to share their pre-trained models with others.

PyTorch and TensorFlow - A David & Goliath story

As it stands, TensorFlow is clearly favoured and used more than PyTorch, for a variety of reasons. TensorFlow is best suited to a wide range of practical purposes. It is the obvious choice for many machine learning and deep learning experts because of its vast array of features, and its maturity in the market matters too. It has better community support, multiple language APIs, good documentation, and is production-ready thanks to the availability of ready-to-use code. Hence, it is better suited for someone who wants to get started with deep learning, or for organizations wanting to productize their deep learning models.

PyTorch is relatively new and has a smaller community than TensorFlow, but it is fast and efficient. In short, it gives you all the power of Torch wrapped in the usefulness and ease of Python. Because of its efficiency and speed, it's a good option for small, research-based projects. As mentioned earlier, companies such as Facebook, Twitter, and many others are using PyTorch to train deep learning models, but its adoption is yet to go mainstream. The potential is evident, but PyTorch is not yet ready to challenge the beast that is TensorFlow. However, considering its growth, the day may not be far off when PyTorch is further optimized and offers more functionality - to the point where it becomes the David to TensorFlow's Goliath.


6 reasons why Google open-sourced TensorFlow

Kunal Parikh
13 Sep 2017
7 min read
On November 9, 2015, a storm loomed over the SF Bay Area, creating major outages. In Mountain View, California, Google engineers were busy creating a storm of their own. That day, Sundar Pichai announced to the world that TensorFlow, Google's machine learning system, was going open source. He said: "...today we're also open-sourcing TensorFlow. We hope this will let the machine learning community—everyone from academic researchers, to engineers, to hobbyists—exchange ideas much more quickly, through working code rather than just research papers." The tech world may not have fully grasped the gravity of the announcement that day, but those in the know understood it was a pivotal moment in Google's transformational journey into an AI-first world.

How did TensorFlow begin?

TensorFlow grew out of DistBelief, an earlier Google system that powered a program called DeepDream. The program was built for scientists and engineers to visualise how deep neural networks process images. As fate would have it, the algorithm went viral, and people everywhere started using it to generate abstract, psychedelic art. Although people were having fun playing with image forms, most were unaware of the technology that powered those images - neural networks and deep learning - the very things TensorFlow was built for.

TensorFlow is a machine learning platform that can run a wide range of algorithms, including the neural network and deep learning workloads mentioned above. With its flexibility, high performance, portability, and production-readiness, TensorFlow is changing the landscape of artificial intelligence and machine learning. Be it face recognition, music and art creation, or detecting clickbait headlines for blogs, the use cases are immense. With Google open sourcing TensorFlow, the platform that powers Google Search and other smart Google products is now accessible to everyone - researchers, scientists, machine learning experts, students, and others.

So why did Google open source TensorFlow?

Yes, Google made a world of difference to the machine learning community at large by open sourcing TensorFlow. But what was in it for Google? As it turns out, a whole lot. Let's look at a few reasons.

Google is feeling the heat from rival deep learning frameworks

Major deep learning frameworks like Theano, Keras, and others were already open source. Keeping a framework proprietary was becoming a strategic disadvantage, as most core deep learning users - scientists, engineers, and academics - prefer using open source software for their work. "Pure" researchers and aspiring PhDs are key groups that file major patents in the world of AI. By open sourcing TensorFlow, Google gave this community access to a platform it backs to power their research. This makes migrating the world's algorithms from other deep learning tools onto TensorFlow theoretically possible. AI as a trend is clearly here to stay, and Google wants a platform that leads it.

An open source TensorFlow can better support the Google Brain project

Behind all the PR, Google does not say much about its pet project, Google Brain. When Sundar Pichai talks of Google's transformation from Search to AI, this project is doing the work behind the scenes. Google Brain counts some of the best minds in the industry among its ranks - Jeff Dean, Geoffrey Hinton, and Andrew Ng, among many others. They developed TensorFlow, and they might still have state-of-the-art features up their sleeves known only to them.
After all, they have done a plethora of stunning research in areas like parallel computing, machine intelligence, natural language processing, and more. With TensorFlow now open sourced, this team can accelerate development of the platform and make significant inroads into the areas they are currently researching. That research can then develop into future products for Google, allowing it to expand its AI and cloud clout, especially in the enterprise market.

Tapping into the collective wisdom of the academic intelligentsia

Most innovations and breakthroughs come from universities before they go mainstream and become major products in enterprises. AI is still making this transition and will need a lot of investment in research. To work on difficult algorithms, researchers need access to sophisticated ML frameworks. Selling TensorFlow to universities is the old-school way to solve the problem - that's why we no longer hear about products like LabVIEW. Instead, by open sourcing TensorFlow, the team at Google now has the world's best minds working on difficult AI problems on its platform for free. As these researchers write AI papers using TensorFlow, they keep adding to the existing body of knowledge. Google, in turn, gains access to bleeding-edge algorithms that are not yet available on the market; its engineers can simply pick and choose what they like and start developing commercially ready services.

Google wants to develop TensorFlow as a platform-as-a-service for AI application development

An advantage of open sourcing a tool is that it accelerates time to build and test through collaborative app development. This means most of the basic infrastructure and modules needed to build a variety of TensorFlow-based applications will already exist on the platform. TensorFlow developers can develop and ship interesting modular products by mixing and matching code and providing a further layer of customization or abstraction. What Amazon did for storage with AWS, Google can do for AI with TensorFlow. It won't come as a surprise if Google comes up with its own integrated AI ecosystem, with TensorFlow on Google Cloud promising all the AI resources your company would need. Suppose you want a voice-based search function in your ecommerce mobile application. Instead of completely reinventing the wheel, you could buy TensorFlow-powered services from Google. With easy APIs, you get voice-based search and save substantial developer cost and time.

Open sourcing TensorFlow will help Google extend its talent pipeline in a competitive Silicon Valley jobs market

Hiring for AI development is competitive in Silicon Valley, as all the major companies vie for attention from the same niche talent pool. With TensorFlow freely available, Google's HR team can quickly reach out to a talent pool already well versed in the technology and save on training costs. Just look at the interest TensorFlow has generated on a forum like Stack Overflow: a growing number of users are asking questions about TensorFlow, and some of them will grow into power users whom the Google HR team can tap. A developer pool at this scale would never have been possible with a proprietary tool.

Replicating the success and learning from Android

Agreed, a direct comparison with Android is not possible.
However, the size of the mobile market and Google's strategic goal of mobile-first when it introduced Android bear a striking similarity to the nascent AI ecosystem we have today and Google's current AI-first rhetoric. Within a decade of its launch, Android came to own more than 85% of the smartphone OS market. Piggybacking on Android's success, Google now controls mobile search (96.19%), services (Google Play), a strong connection with the mobile developer community, and even a viable entry into the mobile hardware market. Open sourcing Android did not stop Google from making money: it monetized in other ways, through mobile search, mobile advertising, Google Play, devices like the Nexus, mobile payments, and more. Google did not have all of this infrastructure planned and ready before Android was open sourced - it innovated, improvised, and created along the way. We can expect Google to take key learnings from its Android growth story and apply them to TensorFlow's market expansion strategy, and we can expect supporting infrastructure and business models for commercialising TensorFlow to emerge for enterprise developers.

The road to AI world domination for Google runs on the back of an open sourced TensorFlow platform. It looks not just exciting but also promises exponential growth, crowdsourced innovation, and learnings drawn from other highly successful Google products and services. The storm that started two years ago is surely morphing into a hurricane. As Professor Michael Guerzhoy of the University of Toronto put it in Business Insider, "Ten years ago, it took me months to do something that for my students takes a few days with TensorFlow."


Will data scientists become victims of automation?

Erik Kappelman
10 Sep 2017
5 min read
As someone who considers themselves something of a data scientist, this is an important question for me. Unfortunately, the answer is: it depends. It is true that some data scientists will be automated out of their usefulness. I'm not a fan of the term 'data scientist' for a whole bunch of reasons, not least its variable definition. For the purposes of this piece, we will use Wikipedia's definition: "[Data science] is an interdisciplinary field about scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured." In short, data scientists are people who practice data science (mind blown, I know).

Data science defined

Data science can be broadly defined to consist of three categories or processes: data acquisition or mining, data management, and data analysis. At first blush, data scientists don't seem all that automatable. For one thing, data scientists already use automation to great effect, yet they are still involved in the process because of the creativity that success requires.

Producing creativity

In my opinion, creativity is the greatest defense against automation. Although computer technology will get there eventually, producing true creativity is pretty far down the line toward complete artificial intelligence. By the time we get there, we should probably be worried about SkyNet and not job security. At present, automation is best applied to predictable, repeated tasks. If we look at the three elements of data science mentioned earlier and broadly apply this criterion for likelihood of automation, we might be able to partially answer the title question.

Data mining

Data mining is simultaneously ripe for the picking by automators and a likely no-automation stronghold, at least in part. Data mining consists of a variety of processes that are often quite tedious, and there is a lot of redundancy or repetition in performing them. Let's say there is a government agency collecting metadata on every phone call placed inside a country. Using any number of data mining techniques, a data scientist could use this metadata to pick out all kinds of interesting things, such as relationships between where calls are made and who the calls are made to. Most of this mining would be performed by algorithms that repeatedly comb new and old data, connecting points and creating usable information from a seemingly infinite pile of individually useless phone call metadata. Much of this process is already automated, but somebody is still there to create and implement the algorithms at the core of the automation. These algorithms might be specifically or generally focused, and they may need to change as the needs of the agency change. So, even if the process is highly automated, data scientists will still have to be involved in the short to medium term.
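As a toy, hypothetical sketch of the kind of mining just described (the record fields and numbers are invented for illustration), the automated part is little more than repeatedly combing the records and counting relationships; deciding what to count, and what the counts mean, is where the data scientist still comes in.

```python
from collections import Counter

# Invented call metadata records - in reality these would stream in continuously.
call_records = [
    {"caller": "555-0101", "callee": "555-0199", "city": "Helena"},
    {"caller": "555-0101", "callee": "555-0199", "city": "Helena"},
    {"caller": "555-0142", "callee": "555-0101", "city": "Butte"},
]

# Comb the pile and tally who talks to whom, and where calls originate.
pair_counts = Counter((r["caller"], r["callee"]) for r in call_records)
city_counts = Counter(r["city"] for r in call_records)

# The "usable information": the strongest relationships hiding in the metadata.
print(pair_counts.most_common(1))   # [(('555-0101', '555-0199'), 2)]
print(city_counts.most_common(1))   # [('Helena', 2)]
```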
Data analysis

Data analysis sits in a similar place to data mining in terms of its likelihood of automation. It requires a lot of creativity up front and at the end: analysts need to come up with a plan to analyze the data, implement the plan, and then interpret the results in a meaningful way. Right now, the easiest part of this process to automate is the implementation. Eventually, artificial intelligence will advance enough that AIs can plan, implement, and interpret data analysis completely, with no human involvement. I think this is still a long way off (decades, even), and again, keep SkyNet in mind (one can never be too careful).

Data management

Data management seems like it should already be automated. The design of databases does take plenty of creativity, but it's the creative implementation of fairly rigid standards - a level of creativity that automation can currently handle. Data storage, queries, and other elements of data management are already well within the grasp of automation routines. So, if there is one area of data science that is going to go before the rest, it is definitely data management.

Victims of automation

So the answer is yes, data scientists will most likely become victims of automation, but when this happens depends on their specialty, or at least their current work responsibilities. This is really true of almost all jobs, so it's not a very illuminating answer. I would say, however, that data science is a pretty safe bet if you're worried about automation. Many other professions will be completely gone - I'm looking at you, automated car developers - by the time data scientists even begin to come under fire. Data scientists will become unemployed around the same time lower-skilled computer programmers and system administrators are heading to the unemployment line, and a few data scientists will continue to be employed until the bitter end. I believe this is true of any profession that involves the creative use of technology.

Data scientists don't need to start circulating their resumes just yet. I believe the data science industry will grow for at least the next two decades before automation begins to take its toll. Many data scientists, through their contributions to automation, will actually be the cause of other people - and perhaps themselves - losing their jobs. But to end on a happy note, suffice to say that data science is certainly safe for the near future.

About the Author

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.

Is data science getting easier?

Erik Kappelman
10 Sep 2017
5 min read
The answer is yes, and no. This question could just as easily have been asked of textile manufacturing in the 1890s, and it would have received a similar answer. By this I mean that textile manufacturing improved by leaps and bounds throughout the Industrial Revolution; yet, despite their productivity, textile mills were some of the most dangerous places to work. Before I explain my answer further, let's agree on a definition of data science. Wikipedia defines data science as "an interdisciplinary field about scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured." I see this as the process of acquiring, managing, and analyzing data.

Advances in data science

First, let's discuss why data science is definitely getting easier. Advances in technology and data collection have made data science easier. For one thing, data science as we know it wasn't even possible 40 years ago; advanced technology now lets us analyze, gather, and manage data in completely new ways. Scripting languages like R and Python have mostly replaced more convoluted languages like Haskell and Fortran in the realm of data analysis. Tools like Hadoop bring together a lot of different functionality to expedite every element of data science. Smartphones and wearable tech collect data more effectively and efficiently than older methods, giving data scientists more data of higher quality to work with. Perhaps most importantly, the utility of data science has become more and more widely recognized, which helps provide data scientists the support they need to be truly effective. These are just some of the reasons why data science is getting easier.

Unintended consequences

While many of these tools make data science easier in some respects, there are also unintended consequences that might actually make it harder. Improved data collection has been a boon for the data science industry, but using the data that is streaming in is like drinking from a firehose. Data scientists are continually required to come up with more complicated ways of taking data in, because the stream of data has become incredibly strong. While R and Python are definitely easier to learn than older alternatives, neither language is usually accused of being parsimonious: what a skilled Haskell programmer might be able to do in 100 lines might take a less skilled Python scripter 500. Hadoop and tools like it simplify the data science process, but it seems like there are 50 new tools like Hadoop every day. While these tools are powerful and useful, data scientists sometimes spend more time learning about tools and less time doing data science, just to keep up with the industry's landscape. So, like many other fields related to computer science and programming, new technology is simultaneously making things easier and harder.

Golden age of data science

Let me rephrase the title question to provide even more illumination: is now the best time to be a data scientist, or to become one? The answer is a resounding yes. While all of the drawbacks I brought up remain true, I believe we are in a golden age of data science, for all of the reasons already mentioned and more. We have more data than ever before, and our data collection abilities are improving at an exponential rate.
The current situation has gone so far as to create the need for a whole new field of data analysis, Big Data. Data science is one of the vastest and most quickly expanding human frontiers at present. Part of the reason is what data science can be used for: it can effectively answer questions that previously went unanswered, which of course makes it an attractive field of study from a research standpoint.

One final note on whether or not data science is getting easier. If you are a person who actually creates new methods or techniques in data science, especially if you need to support those methods and techniques with formal mathematical and scientific reasoning, data science is definitely not getting easier for you. As I just mentioned, Big Data is a whole new field of data science created to deal with new problems caused by the efficacy of new data collection techniques. If you are a researcher or academic, all of this means a lot of work. Bootstrapped standard errors were used in data analysis before a formal proof of their legitimacy existed. Data science techniques might move at the speed of light, but formalizing and proving them can take lifetimes. So if you are a researcher or academic, things will only get harder. If you are more of a practical data scientist, it may be getting slightly easier for now - but there's always something!

About the Author

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.


The biggest cloud adoption challenges

Rick Blaisdell
10 Sep 2017
3 min read
The cloud technology industry is growing rapidly as companies come to understand the profitability and efficiency benefits that cloud computing can provide. According to the IDG Enterprise Cloud Computing Survey, 70 percent of U.S. companies have at least one application in the cloud, running on public, private, or mixed cloud models; according to the Building Trust in a Cloudy Sky survey, almost 93 percent of organizations worldwide use cloud services. Even though cloud adoption is increasing, it's important that companies develop a strategy before moving their data and using cloud technology to increase efficiency. This strategy is especially important because transitioning to the cloud is often a challenging process. If you're thinking of making this transition, here is a list of cloud adoption challenges you should be aware of.

Technology

It's important to take into consideration the complex issues that can arise with new technology. For example, some applications are not built for the cloud, or carry compliance requirements that cannot be met in a pure cloud environment. In such cases, a solution could be a hybrid environment with appropriately configured security controls.

People

Moving to the cloud can be met with resistance, especially from people who have spent most of their time managing physical infrastructure. The largest organizations will have the longest transitions to full cloud adoption, while small, tech-savvy companies will have an easier time making the change. Most modern IT departments will choose an agile approach to cloud adoption, although some employers might not be that experienced in these types of operational changes. The implementation takes time, but existing operating models can be transformed to make the cloud more approachable for the company.

Psychological barriers

Psychologically, there will be many questions. Will the cloud be more secure? Can I maintain my SLAs? Will I find the right technical support services? In 2017, cloud providers can meet all of those expectations and, at the same time, reduce overall expenses.

Costs

Many organizations that decide to move to the cloud do not estimate costs properly. Even though the pricing seems simple, the more moving parts there are, the greater the likelihood of incorrect cost estimates. When starting a cloud migration, look for tools that will help you estimate cloud costs and ROI while taking all possible variables into consideration.

Security

One of a CIO's biggest concerns when it comes to moving to the cloud is security and privacy. The management team needs to know whether the cloud provider they plan to work with has a bulletproof environment. This is a big challenge, because a data breach could not only put the company's reputation at risk but also result in a huge financial loss.

The first step in adopting cloud services is being able to identify all of the challenges that come with the process. It is essential to work with the cloud provider to facilitate a successful cloud implementation. Are there any challenges you consider crucial to a cloud transition? Let us know what you think in the comments section.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.