
Tech Guides


Why is everyone calling themselves a software company?

Hari Vignesh
12 Jun 2017
5 min read
Consider the number of startups today: by some estimates, more than 400 million entrepreneurs are trying to form over 300 million startups annually, and the majority of these startups are software companies. Why has software come to occupy so much of the market? Or is that just a myth? Let's take a look by analyzing the pros and cons of the software industry.

Why is the software market great?

Though the software sector has faced setbacks, it has always been like a horse that gets back up quickly. Ever since the revolution began in Silicon Valley, this sector has helped a lot of people in terms of employment, productivity, and assistance. It has other advantages as well.

The need is still there

The need for software never depletes, because of software's virtual nature. Being virtual makes it flexible, customizable, and scalable; it can be transformed into, or created as, almost anything. For every business there can be a software solution, and software has repeatedly proved to be the best one. As long as that need exists, this industry is going to rule the world.

Humongous talent pool

When there are more talented and skilled people, the industry's roadmap is set in motion for generations. The software industry holds an ecosystem of skilled personnel who strive to make an impact, and it also attracts many engineers with its generous benefits.

Accelerated results

Software businesses produce accelerated results. Whether the outcome is success or failure, you will usually encounter it within the first few months of starting. This is another factor I admire about the industry: failing fast generates the data you need for your next step.

Attractive funding

Many investors are ready to finance tech-based startups because of those accelerated results. Either the startups grow 5x to 10x, or they fail quickly, which limits the investors' losses.

Initial investment is less

This is not the case for all tech startups, but some are created by engineers themselves, and for a pure software business the cost of creating and deploying the software is low. For example, in a B2C app business where the co-founders are developers, the server cost is the only initial monetary investment (time and effort are investments too, but the context here is money).

Inspiring stories

Many successful personalities have created their own milestones, and people are genuinely inspired by them. Every tech entrepreneur wants to be the next millionaire and strives for that goal. Great leaders have built very big empires in this industry, and those empires are still growing. The point is that no limit has been set for this industry; it expands into a new dimension every year.

Challenges in the software business

Though the factors above may give you goosebumps, this industry has its own risks too. Whether you treat them as risks or as challenges depends on your perspective.

Market timing

There is something called market timing, and the entire industry depends on it. Even if your business idea is awesome, your funding is sufficient, and your product is great, if the market is not open at that time you will end up dismantled. Market need matters a lot, and people succeed only when their product is really needed.

Uncertainty is high

The software market has seen strong growth, but the uncertainty is high: a downturn or recession cannot be predicted easily, and the aftermath can be catastrophic. The business also tends to be very competitive; your product can be overthrown by a competitor within a few years.

Not as easy as you think

Surviving in this industry is not as easy as you might think. Though there are many success stories, among the millions of startups the majority have failed. The success rate of the masses is always low, and it takes great effort to be a rock star in this industry. I feel this is because of the myth circulating recently in the industry, that everyone can be the young millionaire or the CEO. That myth makes your journey very tough if you lack the necessary skills or are overambitious.

Considering all of these factors, maybe starting a software business isn't a bad idea after all. Though it has its ups and downs, it has continuously inspired a lot of lives. No business journey is easy, and every one of them is full of challenges.

"Computers themselves, and software yet to be developed, will revolutionize the way we learn." —Steve Jobs

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

When to buy off-the-shelf software and when to build it yourself

Hari Vignesh
12 Jun 2017
5 min read
Balancing your company's needs for profitability, productivity, and scalability is both paramount and challenging, especially if your business is a startup. There will always be a fork in the road where you must pick one path: develop the software yourselves, or buy it. Let me make it simple for you. Both options have their own pros and cons, and it is entirely up to you to weigh the trade-offs and reach a conclusion.

When to buy off-the-shelf software?

Buying software is quite useful for small-scale startups when the business is not tightly coupled to its technology. When you don't need to worry about dynamic changes in the business, and the software is just another utility over a span of five or more years, buying is a great idea. But let's also discuss a few other circumstances.

Budget limitations

Building new software and maintaining it costs more than buying a production-ready product. Canned solutions are cheaper than building your own, and therefore make much more financial sense for a company with a smaller budget.

Lack of technical proficiency

If you don't have an engineering team to build the software in the first place, hiring one and crafting a quality product will cost you a lot. It would be wise to pass on the opportunity and buy, until you have such a team in place.

Time constraints

Time is a crucial factor for all businesses. You need to check whether you have a sufficient window for creating proprietary software; if not, tailor-made software is a poor choice, considering the design, development, and maintenance periods involved. Businesses that do not have this time available should not pursue it.

Open source

If the tool or software you're looking for already exists in the open source world, adopting it will be very cost-efficient, even where a licensing fee applies. Open source software is great, handy, and can be tailored or customized to your business needs, although you cannot sell it. If productivity alone matters for the new software, using an open source product will really benefit you.

Not reinventing the wheel

If software for your business case is already production-ready somewhere else, reinventing the wheel is a waste of time and money. If you run a common business, like a restaurant, there are canned solutions available that are already proven effective for your organization's purpose. Buying and using them is far more effective than building your own.

Business case and competition

If your business is, say, a retail furniture store, building amazing technology is unlikely to be what sets you apart from your competition. Recognizing the actual needs of your business case is important before spending money and effort on custom software.

When to build it yourself?

Building software will cost you time and money, so you need to decide whether it is worth it. Let's discuss this in detail.

Not meeting your expectations

Even if canned software is available for purchase, if you strongly feel it does not meet your needs, you are left with the option of creating it yourself. Try customizing open source software first, if any exists. If your business has specialized needs, custom software may be better qualified to meet them.

Not blending with the existing system

If you already have a system in place, you need to ensure that the new software can work with it, or take over where the existing system left off, whether that means database migration, architectural integration, and so on. If the two programs do not communicate effectively, they may hinder your efficiency. If you build your own software, you can integrate with a wider set of APIs from different software and data partners.

More productivity

When you have enough money to invest and your focus is squarely on productivity, building custom software can help your team be flexible and work smarter and faster, because you know exactly what you want. Training people on canned software costs time, and there is also the issue of human error. You can create one comprehensive technology platform instead of using multiple different programs; an integrated platform can yield major efficiency gains, since all the data is in one place and users do not have to switch between different tools as part of their workflow.

Competitive advantage

When you rely on the same canned software as your rival, it is more difficult to outperform them (outperforming doesn't depend entirely on this, but it has an impact). By designing software ideally suited to your specific business operations, you can gain a competitive advantage over your competitors, and that advantage grows as you invest more heavily in your proprietary systems.

As mentioned, deciding whether to buy software or build it yourself is entirely up to you. At the end of the day, you're looking for software to help grow your business, so the goal should be measurable ROI. Focus on the ROI, and it will help you narrow things down to a conclusion.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Why containers are driving DevOps

Diego Rodriguez
12 Jun 2017
5 min read
It has been a long ride since the days when one application would take up a full room of computing hardware. Research and innovation in information technology (IT) have taken us far, and will surely keep moving even faster every day. Let's talk a bit about the present state of DevOps, and how containers are driving the scene.

What are containers?

According to Docker (the most popular container platform), a container is a stand-alone, lightweight package that has everything needed to execute a piece of software. It packs your code, runtime environment, system tools, libraries, binaries, and settings. It's available for Linux and Windows apps, and it runs the same every time regardless of where you run it. It adds a layer of isolation, helping reduce conflicts between teams running different software on the same infrastructure.

Containers sit one level deeper in the virtualization stack, allowing lighter environments, more isolation, more security, more standardization, and many other blessings. There are tons of benefits you could take advantage of. Instead of virtualizing the whole operating system (as virtual machines [VMs] do), containers share most of the host system's core and add only the required binaries and libraries that the host lacks; no more gigabytes of disk space lost to bloated operating systems full of repeated material. This means a lot of things: your deployments ship in a much smaller image than they would alone in a full operating system, each deployment boots up far faster, idle resource usage is lower, and there is less configuration and more standardization (remember "convention over configuration"). Fewer things to manage and more isolated apps mean fewer ways to screw something up, which means less attack surface, which in turn means more security.

Considerations

But keep in mind that not everything is perfect, and there are many factors you need to take into account before entering the containerization realm. It has been less than 10 years since containerization took off, and in the technology world that is a long time, considering how fast other technologies such as web front-end frameworks and artificial intelligence (AI) are moving. In just a few years this widely deployed technology has become mature and production-ready; coupled with microservices, the boost has carried it into new parts of the DevOps world, and it is now the de facto solution for many companies' application and service deployment flows. Just before all this exciting movement started, VMs were the go-to answer for many of the problems IT people (myself included) encountered. And although VMs are a great way to solve many of these problems, there was still room for improvement.

Nowadays the horizon looks really promising, with top technology companies backing tools, frameworks, services, and products all around containers, benefiting most of the code we develop, test, debug, and deploy on a daily basis. Thanks to the work of many, it's now possible to have a consistent, lightweight way to run, test, debug, and deploy code from whichever platform you work on. So if you code on Linux using Vim, but your coworker uses VS Code on Windows, both of you can have the same local container with the same binaries and libraries where the code is run. This removes a lot of incompatibility issues and lets teams reproduce production environments on their own machines, without worrying about sharing the same configuration files, misconfiguration, versioning hassles, and so on. It gets even better: not only is there no need to maintain the same configuration files across different services, there is less configuration to handle as a whole.
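As a sketch of what such a template can look like, here is a minimal Dockerfile for a hypothetical Node.js service; the base image tag and the server.js entry point are illustrative assumptions, not details from this article:

```dockerfile
# Illustrative sketch: image tag and entry point are assumptions.
FROM node:18-alpine
WORKDIR /app
# Copy the dependency manifest first so Docker's layer cache can skip
# the install step when only application code changes.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this image (`docker build -t my-service .`) and running it (`docker run -p 3000:3000 my-service`) gives every teammate the same binaries and libraries, regardless of the host OS.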
Templates do most of the work for us, allowing you and your team to focus on creating and deploying your products, improving and iterating your services, and changing and enhancing your code. In less than 10 lines you can specify a working template containing everything needed to run a simple Node.js service, a Ruby on Rails application, or a Scala cron job. Containerization supports most, if not all, languages and stacks.

Containers and virtualization

Virtualization has accelerated the speed at which we build things for many years, and it will continue to provide better solutions as time goes by. Just as we went from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and on to Software as a Service (SaaS) and others (Anything as a Service? AaaS?), I am certain we will find further abstractions beyond containers, making our lives easier every day. Like most of today's tools, many virtualization and containerization tools are open source, with huge communities and support boards around them (and keep your trust in good ol' Stack Overflow). So remember to give something back to the amazing open source community: open issues, report bugs, share what works, and help fix and improve the lacking parts. Above all, try to learn these new and promising technologies that give us IT people a huge bump in efficiency in pretty much all aspects.

About the author

Diego Rodriguez Baquero is a full stack developer specializing in DevOps and SysOps. He is also a WebTorrent core team member. He can be found at https://diegorbaquero.com/.

How to succeed in the gaming industry: 10 tips

Raka Mahesa
12 Jun 2017
5 min read
The gaming industry is a crowded trade. After all, it's one of those industries where you can work on something you actually love, so a lot of people are trying to get into it. And with that many rivals, being successful in the industry is a difficult thing to accomplish. Here are 10 tips to help you succeed. Do note that these are general tips, so they should apply regardless of your position in the industry, whether you're an indie developer working on your own games or a programmer working for a big gaming company.

Tip 1: Be creative

The gaming industry is a creative one, so it makes perfect sense that you need to be creative to succeed. And you don't have to be an artist or a writer to apply creative thinking; many challenges need creative solutions. For example, a particular system in a game may require heavy computation, and a creative solution might be to skip the full computation and instead estimate the result with a simpler formula.

Tip 2: Be capable of receiving criticism

Video games are a passion for many people, probably including you. That's why it's easy to fall in love with your own idea, whether it's a gameplay idea like how an enemy should behave, or a technical one like how a save file should be written. Your idea might not be perfect, though, so it's important to be able to step back and see whether another person's criticism of your idea has merit. After all, that other person may be able to see a flaw you have missed.

Tip 3: Be able to see the big picture

A video game's software is full of complex, interlocking systems. Being able to see the big picture, that is, seeing how changes in one system could affect another, is a really valuable skill when developing a video game.

Tip 4: Keep up with technology

Technology moves at a blisteringly fast pace. Technology that is relevant today may be rendered useless tomorrow, so it is very important to keep up. Using the latest equipment may help your game project, and the newest technology may provide opportunities for your games too. For example, newer platforms like VR and AR don't have many games yet, so it's easier to gain visibility there.

Tip 5: Keep up with industry trends

It's not just technology that moves fast, but the industry as well. Just 10 years ago, it was unthinkable that millions of people would watch other people play games, or that mobile gaming would be bigger than console gaming. By keeping up with industry trends, we can understand the market for our games and, more importantly, understand our players' behavior.

Tip 6: Put yourself in your players' shoes

Being able to see your games from the viewpoint of your player is a really useful skill to have. For example, as a developer you may feel fine looking at a black screen while your game loads its resources, because you know the game is working as long as it doesn't throw an error dialog. Your player probably doesn't feel the same way, and may think the game has simply hung when it shows a black screen without a loading indicator.

Tip 7: Understand your platform and your audience

This is a bit similar to the previous tip, but on a more general level. Each platform has different strengths, and the audience of each platform has different expectations. For example, games on mobile platforms are expected to be played in short bursts rather than hour-long sessions, so mobile gamers expect their games to save progress automatically whenever they stop playing. Understanding this behavior is really important for developing games that satisfy players.

Tip 8: Be a team player

Unless you're a one-man army, games are usually not developed alone. Since game development is a team effort, it's pretty important to get along with your teammates, whether that means dividing tasks fairly with your programmer buddy or explaining to the artist the format of the art assets your game needs.

Tip 9: Show your creation to other people

When you are deep in the process of working on your latest creation, it's sometimes hard to take a step back and assess it fairly. Occasionally you may even feel your creations aren't up to scratch. Fortunately, showing your work to other people is a relatively easy way to get good, honest feedback. And if you're lucky, your new audience may just show you that your creation is actually up to standard.

Tip 10: Networking

This is probably the most generic tip ever, but that doesn't mean it isn't true. In any industry, and no matter what your position is, networking is really important. If you're an indie developer, you may connect with a development partner who shares the same vision as you. Alternatively, if you're a programmer, you may connect with someone looking to fill a senior position leading a new game project. Networking will open the door to opportunities for you.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

How should web developers learn machine learning?

Chris Tava
12 Jun 2017
6 min read
Do you have the motivation to learn machine learning? Given its relevance in today's landscape, you should. But if you're a web developer, how do you go about learning it? In this article, I show you how. So, let's break this down.

What is machine learning?

You may be wondering why machine learning matters to you, or how you would even go about learning it. Machine learning is a smart way to create software that finds patterns in data without having to explicitly program for each condition. Sounds too good to be true? Well, it is. Quite frankly, many of the state-of-the-art solutions to the toughest machine learning problems don't even come close to 100 percent accuracy and precision. This might not sound right to you if you've been trained, or have learned, to be precise and deterministic in the solutions you provide to the web applications you've worked on. In fact, machine learning is such a challenging problem domain that data scientists describe problems as tractable or not. Computer algorithms can solve tractable problems in a reasonable amount of time with a reasonable amount of resources, whereas intractable problems simply can't be solved. Decades more of R&D at a deep theoretical level are needed to bring forward approaches and frameworks that will then take years to be applied and become useful to society. Did I scare you off? No? Okay, great. Then you accept the challenge to learn machine learning.

But before we dive into how to learn machine learning, let's answer the question: why does learning machine learning matter to you? Well, you're a technologist, and as a result it's your duty, your obligation, to be on the cutting edge. The technology world is moving at a fast clip, and it's accelerating. Take, for example, the shrinking gap between public machine learning accomplishments against top gaming experts. It took a while to get to Watson's 2011 Jeopardy! championship, and far less time between AlphaGo and Libratus. So what's the significance for you and your professional software engineering career? Elementary, my dear Watson: just like the so-called digital divide between non-technical and technical lay people, there is already the start of a technological divide between top systems engineers and the rest of the playing field, in terms of making an impact and disrupting the way the world works. Don't believe me? When's the last time you programmed a self-driving car, or a neural network that can guess your drawings?

Making an impact and how to learn machine learning

The toughest part about getting started with machine learning is figuring out what type of problem you have at hand, because you run the risk of jumping to potential solutions too quickly, before understanding the problem. Sure, you can say this of any software design task, but the point can't be stressed enough when thinking about how to get machines to recognize patterns in data. There are specific applications of machine learning algorithms that solve a very specific problem in a very specific way, and it's difficult to know how to solve a meta-problem if you haven't studied the field from a conceptual standpoint. For me, a breakthrough in learning machine learning came from taking Andrew Ng's machine learning course on Coursera, so taking online courses can be a good way to start. If you don't have the time, you can also learn about machine learning through numbers and images. Let's take a look.

Numbers

Conceptually speaking, predicting a pattern in a single variable based on a direct relationship with another piece of data, otherwise known as a linear relationship, is probably the easiest machine learning problem and solution to understand and implement. The following script predicts the amount of data that will be created, based on fitting a sample data set to a linear regression model: https://github.com/ctava/linearregression-go.
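The linked script is written in Go; as a language-neutral sketch of the same idea (the numbers below are invented for illustration, not taken from that repository), an ordinary least-squares fit and a one-year extrapolation look like this:

```python
# Fit a least-squares line to yearly data points and extrapolate one year ahead.
# The data values are invented for illustration only.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2013, 2014, 2015, 2016, 2017]
data_created = [1200, 2000, 2800, 3600, 4400]   # hypothetical yearly totals
m, b = fit_line(years, data_created)
prediction_2018 = m * 2018 + b
print(round(prediction_2018))  # → 5200
```

Because the invented data is perfectly linear, the fit recovers the trend exactly; real data would scatter around the fitted line, which is where the "reasonable fit" caveat below comes in.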
Because the sample data fits a linear model reasonably well, the machine learning program predicted that the data created in the fictitious Bob's system will grow from 2017 to 2018:

Bob's Data 2017: 4401
Bob's Data 2018 Prediction: 5707

This is great news for Bob, and for you. You see, machine learning isn't so tough after all. I'd like to encourage you to save data for a single variable (also known as a feature) to a CSV file and see whether the data has a linear relationship with time. The following website is handy for calculating the number of days between two dates: https://www.timeanddate.com/date/duration.html. Be sure to choose your starting day and year appropriately at the top of the file to fit your data.

Images

Machine learning on images is exciting! It's fun to see what the computer comes up with in terms of pattern recognition, or image recognition. Here's an example using computer vision to detect that Grumpy Cat is actually a Persian cat: https://github.com/ctava/tensorflow-go-imagerecognition. If setting up TensorFlow from source isn't your thing, not to worry; here's a Docker image to start off with: https://github.com/ctava/tensorflow-go. Once you've followed the instructions in the readme.md file, simply:

Get github.com/ctava/tensorflow-go-imagerecognition
Run main.go -dir=./ -image=./grumpycat.jpg
Result: BEST MATCH: (66% likely) Persian cat

Sure, there is a whole discussion to be had on this topic alone: what TensorFlow is, what a tensor is, and what image recognition is. But I just wanted to spark your interest so that maybe you'll start to look at the amazing advances in the computer vision field.

Hopefully this has motivated you to learn more about machine learning, based on reading about the recent advances in the field and seeing two simple examples of predicting numbers and classifying images. I'd like to encourage you to keep up with data science in general.

About the Author

Chris Tava is a Software Engineering / Product Leader with 20 years of experience delivering applications for B2C and B2B businesses. His specialties include: program strategy, product and project management, agile software engineering, resource management, recruiting, people development, business analysis, machine learning, ObjC / Swift, Golang, Python, Android, Java, and JavaScript.

Essential skills for penetration testing

Hari Vignesh
11 Jun 2017
6 min read
Cybercriminals are continually developing new and more sophisticated ways to exploit software vulnerabilities, making it increasingly difficult to defend our systems. Today, then, we need to be proactive in how we protect our digital properties. That's why penetration testers are so in demand. Although risk analysis can easily be done by internal security teams, support from skilled penetration testers can be the difference between security and vulnerability. These highly trained professionals can "think like the enemy" and employ creative ways to identify problems before they occur, going beyond the use of automated tools. Pen testers can perform technological offensives, but also simulate spear phishing campaigns to identify weak links in a company's security posture and pinpoint training needs. The human element is essential to simulate a realistic attack and uncover all of the infrastructure's critical weaknesses.

Being a pen tester can be financially rewarding, because trained and skilled testers can normally command good wages. Employers are willing to pay top dollar to attract and retain talent, and most pen testers enjoy sizable salaries depending on where they live and their level of experience and training. According to a PayScale salary survey, the average salary is approximately $78K annually, ranging from $44K up to $124K at the higher end.

To become a better pen tester, you need to upgrade or master your art in certain areas. The following skills will make you stand out from the crowd and make you more effective. I know what you're thinking: this seems like an awful lot of work, right? Wrong. You can still learn penetration testing and become a penetration tester without these things, but learning them will make the work easier and help you understand both how and why things are done a certain way. Bad pen testers know that things are vulnerable. Good pen testers know how things are vulnerable. Great pen testers know why things are vulnerable.

Mastering the command line

Notice that even in modern hacker films and series, the hackers always have a little black box on the screen with text flying everywhere. It's a cliché, but it's based in reality: hackers and penetration testers alike use the command line a lot, and most of the tools are command-line based. It's not showing off; it's simply the most efficient way to do the job. If you want to become a penetration tester, you need to be at the very least comfortable with a DOS or PowerShell prompt or a terminal. A good way to develop this skill set is to learn to write DOS batch or PowerShell scripts. There are various command-line tools that make a pen tester's life easy, and learning to master them will enable you to pen test your environment efficiently.

Mastering OS concepts

If you look at penetration testing or hacking sites and tutorials, there's a strong tendency to use Linux. If you adopt something like Ubuntu, Mint, Fedora, or Kali as your main OS and spend some time tinkering under the hood, it will help you become more familiar with the environment. Setting up a VM to install and break into a Linux server is a great way to learn. You shouldn't expect to comfortably find and exploit file permission weaknesses if you don't understand how Linux file permissions work, nor should you expect to exploit the latest vulnerabilities comfortably and effectively without understanding how they affect a system. A basic understanding of Unix file permissions, processes, shell scripting, and sockets will go a long way.

Mastering networking and protocols down to the packet level

TCP/IP seems really scary at first, but the basics can be learned in a day or two.
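As a small sketch of what "the packet level" means in practice (the sample segment below is hand-crafted for illustration, not a real capture), here is a minimal decoder for the fixed fields of a TCP header, the same fields a sniffer displays:

```python
import struct

# Decode the first 16 fixed bytes of a TCP header: source/destination ports,
# sequence and acknowledgment numbers, data offset, flags, and window size.
def parse_tcp_header(raw):
    src, dst, seq, ack, offset_flags, window = struct.unpack("!HHIIHH", raw[:16])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset_words": offset_flags >> 12,  # header length in 32-bit words
        "flags": offset_flags & 0x01FF,           # SYN is 0x002, ACK is 0x010
        "window": window,
    }

# Hand-craft a SYN segment from port 54321 to port 80 and decode it.
sample = struct.pack("!HHIIHH", 54321, 80, 1000, 0, (5 << 12) | 0x002, 64240)
header = parse_tcp_header(sample)
print(header["dst_port"], hex(header["flags"]))  # → 80 0x2
```

Feeding this kind of decoder with bytes you captured yourself is a quick way to connect the textbook header diagram to real traffic.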
When breaking in, you can use a packet-sniffing tool such as Wireshark to see what's really going on when you send traffic to a target, instead of blindly accepting documented behavior without understanding what's happening. You'll also need to know not only how HTTP works over the wire, but also the Document Object Model (DOM) and enough about how backends work to understand how web-based vulnerabilities occur. You can become a penetration tester without learning all of this, but you'll struggle and it will be a much less rewarding career.

Mastering programming

If you can't program, you're at risk of losing out to candidates who can; at best, you're likely giving up part of your starting salary. Why? You need sufficient knowledge of a programming language to read source code and find vulnerabilities in it. For instance, only if you know PHP and how it interacts with a database will you be able to exploit SQL injection. A prospective employer will need to give you time to learn these things before they can get the most out of you, so don't steal money from your own career: learn to program. It's not hard. Being able to program means you can write tools, automate activities, and be far more efficient. Beyond basic scripting, you should ideally become at least semi-comfortable with one programming language and cover the basics of another. Web people like Ruby, Python is popular among reverse engineers, and Perl is particularly popular among hardcore Unix users. You don't need to be a great programmer, but being able to program is worth its weight in gold, and most languages have online tutorials to get you started.

Final thoughts

Employers will hire a bad junior tester if they have to, and a good junior tester if there's no one better, but they'll usually hire a potentially great junior pen tester in a heartbeat.
If you don't spend time learning the basics to make yourself a great pen tester, you're stealing from your own potential salary. If you're missing some or all of the things above, don't be upset. You can still work towards getting a job in penetration testing, and you don't need to be an expert in any of these things. They're simply technical qualities that, from a hiring manager's and interviewer's perspective, make you a much better (and probably better paid) candidate.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
What can the tech industry learn from the Maker community?

Raka Mahesa
11 Jun 2017
5 min read
Just a week prior to the writing of this post, Maker Faire Bay Area was open for three days in San Mateo, exhibiting hundreds of makers and attracting hundreds of thousands of attendees. Maker Faire is the grand gathering of the Maker movement: a place where the Maker community can showcase their latest projects and easily connect with fellow makers.

The Maker community has always had a close connection with the technology industry. Makers use the latest technologies in their projects, form their communities on Internet forums, and share their projects and tutorials on video-sharing websites. It's a community born from how accessible technology is nowadays, so what can the tech industry learn from this positive community? Let's begin by examining the community itself.

What is the Maker movement?

Defining the Maker movement in a simple way is not easy. It's not exactly a movement, because there's no singular entity that rallies people into it and decides what to do next. It's also not merely a community of tinkerers and makers who work together. The best way to sum up the entirety of the Maker movement is to say that it's a culture.

The Maker culture is a culture that revels in the creation of things. It's a culture where people are empowered to move from being consumers to being creators, one that involves people making the tools they need on their own and sharing the knowledge of their creations with others. And while the culture seems focused on technological projects like electronics, robotics, and 3D printing, the Maker community also embraces non-technological projects like cooking, jewelry, and gardening.

While a lot of these DIY projects are simple and seem to be made for entertainment purposes, a few of them have the potential to actually change the world.
For example, e-NABLE is an international community that has been using 3D printers to provide free prosthetic hands and arms to those who need them. This amazing community started its life when a carpenter in South Africa, who lost his fingers in an accident, collaborated with an artist-engineer in the US to create a replacement hand. Little did they know that their work would start such a large movement.

What lessons can the tech industry draw from the Maker culture?

One of the biggest takeaways of the Maker movement is how much of it relies on collaboration and sharing. With no organization or company to back them, the community has to turn to itself to share knowledge and encourage other people to become makers. And only by collaborating with each other can an ambitious DIY project come to fruition. For example, robotics is a big, complex topic. It's very hard for one person to understand all the aspects needed to build a functioning robot from scratch, but by pooling knowledge from multiple people, each with their own specialization, such a project becomes possible.

Fortunately, collaboration is something the tech industry has been doing for a while. The Android smartphone is a collaborative effort between a software company and hardware companies; even smartphones themselves are usually assembled from components made by different companies. And on the software developer side, the spirit of helping each other is alive and well, as can be seen in the popularity of websites like Stack Overflow and GitHub.

Another lesson that can be learned from the Maker community is the importance of accessibility in encouraging other people to join. The technology industry has always worried that there are not enough engineers for every technology company in the world. Making engineering tools and lessons more accessible to the public is a good way to encourage more people to become engineers.
After all, cheap 3D printers and computers, as well as easy-to-find tutorials, are the reasons the Maker community could grow this fast.

One other thing the tech industry can learn from the Maker community is how many big, successful projects start with someone trying to solve a smaller, personal problem. One example is Quadlock, a company that started simply because the founders wanted a bottle opener integrated into their iPhone case. After realizing that other people wanted a similar case, they began working on more iPhone cases, and now they run a company producing these unique cases.

The Maker movement is an amazing culture, and it's still growing day by day. While all the points above are great lessons we can apply in our own lives, I'm sure there is a lot more we can learn from this wonderful community.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
What are the best programming languages for building APIs?

Antonio Cucciniello
11 Jun 2017
4 min read
Are you in the process of designing your first web application? Maybe you have built some in the past, but are looking for a change of language to broaden your skill set, or to try something new. If you fit into those categories, you are in the right place. With all of the information out there, it can be hard to decide which programming language to select for your next product or project. While any programming language can ultimately be used to write APIs, some are better suited and more efficient for the job than others. Today we will discuss what should be taken into consideration when choosing the programming language to build out the APIs for your web app.

Comfort is essential when it comes to programming languages

This goes out to any developer who has experience in a certain language. If you already have experience in a language, you will have an easier time developing and understanding the concepts involved, and you will make more progress right out of the gate. This also translates to improved code and performance, because you can spend your time on the product rather than on learning a brand new programming language. For example, if I have been developing in Python for a few years and have the option between PHP and Python for a project, I simply select Python, because of the time already invested in learning it. This is extremely important, because when trying something new, you want to limit the number of unknowns in the project. That will help your learning and produce better results. If you are a brand new developer with zero programming experience, the following section might help you narrow your options.
Libraries and frameworks that support developing APIs

The next question to ask when narrowing down potential programming languages for your API is: does the language come with plenty of options for libraries or frameworks that aid in developing APIs? To continue with the Python example from the previous section, Django is a web development framework for Python, and the Django REST framework is built on top of it specifically to make creating APIs faster and easier. Did you hear faster and easier? Why yes you did, and that is why this is important. These libraries and frameworks speed up the development process by providing functions and objects that handle much of the repetitive or dirty work of building an API. Once you have researched what is available in terms of libraries and frameworks, it is time to check how active their communities are.

Support and community

The next question to ask yourself is: are the frameworks and libraries for this programming language still being supported? If so, how active is the community of developers? Are there continuous or regular updates to the software? Do the updates improve security and usability? If not many people use the language and it is no longer being updated with bug fixes, you may not want to build on it. Another thing to pay attention to is the community of users. Are there plenty of resources to learn from? How clear and available is the documentation? Are there experienced developers blogging about the necessary topics? Are questions being asked and answered on Stack Overflow? Are there hard resources, such as magazines or textbooks, that show you how to use these languages and frameworks?
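To appreciate what those frameworks are saving you, here is a minimal JSON endpoint built with nothing but Python's standard library (the endpoint path and payload are made up for illustration); this is roughly the request/response plumbing that a framework like Django REST framework or Express absorbs for you:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    """A single GET endpoint returning JSON, written by hand."""

    def do_GET(self):
        if self.path == "/api/greeting":
            body = json.dumps({"message": "hello"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Serve on an ephemeral port in a background thread, then call the API.
server = HTTPServer(("127.0.0.1", 0), GreetingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/api/greeting" % server.server_address[1]
payload = json.loads(urllib.request.urlopen(url).read())
print(payload)
server.shutdown()
```

A framework replaces this boilerplate with a few declarative lines, and adds routing, serialization, and authentication you would otherwise write yourself.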
Potential languages for building APIs

From my experience, a handful of languages stand out. Here is an example framework for each, which you can use to start developing your next API:

Language            Framework
Java                Spring
JavaScript (Node)   Express
Python              Django
PHP                 Laravel
Ruby                Ruby on Rails

Ultimately, the programming language you select depends on several factors: your experience with the language, the frameworks available for API building, and how active both the support and the community are. Do not be afraid to try something new! You can always learn, but if you are concerned about speed and ease of development, use these criteria to help select your language. Leave a comment down below and let us know which programming language is your favorite and how you will use it in your future applications!
How DevOps can improve software security

Hari Vignesh
11 Jun 2017
7 min read
The term “security” often evokes negative feelings among software developers because it is associated with additional programming effort, uncertainty, and roadblocks to fast development and release cycles. To secure software, developers must follow numerous guidelines that, while intended to satisfy some regulation or other, can be very restrictive and hard to understand. As a result, a lot of fear, uncertainty, and doubt can surround software security.

First, let's consider the survey conducted by SpiceWorks, in which IT pros were asked to rank a set of threats in order of risk to IT security. According to the report, the respondents ranked the following as their organization's three biggest risks:

1. Human error
2. Lack of process
3. External threats

DevOps can positively impact all three of these major risk factors, without negatively impacting the stability or reliability of the core business network. Let's discuss how security in DevOps attempts to combat the toxic environment surrounding software security by shifting the paradigm from following rules and guidelines to creatively determining solutions for tough security problems.

Human error

We've all fat-fingered configurations and code before. Usually we catch the mistakes, but once in a while they sneak into production and wreak havoc on security. A number of big names have been caught in this situation, where a simple typo introduced a security risk. Often these slips occur because we're so familiar with what we're typing that we see what we expect to see, rather than what we actually typed.

To reduce risk from human error via DevOps, you can:

- Use templates to standardize common service configurations
- Automate common tasks to avoid simple typographical errors
- Read twice, execute once

Lack of process

First, there's the fact that there's almost no review of the scripts that folks already use to configure, change, shut down, and start up services across the production network.
Don't let anyone tell you they don't use scripts to eliminate the yak shaving that exists in networking and infrastructure, too. They do. But those scripts aren't necessarily reviewed, they certainly aren't versioned like the code artifacts they are, and they are rarely reused. The other problem is simply that there's no governed process; it's tribal knowledge.

To reduce risk from a lack of process via DevOps:

- Define the deployment process clearly. Understand prerequisites and dependencies, and eliminate redundancies or unnecessary steps.
- Move toward the use of orchestration as the ultimate executor of the deployment process, employing manual steps only when necessary.
- Review and manage any scripts used to assist in the process.

External threats

At first glance, this one seems the least likely candidate for being addressed with DevOps. Given that malware and multi-layered DDoS attacks are the most existential threats to businesses today, that's understandable. There are entire classes of vulnerabilities that can only be detected manually by developers or experts reviewing the code, but risk becomes reality in production, where it's exploited. One way DevOps can reduce potential risk is through more extensive testing and development of web app security policies during development, which can then be deployed in production.

Adopting a DevOps approach to developing those policies, and treating them like code too, produces a faster and more thorough policy that does a better job overall of preventing existential threats from becoming all-too-real nightmares.

To reduce the risk of threats becoming reality via DevOps:

- Shift web app security policy development and testing left, into the app development life cycle.
- Treat web app security policies like code. Review and standardize.
- Test often, even in production.
- Automate using technology such as dynamic application security testing (DAST) and, when possible, integrate results into the development life cycle for faster remediation that reduces risk earlier.

Best DevOps practices

Below are the top five DevOps practices and tools that can help improve overall security when incorporated directly into your end-to-end continuous integration/continuous delivery (CI/CD) pipeline:

- Collaboration
- Security test automation
- Configuration and patch management
- Continuous monitoring
- Identity management

Collaboration and understanding your security requirements

Many of us are required to follow a security policy. It may be a corporate security policy, a customer security policy, and/or a set of compliance standards (e.g. SOX, HIPAA). Even if you are not mandated to use a specific policy or regulating standard, you still want to follow best practices in securing your systems and applications. The key is to identify your sources of security expertise, collaborate early, and understand your security requirements early so they can be incorporated into the overall solution.

Security test automation

Whether you're building a brand new solution or upgrading an existing one, there are likely several security considerations to incorporate. Due to the quick, iterative nature of agile development, tackling all security at once in a “big bang” approach will likely result in project delays. To keep projects moving, a layered approach is often helpful, letting you continuously build additional security layers into your pipeline as you progress from development to a live product. Security test automation can provide quality gates throughout your deployment pipeline, giving immediate feedback to stakeholders on security posture and allowing for quick remediation early in the pipeline.
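A security test in the pipeline can be as small as a unit test proving that user input cannot change the shape of a query. This sketch uses Python's built-in sqlite3 module (the table and data are hypothetical) to contrast a vulnerable query with a parameterized one, the kind of check a CI quality gate can run on every commit:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # VULNERABLE: input concatenated into SQL, exactly what automated
    # security tests and code review should flag.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(conn, name):
    # SAFE: parameterized query; the driver treats input as data only.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"        # classic SQL injection probe
leaked = find_user_unsafe(conn, payload)
safe = find_user_safe(conn, payload)
print(len(leaked), len(safe))  # 2 0  (the injection leaks every row)
```

Asserting in CI that the parameterized path returns zero rows for such probes turns a security requirement into a repeatable, automated gate.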
Configuration management

In traditional development, servers/instances are provisioned and developers work directly on those systems. To ensure servers are provisioned and managed using consistent, repeatable, and reliable patterns, it's critical to have a strategy for configuration management. The key is being able to reliably guarantee and manage consistent settings across your environments.

Patch management

Similar to configuration management, you need a method to quickly and reliably patch your systems. Missing patches are a common cause of exploited vulnerabilities, including malware attacks. Being able to quickly deliver a patch across a large number of systems can drastically reduce your overall security exposure.

Continuous monitoring

Monitoring across all environments, with transparent feedback, is vital so you are alerted quickly to potential breaches or security issues. It's important to identify your monitoring needs across the infrastructure and application, and then take advantage of existing tooling to quickly identify, isolate, shut down, and remediate potential issues before they happen or before they are exploited. Your monitoring strategy should also include the ability to automatically collect and analyze logs; log analysis can help identify exposures quickly, and compliance activities can become extremely expensive if they are not automated early.

Identity management

DevOps practices allow us to collaborate early with security experts, increase the level of security testing and automation to enforce quality gates, and provide better mechanisms for ongoing security management and compliance activities. While painful to some, security has to be important to all if we don't want to make headlines.
About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
What is transfer learning?

Janu Verma
08 Jun 2017
6 min read
Introduction and motivation

In standard supervised machine learning, we need training data, i.e. a set of data points with known labels, and we build a model to learn the distinguishing properties that separate data points with different labels. This trained model can then be used to make label predictions for new data points. If we want to make predictions for another task (with different labels) in a different domain, we cannot use the model trained previously; we need to gather training data for the new task and train a separate model.

Transfer learning provides a framework to leverage an already existing model (based on some training data) in a related domain. We can transfer the knowledge gained in the previous model to the new domain (and data). For example, suppose we have built a model to detect pedestrians and vehicles in traffic images, and we wish to build a model for detecting pedestrians, cycles, and vehicles in the same data. We would have to train a new model with three classes, because the previous model was trained to make two-class predictions. But we clearly learned something in the two-class situation, e.g. discerning people walking from moving vehicles. In the transfer learning paradigm, we can use our learnings from the two-label classifier in the three-label classifier we intend to construct. As such, we can already see that transfer learning has very high potential. In the words of Andrew Ng, a leading expert in machine learning, in his extremely popular NIPS 2016 tutorial: "Transfer learning will be the next driver of machine learning success."

Transfer learning in deep learning

Transfer learning is particularly popular in deep learning. The reason is that it's very expensive to train deep neural networks, and they require huge amounts of data to achieve their full potential. In fact, the recent successes of deep learning can largely be attributed to the availability of a lot of data and stronger computational resources.
But, other than a few large companies like Google, Facebook, IBM, and Microsoft, it's very difficult to accrue the data and the computational machinery required for training strong deep learning models. In such a situation, transfer learning comes to the rescue. Many pre-trained models, trained on large amounts of data, have been made available publicly, along with the values of billions of parameters. You can use these pre-trained models, and rely on transfer learning to build models for your specific case.

Examples

The most popular application of transfer learning is image classification using deep convolutional neural networks (ConvNets). A number of high-performing, state-of-the-art ConvNet-based image classifiers, trained on ImageNet data (1.2 million images in 1,000 categories), are available publicly. Examples include AlexNet, VGG16, VGG19, InceptionV3, and more, which take a very long time to train. I have personally used transfer learning to build image classifiers on top of VGG19 and InceptionV3. Another popular example is pre-trained distributed word embeddings for millions of words, e.g. word2vec, GloVe, and FastText. These are trained on sources such as all of Wikipedia or Google News, and provide vector representations for a huge number of words, which can then be used in a text classification model.

Strategies for transfer learning

Transfer learning can be used in one of the following four ways:

Directly use the pre-trained model: The pre-trained model can be used directly for a similar task. For example, you can use Google's InceptionV3 model to make predictions about the categories of images. These models have already been shown to have high accuracy.

Fixed features: The knowledge gained in one model can be used to build features for the data points, and such features (fixed) are then fed to new models.
For example, you can run new images through a pre-trained ConvNet and use the output of any layer as a feature vector for each image. The features thus built can be used in a classifier for the desired task. Similarly, you can directly use pre-trained word vectors in a text classification model.

Fine-tuning the model: In this strategy, you use the pre-trained network as your model while allowing fine-tuning of the network. For example, for an image classifier, you can feed your images to the InceptionV3 model and use the pre-trained weights as an initialization (rather than a random initialization). The model is then trained on the much smaller user-provided data. The advantage of this strategy is that the weights can reach the global minima without much data or training. You can also keep a portion of the network (usually the beginning layers) fixed, and fine-tune only the remaining layers.

Combining models: Instead of re-training the top few layers of a pre-trained model, you can replace them with a new classifier, and train this combined network while keeping the pre-trained portion fixed.

Remarks

It is not a good idea to fine-tune the pre-trained model if the new data is too small and similar to the original data, as this will result in overfitting. Instead, you can directly feed the data to the pre-trained model, or train a simple classifier on the fixed features extracted from it. If the new data is large, it is a good idea to fine-tune the pre-trained model. If the data is similar to the original, we can fine-tune only the top few layers, and fine-tuning will increase confidence in our predictions. If the data is very different, we will have to fine-tune the whole network.

Conclusion

Transfer learning allows someone without a large amount of data or computational capability to take advantage of the deep learning paradigm. It is an exciting research and application direction to take off-the-shelf pre-trained models and transfer them to novel domains.
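As a closing illustration, the "fixed features" strategy can be shown in miniature without any deep learning library. In this toy sketch every name and number is a stand-in: a frozen featurizer plays the role of the pre-trained network, and a tiny nearest-centroid classifier is the only part we train:

```python
# Miniature "fixed features" transfer learning: a frozen featurizer
# (standing in for a pre-trained network layer) feeds a new classifier.

def frozen_features(point):
    # Pretend this is a pre-trained network's last hidden layer;
    # in this strategy we never update it.
    return (sum(point), max(point))

def fit_centroids(points, labels):
    # The new, small model trained on top of the fixed features.
    groups = {}
    for p, y in zip(points, labels):
        groups.setdefault(y, []).append(frozen_features(p))
    return {y: tuple(sum(dim) / len(dim) for dim in zip(*feats))
            for y, feats in groups.items()}

def predict(centroids, point):
    f = frozen_features(point)
    return min(centroids, key=lambda y: sum((a - b) ** 2
               for a, b in zip(centroids[y], f)))

train = [([1, 2], "small"), ([2, 1], "small"),
         ([8, 9], "large"), ([9, 8], "large")]
model = fit_centroids([p for p, _ in train], [y for _, y in train])
print(predict(model, [1, 1]))  # small
print(predict(model, [9, 9]))  # large
```

In practice the featurizer would be a real pre-trained network such as VGG19 or InceptionV3, and the classifier on top could be anything from logistic regression to a small neural network.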
About the Author

Janu Verma is a Researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, etc. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email to schedule a meeting. Check out his personal website at http://jverma.github.io/.
What is Keras?

Janu Verma
01 Jun 2017
5 min read
What is Keras?

Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn-type API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical techniques. The key idea behind Keras is to facilitate fast prototyping and experimentation. In the words of Francois Chollet, the creator of Keras: "Being able to go from idea to result with the least possible delay is key to doing good research."

This is a huge advantage, especially for scientists and beginner developers, because one can jump right into deep learning without getting their hands dirty. Because deep learning is currently on the rise, the demand for people trained in it is ever increasing. Organizations are trying to incorporate deep learning into their workflows, and Keras offers an easy API to test and build deep learning applications without considerable effort. Deep learning research is such a hot topic right now that scientists need a tool to quickly try out their ideas, and they would rather spend their time coming up with ideas than putting together a neural network model. I use Keras in my own research, and I know a lot of other researchers who rely on Keras for its easy and flexible API.

What are the key features of Keras?

Keras is a high-level interface to Theano or TensorFlow, and either can be used as the backend; it is extremely easy to switch from one to the other. Training deep neural networks is a memory- and time-intensive task. Modern deep learning frameworks like TensorFlow, Caffe, Torch, etc. can also run on GPU, though there might be some overhead in setting up and running the GPU. Keras runs seamlessly on both CPU and GPU. Keras supports most of the neural layer types, e.g.
fully connected, convolution, pooling, recurrent, embedding, dropout, etc., which can be combined in any way to build complex models. Keras is modular in nature, in the sense that each component of a neural network model is a separate, standalone, fully-configurable module, and these modules can be combined to create new models. Essentially, layers, activations, optimizers, dropout, loss, etc. are all different modules that can be assembled to build models. A key advantage of modularity is that new features are easy to add as separate modules. This makes Keras fully expressive, extremely flexible, and well-suited for innovative research. Coding in Keras is extremely easy: the API is very user-friendly, with minimal cognitive load. Keras is a full Python framework, and all coding is done in Python, which makes it easy to debug and explore. The coding style is very minimalistic, and operations are added in very intuitive Python statements.

How is Keras built?

The core component of the Keras architecture is a model. Essentially, a model is a neural network model with layers, activations, an optimizer, and a loss. The simplest Keras model is Sequential, which is just a linear stack of layers; other layer arrangements can be formed using the Functional model. We first initialize a model, then add layers to it one by one, each layer followed by its activation function (and regularization, if desired); then the cost function is added, and the model is compiled. A compiled model can be trained using a simple API (model.fit()), and once trained, the model can be used to make predictions (model.predict()). Note the similarity to the scikit-learn API. Two models can be combined sequentially or in parallel. A model trained on some data can be saved as an HDF5 file and loaded at a later time. This eliminates the need to train a model again and again: train once and make predictions whenever desired.
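The workflow just described (initialize, add layers, compile, fit, predict) takes only a few lines. Here is a minimal sketch, assuming TensorFlow's bundled Keras is installed, trained on toy random data:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Initialize a Sequential model and add layers one by one.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Add the cost function and optimizer, then compile.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Toy data: label is 1 when the features sum past a threshold.
X = np.random.rand(64, 8).astype("float32")
y = (X.sum(axis=1) > 4).astype("float32")

model.fit(X, y, epochs=2, batch_size=16, verbose=0)  # train
preds = model.predict(X, verbose=0)                  # predict
print(preds.shape)  # (64, 1)
```

The trained model could then be saved with model.save() and reloaded later, as noted above.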
Keras provides an API for most common types of layers. You can also merge or concatenate layers for a parallel model, and it is possible to write your own layers. Other ingredients of a neural network model, like the loss function, metrics, optimization method, activation function, and regularization, are all available with the most common choices. Another very useful component of Keras is the preprocessing module, with support for manipulating and processing image, text, and sequence data.

A number of deep learning models, with weights obtained by training on big datasets, are made available. For example, there are VGG16, VGG19, InceptionV3, Xception, and ResNet50 image recognition models with their weights after training on ImageNet data. These models can be used for direct prediction, feature building, and/or transfer learning.

One of the greatest advantages of Keras is the huge list of example code available on the Keras GitHub repository (with discussions on the accompanying blog) and on the wider Internet. You can learn how to use Keras for text classification with an LSTM model, generate inceptionistic art using Deep Dream, use pre-trained word embeddings, build a variational autoencoder, or train a Siamese network. There is also a visualization module, which provides functionality to draw a Keras model. It uses the graphviz library to plot and save the model graph to a file. All in all, Keras is a library worth exploring, if you haven't already.

Janu Verma is a Researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, the Tata Institute of Fundamental Research, the Indian Institute of Science, and the Indian Statistical Institute.
He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, and IEEE Sensors Journal, among others. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting. Check out his personal website here.
What is serverless architecture and why should I be interested?

Ben Neil
01 Jun 2017
6 min read
I'd heard the term "serverless" architecture for over a year before I started seriously looking into the technology. I was of the belief that serverless was going to be another PaaS solution, similar to Cloud Foundry with even fewer levers to pull. However, as I started playing with a few different use cases, I quickly discovered that a serverless approach to a problem could be fast and focused, and that it unearthed some interesting use cases. So without any further ado, I want to break down some of the techniques that make architecture "serverless" and provide you with suggestions along the way. My four tenets of the serverless approach are as follows: Figure out where in your code base this seems a good fit. Find a service that runs FaaS. Look at dollars, not cents. Profit. The first step, as with any new solution, is to determine where in your code base a scalable solution would make sense. By all means, when it comes to serverless you shouldn't get too exuberant and refactor your project in its entirety. You want to find a bottleneck that could use the high-scalability options that serverless vendors can grant you. This could be math functions, image manipulation, log analysis, a specific map-reduce, or anything you find that needs some intensive compute but doesn't require a lot of stateful data. A really great litmus test is to use the performance tooling available for your language: you're looking not for a bottleneck tied to a critical section like database access, but for a spike that keeps occurring in a piece of code that works, yet hasn't been fully optimized. Assuming you've found that piece of code (and modified it to follow a request/response pattern) and you want to expose it in a highly scalable way, you can move on to applying that code to your FaaS solution.
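As a sketch of what that refactoring might look like, here is a hypothetical compute function wrapped in a request/response handler in the style of an AWS Lambda Python entry point. The event shape and the square-root "work" are stand-ins for whatever bottleneck you extract from your own code base:

```python
import json
import math

def handler(event, context=None):
    """Entry point: receives a JSON-like event, returns a JSON response."""
    numbers = event.get("numbers", [])
    # The "intensive compute" placeholder: a stateless transformation.
    result = [math.sqrt(n) for n in numbers]
    return {
        "statusCode": 200,
        "body": json.dumps({"result": result}),
    }

# Local invocation, the same way you would unit test before deploying:
response = handler({"numbers": [4, 9, 16]})
print(response["statusCode"])  # 200
```

Because the function is stateless, the platform can run as many copies in parallel as your traffic demands, which is exactly the scaling property you are shopping for.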
Integrating that solution should be relatively painless, but it's worth taking a look at some of the current contenders in the FaaS ecosystem, which leads into the second point, "finding a FaaS." This is now easier with vendors such as Hook.io, AWS Lambda, Google Cloud Functions, Microsoft Azure, Hyper.sh Func, and others. Note, one bonus of all the vendors listed above is that you only pay for the compute time of your function, meaning the cost of running your code scales directly with the requests coming in. Think of it like using Jitsu (http://www.thomas.gazagnaire.org/pub/MLSGSSMCSLCL15.pdf): you can spin up the server, apply the function, get a result, and rinse and repeat, all without having to worry about the underlying infrastructure. Now, if you are new to FaaS, I would strongly recommend taking a look at Hook.io, because you can get a free developer account and start coding to get an idea of how this pattern works. Then, after you become more familiar, you can move on to AWS Lambda or Google Cloud Functions for all the different possibilities those larger vendors provide. Another trend that has become popular in modern serverless infrastructure is to "use vendors where applicable," which can be restated as focusing only on the services you want to be responsible for. Take the least interesting parts of the application and offload them to third parties. As a developer, this translates to maximizing your time: you often pay just for the services you are using, rather than hosting large VMs yourself and expending the time required to maintain them effectively.
For example, it's relatively painless to spin up an instance on AWS or Rackspace and install a MySQL server on a virtual machine, but over time the investment of your personal time to back up, monitor, and continually maintain that instance may be too much of a draw on your experience, or take too much attention away from day-to-day responsibilities. You might say, well, isn't that what Docker is for? But what people often discover with virtualization is that it has its own problem set, which may not be what you are looking for. Given the MySQL example, you can easily bring up a Docker instance, but what about keeping stateful volumes? Which drivers are you going to use for persistent storage? Questions start piling up about short-term gains versus long-term issues, and that's when you can use a service like AWS RDS to get a database up and running for the long term. Set the backup schedule to your liking, and you'll never have to worry about any of that maintenance (well, aside from some push-button upgrades once in a blue moon). As stated earlier, how does a serverless approach differ from having a bunch of containers spun up through Docker Compose and hooked up to an event-based framework similar to the Serverless Framework (https://github.com/serverless/serverless)? Well, you might have something there, and I would encourage anyone reading this article to take a glance. But to keep it brief: depending on your definition of serverless, those investments in time might not be what you're looking for. Given the flexibility and rising popularity of micro/nanoservices, alongside all the vendors at your disposal to take some of the burden off developing, serverless architecture has become really interesting. So, take advantage of this massive vendor ecosystem and FaaS solutions, and focus on developing.
Because when all is said and done, services, software, and applications are made of functions that are fun to write, whereas the thankless task of upgrading a MySQL database is the kind of thing that turns your hair white prematurely.

About the author

Ben Neil is a polyglot engineer who has the privilege to fill a lot of critical roles, whether it's dealing with front/backend application development, system administration, integrating devops methodology, or writing. He has spent 10+ years creating solutions, services, and full lifecycle automation for medium to large companies. He is currently focused on Scala, container, and unikernel technology, following a bleeding-edge open source community that brings the benefits to companies striving to be at the forefront of technology. He can usually be found either playing Dwarf Fortress or contributing on GitHub.
Google Distill: Everything data pros need to know

Xavier Bruhiere
24 May 2017
6 min read
Momentum works

In 1984, the idea of an AI winter (a period where research in artificial intelligence almost completely stops) appeared. The interesting thing is that it emerged both from technical disillusionment and bad press coverage. Scientists were woeful when, after the initial promises of artificial intelligence, funding stopped and everything broke apart for a while. This example serves as a solemn statement: the way research is communicated matters a great deal for its development. Through the lens of Distill, a publishing platform for machine learning papers, we will explore why. Then we shall see why you should care if you embrace data for a living.

How we share knowledge

Engineering is a subtle alchemy of knowledge and experience. The convenient state of the industry is that we are keen to share both online. It builds careers and relationships, helps in the hiring process, enhances companies' image, and so on. The process is not perfect, but once the sources are right, it starts to become addictive. Yet shocking exceptions occur in the middle of this never-ending stream of exciting articles that crash forever in Pocket. A majority of the content is beautifully published on carefully designed engineering blogs or on Medium, and takes less than 10 minutes to read. Yet sometimes a click-bait title gets you redirected to a black-and-white PDF with dense text and low-resolution images. We can feel the cognitive overhead of going through such papers. For one, they are usually more in-depth analyses; putting in the effort to understand them, when the material is less practical and the content harder to parse, can put readers off. This is unfortunate, because practical projects are growing in complexity, and engineers are expected to deliver deep expertise on the edge of new technologies like machine learning. So maybe you care about consuming knowledge resulting from three years of intensive research and countless more years of collaborative work.
Or maybe you want your paper to help your peers progress, build upon it, and recognize you as a valuable contributor to their field. If so, then the Distill publishing platform could really help you out.

What is Distill?

Before exploring the how, let's detail the what. As explained in the footer of the website: "Distill is dedicated to clear explanations of machine learning." Or, to paraphrase Michael Nielsen on Hacker News, "In a nutshell, Distill is an interactive, visual journal for machine learning research." Thanks to modern web tools, authors can craft and submit papers for review before being published. The project was founded by Chris Olah and Shan Carter, members of the Google Brain team. They put together a steering committee composed of venerable leaders from open source, DeepMind, and YC Research; effectively, they gathered an exciting panel of experts to lead how machine learning knowledge should be spread in 2017. Mike Bostock, for example, is the creator of d3.js, a reference for data visualization, and Amanda Cox is a remarkable data journalist at the New York Times. Together they shape the concept of "explorable explanations," i.e. articles the reader can interact with. By directly experimenting with the theories in an article, the reader can better map their own mental models onto the author's explanations. It is also a great opportunity to verbalize complex problems in a form that triggers new insights and seeds productive collaboration for future work. I hope it sounds exciting to you, but let's dive into the specifics of why you should care.

Tensorflow Playground

Why does it matter?

Improving the clarity, audience range, efficiency, and opportunity of research communication should have a beautiful side effect: accelerating research. Chris and Shan explain the pain points they intend to solve in their article Research Debt.
As scientists' work piles up, we need, as a community, to transform this ever-growing debt into an opportunity that enables technologies to thrive, especially today, when most of the world wonders how the latest advances in AI will be used. Distill gives you a flexible medium on which to base research clearly and explicitly, and is a great way to improve confidence in our work and push faster and more safely beyond its bounds. But as a data professional, you sometimes spend your days thinking not of saving the world from robots, but of yourself and your career. As a reader, I can't encourage you enough to follow the article feed or even the Twitter account. There are only a few articles for now, but they are exceptional (in my humble opinion). Your mileage may vary depending on your experience, or how you apply data for a living, but I bet you can boost your data science skills by browsing through the publications. As an author, it provides an opportunity for:

Reaching the level of quality and empathy of the previous publications
Interacting with top-level experts in the community during reviews
Gaining advantageous visibility in machine learning and establishing yourself as an expert on highly technical matters
Making money by competing for the Distill Prize

The articles are lengthy and demanding, but if you work in the field of machine learning, you've probably already accepted that the journey is worth it. The founders of the project have put a lot of effort into making the platform as impactful as possible for the whole community. As an author, as a reader, or even as an open source contributor, you can be part of it and advance both your accomplishments and the state of the art of machine learning. Distill as a project is also exciting to follow for its intrinsic philosophy and technologies. As engineers, we are overwhelmed by quality blog posts, which we read to keep our minds sharp.
The concept of explorable explanations could very well be a significant advancement in how we share knowledge online. Distill proves the technology is there, so this kind of publication could boost how a whole industry digests the flow of technologies. Explanations of really advanced concepts could become easier to grasp for beginners, or more actionable for busy senior engineers. I hope you will reflect on that and consider how Distill can move you forward.

About the author

Xavier Bruhiere is a Senior Data Engineer at Kpler. He is a curious, sharp entrepreneur and engineer who has built many projects, broken most of them, and launched and scaled what was left, learning from them all.
Dispelling the myths of hybrid cloud

Ben Neil
24 May 2017
6 min read
The words "vendor lock" worry me more than I'd like to admit. Whether it's having too many virtual machines in EC2, an expensive lambda in Google Functions, or any random offering I've been using to augment my on-premise Raspberry Pi cluster, vendor lock is really something I've feared. Over time, I've realized it has impacted the way I speak about off-premises services. Why? Because I got burned a few times. A few months back I got a classic 3 AM call asking me to check in on a service that was failing to report back to an on-premise Sensu server, and my superstitious mind immediately went to how that third-party service had let my coworkers down. After a quick check, nothing was badly broken; an unruly agent had merely hung on an on-premise virtual machine. I've had other issues like this, and I want to help dispel some of the myths around adopting hybrid cloud solutions. So, to those ends, what are some of these myths, and are they actually true?

It's harder and more expensive to use public cloud offerings

Given some of the places I've worked, one of my memories is using VMware to spin up new VMs, a process that could take up to ten minutes to get baseline provisioning. This was eventually corrected by using Packer to create an almost perfect VM; getting that into VMware images was time-consuming, but after boot the only thing left was informing the Salt master that a new node had come online. In this example, I was using those VMs to start up a Scala http4s application that would begin crunching through a mounted drive containing chunks of data. While the on-site solution was fine, there was still a lot of work to be done to orchestrate it. It worked, but I was bothered by the resources being taken up by my task. No one likes to talk with their coworkers about a 75-machine VM cluster that bursts into existence in the middle of the workday and sets off resource alarms.
Thus, I began reshaping the application using containers and Hyper.sh, which has led to some incredible successes (and alarms that aren't as stressful): basically, taking the (slightly modified) data that needed to be crunched and adding it to S3, then pushing my single image to Hyper.sh, creating 100 containers, crunching the data, removing those containers, and finally sending the finalized results to an on-premise service. Not only was time saved, but the workflow brought redundancy in data, better auditing, and less strain on the on-premise solution. So, while you can usually do all the work you need on-site, sometimes leveraging the options available from different vendors can create a nice web of redundancy and auditing. Buzzword bingo aside, the solution ended up being more cost-effective than using spot instances in EC2.

Managing public and private servers is too taxing

I'll keep this response brief: monitoring is hard, no matter whether the service, VM, database, or container is on-site or off. The same can be said for alerting, resource allocation, and cost analysis, but these are all aspects of modern infrastructure that are just par for the course. Letting superstition get the better of you when experimenting with a hybrid solution would be a mistake. The way I like to think of it is that as long as you have a way into your on-site servers that is locked down to those external nodes, you're all set. If you need to set up more monitoring, go ahead; the slight modification to Nagios or Zabbix rules won't take much coding, and the benefit of notifying on-call will always be at hand. An added benefit of a service that exists off-site may be a level of resiliency that wasn't accounted for on-site, through a provider's higher availability. For example, sometimes I use Monit to restart a service, or depend on systemd/upstart to restart a temperamental one.
Using AWS, I can set up alarms that trigger events to execute predefined runbooks, which handle a failure and save me from the aforementioned 3 AM wakeup. Note that both of these edge cases have their own solutions, which aren't "taxing", just par for the course.

Too many tools, not enough adoption

You're not wrong, but if your developers and operators are not embracing at least a rudimentary adoption of these new technologies, you may want to look at your culture. People should want to try to reduce cost through these new choices, even if that change is cautious; whether it's taking a second look at that S3 bucket or a Pivotal Cloud Foundry app, nothing should be immediately discounted, because taking the time to apply a solution to an off-site resource can often result in an immediate saving in manpower. Think about it for a moment: given whatever internal infrastructure you're dealing with, consider the number of people around to support that application. Sometimes it's nice to give them a break: to take that learning curve onto yourself and empower your team, and wiki of choice, to create a different solution to what is currently available in your local infrastructure. Whether it's a Friday code jam or just tackling a pain point in a difficult deployment, crafting better ways of dealing with those common difficulties through a hybrid cloud solution can create more options. Which, after all, is what a hybrid cloud is attempting to provide: options that can be used to reduce costs, increase general knowledge, and bolster an environment that invites more people to innovate.

About the author

Ben Neil is a polyglot engineer who has the privilege to fill a lot of critical roles, whether it's dealing with front/backend application development, system administration, integrating devops methodology, or writing. He has spent 10+ years creating solutions, services, and full lifecycle automation for medium to large companies.
He is currently focused on Scala, container, and unikernel technology, following a bleeding-edge open source community that brings the benefits to companies striving to be at the forefront of technology. He can usually be found either playing Dwarf Fortress or contributing on GitHub.
What are Edge Analytics?

Peter Worthy
22 May 2017
5 min read
We already know that mobile is a big market with growing opportunities. We are also hearing about the significant revenue that the IoT will generate; Machina Research predicts that the revenue opportunity will increase to USD 4 trillion by 2025. In the mainstream, both of these technologies are heavily reliant on the cloud, and as they become more pervasive, issues such as response delay and privacy are starting to surface. That's where Edge Computing and Edge Analytics can help.

Cloud, Fog and Mist

As the demand for more complex applications on mobile increases, we need to offload some of the computational demands from the mobile device. An example is speech recognition and processing applications such as Siri, which need to access cloud-based servers in order to process users' requests. The cloud enabled a wide range of services to be delivered on mobile due to almost unlimited processing and storage capability. However, the trade-off was the delay arising from the fact that the cloud infrastructure was often a long distance from the device. The solution is to move some of the data processing and storage either to a location closer to the device (a "cloudlet" or "edge node") or to the device itself. "Fog Computing" is where some of the processing and storage of data occurs between the end devices and cloud computing facilities. "Mist Computing" is where the processing and storage of data occurs in the end devices themselves. These are collectively known as "Edge Computing" or "Edge Analytics" and, more specifically for mobile, "Mobile Edge Computing" (MEC).

The benefits of Edge Analytics

As a developer of either mobile or IoT technology, Edge Analytics provides significant benefits.

Responsiveness

In essence, the proximity of the cloudlets or edge nodes to the end devices reduces response latency. Often, high bandwidth is possible and jitter is reduced.
This is particularly important for services that are sensitive to response delays or have high processing demands, such as VR or AR.

Scalability

By processing the raw data either in the end device or in the cloudlet, the demands placed on the central cloud facility are reduced, because smaller volumes of data need to be sent to the cloud. This allows a greater number of connections to that facility.

Maintaining privacy

Maintaining privacy is a significant concern for IoT. Processing data in either end devices or at cloudlets gives the owner of that data the ability to control what is released before it is sent to the cloud. Further, the data can be anonymized or aggregated before transmission.

Increased development flexibility

Developers of mobile or IoT technology are able to use more contextual information and a wider range of SDKs specific to the device.

Dealing with cloud outages

In March this year, Amazon AWS had a server outage, causing significant disruption for many services that relied upon its S3 storage service. Edge computing and analytics could effectively allow your service to remain operational by falling back to a cloudlet.

Examples

IoT technology is being used to monitor and then manage traffic conditions in a specific location. Identifying traffic congestion, determining the source of that congestion, and determining alternative routes requires fast data processing. Using cloud computing results in response delays associated with the transmission of significant volumes of data both to and from the cloud. Edge Analytics means the data is processed closer to that location, and results are sent to drivers in a much shorter time. Another example is supporting the distribution of localized content, such as the addition of advertisements to a video stream that is only being distributed within a small area, without having to divert the stream to another server for processing.
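The privacy benefit described above, aggregating and anonymizing on the edge node before anything is transmitted, can be sketched in a few lines. The readings and field names here are hypothetical, chosen only for illustration:

```python
from statistics import mean

def aggregate_readings(readings):
    """Summarize raw per-device readings into an anonymous aggregate."""
    values = [r["value"] for r in readings]
    return {
        "count": len(values),
        "mean": mean(values),
        "max": max(values),
        # Deliberately no device IDs: only the aggregate is transmitted.
    }

# Raw data as it might sit on the edge node (hypothetical shape).
raw = [
    {"device_id": "sensor-1", "value": 21.0},
    {"device_id": "sensor-2", "value": 22.0},
    {"device_id": "sensor-3", "value": 23.0},
]
payload = aggregate_readings(raw)  # only this summary goes to the cloud
print(payload)  # {'count': 3, 'mean': 22.0, 'max': 23.0}
```

The cloud sees enough to act on (counts, averages, peaks) while the identifying detail never leaves the owner's control.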
Open issues

As with all emerging technology, there are a number of open or unresolved issues for Edge Analytics. A major, non-technical issue is: what business model will support the provision of cloudlets or edge nodes, and what will be the consequent cost of providing these services? Security also remains a concern: how will the perimeter security of cloudlets compare to that implemented in cloud facilities? As IoT continues to grow, so will the variety of needs for the processing and management of data. How will cloudlets cope with this demand?

Increased responsiveness, flexibility, and greater control over data to reduce the risk of privacy breaches are strong (though not the only) reasons for adopting Edge Analytics in the development of your mobile or IoT service. It presents a real opportunity to differentiate your service offerings in a market that is only going to get more competitive in the near future. Consider the different options available to you, and the benefits and pitfalls of each.

About the author

Peter Worthy is an interaction designer currently completing a PhD exploring human values and the design of IoT technology in a domestic environment. Professionally, Peter's interests range across design research methods, understanding HCI and UX in emerging technologies, and physical computing. Pete also works at a university, tutoring across a range of subjects and supporting a project that seeks to develop context-aware assistive technology for people living with dementia.