Tech Guides

852 Articles

6 Ways to blow up your Microservices!

Aaron Lazar
14 Jul 2018
6 min read
Microservices are great! They've solved several problems created by large monoliths, like scalability, fault tolerance, and testability. However, let me assure you that everything's not rosy yet, and there are tonnes of ways you can blow your microservices to smithereens! Here are six surefire ways to meet failure with microservices, and to spice it up, I've included the Batman sound effects too!

Disclaimer: Unless you're Sheldon Cooper, what is and what isn't sarcasm should be pretty evident in this one!

#1 The Polyglot Conspiracy

One of the most talked-about benefits of the microservices pattern is that you can use a variety of tools and languages to build your application. Great! Let's say you're building an e-commerce website with a chat option, maybe VR/AR thrown in too, and then the necessities like a payment page. Obviously you'll want to build it with microservices. Now, you also thought you might have different teams work on the app using different languages and tools: maybe Java for the main app, Golang for some services, and JavaScript for something else. Moreover, you decided to use both Angular and React on various components of your UI. Then one day the React team needs to fix production bugs in Angular, because the Angular team called in sick. Your Ops team is probably pulling out their hair right now! You need to understand that different tech stacks behave differently in production. Going the microservices route doesn't give you a free ticket to go to town on polyglot services.

#2 Sharing isn't always Caring

Let's assume you've built an app where various microservices connect to a single, shared database. It's quite a good design decision, right? Simple, effective and what not. Now a business requirement calls for a change in the character length on one of the microservices. The team goes ahead and changes the length on one of the tables, and... That's not all: what if you decide to use connection pools so you can reuse requests to the database when required? Awesome choice! Imagine your microservices running amok, submitting query after query to the database. It would knock out every other service for weeks!

#3 WET is in; DRY is out?

Well, everybody's been saying Don't Repeat Yourself these days - architects, developers, my mom. Okay, so you've built an application based on event sourcing. There's a store of events, and a microservice in your application publishes a new event to the store when something happens. For the sake of an example, let's say it's a customer microservice that publishes an event "in-cart" whenever the customer selects a product. Another microservice, say "account", subscribes to that aggregate type and gets informed about the event. Now here comes the best part! Suppose your business asks for a field type to be changed. The easiest way out is to go WET (We Enjoy Typing): make the change in one microservice and copy the code to all the others. Imagine you've copied it across hundreds of microservices! Better still, you decided to avoid using Git and just use your event history to identify what's wrong! You'll be fixing bugs till you find a new job!

#4 Version Vendetta

We sometimes get carried away when building microservices. You toss Kafka out of the window and build your own framework for your microservices instead. Not a bad idea at all! Okay, so you've designed a framework for the app that runs on event sourcing. So naturally, every microservice that's connected will use event sourcing to communicate with the others. One fine day, your business asks for a major change in a part of the application, which you make, and the new version of one of the microservices sends the new event to the other microservices and... When you make a change in one microservice, you can't be sure that all the others will keep working unless their versions are changed too. You can make things worse by following a monolithic release plan for your microservices: keep your customers waiting for months to make their systems compatible, while your services are ready but you're waiting to release a new framework on a monolithic schedule. An awesome recipe for customer retention!

#5 SPA Treatment!

Oh yeah, Single Page Apps are a great way to build front-end applications! So your application is built on the REST architecture and your microservices are connected to a single, massive UI. One day, your business requests a new field to be added to the UI. Now, each microservice has its own domain model and the UI has its own domain model, so you're probably clueless about where to add the new field. So you identify some free space on the front end and slap it on! Side effects add to the fun: change a field on one service and the side effects ripple, passing on to the next microservice, and then the next, until they all blow up in series like dominoes. This could keep your testers busy for weeks, and no one will know where to look for the fault!

#6 Bye Bye Bye, N Sync

Let's say you've used synchronous communication for your e-commerce application. What you didn't consider is that not all your services are going to be online at the same time. An offline or slow service can lock or slow thread communication, ultimately blowing up your entire system, one service at a time! The best part is that it's not always possible to build an asynchronous communication channel between your services, so you'll have to use workarounds like local caches, circuit breakers, etc. (a minimal circuit-breaker sketch follows below).

So there you have it: six surefire ways to blow up your microservices and make your Testing and Ops teams go crazy! For those of you who think that microservices have killed the monolith, think again! For the brave who still wish to go ahead and build microservices, the above are examples of what you should beware of when you're building away at those microservices!

How to publish Microservice as a service onto Docker
How to build Microservices using REST framework
Why microservices and DevOps are a match made in heaven
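A quick aside on #6's circuit breaker: the pattern can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not anything from the article, and every name in it is made up) of how a client can fail fast instead of letting calls to a slow or offline service pile up:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after repeated failures, stop calling the
    downstream service for a while instead of letting blocked calls pile up."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures    # failures before the circuit opens
        self.reset_timeout = reset_timeout  # seconds to wait before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        # While open, fail fast until the reset timeout has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

You would wrap each downstream call, e.g. breaker.call(fetch_inventory, product_id), so one dead service degrades gracefully instead of dragging the rest of the system down with it.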


Packt Explains... Deep Learning in 90 seconds

Packt Publishing
01 Mar 2016
1 min read
If you've been looking into the world of Machine Learning lately, you might have heard about a mysterious thing called "Deep Learning". But just what is Deep Learning, and what does it mean for the world of Machine Learning as a whole? Take less than two minutes out of your day to find out with this video, and get a sense of the potential Deep Learning holds.


How to stay safe while using Social Media

Guest Contributor
08 Aug 2018
7 min read
The infamous Facebook and Cambridge Analytica data breach has sparked an ongoing and much-needed debate about user privacy on social media. Given how many people are on social media today, and how easy it is for anyone to access the information stored on those accounts, it's not surprising that these accounts can prove to be a goldmine for hackers and malicious actors. We often don't think about the things we share on social media as being a security risk, but if we aren't careful, that's exactly what they are. On the surface, much of what we share on social media sites and services seems innocuous and of little danger as far as our privacy or security is concerned. However, determined cybercriminals have learned how to exploit social media sites and gain access to them to gather information. This guide examines the security vulnerabilities of the most popular social media networks on the Internet and provides precautionary guidelines that you should follow.

Facebook's third-party apps: a hacker's paradise

If you take cybersecurity seriously, you should consider deleting your Facebook account altogether. The revelations of the last few years show the extent to which Facebook has allowed its users' data to be used, in many cases for purposes that directly oppose their best interests, while the social media giant has made only vague promises about how it will protect its users' data. If you are going to use Facebook, you should assume that anything you post there can and will be seen by third parties. We now know that the data of Facebook users whose friends have consented to share their data can be collected without their direct authorization. One of the most common ways that Facebook is used for undermining users' privacy takes the form of what seems like a fun game. These games consist of a name generator, in which users generate a pet name, a name of a celebrity, etc., by combining two words. These words are usually things like "mother's maiden name" or "first pet's name." The more astute readers will recognize that such information is regularly used as answers to secret questions in case you forget your password. By posting that information on your Facebook account, you are potentially granting hackers the information they need to access your accounts elsewhere. As a rule of thumb, it's best to grant as little access as possible to any Facebook app; a third-party app that asks for extensive privileges such as access to your real-time location, contact list, microphone, camera, email, etc., could prove to be a serious security liability.

Twitter: privacy as a binary choice

Twitter keeps things simple in regard to privacy. It's nothing like Facebook, where you can micro-manage your settings. Instead, Twitter keeps it binary: things are either public or private, and you can't change this for individual tweets. Whenever you use Twitter, ask yourself if you want other people to know where you are right now. Remember, if you are on holiday and your house is unattended, posting that information publicly could put your property at risk. You should also remember that any photos you upload with embedded GPS coordinates could be used to physically track you. Twitter automatically strips away EXIF data, but it still reads that data to provide suggested locations. For complete security, remove the data before you upload any picture.
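As an illustration, here is one possible way to strip metadata before uploading, sketched with the Pillow imaging library. This is an assumption on our part - any tool that re-encodes the pixels without copying the metadata will do, and the file names are hypothetical:

```python
from PIL import Image  # Pillow: pip install Pillow

def strip_exif(src_path: str, dst_path: str) -> None:
    """Re-save an image without its metadata (EXIF, GPS tags, etc.)."""
    img = Image.open(src_path)
    # Copy only the pixel data into a fresh image; metadata is left behind.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)

strip_exif("holiday.jpg", "holiday_clean.jpg")
```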
Finally, refrain from using third-party Twitter apps such as UberSocial, HootSuite, or Tweetbot. If you're going for maximum security, avoid using any at all!

Instagram: location, location, location

The whole idea behind Instagram is the sharing of photos and videos. It's true that sharing your location is fun and even convenient, yet few users truly understand the implications of sharing such information. Just as it's not a great idea to tell a random stranger on the street that you're going out, the same concept applies to posts and stories that indicate your current location. Refrain from location tagging as much as possible. It's also a good idea to remove any EXIF data before posting any photo; in fact, you should consider turning off your location data altogether. Additionally, consider making your profile private. It's a great feature that's often overlooked. With this setting on, you'll be able to review every single follower before they gain access to your content. Remember that if your profile remains public, anyone can see your posts and follow your stories, which in most instances highlight your daily activities. Giving that kind of information to total strangers online could have detrimental outcomes, to put it lightly.

Reddit: a privacy safe haven

Reddit is one of the best social media sites for anonymity. For one thing, you never have to disclose any personal information to register with Reddit. As long as you make sure never to share any personally identifiable information and you keep your location data turned off, it's easy to use Reddit with complete anonymity. Though Reddit's track record is almost spotless when it comes to security and privacy, it's essential to understand that your account on this platform could still be compromised, because your email address is directly linked to your Reddit account. Thus, if you want to protect your account from possible hacks, you must take precautionary steps to secure your email account as well. Remember: everything's connected on the Internet.

VPN: a universal security tool

A virtual private network (VPN) will enhance your overall online privacy and security. When you use a VPN, even the website you visit won't be able to trace you; it will only know the location of the server you're connected to, which you can choose. All the data you send or receive is encrypted with a military-grade cipher. In many cases, VPN providers offer further features to enhance privacy and security. Quite a few VPN services can already identify and blacklist potentially malicious ads, pop-ups, and websites, and with continuous updates to such databases, the feature will only get better. Additionally, DNS leak protection and automatic kill switches ensure that snoopers have virtually no chance of intercepting your connection. Using a VPN is a no-brainer; if you still don't have one, rest assured that it will be one of the best investments in your online security and privacy.

Staying safe on social media won't happen automatically, unfortunately. It takes effort. Make sure to check the settings available on each platform, and carefully consider what you are sharing. Never share anything so sensitive that it would be a disaster if it were accidentally exposed to all your followers. Besides optimizing your privacy settings, make use of virtual security solutions such as VPN services and antimalware tools.
Take these security measures and remain vigilant - that way you'll remain safe on social media.

About the author

Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Mozilla's new Firefox DNS security updates spark privacy hue and cry
Google to launch a censored search engine in China, codenamed Dragonfly
Did Facebook just have another security scare?
Time for Facebook, Twitter and other social media to take responsibility or face regulation


How to secure your crypto currency

Guest Contributor
08 Sep 2018
8 min read
Managing and earning cryptocurrency is a lot of hassle, and losing it is a lot like losing yourself. While the security of this blockchain-based currency is a major concern, here is what you can do to secure your crypto fortune. With ever-fluctuating crypto rates, it's always now or never. Bitcoin climbed to $17,900 at its peak; the digital currency frenzy is always in trend, and its security is crucial. No crypto geek wants to lose their currency to malicious activity, negligence, or any other reason. Before we delve into securing our cryptocurrencies, let's discuss the structure and strategy of this crypto vault that ensures the security of a blockchain-based digital currency.

Why blockchains are secure, at least in theory

Three core elements contribute to making the blockchain a (theoretically) foolproof digital technology:

- Public key cryptography
- Hashing
- Digital signatures

Public key cryptography

This form of cryptography involves two distinct keys: a private key and a public key. The keys encrypt and decrypt data asymmetrically: data encrypted with the private key can only be decrypted with the public key, and data encrypted with the public key can only be decrypted with the private key. Various security schemes, including TLS (Transport Layer Security) and its predecessor SSL (Secure Sockets Layer), have this system at their core. The strategy works as long as you publish your public key to the blockchain world while keeping your private key confidential, never revealing it on any platform or place.

Hashing

Also called a digest, the hash of a message is calculated from the contents of the message. A hashing algorithm takes data of arbitrary length as input and deterministically produces a hash of a predefined length. Because the process is deterministic, the same input always produces the same hash. Mathematically, it's easy to compute the hash of a message, but it is prohibitively difficult to recover the original message from its hash.

Digital signatures

A digital signature is the hash of a message, encrypted with a private key. Anyone who has access to the corresponding public key can decrypt the signature to recover the original hash. Anyone who can read the message can also calculate its hash independently and compare that calculated hash with the decrypted one. If the two hashes match, it confirms two things: the message has remained unaltered from creation to reception (an altered message would produce a different hash), and the message was signed by the holder of the matching private key.
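To make the sign-and-verify round trip concrete, here is a minimal sketch using the Python cryptography package (assuming a recent version of the library; the curve choice and message are illustrative, not a real transaction format):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair: keep the private key secret, publish the public key.
private_key = ec.generate_private_key(ec.SECP256K1())  # curve used by Bitcoin
public_key = private_key.public_key()

message = b"send 0.5 coins to wallet X"  # illustrative payload only

# Sign: the library hashes the message (SHA-256) and signs the digest
# with the private key, producing the digital signature.
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verify: anyone holding the public key can check the signature.
# If the message or signature was altered, this raises InvalidSignature.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature valid: message unaltered and signed by the key holder")
```

If either the message or the signature is tampered with, verify() raises an exception instead of returning - which is exactly the unalterability guarantee described above.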
What are crypto wallets and transactions?

Every crypto wallet is a collection of one or more wallets. At its core, a crypto wallet is a private key: from it, a public key can be created, and from the public key, a public wallet address. This makes a cryptocurrency wallet essentially a set of private keys. To enable sharing wallet addresses with the public, they are converted into QR codes, eliminating the need for secrecy: one can show a QR code to the world without any hesitation, and anyone can send cryptocurrency to that wallet address. However, spending cryptocurrency needs a private key, and currency sent to a wallet is owned by the wallet's owner.

In order to transact in cryptocurrency, a transaction is created, and it is public information. A cryptocurrency transaction is a collection of the information a blockchain needs; the only required data is the destination wallet's address and the amount to be transferred. While anyone can create a transaction, transactions are only accepted by the blockchain once they are confirmed by multiple members of the network. A transaction must be digitally signed by a private key in order to be valid; otherwise, it is treated as invalid. In other words, you sign a transaction with your private key and submit it to the blockchain; once the network verifies the signature against the public key data, the transaction is included in the blockchain and thereby validated.

Why you should guard your private key

An attack on your private key is an attempt to steal your cryptocurrency. Using your private key, an attacker can digitally sign transactions from your wallet address to their own. Alternatively, an attacker can destroy your private keys, ending your access to your crypto wallet.

What are the risk factors involved in owning a crypto wallet?

Before we move on to building a security wall around our cryptocurrency, it is important to know whom we are protecting our digital currency from, and who can prove to be a threat to our crypto wallets. If you lose access to your cryptocurrency, you have lost it all: there is no ledger with a centralized authority, and once you lose access, you can't regain it by any means. Since a crypto wallet is a private/public key pair, losing the private key means losing your wallet; in other words, you no longer own any cryptocurrency. This is the first and foremost threat. The next threat is the one we hear about most often: attackers who want to gain access to our cryptocurrency, whether opportunists or people with their own private intentions.

Threats to your cryptocurrency

Opportunist hackers are low-profile attackers who, for instance, get access to your laptop and transfer money to their public wallet address. Opportunist hackers don't target a person specifically, but if they get access to your cryptocurrency, they won't shy away from taking your digital cash. Dedicated attackers, on the other hand, work alone or in groups of hackers with a sole purpose: stealing cryptocurrency. Their targets include individuals, crypto traders, and even crypto exchanges. They initiate phishing campaigns and, before executing an attack, get well-versed with their target by conducting prior research.
Level 2 attackers take a broader approach and write malicious code that can steal private keys from any system it infects. Another kind of attacker is backed by nation states: collective groups with top-level coordination and established financing, motivated by access to funds or political will. The cryptocurrency attacks by the Lazarus Group, backed by North Korea, are an example.

How to protect your crypto wallet

Regardless of the kind of threat, it is you and your private key that need to be secured. Here's how to ensure maximum security for your cryptocurrency. Throw away your access keys and you will lose your cryptocurrency forever - obviously, you would never do that deliberately, so here are some practical ways to secure your cryptocurrency fortune:

- Go through the complete password recovery process. This means going through the process of forgetting the password and creating a multi-factor token. Take these measures while setting up a new hosted wallet, or be prepared to lose it all.
- No matter how fast the tech world progresses, the basics remain the same. Keep a printed paper backup of your keys, placed in a secure location such as a bank locker or a personal safe. Don't forget to wipe the printer's memory after printing, as printed files can be restored and reused to hack your digital money.
- Do not keep those keys on your person, and do not hide them in a closet that can be damaged by fire, theft, etc.
- If your wallet has multi-signature enabled and uses two keys to authorize transactions, extend it to three keys. With the third key controlled by a trusted party, it will help you in the absence of the second person.

About the author

Tahha Ashraf is a Digital Content Producer at Cubix, a mobile app development company. He is a certified Hubspot inbound and content marketer. He loves talking about brands, tech, blockchain and content marketing. Along with writing for the online fraternity on a variety of topics, he is fond of creativity and writes poetry in his free time.

Cryptocurrency-based firm, Tron acquires BitTorrent
Can Cryptocurrency establish a new economic world order?
Akon is planning to create a cryptocurrency city in Senegal


Is Future-Fetcher/Context API replacing Redux?

Amarabha Banerjee
13 Jul 2018
3 min read
At JSConf 2018, Redux creator Dan Abramov announced a small tool he built to simplify data fetching and state management, called future-fetcher, built on the React Context API. Redux is, so far, one of the most popular state management tools used with React. Even Angular users are quite fond of it, and Vue also has a provision for using Redux in its ecosystem. However, tools like MobX are gaining popularity because of their simplicity and ease of use.

What's the problem with Redux?

The honest answer is that it's simply too complicated. To understand the real probability of it being replaced by another tool, we have to understand how it works.

[Figure: the basic Flux architecture workflow. Source: Freecodecamp.org]

The figure above shows how the basic Flux architecture functions, and Redux is based quite heavily on this architecture model. This can be very complicated for a novice web developer. A beginner-level developer might get overwhelmed by functional programming concepts like 'creator', 'dispatcher', and 'action' functions, and by knowing when to use each. Redux follows the same application logic, and those who are not comfortable with functional programming might find using Redux quite cumbersome.

That's where future-fetcher and the Context API come in. The Context API is a production-grade, efficient API that supports features like static type checking and deep updates. In React, the application's levels and layers consist of React components nested within each other; they are connected like a tree, and if one component needs to change its state and pass the information on to the next component, it passes an entity called a 'prop'. State management is important because you want your application layers to be consistent with your data, so that when one component changes state, the relevant data is passed on to the components that need to respond accordingly. In Redux, you have to write the functions mentioned above to implement this. But with the Context API, the architecture looks a bit different from the Redux/Flux architecture, and there lies the difference.

[Figure: the Context API architecture. Source: Freecodecamp.org]

With the Context API, the need to write functions like action and dispatch vanishes, which makes the developer's job much easier. Here, we only have the 'view' and the 'store' components, where the store contains the dynamic state of the application layers. This simplifies a lot of processes, although scaling might be an issue with this particular form of architecture. Still, for normal web applications where dynamic and real-time behavior are important, the Context API provides a much easier path to implementation. Since this feature has been developed by the primary architect of Redux, the developer community is of the opinion that Redux might face a tough challenge in the days to come. Still, it's early days to say: Game Over, Redux.

Creating Reusable Generic Modals in React and Redux
Connecting React to Redux & Firebase - Part 1
Connecting React to Redux and Firebase - Part 2


Top 10 IT certifications for cloud and networking professionals in 2018

Vijin Boricha
05 Jul 2018
7 min read
Certifications have always proven to be one of the best ways to boost one's IT career. Irrespective of the domain you choose, you will always have an upper hand if your resume showcases some valuable IT certifications. Certified professionals attract employers because certifications are an external validation that an individual is competent in a given technical skill. Certifications push individuals to start thinking outside the box, become more efficient in what they do, and execute goals with minimal errors. If you are looking to enhance your skills and increase your salary, this is a tried and tested method. Here are the top 10 IT certifications that will help you advance your IT career.

AWS Certified Solutions Architect - Associate

AWS is currently the market leader in the public cloud, as the Packt Skill Up Survey 2018 confirms.

[Figure: cloud platform usage. Source: Packt Skill Up Survey 2018]

AWS Cloud from Amazon offers a cutting-edge platform for architecting, building, and deploying web-scale cloud applications. With rapid adoption of the cloud platform, the need for cloud certifications has also increased. IT professionals with some experience of AWS Cloud who are interested in designing effective cloud solutions opt for this certification. This exam promises to scale up your ability to architect and deploy secure and robust applications on AWS technologies. Individuals who fail an attempt must wait 14 days before they are eligible to retake the exam; there is no attempt limit. AWS certification passing scores depend on statistical analysis and are subject to change.

Exam fee: $150
Average salary: $119,233 per annum
Number of questions: 65
Type of questions: MCQ
Available languages: English, Japanese

AWS Certified Developer - Associate

This is another role-based AWS certification that has gained enough traction for industries to treat it as a job validator. The exam helps individuals validate the software development knowledge needed to develop cloud applications on AWS. IT professionals with hands-on experience in designing and maintaining AWS-based applications should definitely go for this certification to stand out. Individuals who fail an attempt must wait 14 days before they are eligible to retake the exam; there is no attempt limit. AWS certification passing scores depend on statistical analysis and are subject to change.

Exam fee: $150
Average salary: $116,456 per annum
Number of questions: 65
Type of questions: MCQ
Available languages: English, Simplified Chinese, and Japanese

Project Management Professional (PMP)

Project Management Professional is one of the most valuable certifications for project managers. The beauty of this certification is that it not only teaches individuals creative methodologies but makes them proficient in any industry domain they look forward to pursuing. The techniques and knowledge gained from this certification are applicable in any industry globally. This certification promises that PMP-certified project managers are capable of completing projects on time, within the desired budget, while meeting the original project goal.

Exam fee: $555 for non-PMI members / $405 for PMI members
Average salary: $113,000 per annum
Number of questions: 200
Type of questions: a combination of multiple choice and open-ended
Passing threshold: 80.6%

Certified Information Systems Security Professional (CISSP)

CISSP is one of the globally recognized security certifications. This cybersecurity certification is a great way to demonstrate your expertise and build industry-level security skills. On achieving this certification, users will be well-versed in designing, engineering, implementing, and running an information security program. Users need at least 5 years of working experience in order to be eligible for this certification, which measures your competence in designing and maintaining a robust security environment.

Exam fee: $699
Average salary: $111,638 per annum
Number of questions: 250 (each question carries 4 marks)
Type of questions: multiple choice
Passing threshold: 700 marks

CompTIA Security+

CompTIA Security+ is a vendor-neutral certification used to kick-start a career as a security professional. It acquaints users with all aspects of IT security. If you are inclined towards systems administration, network administration, or security administration, this is something you should definitely go for. With this certification, users learn the latest trends and techniques in risk management, risk mitigation, threat management, and intrusion detection.

Exam fee: $330
Average salary: $95,829 per annum
Number of questions: 90
Type of questions: multiple choice
Available languages: English (Japanese, Portuguese and Simplified Chinese estimated Q2 2018)
Passing threshold: 750/900

CompTIA Network+

Another CompTIA certification! Why? CompTIA Network+ helps individuals develop their career and validates their skills to troubleshoot, configure, and manage both wired and wireless networks. So, if you are an entry-level IT professional interested in managing, maintaining, troubleshooting, and configuring complex network infrastructures, this one is for you.

Exam fee: $302
Average salary: $90,280 per annum
Number of questions: 90
Type of questions: multiple choice
Available languages: English (in development: Japanese, German, Spanish, Portuguese)
Passing threshold: 720 (on a scale of 100-900)

VMware Certified Professional 6.5 - Data Center Virtualization (VCP6.5-DCV)

Yes, even today virtualization is highly valued in a lot of industries. The Data Center Virtualization certification helps individuals develop the skills and abilities to install, configure, and manage a vSphere 6.5 infrastructure. This industry-recognized certification validates users' knowledge of implementing, managing, and troubleshooting a vSphere 6.5 infrastructure. It also helps IT professionals build a foundation for business agility that can accelerate the transformation to cloud computing.

Exam fee: $250
Average salary: $82,342 per annum
Number of questions: 46
Available language: English
Type of questions: single and multiple choice
Passing threshold: 300 (on a scale of 100-500)

CompTIA A+

Yet another CompTIA certification that gives entry-level IT professionals an upper hand. This certification is especially for individuals interested in building a career in technical support or IT operational roles. If you are thinking about more than just PC repair, this one is for you. By entry-level I mean a certification that one can pursue while still in college or secondary school. CompTIA A+ is a basic version of Network+, as it only touches on basic network infrastructure issues while making you proficient to industry standards.

Exam fee: $211
Average salary: $79,390 per annum
Number of questions: 90
Type of questions: multiple choice
Available languages: English, German, Japanese, Portuguese, French and Spanish
Passing threshold: 72% for the 220-801 exam and 75% for the 220-802 exam

Cisco Certified Network Associate (CCNA)

Cisco Certified Network Associate (CCNA) Routing and Switching is one of the most important IT certifications for keeping your networking skills up to date. It is a foundational certification for individuals interested in a high-level networking profession. The exam helps candidates validate their knowledge and skills in networking, LAN switching, IPv4 and IPv6 routing, WAN, infrastructure security, and infrastructure management. This certification not only validates your networking fundamentals but also helps you stay relevant with the skills needed to adopt next-generation technologies.

Exam fee: $325
Average salary: $55,166-$90,642
Number of questions: 60-70
Available languages: English, Japanese
Type of questions: multiple choice
Passing threshold: 825/1000

CISM (Certified Information Security Manager)

Lastly, we have Certified Information Security Manager (CISM), a nonprofit certification offered by ISACA that caters to security professionals involved in information security, risk management, and governance. This is an advanced-level certification for experienced individuals who develop and manage enterprise information security programs. Only those who hold five years of verified experience, including three years of experience in infosec management, are eligible for this exam.

Exam fee: $415-$595 (cheaper for members)
Average salary: $52,402 to $243,610
Number of questions: 200
Passing threshold: 450 (on a scale of 200-800)
Type of questions: multiple choice

Are you confused as to which certification you should take up? Set the noisy thoughts aside and choose wisely: pick an exam that aligns with your interests. If you want to pursue IT security, don't end up going for cloud certifications. No career option is fun unless you pursue it wholeheartedly. Take the right step and make it count.

Why is AWS the preferred cloud platform for developers working with big data?
5 reasons why your business should adopt cloud computing
Top 5 penetration testing tools for ethical hackers

Five Most Surprising Applications of IoT

Raka Mahesa
16 Aug 2017
5 min read
The Internet of Things has been growing for quite a while now. The promise of smart, connected gadgets has resulted in many, many applications of the Internet of Things. Some of these projects are useful, some are not; some applications, like smart TVs, smartwatches, and smart homes, are expected, whereas others are not. Let's look at a few surprising applications that tap into the Internet of Things, starting with a project from Google.

1. Google's Project Jacquard

Simply put, Project Jacquard is a smart jacket - a literal piece of clothing that you can wear that is connected to your smartphone. By tapping and swiping on the jacket sleeve, you can control the music player and map application on your smartphone. This project is a collaboration between Google and Levi's, where Google invented a fabric that can read touch input and Levi's applied the technology to a product people will actually want to wear. Even now, the idea of a fabric that we can interact with boggles my mind. My biggest problem with wearables like smartwatches and smart bands is that they feel like another device we need to take care of. A jacket, meanwhile, is something that we just wear, with its smart capability being an additional benefit. Not to mention that connected fabric allows more aspects of our daily life to be integrated with our digital life. That said, Project Jacquard is not the first smart clothing; other projects, like Athos, embed sensors in their clothing. Still, Project Jacquard is the first that allows people to actually interact with their clothing.

2. Hapifork

Hapifork was actually one of the first smart gadgets I was aware of. As the name alludes, Hapifork is a smart fork with a capacitive sensor, motion sensor, vibration motor, and a micro USB port. You might wonder why a fork needs all those bells and whistles. Well, you see, Hapifork uses those sensors to detect your eating motion and alerts you if you are eating too fast. After all, eating too fast can cause weight gain and other physical issues, so the fork tries to help you live a healthier life. While the idea has some merit, I'm still not sure an unwieldy smart fork is a good way to make us eat healthier; I think actually eating healthy food is a better way to do that. That said, the idea of smart eating utensils is fascinating. I would totally get a smart plate capable of counting the calories in my food.

3. Smart food makers

In 2016 there was a wave of smart food-making devices that started and successfully completed their crowdfunding projects. These devices are designed to make it easier and quicker for people to prepare food - much easier than just using a microwave oven, that is. The problem is, these devices are pricey and each is only able to prepare a specific type of food. There is CHiP, which can bake various kinds of cookies from a set of dough, and there is Flatev, which can bake tortillas from a pod of dough. While the concept may initially sound weird, having a specific device to make a specific type of food is actually not that strange. After all, we already have machines that only make a cup of fresh coffee, so a machine that only makes a fresh plate of cookies could be the next natural step.

4. Smart tattoos

Of all the things that could be smart and connected, a tattoo is definitely not the one that comes to my mind. But apparently that's not the case for plenty of researchers around the world. There have been a couple of bleeding-edge projects that resulted in connected tattoos. L'Oreal has created tattoos that can detect ultraviolet exposure, and Microsoft and MIT have created tattoos that users can use to interact with smartphones. And late last year, a group of researchers created a tattoo with an accelerometer that can detect a user's heartbeat. So far, wearables have been smart accessories that you wear daily. Since you also wear your skin every day, would it count as a wearable too?

5. Oombrella

If you ever thought that humans aren't creative creatures, just remember that it was also a human who invented the concept of a smart umbrella. Oombrella is a connected umbrella that will notify you when it's about to rain and will also notify you if you've left it behind in a restaurant. These functionalities may sound passable at first, until you realize that the weather notification comes from your smartphone, and you just need a weather app instead of a smart umbrella. That said, this project has been successfully crowdfunded, so maybe people actually do want a smart umbrella.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


Developers are today's technology decision makers

Richard Gall
01 Aug 2017
3 min read
For many years, technology in large organizations was defined by established vendors: Oracle, Microsoft. Huge corporations set the agenda when it came to the technology used by businesses. These tech organizations provided solutions; everyday businesses simply signed up. But this year's Skill Up survey painted an interesting picture of a world in which developers and tech professionals have a significant degree of control over the tools they use. When we asked respondents how much choice they have over the tools they use at work, half reported having at least a significant amount of choice over the software they use. This highlights an important fact of life for tech pros, engineers, and developers across the globe: your job is not just about building things and shipping code, it's also about understanding the tools that are going to help you do that.

To be more specific, this highlights that open source is truly mainstream. What evolved as a cultural niche of sorts in the late nineties has become fundamental to the way we understand technology today. Yes, it's true that large tech conglomerates like Apple, Facebook, and Google have a huge hold on consumers across the planet, but they aren't encouraging lock-in the way the previous generation of tech giants did. In fact, they are actually pushing open source into the mainstream. Facebook built React; Google are the minds behind Golang and TensorFlow; Apple have done a lot to evolve Swift into a language that may come to dominate the wider programming landscape. We are moving to a world of open systems, where interoperability reigns supreme. Companies like Facebook, Google, and Apple want consumer control, but when it comes to engineering and programming, they want to empower people - people like you.

If you're not convinced, take the case of Java. Java is interesting because, in many respects, it's a language that was representative of the closed systems of enterprise tech a decade ago. But its function today has changed: it's one of the most widely used programming languages on GitHub, appearing in a huge range of open-source projects. C# is similar; in it you can see how Microsoft's focus has changed, the organization's stance on open source softening to become more invested in a culture where openness is the engine of innovation.

Part of the reason for this is broader economic change in the very foundations of how software is used today and what organizations need to understand. As trends such as microservices have grown, and as APIs become more important to the development and growth of businesses - those explicitly rooted in software or otherwise - software necessarily must become open and changeable. And, to take us back to where we started, the developers, programmers, and engineers who build and manage those systems must be open and alive to the developing landscape of software they can use in the future.

Decision making, then, is a critical part of what it means to work in software. That may not have always been the case, but today it's essential. Make sure you're making the right decision. Read this year's Skill Up report for free.


4 misconceptions about data wrangling

Sugandha Lahoti
17 Oct 2018
4 min read
Around 80% of the time in data analysis is spent on cleaning and preparing data. This is, however, an important task, and a prerequisite to the rest of the data analysis workflow: visualization, analysis, and reporting. Despite its importance, there are certain myths associated with data wrangling that developers should be cautious of. In this post, we will discuss four such misconceptions.

Myth #1: Data wrangling is all about writing SQL queries

There was a time when data processing required data to be presented in a relational manner so that SQL queries could be written against it. Today, there are many other types of data sources in addition to classic static SQL databases. Often, an engineer has to pull data from diverse sources such as web portals, Twitter feeds, sensor fusion streams, or police and hospital records. Static SQL queries can only help so much in those diverse domains. A programmatic approach, flexible enough to interface with myriad sources and able to parse the raw data through clever algorithmic techniques and the use of fundamental data structures (trees, graphs, hash tables, heaps), will be the winner.

Myth #2: Knowledge of statistics is not required for data wrangling

Quick statistical tests and visualizations are always invaluable for checking the 'quality' of the data you sourced. These tests can help detect outliers and wrong data entries, without running complex scripts. For effective data wrangling, you don't need knowledge of advanced statistics, but you must understand basic descriptive statistics and know how to execute them using built-in Python libraries (see the short sketch at the end of this piece).

Myth #3: You have to be a machine learning expert to do great data wrangling

Deep knowledge of machine learning is certainly not a prerequisite for data wrangling. It is true that the end goal of data wrangling is often to prepare the data for use in a machine learning task downstream. As a data wrangler, you do not have to know all the nitty-gritty of your project's machine learning pipeline. However, it is always a good idea to talk to the machine learning expert who will use your data, and to understand the data structure interface and format they need to run the model fast and accurately.

Myth #4: Deep knowledge of programming is not required for data wrangling

As explained above, the diversity and complexity of data sources require you to be comfortable with deep notions of fundamental data structures and how a programming language paradigm handles them. Deepening your knowledge of the programming framework (Python, for example) will surely help you come up with innovative methods for dealing with data source interfacing and data cleaning issues. The speed and efficiency of your data processing pipeline can often benefit from advanced knowledge of basic algorithms, e.g. search, sort, graph traversal, and hash table building. Although built-in methods in standard libraries are optimized, having this knowledge gives you an edge in any situation.

You just read a guest post from Tirthajyoti Sarkar and Shubhadeep Roychowdhury, the authors of Data Wrangling with Python. We hope these misconceptions help you realize that data wrangling is not as difficult as it seems. Have fun wrangling data!

About the authors

Dr. Tirthajyoti Sarkar works as a Sr. Principal Engineer in the semiconductor technology domain, where he applies cutting-edge data science/machine learning techniques for design automation and predictive analytics. Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cybersecurity startup. He holds a Master's Degree in Computer Science from West Bengal University of Technology and certifications in Machine Learning from Stanford. Don't forget to check out Data Wrangling with Python to learn the essential basics of data wrangling using Python.

30 common data science terms explained
Python, Tensorflow, Excel and more - Data professionals reveal their top tools
How to create a strong data science project portfolio that lands you a job
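As a concrete companion to Myth #2, here is the kind of quick, script-free sanity check the authors have in mind, sketched with nothing but Python's standard library (the readings below are made-up sensor data):

```python
import statistics

readings = [9.8, 10.1, 10.4, 9.9, 10.0, 57.2, 10.2]  # hypothetical sensor data

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag anything more than two standard deviations from the mean
# as a candidate outlier or data-entry error worth inspecting.
outliers = [x for x in readings if abs(x - mean) > 2 * stdev]
print(f"mean={mean:.2f}, stdev={stdev:.2f}, outliers={outliers}")
```

On this data, the 57.2 reading is flagged immediately - exactly the kind of wrong entry a quick descriptive-statistics pass catches before it poisons the downstream analysis.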


Streamline your application development process in 5 simple steps

Guest Contributor
23 Apr 2019
7 min read
Chief Information Officers (CIOs) are under constant pressure to deliver substantial results that meet business goals. Planning a project and seeing it through to the end is a critical requirement of an effective development process. In the fast-paced world of software development, getting results is an essential key for businesses to flourish. There is a certain pleasure you get from ticking off tasks from your to-do lists. However, this becomes a burden when you are drowning with a lot of tasks on your head. Signs of inefficient processes are prevalent in every business. Unhappy customers, stressed out colleagues, disappointing code reviews, missed deadlines, and increases in costs are just some of the examples that are the direct result of dysfunctional processes. By streamlining your workflow you will be able to compete with modern technologies like Machine Learning and Artificial Intelligence. Gaining access to such technologies will also help you to automate the workflow, making your daily processes even smoother. Listed below are 5 steps that can help you in streamlining your development process. Step 1: Creating a Workflow This is a preliminary step for companies who have not considered creating a better workflow. A task is not just something you can write down, complete, and tick-off. Complex, software related tasks are not like the “do-the-dishes” type of tasks. Usually, there are many stages in software development tasks like planning, organizing, reviewing, and releasing. Regardless of the niche of your tasks, the workflow should be clear. You can always use software tools such as Zapier, Nintex, and ProcessMaker, etc. to customize your workflow and assign levels-of-importance to particular tasks. This might appear as micro-management at first, but once it becomes a part of the daily routine, it starts to get easier. Creating a workflow is probably the most important factor to consider when you are preparing to streamline your software development processes. There are several steps involved when creating a workflow: Mapping the Process Process mapping mainly focuses on the visualization of the current development process which allows a top-down view of how things are working. You can do process mapping via tools such as Draw.io, LucidCharts, and Microsoft Visio, etc. Analyze the Process Once you have a flowchart or a swim lane diagram setup, use it to investigate the problems within the process. The problems can range from costs, time, employee motivation, and other bottlenecks. Redesign the Process When you have identified the problems, you should try to solve them step by step. Working with people who are directly involved in the process (e.g Software Developers) and gaining an on-the-ground insight can prove very useful when redesigning the processes. Acquire Resources You now need to secure the resources that are required to implement the new processes. With regards to our topic, it can range from buying licensed software, faster computers, etc. Implementing Change It is highly likely that your business processes change with existing systems, teams, and processes. Allocate your time to solving these problems, while keeping the regular operations in the process. Process Review This phase might seem the easiest, but it is not. Once the changes are in place, you need to review them accordingly so that they do not rise up again Once the workflow is set in place, all you have to do is to identify the bugs in your workflow plan. 
The bugs can range anywhere from slow tasks, re-opening of finished tasks, to dead tasks. What we have observed about workflows is that you do not get it right the first time. You need to take your time to edit and review the workflow while still being in the loop of the workflow. The more transparent and active your process is, the easier it gets to spot problems and figure out solutions. Step 2: Backlog Maintenance Many times you assume all the tasks in your backlog to be important. They might have, however, this makes the backlog a little too jam-packed. Well, your backlog will not serve a purpose unless you are actively taking part in keeping it organized. A backlog, while being a good place to store tasks, is also home to tasks that will never see the light of day. A good practice, therefore, would be to either clean up your backlog of dead tasks or combine them with tasks that have more importance in your overall workflow. If some of the tasks are relatively low-priority, we would recommend creating a separate backlog altogether. Backlogs are meant to be a database of tasks but do not let that fact get over your head. You should not worry about deleting something important from your backlog, if the task is important, it will come back. You can use sites like Trello or Slack to create and maintain a backlog. Step 3: Standardized Procedure for Tasks You should have an accurate definition of “done”. With respect to software development, there are several things you need to consider before actually accomplishing a task. These include: Ensure all the features have been applied The unit tests are finished Software information is up-to-date Quality assurance tests have been carried out The code is in the master branch The code is deployed in the production This is simply a template of what you can consider “done” with respect to a software development project. Like any template, it gets even better when you include your additions and subtractions to it. Having a standardized definition of “done” helps remove confusion from the project so that every employee has an understanding of every stage until they are finished. and also gives you time to think about what you are trying to achieve. Lastly, it is always wise to spend a little extra time completing a task phase, so that you do not have to revisit it several times. Step 4: Work in Progress (WIP) Control The ultimate factor that kills workflow is multi-tasking. Overloading your employees with constant tasks results in an overall decline in output. Therefore, it is important that you do not exert your employees with multiple tasks, which only increases their work in progress. In order to fight the problem of multitasking, you need to reduce your cycle times by having fewer tasks at one time. Consider setting a WIP limit inside your workflow by introducing limits for daily and weekly tasks. This helps to keep control of the employee tasks and reduces their burden. Step 5: Progress Visualization When you have everything set up in your workflow, it is time to represent that data to present and potential stakeholders. You need to make it clear that all of the features are completed and the ones you are currently working on. And if you will be releasing the product on time or no? A good way to represent data to senior management is through visualizations. With visualizations, you can use tools like Jira or Trello to make your data shine even more. 
Step 5: Progress Visualization

When you have everything set up in your workflow, it is time to present that data to current and potential stakeholders. You need to make clear which features are complete, which ones you are currently working on, and whether you will be releasing the product on time. A good way to present this to senior management is through visualizations, and tools like Jira or Trello can make your data shine even more.

For representing the data you can also use various free online tools, or software like Microsoft PowerPoint or Excel. Whatever tools you use, your end goal should be to make the information as simple as possible for the stakeholders; avoid clutter and overly technical detail.

These are not the only methods you can use, however. Look around your company and see where your current processes are lacking. Take note of all of them, and research how you can change them for the better.

Author Bio

Shawn Mike has been working with writing-challenged clients for over five years. He provides ghostwriting and copywriting services. His educational background in the technical field and business studies has given him the edge to write on many topics. He occasionally writes blogs for Dynamologic Solutions.

Microsoft Store updates its app developer agreement, to give developers up to 95% of app revenue
React Native Vs Ionic: Which one is the better mobile app development framework?
9 reasons to choose Agile Methodology for Mobile App Development

Machine learning APIs for Google Cloud Platform

Amey Varangaonkar
28 Jun 2018
7 min read
Google Cloud Platform (GCP) is considered one of the Big 3 cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution with AI capabilities that let you design and develop smart models to turn your data into insights at an affordable cost. The following excerpt is taken from the book 'Cloud Analytics with Google Cloud Platform' authored by Sanket Thodge.

GCP offers many machine learning APIs, among which we take a look at the 3 most popular:

Cloud Speech API

A powerful API from GCP! It enables the user to convert speech to text using a neural network model, and it recognizes over 100 languages from around the world. It can also filter unwanted noise and content out of a transcription, under various types of environments. It supports context-aware recognition and works on any device, on any platform, anywhere, including IoT. Its features include Automatic Speech Recognition (ASR), global vocabulary, streaming recognition, word hints, real-time audio support, noise robustness, inappropriate content filtering, and integration with other GCP APIs.

In other words, this model enables speech-to-text conversion by machine learning. The components used by the Speech API are:

REST API or Google Remote Procedure Call (gRPC) API
Google Cloud Client Library
JSON API
Python
Cloud Datalab
Cloud Data Storage
Cloud Endpoints

The applications of the model include:

Voice user interfaces
Domotic appliance control
Preparation of structured documents
Aircraft / direct voice outputs
Speech-to-text processing
Telecommunication

Pricing: usage is billed in 15-second increments at $0.006 per increment, with the first 60 minutes per month free of charge.

Now that we have covered the concepts and applications of the model, let's look at some use cases where it can be implemented:

Solving crimes with voice recognition: AGNITIO, a voice biometrics specialist, partnered with Morpho (Safran) to bring Voice ID technology into its multimodal suite of criminal identification products.

Buying products and services with the sound of your voice: Another popular, mainstream application of biometrics in general is mobile payments, and voice recognition has also made its way into this highly competitive arena.

A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays carries voice recognition software in the form of machine learning algorithms.
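As a quick illustration of the API in practice, here is a minimal speech-to-text sketch using the google-cloud-speech Python client. This is a sketch under stated assumptions, not part of the book excerpt: the file name and sample rate are placeholders, GCP credentials are assumed to be configured in your environment, and the snippet targets version 2.x of the client library (pip install google-cloud-speech).

# Minimal speech-to-text sketch with the google-cloud-speech client.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is set in the environment.
from google.cloud import speech

client = speech.SpeechClient()

# "commands.wav" is a placeholder: a short 16 kHz, 16-bit mono WAV file.
with open("commands.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result carries one or more alternatives, best first.
    print(result.alternatives[0].transcript)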
Cloud Translation API

Natural language processing (NLP) is the part of artificial intelligence that covers Machine Translation (MT), which has been a main focus of the NLP community for many years. MT deals with translating text in a source language into text in a target language. The Cloud Translation API provides an interface to translate an input string from one language into a target language; it is highly responsive, scalable, and dynamic in nature. The API enables translation among 100+ languages, and it also supports accurate automatic language detection. It can even read a web page's contents and translate them into another language, without the text first being extracted into a document. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates, an adjustable quota, and affordable pricing. In other words, the Cloud Translation API is an adaptive machine translation service.

The components used by this model are:

REST API
Cloud Datalab
Cloud Data Storage
Python, Ruby
Client Library
Cloud Endpoints

The most important application of the model is the conversion of text in a regional language into a foreign language. The cost of text translation and language detection is $20 per 1 million characters.

Use cases

Now that we have covered the concepts and applications of the API, let's look at two use cases where it has been successfully implemented:

Rule-based Machine Translation
Local tissue response to injury and trauma

We will discuss each of these in the following sections.

Rule-based Machine Translation

The steps to implement rule-based Machine Translation are as follows:

Input text
Parsing
Tokenization
Compare the rules to extract the meaning of prepositional phrases
Map words of the input language to words of the target language
Frame the sentence in the target language

Local tissue response to injury and trauma

We can draw an analogy for the Machine Translation process from the way local tissue responds to injuries and trauma: the human body follows a staged pipeline when dealing with injuries, much as a translation system does. We can roughly describe the process as follows:

Hemorrhaging from lesioned vessels and blood clotting
Blood-borne physiological components, leaking from the usually closed sanguineous compartment, are recognized as foreign material by the surrounding tissue since they are not tissue-specific
Inflammatory response mediated by macrophages (and more rarely by foreign-body giant cells)
Resorption of the blood clot
Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue
Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts

Cloud Vision API

The Cloud Vision API is a powerful image analytics tool. It enables users to understand the content of an image, helping to find various attributes or categories of an image, such as labels, web matches, text, document structure, image properties, and safe search, all returned as JSON. Within the labels field there are many sub-categories, such as text, line, font, area, graphics, screenshots, and points. Web content covers things like how much of the image is graphics, what percentage is text or empty area, and whether the image appears partially or fully on the web. The document field consists of blocks of the image with detailed descriptions, the properties field visualizes the colors used in the image, and safe search flags any unwanted or inappropriate content so it can be removed.

The main features of this API are label detection, explicit content detection, logo and landmark detection, face detection, and web detection; to extract text, the API uses Optical Character Recognition (OCR) with support for many languages. Note that it supports face detection but not face recognition (identifying whose face it is).

We can summarize the functionality of the API as extracting quantitative information from images: it takes an image as input and produces numbers and text as output. The components used in the API are:

Client Library
REST API
RPC API
OCR Language Support
Cloud Storage
Cloud Endpoints

Applications of the API include:

Industrial robotics
Cartography
Geology
Forensics and military
Medical and healthcare

Cost: free of charge for the first 1,000 units per month; after that, pay as you go.
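Before moving on to the use cases, here is a minimal label-detection sketch using the google-cloud-vision Python client. As with the earlier snippet, this is an illustration rather than part of the book excerpt: the image path is a placeholder, credentials are assumed to be configured, and the code targets version 2.x of the library (pip install google-cloud-vision).

# Minimal label-detection sketch with the google-cloud-vision client.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is set in the environment.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "street_scene.jpg" is a placeholder path for any local image.
with open("street_scene.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each annotation carries a description and a confidence score.
    print(f"{label.description}: {label.score:.2f}")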
Use cases

This technique can be successfully implemented in:

Image detection using an Android or iOS mobile device
Retinal image analysis (ophthalmology)

We will discuss each of these use cases in the following topics.

Image detection using an Android or iOS mobile device

The Cloud Vision API can be used to detect images from your smartphone. The steps are simple:

Input the image
Run the Cloud Vision API
Execute the methods for detecting face, label, text, web, and document properties
Generate the response in the form of a phrase or string
Populate the image details as a text view

Retinal image analysis - ophthalmology

Similarly, the API can be used to analyze retinal images. The steps to implement this are as follows:

Input the images of an eye
Estimate the retinal biomarkers
Remove the affected portion without losing the necessary information
Identify the locations of specific structures
Identify the boundaries of the objects
Find similar regions in two or more images
Quantify the damage to the retinal portion of the image

You can learn a lot more about the machine learning capabilities of GCP in the official documentation. If you found the above excerpt useful, make sure you check out our book 'Cloud Analytics with Google Cloud Platform' for more information on why GCP is a top cloud solution for machine learning and AI.

Read more

Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
How machine learning as a service is transforming cloud
Google announces the largest overhaul of their Cloud Speech-to-Text

Automation and Robots - Trick or Treat?

Savia Lobo
31 Oct 2018
3 min read
Advancements in AI are reinventing the way organizations work. Last year, we wrote about RPA, which made manual front-end jobs redundant. This year, we have actual robots in the field. Last month, iRobot, the intelligent robot maker, revealed its latest robot, the Roomba i7+, which maps and remembers your house and also empties its trash automatically. Last week, Google announced its plans to launch a 'Cloud Robotics platform' for developers in 2019, which will encourage efficient robotic automation in highly dynamic environments. Earlier this month, Amazon announced that it is opening a chain of 3,000 cashier-less stores across the US by 2021. And most recently, Walmart also announced that it is going to launch a cashierless store next year.

The terms 'automation' and 'robotics' sometimes overlap: robots can be used to automate physical tasks, while many types of automation have nothing to do with physical robots. The emergence of AI robots will reduce the need for a huge human workforce, boost the productivity of organizations, and reduce their time to market. For example, customer service and other front-end jobs can function 24x7x365 with uninterrupted service. Within industrial automation, robots can automate time-consuming physical processes, and collaborative robots will carry out a task the same way a human would, albeit more efficiently.

The positives aside, there is a danger of AI getting out of control, as machines can go rogue without humans in the loop. That is why Members of the European Parliament (MEPs) recently passed a resolution on banning autonomous weapon systems. They emphasized that weapons like these, without proper human control over selecting and attacking targets, are a disaster waiting to happen.

At the more mundane end of the social spectrum, the dangers of automation are still very real. Robots are expected to replace a significant amount of human labor. For instance, a World Economic Forum survey estimates that within 5 years machines will do half of today's job tasks, and 1 in 2 employees will need reskilling or upskilling. Andy Haldane, the Bank of England's chief economist, puts 15 million jobs in Britain at stake as artificial intelligence and robots are set to replace humans in the workforce.

As of now, AI is a treat for organizations because of the advantages it provides over humans alone. Although it will replace jobs, people can upskill to continue thriving in the automation-augmented future.

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
How Rolls Royce is applying AI and robotics for smart engine maintenance
Home Assistant: an open source Python home automation hub to rule all things smart

Top 6 Java Machine Learning/Deep Learning frameworks you can’t miss

Kartikey Pandey
08 Dec 2017
4 min read
The data science tech market is buzzing with new and interesting Machine Learning libraries and tools almost every day. In such a rapidly growing market, it becomes difficult to choose the right tool or set of tools. More importantly, Artificial Intelligence and Deep Learning projects require a different approach than traditional programming, which makes it tricky to zero in on one library or framework. The choice of a framework is largely based on the type of problem one is expecting to solve, but there are other considerations too. Speed is one factor that will more or less always play an important role in decision making. Other criteria include how open-ended the framework is, its architecture, its functions, complexity of use, support for algorithms, and so on.

Here, we present six Java libraries for your next Deep Learning and Artificial Intelligence project that you shouldn't miss if you are a Java loyalist, or simply a web developer who wants to enter the world of deep learning.

DeepLearning4j (DL4J)

One of the first commercial-grade, and most popular, deep learning frameworks developed in Java. It also supports other JVM languages (Java, Clojure, Scala). What's interesting about DL4J is that it comes with built-in GPU support for the training process. It also supports Hadoop YARN for distributed application management. It is popular for solving problems related to image recognition, fraud detection, and NLP.

MALLET

MALLET (Machine Learning for Language Toolkit) is an open source Java machine learning toolkit. It supports NLP, clustering, topic modelling, and classification. An important capability of MALLET is its support for a wide variety of algorithms, such as Naive Bayes and decision trees. Another useful feature is its topic modelling toolkit; topic models are useful when analyzing large collections of unlabelled text.

Massive Online Analysis (MOA)

MOA is an open source data streaming and mining framework for real-time analytics. It has a strong and growing community, is closely related to Weka, and has the ability to deal with massive data streams.

Encog

This framework supports a wide array of algorithms and network types, such as artificial neural networks, Bayesian networks, and genetic programming.

Neuroph

Neuroph, as the name suggests, offers great simplicity when working with neural networks. Its main USP is an incredibly useful GUI (Graphical User Interface) tool for creating and training neural networks. Neuroph is a good choice of framework when you have a quick project on hand and you don't want to spend hours learning the theory; it helps you quickly get up and running with neural networks in your project.

Java Machine Learning Library

The Java Machine Learning Library offers a great set of reference implementations of algorithms that you can't miss for your next machine learning project. Some of the key highlights are support vector machines and clustering algorithms.

These are a few key frameworks and tools you might want to consider for your next research work. The Java ML library ecosystem is vast, with many supporting tools and libraries, and we have just touched the tip of that iceberg in this article. One particular tool that deserves an honourable mention is the Environment for Developing KDD-Applications Supported by Index-Structures (ELKI), designed particularly with researchers and research students in mind.
The main focus of ELKI is its broad coverage of data mining algorithms, which makes it a natural fit for research work. What's really important while choosing any of the above tools, or tools outside this list, is a good understanding of the requirements and the problems you intend to solve. To reiterate, some of the key considerations to bear in mind before zeroing in on a tool are support for algorithms, implementation of neural networks, dataset size (small, medium, large), and speed.

Why are Android developers switching from Java to Kotlin?

Hari Vignesh
23 Jan 2018
4 min read
When we talk about Android app development, the first programming language that comes to mind is Java. However, Java isn't the only language you can use for Android programming; you can use any language that compiles to JVM bytecode. Recently, a new language has caught the attention of the Android community: Kotlin.

Kotlin has actually been around since 2011, but it was only in May 2017 that Google announced that the language was to become an officially supported language for the Android operating system. This is one of the many reasons why Kotlin's adoption has been so dramatic. The Realm report, published at the end of 2017, suggests that Kotlin is likely to overtake Java in terms of usage in the next couple of years.

When you want to build custom Android applications, the right technology helps you achieve your goals. Java and Kotlin are the languages most commonly used to write Android apps, and the choice between them matters because it can cut down both your time and your costs.

Want to learn Kotlin? Find Kotlin eBooks and videos in our library.

There are many reasons why mobile developers are choosing to switch from Java to Kotlin. Below are some of the most significant.

Kotlin is easy for anyone who knows Java to learn

Similarities in typing and syntax make Kotlin very easy to master for anyone who's already working with Java. If you're worried about a steep learning curve, you'll be pleasantly surprised by how easy it is for developers to dive into coding in Kotlin. Kotlin is also evolving with a lot of support from the developer community: many of the developers who contribute to Kotlin's evolution are freelancers who work across different platforms and a wide range of smaller projects with varied needs, while other contributors include larger companies and industry giants like Google.

Kotlin needs 20 percent less coding compared to Java

Java is showing its age: every new release has to support the features of previous versions, which ultimately increases the amount of code you have to write and works against a clean, layered architecture. If you compare a class written in Java with the same class written in Kotlin, you will find the Kotlin version much more compact.

Kotlin has Android Studio support

Because Kotlin is built by JetBrains, it's unsurprising that Android Studio (also a JetBrains product) has excellent support for Kotlin. Android Studio makes it incredibly easy to configure Kotlin in your project; it's as straightforward as opening a few menus. Once you have set up Kotlin for Android Studio, your IDE will have no problem understanding, compiling, and running Kotlin code, and you can convert an entire Java source file into a Kotlin file. The fact that Kotlin is Java-compatible makes it a uniquely useful language that can leverage the JVM while also being used to update and improve enterprise-level solutions with enormous codebases written in Java.

Kotlin is great for procedural programming

Every programming paradigm has its own set of strengths and weaknesses, and there will always be scenarios where one is more effective than another. One thing that's so appealing about Kotlin is that it combines the strengths of two different approaches: procedural and functional. True, the largely procedural approach can sometimes be the most challenging aspect of the language when you first start to get to grips with it.
However, the level of control such an approach gives you is well worth the investment of your time.

Kotlin makes development more efficient and your life easier

This follows on nicely from the point above. While certain aspects of Kotlin require patience and concentration to master, in the long run, with less code, errors and bugs are greatly reduced. That saves you time, making coding much more enjoyable rather than an administrative nightmare of spaghetti code.

There are plenty of features in Kotlin that make it a practical solution to today's programming challenges. Where JetBrains takes the language next remains to be seen. We could, perhaps, see Kotlin make a move towards iOS development, and if it compiles to JavaScript we may also begin to see it used more and more within web development. Of course, this will largely come down to JetBrains' goals and just how much they want Kotlin to dominate the developer landscape.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

How to migrate from Magento 1 to Magento 2. A comprehensive guide

Guest Contributor
15 Aug 2019
8 min read
Migrating from Magento 1 to Magento 2 has been one of the most commonly discussed topics in the world of eCommerce. Magento 2 was made available in 2015, and Magento subsequently declared that it will end official support for Magento 1 in 2020. This makes the migration to Magento 2 not only desirable but also necessary.

Why you should migrate to Magento 2

As mentioned above, support for Magento 1 ends in 2020. Here's a list of the six most important reasons why migrating from Magento 1.x to Magento 2 matters for your Magento store.

Security: Once official support for Magento 1 ends, security patches for the various versions of Magento 1.x will no longer be offered. That means that if you continue running your website on Magento 1.x, you'll be exposed to a variety of risks and threats, many of which may have no official solution.

Competition: When your store is practically the only store that hasn't migrated to Magento 2, you are at a severe competitive disadvantage. While your competitors enjoy all the innovations that will continue happening on Magento 2, your Magento 1 website will be left out.

Mobile friendliness: From regular shopping to special holiday purchases, an increasingly large proportion of e-commerce business comes from mobile devices. Magento 2 is better optimized for mobile phones than Magento 1.

Performance: In the e-commerce industry, better performance leads to better business, increased revenue, and higher conversions. Magento 2 enables up to 66% faster add-to-cart server response times than Magento 1, which makes it your best bet for growth.

Checkout: The number of checkout steps has been slashed in Magento 2, marking a significant improvement in the buying process. Magento 2 also offers the Instant Purchase feature, which lets repeat customers purchase faster.

Interface: Magento 1 had an interface that wasn't always friendly. Magento 2 has delved deeper into the exact pain points and made the new interface extremely user-friendly. Adding new products, editing product features, or simply looking for tools has become easier with Magento 2.

FAQs for Magento migration

By when should I migrate my store? All forms of official support for Magento 1 will be discontinued in June 2020, so you should migrate your store before that. Your Magento e-commerce store should be ready well before the deadline, so it's highly recommended that you start working towards the migration right away.

How long will the migration take? It's difficult to answer that question without further information about your store. The size of your store, its database, and the kind of customization you need are some of the factors that influence the timeline.

Should I hire a Magento developer for the migration, or should I let my in-house team deal with it? This question, too, needs further information. If you have your own team do it, allow them a good deal of time to learn what is involved, and factor in a few false starts as well. Doing the migration all by yourself also means diverting a lot of in-house resources to it, which can negatively impact your ongoing business and put undue pressure on your revenue streams. Nearly all Magento stores have found that they get better outcomes by hiring an experienced Magento 2 developer instead.

Pre-migration checklist for moving from Magento 1 to Magento 2

Before you carry out the actual migration, you'll want to prepare your site for it.
Here's your pre-migration checklist for Magento 1 to Magento 2:

Filter your data. As you move to a better, more sophisticated technology, you don't want to carry over outdated data or data that is in no way relevant to your business needs. There's no point loading the new system with stuff that will only hog resources without ever being useful, so begin by removing data that's not going to be useful.

Critique your site. This is perhaps the best time to have a close look at your site and seriously consider upgrading it. Advanced technology like Magento 2 will produce even better results if your site reflects current trends in e-commerce store design. Magento 2 offers better opportunities, and you don't want to miss out just because your site isn't equipped to cash in on them.

Build redundancy. Despite all your planning, there's always a small risk of some kind of data loss. To safeguard against it, make sure you replicate your Magento 1.x database. When you actually implement the migration, use this replicated database as your source, without disturbing the original.

Prepare to freeze admin activities. Once you begin the dry run or the actual migration, continuing administrative activities can alter your database. That would result in a patchy migration with loose ends. To prevent this, go through a drill that prepares your business to stop all admin activities during both the dry run and the actual implementation of the migration.

Finalize your blueprints. Unless it is absolutely critical, don't waver from your original plans. Sticking to what you planned will produce the best results; changes that have not been factored in can slow down or weaken your migration and even make it more expensive.

Steps for migration from Magento 1 to Magento 2

Migration from Magento 1 to Magento 2 is not a single activity; it is made up of multiple interdependent activities:

Data Migration
Theme Migration
Customization Migration
Extension Migration

Let's look at each of them separately.

Data Migration

Step 1: Download Magento 2 without the sample data. Follow the steps given for the setup and install the platform.

Step 2: You will need a Data Migration Tool to transfer your data. You can download it from the official website. Remember, the Data Migration Tool version must match the Magento 2 codebase version.

Step 3: Provide the public and private keys for authorization. The keys, too, are available from the Magento site.

Step 4: Configure the Data Migration Tool. How you configure it depends on which Magento 2 edition (Community Edition or Enterprise Edition) you will be using. You may not migrate from Enterprise Edition to Community Edition.

Step 5: The next step is mapping between the Magento 1 and Magento 2 databases.

Step 6: Enter maintenance mode to prepare for the actual migration. This will stop all administrative activities.

Step 7: In the final step, you migrate the Magento site, along with system configuration such as shipping and payments.

Theme Migration

Unlike Data Migration, Theme Migration has no standard tools to take care of it. That is because the frontend templates and their code are hugely different between Magento 1.x and Magento 2.x. So instead of looking for a tool, the best way out is to get a new theme: you can either buy a Magento 2 theme that suits your style and requirements and customize it, or develop one.
This is one of the reasons why we suggested upgrading your entire Magento store.

Customization Migration

The name itself suggests that what works for one online store won't fit another, which is why there's no single way of migrating any of the customizations you might have made for Magento 1. You'll be required to redesign all the customizations you need. However, there's an important point to remember: because of its efficiency and versatility, your store on Magento 2 may need less customization than you believe. So, before you hurry into re-designing everything, take time to study what exactly you need and to what degree Magento 2 already satisfies those needs. As you migrate from Magento 1.x to Magento 2.x, the number of customizations will likely turn out to be considerably fewer than what you originally planned.

Extension Migration

The same rule applies to extensions and plugins. The plugins that worked for Magento 1 will likely not work for Magento 2, and you will have to build them again. Instead of treating this as a frustration, take it as an opportunity to correct minor errors and improve the overall experience. A dedicated Magento developer who specializes in Magento migration services can be of great help here.

Final remarks on Magento migration

If all this sounds a little overwhelming, relax, you're not alone. Because Magento 2 is considerably superior to Magento 1, the migration may appear more challenging than what you had originally bargained for. In any case, the migration is compulsory; otherwise, you'll face security threats and won't be able to handle the competition. From 2020 onward this migration will not be a choice, so you might as well begin early and give yourself more time to plan things out. If you need help, a competent Magento web development company can make the migration more efficient and easier for you.

Author Bio

Kaartik Iyer is the Founder & CEO at Infigic Technologies, a web and mobile app development company. Kaartik has contributed to sites like Huffington Post, Yourstory, and Tamebay, to name a few. He's passionate about fitness, entrepreneurship, startups, and all things digital. You can connect with him on LinkedIn for a quick chat on any of these topics.

Why should your e-commerce site opt for Headless Magento 2?
Adobe is going to acquire Magento for $1.68 Billion
5 things to consider when developing an eCommerce website