5 habits of successful developers

Hari Vignesh
10 May 2017
5 min read
There are about 18.2 million software developers worldwide, and that number is expected to rise to 26.4 million by 2019 – a 45% increase – according to Evans Data Corp in its latest Global Developer Population and Demographic Study. That means the number of people with your skill set is going to increase, by a lot. Start developing successful habits now so you'll stand out from the crowd later.

1. Staying up to date

Successful developers know that this is mandatory. Keep learning new things, or work on something – even if it's just a small side project – every day. Staying updated in the IT industry is critical. Let me give you a simple example: the evolution of mobile phones. Mobile phones initially took five years to reach the masses, but when smartphones were introduced they reached billions of people within two years. From roughly 2000 to 2010, every technology or framework took five years to mature; now, technology becomes outdated in less than two years. To withstand this heat, you need to stay awake and keep your eyes and ears open for emerging technologies and frameworks. So, how do you stay up to date? Medium is a great platform to start with. Also engage in other developer communities and tech forums.

2. Lucid understanding and ownership

Understanding is the foundation for every developer, and it should be your strength. Your peers or leaders will prefer to explain things to you only once, and you should be able to grasp them and perform tasks without raising any flags. Time is a crucial factor for everyone in an organization, so the organization will expect you to understand its processes quickly. How do you understand quickly? Constantly upgrade your domain knowledge. For example, if you're working in the health care sector, you need to be half doctor – you need to understand the domain well enough to deliver a high-quality product. Successful developers produce quality products, and to think about quality you need to be an expert in your domain.

3. Crafting with best practices

If two developers have the same experience, what metric can you use to determine which one is better? Best practices. A task can be achieved in multiple ways, but whoever provides the best, easiest, and most scalable solution is obviously more marketable than the other person. Qualities like providing multiple solutions, scalable solutions, and optimal solutions will manifest as you gain more experience. You can gain that experience more quickly by spending time with developer documentation and the community, and by asking the right questions. Writing clean, scalable, and optimal code is one of the most valued skills in the software industry. Nobody ever reaches the ultimate level, but you should keep checking yourself against it all the time. There are multiple paths to learning best practices, and the best one is to find a mentor. Seek out experts in your field and discuss problems with them; their experience will show you the best practices.

4. Avoiding your comfort zone

Almost 90% of developers just want to hang out in their comfort zones: they opt for big corporate jobs where they have scheduled work every day, party on the weekends, and so on. Even if you're employed by a big tech organization, you should be doing something outside of that work – freelancing, open source contributions, core and library creation, and much more. This will take up some of your free time and be less comfortable, but the end result will be beautiful. You need to do something different to get something different. If you prefer staying in your comfort zone, then trust me, you will lose your job within five years. You need to constantly prove your presence in this industry to keep being successful. Engage in conferences and tech meet-ups, build hobby projects, present white papers, and keep improving your skill set.

5. Community interaction

Most developers in the world are self-taught (including myself). We learn everything from the developer community for free, so what have we done in return? Successful developers play a vital role in contributing to open source and to the developer community. They write articles, present at meet-ups, organize meet-ups, and share knowledge. This in turn helps others who depend on free education, and that's how the fraternity grows. So, how can you help the community? Here are some suggestions to get you started: write up your tips, tricks, and experiences in a blog; make "how to" videos; contribute to open source projects; write free libraries; present at meet-ups; participate in developer community programs; mentor a few projects.

If you struggle to find time for building new habits, here's a simple trick: to inject a habit into your life, practice it strictly for 21 days. If you're able to do that, your brain and body will pick it up automatically for the rest of your life.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
What is a progressive web app?

Antonio Cucciniello
09 May 2017
4 min read
You've probably heard plenty of buzz about progressive web apps over the past couple of years – you may even have used some on your devices. And since you're here reading this article, it's safe to say you're at least somewhat interested in learning more. Let's dive into what progressive web apps are, their defining characteristics, and how they affect you as a developer.

What's this all about then?

A progressive web app is a program that is stored on a server somewhere and delivered to the user through a web browser, but with the experience of a native application. Stated more simply, it is a web application that feels like a native application to the user. It is built using web development technologies (browser, server, database, and so on), but it is designed to look and feel like a native application for the end user. It is a great attempt at creating an application that combines the benefits of a web-based application and a native application. Progressive web apps have some defining characteristics. They are:

Reliable: The app should load instantly, even under poor network conditions.
Lightning fast and app-like: The app should respond to the user's actions quickly and smoothly.
Engaging and responsive: The app should feel as though it was made specifically for that device, yet work across all platforms.
Protected and secure: Since it is still a web app, it is served over HTTPS to make sure its contents are not tampered with.
Installable: The app can be saved to a device's home screen for offline usage.
Linkable: The app can be shared and accessed through a URL.
Up to date: The application is always up to date, thanks to service workers.

Why should you care?

Now let's dive into why application developers should be interested in progressive web apps. As you probably noticed from the list above, progressive web apps bring plenty of benefits to the user. First, they keep the simplicity and speed of developing a web application. A progressive web app is built with the same web technology you have been building web applications with all along, which tends to be easier and cheaper to develop than a native application, since a native app is device specific and involves learning more technologies. Second, service workers allow users to use the application with some offline functionality; they usually cache application resources so those resources can be used offline. In a standard web app you cannot access anything offline, but a progressive web app gives that added benefit to the user. Third, it allows for fluidity across all of your devices: because the user interface and interactions are the same everywhere, it is easy for the user to use the progressive web app on multiple platforms. Fourth, learning to build a progressive web application does not mean learning a new technology if you have already been developing web applications for some time; all you need to do is build the web application with the correct principles in mind from the start.

Looking ahead

Progressive web apps are an awesome combination of a web app and a native app, bringing the benefits of both to the user in one application. You can build the application more easily, it can be used at least partially offline, it allows for a nice fluidity between all of your devices, and it does not require much extra learning on your part. I would highly suggest taking this approach into consideration when building your next application. If you want to look at some of the progressive web apps that are out today, check out this link.
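The service worker caching described above can be sketched as follows. This is a minimal, illustrative service worker, not a complete implementation: the cache name and asset list are invented for the example, and the isPrecached helper is just a convenience so the caching policy is easy to follow.

```javascript
// sw.js – a minimal service worker sketch (cache name and asset list are
// invented for this example).
const CACHE_NAME = 'pwa-demo-v1';
const PRECACHE_URLS = ['/', '/index.html', '/app.js', '/styles.css'];

// Pure helper: decide whether a URL belongs to the precached app shell.
function isPrecached(url) {
  return PRECACHE_URLS.some((path) => url.endsWith(path));
}

// Browser-only wiring, guarded so the file is inert outside a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('install', (event) => {
    // Cache the app shell at install time so the app can load offline.
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  self.addEventListener('fetch', (event) => {
    // Serve from the cache when possible, falling back to the network.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

The page would register this worker with navigator.serviceWorker.register('/sw.js'); after that, the cached app shell loads even without a network connection.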
It gives you a link to some of the better progressive web applications to date.

About the author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello and on GitHub: https://github.com/acucciniello.
What is JAMstack and why should I care?

Antonio Cucciniello
07 May 2017
4 min read
What is JAMstack?

JAMstack, according to the project site, offers you a "modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup." As the acronym suggests, it uses JavaScript, APIs, and Markup as the core components of the development stack. It can be used in any website or web application that does not depend on tight coupling between the client and the server. That sounds simple, but let's dive a little deeper into the three parts.

JavaScript

The JavaScript is basically any form of client-side JavaScript: code that handles requests and responses, front-end frameworks such as React and Angular, any client-side libraries, or plain old JavaScript.

APIs

The APIs consist of any and all server-side processes or database commands that your web app needs. These can be third-party APIs, or a custom API that you created for the application. The APIs communicate with the JavaScript through HTTP calls.

Markup

This is markup that is templated and built at deploy time, using a build tool such as Grunt or a static site generator.

Now that you know the individual parts, let's discuss how you can optimize this stack with a few best practices.

JAMstack best practices

Host on a Content Delivery Network

It is a good idea to distribute all the code to CDNs to reduce the load time of each page. JAMstack websites do not rely on server-side code, so they can be distributed on CDNs much more easily.

Keep all code in Git

To increase development speed and let others contribute to your site, all of the code should be in source control. If you are using Git, developers should be able to simply clone the repository, install the third-party packages the project requires, and from there it should be smooth sailing to making changes.
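To make the "prebuilt Markup" part concrete, here is a toy deploy-time rendering step. The renderPage function and the post data shape are invented for illustration; a real static site generator would loop over all content entries at build time and write one HTML file per page, leaving client-side JavaScript and API calls for anything dynamic.

```javascript
// Toy build-time step in the JAMstack spirit: turn content data into static
// HTML so that no server-side rendering is needed at request time.
// The data shape and template are invented for this example.
function renderPage(post) {
  return [
    '<!doctype html>',
    `<title>${post.title}</title>`,
    `<article><h1>${post.title}</h1><p>${post.body}</p></article>`,
  ].join('\n');
}

// A static site generator would run something like this once per content
// entry at deploy time and write the result to a file served from the CDN.
const page = renderPage({
  title: 'Hello JAMstack',
  body: 'This markup was prebuilt at deploy time.',
});
```

Because the HTML already exists when a request arrives, the CDN can serve it directly, which is where the stack's speed benefit comes from.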
Use build tools to automate builds

Use tools like Babel, webpack, and Browserify to automate repetitive tasks and reduce development time. You want builds to run automatically so that users see your changes.

Use atomic deploys

Atomic deploying lets you deploy all of your changes at once, after all of your files are built, so the changes are only displayed once every file has been uploaded and built.

Instant cache purge

The cache on your CDN may hold old assets after you create a new build and deploy your changes. To make sure clients see the changes you implemented, you need to be able to clear the cache of the CDNs that host your web application.

Enough about best practices – why should YOU care?

There are a few benefits for you as a developer in building your next application with JAMstack. Let's discuss them.

Security

JAMstack removes server-side parts that would normally work closely with the client-side components. Removing server-side components reduces the complexity of the application, which makes the client side easier to build and maintain. An easier development process, in turn, increases the security and reliability of your website.

Cost

Since we are removing the server-side parts, we do not need as many servers to host the application, nor as many back-end engineers to handle server-side functionality, which reduces your overall cost significantly.

Speed

Since the Markup is prebuilt at deploy time, less work needs to be completed at runtime. That in turn increases the speed of your site, because each page is already built.

JAMstack – key takeaways

In the end, JAMstack is simply a web development architecture that builds its web apps from JavaScript, APIs, and prebuilt Markup.
It has several advantages, such as increased security, reduced cost, and faster speed. Here is a link to some examples of web apps built with JAMstack. Under each one, they list the tools used to make the app, including the front-end frameworks, static site builders, build tools, and the various APIs utilized. If you enjoyed this post, share it on Twitter! Leave a comment below and let me know your thoughts on JAMstack and how you will use it in your future applications!

Possible resources

Check out my GitHub
View my personal blog
Check out my YouTube channel
This is a great talk on JAMstack
Android O: What's new and why it's been introduced

Raka Mahesa
07 May 2017
5 min read
Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, Marshmallow, and Nougat. If you thought that was just a list of sweet treats, well, you're not wrong, but it's also a list of Android version names. And if you guessed that the next version of Android starts with O, you're exactly right, because Google has announced Android O – the latest version of Android. So, what's new in the O version of Android? Let's find out.

Notification Channels

Notifications have always been one of Android's biggest strengths. Notifications on Android are informative, versatile, and customizable, so they fit their users' needs. Google clearly understands this and has kept improving the notification system: overhauling how notifications look, making them more interactive, and giving users a way to manage the importance of each notification. So, of course, this version of Android adds even more features to the notification system.

The biggest addition is the Notification Channel API, which lets developers define categories for the notifications their apps produce. Users can then control the settings for each category of notification, fine-tuning an application so it only shows the notifications they consider important.

For example, say you have a chat application with two notification channels: one for when a new chat message arrives, and one for when the user is added to someone else's friend list. Some users may only care about new chat messages, so they can turn off the second category instead of turning off all notifications from the app.

Other features added to the Android O notification system are Notification Snoozing and Notification Timeout. Just like an alarm, Notification Snoozing allows the user to snooze a notification and let it reappear later when the user has time. Notification Timeout allows developers to set a timeout duration for a notification. Imagine you want to notify a user about a flash sale that only runs for two hours: with a timeout, the notification can remove itself when the event is over. Okay, enough about notifications – what else is new in Android O?

Autofill Framework

One of the newest things introduced with Android O is the Autofill Framework. You know how browsers can remember your full name, email address, home address, and other details, and automatically fill in a registration form with that data? The same capability is coming to Android apps via the Autofill Framework. An app can also register itself as an Autofill Service. For example, if you made a social media app, you could let other apps use the user's account data from your app to help users fill in their forms.

Account data

Speaking of account data: with Android O, Google has removed the ability for developers to get the user's account data using the GET_ACCOUNTS permission, forcing developers to use the account chooser dialog instead. So with Android O, developers can no longer automatically fill in a text field with the user's email address and name, and have to let users pick accounts on their own.

And it's not just form filling that has been reworked. In an effort to improve battery life and phone performance, Android O adds a number of limitations on background processes. For example, on Android O, apps running in the background (that is, apps with none of their interface visible to the user) cannot get the user's location as frequently as before. Also, apps in the background can no longer create and use background services. Do keep in mind that some of these limitations affect any application running on Android O, not just apps built against the O version of the SDK. So if you have an app that relies on background processing, you may want to check that it works fine on Android O.

App icons

Let's talk about something more visual: app icons. You know how manufacturers add custom skins to their phones to differentiate their products from competitors? Well, some time ago they also changed the shape of all app icons to fit the overall UI of their phones, and this broke some carefully designed icons. Fortunately, with the Adaptive Icon feature introduced in Android O, developers can design an icon that adjusts to a variety of shapes.

We've covered a lot, but there are still tons of other features in Android O that we haven't discussed, including multi-display support, a new native audio API, keyboard navigation, new APIs to manage WebView, new Java 8 APIs, and more. Do check out the official documentation for those. That being said, we're still missing the most important thing: what is the full name of Android O going to be? I can only think of Oreo at the moment. What about you?

About the author

Raka Mahesa is a game developer at Chocoarts (chocoarts.com), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
WebGL 2.0: What you need to know

Raka Mahesa
01 May 2017
5 min read
Earlier this year, Google and Mozilla released versions of Chrome and Firefox with full support for WebGL 2.0. While some previous versions of their browsers also supported WebGL 2.0, those versions disabled the feature by default. By enabling WebGL 2.0 in their latest browser versions, both Google and Mozilla seem confident that this bleeding-edge web technology can finally be used by most users without any problems.

So, what is WebGL 2.0? How does it differ from the previous version of WebGL? What, in fact, is WebGL? To answer those questions, let's go back in time a little. In the early 1990s, graphics-intensive applications were expensive to create because the software had to be customized for each type of graphics processing hardware. Imagine having to write an app separately for each smartphone vendor; it would cost many hours of work. To mitigate this problem, a standard for graphics computing was introduced. This standard is called OpenGL (which stands for Open Graphics Library).

When mobile phones with display screens were introduced, people realized that mobile technology also needed a standard for graphics computing. However, OpenGL is a standard primarily for desktop-class hardware, so a different standard was needed that could work within the limited capabilities of mobile hardware. Thus OpenGL ES (Embedded Systems) was branched out from OpenGL, with the initial version released in the early 2000s.

The same progression happened to web technology. By 2009, web applications were becoming increasingly graphics-intensive, so a graphics standard called WebGL was introduced to help software developers. Since users can access web applications from both desktop and mobile devices, WebGL needed to work on both platforms; to accommodate that, it was based on the OpenGL ES specification instead of the desktop-focused OpenGL.

Technology keeps advancing. As graphics hardware becomes more capable, features get added to the graphics standards. The latest version of OpenGL ES, version 3.0, was released in 2012 to keep up with advances in mobile GPUs. WebGL 1.0, however, was still based on OpenGL ES 2.0. So in 2017, the specification for WebGL 2.0, based on OpenGL ES 3.0, was finally released.

As you can see from the timeline, WebGL 2.0 is fresh out of the oven. In fact, it's so new that, at the time of writing, the only browsers that support the standard are Google Chrome, Mozilla Firefox, and Opera. WebGL 2.0 support in Safari is still under development. Also, no mobile browser supports WebGL 2.0 by default (on Chrome for Android it can be enabled via a hidden menu).

Considering the limited number of compatible platforms, as developers we really can't rely on users having the necessary browser for our apps. With that limitation in mind, we always have to check for the browser's capability and prepare a fallback method in case the browser does not support WebGL 2.0.

So, how does WebGL 2.0 differ from version 1.0? Fortunately, nothing major has changed in the way the library is used. The new version simply adds features and promotes some previously optional extensions to be included by default. One WebGL 1.0 extension that has been made mandatory in WebGL 2.0 is the instancing extension, which enables developers to render multiple copies of the same mesh efficiently. This feature is very useful for drawing decorative objects, like grass. Another extension now included in WebGL 2.0 is Depth Texture, which is used a lot for computing lighting and creating shadow maps. Another major addition to WebGL 2.0 is support for GLSL 3.0 ES, the latest programming language for OpenGL shaders.
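The capability check and fallback mentioned above can be sketched as follows. This is an illustrative helper, not code from the article: pickContext and its return shape are invented, while getContext with the 'webgl2' and 'webgl' context names is the standard canvas API. The helper takes the canvas as a parameter so the selection logic itself has no browser dependencies.

```javascript
// Feature-detection sketch: prefer a WebGL 2.0 context, fall back to
// WebGL 1.0, and report failure if neither is available.
function pickContext(canvas) {
  // 'webgl2' is the WebGL 2.0 context name; 'webgl' is the 1.0 fallback.
  for (const name of ['webgl2', 'webgl']) {
    const gl = canvas.getContext(name);
    if (gl) return { name, gl };
  }
  return null; // no WebGL support at all
}

// In a browser you would use it roughly like this:
//   const picked = pickContext(document.querySelector('canvas'));
//   if (!picked) { /* show a "WebGL not supported" message */ }
//   else if (picked.name === 'webgl') { /* avoid WebGL 2.0-only features */ }
```

Keeping the fallback path in place matters as long as Safari and mobile browsers lack WebGL 2.0 support by default.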
With this version of GLSL, a loop in a shader is no longer restricted to a constant integer. GLSL 3.0 ES also brings additional matrix operations (like an inverse function) that make writing shaders much easier.

WebGL 2.0 also offers much better texture support. With version 2.0, non-power-of-two 2D textures are finally fully supported, which means your texture dimensions are no longer limited to powers of two such as 32, 64, 128, and 256. 3D textures are now supported too, which is useful for volumetric effects such as light rays and smoke, as well as for storing medical scans.

WebGL 2.0 also adds support for more texture formats, such as RGBA32, RGBA16, R11F_G11F_B10F, SRGB8, and others. More compressed texture formats that are not platform-specific are also supported, including COMPRESSED_RGB8_ETC2, COMPRESSED_RGBA8_ETC2_EAC, and more.

There are other additions to WebGL 2.0, such as Multiple Draw Buffers, Transform Feedback, Uniform Buffer Objects, and more. See the official WebGL 2.0 specification to check out all of these additions in detail.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
8 Things to Keep in Mind When Starting Out in Web Dev

Shawn Major
01 May 2017
7 min read
Experienced web developers reveal what's needed to get ahead of the curve – and stay there! We've asked eight members of Packt's author community to weigh in on what developers who are just starting their careers can do to get that competitive edge. These experienced developers have some useful tips for the web devs of tomorrow.

Find your niche and dig in

Harold Dost, Principal Consultant at Raastech, recommends that fledgling developers take time to see what tools are available – these will help you build a strong knowledge foundation and pave the way to becoming an expert. Mapt, a skill-learning platform for developers, is one of the best resources out there, with a growing library of over 4,000 tech eBooks and video courses. As far as building your skill set goes, Harold says that you should "hone a core skill (maybe two or three), and then diversify on the rest. This will allow you to specialise and give you the in-depth knowledge which will be necessary as you go further in your career." This doesn't mean you should just pick a few things and leave it at that for the rest of your career, though. You have to be on the lookout for new opportunities and directions in which to expand your skill set. Harold agrees, saying that, "at the same time as specialising, be sure to keep learning about new technologies to allow you to grow and improve the work you produce."

Keep learning and start writing

Luis Augusto Weir, Oracle ACE Director and Principal at Capgemini UK, encourages web devs "to be passionate about learning and, of course, about coding." There are so many ways to educate yourself, but he thinks that reading books is still the best thing you can do to further your education. "Reading books is surely a way to get ahead," Luis says, "as well as lots of other interactive ways to learn like YouTube, blogs, online courses and so on. Not only does a huge amount of effort go into writing books, but nothing beats a good book to read whilst on the train, or bus.
Bringing a book with you wherever you go means you're always equipped to learn." Adrian Ward, who is an Oracle ACE Associate and runs the Addidici OBIEE consultancy, affirms that in addition to reading, writing was also a crucial part of his education. Adrian says that writing anything, including "blogs, articles, books or presentations," will give you a better understanding of the topics you are learning and compel you to keep learning new things. "If you're writing about something, you certainly have to learn about it first!" Belén Cruz Zapata, a Mobile Software Engineer at Groupon, advises developers to "keep learning new things." She has first-hand experience with the benefits of blog writing, showing that writing can create opportunities for developers. Belén recounts how she came to write a book for us: "I have a blog that I used to write a review about Android Studio when it was announced for the first time. Packt found my article and contacted me to write a book about it."

Recharge your batteries

Sten Vesterli, Senior Principal Consultant at Scott/Tiger, says that as a developer you need "to manage your energy, and find ways to replenish it when it's running low." This is an important skill for developers to learn. Sten reasons that "if you have high energy, you can learn any skill and will remain employable. If you have low energy, you will have a hard time learning something new and will be in danger of being left behind by technological changes." Every developer has to figure out their own recharging strategy. Sten says, "I've found that meditation and triathlons work for me, but others will have different things that give them energy." There is no wrong way to recharge – whether it's binge-watching your favorite show, going for a run, hanging out with friends, or something else – so make sure you block out some time for you to do you.
Do what works

Phil Wilkins, Senior Consultant at Capgemini, urges graduates and fledgling web devs to challenge both fads and the status quo by thinking critically about the work they are doing and the tools they are using. You need to make sure you're not using a piece of tech purely out of habit, or prioritizing novelty for novelty's sake. Make sure that the direction you're going in is relevant and the tools you're using are the ones best suited for the job. Phil says, "Many will consider me a heretic, but the industry is sometimes a little quick with the next shiny thing and some 'not-invented-here' thinking. I think you should challenge those around you to really understand the tools they're using, and question whether they're the right tools to do the job well. Reflecting on what you're doing and challenging yourself, or being challenged by someone else, to do something better will drive better understanding and insight that can then be applied in later life."

Stay curious; ask questions

Phil also advocates for developers to keep their sense of curiosity. He says, "Questioning why something is a good answer to a problem is as important as how to answer the problem." Phil adds that "understanding this may not make you a guru, but it will give you a foundation to work with peers in an engaging manner and set you up for future success. Ultimately IT is here to solve problems, and knowing why certain things are good answers rather than that they simply are good answers, means you stand the best chance of developing good solutions."

Network your face off

Adrian Ward, who earlier emphasized the importance of writing, has another crucial piece of advice for those getting started in web development. It can be summed up in a single word: network. You've probably heard it a million times already, but networking can really help you get a foot in the door.
Many of Packt’s experts confirm that getting out there and connecting with people within your industry is an effective tactic for getting ahead. “Just get involved with the community,” Adrian says. There are so many ways to connect with people these days, so you can start with a method that you’re most comfortable with and then go from there. You can go to events organized by your university or college, go to conferences or local tech meet-ups, or even look for people to connect with on LinkedIn.

Apply for jobs that will help you grow

Robert van Mölken, Senior Integration and Cloud Specialist for AMIS, advises graduates and fledgling web devs who are looking for jobs to actively seek out employers that invest in their employees. As a developer, this means that the company has training and incentives to keep you up to date with the latest tech and ideas. “Things are changing so fast these days that you can’t sit still if you want to be relevant in two years’ time," Robert says. "Companies that allow their developers to go to conferences, both locally and further afield, will find that they will learn upcoming skills much faster, going beyond the point of knowledge you can get from investing in and learning from books.” Robert recommends that you “invest some personal time to experiment with new technologies and IT innovations. Don’t fall behind on stuff just because you are comfortable with what you do every day at work. Find opportunities to speak up, to give presentations about what you learned, and share your experiences. Then you will get noticed, and a world of possibilities will open up to you.”

Remember: You’ve got this

Sreelatha Sankaranarayanan, Information Developer at IBM, thinks that the young developers of today have generally got it together. She says, “I think they are doing things right. They are willing to learn, explore and experiment. They have fewer inhibitions, are more confident and are willing to go all out.
Good luck is all that I would like to say to them.” No developer is an island. Learn from our global author community and enjoy unlimited access to eBooks and Video Courses with a Mapt subscription.
What is Blockchain?

Lauren Stephanian
28 Apr 2017
6 min read
The difference between Blockchain and Bitcoin

Before we explore Blockchain in depth, it’s worth looking at the key differences between Blockchain and Bitcoin, which are very closely associated with each other. Both were conceived by a mysterious figure who goes by the alias Satoshi Nakamoto, and while both ideas are revolutionary, the key distinction is this: Blockchain, although created for the implementation of Bitcoin, has a broader application. Bitcoin is ultimately just a cryptocurrency, and is actually very similar to any other currency in its use.

So, then, what is Blockchain?

Blockchain, put simply, is a way to store data or transactions on a growing ledger. The way that it works allows us to safely rely on the data shown to us, because it is built on the concept of decentralized consensus. Decentralized consensus is reached by a Blockchain because each block containing some data (for example, a certain amount of Bitcoins heading to someone's account) carries a cryptographic hash that links it to the previous block, along with a time stamp. This chain of blocks continuously grows, and once a transaction is recorded, it is immutable: there is no going back and altering the data, only building on top of it. Everyone sees the same unchanged data on a Blockchain, and therefore actions based on this data, such as sending money to someone, can be safely taken; the information shown cannot be disputed because every party agrees on it.

Other uses for Blockchain

Although Blockchain is most commonly associated with Bitcoin, there are many different uses for this technology. Blockchain will change our way of not only working with one another, but also storing data, identifying ourselves, and even voting, by making these actions easier and secure by design. Below are eight ways Blockchain technology will revolutionize the way we conduct business, and the way governments function. All of these areas create opportunities for developers.
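The hash-linking that makes the ledger tamper-evident can be sketched in a few lines of Python. This is an illustrative toy, not Bitcoin's actual block format: the field names and payloads are invented for the example.

```python
import hashlib
import json
import time

def hash_block(block):
    # Serialize the block deterministically, then hash it with SHA-256.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(data, previous_hash):
    # Each block records its data, a timestamp, and the previous block's hash.
    return {"data": data, "timestamp": time.time(), "previous_hash": previous_hash}

# Build a three-block chain starting from a genesis block.
chain = [make_block("genesis", "0" * 64)]
for payload in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    chain.append(make_block(payload, hash_block(chain[-1])))

def is_valid(chain):
    # The chain is valid only if every block still points at the
    # unaltered hash of the block before it.
    return all(
        chain[i]["previous_hash"] == hash_block(chain[i - 1])
        for i in range(1, len(chain))
    )

print(is_valid(chain))                    # True for the untampered chain
chain[1]["data"] = "Alice pays Bob 5000"  # try to rewrite history
print(is_valid(chain))                    # False: the altered block breaks the links
```

Changing any recorded transaction changes that block's hash, which no longer matches the pointer stored in the next block; this is why the ledger can only be extended, never silently edited.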
Cryptocurrencies

The most common use for Blockchain is, as mentioned above, to create or mine for cryptocurrencies. You can set up a store to accept these cryptocurrencies as payment, you can send money around the world to different people, or you can mine for coins using a tool called a miner, which earns rewards by verifying new transactions. These currencies are theoretically totally safeguarded from theft.

Digital assets

Just as you might do with regular currencies, you can create financial securities using cryptocurrencies. For example, you can use Bitcoin to create derivatives, such as futures contracts, based on what you think the future value of Bitcoins will be. You can also create stocks and bonds, and almost any other security you might want to trade.

Decentralized exchanges

In a similar vein, Blockchain can be used to make financial exchanges safer, faster, and easier to track. A decentralized exchange is an exchange market that does not rely on third parties to hold onto customers’ funds and instead allows customers to trade directly and anonymously with each other. It reduces the risk of hacking because there is no single point of failure, and it is potentially faster because third parties are not involved.

Smart contracts

Smart contracts or decentralized applications (DApps) are contracts that you can use in any way you might use a regular contract. Because they are based on Blockchain technology, they are decentralized, and therefore third parties and middlemen (i.e. lawyers) can be effectively removed from the equation. They are cheaper, faster, and more secure than traditional contracts, which is why governments, banks, and real estate companies are beginning to turn to them.

Distributed Cloud Storage

Blockchain technology will be a huge disruptor in the data storage industry. Currently, a third party, such as Dropbox, controls all your cloud data.
Should they want to take it, alter it, or remove it, they are legally allowed to do so, and you won't be able to do anything to stop them. However, no one party can control decentralized cloud storage, and therefore your data is more secure and untouchable by anyone you wouldn't want to interfere with it.

Verifiable Data (Identification)

Identity theft is a problem caused by a lack of security and by information being held by a centralized party. Blockchain can decentralize databases holding verifiable data like identification and make them less susceptible to hacking and theft. Once verifiable data is decentralized, it can easily be checked for accuracy by third parties needing to access it. Data breaches, such as what happened to Anthem in early 2015, are becoming more and more common, which indicates that we need our technology to adapt in order to keep up with the ever-changing landscape of the Internet. Blockchain will be the answer to this; it's just a matter of when this change will happen.

Digital voting

Perhaps the most relevant item on this list is digital voting. Countless countries are currently host to reports of voter fraud, and whether or not those reports are true, they cast doubt on the credibility of any administration’s leadership. Additionally, using Blockchain technology could eventually allow for online voting, which would help correct low voter turnout because it makes voting easier and more accessible to all citizens. In 2014, a political party in Denmark became the first group to use Blockchain in their voting process. This secured their results and made them more credible. It would be advantageous for more countries to begin using Blockchain to verify election results, and some might even begin adopting the ability to count online votes for increased participation.
Academic credentialing

At least at one point in your life, you have probably heard about people lying about their alma mater on their resume, or even editing their transcripts. By having schools and certification programs upload credentials to a decentralized database, Blockchain can make verifying these important details fast and painless for all prospective employers.

What are the key takeaways?

Despite the current lack of implementation in businesses and governments, Blockchain will change our society for the better. It is already established as part of the bedrock of the tech realm, and will remain until the next great revolutionary technology comes along to change the way we store and share data.

About the Author

Lauren Stephanian is a software developer by training and an analyst for the structured notes trading desk at Bank of America Merrill Lynch. She is passionate about staying on top of the latest technologies and understanding their place in society. When she is not working, programming, or writing, she is playing tennis, traveling, or hanging out with her good friends in Manhattan or Brooklyn. You can follow her on Twitter or Medium at @lstephanian or via her website, http://lstephanian.com.
4 Gaming innovations that are impacting all of tech

Raka Mahesa
26 Apr 2017
5 min read
Video games are a medium that sits at the intersection of entertainment, art, and technology. Considering that video games are a huge industry with over $90 billion in yearly revenues, and how the various fields of technology are connected to each other, it makes sense that video games also have an impact on other industries, doesn't it? So let's talk about how gaming has expanded beyond its own industry.

Innovation in hardware

For starters, video games are a big driver of the computer hardware industry. People who mostly use their computer for working with documents or for browsing the Internet don't really need high-end hardware. A decent processor, an okay amount of RAM, and just a few hundred gigabytes of storage is all they need to have their computers working for them. On the other hand, people who use their computer to play games need high-end hardware to play the latest games. These gamers want to play games in the best possible setting, so they demand a GPU that can render their games quickly. This leads to tight competition between graphics card companies, who try their best to produce the most capable GPU at the lowest possible price. And it's not just the GPU. Unlike movies with their 24 frames per second, games can run at a much higher number of frames per second. Because games with a high FPS have smoother animation, hardware makers have started to produce computer monitors with higher refresh rates that can show more frames per second. They've also produced auxiliary hardware (keyboards, mice, and so on) that is more sensitive to user input, because competitive gamers really appreciate all the extra precision they can get from their hardware. In short, video games have spurred various innovations in computer hardware technology, simply because those innovations provide users with a better gaming experience. One interesting part of this dynamic is how the progress forms a loop.
When a game developer produces a video game that requires the most advanced hardware, hardware manufacturers then create better hardware that can render the game more efficiently. Game developers notice this additional capability and make sure their next game uses the extra resources, and so on. This endless cycle is the fuel that keeps computer hardware progressing.

Innovation in AI research and technology

Another interesting aspect is how the pursuit of a better GPU has benefitted artificial intelligence research. Unlike your usual application, artificial intelligence usually runs its processes in parallel instead of sequentially. Modern-day CPUs, unfortunately, aren't really built to run hundreds of processes at the same time. GPUs, on the other hand, are designed to process multiple pixels at the same time, which makes them the perfect hardware to run artificial intelligence. So, thanks to the progress in GPU technology, you don't need a special workstation to run your artificial intelligence project anymore. You just need to hook an off-the-shelf GPU up to your PC and your artificial intelligence is ready to run, making AI research accessible to anyone with a computer. And because video games are a big factor in the progress of graphics hardware, we can say that video games have made an indirect impact on the accessibility of AI technology.

Innovation in virtual reality and augmented reality

Another field that video games have made an impact on is virtual and augmented reality. One of the reasons that virtual reality and augmented reality are making a comeback in recent years is that consumer graphics hardware is now powerful enough to run VR apps. As you may know, VR apps require hardware that's more powerful than your usual mainstream computer. Fortunately, gaming computers nowadays are powerful enough to run those VR apps without causing motion sickness.
Even Facebook, which isn't really a gaming company, focuses their VR efforts on video games, because right now the only computer that can run VR properly is a gaming computer. And it's not just VR and AR. These days, when a new platform is launched, its ability to play video games usually becomes one of its selling points. When Apple TV was launched, its capability to play games was highlighted. Microsoft also had a big showcase using HoloLens and Minecraft to demonstrate how the device would work. Video games have become one of the default ways for companies to demonstrate the capabilities of their devices, and to attract more developers to their platform.

Innovation beyond technology

The impact of video games isn't limited to technological fields. Many have found video games to be an effective teaching and therapeutic tool. For example, soldiers in the army are encouraged to play military shooter games during their off-duty time, so they can stay in a soldier's mindset even when they're not on duty. As for therapy, many studies have found that video games can be a great aid in treating patients with traumatic disorders, as well as in improving autistic patients' social skills. These fields are just a sample of those that have benefited, and innovated, from the gaming industry. There are still many other fields in which games have made an impact, including serious games, gamification, simulation, and more.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
Raspberry Pi Zero W: What you need to know and why it's great

Raka Mahesa
25 Apr 2017
6 min read
On February 28th, 2017, the Raspberry Pi Foundation announced the latest product in the Raspberry Pi series: the Raspberry Pi Zero W. The new product adds wireless connectivity to the Raspberry Pi Zero and retails for just $10. This is great news for enthusiasts and hobbyists all around the world. Wait, wait. Raspberry Pi? Raspberry Pi Zero? Wireless? What are we talking about? Okay, so, to understand the idea behind the Raspberry Pi Zero W and the benefits it brings, we need to back up a bit and talk about the Raspberry Pi series of products and its history.

The Raspberry Pi's history

The Raspberry Pi is a computer the size of a credit card that was made available to the public for the low price of $35. And yes, despite the size and the price of the product, it's a full-fledged computer capable of running an operating system like Linux or Android, though Windows is a bit too heavy for it to run. It comes with two USB ports and an HDMI port, so you can plug in your keyboard, mouse, and monitor and treat it just like your everyday computer. The first generation of the Raspberry Pi was released in February 2012 and was an instant hit among the DIY and hobbyist crowd. The small, low-priced computer proved to be perfect for powering their DIY projects. By the time this post was written, 10 million Raspberry Pi computers had been sold, and countless projects using the miniature computer have been made. It has been used in projects including home arcade boxes, automated pet feeders, media centers, security cameras, and many, many others. The second generation of the Raspberry Pi was launched in February 2015. The computer now offered a higher-clocked, quad-core processor with 1 GB of RAM and was still being sold at $35. Then, a year later in February 2016, the Raspberry Pi 3 was launched.
While the price remained the same, this latest generation of the computer boasted higher performance as well as wireless connectivity via WiFi and Bluetooth.

What's better than a $35 computer?

The Raspberry Pi has come a long way but, with all of that said, do you know what's better than a $35 computer? A $5 computer that’s even smaller, which is exactly what was launched in November 2015: the Raspberry Pi Zero. Despite its price, this new computer is actually faster than the original Raspberry Pi and, by using micro USB and mini HDMI instead of normal-sized ports, the Raspberry Pi Zero managed to shrink down to just half the size of a credit card. Unfortunately, using micro USB and mini HDMI ports leads to another set of problems. Most people need additional dongles or converters to connect to those ports, and those accessories can be as expensive as the computer itself. For example, a micro-USB to Ethernet connector will cost $5, a micro-USB to USB connector will cost $4, and a micro-USB WiFi adapter will cost $10.

Welcome the Raspberry Pi Zero W

Needing additional dongles and accessories that cost as much as the computer itself pretty much undermines the point of a cheap computer. So, to mitigate that, the Raspberry Pi Zero W, a Raspberry Pi Zero with integrated WiFi and Bluetooth connectivity, was introduced in February 2017 at the price of $10. Here are the hardware specifications of the Raspberry Pi Zero W:

- Broadcom BCM2835 single-core CPU @ 1GHz
- 512MB LPDDR2 SDRAM
- Micro USB data port
- Micro USB power port
- Mini HDMI port with 1080p60 video output
- Micro SD card slot
- HAT-compatible 40-pin header
- Composite video and reset headers
- CSI camera connector
- 802.11n wireless LAN
- Bluetooth 4.0

Its dimensions are 65mm x 30mm x 5mm (for comparison, the size of a Raspberry Pi 3 is 85mm x 56mm x 17mm). There are several things to note about the hardware. One of them is that the 40-pin GPIO connector is not soldered out of the box; you have to solder it yourself.
These unsoldered connectors are what allow the computer to be so slim, and will be pretty useful to people who don't need a GPIO connection. Another thing to note is that the wireless chip is the same one found in the Raspberry Pi 3, so they should behave and perform pretty similarly. And because the rest of the hardware is basically the same as what's found in the Raspberry Pi Zero, you can think of the Raspberry Pi Zero W as a fusion of both series.

Is the wireless connectivity worth the added cost?

You may wonder if the wireless connectivity is worth the additional $5. Well, it really depends on your use case. For example, in my home everything is already wireless and I don't have any LAN cables that I can plug in to connect to the Internet, so wireless connectivity is a really big deal for me. And really, there are a lot of projects and places where having wireless connectivity could help a lot. Imagine you want to set up a camera in front of your home that would send an email to you every time it spots a particular type of car. Without a WiFi connection, you would have to pull your Ethernet cable all the way out there to have an Internet connection. And it's not just the Internet to consider: having Bluetooth connectivity is a really practical way to connect to other devices, like your phone for instance. All in all, the Raspberry Pi Zero W is a great addition to the Raspberry Pi line of computers. It's affordable, it's highly capable, and with the addition of wireless connectivity it has become practical to use too. So go get your hands on one and start your own project today.

About the author

Raka Mahesa is a game developer at Chocoarts (chocoarts.com), who is interested in digital technology in general. In his spare time, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
What standards are needed in IoT

Raka Mahesa
17 Apr 2017
5 min read
The Internet is one of the greatest inventions of the 20th century. It started its life in the mainframe computers of military and academic organizations before making a jump to personal computers near the turn of the century. A few years later, the Internet made another jump to mobile phones and enabled us to connect to the World Wide Web anywhere and anytime we wanted. Each time we connect a device to the Internet, the device seems to become smarter and more useful; so, what if we connect every device to the Internet? This is happening, and this is how we have created the Internet of Things, also known as IoT. With all that said, it's important to keep in mind that IoT is not simply about having Internet connectivity on our devices. IoT is about a network of devices, or things, that can communicate with each other so that they can be more useful to their user. You may wonder how devices that communicate with each other become more useful. Well, just imagine: when you set the alarm on your phone to wake you up at 7 AM, it automatically tells your lamps to turn on 10 minutes after that, and your coffee machine to start brewing right away; or when you exit your house, your networked gate tells your air conditioner to turn itself off to conserve electricity; or something as simple as turning on the heater from your phone on your way home. That is IoT on a small scale. If you go bigger, you get grander systems: a bus system that dynamically allocates buses based on how crowded each bus stop is, a fully networked car fleet that can prevent traffic gridlock from happening, or an automated agricultural system that farm owners can control from an app on their phones. While governments all around the world are trying their best to build their own connected cities, small-scale IoT unfortunately doesn't really reach the masses.
Even a simple house automation system is limited to either the hobbyist or a single type of device, like the Philips Hue. One of the reasons behind this problem is the lack of unifying standards between devices; when every component behaves differently, it is difficult to create a fully networked system. Before we talk further about standards, specifically for IoT, we first need to discuss technological standards in general.

What are standards?

What's the benefit of having standards in technology? To put it simply, standards are a set of rules that specify how a certain thing should behave. When a piece of technology follows a standard, all the parties involved know how the device should work and can focus on other, more important things. For example, since all audio jacks on phones follow the same standards, users can buy headphones of any brand without any worry, and headphone manufacturers can focus on the audio performance of their accessories because they know they will work on every buyer's phone. In a world without standards, the users of, say, a Samsung phone would only be able to use Samsung headphones. What if Samsung's headphones don't have the quality you want? What if the only good headphones available cannot be used on Samsung phones? By following a standard, a company can focus only on their phones while another company focuses only on their headphones, each playing to their strengths. So now that we better understand standards, what kinds of standards do we need for the Internet of Things? The most important standard needed in the Internet of Things at the moment is a communication standard: a set of rules for communicating between devices.
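To make the idea of a communication standard concrete, here is a minimal sketch in Python of what a standardized message envelope between devices might look like. This is a hypothetical format invented for illustration, not any existing IoT standard; it uses a lightweight HMAC so that even low-power devices could authenticate messages without heavy public-key cryptography.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key; a real deployment would provision per-device keys.
SECRET_KEY = b"demo-device-key"

def make_message(device_id, payload):
    # A standardized envelope: who sent it, when, and what,
    # plus an HMAC signature so the receiver can verify authenticity.
    body = {"device": device_id, "timestamp": int(time.time()), "payload": payload}
    encoded = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, encoded, hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_message(message):
    # Recompute the HMAC over the body; reject anything altered in transit.
    encoded = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, encoded, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

msg = make_message("thermostat-01", {"temperature_c": 21.5})
print(verify_message(msg))                          # a genuine message verifies
msg["body"]["payload"]["temperature_c"] = 99.0      # tamper with the reading
print(verify_message(msg))                          # a tampered message does not
```

Agreeing on an envelope like this (field names, serialization, and signature scheme) is exactly the kind of rule set a communication standard would pin down, so that devices from different vendors can understand each other.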
One part of this standard should be about hardware, for example, what kind of connectivity should be used for communication. Should it be WiFi? Should it be Bluetooth? Should a wired connection even be considered? The other part of the needed communication standard should be about software. For example, what messaging protocol should be used for communication? What kind of encryption should be used for the message packets? We should also keep in mind that these connected devices may not have the computing power needed to process heavily encrypted messages. Another standard that is needed is a development standard. Software developers need a standardized way of developing for connected devices. Right now, all of the IoT hardware platforms, such as Arduino, Tessel, and Intel Edison, have their own development environments. Imagine if an app that could run on an LG phone couldn't be executed on a Xiaomi phone. One other standard that we need for the Internet of Things is a standard for security or authentication. How do we ensure that the person accessing a device is the same person accessing another device in the system? Right now, authentication is handled by creating an account on our phone, but a better, independent authentication method is sorely needed in the Internet of Things. We still have a long way to go before we can fully realize the potential of the Internet of Things. Right now, a lot of big companies are competing with each other, pursuing their own platforms for connected devices. I believe, however, that the way for IoT to move forward is for these companies to start collaborating and setting standards that will define the future of the Internet of Things.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. He also regularly tweets as @legacy99.
Five reasons why Xamarin will change mobile development

Packt
13 Apr 2017
2 min read
Find out why Xamarin will change mobile development

Developers of enterprise desktop and web applications are all looking for ways to extend their apps to mobile platforms without starting from scratch or compromising user experience. Most developers and organizations are turning to cloud-first and mobile-first solutions, with speed and agility in mind. After all, it's a competitive world out there, and you need to give yourself the best chance of success without breaking too much of a sweat trying to deliver. There are a few options, but Xamarin is quickly becoming the cross-platform mobile development framework of choice. If you're not sure what to make of Xamarin, or feel you need a way of separating fact from fiction, hype from reality, look no further than this free white paper, Five Reasons Xamarin Will Change Mobile Development, that we've put together with our friends at Syncfusion. If you want the summary before you dig deeper into Xamarin, here's the quick pitch: deliver native applications with more speed and less cost. To learn more, and to find out how you can start to take advantage of Xamarin, simply head on over to Syncfusion to download the white paper and find out how to bring Xamarin into your current organizational toolkit. Visit Syncfusion to download the white paper now.
What is it like to learn a language at a coding bootcamp?

Mary Gualtieri
12 Apr 2017
5 min read
Why I joined a coding bootcamp

I was 25. I couldn't figure out the direction of my life. I had several failed attempts at higher education, but I could never find my happiness at a university. I began to search for alternative ways to get an education that wouldn't take me four years before entering the workforce. I finally came across Wyncode Academy, a 9-week coding bootcamp. I read that it would allow me to enter the workforce and make a decent salary without investing years to get there. This was the perfect solution for me. Did I know anything about a coding language going into this? Only the bare basics. Choosing between a traditional school and an alternative school can be a daunting task. You really have to decide what is the best path for you. A bootcamp was the best path for ME. It was a hard experience. It was not easy. I had to get out of my comfort zone. I had to shut off my world for nine weeks, eating, breathing, and sleeping code. My family would only see me for five minutes when I would come out of my room to get a cup of coffee and leave to go to Wyncode to begin my day. But it was a small sacrifice and a short season in my life that would eventually end.

What was my experience like?

Beginning a coding bootcamp was like beginning middle school all over again. I walked into a room of twenty-five people, ranging in age from 20 to 55. We looked at each other, not knowing what to say to one another. We were from all different walks of life. Some of us were lawyers, some of us were business owners, and some of us were looking to do something different with our careers. Mostly, we were all looking for a change and something to breathe new life into us. Our days would begin early. We would get to our classroom around 7:30 in the morning to begin coding. If one of us didn't understand a homework problem, we had someone who knew how to solve it. We always had someone who would explain the why and the how of getting to a solution.
At around 10 in the morning, we would begin our lectures on various topics. In the afternoon, we had some type of activity to apply what we had just learned, like a hackathon or a live coding session. We would end our lectures around 4:00 in the afternoon. After that was when the real learning and my curiosity for coding began: I had all this knowledge I had just learned, but now I had to apply it on my own. Toward the end of my bootcamp experience, it was finally time to apply what I had learned from the previous weeks. We worked in teams to build full-stack applications and give real pitches to judges. It simulated what it would be like to pitch an idea to real investors. Even though my time at my bootcamp ended, my time as a web developer was just beginning. I learned enough to keep my curiosity alive and to keep going. On our last day of the bootcamp, we were told to keep the ABCs: Always Be Coding. That rang true for me. During my bootcamp tenure, I learned Ruby on Rails, HTML, CSS, and JavaScript as my foundation. It was a good foundation to begin with because I was taught fundamentals and theories that could be applied to different languages. My curiosity, in the end, was what pushed me to seek out more coding languages. After the last day of my bootcamp, I committed myself to being a JavaScript developer. I began learning other JavaScript libraries that would not only make me a desirable candidate to hire, but also make me happy and interested in what I was doing. Fast-forward a couple of years: I am now the lead web developer for a small communications company. I'm still a constant learner, but now I get to invest in the people I help lead. I look forward to going to work every day, and it all started with going to a coding bootcamp. The best part of my bootcamp experience was the confidence I gained to succeed in the workforce. I believe in myself and finally have the "I can" attitude that I had been lacking.
Doors have opened for me that I didn't know I wanted open. I gained an unexpected family that shared a great experience with me. I gained a business acumen that has set me apart from the person next to me. Going to a coding bootcamp was one of the best decisions I have ever made. My only wish is that I had done it sooner. About the author Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the Web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.

Hari Vignesh
12 Apr 2017
8 min read

11 ways developers can make an impact on management

The annual closing for your organization is coming. Is your increment satisfactory? Did someone outperform you? Did they outperform you without working as hard as you do? Have you ever worried about how to impress your boss or your management? Are you earning credit for your work? If you don't have a proper answer, you're in the right place. In this blog post, I'll share a few hacks that'll keep you on the road to impressing your management. Pay Attention to the Minor Details This is the most crucial rule. I have seen many super smart and talented programmers write very crisp and clean code, and do it fast. But sometimes, they miss out on minor requirement details. These minor details can include applying formatting to a field, validating a date field for proper input, formatting a table, or even a font on a page. If I were your project manager or lead, I wouldn't be happy with you if you failed to pay attention to these minor details. It's never good to repeat work over and over again. Rethink Before You Deliver One of my bad habits is not reviewing my work. Even after I write an article, I hate to review it. You must make sure to review (sanity or unit test) your work before you deliver it and see how you can improve it. Sometimes, this is difficult due to tight deadlines, and in that case, I would hold project managers and team leads accountable for that. If you need extra time to review it, let your boss know. If he does not give you extra time, then you do not need to worry about it. Choose Design Patterns Wisely Design patterns are good for undertaking TDD (test-driven development). This reduces the cost of testing, and the chances of the build being stable are high. Adopting the right tech stack, choosing the right framework, calculating risks on migration, and delivering flexible and maintainable code all help in building trust with your management.
In short, you can impress them by delivering products with the latest and scalable tech, and also by reducing the maintenance cost. Show Up Excited and Eager to Learn Every Day This one quality is lacking in many employees, especially among fresh grads. All of their excitement about their work seems to last only a few months. This really impacts their career growth in a bad way, and it definitely creates a negative impression among management. How do you solve this problem? The ethical solution is to pick a job that doesn't feel like a job. You should love to do it every day. In short, take up your passion as your job. If it's too late for that, then make up your mind and learn about your domain every day. Remember, knowledge is power. The more knowledgeable you are, the more you will be respected by your management, and it will help in cultivating trust. There is a Japanese word, "Kaizen," which means continuous improvement. This philosophy suits all things in life. Continuous improvement will impact your career in a positive way. Clear Understanding of Roles and Responsibilities Most people fail to keep up with their roles and responsibilities. Roles and responsibilities are like promises made to the company. Employees should never forget them or be casual about them. These promises determine your daily activity, and remember that the organization is paying for that. So it's time that we follow those promises to the letter. Developers especially, in most companies, will have the following responsibilities:

- Writing clean and maintainable code
- Reviewing the code regularly and following standards
- Using optimal solutions
- Constant improvement in their performance
- Coordinating with co-workers to meet deliverables
- Contributing more hours when requested by the company

These are some of the basic responsibilities that I've seen in many job descriptions. Following all of them, or at least 90 percent of them, will create a hugely positive impact.
Understanding Client or Product Requirements (End to End) If you're working for a services firm, you need to understand the client's requirements to the letter. If the requirements come down through a multilevel hierarchy, set up a meeting with your peers, vet your understanding, and get an approved document regarding the requirements. It's a very good practice to document every conversation you've had in these meetings (minutes of meeting, or MOM) and communicate all of these conversations via e-mail (just to have proof with time stamping) to avoid getting into unnecessary politics within the firm. If you're working for a product-based company, it's mandatory to understand the product's needs and purpose. And the management will love it if you come up with doubts and issues regarding the requirements, because it will help them to refine the items on their plate. So before beginning your work, clarify most of your doubts and vet the solution that you're planning to implement. Sticking to the Timeline As a developer, you'll be delivering plenty of items throughout your career. It's critical to stick to the time promised in order to maintain your trust and reputation. So before starting your work, it's essential to negotiate the time period for the deliverables. If the timeline management is suggesting is impossible, it's your responsibility to make them understand and compromise on the respective aspects. In short, delivering things on or within the promised time is critical to building trust with your management. If you're good at delivering things on time, then management will definitely expose you to multiple opportunities and benefits. Understanding the Psychology of Your Peers, Management, and Clients You must understand what makes your management happy. For that, you need to understand their behavior.
Whether it's clients, peers, or your management, you can understand their character just by talking to them on a regular basis. The equation is simple. Make the client happy or satisfied, and you will in turn make the management happy. If it's a product-based company, make the users and customers happy; this will definitely make your management happy. To make your client or customer happy, you need to know what kind of person they are and what they need. To know what they need, you need to establish a good rapport and talk to them frequently. So give them what they need, and the equation executes on its own. Demand Responsibility Management always encourages and loves people who can take up any of the responsibilities available on their plate. So if you don't have much work, or if you feel you're not occupied, feel free to ask your management for additional responsibilities (sometimes, managers will keep you free to see how you're doing: are you demanding work or just enjoying the benefits?). If there are any vacant roles, make a request to your management and help them understand why you would be the right fit. It's all up to you how you play your cards. Feed Them Ideas Regularly Management isn't the only idea vending machine; you also play an important part. Sharing ideas continuously and brainstorming ways to improve the product or deliverables will showcase your management skills to your management team. Also, if you have a wonderful idea and management has decided to implement it, make sure it's known that it was your idea. Plus, if your idea gets approval, it will likely come to you for implementation, so make sure that your idea is actually implementable. Show Your Growth on Professional Networks If you're not on professional networks like LinkedIn and AngelList, start creating your profiles today.
For every professional achievement, within the company or outside it, record it and make sure that your management team sees it. This will constantly remind them of your growth and will keep them from overlooking you and your contributions. Remember: you don't just have a job, you have an opportunity. You have a chance to prove yourself. Show up hungry. Make it matter. Thank you for spending your valuable time reading this article. I hope you've picked up a few tips to take home and practice. If you've liked this article, please share it. About the author Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter at @HariofSpades.
Raka Mahesa
12 Apr 2017
5 min read

What does 'Infrastructure as Code' actually mean?

15 years ago, adding an additional server to a project's infrastructure was a process that could take days, if not weeks. Nowadays, thanks to cloud technology, you can get a new server ready for your project in just a few seconds with a couple of clicks. New and better technology isn't without its own set of problems, though. Because it's now very easy to add servers to your project, your capability to manage your project's infrastructure usually doesn't grow as fast as the size of that infrastructure. This leads to a lot of problems in the backend, such as inconsistent server configurations or configurations that can't be replicated. It's a common problem among massive web projects, so various approaches to tackle it are being devised. One such approach is known as 'Infrastructure as Code.' Before we go on talking about Infrastructure as Code, let's first make sure that we understand the basics: infrastructure configuration and automation. Before an infrastructure (or a server) can be used to run a web application, it first has to be configured so that it has all of the requirements needed to run that application. This configuration ranges from the very basic, such as operating systems and database types, to user accounts and software runtimes. And when dealing with virtual machines, configuration can even include the amount of RAM, storage space, and processing power a server has. All of this configuration is usually done by typing the required commands into a terminal connected to the infrastructure. Of course, you can do it all manually, typing the commands to install the needed software one by one, but what if you have to do that on tens, if not hundreds, of servers? That's where infrastructure automation comes in.
By saving all of the needed commands to a script file, we can easily repeat the process on other servers that need to be configured, simply by running that script. All right, now that we have the basics behind us, let's move on. What does Infrastructure as Code really mean? Infrastructure as Code, also known as Programmable Infrastructure, is a process for managing computing and networking infrastructure using software development methodologies. These methodologies include version control, testing, continuous integration, and other practices. It's an approach that handles servers by treating infrastructure as if it were code, hence the name. But wait: because infrastructure automation uses script files for configuring servers, isn't that the same as treating infrastructure as code? Does it mean that Infrastructure as Code is just a cool term for infrastructure automation? Or are they actually different things? Well, infrastructure automation is indeed one part of the Infrastructure as Code process, but it's the other part, the software development practices, that differentiates the two. By employing software development methodologies, Infrastructure as Code can ensure that the automation will work reliably and consistently on every part of your infrastructure. For example, by using a version control system on the server configuration script, any changes made to the file will be tracked, so when a problem arises in the server, we can find out exactly which change caused it. Another software development practice that can be applied to infrastructure automation is automated testing. Having this practice makes it safer for developers to add changes to the script, because any error added to the project can be detected quickly. All of these practices help ensure that the configuration script files are correct and reliable, which in turn ensures a robust and consistent infrastructure.
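To make the "treat configuration as code" idea concrete, here is a minimal Python sketch. The package names and desired state below are invented for illustration; real tools such as Ansible, Puppet, and Chef apply the same desired-state idea at much larger scale. Because the desired state is plain data and the planning step is a pure function, both can be version-controlled and unit-tested, exactly the software development practices Infrastructure as Code borrows.

```python
# A toy "configuration as code" sketch. The package names and server
# states are hypothetical examples, not any real tool's API.

DESIRED_STATE = {
    "packages": {"nginx", "postgresql", "python3"},
}

def plan(current_packages, desired_state):
    """Return the install/remove actions needed to converge a server
    from its current set of packages to the desired state."""
    desired = desired_state["packages"]
    return {
        "install": sorted(desired - current_packages),
        "remove": sorted(current_packages - desired),
    }

# Because plan() is data in, data out, it is trivially testable:
actions = plan({"nginx"}, DESIRED_STATE)
# actions lists what a fresh server with only nginx still needs.
```

Running the same plan against every server is what makes the resulting infrastructure consistent: the script, not a human at a terminal, decides what each machine gets.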
There's also one more thing to consider: do not confuse Infrastructure as Code (IaC) with Infrastructure as a Service (IaaS). Infrastructure as a Service is a cloud computing service that provides infrastructure to developers and helps them manage it. This service allows developers to easily monitor and configure resources in their infrastructure. Examples of these types of cloud services are Amazon Web Services, Microsoft Azure, and Google Compute Engine. So, if both Infrastructure as Code and Infrastructure as a Service help developers manage their infrastructure, how exactly do they differ? Well, to put it in simple terms, IaaS is a tool (the hammer) that gives developers a way to quickly configure their infrastructure, while Infrastructure as Code is a method of utilizing such tools (the carpentry). Just as you can do carpentry without a hammer, you're not restricted to using IaaS if you want to apply Infrastructure as Code practices to your infrastructure. That said, one of the big requirements for being able to run Infrastructure as Code practices is a dynamic infrastructure system; that is, a platform where you can programmatically create, destroy, and manage infrastructure resources on demand. While you can implement this system on your own private infrastructure, most of the IaaS offerings available on the market already have this capability, making them the perfect platform for the Infrastructure as Code process. That's the gist of the Infrastructure as Code approach. There are plenty of tools out there that enable you to apply Infrastructure as Code, including Ansible, Puppet, and Chef. Go check them out if you want to try this methodology for yourself.
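What "programmatically create, destroy, and manage resources on demand" looks like can be pictured with a toy in-memory stand-in for an IaaS API. The class and method names here are invented for illustration and don't correspond to any real provider's SDK; real IaaS platforms expose conceptually similar calls over HTTP.

```python
import uuid

class ToyCloud:
    """An in-memory stand-in for an IaaS API: servers can be created,
    listed, and destroyed from code, with no human at a console."""

    def __init__(self):
        self._servers = {}

    def create_server(self, name, ram_gb=1):
        """Provision a new (pretend) server and return its ID."""
        server_id = str(uuid.uuid4())
        self._servers[server_id] = {"name": name, "ram_gb": ram_gb}
        return server_id

    def destroy_server(self, server_id):
        """Tear the server down; unknown IDs are ignored."""
        self._servers.pop(server_id, None)

    def list_servers(self):
        return list(self._servers.values())

# An Infrastructure as Code workflow scripts calls like these, so the
# same run always produces the same set of resources:
cloud = ToyCloud()
web = cloud.create_server("web-1", ram_gb=2)
cloud.destroy_server(web)
```

It is this programmability, rather than any particular vendor, that Infrastructure as Code depends on.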
About the author Raka Mahesa is a game developer at Chocoarts, http://chocoarts.com/, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets at @legacy99. 

Raka Mahesa
11 Apr 2017
5 min read

AI and the Raspberry Pi: Machine Learning and IoT, What's the Impact?

Ah, Raspberry Pi, the little computer that could. On its initial release back in 2012, it quickly gained popularity among creators and hobbyists as a cheap and portable computer that could be the brain of their hardware projects. Fast forward to 2017, and the Raspberry Pi is on its third generation and has been used in many more projects across various fields of study. Tech giants are noticing this trend and have started to pay closer attention to the miniature computer. Microsoft, for example, released Windows 10 IoT Core, a variant of Windows 10 that can run on a Raspberry Pi. Recently, Google revealed plans to bring artificial intelligence tools to the Pi. And not just Google's AI: more and more AI libraries and tools are being ported to the Raspberry Pi every day. But what does it all mean? Does it have any impact on the Raspberry Pi's usage? Does it change anything in the world of the Internet of Things? For starters, let's recap what the Raspberry Pi is and how it has been used so far. The Raspberry Pi, in short, is a super cheap computer (it costs only $35) the size of a credit card. However, despite its ability to be used as a normal, general-purpose computer, most people use the Raspberry Pi as the base of their hardware projects. These projects range from simple toy-like builds to complicated gadgets that do important work. They can be as simple as a media center for your TV or as complex as a house automation system. Do keep in mind that these kinds of projects can always be built with desktop computers, but it's not really practical to do so without the low price and small size of the Raspberry Pi. Before we go on talking about having artificial intelligence on the Raspberry Pi, we need a shared understanding of AI. Artificial intelligence has a wide range of complexity.
It can range from a complicated digital assistant like Siri, to a news-sorting program, to the simple face detection system found in many cameras. The more complicated the AI system, the bigger the computing power it requires. So, with the limited processing power of the Raspberry Pi, the types of AI that can run on the mini computer will be limited to the simpler ones. There's also another aspect of AI called machine learning. It's the kind of technology that enables an AI to play, and win, against humans in a match of Go. The core of machine learning is to make a computer improve its own algorithm by processing a large amount of data. For example, if we feed a computer thousands of cat pictures, it will be able to define a pattern for 'cat' and use that pattern to find cats in other pictures. There are two parts to machine learning. The first is the training part, where we let a computer find an algorithm that suits the problem. The second is the application part, where we apply the new algorithm to solve the actual problem. While the application part can usually run on a Raspberry Pi, the training part requires much higher processing power. To make it work, the training is done on a high-performance computer elsewhere, and the Raspberry Pi only executes the training result. So, now we know that the Raspberry Pi can run simple AI. But what's the impact of this? Well, to put it simply, having AI will enable creators to build an entirely new class of gadgets on the Raspberry Pi. It will allow makers to create actually smart devices based on the small computer. Without AI, a so-called smart device will only act according to a limited, predefined set of rules. For example, we can develop a device that automatically turns off the lights at a specific time every day, but without AI we can't have the device detect whether there's anyone in the room.
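The training/application split, and the room-occupancy example, can be sketched in a few lines of Python. Everything here is invented for illustration: a real system would train on far more data and richer sensor input. The point is only that the heavy training step runs once on a powerful machine, while the cheap classification step is all the Raspberry Pi has to execute.

```python
# Toy illustration of the two parts of machine learning.
# Sensor values and labels are made-up numbers, not real data.

def train(samples):
    """Training step (run elsewhere, on a powerful machine):
    compute one centroid per label from (label, value) pairs."""
    sums, counts = {}, {}
    for label, value in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, value):
    """Application step (cheap enough for the Pi):
    pick the label whose centroid is nearest to the reading."""
    return min(model, key=lambda label: abs(model[label] - value))

# Heavy step, done once, off-device:
model = train([("empty", 0.1), ("empty", 0.2),
               ("occupied", 0.8), ("occupied", 0.9)])

# Light step, run on the device for every new sensor reading:
if classify(model, 0.85) == "occupied":
    action = "keep lights on"
else:
    action = "turn lights off"
```

The Pi never sees the training data, only the tiny `model` dictionary, which is exactly why the application part fits within its limited processing power.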
With artificial intelligence, our devices will be able to adapt to unscripted changes in our environment. Imagine connecting a toy car to a Raspberry Pi and a webcam and having the car smartly map its path to a goal, or a device that automatically opens the garage door when it sees your car coming in. Having AI on the Raspberry Pi will enable the development of such smart devices. There's another thing to consider. One of the Raspberry Pi's strong points is its versatility. With its USB ports and GPIO pins, the computer is able to interface with various digital sensors. The addition of AI will enable the Raspberry Pi to process even more sensors, like fingerprint readers, or speech recognition with a microphone, further enhancing its flexibility. All in all, artificial intelligence is a perfect addition to the Raspberry Pi. It enables the creation of even smarter devices based on the computer and unlocks the potential of the Internet of Things for every maker and tinkerer in the world. About the author Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets at @legacy99.