
Tech Guides


Developers are today's technology decision makers

Richard Gall
01 Aug 2017
3 min read
For many years, technology in large organizations has been defined by established vendors. Oracle. Microsoft. Huge corporations were setting the agenda when it came to the technology being used by businesses. These tech organizations provided solutions - everyday businesses simply signed themselves up. But this year's Skill Up survey painted an interesting picture of a world in which developers and tech professionals have a significant degree of control over the tools they use.

When we asked people how much choice they have over the tools they use at work, half of all respondents said they have at least a significant amount of choice over the software they use. This highlights an important fact of life for tech pros, engineers and developers across the globe - your job is not just about building things and shipping code, it's also about understanding the tools that are going to help you do that.

To be more specific, what this highlights is that open source is truly mainstream. What evolved as a cultural niche of sorts in the late nineties has become fundamental to the way we understand technology today. Yes, it's true that large tech conglomerates like Apple, Facebook, and Google have a huge hold on consumers across the planet, but they aren't encouraging lock-in in the way that the previous generation of tech giants did. In fact, they are actually pushing open source into the mainstream. Facebook built React; Google are the minds behind Golang and TensorFlow; Apple have done a lot to evolve Swift into a language that may come to dominate the wider programming landscape. We are moving to a world of open systems, where interoperability reigns supreme. Companies like Facebook, Google, and Apple want consumer control, but when it comes to engineering and programming they want to empower people - people like you.

If you're not convinced, take the case of Java. Java is interesting because, in many respects, it's a language that was representative of the closed systems of enterprise tech a decade ago. But its function today has changed - it's one of the most widely used programming languages on GitHub, used in a huge range of open source projects. C# is similar - in it you can see how Microsoft's focus has changed, the organization's stance on open source softening as it becomes more invested in a culture where openness is the engine of innovation.

Part of the reason for this is a broader economic change in the very foundations of how software is used today and what organizations need to understand. As trends such as microservices have grown, and as APIs become more important to the development and growth of businesses - those explicitly rooted in software or otherwise - software must necessarily become open and changeable. And, to take us back to where we started, the developers, programmers, and engineers who build and manage those systems must be open and alive to the developing landscape of software they can use in the future.

Decision making, then, is a critical part of what it means to work in software. That may not have always been the case, but today it's essential. Make sure you're making the right decision. Read this year's Skill Up report for free.


The 5 hurdles to overcome in JavaScript

Antonio Cucciniello
26 Jul 2017
5 min read
If you are new to JavaScript, you may find it a little confusing depending on what computer language you were using before. Although JavaScript is my favorite language to use today, I cannot say that it was always this way. There were some things I truly disliked and was genuinely confused by in JavaScript. At this point I have come to accept these things. Today we will discuss the five hurdles you may come across in the JavaScript programming language.

Global variables

No matter what programming language you are using, it is never a good idea to have variables, functions, or objects as part of your global scope. It is good practice to limit the number of global variables as much as possible. As programs get larger, there is a greater chance of naming collisions, and making something global gives access to code that does not necessarily need it. When implementing things, you want a variable to have only as much scope as it needs. In JavaScript, you can access some global variables and objects through window. You can add things to it if you would like, but you should not.

Use of Bitwise Operators

As you probably know, JavaScript is a high level language that does not communicate with the hardware much. There are these things called bitwise operators that allow you to compare the bits of two variables. For instance, x & y does an AND operation on x and y. The problem with this is that in JavaScript there is no such thing as an integer, only double precision floating point numbers. So in order to do the bitwise operation, it must convert x and y to integers, compare the bits, and then convert them back to floating point numbers. This is much slower to perform and really should not be done, but then again it is somehow allowed.

Coding style variations

From seeing many different open source repositories, there does not seem to be one coding style standard that everyone adheres to. Some people love semicolons, others hate them. Some people adore ES6, other people despise it. Personally, I am a fan of using standard for coding style, and I use ES5. That is solely my opinion though. When comparing code with other people who have completely different styles, it can be difficult to use their code or write something similar. It would be nice to have a more generally accepted style that is used by all JavaScript developers. It would make us more productive overall.

Objects

Coming from a class-based language, I found the topic of prototypical inheritance difficult to understand and use. In prototypical inheritance, all objects inherit from Object.prototype. That means that if you try to refer to a property of an object that you have not defined for yourself, but it exists as part of Object.prototype, then it will execute using that property or function. There is a chain of objects where each object inherits all of the properties from its parent and all of that parent's parents. This means your object might have access to plenty of functions it does not need. Luckily, you can override any of the parent functions by defining the function on the object itself.

A large amount of falsy values

Here is a table of the falsy values used in JavaScript:

Falsy value | Type
0 | Number
NaN | Number
'' | String
false | Boolean
null | Object
undefined | Undefined

All of these values represent a different falsy value, but they are not interchangeable; they only work for their type in JavaScript. As a beginner, trying to figure out how to check for errors at certain points in your code can be tricky.
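To make that last point concrete, here is a minimal sketch (runnable in Node.js or a modern browser console) showing how the different falsy values behave; the function and variable names are just illustrative:

```javascript
// All of these are falsy, so a bare truthiness check cannot tell them apart.
const values = [0, NaN, '', false, null, undefined];

values.forEach((v) => {
  // Every one of these logs "falsy".
  console.log(v, Boolean(v) ? 'truthy' : 'falsy');
});

// A loose check like `if (!value)` treats all of them the same,
// which is usually not what you want when 0 or '' are legitimate inputs.
function describe(value) {
  if (value === null || value === undefined) return 'missing';
  if (Number.isNaN(value)) return 'not a number';
  return 'present';
}

console.log(describe(0));         // 'present' - 0 is a valid value here
console.log(describe(''));        // 'present' - an empty string is still a string
console.log(describe(null));      // 'missing'
console.log(describe(undefined)); // 'missing'
console.log(describe(NaN));       // 'not a number'
```

Checking explicitly for null and undefined (or using strict === comparisons) avoids accidentally rejecting legitimate values like 0 or an empty string.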
Not to harp on the problem with global variables again, but undefined and NaN are both properties of the global scope. In older JavaScript engines you could actually reassign them, and you can still shadow undefined with a local variable. This should perhaps be illegal, because that one change can affect your entire product or system.

Conclusion

As mentioned in the beginning, this post is simply an opinion. I am coming from a background in C/C++ and then to JavaScript. These were the top 5 problems I had with JavaScript that made me really scratch my head. You might have a completely different opinion reading this from your different technical background. I hope you share your opinion! If you enjoyed this post, tweet and tell me your least favorite part of using JavaScript, or if you have no such problems, then please share your favorite JavaScript feature!

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello


Say hello to DeepMap

Graham Annett
26 Jul 2017
5 min read
Yet another company has emerged, with the announcement of DeepMap, making huge strides in the world of self-driving cars. The web page and announcements mention that many of the engineers are former Google and Apple employees (both companies make maps and directions a huge aspect of their products, with Google Maps and Apple Maps being used by a vast number of people day-to-day), which is incredibly promising, because these sorts of engineers are probably exquisitely insightful and have unique ideas that may not have been carried out at an already established company, or may have taken much longer to come to fruition.

Self-driving vehicles

From a basic standpoint, self-driving vehicles are built around taking whatever input the car has and generating an output that relates to speed and direction (i.e. for a traditional driving image, what speed and direction should the car use to stay on a path to the destination and avoid crashing). Many of the startups doing this seem to be focused on using image-based deep learning systems as the underlying technology (typically, convolutional neural networks). These systems have made incredible strides in recent years, and the companies implementing these autonomous vehicles have made tremendous progress (think Google's self-driving car, or Tesla, who recently hired Andrej Karpathy as head of AI). There has also been a recent scramble for many other companies to enter the autonomous vehicle arena and create competitive offerings (for instance the recent company Andrew Ng has been associated with), and even recent scandals such as the Google-Uber lawsuit. These events are signs that this technology is going to become incredibly commonplace very soon, and will be an integral technology that people will come to expect in day-to-day life, somewhat akin to smartphones.

LiDAR systems

One of the interesting things after looking into DeepMap is that the company and its underlying technology seem to be heavily focused on LiDAR systems, versus the approach that many other companies seem to be using with camera/image-based mapping. LiDAR is different in that it depends on pulsing lights to create a representation of the 3D surfaces around it (quite an oversimplification). While I'm not an expert in autonomous vehicles, I'm guessing that a combination of LiDAR-based and image-based approaches will make for the first true autonomous vehicle, in that simply relying on one type of data is too dangerous when the stakes of self-driving cars carry huge implications for the technology companies behind them.

A continuously updating and dynamic system

After reading through the introduction post by the cofounders, James Wu and Mark Wheeler, I was struck by the fact that the company raised a sizable amount of money for something in stealth, and also by the many novel explanations and ideas in the post. One of the ideas that struck me as incredibly profound is viewing maps not as a static image that may be useful to humans, but as a continuously updating and dynamic system that incorporates an entire data stack and is useful to a machine such as a self-driving car. This may be incredibly obvious to people already in the autonomous vehicle industry, but as an outsider, it made me think that maps without a dynamic and huge data stack underlying them will not only be useless to autonomous vehicles, but perhaps even dangerous.
The map that humans see and glean knowledge from would be fundamentally different from what is useful to machines, and that makes me curious about the implications that deep learning and "AI" will have in other realms (for instance in NLP and time series data).

Healthy competition

Having actual competition among companies in the pursuit of a true self-driving car is a revolutionary achievement from a technological standpoint, as it encourages companies to try new and novel approaches in a field that will likely never be fully 'solved,' but will need constant and continuous improvement, much like other technology fields. (There is a 5-level SAE classification, and while I am not entirely sure where Tesla or Google's vehicles currently stand, no one has yet achieved level 5, which is what would most likely be necessary for autonomous cars to replace large swaths of industry and have worldwide impact.)

In conclusion, the technical ideas that DeepMap is bringing, along with yet another company pushing forward the prospects of autonomous vehicles becoming commonplace, is incredibly promising and something to keep an eye on. Hopefully the products and technology they claim to be working on will be as groundbreaking as they propose, and not just crash and burn like many technology startups seem to do.

About the author

Graham Annett is an NLP Engineer at Kip (Kipthis.com). He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras (https://github.com/fchollet/keras). He can be found on GitHub at http://github.com/grahamannett or via http://grahamannett.me.


The most important skills you need in DevOps

Rick Blaisdell
19 Jul 2017
4 min read
During the last couple of years, we've seen how DevOps has exploded and become one of the most competitive differentiators for every organization, regardless of size. When talking about DevOps, we refer to agility and collaboration, the keys that unlock a business's success. However, to make it work for your business, you have to first understand how DevOps works, and what skills are required for adopting this agile business culture. Let's look at this in more detail.

DevOps culture

Leaving the benefits aside, here are the three basic principles of a successful DevOps approach:

- Well-defined processes
- Enhanced collaboration across business functions
- Efficient tools and automation

DevOps skills you need

Recently, I came across an infographic showing the top positions that technology companies are struggling to fill, and DevOps was number one on the list. Surprising? Not really. If we look at the skills required for a successful DevOps methodology, we will understand why finding a good DevOps engineer is akin to finding a needle in a haystack. Besides communication and collaboration, which are the most obvious skills that a DevOps engineer must have, here is what draws the line between success and failure:

- Knowledge of infrastructure – whether we are talking about datacenter-based or cloud infrastructure, a DevOps engineer needs to have a deep understanding of different types of infrastructure and their components (virtualization, networking, load balancing, etc).
- Experience with infrastructure automation tools – taking into consideration that DevOps is mainly about automation, a DevOps engineer must have the ability to implement automation tools at any level.
- Coding – when talking about coding skills for DevOps engineers, I am not talking about just writing code, but rather delivering solutions. In a DevOps organization, you need well-experienced engineers who are capable of delivering solutions.
- Experience with configuration management tools – tools such as Puppet, Chef, or Ansible are mandatory for optimizing software deployment, and you need engineers with the know-how.
- Understanding continuous integration – being an essential part of a DevOps culture, continuous integration is the process that increases engagement across the entire team and allows source code updates to be run whenever required.
- Understanding security incident response – security is the hot button for all organizations, and one of the most pressing challenges to overcome. Having engineers who have a strong understanding of how to address various security incidents and develop a recovery plan is mandatory for creating a solid DevOps culture.

Besides the above skills that DevOps engineers should have, there are also skills that companies need to adopt:

- Agile development – an agile environment is the foundation on which the DevOps approach has been built. To get the most out of this innovative approach, your team needs to have strong collaboration capabilities to improve delivery and quality. You can create your dream team by teaching different agile approaches such as Scrum, Kaizen, and Kanban.
- Process reengineering – forget everything you knew. This is one good piece of advice. The DevOps approach has been developed to polish and improve the traditional software development lifecycle, but also to highlight and encourage collaboration among teams, so an element of unlearning is required.
The DevOps approach has changed the way people collaborate with each other, improving not only processes, but products and services as well. Here are the benefits:

- Faster delivery times – every business owner wants to see their product or service on the market as soon as possible, and the DevOps approach manages to do that. Moreover, since you decrease the time-to-market, you will increase your ROI; what more could you ask for?
- Continuous release and deployment – with strong continuous release and deployment practices, the DevOps approach is the perfect way to ensure the team is continuously delivering quality software within shorter timeframes.
- Improved collaboration between teams – there has always been a gap between the development and operations teams, a gap that disappeared once DevOps was born. Today, in order to deliver high-quality software, the devs and ops are forced to collaborate, share, and revise strategies together, acting as a single unit.

Bottom line, DevOps is an essential approach that has changed not only results and processes, but also the way in which people interact with each other. Judging by the way it has progressed, it's safe to assume that it's here to stay.

About the author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies, which reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.


Skill Up 2017: What we learned about tech pros and developers

Packt
17 Jul 2017
2 min read
The results are in. 4,731 developers and tech professionals have spoken. And we think you'll find what they have to say pretty interesting. From the key tools and trends that are disrupting and changing the industry, to learning patterns and triggers, this year's report takes a bird's eye view of what's driving change and what's impacting the lives of developers and tech pros around the globe in 2017. Here are the key findings - but download the report to make sure you get the full picture of your peers' professional lives.

- 60% of our respondents have either a 'reasonable amount of choice' or a 'significant amount of choice' over the tools they use at work - which means that understanding the stack and the best ways to manage it is a key part of any technology professional's knowledge.
- 28% of respondents believe technical expertise is used either 'poorly' or 'very poorly' in their organization. Almost half of respondents believe their manager has less technical knowledge than they do.
- People who work in tech are time poor - 64% of respondents say time is the biggest barrier to their professional development.
- The Docker revolution is crossing disciplines, industries and boundaries - it's a tool being learned by professionals across industries.
- Python is the go-to language for a huge number of different job roles - from management to penetration testing.
- 40% of respondents dedicate time to learning every day - a further 44% dedicate time once a week.
- Young tech workers are keen to develop the skillset they need to build a career but can find it hard to find the right resources - they also say they lack motivation.
- Big data roles are among the highest paying in the software landscape - demonstrating that organizations are willing to pay big bucks for people with the knowledge and experience.
- Tools like Kubernetes and Ansible are increasing in popularity - highlighting that DevOps is becoming a methodology - or philosophy - that organizations are starting to adopt.

That's not everything - but it should give you a flavour of the topics that this year's report touches on. Download this year's Skill Up report here.


How to create tech teams that talk

Hari Vignesh
16 Jul 2017
5 min read
Great tech teams are not born, they're made. While greatness can be a product of stringent and cutthroat practices, building a talented and happy team can be a pleasant - not painful - process. It won't be easy, though. Keeping your tech team motivated isn't just about setting aside a budget for a monthly dinner. If you want to retain your best and brightest, you'll need to establish organizational excellence, giving employees opportunities to develop or do different work.

"Employees want interesting work that challenges them. Performing meaningful work gives them a feeling that what they do is important, and provides opportunities for growth so that they feel competent," says Irene de Pater, an assistant professor at the National University of Singapore (NUS) Business School's Department of Management and Organization.

Don't build a wall around your tech team

It's important to include other key team members in the interview process. A poor culture fit can lead to turnover that costs companies up to 60 percent of the person's annual salary. A good fit ensures that engineers and members of different functions can effectively communicate and work with each other. Especially in products requiring complicated engineering, companies may risk critical failure if engineers are not coordinated. For example, in a case described by the Harvard Business Review, the A380 "superjumbo" by Airbus overran time and budget constraints due to incompatibilities in the design of the plane's fuselage, which were discovered late in development. They could have avoided this by using a shared communication platform and compatible computer-aided design (CAD) tools.

To bring down this wall around your team, you can start pairing up members initially on small projects. Let the paired programmers share a single desktop and review the code together. This allows developers to work together to find the best approach to creating good code.

Give people space, lots of it

Keeping your developers happy isn't a mystery. Give them space, and let them build stuff. Being able to invent and innovate without pressure allows employees to see their work as meaningful, and helps them develop closer relationships with others. "Employees want good relationships with their colleagues and superiors so that they feel they have friendships and social support at work," Professor de Pater says.

Feedback can sting, but it shouldn't hurt too much

Having a culture of honest feedback will encourage employees to contribute thoughts and ideas more fearlessly. Constructive honesty can be part of the training process. Managers can focus on positive reasons for giving feedback, prepare for the session well, and handle emotional reactions calmly. For example, instead of telling a new coder that his work isn't up to your standards, a senior developer would first start a conversation focusing on what the team is trying to achieve - and whether the code is being written with the best approach. This way, the feedback is non-aggressive.

Moreover, tech teams should focus on giving 360-feedback. Traditionally, feedback is top-down. "360-feedback" is when supervisors, subordinates, and peers provide staff with constructive advice, allowing an objective and holistic look into a person's work and relationships in the company. When delivered supportively, 360-feedback can increase self-awareness and improve individual and team effectiveness, studies show.
The feedback needs to be translated into intentional action to develop new habits, or change existing ones, to remain effective.

Find passionate technologists

Another step in building your dream team is to find passionate technologists. We're not talking about developers who come to the office and don't complain. We're talking about people who spend their whole day coding, just to go home and work on a side project - a blog, an app, an open source project. You want these people on your side, because they'll go above and beyond when it comes to the technical side of your organization. They'll work hard to keep you up to date on the latest techniques and will take pride in helping your company succeed. It's easy to spot these passionate individuals during the interview process. All you need to do is ask interviewees about outside projects. If they're passionate about what they do, you'll be able to tell by how they talk about it. Grab the passionate ones.

The challenge of coordination

With so many teams moving so quickly, coordination will become a challenge. This is partly addressed by returning to a core team principle: strive for autonomy and independence. You must encourage teams to pursue projects that are within their power to take from idea to completion without the immediate need for external help. However, this does not eliminate the need to coordinate amongst teams altogether; the fact is, there are inevitably projects that require multiple teams to collaborate. In those cases, there are four ways to improve coordination:

- Each of your teams has a planning meeting every two weeks. Anyone can attend these meetings.
- Each of these teams has a demo every two weeks, in which they show off the work they've done recently. Every team that is working together can attend each other's demos.
- Hold a weekly product backlog meeting, where all product teams share upcoming projects and discuss metrics related to recently launched features.
- Finally, each team's lead developer and product owner takes specific responsibility for proactively reaching out to other teams to discuss upcoming work.

These approaches are intentionally lightweight and simple. They rely on people's own initiative to share their work, communicate actively with others, and work out the details themselves to address the many challenges of coordination.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Are technical skills overrated when hiring tech pros?

Hari Vignesh
13 Jul 2017
5 min read
The giants of the tech industry have a lock on employer branding, but analysis of the personalities of their staff presents a different picture of what it's really like in the trenches. A study published recently by Good&Co analyzed the psychometric data gained from anonymous personality quizzes completed by 4,364 tech employees at what are perceived as the five most innovative companies in Silicon Valley: Apple, Google, Facebook, Microsoft, and IBM. In total, the two-year-long study also analyzed 10 million responses from 250,000 users. Questions ranged from thoughts and feelings about networking to how they handle problems at work.

The study concluded that Facebook lags behind the others in cultivating a culture of creativity, and that Microsoft employees are more innovative than those at Apple. Both Apple and Twitter's corporate cultures are the most accurate representation of how employees perceived them. Research from the Workforce Institute at Kronos and WorkplaceTrends indicated that part of the disconnect came from a dissonance between HR, management, and staff regarding who was in charge of creating and driving a company's culture.

When it comes to technical professionals, the majority of companies look for technical skills only. If it's programming, they test candidates over many rounds; only the survivors get the job offers. For tech people, technical skills are absolutely necessary, but there are other things that need equivalent validation.

Attitude and mindset matter

There are those who are firmly in the "attitude camp." They argue that you can teach skills, but attitude is a reflection of personality - something hardwired and difficult to both teach and change. They also share the view that even the most skilled and experienced employees will fail if they have a poor attitude. Managers who hire for attitude believe it ensures you employ people who are better able to collaborate, receive feedback, adapt to workplace changes, and 'muck in' when times get tough; that in doing so, you will always hire people who will go the extra mile and continually seek ways to improve.

A lack of direct experience may be an asset

A new hire's lack of direct experience may actually serve as an asset, because with less history to cloud their vision, they may see problems in a new way and from a fresh perspective. This fresh perspective may result in them generating many new ideas and innovations. Less experienced new hires may be willing to take more risks because they haven't developed the fear level that often comes with extensive corporate experience. A fresh perspective will also undoubtedly result in new hires questioning existing practices, and these inquiries may result in new approaches, ideas, and innovations. Because their lack of credentials in previous jobs may have increased the pressure on them to continually prove themselves, these less experienced new hires may have been forced to excel in other important areas, including building relationships, creating stronger support networks, learning how to work harder, and bringing a "find-a-way" approach to their work.

Train 'em up, watch 'em go

So then, can you get it wrong when prioritizing attitude over skill? In short, yes. Inexperienced employees with excellent attitudes generally rise to the challenge, but they do require training from more experienced and costly employees, and this takes time and money.
Let's face it, developing in-depth, industry-specific skill sets and critical thinking can take months, if not years. And what happens when you invest in this and they take off? Furthermore, many believe hiring for attitude rather than skill encourages bias, which can lead to discrimination and a lack of diversity in new hires. Is it healthy to have a team who all share the same attitudes?

What can go wrong?

So what are the costs of getting it wrong? We all know hiring is no perfect science. In fact, it's important to get it wrong sometimes, so you learn from your mistakes. And let's face it, when highly experienced applicants are scarce but you have an urgent requirement for the skill, it's an easy trap to fall into: employing an applicant despite attitudinal 'red flags'. If the choice is between leaving the position vacant for an unknown period of time versus managing the concerning characteristics, you can live with them, right?

Well, many believe "no", you can't. Hiring someone because they are "really qualified" but have a bad attitude can poison a workplace. In a very short space of time, they can undo all the hard work you have done to build a fantastic team and culture. Plus, you risk losing some of your best people, who won't want to work with them. And what manager wants to spend time closely monitoring and managing a toxic team member?

Final thoughts

When hiring tech pros, of course one would hope they have sufficient technical skills and foundation. But it is also imperative to ensure that your new hire has the right values and attitude for the company, in order to fit the culture. After all, if they have a great working attitude, probation period checks and training can ensure that their skills are consistently honed and maintained.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


How to build secure microservices

Rick Blaisdell
13 Jul 2017
4 min read
A few years back, everybody was looking for an architecture that would make web and mobile application development more flexible, reliable, efficient, and scalable. In 2014, we found the answer when an innovative architectural solution took shape - microservices. The fastest growing companies are built around microservices. What makes microservice architecture fascinating is its characteristics:

- Microservices are organized around competencies, such as recommendations, front-end, or user interface.
- You can implement them using various programming languages, databases, software, and environments.
- The services lend themselves to a continuous delivery software development process; if there are any changes to the application, it requires only a few changes in a service.
- They are easy to replace with other microservices.
- These services are independently deployable, autonomously developed, and messaging enabled.

So, it's easy to understand why a microservice architecture is a perfect way to accelerate both web and mobile application development. However, one needs to understand how to build secure microservices. Security is the top priority for every business. Designing a safe microservices architecture can be simple if you follow these guidelines:

- Define access control and authorization – this is one of the crucial steps in reaching a higher level of security. It's important to understand first how each microservice could be compromised and what damage could be done. This will make it much easier for you to develop a strategy that safeguards against these incidents.
- Map communications – outlining the communication paths between microservices will give you valuable insight into any vulnerability that might eventually be exploited in a malicious attack.
- Use centralized security or configuration policies – human error is one of the most common reasons why platforms, devices, or networks get hacked or damaged. It's a fact! Employing a centralized security or configuration policy will reduce human interaction with the microservices and build the long-desired consistency.
- Establish common, repeatable coding standards – repeatable coding standards must be set up right from the development stage. This will reduce the divergences that might lead to exploitable vulnerabilities.
- Use 'defense in depth' to protect vital services – from our experience, we know that a single firewall is not strong enough to protect our entire software. Thus, enabling multi-factor authentication, which places multiple layers of security controls, is an effective way to ensure a robust security level.
- Use automatic security updates – this is crucial and easy to set up.
- Review microservices code – having multiple experts reviewing the code is a great way of making sure that errors have not slipped through the cracks.
- Deploy an API gateway – if you expose one or more APIs for external access, then deploying an API gateway could reduce security risks. Moreover, you need to make sure that all API traffic is encrypted using TLS. In fact, TLS should be used for all internal communications, right from the beginning, to ensure the security of your systems.
- Use intrusion tools and request fuzzers – we all know that it is better to find issues before an attacker does. 'Fuzzing' is a technique that finds code vulnerabilities by sending large quantities of random data to the system, as the sketch below illustrates.
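To make the fuzzing idea concrete, here is a minimal request-fuzzing sketch in Node.js. It assumes a hypothetical service listening at http://localhost:3000/api/items and uses the global fetch available in Node 18+; the endpoint, payload shape, and iteration count are illustrative only, not a reference implementation:

```javascript
const crypto = require('crypto');

// Send many random payloads to one endpoint and flag responses that
// suggest unhandled input (5xx status codes or outright failures).
async function fuzzEndpoint(url, iterations = 200) {
  for (let i = 0; i < iterations; i++) {
    // Random bytes of random length, wrapped in a JSON field.
    const junk = crypto.randomBytes(1 + Math.floor(Math.random() * 512)).toString('base64');
    try {
      const res = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ data: junk }),
      });
      if (res.status >= 500) {
        console.log(`Iteration ${i}: server error ${res.status} for payload length ${junk.length}`);
      }
    } catch (err) {
      console.log(`Iteration ${i}: request failed - ${err.message}`);
    }
  }
}

// Hypothetical endpoint - point this at a test instance, never at production.
fuzzEndpoint('http://localhost:3000/api/items');
```

Dedicated fuzzing tools go much further (mutating valid requests, tracking coverage), but even a crude loop like this can surface endpoints that crash on malformed input.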
An approach like this will ultimately highlight whether the code could be compromised and what could cause it to fail. Now that we're all set with the security measures required for building microservices, I would like to give a quick overview of the benefits that this innovative architecture has to offer:

- Fewer dependencies between teams
- The ability to run multiple initiatives in parallel
- Support for various technologies, frameworks, or languages
- Ease of innovation through disposable code

Besides the tangible advantages named above, microservices deliver increased value to your business, such as agility, comprehensibility of the software systems, independent deployability of components, and organizational alignment of services. I hope that this article will help you build a secure microservices architecture that adds value to your business.

About the author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies, which reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.


Why do so many companies fail to take cyber security seriously?

Hari Vignesh
11 Jul 2017
5 min read
Consider this: in the past year cyber thieves have stolen $81m from the central bank of Bangladesh, derailed Verizon's $4.8 billion takeover of Yahoo, and even allegedly interfered in the U.S. presidential election. Away from the headlines, a black market in computerized extortion, hacking-for-hire and stolen digital goods is booming. The problem is about to get worse, especially as computers become increasingly entwined with physical objects and vulnerable human bodies, thanks to the Internet of Things and the innovations of embedded systems.

A recent survey has once again highlighted the urgent need for UK business to take cyber security more seriously. The survey found that 65% of companies don't have any security solutions deployed on their mobile devices, and 68% of companies do not have an awareness program aimed at employees of all levels to ensure they are cyber aware. In addition to this, the survey found that 76% of companies still don't have controls in place to detect and prevent zero-day/unknown malware entering their organizations, and 74% don't have an incident management process established to respond to cyber incidents and prevent recurrences.

What are the most common types of data breaches?

The most common attack is still a structured query language (SQL) injection. SQL injections feature heavily in breaches of entire systems because when there is a SQL injection vulnerability, it provides the attacker with access to the entire database. (A sketch at the end of this piece contrasts a vulnerable query with a parameterized one.)

Why is it common for large companies to have these types of errors?

There are a number of factors. One is that companies are always very cost conscious, so they're always trying to do things on a budget in terms of development cost. What that often means is that they're getting under-skilled people. It doesn't really cost anything more to build code that's resilient to SQL injection, but the developers building it have got to know how it works. For example, if you're offshoring at the cheapest possible rates in another country, you're probably going to get inexperienced people with very minimal security prowess. Companies generally don't tend to take it seriously until after they've had a bad incident. You can't miss it - different security incidents are all over the news every single day - but until it actually happens to an organization, the penny just doesn't seem to drop.

Leaving the windows open

This is not a counsel of despair. The risk from fraud, car accidents, and the weather can never be eliminated completely either. But societies have developed ways of managing such risk - from government regulation to the use of legal liability and insurance to create incentives for safer behavior.

Start with regulation. Government's first priority is to refrain from making the situation worse. Terrorist attacks, like the ones in St Petersburg and London, often spark calls for encryption to be weakened so that the security services can better monitor what individuals are up to. But it is impossible to weaken encryption for terrorists alone. The same protection that guards messaging programs like WhatsApp also guards bank transactions and online identities. Computer security is best served by encryption that is strong for everyone.

The next priority is setting basic product regulations. A lack of expertise will always hamper the ability of computer users to protect themselves. So governments should promote "public health" for computing. They could insist that Internet-connected gizmos be updated with fixes when flaws are found.
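As referenced above, the SQL-injection risk is easiest to see in code. Here is a minimal Node.js sketch assuming the widely used mysql2 driver and a hypothetical users table; the table, column, and query are illustrative only:

```javascript
const mysql = require('mysql2/promise');

// Vulnerable: the username is concatenated straight into the SQL string,
// so input such as "alice' OR '1'='1" changes the meaning of the query.
async function findUserUnsafe(connection, username) {
  const [rows] = await connection.query(
    "SELECT * FROM users WHERE username = '" + username + "'"
  );
  return rows;
}

// Safer: a parameterized query keeps the input as data, never as executable SQL.
async function findUserSafe(connection, username) {
  const [rows] = await connection.execute(
    'SELECT * FROM users WHERE username = ?',
    [username]
  );
  return rows;
}
```

The safe version costs nothing extra to write, which is exactly the point made above: resilience to SQL injection is a matter of knowing the pattern, not of budget.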
Returning to regulation, governments could also force users to change default usernames and passwords. Reporting laws, already in force in some American states, can oblige companies to disclose when they or their products are hacked. That encourages them to fix a problem instead of burying it.

What are the best ways for businesses to prevent cyber attacks?

There are a number of different ways of looking at it. Arguably, the most fundamental thing that makes a big difference for security is the training of technology professionals. If you're a business owner, ensuring that the people working for you who are building these systems are adequately trained and equipped is essential. Data breaches are often related to coding errors. A perfect example is an Indian pathology lab, which had 43,000 pathology reports on individuals leaked publicly. The individual who built the lab's security system was entirely unequipped. Though it may not be the only solution, a good start in improving cyber security is ensuring that there is investment in the development of the people creating the code. Let us know where you'd start!

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


The oldest programming languages in use today

Antonio Cucciniello
11 Jul 2017
5 min read
Today, we are going to be discussing some of the oldest, most established programming languages that are still in use today. Some developers may be surprised to learn that many of these languages surpass them in age, in a world where technology, especially in the world of development, is advancing at such a rapid rate. But then, old is gold, after all. So, in age order, let's present the oldest programming languages in use today.

C

The C language was created in 1972 (it's not that old, okay). C is a lower level language that was based on an earlier language called B (do you see a trend here?). It is a general-purpose language, and a parent language from which many future programming languages derive, such as C#, Java, JavaScript, Perl, PHP and Python. It is used in many applications that must interface with hardware or play with memory.

C++

Pronounced see-plus-plus, C++ was developed 11 years later in 1983. It is very similar to C; in fact, it is often considered an extension of C. It added various concepts such as classes, virtual functions, and templates. It is more of an intermediate level language that can be used at a lower or higher level, depending on the application. It is also known for being used in low latency applications.

Objective-C

Around the same time as C++ was being released to the public, Objective-C was created. If you took an educated guess from the name and said that it would be another extension of C, then you'd be right. This version was meant to be an object-oriented version of C (there's a lot in a name, clearly). It is used, probably most famously, by Apple. If you are a Mac or iOS user, then your iPhone or Mac applications were most likely developed with Objective-C (until Apple recently moved over to Swift).

Python

We are going to take a quick jump ahead in time to the '90s for this one. In 1991, the Python programming language was released, though it had been in development since the late '80s. It is a dynamically typed, object-oriented language that is often used for scripting and web applications. It is usually used with some of its frameworks, like Django or Flask, on the backend. It is one of the most popular programming languages in use today.

Ruby

In 1993, Ruby was released. Today, you have probably heard of Ruby on Rails, which is primarily used to create the backend of web applications using Ruby. Unlike the many languages derived from C, this language was influenced by older languages such as Perl and Lisp. It was designed for productive and fun programming, which was done by making the language closer to human needs rather than machine needs.

Java

Two years later, in 1995, Java was developed. This is a high level language that is derived from C. It is famously known for its use in web applications and as the language used to develop Android applications and the Android OS. It used to be the most popular language a few years ago, but its popularity and usage have definitely decreased.

PHP

In the same year that Java was developed, PHP was born. It is an open source programming language developed for the purpose of creating dynamic websites. It is also used for server side web development. Its usage is definitely declining, but it is still in use today.

JavaScript

That same year (yup, '95 was a good year for programming, not so much for fans of Full House), JavaScript was brought to the world. Its purpose was to be a high level language that helped with the functionality of a web page.
Today, it is sometimes used as a scripting language, as well as being used on the backend of applications with the release of Node.js. It is one of the most popular and widely used programming languages today.

Conclusion

That was our brief history lesson on some of the programming languages still in use today. Even though some of them are 20, 30, even over 40 years old, they are being used by thousands of developers daily. They all have a variety of uses, from lower level to higher level, from web applications to mobile applications. Do you feel there is a need for newer languages, or are you happy with what we have? If you have any favorites, let us know which one and why!

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello

How is Node.js Changing Web Development?

Antonio Cucciniello
05 Jul 2017
5 min read
If you have remotely been paying attention to what is going on in the web development space, you know that Node.js has become extremely popular and is many developers' choice of backend technology. It all started in 2009 with Ryan Dahl. Node.js is a JavaScript runtime built on Google Chrome's V8 JavaScript engine. Over the past couple of years, more and more engineers have moved towards Node.js in many of their web applications. With plenty of people using it now, how has Node.js changed web development?

Scalability

Scalability is the one thing that makes Node.js so popular. Node.js runs everything in a single thread. This single thread is event driven (due to JavaScript being the language that it is written with). It is also non-blocking. Now, when you spin up a server in your Node web app, every time a new user connects to the server, that launches an event. That event gets handled concurrently with the other events that are occurring or other users that are connecting to the server. In web applications built with other technologies, the server would slow down after a large number of users connect. In contrast, with a Node application, the non-blocking, event-driven nature allows for highly scalable applications (a minimal server sketch at the end of this section shows the model in practice). This now allows companies that are attempting to scale to build their apps with Node, which will prevent many of the slowdowns they may have had. It also means they do not have to purchase as much server space as someone using a web app that was not developed with Node.

Ease of Use

As previously mentioned, Node.js is written with JavaScript. Now, JavaScript was always used to add functionality to the frontend of applications. But with the addition of Node.js, you can now write the entire application in JavaScript. This makes it so much easier to be a frontend developer who can edit some backend code, or a backend engineer who can play around with some frontend code. This in turn makes it so much easier to become a full stack engineer: you do not really need to know anything new except the basic concepts of how things work in the backend. As a result, we have recently seen the rise of the full stack JavaScript developer. This also reduces the complexity of working with multiple languages; it minimizes any confusion that might arise when you have to switch from JavaScript on the frontend to whatever language would have been used on the backend.

Open Source Community

When Node was released, NPM, the Node package manager, was also given to the public. The Node package manager does exactly what it says on the tin: it allows developers to quickly add and use third party libraries and frameworks in their code. If you have used Node, then you can vouch for me here when I say there is almost always a package that you can use in your application to make it easier to develop or to automate a larger task. There are packages to help create HTTP servers, help with image processing, and help with unit testing. If you need it, it's probably been made. The even more awesome part about this community is that it's growing by the day, and people are extremely active, contributing the many open source packages out there to help developers with various needs. This increases the productivity of all developers using Node in their applications, because they can shift their focus from something that is not that important to the main purpose of their application.
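As a concrete illustration of the event-driven model described under Scalability, here is a minimal HTTP server using only Node's built-in http module; the port and response are arbitrary placeholders:

```javascript
const http = require('http');

// One process, one thread: each incoming request is handled as an event
// on the event loop rather than on a dedicated thread per connection.
const server = http.createServer((req, res) => {
  // Simulate non-blocking work (e.g. a database call) with a timer.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ path: req.url, handled: 'asynchronously' }));
  }, 100);
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```

Because the timer (standing in for I/O) does not block the thread, many connections can be in flight at once; the callback for each one runs when its own work completes.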
Aid in Frontend Development

The release of Node did not only benefit the backend side of development; it also benefited the frontend. New frameworks and libraries that can be used on the frontend, such as React.js or virtual-dom, are all installed using NPM. With packages like Browserify you can also use Node's require to use packages on the frontend that normally would be used on the backend! You can be even more productive and develop things faster on the frontend as well!

Conclusion

Node.js is definitely changing web development for the better. It is making engineers more productive with the use of one language across the entire stack. So, my question to you is, if you have not tried out Node in your application, what are you waiting for? Do you not like being more productive? If you enjoyed this post, tweet about your opinion of how Node.js changed web development. If you dislike Node.js, I would love to hear your opinion as well!

About the author

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.


What separates a good developer from a great developer?

Antonio Cucciniello
05 Jul 2017
4 min read
For developers looking to go to the next level and become a great developer, as opposed to someone who is considered just a good developer, this post is for you. In my opinion, there is a fine line between the two, but failing to cross it can have long term consequences. This may also be a different route than most of you expected, because I will be talking about various "soft" skills that make a programmer great in the long term.

Ability to learn

A software engineer's capacity to learn new topics effectively and quickly is crucial. In a world of changing frameworks and standards that evolve by the day, this skill has never been needed more. You do not want someone on your team who only knows Java and does not want to learn anything else. Just because your company is using Java today does not mean that it will be using Java a few years from now; even two months from now it could be completely different. You ideally want someone who has displayed their ability to go deep with at least one language or technology and can transfer that depth of knowledge over to other areas at an extremely fast rate.

Positive thinker

In life, your attitude toward a situation can greatly affect where you end up. You do not simply want deep technical experience. The way you think about the things presented to you truly affects your ability to produce daily. A positive thinker is someone who always sees the benefits that can be taken from a situation. That person is also more likely to take beneficial risks and will produce more in the end. The mindset and attitude of this individual will influence other team members as well and make the entire team more productive as a result.

Great interpersonal & intrapersonal skills

This skill can be described as knowing how to work with others in a positive and likeable manner. People with it know how to handle themselves and their emotions, and how to communicate with others. You aren't alone; you must deal with others. If you cannot deal with others in a positive manner, then your ability to create something revolutionary will be stunted. Having this skill allows you to avoid conflicts with others, helps get ideas across effectively, and provides a better work environment that will make for a more productive and effective team.

Ability to teach others

Not everyone is a good teacher. As you probably already know, there are plenty of teachers who are not particularly good at teaching. It is a difficult skill to put yourself in someone else's shoes and move them to the understanding of a specific topic. If you have a good teacher on your engineering team, you can greatly increase the level of your team's output. You do not need to have a PhD in linguistics or have written three novels to get ideas across. You just need to be able to make things clear for the other person. Inaccessible knowledge is knowledge wasted, so ensure that you explain in a way that does not leave people guessing.

Make long term vs short term tradeoffs

Sometimes, developers can solve the problems that you give them, but without the perspective necessary for the larger and longer-term picture. They may be able to develop something very fast that solves an immediate problem, but create it in a way that will not allow the functionality to scale. Other times there may be a deadline to ship some code, but the engineer may take their time developing software that handles every case.
In this scenario, you want them to make a short term tradeoff to hit the deadline and refactor later. You want someone who can make these time versus quality tradeoffs, depending on the intricacies of the scenario. Conclusion Overall, there are some qualities that make a good developer transcend into a great developer: the capacity to learn, the ability to teach effectively, having a positive attitude at all times, having awesome communication and interpersonal skills, and the capacity to make decisions that can affect short term and long term results. If you enjoyed this post I would love to hear your opinion. About the author  Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.Js) from New Jersey.   His most recent project called Edit Docs is an Amazon Echo skill that allows users to edit Google Drive files using your voice.  He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello 

Will Oracle become a key cloud player, and what will it mean to development & architecture community?

Phil Wilkins
13 Jun 2017
10 min read
This sort of question can provoke some emotive reactions; despite the stereotype, many technologists can get pretty passionate about their views. So let me put my cards on the table. My first book as an author is about Oracle middleware (Implementing Oracle Integration Cloud). I am an Oracle Ace Associate (soon to be a full Ace), which is comparable to a Java Rockstar, Microsoft MVP or SAP Mentor. I work for Capgemini as a Senior Consultant; as a large SI we work with many vendors, so I need to have a feel for all the options, even though I specialise in Oracle now. Before I got involved with Oracle I worked primarily with open source technologies, particularly JBoss and Fuse (before and after both were absorbed into RedHat), and I have technically reviewed a number of open source books for Packt. So I should be able to provide a balanced argument. So onto the …

A lot has been said about Oracle's founder Larry Ellison and his position on cloud technologies, most notably for rubbishing it in 2008. This is ironic, since those of us who remember the late 90s will recall Oracle heavily committing to a concept called the Network Computer, which could have led to a more cloud-like ecosystem had the conditions been right.

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. ... The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"[1]

Since then we've seen a slow change. The first cloud offerings we saw came in the form of Mobile Cloud Service, which provided a Mobile Backend as a Service (MBaaS). At the same time, Oracle's extensive programme to rationalize its portfolio and bring the best ideas and designs from PeopleSoft, E-Business Suite and Siebel together into a single cohesive product portfolio started to show progress – Fusion applications. Fusion applications, built on the WebLogic core and exploiting other investments, provided the company with a product that had the potential to become cloud enabled. If that initiative hadn't been started when it did, Oracle's position might look very different. But from a solid, standardised, container-based product portfolio the transition to cloud became a great deal easier, facilitated by the arrival of Oracle Database 12c, which provided the means to easily make the data storage at least multi-tenant. This combination gave Oracle the ability to sell ERP modules as SaaS and meant that Oracle could start to think about competing with the SaaS darlings of Salesforce, NetSuite and Workday.

However, ERPs don't live in isolation. Any organisation has to deal with its oddities, special needs and departmental solutions, as well as those systems that are unique and differentiate companies from their competition. This has driven the need to provide PaaS and IaaS as well. Not only that, Oracle themselves admitted that to make SaaS as cost effective as possible they needed to revise the infrastructure and software platform to maximise application density – a lesson that Amazon with AWS understood from the outset and has done well in realizing. Oracle has also had the benefit of being a late starter: it has looked at what has and hasn't worked, and used its deep pockets to ensure it got the best skills to build the ideal answers, bypassing many of the mistakes and issues the pioneers had to go through.

This brought us to the state of a couple of years ago, where Oracle's core products had a cloud existence and Oracle were making headway winning new mid-market customers – after all, Oracle ERP is seen as something of a Rolls-Royce of ERPs, globally capable and well tested, and now cost accessible to more of the mid-market. So as an ERP vendor Oracle will continue to be a player, and if there is a challenger, Oracle's pockets are deep enough to buy the competition, which is what happened with NetSuite. This may be very interesting to enterprise architects who need to take off-the-shelf building blocks and provide a solid corporate foundation, but for those of us who prefer to build and do something different, it is not so exciting.

In the last few years we have seen a lot of talk about digital disruptors and the need for serious agility (as in the ability to change and react, rather than the development ethos). To have this capability you need to be able to build and radically change solutions quickly, and yet still work with those core backbone accounting tasks. To use a Gartner expression, we need to be bimodal[2] – able to innovate quickly even while application packages change comparatively slowly (they need to be slow and steady if you want to show that your accounting isn't going to look like Enron[3] or Lehman Brothers[4]). With this growing need to drive innovation and change ever faster, we have seen some significant changes in the way things tend to be done. You could almost say that in the process of trying to disrupt existing businesses through IT, we have achieved the disruption of software development itself. With the facilitation of the cloud, particularly IaaS, it has become cheap to start up and try new solutions, and either grow them if they succeed or mothball them with minimal capital loss or delay if they don't. As a result we have seen:

- The pace of service adoption accelerate exponentially, meaning the rate of scale-up and dynamic demand, particularly for end-user-facing services, has needed new techniques for scaling.
- Standards moving away from being formulated by committees of companies wanting to influence or dominate a market segment, which produced some great ideas (UDDI as a concept was fabulous) but often very unwieldy results (ebXML, SOAP, UDDI), towards simpler standards that have largely evolved through simplicity and quickly recognised value (JSON, REST) to become de facto standards (illustrated briefly below).
- New development paradigms that enable large solutions to be delivered whilst still keeping delivery on short cycles and supporting organic change (Agile, microservices).
- Continuous Integration and DevOps breaking down organisational structures and driving accountability – you build it, you make it run.
- The open source business model becoming the predominant route into the industry for a new software technology, without needing deep pockets for marketing, along with acceptance that open source software can be as well supported as a closed source product.

For a long time, despite Oracle being the 'guardian' of Java and, a little more recently, MySQL, they haven't really managed to establish themselves as a 'cool' vendor. If you wanted a cool vendor you'd historically probably look at RedHat, one of the first businesses to really get open source and community thinking. The perception, at least, has been that Oracle acquired these technologies either as a byproduct of a bigger game or as a way of creating an 'on ramp' to their bigger, more profitable products.
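To make the JSON/REST point above a little more concrete, here is a minimal sketch in Node.js. The endpoint and payload fields are invented purely for illustration; the point is that consuming a modern REST service amounts to an HTTP call and a self-describing JSON document, where the older committee-driven stacks required a SOAP envelope, a WSDL contract and, in theory, a UDDI registry lookup just to get started.

```javascript
// Minimal sketch: calling a hypothetical REST endpoint that returns JSON.
// The URL and fields are assumptions for illustration only.
const https = require('https');

https.get('https://api.example.com/orders/42', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // No envelope, no schema registry - just parse the JSON payload.
    const order = JSON.parse(body);
    console.log(order.status);
  });
}).on('error', (err) => console.error(err));
```

Nothing here is Oracle-specific; it is simply the style of integration that the newer, de facto standards have made the default.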
Oracle have started to recognise that to be seriously successful in the cloud, like AWS, you need to be pretty pervasive, and not only connect with the top of the decision tree but also with those at the code face. To do that you need a bit of the 'cool' factor, and that means doing things beyond just the database and your core middleware – areas that are more and more frequently subject to potential disruption, such as Hadoop and big data, NoSQL, and things like Kafka in the middleware space. This also fits with the narrative that to do well with SaaS you need at least a very good IaaS, and with the way Oracle has approached SaaS you definitely need good PaaS – so they might as well make these commercial offerings too. This has resulted in Oracle moving from half a dozen cloud offerings to something in the order of nearly forty offerings classified as PaaS, plus a range of IaaS offerings that will appeal to developers and architects, from direct support for Docker through to Container Cloud (which provides a simplified Docker model), and on to Kafka, Node.js, MySQL, NoSQL and others. The web tier is pretty interesting with JET, an enterprise-hardened, certified take on Angular, React and Express with extra tooling, which has been made available as open source. So the technology options are becoming a lot more interesting.

Oracle are also starting to target new startups, looking to get new organisations onto the Oracle platform from day one, in the same way it is easy for a startup to leverage AWS. Oracle have made some commitment to the Java developer community through JavaOne, which runs alongside its big brother conference, OpenWorld. They are now seriously trying to reach out to the hardcore development community (not just Java, as the new Oracle cloud offerings are definitely polyglot) through Oracle Code. I was fortunate enough to present at the London occurrence of the event (see my blog here). What Oracle has not yet quite reached is the point of being clearly easy to start working with compared to AWS and Azure. Yes, Oracle provide new sign-ups with 300 dollars of credit, but when you have a reputation (deserved or otherwise) for being expensive, that isn't necessarily going to get people onboard in droves – compared, say, to AWS's free micro-instance for a year.

Conclusion

In all of this, I am of the view that Oracle are making headway; they are recognising what needs to be done to be a player. I have said in the past, and I believe it is still true, that Oracle is like an oil tanker or aircraft carrier: it takes time to decide to turn, and turning isn't quick, but once a course is set a real head of steam and momentum will be built, and I wouldn't want to be in the company's path. So let's look at some hard facts. Oracle's revenues remain pretty steady, and, perhaps surprisingly, Oracle showed up in the last week on LinkedIn's top employers list[5]. Oracle isn't going to just disappear; its database business alone will keep it alive for a very long time to come. Its SaaS business appears to be on a good trajectory, although more work on API enablement needs to take place. As an IaaS and PaaS technology provider, Oracle appear to be getting a handle on things.

Oracle is going to be attractive to end-user executives, as it is one of the very few vendors that covers all tiers of cloud from IaaS through PaaS to SaaS, providing the benefits of traditional hosting when needed as well as fully managed solutions and the benefits they offer. Oracle does still need to overcome some perception challenges; in many respects Oracle are seen the way Microsoft were in the 90s and 2000s – something of a necessary evil, and potentially expensive.

[1] http://www.businessinsider.com/best-larry-ellison-quotes-2013-4?op=1&IR=T/#oud-computing-maybe-im-an-idiot-but-i-have-no-idea-what-anyone-is-talking-about-1
[2] http://www.gartner.com/it-glossary/bimodal/
[3] http://www.investopedia.com/updates/enron-scandal-summary/
[4] https://en.wikipedia.org/wiki/Bankruptcy_of_Lehman_Brothers
[5] https://www.linkedin.com/pulse/linkedin-top-companies-2017-where-us-wants-work-now-daniel-roth

How to succeed in the gaming industry: 10 tips

Raka Mahesa
12 Jun 2017
5 min read
The gaming industry is a crowded trade. After all, it's one of those industries where you can work on something you actually love, so a lot of people are trying to get into it. And with that many rivals, being successful in the industry is a difficult thing to accomplish. Here are 10 tips to help you succeed in the gaming industry. Do note that these are general tips, so they should be applicable to you regardless of your position in the industry, whether you're an indie developer working on your own games or a programmer working for a big gaming company.

Tip 1: Be creative

The gaming industry is a creative one, so it makes perfect sense that you need to be creative to succeed. And you don't have to be an artist or a writer to apply creative thinking; there are many challenges that need creative solutions. For example, a particular system in the game may need some heavy computing, and a creative solution might be that instead of fully computing the problem, you use a simpler formula and estimate the result.

Tip 2: Be capable of receiving criticism

Video games are a passion for many people, probably including you. That's why it's easy to fall in love with your own idea, whether it's a gameplay idea like how an enemy should behave, or a technical one like how a save file should be written. Your idea might not be perfect though, so it's important to be able to step back and see whether another person's criticism of your idea has merit. After all, that other person may be able to see a flaw that you have missed.

Tip 3: Be able to see the big picture

A video game's software is full of complex, interlocking systems. Being able to see the big picture – that is, seeing how changes in one system could affect another system – is a really valuable skill when developing a video game.

Tip 4: Keep up with technology

Technology moves at a blistering pace. Technology that is relevant today may be rendered useless tomorrow, so it is very important to keep up. Using the latest equipment may help your game project, and the newest technology may provide opportunities for your games too. For example, newer platforms like VR and AR don't have many games yet, so it's easier to gain visibility there.

Tip 5: Keep up with industry trends

It's not just technology that moves fast, but also the world. Just 10 years ago, it was unthinkable that millions of people would watch other people play games, or that mobile gaming would be bigger than console gaming. By keeping up with industry trends, we can understand the market for our games and, more importantly, understand our players' behavior.

Tip 6: Put yourself in your player's shoes

Being able to see your game from the viewpoint of your player is a really useful skill to have. For example, as a developer you may feel fine looking at a black screen while your game is loading its resources, because you know the game is working as long as it doesn't throw an error dialog. Your player probably doesn't feel the same way, and will assume the game has hung when all it shows is a black screen without a loading indicator.
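To make Tip 6 a little more concrete, here is a small, hedged sketch in JavaScript of the loading-screen idea. The asset list and the progress callback are invented for illustration – a real project would use its engine's own asset loader and UI – but the principle is simply to report progress instead of leaving the player staring at a black screen.

```javascript
// Illustrative sketch only: load assets one by one and report progress,
// rather than loading silently behind a black screen.
const assets = ['sprites.png', 'level1.json', 'music.ogg']; // hypothetical files

async function loadWithProgress(onProgress) {
  let loaded = 0;
  for (const asset of assets) {
    await fetch(asset);                  // stand-in for the engine's real loader
    loaded += 1;
    onProgress(loaded / assets.length);  // e.g. update a progress bar or spinner
  }
}

loadWithProgress((ratio) => {
  console.log(`Loading... ${Math.round(ratio * 100)}%`);
}).then(() => console.log('All assets loaded - start the game.'));
```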
Tip 7: Understand your platform and your audience

This is a bit similar to the previous tip, but on a more general level. Each platform has different strengths, and the audience of each platform has different expectations. For example, games for mobile platforms are expected to be played in short bursts rather than hour-long sessions, so mobile gamers expect their games to automatically save progress whenever they stop playing. Understanding this behavior is really important for developing games that satisfy players.

Tip 8: Be a team player

Unless you're a one-man army, games are usually not developed alone. Since game development is a team effort, it's pretty important to get along with your teammates, whether that means dividing tasks fairly with your programmer buddy or explaining to the artist the format of the art assets that your game needs.

Tip 9: Show your creation to other people

When you are deep in the process of working on your latest creation, it's sometimes hard to take a step back and assess it fairly. Occasionally you may even feel like your creations aren't up to scratch. Fortunately, showing your work to other people is a relatively easy way to get good and honest feedback. And if you're lucky, your new audience may just show you that your creation is actually up to standard.

Tip 10: Networking

This is probably the most generic tip ever, but that doesn't mean it's not true. In any industry, and no matter what your position is, networking is really important. If you're an indie developer, you may connect with a development partner who shares the same vision as you. Alternatively, if you're a programmer, maybe you will connect with someone looking to fill a senior position leading a new game project. Networking will open the door of opportunities for you.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Why containers are driving DevOps

Diego Rodriguez
12 Jun 2017
5 min read
It has been a long ride since the days when a single application would take up a full room of computing hardware. Research and innovation in information technology (IT) have taken us far, and will surely keep moving even faster every day. Let's talk a bit about the present state of DevOps, and how containers are driving the scene.

What are containers?

According to Docker (the most popular container platform), a container is a stand-alone, lightweight package that has everything needed to execute a piece of software. It packs your code, runtime environment, system tools, libraries, binaries, and settings. It's available for Linux and Windows apps, and it runs the same every time regardless of where you run it. It adds a layer of isolation, helping reduce conflicts between teams running different software on the same infrastructure.

Containers sit one level deeper in the virtualization stack, allowing lighter environments, more isolation, more security, more standardization, and many other blessings. There are tons of benefits you could take advantage of. Instead of having to virtualize the whole operating system (as virtual machines [VMs] do), containers share most of the host system's core and just add the required binaries and libraries that aren't already on the host; no more gigabytes of disk space lost to bloated operating systems full of repeated components. This means a lot of things: your deployments can be packed into a much smaller image than running them in a full operating system; each deployment boots up much faster; idle resource usage is lower; there is less configuration and more standardization (remember "convention over configuration"); and fewer things to manage plus more isolated apps mean fewer ways to screw something up, therefore a smaller attack surface and more security. But keep in mind that not everything is perfect, and there are many factors you need to take into account before getting into the containerization realm.

Considerations

It has been less than 10 years since containerization took off, and in the technology world that is a lot, considering how fast other technologies such as web front-end frameworks and artificial intelligence (AI) are moving. In just a few years, this widely deployed technology has become mature and production-ready; coupled with microservices, the boost has taken it to new parts of the DevOps world, and it is now the de facto solution for many companies' application and service deployment flows. Just before all this exciting movement started, VMs were the go-to answer for many of the problems encountered by IT people, including myself. And although VMs are a great way to solve many of these problems, there was still room for improvement.

Nowadays the horizon looks really promising, with top technology companies backing tools, frameworks, services and products all around containers, benefiting most of the code we develop, test, debug, and deploy on a daily basis. These days, thanks to the work of many, it's possible to have a consistent, all-around lightweight way to run, test, debug, and deploy code from whichever platform you work on. So if you code on Linux using Vim, but your coworker uses VS Code on Windows, both of you can have the same local container with the same binaries and libraries where the code is run.
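As a rough illustration of the kind of template discussed below, here is a minimal Dockerfile for a simple Node.js service. The base image tag and file names are assumptions for the sketch, not a prescription, but it shows how little is needed to describe a complete, reproducible runtime environment.

```dockerfile
# Illustrative sketch: a minimal template for a simple Node.js service.
# The image tag and file names are assumed for the example.
FROM node:8-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package.json .
RUN npm install
# Add the application code and declare how the service runs
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building and running it is equally terse – something like `docker build -t my-service .` followed by `docker run -p 3000:3000 my-service` – and the result behaves the same on a Linux laptop, a Windows workstation, or a cloud VM.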
This removes a lot of incompatibility issues and lets teams enjoy production-like environments on their own machines, without having to worry about sharing the same configuration files, misconfiguration, or versioning hassles. It gets even better: not only is there no need to maintain the same configuration files across different services, there is less configuration to handle as a whole. Templates do most of the work for us, allowing you and your team to focus on creating and deploying your products, improving and iterating your services, and changing and enhancing your code. In fewer than 10 lines you can specify a working template containing everything needed to run a simple Node.js service (much like the sketch above), or maybe a Ruby on Rails application, or even a Scala cron job. Containerization supports most, if not all, languages and stacks.

Containers and virtualization

Virtualization has accelerated the speed at which we build things for many years, and it will continue to provide us with better solutions as time goes by. Just as we went from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and finally Software as a Service (SaaS) and others (Anything as a Service? AaaS?), I am certain that we will find more abstraction beyond containers, making our lives easier every day. Like most of today's tools, many virtualization and containerization tools are open source, with huge communities and support boards around them – but keep trusting good ol' Stack Overflow. So remember to give something back to the amazing open source community: open issues, report bugs, share what works best, and help fix and improve the parts that are lacking. But really, just try to learn these new and promising technologies, because they give us IT people a huge bump in efficiency in pretty much all aspects.

About the author

Diego Rodriguez Baquero is a full stack developer specializing in DevOps and SysOps. He is also a WebTorrent core team member. He can be found at https://diegorbaquero.com/.