
Tech Guides

What is Keras?

Janu Verma
01 Jun 2017
5 min read
Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn-style API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical techniques.

The key idea behind Keras is to facilitate fast prototyping and experimentation. In the words of Francois Chollet, the creator of Keras: "Being able to go from idea to result with the least possible delay is key to doing good research." This is a huge advantage, especially for scientists and beginner developers, because one can jump right into deep learning without getting bogged down in the low-level details. With deep learning on the rise, the demand for people trained in it is ever increasing. Organizations are trying to incorporate deep learning into their workflows, and Keras offers an easy API to test and build deep learning applications without considerable effort. Deep learning research is such a hot topic right now that scientists need a tool to quickly try out their ideas, and they would rather spend their time coming up with ideas than putting together a neural network model. I use Keras in my own research, and I know a lot of other researchers who rely on Keras for its easy and flexible API.

What are the key features of Keras?

Keras is a high-level interface to Theano or TensorFlow, and either can be used as the backend; it is extremely easy to switch from one backend to another. Training deep neural networks is a memory- and time-intensive task. Modern deep learning frameworks like TensorFlow, Caffe, and Torch can also run on GPUs, though there might be some overhead in setting up and running the GPU. Keras runs seamlessly on both CPU and GPU.

Keras supports most of the common neural layer types, e.g. fully connected, convolution, pooling, recurrent, embedding, dropout, etc., which can be combined in almost any configuration to build complex models.

Keras is modular in the sense that each component of a neural network model is a separate, standalone, fully configurable module, and these modules can be combined to create new models. Essentially, layers, activations, optimizers, dropout, losses, etc. are all different modules that can be assembled to build models. A key advantage of modularity is that new features are easy to add as separate modules. This makes Keras fully expressive, extremely flexible, and well suited for innovative research.

Coding in Keras is extremely easy. The API is very user-friendly, with minimal cognitive load. Keras is a full Python framework, and all coding is done in Python, which makes it easy to debug and explore. The coding style is very minimalistic, and operations are added in very intuitive Python statements.

How is Keras built?

The core component of the Keras architecture is a model. Essentially, a model is a neural network with layers, activations, an optimizer, and a loss. The simplest Keras model is Sequential, which is just a linear stack of layers; other layer arrangements can be formed using the functional API. We first initialize a model, add layers to it one by one, each layer followed by its activation function (and regularization, if desired), then add the cost function. The model is then compiled. A compiled model can be trained using the simple API (model.fit()), and once trained, the model can be used to make predictions (model.predict()). Note the similarity to the scikit-learn API. Two models can be combined in sequence or in parallel. A model trained on some data can be saved as an HDF5 file, which can be loaded at a later time; this eliminates the need to train a model again and again. Train once and make predictions whenever desired.
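Here is a minimal sketch of that end-to-end workflow in classic Keras, assuming a binary classifier over 20 input features; the layer sizes, hyperparameters, and random placeholder data are illustrative assumptions rather than anything prescribed above.

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout

# Initialize a model, add layers one by one, then compile.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))  # fully connected layer
model.add(Dropout(0.5))                                # regularization
model.add(Dense(1, activation='sigmoid'))              # binary output
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# Random placeholder data standing in for a real dataset.
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))

model.fit(x_train, y_train, epochs=10, batch_size=32)  # train
preds = model.predict(np.random.random((5, 20)))       # predict

model.save('my_model.h5')             # persist to HDF5...
restored = load_model('my_model.h5')  # ...and reload without retraining
```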
Keras provides an API for most common types of layers. You can also merge or concatenate layers for a parallel model, and it is also possible to write your own layers. Other ingredients of a neural network model, like the loss function, metrics, optimization methods, activation functions, and regularization, are all available with the most common choices. Another very useful component of Keras is the preprocessing module, with support for manipulating and processing image, text, and sequence data.

A number of deep learning models and their weights, obtained by training on a big dataset, are made available. For example, there are the VGG16, VGG19, InceptionV3, Xception, and ResNet50 image recognition models with their weights after training on ImageNet data. These models can be used for direct prediction, feature building, and/or transfer learning.

One of the greatest advantages of Keras is the huge amount of example code available on the Keras GitHub repository (with discussions on the accompanying blog) and on the wider Internet. You can learn how to use Keras for text classification with an LSTM model, generate inceptionistic art using Deep Dream, use pre-trained word embeddings, build a variational autoencoder, train a Siamese network, and more. There is also a visualization module, which provides functionality to draw a Keras model; this uses the graphviz library to plot and save the model graph to a file. All in all, Keras is a library worth exploring, if you haven't already.

Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE Vis, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, and others. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting. Check out his personal website here.

Tackle trolls with Machine Learning bots: Filtering out inappropriate content just got easy

Amarabha Banerjee
15 Aug 2018
4 min read
The most feared online entities these days are trolls. Trolls, a fearsome bunch of fake or pseudonymous online profiles, tend to attack online users, mostly celebrities, sportspeople, or political figures, using a wide range of methods. One of these methods is to post obscene or NSFW (Not Safe For Work) content on your profile or website wherever user-generated content (UGC) is allowed. This can create unnecessary attention and cause legal trouble for you too. The traditional way out is to get a moderator (or a team of them) and let all the UGC pass through this moderation system. This is a sustainable solution for a small platform. But if you are running a large-scale app, say a publishing app where you publish one hundred stories a day, and the success of these stories depends on user interaction with them, then this model of manual moderation becomes unsustainable. The more UGC there is, the longer the turnaround time and the larger the moderation team. This results in escalating costs, for a purpose that's not contributing to your business growth in any manner.

That's where machine learning could help. Machine learning algorithms that can scan images and content for possible abusive or adult material are a better solution than manual moderation. Tech giants like Microsoft, Google, and Amazon have ready solutions for this. These companies have created APIs which are commercially available to developers. You can incorporate these APIs in your application to weed out the filth served by the trolls. The different APIs available for this purpose are Microsoft Content Moderator, Google Cloud Vision, AWS Rekognition, and Clarifai.

Dataturks made a comparative study of these APIs on one particular dataset to measure their efficiency. They used a YACVID dataset with 180 images, manually labelling 90 of the images as nude and the rest as non-nude. The dataset was then fed to the four APIs mentioned above, and their efficiency was tested based on the following parameters:

True positive (TP): given a safe photo, the API correctly says so.
False positive (FP): given an explicit photo, the API incorrectly classifies it as safe.
False negative (FN): given a safe photo, the API fails to detect it as safe.
True negative (TN): given an explicit photo, the API correctly says so.

TP and TN are the two cases in which the system behaved correctly. An FP means the app is vulnerable to attacks from trolls, while a high FN rate means the efficiency of the system is low and hence not practically viable. Around 10% of the cases would be ones where the API can't decide whether an image is explicit or not; those would be sent for manual moderation. This would bring down the maintenance cost of the moderation team.

The results they received are shown below (source: Dataturks). As is evident from the results table, the best standalone API is Google Vision, with 99% accuracy and a 94% recall value; the recall value means that 94% of the actual safe images were recognized as such. The best results, however, were achieved with the combination of Microsoft and Google. In the comparison of response times (source: Dataturks), the timings might have been affected by the fact that all the images accessed by the APIs were stored in Amazon S3; hence the AWS API might have had an unfair advantage on response time. The timings were noted for 180 image calls per API. The cost is lowest for AWS Rekognition: $1 for 1,000 calls to the API. It's $1.20 for Clarifai and $1.50 for both Microsoft and Google.
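To make the arithmetic behind those headline numbers concrete, here is a small sketch of how the four outcome counts translate into accuracy and recall; the counts used are hypothetical, not Dataturks' actual figures.

```python
def moderation_metrics(tp, fp, fn, tn):
    """Accuracy and recall from the four outcome counts defined above,
    where 'positive' means a safe (non-explicit) photo."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total  # share of all photos classified correctly
    recall = tp / (tp + fn)       # share of safe photos recognized as safe
    return accuracy, recall

# Hypothetical counts for a 180-image test set like the one in the study
accuracy, recall = moderation_metrics(tp=85, fp=2, fn=5, tn=88)
print(f"accuracy={accuracy:.2%}, recall={recall:.2%}")
```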
The one notable drawback of the Amazon API was that the images had to be stored as S3 objects, or converted into them; all the other APIs accepted web links as possible image sources.

What this study shows is that filtering negative and explicit content out of your app is much easier now. You might still need a small team of moderators, but their jobs will be made a lot easier by the ML models implemented in these APIs. Machine learning is paving the way for us to be safe from the increasing menace of trolls, a threat to the free speech and open sharing of ideas that were the founding stones of the internet and the world wide web as a whole. Will this discourage trolls from continuing their slandering, or will it create a counter-system to bypass the APIs and checks? Only time will tell.

Facebook launches a 6-part Machine Learning video series
Google's new facial recognition patent uses your social network to identify you!
Microsoft's Brad Smith calls for facial recognition technology to be regulated

Do you need artificial intelligence and machine learning expertise in house?

Guest Contributor
22 Jan 2019
7 min read
Developing artificial intelligence expertise is a challenge. There's a huge global demand for practitioners with the right skills and knowledge, and a lack of people who can actually deliver what's needed. It's difficult because many of the most talented engineers are being hired by the planet's leading tech companies on salaries that simply aren't realistic for many organizations. Ultimately, you have two options: form an in-house artificial intelligence development team, or choose an external software development team or consultant with proven artificial intelligence expertise. Let's take a closer look at each strategy.

Building an in-house AI development team

If you want to develop your own AI capabilities, you will need to bring in strong technical skills in machine learning. Since recruiting experts in this area isn't an easy task, upskilling your current in-house development team may be an option. However, you will need to be confident that your team has the knowledge and attitude to develop those skills. Of course, it's also important to remember that a team building artificial intelligence comprises a range of skills and areas of expertise. If you can see how your team could evolve in that way, you're halfway to solving your problem.

AI experts you need for building a project

Big data engineers: Before analyzing data, you need to collect, organize, and process it. AI is usually based on big data, so you need engineers who have experience working with structured and unstructured data and can build a secure data platform. They should have sound knowledge of Hadoop, Spark, R, Hive, Pig, and other big data technologies.

Data scientists: Data scientists are a vital part of your AI team. They work their magic with data: building models and investigating, analyzing, and interpreting it. They leverage data mining and other techniques to surface hidden insights and solve business problems.

NLP specialists: A lot of AI projects involve natural language processing, so you will probably need NLP specialists. NLP allows computers to understand and translate human language, serving as a bridge between human communication and machine interpretation.

Machine learning engineers: These specialists utilize machine learning libraries and deploy ML solutions into production. They take care of the maintainability and scalability of data science code.

Computer vision engineers: They specialize in image recognition, correlating an image to a particular metric rather than correlating metrics to metrics. For example, computer vision is used for modeling objects or environments (medical image analysis), identification tasks (a species identification system), and process control (industrial robots).

Speech recognition engineers: You will need these experts if you want to build a speech recognition system. Speech recognition can be very useful in telecommunication services, in-car systems, medical documentation, and education. For instance, it is used in language learning for practicing pronunciation.

Partnering with an AI solution provider

If you realize that recruiting and building your own in-house AI team is too difficult and expensive, you can engage with an external AI provider. Such an approach helps companies keep the focus on their core expertise and avoid the headache of recruiting engineers and setting up the team. It also allows them to kick off the project much faster and thus gain a competitive advantage.
Factors to consider when choosing an artificial intelligence solution provider

AI engineering experience

Due to the huge popularity of AI these days, many companies claim to be professional AI development providers without practical experience, so it's extremely important to do extensive research. Firstly, you should study the portfolio and case studies of the company. Find out which AI, machine learning, or data science projects your potential vendor has worked on and what kind of artificial intelligence solutions the company has delivered. For instance, you may check out these European AI development companies and the products they developed. Also, make sure a provider has experience with the types of machine learning algorithms (supervised, unsupervised, and reinforcement), data structures and algorithms, computer vision, NLP, etc., that are relevant to your project needs.

Expertise in AI technologies

Artificial intelligence covers a multitude of different technologies, frameworks, and tools. Make sure your external engineering team consists of professional data scientists and data engineers who can solve your business problems. Assembling the AI team and selecting the necessary skill set might be challenging for businesses that have no internal AI expertise. Therefore, ask the vendor to provide tech experts or delivery managers who will advise you on the team composition and help you hire the right people.

Capacity to scale the team

When choosing a team, you should consider not only your primary needs but also the potential growth of your business. If you expect your company to scale up, you'll need more engineering capacity. Therefore, take into account your partner's ability to ramp up the team in the future. Also consider factors such as the vendor's employer image and retention rate, since your ability to attract top AI talent and keep them on your project will largely depend on them.

Suitable cooperation model

It is essential to choose an AI company with a cooperation model that fits your business requirements. The most popular cooperation models are fixed price, time and materials, and dedicated development team. Within the fixed-price model, all the requirements and the scope of work are set from the start, and you as a customer need to have them described down to the smallest detail, as it will be extremely difficult to make change requests during the project. It is not the best option for AI projects, since they involve a lot of R&D and it is difficult to define everything at the initial stage. The time-and-materials model is best for small projects where you don't need the specialists to be fully dedicated to your project. This is not the best choice for AI development either, as the hourly rates of AI engineers are extremely high and the whole project would cost you a fortune under this type of contract. In order to add more flexibility yet keep control over the project budget, it is better to choose a dedicated development team model or staff augmentation. It will allow you to change the requirements when needed and keep control over your team. With this type of engagement, you will be able to keep the knowledge within your team and develop your AI expertise, as the developers will work exclusively for you.

Conclusion

If you have to deal with the challenge of building AI expertise in your company, there are two possible ways to go. First, you can attract local AI talent and build the expertise in-house.
You then have to assemble a team of data scientists, data engineers, and other specialists depending on your needs. However, developing AI expertise in-house is always time-consuming and costly, given the shortage of well-qualified machine learning specialists and their high salary expectations. The other option is to partner with an AI development vendor and hire an extended team of engineers. In this case, you have to consider a number of factors, such as the company's experience in delivering AI solutions, its ability to allocate the necessary resources, its technological expertise, and its capability to satisfy your business requirements.

Author bio: Romana Gnatyk is a Content Marketing Manager at N-IX who is passionate about software development. She writes insightful content on various IT topics, including software product development, mobile app development, artificial intelligence, blockchain, and other technologies.

Researchers introduce a machine learning model where the learning cannot be proved
"All of my engineering teams have a machine learning feature on their roadmap" - Will Ballard talks artificial intelligence in 2019 [Interview]
Apple ups its AI game; promotes John Giannandrea as SVP of machine learning

Tech’s culture war: entrepreneur egos v. engineer solidarity

Richard Gall
12 Jul 2018
10 min read
There is a rift in the tech landscape that has been shifting quietly for some time, but 2018 is the year it finally properly opened. This is the rift between tech's entrepreneurial 'superstars' and a nascent solidarity movement, the two faces of the modern tech industry. Within this 'culture war' there's a broader debate about what technology is for and who has the power to make decisions about it. And that can only be a good thing: this is a conversation we've needed for some time. With the Cambridge Analytica scandal, and the shock election results to which it was tied, much contemporary political conversation is centered on technology's impact on the social sphere. But little attention has been paid to the way these social changes and crises are actually forcing changes within the tech industry itself. If it feels like we're all having to pick sides when it comes to politics, the same is true when it comes to tech.

The rise of the tech ego

If you go back to the early years of software, in the early part of the twentieth century, there was little place for ego. It's no accident that during this time computing was feminized: it was widely viewed as administrative. It was only later that software became more male-dominated, thanks to a sexist cultural drive to establish male power in the field. This was arguably the start of ego's takeover of tech; after all, men wanted their work to carry a certain status, and women had to be pushed out to give it to them. It's no accident that the biggest names in technology - Bill Gates, Steve Wozniak, Steve Jobs - are all men. Their rise was, in part, a consequence of a cultural shift in the sixties. But it's worth recognising that in the eighties, these were still largely faceless organizations. Yes, they were powerful men, but the organizations they led were really just the next step out from the military-industrial complex that helped develop software as we know it today.

It was only when 'tech' properly entered the consumer domain that ego took on a new value. As PCs became part of everyday life, attaching these products to interesting and intelligent figures was a way of marketing them. It's worth remarking that it isn't really important whether these men had huge egos at all. All that matters is that they were presented in that way, and granted an incredible amount of status and authority. This meant that the complexity of software and the literal labor of engineering could be reduced to a relatable figure like Gates or Jobs. We can still feel the effects of that today: just think of the different ways Apple and Microsoft products are perceived. Tech leaders personify technology. They make it marketable.

Perhaps tech 'egos' were weirdly necessary. Because technology was starting to enter into everyone's lives, these figures - as much entrepreneurs as engineers - were able to make it accessible and relatable. If that sounds a little far-fetched, consider what the tech 'ninja' or the 'guru' really means for modern businesses. It often isn't so much about doing something specific, but instead about making the value and application of those technologies clear, simple, and understandable. When companies advertise for these roles using this sort of language, they're often trying to solve an organizational problem as much as a technical one. That's not to say that being a DevOps guru at some middling eCommerce company is the same as being Bill Gates. But it is important to note how we started talking in this way.
Similarly, not everyone who gets called a 'guru' is going to have a massive ego (some of my best friends are cloud gurus!), but this type of language does encourage a selfish and egotistical type of thinking. And as anyone who's worked in a development team knows, that can be incredibly dangerous.

From Zuckerberg to your sprint meeting - egos don't care about you

Today, we are in a position where the discourse of gurus and ninjas is getting dangerous. This is true on a number of levels. On the one hand, we have a whole new wave of tech entrepreneurs. Zuckerberg, Musk, Kalanick, Chesky: these people are Gates and Jobs for a new generation. For all their innovative thinking, it's not hard to discern a certain entitlement from all of these men. Just look at Zuckerberg and his role in the Cambridge Analytica scandal. Look at Musk and his bizarre intervention in Thailand. Kalanick's sexual harassment scandal might be personal, but it reflects a selfish entitlement that has real professional consequences for his workforce. Okay, so that's just one extreme, but these people become the images of how technology should work. They tell business leaders and politicians that tech is run by smart people who ostensibly should be trusted. This not only has an impact on our civic lives but on our professional lives too. Ever wonder why your CEO decides to spend big money on a CTO? It's because this is the model of modern tech. That then filters down to you and the projects you don't have faith in. If you feel frustrated at work, think of how these ideas and ways of describing things cascade down to what you do every day. It might seem small, but it does exist.

The emergence of tech worker solidarity

While all that has been happening, we've also seen a positive political awakening across the tech industry. As the egos come to dictate the way we work, what we work on, and who feels the benefits, a large group of engineers are starting to realize that maybe this isn't the way things should be.

Disaffection in Silicon Valley

This year in Silicon Valley, worker protests against Amazon, Microsoft, and Google have all had an impact on the way those companies are run. We don't necessarily hear about these people, but they're there. They're not willing to let their code be used in ways that don't represent them. The Cambridge Analytica scandal was the first instance of a political crisis emerging in tech. It wasn't widely reported, but some Facebook employees asked to move across to different departments like Instagram or WhatsApp. One product designer, Westin Lohne, posted on Twitter that he had left his position, saying "morally, it was extremely difficult to continue working there as a product designer."
https://twitter.com/westinlohne/status/981731786337251328

But while the story at Facebook was largely one of disorganized disaffection, at Google there was real organization against Project Maven. 300 Google employees signed a petition against the company's AI initiative with the Pentagon. In May, a number of employees resigned over the issue. One is reported as saying, "over the last couple of months, I've been less and less impressed with Google's response and the way our concerns are being listened to."

Read next: Google employees quit over company's continued Artificial Intelligence ties with the Pentagon

A similar protest happened at Amazon, with an internal letter to Jeff Bezos protesting the use of Rekognition - Amazon's facial recognition technology - by law enforcement agencies, including ICE.
"Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents," the letter stated, according to Gizmodo. "In the face of this immoral U.S. policy, and the U.S.'s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS."

Microsoft saw a similar protest, sparked, in part, by the shocking images of families being separated at the U.S./Mexico border. Despite the company distancing itself from ICE's activities, many employees were vocal in their opposition. "This is the sort of thing that would make me question staying," said one employee, speaking to Gizmodo.

A shift in attitudes as tensions emerge

True, when taken individually, these instances of disaffection may not look like full-blown solidarity. But together, they amount to a changing consciousness across Silicon Valley. Of course, it wouldn't be wrong to say that a relationship between tech, the military, and government has always existed. The reason things are different now is precisely that these tensions have become more visible, and the attitudes more prominent in public discourse. It's worth thinking about these attitudes and actions in the context of a hyper-competitive Silicon Valley where ego is the norm, and talent and flair are everything. Signing a petition carries with it some risk, and leaving a well-paid job you may have spent years working towards is no simple decision. It requires a decisive break with the somewhat egotistical strand that runs through tech to make these sorts of decisions. While it might seem strange, it also shouldn't be that surprising: if working in software demands a high level of collaboration, then collaborating socially and politically is really just the logical extension of our professional lives. All this talk about 'ninjas', 'gurus', and geniuses only creates more inequality within the tech job market. Whether you're in Silicon Valley, Stoke, Barcelona, or Bangalore, this language hides the skills and knowledge that are actually most valuable in tech.

Read next: Don't call us ninjas or rockstars, say developers

Where do we go next?

The future doesn't look good. But if the last six months or so are anything to go by, there are a number of things we can do. On the one hand, more organization could be the way forward. The publishing and media industries have been setting a great example of how unionization can work in a modern setting and help workers achieve protection and collaborative power at work. If the tech workforce is going to grow significantly over the next decade, we're going to see more unionization. We've already seen technology lead to more unionization and worker organization in the context of the gig economy - Deliveroo and Uber drivers, for example. Gradually it's going to return to tech itself. The tech industry is transforming the global economy, and it's not immune from the changes it's causing.

But we can also do more to challenge the ideology of the modern tech ego. Key to this is more confidence and technological literacy. If tech figureheads emerge to make technology marketable and accessible, the way to remove that power is to demystify it: to make it clear that technology isn't a gift, the genius invention of an unfathomable mind, but a collaborative and communal activity, and a skill that anyone can master given the right attitude and resources.
At its best, tech culture has been teaching the world that for decades. Think about this the next time someone tells you that technology is magic. It's not magic; it's built by people like you. People who want to call it magic want you to think they're a magician - and like any other magician, they're probably trying to trick you.

“Is it actually possible to have a free and fair election ever again?,” Pulitzer finalist, Carole Cadwalladr on Facebook’s role in Brexit

Bhagyashree R
18 Apr 2019
6 min read
On Monday, Carole Cadwalladr, a British journalist and Pulitzer Prize finalist, revealed in her TED talk how Facebook impacted the Brexit vote by enabling the spread of calculated disinformation. Brexit, short for "British exit," refers to the UK's withdrawal from the European Union (EU). Back in June 2016, when the United Kingdom European Union membership referendum happened, 51.9% of voters supported leaving the EU. The withdrawal was originally set for 29 March 2019, but it has now been extended to 31 October 2019.

Cadwalladr was asked by the editor of The Observer, the newspaper she was working for at the time, to visit South Wales to investigate why so many voters there had elected to leave the EU. She decided to visit Ebbw Vale, a town at the head of the valley formed by the Ebbw Fawr tributary of the Ebbw River in Wales. She wanted to find out why this town had the highest percentage of 'Leave' votes (62%).

Brexit in South Wales: the reel and the real

After reaching the town, Cadwalladr recalls that she was "taken aback" when she saw how it had evolved over the years. The town was gleaming with new infrastructure, including an entrepreneurship center, a sports center, better roads, and more, all funded by the EU. After seeing this development, she felt "a weird sense of unreality" when a young man stated that his reason for voting to leave the EU was that it had failed to do anything for him. Not only this young man, but people all over the town stated the same reason for voting to leave the EU. "They said that they wanted to take back control," adds Cadwalladr. Another major reason given for Brexit was immigration. However, Cadwalladr barely saw any immigrants and was unable to relate to the immigration problem the citizens of the town were talking about. So she checked her observation against the actual records and was surprised to find that Ebbw Vale, in fact, has one of the lowest immigration rates. "So I was just a bit baffled because I couldn't really understand where people were getting their information from," she adds. After her story was published, a reader reached out to her regarding some Facebook posts and ads, which she described as "quite scary stuff about immigration, and especially about Turkey." These posts were misinforming people that Turkey was going to join the EU and that its 76 million citizens would promptly emigrate to current member states.

"What happens on Facebook, stays on Facebook"

After being informed about these ads, Cadwalladr checked Facebook to look for them herself, but she could not find even a trace of them, because there is no archive of the ads that are shown to people on Facebook. She said, "This referendum that will have this profound effect on Britain forever and it already had a profound effect. The Japanese car manufacturers that came to Wales and the North-East to replace the mining jobs are already going because of Brexit. And, this entire referendum took place in darkness because it took place on Facebook." This is why the British parliament has called on Mark Zuckerberg several times to answer its questions, but each time he has refused. Nobody has a definitive answer to questions like what ads were shown to people, how these ads impacted them, how much money was spent on them, or what data was analyzed to target these people - nobody but Facebook. Cadwalladr adds that she and other journalists observed that multiple crimes happened during the referendum.
In Britain, there is a limit on the amount you are allowed to spend on election campaigns, to prevent politicians from buying votes. But in the last few days before the Brexit vote, the "biggest electoral fraud in Britain" happened. It was found that the official Vote Leave campaign laundered £750,000 through another campaign entity, which was ruled illegal by the Electoral Commission. This money was spent, as you can guess, on online disinformation campaigns. She adds, "And you can spend any amount of money on Facebook or on Google or on YouTube ads and nobody will know, because they're black boxes. And this is what happened."

The law was also broken by a group named Leave.EU. This group was led by Nigel Farage, a British politician whose Brexit Party is doing quite well in the European elections. The campaign was funded by Arron Banks, who has been referred to the National Crime Agency because the Electoral Commission was not able to figure out where the money he provided came from. Going further into the details, she adds, "And I'm not even going to go into the lies that Arron Banks has told about his covert relationship with the Russian government. Or the weird timing of Nigel Farage's meetings with Julian Assange and with Trump's buddy, Roger Stone, now indicted, immediately before two massive WikiLeaks dumps, both of which happened to benefit Donald Trump."

While looking into Trump's relationship to Farage, she came across Cambridge Analytica. She tracked down one of its ex-employees, Christopher Wylie, who was brave enough to reveal that the company had worked for Trump and for Brexit. It used data from 87 million Facebook profiles to understand people's individual fears and better target them with Facebook ads. With so many big names involved in her investigation, threats were to be expected. The owner of Cambridge Analytica, Robert Mercer, threatened to sue them multiple times. Later, one day ahead of publishing, they received a legal threat from Facebook. But this did not stop them from publishing their findings in the Observer.

A challenge to the "gods of Silicon Valley"

Addressing the leaders of the tech giants, Cadwalladr said, "Facebook, you were on the wrong side of history in that. And you were on the wrong side of history in this -- in refusing to give us the answers that we need. And that is why I am here. To address you directly, the gods of Silicon Valley: Mark Zuckerberg and Sheryl Sandberg and Larry Page and Sergey Brin and Jack Dorsey, and your employees and your investors, too." These tech giants can't get away with just saying that they will do better in the future. They first need to give us the long-overdue answers, so that these types of crimes can be stopped from happening again. Comparing the technology they created to a crime scene, she calls for fixing the broken laws. "It's about whether it's actually possible to have a free and fair election ever again. Because as it stands, I don't think it is," she adds. To watch her full talk, visit TED.com.

Facebook shareholders back a proposal to oust Mark Zuckerberg as the board's chairperson
Facebook AI introduces Aroma, a new code recommendation tool for developers
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

What’s the difference between cross platform and native mobile development?

Amarabha Banerjee
27 Jun 2018
4 min read
Mobile has become an increasingly important part of many modern businesses' tech strategy. In everything from eCommerce to financial services, mobile applications aren't simply a 'nice to have'; they're essential. Customers expect them. The most difficult question today isn't 'do we need a mobile app?' Instead, it's 'which type of mobile app should we build: native or cross-platform?' There are arguments to be made for both cross-platform mobile development and native app development, and developers who have worked on either kind of project will probably have an opinion on the right way to go. Like many things in tech, however, the cross-platform vs. native debate is really a question of which one is right for you. From both a business and a capability perspective, you need to understand what you want to achieve and when. Let's take a look at the difference between cross-platform frameworks and native development platforms. You should then feel comfortable enough to make the right decision about which mobile platform is right for you.

Cross-platform development

A cross-platform application runs across all mobile operating systems without any extra coding. By all mobile operating systems, I mean iOS and Android (Windows phones are probably on their way out). A cross-platform framework provides all the tools to help you create cross-platform apps easily. Some of the most popular cross-platform frameworks include Xamarin, Corona SDK, Appcelerator Titanium, and PhoneGap.

Hybrid mobile apps

One specific form of cross-platform mobile application is hybrid. With hybrid mobile apps, the graphical user interface (GUI) is developed using HTML5. These are then wrapped in native WebView containers and deployed on iOS and Android devices.

A native app, by contrast, is specifically designed for one particular operating system, which means it will work better in that specific environment than one created for multiple platforms. One of the latest native Android development frameworks is Google Flutter; for iOS, it's Xcode.

Native mobile development vs. cross-platform development

If you're a mobile developer, which is better? Let's compare cross-platform development with native development:

Cross-platform development is more cost-effective, simply because you can reuse around 80% of your code; you're essentially building one application. The cost of native development is roughly double that of cross-platform development, and the cost of Android development is roughly 30% more than that of iOS development.

Cross-platform development takes less time. Although some coding has to be done natively, the time taken to develop one app is, obviously, less than the time taken to develop two.

Native apps can use all system resources; no other type of app has more access. They are able to use the maximum computing power provided by the GPU and CPU, which means load times are often pretty fast. Cross-platform apps have restricted access to system resources; their access depends on framework plugins and permissions.

Hybrid apps usually take more time to load because smartphone GPUs are generally less powerful than those of other machines; consequently, unpacking an HTML5 UI takes more time on a mobile device. The same reason forced Facebook to shift its mobile apps from hybrid to native, which, according to Facebook, improved app load time and the loading of the news feed and images in the app.

The most common challenge in cross-platform mobile development is balancing the requirements of iOS and Android UX design.
iOS is quite strict about its UX and UI design formats, which increases the chances of rejection from the App Store and causes more recurring costs. A critical aspect of native mobile apps is that, if they are designed properly and synchronized with the OS, they get regular software updates; that can be quite a difficult task for cross-platform apps.

Finally, the most important consideration that should determine your choice is your (or the customer's) requirements. If you want to build a brand around your app, like a business or an institution, or your app is going to need a lot of GPU support, like a game, then native is the way to go. But if your requirement is simply to create awareness and spread information about an existing brand or business on a limited budget, then cross-platform is probably the best route to go down.

How to integrate Firebase with NativeScript for cross-platform app development
Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
A cross-platform solution with Xamarin.Forms and MVVM architecture

Things to Consider When Migrating to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
After the decision is made to make use of a cloud solution like Amazon Web Services or Microsoft Azure, there is one main question that needs to be answered: "What's next?" There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition.

Gather background information

Before getting started, it's important to have a clear picture of what is meant to be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration.

What are the reasons for moving to the cloud? There are many benefits of moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important: prices between vendors vary, as do the support services that are offered, and that might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal. It will be important to ensure that the customization options are available at the desired level.

Which applications are being moved? When beginning the migration process, it is important to make sure that the scope of the effort is clear. Consider moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications.

What is the anticipated cost? A cloud solution will have variable costs associated with it, but it is important to have some estimate of what is expected. This will help when selecting vendors, and it will allow for guidance in configuring the system.

What is the long-term plan? Is the new environment intended to eventually replace the legacy system? To work alongside it? Begin to think about the plan beyond the initial migration. Ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services.

Determine your actual cloud needs

One important way to maximize the benefits of the cloud is to ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs. Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that will impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor with packages that make sense for your current and future needs as much as possible, since transitioning between vendors is not typically desirable.

Implement security policies

Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine if data security is a concern. Sensitive data such as PII or PCI may be subject to regulations that impact data encryption rules, especially when the data is accessed through the cloud. Ensure that the selected vendor is reliable enough to safeguard this information properly.
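As a concrete illustration of an encrypted transfer plus a basic validation step, here is a minimal sketch using today's boto3 client for Amazon S3; the bucket name, object key, and choice of AES256 server-side encryption are placeholders, and your regulatory requirements may dictate different settings.

```python
import boto3

s3 = boto3.client("s3")

# Upload with server-side encryption requested.
s3.upload_file(
    "local/dataset.csv",    # local file to migrate
    "my-migration-bucket",  # placeholder bucket name
    "data/dataset.csv",     # placeholder object key
    ExtraArgs={"ServerSideEncryption": "AES256"},
)

# Basic validation: confirm the object's size and encryption status.
head = s3.head_object(Bucket="my-migration-bucket", Key="data/dataset.csv")
print(head["ContentLength"], head.get("ServerSideEncryption"))
```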
In some cases, applications that are being migrated will need to be refactored so that they will work in the cloud. Sometimes this means making adjustments to connection information or networking protocols; in other cases, it means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth.

Let's get to work!

Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to ensure that everything was successful by performing data validation and testing data access policies. At this point, everything will be configured, and any application-specific refactoring or testing can begin. In order to ensure the success of the project, consider hiring a consulting firm with cloud experience that can help guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered in order to ensure that the cloud services provide the solution necessary for the project. Once the initial migration is complete, the plan can be revised in order to facilitate the migration of additional datasets or processes into the cloud environment.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.

Progression of a Maker

Travis Ripley
30 Jun 2014
6 min read
There's a natural path for the education of a maker that takes place within techshops and makerspaces. It begins in the world of tools you may already know, like handheld tools or power tools, and quickly creeps into an unknown world of machines suited to bring any desire to fruition. At first, taking classes may seem like a huge investment, but the payback you receive from the knowledge is priceless. I can't even put a price on the payback I've earned from developing these maker skills, but I can tell you that the number of opportunities is overflowing. The opportunities to grow and learn also increase your connections, and that's what helps you to create an enterprise.

Your options for education all depend upon what is available to you locally. As the ideology of technological dissonance has been growing culturally, it has been influencing advances in open source and open hardware, and it has had a big impact on the trend of creating incubators, startups, techshops, and makerspaces on a global scale.

When I first began my education in the makerspace, I was worried that I'd never be able to learn it all. I started small by reading blogs and magazines, and eventually I decided to take a chance and sign up for a membership at our local makerspace: http://www.Makerplace.com. There I was given access to a variety of tools that would be too bulky and loud for my house and workspace, not to mention extremely out of my price range. When I first started at the Makerplace, I was overwhelmed by the amount of technology that was available to me, and I was daunted by the degree of difficulty it would take to even use these machines. But you can only learn so much from videos and books; the real trial begins when you put that knowledge to work with hands-on experience. I was ready to get some experience under my belt.

The degree of difficulty for a student varies, obviously, by experience and by how well one grasps the concepts. I started by taking a class that offers a brief introduction to a topic and some guidance from an expert. After that, you learn on your own and will break things such as materials, end mills, and electronic components, and go through lots of consumables (I do not condone breaking fingers, body parts, or huge expensive tools). This stage is key, because once you understand what can and will go wrong, you'll undeniably want more training from an expert. And as the saying goes, "practice makes perfect," which is the key to mastery. As your education progresses, it will become apparent which classes need to come next. The best place to start is learning the software necessary to develop your tangible goods.

For those of you who are interested, I will list the tools in the order I learned them, starting from ground zero. I suggest the first tools to learn are the laser, waterjet, and plasma CNC cutters, as they can precisely cut shapes out of sheet-type material. The laser is the easiest to learn, and can be used not only to cut but also to engrave wood, acrylics, metal, and other sheet-type materials. Most likely the makerspaces and hackerspaces you have access to will have one available. Access to the waterjet and plasma CNC machines will depend upon the workshop, since they require more room, along with the outfitting of vapor and fume containment equipment. The next set of tools, which require a bigger learning curve, are the multi-axis CNC mills and routers and the conventional mill and lathe.
CNC (Computer Numerical Control) is the automation of machine tools. These processes of controlled material removal are today collectively known as subtractive manufacturing. This requires you to take unfinished workpieces made of materials such as metals, plastics, ceramics, and wood and create 2D/3D shapes, which can be made into tools or finished as tangible objects. The CNC routers are for the same process, but they use sheet materials such as plywood, MDF, and foam.

The first time I took a tour of the Makerplace, these machines looked intimidating. They were big and loud, and I had no clue what they were used for. It wasn't until I gained further insight into manufacturing that I understood how valuable these tools are. The learning curve is gradual, since there are multiple moving parts and operations happening at once. I took the CNC fundamentals class, which was required before operating any of these machines, and then completed the conventional mill and lathe classes before moving on to the CNC machines. I suggest taking the steps in this order, since understanding the conventional process plays an integral role in how you design your parts to be machined on the CNC machines. I found out the hard way why end mills are called consumables, as I scrapped many parts and broke many end mills. This is a great skill set to understand, as it directly complements additive processes such as 3D printing.

Once you have a grasp of the basics of automated machinery, the next step is to learn welding and plasma cutting equipment and metal forming tools. This skill opens many possibilities and opportunities to makers, such as making and customizing frames, chassis, and jigs. Along the way you will also learn how to use the metal forming tools to create and craft three-dimensional shapes from thin-gauge sheet metal. And last but not least, depending on how far you want to take your learning, there are tools in the metal forming category driven by large air compressors, such as bead blasters and paint sprayers, which require constant pressure. There is also high-temperature equipment, such as furnaces, ovens, and acrylic sheet benders, and my personal new favorite, the vacuum formers, which bend and form plastic into complex shapes.

With all of these new skills under my belt, a network of like-minded individuals, and a passion for knowledge in manufacturing and design, I was able to produce and create products at a pro level, which totally changed my career. Whatever your curious intentions may be, I encourage you to take on a new challenge, such as learning manufacturing skills, and you will be guaranteed a transformative look at the world around you, from consumer to maker.

About the author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.

How can Artificial Intelligence support your Big Data architecture?

Natasha Mathur
26 Sep 2018
6 min read
Getting a big data project in place is a tough challenge, but making it deliver results is even harder. That's where artificial intelligence comes in. By integrating artificial intelligence into your big data architecture, you'll be able to manage and analyze data in a way that provides a substantial impact on your organization. With big data getting even bigger over the next couple of years, AI won't simply be an optional extra; it will be essential.

According to IDC, the accumulated volume of big data will increase from 4.4 zettabytes to roughly 44 zettabytes (44 trillion GB) by 2020. Only by using artificial intelligence will you really be able to properly leverage such huge quantities of data. The International Data Corporation (IDC) also predicted a need this year for 181,000 people with deep analytical, data management, and interpretation skills. AI comes to the rescue again: it can ultimately compensate for the lack of analytical resources today with the power of machine learning, which enables automation.

Now that we know why big data needs AI, let's have a look at how AI helps big data. For that, you first need to understand the big data architecture. While it's clear that artificial intelligence is an important development in the context of big data, what are the specific ways it can support and augment your big data architecture? It can, in fact, help you across every component in the architecture. That's good news for anyone working with big data, and good for organizations that depend on it for growth as well.

Artificial intelligence in big data architecture

In a big data architecture, data is collected from different data sources and then moves forward to other layers.

Artificial intelligence in data sources

Using machine learning, the process of structuring data becomes easier, making it easier for organizations to store and analyze their data. Keep in mind that large amounts of data from various sources can sometimes make analysis even harder. This is because we now have access to heterogeneous sources of data that add different dimensions and attributes to the data, which slows down the entire process of collecting it. To make things quicker and more accurate, it's important to consider only the most important dimensions. This process is called data dimensionality reduction (DDR). With DDR, it is important to keep in mind that the reduced model should always convey the same information, without any loss of insight or intelligence.

Principal component analysis (PCA) is a useful machine learning method for dimensionality reduction. PCA performs feature extraction, meaning it combines all the input variables from the data, then drops the "least important" variables while making sure to retain the most valuable parts of all of the variables. Also, each of the "new" variables after PCA is independent of the others.
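Here is a minimal sketch of that idea using scikit-learn's PCA implementation; the synthetic data and the choice of ten components are illustrative assumptions, not recommendations.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical feature matrix: 1,000 records, each with 50 attributes.
X = np.random.random((1000, 50))

# Extract 10 uncorrelated components from the 50 original attributes.
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (1000, 10)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```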
Artificial Intelligence in data storage

Once data is collected from the data source, it then needs to be stored. AI can allow you to automate storage with machine learning, which also makes structuring the data easier. Machine learning models automatically learn to recognize patterns, regularities, and interdependencies from unstructured data, and then adapt, dynamically and independently, to new situations.

K-means clustering is one of the most popular unsupervised algorithms for data clustering, used when there's large-scale data without any defined categories or groups. The K-means clustering algorithm performs pre-clustering, or classification of data into larger categories. Unstructured data gets stored as binary objects, annotations are stored in NoSQL databases, and raw data is ingested into data lakes. All this data acts as input to machine learning models. This approach is great because it automates the refinement of large-scale data: as the data keeps coming, the machine learning model will keep storing it depending on which category it fits.
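Below is a minimal K-means sketch in Python with scikit-learn showing how incoming records could be assigned to broad categories. The three-cluster count and the synthetic data are assumptions made for illustration:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Hypothetical records with no predefined categories or groups.
X, _ = make_blobs(n_samples=500, n_features=4, centers=3, random_state=0)

# Pre-cluster the data into 3 larger categories; in practice the
# number of clusters would itself be tuned (e.g. with the elbow method).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# As new data keeps coming, route each record to its category.
new_record = np.array([[0.5, -1.2, 3.3, 0.7]])
print(kmeans.predict(new_record))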
Artificial Intelligence in data analysis

After the data storage layer comes data analysis. There are numerous machine learning algorithms that enable effective and quick data analysis in a big data architecture. One algorithm that can really step up the game when it comes to data analysis is Bayes' Theorem. Bayes' Theorem determines the probability of an event based on prior knowledge of conditions that might be related to the event; in effect, it uses stored data to 'predict' the future. This makes it a wonderful fit for big data: the more data you feed to a Bayesian algorithm, the more accurate its predictive results become.

Another machine learning technique that is great for data analysis is the decision tree. Decision trees help you reach a particular decision by presenting all possible options and their probability of occurrence, and they're extremely easy to understand and interpret.

LASSO (least absolute shrinkage and selection operator) is another algorithm that helps with data analysis. LASSO is a regression analysis method capable of performing both variable selection and regularization, which enhances the prediction accuracy and interpretability of the resulting model. Lasso regression can be used to determine which of your predictors are most important. Once the analysis is done, the results are presented to other users or stakeholders; this is where the data utilization part comes into play. Data helps to inform decision making at various levels and in different departments within an organization.
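As one concrete instance of these analysis methods, here is a minimal LASSO sketch in Python with scikit-learn. The synthetic regression data and the regularization strength are assumptions chosen for the example:

from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Hypothetical data: 10 candidate predictors, only 3 truly informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

# alpha controls how aggressively unimportant coefficients are
# shrunk to exactly zero (variable selection plus regularization).
lasso = Lasso(alpha=1.0)
lasso.fit(X, y)

# Non-zero coefficients mark the predictors LASSO deems important.
for i, coef in enumerate(lasso.coef_):
    if abs(coef) > 1e-6:
        print(f"feature {i}: coefficient {coef:.2f}")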
Artificial intelligence takes big data to the next level

Heaps of data get generated every day by organizations all across the globe. Given such huge amounts of data, extracting the right insights and results can go beyond the reach of current technologies. Artificial intelligence takes the big data process to another level, making it easier to manage and analyze a complex array of data sources. This doesn't mean that humans will instantly lose their jobs - it simply means we can put machines to work to do things that even the smartest and most hardworking humans would be incapable of. There's a saying that goes "big data is for machines; small data is for people", and it couldn't be any truer.

7 AI tools mobile developers need to know
How AI is going to transform the Data Center
How Serverless computing is making AI development easier


The Difference Between Working in Indie and AAA Game Development

Raka Mahesa
02 Oct 2017
5 min read
Let's say we have two groups of video games. In the first group, we have games like The Witcher 3, Civilization VI, and Overwatch. In the second group, we have games like Super Meat Boy, Braid, and Stardew Valley. Can you tell the difference between these two groups? Is one group of games better than the other? No, they are all good games that have achieved both critical and financial success. Are the games in the first group sequels, while the games in the second group are new? No, Overwatch is a new, original IP. Are the games in the first group more expensive than those in the second group? Now we're getting closer.

The truth is, the first group of games comes from searching Google for "popular AAA games," while the second group comes from searching for "popular indie games." In short, the games in the first group are AAA games, and the games in the second group are indie games.

Indie vs. AAA game development

Now that we've seen the difference between the two groups, why do people separate these games into two different groups? What makes them different from each other? Some would say that they are priced differently, but there are AAA games with low pricing as well as indie games with expensive pricing. How about the scale of the games? Again, there are indie games with big, massive worlds, and there are also AAA games set in short, small worlds.

From my perspective, the key difference between the two groups is the size of the company developing the games. Indie games are usually made by companies with fewer than 30 people, and some are even made by fewer than five people. On the other hand, AAA games are made by much bigger companies, usually with hundreds of employees.

Game development teams: size matters

Earlier, I mentioned that company size is the key difference between indie games and AAA games. So it's not surprising that it's also the main difference between indie and AAA game development. In fact, the difference in team or company size leads to every other difference between the two development processes.

Let's start with something personal: your role or position in the development team. Big teams usually have every position they need already filled. If they need someone to work on the game engine, they already have an engine programmer there. If they need someone to design a level, they already have a level designer working on it. In a big team, your role is determined from the start, and you will rarely work on any task outside of your job description.

If AAA game development values specialists, then indie game development values generalists who can fill multiple roles. It's not weird at all in a small development team if a programmer is asked to deal with both networking and enemy AI. Small teams usually aren't able to cover all the needed positions individually, so they turn to people who are able to work on a variety of tasks.

Funding across the games industry

Let's move on to another difference, this time concerning funding. A large team requires a large amount of funding, simply because it has more people that need to be paid. And, if you look at the bigger picture, it also means that video games made by a large team have a large development cost. The opposite rings true as well: indie game development has much smaller development costs because the teams are smaller. Because every project has a chance of failure, the large development cost of AAA games becomes a big problem.
If you're only spending a little money, maybe you're fine with a small chance of failure, but if you're spending a large sum of money, you definitely want to reduce that risk as much as possible. This results in AAA game development being much more risk-averse; teams try to avoid risk wherever they can. In AAA game development, when there's a decision that needs to be made, the team will try to make sure that they don't make the wrong choice. They will do extensive market research and they will see what is trending in the market. They want to grab as many audience members as possible, so if there's any design that will exclude a significant number of customers, it will be cut.

On the other hand, indie game development doesn't spend that much money. With a smaller development cost, indie games don't need massive sales to recoup their costs. Because of that, indie teams are willing to take risks with experimental and unorthodox design, giving them creative freedom without needing to do market research.

That said, indie game development harbors a different kind of risk. Unlike their bigger counterparts, indie game developers tend to live from one game to the next. That is, they use the revenue from their current game to fund the development of their next game. So if any of their games don't perform well, they could immediately close down. And that's another difference between the two development processes: AAA game development tends to be more financially stable than indie development.

There are more differences between indie and AAA game development, but the ones listed above are some of the most prominent. All in all, one development process isn't better than the other, and it falls to you to decide which one is better suited for you.

Raka Mahesa is a game developer at Chocoarts, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Defending your business from the next wave of cyberwar: IoT Threats

Guest Contributor
15 Sep 2018
6 min read
There's no word other than war for the destabilization of another nation through state action, even if it's done with ones and zeros. The recent indictments of thirteen Russians and three Russian companies for tampering with US elections are a stark reminder. Without hyperbole, it is safe to say that we are in the throes of an international cyber war, and the damage is spreading across the corporate economy. Reports have reached a fever pitch and the costs globally are astronomical. According to Cybersecurity Ventures, damage related to cybercrime in general is projected to hit $6 trillion annually by 2021.

Over the past year, journalists for many news agencies have reported credible studies regarding the epidemic of state-sponsored cyber attacks. Wired and The Washington Post, among many others, have outlined threats that have reached the US energy grid and other elements of US infrastructure. However, the cost to businesses is just as devastating. While many attacks have targeted governments, businesses are increasingly at risk from state-sponsored cyber campaigns. A recent worldwide threat assessment from the US Department of Justice discusses several examples of state-sponsored cyber attacks that affect commercial entities, including diminished trust from consumers, ransomware proliferation, IoT threats, collateral damage from disruptions of critical infrastructure, and the disruption of shipping lanes.

How Cyberwar Affects Us on a Personal Level

One outcome of cyberwarfare that isn't usually considered is that a large amount of the damage is reflected in human capital: the undermining of consumer and employee confidence in the ability of a company to protect data. According to a recent study examining how Americans feel about internet privacy in 2018, 51% of respondents said their main concern was online threats stealing their information, and over a quarter said they were particularly concerned about companies collecting and sharing their personal data. This kind of consumer fear is justified by a seeming inability of companies to protect the data of individuals. Computing and quantitative business expert Dr. Benjamin Silverstone points out that recent cyber-attacks focus on the information of consumers (rather than other confidential documentation or state secrets, which may have greater protection). Silverstone says, "Rather than blaming the faceless cyber-criminals, consumers will increasingly turn to the company that is being impersonated to ask how this sort of thing could happen in the first place. The readiness to share details online, even with legitimate companies, is being affected and this will damage their business in the long term."

So, how can businesses help restore consumer confidence? You should:

- Increase your budget for better cybercrime solutions and tell your consumers about it liberally. Proven methods include investing in firewalls with intrusion prevention tools, teaching staff how to detect and avoid malware, and enforcing strict password protocols to bolster security.
- Invest in two-factor authentication so that consumers feel safer when accessing your product.
- Educate your consumer base; it is equally important that everyone be more aware when it comes to cyber attacks. Give your consumers regular updates about suspected scams and send tips and tricks on password safety.

Ransomware and Malware Attacks

CSO Online reports that ransomware damage costs exceeded $5 billion in 2017, 15 times the cost in 2015.
Accordingly, Cybersecurity Ventures says that costs from ransomware attacks will rise to $11.5 billion next year. In 2019, they posit, a business will fall victim to a ransomware attack every 14 seconds.

But is This International Warfare?

The North Korean government's botnet has been shown to be able to pull off DDoS attacks and is linked to the WannaCry ransomware attack. In 2017, over 400,000 machines were infected by the WannaCry virus, costing companies over $4 billion across more than 150 countries.

To protect yourself from ransomware attacks:

- Back up your data often and store it in non-networked spaces or on the cloud. Ransomware only works if there is a great deal of data that is at risk.
- Encrypt whatever you can and keep firewalls and two-factor authentication in place wherever possible.
- Keep what cyber experts call the "crown jewels" (the top 5% most important and confidential documents) on a dedicated computer with very limited access.

The Next Wave of Threat - IoT

IoT devices make mundane tasks like scheduling or coordination more convenient. However, the proliferation of these devices creates cybersecurity risk, as companies bring in devices like printers and coffee makers that become avenues for hackers to enter a network. Many experts point to IoT as their primary concern. A study from Shared Assessments found that 97% of IT respondents felt that unsecured IoT devices could cause catastrophic levels of damage to their company. However, less than a third of the companies represented reported thorough monitoring of the risks associated with third-party technology.

Here's how to protect yourself from IoT threats:

- Evaluate what data IoT devices are accumulating and limit raw storage. Create policies for anonymizing user data as much as possible.
- Apply security patches to any installed IoT device. This can be as simple as making sure you change the default password.
- Vet your devices: make sure you are buying from sources that (you believe) will be around a long time. If the business you purchase your IoT device from goes under, it will stop updating safety protocols.
- Make a diversified plan, just in case major components of your software setup are compromised.

While we may not be soldiers, a war that affects us all is currently on, and everyone must be vigilant. Ultimately, communication is key. Consumers rely on businesses to protect them from individual attack, and they are more likely to remain your customers if you can demonstrate how you are maneuvering to respond to global threats.

About the author

Zach is a freelance writer who likes to cover all things tech. In particular, he enjoys writing about the influence of emerging technologies on both businesses and consumers. When he's not blogging or reading up on the latest tech trend, you can find him in a quiet corner reading a good book, or out on the track enjoying a run.

New cybersecurity threats posed by artificial intelligence
Top 5 cybersecurity trends you should be aware of in 2018
Top 5 cybersecurity myths debunked


Minecraft: The Programmer's Sandbox

Aaron Mills
30 Jan 2015
6 min read
If you are familiar with gaming or the Java programming language, you've almost certainly heard of Minecraft. This extremely popular game has captured the imagination of a generation. The premise of the game is simple: you are presented with a near-infinitely explorable world built out of textured one-meter cubes. Modifying the landscape around you is simple and powerful. There are things to craft, such as weapons and mechanisms. There are enemies to fight or hide from, animals to tame, and crops to farm. However, the game doesn't actually provide you with any kind of goal or story beyond what you define for yourself. This makes Minecraft the perfect example of a Sandbox Game, if not the golden standard. But more than that, it has also become a Sandbox for people who like to write code. So let us take a moment and delve into why this is so and what it means for Minecraft to have become "The Programmer's Sandbox".

Originally the product of one man, Markus "Notch" Persson, Minecraft is written entirely in Java. The choice of Java as the language has helped define Minecraft in many ways. On the surface, we have the innate portability that Java provides. But when you dig deeper, Java opens up a whole new realm of possibilities. This is largely because of the inherent ease with which Java applications can be inspected, decompiled, and modified. It means that any part of the code can be changed in any way, allowing us to rewrite the game as we desire. This has led to a large and vibrant modding community, perhaps even the largest such community ever to exist.

The Minecraft modding community would not be what it is today without the herculean efforts of several groups of people, since the raw code isn't particularly modding-friendly: it's obfuscated and not very extensible in ways that let mods exist side by side. But the efforts of teams such as the Mod Coder Pack (MCP) and Forge have changed that. Today, getting started with Minecraft modding is as simple as downloading Forge, running a one-line setup command (gradlew setupDecompWorkspace eclipse), and pointing your IDE at the resulting folder. From there you can dive straight into the code and create a mod that will be compatible with the vast majority of all other mods. And this opens up realms of possibilities for anyone with an interest in seeing their own creations become part of a vibrant, explorable world. It is this desire that has driven the community to innovate and design the tools that let anyone jump into Minecraft modding and get their feet wet in minutes. As an example, here is a simple mod that I have created that adds a block to Minecraft.
This is simple, but it will give you an idea of what an example mod looks like:

package com.example.examplemod;

import cpw.mods.fml.common.Mod;
import cpw.mods.fml.common.Mod.EventHandler;
import cpw.mods.fml.common.event.FMLInitializationEvent;
import cpw.mods.fml.common.registry.GameRegistry;
import net.minecraft.block.Block;
import net.minecraft.block.BlockStone;

@Mod(modid = ExampleMod.MODID, version = ExampleMod.VERSION)
public class ExampleMod {

    public static final String MODID = "examplemod";
    public static final String VERSION = "1.0";

    @EventHandler
    public void init(FMLInitializationEvent event) {
        // Create a stone-like block, name it, point it at its texture,
        // and register it with the game.
        Block simpleBlock = new BlockStone()
                .setBlockName("simpleBlock")
                .setBlockTextureName("examplemod:simpleBlock");
        GameRegistry.registerBlock(simpleBlock, "simpleBlock");
    }
}

And here is a figure showing the block from the mod in Minecraft:

The Minecraft modding community consists of a wide range of people, from self-taught programmers to industry code experts. The reason such a wide range of modders exists is that the code is both accessible enough for the novice and flexible enough for the expert. Adding a new decorative block can be done with just a few simple lines of code, but mods can also become major projects with line counts in the tens or even hundreds of thousands. So whether this is your first time writing code, or you are a Java guru, you can quickly and easily bring your creations to life in the sandbox world of Minecraft.

People have created all kinds of crazy new things for Minecraft: massive toroidal fusion reactors, force fields, ICBMs, arcane magic runes, flying magic carpets, pipes for pumping fluids around, zombie-apocalypse mini-games, and even entirely new dimensions with giant bosses, quests, and loot. You can even find a mod that lets you visit the Moon. There really is no limit to what you can add to Minecraft. In many cases, people have taken elements from other game genres and incorporated them into the game: RPG leveling systems, survival horror adventures, FPS shooters, and more. These are just some examples of things that people have actually added to the game. The simplicity and flexibility of the game make this possible.

There are several factors that make Minecraft a particularly accessible game to mod. For one, the art assets are all fairly simple. You don't need HD textures or high-poly models; the game's art style intentionally avoids these. It instead opts for pixel art and blocky models. So even if you are a genius coder with no real skills in texturing and modeling, it's still possible to make something that looks good and fits into the game. The reverse is also true: if you are a great artist but your coding skills are weak, you can still create awesome decorative blocks. And if you need help with code, there are dozens, if not hundreds, of open-source mods to learn from and copy.

So yes, Minecraft may be a fun Sandbox game by itself. But if you are the type of person who wants to get your hands a bit dirty, it opens up a whole realm of possibilities, a realm where you are no longer limited by the vision of the game's creators but can make your own vision a reality. This is the true beauty of Minecraft: it really can be whatever you want it to be.

About the Author

Aaron Mills was born in 1983 and lives in the Pacific Northwest, which is a land rich in lore, trees, and rain. He has a Bachelor's Degree in Computer Science and studied at Washington State University Vancouver.
He is best known for his work on the Minecraft mod Railcraft, but he has also contributed significantly to the Forestry and Buildcraft mods, as well as to the Minecraft Forge project.


Why data science needs great communicators

Erik Kappelman
16 Jan 2018
4 min read
One of the biggest problems facing data science (and many other technical industries) today is communication. This is true at an individual level, but also at a much bigger organizational and cultural level. On the one hand, we can all be better communicators; at the same time, organizations and businesses can do a lot more to facilitate knowledge and information sharing.

At an individual level, it's important to recognize that some people find communication very difficult. It's a cliché that many of these people find themselves in technical industries, and while we shouldn't get stuck on stereotypes, there is certainly an element of truth in it. The reasons why this might be the case are incredibly complex, but part of the problem may be how technology has been viewed within institutions and other organizations. This is the sort of attitude that says, "those smart people just have bad social skills. We should let them do what they're good at and leave them alone." There are lots of problems with this, and it isn't doing anyone any favors, from the people who struggle with communication to the organizations that encourage this attitude.

Statistics and communicating insights

Let's take a field like statistics. There is a notion that you do not need to be good at communicating to be good at statistics; it is often viewed as a primarily numerical and technical skill. However, when you think about what statistics really is, it becomes clear that this is nonsensical. The primary purpose of the field is to tease out information and insights from noisy data and then communicate those insights. If you don't do that, you're not doing statistics. Some forms of communication are inherent to statistical research; graphs and charts communicate the meaning of data, and most statisticians and data scientists have a well-worn skill of chart making. But there's more than just charts: great visualizations and great presentations can all be the work of talented statisticians.

Of course, there are some data-related roles where communication is less important. If you're working on data processing and storage, for example, being a great communicator may not be quite as valuable. But consider this: if you can't properly discuss and present why you're doing what you're doing to the people you work with and the people who matter in your organization, you're immediately putting up a barrier to success.

The data explosion makes communication even more important

There is an even bigger reason data science needs great communicators, and it has nothing to do with individual success. We have entered what I like to call the Data Century. Computing power and tools using computers, like the Internet, hit a sweet spot somewhere around the new millennium, and the data and analysis now available to the world are unprecedented. Who knows what kind of answers this mass of data holds? Data scientists are at the frontier of the greatest human exploration since the settling of the New World. This exploration faces inward, as we try to understand how and why human beings do various things by examining the ever-growing piles of data. If data scientists cannot relay their findings, we all miss out on this wonderful time of exploration and discovery. People need data scientists to tell them about the whole new world of data that we are just entering. It would be a real shame if the data scientists didn't know how.
Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.

Align your product experience strategy with business needs

Packt Editorial Staff
02 May 2018
10 min read
Build a product experience strategy around the needs of stakeholders

Product experience strategists need to conduct thorough research to ensure that the products being developed and launched align with the goals and needs of the business. Alignment is a bit of a buzzword that you're likely to see in HBR and other publications, but don't dismiss it: it isn't a trivial thing, and it certainly isn't an abstract thing. One of the pitfalls of product experience strategy, and product management more generally, is that understanding the needs of the business isn't actually that straightforward. There are lots of moving parts and lots of stakeholders. And while everyone should be on the same page, even subtle differences can make life difficult.

This is why product experience strategists do detailed internal research. It helps designers understand the company's vision and objectives for the product, and it allows them to understand what's at stake. Based on this, they work with stakeholders to align product objectives and reach a shared understanding of the goals of design. Once organizational alignment is achieved, the strategist uses research insights to develop a product experience strategy; the research is simply a way of validating and supporting that strategy. The included research activities are:

- Stakeholder and subject-matter expert (SME) interviews
- Document review
- Competitive research
- Expert product reviews

Talk to key stakeholders

Stakeholders are typically senior executives who have a direct responsibility for, or influence on, the product. Stakeholders include product managers, who manage the planning and day-to-day activities associated with their product and have direct decision-making authority over its development. In projects that are important to the company, it is not uncommon for the executive leadership, from the chief executive on down, to be among the stakeholders, due to their influence on and authority over the overall product strategy.

The purpose of stakeholder interviews is to gather and understand the perspective of each individual stakeholder, and to align the perspectives of all stakeholders around a unified vision of the scope, purpose, outcomes, opportunities, and obstacles involved in undertaking a new product development project. Gaps among stakeholders on fundamental project objectives and priorities will lead to serious trouble down the road. It is best to surface such deviations as early as possible and help stakeholders reach a productive alignment.

The purpose of subject-matter expert (SME) interviews is to balance the strategic, high-level thinking provided by stakeholders with the detailed insights of experienced employees who are recognized for their deep domain expertise. Sales, customer service, and technical support employees have a wealth of operational knowledge of products and customers, which makes them invaluable when analyzing current processes and challenges.

Prior to the interviews, the experience strategist prepares an interview guide. The purpose of the guide is to ensure the following:

- All stakeholders can respond to the same questions
- All research topics are covered even if interviews are conducted by different interviewers
- Interviews make the best use of stakeholders' valuable time

Some of the questions in the guide are general and directed at all participants; others are more specific and focus on the stakeholder's particular areas of responsibility. Similar guides are developed for SME interviews.
In-person interviews are best, because they take place at the onset of the project and provide a good opportunity to build rapport and trust between the designer and the interviewee. After a formal introduction regarding the purpose of the interview and general questions regarding the person's role and professional experience, the person is asked for their personal assessment of, and opinions on, various topics. Here is a sample of different topics:

Objectives and obstacles
- Prioritized goals for the project
- What does success look like
- What kind of obstacles the project is facing, and suggestions to overcome them

Competition
- Who are your top competitors
- Strengths and weaknesses relative to the competition

Product features and functionality
- Which features are missing
- Differentiating features
- Features to avoid

The interviews are designed to last no more than an hour and are documented with notes and audio recordings, if possible. The answers are compiled and analyzed, and the results are presented in a report. The report suggests a unified list of prioritized objectives and highlights gaps and other risks that have been identified. The report is one of the inputs into the development of the overall product experience strategy.

Experts understand product experience better than anyone

Product expert reviews, sometimes referred to as heuristic evaluations, are professional assessments of a current product, performed by design experts for the purpose of identifying usability and user experience issues. The thinking behind the expert review technique is very practical: experience designers have the expertise to assess the experience quality of a product in a systematic way, using a set of accepted heuristics. A heuristic is a rule of thumb for assessing products. For example, the error prevention heuristic deals with how well the evaluated product prevents the user from making errors.

The word heuristic often raises questions about its meaning, and the method has been criticized for its inherent weaknesses due to the following:

- Subjectivity of the evaluator
- Expertise and domain knowledge of the evaluator
- Cultural and demographic background of the evaluator

These weaknesses increase the probability that the outcome of an expert evaluation will reflect the biases and preferences of the evaluator, resulting in potentially different conclusions about the same product. Still, expert evaluations, especially if conducted by two evaluators whose findings are aligned, have proven to be an effective tool for experience practitioners who need a fast and cost-effective assessment of a product, particularly digital interfaces.

Jakob Nielsen developed the method in the early 1990s. Although there are other sets of heuristics, Nielsen's are probably the best known and most commonly used. His initial set of heuristics was first published in his book, Usability Engineering, and is brought here verbatim, as there is no need for modification:

1. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
2. Match between system and the real world: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
3. User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
4. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
5. Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
6. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
7. Flexibility and efficiency of use: Accelerators--unseen by the novice user--may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
10. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

Every product experience strategy needs solid competitor research

Most companies operate in a competitive marketplace, and having a deep understanding of the competition is critical to success and survival. Here are a few of the questions that competitive research helps address:

- How does a product or service compare to the competition?
- What are the strengths and weaknesses of competing offerings?
- What alternatives and choices does the target audience have?

Experience strategists use several methods to collect and analyze competitive information. From interviews with stakeholders and SMEs, they know who the direct competition is. In some product categories, such as automobiles and consumer products, companies can reverse-engineer competitive products and try to match or surpass their capabilities. Additionally, designers can develop extensive experience analyses of such competitive products, because they can have first-hand experience with them. With some hi-tech products, however, some capabilities are cocooned within proprietary software or secret production processes. In these cases, designers can glean the capabilities from indirect evidence of use. The Internet is a main source of competitive information, from direct access to a product online to help manuals, user guides, bulletin boards, reviews, and analysis in trade publications. Occasionally, unauthorized photos or documents are leaked to the public domain, and they provide clues, sometimes real and sometimes bogus, about a secret upcoming product.
Social media, too, is an important source of competitive data, in the form of customer reviews on Yelp, Amazon, or Facebook. With the wealth of this information, a practical strategy for surpassing the competition and delivering a better experience can be developed.

For example, Uber has been a favorite car-hailing service for a while. The service has also generated public controversy and dissatisfied riders and drivers who are not happy with its policies, including its resistance to tipping. By design, a tipping function is not available in the app, which is the primary transaction method between the rider, the company, and the driver. Research indicates, however, that tipping for service is a common social norm and that most people tip because it makes them feel better. Not being able to tip places riders in an uncomfortable social setting and stirs negative emotions against Uber. The evidence of dissatisfaction can be easily collected from numerous web sources and from interviewing actual riders and drivers. For Uber competitors such as Lyft and Curb, making tipping an integrated part of their apps provides an immediate competitive edge that improves the experience of both riders, who have the option to reward the driver for good service, and drivers, who benefit from an increased income. This, and additional improvements over the inferior Uber experience, become part of an overall experience strategy focused on increasing the likelihood that riders and drivers will dump Uber in their favor.

[box type="note" align="" class="" width=""]You read an extract from the book Exploring Experience Design written by Ezra Schwartz. This book will help you unify Customer Experience, User Experience and more to shape lasting customer engagement in a world of rapid change.[/box]

10 tools that will improve your web development workflow
5 things to consider when developing an eCommerce website
RESTful Java Web Services Design


The Risk of Wearables - How Secure is Your Smartwatch

Sam Wood
10 Jun 2016
4 min read
Research suggests we're going to see almost 700 million smartwatch and wearable units shipped to consumers over the next few years. Wearables represent an exciting new frontier for developers, and a potential new cyber security risk. Smartwatches record a surprisingly large amount of data, and that data often isn't very secure.

What data do smartwatches collect?

Smartwatches are stuffed full of sensors to monitor your body and the world around you. A typical smartwatch might include any of the following:

- Gyroscope
- Accelerometer
- Light detection
- Heart rate monitor
- GPS
- Pedometer

Through SDKs like Apple's ResearchKit, or through firmware like on the FitBit, apps can be created that allow a wearable to monitor and collect this very physical, personal data. This data collection is benign and useful, but it encompasses some very personal parts of an individual's life, such as health, daily activities, and even sleeping patterns. So is it secure?

Where is the data stored and how can hackers access it?

Smart wearables almost always link up to another 'host' device, and that device is almost always a mobile phone. Data from wearables is stored and analysed on that host device, and is in turn vulnerable to the myriad of attacks that can be undertaken against mobile devices. Potential attacks include:

- Direct USB connection: physically linking your wearable with a USB port, either after theft or with a fake charging station. Think it's unlikely? So-called 'juice jacking' is more common than you might think.
- WiFi, Bluetooth, and Near Field Communication: wearables are made possible by wireless networks, whether Bluetooth, WiFi, or NFC. This makes them especially vulnerable to the myriad of wireless attacks it is possible to execute, even something as simple as rooting a device over WiFi with SSH.
- Malware and web-based attacks: mobile devices remain highly vulnerable to attacks from malware and web-based exploits such as StageFright.

Why is this data a security risk?

You might be thinking, "What do I care if some hacker knows how much I walk during the day?" But access to this data has some pretty scary implications. Our medical records are sealed tight for a reason; do you really want a hacker to be able to intuit the state of your health from your heart rate and exercise? What about if they then sell that data to your medical insurer?

Social engineering is one of the most used tools of anyone seeking access to a secure system or area. Knowing how a person slept, where they work out, when their heart rate has been elevated, even what sort of mood they might be in, all makes it that much easier for a hacker to manipulate human weakness. Even if we're not a potential gateway into a highly secured organization, this data can hypothetically be used by dodgy advertisers and products to target us when we're at our most vulnerable. For example, 'freemium' games often have highly sophisticated models for when to push their paid content and turn us into 'whales' who are always buying their product. Access to elements of our biometrics would only make this that much easier.

What does this mean?

As our lives integrate more and more with information technology, our data moves further and further outside of our own control. Wearables mean the recording of some of our most intimate details, putting that data at risk in turn. Even when we work to keep it secure, it only takes one momentary lapse to put it at risk from anyone who's ever been interested in seeing it.
Information security is only going to get more vital to all of us.

Acknowledgements

This blog is based on the presentation delivered by Sam Phelps at Security BSides London 2016.