
Tech Guides


Top 14 Cryptocurrency Trading Bots - and one to forget

Guest Contributor
21 Jun 2018
9 min read
Men in rags have become millionaires and the rich have bitten the dust within minutes, thanks to cryptocurrencies. According to research, over 1,500 cryptocurrencies are being traded globally across more than 6 million wallets, proving that digital currency is here not just to stay but to rule. The rise and fall of the crypto market is hidden from no one, but here is the catch: cryptocurrency still sells like hot cakes. According to Bill Gates, "The future of money is digital currency."

With thousands of digital currencies in circulation globally, crypto traders are immensely busy, and this is where cryptocurrency trading bots come into play. They streamline the trading and research process, which means less effort and more money earned, not to mention the hours saved. According to Eric Schmidt, ex-CEO of Google, "Bitcoin is a remarkable cryptographic achievement and the ability to create something that is not duplicable in the digital world has enormous value." The crucial question is whether a crypto trading bot is dependable and efficient enough to deliver optimum results under time pressure. To make sure you don't miss an opportunity to add cash to your digital wallet, here are the top 14 crypto trading bots, ranked according to performance.

1. Gunbot
Gunbot is a crypto trading bot that boasts detailed settings and is fit for beginners as well as professionals. Along with supporting custom strategies, it comes with a "Reversal Trading" feature. It enables continuous trading and works with almost all the major exchanges (Binance, Bittrex, GDAX, Poloniex, etc.). Gunbot is backed by thousands of users who have built an engaging and helpful community. Gunbot offers different packages priced between 0.02 and 0.15 BTC, and you can always upgrade them. The bot comes with a lifetime license and is constantly updated.

2. Haasbot
Hassonline created this cryptocurrency trading bot in January 2014. Its algorithm is very popular among cryptocurrency geeks. It can trade over 500 altcoins and bitcoins on well-known exchanges such as BTCC, Kraken, Bitfinex, Huobi, Poloniex, etc. You provide a small amount of the currency and the bot does all the trading work for you. Haasbot is customizable and has various technical indicator tools, and it also recognizes candlestick patterns. This immensely popular trading bot is priced between 0.12 BTC and 0.32 BTC for three months.

3. Gekko
Gekko is a cryptocurrency trading bot that supports over 18 Bitcoin exchanges, including Bitstamp, Poloniex, and Bitfinex. It is also a backtesting platform and is free to use: a full-fledged open source bot available on GitHub. Using the bot is easy, as it comes with basic trading strategies (a minimal sketch of what such a strategy looks like appears at the end of this article). Gekko's web interface was written from scratch and can run backtests and visualize the results while you monitor your local data. Gekko keeps you updated on the go using plugins for Telegram, IRC, email, and several other platforms. The bot works on all major operating systems, including Windows, Linux, and macOS, and you can even run it on a Raspberry Pi or on cloud platforms.

4. CryptoTrader
CryptoTrader is a cloud-based platform that allows users to create automated algorithmic trading programs in minutes. It is one of the most attractive crypto trading bots, and you won't need to install any unknown software to use it. A highly appreciated feature of CryptoTrader is its Strategy Marketplace, where users can trade strategies. It supports major currency exchanges such as Coinbase, Bitstamp, and BTC-e, and supports both live trading and backtesting. The company claims its cloud-based trading bots are unique compared with the bots currently available on the market.

5. BTC Robot
One of the earliest automated crypto trading bots, BTC Robot offers multiple packages for different memberships and software. It provides users with a downloadable version for Windows. The minimum robot plan is $149. BTC Robot sets up quite easily, but its algorithms are reportedly not great at predicting the markets. User mileage with BTC Robot varies heavily, leaving many with mediocre profits. Given the bot's fluctuating evaluation, profits may go up or down drastically depending on the accuracy of the algorithm. On the bright side, the bot comes with a sixty-day refund policy, which makes it a safe buy.

6. Zenbot
Another open source bot for bitcoin trading, Zenbot can be downloaded and its code can be modified. It hasn't received an update in the past few months, but it is still one of the few bots that can perform high-frequency trading while supporting multiple assets at a time. Zenbot is a lightweight, artificially intelligent crypto trading bot that supports popular exchanges such as Kraken, GDAX, Poloniex, Gemini, Bittrex, Quadriga, etc. Surprisingly, according to its GitHub page, Zenbot version 3.5.15 bagged an ROI of 195% in a mere period of three months.

7. 3Commas
3Commas is a popular cryptocurrency trading bot that works well with various exchanges, including Bitfinex, Binance, KuCoin, Bittrex, Bitstamp, GDAX, Huobi, Poloniex, and YoBit. As it is a web-based service, you can always monitor your trading dashboard from desktop, mobile, and laptop computers. The bot works 24/7 and allows you to set take-profit targets and stop-losses, along with a social trading aspect that enables you to copy the strategies used by successful traders. An ETF-like feature allows users to analyze, create, and backtest a crypto portfolio and pick from the top-performing portfolios created by other people.

8. Tradewave
Tradewave is a platform that enables users to develop their own cryptocurrency trading bots, along with automated trading on crypto exchanges. The bot trades in the cloud, and you use Python to write the code directly in the browser. With Tradewave, you don't have to worry about downtime. The bot doesn't force you to keep your computer on 24x7, nor does it glitch if disconnected from the internet. Trading strategies are often shared by community members and can be reused by others. However, it currently supports very few cryptocurrency exchanges, such as Bitstamp and BTC-e, although more exchanges are expected in the coming months.

9. Leonardo
Leonardo is a cryptocurrency trading bot that supports a number of exchanges, such as Bittrex, Bitfinex, Poloniex, Bitstamp, OKCoin, and Huobi. The team behind Leonardo is extremely active, and new upgrades, including plugins, are in the pipeline. Previously it cost 0.5 BTC, but it is currently available for $89 with a single-exchange license. Leonardo ships with two trading strategy bots: a Ping Pong strategy and a Margin Maker strategy. The first lets users set the buy and sell price, leaving everything else to the bot, while the Margin Maker strategy buys and sells at prices adjusted according to the direction of the market. This trading bot stands out in terms of its GUI.

10. USI Tech
USI Tech is a trading bot that is mostly used for forex trading, but it also offers BTC packages. While the majority of trading bots require initial setup and installation, USI takes a different approach and isn't controlled by its users. Users buy in through the company's mining and bitcoin trade connections, and the USI Tech bot then promises a daily profit from the transactions and trades. To earn one percent of the capital daily, customers are advised to choose the feature-rich plans.

11. Cryptohopper
Cryptohopper is a 24/7 cloud-based trading bot, which means it doesn't matter whether you are at your computer or not. Its system enables users to trade on technical indicators, with a subscription to a signaler who sends buy signals. According to Cryptohopper's website, it is the first crypto trading bot integrated with professional external signals. The bot helps in leveraging bull markets and has a new dashboard area where users can monitor and configure everything. The dashboard also includes a configuration wizard for the major exchanges, including Bittrex, GDAX, Kraken, etc.

12. My Bitcoin Bot
MBB is a team effort from Brad Sheridon and his proficient teammates, who are experts in cryptocurrency investment. My Bitcoin Bot is automated trading software that can be accessed by anyone who is ready to pay for it. While the monthly plan is $39 a month, the yearly subscription for this auto-trader bot is available for $297. My Bitcoin Bot comes with heaps of advantages, such as unlimited technical support, free software updates, and access to a trusted brokers list.

13. Crypto Arbitrager
A standalone application that operates on a dedicated server, Crypto Arbitrager can run its robots even when your PC is off. The developers behind this cryptocurrency trading bot claim that the software uses code integration of financial time series. Users can make money from the difference in the rates of Litecoin and Bitcoin. By implementing an advanced strategy used by hedge funds, the trading bot manages users' savings regardless of the state of the cryptocurrency market.

14. Crypto Robot 365
Crypto Robot 365 automatically trades your digital currency. It buys and sells popular cryptocurrencies such as Ripple, Bitcoin, Ethereum, Litecoin, and Monero. Rather than a signup fee, this platform charges its commission on a per-trade basis. The platform is FCA-regulated and offers a realistic, achievable win ratio. Users can tweak the system according to their trading needs. Moreover, it has an established trading history and even offers risk management options.

Down the line
While cryptocurrency trading is no piece of cake, trading with bots may be confusing for many. The aforementioned trading bots are used by many and each is backed by years of extensive hard work. With reliability, trustworthiness, smart work, and proactiveness being the top reasons for choosing any cryptocurrency trading bot, picking one is a hefty task. I recommend you experiment with a small amount of money first and, if fortune smiles on you, pick the trading bot that best suits your way of making money via cryptocurrency.

About the author
Rameez Ramzan is a Senior Digital Marketing Executive at Cubix, a mobile app development company. He specializes in link building, content marketing, and site audits to help sites perform better. He is a tech geek and loves to dwell on tech news.
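As promised in the Gekko entry above, here is a minimal sketch of what a custom strategy module for an open-source bot like Gekko roughly looks like. It is illustrative only: the method names (init, update, check) and helpers (addIndicator, advice) follow Gekko's documented strategy interface at the time of writing, but settings and indicator names may differ between versions, so treat it as a sketch rather than a drop-in file, and certainly not as trading advice.

```javascript
// sma-crossover.js - illustrative Gekko-style strategy sketch
var strat = {};

// Called once before the backtest or live run starts.
strat.init = function () {
  // Two simple moving averages: a fast one and a slow one.
  // Window lengths come from the strategy settings, with fallbacks for the sketch.
  this.addIndicator('fast', 'SMA', this.settings.fast || 10);
  this.addIndicator('slow', 'SMA', this.settings.slow || 50);
};

// Called on every new candle; indicators are updated automatically.
strat.update = function (candle) {
  // No extra bookkeeping needed for this simple example.
};

// Called after update; this is where the strategy gives advice.
strat.check = function (candle) {
  var fast = this.indicators.fast.result;
  var slow = this.indicators.slow.result;

  if (fast > slow) {
    this.advice('long');   // fast average above slow: suggest buying/holding
  } else if (fast < slow) {
    this.advice('short');  // fast average below slow: suggest selling
  }
  // If the averages are equal, give no advice this candle.
};

module.exports = strat;
```

A backtest then simply replays historical candles through update and check and tallies the trades the advice would have produced.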
Crypto-ML, a machine learning powered cryptocurrency platform
Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief
Apple changes app store guidelines on cryptocurrency mining


Uber's kepler.gl, an open source toolbox for GeoSpatial Analysis

Pravin Dhandre
28 Jun 2018
4 min read
Geographic visualization, also called geovisualization, plays a pivotal role in areas like cartography, geographic information systems, remote sensing, and global positioning systems. Uber, a peer-to-peer transportation network company headquartered in California, believes in data-driven decision making and keeps developing smart frameworks like deck.gl for exploring and visualizing advanced geospatial data at scale. Uber strives to make this data web-based and shareable in real time across its teams and customers. Early this month, Uber surprised the geospatial market with its newly open-sourced toolbox, kepler.gl, a geoanalytics tool for gaining quick insights from geospatial data through intuitive visualizations.

What exactly is kepler.gl?

kepler.gl is a visualization-rich web platform, developed on top of deck.gl, a WebGL-powered data visualization library providing real-time visual analytics of millions of geolocation points. The platform provides visual exploration of geographical data sets along with spatial aggregation of all data points collected. The platform is said to be data-agnostic, with a single interface to convert your data into insightful visualizations.

https://www.youtube.com/watch?v=i2fRN4e2s0A

The platform is very user-friendly: you can simply drag CSV or GeoJSON files and drop them into the browser to visualize a dataset intuitively. It supports different map layers, filtering options, and aggregation features, through which you can render the final visualization in an animated format, almost like a video. The usability of the features is so high that you can apply all the available metrics to your data points without much hassle. The web platform performs well, too: you can get insights from your spatial data in less than 10 minutes, all in a single window. Another advantage of the framework is that it does not involve any coding, so non-technical users can also reap the benefits by churning out valuable insights from their data points.

The platform is also equipped with some advanced, complex features, such as a 2D cartographic plane, a separate dimension for altitude, and visibility of the height of hexagons and grids. Users seem happy with the new height feature, which helps them detect abnormalities and illicit traits in an aggregated map. With the filtering menu, analysts and engineers can compare their data and take a granular look at their data points. This option also helps in reading the histogram well, so you can easily detect outliers and make your dataset more reliable. There is also a feature to add playback to time-series data points, which makes extracting useful information from real-time location systems easy.

The team at Uber looks at this toolbox with a long-term vision: they plan to keep adding new features and enhancements to make it highly functional, a single-click visualization dashboard. The team has already announced two major enhancements to the current functionality in the next couple of months:

More robust exploration: There will be interlinkage between charts and maps, and support for custom charts, maps, and widgets like the renowned BI tool Tableau, which will help analytics teams unveil deeper insights.

Addition of newer geo-analytical capabilities: To support massive datasets, there will be added features for data operations such as polygon aggregation, union of data points, and operations like joining and buffering.

Companies across different verticals, such as Airbnb, Atkins Global, Cityswifter, and Mapbox, have found great value in kepler.gl's offerings and are looking to engineer their products to leverage this framework. The visualization specialists at these companies have already praised Uber for building such a simple yet fast platform with remarkable capabilities. To get started with kepler.gl, read the documentation available on GitHub and start creating visualizations to enhance your geospatial data analysis.

Top 7 libraries for geospatial analysis
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
Data Visualization with ggplot2
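For developers who want to go beyond the drag-and-drop demo app, kepler.gl can also be embedded as a React component backed by a Redux store. The sketch below assumes the npm package layout documented around the time of this article (kepler.gl/reducers, kepler.gl/actions, kepler.gl/processors, and react-palm's taskMiddleware) plus a Mapbox access token of your own; check the current README, as the API may have moved on since.

```javascript
// Minimal sketch: embed kepler.gl in a React + Redux app and load a small CSV dataset.
import React, { Component } from 'react';
import ReactDOM from 'react-dom';
import { createStore, combineReducers, applyMiddleware } from 'redux';
import { Provider, connect } from 'react-redux';
import { taskMiddleware } from 'react-palm/tasks';

import KeplerGl from 'kepler.gl';
import keplerGlReducer from 'kepler.gl/reducers';
import { addDataToMap } from 'kepler.gl/actions';
import { processCsvData } from 'kepler.gl/processors';

// kepler.gl keeps all of its state under the `keplerGl` key of the Redux store.
const reducers = combineReducers({ keplerGl: keplerGlReducer });
const store = createStore(reducers, {}, applyMiddleware(taskMiddleware));

// A tiny inline dataset; in practice this would be your own CSV or GeoJSON file.
const sampleCsv = `lat,lng,fare
40.7484,-73.9857,12.5
40.6892,-74.0445,31.0`;

class Map extends Component {
  componentDidMount() {
    // addDataToMap pushes a dataset (and optional map config) into the store.
    this.props.dispatch(
      addDataToMap({
        datasets: {
          info: { id: 'sample-trips', label: 'Sample trips' },
          data: processCsvData(sampleCsv),
        },
        options: { centerMap: true },
      })
    );
  }

  render() {
    return (
      <KeplerGl
        id="map"
        mapboxApiAccessToken="YOUR_MAPBOX_TOKEN" // assumption: supply your own token
        width={window.innerWidth}
        height={window.innerHeight}
      />
    );
  }
}

const ConnectedMap = connect()(Map);

ReactDOM.render(
  <Provider store={store}>
    <ConnectedMap />
  </Provider>,
  document.getElementById('root')
);
```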


Machine generated videos like Deepfakes - Trick or Treat?

Natasha Mathur
30 Oct 2018
3 min read
A Reddit user named "DeepFakes" posted realistic-looking explicit videos of celebrities last year. He made use of deep learning techniques to insert celebrities' faces into adult movies. Since then, the term "deepfakes" has been used to describe deep learning techniques that help create realistic-looking fake videos or images. Video tampering of this kind is usually done using generative adversarial networks.

Why is everyone afraid of deepfakes?

Deepfakes are problematic because they make it very hard to differentiate between fake and real videos or images. This gives people the liberty to use deepfakes to promote harassment and illegal activities. The most common uses of deepfakes are found in revenge porn, fake celebrity videos, and political abuse. For instance, people create face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. This not only counts as cyberbullying but poses a major threat overall, as someone could create a fake video showing world leaders declaring war on a country. Moreover, given that deepfakes seem so real, their victims often suffer feelings of embarrassment and shame. Deepfakes also cause major reputational harm. One such example is that of 24-year-old Noelle Martin, whose battle with deepfake pornography started six years ago. Anonymous predators stole her non-sexual images online and then doctored them into pornographic videos. Martin says she faces harassment from people to this day. Other victims of deepfake pornography include celebrities such as Michelle Obama, Emma Watson, Natalie Portman, Ivanka Trump, Kate Middleton, and so forth. But deepfakes aren't just limited to pornography and have made their way into many other spheres. Deepfakes can also be used as a weapon of misinformation, since they can be used to maliciously hoax governments and populations and cause internal conflict. From destroying careers by creating fake evidence of someone doing something inappropriate to showing soldiers killing innocent civilians, deepfakes have been wreaking havoc.

In defense of deepfakes

Just as any tool can be used for good and bad, deepfakes are simply an effective machine learning technique for creating realistic videos. Even though deepfakes are mostly used for inappropriate activities, some have put the technique to good use. For instance, GANs, or generative adversarial networks (which help create deepfakes), can create realistic images of skin lesions and examples of liver lesions, which plays a major role in medical research. Other examples include filmmakers using deepfakes to make videos with swapped-in backgrounds, Snapchat face-swap photo filters, and face-swap e-cards (for example, the JibJab app), among others.

Are deepfakes trick or treat?

If we make a pros and cons list for deepfakes, the cons seem to outweigh the pros as of today. Although the technique has potentially good applications, it is mostly used as a tool for harassing and misinforming people. There is a long way to go before deepfakes earn a good reputation; right now, it is mostly fake videos, fake images, false danger warnings, and revenge porn. Trick or treat? I spy a total TRICK!
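For readers curious about the mechanism behind the phrase "generative adversarial networks" above: in the original GAN formulation (Goodfellow et al., 2014), a generator G and a discriminator D are trained against each other in a minimax game. This is general background rather than a description of any specific deepfake tool, many of which also rely on autoencoder-based face swapping.

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D is rewarded for telling real samples x from generated samples G(z), while the generator is rewarded for fooling it; trained toward equilibrium, G produces samples that are hard to distinguish from real data, which is what makes convincing fake imagery possible.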


What Blockchain Means for Security

Lauren Stephanian
02 Oct 2017
5 min read
It is estimated that hacks and security flaws cost the US over $445 billion every year. It is clear at this point that the cost of hacking attacks and ransomware has increased and will continue to increase year by year. Therefore, industries—especially those that require large amounts of important data—will need to invest in technologies to remain secure. By design, blockchain is theoretically a secure means of storing data. Each transaction is detailed on an immutable ledger, which serves to prevent and detect any form of tampering. Besides this, blockchain also eliminates the need for verification by trusted third parties, which can come at high cost. But is this a promise the technology has yet to fulfill, or is it part of the security revolution of the future we so desperately need?

How blockchain is resolving security issues

One security issue that can be resolved by blockchain relates to the fact that many industries rely heavily on "cloud and on-demand services, where our data is accessed and processed by untrusted third parties." There are also many situations where parties may want to work jointly on data without revealing their own portion to untrusted entities. Blockchain can be used to create a system where users can jointly store data and also remain anonymous. In this case, blockchain can be used to record time-stamped events that can't be removed, so in the case of a cyber attack, it is easy to see where it came from. The Enigma Project, originally developed at MIT, is a good example of this use case.

Another issue that blockchain can improve is data tampering. There have been a number of cyber attacks where the attackers don't delete or steal data, but alter it. One infamous example of this is the Stuxnet malware, which severely and physically damaged Iran's nuclear program. If such data were recorded on a blockchain, the transactions would be marked and could not be altered or covered up, and therefore hackers would not be able to hide their tracks.

Blockchain's security vulnerabilities

The inalterability of blockchain and its decentralization clearly have many advantages; however, they do not entirely remove the possibility of data being altered. It is possible to introduce data unrelated to transactions into a blockchain, and this data could expose the blockchain to malware. The extent to which malware could impact an entire blockchain and all its data is not yet known; however, there have been some instances of proven vulnerabilities. One such proven vulnerability is Vitaly Kamluk's proof-of-concept software, which could take information from a hacker's Bitcoin address and essentially pull malicious data and store it on the blockchain.

Private vs. public blockchain implementations

When assessing security risks in blockchain technology, it is also important to understand the difference between private and public implementations. On public blockchains, anyone can read or write transactions, and anyone can aggregate those transactions and publish them if they are able to solve a cryptographic puzzle. Solving these puzzles takes a lot of computing power, and therefore a large amount of energy. This leads to a market where most of the transaction aggregation and puzzle solving is done in countries where energy is cheapest, which in turn leads to centralization and potential collusion. Private blockchains, in comparison, give the network operator control over who can read from and write to the ledger.
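To make the tamper-evidence argument above concrete, here is a small, self-contained sketch (plain Node.js, not any real blockchain) of a hash-chained ledger: each entry commits to the hash of the previous one, so editing any historical record invalidates every later hash. It is a toy illustration of the principle only; production blockchains add consensus, proof-of-work or proof-of-stake, and peer-to-peer replication on top.

```javascript
// toy-ledger.js - illustrative hash-chained ledger (Node.js)
const crypto = require('crypto');

function sha256(text) {
  return crypto.createHash('sha256').update(text).digest('hex');
}

// Each block commits to its data, a timestamp, and the previous block's hash.
function makeBlock(data, previousHash) {
  const timestamp = Date.now();
  const hash = sha256(previousHash + timestamp + JSON.stringify(data));
  return { data, timestamp, previousHash, hash };
}

function appendBlock(chain, data) {
  const previousHash = chain.length ? chain[chain.length - 1].hash : '0'.repeat(64);
  chain.push(makeBlock(data, previousHash));
  return chain;
}

// Recompute every hash; any edit to an earlier block breaks the chain.
function verify(chain) {
  return chain.every((block, i) => {
    const expectedPrev = i === 0 ? '0'.repeat(64) : chain[i - 1].hash;
    const recomputed = sha256(block.previousHash + block.timestamp + JSON.stringify(block.data));
    return block.previousHash === expectedPrev && block.hash === recomputed;
  });
}

const ledger = [];
appendBlock(ledger, { from: 'alice', to: 'bob', amount: 10 });
appendBlock(ledger, { from: 'bob', to: 'carol', amount: 4 });
console.log('valid before tampering:', verify(ledger)); // true

ledger[0].data.amount = 1000; // an attacker quietly edits history...
console.log('valid after tampering:', verify(ledger)); // false - the edit is detected
```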
In the case of Bitcoin in particular, ownership is proven through a private key linked to a transaction, and just like physical money, these keys can easily be lost or stolen. One estimate puts the value of lost bitcoins at $950 million. There are many pros and cons to consider when deciding whether or not to use blockchain. It is important to note that the most valuable thing blockchain provides is the ability to track who committed a particular transaction—for good or for bad—and when. There are some security measures with which it would certainly help a great deal, especially when it comes to tracking what information was breached, altered, or stolen. However, it is not an end-all-be-all for keeping data secure. If blockchain is to be used to store important data, such as financial information or client health records, it should be wrapped in a layer of other cybersecurity software.

Lauren Stephanian is a software developer by training and an analyst for the structured notes trading desk at Bank of America Merrill Lynch. She is passionate about staying on top of the latest technologies and understanding their place in society. When she is not working, programming, or writing, she is playing tennis, traveling, or hanging out with her good friends in Manhattan or Brooklyn. You can follow her on Twitter or Medium at @lstephanian or via her website.


Apple USB Restricted Mode: Here's Everything You Need to Know

Amarabha Banerjee
15 Jun 2018
4 min read
You have probably heard about the incident in which the FBI sought to unlock the iPhone of a mass shooting suspect (one of the attackers in the San Bernardino shooting in 2015). The feds could not unlock the phone, as Apple didn't budge from its stand of protecting user data. A few days later, police said they had found a private agency to open the phone. The seed of that feud between the feds and Apple has since grown into a full tree. This month, Apple announced a new security feature called USB Restricted Mode, which disables the device's Lightning port after one hour of being locked. Quite expectedly, law enforcement agencies are not at ease with this particular development. The feature was first introduced in the iOS 11.3 release and then retracted in the next release, but Apple now plans to introduce it in the upcoming iOS 12 beta release. The reason, as stated by Apple, is to protect user data from third-party hackers and malware that have the potential to access iPhone data remotely.

You must be wondering to what extent these threats are genuine, and whether this will mean locking yourself out of your phone unwittingly with nothing to get you out of the situation. The answer is multilayered. Firstly, if you are not an avid supporter of data privacy and feel you have nothing to hide, then this move might just annoy you for a while. You might worry about the times when your phone is locked and you suddenly forget your passkey. Pretty simple: write it down somewhere safe and remember where you have kept it. But if you are like me, you keep seeing recent news of user data being hacked, and that worries you. Users are being profiled by different companies for varying objectives, from selling products to shaping your opinion about politics and other aspects of your life. In that case, this news might make you a bit more comfortable about your next iOS update.

Private agencies coming up with solutions to open locked iPhones worried Apple. Companies like Cellebrite and Grayshift are selling devices that can hack any locked Apple device (iPhone and iPad) by using the Lightning port. The apparent price of one such device is around 15,000 USD. What prompted Apple to introduce this security feature was that government agencies were buying these devices on a regular basis to break into devices. The threat was real, and the only way to address the fears of over 700 million iPhone users seemed to be introducing USB Restricted Mode. The war, however, is just beginning. Third-party companies are already claiming that they have devised a way to overcome this new security feature, though this is yet unconfirmed. Apple is sure to take cognizance of this and press its developers to stay ahead in this cat-and-mouse game. The move has not gone down well with law enforcement agencies either; they see it as an attempt by Apple to create more hurdles to preventing serious and heinous crimes such as paedophilia. Their side of the argument is that with the one-hour timer starting when the user locks the phone, it becomes much harder to indict the guilty, who have more room to escape.

What do you think this means? Does it give you more faith in your Apple product, and will it really compel you to buy that $1,200 iPhone with the confidence that your banking data, personal messages, pictures, and other sensitive data are safe in the hands of Apple? Or will it embolden the perpetrators of crime, confident that their activities are protected not just by a passkey but by an hour's window after they lock the device, after which it becomes a black box? No matter what your thoughts are, the war between hackers and Apple is on. If you belong to either of these communities, these are exciting times. If you are one of the 700 million Apple users, you can feel a bit more secure after the iOS 12 update rolls out.

Apple changes app store guidelines on cryptocurrency mining
Apple introduces macOS Mojave with UX enhancements like voice memos, redesigned App Store
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others


Shift to Swift in 2017

Shawn Major
27 Jan 2017
3 min read
It's a great time to be a Swift developer, because this modern programming language has a lot of momentum and community support behind it and a big future ahead of it. Swift became a real contender when it went open source in December 2015, giving developers the power to build their own tools and port it into the environments in which they work. The release of Swift 3 in September 2016 really shook things up by enabling broad-scale adoption across multiple platforms, including portability to Linux/x86, Raspberry Pi, and Android. Swift 3 is the "spring cleaning" release that, while not backwards compatible, has resulted in a massively cleaner language and ensured sound and consistent language fundamentals that will carry across to future releases. If you're a developer using Swift, the best thing you can do is get on board with Swift 3, as the next release promises to deliver stability from 3.0 onwards. Swift 4 is expected to be released in late 2017 with the goals of providing source stability for Swift 3 code and ABI stability for the Swift standard library.

Despite the shake-up that came with the new release, developers are still enthusiastic about Swift: it was one of the "most loved" programming languages in Stack Overflow's 2015 and 2016 Developer Surveys. Swift was also one of the top three trending techs in 2016, as it has been stealing market share from Objective-C. The keen interest that developers have in Swift is reflected by the 35,000+ stars it has amassed on GitHub and the impressive amount of ongoing collaboration between its core team and the wider community. Rumour has it that Google is considering making Swift a "first class" language and that Facebook and Uber are looking to make Swift more central to their operations. Lyft's migration of its iOS app to Swift in 2015 shows that the lightness, leanness, and maintainability of the code are worth it, and services like the web server and toolkit Perfect are proof that server-side Swift is ready.

People are starting to do some cool and surprising things with Swift, including:

Shaping the language itself. Apple has made a repository on GitHub called swift-evolution that houses proposals for enhancements and changes to the Swift language.

Bringing Swift 3 to as many ARM-based systems as possible. For example, you can get Swift 3 for all the Raspberry Pi boards, or you can program a robot in Swift on a BeagleBone.

Adopting Swift on the server. IBM has adopted Swift as the core language for its cloud platform, which opens the door to radically simpler app development: developers will be able to build the next generation of apps in native Swift from end to end, deploy applications with both server and client components, and build microservice APIs in the cloud. The Swift Sandbox lets developers of any level of experience actively build server-based code; since launching, it has had over 2 million code runs from over 100 countries.

We think there are going to be a lot of exciting opportunities for developers to work with Swift in the near future. The iOS Developer Skill Plan on Mapt is perfect for diving into Swift, and we have plenty of Swift 3 books and videos if you have more specific projects in mind. The large community of developers using iOS/OSX and making libraries, combined with the growing popularity of Swift as a general-purpose language, makes jumping into Swift a worthwhile venture. Interested in what other developers have been up to across the tech landscape?
Find out in our free Skill Up: Developer Talk report on the state of software in 2017.

DevOps might be the key to your Big Data project success

Ashwin Nair
11 Oct 2017
5 min read
So, you probably believe in the power of Big Data and the potential it has to change the world. Your company might have already invested in, or is planning to invest in, a big data project. That's great! But what if I were to tell you that only 15% of businesses have successfully deployed their Big Data projects to production? That can't be a good sign, surely! Now, don't just go freeing up your Big Data budget. Not yet.

Big Data's big challenges

For all the hype around Big Data, research suggests that many organizations are failing to leverage its opportunities properly. A recent survey by NewVantage Partners, for example, explored the challenges facing organizations currently running their own Big Data projects or trying to adopt them. Here's what they had to say: "In spite of the successes, executives still see lingering cultural impediments as a barrier to realizing the full value and full business adoption of Big Data in the corporate world. 52.5% of executives report that organizational impediments prevent realization of broad business adoption of Big Data initiatives. Impediments include lack of organizational alignment, business and/or technology resistance, and lack of middle management adoption as the most common factors. 18% cite lack of a coherent data strategy."

Clearly, even some of the most successful organizations are struggling to get a handle on Big Data. Interestingly, it's not so much about gaps in technology or even skills, but rather the lack of culture and organizational alignment that's making life difficult. This isn't actually that surprising. The problem of managing the effects of technological change goes far beyond Big Data - it's impacting the modern workplace in just about every department, from how people work together to how you communicate and sell to customers.

DevOps distilled

It's out of this scenario that we've seen the irresistible rise of DevOps. DevOps, for the uninitiated, is an agile methodology that aims to improve the relationship between development and operations. It aims to ensure fluid collaboration between teams, with a focus on automating and streamlining monotonous and repetitive tasks within a given development lifecycle, thus reducing friction and saving time. We can begin to see, then, that this approach - usually applied in typical software development scenarios - might actually offer a solution to some of the problems faced in big data.

A typical Big Data project

Like a software development project, a Big Data project will have multiple teams working on it in isolation. For example, a big data architect will look into the project requirements and design a strategy and roadmap for implementation, while the data storage and admin team will be dedicated to setting up a data cluster and provisioning infrastructure. Finally, you'll probably find data analysts who process, analyze, and visualize data to gain insights. Depending on the scope and complexity of your project, it is possible that more teams are brought in - say, data scientists roped in to train and build custom machine learning models.

DevOps for Big Data: a match made in heaven

Clearly, there are a lot of moving parts in a typical Big Data project, with each role performing considerably complex tasks. By adopting DevOps, you'll reduce any silos that exist between these roles, breaking down internal barriers and embedding Big Data within a cross-functional team. It's also worth noting that this move doesn't just give you an operational efficiency advantage - it also gives you much more control and oversight over strategy. By building a cross-functional team, rather than asking teams to collaborate across functions (which sounds good in theory but always proves challenging), there is a much more acute sense of a shared vision or goal. Problems can be solved together; discussions can take place constantly and effectively. With the operational problems minimized, everyone can focus on the interesting stuff.

By bringing DevOps thinking into big data, you also set the foundation for what's called continuous analytics. Taking the principle of continuous integration, fundamental to effective DevOps practice, whereby code is integrated into a shared repository after every task or change to ensure complete alignment, continuous analytics streamlines the data science lifecycle by ensuring a fully integrated approach to analytics, where as much as possible is automated through algorithms. This takes away the boring stuff, once again ensuring that everyone within the project team can focus on what's important.

We've come a long way from Big Data being a buzzword - today, it's the new normal. If you've got a lot of data to work with, analyze, and understand, you'd better make sure you have the right environment set up to make the most of it. That means there's no longer an excuse for Big Data projects to fail, and certainly no excuse not to get one up and running. If it takes DevOps to make Big Data work for businesses, then it's a MINDSET worth cultivating and running with.


Is Initiative Q a pyramid scheme or just a really bad idea?

Richard Gall
25 Oct 2018
5 min read
If things seem too good to be true, they probably are. That's a pretty good motto to live by, and one that's particularly pertinent in the days of fake news and crypto-bubbles. However, it seems like advice many people haven't heeded with Initiative Q, a new 'payment system' developed by the brains behind PayPal technology. That's not to say that Initiative Q certainly is too good to be true. But when an organisation appears to be offering hundreds of thousands of dollars to users who simply hand over an email address and then encourage others to hand over theirs, caution is essential. If it looks like a pyramid scheme, do you really want to risk the chance that it might just be a pyramid scheme?

What is Initiative Q?

Initiative Q is, according to its founders, "tomorrow's payment network." Its website says that current methods of payment, such as credit cards, are outdated: they open up the potential for fraud and other bad business practices, and they aren't particularly efficient. Initiative Q claims it is going to develop an alternative to these systems "which aggregate the best ideas, innovations, and technologies developed in recent years." It isn't specific about which ideas and technological innovations it's referring to, but if you read through the payment model it wants to develop, there are elements that sound a lot like blockchain. For example, it talks about using more accurate methods of authentication to minimize fraud, and improving customer protection by "creating a network where buyers don't need to constantly worry about whether they are being scammed" (the extent to which this turns out to be deliciously ironic remains to be seen). To put it simply, it's a proposed new payment system that borrows lots of good ideas that still haven't been shaped into a coherent whole. Compelling, yes, but alarm bells are probably sounding.

Who's behind Initiative Q?

There are very few details on who is actually involved in Initiative Q. The only names attached to the project are Saar Wilf, an entrepreneur who founded Fraud Sciences, a payment technology company that was bought by PayPal in 2008, and Lawrence White, Professor of Monetary Theory and Policy at George Mason University. The team should grow, however. Once the number of members has grown to a significant level, the Initiative Q team say, "we will continue recruiting the world's top professionals in payment systems, macroeconomics, and Internet technologies."

How is Initiative Q supposed to work?

Initiative Q explains that getting the world to adopt a new payment network is a huge challenge - a fair comment, because for it to work at all, you need actors within that network who believe in it and trust it. This is why the initial model - which looks and feels a hell of a lot like a pyramid or Ponzi scheme - is, according to Initiative Q, so important. To make this work, you need a critical mass of users. Initiative Q actually defends itself from accusations that it is a pyramid scheme by pointing out that there's no money involved at this stage. All that happens is that when you sign up, you receive a specific number of 'Qs' (the name of the currency Initiative Q is proposing). These Qs obviously aren't worth anything at the moment. The idea is that when the project actually does reach critical mass, they will take on actual value.

Isn't Initiative Q just another cryptocurrency?

Initiative Q is keen to stress that it isn't a cryptocurrency. That said, on its website the project urges you to "think of it as getting free bitcoin seven years ago." But the website does go into a little more detail elsewhere, explaining that "cryptocurrencies have failed as currencies" because they "focus on ensuring scarcity" while neglecting to consider how people might actually use them in the real world. The implication, then, is that Initiative Q is putting adoption first. Presumably, that's one of the reasons it has decided to go with such an odd acquisition strategy. Ultimately, though, it's too early to say whether Initiative Q is or isn't a cryptocurrency in the strictest (i.e. fully decentralized) sense. There simply isn't enough detail about how it will work. Of course, there are reasons why Initiative Q doesn't want to be seen as a cryptocurrency. From a marketing perspective, it needs to look distinctly different from the crypto-pretenders of the last decade.

Initiative Q: pyramid scheme or harmless vaporware?

Because no money is exchanged at any point, it's difficult to call Initiative Q a Ponzi or pyramid scheme. In fact, it's actually quite hard to know what to call it. As David Gerard wrote in a widely shared post from June, published when Initiative Q had its first viral wave, "the Initiative Q payment network concept is hard to critique — because not only does it not exist, they don't have anything as yet, except the notion of 'build a payment network and it'll be awesome.'" But while it's hard to critique, it's also pretty hard to say that it's actually fraudulent. In truth, at the moment it's relatively harmless. However, as David Gerard points out in the same post, if the data of those who signed up is hacked - or even sold (although the organization says it won't do that) - that's a pretty neat database of people who'll offer up their details in return for some empty promises of future riches.


Emoji Scavenger Hunt showcases TensorFlow.js

Richard Gall
03 Apr 2018
3 min read
What is Emoji Scavenger Hunt?

Emoji Scavenger Hunt is a game built using neural networks. Developed by Google using TensorFlow.js, a version of the machine learning library designed to run in browsers, the game showcases how machine learning can be brought to web applications. But more importantly, TensorFlow.js, which was announced at the end of March at the TensorFlow Developer Summit, looks like it could be a tool to define the next few years of web development, making machine learning more accessible to JavaScript developers than ever before. Start playing now.

At the moment, Emoji Scavenger Hunt is pretty basic, but the central idea is pretty cool. When you open up the web page in your browser and click 'Let's Play', the app asks for access to your camera. The game then starts: you'll see a countdown before your camera opens, and the web application asks you to find an example of an emoji in the real world. If you find yourself easily irritated, you're probably not going to get addicted, though Google seem to have done their best to cultivate an emoji-esque mise en scène. The game nevertheless highlights not only how neural networks work but also, in the context of TensorFlow.js, how they might operate in a browser. Of course, one of the reasons Emoji Scavenger Hunt is so basic is that a core part of the game is training the neural network. Presumably, as more people play it, the neural network will improve at 'guessing' which objects in the real world relate to which emoji on your keyboard.

TensorFlow.js will bring machine learning to the browser

What's exciting is how TensorFlow.js might help shape the future of web development. It's going to make it much easier for JavaScript developers to get started with machine learning - on Reddit, a number of users were thankful that they could now use TensorFlow without touching a line of Python code. On the other hand - perhaps a little less likely - TensorFlow.js might lead to more machine learning developers using JavaScript. If games like Emoji Scavenger Hunt become the norm, engineers and data scientists will have a new way to train algorithms: getting users to do it for them.

TensorFlow.js and deeplearn.js

Eagle-eyed readers who have been watching TensorFlow closely might be thinking: what about deeplearn.js? Fortunately, the TensorFlow team have an answer: TensorFlow.js is the successor to deeplearn.js, which is now called TensorFlow.js Core.

TensorFlow.js and the future of machine learning

The announcement of TensorFlow.js highlights that Google and the core development team behind TensorFlow have a clear focus on the future. They're already the definitive library for machine learning and deep learning. What this will do is spread that dominance into new domains. Emoji Scavenger Hunt is pointing the way - we're sure to see plenty of machine learning imitators and innovators over the next few years.
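As a taste of what running a model in the browser looks like, here is a minimal sketch that classifies webcam frames with the pre-trained MobileNet model. It assumes the @tensorflow/tfjs and @tensorflow-models/mobilenet npm packages (or their script-tag builds) and a page containing a <video id="webcam"> element; it is a simplified illustration, not the actual Emoji Scavenger Hunt code, which Google has open sourced separately.

```javascript
// Minimal sketch: classify webcam frames in the browser with TensorFlow.js + MobileNet.
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function main() {
  // Ask the browser for camera access and wire the stream into a <video> element.
  const video = document.getElementById('webcam');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Load the pre-trained MobileNet image classifier (weights are fetched over the network).
  const model = await mobilenet.load();

  // Classify the current frame roughly once per second and log the top guesses.
  setInterval(async () => {
    const predictions = await model.classify(video);
    predictions.forEach(p =>
      console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`)
    );
  }, 1000);
}

main();
```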


Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of companies worldwide have fully adopted DevOps, while 14% are still in the consideration stage. Amazon, Netflix, and Target are a few of the companies that have attained success with DevOps. Amazon's move to Amazon Web Services gave it the ability to scale server capacity up or down as needed, allowing its engineers to deploy their own code whenever they wanted to. This resulted in continuous deployment, reducing both the duration and the number of outages experienced by companies using AWS. Netflix used DevOps to improve its cloud infrastructure and to ensure smooth streaming of videos online.

When you say "we have adopted DevOps in our enterprise," what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, thus reducing the time to market for your end product. The questions that come next are: How do you measure the true success of DevOps in your organization? Have you been working on the right metrics all along?

Let's first talk about how DevOps is commonly measured in organizations. It is all about uptime, transactions per second, bugs fixed, commits, and other operational and productivity metrics. This is what most organizations tend to look at when they talk about DevOps metrics. But are these the right DevOps metrics?

For a while, companies have been working with the set of metrics discussed above to determine the success of DevOps. However, these are not the right metrics and should not be considered in isolation. A metric is an indicator of the performance of DevOps, and no single indicator will determine success. Your metrics might differ based on the data you collect. You will end up collecting large volumes of data, but not every piece of data available can be converted into a metric. Here's how you can determine the metrics for your DevOps.

Avoid using too many metrics

You should use ten metrics at the most; we suggest using fewer than ten, in fact. The fewer the metrics used, the better your judgment will be. You should broaden your perspective when choosing the metrics. It is important to choose metrics that account for overall organizational health and don't just take into consideration operational and development data.

Metrics that connect with your organization

What is the ultimate aim of your organization? How would you determine that your organization is successful? The answers to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and overall operational efficiency. You will need to choose metrics that help you assess these two values.

Tie the metrics to your goals

As a businessperson, you are more concerned with customer attrition, bad feedback, and non-returning customers than with the lines of code that go into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or its downtime, the true concern is the customer's abandonment of your website.

Causes that affect DevOps

While business metrics help you measure success to a certain extent, there are certain things that affect the operations and development teams themselves. You will need to check these causes and go to the root to understand how they affect the DevOps teams and what needs to be done to create a balance between the development and operational teams.

Next, let's talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring success.

Velocity

With most enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get updates out to users in the quickest way possible without compromising on security or reliability. You stay competitive, offer new features, and boost customer retention. The two variables that help measure this tangible metric are deployment frequency and deployment lead time. The former measures the frequency of releases, and the latter measures the speed at which the team commits a code change and pushes out the update.

Service quality

Service quality directly impacts the goals set forth by the organization and is intangible. The idea is to maintain service quality throughout the releases and changes made to the application. The variables that determine this metric include change failure rate, number of support tickets, and MTTR (mean time to recovery). When a release leads to an error or fault in the application, that counts toward the change failure rate. When bugs or performance issues in your releases are reported, they show up in the number of support tickets. MTTR measures the time taken to resolve issues once they occur. The idea is to be more responsive to the problems faced by customers.

User experience

This is the final metric that impacts the success of your DevOps. You need to check whether all the features and updates you have insisted upon are in sync with user needs. The variables concerned with measuring this aspect include feature usage and business impact. You will need to check how many people from the target audience are using the new feature you have released and determine their personas. You can check the number of sessions, completed transactions, and session durations to quantify the number of people, and check their profiles to get their personas.

Planning your DevOps strategy

It is not easy to roll out DevOps in your organization and expect agility immediately. You need to have a solid strategy, align it to your business goals, and determine effective DevOps metrics to measure the success of your rollout. Planning is of the essence for a thorough rollout of DevOps. It is important to consider all your data when you adopt DevOps in your organization. Make sure you store and analyze every piece of data, and use the data that suits the DevOps metrics you have determined for success. It is important that the DevOps metrics are aligned to your business goals and the objectives you have defined.

About the author: Vishal Virani is the Founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development, and the latest industry trends.
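To make the velocity and service-quality metrics above less abstract, here is a small, self-contained sketch that computes deployment frequency, average lead time, change failure rate, and MTTR from a list of deployment and incident records. The record shapes and sample values are made up for illustration; in practice these numbers would come from your CI/CD and incident-tracking tools.

```javascript
// devops-metrics.js - illustrative calculation of common DevOps metrics (Node.js)

// Hypothetical deployment records: when the change was committed, when it was
// deployed, and whether the deployment caused a failure in production.
const deployments = [
  { committedAt: '2018-09-01T09:00:00Z', deployedAt: '2018-09-01T15:00:00Z', causedFailure: false },
  { committedAt: '2018-09-03T10:00:00Z', deployedAt: '2018-09-04T11:00:00Z', causedFailure: true },
  { committedAt: '2018-09-05T08:30:00Z', deployedAt: '2018-09-05T12:30:00Z', causedFailure: false },
];

// Hypothetical incident records: when the issue was detected and when it was resolved.
const incidents = [
  { detectedAt: '2018-09-04T11:20:00Z', resolvedAt: '2018-09-04T13:50:00Z' },
];

// Hours between two ISO timestamps (36e5 ms per hour).
const hours = (from, to) => (new Date(to) - new Date(from)) / 36e5;

// Velocity: how often we ship and how long a change takes to reach production.
const periodDays = 7;
const deploymentFrequency = deployments.length / periodDays; // deployments per day
const avgLeadTimeHours =
  deployments.reduce((sum, d) => sum + hours(d.committedAt, d.deployedAt), 0) / deployments.length;

// Service quality: how often a change breaks things and how fast we recover.
const changeFailureRate =
  deployments.filter(d => d.causedFailure).length / deployments.length;
const mttrHours =
  incidents.reduce((sum, i) => sum + hours(i.detectedAt, i.resolvedAt), 0) / incidents.length;

console.log(`Deployment frequency: ${deploymentFrequency.toFixed(2)} per day`);
console.log(`Average lead time:    ${avgLeadTimeHours.toFixed(1)} hours`);
console.log(`Change failure rate:  ${(changeFailureRate * 100).toFixed(0)}%`);
console.log(`MTTR:                 ${mttrHours.toFixed(1)} hours`);
```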

Effective Product Development needs developers and product managers collaborating on success metrics

Packt Editorial Staff
04 Aug 2018
16 min read
Modern product development is witnessing a drastic shift. Disruptive ideas and ambiguous business conditions have changed the way products are developed. Product development is no longer guided by existing processes or predefined frameworks. Delivering on time is a baseline metric, as is software quality. Today, businesses are competing to innovate. They are willing to invest in groundbreaking products with cutting-edge technology. Cost is no longer the constraint—execution is. Can product managers then continue to rely upon processes and practices aimed at traditional ways of product building? How do we ensure that software product builders look at the bigger picture and do not tie themselves to engineering practices and technology viability alone? Understanding the business and customer context is essential for creating valuable products. In this article, we are going to identify what success means to us in terms of product development. This article is an excerpt from the book Lean Product Management written by Mangalam Nandakumar. For the kind of impact that we predict our feature idea to have on the Key Business Outcomes, how do we ensure that every aspect of our business is aligned to enable that success? We may also need to make technical trade-offs to ensure that all effort on building the product is geared toward creating a satisfying end-to-end product experience. When individual business functions take trade-off decisions in silo, we could end up creating a broken product experience or improvising the product experience where no improvement is required. For a business to be able to align on trade-offs that may need to be made on technology, it is important to communicate what is possible within business constraints and also what is not achievable. It is not necessary for the business to know or understand the specific best practices, coding practices, design patterns, and so on, that product engineering may apply. However, the business needs to know the value or the lack of value realization, of any investment that is made in terms of costs, effort, resources, and so on. The section addresses the following topics: The need to have a shared view of what success means for a feature idea Defining the right kind of success criteria Creating a shared understanding of technical success criteria "If you want to go quickly, go alone. If you want to go far, go together. We have to go far — quickly." Al Gore Planning for success doesn't come naturally to many of us. Come to think of it, our heroes are always the people who averted failure or pulled us out of a crisis. We perceive success as 'not failing,' but when we set clear goals, failures don't seem that important. We can learn a thing or two about planning for success by observing how babies learn to walk. The trigger for walking starts with babies getting attracted to, say, some object or person that catches their fancy. They decide to act on the trigger, focusing their full attention on the goal of reaching what caught their fancy. They stumble, fall, and hurt themselves, but they will keep going after the goal. Their goal is not about walking. Walking is a means to reaching the shiny object or the person calling to them. So, they don't really see walking without falling      as a measure of success. Of course, the really smart babies know to wail their way to getting the said shiny thing without lifting a toe. 
Somewhere along the way, software development seems to have forgotten about shiny objects and instead focused on how to walk without falling. In a way, this has led to an obsession with following processes without applying them to the context, and with writing perfect code, while disdaining and undervaluing supporting business practices. Although technology is a great enabler, it is not the end in itself. When applied in the context of running a business or creating social impact, technology cannot afford to operate as an isolated function. This is not to say that technologists don't care about impact. Of course, we do. Technologists show a real passion for solving customer problems. They want their code to change lives, create impact, and add value. However, many technologists underestimate the importance of supporting business functions in delivering value. I have come across many developers who don't appreciate the value of marketing, sales, or support. In many cases, like the developer who spent a year perfecting his code without acquiring a single customer, they believe that beautiful code that solves the right problem is enough to make a business succeed. Nothing could be further from the truth.

Most of this thinking is the result of treating technology as an isolated function. A significant gap exists between nontechnical folks and software engineers. On the one hand, nontechnical folks don't understand the possibilities, costs, and limitations of software technology. On the other hand, technologists don't value the need for supporting functions and communicate very little about the possibilities and limitations of technology. This expectation mismatch often leads to unrealistic goals and a widening gap between technology teams and the supporting functions. The result of this widening gap is often cracks opening in the end-to-end product experience for the customer, thereby resulting in a loss of business. Bridging this gap requires that technical teams and business functions communicate in the same language, but first they must communicate.

Setting SMART goals for the team

In order to set the right expectations for outcomes, we need the collective wisdom of the entire team. We need to define and agree upon what success means for each feature and for each business function. This will enable teams to set up the entire product experience for success. Setting specific, measurable, achievable, realistic, and time-bound (SMART) metrics can resolve this. We cannot decouple our success criteria from the impact scores we arrived at earlier. So, let's refer to the following table for the ArtGalore digital art gallery:

The estimated impact rating was an indication of how much impact the business expected a feature idea to have on the Key Business Outcomes. If you recall, we rated this on a scale of 0 to 10. When the estimated impact on a Key Business Outcome is less than five, the success criteria for that feature are likely to be less ambitious. For example, the estimated impact of "existing buyers can enter a lucky draw to meet an artist of the month" toward generating revenue is zero. What this means is that we don't expect this feature idea to bring in any revenue for us; or, put another way, revenue is not the measure of success for this feature idea. If any success criteria for generating revenue do come up for this feature idea, then there is a clear mismatch in terms of how we have prioritized the feature itself.
For any feature idea with an estimated impact of five or above, we need to get very specific about how to define and measure success. For instance, the feature idea "existing buyers can enter a lucky draw to meet an artist of the month" has an estimated impact rating of six toward engagement. This means that we expect an increase in engagement as a measure of success for this feature idea. Then, we need to define what "increase in engagement" means. My idea of "increase in engagement" can be very different from your idea of "increase in engagement." This is where being SMART about our definition of success can be useful.

Success metrics are akin to user story acceptance criteria. Acceptance criteria define what conditions must be fulfilled by the software in order for us to sign off on the success of the user story. Acceptance criteria usually revolve around use cases and acceptable functional flows. Similarly, success criteria for feature ideas must define what indicators can tell us that the feature is delivering the expected impact on the KBO. Acceptance criteria also sometimes deal with NFRs (nonfunctional requirements). NFRs include performance, security, and reliability. In many instances, nonfunctional requirements are treated as independent user stories. I have also seen many teams struggle with expressing the need for nonfunctional requirements from a customer's perspective. In the early days of writing user stories, the tendency for me and most of my colleagues was to write NFRs from a system/application point of view. We would say, "this report must load in 20 seconds," or "in the event of a network failure, partial data must not be saved." These specifications didn't tell us how or why they were important for an end user. Writing user stories forces us to think about the user's perspective. For example, in my team we used to have interesting conversations about why a report needed to load within 20 seconds. This compelled us to think about how the user interacted with our software.

It is not uncommon for visionary founders to throw out very ambitious goals for success. Having ambitious goals can have a positive impact in motivating teams to outperform. However, throwing lofty targets around without having a plan for success can be counterproductive. For instance, it's rather ambitious to say, "Our newsletter must be the first to publish artworks by all the popular artists in the country," or that "Our newsletter must become the benchmark for art curation." These are really inspiring words, but they can mean nothing if we don't have a plan to get there. The general rule of thumb for this part of product experience planning is that when we aim for an ambitious goal, we also sign up to making it happen. Defining success must be a collaborative exercise carried out by all stakeholders. This is the playing field for deciding where we can stretch our goals, and for everyone to agree on what we're signing up to, in order to set the product experience up for success.

Defining key success metrics

For every feature idea we came up with, we can create feature cards that look like the following sample. This card indicates three aspects of what success means for this feature. We are asking these questions: What are we validating? When do we validate this? Which Key Business Outcomes does it help us to validate? The criteria for success demonstrate what the business anticipates as a tangible outcome from a feature.
It also demonstrates which business functions will support, own, and drive the execution of the feature. That's it! We've nailed it, right? Wrong. Success metrics must be SMART, but how specific is specific? The preceding success metric indicates that 80% of those who sign up for the monthly art catalog will enquire about at least one artwork. Now, 80% could mean 80 people, 800 people, or 8,000 people, depending on whether we get 100 sign-ups, 1,000, or 10,000, respectively! We have defined what external (customer/market) metrics to look for, but we have not defined whether we can realistically achieve this goal, given our resources and capabilities. The question we need to ask is: are we (as a business) equipped to handle 8,000 enquiries? Do we have the expertise, resources, and people to manage this? If we don't plan in advance and assign ownership, our goals can lead to a gap in the product experience. When we don't clarify this explicitly, each business function could make its own assumptions. When we say 80% of folks will enquire about one artwork, the sales team is thinking that around 50 people will enquire. This is what the sales team at ArtGalore is probably equipped to handle. However, marketing is aiming for 750 people and the developers are planning for 1,000 people. So, even if we can attract 1,000 enquiries, sales can handle only 50 enquiries a month! If this is what we're equipped for today, then building anything more could be wasteful. We need to think about how we can ramp up the sales team to handle more requests. The idea of drilling into success metrics is to gauge whether we're equipped to handle our success. So, maybe our success metric should be that we expect to get about 100 sign-ups in the first three months and between 40 and 70 folks enquiring about artworks after they sign up. Alternatively, we can find a smart way to enable sales to handle higher volumes.

Before we write up success metrics, we should be asking a whole truckload of questions that determine the before-and-after of the feature. We need to ask the following questions:

- What will the monthly catalog showcase? How many curated art items will be showcased each month?
- What is the nature of the content that we should showcase? Just good high-quality images and text, or is there something more?
- Who will put together the catalog? How long must this person or team spend to create the catalog?
- Where will we source the art for curation?
- Is there a specific date each month when the newsletter needs to go out?
- Why do we think 80% of those who sign up will enquire? Is it because of the exclusive nature of art? Is it because of the quality of presentation? Is it because of the timing? What's so special about our catalog?
- Who handles the incoming enquiries? Is there a number to call or is it via email? How long would we take to respond to enquiries?
- If we get 10,000 sign-ups and receive 8,000 enquiries, are we equipped to handle these? Are these numbers too high? Can we still meet our response time if we hit those numbers?
- Would we still be happy if only 50% of folks who sign up enquired? What if it's 30%? When would we throw away the idea of the catalog?

This is where the meat of feature success starts taking shape. We need a plan to uncover underlying assumptions and set ourselves up for success. It's very easy for folks to put out ambitious metrics without understanding the before-and-after of the work involved in meeting that metric.
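To make the idea of a feature card with a SMART success metric concrete, here is a minimal sketch of how one could be captured as a simple structure. This is an illustration only: the field names are assumptions, and the numbers echo the ArtGalore catalog example discussed above.

// A hypothetical feature card for the ArtGalore monthly art catalog.
// Field names are assumptions for this sketch; the numbers come from the example in the text.
const featureCard = {
  featureIdea: 'Monthly curated art catalog sign-up',
  keyBusinessOutcome: 'Engagement',
  estimatedImpact: 6, // on the 0-10 scale used earlier
  successMetric: {
    what: 'Sign-ups that lead to at least one artwork enquiry',
    target: 'About 100 sign-ups and 40-70 enquiries in the first three months',
    when: 'Reviewed three months after launch'
  },
  owners: ['Marketing', 'Sales', 'Product Engineering'] // functions that support and drive it
};

Writing the card down in this form makes it harder to leave the "are we equipped to handle it?" questions unanswered, because every field has to be filled in explicitly and agreed upon.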
The intent of a strategy should be to set teams up for success, not for failure. Often, ambitious goals are set without considering whether they are realistic and achievable. This is so detrimental that teams eventually resort to manipulating or misrepresenting the metrics, playing the blame game, or hiding information. Sometimes teams try to meet these metrics by deprioritizing other work. Eventually, team morale, productivity, and delivery take a hit. Ambitious goals, without the required capacity, capability, and resources to deliver, are useless.

Technology needs to be in line with business outcomes

Every business function needs to align toward the Key Business Outcomes and conform to the constraints under which the business operates. In our example here, the deadline is for the business to launch this feature idea before the Big Art show. So, meeting timelines is already a necessary measure of success. Other product technology measures could be quality, usability, response times, latency, reliability, data privacy, security, and so on. These are traditionally clubbed under NFRs (nonfunctional requirements). They are indicators of how the system has been designed or how the system operates, and are not really about user behavior. There is no aspect of a product that is nonfunctional or without a bearing on business outcomes. In that sense, nonfunctional requirements are a misnomer. NFRs are really technical success criteria. They are also a business stakeholder's decision, based on what outcomes the business wants to pursue.

In many time- and budget-bound software projects, technical success criteria trade-offs happen without understanding the business context or thinking about the end-to-end product experience. Let's take an example: our app's performance may be okay when handling 100 users, but it could take a hit when we get to 10,000 users. By then, the business has moved on to other priorities and the product isn't ready to make the leap. Avoiding this depends on how well each team can communicate the impact of doing or not doing something today in terms of a cost tomorrow. What that means is that engineering may be able to create software that can scale to 5,000 users with minimal effort, but scaling to 500,000 users requires a different order of magnitude of work. There is a different approach needed when building solutions for short-term benefits compared to how we might build systems for long-term benefits. It is not possible to generalize and claim that just because we build an application quickly, it will be full of defects or insecure. By contrast, just because we build a lot of robustness into an application, this does not mean that the product will sell better. There is a cost to building something, a cost to not building something, and a cost to rework. The cost will be justified based on the benefits we can reap, but it is important for product technology and business stakeholders to align on the loss or gain in the end-to-end product experience that results from the technical approach we are taking today. In order to arrive at these decisions, the business does not really need to understand design patterns, coding practices, or the nuanced technology details. They need to know the viability of meeting business outcomes. This viability is based on technology possibilities, constraints, effort, skills needed, resources (hardware and software), time, and other prerequisites.
What we can expect and what we cannot expect must both be agreed upon. In every scope-related discussion, I have seen that there are better insights and conversations when we highlight what the business or customer does not get from a product release. When we only highlight the value they will get, the discussions tend to go toward improving on that value. When the business realizes what it doesn't get, the discussions lean toward improving the end-to-end product experience. Should a business care that we wrote unit tests? Does the business care what design patterns we used, or what language or software we used? We can have general guidelines for healthy and effective ways to follow best practices within our lines of work, but best practices don't define us; outcomes do.

To summarize: before commencing development of any feature idea, there must be a consensus on what outcomes we are seeking to achieve. The success metrics should be our guideline for finding the smartest way to implement a feature.

Read next:
Developer's guide to Software architecture patterns
Hey hey, I wanna be a Rockstar (Developer)
The developer-tester face-off needs to end. It's putting our projects at risk

What is the API Economy?

Darrell Pratt
03 Nov 2016
5 min read
If you have pitched the idea of a set of APIs to your boss, you might have run across this question: "Why do we need an API, and what does it have to do with an economy?" The answer is the API economy - but it's more than likely that that answer will be met with more questions. So let's take some time to unpack the concept and get through some of the hyperbole surrounding the topic.

"An economy (from Greek οίκος, "household", and νέμoμαι, "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location." - Wikipedia

If we take the definition of economy from Wikipedia and the definition of API as an Application Programming Interface, then what we should be striving to create is a platform (as the producer of the API) that will attract a set of agents who use that platform to create, trade, or distribute goods and services to other agents over the Internet (our geography has expanded). The central tenet of this economy is that the APIs themselves need to provide the right set of goods (data, transactions, and so on) to attract other agents (developers and business partners) who can grow their businesses alongside ours and further expand the economy. This piece from Gartner explains the API economy very well, and this is a great way of summing it up: "The API economy is an enabler for turning a business or organization into a platform." Let's explore a bit more about APIs and look at a few examples of companies that are doing a good job of running API platforms.

The evolution of the API economy

If you had asked someone what an API actually was 10 or more years ago, you might have received puzzled looks. The Application Programming Interface at that time was something that the professional software developer used to interface with more traditional enterprise software. That evolved into the popularity of the SDK (Software Development Kit) and a better mainstream understanding of what it meant to create integrations or applications on pre-existing platforms. Think of the iOS SDK or Android SDK, and how those kits and the distribution channels that Apple and Google created have led to the explosion of the apps marketplace. Jeff Bezos's mandate that all IT assets at Amazon have an API was a major event in the API economy timeline. Amazon continued to build APIs such as SNS, SQS, Dynamo, and many others. Each of these API components provides a well-defined service that any application can use, and together they have significantly reduced the barrier to entry for new software and service companies. With this foundation set, the list of companies providing deep API platforms has steadily increased.

How exactly does one profit in the API economy?

If we survey a small set of API platforms, we can see that companies use their APIs in different ways: to add value to their underlying set of goods, or to create a completely new revenue stream for the company.

Amazon AWS

Amazon AWS is the clearest example of an API as a product unto itself. Amazon makes available a large set of services that provide defined functionality and for which Amazon charges rates based upon usage of CPU and storage (it gets complicated). Each new service they launch addresses a new area of need and works to provide integrations between the various services.

Social APIs

Facebook, Twitter, and others in the social space run API platforms to increase the usage of their properties.
Some of the inherent value in Facebook comes from sites and applications far afield from facebook.com, and its API platform enables this. Twitter has had a more complicated relationship with its API users over time, but the API does provide many methods that allow both apps and websites to tap into Twitter content and thus extend Twitter's reach and audience size.

Chat APIs

Slack has created a large economy of applications focused around its chat services and built up a large number of partners and smaller applications that add value to the platform. Slack's API approach is centered on providing a platform for others to integrate with and add content into the Slack data system. This approach is more open than the one taken by Twitter, and the fast adoption has added large sums to Slack's current valuation. Alongside the meteoric rise of Slack, the concept of the bot as an assistant has also taken off. Companies like api.ai are offering services that enable chat services with AI as a service. The service offerings that surround the bot space are growing rapidly and offer a good set of examples of how a company can monetize its API.

Stripe

Stripe competes in the payments-as-a-service space along with PayPal, Square, and Braintree. Each of these companies offers API platforms that vastly simplify the integration of payments into websites and applications (a minimal sketch of such an integration follows at the end of this article). Anyone who built an e-commerce site before 2000 can and will appreciate the simplicity and power that the API economy brings to the payments industry. The pricing strategy in this space is generally per use and is relatively straightforward.

It takes a community to make the API economy work

Very few companies will succeed in building an API platform without growing an active community of developers and partners around it. While it is technically easy to create an API given the tooling available, without an active support mechanism and detailed, easily consumable documentation, your developer community may never materialize. Facebook and AWS are great examples to follow here. They both actively engage with their developer communities and deliver rich sets of documentation and use cases for their APIs.
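To see what "vastly simplify the integration of payments" looks like in practice, here is the minimal sketch referred to above, using the stripe Node.js client. The API key and test card token are placeholders, and the exact parameters are an assumption for illustration rather than a drop-in integration.

// Charge a card with a few lines of code - the API hides the acquirer,
// gateway, and PCI plumbing behind a single call.
var stripe = require('stripe')('sk_test_your_secret_key'); // placeholder key

stripe.charges.create({
  amount: 2000,            // amount in cents, i.e. $20.00
  currency: 'usd',
  source: 'tok_visa',      // test card token supplied by Stripe
  description: 'Example charge'
}, function(err, charge) {
  if (err) {
    console.error('Charge failed:', err.message);
  } else {
    console.log('Charge succeeded:', charge.id);
  }
});

This per-call simplicity is also where the platform's per-use pricing attaches: each successful charge is both the developer's integration point and the provider's revenue event.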

Tech hype cycles: do they deserve your attention?

Richard Gall
30 Apr 2018
6 min read
Hype cycles are an integral aspect of modern technology. They tell us the story of a specific technology and how it fits into a given context. This context is usually professional, but it is sometimes social and cultural. They are also able to show us how the use of something has changed: they illustrate when something was adopted, when it grew, and perhaps when it began to decline. True, this might seem superfluous or superficial. But that perhaps explains why we often fail to pay much attention to them. Instead of focusing on the cycle, and the wider context of how and why something is being used, we get distracted by the details of whatever is being hyped.

"Hype cycles allow us to see past hype."

But hype cycles, or hype curves, can help us to make better sense of the technology at our disposal. They allow you to see past the hype. That means rather than following the trends or buzzwords that fashion places on a pedestal at any given moment, you're always able to see those trends and buzzwords in a context. For example, instead of simply moving from big data to AI, or from cloud to edge, you can see how different technologies and trends fit together. You can begin to observe how things are impacting one another. Hype cycles allow you to see how software changes trends, and then how trends change industries. It's not always easy to see how the code you're writing fits into the big picture, but hype cycles are a good way of getting a better sense of it.

The history of the tech hype cycle

According to this Wired article from 2012, the term 'hype cycle' has been around since 1995. But the idea of a hype cycle was taken up by research organization Gartner and became central to the way they presented changes across the tech landscape. The first Gartner hype cycle report was released in 1999. Written by Alexander Drobik, the report predicted the end of the dot-com bubble at the beginning of the new millennium. However, it's important to note that Drobik hadn't simply predicted the end of a trend; he had predicted what's called a period of disillusionment within the 'hype cycle' of, well, the internet (perhaps the ultimate hype cycle). Let's look at what the cycle looks like in detail.

What does the tech hype cycle look like?

Of course, Gartner is the organization that popularized the concept of the hype cycle, but we've created our own example of what it looks like. Let's break down each of the points in the hype cycle in a bit more detail.

Technology trigger

This is the initial breakthrough. It's an exciting time when researchers or engineers discover a new way of doing something. It's more the possibility of disruption than actual disruption. This is often the time when the press - and investors - get excited.

Peak of inflated expectations

This is when everyone gets really excited about the possibility of disruption. This period can be characterized by the sentence "This changes everything." It's the period when everyone talks about transformation but nothing has really transformed yet. True, the new technology might have worked somewhere, but there are lots of projects that never get off the ground, and a few that have simply failed.

Trough of disillusionment

This is the hangover everyone goes through after getting drunk on inflated expectations. It begins with 'Why X isn't working' pieces in the press, which gradually give way to silence. Technologies or trends seem to disappear into relative insignificance.
Slope of enlightenment

Now that the hype has died down, technologies are applied with more serious consideration. Arguably, the period of disillusionment is an important period of reflection about what works and what doesn't. This allows businesses and organizations to apply technologies in a more effective way during this 'enlightened' period. In essence, this time is about experimentation and learning. True, there might be some humility here, which is probably a good thing after the earlier inflated expectations.

Plateau of productivity

This is where enlightenment turns into stability. Ways of using a particular technology become established within an industry. It becomes mainstream. Perhaps the benefits to customers are now being felt more readily, which makes it easier to calculate just how valuable something might be.

The hype cycle is a framework that explains how technologies become popular and gradually more mainstream. Of course, there are some technologies that don't quite follow this trajectory - what happens, for example, when things simply never take off? Some technologies get stuck at the trough of disillusionment. If hype cycles can never really give us the full picture, are they actually nothing more than a load of hype?

Are hype cycles just a load of hype?

Although hype cycles are useful in outlining how technologies are adopted and mature, they do, of course, have some limitations. Gartner has a stake in actually selling the concept to you. Its business is based on being an authoritative and invaluable source of tech insight. This means Gartner needs you (or maybe your boss) to think that hype cycles are a recurring pattern of all technology. Similarly, the people who write about technology and sell it have a vested interest in hype cycles. They might not realize it, but the need to 'tell a story' about how or why something is important - why something is 'transformative' - feeds into the concept that Gartner has successfully monetized. But that doesn't mean tech hype cycles should simply be ignored. They might well be artificial and lacking in quantitative rigour, but we ignore the hype cycle at our peril. This is because the way we - the press, industry leaders, and tech communities - talk about technology plays an important part in how technologies and trends are adopted. We need to take a somewhat ironic approach to hype cycles. That means we need to recognise that while part of it is a bit of a charade, it's a charade that is pretty much inescapable. Trends and technology can't exist outside of these systems. Things only ever become popular when they're visible and when they're being talked about. Hype cycles give us a framework for understanding how technology is talked about.

Read next: What is AIOps and why is it going to be important?

5 things you need to learn to become a server-side web developer

Amarabha Banerjee
19 Jun 2018
6 min read
The profession of back-end web developer is in high demand, and companies are keen to add qualified server-side developers to their teams. The comprehensive set of knowledge and skills a back-end specialist has helps them realize their potential in a wide variety of web development projects. Before diving into what it takes to succeed at back-end development as a profession, let's look at what it's about. In simple words, the back end is the invisible part of any application that activates all its internal elements. If the front end answers the question of "how does it look", then the back end, or server-side web development, deals with "how does it work". A back-end developer is the one who deals with the administrative part of the web application, the internal content of the system, and server-side technologies such as databases, architecture, and software logic. If you intend to become a professional server-side developer, there are a few basic steps that will ease your journey. In this article, we have listed five aspects of server-side development that you must master to become a successful server-side web developer: servers, databases, networks, queues, and frameworks.

Servers and databases:

At the heart of server-side development are servers, which are essentially hardware and storage devices connected to the internet. Every time you ask your browser to load a web page, the data stored on the servers is accessed and sent to the browser in a certain format. The bigger the application, the larger the amount of data stored on the server side. The larger the data, the higher the possibility of lag and slow performance. Databases are the structured formats in which the data is stored. There are two different types of databases, relational and non-relational, and both have their own pros and cons. Some popular options worth learning to take your skills to the next level are SQL Server and MySQL on the relational side, and MongoDB and DynamoDB on the NoSQL side.

Static and dynamic servers:

Static servers are physical hard drives where application data, CSS and HTML files, pictures, and images are stored. Dynamic servers signify another layer between the storage and the browser; they are often known as application servers. The primary function of these application servers is to process the data and format it for the web page when a data-fetching operation is initiated from the browser. This makes saving data much easier and data loading much faster. For example, Wikipedia's servers are filled with huge amounts of data, but pages are not stored as ready-made HTML; they are stored as raw data. When the browser queries them, the application server processes the raw data, formats it into HTML, and then sends it to the browser. This makes the process a whole lot faster and saves space on physical storage (a minimal sketch of this request/format/respond loop follows below). If you want to go a step further and think futuristically, the latest trend is moving your servers to the cloud. This means the server-side tasks are performed by cloud-based services like Amazon AWS and Microsoft Azure. This makes your task as a back-end developer much simpler, since you only need to decide which services you require to best run your application, and the rest is taken care of by the cloud service providers.
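To make the application-server idea concrete, here is a minimal sketch of a dynamic server using Node's built-in http module. The artworks array, URL path, and port are assumptions made for this illustration; the point is only to show raw data being formatted into HTML per request rather than stored as a finished page.

// A tiny "application server": raw data lives in memory (it could just as
// easily come from a database) and is turned into HTML only when requested.
const http = require('http');

const artworks = [                         // hypothetical raw data
  { title: 'Sunrise', artist: 'A. Sharma' },
  { title: 'Monsoon', artist: 'R. Iyer' }
];

const server = http.createServer((req, res) => {
  if (req.url === '/artworks') {
    // Format the raw data into HTML at request time.
    const items = artworks
      .map(a => `<li>${a.title} by ${a.artist}</li>`)
      .join('');
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(`<ul>${items}</ul>`);
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not found');
  }
});

server.listen(3000);                       // assumed port for the example

A static server would instead return a pre-built artworks.html file; here the HTML exists only for the lifetime of the response, which is what keeps storage needs small.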
Another aspect of server-side development that's generating a lot of interest among developers is serverless development. This is based on the concept that the cloud service provider allocates server space depending on your need, so you don't have to take care of back-end resources and requirements. In a way the name serverless is a misnomer, because the servers are still there; they are just in the cloud, and you don't have to bother about them. The primary role of a back-end developer in a serverless system is to figure out the best possible services, optimize the running cost on the cloud, and deploy and monitor the system for non-stop, robust performance.

The communication protocol:

The protocol that defines the data transfer between the client side and the server side is called HyperText Transfer Protocol (HTTP). When a search request is typed in the browser, an HTTP request with a URL is sent to the server, and the server then sends back a response message indicating either that the request succeeded or that the web page was not found. When an HTML page is returned for a search query, it is rendered by the web browser. While processing the response, the browser may discover links to other resources (for example, an HTML page usually references JavaScript and CSS files) and send separate HTTP requests to download these files. Both static and dynamic websites use exactly the same communication protocols and patterns. We have progressed quite a long way from the initial communication protocols, and newer technologies like SSL, TLS, and IPv6 have taken over the web communication domain. Transport Layer Security (TLS) – and its predecessor, Secure Sockets Layer (SSL), which is now deprecated by the Internet Engineering Task Force (IETF) – are cryptographic protocols that provide communications security over a computer network. The primary reason these protocols were introduced was to protect user data and provide increased security. Similarly, newer addressing protocols had to be introduced in the late 90s to cater to the increasing number of internet users; these protocols determine the unique IP address that identifies a server. The initial protocol used was IPv4, which is currently being superseded by IPv6, which can provide 2^128 (roughly 3.4×10^38) addresses.

Message queuing:

This is one of the most important aspects of creating fast and dynamic web applications. Message queuing is the stage where data is queued for the different responses and then delivered. The process is asynchronous, which means that the sender and the receiver need not interact with the message queue at the same time. Popular message queuing tools and protocols like RabbitMQ, MQTT, and ActiveMQ provide real-time message queuing functionality (a minimal producer/consumer sketch follows at the end of this article).

Server-side frameworks and languages:

Now comes the last, but one of the most important, pointers. If you are a developer with a particular language in mind, you can use a framework for that language to add functionality to your application easily and efficiently. Some of the popular server-side frameworks and runtimes are Node.js for JavaScript, Django for Python, Laravel for PHP, Spring for Java, and so on. Using these frameworks will need some amount of experience in the respective languages. Now that you have a broad understanding of what server-side web development is and what its components are, you can jump right into server-side development, databases, and protocol management on your way to becoming a successful professional back-end web developer.

Read next:
The best backend tools in web development
Preparing the Spring Web Development Environment
Is novelty ruining web development?
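Here is the message queuing sketch referred to above: a minimal producer/consumer example in Node.js. It assumes the amqplib client and a RabbitMQ broker running locally; the queue name, connection URL, and payload are made up for this illustration.

const amqp = require('amqplib');

async function main() {
  // Connect to a local RabbitMQ broker (assumed to be running on the default port).
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  const queue = 'artwork_enquiries';          // hypothetical queue name

  await channel.assertQueue(queue, { durable: true });

  // Producer: the web server drops a message on the queue and returns immediately.
  const payload = JSON.stringify({ artworkId: 42, email: 'buyer@example.com' });
  channel.sendToQueue(queue, Buffer.from(payload), { persistent: true });

  // Consumer: a worker picks messages up whenever it is ready - the two sides
  // never need to be active at the same time.
  await channel.consume(queue, (msg) => {
    console.log('Processing enquiry:', msg.content.toString());
    channel.ack(msg);
  });
}

main().catch(console.error);

Because the consumer acknowledges each message only after processing it, the broker can safely hold work while the web server moves on to serve the next request.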

How to move from server to serverless in 10 steps

Erik Kappelman
27 Sep 2017
7 min read
If serverless computing sounds a little contrived to you, you're right, it is. Serverless computing isn't really serverless, well, not yet anyway. It would be more accurate to call it serverless development. If you are a back-end boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows applications to consist of chunks of code that do things in response to stimuli. What makes this different from other development is that the chunks of code don't need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated back-end configurations. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving.

How AWS Lambda supports serverless computing

We will discuss Amazon Web Services (AWS) Lambda, Amazon's serverless computing offering. We are going to go over one of Amazon's use cases to better understand the value of serverless computing, and how someone can get started.

1. Have an application, build an application, or have an idea for an application. This could also be step zero, but you can't really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you'll need a project.

2. Create an AWS account, if you don't already have one, and set up the AWS Command Line Interface on your machine. Quick note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install, but the bash command never seemed to end up in the right place. Instead I used Homebrew and then it worked fine.

3. Navigate to S3 on AWS and create two buckets for testing purposes. One is going to be used for uploading, and the other is going to receive the uploaded pictures after they have been transformed. The bucket used to receive the transformed pictures should have a name of the form "other bucket's name" + "resized". The code we are using requires this format in order to work. If you really don't like that, you can modify the code to use a different format.

4. Navigate to the AWS Lambda Management Console and choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol in order to create a trigger. Choose S3. Now specify the bucket that the pictures are going to be initially uploaded into. Then, under the event type, choose Object Created (All). Leave the trigger disabled and press the Next button. Give your function a name, and for now, we are done with the console.

5. On your local machine, set up a workspace by creating a root directory for our project with a node_modules folder. Then install the async and gm libraries.

6. Create a JavaScript file named index.js and copy and paste the code from the end of this post into the file. It needs to be named index.js for this example to work. There are settings that determine the function entry point, which can be changed to look for a different filename. The code we are using comes from an example on AWS located here; I recommend you check out their documentation.

7. If we look at the code that we are pasting into our editor, we can learn a few things about using Lambda. We can see that the aws-sdk is in use and that we use that dependency to create an S3 object.
We get the information about the source bucket from the event object that is passed into the main function (this is why we named our buckets the way we did), and we can fetch the uploaded picture using the getObject method of our S3 object. The code grabs that file, puts it into a buffer, uses the gm library to resize the image, and then uses the same S3 object, this time specifying the destination bucket, to upload the file.

8. Now we are ready: ZIP up your root folder and let's deploy this function to the new Lambda instance that we have created. Quick note: while using OSX I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder. For some reason the upload doesn't work unless the zipping is done this way; this is at least true when using OSX. We are going to upload using the Lambda Management Console; if you're fancy, you can use the AWS Command Line Interface. So, get to the management console and choose Upload a .ZIP File. Click the upload button, specify your ZIP file, and then press the Save button.

9. Now we will test our work. Click the Actions dropdown and choose the Configure test event option. Now choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload and, if everything goes according to plan, your function should pass.

10. Profit!

I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything. I find that the absolute worst part of any development project is the back end. That being said, I don't think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we're going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function (err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};

Erik Kappelman wears many hats, including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.
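For reference, here is a minimal sketch of the kind of S3 PUT event that the console test in step 9 simulates. Only the fields the handler above actually reads are shown; the bucket name and object key are placeholders, and invoking the handler locally would still require valid AWS credentials and real buckets for the S3 calls to succeed.

// A minimal, hypothetical S3 PUT event - just the fields exports.handler reads.
var testEvent = {
  Records: [
    {
      s3: {
        bucket: { name: 'my-upload-bucket' },   // placeholder source bucket
        object: { key: 'cat-photo.jpg' }        // placeholder uploaded file
      }
    }
  ]
};

// Local smoke test (assumes index.js is in the current directory):
// require('./index').handler(testEvent, null, function(err, result) {
//   console.log(err || result);
// });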