
Tech News - Cloud & Networking

376 Articles

Vijin Boricha
20 Apr 2018
2 min read

Google announce the largest overhaul of their Cloud Speech-to-Text

Last month Google announced Cloud Text-to-Speech, their speech synthesis API featuring DeepMind WaveNet models. Now, they have announced the largest overhaul of Cloud Speech-to-Text (formerly known as Cloud Speech API) since it was introduced in 2016. Google's Speech-to-Text API has been enhanced for business use cases, including phone-call and video transcription. With this new Cloud Speech-to-Text update, one gets access to the latest research from Google's machine learning team, all via a simple REST API. It also ships with a standard service level agreement (SLA) promising 99.9% availability. Here's a sneak peek into the latest updates to Google's Cloud Speech-to-Text API:

- New video and phone call transcription models: Google has added models created for specific use cases, such as transcription of phone calls and of audio from video.
- Readable text with automatic punctuation: Google created a new LSTM neural network to improve automatic punctuation in long-form speech transcription. This model, currently in beta, can automatically suggest commas, question marks, and periods for your text.
- Use case description with recognition metadata: Information from transcribed audio or video, tagged with descriptions such as 'voice commands to a Google Home assistant' or 'soccer sport TV shows', is aggregated across Cloud Speech-to-Text users to prioritize upcoming work.

To know more about this update in detail, visit Google's blog post.
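To make the REST API angle concrete, here is a minimal Python sketch of the JSON body such a `speech:recognize` call takes. The field names follow Cloud Speech-to-Text's RecognitionConfig; the bucket URI and the helper function are hypothetical, for illustration only.

```python
import json

def build_recognize_request(audio_uri, model="phone_call", punctuate=True):
    """Assemble a JSON body for the speech:recognize REST endpoint.

    Field names follow Cloud Speech-to-Text's RecognitionConfig;
    the gs:// URI below is a placeholder, not a real bucket.
    """
    return {
        "config": {
            "languageCode": "en-US",
            "model": model,                           # "phone_call" or "video"
            "useEnhanced": True,                      # opt in to the enhanced models
            "enableAutomaticPunctuation": punctuate,  # beta punctuation model
        },
        "audio": {"uri": audio_uri},
    }

body = build_recognize_request("gs://my-bucket/call.wav")
print(json.dumps(body, indent=2))
```

The same dictionary could be POSTed with any HTTP client; the point is that the new models and punctuation are plain config switches rather than separate endpoints.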

Ronnie Wong
14 Oct 2021
5 min read

Top life hacks for prepping for your IT certification exam

I remember deciding to pursue my first IT certification, the CompTIA A+. I had signed up for a class that lasted one week per exam, meaning two weeks. We reviewed so much material during that time that the task of preparing for the certification seemed overwhelming. Even with an instructor, the scope of the material was a challenge.

Mixed messages

Some days I would hear from others how difficult the exam was; on other days, I would hear how easy it was. I would also hear advice about topics I should study more, and even about some topics I hadn't thought of studying. These conflicting comments only increased my anxiety as my exam date drew closer. No matter what I read, studied, or heard from people about the exam, I felt like I was not prepared to pass it. Overwhelmed by the sheer volume of material, anxious from the comments of others, and feeling like I hadn't done enough preparation, when I finally passed the exam it didn't bring me joy so much as relief that I had survived it.

Then it was time to prepare for the second exam, and those same feelings came back, but this time with a little more confidence that I could pass. Since that first A+ exam, I have not only passed more exams, I have also helped others prepare successfully for many certification exams.

Exam hacks

Below is a list that has helped not only me but also others to successfully prepare for exams.

1. Start with the exam objectives and keep a copy of them close by for reference during your whole preparation time. If you haven't downloaded them (many are on the exam vendor's site), do it now. This is your verified guide to what topics will appear on the exam, and it will help you feel confident to ignore others when they tell you what to study. If it's not in the exam objectives, then it is more than likely not on the exam. There is never a 100% guarantee, but whatever they ask will at least be related to the topics found in the objectives, not in addition to them.

2. To sharpen the focus of your preparation, refer to your exam objectives again. You may see them as just a list, but they are much more: the objectives set the scope of what to study. How? Pay attention to the verbs used in the exam objectives. The objectives never give you a topic without a verb that helps you recognize the depth you should go into when you study, e.g., "configure and verify HSRP". You are not only learning what HSRP is; you should know where and how to configure it and verify it is working successfully. If it reads "describe the hacking process", you know the topic is more conceptual. A conceptual topic with that verb would require you to define it and put it in context.

3. The exam objectives also show the weighting of topics on the exam. Vendors break the objective domain down into percentages. For example, you may find one topic accounts for 40% of the exam. This helps you predict which topics you will see more questions on. You may also discover that you already know a good percentage of the exam; that's a confidence booster, and mindset is key in your preparation.

4. A good study session begins and ends with a win. You can easily sabotage your study by picking a topic that is too difficult to get through in a single session. In the same manner, ending a study session feeling like you didn't learn anything is disheartening and demotivating at best. How do we ensure we can begin and end a study session with a win? Create a study session with three topics: begin with an easier topic to review or learn, then choose a topic that is more challenging, and end with another easier topic. Following this model, do a minimum of one and a maximum of two sessions a day.

5. Put your phone away. Set your email, instant messaging, and social media notifications to do not disturb during your study session. Good study time is uninterrupted, except on your very specific and short breaks. It's amazing how much more you can accomplish when you have dedicated study time away from beeps, rings, and notifications.

Prep is king

Preparing for a certification exam is hard enough given the quantity of material and the added stress of sitting for an exam and passing. You can make your preparation more effective by using the objectives to guide you, putting a motivating session plan in place, and reducing distractions during your dedicated study times. These are commonly overlooked preparation hacks that will benefit you in your next certification exam.

These are just some handy hints for passing IT certification exams. What tips would you give? Have you recently completed a certification, or are you planning on taking one soon? Packt would love to hear your thoughts, so why not take the following survey? The first 200 respondents will get a free ebook of their choice from the Packt catalogue.*

*To receive the ebook, you must supply an email. The free ebook requires a no-charge account creation with Packt.
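The weighting tip above is easy to turn into a quick estimate. A sketch, assuming a hypothetical 60-question exam and made-up domain names (vendors publish the percentages, not per-domain question counts):

```python
# Estimate questions per domain from published objective weightings.
# The domain names and the 60-question total are hypothetical examples,
# not taken from any specific vendor's exam.
domains = {"Domain A": 0.40, "Domain B": 0.35, "Domain C": 0.25}
total_questions = 60

for name, weight in domains.items():
    estimate = round(weight * total_questions)
    print(f"{name}: ~{estimate} questions")
```

A 40% domain on a 60-question exam works out to roughly 24 questions, which is why the weightings are worth reading before you plan your study time.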

Melisha Dsouza
01 Mar 2019
2 min read

Announcing Wireshark 3.0.0

Yesterday, Wireshark released version 3.0.0 with user interface improvements, bug fixes, the new Npcap Windows packet capturing driver, and more. Wireshark, the open source, cross-platform network protocol analysis software, is used by security analysts, experts, and developers for analysis, troubleshooting, development, and other security-related tasks, capturing and browsing packet traffic on computer networks.

Features of Wireshark 3.0.0

- The Windows .exe installers replace WinPcap with Npcap. Npcap supports loopback capture and 802.11 WiFi monitor mode capture, if supported by the NIC driver.
- The "Map" button of the Endpoint dialog, removed in Wireshark 2.6.0, has been added back in a modernized form.
- The macOS package ships with Qt 5.12.1 and requires macOS 10.12 or later.
- Initial support has been added for using PKCS #11 tokens for RSA decryption in TLS. Configure this at Preferences, RSA Keys.
- The new WireGuard dissector has decryption support, which requires Libgcrypt 1.8.
- You can now copy coloring rules, IO graphs, filter buttons, and protocol preference tables from other profiles using a button in the corresponding configuration dialogs.
- Wireshark now supports the Swedish, Ukrainian, and Russian languages.
- A new dfilter function string() has been added, which allows the conversion of non-string fields to strings. This enables string functions to be used on them.
- The legacy (GTK+) user interface and the portaudio library are removed and no longer supported.
- Wireshark requires Qt 5.2 or later and GLib 2.32 or later, with GnuTLS 3.2 or later as an optional dependency. Building Wireshark requires Python 3.4 or newer.
- Data following a TCP ZeroWindowProbe is not passed to subdissectors and is marked as a retransmission.

Head over to Wireshark's official blog for the entire list of upgraded features in this release.

- Using statistical tools in Wireshark for packet analysis [Tutorial]
- Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
- Analyzing enterprise application behavior with Wireshark 2

Melisha Dsouza
08 Mar 2019
2 min read

Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot

Yesterday, Red Hat announced the launch of ‘Quarkus’, a Kubernetes-native Java framework that offers developers “a unified reactive and imperative programming model” in order to address a wider range of distributed application architectures. The framework uses Java libraries and standards and is tailored for GraalVM and HotSpot. Quarkus has been designed with serverless, microservices, containers, Kubernetes, FaaS, and the cloud in mind, and it provides an effective solution for running Java in these new deployment environments.

Features of Quarkus

- Fast startup, enabling automatic scaling up and down of microservices on containers and Kubernetes, as well as on-the-spot FaaS execution.
- Low memory utilization, to help optimize container density in microservices architecture deployments that require multiple containers.
- A unified imperative and reactive programming model for microservices development.
- A full-stack framework leveraging libraries like Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.
- An extension framework that third-party framework authors can leverage and extend.

Twitter was abuzz with Kubernetes users expressing their excitement at this news, describing Quarkus as a “game changer” in the world of microservices:
https://twitter.com/systemcraftsman/status/1103759828118368258
https://twitter.com/MarcusBiel/status/1103647704494804992
https://twitter.com/lazarotti/status/1103633019183738880

This open source framework is available under the Apache Software License 2.0 or a compatible license. You can head over to the Quarkus website for more information on this news.

- Using lambda expressions in Java 11 [Tutorial]
- Bootstrap 5 to replace jQuery with vanilla JavaScript
- Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?

Bhagyashree R
18 Oct 2019
3 min read

Developers ask for an option to disable Docker Compose from automatically reading the .env file

In June this year, Jonathan Chan, a software developer, reported that Docker Compose automatically reads from .env. Since other systems also access the same file for parsing and processing variables, this was creating confusion and breaking compatibility with other .env utilities.

Docker Compose has a "docker-compose.yml" config file used for deploying, combining, and configuring multiple multi-container Docker applications. The .env file is used for putting values in the "docker-compose.yml" file. In the .env file, the default environment variables are specified in the form of key-value pairs.

“With the release of 1.24.0, the feature where Compose will no longer accept whitespace in variable names sourced from environment files (this matches the Docker CLI behavior) breaks compatibility with other .env utilities. Although my setup does not use the variables in .env for docker-compose, docker-compose now fails because the .env file does not meet docker-compose's format,” Chan explains.

This is not the first time this issue has been reported. Earlier this year, a user opened an issue on the GitHub repo describing how, after upgrading Compose to 1.24.0-rc1, its automatic parsing of the .env file was failing. “I keep export statements in my .env file so I can easily source it in addition to using it as a standard .env. In previous versions of Compose, this worked fine and didn't give me any issues, however with this new update I instead get an error about spaces inside a value,” he explained in his report.

As a solution, Chan has proposed, “I propose that you can specify an option to ignore the .env file or specify a different .env file (such as .docker.env) in the docker-compose.yml file so that we can work around projects that are already using the .env file for something else.”

This sparked a discussion on Hacker News, where users suggested a few workarounds. “This is the exact class of problem that docker itself attempts to avoid. This is why I run docker-compose inside a docker container, so I can control exactly what it has access to and isolate it. There's a guide to do so here. It has the added benefit of not making users install docker-compose itself - the only project requirement remains docker,” a user commented. Another user recommended, “You can run docker-compose.yml in any folder in the tree but it only reads the .env from cwd. Just cd into someplace and run docker-compose.”

Some users also pointed out the lack of an authentication mechanism in Docker Hub. “Docker Hub still does not have any form of 2FA. Even SMS 2FA would be something at this point. As an attacker, I would put a great deal of focus on attacking a company’s registries on Docker Hub. They can’t have 2FA, so the work/reward ratio is quite high,” a user commented. Others recommended setting up a time-based one-time password (TOTP) instead.

Check out the reported issue on the GitHub repository.

- Amazon EKS Windows Container Support is now generally available
- GKE Sandbox: A gVisor based feature to increase security and isolation in containers
- 6 signs you need containers
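The incompatibility boils down to how strictly KEY=VALUE lines are parsed. A minimal Python sketch of the kind of strict parsing described above; the regex and helper are illustrative assumptions, not Docker Compose's actual implementation:

```python
import re

# Compose 1.24-style strictness (sketch): variable names must not
# contain whitespace. The exact pattern is an assumption for
# illustration, not Docker Compose's real code.
NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def parse_env_line(line):
    """Parse one KEY=VALUE line; raise if the name is invalid."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # blank lines and comments are skipped
    key, sep, value = line.partition("=")
    if not sep or not NAME.match(key):
        raise ValueError(f"invalid variable name: {key!r}")
    return key, value

print(parse_env_line("API_PORT=8080"))
# A line like "export API_PORT=8080", handy when you `source .env`
# from a shell, fails under the strict parser because the name part
# contains a space:
try:
    parse_env_line("export API_PORT=8080")
except ValueError as err:
    print(err)
```

This is exactly the clash the reporters describe: a file that is valid for `source` becomes invalid for a parser that insists on bare KEY=VALUE names.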

Savia Lobo
12 Mar 2019
5 min read

Are Debian and Docker slowly losing popularity?

Michael Stapelberg, in his blog, stated why he has planned to reduce his involvement in the Debian software distribution. Stapelberg is the author of the Linux tiling window manager i3, the code search engine Debian Code Search, and the netsplit-free IRC network RobustIRC. He said he'll reduce his involvement in Debian by:

- transitioning packages to be team-maintained
- removing the Uploaders field on packages with other maintainers
- orphaning packages where he is the sole maintainer

Stapelberg mentions the pain points in Debian and why he decided to move away from it.

Change process in Debian

Debian follows a change process where packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian. This tool is not necessarily important. “Currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages,” Stapelberg writes. “Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.”

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Practically, non-standard hosting options are used rarely enough not to justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Stapelberg said that after he noticed the workflow fragmentation in the Go packaging team, he tried fixing it with a workflow changes proposal, but did not succeed in implementing it.

Debian is hard to machine-read

“While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome.” debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts; without actually installing a package, you cannot know which changes it makes to the alternatives database. There used to be a fedmsg instance for Debian, but it no longer seems to exist. “It is unclear where to get notifications from for new packages, and where best to fetch those packages,” Stapelberg says.

A user on Hacker News said, “I've been willing to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on AUR (for ArchLinux), and it's been as easy as it could have been.” Check out the entire blog post by Stapelberg.

Maish Saidel-Keesing believes Docker will die soon

Maish Saidel-Keesing, a Cloud & AWS Solutions Architect at CyberArk, Israel, mentions in his blog post that “the days for Docker as a company are numbered and maybe also a technology as well”.
https://twitter.com/maishsk/status/1019115484673970176

Docker has undoubtedly popularized containerization technology. However, Saidel-Keesing says, “Over the past 12-24 months, people are coming to the realization that docker has run its course and as a technology is not going to be able to provide additional value to what they have today - and have decided to start to look elsewhere for that extra edge.” He also talks about how the Open Container Initiative brought with it the Runtime Spec, which opened the door to using something else besides docker as the runtime. Docker is no longer the only runtime being used. “Kelsey Hightower - has updated his Kubernetes the hard way over the years from CRI-O to containerd to gvisor. All the cool kids on the block are no longer using docker as the underlying runtime. There are many other options out there today - clearcontainers, katacontainers - and the list is continuously growing,” Saidel-Keesing says.

“What triggered me was a post from Scott McCarty - about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools”
https://twitter.com/maishsk/status/1098295411117309952

Saidel-Keesing writes, “Lo and behold - no more docker package available in RHEL 8”. He further added, “If you’re a container veteran, you may have developed a habit of tailoring your systems by installing the ‘docker’ package. On your brand new RHEL 8 Beta system, the first thing you’ll likely do is go to your old friend yum. You’ll try to install the docker package, but to no avail. If you are crafty, next, you’ll search and find this package: podman-docker.noarch: ‘package to Emulate Docker CLI using podman.’” To know more on this news, head over to Maish Saidel-Keesing’s blog post.

- Docker Store and Docker Cloud are now part of Docker Hub
- Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
- It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

Amrata Joshi
14 Dec 2018
3 min read

Docker Store and Docker Cloud are now part of Docker Hub

Yesterday, the team at Docker announced that Docker Store and Docker Cloud are now part of Docker Hub. This makes the process of finding, storing, and sharing container images easy. The new Docker Hub has an updated user experience where Docker Certified and Verified Publisher images are available for discovery and download. Docker Cloud, a service provided by Docker, helps users connect Docker Cloud to their existing cloud providers like Azure or AWS. Docker Store is a self-service portal for Docker's ecosystem partners to publish and distribute their software through Docker images.
https://twitter.com/Docker/status/1073369942660067328

What’s new in this Docker Hub update?

Repositories
- Users can now view recently pushed tags and automated builds on their repository page.
- Pagination has been added to the repository tags.
- The repository filtering on the Docker Hub homepage has been improved.

Organizations and Teams
- Organization owners can now view team permissions across all of their repositories at a glance.
- Existing Docker Hub users can now be added to a team via their email IDs instead of their Docker IDs.

Automated Builds
- Build caching is now used to speed up builds.
- It is now possible to add environment variables and run tests in the builds.
- Automated builds can now be added to existing repositories.
- Account credentials for services like GitHub and Bitbucket need to be re-linked to the organization to leverage the new automated builds.

Improved container image search
- Filtering by Official, Verified Publisher, and Certified images guarantees a level of quality in the Docker images.
- Docker Hub provides filtering by category for quick search of images.
- There is no need to update any bookmarks on Docker Hub.

Verified Publisher and Certified images

The Docker Certified and Verified Publisher images are now available for discovery and download on Docker Hub. Just like Official Images, publisher images have been vetted by Docker. The Certified and Verified Publisher images are provided by third-party software vendors. Certified images are tested and supported by verified publishers on the Docker Enterprise platform, adhere to Docker’s container best practices, pass a functional API test suite, and display a unique quality mark, “Docker Certified”.

Read more about this release on Docker’s blog post.

- Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
- Docker announces Docker Desktop Enterprise
- Creating a Continuous Integration commit pipeline using Docker [Tutorial]

Vincy Davis
31 May 2019
2 min read

Unity Editor will now officially support Linux

Yesterday, Martin Best, Senior Technical Product Manager at Unity, briefly announced that the Unity Editor will now officially support Linux. Currently, the Editor is available only in ‘preview’ for Ubuntu and CentOS, but Best has stated that it will be fully supported by Unity 2019.3. Another important note: before opening projects via the Linux Editor, make sure that any 3rd-party tools you rely on also support it.

Unity has been offering an unofficial, experimental Unity Editor for Linux since 2015. Unity released the 2019.1 version in April this year, in which the Unity Editor for Linux moved from experimental status into preview mode. Now the status has been made official. Best mentions in the blog post that the “growing number of developers using the experimental version, combined with the increasing demand of Unity users in the Film and Automotive, Transportation, and Manufacturing (ATM) industries means that we now plan to officially support the Unity Editor for Linux.”

The Unity Editor for Linux will be accessible to all Personal (free), Plus, and Pro license users, starting with Unity 2019.1. It will be officially supported on the following configurations:

- Ubuntu 16.04, 18.04
- CentOS 7
- x86-64 architecture
- Gnome desktop environment running on top of the X11 windowing system
- Nvidia official proprietary graphics driver and AMD Mesa graphics driver
- Desktop form factors, running on device/hardware without emulation or compatibility layer

Users are quite happy that the Unity Editor will now officially support Linux. A user on Reddit comments, “Better late than never.” Another user added, “Great news! I just used the editor recently. The older versions were quite buggy but the latest release feels totally on par with Windows. Excellent work Unity Linux team!”
https://twitter.com/FourthWoods/status/1134196011235237888
https://twitter.com/limatangoalpha/status/1134159970973470720

For the latest builds, check out the Unity Hub. For giving feedback on the Unity Editor for Linux, head over to the Unity Forum page.

- Obstacle Tower Environment 2.0: Unity announces Round 2 of its ‘Obstacle Tower Challenge’ to test AI game players
- Unity has launched the ‘Obstacle Tower Challenge’ to test AI game players
- Unity updates its TOS, developers can now use any third party service that integrate into Unity

Savia Lobo
05 Jul 2018
2 min read

Baidu releases Kunlun AI chip, China’s first cloud-to-edge AI chip

Baidu, Inc., the leading Chinese-language Internet search provider, has released the Kunlun AI chip. It is China’s first cloud-to-edge AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers. (K'un-Lun is also a place that actually exists in another dimension in Marvel’s Immortal Iron Fist.)

AI applications have dramatically risen in popularity and adoption, and with this comes increased demand on the computational end. Traditional chips have limited computational power, and accelerating larger AI workloads requires much more computational scaling. To meet this demand, Baidu released the Kunlun AI chip, designed specifically for large-scale AI workloads. Kunlun feeds the high processing demands of AI with a high-performance, cost-effective solution. It can be used for both cloud and edge instances, including data centers, public clouds, and autonomous vehicles.

Kunlun comes in two variants: the 818-300 model is used for training and the 818-100 model for inference. The chip leverages Baidu’s AI ecosystem, including AI scenarios such as search ranking and deep learning frameworks like PaddlePaddle.

Key specifications of the Kunlun AI chip

- Computational capability 30 times faster than the original FPGA-based accelerator that Baidu started developing in 2011
- A 14nm Samsung process
- 512 GB/second memory bandwidth
- 260 TOPS of computing performance while consuming 100 Watts of power

Features of the Kunlun chip include:

- Support for open source deep learning algorithms
- Support for a wide range of AI applications, including voice recognition, search ranking, natural language processing, and so on

Baidu plans to continue to iterate on this chip and develop it progressively to enable the expansion of an open AI ecosystem. To make it successful, Baidu continues to build “chip power” to meet the needs of various fields such as intelligent vehicles and devices, and voice and image recognition. Read more about Baidu’s Kunlun AI chip on the MIT website.

- IBM unveils world’s fastest supercomputer with AI capabilities, Summit
- AI chip wars: Is Brainwave Microsoft’s Answer to Google’s TPU?
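The stated specs above support some quick sanity arithmetic, shown here in Python (only the 260 TOPS, 100 W, and 512 GB/s figures come from the announcement; the interpretation is a back-of-the-envelope reading):

```python
# Back-of-the-envelope arithmetic on the stated Kunlun specs.
tops = 260       # stated peak compute, tera-operations per second
watts = 100      # stated power consumption
bandwidth = 512e9  # stated memory bandwidth, bytes per second

tops_per_watt = tops / watts
print(f"efficiency: {tops_per_watt:.1f} TOPS/W")  # 2.6 TOPS/W

# Memory bytes available per operation at peak compute: at full rate
# the chip can fetch only ~0.002 bytes per operation, so sustained
# peak throughput implies heavy on-chip data reuse.
bytes_per_op = bandwidth / (tops * 1e12)
print(f"{bytes_per_op:.4f} bytes of DRAM traffic per op at peak")
```

Nothing here is from Baidu's materials beyond the three raw numbers; it simply shows what those numbers imply when combined.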

Prasad Ramesh
12 Sep 2018
3 min read

Why did last week’s Azure cloud outage happen? Here’s Microsoft’s Root Cause Analysis Summary.

Earlier this month, Microsoft Azure Cloud was experiencing problems that left users unable to access its cloud services. The outage in South Central US affected several Azure Cloud services and caused them to go offline for U.S. users. The reason for the outage was stated as “severe weather”. Microsoft is currently conducting a root cause analysis to find out the exact reason. Many services went offline due to cooling system failure causing the servers to overheat and turn themselves off. What did the RCA reveal about the Azure outage High energy storms associated with Hurricane Gordon hit the southern area of Texas near Microsoft Azure’s data centers for South Central US. Many data centers were affected and experienced voltage fluctuations. Lightning-induced increased electrical activity caused significant voltage swells. The rise in voltages, in turn, caused a portion of one data center to switch to generator power. The power swells also shut down the mechanical cooling systems despite surge suppressors being in place. With the cooling systems being offline, temperatures exceeded the thermal buffer within the cooling system. The safe operational temperature threshold exceeded which initiated an automated shutdown of devices. The shutdown mechanism is installed to preserve infrastructure and data integrity. But in this incident, the temperatures increased pretty quickly in some areas of the datacenter causing hardware damage before a shutdown could be initiated. Many storage servers and some network devices and power units were damaged. Microsoft is taking steps to prevent further damage as the storms are still active in the area. They are switching the remaining data centers to generator power to stabilize power supply. For recovery of damaged units, the first step taken was to recover the Azure Software Load Balancers (SLBs) for storage scale units. 
The next step was to recover the storage servers and the data on them by replacing failed components and migrating data to healthy storage units, while validating that no data was corrupted. The Azure website also states that “Impacted customers will receive a credit pursuant to the Microsoft Azure Service Level Agreement, in their October billing statement.” A detailed analysis will be available on their website in the coming weeks. For more details on the RCA and customer impact, visit the Azure website.

Real clouds take out Microsoft’s Azure Cloud; users, developers suffer indefinite Azure outage
Microsoft Azure’s new governance DApp: An enterprise blockchain without mining
Microsoft Azure now supports NVIDIA GPU Cloud (NGC)
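The automated thermal-shutdown mechanism described in the RCA can be sketched as a simple watchdog. This is a hypothetical illustration only, not Microsoft's actual implementation; the threshold values and function names are invented:

```python
# Hypothetical sketch of an automated thermal-shutdown check like the one
# described in the RCA. Thresholds and names are invented for illustration.

SAFE_OPERATIONAL_MAX_C = 35.0   # assumed safe operating temperature
THERMAL_BUFFER_C = 10.0         # assumed headroom provided by the cooling system

def should_shut_down(sensor_temps_c):
    """Trigger a protective shutdown once any sensor exceeds the
    safe threshold plus the thermal buffer."""
    limit = SAFE_OPERATIONAL_MAX_C + THERMAL_BUFFER_C
    return any(t > limit for t in sensor_temps_c)

# Normal operation: temperatures stay within the buffer, no shutdown.
print(should_shut_down([30.2, 33.9, 41.0]))  # False
# Cooling offline: one rack exceeds the buffer, shutdown is initiated.
print(should_shut_down([30.2, 58.4, 41.0]))  # True
```

As the incident showed, a shutdown triggered this way only protects hardware if temperatures rise slowly enough for it to complete.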
Cloudflare finally launches Warp and Warp Plus after a delay of more than five months

Vincy Davis
27 Sep 2019
5 min read
More than five months after announcing Warp, Cloudflare finally made it available to the general public yesterday. With two million people on the waitlist to try Warp, the Cloudflare team says it was harder than they expected to build a next-generation service that secures consumer mobile connections without compromising on speed and power usage. Along with Warp, Cloudflare is also launching Warp Plus.

Warp is a free VPN built into the 1.1.1.1 DNS resolver app that speeds up mobile data by using the Cloudflare network to resolve DNS queries faster. It also comes with end-to-end encryption and does not require users to install a root certificate that would allow their encrypted internet traffic to be observed. It is built around a UDP-based protocol that is optimized for the mobile internet and offers excellent performance and reliability.

Why did Cloudflare delay the Warp release?

A few days before Cloudflare announced Warp on April 1st, Apple released iOS 12.2 with significant changes to its underlying network stack implementation. This made the Warp network unstable, forcing the Cloudflare team to arrange workarounds in their networking code, which took more time. Cloudflare adds, “We had a version of the WARP app that (kind of) worked on April 1. But, when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we'd anticipated.”

As the internet is made up of diverse network components, the Cloudflare team found it difficult to account for the full diversity of mobile carriers, mobile operating systems, and mobile device models in their network, as well as users’ diverse network settings. Warp uses a technology called Anycast to route user traffic to the Cloudflare network; however, Anycast can move users’ traffic between entire data centers, which made Warp’s operation complex.
To overcome these barriers, the Cloudflare team changed its approach to focus more on iOS. The team also solidified the shared underpinnings of the app to ensure it will keep working with future network stack upgrades, and tested Warp with real-world users on many networks to discover as many corner cases as possible. Along the way, the Cloudflare team invented new technologies to keep the session state stable even across multiple mobile networks.

Cloudflare introduces Warp Plus - an unlimited version of Warp

Along with Warp, the Cloudflare team has also launched Warp Plus, an unlimited version of WARP available for a monthly subscription fee. Warp Plus is faster than Warp, using Cloudflare’s Argo Smart Routing to achieve higher speeds. The official blog post states, “Routing your traffic over our network often costs us more than if we release it directly to the internet.” To cover these costs, Warp Plus will charge a monthly fee of $4.99/month or less, depending on the user's location. The Cloudflare team also added that in a few weeks they will launch a test tool within the 1.1.1.1 app to let users “see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus.”

Read Also: Cloudflare plans to go public; files S-1 with the SEC

To know more about Warp Plus, read the technical post by the Cloudflare team.

Privacy features offered by Warp and Warp Plus

The 1.1.1.1 DNS resolver app provides strong privacy protections: debug logs are kept only long enough to ensure the security of the service, and Cloudflare retains only limited transaction data for legitimate operational and research purposes.
Warp will not only maintain the 1.1.1.1 DNS protection layers but will also ensure that:

No user-identifiable log data will be written to disk
The user’s browsing data will not be sold for advertising purposes
Warp will not demand any personal information (name, phone number, or email address) to use Warp or Warp Plus
Outside auditors will regularly audit Warp’s functioning

The Cloudflare team has also notified users that the newly released Warp will still contain bugs. The blog post specifies that the most common bug currently in Warp is caused by traffic being misrouted, which makes Warp slower than non-Warp mobile internet.

Image Source: Cloudflare blog

The team has made it easy for users to report bugs: just click the little bug icon near the top of the screen in the 1.1.1.1 app, or shake the phone with the app open, to send a bug report to Cloudflare. Visit the Cloudflare blog for more information on Warp and Warp Plus.

Facebook will no longer involve third-party fact-checkers to review the political content on their platform
GNOME Foundation’s Shotwell photo manager faces a patent infringement lawsuit from Rothschild Patent Imaging
A zero-day pre-auth vulnerability is currently being exploited in vBulletin, reports an anonymous researcher


CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project

Fatema Patrawala
14 Aug 2018
3 min read
The Cloud Native Computing Foundation (CNCF) has accepted OpenMetrics, an open source specification for metrics exposition, into the CNCF Sandbox, a home for early stage and evolving cloud native projects. Google cloud engineers and other vendors had been working on this persistently for the past several months, and it has finally been accepted by CNCF. Engineers are further working on ways to support OpenMetrics in OpenCensus, a set of uniform tracing and stats libraries that work with multi-vendor services.

OpenMetrics will bring together the maturity and adoption of Prometheus and Google’s background in working with stats at extreme scale. It will also bring in the experience and needs of a variety of projects, vendors, and end-users who are aiming to move away from hierarchical monitoring and toward transmitting metrics at scale.

The open source initiative, focused on creating a neutral metrics exposition format, will provide a sound data model for the current and future needs of users. It will be embedded into a standard that is an evolution of the widely-adopted Prometheus exposition format. While there are numerous monitoring solutions available today, many do not focus on metrics and are based on old technologies with proprietary, hard-to-implement, hierarchical data models.

“The key benefit of OpenMetrics is that it opens up the de facto model for cloud native metric monitoring to numerous industry leading implementations and new adopters. Prometheus has changed the way the world does monitoring and OpenMetrics aims to take this organically grown ecosystem and transform it into a basis for a deliberate, industry-wide consensus, thus bridging the gap to other monitoring solutions like InfluxData, Sysdig, Weave Cortex, and OpenCensus. It goes without saying that Prometheus will be at the forefront of implementing OpenMetrics in its server and all client libraries. CNCF has been instrumental in bringing together cloud native communities.
We look forward to working with this community to further cloud native monitoring and continue building our community of users and upstream contributors,” says Richard Hartmann, Technical Architect at SpaceNet, Prometheus team member, and founder of OpenMetrics.

OpenMetrics contributors include AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig and Uber, among others.

“Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions. In addition to our contributions to the spec, we’ll be enabling OpenMetrics support in OpenCensus,” says Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

For more information about OpenMetrics, please visit openmetrics.io. To quickly enable trace and metrics collection from your application, please visit opencensus.io.

5 reasons why your business should adopt cloud computing
Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
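To make the exposition format concrete, here is a minimal, hand-rolled parser for the Prometheus-style text format that OpenMetrics evolves from. The sample payload and helper function are illustrative only; the real specification also covers escaping, timestamps, exemplars, and more:

```python
# Minimal parser for a Prometheus-style text exposition payload, the format
# OpenMetrics builds on. Illustrative sketch, not a spec-complete parser.

def parse_exposition(text):
    """Return {metric_name_with_labels: value} from an exposition payload,
    skipping '# HELP' and '# TYPE' metadata comments."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comments and metadata carry no sample values
        name_and_labels, value = line.rsplit(" ", 1)
        samples[name_and_labels] = float(value)
    return samples

payload = """\
# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="200"} 3
"""

metrics = parse_exposition(payload)
print(metrics['http_requests_total{method="get",code="200"}'])  # 1027.0
```

The flat `name{labels} value` shape is what distinguishes this model from the hierarchical formats the article contrasts it with.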


Azure Functions 3.0 released with support for .NET Core 3.1!

Savia Lobo
12 Dec 2019
2 min read
On 9th December, Microsoft announced that the go-live release of Azure Functions 3.0 is now available. Among the many new capabilities in this release, one notable addition is support for the newly released .NET Core 3.1 -- an LTS (long-term support) release -- and Node 12.

With users now able to build and deploy 3.0 functions in production, Azure Functions 3.0 brings new capabilities including the ability to target .NET Core 3.1 and Node 12, and a high degree of backward compatibility for existing apps running on older language versions, without any code changes.

“While the runtime is now ready for production, and most of the tooling and performance optimizations are rolling out soon, there are still some tooling improvements to come before we announce Functions 3.0 as the default for new apps. We plan to announce Functions 3.0 as the default version for new apps in January 2020,” the official announcement mentions.

While users running on earlier versions of Azure Functions will continue to be supported, the company does not plan to deprecate 1.0 or 2.0 at present. “Customers running Azure Functions targeting 1.0 or 2.0 will also continue to receive security updates and patches moving forward—to both the Azure Functions runtime and the underlying .NET runtime—for apps running in Azure. Whenever there’s a major version deprecation, we plan to provide notice at least a year in advance for users to migrate their apps to a newer version,” Microsoft mentions.

https://twitter.com/rickvdbosch/status/1204115191367114752
https://twitter.com/AzureTrenches/status/1204298388403044353

To know more about this in detail, read Azure Functions’ official documentation.

Creating triggers in Azure Functions [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
Serverless computing wars: AWS Lambdas vs Azure Functions
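In practice, a function app's runtime major version is selected through the `FUNCTIONS_EXTENSION_VERSION` app setting. As a hedged sketch (the app and resource group names are placeholders), moving an app to the 3.0 runtime with the Azure CLI looks like:

```shell
# Pin an existing function app to the Functions 3.0 runtime.
# "my-function-app" and "my-resource-group" are placeholder names.
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-resource-group \
  --settings FUNCTIONS_EXTENSION_VERSION=~3
```

The `~3` value tracks the latest 3.x runtime, mirroring how `~2` tracked 2.x releases.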

Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Vincy Davis
03 Jul 2019
3 min read
Yesterday, Youhana Naseim, Group Engineering Manager at Azure Pipelines, provided a post-mortem of the bug that caused the sqlite3 module to go missing from the Ubuntu 16.04 image for Python from May 14th. The Azure DevOps team identified the bug on May 31st and fixed it on June 26th. Naseim apologized to all the affected customers for the delay in detecting and fixing the issue.

https://twitter.com/hawl01475954/status/1134053763608530945
https://twitter.com/ProCode1/status/1134325517891411968

How the Azure DevOps team detected and fixed the issue

The Azure DevOps team upgraded the versions of Python included in the Ubuntu 16.04 image with the M151 payload. These versions of Python’s build scripts treat sqlite3 as an optional module, so the builds completed successfully despite the missing sqlite3 module. Naseim says, “While we have test coverage to check for the inclusion of several modules, we did not have coverage for sqlite3 which was the only missing module.”

The issue was first reported by a user who received the M151 deployment containing the bug, via the Azure Developer Community, on May 20th. But the Azure support team escalated it only after receiving more reports during the M152 deployment on May 31st. The support team then proceeded with the M153 deployment after posting a workaround for the issue, as the M152 deployment would take at least 10 days. Further, due to an internal miscommunication, the support team didn’t start the M153 deployment to Ring 0 until June 13th.

To safeguard the production environment, Azure DevOps rolls out changes in a progressive and controlled manner via the ring model of deployments. The team resumed deployment to Ring 1 on June 17th and reached Ring 2 by June 20th.
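The ring model mentioned above can be sketched as a progressive rollout gate: a change reaches Ring 0 first and only promotes outward once the current ring is healthy. This is an illustrative model, not Azure DevOps' actual tooling; the ring names and health checks are invented:

```python
# Illustrative sketch of ring-based progressive deployment: promote a change
# outward ring by ring, halting at the first ring that reports unhealthy.

RINGS = ["Ring 0", "Ring 1", "Ring 2"]

def rollout(is_ring_healthy):
    """Deploy through the rings in order, stopping at the first unhealthy
    one. Returns the list of rings that received the deployment."""
    deployed = []
    for ring in RINGS:
        deployed.append(ring)
        if not is_ring_healthy(ring):
            break  # halt the rollout; later rings keep the old version
    return deployed

# Healthy everywhere: the change reaches all rings.
print(rollout(lambda ring: True))              # ['Ring 0', 'Ring 1', 'Ring 2']
# Regression detected in Ring 1: Ring 2 is never touched.
print(rollout(lambda ring: ring != "Ring 1"))  # ['Ring 0', 'Ring 1']
```

The trade-off the post-mortem highlights is latency: each ring adds days of soak time, which is why the full M153 rollout took until June 26th.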
Finally, after a few failures, the team fully completed the M153 deployment by June 26th.

Azure’s future workarounds to deliver timely fixes

The Azure team has set out plans to improve its deployment and hotfix processes with the aim of delivering timely fixes. The long-term plan is to give customers the ability to revert to a previous image as a quick workaround for issues introduced in new images. The medium- and short-term plans are given below:

Medium-term plans
Add the ability to better compare what changed on the images, to catch any unexpected discrepancies that the test suite might miss.
Increase the speed and reliability of the deployment process.

Short-term plans
Build a full CI pipeline for image generation, verifying images daily.
Add test coverage for all modules in the Python standard library, including sqlite3.
Improve communication around support escalations so issues are raised more quickly.
Add telemetry, making it possible to detect and diagnose issues more quickly.
Implement measures that enable reverting to prior image versions quickly, to mitigate issues faster.

Visit the Azure DevOps status site for more details.

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32


Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Microsoft published a white paper on its Decentralized Identity (DID) solution. These identities are user-generated, self-owned, globally unique identifiers rooted in decentralized systems. Over the past 18 months, Microsoft has been working towards building a digital identity system using blockchain and other distributed ledger technologies. With these identities, Microsoft aims to enhance personal privacy, security, and control.

Microsoft has been actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. They are working with these groups to identify and develop critical standards. Together they plan to establish a unified, interoperable ecosystem that developers and businesses can rely on to build more user-centric products, applications, and services.

Why is decentralized identity (DID) needed?

Nowadays, people use digital identities at work, at home, and across every app, service, and device. Access to these digital identities, such as email addresses and social network IDs, can be removed at any time by the email provider, social network provider, or other external parties. Users also grant permissions to numerous apps and devices, which calls for a high degree of vigilance in tracking who has access to what information.

This standards-based decentralized identity system empowers users and organizations to have greater control over their data. It addresses the problem of users granting broad consent to countless apps and services by providing a secure, encrypted digital hub where they can store their identity data and easily control access to it.

What it means for users, developers, and organizations
Benefits for users
It enables all users to own and control their identity
Provides secure experiences that incorporate privacy by design
Design user-centric apps and services

Benefits for developers
It allows developers to provide users personalized experiences while respecting their privacy
Enables developers to participate in a new kind of marketplace, where creators and consumers exchange directly

Benefits for organizations
Organizations can deeply engage with users while minimizing privacy and security risks
Provides a unified data protocol to organizations to transact with customers, partners, and suppliers
Improves transparency and auditability of business operations

To know more about decentralized identity, read the white paper published by Microsoft.

Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google’s Stream news last week
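The identifiers at the heart of this ecosystem follow the W3C DID syntax, `did:<method>:<method-specific-id>`. A minimal validator sketch (the example method and identifier values are illustrative, and the full spec also constrains parameters, paths, and fragments):

```python
# Minimal check of the W3C DID syntax: did:<method>:<method-specific-id>.
# Illustrative sketch only; the real grammar is stricter than this regex.
import re

DID_PATTERN = re.compile(r"^did:([a-z0-9]+):(.+)$")

def parse_did(did):
    """Return (method, method_specific_id), or None if not DID-shaped."""
    match = DID_PATTERN.match(did)
    return match.groups() if match else None

# A well-formed DID resolves to its method and method-specific id.
print(parse_did("did:example:123456789abcdefghi"))  # ('example', '123456789abcdefghi')
# Ordinary URLs are not DIDs.
print(parse_did("https://example.com/id"))          # None
```

The method segment tells resolvers which decentralized system (e.g. which ledger) anchors the identifier, which is what makes DIDs provider-independent.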