Tech News - Cloud & Networking

376 Articles

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can be run both in the cloud and on-premises. The service supports machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. Deep Learning Containers by AWS support the TensorFlow and Apache MXNet frameworks, whereas Google's ML containers don't support Apache MXNet but come with pre-installed PyTorch, TensorFlow, scikit-learn and R.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come along with various tools used for running deep learning algorithms. These tools include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text. Google Kubernetes Engine clusters are also supported, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images now work on cloud and on-premises

The Docker images work on cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. A short sketch at the end of this piece shows what pulling and running one of these images looks like.

Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
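To make the local-prototyping workflow described above concrete, here is a minimal sketch using the Docker SDK for Python that pulls one of these images and starts a local Jupyter container. The image name follows Google's public deeplearning-platform-release registry, but treat the exact tag as an assumption and check the documentation for current images.

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Image name is an assumption based on Google's public
# deeplearning-platform-release registry; verify the current tag.
IMAGE = "gcr.io/deeplearning-platform-release/tf2-cpu"

client.images.pull(IMAGE)

# Run the container, exposing the preconfigured Jupyter server
# on localhost:8080.
container = client.containers.run(
    IMAGE,
    ports={"8080/tcp": 8080},
    detach=True,
)
print("Jupyter container started:", container.short_id)
```

The same image can later be pushed to a registry and run unchanged on GKE or AI Platform, which is the portability the announcement emphasizes.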


VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Savia Lobo
27 Jun 2018
2 min read
VMware recently announced its Kubernetes-as-a-Service offering by launching VMware Kubernetes Engine (VKE), which provides a multi-cloud experience. VKE is a fully managed service offered through a SaaS model. It allows customers to use Kubernetes easily without having to worry about the deployment and operation of Kubernetes clusters.

Kubernetes lets users manage clusters of containers while also making it easier to move applications between public hosted clouds. By adding Kubernetes on cloud, VMware offers a managed service that will use Kubernetes containers with reduced complexity. VMware's Kubernetes engine will face stiff competition from Google Cloud and Microsoft Azure, among others. Recently, Rackspace also announced its partnership with HPE to develop a new Kubernetes-based cloud offering.

VMware Kubernetes Engine (VKE) features include:

VMware Smart Cluster

VMware Smart Cluster is the selection of compute resources that constantly optimizes resource usage, provides high availability, and reduces cost. It also enables the management of cost-effective, scalable Kubernetes clusters optimized to application requirements. Users can also have role-based access and visibility only to their predefined environment with the smart cluster.

Fully Managed by VMware

VMware Kubernetes Engine (VKE) is fully managed by VMware. It ensures that clusters always run in an efficient manner, with multi-tenancy, seamless Kubernetes upgrades, high availability, and security.

Security by default in VKE

VMware Kubernetes Engine is highly secure, with features like:
- Multi-tenancy
- Deep policy control
- Dedicated AWS accounts per organization
- Logical network isolation
- Integrated identity and access management with single sign-on

Global Availability

VKE has a region-agnostic user interface and is available across three AWS regions, US-East1, US-West2, and EU-West1, giving users the choice of which region to run clusters in.

Read full coverage about VMware Kubernetes Engine (VKE) on the official website.

Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Hortonworks partner with Google Cloud to enhance their Big Data strategy


Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Savia Lobo
05 Nov 2018
2 min read
Microsoft developer David Fowler revealed ‘ProcDump for Linux’, a Linux version of the ProcDump Sysinternals tool, over the weekend on November 3.

ProcDump is a Linux reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. It provides a convenient way for Linux developers to create core dumps of their applications based on performance triggers.

Requirements for ProcDump

The tool currently supports Red Hat Enterprise Linux / CentOS 7, Fedora 26, Mageia 6 and Ubuntu 14.04 LTS, with other versions being tested. It also requires gdb >= 7.6.1 and zlib (build-time only).

Limitations of ProcDump

- Runs on Linux kernels version 3.5+
- Does not have full feature parity with the Windows version of ProcDump; specifically, the stay-alive functionality and custom performance counters are missing

Installing ProcDump

ProcDump can be installed using two methods: the package manager, which is the preferred method, or via a .deb package. To know more about ProcDump in detail, visit its GitHub page.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs
Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers


Open Mainframe Project introduces Zowe: A new open-source framework to simplify development on z/OS, supported by IBM

Bhagyashree R
28 Aug 2018
3 min read
IBM, with its partners Rocket Software and CA Technologies, has announced the launch of Zowe at the ongoing Open Source Summit in Vancouver, Canada. It is the first z/OS open source project, and is part of the Linux Foundation's Open Mainframe Project community.

Why is Zowe introduced?

The rapid technology advancements and rising expectations in user experience demand more productive and better integrated capabilities for z/OS, an operating system for IBM mainframes. Zowe enables delivery of such an environment through an extensible open source framework. It aims to create an ecosystem of Independent Software Vendors (ISVs), system integrators, clients, and end users. By using it, development and operations teams can securely manage, control, script and develop on the mainframe like any other cloud platform.

What are its components?

The four main components of Zowe are: the Explorer server, API Mediation Layer, zLUX, and Zowe CLI.

Source: Zowe

Zowe APIs and Explorers

z/OS Management Facility (z/OSMF) supports the use of REST APIs, which are public APIs that your application can use to work with system resources and extract system data. With the help of these REST APIs, Zowe submits jobs, works with the Job Entry Subsystem (JES) queue, and manipulates UNIX System Services (USS) or Multiple Virtual Storage (MVS) datasets (a sketch of a z/OSMF REST call appears at the end of this piece). Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. They create an extensible z/OS framework that provides new z/OS REST services to transform enterprise tools and DevOps processes to incorporate new technology, languages, and modern workflows.

Zowe API Mediation Layer

The following are the key components of the API Mediation Layer:

- API Gateway: built using Netflix Zuul and Spring Boot technology. Its purpose is to forward API requests to the appropriate corresponding service through the microservice endpoint UI.
- Discovery Service: built on Eureka and Spring Boot technology. It acts as the central point in the API Gateway that accepts announcements of REST services, and is a repository for active services.
- API Catalog: used to view the services running in the API Mediation Layer. You can also view the API documentation corresponding to a service.

Zowe Web UI

The Web UI, named zLUX, modernizes and simplifies working on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode. It gives users a unifying experience where various applications can work together.

Zowe Command Line Interface (CLI)

Zowe CLI allows user interaction with z/OS from different platforms. From these platforms, which can be cloud or distributed systems, users can submit jobs, issue TSO and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents with the help of Zowe CLI.

Currently, Zowe is available in beta and is not intended for production use. The Zowe Leadership Committee is targeting a stable release by the end of the year. To know more about the launch of Zowe, refer to IBM's announcement on their official website.

IBM Files Patent for "Managing a Database Management System using a Blockchain Database"
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
IBM launches Nabla containers: A sandbox more secure than Docker containers
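Since Zowe builds on the z/OSMF REST APIs described above, a rough sketch of a programmatic job query may help. This is illustrative only: it assumes a reachable z/OSMF host, valid credentials, and the documented /zosmf/restjobs interface; the host, credentials, and response field names are assumptions to verify against your installation's documentation.

```python
import requests

# Hypothetical z/OSMF host and credentials, for illustration only.
ZOSMF_HOST = "https://mainframe.example.com"
AUTH = ("IBMUSER", "secret")

# List jobs via the z/OSMF REST jobs API that Zowe layers on top of.
resp = requests.get(
    f"{ZOSMF_HOST}/zosmf/restjobs/jobs",
    params={"owner": "*", "prefix": "*"},
    auth=AUTH,
    headers={"X-CSRF-ZOSMF-HEADER": ""},  # CSRF header z/OSMF expects on REST calls
    verify=False,  # many test LPARs use self-signed certificates
)
resp.raise_for_status()

# Assumed response shape: a JSON array of job objects.
for job in resp.json():
    print(job["jobname"], job["jobid"], job["status"])
```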


ISPA nominated Mozilla in the “Internet Villain” category for DNS over HTTPS push, withdrew nominations and category after community backlash

Fatema Patrawala
11 Jul 2019
6 min read
On Tuesday, the Internet Services Providers' Association (ISPA), the UK's trade association for providers of internet services, announced that the nomination of Mozilla Firefox had been withdrawn from the "Internet Villain" category. This decision came after a global backlash to the nomination of Mozilla for its DNS-over-HTTPS (DoH) push. ISPA withdrew the Internet Villain category as a whole from the ISPA Awards 2019 ceremony, which will be held today in London.

https://twitter.com/ISPAUK/status/1148636700467453958

The official blog post reads, "Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards. In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation, leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn't reflect ISPA's genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year."

Mozilla Firefox, the preferred browser for a lot of users, encourages privacy protection and feature options to keep one's Internet activity as private as possible. One of its recently proposed features, DoH (DNS-over-HTTPS), which is still in the testing phase, didn't receive a good response from the ISPA trade association. Hence, the ISPA decided to nominate Mozilla as one of the "Internet Villains" among the nominees for 2019. In their announcement, the ISPA mentioned that Mozilla is one of the Internet Villains for supporting DoH (DNS-over-HTTPS).

https://twitter.com/ISPAUK/status/1146725374455373824

Mozilla responded to this announcement by saying that this is one way to know that they are fighting the good fight.

https://twitter.com/firefox/status/1147225563649564672

On the other hand, this announcement garnered a lot of criticism from the community. They rebuked ISPA for promoting online censorship and enabling rampant surveillance. Additionally, there were comments calling ISPA the Internet Villain in this scenario. Some of the tweet responses are given below:

https://twitter.com/larik47/status/1146870658246352896
https://twitter.com/gon_dla/status/1147158886060908544
https://twitter.com/ultratethys/status/1146798475507617793

Along with Mozilla, the Article 13 Copyright Directive and United States President Donald Trump also appeared in the nominations list. Here's how ISPA explained it in their announcement:

"Mozilla – for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.
Article 13 Copyright Directive – for threatening freedom of expression online by requiring 'content recognition technologies' across platforms
President Donald Trump – for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to protect national security"

Why are the ISPs pushing back against DNS-over-HTTPS?

DoH means that your DNS requests are encrypted over an HTTPS connection. Traditionally, DNS requests are unencrypted, and your DNS provider or ISP can monitor and control your browsing activity. Without DoH, blocking and content filtering is easy to enforce through your DNS provider, or the ISP can do it when they want. DoH takes that out of the equation, and hence you get a private browsing experience (a minimal DoH lookup appears at the end of this piece).

Admittedly, big broadband ISPs and politicians are concerned that large-scale third-party deployments of DoH, which encrypts DNS requests using the common HTTPS protocol for websites (DNS being the system that turns human-readable domain names into IP addresses), could disrupt their ability to censor, track and control related internet services. That is, however, a particularly narrow way of looking at the technology, because at its core DoH is about protecting user privacy and making internet connections more secure. As a result, DoH is often praised and widely supported by the wider internet community.

Mozilla is not alone in pushing DoH, but it found itself singled out by the ISPA because of its proposal to enable the feature by default within Firefox, which is yet to happen. Google is also planning to introduce its own DoH solution in its Chrome browser. The result could be that ISPs lose a lot of their control over DNS, breaking their internet censorship plans.

Is DoH useful for internet users? If so, how?

On one side of the coin, DoH lets users bypass any content filters enforced by the DNS provider or the ISP. So it is a good thing that it will put a stop to Internet censorship, and DoH will help in this. But on the other side, if you are a parent, you can no longer set content filters if your kid uses DoH in Mozilla Firefox. Potentially, DoH could be a way for some to bypass parental controls, which could be a bad thing. This is the reason the ISPA gave for nominating Mozilla for the Internet Villain category: it says that DNS-over-HTTPS will bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK. Also, using DoH means that you can no longer use the local hosts file, in case you are using it for ad blocking or any other reason.

The Internet community criticized the way ISPA handled the backlash and withdrew the category as a whole. One of the user comments on Hacker News read, "You have to love how all their "thoughtful criticisms" of DNS over HTTPS have nothing to do with the things they cited in their nomination of Mozilla as villain. Their issue was explicitly "bypassing UK filtering obligations" not that load of flaming horseshit they just pulled out of their ass in response to the backlash."

https://twitter.com/VModifiedMind/status/1148682124263866368

Highlights from Mary Meeker's 2019 Internet trends report
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
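To see what a DoH lookup actually involves, here is a minimal sketch that resolves a hostname through Cloudflare's public DNS-over-HTTPS endpoint using its JSON API. The endpoint and the application/dns-json content type are part of Cloudflare's documented 1.1.1.1 service; the hostname is just an example.

```python
import requests

# Query Cloudflare's DNS-over-HTTPS JSON endpoint for an A record.
# The DNS question travels inside an ordinary HTTPS request, so an
# on-path observer sees only a TLS connection to cloudflare-dns.com,
# not the name being resolved.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
)
resp.raise_for_status()

for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```

Because the query rides over port 443 like any other web traffic, DNS-based filtering at the ISP never sees it, which is exactly the property the dispute above is about.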


Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available

Amrata Joshi
22 Oct 2018
3 min read
Last week, the team at Opus announced the general availability of Opus Audio Codec version 1.3. Opus 1.3 comes with a new set of features: a recurrent neural network, a more reliable speech/music detector, ambisonics support, more efficient memory use, compatibility with RFC 6716, and a lot more. Opus is an open and royalty-free audio codec, which is highly useful for all audio applications, right from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is included in all major browsers and mobile operating systems, is used for a wide range of applications, and is the default WebRTC codec.

New features in Opus Audio Codec 1.3

Reliable speech/music detector powered by machine learning

Opus 1.3 brings a new speech/music detector. As it is based on a recurrent neural network, it is simpler and more reliable than the detector used in version 1.1. The speech/music detector in earlier versions was based on a simple (non-recurrent) neural network, followed by an HMM-based layer to combine the neural network results over time. Opus 1.3 introduces a new kind of recurrent unit, the Gated Recurrent Unit (GRU). The GRU does not just learn how to use its input and memory at each time step; it also learns how and when to update its memory. This, in turn, helps it remember information for a longer period of time (a small numeric sketch of a GRU step appears at the end of this piece).

Mixed content encoding gets better

Mixed content encoding, especially at bit rates below 48 kb/s, gets more convenient as the new detector helps improve the performance of Opus. Developers will see a clear improvement in speech encoding at lower bit rates, both for mono and stereo.

Encode 3D audio soundtracks for VR easily

This release comes with ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

Opus detector won't take much of your space

The Opus detector has just 4986 weights (that fit in less than 5 KB) and takes about 0.02% of a CPU to run in real time, instead of thousands of neurons and millions of weights running on a GPU.

Additional updates

Improvements in security/hardening, the Voice Activity Detector (VAD), and speech/music classification using an RNN are further add-ons. The major bug fixes in this release are the CELT PLC and bandwidth detection fixes.

Read more about the release on Mozilla's official website. Also, check out a demo for more details.

YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
Google releases Oboe, a C++ library to build high-performance Android audio apps
How to perform Audio-Video-Image Scraping with Python
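For readers unfamiliar with GRUs, the gating behavior described above can be shown in a few lines of numpy. This is a generic, minimal GRU step for illustration, not Opus's actual detector; the dimensions and random weights are placeholders, and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Wr, Wh):
    """One GRU time step: the gates decide how much of the old
    memory h to keep and how much to overwrite with new input x."""
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)   # update gate: how and when to rewrite memory
    r = sigmoid(Wr @ xh)   # reset gate: how much old memory to consult
    h_cand = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate memory
    return (1 - z) * h + z * h_cand

# Tiny example: 8-dimensional input frame, 4-dimensional hidden state.
rng = np.random.default_rng(0)
x, h = rng.standard_normal(8), np.zeros(4)
Wz, Wr, Wh = (rng.standard_normal((4, 12)) for _ in range(3))
h = gru_step(x, h, Wz, Wr, Wh)
print(h)
```

When z stays near zero the old state passes through almost untouched, which is how a GRU can carry information across many audio frames with so few weights.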

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures, like cloud-native, microservices, and serverless.

The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform.' This is essentially a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation (a brief Admin API sketch appears at the end of this piece).

CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?

According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stresses that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov.

Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "as Yahoo! Japan shifts to microservices, we needed more than just an API gateway – we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours. Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0

Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs - everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of SREs - site reliability engineers. It's a tool designed to manage the relationship between various services, and to ensure they not only interact with each other in the way they should, but that they do so with minimum downtime.

The Kong team appear to have a huge amount of confidence in the launch of the platform - the extent to which they can grow their customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.

Read next: How Gremlin is making chaos engineering accessible [Interview]
Is the 'commons clause' a threat to open source?
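As a concrete taste of what managing APIs through Kong looks like, here is a small sketch against Kong's Admin API, which listens on port 8001 by default, registering an upstream service and a route for it. The service name and upstream URL are made up for the example; consult the Kong documentation for the full set of fields.

```python
import requests

ADMIN = "http://localhost:8001"  # Kong Admin API's default address

# Register an upstream service with the gateway.
svc = requests.post(
    f"{ADMIN}/services",
    data={"name": "orders", "url": "http://orders.internal:8080"},
)
svc.raise_for_status()

# Expose the service to consumers under the /orders path.
route = requests.post(
    f"{ADMIN}/services/orders/routes",
    data={"paths[]": "/orders"},
)
route.raise_for_status()

print("Proxy now forwards /orders ->", svc.json()["host"])
```

Plugins for authentication, rate limiting and the like attach to the same objects, which is the "single point for proxying and routing" pattern Fukuda describes above.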


Confluent, an Apache Kafka service provider, adopts a new license to fight against cloud service providers

Natasha Mathur
26 Dec 2018
4 min read
Software firms limiting their licenses to prevent cloud service providers from exploiting their open source code is a common trend these days. One such software firm to have joined this move is Confluent, an Apache Kafka service provider, which announced its new Confluent Community License two weeks ago. The new license is aimed at allowing users to download, modify and redistribute the code, without letting them provide the software as a service (SaaS).

"What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We'll still be doing all development out in the open and accepting pull requests and feature suggestions", says Jay Kreps, CEO, Confluent.

The new license, however, will have no effect on Apache Kafka, which remains under the Apache 2.0 license, and Confluent will continue to contribute to it.

Kreps pointed out that leading cloud providers such as Amazon, Microsoft, Alibaba, and Google all differ today in the way they approach open source. Some of these major cloud providers partner with the open source companies, offering hosted versions of their SaaS. Then there are other cloud providers that take the open source code, implement it into their cloud offering, and push all of their investments into differentiated proprietary offerings. For instance, Michael Howard, CEO, MariaDB Corp., called Amazon's tactics "the worst behavior" that he has seen in the software industry, due to a loophole in its licensing. Howard also mentioned that the cloud giant is "strip mining by exploiting the work of a community of developers who work for free", as first reported by Silicon Angle.

One option Kreps considers is for open source software firms to focus on building more proprietary software and "pull back" from their open source investments. "But we think the right way to build fundamental infrastructure layers is with open code. As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change", mentions Kreps.

The Confluent license change follows MongoDB, which switched to the Server Side Public License (SSPL) this October in order to prevent major cloud providers from misusing its open source code. MongoDB's decision to change its software license was sparked by the fact that cloud vendors who are not responsible for the development of a piece of software "capture all the value" for it without contributing much back to the community. Another reason was that many cloud providers had started to take MongoDB's open-source code in order to offer a hosted commercial version of its database without following the open-source rules. The license change helps create "an incredible opportunity to foster a new wave of great open source server-side software", said Eliot Horowitz, CTO and co-founder, MongoDB. Horowitz also said that he hopes the change will "protect open source innovation".

MongoDB followed the path of the "Commons Clause" license that was first adopted by Redis Labs. The Commons Clause started out as an initiative by a group of top software firms to protect their rights. It is added to existing open source software licenses to produce a new, combined software license, and the combined license puts a limit on the commercial sale of the software.

All of these efforts are aimed at making sure that open source communities do not get taken advantage of by the leading cloud providers. As Kreps points out, "We think this is a positive change and one that can help ensure small open source communities aren't acting as free and unsustainable R&D (research & development) for tech giants that put sustaining resources only into their own differentiated proprietary offerings".

Neo4j Enterprise Edition is now available under a commercial license
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license


Google introduces E2, flexible, performance-driven and cost-effective VMs for Google Compute Engine

Vincy Davis
12 Dec 2019
3 min read
Yesterday, June Yang, the director of product management at Google, announced a new beta version of the E2 VMs for Google Compute Engine. It features dynamic resource management that delivers reliable performance with flexible configurations and a better total cost of ownership (TCO) than any other VMs in Google Cloud. According to Yang, "E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases, and development environments." She further adds, "For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost."

What are the key features offered by E2 VMs

E2 VMs are built to offer 31% savings compared to N1, the lowest total cost of ownership of any VM in Google Cloud. Thus, the VMs deliver sustained performance at a consistently low price point. Unlike comparable options from other cloud providers, E2 VMs can support a high CPU load without complex pricing.

E2 VMs can be tailored up to 16 vCPUs and 128 GB of memory, provisioning only the resources the user needs, with the ability to use custom machine types. Custom machine types are ideal for workloads that require more processing power or more memory but don't need all of the upgrades provided by the next machine type level (see the sketch at the end of this piece for the naming scheme).

How E2 VMs achieve optimal efficiency

Large, efficient physical servers

E2 VMs automatically take advantage of continual improvements in machines by flexibly scheduling across the zone's available CPU platforms. With new hardware upgrades, E2 VMs are live-migrated to newer and faster hardware, which allows them to automatically take advantage of these new resources.

Intelligent VM placement

For E2 VMs, Borg, Google's cluster management system, predicts how a newly added VM will perform on a physical server by observing the CPU, RAM, memory bandwidth, and other resource demands of the VMs already running there. Borg then searches across thousands of servers to find the best location to add the VM. These observations by Borg ensure that a newly placed VM will be compatible with its neighbors and will not experience interference from them.

Performance-aware live migration

After VMs are placed on a host, their performance is continuously monitored, so that if there is an increase in demand for resources, live migration can be used to transparently shift E2 load to other hosts in the data center.

A new hypervisor CPU scheduler

In order to meet E2 VMs' performance goals, Google has built a custom CPU scheduler with better latency and co-scheduling behavior than Linux's default scheduler. The new scheduler yields sub-microsecond average wake-up latencies with fast context switching, which helps keep the overhead of dynamic resource management negligible for nearly all workloads.

https://twitter.com/uhoelzle/status/1204972503921131521

Read the official announcement to know the custom VM shapes and predefined configurations offered by E2 VMs. You can also read part 2 of the announcement to know more about dynamic resource management in E2 VMs.

Why use JVM (Java Virtual Machine) for deep learning
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
EU antitrust regulators are investigating Google's data collection practices, reports Reuters
Google will not support Cloud Print, its cloud-based printing solution starting 2021
Google Chrome 'secret' experiment crashes browsers of thousands of IT admins worldwide
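To illustrate the custom machine types mentioned above: Compute Engine encodes a custom shape directly in the machine-type name, and a sketch like the following shows how such a name could be assembled and sanity-checked before an API call. The e2-custom-{vCPUs}-{memoryMB} pattern follows Google's documented naming for custom machine types, but the limits enforced here, and the project and zone values, are illustrative assumptions.

```python
def e2_custom_machine_type(project: str, zone: str,
                           vcpus: int, memory_gb: float) -> str:
    """Build a Compute Engine machineType path for an E2 custom shape.

    Assumes the E2 limits cited in the announcement: up to 16 vCPUs
    and 128 GB of memory, with memory expressed in MB in the name.
    """
    memory_mb = int(memory_gb * 1024)
    if not 1 <= vcpus <= 16:
        raise ValueError("E2 custom VMs support 1-16 vCPUs")
    if memory_mb > 128 * 1024:
        raise ValueError("E2 custom VMs support at most 128 GB of memory")
    return (f"projects/{project}/zones/{zone}/machineTypes/"
            f"e2-custom-{vcpus}-{memory_mb}")

# Example: a 4 vCPU / 8 GB shape in a hypothetical project and zone.
print(e2_custom_machine_type("my-project", "us-central1-a", 4, 8))
# -> projects/my-project/zones/us-central1-a/machineTypes/e2-custom-4-8192
```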


Cloudflare adds Warp, a free VPN to 1.1.1.1 DNS app to improve internet performance and security

Natasha Mathur
02 Apr 2019
3 min read
Cloudflare announced yesterday that it is adding Warp, a free VPN, to the 1.1.1.1 DNS resolver app. The Cloudflare team states that it began planning to integrate the 1.1.1.1 app with Warp's performance and security tech about two years ago.

The 1.1.1.1 app was released in November last year for iOS and Android. The mobile app included features such as VPN support, which helped move mobile traffic towards the 1.1.1.1 DNS servers, thereby helping improve speeds. Now, with the Warp integration, the 1.1.1.1 app will speed up mobile data by using the Cloudflare network to resolve DNS queries at a faster pace.

With Warp, all unencrypted connections are encrypted automatically by default. Warp comes with end-to-end encryption and doesn't require users to install a root certificate to observe encrypted Internet traffic. For cases when you browse the unencrypted Internet through Warp, Cloudflare's network can cache and compress content to improve performance and decrease your data usage and mobile carrier bill.

"In the 1.1.1.1 App, if users decide to enable Warp, instead of just DNS queries being secured and optimized, all Internet traffic is secured and optimized. In other words, Warp is the VPN for people who don't know what V.P.N. stands for", states the Cloudflare team.

Apart from that, Warp also offers strong performance and reliability. Warp is built around a UDP-based protocol that has been optimized for the mobile Internet. It also makes use of Cloudflare's massive global network, allowing Warp to connect with servers within milliseconds. Testing has shown that Warp increases internet performance, and reliability is also significantly improved: Warp cannot eliminate mobile dead spots, but it is very efficient at recovering from packet loss. Warp doesn't increase your battery usage, as it is built around WireGuard, a new and efficient VPN protocol.

The basic version of Warp has been added to the 1.1.1.1 app for free. However, the Cloudflare team will charge for Warp+, a premium version of Warp that will be even faster thanks to Argo technology. A low monthly fee will be charged for Warp+, varying by region. The 1.1.1.1 app with Warp will also keep all the privacy protections launched formerly with the 1.1.1.1 app.

The Cloudflare team states that the 1.1.1.1 app with Warp is still in the works, and although sign-ups for Warp aren't open yet, Cloudflare has started a waiting list where you can "claim your place" by downloading the 1.1.1.1 app or by updating the existing app. Once the service is available, you'll be notified.

"Our whole team is proud that today, for the first time, we've extended the scope of that mission meaningfully to the billions of other people who use the Internet every day", states the Cloudflare team.

For more information, check out the official Warp blog post.

Cloudflare takes a step towards transparency by expanding its government warrant canaries
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Amazon launches TLS Termination support for Network Load Balancer

Bhagyashree R
25 Jan 2019
2 min read
Starting yesterday, AWS Network Load Balancers (NLBs) support TLS/SSL. This new feature simplifies the process of building secure web applications by allowing users to make use of TLS connections that terminate at an NLB. The support is fully integrated with AWS PrivateLink and is also supported by AWS CloudFormation.

https://twitter.com/colmmacc/status/1088510453767000064

Here are some of the features and benefits it comes with (a brief boto3 sketch appears at the end of this piece):

Simplified management

Using TLS at scale requires extra management work, such as distributing the server certificate to each backend server. It also increases the attack surface due to the presence of multiple copies of the certificate. This TLS/SSL support provides a central management point for your certificates by integrating with AWS Certificate Manager (ACM) and Identity and Access Management (IAM).

Improved compliance

This new feature provides the flexibility of predefined security policies. Developers can use these built-in security policies to specify the cipher suites and protocol versions that are acceptable to their application. This helps if you are going for PCI or FedRAMP compliance, and also allows you to achieve a perfect TLS score.

Classic upgrade

Users who are currently using a Classic Load Balancer for TLS termination can switch to NLB, which will help them scale quickly in case of increased load. Users will also be able to make use of a static IP address for their NLB and log the source IP address of requests.

Access logs

The support allows users to enable access logs for their NLBs and direct them to the S3 bucket of their choice. These logs document information about the TLS protocol version, cipher suite, connection time, handshake time, and more.

To read more in detail, check out Amazon's announcement.

Amazon is reportedly building a video game streaming service, says Information
Amazon's Ring gave access to its employees to watch live footage of the customers, The Intercept reports
AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more
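For a sense of what TLS termination on an NLB looks like in practice, here is a minimal boto3 sketch that attaches a TLS listener, backed by an ACM certificate, to an existing load balancer. The ARNs are placeholders, and the security policy name is one of the predefined policies mentioned above; substitute your own resources.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs for an existing NLB, target group, and ACM certificate.
LB_ARN = "arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/..."
TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-targets/..."
CERT_ARN = "arn:aws:acm:...:certificate/..."

# Create a TLS listener on port 443: the NLB terminates TLS using the
# ACM-managed certificate and a predefined security policy, then
# forwards the traffic to the target group.
resp = elbv2.create_listener(
    LoadBalancerArn=LB_ARN,
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)
print(resp["Listeners"][0]["ListenerArn"])
```

Because the certificate lives in ACM rather than on each backend, rotation happens in one place, which is the central-management benefit described above.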


Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product that's bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker.

Chaos engineering and containers have always been closely related - arguably, the loosely coupled architectural style of modern software driven by containers has, in turn, led to an increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery will do two things: it will make it easier for engineers to identify specific Docker containers, but more importantly, it will also allow them to simulate attacks or errors within those containerized environments. The real benefit of this is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a 'chaos test' on can ordinarily be very challenging and time-consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." This new feature could save the engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?

As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way when it comes to containerization - its market share growing healthily - making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy - this year's Skill Up report found that it remains on the periphery of many developers' awareness. However, that could quickly change, and it appears that Gremlin is working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.


Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference - Chaos Conf. Not only has the company raised $18 million in its series B funding round, it has also launched a brand new feature. Application Level Fault Injection - ALFI - brings a whole new dimension to the Gremlin platform, as it allows engineering teams to run resiliency tests - or 'chaos experiments' - at the application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (and containers are only a recent addition).

Bringing chaos engineering to serverless applications

One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means that Gremlin will now be able to expand its use cases and continue to move forward in its broader mission to help engineering teams improve the resiliency of their software in a manageable and accessible way.

Matt Fornaciari, Gremlin CTO and co-founder, said: "With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It's a tough problem to solve because the host is abstracted and it's a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn't possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services."

One of the great benefits of ALFI is that it should help engineers tackle threats that might be missed if you focus only on infrastructure. Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering

It would seem that Gremlin is about to embark on a new chapter. But what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend that is still in an emergent phase. If Gremlin can develop a product that not only makes chaos engineering relatively accessible but also palatable for those making technical decisions, we might start to see things changing.

It's clear that Redpoint Ventures, the VC firm leading Gremlin's series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tunguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We're thrilled to join them on this journey."

Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts support for software worth $10 trillion

Savia Lobo
09 May 2019
11 min read
The 3-day Red Hat Summit 2019 kicked off yesterday at the Boston Convention and Exhibition Center, United States. Since yesterday, there have been a lot of exciting announcements, including a collaboration between Red Hat and Microsoft, with Satya Nadella, Microsoft's CEO, coming over to announce it. Red Hat also announced the release of Red Hat Enterprise Linux 8, an IDC study predicting that software running on Red Hat Enterprise Linux will contribute to more than $10 trillion in global business revenues in 2019, and much more. Let us have a look at each of these announcements in brief.

Azure Red Hat OpenShift: A Red Hat and Microsoft collaboration

The latest Red Hat and Microsoft collaboration, Azure Red Hat OpenShift, must be important - Microsoft's CEO himself came across from Seattle to present it. Red Hat had already brought OpenShift to Azure last year; this is the next step in its partnership with Microsoft. The new Azure Red Hat OpenShift combines Red Hat's enterprise Kubernetes platform OpenShift (running on Red Hat Enterprise Linux (RHEL)) with Microsoft's Azure cloud.

With Azure Red Hat OpenShift, customers can combine Kubernetes-managed, containerized applications into Azure workflows. In particular, the two companies see this pairing as a road forward for hybrid-cloud computing. Paul Cormier, President of Products and Technologies at Red Hat, said, "Azure Red Hat OpenShift provides a consistent Kubernetes foundation for enterprises to realize the benefits of this hybrid cloud model. This enables IT leaders to innovate with a platform that offers a common fabric for both app developers and operations."

Some features of Azure Red Hat OpenShift include:

- Fully managed clusters with master, infrastructure and application nodes managed by Microsoft and Red Hat; plus, no VMs to operate and no patching required.
- Regulatory compliance provided through compliance certifications similar to other Azure services.
- Enhanced flexibility to more freely move applications from on-premises environments to the Azure public cloud, via the consistent foundation of OpenShift.
- Greater speed to connect to Azure services from on-premises OpenShift deployments.
- Extended productivity with easier access to Azure public cloud services such as Azure Cosmos DB, Azure Machine Learning and Azure SQL DB for building the next generation of cloud-native enterprise applications.

According to the official press release, "Microsoft and Red Hat are also collaborating to bring customers containerized solutions with Red Hat Enterprise Linux 8 on Azure, Red Hat Ansible Engine 2.8 and Ansible Certified modules. In addition, the two companies are working to deliver SQL Server 2019 with Red Hat Enterprise Linux 8 support and performance enhancements."

Red Hat Enterprise Linux 8 (RHEL 8) is now generally available

Red Hat Enterprise Linux 8 (RHEL 8) gives a consistent OS across public, private, and hybrid cloud environments. It also provides users with version choice, long life-cycle commitments, a robust ecosystem of certified hardware, software, and cloud partners, and now comes with built-in management and predictive analytics.

Features of RHEL 8

Support for the latest and emerging technologies

RHEL 8 is supported across different architectures and environments, giving users a consistent and stable OS experience. This helps them adapt to emerging tech trends such as machine learning, predictive analytics, Internet of Things (IoT), edge computing, and big data workloads.
This is helped along by hardware innovations like GPUs, which can assist machine learning workloads. RHEL 8 supports deploying and managing GPU-accelerated NGC containers on Red Hat OpenShift. These AI containers deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet, and others. Also, NVIDIA's DGX-1 and DGX-2 servers are RHEL certified and are designed to deliver powerful solutions for complex AI challenges.

Introduction of Application Streams

RHEL 8 introduces Application Streams, where fast-moving languages, frameworks and developer tools are updated frequently without impacting the core resources. This melds faster developer innovation with production stability in a single, enterprise-class operating system.

Abstracts complexities in granular sysadmin tasks with the RHEL web console

RHEL 8 abstracts away many of the deep complexities of granular sysadmin tasks behind the Red Hat Enterprise Linux web console. The console provides an intuitive, consistent graphical interface for managing and monitoring the Red Hat Enterprise Linux system, from the health of virtual machines to overall system performance. To further improve ease of use, RHEL supports in-place upgrades, providing a more streamlined, efficient and timely path for users to convert Red Hat Enterprise Linux 7 instances to Red Hat Enterprise Linux 8 systems.

Red Hat Enterprise Linux System Roles for managing and configuring Linux in production

The Red Hat Enterprise Linux System Roles automate many of the more complex tasks around managing and configuring Linux in production. Powered by Red Hat Ansible Automation, System Roles are pre-configured Ansible modules that enable ready-made automated workflows for handling common, complex sysadmin tasks. This automation makes it easier for new systems administrators to adopt Linux protocols and helps to eliminate human error as the cause of common configuration issues.

Supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards

To enhance security, RHEL 8 supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards. This provides access to the strongest, latest standards in cryptographic protection, which can be implemented system-wide via a single command, limiting the need for application-specific policies and tuning (a quick Python check appears after the list below).

Support for the Red Hat container toolkit

With cloud-native applications and services frequently driving digital transformation, RHEL 8 delivers full support for the Red Hat container toolkit. Based on open standards, the toolkit provides technologies for creating, running and sharing containerized applications. It helps to streamline container development and eliminates the need for bulky, less secure container daemons.

Other additions in RHEL 8 include:

- Added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
- The foundation for Red Hat's entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15. Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, is built on RHEL 8 and will be released soon.
- Broad support as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15 and Red Hat Virtualization 4.3.
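On the TLS 1.3 point above, an application-level sanity check is easy from Python, since the standard ssl module reports both the OpenSSL build it is linked against and TLS 1.3 availability. A small sketch, runnable on any system with Python 3.7 or later:

```python
import ssl

# Report the OpenSSL build the interpreter is linked against
# (on RHEL 8 this should be OpenSSL 1.1.1 or later).
print("OpenSSL:", ssl.OPENSSL_VERSION)

# True when the underlying OpenSSL exposes TLS 1.3 support.
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

# Default client contexts will then negotiate TLS 1.3 when the server offers it.
ctx = ssl.create_default_context()
print("Maximum protocol version:", ctx.maximum_version)
```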
Red Hat Universal Base Image becomes generally available

The Red Hat Universal Base Image, a userspace image derived from Red Hat Enterprise Linux for building Red Hat certified Linux containers, is now generally available. The Red Hat Universal Base Image is available to all developers, with or without a Red Hat Enterprise Linux subscription, providing a more secure and reliable foundation for building enterprise-ready containerized applications. Applications built with the Universal Base Image can be run anywhere, and gain the benefits of the Red Hat Enterprise Linux life cycle and support from Red Hat when run on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.

Red Hat reveals results of a commissioned IDC study

Yesterday, at its Summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: "According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation."

According to IDC's research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:

- Reducing the annual cost of software by 52%
- Reducing the amount of time IT staff spend doing standard IT tasks by 25%
- Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat's infrastructure migration solution helped CorpFlex reduce IT infrastructure complexity and costs by 87%

Using Red Hat's infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through its savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line.

Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future. Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more.

Diogo Santos, CTO of CorpFlex, said, "With Red Hat Virtualization, we've not only seen cost-saving in terms of licensing per virtual machine but we've also been able to enhance our own team's performance through Red Hat's extensive expertise and training."

To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat's open hybrid cloud technologies to power its 'Fabric' application platform

Fabric is a key component of Deutsche Bank's digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently.
Red Hat reveals results of a commissioned IDC study

Yesterday, at its summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: “According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation.”

According to IDC’s research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:

Reducing the annual cost of software by 52%
Reducing the amount of time IT staff spend doing standard IT tasks by 25%
Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat infrastructure migration solution helped CorpFlex reduce its IT infrastructure complexity and costs by 87%

Using Red Hat’s infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through its savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line.

Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future. Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more.

Diogo Santos, CTO of CorpFlex, said, “With Red Hat Virtualization, we’ve not only seen cost-saving in terms of licensing per virtual machine but we’ve also been able to enhance our own team’s performance through Red Hat’s extensive expertise and training.”

To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat’s open hybrid cloud technologies to power its ‘Fabric’ application platform

Fabric is a key component of Deutsche Bank’s digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently.

Red Hat Enterprise Linux has served as a core operating platform for the bank for a number of years, supporting a common foundation for workloads both on-premises and in the bank’s public cloud environment. For Fabric, the bank continues using Red Hat’s cloud-native stack, built on the backbone of the world’s leading enterprise Linux platform, with Red Hat OpenShift Container Platform.

The bank deployed Red Hat OpenShift Container Platform on Microsoft Azure as part of a security-focused new hybrid cloud platform for building, hosting and managing banking applications. By deploying Red Hat OpenShift Container Platform, the industry’s most comprehensive enterprise Kubernetes platform, on Microsoft Azure, IT teams can take advantage of massive-scale cloud resources with a standard and consistent PaaS-first model.

According to the press release, “The bank has achieved its objective of increasing its operational efficiency, now running over 40% of its workloads on 5% of its total infrastructure, and has reduced the time it takes to move ideas from proof-of-concept to production from months to weeks.”

To know more about this news in detail, head over to the official press release.

Lockheed Martin and Red Hat team up to accelerate upgrades to the U.S. Air Force’s F-22 Raptor fighter jets

Red Hat announced that Lockheed Martin is working with it to modernize the application development process used to bring new capabilities to the U.S. Air Force’s fleet of F-22 Raptor fighter jets. Lockheed Martin Aeronautics replaced the waterfall development process it previously used for the F-22 Raptor with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force. Together, Lockheed Martin and Red Hat created an open architecture based on Red Hat OpenShift Container Platform that has enabled the F-22 team to accelerate application development and delivery.

“The Lockheed Martin F-22 Raptor is one of the world’s premier fighter jets, with a combination of stealth, speed, agility, and situational awareness. Lockheed Martin is working with the U.S. Air Force on innovative, agile new ways to deliver the Raptor’s critical capabilities to warfighters faster and more affordably,” the press release mentions.

Lockheed Martin chose Red Hat Open Innovation Labs to lead it through the agile transformation process, help it implement an open source architecture onboard the F-22, and simultaneously disentangle the jet’s web of embedded systems to create something more agile and adaptive to the needs of the U.S. Air Force. Red Hat Open Innovation Labs’ dual-track approach to digital transformation combined enterprise IT infrastructure modernization with hands-on instruction that helped Lockheed’s team adopt agile development methodologies and DevSecOps practices.

During the Open Innovation Labs engagement, a cross-functional team of five developers, two operators, and a product owner worked together to develop a new application for the F-22 on OpenShift. After seeing an early impact with the initial project team, within six months Lockheed Martin had scaled its OpenShift deployment and its use of agile methodologies and DevSecOps practices to a 100-person F-22 development team. During a recent enablement session, the F-22 Raptor scrum team improved its ability to forecast future sprints by 40%, the press release states.

To know more about this news in detail, head over to the official press release on Red Hat.
This story will be updated as new announcements are made during the Summit. Visit the official Red Hat Summit website to know more about the sessions conducted, the list of keynote speakers, and more.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders in the company’s 2018 annual report on LinkedIn yesterday. He talks about Microsoft’s accomplishments in the past year and the results and progress of Microsoft’s modern workplace, business applications, infrastructure, data, AI, and gaming businesses. He also mentions the data and privacy rules adopted by Microsoft, and its commitment to “instill trust in technology across everything they do.”

Microsoft’s results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Its Azure Bot Service has nearly 300,000 developers, and it is on the road to building the world’s first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft’s investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality are being applied to the needs of first-line workers, who account for 80 percent of the world’s workforce. New solutions powered by LinkedIn and Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world’s computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC and console.

Microsoft’s impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft’s technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft’s partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state’s 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient’s heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.
How Microsoft is handling trust and responsibility

Microsoft’s motto is “instilling trust in technology across everything they do.” Nadella says, “We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices.”

Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. It announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and advocating for government regulation. It is addressing society’s most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, “Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work.” Microsoft has nearly doubled the number of women corporate vice presidents at Microsoft since FY16. It has also increased African American/Black and Hispanic/Latino representation by 33 percent.

He concludes by saying, “I’m proud of our progress, and I’m proud of the more than 100,000 Microsoft employees around the world who are focused on our customers’ success in this new era.”

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members