
Tech News - Cloud Computing

175 Articles

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Savia Lobo
17 May 2018
6 min read
Earlier this month, 4,000+ developers attended the Cloud Native Computing Foundation's flagship event, KubeCon + CloudNativeCon 2018, held in Copenhagen, Denmark from May 2nd to 4th. The conference featured a series of announcements on microservices, containers, and other open source tools for building applications for the web. Top vendors including Google, RedHat, Oracle, and many more announced a myriad of releases and improvements with respect to Kubernetes. Read our article on big vendor announcements at KubeCon + CloudNativeCon Europe. Let's run through the top seven vendors and their release highlights from the conference.

Google released Stackdriver Kubernetes Monitoring and open sourced gVisor

Released in beta, Stackdriver Kubernetes Monitoring enables both developers and operators to use Kubernetes comprehensively and simplifies operations for them. Its features include:

Scalable, comprehensive observability: Stackdriver Kubernetes Monitoring aggregates logs, events, and metrics from the Kubernetes environment to help you understand your application's behavior. This rich, unified set of signals helps developers build higher quality applications faster, and helps operators speed up root cause analysis and reduce mean time to resolution (MTTR).

Seamless integration with Prometheus: Stackdriver Kubernetes Monitoring integrates with Prometheus, a leading open source Kubernetes monitoring tool, without any changes.

Unified view: Stackdriver Kubernetes Monitoring provides a unified view of signals from infrastructure, applications, and services across multiple Kubernetes clusters. With this, developers, operators, and security analysts can effectively manage Kubernetes workloads and observe system information from various sources in flexible ways.
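Prometheus's text exposition format, which is what makes the no-change integration above possible, is simple enough to sketch in a few lines. The metric and label names below are invented for illustration; this is not Google's or Prometheus's client library:

```python
def prometheus_line(name, labels, value):
    """Render one sample in Prometheus's text exposition format."""
    # Labels are sorted so output is deterministic.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = prometheus_line(
    "container_restarts_total",               # hypothetical metric name
    {"namespace": "default", "pod": "web-1"},  # hypothetical labels
    3,
)
print(line)  # container_restarts_total{namespace="default",pod="web-1"} 3
```

Any scraper that understands this one-sample-per-line format, whether Prometheus itself or a hosted backend like Stackdriver, can ingest the same endpoint unchanged.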
For example, you can inspect a single container or scale up to explore massive, multi-cluster deployments.

Get started on-cloud or on-premise easily: Stackdriver Kubernetes Monitoring is pre-integrated with Google Kubernetes Engine, so you can use it immediately with your Kubernetes Engine workloads. It also integrates easily with Kubernetes deployments on other clouds or on-premise infrastructure, giving you a unified collection of logs, events, and metrics for your application regardless of where the containers are deployed.

Google has also open sourced gVisor, a sandboxed container runtime. gVisor, which is lighter than a virtual machine, enables secure isolation for containers. It integrates with Docker and Kubernetes, making it simple to run sandboxed containers in production environments. gVisor is written in Go to avoid security pitfalls that can plague kernels.

RedHat shared an open source toolkit called the Operator Framework

RedHat, in collaboration with the Kubernetes open source community, has shared the Operator Framework to make it easy to build Kubernetes applications. The Operator Framework is an open source toolkit for managing Kubernetes-native applications, called Operators, in an effective, automated, and scalable manner. It comprises:

The Operator SDK, which helps developers build Operators based on their expertise, without requiring knowledge of the complexities of the Kubernetes API.

The Operator Lifecycle Manager, which supervises the lifecycle of all Operators running across a Kubernetes cluster and keeps a check on the services associated with them.

Operator Metering, soon to be added, which allows creating usage reports for Operators providing specialized services.
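The Operator pattern the framework automates boils down to a reconcile loop: observe desired and actual state, and emit the actions that close the gap. A toy Python sketch of that loop follows; the state dicts and action tuples are invented for illustration and are not the Operator SDK's API:

```python
def reconcile(desired, actual):
    """Return the actions needed to drive actual state toward desired state.

    desired/actual map an app name to its replica count, standing in for
    the spec and status an Operator would read from the Kubernetes API.
    """
    actions = []
    for name, replicas in desired.items():
        have = actual.get(name, 0)
        if have < replicas:
            actions.append(("scale_up", name, replicas - have))
        elif have > replicas:
            actions.append(("scale_down", name, have - replicas))
    for name in actual:
        if name not in desired:          # present in the cluster, absent from the spec
            actions.append(("delete", name, actual[name]))
    return actions

# One pass of the loop: the spec asks for 3 etcd members, only 1 is running.
print(reconcile({"etcd": 3}, {"etcd": 1}))  # [('scale_up', 'etcd', 2)]
```

A real Operator runs this loop continuously against the API server; the SDK's value is generating and wiring up exactly this kind of controller.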
Oracle added new open serverless support and key Kubernetes features to Oracle Container Engine

According to a report, security, storage, and networking are the major challenges companies face while working with containers. To address these challenges, Oracle Container Engine has added several capabilities:

Governance, compliance, and auditing features, such as Identity and Access Management, role-based access control, support for the Payment Card Industry Data Security Standard, and cluster management auditing capabilities.

Scalability features: support for small and virtualized environments, predictable IOPS, and the ability to run Kubernetes on NVIDIA Tesla GPUs.

New networking features, including load balancing and virtual cloud networks.

Storage features: the OCI volume provisioner and flexvolume driver.

Additionally, Oracle Container Engine supports Helm and Tiller and the ability to run existing apps with Kubernetes.

Kublr announced that its version 1.9 provides easy configuration of Kubernetes clusters for enterprise users

Kublr introduced an advanced configuration capability in its version 1.9, designed to give customers the flexibility to tailor Kubernetes clusters to specific use cases, including GPU-enabled nodes for data science applications, hybrid clusters spanning data centers and clouds, custom Kubernetes tuning parameters, and other advanced requirements. New features in Kublr 1.9 include Kubernetes 1.9.6 and a new dashboard, improved backups in AWS with full cluster restoration, and the introduction of centralized monitoring, IAM, and custom cluster specification. Read more about Kublr 1.9 on the Kublr blog.

Kubernetes announced the availability of Kubeflow 0.1

The Kubernetes community brought forward a power-packed tooling package known as Kubeflow 0.1.
Kubeflow 0.1 provides a basic set of packages for developing, training, and deploying machine learning models. The package:

Supports Argo for managing ML workflows.

Offers JupyterHub to create interactive Jupyter notebooks for collaborative and interactive model training.

Provides a number of TensorFlow tools, including a Training Controller for native distributed training. The Training Controller can be configured for CPUs or GPUs and adjusted to the size of a cluster with a single click.

Additional features, such as a simplified setup via a bootstrap container, improved accelerator integration, and support for more ML frameworks like Spark ML, XGBoost, and scikit-learn, will be released in Kubeflow 0.2.

CNCF (Cloud Native Computing Foundation) announced a new Certified Kubernetes Application Developer program

The Cloud Native Computing Foundation has launched the Certified Kubernetes Application Developer (CKAD) exam and the corresponding Kubernetes for Developers course. The CKAD exam certifies that users can design, build, configure, and expose cloud native applications on top of Kubernetes. A Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes. Read more about this program on the Cloud Native Computing Foundation blog.

DigitalOcean launched a managed Kubernetes service

Cloud computing platform DigitalOcean launched DigitalOcean Kubernetes, a simple and cost-effective solution for deploying, orchestrating, and managing container workloads in the cloud. With the DigitalOcean Kubernetes service, developers can save time and deploy their container workloads without configuring everything from scratch. The company has also opened early access to the service. Read more on the DigitalOcean blog.
Apart from these seven vendors, many others such as Datadog, Humio, and Weaveworks also announced features, frameworks, and services based on Kubernetes, serverless, and cloud computing. These are not all of the announcements; see the KubeCon + CloudNativeCon 2018 website for the others rolled out at the event.

Top 7 DevOps tools in 2018
Apache Spark 2.3 now has native Kubernetes support!
Polycloud: a better alternative to cloud agnosticism


Introducing numpywren, a system for linear algebra built on a serverless architecture

Sugandha Lahoti
29 Oct 2018
3 min read
Last week, researchers from UC Berkeley and UW Madison published a research paper presenting a system for linear algebra built on a serverless framework. numpywren is a scientific computing framework built on top of the serverless execution framework pywren. pywren is a stateless computation framework that leverages AWS Lambda to execute Python functions remotely in parallel.

What is numpywren?

numpywren is a distributed system for executing large-scale dense linear algebra programs via stateless function executions. It runs computations as stateless functions while storing intermediate state in a distributed object store. Instead of dealing with individual machines, hostnames, and processor grids, numpywren works with the abstractions of "cores" and "memory". It currently uses Amazon EC2 and Lambda for computation and Amazon S3 as a distributed memory abstraction. numpywren can run Cholesky decomposition (a linear algebra algorithm) on a 1M x 1M matrix within 36% of the completion time of ScaLAPACK running on dedicated instances, and can be tuned to use 33% fewer CPU-hours. The researchers also introduced LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting.

Why serverless for numpywren?

Per their research, the serverless computing model can be used for computationally intensive programs while providing ease of use and seamless fault tolerance. The elasticity of serverless computing also allows numpywren to dynamically adapt to the inherent parallelism of common linear algebra algorithms.

What's next for numpywren?

One of the main drawbacks of the serverless model is the high communication cost caused by the lack of locality and efficient broadcast primitives. The researchers want to incorporate coarser serverless executions (e.g., 8 cores instead of 1) that process larger portions of the input data.
They also want to develop services that provide efficient collective communication primitives, such as broadcast, to help address this problem. The researchers want modern convex optimization solvers such as CVXOPT to use numpywren to scale to much larger problems, and they are working on automatically translating NumPy code directly into LAmbdaPACK instructions that can be executed in parallel. As data centers continue their push towards disaggregation, the researchers point out that platforms like numpywren open up a fruitful area of research. For further explanation, go through the research paper.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native serverless framework
Azure Functions 2.0 launches with better workload support for serverless
How serverless computing is making AI development easier
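The stateless-execution model described above, where each function invocation receives only its input and writes intermediate state to an object store, can be imitated with Python's standard library. In this sketch a thread pool stands in for the Lambda fan-out and a plain dict for S3; the names are illustrative, not pywren's or numpywren's API:

```python
from concurrent.futures import ThreadPoolExecutor

object_store = {}  # stands in for S3: intermediate state lives here, not on workers

def stateless_task(i):
    """Each invocation gets only its input, computes, and writes to the store."""
    result = i * i
    key = f"result/{i}"
    object_store[key] = result
    return key  # workers return only a reference, like an S3 key

with ThreadPoolExecutor(max_workers=4) as pool:  # stands in for the Lambda fan-out
    keys = list(pool.map(stateless_task, range(8)))

# The driver reads results back from the "object store", not from the workers.
total = sum(object_store[k] for k in keys)
print(total)  # 140 = 0+1+4+9+16+25+36+49
```

Because workers hold no state between calls, any of them can be killed and retried, which is the fault-tolerance property the paper leans on.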


Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco based startup that provides an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful, language-agnostic compute environment. The milestone marks an evolution of Fastly's edge computing capabilities and of the company's innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. Developers can also create new and improved digital experiences with their own technology choices around the cloud platforms, services, and programming languages they need. Rather than having them spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web. Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back, discussing Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly claims the Compute@Edge environment offers a startup time of 35.4 microseconds, 100x faster than any other solution on the market.
Additionally, Compute@Edge is powered by Fastly's open source WebAssembly compiler and runtime, Lucet, and supports Rust as a second language in addition to Varnish Configuration Language (VCL). Other benefits of Compute@Edge include:

Code can run around the world instead of in a single region, allowing developers to reduce code execution latency and further optimize performance without worrying about managing the underlying infrastructure.

The speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage. With a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated.

Developers can serve GraphQL from the network edge and deliver more personalized experiences.

Developers can build their own customized API protection logic.

With manifest manipulation, developers can deliver content with a "best-performance-wins" approach, such as multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities. To learn more about Fastly's edge computing and cloud services, visit its official blog. Developers interested in the private beta can sign up on this page.
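The "burn-after-reading" idea, per-request memory that never outlives the request, can be pictured with a small Python sketch. Compute@Edge actually enforces this at the WebAssembly instance level in Lucet; the code below is only an analogy with invented names:

```python
class RequestScope:
    """Context manager that discards all per-request state on exit."""

    def __enter__(self):
        self.state = {}      # fresh, empty memory for this request only
        return self.state

    def __exit__(self, *exc):
        self.state.clear()   # "burn" the request's memory on the way out
        self.state = None
        return False

def handle(request):
    with RequestScope() as mem:
        mem["user"] = request["user"]      # sensitive per-request data
        response = f"hello {mem['user']}"
    # mem has been cleared here: nothing can leak into the next request
    return response

print(handle({"user": "alice"}))  # hello alice
```

The security argument is that a whole class of bugs, stale data from request A showing up in request B, becomes impossible when the runtime guarantees the scope is destroyed rather than relying on application code to reset it.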
Fastly SVP, Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

Bhagyashree R
01 Aug 2019
3 min read
At its Cloud Next 2019 conference in Tokyo, Google unveiled new security capabilities coming to its enterprise products: G Suite Enterprise, Google Cloud, and Cloud Identity. These capabilities are intended to help enterprise customers protect their "users, data, and applications in the cloud." Google is hosting the two-day event (July 31 to August 1) to showcase its cloud products. Key announcements include Advanced Protection Program support for enterprise products rolling out soon, expanded availability of Titan Security Keys, improved anomaly detection in G Suite Enterprise, and more.

Advanced Protection Program for high-risk employees

The Advanced Protection Program was launched in 2017 to protect the personal Google accounts of users at high risk of online threats like phishing. The program goes beyond traditional two-step verification by requiring you to use a physical security key in addition to your password when signing in to your Google account. The program will be available in beta in the coming days for G Suite, Google Cloud Platform (GCP), and Cloud Identity customers. It will let enterprise admins enforce a set of security policies for employees at high risk of targeted attacks, such as IT administrators and business executives. The policies include enforcing the use of Fast Identity Online (FIDO) keys like Titan Security Keys, automatically blocking access to non-trusted third-party apps, and enabling enhanced scanning of incoming emails.

Wider availability of Titan Security Keys

Given the growing demand for Titan Security Keys in the US, Google has expanded their availability to Canada, France, Japan, and the United Kingdom. The keys are available as bundles of two: USB/NFC and Bluetooth. You can use them anywhere FIDO security keys are supported, including Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.
Anomalous activity alerts in G Suite

G Suite Enterprise and G Suite Enterprise for Education admins can now opt in to receive anomalous activity alerts in the G Suite alert center. G Suite uses machine learning to analyze security signals within Google Drive to detect potential security risks, including data exfiltration and policy violations when sharing and downloading files. Google also announced that it will be rolling out support for password-vaulted apps in Cloud Identity. Karthik Lakshminarayanan and Vidya Nagarajan of the Google Cloud team wrote in a blog post, "The combination of standards-based- and password-vaulted app support will deliver one of the largest app catalogs in the industry, providing seamless one-click access for users and a single point of management, visibility, and control for admins." You can read Google's official announcement to know more.

Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Understanding security features in the Google Cloud Platform (GCP)


Juniper Networks comes up with 5G- and IoT-ready routing platform, MX Series 5G

Gebin George
14 Jun 2018
3 min read
Juniper Networks, one of the industry leaders in automated, scalable, and secure networks, today announced the fifth generation of its MX Series Universal Routing Platform. The series adds offerings for cutting-edge infrastructure and technologies like cloud and IoT, enabling high-level network programmability, and a new set of software improves programmability, performance, and flexibility for rapid cloud deployment. The platform supports complex networks and service-intensive applications such as secure SD-WAN-based services.

Manoj Leelanivas, executive vice president and chief product officer at Juniper Networks, said: "Cloud is eating the world, 5G is ramping up, IoT is presenting a host of new challenges, and security teams simply can't keep up with the sheer volume of cyber attacks on today's network. One thing service providers should not have to worry about among all this is the unknown of what lies ahead."

A few highlights of the release:

Juniper Penta Silicon

Penta Silicon is the heart of the 5G platform: a next-generation, 16 nm, service-optimized packet-forwarding engine that delivers up to 50% better power efficiency than the existing Junos Trio chipset. Penta Silicon natively supports MACsec and an IPsec crypto engine, enabling end-to-end secure connectivity at scale. It also supports flexible native Ethernet (FlexE).

MX 5G Control User-Plane Separation (CUPS)

The 3GPP CUPS standard lets customers separate the evolved packet core user plane (GTP-U) and control plane (GTP-C) with a standard interface, helping service providers scale each independently as needed. The MX Series 5G platform is the first networking platform to support a standards-based, hardware-accelerated 5G user plane in both existing and future MX routers.
It enables converged services (wireless and wireline) on the same platform while also allowing integration with third-party 5G control planes.

MX10008 and MX10016 Universal Chassis

The MX series continues to innovate in the areas of cloud and enterprise networking, and the previously announced PTX and QFX Universal Chassis gain two new MX variants with today's announcement: the MX10008 and MX10016. A variety of line cards and software are available to satisfy specific networking use cases across the data center, enterprise, and WAN. Refer to the official Juniper website for details on the MX Series 5G.

Five developer centric sessions at IoT World 2018
Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
Windows 10 IoT Core: What you need to know


TriggerMesh announces open source ‘Knative Lambda Runtime’; AWS Lambda functions can now be deployed on Knative!

Melisha Dsouza
10 Jan 2019
2 min read
"We believe that the key to enabling cloud native applications, is to provide true portability and communication across disparate cloud infrastructure." Mark Hinkle, co-founder of TriggerMesh Yesterday, TriggerMesh- the open source multi-cloud service management platform- announced their open source project ‘Knative Lambda Runtime’ (TriggerMesh KLR). KLR will bring AWS Lambda serverless computing to Kubernetes which will enable users to run Lambda functions on Knative-enabled clusters and serverless clouds. Amazon Web Services' (AWS) Lambda for serverless computing can only be used on AWS and not on another cloud platform. TriggerMesh KLR changes the game completely as now, users can avail complete portability of Amazon Lambda functions to Knative native enabled clusters, and Knative enabled serverless cloud infrastructure “without the need to rewrite these serverless functions”. [box type="shadow" align="" class="" width=""]Fun fact: KLR is pronounced as ‘clear’[/box] Features of TriggerMesh Knative Lambda Runtime Knative is a  Google Cloud-led Kubernetes-based platform which can be used to build, deploy, and manage modern serverless workloads. KLR are Knative build templates that can be used to runan AWS Lambda function in a Kubernetes cluster as is in a Knative powered Kubernetes cluster (installed with Knative). KLR enables serverless users to move functions back and forth between their Knative and AWS Lambda. AWS  Lambda Custom Runtime API in combination with the Knative Build system makes deploying KLR possible. Serverless users have shown a positive response to this announcement, with most of them excited for this news. Kelsey Hightower, developer advocate, Google Cloud Platform, calls this news ‘dope’ and we can understand why! His talk at KubeCon+CloudNativeCon 2018 had focussed on serveless and its security aspects. Now that AWS Lambda functions can be run on Google’s Knative, this marks a new milestone for TriggerMesh. 
https://twitter.com/kelseyhightower/status/1083079344937824256
https://twitter.com/sebgoa/status/1083014086609301504

It would be interesting to see how this moulds the path to a Kubernetes hybrid-cloud model. Head over to TriggerMesh's official blog for more insights into this news.

Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, Red Hat announced its contribution of etcd, an open source project, and its acceptance into the Cloud Native Computing Foundation (CNCF). Red Hat participates in developing etcd as part of its enterprise Kubernetes product, Red Hat OpenShift.

https://twitter.com/coreos/status/1072562301864161281

etcd is an open source, distributed, consistent key-value store for service discovery, shared configuration, and scheduler coordination. It is a core component of software that comes with safer automatic updates, and it also sets up overlay networking for containers. The CoreOS team created etcd in 2013, and Red Hat engineers have maintained it, working alongside professionals from across the industry. The etcd project focuses on safely storing critical data of a distributed system; it is also the primary data store for Kubernetes. It uses the Raft consensus algorithm for replicated logs. With etcd, applications can maintain more consistent uptime and keep working smoothly even when individual servers fail. The project is progressing steadily: it already has 157 releases, with v3.3.10, the latest, released just two months ago. etcd is designed as a consistent store across environments including public cloud, hybrid cloud, and bare metal.

Where is etcd used?

Kubernetes clusters use etcd as their primary data store, so Red Hat OpenShift customers and Kubernetes users benefit from the community's work on the etcd project. It is also used by communities and companies like Uber, Alibaba Cloud, Google Cloud, Amazon Web Services, and Red Hat. etcd will sit under the Linux Foundation, with its domains and accounts managed by CNCF. The community of etcd maintainers, including Red Hat, Alibaba Cloud, Google Cloud, Amazon, and others, will not change, and the project will continue to focus on the communities that depend on it.
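etcd's core abstraction, a key-value store where every write bumps a global revision number, is what lets clients like Kubernetes watch for changes consistently. A single-node toy sketch of those semantics follows; real etcd replicates every write across cluster members with Raft, which this deliberately omits:

```python
class TinyKV:
    """Single-node sketch of etcd's revisioned key-value semantics."""

    def __init__(self):
        self.data = {}
        self.revision = 0       # global store revision, like etcd's

    def put(self, key, value):
        self.revision += 1      # every write bumps the store revision
        self.data[key] = (value, self.revision)
        return self.revision

    def get(self, key):
        """Return (value, mod_revision): the revision at which key last changed."""
        return self.data[key]

store = TinyKV()
store.put("/registry/pods/web-1", "Running")   # keys mimic Kubernetes' layout
store.put("/registry/pods/web-2", "Pending")
value, rev = store.get("/registry/pods/web-1")
print(value, rev)  # Running 1
```

The per-key mod-revision is what makes cheap change detection possible: a watcher only needs to ask "anything newer than revision N?" instead of rereading every key.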
Red Hat will continue extending etcd with the etcd Operator to bring more security and operational ease, enabling users to easily configure and manage etcd through a declarative configuration that creates, configures, and manages etcd clusters. Read more about this news on Red Hat's official blog.

RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018


Verizon chooses Amazon Web Services(AWS) as its preferred cloud provider

Savia Lobo
18 May 2018
2 min read
Verizon Communications Inc. recently announced that it is migrating about 1,000 of its business-critical applications and database back-end systems to Amazon Web Services (AWS). Verizon had bought Terremark, a cloud and managed services provider, in 2011 as part of its public and private cloud strategy, which included building its own cloud offering infrastructure-as-a-service to its customers. AWS stayed ahead of the competition by offering added services to its customers, while Verizon could not keep pace and was overtaken by Microsoft and Google. As a result, in 2016 Verizon closed down its public cloud offering, sold off its cloud and managed hosting service assets to IBM, and sold a number of data centres to Equinix.

Verizon first started working with AWS in 2015 and already has many business and consumer applications running in the cloud. The current migration to AWS is part of Verizon's corporate-wide initiative to increase agility and reduce costs through the use of cloud computing. Some benefits of the migration include:

The move gives Verizon access to a more comprehensive set of cloud capabilities, ensuring that its developers are able to invent on behalf of its customers.

Verizon has built AWS-specific training facilities where its employees can quickly get up to speed on AWS technologies and learn how to innovate with speed and at scale.

AWS enables Verizon to quickly deliver the best, most efficient customer experiences.

Verizon also aims to make the public cloud a core part of its digital transformation, upgrading its database management approach by replacing its proprietary solutions with Amazon Aurora.

To know more about the AWS and Verizon partnership, read the AWS blog post.
Linux Foundation launches the Acumos AI Project to make AI accessible
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail


Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Savia Lobo
26 Nov 2018
1 min read
Last week, Amazon CloudWatch, a monitoring and management service, introduced Automatic Dashboards for monitoring all AWS resources. The Automatic Dashboards are available in AWS public regions at no additional charge. Through CloudWatch Automatic Dashboards, users get aggregated views of the health and performance of all their AWS resources. This lets them quickly monitor and explore account- and resource-based views of metrics and alarms, and easily drill down to the root cause of performance issues. Once an issue is identified, users can act quickly by going directly to the affected AWS resource.

Features of the Automatic Dashboards:

They are pre-built following recommended best practices for AWS services.

They remain resource aware.

They are dynamically updated to reflect the latest state of important performance metrics.

Users can filter and troubleshoot down to a specific view, without additional code, reflecting the latest state of their AWS resources.

To know more about Automatic Dashboards, visit the official website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
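The aggregated views described above amount to rolling per-resource datapoints up into one row per service. A toy illustration in Python; the metric records and field names are invented, and this is not the CloudWatch API:

```python
from collections import defaultdict

def aggregate_by_service(metrics):
    """Roll per-resource datapoints up into an average per service,
    the way an automatic dashboard summarizes an account."""
    rollup = defaultdict(list)
    for m in metrics:
        rollup[m["service"]].append(m["cpu"])
    return {svc: sum(vals) / len(vals) for svc, vals in rollup.items()}

metrics = [
    {"service": "EC2", "resource": "i-1",  "cpu": 40},
    {"service": "EC2", "resource": "i-2",  "cpu": 60},
    {"service": "RDS", "resource": "db-1", "cpu": 20},
]
print(aggregate_by_service(metrics))  # {'EC2': 50.0, 'RDS': 20.0}
```

Drilling down is the inverse operation: from the per-service row back to the individual resource records that produced it.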

Fatema Patrawala
30 Aug 2019
7 min read

VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!

VMware kicked off VMworld 2019 US in San Francisco last week on 25th August and wrapped up yesterday with a series of updates spanning Kubernetes, Azure, security and more. This year's event theme was "Make Your Mark", aimed at empowering VMworld 2019 attendees to learn, connect and innovate in the world of IT and business. 20,000 attendees from more than 100 countries descended on San Francisco for VMworld 2019. VMware CEO Pat Gelsinger took the stage and articulated VMware's commitment and support for TechSoup, a one-stop IT shop for global nonprofits. Gelsinger also put emphasis on the company's 'any cloud, any application, any device, with intrinsic security' strategy. "VMware is committed to providing software solutions to enable customers to build, run, manage, connect and protect any app, on any cloud and any device," said Pat Gelsinger, chief executive officer, VMware. "We are passionate about our ability to drive positive global impact across our people, products and the planet." Let us take a look at the key highlights of the show:
VMworld 2019: CEO's take on shaping tech as a force for good
The opening keynote from Pat Gelsinger had everything one would expect: customer success stories, product announcements and the need for an ethical fix in tech. "As technologists, we can't afford to think of technology as someone else's problem," Gelsinger told attendees, adding, "VMware puts tremendous energy into shaping tech as a force for good." Gelsinger cited three benefits of technology which ended up opening a Pandora's box: free apps and services led to severely altered privacy expectations; ubiquitous online communities led to a crisis in misinformation; and the promise of blockchain has led to illicit uses of cryptocurrencies. "Bitcoin today is not okay, but the underlying technology is extremely powerful," said Gelsinger, who has previously gone on record regarding the detrimental environmental impact of crypto.
This prism of engineering for good, alongside good engineering, can be seen in how emerging technologies are being utilised. With edge, AI and 5G, and cloud as the "foundation... we're about to redefine the application experience," as the VMware CEO put it.
Read also: VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Gelsinger's 2018 keynote was about the theme of tech 'superpowers': cloud, mobile, AI, and edge. This time, more focus was given to how the edge is developing. Whether it is a thin edge, containing a few devices and an SD-WAN connection, a thick edge of a remote data centre with NFV, or something in between, VMware aims to have it all covered. "Telcos will play a bigger role in the cloud universe than ever before," said Gelsinger, referring to the rise of 5G. "The shift from hardware to software [in telco] is a great opportunity for US industry to step in and play a great role in the development of 5G."
VMworld 2019 introduces Tanzu to build, run and manage software on Kubernetes
VMware is moving away from virtual machines towards containerized applications. On the product side, VMware introduced Tanzu, a new product portfolio that aims to enable enterprise-class building, running, and management of software on Kubernetes. In Swahili, 'tanzu' means the growing branch of a tree, and in Japanese, 'tansu' refers to a modular form of cabinetry. For VMware, Tanzu is their growing portfolio of solutions that help build, run and manage modern apps. Included in this is Project Pacific, a tech preview focused on transforming VMware vSphere into a Kubernetes-native platform. "With Project Pacific, we're bringing the largest infrastructure community, the largest set of operators, the largest set of customers directly to Kubernetes. We will be the leading enabler of Kubernetes," Gelsinger said.
Read also: VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform
Other product launches included an update to the collaboration program Workspace ONE, including an AI-powered virtual assistant, as well as the launch of CloudHealth Hybrid by VMware. The latter, built on the cloud cost management tool CloudHealth, aims to help organisations save costs across an entire multi-cloud landscape and will be available by the end of Q3.
Collaboration, not competition, with major cloud providers - Google Cloud, AWS & Microsoft Azure
VMware's extended partnership with Google Cloud, announced earlier this month, led the industry to consider the company's positioning amid the hyperscalers. VMware Cloud on AWS continues to gain traction - Gelsinger said Outposts, the hybrid tool announced at re:Invent last year, is being delivered upon - and the company also has partnerships in place with IBM and Alibaba Cloud. Further, VMware in Microsoft Azure is now generally available, with the facility to gradually switch across Azure data centres. By the first quarter of 2020, the plan is to make it available across nine global areas.
Read also: Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users
The company's decision not to compete with, but to collaborate with, the biggest public clouds has paid off. Gelsinger also admitted that the company may have contributed to some confusion over what hybrid cloud and multi-cloud truly meant, but his explanation was pretty interesting. With organisations increasingly opting for different clouds for different workloads in changing environments, Gelsinger described a frequent pain point for customers nearer the start of their journeys: do they migrate their applications, or do they modernise? Increasingly, customers want both - the hybrid option. "We believe we have a unique opportunity for both of these," he said.
"Moving to the hybrid cloud enables live migration, no downtime, no refactoring... this is the path to deliver cloud migration and cloud modernisation." As far as multi-cloud was concerned, Gelsinger argued: "We believe technologists who master the multi-cloud generation will own it for the next decade." Collaboration with NVIDIA to accelerate GPU services on AWS NVIDIA and VMware today announced their intent to deliver accelerated GPU services for VMware Cloud on AWS to power modern enterprise applications, including AI, machine learning and data analytics workflows. These services will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications. Through this partnership, VMware Cloud on AWS customers will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances to be accelerated by NVIDIA T4 GPUs, and new NVIDIA Virtual Compute Server (vComputeServer) software. “From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Jensen Huang, founder and CEO, NVIDIA. “Together with VMware, we’re designing the most advanced GPU infrastructure to foster innovation across the enterprise, from virtualization, to hybrid cloud, to VMware's new Bitfusion data center disaggregation.” Read also: NVIDIA’s latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale Apart from this, Gelsinger made special note to mention VMware's most recent acquisitions, with Pivotal and Carbon Black and discussed about where they fit in the VMware stack at the back. 
VMware's hybrid cloud platform for Next-gen Hybrid IT
VMware introduced new and expanded cloud offerings to help customers meet the unique needs of traditional and modern applications. VMware empowers IT operators, developers, desktop administrators, and security professionals with the company's hybrid cloud platform to build, run, and manage workloads on a consistent infrastructure across their data center, public cloud, or edge infrastructure of choice. VMware uniquely enables a consistent hybrid cloud platform spanning all major public clouds - AWS, Azure, Google Cloud, IBM Cloud - and more than 60 VMware Cloud Verified partners worldwide. More than 70 million workloads run on VMware. Of these, 10 million are in the cloud, running in more than 10,000 data centers operated by VMware Cloud providers. Take a look at the full list of VMworld 2019 announcements here.
What's new in cloud and virtualization this week?
VMware signs definitive agreement to acquire Pivotal Software and Carbon Black
Pivotal open sources kpack, a Kubernetes-native image build service
Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal
Melisha Dsouza
10 Dec 2018
3 min read

Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads

Last week, Pivotal announced the 'Pivotal Function Service' (PFS) in alpha. Until now, Pivotal has focused on making open-source tools for enterprise developers but has lacked a serverless component in its suite of offerings. That changes with the launch of PFS. PFS is designed to work both on-premise and in the cloud in a cloud-native fashion, while being open source. It is a Kubernetes-based, multi-cloud function service offering customers a single platform for all their workloads on any cloud. Developers can deploy and operate databases, batch jobs, web APIs, legacy apps, event-driven functions, and many other workloads the same way everywhere, thanks to the Pivotal Cloud Foundry (PCF) platform, which comprises Pivotal Application Service (PAS), Pivotal Container Service (PKS), and now Pivotal Function Service (PFS). Providing the same developer and operator experience on every public or private cloud, PFS is event-oriented, with built-in components that make it easy to architect loosely coupled, streaming systems. Its buildpacks simplify packaging, and it is operator-friendly, providing a secure, low-touch experience running atop Kubernetes. The fact that PFS works on any cloud as an open product sets it apart from the similar services of cloud providers like Amazon, Google, and Microsoft, which run exclusively on their own clouds.
Features of PFS
PFS is built on Knative, an open-source project led by Google that simplifies how developers deploy functions atop Kubernetes and Istio. PFS runs on Kubernetes and Istio, and helps customers take advantage of the benefits of both technologies while abstracting away their complexity. PFS allows customers to use familiar, container-based workflows for serverless scenarios. PFS Event Sources helps customers create feeds from external event sources such as GitHub webhooks, blob stores, and database services.
PFS can be connected easily with popular message brokers such as Kafka, Google Pub/Sub, and RabbitMQ, which provide reliable backing services for messaging channels. Pivotal has continued to develop the riff invoker model in PFS to help developers deliver both streaming and non-streaming function code using simple, language-idiomatic interfaces. The new package includes several key components for developers, including a native eventing capability that provides a way to build rich event triggers to call whatever functionality a developer requires within a Kubernetes-based environment. This is particularly important for companies with hybrid deployments that need to manage events across on-premise and cloud environments in a seamless way. Head over to Pivotal's official blog to know more about this announcement.
Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
'AWS Service Operator' for Kubernetes now available, allowing the creation of AWS resources using kubectl
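Since riff functions are meant to be plain, language-idiomatic functions, a short sketch helps illustrate the model. Everything below is hypothetical (the function name and event shape are invented, not part of the PFS API); it simply shows the kind of dependency-free function an invoker could wire to a message channel.

```python
# Hypothetical riff-style function: plain Python, no framework imports.
# An invoker would wire events from a channel (e.g. Kafka, RabbitMQ) to it.

def handler(event: dict) -> dict:
    """Turn an incoming order event into an enriched, routable message."""
    total = sum(item["qty"] * item["price"] for item in event.get("items", []))
    return {"order_id": event["order_id"], "total": total, "status": "priced"}

# Local invocation, standing in for an event delivered over a channel.
result = handler({"order_id": "42", "items": [{"qty": 2, "price": 3.5}]})
print(result)  # {'order_id': '42', 'total': 7.0, 'status': 'priced'}
```

Because the function carries no framework dependency, the same code can be unit-tested locally and then packaged by a buildpack for deployment.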

Melisha Dsouza
12 Nov 2018
3 min read

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA

On the 9th of November, at 4:30 am US/Pacific time, Google Kubernetes Engine faced a service disruption: users could not reliably launch node pools through the Cloud Console UI. The team responded saying they would get back to users with more information by 4:45 am US/Pacific the same day. However, the issue was not resolved by the given time. Another status update was posted assuring users that mitigation work was underway by the engineering team, with a further update promised by 06:00 pm US/Pacific. In the meantime, affected customers were advised to use the gcloud command to create new node pools. An update announcing that the issue was finally resolved was posted on Sunday, the 11th of November, stating that services had been restored on Friday at 14:30 US/Pacific. However, no proper explanation has been provided regarding what led to the service disruption. The team did mention that an internal investigation of the issue would be done and appropriate improvements implemented to help prevent or minimize future recurrence. According to a user's summary on Hacker News, "Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems. Some users here are reporting that they have received no response from GCP support, even over a time span of 40+ hours since the support request was submitted." According to another user, "When everything works, GCP is the best. Stable, fast, simple, reliable. When things stop working, GCP is the worst. They require way too much work before escalating issues or attempting to find a solution". We can't help but agree, looking at the timeline of the service downtime. Users have also expressed disappointment over how the outage was managed.
Source: Hacker News
With users demanding a root cause analysis of the situation, it is only fitting that Google provides one so users can trust the company better. You can check out Google Cloud's blog post detailing the timeline of the downtime.
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]
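The suggested workaround can be scripted; the sketch below assembles the relevant gcloud command (cluster, pool, and zone names are placeholders, not values from the incident) and leaves the actual invocation commented out.

```python
# Build the gcloud command the GKE status page pointed affected users to.
# All names below are placeholders for illustration.

def node_pool_create_cmd(cluster, pool, zone, num_nodes=3):
    """Assemble a 'gcloud container node-pools create' command as an argv list."""
    return [
        "gcloud", "container", "node-pools", "create", pool,
        "--cluster", cluster,
        "--zone", zone,
        "--num-nodes", str(num_nodes),
    ]

cmd = node_pool_create_cmd("prod-cluster", "pool-b", "us-central1-a")
# import subprocess
# subprocess.run(cmd, check=True)  # would create the pool via the CLI
print(" ".join(cmd))
```

Keeping the command as an argv list avoids shell-quoting pitfalls and makes it easy to log exactly what would be run before running it.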

Savia Lobo
06 Apr 2018
2 min read

Polaris GPS: Rubrik's new SaaS platform for data management applications

Rubrik, a cloud data management company, launched Polaris GPS, a new SaaS platform for data management applications. The new platform helps businesses and individuals manage their information spread across the cloud. Polaris GPS delivers a single control and policy management console across globally distributed business applications that are locally managed by Rubrik's Cloud Data Management instances.
Polaris GPS SaaS Platform
The new SaaS platform forms a unified system of record for business information across all enterprise applications running in data centers and clouds. The system of record includes native search, workflow orchestration, and a global content catalog, all exposed through an open API architecture. Developers can leverage these APIs to deliver high-value data management applications for data policy, control, security, and deep intelligence. These applications can further address challenges of risk mitigation, compliance, and governance within the enterprise. Some key features of Polaris GPS:
Connects all applications and data across data center and cloud with a uniform framework.
No infrastructure or upgrades required; one can leverage the latest features immediately.
With Polaris GPS, one can apply the same logic to any kind of data and focus on business outcomes rather than technical processes.
Provides faster on-demand broker services with the help of API-driven connectivity.
Helps mitigate risk with automated compliance: define policies once, and Polaris applies them globally to all your business applications.
Read more about Polaris GPS on Rubrik's official website.
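Rubrik's Polaris API surface isn't documented in this piece, so the following is a purely hypothetical sketch of what "define a policy once, apply it globally" could look like through an API-driven console; the endpoint, field names, and values are invented for illustration.

```python
import json

# Hypothetical payload for a global data-management policy.
# Field names and the endpoint below are illustrative, not Rubrik's actual API.

def build_policy(name, frequency_hours, retention_days, scope="all-applications"):
    """Assemble a policy body a policy console might accept."""
    return {
        "name": name,
        "snapshotFrequencyHours": frequency_hours,
        "retentionDays": retention_days,
        "scope": scope,  # the platform would fan this out to every managed instance
    }

policy = build_policy("gold-tier", frequency_hours=4, retention_days=30)
body = json.dumps(policy)
# A client would POST `body` to a (hypothetical) Polaris policy endpoint, e.g.:
# requests.post("https://<polaris-host>/api/policies", data=body, ...)
print(body)
```

The point of the sketch is the shape of the workflow: one declarative policy document, applied centrally, rather than per-application configuration.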
Vincy Davis
18 Jun 2019
3 min read

Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months

Last week, Microsoft announced that Hyper-V Server, one of the variants in the Windows 10 October 2018/1809 release, is finally available on the Microsoft Evaluation Center. This release comes after a delay of more than six months since the re-release of Windows Server 1809/Server 2019 in early November. It has also been announced that Hyper-V Server 2019 will be available to Visual Studio Subscription customers by 19th June 2019. Microsoft Hyper-V Server is a free product that includes all the great Hyper-V virtualization features of the Datacenter Edition. It is ideal for running Linux virtual machines or VDI VMs. Microsoft had originally released Windows Server 2019 in October 2018. However, it had to pull both the client and server versions of 1809 down to investigate reports of users missing files after updating to the latest Windows 10 feature update. Microsoft then re-released Windows Server 1809/Server 2019 in early November 2018, but without Hyper-V Server 2019.
Read More: Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019
Early this year, Microsoft made the Windows Server 2019 evaluation media available on the Evaluation Center, but Hyper-V Server 2019 was still missing. Though no official statement was provided by Microsoft, it is suspected the delay was due to errors with Remote Desktop Services (RDS). Later, in April, Microsoft officials stated that they had found some issues with the media and would release an update soon. Now that Hyper-V Server 2019 is finally available, users of Windows Server 2019 can be at ease. Users who managed to download the original release of Hyper-V Server 2019 while it was available are advised to delete it and install the new version when it is made available on 19th June 2019.
Users are happy with this news, but are still wondering what took Microsoft so long to release Hyper-V Server 2019. https://twitter.com/ProvoSteven/status/1139926333839028224 People are also skeptical about the product quality. A user on Reddit states, "I'm shocked, shocked I tell you! Honestly, after nearly 9 months of MS being unable to release this, and two months after they said the only thing holding it back were 'problems with the media', I'm not sure I would trust this edition. They have yet to fully explain what it is that held it back all these months after every other Server 2019 edition was in production."
Microsoft's Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]

Melisha Dsouza
27 Nov 2018
6 min read

Day 1 at the Amazon re: Invent conference - AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Looks like Christmas has come early this year for AWS developers! Following Microsoft's Surface devices and Amazon's wide range of Alexa products, Amazon has once again made a series of big releases at the Amazon re:Invent 2018 conference. These announcements include AWS RoboMaker, to help developers test and deploy robotics applications; AWS Transfer for SFTP, a fully managed SFTP service for Amazon S3; EC2 instances (A1) powered by Arm-based AWS Graviton processors; Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth; and much more! Let's take a look at what developers can expect from these releases.
#1 AWS RoboMaker helps developers develop, test, deploy robotics applications at scale
AWS RoboMaker allows developers to develop, simulate, test, and deploy intelligent robotics applications at scale. Code can be developed inside a cloud-based development environment and tested in a Gazebo simulation. Finally, developers can deploy the finished code to a fleet of one or more robots. RoboMaker uses an open-source robotics software framework, Robot Operating System (ROS), with connectivity to cloud services. The service suite includes AWS machine learning, monitoring, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker can work with robots of many different shapes and sizes running in many different physical environments. After a developer designs and codes an algorithm for the robot, they can also monitor how the algorithm performs in different conditions or environments. You can check out an interesting simulation of a robot using RoboMaker at the AWS site. To learn more about ROS, read The Open Source Robot Operating System (ROS) and AWS RoboMaker.
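ROS applications of the kind RoboMaker simulates are built from nodes that publish messages on topics. The sketch below is a framework-free stand-in: a plain dict mirroring a geometry_msgs/Twist velocity command, with the actual rospy calls commented out so nothing here depends on a ROS installation (topic name and values are illustrative).

```python
# Framework-free stand-in for a ROS velocity command.
# The rospy calls are commented out; the message-building logic runs anywhere.

def build_twist(linear_x, angular_z):
    """Plain-dict mirror of a geometry_msgs/Twist message."""
    return {"linear": {"x": linear_x, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": angular_z}}

cmd = build_twist(0.5, 0.1)  # drive forward at 0.5 m/s while turning gently
# import rospy
# from geometry_msgs.msg import Twist
# pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
# pub.publish(Twist(...))  # populated from `cmd`
print(cmd["linear"]["x"])
```

In a RoboMaker workflow, the same node code runs unchanged against the Gazebo simulation and, later, against the physical fleet.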
#2 AWS Transfer for SFTP - Fully Managed SFTP Service for Amazon S3
AWS Transfer for SFTP is a fully managed service that enables the direct transfer of files to and from Amazon S3 using the Secure File Transfer Protocol (SFTP). Users just have to create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets. AWS lets users migrate their file transfer workflows to AWS Transfer for SFTP by integrating with existing authentication systems and providing DNS routing with Amazon Route 53. Along with other AWS services, a customer's data in S3 can be used for processing, analytics, machine learning, and archiving. Along with control over user identity, permissions, and keys, users have full access to the underlying S3 buckets and can make use of many different S3 features including lifecycle policies, multiple storage classes, several options for server-side encryption, versioning, etc. On the outbound side, users can generate reports, documents, manifests, custom software builds and so forth using other AWS services, and then store them in S3 for controlled distribution to their customers and partners.
#3 EC2 Instances (A1) Powered by Arm-Based AWS Graviton Processors
Amazon has launched EC2 instances powered by Arm-based AWS Graviton processors, built around Arm cores. The A1 instances are optimized for performance and cost and are a great fit for scale-out workloads where the load can be shared across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets. AWS Graviton processors are custom designed by AWS and deliver targeted power, performance, and cost optimizations. A1 instances are built on the AWS Nitro System, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs.
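Launching an A1 instance looks like any other EC2 launch, just with an arm64 AMI and an `a1.*` instance type. The sketch below assembles RunInstances arguments (the AMI ID is a placeholder; an actual launch needs a real arm64 image) and leaves the boto3 call commented out.

```python
# Arguments for launching an Arm-based A1 instance with EC2 RunInstances.
# The AMI ID is a placeholder; A1 instances require an arm64 AMI.

def build_run_instances_args(ami_id, instance_type="a1.medium"):
    """Assemble keyword arguments for boto3's ec2.run_instances call."""
    return {
        "ImageId": ami_id,          # must reference an arm64 image
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

args = build_run_instances_args("ami-0123456789abcdef0")
# import boto3
# ec2 = boto3.client("ec2")
# resp = ec2.run_instances(**args)
print(args["InstanceType"])
```

Because the Nitro System presents the same EBS, networking, and AMI interfaces, no other launch parameters need to change when moving a scale-out workload to A1.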
#4 Introducing Amazon EC2 C5n Instances featuring 100 Gbps of Network Bandwidth
AWS announced the availability of C5n instances that can utilize up to 100 Gbps of network bandwidth, providing significantly higher network performance across all instance sizes, ranging from 25 Gbps of peak bandwidth on smaller instance sizes to 100 Gbps on the largest. They are powered by 3.0 GHz Intel® Xeon® Scalable processors (Skylake) and provide support for the Intel Advanced Vector Extensions 512 (AVX-512) instruction set. These instances also feature a 33% higher memory footprint compared to C5 instances and are ideal for applications that can take advantage of improved network throughput and packet rate performance. Based on the next-generation AWS Nitro System, C5n instances make 100 Gbps networking available to network-bound workloads. Workloads on C5n instances take advantage of the security, scalability and reliability of Amazon's Virtual Private Cloud (VPC). The improved network performance will accelerate data transfer to and from S3, reducing the data ingestion wait time for applications and speeding up delivery of results.
#5 Introducing AWS Global Accelerator
AWS Global Accelerator is a network layer service that enables organizations to seamlessly route traffic to multiple regions, while improving availability and performance for their end users. It supports both TCP and UDP protocols, and performs health checks of a user's target endpoints while routing traffic away from unhealthy applications. AWS Global Accelerator uses AWS's global network to direct internet traffic from an organization's users to its applications running in AWS Regions, based on a user's geographic location, application health, and configurable routing policies. You can head over to the AWS blog to get an in-depth view of how this service works.
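Provisioning an accelerator follows a familiar boto3 pattern; the sketch below assembles arguments for the `globalaccelerator` client's `create_accelerator` call (the name is a placeholder), with the call itself commented out.

```python
# Arguments for AWS Global Accelerator's CreateAccelerator call.
# The accelerator name is a placeholder.

def build_accelerator_args(name, enabled=True):
    """Assemble keyword arguments for create_accelerator."""
    return {
        "Name": name,
        "IpAddressType": "IPV4",
        "Enabled": enabled,
    }

args = build_accelerator_args("web-frontend")
# import boto3
# ga = boto3.client("globalaccelerator", region_name="us-west-2")
# accelerator = ga.create_accelerator(**args)
print(args["Name"])
```

Listeners and endpoint groups (the pieces that attach regional endpoints and health checks to the accelerator) are created in separate follow-up calls.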
#6 Amazon's 'Machine Learning University'
In addition to these announcements at re:Invent, Amazon also released a blog post introducing its 'Machine Learning University', announcing that the same machine learning courses used to train engineers at Amazon can now be taken by all developers through AWS. These courses, available as part of a new AWS Training and Certification Machine Learning offering, will help organizations accelerate the growth of machine learning skills among their employees. With more than 30 self-service, self-paced digital courses and over 45 hours of courses, videos, and labs, developers can rest assured that ML fundamentals, real-world examples, and labs will help them explore the domain. What's more? The digital courses are available at no charge, and developers only pay for the services used in labs and exams during their training. This announcement came right after Amazon Echo Auto was launched at Amazon's hardware event. In what Amazon describes as 'Alexa to vehicles', the Amazon Echo Auto is a small dongle that plugs into the car's infotainment system, giving drivers the smart assistant and voice control for hands-free interactions. Users can ask for things like traffic reports, add products to shopping lists and play music through Amazon's entertainment system. Head over to What's new with AWS to stay updated on upcoming AWS announcements.
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS