
Tech News - Cloud Computing

175 Articles

Kubeflow 0.3 released with simpler setup and improved machine learning development

Melisha Dsouza
02 Nov 2018
3 min read
Early this week, the Kubeflow project launched its latest version, Kubeflow 0.3, just three months after version 0.2. This release brings easier deployment and customization of components, along with better multi-framework support. Kubeflow is the machine learning toolkit for Kubernetes: an open source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. It gives users an easy-to-use ML stack anywhere Kubernetes is already running, and the stack can self-configure based on the cluster it is deployed into.

Features of Kubeflow 0.3

1. Declarative and Extensible Deployment

Kubeflow 0.3 ships with a deployment command-line script, kfctl.sh, which allows consistent configuration and deployment of Kubernetes resources and non-Kubernetes resources (e.g., clusters, filesystems, etc.). Minikube deployment is handled by a single-command shell script, and users can also use MicroK8s to easily run Kubeflow on their laptop.

2. Better Inference Capabilities

Version 0.3 makes it possible to do batch inference with GPUs (though not distributed) for TensorFlow using Apache Beam. Apache Beam makes it easy to write batch and streaming data processing jobs that run on a variety of execution engines. Running TF Serving in production is now easier thanks to a newly added liveness probe, and fluentd is used to log requests and responses to enable model retraining. The release also takes advantage of the NVIDIA TensorRT Inference Server to offer more options for online prediction using both CPUs and GPUs. This server is a containerized, production-ready AI inference server which maximizes utilization of GPU servers by running multiple models concurrently on the GPU, and it supports all the top AI frameworks.

3. Hyperparameter Tuning

Kubeflow 0.3 introduces a new Kubernetes custom controller, StudyJob, which allows a hyperparameter search to be defined in YAML, making it easy to use hyperparameter tuning without writing any code.

4. Miscellaneous Updates

The upgrade includes the release of a Kubernetes custom controller for Chainer (docs). Cisco has created a v1alpha2 API for PyTorch that brings parity and consistency with the TFJob operator, and new features added to both make it easier to handle production workloads for PyTorch and TFJob. There is also support for gang-scheduling using Kube Arbitrator, to avoid stranding resources and deadlocking in clusters under heavy load. Finally, the 0.3 Kubeflow Jupyter images ship with TensorFlow Data Validation, a library used to explore and validate machine learning data.

You can check the examples added by the team to understand how to leverage Kubeflow:

- The XGBoost example shows how to use non-DL frameworks with Kubeflow.
- The object detection example illustrates leveraging GPUs for online and batch inference.
- The financial time series prediction example shows how to leverage Kubeflow for time series analysis.

The team has said that the next major release, 0.4, will be coming by the end of this year. It will focus on making it easy to perform common ML tasks without having to learn Kubernetes, and on making models easier to track by providing a simple API and database for tracking them. The team also intends to upgrade the PyTorch and TFJob operators to beta. For a complete list of updates, visit the 0.3 Change Log on GitHub.
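To give a flavor of the "YAML instead of code" claim, here is a rough sketch of a StudyJob manifest for a random hyperparameter search. It is modeled on the Katib v1alpha1 examples from that era; the field names, casing, and values are illustrative assumptions, so consult the Kubeflow 0.3 documentation for the exact schema.

```yaml
# Hypothetical StudyJob manifest, modeled on Katib v1alpha1 examples.
# Field names are illustrative; check the Kubeflow 0.3 docs for the real schema.
apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
  name: random-example
  namespace: kubeflow
spec:
  studyName: random-example
  owner: crd
  optimizationtype: maximize
  objectivevaluename: Validation-accuracy
  optimizationgoal: 0.99
  suggestionSpec:
    suggestionAlgorithm: "random"
    requestNumber: 3
  parameterconfigs:
    - name: --lr              # hyperparameter passed to the training job
      parametertype: double
      feasible:
        min: "0.01"
        max: "0.03"
```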
Read next:
- Platform9 announces a new release of Fission.io, the open source, Kubernetes-native serverless framework
- Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
- 'AWS Service Operator' for Kubernetes now available, allowing the creation of AWS resources using kubectl

Amazon announces Corretto, an open-source, production-ready distribution of OpenJDK backed by AWS

Melisha Dsouza
15 Nov 2018
3 min read
Yesterday, at Devoxx Belgium, Amazon announced the preview of Amazon Corretto, a free distribution of OpenJDK with long-term support. With Corretto, users can develop and run Java applications on popular operating systems. The team describes Corretto as multiplatform and production-ready, with long-term support that will include performance enhancements and security fixes, and plans to make Corretto the default OpenJDK on Amazon Linux 2 in 2019. The preview currently supports Amazon Linux, Windows, macOS, and Docker, with additional support planned for general availability. Corretto is run internally by Amazon on thousands of production services and is certified as compatible with the Java SE standard.

Features and benefits of Corretto

1. Amazon Corretto lets a user run the same environment in the cloud, on premises, and on their local machine. During the preview, Corretto allows users to develop and run Java applications on popular operating systems like Amazon Linux 2, Windows, and macOS.
2. Users can upgrade versions only when they feel the need to do so.
3. Since it is certified to meet the Java SE standard, Corretto can be used as a drop-in replacement for many Java SE distributions.
4. Corretto is available free of cost, with no additional paid features or restrictions.
5. Corretto is backed by Amazon, and its patches and improvements enable Amazon to address high-scale, real-world service concerns, so it can meet heavy performance and scalability demands.
6. Customers get long-term support, with quarterly updates including bug fixes and security patches. AWS will provide urgent fixes to customers outside of the quarterly schedule.

On Hacker News, users are discussing how the product's documentation could be better organized, and some feel that "Amazon's JVM is quite complex". Users are also noting that Oracle offers the same service at a price; one user has pointed out the differences between Oracle's service and Amazon's. The most notable feature of this release appears to be the long-term support offered by Amazon.

Head over to Amazon's blog to read more about this release. You can also find the source code for Corretto on GitHub.
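Since the preview ships Docker images, the quickest way to try Corretto is to run one. The image name and tag below are assumptions based on the announcement; check Docker Hub for the published coordinates.

```sh
# Pull the assumed Corretto 8 preview image and print the JDK version it ships
docker run --rm amazoncorretto:8 java -version

# It can also serve as a base image for a Java service, e.g. in a Dockerfile:
#   FROM amazoncorretto:8
#   COPY target/app.jar /opt/app.jar
#   CMD ["java", "-jar", "/opt/app.jar"]
```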
Read next:
- Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
- Amazon addresses employees' dissent regarding the company's law enforcement policies at an all-staff meeting, in a first
- Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiment largely negative

Red Hat’s Quarkus announces plans for Quarkus 1.0, releases its rc1 

Vincy Davis
11 Nov 2019
3 min read
Update: On 25th November, the Quarkus team announced the release of the Quarkus 1.0.0.Final bits. Head over to the Quarkus blog for more details on the official announcement.

Last week, Red Hat's Quarkus, the Kubernetes-native Java framework for GraalVM and OpenJDK HotSpot, announced the availability of its first release candidate and notified users that the first stable version will be released by the end of this month. Launched in March this year, the Quarkus framework uses Java libraries and standards to provide an effective solution for running Java in new deployment environments like serverless, microservices, containers, Kubernetes, and more. Java developers can employ this framework to build apps with faster startup times and less memory than traditional Java-based microservices frameworks, and it provides flexible, easy-to-use APIs that help developers build cloud-native apps with best-of-breed frameworks.

“The community has worked really hard to up the quality of Quarkus in the last few weeks: bug fixes, documentation improvements, new extensions and above all upping the standards for developer experience,” states the Quarkus team.

Latest updates in Quarkus 1.0:

- A new reactive core based on Vert.x, with support for both reactive and imperative programming models. This feature aims to make reactive programming a first-class feature of Quarkus.
- A new non-blocking security layer that allows reactive authentication and authorization, and enables reactive security operations to integrate with Vert.x.
- Improved Spring API compatibility, including Spring Web and Spring Data JPA, as well as Spring DI.
- A Quarkus ecosystem, also called the “universe”: a set of extensions that fully supports native compilation via GraalVM native image.
- Support for Java 8, 11, and 13 when using Quarkus on the JVM, with Java 11 native compilation planned for the near future.

Red Hat says, “Looking ahead, the community is focused on adding additional extensions like enhanced Spring API compatibility, improved observability, and support for long-running transactions.”

Many users are excited about Quarkus and are looking forward to trying the stable version.
https://twitter.com/zemiak/status/1192125163472637952
https://twitter.com/loicrouchon/status/1192206531045085186
https://twitter.com/lasombra_br/status/1192114234349563905
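To get a feel for the developer experience the team is touting, you can scaffold and live-reload a project with the Quarkus Maven plugin. A minimal sketch, assuming the 1.0.0.CR1 plugin coordinates from the release-candidate announcement (check quarkus.io for the current version):

```sh
# Generate a starter REST project (plugin version assumed)
mvn io.quarkus:quarkus-maven-plugin:1.0.0.CR1:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=getting-started \
    -DclassName="org.acme.ExampleResource" \
    -Dpath="/hello"
cd getting-started

# Live-coding dev mode: code changes are reflected without a restart
./mvnw compile quarkus:dev

# Native executable via GraalVM native image (requires GraalVM installed)
./mvnw package -Pnative
```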
Read next:
- How Quarkus brings Java into the modern world of enterprise tech
- Apple shares tentative goals for WebKit 2020
- Apple introduces Swift Numerics to support numerical computing in Swift
- Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more
- Fastly announces the next-gen edge computing services available in private beta

Google Cloud Console Incident Resolved!

Melisha Dsouza
12 Mar 2019
2 min read
On 11th March, the Google Cloud team received a report of an issue with the Google Cloud Console and Google Cloud Dataflow, and mitigation work began the same day, per Google Cloud's official status page. According to the Google post, “Affected users may receive a 'failed to load' error message when attempting to list resources like Compute Engine instances, billing accounts, GKE clusters, and Google Cloud Functions quotas.” As a workaround, the team suggested using the gcloud SDK instead of the Cloud Console; no workaround was suggested for Google Cloud Dataflow.

While the mitigation was underway, the team posted another update: “The issue is partially resolved for a majority of users. Some users would still face trouble listing project permissions from the Google Cloud Console.” The issue, which began around 09:58 Pacific Time, was finally resolved around 16:30 Pacific Time on the same day.

The team said it will conduct an internal investigation of the issue, make appropriate improvements to its systems to help prevent or minimize future recurrence, and provide a more detailed analysis of the incident once the internal investigation is complete. No other information has been revealed as of today. The downtime affected a majority of Google Cloud users.
https://twitter.com/lukwam/status/1105174746520526848
https://twitter.com/jbkavungal/status/1105184750560571393
https://twitter.com/bpmtri/status/1105264883837239297

Head over to Google Cloud's official page for more insights on this news.
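For reference, the listing operations that were failing in the Console all have gcloud equivalents, which is why the CLI made a workable stopgap. A few representative commands (the billing command sat behind the beta track at the time, so treat that one as an assumption):

```sh
# List the resource types the Console was failing to display
gcloud compute instances list        # Compute Engine instances
gcloud container clusters list       # GKE clusters
gcloud functions list                # Cloud Functions
gcloud beta billing accounts list    # billing accounts (beta component assumed)
```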
Read next:
- Monday's Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia
- Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias
- Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws

Debian project leader election goes without nominations. What now?

Fatema Patrawala
13 Mar 2019
5 min read
The Debian Project is an association of individuals who have made common cause to create a free operating system. One of the traditional rites of the northern hemisphere spring is the election of the Debian project leader. Over a six-week period beginning in March, interested candidates put their names forward, describe their vision for the project as a whole, answer questions from Debian developers, then wait and watch while the votes come in. But what happens if Debian holds an election and no candidates step forward? The Debian project has just found itself in that situation this year and is trying to figure out what happens next.

The Debian project scatters various types of authority widely among its members, leaving relatively little for the project leader. As long as they stay within the bounds of Debian policy, individual developers have nearly absolute control over the packages they maintain. For example:

- Difficult technical disagreements between developers are handled by the project's technical committee.
- The release managers and FTP masters make the final decisions on what the project will actually ship (and when).
- The project secretary ensures that the necessary procedures are followed.
- The policy team handles much of the overall design for the distribution.

So, in a sense, there is relatively little leading left for the leader to do. The roles that do fall to the leader fit into a couple of broad areas. The first is representing the project to the rest of the world: the leader gives talks at conferences and manages the project's relationships with other groups and companies. The second role is, to a great extent, administrative: the leader manages the project's money, appoints developers to other roles within the project, and takes care of details that nobody else in the project is responsible for.

Leaders are elected to a one-year term; for the last two years, this position has been filled by Chris Lamb. The February "Bits from the DPL" by Chris gives a good overview of what sorts of tasks the leader is expected to carry out.

The Debian constitution describes the process for electing the leader. Six weeks prior to the end of the current leader's term, a call for candidates goes out. Only those recognized as Debian developers are eligible to run; they get one week to declare their intentions. There follows a three-week campaigning period, then two weeks for developers to cast their votes. This being Debian, there is always a "none of the above" option on the ballot; should this option win, the whole process restarts from the beginning.

This year, the call for nominations was duly sent out by project secretary Kurt Roeckx on March 3. But, as of March 10, no eligible candidates had put their names forward. Lamb has been conspicuous in his absence from the discussion, with the obvious implication that he does not wish to run for a third term. So, it would seem, the nomination period has come to a close and the campaigning period has begun, but there is nobody there to do any campaigning.

This being Debian, the constitution naturally describes what is to happen in this situation: the nomination period is extended for another week. Any Debian developers who procrastinated past the deadline now have another seven days in which to get their nominations in; the new deadline is March 17. Should this deadline also pass without candidates, it will be extended for another week; this loop will repeat indefinitely until somebody gives in and submits their name.

Meanwhile, though, there is another interesting outcome of this lack of candidacy: the election of a new leader, whenever it actually happens, will come after the end of Lamb's term. There is no provision for locking the current leader in the office and requiring them to continue carrying out its duties; when the term is done, it's done. So the project is now certain to have a period of time in which it has no leader at all. Some developers seem to relish this possibility; one even suggested that a machine-learning system could be placed into the role instead. But, as Joerg Jaspert pointed out: "There is a whole bunch of things going via the leader that is either hard to delegate or impossible to do so". Given enough time without a leader, various aspects of the project's operation could eventually grind to a halt.

The good news is that this possibility, too, has been foreseen in the constitution. In the absence of a project leader, the chair of the technical committee and the project secretary are empowered to make decisions, as long as they are able to agree on what those decisions should be. Since Debian developers are famously an agreeable and non-argumentative bunch, there should be no problem with that aspect of things. In other words, the project will manage to muddle along for a while without a leader, though various processes could slow down and become more awkward if the current candidate drought persists.

One might well wonder, though, why there seems to be nobody who wants to take the helm of this project for a year. Could the fact that it is an unpaid position requiring a lot of time and travel have something to do with it? If that were indeed part of the problem, Debian might eventually have to consider doing what a number of similar organizations have done and create a paid position to do this work. Such a change would not be easy to make, but if the project finds itself struggling to find a leader every year, it's a discussion that may need to happen.

Read next:
- Are Debian and Docker slowly losing popularity?
- It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!
- Debian 9.7 released with fix for RCE flaw

Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cortex as a CNCF Sandbox project. Cortex is an open source, horizontally scalable, multi-tenant Prometheus-as-a-service. It provides long-term storage for Prometheus metrics when used as a remote write destination, and it comes with a horizontally scalable, Prometheus-compatible query API. Its use cases include:

- Service providers, enabling them to manage a large number of Prometheus instances and provide long-term storage.
- Enterprises, centralizing management of large-scale Prometheus deployments and ensuring long-term durability of Prometheus data.

Originally developed by Weaveworks, it is now being used in production by organizations like Grafana Labs, FreshTracks, and EA.

How does it work?

[Architecture diagram. Source: CNCF]

1. Scraping samples: First, a Prometheus instance scrapes all of the user's services and forwards the samples to a Cortex deployment. It does this using the remote_write API, which was added to Prometheus to support Cortex and other integrations.

2. The distributor distributes the samples: The instance sends all these samples to the distributor, a stateless service that consults a ring to figure out which ingesters should ingest each sample. The ingesters are arranged in a consistent hash ring, keyed on the fingerprint of the time series and stored in a consistent data store such as Consul. The distributor finds the owning ingester and forwards the sample to it, as well as to the two ingesters after it in the ring; this means that if an ingester goes down, two others still have its data.

3. Ingesters build chunks of samples: Ingesters continuously receive a stream of samples and group them together into chunks, which are then stored in a backend database such as DynamoDB, Bigtable, or Cassandra. This chunking means Cortex isn't constantly writing to its backend database.

Alexis Richardson, CEO of Weaveworks, believes that being a CNCF Sandbox project will help grow the Prometheus ecosystem: “By joining CNCF, Cortex will have a neutral home for collaboration between contributor companies, while allowing the Prometheus ecosystem to grow a more robust set of integrations and solutions. Cortex already has a strong affinity with several CNCF technologies, including Kubernetes, gRPC, OpenTracing and Jaeger, so it's a natural fit for us to continue building on these interoperabilities as part of CNCF.”

To know more in detail, check out the official announcement by CNCF and also read What is Cortex?, a blog post published on the Weaveworks blog.
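On the Prometheus side, pointing an instance at Cortex is a one-stanza configuration change using the remote_write API described above. A minimal sketch; the endpoint URL, push path, and tenant credentials below are placeholders, not a real deployment:

```yaml
# prometheus.yml — forward every scraped sample to a Cortex deployment
remote_write:
  - url: "http://cortex.example.com/api/prom/push"   # placeholder endpoint
    basic_auth:
      username: "tenant-1"    # multi-tenant setups typically key auth to a tenant
      password: "secret"      # placeholder credential
```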
Read next:
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
- CNCF Sandbox, the home for evolving cloud native projects, accepts Google's OpenMetrics Project
- Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

Microsoft introduces ‘Immutable Blob Storage’, a highly protected object storage for Azure

Savia Lobo
06 Jul 2018
2 min read
Microsoft has released a new "Chamber of Secrets" for data: Immutable Blob Storage. This storage service safeguards sensitive data and is built on the Azure platform, the latest addition to Microsoft's push toward industry-specific cloud offerings. The service is aimed mainly at the financial sector, but other sectors can use it too, helping them manage the information they own.

Immutable Blob Storage is a specialized version of Azure's existing object storage and includes a number of added security features:

- The ability to configure an environment so that the records inside it cannot easily be deleted by anyone, not even the administrators who maintain the deployment.
- The ability to block edits to existing files. This setting can help banks and other heavily regulated organizations prove the validity of their records during audits.

The service costs the same as Azure's regular object service, and the two products are integrated with one another: Immutable Blob Storage can be used for both standard and immutable storage. This means IT no longer needs to manage the complexity of a separate archive storage solution. These features come on top of the ones carried over from the standard object service, including a data lifecycle management tool that allows organizations to set policies for managing their data.

Read more about this new feature in Microsoft Azure's blog post.
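Immutability is configured through container-level retention policies. The sketch below uses the Azure CLI's immutability-policy commands as I understand them; the exact subcommands and flags are assumptions, so verify against the Azure documentation before relying on them.

```sh
# Assumed Azure CLI syntax; check `az storage container immutability-policy -h`
# Create a time-based retention policy: blobs in the container cannot be
# modified or deleted for 365 days after creation
az storage container immutability-policy create \
    --account-name mystorageaccount \
    --container-name auditlogs \
    --period 365

# Lock the policy so that even administrators cannot shorten or remove it
az storage container immutability-policy lock \
    --account-name mystorageaccount \
    --container-name auditlogs
```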
Read next:
- How to migrate Power BI datasets to Microsoft Analysis Services models [Tutorial]
- Microsoft releases Open Service Broker for Azure (OSBA) version 1.0
- Microsoft Azure IoT Edge is open source and generally available!

Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”

Prasad Ramesh
18 Dec 2018
3 min read
Yesterday, Microsoft open sourced Trill, previously an internal project used for processing “a trillion events per day”. Trill was the first streaming engine to incorporate algorithms that process events in small batches of data based on the latency tolerated on the user side. It powers services like Financial Fabric, Bing Ads, Azure Stream Analytics, Halo, and more. With the increasing flow of data, the ability to process huge amounts of data each millisecond is a necessity; Microsoft has open sourced Trill to “address this growing trend”.

Microsoft Trill features

- Trill is a single-node engine library, and any .NET application, service, or platform can readily use it to start processing queries.
- It has a temporal query language which allows users to run complex queries over real-time and offline data sets.
- Trill's high performance allows users to get results with great speed and low latency.

How did Trill start?

Trill began as a research project at Microsoft Research in 2012 and has been described in various research papers, in venues like VLDB and the IEEE Data Engineering Bulletin. Trill is based on a former Microsoft service called StreamInsight, a platform that allowed developers to develop and deploy event processing applications. Both systems are based on an extended query and data model which extends the relational model with a component for time. Systems before Trill could only achieve a part of these benefits; with Trill, all the advantages come in one package. Trill was the very first streaming engine to incorporate algorithms that process events in data batches based on the latency tolerated by users, and the first engine to organize data batches in a columnar format, which enables queries to execute with much higher efficiency. Using Trill is similar to working with any .NET library, it offers the same performance for real-time and offline datasets, and it allows users to perform advanced time-oriented analytics and look for complex patterns over streaming datasets.

Open-sourcing Trill

Microsoft believes Trill is the best available tool in this domain for the developer community. By open sourcing it, the company wants to offer the features of its IStreamable abstraction to all customers. There are opportunities for community involvement in Trill's future development; for example, it allows users to write custom aggregates. There are also research projects built on Trill where the code is available but not yet ready for production use.

For more details on Trill, visit the Microsoft website.

Read next:
- Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
- Microsoft confirms replacing EdgeHTML with Chromium in Edge
- Microsoft Connect(); 2018: .NET Foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows Forms open sourced

Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format

Savia Lobo
05 Dec 2018
3 min read
At DockerCon Europe 2018, held in Barcelona, Microsoft, in collaboration with the Docker community, announced the Cloud Native Application Bundle (CNAB), an open-source, cloud-agnostic specification for packaging and running distributed applications.

Cloud Native Application Bundle (CNAB)

The Cloud Native Application Bundle (CNAB) is the combined effort of Microsoft and the Docker community to provide a single all-in-one packaging format that unifies management of multi-service, distributed applications across different toolchains. Docker is the first to implement CNAB for containerized applications and plans to expand CNAB across the Docker platform to support new application development, deployment, and lifecycle management. CNAB allows users to define resources that can be deployed to any combination of runtime environments and tooling, including Docker Engine, Kubernetes, Helm, automation tools, and cloud services.

Patrick Chanezon, member of technical staff at Docker Inc., writes, “Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry.” Docker also plans to enable organizations to deploy and manage CNAB-based applications in Docker Enterprise soon.

Scott Johnston, chief product officer at Docker, said, “this is not a Docker proprietary thing, this is not a Microsoft proprietary thing, it can take Compose files as inputs, it can take Helm charts as inputs, it can take Kubernetes YAML as inputs, it can take serverless artifacts as inputs.”

According to Microsoft, it partnered with Docker to solve issues faced by ISVs (independent software vendors) and enterprises, including:

- The need to describe their application as a single artifact, even when it is composed of a variety of cloud technologies
- Wanting to provision their applications without having to master dozens of tools
- Needing to manage the lifecycle (particularly installation, upgrade, and deletion) of their applications

Added features that CNAB brings include:

- Manage discrete resources as a single logical unit that comprises an app.
- Use and define operational verbs for lifecycle management of an app.
- Sign and digitally verify a bundle, even when the underlying technology doesn't natively support it.
- Attest and digitally verify that the bundle has achieved a given state, to control how the bundle can be used.
- Export a bundle and all its dependencies to reliably reproduce it in another environment, including offline environments (IoT edge, air-gapped environments).
- Store bundles in repositories for remote installation.

According to a user on a Hacker News thread, “The goal with CNAB is to be able to version your application with all of its components and then ship that as one logical unit making it reproducible. The package format is flexible enough to let you use the tooling that you're already using”. Another user said, “CNAB makes reproducibility possible by providing unified lifecycle management, packaging, and distribution. Of course, if bundle authors don't take care to work around problems with imperative logic, that's a risk.”

To know more about the Cloud Native Application Bundle (CNAB) in detail, visit the Microsoft blog.
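At the heart of the spec is a bundle.json manifest that names the invocation image carrying the install logic, along with the bundle's parameters and credentials. A rough sketch following the early draft specification; the field layout is based on my reading of that draft, and the image name and values are purely illustrative:

```json
{
  "name": "helloworld",
  "version": "0.1.0",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/helloworld-cnab:0.1.0"
    }
  ],
  "parameters": {
    "port": {
      "type": "int",
      "defaultValue": 8080
    }
  },
  "credentials": {
    "kubeconfig": {
      "path": "/root/.kube/config"
    }
  }
}
```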
Read next:
- Microsoft and Mastercard partner to build a universally-recognized digital identity
- Creating a Continuous Integration commit pipeline using Docker [Tutorial]
- Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support

Savia Lobo
27 Nov 2018
3 min read
Today, TigerGraph, the world's fastest graph analytics platform for the enterprise, introduced TigerGraph Cloud, which it calls the simplest, most robust, and most cost-effective way to run scalable graph analytics in the cloud. With TigerGraph Cloud, users can easily get their TigerGraph services up and running, and can tap into TigerGraph's library of customizable graph algorithms to support key use cases, including AI and machine learning. It provides data scientists, business analysts, and developers with a cloud-based service for applying SQL-like queries for faster and deeper insights into data, and it enables organizations to tap into the power of graph analytics within hours.

Features of TigerGraph Cloud

- Simplicity: It forgoes the need to set up, configure, or manage servers, schedule backups or monitoring, or look for security vulnerabilities.
- Robustness: TigerGraph relies on the same framework, providing point-in-time recovery, powerful configuration options, and stability, that it has used for its own workloads over several years.
- Application starter kits: It offers out-of-the-box starter kits for quicker application development in cases such as anti-fraud, anti-money laundering (AML), Customer 360, enterprise graph analytics, and more. These kits include graph schemas, sample data, preloaded queries, and a library of customizable graph algorithms (PageRank, Shortest Path, Community Detection, and others). TigerGraph makes it easy for organizations to tailor such algorithms for their own use cases.
- Flexibility and elastic pricing: Users pay for exactly the hours they use and are billed monthly. They can spin up a cluster for a few hours at minimal cost, or run larger, mission-critical workloads with predictable pricing.

The new cloud offering will also be available for production on AWS, with other cloud availability forthcoming.

Yu Xu, founder and CEO of TigerGraph, said, “TigerGraph Cloud addresses these needs, and enables anyone and everyone to take advantage of scalable graph analytics without cloud vendor lock-in. Organizations can tap into graph analytics to power explainable AI - AI whose actions can be easily understood by humans - a must-have in regulated industries. TigerGraph Cloud further provides users with access to our robust graph algorithm library to support PageRank, Community Detection and other queries for massive business advantage.”

Philip Howard, research director at Bloor Research, said, “What is interesting about TigerGraph Cloud is not just that it provides scalable graph analytics, but that it does so without cloud vendor lock-in, enabling companies to start immediately on their graph analytics journey."

According to TigerGraph, “Compared to TigerGraph Cloud, other graph cloud solutions are up to 116x slower on two hop queries, while TigerGraph Cloud uses up to 9x less storage. This translates into direct savings for you.”

TigerGraph also announced new marquee customers, including Intuit, Zillow, and PingAn Technology, among other leading enterprises in cybersecurity, pharmaceuticals, and banking.

To know more about TigerGraph Cloud in detail, visit its official website.
Read next:
- MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
- Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
- OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'

Cockroach Labs 2018 Cloud Report: AWS outperforms GCP hands down

Melisha Dsouza
14 Dec 2018
5 min read
While testing the features for CockroachDB 2.1, the team discovered that AWS offered 40% greater throughput than GCP. To understand why, they compared GCP and AWS on TPC-C performance (e.g., throughput and latency), CPU, network, I/O, and cost. The result is Cockroach Labs' 2018 Cloud Report, meant to help customers decide which cloud solution to go with, based on the most commonly faced questions: should they use Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure? How should they tune their workload for different offerings? Which of the platforms is more reliable?

Note: The team did not test Microsoft Azure due to bandwidth constraints, but will do so in the near future.

The tests conducted

For GCP, the team chose the n1-standard-16 machine with the Intel Xeon Scalable Processor (Skylake) in the us-east region; for AWS, they chose the latest compute-optimized instance type, c5d.4xlarge, to match n1-standard-16, because both have 16 CPUs and SSDs.

#1 TPC-C benchmarking test

The team tested workload performance using TPC-C, and the results were surprising: CockroachDB 2.1 achieves 40% more throughput (tpmC) on TPC-C when tested on AWS using c5d.4xlarge than on GCP via n1-standard-16. They then ran TPC-C against some of the most popular AWS instance types, focusing on the higher-performing c5 series with SSDs, EBS-gp2, and EBS-io1 volume types. The AWS Nitro System present in the c5 and m5 series offers approximately similar or superior performance compared to a similar GCP instance. The results were clear: AWS wins on the TPC-C benchmark.

#2 CPU experiment

The team chose stress-ng because it offered more benchmarks and more flexible configurations than the sysbench benchmarking test. Running the command stress-ng --metrics-brief --cpu 16 -t 1m five times on both AWS and GCP, they found that AWS offered 28% more throughput (~2,900) on stress-ng than GCP.

#3 Network throughput and latency test

The team measured network throughput using a tool called iPerf, and latency via another tool, PING; a detailed setup of the iPerf configuration is given in the blog post. The tests were run four times each on AWS and GCP, and the results once again showed AWS beating GCP. GCP showed a fairly normal distribution of network throughput centered at ~5.6 GB/sec, ranging from 4.01 GB/sec to 6.67 GB/sec, which the team calls “a somewhat unpredictable spread of network performance”, reinforced by GCP's observed average variance of 0.487 GB/sec. AWS offers significantly higher throughput, centered on 9.6 GB/sec with a much tighter spread, between 9.60 GB/sec and 9.63 GB/sec. AWS's throughput variance is only 0.006 GB/sec, which means GCP's network throughput is 81x more variable than AWS's. The latency test told the same story: AWS has tighter network latency than GCP, with AWS's values centered on an average latency of 0.057 ms. In short, AWS offers significantly better network throughput and latency, with none of the variability present in GCP.

#4 I/O experiment

The team tested I/O using a configuration of sysbench that simulates small writes with frequent syncs, for both write and read performance. This test measures throughput based on a fixed set of threads, i.e., the number of items concurrently writing to disk.
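For a sense of what these measurements involve, the network and I/O tests boil down to standard Linux tools. A rough sketch of the kinds of commands involved; the flags and thread counts here are illustrative, not the report's exact configuration:

```sh
# Network throughput: start an iPerf server on one VM...
iperf -s
# ...then drive traffic from a second VM for 60 seconds
iperf -c <server-ip> -t 60

# Round-trip latency between the same pair of VMs
ping -c 100 <server-ip>

# Small writes with frequent syncs via sysbench fileio
sysbench fileio --file-test-mode=rndwr --file-fsync-freq=1 prepare
sysbench fileio --file-test-mode=rndwr --file-fsync-freq=1 --threads=16 run
```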
The write-performance results showed that AWS consistently offers more write throughput across thread counts from 1 up to 64; the difference can be as high as 67x. AWS also offers better average and 95th-percentile write latency across all thread tests, though at 32 and 64 threads GCP provides marginally more throughput. For read latency, AWS tops the charts up to 32 threads; at 32 and 64 threads, GCP and AWS split the results, with GCP offering marginally better performance at similar latency for reads at 32 threads and up.

The team also tested the no-barrier method of writing directly to disk without waiting for the write cache to be flushed. Here the results were reversed: on GCP, no barrier speeds things up by 6x, while on AWS, no barrier (vs. not setting no barrier) is only a 25% speedup.

#5 Cost

Considering that AWS outperformed GCP on the TPC-C benchmarks, the team also compared the cost involved on both platforms, assuming the following discounts: on GCP, a three-year committed-use price discount with local SSD in the central region; on AWS, a three-year standard contract paid up front. They found that GCP is more expensive than AWS given the performance it showed in these tests: GCP costs 2.5 times more than AWS per tpmC.

In response to the report, Google Cloud developer advocate Seth Vargo posted a comment on Hacker News assuring users that Google's team would look into the tests and conduct its own benchmarking to provide customers with the much-needed answers to the questions raised by the report. It will be interesting to see the results GCP comes up with in response.

Head over to cockroachlabs.com for more insights on the tests conducted.

Read next:
- CockroachDB 2.0 is out!
- Cockroach Labs announced managed CockroachDB-as-a-Service
- Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently

Netflix releases FlameScope

Richard Gall
06 Apr 2018
2 min read
Netflix has released FlameScope, a visualization tool that allows software engineering teams to analyze performance issues. From application startup to single-threaded execution, FlameScope provides insight into the time-based metrics crucial to software performance. The team at Netflix has made FlameScope open source, encouraging engineers to contribute to the project and help develop it further; many development teams could derive a lot of value from the tool, and we're likely to see many customisations as its community grows.

How does FlameScope work?

Watch the video below to learn more about FlameScope.
https://youtu.be/cFuI8SAAvJg

Essentially, FlameScope lets you build something a bit like a flame graph, but with an extra dimension. One of the challenges Netflix identified is that while flame graphs allow you to analyze steady and consistent workloads, "often there are small perturbations or variation during that minute that you want to know about, which become a needle-in-a-haystack search when shown with the full profile". With FlameScope, you get the flame graph, but by using a subsecond-offset heat map, you're also able to see the "small perturbations" you might otherwise have missed. As Netflix explains: "You can select an arbitrary continuous time-slice of the captured profile, and visualize it as a flame graph."

Why Netflix built FlameScope

FlameScope was built by the Netflix cloud engineering team, and the motivations behind it are pretty interesting. The team had a microservice that was suffering from strange spikes in latency, the cause a mystery. One member of the team found that these spikes, which occurred around every fifteen minutes, appeared to correlate with "an increase in CPU utilization that lasted only a few seconds." CPU flame graphs, of course, didn't help, for the reasons outlined above. To tackle this, the team effectively sliced a flame graph into smaller chunks. Slicing it down into one-second snapshots was, as you might expect, a pretty arduous task, so by using subsecond heat maps, the team was able to create flame graphs on a really small scale. This made it much easier to visualize those variations.

The team plans to continue developing the FlameScope project. It will be interesting to see where they decide to take it and how the community responds. To learn more, read the post on the Netflix Tech Blog.
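If you want to try it, FlameScope works on captured Linux perf profiles. A minimal sketch along the lines of the project README; the sampling rate, duration, and filenames are illustrative:

```sh
# Capture a system-wide CPU profile with stack traces
sudo perf record -F 49 -a -g -- sleep 120
sudo perf script --header > stacks.myhost.perf

# Fetch and run FlameScope, pointing it at the captured profile
git clone https://github.com/Netflix/flamescope
cd flamescope
pip install -r requirements.txt
mv ../stacks.myhost.perf examples/
python run.py      # then open http://localhost:5000 in a browser
```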

Google Cloud went offline, taking with it YouTube, Snapchat, Gmail, and a number of other web services

Sugandha Lahoti
03 Jun 2019
4 min read
Update: This article has been updated to include Google's response to Sunday's service disruption.

Over the weekend, Google Cloud suffered a major outage, taking down a number of Google services, including YouTube, G Suite, and Gmail. It also affected services dependent on Google, such as Snapchat, Nest, Discord, and Shopify. The problem was first reported by East Coast users in the U.S. around 3 PM ET / 12 PM PT, and the company resolved it after more than four hours. According to Downdetector, users in the UK, France, Austria, Spain, and Brazil also reported suffering from the outage.
https://twitter.com/DrKMhana/status/1135291239388143617

In a statement posted to its Google Cloud status page, the company said it was experiencing a multi-region issue with Google Compute Engine. “We are experiencing high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, GSuite, and YouTube. Users may see slow performance or intermittent errors. We believe we have identified the root cause of the congestion and expect to return to normal service shortly,” the company said.

The issue was sorted four hours after Google acknowledged the downtime. “The network congestion issue in the eastern USA, affecting Google Cloud, G Suite, and YouTube has been resolved for all affected users as of 4:00 pm US/Pacific,” the company said in a statement. “We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a detailed report of this incident once we have completed our internal investigation. This detailed report will contain information regarding SLA credits.”

This outage caused some real suffering. Not only did it impact two of the most-used apps among netizens (YouTube and Snapchat), people also reported that they were unable to use their Nest-controlled devices, for example to turn on their AC or open their "smart" locks to let people into the house.
https://twitter.com/davidiach/status/1135302533151436800

Even Shopify experienced problems because of the Google outage, which prevented some stores (both brick-and-mortar and online) from processing credit card payments for hours.
https://twitter.com/LarryWeru/status/1135322080512270337

The dependency of so many of the world's most popular applications on a single backend, in the hands of one company, is a bit startling, as is how many people rely on just one hosting service. At the very least, companies should think about setting up a contingency plan in case the services go down again.
https://twitter.com/zeynep/status/1135308911643451392
https://twitter.com/SeverinAlexB/status/1135286351962812416

Another issue which popped up was whether Google Cloud randomly going down is proof that cloud-based gaming isn't ready for mass audiences yet. At this year's Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games, launching later this year in select countries including the U.S., Canada, the U.K., and Europe.
https://twitter.com/BrokenGamezHDR/status/1135318797068488712
https://twitter.com/soul_societyy/status/1135294007515500549

On Monday, Google released an apologetic update on the outage, outlining the incident, its detection, and the response. In essence, the root cause of Sunday's disruption was a configuration change that was intended for a small number of servers in a single region. The configuration was incorrectly applied to a larger number of servers across several neighboring regions, and it caused those regions to stop using more than half of their available network capacity. The network traffic to and from those regions then tried to fit into the remaining network capacity, but it did not. The network became congested, and, in Google's words, its “networking systems correctly triaged the traffic overload and dropped larger, less latency-sensitive traffic in order to preserve smaller latency-sensitive traffic flows, much as urgent packages may be couriered by bicycle through even the worst traffic jam.” Google's engineering teams are now conducting a thorough post-mortem to understand all the factors contributing to both the network capacity loss and the slow restoration.

Read next:
- Facebook family of apps hits 14 hours outage, longest in its history
- Worldwide Outage: YouTube, Facebook, and Google Cloud go down affecting thousands of users
- YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos

Oracle announces Oracle Soar, a tools package to ease application migration on cloud

Savia Lobo
13 Jun 2018
2 min read
Oracle recently released Oracle Soar, a brand-new tools and services package to help customers migrate their applications to the cloud. Oracle Soar comprises a set of automated migration tools along with professional services, i.e., a complete solution for migration. It is a semi-automated offering that fits with Oracle's recent efforts to stand apart from other cloud providers through more advanced automated services.

Tools available within the Oracle Soar package:

- Discovery assessment tool
- Process analyzer tool
- Automated data and configuration migration utilities
- Rapid integration tool

The automated process is powered by True Cloud Method, Oracle's proprietary approach to supporting customers throughout their cloud journey. Customers are also guided by a dedicated Oracle concierge service that ensures the migration aligns with modern industry best practices, and they can monitor the status of their cloud transition via an intuitive mobile application that provides a step-by-step implementation guide for what needs to be done each day. With Soar, customers can save up to 30% on cost and time, with simple migrations taking as little as 20 weeks to complete.

Oracle Soar is currently available to Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Hyperion Planning customers who are moving to Oracle ERP Cloud, Oracle SCM Cloud, and Oracle EPM Cloud.

Read more about Oracle Soar on Oracle's official blog post.

Read next:
- Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
- Oracle Apex 18.1 is here!
- What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more

Amrata Joshi
26 Mar 2019
2 min read
Yesterday, the team at Kubernetes released Kubernetes 1.14, a new update to the popular open-source container orchestration system. Kubernetes 1.14 comes with support for Windows nodes, a kubectl plugin mechanism, Kustomize integration, and much more.
https://twitter.com/spiffxp/status/1110319044249309184

What's new in Kubernetes 1.14?

Support for Windows nodes

This release adds support for Windows nodes as worker nodes, so Kubernetes can now schedule Windows containers, enabling a vast ecosystem of Windows applications. Enterprises with investments in both platforms can manage their workloads and gain operational efficiencies across their deployments, regardless of operating system.

Kustomize integration

With this release, the declarative resource-config authoring capabilities of kustomize are available in kubectl through the -k flag. Kustomize helps users author and reuse resource config using Kubernetes-native concepts.

kubectl plugin mechanism

This release includes a kubectl plugin mechanism that allows developers to publish their own custom kubectl subcommands in the form of standalone binaries.

PID limits, priority, and preemption

Administrators can now provide pod-to-pod PID (process ID) isolation by defaulting the number of PIDs per pod. Pod priority and preemption in this release also enables the Kubernetes scheduler to schedule important pods first and remove less important pods to create room for more important ones.

Users are generally happy and excited about this release.
https://twitter.com/fabriziopandini/status/1110284805411872768

A user commented on Hacker News, “The inclusion of Kustomize[1] into kubectl is a big step forward for the K8s ecosystem as it provides a native solution for application configuration. Once you really grok the pattern of using overlays and patches, it starts to feel like a pattern that you'll want to use everywhere”.

To know more about this release in detail, check out Kubernetes' official announcement.
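As a quick illustration of the new -k flag, a directory containing a kustomization.yaml along these lines (the resource file names and labels below are made up for the example) can be applied directly with kubectl:

```yaml
# kustomization.yaml — resource names and labels are illustrative
resources:
  - deployment.yaml
  - service.yaml
namePrefix: staging-     # prepended to the names of all generated resources
commonLabels:
  app: my-service        # applied to every resource and selector
```

Running `kubectl apply -k .` from that directory renders the customized manifests and applies them, with no separate kustomize binary required.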
Read next:
- RedHat's OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested 'Operators' for applications
- Microsoft open sources 'Accessibility Insights for Web', a chrome extension to help web developers fix their accessibility issues
- Microsoft open sources the Windows Calculator code on GitHub