
Tech News - Cloud & Networking

376 Articles

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform

Savia Lobo
15 May 2018
2 min read
Google recently announced the Google Compute Engine Plugin for Jenkins, which helps provision, configure, and scale Jenkins build environments on Google Cloud Platform (GCP). Jenkins is one of the most popular tools for Continuous Integration (CI), a standard practice in many software organizations. CI automatically detects changes committed to one's software repositories and runs them through unit, integration, and functional tests to finally create an artifact (JAR, Docker image, or binary). Jenkins lets one define a build-and-test process, then run it continuously against the latest software changes. However, as one scales up a continuous integration practice, one may need to run builds across fleets of machines rather than on a single server.

With the Google Compute Engine Plugin, DevOps teams can intuitively manage instance templates and launch build instances that automatically register themselves with Jenkins. The plugin automatically deletes unused instances once work in the build system has slowed down, so that one only pays for the instances needed. One can also configure the plugin to create build instances as Preemptible VMs, which can save up to 80% on the per-second pricing of builds, and attach accelerators like GPUs and Local SSDs to instances to run builds faster.

One can configure build instances as per one's choice, including the networking. For instance:

- Disable external IPs so that worker VMs are not publicly accessible
- Use Shared VPC networks for greater isolation in one's GCP projects
- Apply custom network tags for improved placement in firewall rules

One can also reduce the security risks present in CI using the Compute Engine Plugin, as it uses the latest and most secure version of the Jenkins Java Network Launch Protocol (JNLP) remoting protocol. When using Jenkins on-premises, one can create an ephemeral build farm in Compute Engine while keeping the Jenkins master and other necessary build dependencies behind a firewall.

Read more about the Compute Engine Plugin in detail on the Google Research blog.
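As a rough illustration of what the plugin automates, the sketch below creates a preemptible instance template with no external IP using the google-cloud-compute Python client. This is not the plugin's own code path; the project, image, and machine type are placeholder assumptions.

```python
# Hypothetical sketch: a preemptible, externally unreachable build-agent
# template of the kind the Jenkins plugin provisions. Assumes the
# google-cloud-compute library; project and image names are placeholders.
from google.cloud import compute_v1

template = compute_v1.InstanceTemplate()
template.name = "jenkins-agent-preemptible"
template.properties = compute_v1.InstanceProperties(
    machine_type="n1-standard-4",
    # Preemptible VMs trade availability for the steep per-second
    # discount mentioned above.
    scheduling=compute_v1.Scheduling(preemptible=True),
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-11"
            ),
        )
    ],
    # No access_configs on the interface means no external IP, so the
    # worker VM is not publicly accessible.
    network_interfaces=[
        compute_v1.NetworkInterface(network="global/networks/default")
    ],
)

compute_v1.InstanceTemplatesClient().insert(
    project="my-project", instance_template_resource=template
)
```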
- How machine learning as a service is transforming cloud
- Polaris GPS: Rubrik's new SaaS platform for data management applications
- Google announces the largest overhaul of their Cloud Speech-to-Text


Idera acquires Travis CI, the open source Continuous Integration solution

Sugandha Lahoti
24 Jan 2019
2 min read
Travis CI, the popular open source continuous integration service, has been acquired by Idera. Idera offers a number of B2B software solutions ranging from database administration to application development to test management. Travis CI will be joining Idera's Testing Tools division, which also includes TestRail, Ranorex, and Kiuwan.

Travis CI assured its users that it will continue to be open source and a stand-alone solution under an MIT license. "We will continue to offer the same services to our hosted and on-premises users. With the support from our new partners, we will be able to invest in expanding and improving our core product," said Konstantin Haase, a founder of Travis CI, in a blog post. Idera will also keep the Travis Foundation running, which runs projects like Rails Girls Summer of Code, Diversity Tickets, Speakerinnen, and Prompt.

It's not just a happy day for Travis CI: the company also brings its 700,000 users to Idera, along with high-profile customers like IBM and Zendesk. Users were quick to note that this acquisition comes at a time when Travis CI's competitors, like Circle CI, seem to be taking market share away from Travis CI. A comment on Hacker News reads, "In a past few month I started to see Circle CI badges popping here and there for opensource repositories and anecdotally many internal projects at companies are moving to GitLab and their built-in CI offering. Probably a good time to sell Travis CI, though I'd prefer if they would find a better buyer." Another user says, "Honestly, for enterprise users that is a good thing. In the hands of a company like Idera we can be reasonably confident that Travis will not disappear anytime soon."

- Announcing Cloud Build, Google's new continuous integration and delivery (CI/CD) platform
- Creating a Continuous Integration commit pipeline using Docker [Tutorial]
- How to master Continuous Integration: Tools and Strategies


Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix

Melisha Dsouza
19 Nov 2018
3 min read
On the 4th of November, Linux 4.20-rc1 was released with a host of notable changes, from AMD Vega 20 support getting squared away, AMD Picasso APU support, Intel 2.5G Ethernet support, and the removal of Speck, to other new hardware support additions and software features. The release that was supposed to improve the kernel's performance did not succeed in doing so. On the contrary, the kernel is much slower compared to previous stable Linux kernel releases.

In a blog released by Phoronix, Michael Larabel, the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org, discussed the results of some tests conducted on the kernel. He bisected the 4.20 kernel merge window to explore the reasons for the significant slowdowns in the kernel for many real-world workloads. The article attributes this degradation in performance to the Spectre flaw in the processor. To mitigate the Spectre flaw, an intentional kernel change was made, termed "STIBP", for cross-hyperthread Spectre mitigation on Intel processors. Single Thread Indirect Branch Predictors (STIBP) prevent cross-hyperthread control of decisions that are made by indirect branch predictors. The STIBP addition in Linux 4.20 will affect systems that have up-to-date/available microcode with this support and where the CPU has Hyper-Threading enabled/present.

Performance issues in Linux 4.20

Michael has done a detailed analysis of the kernel's performance, and here are some of his findings. Many synthetic and real-world tests showed that Intel Core i9 performance was not up to the mark: the Rodinia scientific OpenMP tests took 30% longer, Java-based DaCapo tests took up to ~50% more time to complete, and the code compilation tests also extended in length. There was lower PostgreSQL database server performance and longer Blender3D rendering times. All this was noticed on Core i9 7960X and Core i9 7980XE test systems, while AMD Threadripper 2990WX performance was unaffected by the Linux 4.20 upgrade.

The latest Linux kernel Git benchmarks also saw a significant pullback in performance from the early days of the Linux 4.20 merge window up through the very latest kernel code as of today. The affected systems included a low-end Core i3 7100 as well as Xeon E5 v3 and Core i7 systems. The tests found that the Smallpt renderer slowed down significantly, PHP performance took a major dive, and HMMer also faced a major setback compared to the current Linux 4.19 stable series. What is surprising is that there are mitigations against Spectre, Meltdown, Foreshadow, etc. in Linux 4.19 as well, yet 4.20 shows an additional performance drop on top of all the previously outlined performance hits this year. Throughout the testing, the AMD systems didn't appear to be impacted. This means that if a user disables Spectre V2 mitigations for better performance, the system's security could be compromised.

You can head over to Phoronix for a complete analysis of the test outputs and more information on this news.
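For readers who want to check where their own system stands, the short sketch below reads the kernel's sysfs vulnerability reports; it assumes a Linux machine new enough to expose /sys/devices/system/cpu/vulnerabilities, which kernels with these mitigations do.

```python
# A quick sketch: print the kernel's reported mitigation status for each
# known CPU vulnerability (Spectre v1/v2, Meltdown, etc.) on this machine.
# Assumes a Linux kernel that exposes the sysfs vulnerabilities directory.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```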
- Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
- Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
- The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project


bpftrace, a DTrace like tool for Linux now open source

Prasad Ramesh
09 Oct 2018
2 min read
bpftrace is a DTrace-like tool for troubleshooting kernel problems. It was created about a year ago by Alastair Robertson, and the GitHub repository was made public recently. It has enough features that it draws comparisons to a "DTrace 2.0".

bpftrace

bpftrace is an open source, high-level tracing tool which allows analyzing systems. It is built for the modern extended Berkeley Packet Filter (eBPF), which is a part of the Linux kernel and is popular in systems engineering. Robertson recently developed struct support and applied it to tracepoints; struct support was also applied to kprobes. bpftrace uses existing Linux kernel facilities like eBPF, kprobes, uprobes, tracepoints, and perf_events, as well as the bcc libraries. Internally, bpftrace uses a lex/yacc parser to convert programs into an abstract syntax tree (AST), which is then lowered to LLVM intermediate representation and finally compiled to BPF.

bpftrace and DTrace

bpftrace is a higher-level front end for custom ad-hoc tracing and can play a similar role to DTrace. There are some things eBPF can do that DTrace can't, one of them being the ability to save and retrieve stack traces as variables. Brendan Gregg, one of the contributors of bpftrace, states in his blog: "We've been adding bpftrace features as we need them, not just because DTrace had them. I can think of over a dozen things that DTrace can do that bpftrace currently cannot, including custom aggregation printing, shell arguments, translators, sizeof(), speculative tracing, and forced panics."

A one-liner tutorial and reference guide are available on GitHub for learning bpftrace. For more details, and to try bpftrace, head over to the GitHub repository and Brendan Gregg's blog.
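To give a flavor of the tool, here is a sketch that runs one of the canonical one-liners from the tutorial mentioned above via Python; it assumes bpftrace is installed and the script is executed as root.

```python
# Sketch: run a canonical bpftrace one-liner that counts syscalls by
# process name. Assumes bpftrace is installed and root privileges;
# interrupt with Ctrl-C to print the aggregated counts.
import subprocess

# Standard bpftrace syntax: attach to the raw sys_enter tracepoint and
# aggregate a count per process name (comm).
program = 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'

subprocess.run(["bpftrace", "-e", program], check=True)
```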
- NVTOP: An htop like monitoring tool for NVIDIA GPUs on Linux
- LLVM 7.0.0 released with improved optimization and new tools for monitoring
- Xamarin Test Cloud for API Monitoring [Tutorial]


Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot

Melisha Dsouza
08 Mar 2019
2 min read
Yesterday, Red Hat announced the launch of 'Quarkus', a Kubernetes-native Java framework that offers developers "a unified reactive and imperative programming model" in order to address a wider range of distributed application architectures. The framework uses Java libraries and standards and is tailored for GraalVM and HotSpot. Quarkus has been designed with serverless, microservices, containers, Kubernetes, FaaS, and the cloud in mind, and it provides an effective solution for running Java in these new deployment environments.

Features of Quarkus

- Fast startup, enabling automatic scaling up and down of microservices on containers and Kubernetes, as well as FaaS on-the-spot execution.
- Low memory utilization, to help optimize container density in microservices architecture deployments that require multiple containers.
- A unified imperative and reactive programming model for microservices development.
- A full-stack framework, leveraging libraries like Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.
- An extension framework that third-party framework authors can leverage and extend.

Twitter was abuzz with Kubernetes users expressing their excitement about this news, describing Quarkus as a "game changer" in the world of microservices:

https://twitter.com/systemcraftsman/status/1103759828118368258
https://twitter.com/MarcusBiel/status/1103647704494804992
https://twitter.com/lazarotti/status/1103633019183738880

This open source framework is available under the Apache Software License 2.0 or a compatible license. You can head over to the Quarkus website for more information on this news.

- Using lambda expressions in Java 11 [Tutorial]
- Bootstrap 5 to replace jQuery with vanilla JavaScript
- Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?


Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads

Melisha Dsouza
27 Nov 2018
3 min read
The Amazon re:Invent 2018 conference saw a surge of new announcements and releases. The five-day event that commenced in Las Vegas yesterday has already seen some exciting developments in the field of AWS, like AWS RoboMaker, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, an improved AWS Snowball Edge, and much more. In this article, we will look at their latest release: 'Firecracker', a new virtualization technology and open source project for running multi-tenant container workloads.

Firecracker is open sourced under Apache 2.0 and enables service owners to operate secure multi-tenant container-based services. It combines the speed, resource efficiency, and performance enabled by containers with the security and isolation offered by traditional VMs. Firecracker implements a virtual machine manager (VMM) based on Linux's Kernel-based Virtual Machine (KVM). Users can create and manage microVMs with any combination of vCPU and memory with the help of a RESTful API. It incorporates a faster startup time, provides a reduced memory footprint for each microVM, and offers a trusted sandboxed environment for each container.

Features of Firecracker

- Firecracker uses multiple levels of isolation and protection, and hence is secure by design. The security model includes a very simple virtualized device model in order to minimize the attack surface, a process jail, and static linking functionality.
- It delivers high performance, allowing users to launch a microVM in as little as 125 ms.
- It has low overhead and consumes about 5 MiB of memory per microVM. This means a user can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.
- Firecracker is written in Rust, which guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.

The AWS community has shown a positive response to this release:

https://twitter.com/abbyfuller/status/1067285030035046400

AWS Lambda uses Firecracker for provisioning and running secure sandboxes to execute customer functions. These sandboxes can be quickly provisioned with a minimal footprint, enabling performance along with security. AWS Fargate tasks also execute on Firecracker microVMs, which allows the Fargate runtime layer to run faster and more efficiently on EC2 bare metal instances.

To learn more, head over to the Firecracker page. You can also read more on Jeff Barr's blog and the Open Source blog.
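As an illustration of the RESTful API described above, here is a minimal sketch that configures and boots a microVM over Firecracker's Unix socket; it assumes a running firecracker process, the third-party requests-unixsocket package, and placeholder file paths.

```python
# Sketch: drive a running Firecracker process through its REST API on a
# Unix socket. Assumes firecracker was started with
# --api-sock /tmp/firecracker.socket and that requests-unixsocket is
# installed; the kernel path is a placeholder.
import requests_unixsocket

session = requests_unixsocket.Session()
base = "http+unix://%2Ftmp%2Ffirecracker.socket"

# Choose any combination of vCPU and memory for the microVM.
session.put(f"{base}/machine-config",
            json={"vcpu_count": 2, "mem_size_mib": 1024})

# Point the microVM at an uncompressed kernel image (a real setup would
# also attach a rootfs drive before starting).
session.put(f"{base}/boot-source",
            json={"kernel_image_path": "vmlinux",
                  "boot_args": "console=ttyS0 reboot=k panic=1"})

# Boot it; startup takes on the order of the 125 ms cited above.
session.put(f"{base}/actions", json={"action_type": "InstanceStart"})
```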
- AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
- Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
- Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

SpaceX shares new information on Starlink after the successful launch of 60 satellites

Sugandha Lahoti
27 May 2019
3 min read
After the successful launch of Elon Musk's mammoth space mission Starlink last week, SpaceX has unveiled a brand new website with more details on the Starlink commercial satellite internet service.

Starlink

Starlink sent 60 communications satellites into orbit, which will eventually be part of a single constellation providing high-speed internet to the globe. SpaceX plans to deploy nearly 12,000 satellites in three orbital shells by the mid-2020s, initially placing approximately 1,600 in a 550-kilometer (340 mi) altitude area. The new website gives a few glimpses of what Starlink's plan looks like, including a CG representation of how the satellites will work. These satellites will move along their orbits simultaneously, providing internet in a given area. SpaceX has also revealed more intricacies about the satellites.

Flat panel antennas

In each satellite, the signal is transmitted and received by four high-throughput phased array radio antennas. These antennas have a flat panel design and can transmit in multiple directions and frequencies.

Ion propulsion system and solar array

Each satellite carries a krypton ion propulsion system. These systems enable satellites to raise their orbit, maneuver in space, and deorbit. There is also a single solar array, which simplifies the system. Ion thrusters provide a more fuel-efficient form of propulsion than conventional liquid propellants; the use of krypton makes them less expensive than xenon-based thrusters, but offers lower thrust efficiency.

Star Tracker and autonomous collision avoidance system

Star Tracker is SpaceX's built-in sensor system, which tells each satellite its orientation for precise broadband throughput placement and tracking. The collision avoidance system uses inputs from the U.S. Department of Defense debris tracking system, reducing human error with a more reliable approach. Through this data, it can perform maneuvers to avoid collision with space debris and other spacecraft. Per TechCrunch, who interviewed a SpaceX representative, "the debris tracker hooks into the Air Force's Combined Space Operations Center, where trajectories of all known space debris are tracked. These trajectories are checked against those of the satellites, and if a possible collision is detected the course changes are made, well ahead of time."

More information on Starlink (such as the cost of the project, what ground stations look like, etc.) is not yet known. Until then, keep an eye on Starlink's website and this space for new updates.

- SpaceX delays launch of Starlink, its commercial satellite internet service, for the second time to "update satellite software"
- Jeff Bezos unveils space mission: Blue Origin's Lunar lander to colonize the moon
- Elon Musk reveals big plans with Neuralink


Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE

Amrata Joshi
05 Nov 2018
2 min read
Late last week, Red Hat announced that RHEL has deprecated KDE (K Desktop Environment) support. KDE Plasma Workspaces (KDE) is an alternative to the default GNOME desktop environment for RHEL, and a major future release of Red Hat Enterprise Linux will no longer support using KDE in its place.

In the '90s, the Red Hat team was entirely against KDE and put lots of effort into GNOME. Since Qt was under a not-quite-free license at the time, the Red Hat team was firmly behind GNOME. Steve Almy, principal product manager of Red Hat Enterprise Linux, told the Register, "Based on trends in the Red Hat Enterprise Linux customer base, there is overwhelming interest in desktop technologies such as Gnome and Wayland, while interest in KDE has been waning in our installed base."

Red Hat heavily backs the Linux desktop environment GNOME, which is developed as an independent open-source project and is used by a large number of other distros. Although Red Hat is signaling the end of KDE support in RHEL, KDE is very much its own independent project that will continue on its own, with or without support from future RHEL editions.

Almy said, "While Red Hat made the deprecation note in the RHEL 7.6 notes, KDE has quite a few years to go in RHEL's roadmap." The note is simply a warning that certain functionality may be removed from RHEL in the future or replaced with functionality similar or more advanced than the deprecated one. KDE, as well as anything listed in Chapter 51 of the Red Hat Enterprise Linux 7.6 release notes, will continue to be supported for the life of Red Hat Enterprise Linux 7.

Read more about this news on the official website of Red Hat.

- Red Hat released RHEL 7.6
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
- Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation


Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Melisha Dsouza
12 Dec 2018
3 min read
At KubeCon+CloudNativeCon, happening in Seattle this week, Elastic N.V., the pioneer behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. Helm Charts make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly. Developers use Helm charts for their flexibility in creating, publishing, and sharing Kubernetes applications.

The ease of using Kubernetes to manage containerized workloads has also led Elastic users to deploy their Elasticsearch workloads on Kubernetes. Now, with the Helm chart support provided for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade, and run their applications on Kubernetes. With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana, and they get access to some basic free features like monitoring, Kibana Canvas, and Spaces. According to the blog post, Helm charts will serve as a "way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies."

Why should developers consider Helm charts?

Helm charts are known to give users the ability to leverage Kubernetes packages through the click of a button or a single CLI command (see the sketch after this article). Kubernetes can be complex to use, impairing developer productivity. Helm charts improve productivity as follows:

- With Helm charts, developers can focus on developing applications rather than deploying dev-test environments. They can author their own chart, which in turn automates deployment of their dev-test environment.
- Helm comes with "push button" deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience.
- Combating the complexity of deploying a Kubernetes-orchestrated container application, Helm charts allow software vendors and developers to preconfigure their applications with sensible defaults. This enables users/deployers to change parameters of the application/chart using a consistent interface.
- Developers can incorporate production-ready packages while building applications in a Kubernetes environment, thus eliminating deployment errors due to incorrect configuration file entries or mangled deployment recipes.
- Deploying and maintaining Kubernetes applications can be tedious and error-prone. Helm charts reduce the complexity of maintaining an app catalog in a Kubernetes environment. A central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts.

To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub. In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud-native technologies and companies. This is another step in Elastic's mission to build products in an open and transparent way. You can head over to Elastic's official blog for in-depth coverage of this news, or check out MarketWatch for more insights.
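As a rough sketch of that "single CLI command" workflow, the snippet below drives the Helm 2-era CLI of the time from Python. It assumes helm is installed, a Kubernetes cluster is reachable, and Elastic's published chart repository at helm.elastic.co; the release names are placeholders.

```python
# Sketch: add Elastic's chart repository and install the alpha
# Elasticsearch and Kibana charts. Assumes helm v2 (hence --name) and a
# reachable cluster; release names are placeholders.
import subprocess

def helm(*args):
    cmd = ["helm", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

helm("repo", "add", "elastic", "https://helm.elastic.co")
helm("repo", "update")
helm("install", "--name", "elasticsearch", "elastic/elasticsearch")
helm("install", "--name", "kibana", "elastic/kibana")
```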
- Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
- Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
- How to perform Numeric Metric Aggregations with Elasticsearch


Google’s Cloud Healthcare API is now available in beta

Amrata Joshi
09 Apr 2019
3 min read
Last week, Google announced that its Cloud Healthcare API is now available in beta. The API acts as a bridge between on-site healthcare systems and applications that are hosted on Google Cloud. It is HIPAA compliant, ecosystem-ready, and developer-friendly. The aim of the team at Google is to give hospitals and other healthcare facilities more analytical power with the help of the Cloud Healthcare API.

The official post reads, "From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data and better understand that data through the application of analytics and machine learning in real time, at scale."

This API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP). With its help, users can explore new capabilities for data analysis, machine learning, and application development for healthcare solutions. The Cloud Healthcare API also simplifies app development and device integration to speed up the process, and it supports the standards-based data formats and protocols of existing healthcare tech. For instance, it allows healthcare organizations to stream data processing with Cloud Dataflow, analyze data at scale with BigQuery, and tap into machine learning with the Cloud Machine Learning Engine.

Features of the Cloud Healthcare API

- Compliant and certified: The API is HIPAA compliant and HITRUST CSF certified. Google is also planning ISO 27001, ISO 27017, and ISO 27018 certifications for the Cloud Healthcare API.
- Explore your data: The API allows users to explore their healthcare data by incorporating advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.
- Managed scalability: The API provides web-native, serverless scaling optimized by Google's infrastructure. Users can simply activate the API to send requests, as no initial capacity configuration is required.
- Apigee integration: The API integrates with Apigee, recognized by Gartner as a leader in full-lifecycle API management, for delivering app and service ecosystems around user data.
- Developer-friendly: The API organizes users' healthcare information into datasets with one or more modality-specific stores per set, where each store exposes both a REST and an RPC interface.
- Enhanced data liquidity: The API supports bulk import and export of FHIR data and DICOM data, which accelerates delivery for applications with dependencies on existing datasets. It further provides a convenient API for moving data between projects.

The official post reads, "While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers." Google will highlight what its partners, including the American Cancer Society, CareCloud, Kaiser Permanente, and iDigital, are doing with the API at the ongoing Google Cloud Next. To know more about this news, check out Google's official announcement.
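To make the developer-friendly claim concrete, here is a minimal sketch that lists a project's healthcare datasets through the beta REST surface; it assumes the google-api-python-client library, application-default credentials, and placeholder project/location names.

```python
# Sketch: list Cloud Healthcare API (beta) datasets in a project and
# location. Assumes google-api-python-client with application-default
# credentials; the project and location below are placeholders.
from googleapiclient import discovery

service = discovery.build("healthcare", "v1beta1")

parent = "projects/my-project/locations/us-central1"
response = service.projects().locations().datasets().list(parent=parent).execute()

for dataset in response.get("datasets", []):
    # Each dataset contains modality-specific stores (FHIR, DICOM, HL7v2).
    print(dataset["name"])
```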
- Ian Goodfellow quits Google and joins Apple as a director of machine learning
- Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members
- Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project

Fatema Patrawala
14 Aug 2018
3 min read
The Cloud Native Computing Foundation (CNCF) has accepted OpenMetrics, an open source specification for metrics exposition, into the CNCF Sandbox, a home for early-stage and evolving cloud native projects. Google cloud engineers and other vendors had been working on this persistently for the past several months, and it has finally been accepted by CNCF. Engineers are further working on ways to support OpenMetrics in OpenCensus, a set of uniform tracing and stats libraries that work with multi-vendor services.

OpenMetrics will bring together the maturity and adoption of Prometheus and Google's background in working with stats at extreme scale. It will also bring in the experience and needs of a variety of projects, vendors, and end-users who are aiming to move away from the hierarchical way of monitoring to enable users to transmit metrics at scale. The open source initiative, focused on creating a neutral metrics exposition format, will provide a sound data model for the current and future needs of users, embedded in a standard that is an evolution of the widely adopted Prometheus exposition format. While there are numerous monitoring solutions available today, many do not focus on metrics and are based on old technologies with proprietary, hard-to-implement, and hierarchical data models.

"The key benefit of OpenMetrics is that it opens up the de facto model for cloud native metric monitoring to numerous industry leading implementations and new adopters. Prometheus has changed the way the world does monitoring and OpenMetrics aims to take this organically grown ecosystem and transform it into a basis for a deliberate, industry-wide consensus, thus bridging the gap to other monitoring solutions like InfluxData, Sysdig, Weave Cortex, and OpenCensus. It goes without saying that Prometheus will be at the forefront of implementing OpenMetrics in its server and all client libraries. CNCF has been instrumental in bringing together cloud native communities. We look forward to working with this community to further cloud native monitoring and continue building our community of users and upstream contributors," says Richard Hartmann, Technical Architect at SpaceNet, Prometheus team member, and founder of OpenMetrics.

OpenMetrics contributors include AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig, and Uber, among others.

"Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions. In addition to our contributions to the spec, we'll be enabling OpenMetrics support in OpenCensus," says Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

For more information about OpenMetrics, please visit openmetrics.io. To quickly enable trace and metrics collection from your application, please visit opencensus.io.
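For readers unfamiliar with the Prometheus exposition format that OpenMetrics evolves, the sketch below produces it with the prometheus_client Python library; the metric name and label are illustrative.

```python
# Sketch: render the Prometheus text exposition format, the starting
# point for the OpenMetrics standard. Assumes the prometheus_client
# library; the metric name and label are illustrative.
from prometheus_client import Counter, generate_latest

requests_total = Counter("http_requests_total", "Total HTTP requests", ["method"])
requests_total.labels(method="get").inc()

# Prints something like:
#   # HELP http_requests_total Total HTTP requests
#   # TYPE http_requests_total counter
#   http_requests_total{method="get"} 1.0
print(generate_latest().decode())
```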
- 5 reasons why your business should adopt cloud computing
- Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
- Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1


Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon update showed multiple .NET demos of how one can use Docker for modern applications and for older applications that use traditional architectures, which made it easier for users to containerize .NET applications using tools from both Microsoft and Docker. The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak," Microsoft writes in one of their blog posts. The team also mentions that they are invested in making .NET Core a true container runtime, and they look forward to hardening .NET's runtime to make it container-aware and function efficiently in low-memory environments.

Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Lower memory allocation and fewer GC heaps by default

With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in tens of percentage points of improvement. The team also mentions a new policy for determining how many GC heaps to create. This matters most on machines where a low memory limit is set but no CPU limit, on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both of these changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.

PowerShell added to .NET Core SDK container images

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. Having PowerShell inside the .NET Core SDK container image enables two main scenarios that were not otherwise possible:

- Writing .NET Core application Dockerfiles with PowerShell syntax, for any OS.
- Writing .NET Core application/library build logic that can be easily containerized.

Note: PowerShell Core is now available as part of .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.

.NET Core images now available via Microsoft Container Registry

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:

- Syndicating Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat.
- Using Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support

With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows:

- Alpine: support the tip version and retain support for one quarter (3 months) after a new version is released. Currently, 3.9 is tip, and the team will stop producing 3.8 images in a month or two.
- Debian: support one Debian version per latest .NET Core version. This is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0, the team may publish Debian 10 based images. However, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions.
- Ubuntu: support one Ubuntu version per latest .NET Core version (currently 18.04). As new Ubuntu LTS versions appear, the team will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions.

For Windows, they support the cross-product of Nano Server and .NET Core versions.

ARM architecture

The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments. Apart from these advantages, the team has also added support for Docker memory and CPU limits.

To know more about this partnership in detail, read Microsoft's official blog post.

- DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
- Are Debian and Docker slowly losing popularity?
- Creating a Continuous Integration commit pipeline using Docker [Tutorial]


Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Melisha Dsouza
24 Oct 2018
3 min read
Earlier this month, the team at Google Cloud Storage announced new capabilities for improving the reliability and performance of users' data. They have now rolled out updates for storage security that cater to data privacy and compliance with financial services regulations. With these new security upgrades, including the general availability of Cloud Storage Bucket Lock, UI changes for privacy management, Cloud KMS integration with Cloud Storage, and much more, users will be able to build reliable applications as well as ensure the safety of their data.

Storage security features on Google Cloud Storage:

#1 General availability of Cloud Storage Bucket Lock

Cloud Storage Bucket Lock is now generally available. This feature is especially useful for users that need Write Once Read Many (WORM) storage, as it prevents deletion or modification of content for a specified period of time. To help organizations meet compliance, legal, and regulatory requirements for retaining data for specific lengths of time, Bucket Lock provides retention lock capabilities as well as event holds for content. Bucket Lock works with all tiers of Cloud Storage, so both primary and archive data can use the same storage setup. Users can automatically move locked data into colder storage tiers and delete data once the retention period expires. Bucket Lock has been used in a diverse range of applications, from financial records compliance and healthcare records retention to media content archives and much more. You can head over to the Bucket Lock documentation to learn more about this feature.

#2 New UI features for secure sharing of data

The new UI features in the Cloud Storage console enable users to securely share their data and gain insight into which data, buckets, and objects are publicly visible across their Cloud Storage environment. The public sharing option in the UI has been replaced with an Identity and Access Management (IAM) panel. This mechanism prevents users from publicly sharing their objects through an accidental mouse click, lets administrators clearly understand which content is publicly available, and helps users know how their data is being shared publicly.

#3 Use Cloud KMS keys with Cloud Storage data

Cloud Key Management Service (KMS) provides users with sophisticated encryption key management capabilities. Users can manage and control encryption keys for their Cloud Storage datasets through the Cloud Storage–KMS integration. This integration helps users manage active keys, authorize users or applications to use certain keys, monitor key use, and more. Cloud Storage users can also perform key rotation, revocation, and deletion. Head over to the Google Cloud Storage blog to learn more about the Cloud KMS integration.

#4 Access Transparency for Cloud Storage and Persistent Disk

This new transparency mechanism shows users who accessed their Cloud Storage and Persistent Disk environment, as well as when, where, and why Google support or engineering did so. Users can use Stackdriver APIs to monitor logs related to Cloud Storage actions programmatically and also archive their logs if required for future auditing. This gives complete visibility into administrative actions for monitoring and compliance purposes. You can learn more about AXT in Google's blog post.

Head over to the Google Cloud Storage blog to understand how these new upgrades will add to the security and control of cloud resources.
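A minimal sketch of what Bucket Lock looks like from code, assuming the google-cloud-storage Python library with default credentials; the bucket name and retention period are placeholders:

```python
# Sketch: apply and lock a Bucket Lock retention policy. Assumes the
# google-cloud-storage library and default credentials; the bucket name
# and retention period are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-compliance-bucket")

# Retain every object for 30 days (the value is in seconds); objects
# cannot be deleted or overwritten until their retention period expires.
bucket.retention_period = 30 * 24 * 60 * 60
bucket.patch()

# Locking the policy makes it permanent: it can no longer be shortened
# or removed, which is what WORM compliance regimes require.
bucket.lock_retention_policy()
```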
- What's new in Google Cloud Functions serverless platform
- Google Cloud announces new Go 1.11 runtime for App Engine
- Cloud Filestore: A new high performance storage option by Google Cloud Platform

Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies

Amrata Joshi
10 Sep 2019
3 min read
Today, the team at Kong, the creators of the API and service lifecycle management platform for modern architectures, announced the release of Kuma, a new open-source project. Kuma is based on the open-source Envoy proxy and addresses the limitations of first-generation service mesh technologies by seamlessly managing services on the network. The first-generation meshes lacked a mature control plane, and even when they later provided one, it wasn't easy to use, as they were hard to deploy. Kuma is easy to use and enables rapid adoption of mesh.

Also read: Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Features of Kuma

- Runs on all platforms: Kuma can run on any platform, including Kubernetes, containers, virtual machines, and legacy environments. It also includes a fast data plane as well as an advanced control plane that makes it easier to use.
- Reliability: The initial service mesh solutions were inflexible and difficult to use. Kuma ensures reliability by automating the process of securing the underlying network.
- Support for all environments: Kuma supports all the environments in an organization, so existing applications can still be used in their traditional environments. This provides comprehensive coverage across an organization.
- Couples a fast data plane with a control plane: This coupling helps users set permissions and routing rules and expose metrics with just a few commands.
- Tracing and logging: Kuma helps users implement tracing and logging and analyze metrics for rapid debugging.
- Routing and control: Kuma provides traffic control capabilities, including circuit breakers and health checks, in order to enhance L4 (Layer 4) routing.

Marco Palladino, CTO and co-founder of Kong, said, "We now have more microservices talking to each other and connectivity between them is the most unreliable piece: prone to failures, insecure and hard to observe." Palladino further added, "It was important for us to make Kuma very easy to get started with on both Kubernetes and VM environments, so developers can start using service mesh immediately even if their organization hasn't fully moved to Kubernetes yet, providing a smooth path to containerized applications and to Kubernetes itself. We are thrilled to be open-sourcing Kuma and extending the adoption of Envoy, and we will continue to contribute back to the Envoy project like we have done in the past. Just as Kong transformed and modernized API Gateways with open-source Kong, we are now doing that for service mesh with Kuma."

The Kuma platform will be on display during the second annual Kong Summit, to be held on October 2-3, 2019.

Other interesting news in Cloud and Networking:

- Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
- VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
- The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models


Grafana Labs announces general availability of Loki 1.0, a multi-tenant log aggregation system

Savia Lobo
20 Nov 2019
3 min read
Today, at the ongoing KubeCon 2019, Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient, and cost-effective approach to log aggregation.

The Loki project was first introduced at KubeCon Seattle in 2018. Before the official launch, the project was started inside Grafana Labs and was used internally to monitor all of Grafana Labs' infrastructure, helping ingest around 1.5 TB / 10 billion log lines a day. Released under the Apache 2.0 license, Loki is optimized for Grafana, Kubernetes, and Prometheus. Within just a year, the project has received more than 1,000 contributions from 137 contributors and has nearly 8,000 stars on GitHub.

With Loki 1.0, users can instantaneously switch between metrics and logs, preserving context and reducing MTTR (mean time to recovery). By storing compressed, unstructured logs and indexing only metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing. Loki's design is inspired by Prometheus, the open source monitoring solution for the cloud-native ecosystem, and it offers a Prometheus-like query language called LogQL to further integrate with that ecosystem.

Tom Wilkie, VP of Product at Grafana Labs, said, "Grafana Labs is proud to have created Loki and fostered the development of the project, building first-class support for Loki into Grafana and ensuring customers receive the support and features they need." He further added, "We are committed to delivering an open and composable observability platform, of which Loki is a key component, and continue to rely on the power of open source and our community to enhance observability into application and infrastructure."

Grafana Labs also offers enterprise services and support for Loki, which include:

- Support and training from Loki maintainers and experts
- 24 x 7 x 365 coverage from the geographically distributed Grafana team
- Per-node pricing that scales with deployment

Read more about Grafana Loki in detail on GitHub.
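To show what LogQL looks like against Loki's HTTP API, here is a minimal sketch; it assumes the requests library, a Loki server at localhost:3100, and an illustrative job label.

```python
# Sketch: run a LogQL query against Loki's HTTP range-query endpoint.
# Assumes the requests library and a Loki server at localhost:3100; the
# stream selector and line filter below are illustrative.
import requests

resp = requests.get(
    "http://localhost:3100/loki/api/v1/query_range",
    params={
        # LogQL: select the stream by label, then filter lines for "error".
        "query": '{job="myapp"} |= "error"',
        "limit": 100,
    },
)
resp.raise_for_status()

for stream in resp.json()["data"]["result"]:
    print(stream["stream"])              # the stream's label set
    for timestamp, line in stream["values"]:
        print(timestamp, line)
```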
- "Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
- KubeCon + CloudNativeCon EU 2019 highlights: Microsoft's Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!
- Grafana 6.2 released with improved security, enhanced provisioning, Bar Gauge panel, lazy loading and more