
Tech News - DevOps

82 Articles

‘2019 Upskilling: Enterprise DevOps Skills’ report gives an insight into the DevOps skill set required for enterprise growth

Melisha Dsouza
05 Mar 2019
3 min read
DevOps Institute has announced the results of the "2019 Upskilling: Enterprise DevOps Skills Report". The research and analysis for this report were conducted by Eveline Oehrlich, former vice president and research director at Forrester Research. The project was supported by founding Platinum Sponsor Electric Cloud, Gold Sponsor CloudBees, and Silver Sponsor Lenovo.

The report outlines the most valued and in-demand skills needed to achieve DevOps transformation within enterprise IT organizations of all sizes. It also gives an insight into the skills a DevOps professional should develop to help build a DevOps mindset and culture for organizations and individuals.

According to Jayne Groll, CEO of DevOps Institute, "DevOps Institute is thrilled to share the research findings that will help businesses and the IT community understand the requisite skills IT practitioners need to meet the growing demand for T-shaped professionals. By identifying skill sets needed to advance the human side of DevOps, we can nurture the development of the T-shaped professional that is being driven by the requirement for speed, agility and quality software from the business."

Key findings from the report

- 55% of the survey respondents said that they first look for internal candidates when searching for DevOps team members, and will look for external candidates only if no internal candidate has been identified.
- Respondents agreed that automation skills (57%), process skills (55%), and soft skills (53%) are the most important must-have skills.
- Asked which job titles companies recently hired (or are planning to hire), respondents reported: DevOps Engineer/Manager, 39%; Software Engineer, 29%; DevOps Consultant, 22%; Test Engineer, 18%; Automation Architect, 17%; and Infrastructure Engineer, 17%. Other recruits included CI/CD Engineers, 16%; System Administrators, 15%; Release Engineers/Managers, 13%; and Site Reliability Engineers, 10%.
- Functional skills and key technical skills, when combined, complement the soft skills required to create qualified DevOps engineers.
- Automation, process, and soft skills are the "must-have" skills for a DevOps engineer; process skills are needed for intelligent automation.
- Another key functional skill is IT operations, with security coming in second.
- Business skills are most important to leaders, but less so to individual contributors.
- Cloud and analytical knowledge are the top technical skills.
- Recruiting for DevOps is on the rise.

The report also shows the priorities across the top skill categories relative to the key roles surveyed. (Source: Press release, DevOps Institute's '2019 Upskilling: Enterprise DevOps Skills Report')

Oehrlich said in a statement that hiring managers see a DevOps professional as a creative, knowledge-sharing, eager-to-learn individual with shapeable skill sets. Andre Pino, vice president of marketing at CloudBees, said in a statement that "The survey results show the importance for developers and managers to have the right skills that empower them to meet business objectives and have a rewarding career in our fast-paced industry."

You can check out the entire report for more insights on this news.

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
JFrog acquires DevOps startup 'Shippable' for an end-to-end DevOps solution


Fastly announces the next-gen edge computing services available in private beta

Fatema Patrawala
08 Nov 2019
4 min read
Fastly, a San Francisco-based startup providing an edge cloud platform, yesterday announced the private beta launch of Compute@Edge, its new edge computing service. Compute@Edge is a powerful language-agnostic compute environment. This major milestone marks an evolution of Fastly's edge computing capabilities and the company's innovation in the serverless space.

https://twitter.com/fastly/status/1192080450069643264

Fastly's Compute@Edge is designed to empower developers to build far more advanced edge applications with greater security, more robust logic, and new levels of performance. They can also create new and improved digital experiences with their own technology choices around the cloud platforms, services, and programming languages needed. Rather than have developers spend time on operational overhead, the company's goal is to continue reinventing the way end users live, work, and play on the web. Fastly's Compute@Edge gives developers the freedom to push complex logic closer to end users.

"When we started Fastly, we sought to build a platform with the power to realize the future of edge computing — from our software-defined modern network to our point of presence design, everything has led us to this point," explained Tyler McMullen, CTO of Fastly. "With this launch, we're excited to double down on that vision and work with enterprises to help them build truly complete applications in an environment that offers new levels of stability, security, and global scale."

We had the opportunity to interview Fastly's CTO Tyler McMullen a few months back, discussing Fastly's Lucet and the future of WebAssembly and Rust, among other things. You can read the full interview here.

Fastly Compute@Edge leverages speed for global scale and security

Fastly's Compute@Edge environment promises a startup time of 35.4 microseconds, 100x faster than any other solution in the market. Additionally, Compute@Edge is powered by Lucet, Fastly's open-source WebAssembly compiler and runtime, and supports Rust as a second language in addition to Varnish Configuration Language (VCL).

Other benefits of Compute@Edge include:

- Code can be computed around the world instead of in a single region. This allows developers to reduce code execution latency and further optimize the performance of their code, without worrying about managing the underlying infrastructure.
- The unmatched speed at which the environment operates, combined with Fastly's isolated sandboxing technology, reduces the risk of accidental data leakage. With a "burn-after-reading" approach to request memory, entire classes of vulnerabilities are eliminated.
- Developers can serve GraphQL from the network edge and deliver more personalized experiences.
- Developers can build their own customized API protection logic.
- With manifest manipulation, developers can deliver content with a "best-performance-wins" approach, like multi-CDN live streams that run smoothly for users around the world.

Fastly has operated in the serverless market since its founding in 2011 through its Edge Cloud Platform, including products like Full Site Delivery, Load Balancer, DDoS protection, and Web Application Firewall (WAF). To date, Fastly's serverless computing offering has focused on delivery-centric use cases via its VCL-powered programmable edge. With the introduction of Compute@Edge, Fastly unlocks even more powerful and widely applicable computing capabilities.

To learn more about Fastly's edge computing and cloud services, you can visit its official blog. Developers interested in being part of the private beta can sign up on this page.

Fastly SVP, Adam Denenberg on Fastly's new edge resources, edge computing, fog computing, and more
Fastly, edge cloud platform, files for IPO
Fastly open sources Lucet, a native WebAssembly compiler and runtime
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


Google and Waze share their best practices for canary deployment using Spinnaker

Bhagyashree R
18 Jan 2019
3 min read
On Monday, Eran Davidovich, a System Operations Engineer at Waze, and Théo Chamley, Solutions Architect at Google Cloud, shared their experience of using Spinnaker for canary deployments. Waze estimates that canary deployment has helped it prevent a quarter of all incidents on its services.

What is Spinnaker?

Developed at Netflix, Spinnaker is an open source, multi-cloud continuous delivery platform that helps developers manage app deployments on different computing platforms, including Google App Engine, Google Kubernetes Engine, AWS, Azure, and more. The platform also enables you to implement advanced deployment methods like canary deployment. In this type of deployment, developers roll out changes to a subset of users to analyze whether or not the code release provides the desired outcome. If the new code poses any risks, you can mitigate them before releasing the update to all users.

In April 2018, Google and Netflix introduced a new feature for Spinnaker called Kayenta, with which you can create an automated canary analysis for your project. Though you can build your own canary deployment or other advanced deployment patterns, Spinnaker and Kayenta together aim to make it much easier and more reliable. The tasks that Kayenta automates include fetching user-configured metrics from their sources, running statistical tests, and providing an aggregated score for the canary. On the basis of the aggregated score and the set limits for success, Kayenta automatically promotes or fails the canary, or triggers a human approval path.

Canary best practices

Check out the following best practices to ensure that your canary analyses are reliable and relevant:

- Instead of comparing the canary against production, compare it against a baseline. Many differences can otherwise skew the results of the analysis, such as cache warmup time, heap size, load-balancing algorithms, and so on.
- Run the canary for enough time, collecting at least 50 pieces of time-series data per metric, to ensure that the statistical analysis is relevant.
- Choose metrics that represent different aspects of your applications' health. Three aspects are critical as per the SRE book: latency, errors, and saturation.
- Put a standard set of reusable canary configs in place. This gives anyone on your team a starting point and also keeps the canary configurations maintainable.

Thunderbird welcomes the new year with better UI, Gmail support and more
Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
AIOps – Trick or Treat?
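The analysis loop Kayenta automates (fetch metrics, compare canary to baseline, aggregate a weighted score, then promote or fail) can be sketched in a few lines of Python. This is a simplified illustration of the idea, not Kayenta's actual API: the metric names, sample values, tolerance, weights, and the naive mean-comparison test are all assumptions made for the example.

```python
# Toy canary analysis in the spirit of Kayenta: compare each canary metric
# against a baseline, mark it pass/fail, and aggregate a weighted score.
# Metric names, samples, thresholds, and weights are illustrative only.

def mean(xs):
    return sum(xs) / len(xs)

def classify(baseline, canary, tolerance=0.10):
    """Pass if the canary's mean is within `tolerance` (10%) of the baseline's."""
    b, c = mean(baseline), mean(canary)
    return abs(c - b) <= tolerance * abs(b)

def canary_score(metrics, weights):
    """Weighted percentage of metrics that pass, from 0 to 100."""
    total = sum(weights[name] for name in metrics)
    passed = sum(weights[name] for name, (base, can) in metrics.items()
                 if classify(base, can))
    return 100.0 * passed / total

metrics = {
    # metric: (baseline samples, canary samples), e.g. latency in ms
    "latency":    ([100, 102, 98, 101], [104, 103, 99, 100]),
    "error_rate": ([0.01, 0.02, 0.01, 0.02], [0.09, 0.08, 0.10, 0.09]),
}
weights = {"latency": 1.0, "error_rate": 2.0}

score = canary_score(metrics, weights)
# Promote only above a success limit, mirroring Kayenta's score thresholds.
decision = "promote" if score >= 75 else "fail"
print(round(score, 2), decision)
```

Here the latency metric passes but the error rate regresses badly, so the weighted score falls below the limit and the canary fails, which is exactly the early-warning behavior the best practices above are tuning for.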


Kubernetes Containerd 1.1 Integration is now generally available

Savia Lobo
25 May 2018
3 min read
Just 6 months after releasing the alpha version of the Kubernetes containerd integration, the community has declared that the upgraded containerd 1.1 is now generally available. Containerd 1.1 can be used as the container runtime for production Kubernetes clusters. It works well with Kubernetes 1.10 and supports all Kubernetes features. Let's look at the key upgrades in Kubernetes containerd 1.1:

Architecture upgrade

[Figure: containerd 1.1 architecture with the CRI plugin]

In version 1.1, the cri-containerd daemon has been changed to a containerd CRI plugin. This CRI plugin is the default and is built into containerd 1.1. It interacts with containerd through direct function calls. Kubernetes can now use containerd directly, as this new architecture makes the integration more stable and efficient, and eliminates another gRPC hop in the stack. Thus, the cri-containerd daemon is no longer needed.

Performance upgrades

Performance optimization has been the major focus of containerd 1.1, both in pod startup latency and in daemon resource usage, discussed in detail below.

Pod startup latency

The containerd 1.1 integration has lower pod startup latency than the Docker 18.03 CE integration with dockershim, based on the results from the '105 pod batch startup benchmark' (the lower, the better).

[Figure: pod startup latency graph]

CPU and memory usage

At a steady state with 105 pods, the containerd 1.1 integration consumes less CPU and memory overall than the Docker 18.03 CE integration with dockershim. The results differ with the number of pods running on the node; 105 is the current default for the maximum number of user pods per node.

[Figures: CPU usage graph, memory usage graph]

Compared to the Docker 18.03 CE integration with dockershim, the containerd 1.1 integration has 30.89% lower kubelet CPU usage, 68.13% lower container runtime CPU usage, 11.30% lower kubelet resident set size (RSS) memory usage, and 12.78% lower container runtime RSS memory usage.

What would happen to Docker Engine?

Switching to containerd does not mean one will be unable to use Docker Engine; in fact, Docker Engine is built on top of containerd. The next release of Docker Community Edition (Docker CE) will use containerd version 1.1.

[Figure: Docker Engine built on top of containerd]

Containerd is thus used by both the kubelet and Docker Engine. This means users choosing the containerd integration will not only get new Kubernetes features, performance, and stability improvements, but also have the option of keeping Docker Engine around for other use cases.

Read more interesting details on containerd 1.1 in the official Kubernetes blog post.

Top 7 DevOps tools in 2018
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
What's new in Docker Enterprise Edition 2.0?
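The "X% lower" figures above are relative reductions of the containerd measurement against the dockershim one. A quick sketch of that arithmetic, using made-up raw usage numbers (only the formula reflects the article's comparison; the measurements themselves are hypothetical):

```python
# Relative reduction: how much lower the containerd figure is versus the
# dockershim baseline, as a percentage. The raw values below are invented
# for illustration, not the benchmark's actual measurements.

def percent_lower(dockershim, containerd):
    return (dockershim - containerd) / dockershim * 100

# hypothetical steady-state CPU usage (cores) at 105 pods
kubelet_saving = percent_lower(0.30, 0.20)
runtime_saving = percent_lower(0.50, 0.16)
print(round(kubelet_saving, 1), round(runtime_saving, 1))
```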


Codefresh’s Fixvember, a DevOps hackathon to encourage developers to contribute to open source

Sugandha Lahoti
30 Oct 2018
2 min read
Open source is getting a lot of attention these days, and to incentivize people to contribute to it, Codefresh has launched "Fixvember", a do-it-from-home DevOps hackathon. Codefresh is a Kubernetes-native CI/CD platform that allows for creating powerful pipelines based on DinD (Docker-in-Docker) as a service, and provides self-service test environments, release management, and Docker and Helm registries.

Codefresh's Fixvember is a DevOps hackathon in which Codefresh will provide DevOps professionals with a limited-edition t-shirt for contributing to open source. The event encourages developers (and not just Codefresh users) to make at least three contributions to open source projects, including building automation, adding better testing, and fixing bugs. The focus is on making engineers more successful by following DevOps best practices. Adding a Codefresh YAML to an open-source repo may also earn developers additional prizes or recognition.

Codefresh debuts Fixvember in sync with the launch of public-facing builds in the Codefresh platform. To increase the adoption of CI/CD processes, Codefresh is offering 120 builds/month, a private Docker registry, a Helm repository, and Kubernetes/Helm release management for free. It is also offering a huge free tier within Codefresh with everything needed to help teams.

Developers can participate by following these steps:

Step 1: Sign up at codefresh.io/fixvember
Step 2: Make 3 open source contributions that improve DevOps. This could be adding/updating a Codefresh pipeline in a repo, adding tests or validation to a repo, or just fixing bugs.
Step 3: Submit your results using your special email link

"I can't promise the limited-edition t-shirt will increase in value, but if it does, I bet it will be worth $1,000 by next year. The FDA prevents me from promising any health benefits, but it's possible this t-shirt will actually make you smarter," joked Dan Garfield, Chief Technology Evangelist for Codefresh. "Software engineers sometimes have a hero complex that adding cool new features is the most valuable thing. But being 'Super Fresh' means you do the dirty work that makes new features deploy successfully. Adding automated pipelines, writing tests, or even fixing bugs are the lifeblood of these projects."

Read more about Fixvember on the Codefresh blog.

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding
Is your Enterprise Measuring the Right DevOps Metrics?


OpenShift 3.9 released ahead of planned schedule

Gebin George
09 Apr 2018
2 min read
In an effort to sync their releases with Kubernetes, Red Hat skipped the 3.8 release and came up with version 3.9 of their very own container application platform, OpenShift. Red Hat seems to be moving really quickly with their OpenShift roadmap, with the 3.10 release lined up for Q2 2018 (June). The primary takeaway from the accelerated release cycle is the importance of the tool in Red Hat's DevOps expansion. With dedicated support for cutting-edge tools like Docker and Kubernetes, OpenShift looks like a strong DevOps tool which is here to stay.

The OpenShift 3.9 release has quite a few exciting middleware updates, bug fixes, and service extensions. Let's look at some of the enhancements in key areas:

Container Orchestration

- OpenShift has added soft image pruning, wherein you don't have to remove the actual image, but just need to update the etcd storage file instead.
- Added support to deploy Red Hat CloudForms on the OpenShift container engine. Added features include: OpenShift Container Platform template provisioning; offline OpenSCAP scans; alert management (you can choose Prometheus, currently in Technology Preview, and use it in CloudForms); reporting enhancements; provider updates; chargeback enhancements; and UX enhancements.
- The inclusion of CRI-O v1.9, a lightweight native Kubernetes runtime interface. The addition of CRI-O brings a minimal and secure architecture, excellent scale and performance, the ability to run any Open Container Initiative (OCI) or Docker image, and familiar operational tooling and commands.

Storage

- Expand persistent volume claims online from {product-tile} for CNS glusterFS, Cinder, and GCE PD.
- CNS deployments are automated, and a CNS uninstall playbook is added with the release of OpenShift 3.9.

Developer Experience

- Improvements in Jenkins support, which intelligently predicts pod memory before processing.
- Updated CLI plugins (binary extensions), which extend the default set of oc commands, allowing you to perform new tasks.
- The BuildConfig defaulter now allows specifying a toleration value, which is applied upon creation.

For minor bug fixes and the complete release data, refer to the OpenShift Release Notes.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Sugandha Lahoti
30 Aug 2018
3 min read
Google today announced that it is stepping back from managing the Kubernetes project's infrastructure and is granting the Cloud Native Computing Foundation (CNCF) $9M in GCP credits for a successful transition. These credits are split over a period of three years to cover infrastructure costs. Google is also handing over operational control of the Kubernetes project to the CNCF community, which will now take ownership of day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Kubernetes was first created by Google in 2014. Since then, Google has been providing Kubernetes with the cloud resources that support the project's development, including CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform. In passing the reins to the CNCF, Google's goal is to make sure "Kubernetes is ready to scale when your enterprise needs it to". The $9M grant will be dedicated to building the worldwide network and storage capacity required to serve container downloads. In addition, a large part of this grant will be dedicated to funding scalability testing, which runs 150,000 containers across 5,000 virtual machines.

"Since releasing Kubernetes in 2014, Google has remained heavily involved in the project and actively contributes to its vibrant community. We also believe that for an open source project to truly thrive, all aspects of a mature project should be maintained by the people developing it. In passing the baton of operational responsibilities to Kubernetes contributors with the stewardship of the CNCF, we look forward to seeing how the project continues to evolve and experience breakneck adoption," said Sarah Novotny, Head of Open Source Strategy for Google Cloud.

The CNCF includes a large number of companies, the likes of Alibaba Cloud, AWS, Microsoft Azure, IBM Cloud, Oracle, and SAP, all of which will profit from the work of the CNCF and the Kubernetes community. With this move, Google is perhaps also transferring the load of running the Kubernetes infrastructure to these members. As mentioned in their blog post, they look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project's operations.

To learn more, check out the CNCF announcement post and the Google Cloud Platform blog.

Kubernetes 1.11 is here!
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Kubernetes Containerd 1.1 Integration is now generally available


Amazon EKS Windows Container Support is now generally available

Savia Lobo
10 Oct 2019
2 min read
A few days ago, Amazon announced the general availability of Windows container support on Amazon Elastic Kubernetes Service (EKS). The company announced a preview of Windows container support in March this year and invited customers to try it out and provide feedback. With Windows container support, development teams can now deploy applications designed to run on Windows Server on Kubernetes, alongside Linux applications. It will also bring more consistency in system logging, performance monitoring, and code deployment pipelines.

"We are proud to be the first Cloud provider to have General Availability of Windows Containers on Kubernetes and look forward to customers unlocking the business benefits of Kubernetes for both their Windows and Linux workloads," the official post mentions.

A few considerations before deploying the worker nodes:

- Windows workloads are supported with Amazon EKS clusters running Kubernetes version 1.14 or later.
- Amazon EC2 instance types C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 are not supported for Windows workloads.
- Host networking mode is not supported for Windows workloads.
- Amazon EKS clusters must contain one or more Linux worker nodes to run core system pods that only run on Linux, such as coredns and the VPC resource controller.
- The kubelet and kube-proxy event logs are redirected to the Amazon EKS Windows Event Log and are set to a 200 MB limit.

In a demonstration, Martin Beeby, a principal evangelist for Amazon Web Services, creates a new Amazon Elastic Kubernetes Service cluster (the walkthrough works with any cluster running Kubernetes version 1.14 and above), adds some new Windows nodes, and deploys a Windows application. For the complete demonstration and to know more about Windows container support on Amazon EKS, read AWS' official blog post.

Amazon EBS snapshots exposed publicly leaking sensitive data in hundreds of thousands, security analyst reveals at DefCon 27
Amazon is being sued for recording children's voices through Alexa without consent
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available


Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes

Melisha Dsouza
05 Dec 2018
2 min read
On 3rd December, Stripe announced the open sourcing of Skycfg, a configuration builder for Kubernetes. Skycfg was developed by Stripe as an extension library for the Starlark language, adding support for constructing Protocol Buffer messages. The team states that as the implementation of Skycfg stabilizes, the public API surface will be expanded so that Skycfg can be combined with other Starlark extensions.

Benefits of Skycfg

- Skycfg ensures type safety. It uses Protobuf, which has a statically-typed data model, so the type of every field is known to Skycfg when it's building a configuration. Users are freed from the risk of accidentally assigning a string to a number, a struct to a different struct, or forgetting to quote a YAML value.
- Users can reduce duplicated typing and share logic by defining helper functions. Starlark supports importing modules from other files, which can be used to share common code between configurations. These modules can protect service owners from complex Kubernetes logic.
- Skycfg supports limited dynamic behavior through the use of context variables, which let the Go caller pass arbitrary key:value pairs in the ctx parameter.
- Skycfg simplifies the configuration of Kubernetes services, Envoy routes, Terraform resources, and other complex configuration data.

Head over to GitHub for all the code and supporting files.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
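Skycfg configs are written in Starlark and evaluated from Go, but the type-safety idea it rests on can be illustrated in plain Python: build configuration through a declared schema so that a wrong field type fails at build time, instead of surfacing later as a mis-typed YAML value. The `Container` class and its fields below are simplified stand-ins, not Skycfg's actual API.

```python
# Toy illustration of Skycfg's core benefit: configuration objects built
# from a statically-declared schema reject type mistakes at build time.
# These dataclasses are stand-ins, not Skycfg's real Protobuf-backed API.
from dataclasses import dataclass, fields

def typed(cls):
    """Wrap a dataclass __init__ to check each keyword against its annotation."""
    orig_init = cls.__init__
    def __init__(self, **kwargs):
        for f in fields(cls):
            if f.name in kwargs and not isinstance(kwargs[f.name], f.type):
                raise TypeError(f"{f.name} must be {f.type.__name__}")
        orig_init(self, **kwargs)
    cls.__init__ = __init__
    return cls

@typed
@dataclass
class Container:
    name: str
    replicas: int

ok = Container(name="web", replicas=3)       # well-typed: accepted
try:
    Container(name="web", replicas="3")      # the classic unquoted-YAML mistake
except TypeError as e:
    print("rejected:", e)
```

In Skycfg proper, this checking comes for free from Protobuf's statically-typed message definitions rather than from a hand-rolled decorator.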


JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding

Sugandha Lahoti
05 Oct 2018
2 min read
JFrog, the DevOps-based artifact management platform, announced a $165 million Series D funding round yesterday, led by Insight Venture Partners. The secured funding is expected to drive JFrog product innovation, support rapid expansion into new markets, and accelerate both organic and inorganic growth. Other new investors include Spark Capital and Geodesic Capital, as well as existing investors Battery Ventures, Sapphire Ventures, Scale Venture Partners, Dell Technologies Capital, and Vintage Investment Partners. Additional JFrog investors include Gemini VC Israel, Qumra Capital, and VMware.

JFrog transforms the way software is updated by offering an end-to-end, universal, highly available software release platform, used for storing, securing, monitoring, and distributing binaries for all technologies, including Docker, Go, Helm, Maven, npm, NuGet, PyPI, and more. According to the company, more than 5 million developers currently use JFrog Artifactory as their system of record when they build and release software. It also supports multiple deployment options, with its products available in a hybrid model, on-premise, and across major cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

The announcement comes on the heels of Microsoft's $7.5 billion purchase of coding-collaboration site GitHub earlier this year. Since its Series C funding round in 2016, the company has seen more than 500% sales growth and expanded its reach to over 4,500 customers, including more than 70% of the Fortune 100. It continues to add 100 new commercial logos per month and supports the world's open source communities with its Bintray binary hub. Bintray powers 700K community projects distributing over 5.5M unique software releases that generate over 3 billion downloads a month.

Read more about the announcement in JFrog's official press release.

OmniSci, formerly MapD, gets $55 million in series C funding
Microsoft's GitHub acquisition is good for the open source community
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

GitLab retracts its privacy invasion policy after backlash from community

Vincy Davis
25 Oct 2019
3 min read
Yesterday, GitLab retracted its earlier decision to implement user level product usage tracking on their websites after receiving negative feedback from its users. https://twitter.com/gitlab/status/1187408628531322886 Two days ago, GitLab informed its users that starting from its next yet to be released version (version 12.4), there would be an addition of Javascript snippets in GitLab.com (GitLab’s SaaS offering) and GitLab's proprietary Self-Managed packages (Starter, Premium, and Ultimate) websites. These Java snippets will be used to interact with GitLab and other third-party SaaS telemetry services. Read More: GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more GitLab.com users were specifically notified that until they accept the new service terms condition, their access to the web interface and API will be blocked. This meant that users with integration to the API will experience a brief pause of service, until the new terms are accepted by signing in to the web interface. The self-managed users, on the other hand, were apprised that they can continue to use the free software GitLab Core without any changes. The DevOps coding platform says that SaaS telemetry products are important tools to understand the analytics on user behaviour inside web-based applications. According to the company, these additional user information will help in increasing their website speed and also enrich user experience. “GitLab has a lot of features, and a lot of users, and it is time that we use telemetry to get the data we need for our product managers to improve the experience,” stated the official blog. The telemetry tools will use JavaScript snippets that will be executed in the user’s browser and will send the user information back to the telemetry service. 
Read More: GitLab faces backlash from users over performance degradation issues tied to redis latency

The company had also assured users that it would disclose all the whereabouts of the user information in its privacy policy. It also promised that the third-party telemetry service would have data protection standards equivalent to its own and that it would aim for SOC 2 compliance. Any user who does not wish to be tracked can turn on the Do Not Track (DNT) mechanism in their GitLab.com or GitLab Self-Managed web browser. The DNT mechanism will prevent the JavaScript snippet from loading. "The only downside to this is that users may also not get the benefit of in-app messaging or guides that some third-party telemetry tools have that would require the JavaScript snippet," added the official blog.

Following this announcement, GitLab received loads of negative feedback from users.

https://twitter.com/PragmaticAndy/status/1187420028653723649
https://twitter.com/Cr0ydon/status/1187380142995320834
https://twitter.com/BlindMyStare/status/1187400169303789568
https://twitter.com/TheChanceSays/status/1187095735558238208

Although GitLab has rolled back the telemetry changes for now and is reconsidering its decision, many users are warning it to drop the idea completely.

https://twitter.com/atom0s/status/1187438090991751168
https://twitter.com/ry60003333/status/1187601207046524928
https://twitter.com/tresronours/status/1187543188703186949

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
GitLab goes multicloud using Crossplane with kubectl
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
PostGIS 3.0.0 releases with raster support as a separate extension
Electron 7.0 releases in beta with Windows on Arm 64 bit, faster IPC methods, nativetheme API and more
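The opt-out described above, a browser-level Do Not Track signal that stops the telemetry snippet from loading, can be sketched server-side as follows. This is a minimal illustration, not GitLab's actual code; the function name and header handling are assumptions.

```python
# Hedged sketch: deciding whether to serve a third-party telemetry snippet,
# honoring the Do Not Track (DNT) request header. Illustrative only.
def telemetry_allowed(headers: dict) -> bool:
    """Skip the JavaScript snippet when the browser sends DNT: 1."""
    return headers.get("DNT") != "1"

print(telemetry_allowed({"DNT": "1"}))  # False -> snippet not loaded
print(telemetry_allowed({}))            # True  -> snippet loaded
```

The trade-off the blog mentions follows directly: when the snippet is skipped, any in-app messaging that the snippet would have provided is skipped too.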

Matthew Emerick
16 Oct 2020
1 min read

Why It’s Time for Site Reliability Engineering to Shift Left from DevOps.com

By adopting a multilevel approach to site reliability engineering and arming your team with the right tools, you can unleash benefits that impact the entire service-delivery continuum. In today's application-driven economy, the infrastructure supporting business-critical applications has never been more important. In response, many companies are recruiting site reliability engineering (SRE) specialists to help them […] The post Why It's Time for Site Reliability Engineering to Shift Left appeared first on DevOps.com.

Melisha Dsouza
28 Sep 2018
3 min read

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS

As promised by the Kubernetes team earlier this month, Kubernetes 1.12 is now released! With a focus on internal improvements, the release includes two highly anticipated features: general availability of Kubelet TLS Bootstrap and support for Azure Virtual Machine Scale Sets (VMSS). This promises better security, availability, resiliency, and ease of use for faster delivery of production-grade applications. Let's dive into the features of Kubernetes 1.12.

#1 General Availability of Kubelet TLS Bootstrap

The team has made Kubelet TLS Bootstrap generally available. This feature significantly streamlines Kubernetes' ability to add and remove nodes from the cluster. Cluster operators are responsible for ensuring that the TLS assets they manage remain up to date and can be rotated in the face of security events. Kubelet server certificate bootstrap and rotation (beta) introduces a process for generating a key locally and then issuing a Certificate Signing Request to the cluster API server to get an associated certificate signed by the cluster's root certificate authority. As certificates approach expiration, the same mechanism is used to request an updated certificate.

#2 Stable Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler

Azure Virtual Machine Scale Sets (VMSS) allow users to create and manage a homogeneous VM pool. This pool can automatically grow or shrink based on demand or a set schedule. Users can easily manage, scale, and load balance multiple VMs to provide the high availability and application resiliency that is ideal for large-scale applications running as Kubernetes workloads. The stable support allows Kubernetes to manage the scaling of containerized applications with Azure VMSS. Users will be able to integrate their applications with cluster-autoscaler to automatically adjust the size of their Kubernetes clusters.
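The rotation mechanism described in #1 boils down to a simple decision: when a certificate's remaining lifetime drops below some fraction, issue a new CSR. The sketch below is illustrative only; the threshold value and function name are assumptions, and the real logic lives inside the kubelet's certificate manager.

```python
from datetime import datetime, timedelta, timezone

# Illustrative: renew when less than 20% of the certificate's lifetime
# remains. The actual kubelet threshold differs; this is a sketch.
ROTATION_THRESHOLD = 0.2

def should_rotate(not_before: datetime, not_after: datetime,
                  now: datetime) -> bool:
    """Return True when it is time to issue a new CSR to the API server."""
    lifetime = not_after - not_before
    remaining = not_after - now
    return remaining <= lifetime * ROTATION_THRESHOLD

issued = datetime(2018, 9, 1, tzinfo=timezone.utc)
expires = issued + timedelta(days=365)

print(should_rotate(issued, expires, issued + timedelta(days=250)))  # False
print(should_rotate(issued, expires, issued + timedelta(days=340)))  # True
```

Because the updated certificate is requested well before expiry, a node never has to rejoin the cluster from scratch just to refresh its serving certificate.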
#3 Other additional Feature Updates

Encryption at rest via KMS is now in beta. It adds multiple encryption providers, including Google Cloud KMS, Azure Key Vault, AWS KMS, and HashiCorp Vault. These providers encrypt data as it is stored to etcd.
RuntimeClass is a new cluster-scoped resource that surfaces container runtime properties to the control plane.
Topology-aware dynamic provisioning is now in beta. Storage resources can now understand where they live.
Configurable pod process namespace sharing enables users to configure containers within a pod to share a common PID namespace by setting an option in the PodSpec.
Vertical scaling of pods helps vary the resource limits on a pod over its lifetime.
Snapshot/restore functionality for Kubernetes and CSI provides a standardized API design and adds PV snapshot/restore support for CSI volume drivers.

To explore these features in depth, the team will host a 5 Days of Kubernetes series next week. Users will be given a walkthrough of the following features:
Day 1 - Kubelet TLS Bootstrap
Day 2 - Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler
Day 3 - Snapshots Functionality
Day 4 - RuntimeClass
Day 5 - Topology Resources

Additionally, users can join members of the release team on November 6th at 10 am PDT in a webinar covering the major features of this release. You can check out the release on GitHub. If you would like to know more about this release, head over to the official Kubernetes blog.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
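Of the #3 features above, configurable pod process namespace sharing is a single PodSpec field. The manifest below, built as a plain Python dict, shows where that field sits; the pod name, images, and commands are illustrative assumptions, while `shareProcessNamespace` is the field the release describes.

```python
# Sketch of a pod manifest using the PodSpec option described above.
# Only shareProcessNamespace comes from the release notes; the rest
# of the values are made up for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "shared-pid-demo"},
    "spec": {
        "shareProcessNamespace": True,  # containers see one common PID namespace
        "containers": [
            {"name": "app", "image": "nginx"},
            {"name": "debug", "image": "busybox", "command": ["sleep", "3600"]},
        ],
    },
}

print(pod["spec"]["shareProcessNamespace"])  # True
```

With the flag set, a sidecar such as the `debug` container above can see and signal the processes of its sibling containers, which is the main use case for the feature.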
Savia Lobo
13 Aug 2019
2 min read

MacStadium announces ‘Orka’ (Orchestration with Kubernetes on Apple)

Today, MacStadium, an enterprise-class cloud solution for Apple Mac infrastructure, announced 'Orka' (Orchestration with Kubernetes on Apple). Orka is a new virtualization layer for Mac build infrastructure based on Docker and Kubernetes technology. It offers a solution for orchestrating macOS in a cloud environment using Kubernetes on genuine Apple Mac hardware. With Orka, users can apply native Kubernetes commands to macOS virtual machines (VMs) on genuine Apple hardware.

"While Kubernetes and Docker are not new to full-stack developers, a solution like this has not existed in the Apple ecosystem before," MacStadium wrote in an email statement to us.

"The reality is that most enterprises need to develop applications for Apple platforms, but these enterprises prefer to use nimble, software-defined build environments," said Greg McGraw, Chief Executive Officer, MacStadium. "With Orka, MacStadium's flagship orchestration platform, developers and DevOps teams now have access to a software-defined Mac cloud experience that treats infrastructure-as-code, similar to what they are accustomed to using everywhere else."

Developers creating apps for Mac or iOS must build on genuine Apple hardware. Until now, however, popular orchestration and container technologies like Kubernetes and Docker have been unable to leverage Mac operating systems. With Orka, Apple OS development teams can use container technology features in a Mac cloud, the same way they build on other cloud platforms like AWS, Azure, or GCP. As part of its initial release, Orka will ship with a plugin for Jenkins, an open-source automation tool that enables developers to build, test, and deploy their software using continuous integration techniques.
MacStadium will also present a session at DevOps World | Jenkins World in San Francisco (August 12-15), demonstrating to users how Orka integrates with Jenkins build pipelines and how it leverages the capability and power of Docker/Kubernetes in a Mac development environment. To know more about Orka in detail, visit MacStadium's official website.

CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
Implementing Horizontal Pod Autoscaling in Kubernetes [Tutorial]

Natasha Mathur
08 Mar 2019
2 min read

LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more

The LXD team released version 3.11 of LXD, its open-source container management extension for Linux Containers (LXC), earlier this week. LXD 3.11 brings new features, minor improvements, and bug fixes.

LXD, the 'Linux Daemon' system container manager, provides users with an experience similar to virtual machines. It is written in Go and builds on the existing LXC features to build and manage Linux containers.

New Features in LXD 3.11

Configurable snapshot expiry at creation time: LXD 3.11 allows users to set an expiry at snapshot creation time. Earlier, it was a hassle to manually create snapshots and then edit them to modify their expiry. At the API level, setting the expiry timestamp to null makes a snapshot persistent despite any configured auto-expiry.

Progress reporting for publish operations: Progress information is now displayed to the user in LXD 3.11 when running lxc publish against a container or snapshot, similar to image transfers and container migrations.

Improvements

Minor improvements have been made to how the Candid authentication feature is handled by the CLI in LXD 3.11.

Per-remote authentication cookies: Every remote now has its own "cookie jar", and LXD's behavior is now always identical when adding remotes. In prior releases, a shared "cookie jar" was used for all remotes, which could lead to inconsistent behavior.

Candid preferred over TLS for new remotes: In LXD 3.11, when using lxc remote add to add a new remote, Candid will be used instead of TLS authentication if the remote supports it. The authentication type can always be overridden using --auth-type.

Remote list can now show the Candid domain: The remote list can now indicate which Candid domain is used in LXD 3.11.

Bug Fixes

A goroutine leak has been fixed in ExecContainer.
The "client: fix goroutine leak in ExecContainer" change has been reverted.
rest-api.md formatting has been updated.
Translations from Weblate have been updated.
Error handling in execIfAliases has been improved.
Duplicate scheduled snapshots have been fixed.
Failing backup import has been fixed.
The test case covering the image sync scenario for the joined node has been updated.

For a complete list of changes, check out the official LXD 3.11 release notes.

LXD 3.8 released with automated container snapshots, ZFS compression support and more!
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
An update on Bcachefs- the "next generation Linux filesystem"
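The snapshot-expiry behaviour described in the release, an expiry set at creation time, with a null timestamp meaning a persistent snapshot, can be sketched as follows. The function name and TTL handling are illustrative assumptions, not LXD's actual code.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hedged sketch of the API-level behaviour: an expiry is computed at
# snapshot creation, and a null (None) expiry makes the snapshot
# persistent regardless of any configured auto-expiry.
def snapshot_expiry(created_at: datetime,
                    ttl: Optional[timedelta]) -> Optional[datetime]:
    """Return the expiry timestamp, or None for a persistent snapshot."""
    return None if ttl is None else created_at + ttl

now = datetime(2019, 3, 8, tzinfo=timezone.utc)
print(snapshot_expiry(now, timedelta(days=7)))  # expires one week after creation
print(snapshot_expiry(now, None))               # None -> persistent snapshot
```

This mirrors the convenience the release notes describe: the expiry is decided once at creation, instead of creating a snapshot and then editing it afterwards.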