
Tech News - Cloud & Networking

376 Articles

Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!

Savia Lobo
30 May 2018
2 min read
Kublr, a comprehensive Kubernetes platform for the enterprise, announced the release of Kublr 1.9.2 at DevOpsCon Berlin. Kublr provides a Kubernetes platform that makes it easy for operations teams to deploy, run, and manage containerized applications, while allowing developers to use the development tools and environment of their choice. Kublr 1.9.2 allows developers to deploy the complete Kublr platform and Kubernetes clusters in isolated environments without requiring Internet access. This is an advantage for organizations with sensitive data that must remain secure. While secured and isolated, such deployments still benefit from features such as auto-scaling, backup and disaster recovery, and centralized monitoring and log collection.

Slava Koltovich, CEO of Kublr, stated, "We've learned from several financial institutions that there is a vital need for cloud-like capabilities in completely isolated environments. It became increasingly clear that, to be truly enterprise grade, Kublr needed to work in even the most secure environments. We are proud to now offer that capability out-of-the-box."

The Kublr 1.9.2 changelog includes the following key updates:

- The ability to deploy Kublr without Internet access
- Docker EE support for RHEL
- CentOS 7.4 support
- Deletion of on-premises clusters
- Additional kubelet monitoring

The changelog also includes fixes for some known issues. Kublr further announced that it is now Certified Kubernetes for Kubernetes v1.10. To learn more about Kublr 1.9.2, check the release notes.

Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
Kubernetes Containerd 1.1 Integration is now generally available
Introducing OpenStack Foundation's Kata Containers 1.0


Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud

Savia Lobo
30 May 2018
3 min read
VMware recently released VMware Integrated OpenStack (VIO) 5.0, the latest version of its Infrastructure-as-a-Service (IaaS) cloud. This release, announced at the OpenStack Summit in Vancouver, Canada, is fully based on the new OpenStack Queens release. VIO provides customers with a fast and efficient solution to deploy and operate OpenStack clouds that are highly optimized for VMware's NFV and software-defined data center (SDDC) infrastructure, with advanced automation and onboarding. Existing VIO users can use OpenStack's built-in upgrade capability to upgrade seamlessly to VIO 5.0.

VIO 5.0 will be available in both Carrier and Data Center editions. The Carrier Edition addresses specific requirements of communication service providers (CSPs). Its improvements include:

- Accelerated data plane performance: Support for the NSX Managed Virtual Distributed Switch in Enhanced Data Path mode and DPDK gives customers significant improvements in application response time, reduced network latencies, and breakthrough network performance through optimized data plane techniques in VMware vSphere.
- Scalable multi-tenant resources: This provides resource guarantees and resource isolation to each tenant. It also supports elastic resource scaling, allowing CSPs to add new resources dynamically across different vSphere clusters to adapt to traffic conditions or to transition from pilot phase to production in place.
- OpenStack for 5G and edge computing: Customers get full control over micro data centers and apps at the edge via automated API-driven orchestration and lifecycle management. The solution targets enterprise use cases such as utilities, oil and gas drilling platforms, point-of-sale applications, security cameras, and manufacturing plants.

Telco-oriented use cases such as Multi-Access Edge Computing (MEC), latency-sensitive VNF deployments, and operational support systems (OSS) will also be addressed. VIO 5.0 further enables CSP and enterprise customers to utilize Queens advancements to support mission-critical workloads across container and cloud-native application environments. New features include:

- High scalability: One can run up to 500 hosts and 15,000 VMs in a single region with VIO 5.0. It also introduces support for multiple regions at once, with monitoring and metrics at scale.
- High availability for mission-critical workloads: Enhancements to the Cinder volume driver make it possible to create snapshots, clones, and backups of attached volumes, dramatically improving VM and application uptime.
- Unified virtualized environment: The ability to deploy and run both VM and container workloads on a single virtualized infrastructure manager (VIM) with a single network fabric based on VMware NSX-T Data Center. This architecture enables customers to seamlessly deploy hybrid workloads where some components run in containers while others run in VMs.
- Advanced security: Consolidated and simplified user and role management based on enhancements to Keystone, including the use of application credentials and system role assignment. VIO 5.0 takes security to new levels with encryption of internal API traffic, Keystone-to-Keystone federation, and support for major identity management providers, including VMware Identity Manager.
- Optimized and standardized DNS services: Scalable, on-demand DNS as a service via Designate. Customers can auto-register any VM or Virtual Network Function (VNF) to a corporate-approved DNS server instead of manually registering newly provisioned hosts.

To know more about the other features in detail, read VMware's official blog.

How to create and configure an Azure Virtual Machine
Introducing OpenStack Foundation's Kata Containers 1.0
SDLC puts process at the center of software engineering


Epicor partners with Microsoft Azure to adopt Cloud ERP

Savia Lobo
29 May 2018
2 min read
Epicor Software Corporation recently announced a partnership with Microsoft to accelerate cloud ERP adoption, with the aim of delivering Epicor's enterprise solutions on the Microsoft Azure platform. The company plans to deploy its Epicor Prophet 21 enterprise resource planning (ERP) suite on Microsoft Azure, enabling faster growth and innovation for customers looking to digitally transform their businesses with the reliable, secure, and scalable features of Azure. With the Epicor and Microsoft collaboration, customers can now access the power of Epicor ERP and Prophet 21 running on Microsoft Azure.

With Microsoft as a partner, Epicor:

- Leverages a range of technologies such as the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML) to deliver ready-to-use, accurate solutions for mid-market manufacturers and distributors.
- Plans to explore Microsoft technologies for advanced search, speech-to-text, and other use cases to deliver modern human/machine interfaces that improve productivity for customers.

Steve Murphy, CEO of Epicor, said, "Microsoft's focus on the 'Intelligent Cloud' and 'Intelligent Edge' complements our customer-centric focus." He further stated that after looking at several cloud options, Epicor felt Microsoft Azure offers the best foundation for building and deploying enterprise business applications that enable customers' businesses to adapt and grow. Since most prospects these days ask about cloud ERP, Epicor says that by deploying such a model it will be ready to offer customers the ability to move to the cloud with the confidence that Microsoft Azure provides. Read more about this in detail on Epicor's official blog.

Rackspace now supports Kubernetes-as-a-Service
How to secure an Azure Virtual Network
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018


Platform 13: OpenStack Queens, the first fully containerized version released

Gebin George
28 May 2018
2 min read
Red Hat released version 13 of its OpenStack cloud platform, based on the OpenStack Queens release. OpenStack follows a rapid six-month release cycle, and this release focuses on using open source OpenStack to bridge the gap between private and public cloud. Red Hat OpenStack Platform (RHOP) 13 will be generally available in June through the Red Hat customer portal and as part of both the Red Hat infrastructure and cloud suites.

Red Hat's general manager of OpenStack said: "RHOP 13 is the first complete containerized OpenStack. Our customers have been asking us to make it easy to run Red Hat OpenShift Container Platform (RHOCP), Red Hat's Kubernetes offering. We want to make this as seamless as possible."

RHOP 13 comes with cross-portfolio support to accelerate Red Hat's hybrid cloud offering. This includes:

- Red Hat CloudForms, which helps manage day-to-day tasks in hybrid infrastructure.
- Red Hat Ceph Storage, a scalable storage solution that enables provisioning of hundreds of virtual machines from a single snapshot to build a massive storage solution.
- Red Hat OpenShift Container Platform, which enables running cloud-native workloads with ease. The OpenShift architecture supports running both Linux and Kubernetes containers on a single workload.

RHOP 13 also comes with a varied set of feature enhancements and upgrades:

- Containerization capabilities: RHOP 13 builds upon the containerization capabilities and services introduced with RHOP 12, enabling containerization of all services, including networking and storage.
- Security capabilities: With the inclusion of OpenStack Barbican, RHOP 13 adds tenant-level lifecycle management for sensitive data such as passwords, security certificates, and keys. With the features introduced in Barbican, encryption-based services are available for extensive data protection.

For the official release notes, please refer to the official OpenStack blog.

Introducing OpenStack Foundation's Kata Containers 1.0
About the Certified OpenStack Administrator Exam
OpenStack Networking in a Nutshell


Kubernetes Containerd 1.1 Integration is now generally available

Savia Lobo
25 May 2018
3 min read
Just six months after releasing the alpha version of the Kubernetes containerd integration, the community has declared that the upgraded containerd 1.1 is now generally available. Containerd 1.1 can be used as the container runtime for production Kubernetes clusters. It works well with Kubernetes 1.10 and supports all Kubernetes features. Let's look at the key upgrades in the new Kubernetes containerd 1.1.

Architecture upgrade

[Figure: containerd 1.1 architecture with the CRI plugin]

In version 1.1, the cri-containerd daemon has been changed to a containerd CRI plugin. The CRI plugin is built into containerd 1.1, enabled by default, and interacts with containerd through direct function calls. Kubernetes can now use containerd directly, as the new architecture makes the integration more stable and efficient and eliminates another gRPC hop in the stack. As a result, the cri-containerd daemon is no longer needed.

Performance upgrades

Performance optimization has been a major focus of containerd 1.1, specifically pod startup latency and daemon resource usage.

Pod startup latency: The containerd 1.1 integration has lower pod startup latency than the Docker 18.03 CE integration with dockershim, based on results from the '105 pod batch startup benchmark' (lower is better).

[Figure: pod startup latency]

CPU and memory usage: The containerd 1.1 integration consumes less CPU and memory overall than the Docker 18.03 CE integration with dockershim at steady state with 105 pods. The results differ with the number of pods running on the node; 105 is the current default for the maximum number of user pods per node.

[Figures: CPU usage and memory usage]

Compared with the Docker 18.03 CE integration with dockershim, the containerd 1.1 integration has 30.89% lower kubelet CPU usage, 68.13% lower container runtime CPU usage, 11.30% lower kubelet resident set size (RSS) memory usage, and 12.78% lower container runtime RSS memory usage.

What happens to Docker Engine?

Switching to containerd does not mean you can no longer use Docker Engine; in fact, Docker Engine is built on top of containerd, and the next release of Docker Community Edition (Docker CE) will use containerd 1.1.

[Figure: Docker Engine built on top of containerd]

Containerd is used by both the kubelet and Docker Engine. This means users choosing the containerd integration not only get new Kubernetes features and performance and stability improvements, but can also keep Docker Engine around for other use cases.

Read more interesting details about containerd 1.1 on the official Kubernetes blog post.

Top 7 DevOps tools in 2018
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
What's new in Docker Enterprise Edition 2.0?
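The percentage reductions quoted above are simple relative comparisons between two measurements. A minimal sketch of that arithmetic (the baseline and candidate readings below are made-up illustrative values, not the benchmark's actual numbers):

```python
def pct_reduction(baseline, candidate):
    """Percentage by which `candidate` is lower than `baseline`."""
    return (baseline - candidate) / baseline * 100.0

# Hypothetical readings: a dockershim-based kubelet at 100 millicores
# versus a containerd CRI plugin kubelet at 69.11 millicores.
print(round(pct_reduction(100.0, 69.11), 2))  # prints 30.89
```

The same formula applied to the CPU and RSS measurements in the benchmark yields the figures quoted in the article.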


Is cloud mining profitable?

Richard Gall
24 May 2018
5 min read
Cloud mining has become one of the biggest trends in Bitcoin and cryptocurrency, for a simple reason: it makes mining Bitcoin incredibly easy. By using the cloud, rather than hardware, to mine Bitcoin, you can avoid the stress and inconvenience of managing hardware. Instead of using the processing power of your own machines, you share the processing power of the cloud (or, more specifically, a remote data center). In theory, cloud mining should be much more profitable than mining with your own hardware. However, it's easy to be caught out. At best, some schemes are useless; at worst, they could be seen as a bit of a pyramid scheme. For this reason, it's essential you do your homework. Still, although there are some risks associated with cloud mining, it does have benefits. Arguably it makes Bitcoin, and cryptocurrency in general, more accessible to ordinary people. Provided people get to know the area, what works and what definitely doesn't, it could be a positive opportunity for many people.

How to start cloud mining

Let's first take a look at the different methods of cloud mining. If you're going to do it properly, it's worth taking some time to consider your options. At a top level there are three types of cloud mining:

- Renting out your hashing power: This is the most common form of cloud mining. To do this, you simply 'rent out' a certain amount of your computer's hashing power. In case you don't know, hashing power is essentially your hardware's processing power; it's what allows your computer to run hashing algorithms.
- Hosted mining: As the name suggests, this is where you use an external machine to mine Bitcoin. To do this, you'll have to sign up with a cloud mining provider. If you do, be clear on their terms and conditions, and take care when calculating profitability.
- Virtual hosted mining: A hybrid approach, in which you use a personal virtual server and install the required mining software yourself. This approach can be a little more fun, especially if you want to build your own Bitcoin mining setup, but of course it poses challenges too.

Depending on what you want to achieve, any of these options may be right for you.

Which cloud mining provider should you choose?

As you'd expect from a rapidly growing trend, there's a huge number of cloud mining providers out there. The downside is that plenty of them are dubious and aren't going to be profitable for you. For this reason, it's best to do your research and read what others have to say. One of the most popular cloud mining providers is Hashflare. With Hashflare, you can mine a number of different cryptocurrencies, including Bitcoin, Ethereum, and Litecoin. You can also select your 'mining pool', which is something many providers won't let you do. Controlling the profitability of cloud mining can be difficult, so having control over your mining pool could be important. A mining pool is a bit like a hedge fund: a group of people pool their processing resources, and the payout is split according to the amount of work each contributed toward creating a 'block', which is essentially a record or ledger of transactions.

Hashflare isn't the only cloud mining solution available. Genesis Mining is another very high-profile provider. It's incredibly accessible: you can begin a Bitcoin mining contract for just $15.99, and of course the more you invest, the better the deal you'll get. For a detailed exploration and comparison of cloud mining solutions, this TechRadar article is very useful. Take a look before you make any decisions!

How can I ensure cloud mining is profitable?

It's impossible to ensure profitability. Remember: cloud mining providers are out to make a profit, and although you might well make one too, it's not necessarily in their interests to be paying money out to you. Calculating cloud mining profitability can be immensely complex. To do it properly you need to be clear on all the elements that will impact profitability, including:

- The cryptocurrency you are mining
- How much mining will cost per unit of hashing power
- The growth rate of block difficulty
- How the network hashrate might increase over the length of your mining contract

There are lots of mining calculators you can use to estimate how profitable cloud mining is likely to be. This article is particularly good at outlining how to calculate cloud mining profitability. Its conclusion is an interesting take worth considering before you start: is "it profitable because the underlying cryptocurrency went up, or because the mining itself was profitable?" As the writer points out, if it is the cryptocurrency's value, then you might just be better off buying the cryptocurrency.

Read next:
A brief history of Blockchain
Write your first Blockchain: Learning Solidity Programming in 15 minutes
"The Blockchain to Fix All Blockchains" – Overledger, the meta blockchain, will connect all existing blockchains
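The profitability factors listed in the article can be combined into a rough back-of-the-envelope projection. Below is a minimal sketch: every input value is hypothetical, and it uses network hashrate growth as a crude stand-in for rising block difficulty; real calculators model difficulty retargeting, pool fees, and payout schemes far more precisely.

```python
def projected_profit_usd(hashrate_ths, network_ths, block_reward_btc,
                         blocks_per_day, btc_price_usd, daily_fee_usd,
                         contract_days, daily_network_growth):
    """Rough cumulative profit of a fixed-hashrate cloud mining contract.

    Your expected share of each day's block rewards is your hashrate
    divided by the network hashrate; as the network grows (a proxy for
    rising block difficulty), that share shrinks day by day.
    """
    total = 0.0
    network = network_ths
    for _ in range(contract_days):
        share = hashrate_ths / network
        revenue = share * blocks_per_day * block_reward_btc * btc_price_usd
        total += revenue - daily_fee_usd
        network *= 1.0 + daily_network_growth
    return total

# Illustrative inputs: 10 TH/s rented on a 10,000,000 TH/s network,
# 144 blocks/day at 12.5 BTC each, BTC at $8,000, a $5/day fee,
# over a 30-day contract with zero network growth:
# share 1e-6 of $14.4M daily rewards = $14.40/day, minus $5 fee,
# so roughly $9.40/day, or about $282 over the contract.
print(round(projected_profit_usd(10, 10_000_000, 12.5, 144, 8000, 5, 30, 0.0), 2))
```

Re-running the same contract with a non-zero `daily_network_growth` shows why the "growth rate of block difficulty" factor matters: your share, and therefore your profit, shrinks every day of the contract.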

Introducing OpenStack Foundation’s Kata Containers 1.0

Savia Lobo
24 May 2018
2 min read
The OpenStack Foundation has launched version 1.0 of its first non-OpenStack project, Kata Containers. Kata Containers combines two leading open source virtualized container projects: Intel's Clear Containers and Hyper's runV technology. It gives developers a lighter, faster, more agile container management technology across stacks and platforms, providing a container-like experience with added security and isolation features.

Kata Containers delivers an OCI-compatible runtime with seamless integration for Docker and Kubernetes. It executes a lightweight VM for every container, so each container gets hardware isolation similar to what you would expect from a virtual machine. Although hosted by the OpenStack Foundation, Kata Containers is intended to be platform and architecture agnostic.

Kata Containers 1.0 components include:

- Kata Containers runtime 1.0.0 (in the /runtime repo)
- Kata Containers proxy 1.0.0 (in the /proxy repo)
- Kata Containers shim 1.0.0 (in the /shim repo)
- Kata Containers agent 1.0.0 (in the /agent repo)
- KSM throttler 1.0.0 (in the /ksm-throttler repo)
- Guest operating system building scripts (in the /osbuilder repo)

Intel, RedHat, Canonical, and cloud vendors such as Google, Huawei, NetApp, and others have offered to financially support the Kata Containers project. Read more about Kata Containers on the official website and the GitHub repo.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
What to expect from vSphere 6.7
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available


Verizon chooses Amazon Web Services(AWS) as its preferred cloud provider

Savia Lobo
18 May 2018
2 min read
Verizon Communications Inc. recently announced that it is migrating about 1,000 of its business-critical applications and database back-end systems to the popular cloud provider Amazon Web Services (AWS). Verizon had bought Terremark, a cloud and service provider, in 2011 as part of its public and private cloud strategy, which included building its own cloud offering infrastructure-as-a-service to its customers. AWS stayed ahead of the competition by offering added services to its customers, while Verizon could not stay in the race and was usurped by Microsoft and Google. As a result, in 2016 Verizon closed down its public cloud offering, sold its cloud and managed hosting service assets to IBM, and sold a number of data centres to Equinix.

Verizon first started working with AWS in 2015 and already has many business and consumer applications running in the cloud. The current migrations to AWS are part of a corporate-wide initiative to increase agility and reduce costs through the use of cloud computing. Some benefits of the migration include:

- Access to a more comprehensive set of cloud capabilities, ensuring Verizon's developers are able to invent on behalf of its customers.
- AWS-specific training facilities Verizon has built, where its employees can quickly get up to speed on AWS technologies and learn how to innovate with speed and at scale.
- The ability to quickly deliver the best, most efficient customer experiences.

Verizon also aims to make the public cloud a core part of its digital transformation, upgrading its database management approach by replacing proprietary solutions with Amazon Aurora. To know more about the AWS and Verizon partnership, read the AWS blog post.

Linux Foundation launches the Acumos AI Project to make AI accessible
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail


Rackspace now supports Kubernetes-as-a-Service

Vijin Boricha
18 May 2018
2 min read
Rackspace recently announced the launch of its Kubernetes-as-a-Service offering, which will roll out to its private cloud clients worldwide this month; the company says the service will come to public cloud later this year. Rackspace, a managed-cloud computing company, revealed that it will fully operate and manage the Kubernetes deployment, including the infrastructure, and claimed that users can save up to 50% compared with other open source system deployments.

If you are looking to automate the deployment, scaling, and management of containerized applications, Kubernetes is your open source option: it is the most efficient way of running online software across a vast range of machines. Kubernetes is becoming a leading player in cloud container orchestration, and bigger players like Microsoft Azure and Cisco have started adopting its services. But not all businesses have the internal resources and expertise needed to effectively manage a Kubernetes environment on their own. By delivering a fully managed Kubernetes-as-a-Service, Rackspace allows organizations to focus more on building and running their applications. With the new service, Rackspace delivers an enhanced level of ongoing operations management and support for the entire technology stack, ranging from the hardware to the Infrastructure as a Service (IaaS) layer to Kubernetes itself.

Rackspace claims the key benefits of the offering include:

- Support for operations such as updates, upgrades, patching, and security hardening.
- The ability to use a single platform to deploy Kubernetes clusters across private and public clouds.
- Access to an entire team of specialists 24*7*365.
- Rackspace experts who fully validate and inspect each component of the service, provide static container scanning, and enable customers to restrict user access to the environment.

This is just an overview of Rackspace's extended support for Kubernetes-as-a-Service. You can learn more about the new offering from the Rackspace blog.

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
How to secure a private cloud using IAM
Google's kaniko – An open-source build tool for Docker Images in Kubernetes, without a root access


What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Savia Lobo
17 May 2018
6 min read
Earlier this month, 4000+ developers attended the Cloud Native Computing Foundation’s flagship event, KubeCon + CloudNativeCon 2018 conference, held at Copenhagen, Europe from May 2nd to 4th. This conference focussed on a series of announcements on microservices, containers, and other open source tools for building applications for the web. Top vendors including Google, RedHat, Oracle, and many more announced a myriad of releases and improvements with respect to Kubernetes. Read our article on Big vendor announcements at KubeCon + CloudNativeCon Europe. Let’s brush through the top 7 vendors and their release highlights in this conference. Google released Stackdriver Kubernetes Monitoring and open sourced gVisor Released in beta, the Stackdriver Kubernetes Monitoring enables both developers and operators to use Kubernetes in a comprehensive fashion and also simplifies operations for them. Features of Stackdriver Kubernetes Monitoring include: Scalable Comprehensive Observability: Stackdriver Kubernetes Monitoring sums up logs, events and metrics from the Kubernetes environment to understand the behaviour of one’s application. These are rich, unified set of signals which are used by developers to build higher quality applications faster. It also helps operators speed root cause analysis and reduce mean time to resolution (MTTR). Seamless integration with Prometheus: The Stackdriver Kubernetes Monitoring integrates seamlessly with Prometheus--a leading Kubernetes open source monitoring approach--without any change. Unified view: Stackdriver Kubernetes Monitoring provides a unified view into signals from infrastructure, applications and services across multiple Kubernetes clusters. With this, developers, operators and security analysts, can effectively manage Kubernetes workloads. This allows them to easily observe system information from various sources, in flexible ways. 
Some instances include, inspecting a single container, or scaling up to explore massive, multi-cluster deployments. Get started on-cloud or on-premise easily: Stackdriver Kubernetes Monitoring is pre-integrated with Google Kubernetes Engine. Thus, one can immediately use it within their Kubernetes Engine workloads. It is easily integrated with Kubernetes deployments on other clouds or on-premise infrastructure. Hence, one can access a unified collection of logs, events, and metrics for their application, regardless of where the containers are deployed. Also, Google has open-sourced gVisor, a sandboxed container runtime. gVisor, which is lighter than a Virtual machine, enables secure isolation for containers. It also integrates with Docker and Kubernetes and thus makes it simple to run sandboxed containers in production environments. gVisor is written in Go to avoid security pitfalls that can plague kernels. RedHat shared an open source toolkit called Operator Framework RedHat in collaboration with Kubernetes open source community has shared the Operator Framework to make it easy to build Kubernetes applications. The Operator Framework is an open source toolkit designed in order to manage Kubernetes native applications named as Operators in an effective, automated and scalable manner. The Operator Framework comprises of an: Operator SDK that helps developers in building Operators based on their expertise. This does not require any knowledge of the complexities of Kubernetes API. Operator Lifecycle Manager which supervises the lifecycle of all the operators running across a kubernetes cluster. It also keep a check on the services associated with the operators. Operator Metering, which is soon to be added, allows creating a usage report for Operators providing specialized services. 
Oracle added new open serverless support and key Kubernetes features to Oracle Container Engine According to a report, security, storage and networking are the major challenges that companies face while working with containers. In order to address these challenges, the Oracle Container Engine have proposed some solutions, which include getting new governance, compliance and auditing features such as Identity and Access Management, role-based access control, support for the Payment Card Industry Data Security Standard, and cluster management auditing capabilities. Scalability features: Oracle is adding support for small and virtualized environments, predictable IOPS, and the ability to run Kubernetes on NVIDIA Tesla GPUs. New networking features: These include load balancing and virtual cloud network. Storage features: The company has added the OCI volume provisioner and flexvolume driver. Additionally, Oracle Container Engine features support for Helm and Tiller, and the ability to run existing apps with Kubernetes. Kublr announced that its version 1.9 provides easy configuration of Kubernetes clusters for enterprise users Kublr unleashed an advanced configuration capability in its version 1.9. This feature is designed to provide customers with flexibility that enables Kubernetes clusters to meet specific use cases. The use cases include: GPU-enabled nodes for Data Science applications Hybrid clusters spanning data centers and clouds, Custom Kubernetes tuning parameters, and Meeting other advanced requirements. New features in the Kublr 1.9 include: Kubernetes 1.9.6 and new Dashboard Improved backups in AWS with full cluster restoration An introduction to Centralized monitoring, IAM, Custom cluster specification Read more about Kublr 1.9 on Kublr blog. Kubernetes announced the availability of Kubeflow 0.1 Kubernetes brought forward a power-packed package for tooling, known as Kubeflow 0.1. 
Kubeflow 0.1 provides a basic set of packages for developing, training, and deploying machine learning models. The package:

Supports Argo for managing ML workflows.
Offers JupyterHub to create interactive Jupyter notebooks for collaborative and interactive model training.
Provides a number of TensorFlow tools, including a Training Controller for native distributed training. The Training Controller can be configured for CPUs or GPUs and can be adjusted to fit the size of a cluster with a single click.

Additional features, such as a simplified setup via a bootstrap container, improved accelerator integration, and support for more ML frameworks like Spark ML, XGBoost, and sklearn, are planned for the 0.2 version of Kubeflow.

CNCF (Cloud Native Computing Foundation) announced a new Certified Kubernetes Application Developer program

The Cloud Native Computing Foundation has launched the Certified Kubernetes Application Developer (CKAD) exam and a corresponding Kubernetes for Developers course. The CKAD exam certifies that users can design, build, configure, and expose cloud native applications on top of Kubernetes. A Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes. Read more about the program on the Cloud Native Computing Foundation blog.

DigitalOcean launched managed Kubernetes service

The DigitalOcean cloud computing platform launched DigitalOcean Kubernetes, a simple and cost-effective solution for deploying, orchestrating, and managing container workloads on the cloud. With the DigitalOcean Kubernetes service, developers can save time and deploy their container workloads without having to configure things from scratch. The organization has also opened early access to the Kubernetes service. Read more on the DigitalOcean blog.
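Circling back to Kubeflow: the TensorFlow Training Controller mentioned above watched a custom TFJob resource submitted to the cluster. A rough, hedged sketch of what such a manifest might have looked like around the 0.1-era alpha API (the API version, field names, and container image here are assumptions that varied between releases):

```yaml
# Hypothetical TFJob manifest sketch; schema details differed by Kubeflow version
apiVersion: kubeflow.org/v1alpha1
kind: TFJob
metadata:
  name: example-training-job
spec:
  replicaSpecs:
    - replicas: 2                # number of worker replicas (illustrative)
      tfReplicaType: WORKER
      template:
        spec:
          containers:
            - name: tensorflow
              image: gcr.io/example/my-tf-model:latest   # hypothetical image
          restartPolicy: OnFailure
```

The Training Controller would watch for resources of this kind and create the corresponding worker pods, scaling the job by changing the replica count.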
Apart from these seven vendors, many others, such as Datadog, Humio, and Weaveworks, have also announced features, frameworks, and services based on Kubernetes, serverless, and cloud computing. These are not the only announcements; see the KubeCon + CloudNativeCon 2018 website for the other announcements rolled out at the event.

Top 7 DevOps tools in 2018
Apache Spark 2.3 now has native Kubernetes support!
Polycloud: a better alternative to cloud agnosticism
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform

Savia Lobo
15 May 2018
2 min read
Google recently announced the Google Compute Engine Plugin for Jenkins, which helps provision, configure, and scale Jenkins build environments on Google Cloud Platform (GCP). Jenkins is one of the most popular tools for continuous integration (CI), a standard practice at many software organizations. CI automatically detects changes committed to a team's software repositories and runs them through unit tests, integration tests, and functional tests to finally create an artifact (a JAR, a Docker image, or a binary). Jenkins helps one define a build and test process, then run it continuously against the latest software changes. However, as one scales up a continuous integration practice, builds may need to run across fleets of machines rather than on a single server. With the Google Compute Engine Plugin, DevOps teams can intuitively manage instance templates and launch build instances that automatically register themselves with Jenkins. Once work in the build system has slowed down, the plugin automatically deletes unused instances, so that one pays only for the instances needed. One can also configure the Google Compute Engine Plugin to create build instances as preemptible VMs, which can save up to 80% on the per-second pricing of builds, and attach accelerators like GPUs and Local SSDs to instances to run builds faster. Build instances can be configured as desired, including their networking. For instance:

Disable external IPs so that worker VMs are not publicly accessible
Use Shared VPC networks for greater isolation in one's GCP projects
Apply custom network tags for improved placement in firewall rules

One can also reduce the security risks present in CI by using the Compute Engine Plugin, as it uses the latest and most secure version of the Jenkins Java Network Launch Protocol (JNLP) remoting protocol.
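To make the CI workflow above concrete, here is a hedged sketch of a declarative Jenkinsfile that would run on ephemeral agents provisioned by the plugin. The `gce-builder` label and the Gradle build commands are assumptions for illustration; the label must match whatever label is configured on the plugin's instance template:

```groovy
pipeline {
    // Run on an ephemeral Compute Engine agent provisioned by the plugin
    // ('gce-builder' is a hypothetical label matching an instance template)
    agent { label 'gce-builder' }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Test') {
            // Hypothetical build tool; substitute your own test command
            steps { sh './gradlew test' }
        }
        stage('Package') {
            // Produce the build artifact (JAR, image, or binary)
            steps { sh './gradlew assemble' }
        }
    }
    post {
        always { junit '**/build/test-results/**/*.xml' }
    }
}
```

Because the agent is requested by label, the plugin can spin a fresh VM up for the build and delete it once the executor goes idle, which is what keeps the fleet cost proportional to actual build load.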
While using Jenkins on-premises, one can create an ephemeral build farm in Compute Engine while keeping the Jenkins master and other necessary build dependencies behind a firewall. Read more about the Compute Engine Plugin on the Google Cloud blog.

How machine learning as a service is transforming cloud
Polaris GPS: Rubrik's new SaaS platform for data management applications
Google announces the largest overhaul of their Cloud Speech-to-Text
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available

Savia Lobo
11 May 2018
2 min read
Red Hat recently announced that its latest enterprise distribution, Red Hat Enterprise Linux 7.5 (RHEL 7.5), is now generally available. The release aims to simplify hybrid computing and is packed with features for server administrators and developers.

New features in RHEL 7.5

Support for Network Bound Disk Encryption (NBDE) devices, new Red Hat cluster management capabilities, and compliance management features.
Enhancements to the Cockpit administrator console. Cockpit provides a simplified web interface that helps eliminate complexities around Linux system administration, making it easier for new administrators, or administrators moving over from non-Linux systems, to understand the health and status of their operations.
Improved compliance controls and security, enhanced usability, and tools to cut storage costs.
Better integration with Microsoft Windows infrastructure, both in Microsoft Azure and on-premises. This includes improved management and communication with Windows Server, more secure data transfers with Azure, and performance improvements when used within Active Directory architectures. For those who wish to run both RHEL and Windows on their network, RHEL 7.5 serves this purpose.
Improved software security controls that mitigate risk while augmenting IT operations. A significant component of these controls is security automation via the integration of OpenSCAP with Red Hat Ansible Automation. This is aimed at generating Ansible Playbooks straight from OpenSCAP scans which, in turn, can be leveraged to execute remediations more consistently and quickly across a hybrid IT environment.
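As an illustration of the OpenSCAP-to-Ansible flow described above: a generated remediation playbook is ordinary Ansible. The sketch below is hand-written in the style of what `oscap xccdf generate fix --fix-type ansible` produces; the specific hardening rule and values are assumptions, not the output of a real scan:

```yaml
# Hypothetical remediation playbook sketch (not generated by a real scan)
- hosts: all
  become: true
  tasks:
    - name: Disable SSH root login (example hardening rule)
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```

Because the output is a plain playbook, the same remediation can be replayed consistently across on-premises hosts and cloud instances alike.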
RHEL 7.5 also provides high availability support for enterprise applications running on Amazon Web Services or Microsoft Azure, with Pacemaker support in public clouds via the Red Hat High Availability Add-On and Red Hat Enterprise Linux for SAP® Solutions. To know more about the release in detail, read the Red Hat official blog.

Linux Foundation launches the Acumos AI Project to make AI accessible
How to implement In-Memory OLTP on SQL Server in Linux
Kali Linux 2018.2 released
What to expect from vSphere 6.7

Vijin Boricha
11 May 2018
3 min read
VMware has announced the latest release of its industry-leading virtualization platform, vSphere 6.7. With vSphere 6.7, IT organizations can address key infrastructure demands such as:

Extensive growth in the quantity and diversity of applications delivered
Increased adoption of hybrid cloud environments
Global expansion of data centers
Robust infrastructure and application security

Let's take a look at some of the key capabilities of vSphere 6.7.

Effortless and efficient management: vSphere 6.7 builds on the innovations delivered in vSphere 6.5 and takes the customer experience to another level, offering management simplicity, operational efficiency, and faster time to market, all at scale. It comes with an enhanced vCenter Server Appliance (vCSA) and new APIs that improve deployments with multiple vCenters, resulting in easier management of the vCenter Server Appliance as well as backup and restore. Customers can now link multiple vCenters and have seamless visibility across their environment without depending on external platform services or load balancers.

Extensive security capabilities: vSphere 6.7 enhances the security capabilities of vSphere 6.5. It adds support for Trusted Platform Module (TPM) 2.0 hardware devices and introduces Virtual TPM 2.0, bringing significant enhancements to both hypervisor and guest operating system security. This capability prevents VMs and hosts from being tampered with, prevents the loading of unauthorized components, and enables the desired guest operating system security features. VM Encryption is further enhanced and operationally simpler to manage, enabling encrypted vMotion across different vCenter instances. vSphere 6.7 also extends its security features through the collaboration between VMware and Microsoft, ensuring secured Windows VMs on vSphere.
Universal application platform: vSphere is now a universal application platform that supports existing mission-critical applications along with new workloads such as 3D graphics, big data, machine learning, cloud-native applications, and more. It has also extended its support to some of the latest hardware innovations in the industry, delivering exceptional performance for a variety of workloads. Through the collaboration between VMware and NVIDIA, vSphere 6.7 further extends its support for GPUs by virtualizing NVIDIA GPUs for non-VDI and non-general-purpose-computing use cases such as artificial intelligence, machine learning, and big data. With these enhancements, customers can better manage the lifecycle of hosts, reducing disruption for end users. VMware plans to invest more in this area to bring full vSphere support to GPUs in future releases.

A flawless hybrid cloud experience: As customers look for hybrid cloud options, vSphere 6.7 introduces vCenter Server Hybrid Linked Mode, which gives customers unified manageability and visibility across an on-premises vSphere environment and a VMware Cloud on AWS environment, even when the two run different versions of vSphere. To ensure a seamless hybrid cloud experience, vSphere 6.7 also delivers a new capability, called Per-VM EVC, which allows seamless migration across different CPUs.

This is only an overview of the key capabilities of vSphere 6.7. You can learn more about this release from the VMware vSphere Blog and the VMware release notes.

Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
VMware vSphere storage, datastores, snapshots
The key differences between Kubernetes and Docker Swarm
Linux Foundation launches the Acumos Al Project to make AI accessible

Savia Lobo
08 May 2018
2 min read
The Linux Foundation recently launched the Acumos AI Project with an aim to make AI accessible to all. Acumos AI is a platform and open source framework for easily building, sharing, and deploying artificial intelligence, machine learning, and deep learning applications. As part of the LF Deep Learning Foundation, Acumos strives to make these AI, ML, and DL technologies available to developers and data scientists everywhere. It caters to a broad range of business use cases, including network analytics, customer care, field service and equipment repair, healthcare analytics, network security, advanced video services, and many more. Let's have a look at what Acumos AI has in store. The Acumos AI Project:

Packages toolkits such as TensorFlow and scikit-learn, and models, with a common API that allows them to connect seamlessly
Allows easy onboarding and training of models and tools
Supports a variety of popular software languages, including Java, Python, and R
Leverages modern microservices and containers to package and export production-ready AI applications as Docker files
Includes a federated AI Model Marketplace, a catalog of community-contributed AI models that can be securely shared

Benefits of Acumos AI

It provides a standardized platform, with easy export and Docker-file deployment to any environment, including the major public clouds, making stand-up and maintenance a breeze.
Its simplified toolkit and model onboarding help data scientists focus on building great AI models rather than maintaining infrastructure.
Acumos AI comprises a visual design editor, a drag-and-drop application designer, and a chaining feature with which applications can be chained to create an array of AI services. These enable end users to deploy complicated AI apps for training and testing within minutes.

Read the Acumos AI whitepaper to know more about the Acumos AI Project in detail.
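Since Acumos packages models as Docker-deployable microservices, the export step amounts to wrapping a model runner in an image. The sketch below is generic and hypothetical rather than Acumos-specific: the file names, the `serve.py` entry point, and the port are all assumptions for illustration:

```dockerfile
# Hypothetical image wrapping a Python model behind an HTTP API
FROM python:3.6-slim

WORKDIR /app

# Model artifact plus the microservice that exposes it (names are illustrative)
COPY requirements.txt model.pkl serve.py ./
RUN pip install --no-cache-dir -r requirements.txt

# serve.py would load model.pkl and expose a predict endpoint over HTTP
EXPOSE 8080
CMD ["python", "serve.py"]
```

Packaging a model this way is what makes it portable across the environments Acumos targets: the same image can run on a laptop, on-premises, or on any major public cloud.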
Kali Linux 2018.2 released
How to implement In-Memory OLTP on SQL Server in Linux
What to expect from the upcoming Ubuntu 18.04 release
Big vendor announcements at KubeCon + CloudNativeCon Europe

Richard Gall
03 May 2018
4 min read
KubeCon and the Cloud Native Computing Foundation (CNCF) have been running a joint summit in Copenhagen this week. There has been a whole host of updates and announcements from some of the biggest cloud vendors, from Oracle to Google. That's important, as it highlights that Kubernetes has well and truly established itself within the container space. Months after Docker conceded ground to the project in the orchestration world, vendors are looking to adapt to Kubernetes' status on today's software landscape.

5 important vendor announcements from KubeCon + CloudNativeCon

Let's take a look at some of the biggest announcements from KubeCon and CNCF and what they mean for the industry.

Oracle

Oracle has made a number of announcements in Copenhagen that underline not only the dominance of Kubernetes, but also the growth of serverless computing. The Fn Project, Oracle's serverless cloud project, is working closely with the Cloud Native Computing Foundation to develop open standards. This includes support for the CloudEvents initiative, which aims to standardize how event data is described. Oracle also revealed it was launching a container engine for Kubernetes. Oracle Container Engine has been developed to help Oracle's customers tackle a range of common infrastructure challenges, such as security and networking. Both announcements highlight the changing needs of Oracle's customers. They also underscore how open source software is transforming the way established vendors act and view the world: they need to adapt.

Google

Google announced gVisor, a runtime environment that allows you to separate containerized applications from the kernel on which they are based. The company also revealed Stackdriver Kubernetes Monitoring. This is an interesting tool, as it should simplify the way you monitor Kubernetes on the Google Cloud Platform. Essentially, it brings various different components into one place.
You'll now be able to see a range of metrics and events across containers and clusters.

Cloud 66

Cloud 66 introduced a number of new features designed to enhance Skycap, its flagship container delivery pipeline product. Stencils is, as the name suggests, a way of templating Kubernetes configuration files. This will make managing access to those files easier, and means that making changes won't impact releases in the way it might otherwise. Formations, meanwhile, allow you to target container deployments at particular clusters. Cloud 66 also revealed an open source tool called Copper, which validates Kubernetes configuration files; it's essentially a way of testing and checking the permissions and overall configuration of the files. In the press release, CEO Khash Sajadi said: "With the advance of micro-services, containers and the surge of APIs, developers and operations teams appreciate a self-service toolchain that operations curate, and developers can run with in production. Cloud 66 is committed to tools that provide a balance between operational governance and development freedom, in the cloud or for on-premises deployments."

Cisco

Cisco used KubeCon to reveal a couple of important Kubernetes-related updates to two of its products: AppDynamics, the application performance analytics tool, and CloudCenter both now have Kubernetes support. This move will bring Kubernetes into many legacy applications that were previously locked into the level of functionality offered by Cisco. Here's what Kip Compton, VP of Cisco's Cloud Platform and Solutions Group, had to say: "The Kubernetes platform has emerged as the de-facto container solution as customers accelerate adoption of containerized application architectures... But organizations are still challenged to efficiently and confidently utilize Kubernetes as they modernize legacy applications and develop new cloud applications.
With our latest Kubernetes support, customers can now easily adopt production-grade Kubernetes across multicloud environments.” This is interesting: Compton identifies a common challenge around bringing legacy software up to date. With this announcement, Cisco is helping its customers find a way around legacy issues, reducing the need to undergo a risky mass system migration.

DigitalOcean

Cloud platform DigitalOcean released a Kubernetes product in Copenhagen. Like the Cisco release, at the most basic level it's going to make it much easier for engineering and operations teams to leverage Kubernetes without the challenges of integrating the various platforms. Learn more about DigitalOcean Kubernetes here.

Read next

Google's kaniko – An open-source build tool for Docker Images in Kubernetes, without a root access
Kubernetes 1.10 released
The key differences between Kubernetes and Docker Swarm