Tech News - Cloud & Networking

376 Articles

Go Cloud is Google's bid to establish Golang as the go-to language of cloud

Richard Gall
25 Jul 2018
2 min read
Google's Go is one of the fastest growing programming languages on the planet, and Google is now bidding to make it the go-to language for cloud development. Go Cloud, a new library that provides a set of tools to support cloud development, was revealed in a blog post published yesterday. "With this project," the team explains, "we aim to make Go the language of choice for developers building portable cloud applications."

Why Go Cloud now?

Google developed Go Cloud in response to demand for a simpler way of writing applications that aren't tightly coupled to a single cloud provider. The team did considerable research into the key challenges and use cases in the Go community to arrive at Go Cloud. They found that the growing demand for multi-cloud and hybrid cloud solutions wasn't being fully met by engineering teams, because there is a trade-off between improving portability and shipping updates. Essentially, the work of decoupling applications was being pushed back by the day-to-day pressure of delivering new features. With Go Cloud, developers will be able to solve this problem and build portable cloud solutions that aren't tied to one provider.

What's inside Go Cloud?

Go Cloud is a library that consists of a range of generic APIs. The team has "identified common services used by cloud applications and have created generic APIs to work across cloud providers." These APIs include:

Blob storage
MySQL database access
Runtime configuration
An HTTP server configured with request logging, tracing, and health checking

At the moment Go Cloud is compatible with Google Cloud Platform and AWS, but the team plans "to add support for additional cloud providers very soon."

Try Go Cloud for yourself

If you want to see how Go Cloud works, you can try it out for yourself: this tutorial on GitHub is a good place to start. You can also stay up to date with news about the project by joining Google's dedicated mailing list.
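The post includes no code, so here is a minimal stdlib-only sketch of the portability idea (this is not Go Cloud's actual API): application code depends on a generic storage interface, while the concrete provider, here an in-memory bucket standing in for GCS or S3, is chosen once at setup time.

```go
// Illustrative sketch only: NOT Go Cloud's real API, just the decoupling
// pattern it is built around, using only the standard library.
package main

import (
	"errors"
	"fmt"
)

// Bucket is a generic blob-storage interface; provider-specific types
// (GCS, S3, ...) would each implement it.
type Bucket interface {
	Write(key string, data []byte) error
	Read(key string) ([]byte, error)
}

// memBucket is an in-memory Bucket used as a stand-in provider.
type memBucket struct{ objects map[string][]byte }

func newMemBucket() *memBucket { return &memBucket{objects: map[string][]byte{}} }

func (b *memBucket) Write(key string, data []byte) error {
	b.objects[key] = append([]byte(nil), data...)
	return nil
}

func (b *memBucket) Read(key string) ([]byte, error) {
	data, ok := b.objects[key]
	if !ok {
		return nil, errors.New("blob not found: " + key)
	}
	return data, nil
}

// saveReport only sees the Bucket interface, so swapping cloud providers
// never touches this code: the portability Go Cloud is aiming for.
func saveReport(b Bucket, name string, body []byte) error {
	return b.Write("reports/"+name, body)
}

func main() {
	bucket := newMemBucket() // in production: a GCS- or S3-backed Bucket
	if err := saveReport(bucket, "q3.txt", []byte("revenue up")); err != nil {
		panic(err)
	}
	data, _ := bucket.Read("reports/q3.txt")
	fmt.Println(string(data)) // prints "revenue up"
}
```

In Go Cloud itself the same effect is achieved by its generic packages, with the provider selected when the bucket is opened rather than spread through application code.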


Google Cloud Launches Blockchain Toolkit to help developers build apps easily

Natasha Mathur
24 Jul 2018
2 min read
Google Cloud launched new blockchain tools for developers on Monday, the result of a collaboration with a DLT (distributed ledger technology) startup, to help developers build apps more easily.

Google appears to be taking inspiration from AWS. Amazon's cloud company partnered with Hyperledger Fabric earlier this year, introducing blockchain templates that allow developers to launch Ethereum apps without writing all the code required to create a smart contract.

Digital Asset, a blockchain platform services company run by former JPMorgan executive Blythe Masters, will provide a software development kit to developers working on Google Cloud. Along with this, Digital Asset Modeling Language (DAML), a platform-as-a-service (PaaS) program, will also be made available to developers. The DAML PaaS is now available through Google Cloud's Orbitera Application Marketplace.

According to Leonard Law, Head of Financial Services Platform at Google Cloud, "DLT has great potential to benefit customers not just in the financial services industry, but across many industries, and we're excited to bring these developer tools to Google Cloud."

Blythe Masters, CEO of Digital Asset, also said the company is partnering with Google Cloud to provide a full-stack solution to developers "so they can unleash the potential for web-paced innovation in Blockchain". This, in turn, will help developers and organizations overcome some of the most common technical barriers to DLT application development today.

With the arrival of the new blockchain toolkit, developers will be able to easily manage the distributed systems behind financial applications, games, and more.


IBM launches Nabla containers: A sandbox more secure than Docker containers

Savia Lobo
17 Jul 2018
4 min read
Docker, and container technology in general, has had a buzzing response from developers around the globe. With enticing features such as being lightweight and DevOps-focused, containers have gradually been taking over from virtual machines. However, many developers and organizations still prefer virtual machines because they fear containers are less secure.

Enter IBM's Nabla containers. IBM recently launched its brand new container tech, claiming it is more secure than Docker or any other containers on the market. Nabla is a sandbox designed for strong isolation on a host: these specialized containers cut OS system calls down to a bare minimum, with as little code as possible, which shrinks the surface area available for an attack.

What are the leading causes of security breaches in containers?

IBM Research distinguished engineer James Bottomley highlights two fundamental kinds of security problems affecting containers and virtual machines (VMs): the Vertical Attack Profile (VAP) and the Horizontal Attack Profile (HAP).

The Vertical Attack Profile is the code traversed to provide a service, from input to database update to output, in a stack. Like all other code, VAP code is prone to bugs: the more code one traverses, the greater the chance of hitting a security loophole, though the density of these bugs varies. This profile is relatively benign, however, since the primary actors in hostile attacks, cloud tenants and cloud service providers (CSPs), matter far more in the HAP.

The Horizontal Attack Profile covers stack security holes whose exploits can jump either into the physical server host or into other VMs. These exploits cause what is called a failure of containment.

One part of the Vertical Attack Profile belongs to the tenant (the guest kernel, guest OS, and application), while the other part (the hypervisor and host OS) belongs to the CSP. The CSP's vertical part has an additional problem: any exploit in this piece of the stack can be used to jump onto either the host itself or any other tenant VMs running on the host. Bottomley states that any horizontal security failure is a potential business-destroying event for a CSP, so preventing such failures is paramount. An exploit in the tenant-owned VAP, on the other hand, is seen as a tenant-only problem, to be located and fixed by the tenant. In short, the larger the profile (the CSP's, for instance), the greater the probability of being exploited. HAP breaches are not common, but when they occur they ruin the system; Bottomley calls HAPs "potentially business destroying events."

How IBM Nabla containers reduce the HAP

Nabla containers achieve isolation by reducing the attack surface on the host. They use library OS (also known as unikernel) techniques adapted from the Solo5 project, which help Nabla containers avoid system calls and thereby shrink the attack surface. The containers use only 9 system calls; the rest are blocked through a Linux seccomp policy.

Per IBM Research, Nabla containers are more secure than other container technologies, including Docker, Google's gVisor (a container runtime sandbox), and even Kata Containers (an open-source lightweight VM used to secure containers). Read more about IBM Nabla containers on the project's official GitHub site.
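The syscall allowlist approach described above is the same mechanism exposed by Docker/OCI seccomp profiles: everything is denied by default, and only an explicit list of calls is allowed. The JSON below is a hypothetical profile in that format; the article does not enumerate Nabla's nine permitted calls, so the syscall names here are purely illustrative.

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit_group", "clock_gettime"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

With a deny-by-default policy like this, any syscall outside the allowlist fails with an error, which is how a tiny allowed set translates directly into a tiny attack surface.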


Too weird for Wall Street: Broadcom's value drops after purchasing CA Technologies

Richard Gall
13 Jul 2018
2 min read
The tech world has spent the last 24 hours or so pretty confused by semiconductor manufacturer Broadcom's purchase of software company CA Technologies. The deal, which Broadcom sealed with $18.9 billion in cash, was, according to the company, a way of adding "mission critical technology businesses" to its portfolio. However, it seems the deal was just a little too left-field: yesterday (Thursday 12 July), Broadcom's shares dropped 13.8%, equating to a loss of $14.5 billion in value.

Why did Broadcom purchase CA Technologies?

This is the question everyone seems to be asking. Ostensibly, the move is about consolidating Broadcom's position in the tech space and driving it forward. However, as The Register pointed out, a quarterly review between executives in June made no mention of an acquisition, and certainly not of CA Technologies. Speaking to Bloomberg, Cody Acree said: "It's the lack of obvious connection between the two businesses. What does Broadcom know about improving CA's efficiencies?"

There may nevertheless be some method in Broadcom's apparent madness, even if investors don't see it. Broadcom's semiconductor (silicon chip) business is more unstable than the kind of software solutions offered by CA Technologies, because the semiconductor market depends heavily on fluctuations in the consumer gadget market. But even if this reasoning makes sense to Broadcom's executives, communicating the strategy would surely be essential: surprising feints might look good in the long run, but they can spook investors.

A tale of two markets: consumer tech and software solutions

It will take some time to see whether Broadcom's move actually works out, but it demonstrates the vast difference between the consumer and B2B markets in technology. It doesn't seem outrageous to suggest that, at the very least, Broadcom feels anxious about the volatility of its core market at the moment; its acquisition of CA Technologies might be the insurance policy it has been searching for.


Microsoft introduces ‘Immutable Blob Storage’, a highly protected object storage for Azure

Savia Lobo
06 Jul 2018
2 min read
Microsoft has released a new chamber of secrets: 'Immutable Blob Storage'. This storage service safeguards sensitive data and is built on the Azure platform. It is the latest addition to Microsoft's continued push into industry-specific cloud offerings. The service is aimed mainly at the financial sector, but other sectors can use it too, to help manage the information they own.

Immutable Blob Storage is a specialized version of Azure's existing object storage and adds a number of security features, including:

The ability to configure an environment so that the records inside it cannot easily be deleted by anyone, not even the administrators who maintain the deployment.
The ability to block edits to existing files, a setting that can help banks and other heavily regulated organizations prove the validity of their records during audits.

Immutable Blob Storage costs the same as Azure's regular object service, and the two products are integrated with one another: it can be used for both standard and immutable storage, so IT no longer needs to manage the complexity of a separate archive storage solution. These features come on top of the ones carried over from the standard object service, including a data lifecycle management tool that lets organizations set policies for managing their data.

Read more about this new feature on Microsoft Azure's blog post.


What’s new in the Windows 10 SDK Preview Build 17704

Natasha Mathur
06 Jul 2018
2 min read
Microsoft keeps the updates rolling. After Windows 10 SDK Preview Build 17115, which included machine learning APIs, Microsoft has now released Windows 10 SDK Preview Build 17704, two days ago. The new preview SDK can be used with Windows 10 Insider Preview Build 17704 or greater, and it includes bug fixes, MSIX support, and other development changes to the API surface area. To download the latest build, visit the developer section on Windows Insider.

Key updates

Here's what's new in the latest SDK preview build:

MSIX support

Windows 10 SDK Preview Build 17704 finally has MSIX support enabled. You can install and run MSIX-packaged apps on devices running build 17682 or greater. Using the MakeAppx tool, you can package your applications with MSIX; then just click on the MSIX file to install the application. If you want to know more about MSIX, check out this video: https://www.youtube.com/watch?v=FKCX4Rzfysk (Source: Microsoft Developer)

MC.EXE

Changes have been made to the C/C++ ETW code generation of mc.exe, the message compiler:

The "-mof" parameter has been deprecated. This parameter instructs mc.exe to generate ETW code compatible with Windows XP and earlier.
If "-mof" is not used, the generated C/C++ header is compatible with both kernel-mode and user-mode, regardless of whether "-km" or "-um" was specified on the command line.
The generated header supports several customization macros. For example, if you need the generated macros to call something other than EventWriteTransfer, you can set the MCGEN_EVENTWRITETRANSFER macro.
The manifest supports new attributes, such as event "name", event "attributes", and event "tags".
"Provider traits" (e.g. provider group) can now be defined in the manifest. If they are used, the EventRegister[ProviderName] macro will automatically register them.
MC can now generate Unicode (UTF-8 or UTF-16) output with the "-cp utf-8" or "-cp utf-16" parameters.

API spotlight

There is a new LauncherOptions.GroupingPreference property in Windows 10 SDK Preview Build 17704 that helps your app tailor its behavior for Sets.

Beyond these changes, APIs have also been added, updated, and removed. More information about known issues and improvements is available on the Windows Blog.

Baidu releases Kunlun AI chip, China’s first cloud-to-edge AI chip

Savia Lobo
05 Jul 2018
2 min read
Baidu, Inc., the leading Chinese-language Internet search provider, has released the Kunlun AI chip. It is China's first cloud-to-edge AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers. (K'un-Lun is also a place that exists in another dimension in Marvel's Immortal Iron Fist.)

AI applications have risen dramatically in popularity and adoption, and with them the computational requirements. Traditional chips have limited computational power, and accelerating larger AI workloads requires far more computational scaling. To meet this demand, Baidu designed Kunlun specifically for large-scale AI workloads: a high-performance, cost-effective solution that can be used in both cloud and edge settings, including data centers, public clouds, and autonomous vehicles.

Kunlun comes in two variants: the 818-300 model for training and the 818-100 model for inference. The chip leverages Baidu's AI ecosystem, including AI scenarios such as search ranking and deep learning frameworks like PaddlePaddle.

Key specifications of the Kunlun AI chip:

Computational capability 30 times that of the original FPGA-based accelerator Baidu started developing in 2011
A 14nm Samsung manufacturing process
512 GB/second memory bandwidth
260 TOPS of computing performance while consuming 100 watts of power

Features of the Kunlun chip include support for open-source deep learning algorithms and for a wide range of AI applications, including voice recognition, search ranking, natural language processing, and so on.

Baidu plans to keep iterating on the chip and developing it progressively to enable the expansion of an open AI ecosystem. To make it successful, Baidu continues to invest in "chip power" to meet the needs of fields such as intelligent vehicles and devices, and voice and image recognition. Read more about Baidu's Kunlun AI chip on the MIT website.


Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0

Savia Lobo
29 Jun 2018
2 min read
Zefflin Systems has announced version 2.0 of its ServiceNow Plugin for Red Hat Ansible. The plugin helps IT operations easily map IT services to infrastructure for automatically deployed environments.

Zefflin's Plugin Release 2.0 enables the use of the ServiceNow Catalog and Request Management modules to:

Present deployment options to users
Capture requests and route them for approval
Invoke Ansible playbooks to auto-deploy server, storage, and networking

Plugin 2.0 also provides full integration with ServiceNow Change Management for complete ITIL-compliant auditability.

Key features and benefits of ServiceNow Plugin 2.0:

Support for AWX: with AWX, customers on the open-source version of Ansible can easily integrate with ServiceNow.
Automated catalog variable creation: Plugin 2.0 reads the target Ansible playbook and automatically creates the input variables in the ServiceNow catalog entry. This significantly reduces implementation time and maintenance effort, meaning new playbooks can be onboarded in less time.
Update on Ansible job completion: this extends the amount of information returned from an Ansible playbook and logged in the ServiceNow request, dramatically improving the audit trail and providing a higher degree of process control.

The ServiceNow Plugin for Ansible enables DevOps with ServiceNow integration by establishing standardized development architectures, an effective routing approval process, an ITIL-compliant audit framework, faster deployment, and an automated process that frees up the team to focus on other activities.

Read more about the ServiceNow Plugin in detail on Zefflin Systems' official blog post.
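To illustrate the automated catalog variable creation described above, here is a hypothetical playbook of the kind the plugin would read; the variable names are invented for illustration, and Plugin 2.0 would surface them as input fields in the ServiceNow catalog entry.

```yaml
# Hypothetical playbook: the plugin reads the vars below and auto-creates
# matching input variables in the ServiceNow catalog entry.
- name: Provision a web server
  hosts: all
  vars:
    server_size: medium        # becomes a ServiceNow catalog input
    environment_name: dev      # becomes a ServiceNow catalog input
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
```

When a ServiceNow request referencing this playbook is approved, the captured input values would be passed through as the playbook's variables.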


Microsoft releases Open Service Broker for Azure (OSBA) version 1.0

Savia Lobo
29 Jun 2018
2 min read
Microsoft has released version 1.0 of the Open Service Broker for Azure (OSBA), with full support for Azure SQL, Azure Database for MySQL, and Azure Database for PostgreSQL. Microsoft first announced the preview of OSBA at KubeCon 2017. OSBA is the simplest way to connect apps running in cloud-native environments (such as Kubernetes, Cloud Foundry, and OpenShift) to the rich suite of managed services available on Azure.

OSBA 1.0 is ready to connect mission-critical applications to Azure's enterprise-grade backing services, and it is well suited to containerized environments like Kubernetes. In the recent announcement of a strategic partnership between Microsoft and Red Hat to provide an OpenShift service on Azure, Microsoft demonstrated OSBA using an OpenShift project template: OSBA will enable customers to deploy Azure services directly from the OpenShift console and connect them to their containerized applications running on OpenShift. Microsoft also plans to collaborate with Bitnami to bring OSBA into KubeApps, so customers can deploy solutions like WordPress built on Azure Database for MySQL and Artifactory on Azure Database for PostgreSQL.

Microsoft plans three additional focus areas for OSBA and the Kubernetes service catalog:

Expanding the set of Azure services available in OSBA by re-enabling services such as Azure Cosmos DB and Azure Redis. These services will progress to a stable state as Microsoft learns how customers intend to use them.
Continuing to work with the Kubernetes community to align the capabilities of the service catalog with the behavior customers expect, giving cluster operators the ability to choose which classes and plans are available to developers.
Pursuing a longer-term vision for the Kubernetes service catalog and the Open Service Broker API that will let developers describe general requirements for a service, such as "a MySQL database of version 5.7 or higher".

Read the full coverage on Microsoft's official blog post.
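As a hedged sketch of what provisioning an Azure service through the Kubernetes service catalog looks like, the manifest below requests a MySQL instance via a service broker such as OSBA; the class name, plan name, and parameter values are illustrative and should be checked against the broker's actual catalog.

```yaml
# Hypothetical ServiceInstance: asks the service catalog (backed by a broker
# such as OSBA) to provision a managed MySQL database on Azure.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-mysql-instance
  namespace: default
spec:
  clusterServiceClassExternalName: azure-mysql   # illustrative class name
  clusterServicePlanExternalName: basic50        # illustrative plan name
  parameters:
    location: eastus
    resourceGroup: demo-group
```

A companion ServiceBinding object would then surface the connection credentials to the application as a Kubernetes secret, which is how containerized apps get wired to the managed service.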


HashiCorp announces Consul 1.2 to ease Service segmentation with the Connect feature

Savia Lobo
28 Jun 2018
3 min read
HashiCorp recently announced a new version of its distributed service mesh, Consul 1.2. This release introduces a feature known as Connect, which automatically turns any existing Consul cluster into a service mesh solution. It works on any platform: physical machines, cloud, containers, schedulers, and more.

HashiCorp is a San Francisco-based company that helps businesses resolve development, operations, and security challenges in infrastructure so they can focus on other business-critical tasks. Consul is one of HashiCorp's products: a distributed service mesh for connecting, securing, and configuring services across any runtime platform and any public or private cloud.

The Connect feature in Consul 1.2 enables secure service-to-service communication with automatic TLS encryption and identity-based authorization. HashiCorp states that Connect is free and open source.

New functionality in Consul 1.2

Encrypted traffic in transit: all traffic established through Connect uses mutual TLS, ensuring traffic is encrypted in transit and allowing services to be safely deployed in low-trust environments.

Connection authorization: service communication is allowed or denied via a service access graph built from intentions. Connect uses the logical name of the service, unlike a firewall, which uses IP addresses. Rules are therefore scale-independent: it doesn't matter whether there is one web server or 100. Intentions can be configured using the UI, CLI, API, or HashiCorp Terraform.

Proxy sidecars: applications can use a lightweight proxy sidecar process to automatically establish inbound and outbound TLS connections, so existing applications can work with Connect without modification. Consul ships with a built-in proxy that doesn't require external dependencies, and also supports third-party proxies such as Envoy.

Native integration: performance-sensitive applications can natively integrate with the Consul Connect APIs to establish and accept connections without a proxy, for optimal performance and security.

Certificate management: Consul creates and distributes certificates using a certificate authority (CA) provider. Consul has a built-in CA system that requires no external dependencies; it integrates with HashiCorp Vault and can be extended to support any other PKI (public key infrastructure) system.

Network and cloud independence: Connect uses standard TLS over TCP/IP, which allows it to work in any network configuration, provided the IP advertised by the destination service is reachable by the underlying operating system. Services can also communicate across clouds without complex overlays.

Learn more about these features in detail in HashiCorp's Consul 1.2 announcement post.
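Intentions are plain allow/deny records keyed by logical service names. As a hedged sketch based on Consul's intentions API (the service names are invented for illustration), a body like the following would be submitted to the intentions endpoint:

```json
{
  "SourceName": "web",
  "DestinationName": "db",
  "Action": "allow",
  "Description": "allow the web service to open Connect sessions to db"
}
```

Because the rule names services rather than IP addresses, it holds unchanged whether "web" runs as one instance or a hundred, which is the scale-independence described above.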

Kubernetes 1.11 is here!

Vijin Boricha
28 Jun 2018
3 min read
This is the second release of Kubernetes in 2018. Kubernetes 1.11 comes with significant updates to features that revolve around the maturity, scalability, and flexibility of Kubernetes. The newest version brings storage and networking enhancements that make it possible to plug any kind of infrastructure, cloud or on-premise, into the Kubernetes system. Now let's dive into the key aspects of this release.

IPVS-based in-cluster service load balancing promotes to general availability

IPVS has a simpler programming interface than iptables and delivers high-performance in-kernel load balancing. Now generally available, it provides better network throughput, better programming latency, and higher scalability limits. It is not yet the default option, but clusters can use it for production traffic.

CoreDNS graduates to general availability

CoreDNS has moved to general availability and is now the default option when using kubeadm. It is a flexible DNS server that integrates directly with the Kubernetes API. Compared to the previous DNS server, CoreDNS has fewer moving parts: it is a single process that creates custom DNS entries to support flexible use cases. CoreDNS is also memory-safe, as it is written in Go.

Dynamic kubelet configuration moves to beta

It has always been difficult to update kubelet configurations in a running cluster, as kubelets are configured through command-line flags. With this feature moving to beta, one can configure kubelets in a live cluster through the API server.

CSI enhancements

Over the past few releases, CSI (Container Storage Interface) has been a major focus area; it moved to beta in version 1.10. In this version, the Kubernetes team continues to enhance CSI with a number of new features:

Alpha support for raw block volumes in CSI
Integration of CSI with the new kubelet plugin registration mechanism
An easier way to pass secrets to CSI plugins

Enhanced storage features

This release introduces online resizing of persistent volumes as an alpha feature. Users can increase the size of a PV without terminating pods or unmounting the volume: the user updates the PVC to request a new size, and the kubelet resizes the file system for the PVC.

Dynamic maximum volume count is introduced as an alpha feature. It lets in-tree volume plugins specify the number of volumes that can be attached to a node, allowing the limit to vary based on the node type; previously these limits were configured through an environment variable.

The StorageObjectInUseProtection feature is now stable, and prevents issues arising from deleting a persistent volume or a persistent volume claim that is bound to an active pod.

You can learn more about Kubernetes 1.11 on the Kubernetes blog, and this version is available for download on GitHub.
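The online-resize flow above amounts to an ordinary PVC update. The claim below is a hypothetical example (the names and storage class are invented, and the alpha feature must be enabled on the cluster):

```yaml
# Hypothetical claim: bumping spec.resources.requests.storage (say from 10Gi
# to 20Gi) asks Kubernetes to grow the volume; with the alpha online-resize
# feature, the kubelet then expands the filesystem without unmounting it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi
```

No pod restart is required: the pod keeps the volume mounted while the capacity increases underneath it.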


VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Savia Lobo
27 Jun 2018
2 min read
VMware recently announced its Kubernetes-as-a-Service adoption by launching VMware Kubernetes Engine (VKE) that provides a multi-cloud experience. The VKE is a fully-managed service offered through a SaaS model. It allows customers to use Kubernetes easily without having to worry about the deployment and operation of Kubernetes clusters. Kubernetes lets users manage clusters of containers while also making it easier to move applications between public hosted clouds. By adding Kubernetes on cloud, VMware offers a managed service business that will use Kubernetes containers with reduced complexities. VMware's Kubernetes engine will face a big time competition from Google Cloud and Microsoft Azure, among others. Recently, Rackspace also announced its partnership with HPE to develop a new Kubernetes-based cloud offering. VMware Kubernetes Engine (VKE) features include: VMware Smart Cluster VMware Smart Cluster is the selection of compute resources to constantly optimize resource usage, provide high availability, and reduce cost. It also enables the management of cost-effective, scalable Kubernetes clusters optimized to application requirements. Users can also have role-based access and visibility only to their predefined environment with the smart cluster. Fully Managed by VMware VMware Kubernetes Engine(VKE) is fully managed by VMware. It ensures that clusters always run in an efficient manner with multi-tenancy, seamless Kubernetes upgrades, high availability, and security. Security by default in VKE VMware Kubernetes Engine is highly secure with features like: Multi-tenancy Deep policy control Dedicated AWS accounts per organization Logical network isolation Integrated identity Access management with single sign-on Global Availability VKE has a region-agnostic user interface and is available across three AWS regions, US-East1, US-West2, and EU-West1, giving users the choice for which region to run clusters on. 
Read full coverage about the VMware Kubernetes Engine (VKE) on the official website.

Related links:
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Hortonworks partner with Google Cloud to enhance their Big Data strategy
Cloud Filestore: A new high performance storage option by Google Cloud Platform

Vijin Boricha
27 Jun 2018
3 min read
Google recently came up with a new storage option for developers in its cloud. Cloud Filestore, currently in beta, will launch next month, according to the Google Cloud Platform blog. Applications that require a filesystem interface and a shared filesystem for data can leverage this file storage service. It provides a fully managed Network Attached Storage (NAS) service that integrates with Google Compute Engine and Kubernetes Engine instances. Developers can leverage Filestore for high-performing file-based workloads, and enterprises can now easily run applications that depend on a traditional file system interface on Google Cloud Platform.

Traditionally, if applications needed a standard file system, developers would have to improvise a file server with a persistent disk. Filestore does away with that workaround and allows GCP developers to spin up storage as needed. Filestore offers high throughput, low latency, and high IOPS (input/output operations per second). The service is available in two tiers: premium and standard. The premium tier costs $0.30/GB/month and promises a max throughput of 700 MB/s and 30,000 max IOPS. The standard tier costs $0.20/GB/month with 180 MB/s max throughput and 5,000 max IOPS.

A snapshot of Filestore features

Filestore was introduced at the Los Angeles region launch and focused largely on the entertainment and media industries, where there is a great need for shared file systems for enterprise applications. But the service is not limited to the media industry; other industries that rely on similar enterprise applications can also benefit from it.

Benefits of using Filestore

A lightning speed experience
Filestore provides high IOPS for latency-sensitive workloads such as content management systems, databases, random I/O, or other metadata-intensive applications. This results in minimal variability in performance.

Consistent performance throughout
Cloud Filestore ensures that one pays a predictable price for predictable performance. Users can independently choose their preferred IOPS tier (standard or premium) and storage capacity, so they can fine-tune their filesystem for a particular workload and experience consistent performance for that workload over time.

Simplicity at its best
Cloud Filestore is a fully managed, NoOps service integrated with the rest of the Google Cloud portfolio. One can easily mount Filestore volumes on Compute Engine VMs, and Filestore is tightly integrated with Google Kubernetes Engine, which allows containers to reference the same shared data.

To know more about this exciting release, visit the Cloud Filestore official website.

Related links:
AT&T combines with Google cloud to deliver cloud networking at scale
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
GitLab is moving from Azure to Google Cloud in July
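To get a feel for the two tiers, the per-GB rates quoted above can be turned into a quick cost estimate. This is a purely illustrative sketch: the prices and limits come from the article, while the table and function names are our own and not part of any Google API.

```python
# Cloud Filestore tier figures as quoted in the article.
FILESTORE_TIERS = {
    # tier: (USD per GB per month, max throughput MB/s, max IOPS)
    "standard": (0.20, 180, 5_000),
    "premium": (0.30, 700, 30_000),
}

def monthly_cost(capacity_gb: float, tier: str = "standard") -> float:
    """Estimated monthly cost for a Filestore instance of the given size."""
    price_per_gb, _, _ = FILESTORE_TIERS[tier]
    return capacity_gb * price_per_gb

# A 10 TB (10,240 GB) premium instance:
print(round(monthly_cost(10_240, "premium"), 2))  # 3072.0
```

At roughly $1,000 extra per 10 TB per month, the premium tier buys about 4x the throughput and 6x the IOPS, so the choice comes down to how latency-sensitive the workload is.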

GitLab is moving from Azure to Google Cloud in July

Richard Gall
26 Jun 2018
2 min read
In a switch that contains just a subtle hint of saltiness, GitLab has announced that it is to move its code repositories from Microsoft Azure to Google Cloud on Saturday, July 28, 2018. The news comes just weeks after Microsoft revealed it was to acquire GitHub (this happened in early June, if you've lost track of time). While it's tempting to see this as a retaliatory step, it is instead just a coincidence: the migration was planned before the Microsoft and GitHub news was even a rumor.

Why is GitLab moving to Google Cloud?

According to GitLab's Andrew Newdigate, the migration to Google Cloud is being done in a bid to "improve performance and reliability." In a post on the GitLab blog, Newdigate explains that one of the key drivers of the team's decision is Kubernetes: "We believe Kubernetes is the future. It's a technology that makes reliability at massive scale possible." Kubernetes is a Google product, so it makes sense for GitLab to switch to Google's cloud offering and align its toolchain.

Read next: The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab

How GitLab's migration will happen

A central part of the GitLab migration is Geo, a tool built by GitLab that makes cloning and reproducing repositories easier for developers working in different locations. Essentially, it creates 'mirrors' of GitLab instances. That's useful for developers using GitLab, as it provides extra safety and security, but GitLab is also using it for the migration itself.

Image via GitLab

Newdigate writes that GitLab has been running a parallel site on Google Cloud Platform as the migration unfolds, containing an impressive 200TB of Git data and 2TB of relational data in PostgreSQL.

Rehearsing the failover in production

Coordination and planning are everything when conducting such a substantial migration. That's why GitLab's Geo, Production, and Quality teams meet several times a week to rehearse the failover. The process has a number of steps, and each run-through throws up new issues and problems, which are then documented and resolved by the relevant team. Given that confidence and reliability are essential to any version control system, building this into the migration process is a worthwhile activity.
GitLab 11.0 released!

Savia Lobo
25 Jun 2018
2 min read
GitLab recently announced the release of GitLab 11.0, which includes major features such as Auto DevOps and License Management, among others.

The Auto DevOps feature is generally available in GitLab 11.0. It is a pre-built, fully featured CI/CD pipeline that automates the entire delivery process. With this feature, one simply commits code and Auto DevOps does the rest: building and testing the app; performing code quality, security, and license scans; and packaging, deploying, and monitoring the application.

Chris Hill, head of systems engineering for infotainment at Jaguar Land Rover, said, "We're excited about Auto DevOps, because it will allow us to focus on writing code and business value. GitLab can then handle the rest; automatically building, testing, deploying, and even monitoring our application."

Other highlights of the release include:

License Management: automatically detects the licenses of a project's dependencies.
Enhanced security testing of code, containers, and dependencies: GitLab 11.0 extends the coverage of Static Application Security Testing (SAST) to include Scala and .NET.
Kubernetes integration features: if one needs to debug or check on a pod, they can do so by reviewing the Kubernetes pod logs directly from GitLab's deployment board.
Improved Web IDE: one can view CI/CD pipelines from the IDE and get immediate feedback if a pipeline fails. Switching tasks can be disruptive, so the updated Web IDE makes it easy to quickly switch to the next merge request to create, improve, or review without leaving the Web IDE.
Enhanced Epic and Roadmap views: GitLab 11.0 has an updated Epic/Roadmap navigation interface to make it easier to see the big picture and make planning easier.

Read more about GitLab 11.0 on GitLab's official website.

Related links:
GitLab's new DevOps solution
GitLab open sources its Web IDE in GitLab 10.7
The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab