
Tech News - Cloud Computing

175 Articles

GitHub open sources its GitHub Load Balancer (GLB) Director

Savia Lobo
10 Aug 2018
2 min read
GitHub open sourced the GitHub Load Balancer (GLB) Director on August 8, 2018. GLB Director is a Layer 4 load balancer that scales a single IP address across a large number of physical machines while minimizing connection disruption whenever the set of servers changes. Alongside the code, GitHub has also shared details of the load balancer's design.

GitHub first announced GLB on September 22, 2016. It is GitHub's scalable load balancing solution for bare metal data centers, and it powers the majority of GitHub's public web and Git traffic, as well as critical internal systems such as its highly available MySQL clusters.

How GitHub Load Balancer Director works

GLB Director is designed for data center environments where multiple servers announce the same IP address via BGP and network routers shard traffic among those servers using ECMP routing. ECMP shards connections per flow using consistent hashing, but because no per-flow state is stored, adding or removing nodes causes some disruption to traffic. A split L4/L7 design is typically used so that the L4 servers can redistribute these flows back to a consistent server in a flow-aware manner. GLB Director implements the L4 (director) tier of such a split L4/L7 load balancer design.

The GLB design

GLB Director does not replace services like HAProxy and NGINX; rather, it is a layer in front of these services (or any TCP service) that allows them to scale across multiple physical machines without requiring each machine to have a unique IP address.

Source: GitHub

GLB Director only processes packets on ingress, encapsulating them inside an extended Generic UDP Encapsulation (GUE) packet. Egress packets from the proxy-layer servers are sent directly to clients using Direct Server Return. Read more about GLB Director in the GitHub Engineering blog post.
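The flow-consistency property described above can be illustrated with rendezvous (highest-random-weight) hashing, a consistent-hashing scheme in the same spirit as the one GLB uses: when a server is removed, only the flows that were mapped to it change destination. This is a minimal sketch of the idea, not GitHub's implementation; the server and flow names are made up.

```python
import hashlib

def pick_server(flow, servers):
    # Rendezvous hashing: score every (flow, server) pair and send the
    # flow to the highest-scoring server. Removing one server only moves
    # the flows that were mapped to it; all other flows keep their server.
    def score(server):
        return hashlib.sha256(f"{flow}|{server}".encode()).hexdigest()
    return max(servers, key=score)

servers = ["proxy-1", "proxy-2", "proxy-3", "proxy-4"]
flows = [f"203.0.113.{i}:50000" for i in range(100)]

before = {f: pick_server(f, servers) for f in flows}
remaining = [s for s in servers if s != "proxy-3"]  # proxy-3 goes away
after = {f: pick_server(f, remaining) for f in flows}

# Only flows previously mapped to the removed server change destination.
moved = [f for f in flows if before[f] != after[f]]
```

The same property is what lets ECMP-style sharding survive membership changes with minimal disruption.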
Read next:
- Microsoft’s GitHub acquisition is good for the open source community
- Snapchat source code leaked and posted to GitHub
- Why Golang is the fastest growing language on GitHub
- GitHub has added security alerts for Python


Microsoft Azure’s new governance DApp: An enterprise blockchain without mining

Prasad Ramesh
09 Aug 2018
2 min read
Microsoft Azure has released a blockchain-as-a-service product that uses Ethereum, with a set of templates to deploy and configure the blockchain network of your choice. This can be done with minimal Azure or blockchain knowledge.

Conventional public blockchains are based on proof-of-work (PoW) and require mining because the parties do not trust each other. An enterprise blockchain does not require PoW; it is based on proof-of-authority (PoA), in which approved identities, or validators, validate the transactions on the blockchain. The PoA product features a decentralized application (DApp) called the Governance DApp.

Blockchains in this new model can be deployed in 5 to 45 minutes, depending on the size and complexity of the network. The PoA network ships with security features such as an identity leasing system that ensures no two nodes carry the same identity, along with several features aimed at good performance:

- Web Assembly smart contracts: Solidity is cited as one of the pain points when developing smart contracts on Ethereum. This feature allows developers to use familiar languages such as C, C++, and Rust.
- Azure Monitor: Used to track node and network statistics. Developers can view the underlying blockchain to track statistics, while network admins can detect and prevent network outages.
- Extensible governance: Customers can participate in a consortium without managing the network infrastructure; that task can optionally be delegated to an operator of their choosing.
- Governance DApp: Provides decentralized governance in which network authority changes are administered via on-chain voting by select administrators. It also provides validator delegation so that authorities can manage the validator nodes set up in each PoA deployment. Every change is recorded, so users can audit the change history, providing transparency and auditability.
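The proof-of-authority idea above can be sketched in a few lines: a block is accepted only when it is sealed by an approved validator identity, with no mining puzzle involved. This is a toy illustration of the concept, not Azure's implementation; the validator names are hypothetical.

```python
# Hypothetical validator identities approved on the consortium network.
APPROVED_VALIDATORS = {"contoso-node-1", "fabrikam-node-1", "woodgrove-node-1"}

def validate_block(block, approved=APPROVED_VALIDATORS):
    # Proof-of-authority: acceptance hinges on the sealer's identity,
    # not on any proof-of-work computation.
    return block["sealed_by"] in approved

chain = [
    {"height": 1, "sealed_by": "contoso-node-1"},
    {"height": 2, "sealed_by": "fabrikam-node-1"},
    {"height": 3, "sealed_by": "mallory-node"},  # not an approved identity
]
accepted = [b for b in chain if validate_block(b)]
```

Because trust lives in the validator set rather than in expended work, blocks finalize quickly, which is why PoA networks deploy and run so much faster than PoW chains.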
Source: Microsoft Blog

Along with these features, the Governance DApp will also ensure each consortium member retains control over their own keys, enabling secure signing on a wallet of the user's choice. The blog notes: "In the case of a VM or regional outage, new nodes can quickly spin up and resume the previous nodes’ identities." To know more, visit the official Microsoft blog.

Read next:
- Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
- Microsoft announces general availability of Azure SQL Data Sync
- Microsoft supercharges its Azure AI platform with new features


Oracle’s bid protest against the U.S. Defense Department’s (Pentagon) $10 billion cloud contract

Savia Lobo
09 Aug 2018
2 min read
On Monday, Oracle Corp filed a protest with the Government Accountability Office (GAO) against the Pentagon’s $10 billion JEDI (Joint Enterprise Defense Infrastructure) cloud contract. Oracle believes the contract should not be awarded to a single company and should instead allow for multiple winners.

https://twitter.com/92newschannel/status/1027090662162944000

The U.S. Defense Department unveiled the competition in July and stated that there would be only a single winner, chosen to drive the most rapid adoption of cloud technology. Deborah Hellinger, Oracle’s spokeswoman, said in a statement on Tuesday, “The technology industry is innovating around next-generation cloud at an unprecedented pace and JEDI virtually assures DoD will be locked into a legacy cloud for a decade or more. The single-award approach is contrary to the industry's multi-cloud strategy, which promotes constant competition, fosters innovation and lowers prices.”

A bid protest is a challenge to the terms of a solicitation or the award of a federal contract. The GAO, which adjudicates and decides these challenges, will issue a ruling on the protest by November 14. This is the first bid protest filed since the competition began.

Amazon.com is seen as the top contender for the deal: Amazon Web Services (AWS) is the only company approved by the U.S. government to handle secret and top-secret data. The competition has therefore attracted criticism from companies that fear AWS, Amazon’s cloud unit, will win the contract, choking off the hopes of Microsoft Corp (MSFT.O), Oracle (ORCL.N), IBM (IBM.N), and Alphabet Inc’s (GOOGL.O) Google of winning the government cloud computing contract.

Read more about this news on The Register.

Read next:
- Oracle makes its Blockchain cloud service generally available
- Google employees quit over company’s continued Artificial Intelligence ties with the Pentagon
- Oracle reveals issues in Object Serialization. Plans to drop it from core Java.


Google’s second innings in China: Exploring cloud partnerships with Tencent and others

Bhagyashree R
07 Aug 2018
3 min read
Google, with the aim of re-entering the Chinese market, is in talks with top companies in China such as Tencent Holdings Ltd. (the company that owns the popular social media site WeChat) and Inspur Group, hoping to expand its cloud services into the second-largest economy. According to people familiar with the ongoing discussions, the talks began in early 2018, and Google had narrowed the field to three firms by late March. Because of the US-China trade war, however, it is uncertain whether the plans will materialize.

Why is Google interested in cloud partnerships with Chinese tech giants?

In many countries, Google rents computing power and storage over the internet and sells G Suite, which includes Gmail, Docs, Drive, Calendar, and more tools for business, all running on its own data centers. China requires digital information to be stored in the country, so Google wants to collaborate with domestic data center and server providers to run its internet-based services; this is why it needs to partner with local players.

A tie-up with large Chinese tech firms like Tencent and Inspur would also give Google powerful allies as it attempts a second innings in China after exiting the country in 2010. A cloud partnership would help it compete with rivals like Amazon and Microsoft, and with Tencent by its side it could go up against local competitors, including Alibaba Group Holding Ltd.

How Google has been making inroads into China in the recent past

In December, Google launched its AI China Center, the first such center in Asia, at the Google Developer Days event in Shanghai. In January, Google agreed to a patent licensing deal with Tencent Holdings Ltd., with an understanding that the two companies would team up on developing future technologies.
Google could host services on Tencent’s data centers, and Tencent could also promote Google's services to its customers. Reportedly, to expand into China, Google has agreed to launch a search engine that complies with Chinese cybersecurity regulations. The project, code-named Dragonfly, has been underway since spring 2017 and accelerated after a December 2017 meeting between Google CEO Sundar Pichai and a top Chinese government official. Google has also launched a WeChat mini program and is reportedly developing a news app for China. It is building a cloud data center region in Hong Kong this year; joining the recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo, this will be the sixth GCP region in Asia Pacific.

With no official announcements, we can only wait and see what happens. But from the examples above, we can conclude that Google is trying to expand into China, and at full speed. To know more about Google's partnership plans in China, refer to the full coverage in Bloomberg's report.

Read next:
- Google to launch a censored search engine in China, codenamed Dragonfly
- Google Cloud Launches Blockchain Toolkit to help developers build apps easily


AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Natasha Mathur
30 Jul 2018
2 min read
AWS announced support for two new actions, redirect and fixed-response, in Application Load Balancer last week.

Elastic Load Balancing automatically distributes incoming application traffic across targets such as Amazon EC2 instances, IP addresses, and containers. One of the load balancer types it offers is the Application Load Balancer, which simplifies and improves the security of your application by using only the latest SSL/TLS ciphers and protocols. It is best suited for load balancing HTTP and HTTPS traffic and operates at the request level (layer 7).

Redirect and fixed-response support simplifies the deployment process while leveraging the scale, availability, and reliability of Elastic Load Balancing. Here is how the new features work.

The redirect action enables the load balancer to redirect incoming requests from one URL to another. This makes it possible to redirect HTTP requests to HTTPS, allowing more secure browsing, better search ranking, and a higher SSL/TLS score for your site. Redirects can also send users from an old version of an application to a new one.

The fixed-response action lets you control which client requests are served by your applications. It allows you to respond to incoming requests with HTTP error response codes and custom error messages directly from the load balancer, with no need to forward the request to the application.

Using both redirect and fixed-response actions in your Application Load Balancer can considerably improve the customer experience and the security of your user requests. Both actions are now available for Application Load Balancer in all AWS regions. For more details, check out the Elastic Load Balancing documentation page.
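As a sketch of the two new actions, the dictionaries below follow the general shape of action definitions in the Elastic Load Balancing (ELBv2) API; treat the field values as illustrative and check the AWS documentation for the authoritative schema. No AWS call is made here.

```python
def https_redirect_action():
    # Redirect action: send HTTP requests to HTTPS on port 443 with a
    # permanent (301) redirect. The #{...} placeholders preserve the
    # original host, path, and query string.
    return {
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "Host": "#{host}",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": "HTTP_301",
        },
    }

def maintenance_page_action():
    # Fixed-response action: answer directly from the load balancer with
    # a custom status code and body; the request never reaches a target.
    return {
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "503",
            "ContentType": "text/plain",
            "MessageBody": "Service temporarily unavailable",
        },
    }
```

In practice, dicts like these would be passed in the `Actions` list of an ELBv2 rule or listener update (for example via boto3's `create_rule` or `modify_listener`).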
Read next:
- Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
- Build an IoT application with AWS IoT [Tutorial]


Announcing Cloud Build, Google’s new continuous integration and delivery (CI/CD) platform

Vijin Boricha
27 Jul 2018
2 min read
Thanks to DevOps, software developers today no longer expect long release times and development cycles. Cloud platforms, already popular for providing flexible infrastructure across organizations, can offer even better solutions with the help of DevOps: applications can ship bug fixes and updates almost every day, but such an update cycle requires a CI/CD framework. Google recently released its new continuous integration/continuous delivery platform, Cloud Build, at Google Cloud Next ’18 in San Francisco.

Cloud Build is a complete continuous integration and continuous delivery platform that helps you build software at scale across all languages. It gives developers control over a variety of environments such as VMs, serverless, Firebase, or Kubernetes. Cloud Build supports Docker, giving developers the option of automating deployments to Google Kubernetes Engine or Kubernetes for continuous delivery, and it supports triggers for application deployment, launching an update whenever certain conditions are met.

Google has also tried to eliminate the pain of managing build servers by providing a free tier of Cloud Build with up to 120 build minutes per day and up to 10 concurrent builds. After the free 120 build minutes are exhausted, additional build minutes are charged at $0.0034 per minute.

Another plus is that Cloud Build automatically identifies package vulnerabilities before deployment, and it allows users to run builds on local machines and later deploy in the cloud. In case of problems, Cloud Build provides detailed insights to ease debugging via build errors and warnings, and it offers filtering of build results using tags or queries to identify time-consuming tests or slow-performing builds.
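As an illustration of the commit-to-deploy flow, a minimal Cloud Build configuration might look like the following; the image, deployment, and cluster names are placeholders, not values from the announcement.

```yaml
# Minimal cloudbuild.yaml sketch: build a Docker image, push it, and
# roll it out to a GKE deployment. Names here are hypothetical.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app',
           'my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```

Paired with a trigger on the repository, a push to the main branch would run these steps automatically.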
Key features of Google Cloud Build:

- Simpler and faster commit-to-deploy time
- Language-agnostic builds
- Options to create pipelines to automate deployments
- Flexibility to define custom workflows
- Build access control with Google Cloud security

Check out the Google Cloud Blog if you want to learn more about how to start implementing Google's CI/CD offerings.

Related links:
- Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
- Google’s event-driven serverless platform, Cloud Function, is now generally available
- Google Cloud Launches Blockchain Toolkit to help developers build apps easily

Google's new Cloud Services Platform could make hybrid cloud more accessible

Richard Gall
25 Jul 2018
3 min read
Hybrid cloud is becoming a reality for many businesses, something the software world is only just starting to acknowledge. At this year's Google Cloud Next, however, Google made a play for the hybrid market: its new Cloud Services Platform combines a number of tools, including Kubernetes and Istio, to support hybrid cloud solutions.

In his speech at Google Cloud Next, Urs Hölzle, Senior VP of technical infrastructure, said that although cloud computing offers many advantages, it's "still missing something... a simple way to combine the cloud with your existing on-premise infrastructure or with other clouds." That is the thinking behind Cloud Services Platform, which brings together a host of tools to make managing a cloud potentially much easier than ever before.

What's inside Google's Cloud Services Platform

In a blog post, Hölzle details what will be inside Cloud Services Platform:

- Service mesh: Availability of Istio 1.0 in open source, Managed Istio, and Apigee API Management for Istio
- Hybrid computing: GKE On-Prem with multi-cluster management
- Policy enforcement: GKE Policy Management, to take control of Kubernetes workloads
- Ops tooling: Stackdriver Service Monitoring
- Serverless computing: GKE Serverless add-on and Knative, an open source serverless framework
- Developer tools: Cloud Build, a fully managed CI/CD platform

A diagram in the post illustrates how the various components of the Cloud Services Platform fit together (via cloudplatform.googleblog.com).

Why Kubernetes and Istio are at the center of the Cloud Services Platform

Hölzle explains the development of cloud in the context of containers. "The move to software containers," he says, "has helped some [businesses] in simplifying and speeding up how we package and deliver software."
Kubernetes has been an integral part of this shift. And although Hölzle has a vested interest when he says that "today it's by far the most popular way to run and manage containers," he's ultimately right: Kubernetes is one of the fastest growing open source projects on the planet.

Read next: The key differences between Kubernetes and Docker Swarm

Hölzle then introduces Istio: "Istio extends Kubernetes into these higher level services and makes service to service communications secure and reliable in a way that's very easy on developers." Istio is due to hit its first stable release in the next couple of days.

Insofar as both Istio and Kubernetes make it possible to manage and monitor containers at scale, bringing them together in a single platform makes for a compelling proposition for engineers. The ability to bring in tools like Kubernetes and Istio might make hybrid cloud solutions a much more attractive proposition for business and technology leaders; for those already convinced, it could make life even better. According to Chen Goldberg, Google's Director of Engineering, speaking to journalists at Google Cloud Next, Cloud Services Platform "allows you to modernize wherever you are and at your own pace." Whether businesses buy into Google's vision remains to be seen, but it could well be a game-changer that threatens AWS dominance in the cloud world.

Read next:
- Go Cloud is Google’s bid to establish Golang as the go-to language of cloud
- Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
- Dispelling the myths of hybrid cloud


Go Cloud is Google's bid to establish Golang as the go-to language of cloud

Richard Gall
25 Jul 2018
2 min read
Google's Go is one of the fastest growing programming languages on the planet, and Google is now bidding to make it the go-to language for cloud development. Go Cloud, a new library featuring a set of tools to support cloud development, was revealed in a blog post published yesterday. "With this project," the team explains, "we aim to make Go the language of choice for developers building portable cloud applications."

Why Go Cloud now?

Google developed Go Cloud in response to demand for a way of writing simpler applications that aren't so tightly coupled to a single cloud provider. The team did considerable research into the key challenges and use cases in the Go community to arrive at Go Cloud. They found that the increased demand for multi-cloud or hybrid cloud solutions wasn't being fully met by engineering teams, because there is a trade-off between improving portability and shipping updates: the need to decouple applications was being pushed back by the day-to-day pressures of delivering new features. With Go Cloud, developers can build portable cloud solutions that aren't tied to one cloud provider.

What's inside Go Cloud?

Go Cloud is a library consisting of a range of APIs. The team has "identified common services used by cloud applications and have created generic APIs to work across cloud providers." These APIs include:

- Blob storage
- MySQL database access
- Runtime configuration
- An HTTP server configured with request logging, tracing, and health checking

At the moment Go Cloud is compatible with Google Cloud Platform and AWS, but the team plans "to add support for additional cloud providers very soon."

Try Go Cloud for yourself

If you want to see how Go Cloud works, you can try it out for yourself; the tutorial on GitHub is a good place to start. You can also stay up to date with news about the project by joining Google's dedicated mailing list.
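Go Cloud itself is a Go library, but the portability idea behind its generic APIs can be sketched in any language. Below is a minimal, hypothetical Python illustration (not Go Cloud's API): application code depends only on a generic interface, and providers are swapped behind it.

```python
from abc import ABC, abstractmethod

class Bucket(ABC):
    # Generic blob-storage interface: application code depends only on
    # this abstraction, never on a specific provider's SDK.
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class InMemoryBucket(Bucket):
    # Stand-in backend for local development and tests; real backends
    # would wrap S3, GCS, etc. behind the same interface.
    def __init__(self):
        self._blobs = {}
    def write(self, key, data):
        self._blobs[key] = data
    def read(self, key):
        return self._blobs[key]

def save_report(bucket: Bucket, name: str, body: bytes) -> None:
    # Application logic stays portable: changing cloud providers means
    # swapping the Bucket implementation, not rewriting this function.
    bucket.write(f"reports/{name}", body)

bucket = InMemoryBucket()
save_report(bucket, "q3.txt", b"quarterly numbers")
```

This is the decoupling trade-off Go Cloud targets: one generic API, with provider-specific drivers supplied underneath.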
Read next:
- Google Cloud Launches Blockchain Toolkit to help developers build apps easily
- Writing test functions in Golang [Tutorial]


Microsoft introduces ‘Immutable Blob Storage’, a highly protected object storage for Azure

Savia Lobo
06 Jul 2018
2 min read
Microsoft has released ‘Immutable Blob Storage’, a storage service built on the Azure platform that safeguards sensitive data. It is the latest addition to Microsoft's industry-specific cloud offerings. The service is aimed mainly at the financial sector but can help organizations in other sectors manage the information they own as well.

Immutable Blob Storage is a specialized version of Azure's existing object storage and includes a number of added security features:

- An environment can be configured so that the records inside it cannot easily be deleted by anyone, not even by the administrators who maintain the deployment.
- Companies can block edits to existing files. This setting can help banks and other heavily regulated organizations prove the validity of their records during audits.

Immutable Blob Storage costs the same as Azure's regular object service, and the two products are integrated with one another: the service can be used for both standard and immutable storage. This means IT no longer needs to manage the complexity of a separate archive storage solution. These features come on top of those carried over from the standard object service, including a data lifecycle management tool that allows organizations to set policies for managing their data.

Read more about this new feature on Microsoft Azure's blog post.

Read next:
- How to migrate Power BI datasets to Microsoft Analysis Services models [Tutorial]
- Microsoft releases Open Service Broker for Azure (OSBA) version 1.0
- Microsoft Azure IoT Edge is open source and generally available!
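The write-once-with-retention behavior described in the article can be modeled in a few lines. This is a toy illustration of the policy only, not Azure's API; names and the retention period are made up.

```python
from datetime import datetime, timedelta

class ImmutableStore:
    # Toy write-once store with time-based retention: a record can be
    # neither overwritten nor deleted until its retention period passes,
    # not even by an administrator.
    def __init__(self, retention_days):
        self.retention = timedelta(days=retention_days)
        self._records = {}  # name -> (written_at, data)

    def put(self, name, data, now):
        if name in self._records:
            raise PermissionError(f"{name} is immutable and cannot be edited")
        self._records[name] = (now, data)

    def delete(self, name, now):
        written_at, _ = self._records[name]
        if now - written_at < self.retention:
            raise PermissionError(f"{name} is still under retention")
        del self._records[name]

store = ImmutableStore(retention_days=7)
t0 = datetime(2018, 7, 6)
store.put("audit-2018-07.log", b"...", now=t0)
```

The audit-friendly property is that any attempted edit or early delete fails loudly rather than silently altering the record.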


Baidu releases Kunlun AI chip, China’s first cloud-to-edge AI chip

Savia Lobo
05 Jul 2018
2 min read
Baidu, Inc., the leading Chinese-language Internet search provider, has released Kunlun, China's first cloud-to-edge AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers. (K'un-Lun is also a place that exists in another dimension in Marvel's Immortal Iron Fist.)

AI applications have risen dramatically in popularity and adoption, bringing increased computational demands. Traditional chips have limited computational power, and accelerating larger AI workloads requires far more computational scaling. To meet this demand, Baidu designed Kunlun specifically for large-scale AI workloads, feeding the high processing demands of AI with a high-performance and cost-effective solution. It can be used for both cloud and edge instances, including data centers, public clouds, and autonomous vehicles.

Kunlun comes in two variants: the 818-300 model for training and the 818-100 model for inference. The chip leverages Baidu's AI ecosystem, including AI scenarios such as search ranking and deep learning frameworks like PaddlePaddle.

Key specifications of the Kunlun AI chip:

- Computational capability 30 times that of the original FPGA-based accelerator Baidu began developing in 2011
- Built on Samsung's 14 nm process
- 512 GB/second memory bandwidth
- 260 TOPS of computing performance while consuming 100 watts of power

Features of the Kunlun chip include:

- Support for open source deep learning algorithms
- Support for a wide range of AI applications, including voice recognition, search ranking, and natural language processing

Baidu plans to continue iterating on the chip and developing it progressively to enable the expansion of an open AI ecosystem.
To make it successful, Baidu continues to build "chip power" to meet the needs of fields such as intelligent vehicles and devices, and voice and image recognition. Read more about Baidu's Kunlun AI chip on the MIT website.

Read next:
- IBM unveils world’s fastest supercomputer with AI capabilities, Summit
- AI chip wars: Is Brainwave Microsoft’s Answer to Google’s TPU?

Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0

Savia Lobo
29 Jun 2018
2 min read
Zefflin Systems has announced Release 2.0 of its ServiceNow Plugin for Red Hat Ansible. The plugin helps IT operations easily map IT services to infrastructure for automatically deployed environments. Release 2.0 enables the use of the ServiceNow Catalog and Request Management modules to:

- Present deployment options to users
- Capture requests and route them for approval
- Invoke Ansible playbooks to auto-deploy server, storage, and networking

The plugin also provides full integration with ServiceNow Change Management for complete ITIL-compliant auditability.

Key features and benefits of ServiceNow Plugin 2.0:

- Support for AWX: With AWX, customers on the open source version of Ansible can easily integrate with ServiceNow.
- Automated catalog variable creation: The plugin reads the target Ansible playbook and automatically creates the input variables in the ServiceNow catalog entry. This significantly reduces implementation time and maintenance effort, so new playbooks can be onboarded in less time.
- Update on Ansible job completion: This extends the amount of information returned from an Ansible playbook and logged in the ServiceNow request. The enhancement dramatically improves the audit trail and provides a higher degree of process control.

The ServiceNow Plugin for Ansible enables DevOps with ServiceNow integration by establishing:

- Standardized development architectures
- An effective routing approval process
- An ITIL-compliant audit framework
- Faster deployment
- An automated process that frees up the team to focus on other activities

Read more about the ServiceNow Plugin on Zefflin Systems' official blog post.

Read next:
- Mastering Ansible – Protecting Your Secrets with Ansible
- An In-depth Look at Ansible Plugins
- Installing Red Hat CloudForms on Red Hat OpenStack


Microsoft releases Open Service Broker for Azure (OSBA) version 1.0

Savia Lobo
29 Jun 2018
2 min read
Microsoft has released version 1.0 of Open Service Broker for Azure (OSBA), with full support for Azure SQL, Azure Database for MySQL, and Azure Database for PostgreSQL. Microsoft first announced the preview of OSBA at KubeCon 2017. OSBA is the simplest way to connect apps running in cloud-native environments (such as Kubernetes, Cloud Foundry, and OpenShift) to the rich suite of managed services available on Azure.

OSBA 1.0 is built to connect mission-critical applications to Azure's enterprise-grade backing services and is ideal for running in a containerized environment like Kubernetes. In the recent announcement of a strategic partnership between Microsoft and Red Hat to provide an OpenShift service on Azure, Microsoft demonstrated OSBA using an OpenShift project template: OSBA will enable customers to deploy Azure services directly from the OpenShift console and connect them to their containerized applications running on OpenShift. Microsoft also plans to collaborate with Bitnami to bring OSBA into KubeApps, so customers can deploy solutions like WordPress built on Azure Database for MySQL and Artifactory on Azure Database for PostgreSQL.

Microsoft plans three additional focus areas for OSBA and the Kubernetes service catalog:

- Expanding the set of Azure services available in OSBA by re-enabling services such as Azure Cosmos DB and Azure Redis. These services will progress to a stable state as Microsoft learns how customers intend to use them.
- Continuing to work with the Kubernetes community to align the capabilities of the service catalog with the behavior customers expect. With this, the cluster operator will be able to choose which classes and plans are available to developers.
- Pursuing a longer-term vision for the Kubernetes service catalog and the Open Service Broker API that will let developers describe general requirements for a service, such as "a MySQL database of version 5.7 or higher".
Read the full coverage on Microsoft's official blog post.

Read next:
- GitLab is moving from Azure to Google Cloud in July
- Announces general availability of Azure SQL Data Sync
- Build an IoT application with Azure IoT [Tutorial]

Savia Lobo
28 Jun 2018
3 min read
Save for later

HashiCorp announces Consul 1.2 to ease Service segmentation with the Connect feature

HashiCorp recently announced the release of a new version of its distributed service mesh, Consul 1.2. This release introduces a new feature known as Connect, which automatically turns any existing Consul cluster into a service mesh solution. It works on any platform: physical machines, cloud, containers, schedulers, and more.

HashiCorp is a San Francisco-based organization that helps businesses resolve development, operations, and security challenges in infrastructure, so they can focus on other business-critical tasks. Consul is one such HashiCorp product; it is a distributed service mesh for connecting, securing, and configuring services across any runtime platform and any public or private cloud. The Connect feature in Consul 1.2 enables secure service-to-service communication with automatic TLS encryption and identity-based authorization. HashiCorp further stated that Connect is free and open source.

New functionalities in Consul 1.2

Encrypted Traffic while in transit
All traffic established through Connect uses mutual TLS. This ensures traffic is encrypted in transit and allows services to be safely deployed in low-trust environments.

Connection Authorization
Connect allows or denies service communication by creating a service access graph with intentions. Unlike a firewall, which uses IP addresses, Connect uses the logical name of the service. This means rules are scale independent: it doesn't matter if there is one web server or 100. Intentions can be configured using the UI, CLI, API, or HashiCorp Terraform.

Proxy Sidecars
Applications can use a lightweight proxy sidecar process to automatically establish inbound and outbound TLS connections, so existing applications work with Connect without any modification. Consul ships with a built-in proxy that doesn't require external dependencies, and also supports third-party proxies such as Envoy.
Native Integration
Performance-sensitive applications can natively integrate with the Consul Connect APIs to establish and accept connections without a proxy, for optimal performance and security.

Certificate Management
Consul creates and distributes certificates using a certificate authority (CA) provider. Consul has a built-in CA system that requires no external dependencies. This CA system integrates with HashiCorp Vault, and can also be extended to support any other PKI (Public Key Infrastructure) system.

Network and Cloud Independent
Connect uses standard TLS over TCP/IP, which allows it to work with any network configuration, provided the IP advertised by the destination service is reachable by the underlying operating system. Further, services can communicate across clouds without complex overlays.

Know more about these functionalities in detail by visiting the HashiCorp Consul 1.2 official blog post.

SDLC puts process at the center of software engineering
Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
What is a multi layered software architecture?
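The intention model described in the article can be pictured as an allow/deny lookup keyed on logical service names rather than IP addresses, which is why the rules stay the same whether a service has one instance or a hundred. A toy Python illustration of the idea follows; this is not Consul's implementation, and it omits Consul's wildcard and precedence rules.

```python
# Toy model of Connect intentions: rules reference logical service names,
# so they hold whether there is one web instance or a hundred.
class Intentions:
    def __init__(self, default_allow: bool = True):
        self.default_allow = default_allow
        self.rules = {}  # (source, destination) -> "allow" | "deny"

    def set(self, source: str, destination: str, action: str) -> None:
        assert action in ("allow", "deny")
        self.rules[(source, destination)] = action

    def allowed(self, source: str, destination: str) -> bool:
        action = self.rules.get((source, destination))
        if action is None:
            return self.default_allow  # no matching intention: fall back to the default policy
        return action == "allow"

intentions = Intentions()
intentions.set("web", "db", "allow")
intentions.set("billing", "db", "deny")
print(intentions.allowed("web", "db"))      # True
print(intentions.allowed("billing", "db"))  # False
```

In real Connect deployments the proxy enforces this decision at connection time, using the service identity carried in the mutual TLS certificate rather than a caller-supplied name.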
Vijin Boricha
27 Jun 2018
3 min read
Save for later

Cloud Filestore: A new high performance storage option by Google Cloud Platform

Google recently came up with a new storage option for developers in its cloud. Cloud Filestore, currently in beta, will launch next month, according to the Google Cloud Platform blog. Applications that require a filesystem interface and a shared filesystem for data can leverage this file storage service. It provides a fully managed Network Attached Storage (NAS) service that integrates with Google Compute Engine and Kubernetes Engine instances.

Developers can leverage Filestore for high-performing file-based workloads, and enterprises can now easily run applications that depend on a traditional file system interface on Google Cloud Platform. Traditionally, if an application needed a standard file system, developers would have to improvise a file server with a persistent disk. Filestore does away with that workaround and allows GCP developers to spin up storage as needed.

Filestore offers high throughput, low latency, and high IOPS (input/output operations per second). The service is available in two tiers: premium and standard. The premium tier costs $0.30/GB/month and promises a max throughput of 700 MB/s and 30,000 max IOPS. The standard tier costs $0.20/GB/month with 180 MB/s max throughput and 5,000 max IOPS.

A snapshot of Filestore features

Filestore was introduced at the Los Angeles region launch and primarily focused on the entertainment and media industries, where there is a great need for shared file systems for enterprise applications. But the service is not limited to the media industry; other industries that rely on similar enterprise applications can also benefit from it.

Benefits of using Filestore

A lightning speed experience
Filestore provides high IOPS for latency-sensitive workloads such as content management systems, databases, random I/O, or other metadata-intensive applications, resulting in minimal variability in performance.
Consistent performance throughout
Cloud Filestore ensures that one pays a predictable price for predictable performance. Users can independently choose their preferred tier (standard or premium) and storage capacity, fine-tuning the filesystem for a particular workload, and can expect consistent performance for that workload over time.

Simplicity at its best
Cloud Filestore, a fully managed, NoOps service, is integrated with the rest of the Google Cloud portfolio. One can easily mount Filestore volumes on Compute Engine VMs. Filestore is also tightly integrated with Google Kubernetes Engine, which allows containers to refer to the same shared data.

To know more about this exciting release, visit the Cloud Filestore official website.

Related Links
AT&T combines with Google cloud to deliver cloud networking at scale
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
GitLab is moving from Azure to Google Cloud in July
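Using the per-GB launch prices quoted above, the monthly cost of a Filestore share is a straight multiplication. A back-of-the-envelope sketch (based on the announced pricing; actual billing may differ):

```python
# Launch pricing from the announcement:
# premium $0.30/GB/month, standard $0.20/GB/month
PRICE_PER_GB = {"premium": 0.30, "standard": 0.20}

def monthly_cost(tier: str, capacity_gb: int) -> float:
    """Estimated monthly cost in USD for a Filestore share of the given tier and size."""
    return round(PRICE_PER_GB[tier] * capacity_gb, 2)

print(monthly_cost("premium", 1024))   # 1 TiB premium  -> 307.2
print(monthly_cost("standard", 1024))  # 1 TiB standard -> 204.8
```

So a terabyte-scale share runs a few hundred dollars a month on premium, with the standard tier trading roughly a third of the price for lower throughput and IOPS ceilings.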

Richard Gall
26 Jun 2018
2 min read
Save for later

GitLab is moving from Azure to Google Cloud in July

In a switch that contains just a subtle hint of saltiness, GitLab has announced that it is to move its code repositories from Microsoft Azure to Google Cloud on Saturday, July 28, 2018. The news comes just weeks after Microsoft revealed it was to acquire GitHub (this happened in early June, if you've lost track of time). While it's tempting to see this as a retaliatory step, it is in fact a coincidence: the migration was planned before the Microsoft and GitHub news was even a rumor.

Why is GitLab moving to Google Cloud?

According to GitLab's Andrew Newdigate, the migration to Google Cloud is being done in a bid to "improve performance and reliability." In a post on the GitLab blog, Newdigate explains that one of the key drivers of the team's decision is Kubernetes: "We believe Kubernetes is the future. It's a technology that makes reliability at massive scale possible." Kubernetes is a Google product, so it makes sense for GitLab to switch to Google's cloud offering to align its toolchain.

Read next: The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab

How GitLab's migration will happen

A central part of the GitLab migration is Geo, a tool built by GitLab that makes cloning and reproducing repositories easier for developers working in different locations. Essentially, it creates 'mirrors' of GitLab instances. That's useful for developers using GitLab, as it provides extra safety and security, but GitLab is using it for the migration itself.

Image via GitLab

Newdigate writes that GitLab has been running a parallel site on Google Cloud Platform as the migration unfolds, containing an impressive "200TB of Git data and 2TB of relational data in PostgreSQL."

Rehearsing the failover in production

Coordination and planning are everything when conducting such a substantial migration.
That's why GitLab's Geo, Production, and Quality teams meet several times a week to rehearse the failover. The process has a number of steps, and each rehearsal throws up new issues and problems, which are then documented and resolved by the relevant team. Given that confidence and reliability are essential to any version control system, building this into the migration process is a worthwhile activity.