
Tech News - Cloud Computing

175 Articles
Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Fatema Patrawala
23 Aug 2019
5 min read
On Tuesday, Reuters reported that Oracle's directors gave the go-ahead for a billion-dollar lawsuit filed against Larry Ellison and Safra Catz over the 2016 NetSuite deal. This was made possible by several board members who wrote an extraordinary letter to the Delaware court.

According to Reuters, in 2017 shareholders led by the Firemen's Retirement System of St. Louis alleged that Oracle's directors breached their duties when they approved the $9.3 billion acquisition of NetSuite – a company controlled by Oracle chairman Larry Ellison – at a huge premium over NetSuite's trading price. Shareholders alleged that Oracle's directors sanctioned Ellison's self-dealing, and claimed that the board was too entwined with Ellison to be entrusted with deciding whether the company should sue him and other directors over the deal. In an opinion reported by Reuters in May 2018, Vice Chancellor Sam Glasscock of the Delaware Chancery Court agreed that shareholders had shown it would have been futile for them to demand action from the board itself.

Three years after the $9.3 billion deal closed, three board members, including former U.S. Defense Secretary Leon Panetta, sent a letter on August 15th to Vice Chancellor Sam Glasscock III of the Court of Chancery in Georgetown, Delaware, approving the lawsuit as members of a special board committee known as the Special Litigation Committee.

In legal parlance, this kind of case is known as a derivative suit. "Since shareholders are generally allowed to file a lawsuit in the event that a corporation has refused to file one on its own behalf, many derivative suits are brought against a particular officer or director of the corporation for breach of contract or breach of fiduciary duty," the Justia site explains.
The letter went on to say that there had been an attempt to settle the suit, originally launched in 2017, through negotiation outside of court; when that attempt failed, the directors wrote to the court stating that the suit should be allowed to proceed. According to the letter, the lawsuit, originally filed by the Firemen's Retirement System of St. Louis, could be worth billions. As Reuters puts it: "One of the lead lawyers for the Firemen's fund, Joel Friedlander of Friedlander & Gorris, said at a hearing in June that shareholders believe the breach-of-duty claims against Oracle and NetSuite executives are worth billions of dollars. So in last week's letter, Oracle's board effectively unleashed plaintiffs' lawyers to seek ten-figure damages against its own members."

Oracle struggled to find its cloud footing and ended up buying NetSuite

TechCrunch noted that Larry Ellison was involved in setting up NetSuite in the late 1990s and was a major shareholder of NetSuite at the time of the acquisition. Oracle was struggling to find its cloud footing in 2016, and it was believed that by buying an established SaaS player like NetSuite, it could build out its cloud business much faster than by trying to develop something similar internally. On Hacker News, a few users commented that Oracle's directors overpaid for NetSuite and enriched Larry Ellison. One comment reads, "As you know people, as you learn about things, you realize that these generalizations we have are, virtually to a generalization, false. Well, except for this one, as it turns out. What you think of Oracle, is even truer than you think it is. There has been no entity in human history with less complexity or nuance to it than Oracle. And I gotta say, as someone who has seen that complexity for my entire life, it's very hard to get used to that idea. It's like, 'surely this is more complicated!' but it's like: Wow, this is really simple!
This company is very straightforward, in its defense. This company is about one man, his alter-ego, and what he wants to inflict upon humanity -- that's it! ...Ship mediocrity, inflict misery, lie our asses off, screw our customers, and make a whole shitload of money. Yeah... you talk to Oracle, it's like, 'no, we don't fucking make dreams happen -- we make money!' ...You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle."

Tumblr open sources its Kubernetes tools for better workflow integration

Melisha Dsouza
15 Jan 2019
3 min read
Yesterday, Tumblr announced that it is open sourcing three tools, developed at Tumblr itself, that help developers integrate Kubernetes into their workflows. Tumblr built these tools over the course of migrating its own workflows to Kubernetes. These are the three tools and their features as listed on the Tumblr blog:

#1 k8s-sidecar-injector

Containerizing complex applications can be time-consuming. Sidecars offer a way out, allowing developers to emulate older deployments with co-located services on virtual machines or physical hosts. The k8s-sidecar-injector dynamically injects sidecars, volumes, and environment data into pods as they are launched, reducing the overhead of copy-pasting code to add sidecars to deployments and cronjobs. The tool listens to the Kubernetes API for pod launches and determines which sidecar to inject. It is useful when containerizing legacy applications that require a complex sidecar configuration.

#2 k8s-config-projector

The k8s-config-projector is a command-line tool born of the need to access a subset of configuration data (feature flags, lists of hosts/IPs+ports, and application settings) and to be informed as soon as that data changes. Config data defines how deployed services operate at Tumblr. The Kubernetes ConfigMap resource lets users provide a service with configuration data and update that data in running pods without redeploying the application. To use this feature to configure Tumblr's services and jobs in a Kubernetes-native manner, the team had to bridge the gap between their canonical configuration store (a git repo of config files) and ConfigMaps.
k8s-config-projector combines the git repo hosting configuration data with "projection manifest" files that describe how to group/extract settings from the config repo and transmute them into ConfigMaps. Developers can now encode the set of configuration data their application needs into a projection manifest. The blog states that "as the configuration data changes in the git repository, CI will run the projector, projecting and deploying new ConfigMaps containing this updated data, without needing the application to be redeployed".

#3 k8s-secret-projector

Tumblr stores secure credentials (passwords, certificates, etc.) in access-controlled vaults. With the k8s-secret-projector, developers can request access to subsets of credentials for a given application without being granted access to the secrets as a whole. The tool ensures applications always have the appropriate secrets at runtime, while enabling automated systems (certificate refreshers, DB password rotations, etc.) to manage and update these credentials without redeploying or restarting the application. It does this by combining two repositories: projection manifests and credentials. A continuous integration (CI) tool such as Jenkins runs the tool against any change in the projection-manifests repository, generating new Kubernetes Secret YAML files, which continuous deployment then ships to any number of Kubernetes clusters. The tool also allows secrets to be deployed in Kubernetes environments by encrypting generated Secrets before they touch the disk. You can head over to Tumblr's official blog for examples of each tool.
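The projection idea behind k8s-config-projector can be pictured with a short sketch. This is a hypothetical illustration, not Tumblr's actual manifest format: the function name and the flat config-repo dict stand in for the real git repo and projection manifest, while the output follows the standard ConfigMap structure.

```python
# Hypothetical sketch of config "projection": pick a subset of settings
# from a config repo and wrap them in a Kubernetes ConfigMap manifest.
# The ConfigMap shape is standard Kubernetes; everything else here
# (function name, flat-dict repo) is invented for illustration.

def project_config(name, namespace, config_repo, keys):
    """Extract `keys` from a flat config repo into a ConfigMap dict."""
    missing = [k for k in keys if k not in config_repo]
    if missing:
        raise KeyError(f"config repo is missing keys: {missing}")
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        # ConfigMap data values must be strings
        "data": {k: str(config_repo[k]) for k in keys},
    }

repo = {"feature_x_enabled": True, "db_hosts": "10.0.0.1,10.0.0.2", "timeout_ms": 500}
cm = project_config("web-config", "production", repo, ["feature_x_enabled", "db_hosts"])
print(cm["data"])  # {'feature_x_enabled': 'True', 'db_hosts': '10.0.0.1,10.0.0.2'}
```

In a CI setup like the one the blog describes, a job would run such a projector on every commit to the config repo and apply the resulting ConfigMaps to the cluster.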

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, Kenny Coleman, the Enhancements Lead for Kubernetes 1.15 at VMware, published a "What's New in Kubernetes 1.15" video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15 – dynamic HA clusters with kubeadm, volume cloning, and CustomResourceDefinitions (CRDs) – and why each matters to users. Watch the video to hear Coleman's full talk: https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key themes of this release are extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year; the previous version, 1.14, released three months earlier, had 10 stable enhancements – the most stable features shipped in a single release. In an interview with The New Stack, Claire Laurence, the release team lead, said of this release, "We've had a fair amount of features progress to beta. I think what we've been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable." Let's have a brief look at the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: a user should not be able to tell whether they are interacting with a CustomResource or with a Golang-native resource. Hence, from v1.15 onwards, Kubernetes checks each CRD schema against a restriction called a "structural schema".
A structural schema enforces non-polymorphic and complete typing of each field in a CustomResource. Of the five enhancements in this area, CustomResourceDefinition defaulting is an alpha release: defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting is available as alpha in Kubernetes 1.15 for structural schemas. The other four enhancements are in beta:

- CustomResourceDefinition webhook conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to with native resources.
- CustomResourceDefinition OpenAPI publishing: OpenAPI publishing for CRDs ships with Kubernetes 1.15 as beta, but only for structural schemas.
- CustomResourceDefinition pruning: pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API; a field is unknown if it is not specified in the OpenAPI validation schema. It enforces that only data structures specified by the CRD developer are persisted to etcd. This matches the behavior of native resources and is available for CRDs as beta in 1.15.
- Admission webhook re-invocation and improvements: in earlier versions, mutating webhooks were called only once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to re-invocation by specifying reinvocationPolicy: IfNeeded; if a later mutating webhook modifies the object, the earlier webhook gets a second chance.

#2 Cluster lifecycle stability and usability improvements

The cluster lifecycle building block, kubeadm, continues to receive features and stability work, which is needed for bootstrapping production clusters efficiently.
kubeadm has promoted high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has become more robust in 1.15: kubeadm now seamlessly rotates all certificates before expiry. The kubeadm configuration file API moves from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

Continued improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage has been working to bring CSI to feature parity with in-tree functionality, including resizing and inline volumes, and introduces new alpha functionality in CSI that does not yet exist in the Kubernetes storage subsystem, such as volume cloning. Volume cloning lets users specify another PVC as a "DataSource" when provisioning a new volume; if the underlying storage system supports this functionality and implements the "CLONE_VOLUME" capability in its CSI driver, the new volume becomes a clone of the source volume.

Additional feature updates

- Support for Go modules in Kubernetes core.
- Continued preparation for cloud-provider extraction and code organization; the cloud-provider code has moved to kubernetes/legacy-cloud-providers for easier removal later and external consumption.
- kubectl get and describe now work with extensions.
- Nodes now support third-party monitoring plugins.
- A new scheduling framework for scheduler plugins is now alpha.
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha.
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will be retired in v1.16.

To know about the additional features in detail, check out the release notes.
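The pruning behaviour described under #1 is easy to picture with a toy version. Real structural schemas are OpenAPI v3 documents; the nested-dict "schema" below is a simplified stand-in for illustration only.

```python
# Toy model of CRD pruning: recursively drop any field the schema does
# not declare, so only developer-specified structures are persisted.
# Real Kubernetes pruning validates against OpenAPI v3 structural schemas;
# this dict-of-dicts schema is a simplified stand-in.

def prune(obj, schema):
    """Return a copy of `obj` with fields unknown to `schema` removed."""
    if not isinstance(obj, dict) or not isinstance(schema, dict):
        return obj  # scalars and other leaves pass through in this sketch
    return {k: prune(v, schema[k]) for k, v in obj.items() if k in schema}

schema = {"spec": {"replicas": {}, "image": {}}}
obj = {"spec": {"replicas": 3, "image": "nginx", "typo_field": True}, "junk": 1}
print(prune(obj, schema))  # {'spec': {'replicas': 3, 'image': 'nginx'}}
```

The point of the real feature is the same as the sketch: a client that sends a misspelled or undeclared field gets it silently dropped before the object reaches etcd, matching native-resource behaviour.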
For more details on Kubernetes 1.15, check out the Kubernetes blog.

Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool

Sugandha Lahoti
19 Oct 2018
3 min read
Atlassian has completely revamped its traditional Jira software, adding a simplified user experience, new third-party integrations, and a new product roadmaps tool. In yesterday's announcement on the official blog, the company says it has "rolled out an entirely new project experience for the next generation with a focus on making Jira Simply Powerful." Sean Regan, head of growth for Software Teams at Atlassian, said that with a more streamlined and simplified application, Atlassian hopes to appeal to a wider range of business execs involved in the software-creation process.

What's new in the revamped Jira software?

Powerful tech stack: Jira Software has been transformed into a modern cloud app, with an updated tech stack, permissions, and UX. Developers have more autonomy, administrators more flexibility, and advanced users more power. "Additionally, we've made Jira simpler to use across the board. Now, anyone who works with development teams can collaborate more easily."

Customizable workflows: to improve the user experience, Atlassian has introduced a new feature called build-your-own-boards. Users can customize their own workflow, issue types, and fields for a board, without administrator access and without jeopardizing other projects' customizations. This customizable workflow was inspired by Trello, the task-management app Atlassian acquired for $425 million in 2017. "What we tried to do in this new experience is mirror the power that people know and love about Jira, with the simplicity of an experience like Trello," said Regan.

Third-party integrations: the new Jira comes with almost 600 third-party integrations. These applications, Atlassian said, should help appeal to the broader range of job roles that interact with developers. Integrations include Adobe, Sketch, and InVision, as well as Facebook's Workplace and updated integrations for Gmail and Slack.
Jira Cloud Mobile: Jira Cloud Mobile lets developers access their projects from their smartphones. They can create, read, update, and delete issues and columns; groom the backlog; start and complete sprints; and respond to comments and tag relevant stakeholders, all from a mobile device.

Roadmapping tool: Jira now features a brand-new roadmaps tool that makes it easier for teams to see the big picture. "When you have multiple teams coordinating on multiple projects at the same time, shipping different features at different percentage releases, it's pretty easy for nobody to know what is going on," said Regan. "Roadmaps helps bring order to the chaos of software development."

Pricing for Jira varies by the number of users: $10 per user per month for teams of up to 10 people; $7 per user per month for teams of between 11 and 100 users; and varying prices for teams larger than 100. The company also offers a free 7-day trial. Read more about the release on the Jira blog, and have a look at Atlassian's public roadmap.

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers

Prasad Ramesh
01 Oct 2018
3 min read
On Friday, Cloudflare announced a fast distributed native key-value store for Cloudflare Workers, which it is calling Cloudflare Workers KV. Cloudflare Workers is a new kind of computing platform built on top of Cloudflare's global network of over 150 data centers. It lets developers write serverless code that runs in the fabric of the internet itself, engaging with users faster than other platforms can. Workers KV is built on a new architecture that eliminates cold starts and dramatically reduces the memory overhead of keeping code running. Values can also be written from within a Cloudflare Worker, and Cloudflare handles synchronizing keys and values across the network.

Cloudflare Workers KV features

Developers can augment existing applications or build new ones on Cloudflare's network using Workers and Workers KV, which can scale to support applications serving dozens or even millions of users. Some of its features are as follows.

Serverless storage: Cloudflare Workers created a serverless execution environment at each of Cloudflare's 153 data centers, but customers still had to manage their own storage. With Workers KV, global application access to a key-value store is just an API call away.

Responsive applications anywhere: serverless applications running on Cloudflare Workers get low-latency access to a globally distributed key-value store. Workers KV achieves this low latency by caching replicas of the keys and values stored across Cloudflare's network.

Build without scaling concerns: Workers KV lets developers focus their time on adding new capabilities to their serverless applications instead of scaling their key-value stores.
Key features of Cloudflare Workers KV, as listed on the website:

- Accessible from all 153 Cloudflare locations
- Supports values up to 64 KB and keys up to 2 KB
- Read and write from Cloudflare Workers
- An API to write to Workers KV from third-party applications
- Uses Cloudflare's robust caching infrastructure
- Set arbitrary TTLs for values
- Integrates with Workers Preview

Workers KV is currently in beta. To know more, visit the Cloudflare blog and the Cloudflare website.
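The KV semantics listed above (the key and value size limits, plus per-key TTLs) can be modeled with a small in-memory toy. This sketches only the API shape, not Cloudflare's implementation; in a real Worker the store is accessed through a bound namespace, not a Python class.

```python
import time

# Toy in-memory model of the Workers KV semantics described above:
# keys up to 2 KB, values up to 64 KB, optional per-key TTL.
# Illustration only; not Cloudflare's implementation.

class ToyKV:
    MAX_KEY, MAX_VALUE = 2 * 1024, 64 * 1024

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp or None)

    def put(self, key, value, ttl=None):
        if len(key.encode()) > self.MAX_KEY:
            raise ValueError("key exceeds 2 KB")
        if len(value) > self.MAX_VALUE:
            raise ValueError("value exceeds 64 KB")
        expiry = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expiry)

    def get(self, key):
        value, expiry = self._store.get(key, (None, None))
        if expiry is not None and time.monotonic() > expiry:
            del self._store[key]  # lazily expire on read
            return None
        return value

kv = ToyKV()
kv.put("greeting", b"hello", ttl=60)
print(kv.get("greeting"))  # b'hello'
```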

Go Cloud is Google's bid to establish Golang as the go-to language of cloud

Richard Gall
25 Jul 2018
2 min read
Google's Go is one of the fastest-growing programming languages on the planet, and Google is now bidding to make it the go-to language for cloud development. Go Cloud, a new library featuring a set of tools to support cloud development, was revealed in a blog post published yesterday. "With this project," the team explains, "we aim to make Go the language of choice for developers building portable cloud applications."

Why Go Cloud now?

Google developed Go Cloud in response to demand for a way of writing simpler applications that aren't so tightly coupled to a single cloud provider. The team did considerable research into the key challenges and use cases in the Go community and found that the increased demand for multi-cloud and hybrid-cloud solutions wasn't being fully leveraged by engineering teams, because of a trade-off between improving portability and shipping updates. Essentially, the need to decouple applications kept being pushed back by the day-to-day pressure of delivering new features. With Go Cloud, developers can solve this problem and build portable cloud solutions that aren't tied to one provider.

What's inside Go Cloud?

Go Cloud is a library consisting of a range of APIs. The team has "identified common services used by cloud applications and have created generic APIs to work across cloud providers." These APIs include:

- Blob storage
- MySQL database access
- Runtime configuration
- An HTTP server configured with request logging, tracing, and health checking

At the moment, Go Cloud is compatible with Google Cloud Platform and AWS, and the team plans "to add support for additional cloud providers very soon."

Try Go Cloud for yourself

If you want to see how Go Cloud works, you can try it out for yourself: this tutorial on GitHub is a good place to start. You can also stay up to date with news about the project by joining Google's dedicated mailing list.
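The portability idea behind Go Cloud's generic APIs translates to any language. As a rough sketch in Python (the class and function names below are invented for illustration; Go Cloud's actual blob API lives in its Go packages): application code depends on one abstract interface, and per-provider drivers plug in underneath.

```python
from abc import ABC, abstractmethod

# Sketch of the provider-agnostic pattern Go Cloud applies to blob storage:
# application code sees one generic interface; drivers (S3, GCS, ...) implement it.
# All names here are illustrative, not part of any real library.

class Bucket(ABC):
    @abstractmethod
    def write(self, key, data): ...
    @abstractmethod
    def read(self, key): ...

class InMemoryBucket(Bucket):
    """Stand-in driver; a real program would plug in an S3 or GCS driver."""
    def __init__(self):
        self._objects = {}
    def write(self, key, data):
        self._objects[key] = data
    def read(self, key):
        return self._objects[key]

def save_report(bucket, report):
    # Only the generic Bucket interface is used here, so swapping cloud
    # providers never requires touching this application logic.
    bucket.write("reports/latest", report)

b = InMemoryBucket()
save_report(b, b"q3 numbers")
print(b.read("reports/latest"))  # b'q3 numbers'
```

This is exactly the trade-off the article describes: the application stays decoupled from the provider, so portability no longer competes with shipping features.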
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019

Melisha Dsouza
28 Nov 2018
4 min read
Day 2 of the Amazon AWS re:Invent 2018 conference kicked off with just as much enthusiasm as the first. With more announcements and releases scheduled for the day, the conference is proving to be a real treat for AWS developers. Alongside announcements such as Amazon Comprehend Medical and new container products in the AWS Marketplace, Amazon announced Amazon DynamoDB Transactions and Amazon CloudWatch Logs Insights. We will also take a look at Amazon re:Inforce 2019, a new conference dedicated to cloud security.

Amazon DynamoDB Transactions

Customers have used Amazon DynamoDB for many use cases, from building microservices and mobile backends to implementing gaming and Internet of Things (IoT) solutions. Amazon DynamoDB is a non-relational database delivering reliable performance at any scale: a fully managed, multi-region, multi-master database with consistent single-digit-millisecond latency, built-in security, backup and restore, and in-memory caching. With native support for transactions, DynamoDB now helps developers easily implement business logic that requires multiple all-or-nothing operations across one or more tables. DynamoDB transactions provide the atomicity, consistency, isolation, and durability (ACID) properties across one or more tables within a single AWS account and region; DynamoDB is the only non-relational database that supports transactions across multiple partitions and tables. Two new DynamoDB operations have been introduced for handling transactions:

- TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. It can optionally check prerequisite conditions that must be satisfied before making updates.
- TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations.
If a TransactGetItems request is issued on an item that is part of an active write transaction, the read transaction is canceled.

Amazon CloudWatch Logs Insights

Many AWS services create logs. The data points, patterns, trends, and insights embedded in these logs can be used to understand how applications and AWS resources are behaving, identify room for improvement, and address operational issues. However, raw logs are huge, which makes analysis difficult: with individual AWS customers routinely generating 100 terabytes or more of log files each day, the work becomes complex and time-consuming. Enter CloudWatch Logs Insights, designed to work at cloud scale with no setup or maintenance required. It churns through massive logs in seconds and provides fast, interactive queries and visualizations. CloudWatch Logs Insights includes a sophisticated ad-hoc query language, with commands to perform complicated operations efficiently. It is a fully managed service, can handle any log format, and auto-discovers fields from JSON logs. What's more, users can visualize query results using line and stacked-area charts, and add queries to a CloudWatch dashboard.

AWS re:Inforce 2019

In addition to these releases, Amazon announced that AWS is launching, for the very first time, a conference dedicated to cloud security, called AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Exhibit and Conference Center. Here is what the conference is expected to cover:

- Deep dives into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools.
- Direct access for customers to the latest security research and trends from subject-matter experts, along with the opportunity to participate in hands-on exercises with AWS services.
The two-day conference offers multiple learning tracks, including a technical track and a business-enablement track, designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, and risk and compliance officers. It will also feature sessions on Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, and much more. Head over to What's New with AWS to stay updated on upcoming AWS announcements.
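The all-or-nothing semantics of the TransactWriteItems operation described earlier show up directly in the shape of its request. The sketch below only builds that payload as a plain dict (no AWS call is made); the table and attribute names are invented for illustration, and with boto3 such a dict would be passed to client.transact_write_items(**request).

```python
# Build a TransactWriteItems payload for an all-or-nothing transfer:
# debit one item and credit another, with a prerequisite balance check.
# Table/attribute names are illustrative; no AWS call happens here.

def transfer_request(from_acct, to_acct, amount):
    """Debit `from_acct` and credit `to_acct` atomically, or not at all."""
    return {
        "TransactItems": [
            {"Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": from_acct}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                "ConditionExpression": "Balance >= :amt",  # prerequisite check
                "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
            }},
            {"Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": to_acct}},
                "UpdateExpression": "SET Balance = Balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
            }},
        ]
    }

req = transfer_request("alice", "bob", 25)
print(len(req["TransactItems"]))  # 2
```

If either update fails its condition, DynamoDB cancels the entire transaction, which is the ACID behavior this release adds.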

Dr. Fei Fei Li, Google's AI Cloud head steps down amidst speculations; Dr. Andrew Moore to take her place

Melisha Dsouza
11 Sep 2018
4 min read
Yesterday, Diane Greene, the CEO of Google Cloud, announced in a blog post that Chief Artificial Intelligence Scientist Dr. Fei-Fei Li will be replaced by Dr. Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, at the end of this year. The blog further mentions that, as originally planned, Dr. Li will return to her professorship at Stanford and will in the meanwhile transition to being an AI/ML advisor for Google Cloud. The timing of the transition, following the controversies surrounding Google and the Pentagon's Project Maven, is not lost on many.

Flashback: the 'Project Maven' protest and its outcry

In March 2017 it was revealed that Google Cloud, headed by Greene, had signed a secret $9m contract with the United States Department of Defense called 'Project Maven'. The project aimed to develop an AI system that could help recognize people and objects captured in military drone footage. The contract was crucial to Google Cloud Platform gaining a key US government FedRAMP authorization, and the project was expected to help Google win future government work worth potentially billions of dollars. Planned for non-offensive purposes only, Project Maven also had the potential to expand into a $250m deal. Google provided the Department of Defense with its TensorFlow APIs to assist in object recognition, which the Pentagon believed would eventually turn its stores of video into "actionable intelligence". In September 2017, in a leaked email reviewed by The New York Times, Scott Frohman, Google's head of defense and intelligence sales, asked Dr. Li, Google Cloud AI's leader and chief scientist, for direction on the "burning question" of how to publicize this news. She replied: "Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI — if not THE most.
This is red meat to the media to find all ways to damage Google.” As predicted by Dr. Li, the project was met with outrage by more than 3000 Google employees who believed that Google shouldn't be involved in any military work and that algorithms have no place in identifying potential targets. This caused a rift in Google’s workforce, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. Many employees were "deeply concerned" that the data collected by Google integrated with military surveillance data for targeted killing. Fast forward to June 2018 where Google stated that it would not renew its contract (to expire in 2019) with the Pentagon. Dr. Li’s timeline at Google During her two year tenure, Dr. Li oversaw some remarkable work in accelerating the adoption of AI and ML by developers and Google Cloud customers. Considered as one of the most talented machine learning researchers in the world, Dr. Li has published more than 150 scientific articles in top-tier journals and conferences including Nature, Journal of Neuroscience, New England Journal of Medicine and many more. Dr. Li is the inventor of ImageNet and the ImageNet Challenge, a large-scale effort contributing to the latest developments in computer vision and deep learning in AI. Dr. Li has been a keynote or invited speaker at many conferences. She has been in the forefront of receiving prestigious awards for innovation and technology while being an acclaimed feature in many magazines. In addition to her contributions in the world of tech, Dr Li also is a co-founder of Stanford’s renowned SAILORS outreach program for high school girls and the national non-profit AI4ALL. The controversial email from Dr.Li can lead to one thinking if the transition was made as a result of the events of 2017. However, no official statement has been released by Google or Dr. Li on why she is moving on. Head over to Google’s Blog for the official announcement of this news. 
Google CEO Sundar Pichai won’t be testifying to Senate on election interference
Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal
Epic games CEO calls Google “irresponsible” for disclosing the security flaw in Fortnite Android Installer before patch was ready

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Fatema Patrawala
02 Sep 2019
5 min read
Last Friday, the Kubernetes team announced the release of etcd 3.4. etcd 3.4 focuses on stability, performance, and ease of operation. It includes features like pre-vote and non-voting members, and improvements to the storage backend and client balancer.

Key features and improvements in etcd v3.4

Better backend storage

etcd v3.4 includes a number of performance improvements for large-scale Kubernetes workloads. In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there is no write (e.g. “read-only range request ... took too long to execute”). Previously, the storage backend's commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit does not block reads, which improves long-running read transaction performance. The team has further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. The team also ran the Kubernetes 5000-node scalability test on GCE with this change and observed similar improvements.

Improved raft voting process

The etcd server implements the Raft consensus algorithm for data replication. Raft is a leader-based protocol: data is replicated from leader to follower, a follower forwards proposals to the leader, and the leader decides what to commit. The leader persists and replicates an entry once it has been agreed by a quorum of the cluster. The cluster members elect a single leader, and all other members become followers. The elected leader periodically sends heartbeats to its followers to maintain its leadership, and expects responses from each follower to keep track of its progress.
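The quorum commit rule described above can be sketched in a few lines of Python. This is an illustration of the general Raft rule, not etcd's actual Go implementation:

```python
def commit_index(match_indexes):
    """Raft commit rule sketch: an entry is committed once a quorum (majority)
    of members, leader included, has persisted it. match_indexes holds the
    highest log index known to be stored on each member."""
    ranked = sorted(match_indexes, reverse=True)
    # The (n//2)-th highest index is, by construction, stored on a majority.
    return ranked[len(match_indexes) // 2]

# Five members: entry 7 is persisted on three of them, so it is committed.
# commit_index([9, 8, 7, 5, 3]) -> 7
```

The same rule explains why clusters are usually sized with an odd number of members: going from 3 to 4 members raises the quorum from 2 to 3 without tolerating any additional failure.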
In its simplest form, a Raft leader steps down to a follower when it receives a message with a higher term, without any further cluster-wide health checks. This behavior can affect overall cluster availability. For instance, a flaky (or rejoining) member drops in and out and starts a campaign. This member ends up with higher terms, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives such a message of a higher term, it reverts back to follower. This becomes more disruptive when there's a network partition: whenever the partitioned node regains its connectivity, it can trigger a leader re-election. To address this issue, etcd Raft introduces a new node state, pre-candidate, with the pre-vote feature. The pre-candidate first asks other servers whether it's up-to-date enough to get votes. Only if it can get votes from the majority does it increment its term and start an election. This extra phase improves the robustness of leader election in general, and helps the leader remain stable as long as it maintains its connectivity with the quorum of its peers.

Introducing a new raft non-voting member, “Learner”

The challenge with membership reconfiguration is that it often leads to quorum size changes, which are prone to cluster unavailability. Even if it does not alter the quorum, clusters with membership changes are more likely to experience other underlying problems. To address these failure modes, etcd introduced a new node state, “Learner”, which joins the cluster as a non-voting member until it catches up to the leader's logs. This means the learner still receives all updates from the leader, while it does not count towards the quorum, which is used by the leader to evaluate peer activeness. The learner only serves as a standby node until promoted. This relaxed quorum requirement provides better availability during membership reconfiguration, as well as operational safety.
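A minimal sketch of the pre-vote idea in Python. This is illustrative only, not etcd's Go code: the real check compares full log term/index pairs and handles many more cases:

```python
def pre_vote_grant(candidate_term, candidate_last_index,
                   voter_term, voter_last_index):
    """A voter grants a pre-vote only if the pre-candidate looks at least as
    up-to-date as the voter itself (simplified to one term/index comparison)."""
    return (candidate_term, candidate_last_index) >= (voter_term, voter_last_index)

def can_start_election(grants, cluster_size):
    # Only after a majority of pre-votes does the node increment its term
    # and begin a real election -- a stale, rejoining member fails this
    # check and cannot disrupt a healthy leader.
    return grants > cluster_size // 2
```

A flaky member that missed recent log entries collects too few grants, so it never increments its term, which is exactly the disruption the pre-vote phase prevents.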
Improvements to client balancer failover logic

etcd is designed to tolerate various system and network faults. By design, even if one node goes down, the cluster “appears” to be working normally, by providing one logical cluster view of multiple servers. But this does not guarantee the liveness of the client. Thus, the etcd client has implemented a different set of intricate protocols to guarantee its correctness and high availability under faulty conditions. Historically, the etcd client balancer relied heavily on the old gRPC interface: every gRPC dependency upgrade broke client behavior, and a majority of development and debugging efforts were devoted to fixing those behavior changes. As a result, its implementation became overly complicated, with bad assumptions about server connectivity. The primary goal in this release was to simplify the balancer failover logic in the etcd v3.4 client: instead of maintaining a list of unhealthy endpoints, the client simply switches to the next endpoint whenever it gets disconnected from the current one.

To know more about this release, check out the Changelog page on GitHub.

What's new in cloud and networking this week?

VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
Pivotal open sources kpack, a Kubernetes-native image build service
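The simplified failover policy amounts to a round-robin over the endpoint list. A hypothetical Python stand-in for the client's internal balancer (not etcd's actual API) makes the idea concrete:

```python
class RoundRobinBalancer:
    """Sketch of the simplified etcd v3.4 failover idea: no unhealthy-endpoint
    bookkeeping -- on disconnect, just move to the next endpoint in the list."""

    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.index = 0

    @property
    def current(self):
        return self.endpoints[self.index]

    def on_disconnect(self):
        # Wrap around so every endpoint eventually gets retried.
        self.index = (self.index + 1) % len(self.endpoints)
        return self.current

b = RoundRobinBalancer(["10.0.0.1:2379", "10.0.0.2:2379", "10.0.0.3:2379"])
b.on_disconnect()   # -> "10.0.0.2:2379"
```

The appeal of this design is that there is no stale health state to reconcile after a gRPC upgrade: the only state is an index into the endpoint list.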

Google introduces Cloud HSM beta hardware security module for crypto key security

Prasad Ramesh
23 Aug 2018
2 min read
Google has rolled out a beta of Cloud HSM, a cloud-hosted hardware security module aimed at protecting cryptographic keys. Cloud HSM allows better security for customers without them having to worry about operational overhead: customers can store encryption keys in hardware that meets Federal Information Processing Standard Publication (FIPS) 140-2 Level 3. FIPS is a U.S. government security standard for cryptographic modules in non-military use, certified for use in financial and health-care institutions.

An HSM is a specialized hardware component designed to encrypt small data blocks, in contrast to the larger blocks managed with the Key Management Service (KMS). Cloud HSM is available now and is fully managed by Google, meaning all the patching, scaling, cluster management, and upgrades are done automatically with no downtime. The customer has full control of the Cloud HSM service via the Cloud KMS APIs.

Il-Sung Lee, Product Manager at Google, stated: “And because the Cloud HSM service is tightly integrated with Cloud KMS, you can now protect your data in customer-managed encryption key-enabled services, such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc, with a hardware-protected key.”

In addition to Cloud HSM, Google has also released betas of asymmetric key support for both Cloud KMS and Cloud HSM. Users can now create a variety of asymmetric keys for decryption or signing operations, which means they can store the keys they use for PKI or code signing in a Google Cloud managed keystore. “Specifically, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing operations, while RSA 2048, RSA 3072, and RSA 4096 keys will also have the ability to decrypt blocks of data.”

For more information visit the Google Cloud blog, and for HSM pricing visit the Cloud HSM page.
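The reason an HSM only needs to handle small data blocks is the envelope-encryption pattern: bulk data is encrypted locally with a data key, and the HSM only ever encrypts ("wraps") that small key. The toy Python sketch below illustrates the flow; the XOR keystream cipher here is deliberately insecure and stands in for real AES, and ToyHSM is an invented stand-in for the Cloud HSM/KMS service:

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher (illustration only -- never use in practice).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class ToyHSM:
    """Stand-in for an HSM: it only ever sees small key blocks."""
    def __init__(self):
        self._master = secrets.token_bytes(32)  # never leaves the module
    def wrap(self, data_key: bytes) -> bytes:
        return keystream_xor(self._master, data_key)
    def unwrap(self, wrapped: bytes) -> bytes:
        return keystream_xor(self._master, wrapped)

# Envelope encryption: bulk data is encrypted locally with a data key;
# the HSM only wraps that small data key.
hsm = ToyHSM()
data_key = secrets.token_bytes(32)
ciphertext = keystream_xor(data_key, b"large application payload ...")
wrapped_key = hsm.wrap(data_key)   # stored alongside the ciphertext
# Decrypt later by unwrapping the key first:
plaintext = keystream_xor(hsm.unwrap(wrapped_key), ciphertext)
```

The design point to notice is that the expensive, hardware-protected operation touches 32 bytes regardless of how large the payload is.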
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Machine learning APIs for Google Cloud Platform
Top 5 cloud security threats to look out for in 2018
AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo
12 Mar 2019
4 min read
Amazon Web Services announced a new open source distribution of Elasticsearch, named Open Distro for Elasticsearch, in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch is focused on driving innovation with value-added features so that users have a feature-rich option that is fully open source. It gives developers the freedom to contribute open source, value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project.

The need for Open Distro for Elasticsearch

Elasticsearch's Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018, the community has witnessed a significant intermixing of proprietary code into the code base. While an Apache 2.0-licensed download is still available, there is a serious lack of clarity as to what customers who care about open source are getting and what they can depend on. “Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid).” Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to work only on open source to contribute and participate. The innovation focus has also shifted from furthering the open source distribution to making the proprietary distribution popular, which means that the majority of new Elasticsearch users are now, in fact, running proprietary software. “We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path”, AWS states in its blog.

These changes have also created uncertainty about the longevity of the open source project, as it becomes less innovation-focused. Customers also want the freedom to run the software anywhere and to self-support at any point if they need to. This has led to the creation of Open Distro for Elasticsearch.

Features of Open Distro for Elasticsearch

Keeps data security in check: Open Distro for Elasticsearch protects a user's cluster with advanced security features, including a number of authentication options such as Active Directory and OpenID, encryption in flight, fine-grained access control, detailed audit logging, advanced compliance features, and more.

Automatic notifications: Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system. This enables a user to monitor data and send notifications automatically to their stakeholders. It also includes an intuitive Kibana interface and a powerful API, which further eases setting up and managing alerts.

Increased SQL query interactions: It allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. SQL offers more than 40 functions, data types, and commands, including join support and direct export to CSV.

Deep diagnostic insights with Performance Analyzer: Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. Performance Analyzer runs independently without any performance impact, even when Elasticsearch is under stress.
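As a rough sketch of how the SQL feature is driven over HTTP: the SQL plugin accepts queries at the POST /_opendistro/_sql endpoint (documented by the project), with the query carried in a small JSON body. The index and field names below are invented for illustration:

```python
import json

def sql_request(query):
    """Build the method, path, and JSON body for an Open Distro SQL call.
    The /_opendistro/_sql endpoint path is from the project's docs; the
    index and field names used by callers are up to the user's data."""
    return "POST", "/_opendistro/_sql", json.dumps({"query": query})

method, path, body = sql_request(
    "SELECT status, COUNT(*) FROM web_logs GROUP BY status")
# Send with any HTTP client (e.g. urllib.request), with the header
# Content-Type: application/json, against the cluster's REST port.
```

Because the request is plain JSON over REST, any SQL-literate tool that can issue HTTP calls can integrate with the cluster without an Elasticsearch client library.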
According to the AWS Open Source Blog, “With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support.” Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, “We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology.” Christian Kaiser, VP Platform Engineering at Netflix, said, “Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution that we can be confident will remain open source and community-driven.”

To know more about Open Distro for Elasticsearch in detail, visit the official AWS blog post.

GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
How does Elasticsearch work? [Tutorial]

GitHub’s new integration for Jira Software Cloud aims to provide teams a seamless project management experience

Bhagyashree R
08 Oct 2018
2 min read
Last week, GitHub announced that it has built a new integration that enables software teams to connect their code on GitHub.com to their projects on Jira Software Cloud. This integration updates Jira with data from GitHub, providing better visibility into the current status of your project.

What are the advantages of this new GitHub and Jira integration?

No need to constantly switch between GitHub and Jira: With your GitHub account linked to Jira, your team can see the branches, commit messages, and pull requests in the context of the Jira tickets they're working on. The integration provides a deeper connection by allowing you to view references to Jira in GitHub issues and pull requests.

Source: GitHub

Improved capabilities: This new GitHub-managed app provides improved security, along with the following capabilities:

Smart commits: You can use smart commits to update the status, leave a comment, or log time without having to leave your command line or GitHub.
View from within a Jira ticket: You can view associated pull requests, commits, and branches from within a Jira ticket.
Searching Jira issues: You can search for Jira issues based on related GitHub information, such as open pull requests.
Check the status of development work: The status of development work can be seen from within Jira projects.
Keep Jira issues up to date: You can automatically keep your Jira issues up to date while working in GitHub.

Install the Jira Software and GitHub app to connect your GitHub repositories to your Jira instance. The previous version of the Jira integration will be deprecated in favor of this new GitHub-maintained integration. Once the migration is complete, the legacy integration (DVCS connector) is disabled automatically. Read the full announcement on the GitHub blog.

4 myths about Git and GitHub you should know about
GitHub addresses technical debt, now runs on Rails 5.2.1
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
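The smart commits mentioned above work by embedding a Jira issue key and #-prefixed commands directly in the commit message. A simplified Python sketch of how such a message might be parsed (illustrative only; Jira's real parser supports richer syntax such as #transition commands and multiple issue keys):

```python
import re

def parse_smart_commit(message):
    """Extract the first Jira issue key (e.g. PROJ-42) and any #command
    arguments from a commit message. Hypothetical helper for illustration."""
    issue = re.search(r"\b([A-Z][A-Z0-9]+-\d+)\b", message)
    commands = re.findall(r"#(\w+)\s+([^#]*)", message)
    return (issue.group(1) if issue else None,
            {cmd: arg.strip() for cmd, arg in commands})

key, cmds = parse_smart_commit("PROJ-42 #time 2h #comment fixed the build")
# key == "PROJ-42"; cmds == {"time": "2h", "comment": "fixed the build"}
```

This is why smart commits let you log time or comment "without leaving your command line": the information rides along in the message of an ordinary git commit.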

#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent out an internal email to the We Won't Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

In June last year, an alliance of more than 500 Amazon employees signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking Amazon to abandon its contracts with government agencies. It seems that those protests are ramping up again. The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon's cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations, and camps for migrants at the border. The employees have also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders rejected a proposal to ban the sale of its facial recognition tech to governments. They also rejected eleven other proposals made by employees, including a climate resolution and salary transparency. "The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better," the email read.

The protests broke out at Amazon's AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon's ties with ICE.

https://twitter.com/altochulo/status/1149305189800775680
https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. "I'm more than willing to have a conversation, but maybe they should let me finish first," Vogels said amidst the protests; the protesters' audio was cut off on Amazon's official livestream of the event, per ZDNet. "We'll all get our voices heard," he said before returning to his planned speech.

According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants' employment information, phone records, immigration history, and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others, and that working with such a company is harmful to Amazon's reputation. The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft, and Salesforce, where workers have protested to get their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead, the company has argued that the government should determine what constitutes "acceptable use" of the technology it sells. "As we've said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully," Amazon said to BuzzFeed News. "There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we've provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions."

Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with the Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210
https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds
Amazon workers protest on its Prime day, demand a safe work environment and fair wages
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact
Is cloud mining profitable?

Richard Gall
24 May 2018
5 min read
Cloud mining has become one of the biggest trends in Bitcoin and cryptocurrency. The reason is simple: it makes mining Bitcoin incredibly easy. By using the cloud, rather than hardware, to mine Bitcoin, you can avoid the stress and inconvenience of managing hardware. Instead of using the processing power of your own machines, you share the processing power of the cloud (or more specifically, a remote data center). In theory, cloud mining should be much more profitable than mining with your own hardware. However, it's easy to be caught out. At best, some schemes are useless; at worst, they could be seen as a bit of a pyramid scheme. For this reason, it's essential you do your homework. Although there are risks associated with cloud mining, it does have benefits: arguably it makes Bitcoin, and cryptocurrency in general, more accessible to ordinary people. Provided people get to know the area, what works and what definitely doesn't, it could be a positive opportunity for many people.

How to start cloud mining

Let's first take a look at the different methods of cloud mining. If you're going to do it properly, it's worth taking some time to consider your options. At a top level there are three different types of cloud mining.

Renting out your hashing power: This is the most common form of cloud mining. To do this, you simply 'rent out' a certain amount of your computer's hashing power. In case you don't know, hashing power is essentially your hardware's processing power; it's what allows your computer to use and run algorithms.

Hosted mining: As the name suggests, this is where you use an external machine to mine Bitcoin. To do this, you'll have to sign up with a cloud mining provider. If you do this, you'll need to be clear on their terms and conditions, and take care when calculating profitability.

Virtual hosted mining: This is a hybrid approach to cloud mining, in which you use a personal virtual server and then install the required software. This approach can be a little more fun, especially if you want to build your own Bitcoin mining setup, but of course it poses challenges too.

Depending on what you want to achieve, any of these options may be right for you.

Which cloud mining provider should you choose?

As you'd expect from a trend that's growing rapidly, there's a huge number of cloud mining providers out there. The downside is that there are plenty of dubious providers that aren't going to be profitable for you. For this reason, it's best to do your research and read what others have to say. One of the most popular cloud mining providers is Hashflare. With Hashflare, you can mine a number of different cryptocurrencies, including Bitcoin, Ethereum, and Litecoin. You can also select your 'mining pool', which is something many providers won't let you do. Controlling the profitability of cloud mining can be difficult, so having control over your mining pool could be important. A mining pool is a bit like a hedge fund: a group of people pool together their processing resources, and the payout is split according to the amount of work each put in toward creating a 'block', which is essentially a record or ledger of transactions. Hashflare isn't the only cloud mining solution available. Genesis Mining is another very high-profile provider, and it's incredibly accessible: you can begin a Bitcoin mining contract for just $15.99. Of course, the more you invest, the better the deal you'll get. For a detailed exploration and comparison of cloud mining solutions, this TechRadar article is very useful. Take a look before you make any decisions!

How can I ensure cloud mining is profitable?

It's impossible to ensure profitability. Remember: cloud mining providers are out to make a profit, and although you might well make a profit too, it's not necessarily in their interests to be paying money out to you. Calculating cloud mining profitability can be immensely complex. To do it properly you need to be clear on all the elements that will impact profitability. These include:

The cryptocurrency you are mining
How much mining will cost per unit of hashing power
The growth rate of block difficulty
How the network hashrate might increase over the length of your mining contract

There are lots of mining calculators out there that you can use to estimate how profitable cloud mining is likely to be. This article is particularly good at outlining how to calculate cloud mining profitability. Its conclusion is an interesting take that's worth considering before you start: is it profitable because the underlying cryptocurrency went up, or because the mining itself was profitable? As the writer points out, if it's the cryptocurrency's value, then you might just be better off buying the cryptocurrency.

Read next

A brief history of Blockchain
Write your first Blockchain: Learning Solidity Programming in 15 minutes
“The Blockchain to Fix All Blockchains” – Overledger, the meta blockchain, will connect all existing blockchains
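The profitability factors listed above can be folded into a rough back-of-the-envelope model. The function below is a sketch under simplifying assumptions (a flat daily fee, smooth daily difficulty growth, and a price left out entirely), not a real calculator; every parameter is an estimate you must supply yourself:

```python
def cloud_mining_profit(contract_cost, hashrate, reward_per_hash_day,
                        daily_fee, difficulty_growth, days):
    """Rough profitability model for a fixed-length cloud mining contract.
    reward_per_hash_day -- coins earned per unit of hashrate on day 0
    difficulty_growth   -- fractional daily growth in network difficulty
    Returns total coins earned net of fees and the contract cost."""
    total = 0.0
    reward = hashrate * reward_per_hash_day
    for _ in range(days):
        total += reward - daily_fee
        reward /= (1 + difficulty_growth)  # rising difficulty erodes daily yield
    return total - contract_cost

# With zero difficulty growth and no fees, a 365-day contract just sums
# the daily rewards: cloud_mining_profit(100, 1.0, 1.0, 0.0, 0.0, 365) -> 265.0
```

Note what the model deliberately leaves out: the coin's exchange rate. As the article's conclusion suggests, separating "mining was profitable" from "the coin appreciated" is exactly the comparison this kind of model lets you make.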

Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions

Savia Lobo
05 Nov 2019
3 min read
Yesterday, at Microsoft Ignite 2019 in Orlando, the company released a preview of its first full-stack, scalable, general open cloud quantum ecosystem, Azure Quantum. For developers, Microsoft has created the open-source Quantum Development Kit, which includes the tools and resources needed to start learning and building quantum solutions.

Azure Quantum is a set of quantum services, ranging from pre-built solutions to software and quantum hardware, providing developers and customers access to some of the most competitive quantum offerings on the market. For this offering, Microsoft has partnered with 1QBit, Honeywell, IonQ, and QCI. With the Azure Quantum service, anyone can gain deeper insight into quantum computing through a series of tools and learning tutorials, such as the quantum katas. It also allows developers to write programs with Q# and the QDK, and to experiment with running the code against simulators and a variety of quantum hardware. Customers can also solve complex business challenges with pre-built solutions and algorithms running in Azure.

According to Wired, “Azure Quantum has similarities to a service from IBM, which has offered free and paid access to prototype quantum computers since 2016. Google, which said last week that one of its quantum processors had achieved a milestone known as ‘quantum supremacy’ by outperforming a top supercomputer, has said it will soon offer remote access to quantum hardware to select companies.”

Microsoft's Azure Quantum model is more like the existing computing industry, where cloud providers allow customers to choose processors from companies such as Intel and AMD, says William Hurley, CEO of the startup Strangeworks, which offers services for programmers to build and collaborate with quantum computing tools from IBM, Google, and others.

With just a single program, users will be able to target a variety of hardware through Azure Quantum: Azure classical computing, quantum simulators and resource estimators, and quantum hardware from Microsoft's partners, as well as its future quantum system being built on a revolutionary topological qubit.

Microsoft announced on its official website that Azure Quantum will launch in private preview in the coming months. Many users are excited to try the quantum service on Azure.

https://twitter.com/Daniel_Rubino/status/1191364279339036673

To know more about Azure Quantum in detail, visit Microsoft's official page.

Are we entering the quantum computing era? Google's Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim
Using Qiskit with IBM QX to generate quantum circuits [Tutorial]
How to translate OpenQASM programs in IBX QX into quantum scores [Tutorial]
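To make "quantum simulator" concrete: at its core, a statevector simulator just multiplies a vector of amplitudes by gate matrices and squares the amplitudes to get measurement probabilities. A minimal Python sketch applying a Hadamard gate to a single qubit (illustration only; Azure Quantum itself is programmed via Q# and the QDK, not this toy):

```python
import math

def hadamard(amplitudes):
    """Apply the single-qubit Hadamard gate to a (a0, a1) amplitude pair."""
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = hadamard((1.0, 0.0))               # start in |0>
probs = tuple(abs(a) ** 2 for a in state)  # Born rule: |amplitude|^2
# probs is (0.5, 0.5): an equal superposition of |0> and |1>
```

Resource estimators do a related job without simulating amplitudes at all: they count qubits and gate operations to predict what a program would cost on real hardware.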