
Tech News - Cloud Computing

175 Articles

Zeit releases Serverless Docker in beta

Richard Gall
15 Aug 2018
3 min read
Zeit, the organization behind the cloud deployment software Now, launched Serverless Docker in beta yesterday. The concept was first discussed by the Zeit team at Zeit Day 2018 back in April, but it's now available to use and promises to radically speed up deployments for engineers.

In a post published on the Zeit website yesterday, the team listed some of the key features of this new capability, including:

- An impressive 10x-20x improvement in cold boot performance (in practice, this means cold boots can happen in less than a second)
- A new slot configuration property that defines resource allocation in terms of CPU and memory, allowing you to fit an application within the set of constraints that are most appropriate for it
- Support for HTTP/2.0 and WebSocket connections to deployments, which means you no longer need to rewrite applications as functions

The key point to remember with this release, according to Zeit, is that "Serverless can be a very general computing model. One that does not require new protocols, new APIs and can support every programming language and framework without large rewrites."

Read next: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

What's so great about Serverless Docker?

Clearly, speed is one of the most exciting things about Serverless Docker. But there's more to it than that: it also offers a great developer experience. Johannes Schickling, co-founder and CEO of Prisma (a GraphQL data abstraction layer), said that, with Serverless Docker, Zeit "is making compute more accessible. Serverless Docker is exactly the abstraction I want for applications."

https://twitter.com/schickling/status/1029372602178039810

Others on Twitter were also complimentary about Serverless Docker's developer experience, with one person comparing it favourably with AWS: "their developer experience just makes me SO MAD at AWS in comparison."

https://twitter.com/simonw/status/1029452011236777985

Combining serverless and containers

One of the reasons people are excited about Zeit's release is that it provides the next step in serverless. But it also brings containers into the picture. Typically, much of the conversation around software infrastructure over the last year or so has viewed serverless and containers as two options to choose from, rather than two things that can be used together.

It's worth remembering that Zeit's product has largely been developed alongside the customers that use Now: "This beta contains the lessons and the experiences of a massively distributed and diverse user base, that has completed millions of deployments, over the past two years."

Eager to demonstrate how Serverless Docker works for a wide range of use cases, Zeit has put together a long list of examples of Serverless Docker in action on GitHub. You can find them here.

Read next:
A serverless online store on AWS could save you money. Build one.
Serverless computing wars: AWS Lambdas vs Azure Functions
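The announcement doesn't reproduce the configuration syntax, but a deployment under the beta was driven by a Dockerfile plus a now.json file. The sketch below is a hypothetical reconstruction: the field names, the features flag, and the slot value are assumptions based on the capabilities described above, not confirmed syntax.

```json
{
  "type": "docker",
  "features": { "cloud": "v2" },
  "slot": "c.125-m512"
}
```

Here slot would express the CPU/memory constraints Zeit describes, with the value naming a hypothetical preset of 0.125 CPUs and 512 MB of memory.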


GKE Sandbox: A gVisor-based feature to increase security and isolation in containers

Vincy Davis
17 May 2019
4 min read
During Google Cloud Next '19, Google Cloud announced the beta version of GKE Sandbox, a new feature in Google Kubernetes Engine (GKE). Yesterday, Yoshi Tamura (Product Manager of Google Kubernetes Engine and gVisor) and Adin Scannell (Senior Staff Software Engineer of gVisor) explained GKE Sandbox in brief on Google Cloud's official blog.

GKE Sandbox increases the security and isolation of containers by adding an extra layer between the containers and the host OS. At general availability, GKE Sandbox will be available in the upcoming GKE Advanced. This feature will help in building demanding production applications on top of the managed Kubernetes service.

GKE Sandbox uses gVisor to abstract away the internals, making it an easy-to-use service. While creating a pod, the user can simply choose GKE Sandbox and continue to interact with containers as usual; no new controls or mental model need to be learned. By limiting potential attacks, GKE Sandbox helps teams running multi-tenant clusters, such as SaaS providers, that often execute unknown or untrusted code. This helps in providing more secure multi-tenancy in GKE.

gVisor is an open source container sandbox runtime that was released last year. It was created to defend against a host compromise when running arbitrary, untrusted code, while still integrating with container-based infrastructure. gVisor is used in many Google Cloud Platform (GCP) services like the App Engine standard environment, Cloud Functions, Cloud ML Engine, and most recently Cloud Run. Some features of gVisor include:

- Provides an independent operating system kernel to each container. Applications interact with the virtualized environment provided by gVisor's kernel rather than the host kernel.
- Manages and places restrictions on file and network operations.
- Ensures there are two isolation layers between the containerized application and the host OS. Due to the reduced and restricted interaction of an application with the host kernel, attackers have a smaller attack surface.

An experience shared on the official Google blog post mentions how data refinery creator Descartes Labs has applied machine intelligence to massive data sets. Tim Kelton, Co-Founder and Head of SRE, Security, and Cloud Operations at Descartes Labs, said, "As a multi-tenant SaaS provider, we still wanted to leverage Kubernetes scheduling to achieve cost optimizations, but build additional security layers on top of users' individual workloads. GKE Sandbox provides an additional layer of isolation that is quick to deploy, scales, and performs well on the ML workloads we execute for our users."

Applications suitable for GKE Sandbox

GKE Sandbox is well suited to running compute- and memory-bound applications, and so works with a wide variety of applications such as:

- Microservices and functions: GKE Sandbox enables additional defense in depth while preserving low spin-up times and high service density.
- Data processing: GKE Sandbox can process data with less than 5 percent overhead for streaming disk I/O and compute-bound applications like FFmpeg.
- CPU-based machine learning: Training and executing machine learning models frequently involves large quantities of data and complex workflows that mostly belong to a third party. The CPU overhead of sandboxing compute-bound machine learning tasks is less than 10 percent.

A user on Reddit commented, "This is a really interesting add-on to GKE and I'm glad to see vendors starting to offer a variety of container runtimes on their platforms." The GKE Sandbox feature has got rave reviews on Twitter too.

https://twitter.com/ahmetb/status/1128709028203220992
https://twitter.com/sarki247/status/1128931366803001345

If you want to try GKE Sandbox and learn more details, head over to Google's official feature page.

Google open-sources Sandboxed API, a tool that helps in automating the process of porting existing C and C++ code
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh
Google Cloud Console Incident Resolved!
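The article doesn't show what "simply choose GKE Sandbox" looks like in practice. In GKE's documentation the mechanism is a sandbox-enabled node pool plus the gvisor RuntimeClass on the pod; a minimal sketch, assuming such a node pool already exists:

```yaml
# Minimal sketch: run a pod under GKE Sandbox (gVisor).
# Assumes the cluster already has a node pool created with
#   gcloud container node-pools create sandbox-pool --sandbox type=gvisor
apiVersion: v1
kind: Pod
metadata:
  name: httpd-sandboxed
spec:
  runtimeClassName: gvisor   # schedule onto gVisor's kernel instead of the host kernel
  containers:
  - name: httpd
    image: httpd
```

Everything else about the pod spec stays the same, which is the "no new mental model" point the announcement makes.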


Baidu releases Kunlun AI chip, China’s first cloud-to-edge AI chip

Savia Lobo
05 Jul 2018
2 min read
Baidu, Inc., the leading Chinese-language internet search provider, has released the Kunlun AI chip. It is China's first cloud-to-edge AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers. (K'un-Lun is also a place that actually exists in another dimension in Marvel's Immortal Iron Fist.)

AI applications have dramatically risen in popularity and adoption, and with this comes increased demand on the computational end. Traditional chips have limited computational power, and accelerating larger AI workloads requires much more computational scaling. To meet this demand, Baidu released the Kunlun AI chip, which is designed specifically for large-scale AI workloads. Kunlun feeds the high processing demands of AI with a high-performance and cost-effective solution. It can be used for both cloud and edge instances, including data centers, public clouds, and autonomous vehicles.

Kunlun comes in two variants: the 818-300 model is used for training and the 818-100 model is used for inference. The chip leverages Baidu's AI ecosystem, including AI scenarios such as search ranking and deep learning frameworks like PaddlePaddle.

Key specifications of the Kunlun AI chip:

- Computational capability 30 times faster than the original FPGA-based accelerator that Baidu started developing in 2011
- Built on a 14nm Samsung process
- 512 GB/second memory bandwidth
- 260 TOPS of computing performance while consuming 100 Watts of power

Features of the Kunlun chip include:

- Support for open source deep learning algorithms
- Support for a wide range of AI applications including voice recognition, search ranking, natural language processing, and so on

Baidu plans to continue to iterate on this chip and develop it progressively to enable the expansion of an open AI ecosystem. To make it successful, Baidu continues to build "chip power" to meet the needs of various fields such as intelligent vehicles and devices, and voice and image recognition.

Read more about Baidu's Kunlun AI chip on the MIT website.

IBM unveils world's fastest supercomputer with AI capabilities, Summit
AI chip wars: Is Brainwave Microsoft's Answer to Google's TPU?


You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Richard Gall
02 Apr 2019
3 min read
Chaos engineering is a trend that has been evolving quickly over the last 12 months. While for much of the last decade it has largely been the preserve of Silicon Valley's biggest companies, that has been changing thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today marks a particularly important step for chaos engineering, as Gremlin has partnered with the Netflix-built continuous deployment platform Spinnaker to allow engineering teams to automate chaos engineering 'experiments' throughout their CI and CD pipelines.

Ultimately it means DevOps teams can think differently about chaos engineering. Gradually, this could help shift the way we think about chaos engineering, as it moves from localized experiments that require an in-depth understanding of one's infrastructure to something that is built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software.

At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future.

Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It's a platform that has been specifically developed for highly distributed and hybrid systems. This makes it a great fit for Gremlin, and also highlights that the growth of chaos engineering is being driven by the move to cloud.

Adam Jordens, a Core Contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it's more important than ever to understand how your cloud infrastructure behaves under stress." Jordens continued: "by integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems."

Kolton Andrus, Gremlin CEO and Co-Founder, explained the importance of Spinnaker in relation to chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software."

In recent months Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.


Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.'s Azure cloud unit to account for half of the co-processors currently used on Azure servers to handle machine-learning workloads, replacing chips made by Intel Corp., according to people familiar with Microsoft's plans as reported by Bloomberg.

Microsoft's decision effectively adds another chip supplier in order to serve more customers interested in machine learning. To date, this domain was served by Intel's Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft's confirmation that it will continue its relationship with Intel in its current offerings. A Microsoft spokesperson added that "There has been no change of sourcing for existing infrastructure and offerings". Sources familiar with the arrangement also commented that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centering around machine learning. The past few years have seen an increased need for flexible chips that can be configured to run machine-learning services. Companies like Microsoft, Google, and Amazon are massive buyers of server chips and are always looking for alternatives to standard processors to increase the efficiency of their data centers.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that "Programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They're also key enablers for next-generation application technologies like machine learning and artificial intelligence."

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying "data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics".

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors, plus a dedicated AI compute engine, and relate to the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency, and greater power efficiency than existing hardware. It looks like Xilinx is taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud


Google Cloud security launches three new services for better threat detection and protection in enterprises

Melisha Dsouza
08 Mar 2019
2 min read
This week, Google Cloud Security announced a host of new services to empower customers with advanced security functionalities that are easy to deploy and use. This includes the Web Risk API, Cloud Armor, and HSM keys. #1 Web Risk API The Web Risk API has been released in the beta format to ensure the safety of users on the web. The Web Risk API includes data on more than a million unsafe URLs. Billions of URL’s are examined each day to keep this data up-to-date. Client applications can use a simple API call to check URLs against Google's lists of unsafe web resources. This list also includes social engineering sites, deceptive sites, and sites that host malware or unwanted software. #2 Cloud Armor Cloud Armor is a Distributed Denial of Service (DDoS) defense and Web Application Firewall (WAF) service for Google Cloud Platform (GCP) based on the technologies used to protect services like Search, Gmail and YouTube. Cloud Armor is generally available, offering L3/L4 DDoS defense as well as IP Allow/Deny capabilities for applications or services behind the Cloud HTTP/S Load Balance. It also allows users to either permit or block incoming traffic based on IP addresses or ranges using allow lists and deny lists. Users can also customize their defenses and mitigate multivector attacks through Cloud Armor’s flexible rules language. #3 HSM keys to protect data in the cloud Cloud HSM is now generally available and it allows customers to protect encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs. Customers do not have to worry about the operational overhead of HSM cluster management, scaling and patching. Cloud HSM service is fully integrated with Cloud Key Management Service (KMS), allowing users to create and use customer-managed encryption keys (CMEK) that are generated and protected by a FIPS 140-2 Level 3 hardware device. You can head over to Google Cloud Platform’s official blog to know more about these releases. Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud Build Hadoop clusters using Google Cloud Platform [Tutorial]

Azure Functions 2.0 launches with better workload support for serverless

Melisha Dsouza
25 Sep 2018
2 min read
Microsoft has announced the general availability of Azure Functions 2.0. The new release aims to handle demanding workloads, which should make managing the scale of serverless applications easier than ever before. With an improved user experience and new developer capabilities, the release is evidence of Microsoft looking to take full advantage of interest in serverless computing.

New features in Azure Functions 2.0

Azure Functions can now run on more platforms

Azure Functions is now supported in more environments, including local Mac or Linux machines. Integration with VS Code helps developers have a best-in-class serverless development experience on any platform.

Code optimizations

Functions 2.0 adds general host improvements, support for more modern language runtimes, and the ability to run code from a package file. .NET developers can now author functions using .NET Core 2.1, which provides a significant performance gain and helps to develop and run .NET functions in more places. Assembly resolution has been improved to reduce the number of conflicts. Functions 2.0 also supports both Node 8 and Node 10, with improved performance in general.

A powerful new programming model

The bindings and integrations of Functions 1.0 have been improved in Functions 2.0. All bindings are now brought in as extensions; this change to decoupled extension packages allows bindings (and their dependencies) to be versioned without depending on the core runtime. The recent launch of Azure SignalR Service, a fully managed service, enables developers to focus on building real-time web experiences without worrying about setting up, hosting, scaling, or load balancing the SignalR server. Find an extension for this service in this GitHub repo, and check out the SignalR Service binding reference to start building real-time serverless applications.

Easier development

To improve productivity, Microsoft has introduced powerful native tooling inside Visual Studio, VS Code, and VS for Mac, plus a CLI that can be run alongside any code editing experience. In Functions 2.0, more visibility is given to distributed tracing: dependencies are automatically tracked, and cross-resource connections are automatically correlated across a variety of services.

To know more about the updates in Azure Functions 2.0, head to Microsoft's official blog.

Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary.


AWS SAM (AWS Serverless Application Model) is now open source!

Savia Lobo
24 Apr 2018
2 min read
AWS recently announced that SAM (Serverless Application Model) is now open source. With AWS SAM, one can define serverless applications in a simple and clean syntax. The AWS Serverless Application Model extends AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

AWS SAM comprises:

- The SAM specification
- Code translating SAM templates into AWS CloudFormation stacks
- General information about the model
- Examples of common applications

The SAM specification and implementation are open sourced under the Apache 2.0 license for AWS partners and customers to adopt and extend within their own toolsets. The current version of the SAM specification is available at AWS SAM 2016-10-31.

Basic steps to create a serverless application with AWS SAM:

Step 1: Create a SAM template, a JSON or YAML configuration file that describes the Lambda functions, API endpoints, and other resources in your application.

Step 2: Test, upload, and deploy the application using the SAM Local CLI. During deployment, SAM automatically translates the application's specification into CloudFormation syntax, filling in default values for any unspecified properties and determining the appropriate mappings and invocation permissions to set up for any Lambda functions.

To learn more about how to define and deploy serverless applications, read the How-To Guide and see examples. One can build serverless applications faster and further simplify development by defining new event sources, new resource types, and new parameters within SAM. One can also modify SAM in order to integrate it with other frameworks and deployment providers from the community.

For more in-depth knowledge, read the AWS SAM development guide on GitHub.
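As a concrete illustration of step 1, here is a minimal SAM template in YAML that defines one Lambda function exposed through an API Gateway endpoint. The function name, handler, and code path are placeholders:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31    # tells CloudFormation to expand SAM resources
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function      # expands to a Lambda function plus its IAM role
    Properties:
      Handler: index.handler             # placeholder: module.function in your code
      Runtime: python3.6                 # any Lambda-supported runtime works
      CodeUri: ./src                     # placeholder path to the function code
      Events:
        HelloApi:
          Type: Api                      # expands to an API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

Deploying this template through CloudFormation (or the SAM Local CLI described in step 2) produces the Lambda function, the API endpoint, and the permissions wiring between them.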


Google Cloud collaborates with Unity 3D; a connected gaming experience is here!

Savia Lobo
20 Jun 2018
2 min read
Google Cloud announced its alliance with Unity at the Unite Berlin conference this week. Unity is a popular game development platform for real-time 3D game and content creation. Google Cloud stated that they are building a suite of managed services and tools for creating connected games, focused on real-time multiplayer experiences.

With this, Google Cloud becomes the default cloud provider helping developers build connected games using Unity. It will also assist them to easily build and scale their games. Additionally, developers will get the advantages of Google Cloud right from the Unity development environment, without needing to become cloud experts.

Google Cloud and Unity are also collaborating on an open source project for connecting players in multiplayer games. This project mainly aims at creating open source, community-driven solutions built in collaboration with the world's leading game companies.

Unity will also be migrating all of the core infrastructure powering its services and offerings to Google Cloud, running its business on the same cloud where Unity game developers will develop, test, and globally launch their games.

John Riccitiello, Chief Executive Officer, Unity Technologies, said, "Migrating our infrastructure to Google Cloud was a decision based on the company's impressive global reach and product quality. Now, Unity developers will be able to take advantage of the unparalleled capabilities to support their cloud needs on a global scale."

Google Cloud plans to release new products and features over the coming months. Keep yourself updated on this alliance by checking out Unity's homepage.

AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
Unity 2D & 3D game kits simplify Unity game development for beginners


Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, GitLab and TriggerMesh introduced GitLab Serverless, which helps enterprises run serverless workloads on any cloud with the help of Knative, Google's Kubernetes-based platform for building, deploying, and managing serverless workloads. GitLab Serverless enables businesses to deploy serverless functions and applications on any cloud or infrastructure from the GitLab UI by using Knative. It is scheduled for public release on 22 December 2018 in GitLab 11.6, and involves technology developed by TriggerMesh, a multi-cloud serverless platform, for enabling businesses to run serverless workloads on Kubernetes.

Sid Sijbrandij, co-founder and CEO of GitLab, said, "We're pleased to offer cloud-agnostic serverless as a built-in part of GitLab's end-to-end DevOps experience, allowing organizations to go from planning to monitoring in a single application."

Functions as a Service (FaaS)

With GitLab Serverless, users can run their own Function-as-a-Service (FaaS) on any infrastructure without worrying about vendor lock-in. FaaS allows users to write small, discrete units of code with event-based execution. When deploying the code, developers need not worry about the infrastructure it will run on. It also saves resources, as the code executes only when needed, so no resources are used while the app is idle.

Kubernetes and Knative

Flexibility and portability are achieved by running serverless workloads on Kubernetes. GitLab Serverless uses Knative to create a seamless experience for the entire DevOps lifecycle.

Deploy on any infrastructure

With GitLab Serverless, users can deploy to any cloud or on-premises infrastructure. GitLab can connect to any Kubernetes cluster, so users can choose to run their serverless workloads anywhere Kubernetes runs.

Auto-scaling with 'scale to zero'

The Kubernetes cluster automatically scales up and down based on load. "Scale to zero" stops the consumption of resources when there are no requests.

To know more about this news, check out the official announcement.

Haskell is moving to GitLab due to issues with Phabricator
GitLab 11.5 released with group security and operations-focused dashboard, control access to GitLab pages
GitLab 11.4 is here with merge request reviews and many more features
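GitLab's own configuration format isn't shown in the announcement, but the deployment target underneath is a Knative Service. A minimal manifest looks like the sketch below; note it uses the current serving.knative.dev/v1 API, whereas the 2018 beta shipped against earlier alpha versions, and the image path is a placeholder:

```yaml
apiVersion: serving.knative.dev/v1       # current GA API; the 2018 beta used alpha versions
kind: Service
metadata:
  name: hello-function
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: registry.gitlab.com/example/hello:latest   # placeholder image built by CI
        env:
        - name: TARGET
          value: "world"
```

Knative scales the pods behind this service down to zero when no requests arrive, which is the "scale to zero" behaviour described above.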

Yandex launched an intelligent public cloud platform, Yandex.Cloud

Savia Lobo
06 Sep 2018
2 min read
Yesterday, Russia's largest search engine, Yandex, launched its intelligent public cloud platform, Yandex.Cloud. The platform has been tested by more than 50 Russian and international companies since April. Yandex.Cloud is easy to use and offers flexible pay-per-use pricing. The platform also provides easy access to all of Yandex's technologies, which makes it easy for companies to complement an existing IT infrastructure or even serve as an alternative to it.

The platform will assist companies and industries of different sizes to boost their efficiency or expand their business without large-scale investment. Yandex plans to roll out the Yandex.Cloud platform gradually, first to users of Yandex services for business, and then to everyone by the end of 2018. It enables companies to store and use databases containing personal data in Russia, as required by law.

Features of the Yandex.Cloud public cloud platform:

A scalable virtual infrastructure

The new intelligent public cloud platform includes a scalable virtual infrastructure with multiple management options: users can manage it from a graphical interface or the command line. It also includes developer tools for popular programming languages such as Python and Go.

Automated services

Labour-intensive management tasks for popular database systems such as PostgreSQL, ClickHouse (Yandex's open source high-performance database management system), and MongoDB have been automated.

AI-based Yandex services

Yandex.Cloud includes AI-based services such as SpeechKit speech recognition and synthesis, and Yandex.Translate machine translation.

Yan Leshinsky, Head of Yandex.Cloud, said, "Yandex has an entire ecosystem of successful products and services that are used by millions of people on a daily basis. Yandex.Cloud provides access to the same infrastructure and technologies that we use to power Yandex services, creating unique opportunities for any business to develop their products and services based on this platform."

To know more about Yandex.Cloud, visit its official website.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform
Cloud Filestore: A new high-performance storage option by Google Cloud Platform


Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store

Sugandha Lahoti
27 Nov 2018
3 min read
At the ongoing Amazon re:Invent 2018, Amazon announced that AWS Key Management Service (KMS) has been integrated with AWS CloudHSM. Users now have the option to create their own KMS custom key store: they can generate, store, and use their KMS keys in hardware security modules (HSMs) through KMS. The KMS custom key store satisfies compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs), while supporting AWS services and encryption toolkits that are integrated with KMS.

Previously, AWS CloudHSM was not widely integrated with other AWS managed services. So, if someone required direct control of their HSMs but still wanted to use and store regulated data in AWS managed services, they had to choose between changing those requirements, not using a given AWS service, or building their own solution. With a custom key store, users can configure their own CloudHSM cluster and authorize KMS to use it as a dedicated key store for keys, rather than the default KMS key store.

When using a KMS CMK in a custom key store, the cryptographic operations under that key are performed exclusively in the developer's own CloudHSM cluster. Master keys stored in a custom key store are managed in the same way as any other master key in KMS and can be used by any AWS service that encrypts data and supports KMS customer-managed CMKs. The use of a custom key store does not affect KMS charges for storing and using a CMK. However, it does come with an increased cost and a potential impact on performance and availability.

Things to consider before using a custom key store:

- Each custom key store requires the CloudHSM cluster to contain at least two HSMs. CloudHSM charges vary by region, and the pricing comes to at least $1,000 per month per HSM if each device is permanently provisioned.
- The number of HSMs determines the rate at which keys can be used. Users should keep in mind the intended usage patterns for their keys and ensure appropriate provisioning of HSM resources.
- The number of HSMs and the use of availability zones (AZs) impacts the availability of a cluster. Configuration errors may result in a custom key store being disconnected, or key material being deleted.
- Users need to manually set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which users should have the appropriate resources and organizational controls in place.

Read more about KMS custom key stores on Amazon.

How Amazon is reinventing Speech Recognition and Machine Translation with AI
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources
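A minimal sketch of the flow in Python with boto3, assuming an existing, initialized CloudHSM cluster; all identifiers and credentials below are placeholders:

```python
import boto3

kms = boto3.client("kms")

# Register an existing CloudHSM cluster as a custom key store.
store = kms.create_custom_key_store(
    CustomKeyStoreName="example-key-store",
    CloudHsmClusterId="cluster-1234567890ab",           # placeholder cluster id
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="password-of-kmsuser",             # placeholder credential
)
store_id = store["CustomKeyStoreId"]

# Connecting is asynchronous; in practice, poll describe_custom_key_stores
# until the ConnectionState is CONNECTED before creating keys.
kms.connect_custom_key_store(CustomKeyStoreId=store_id)

# Create a CMK whose key material lives in your CloudHSM cluster
# rather than in the default KMS key store.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store_id,
    Description="CMK backed by our own CloudHSM cluster",
)
print(key["KeyMetadata"]["KeyId"])
```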


Amazon EventBridge: An event bus with higher security and speed to boost AWS serverless ecosystem

Vincy Davis
15 Jul 2019
4 min read
Last week, Amazon had some pretty huge news for its AWS serverless ecosystem, one which is being considered the biggest thing since AWS Lambda itself: with the aim of helping customers integrate their own AWS applications with Software as a Service (SaaS) applications, Amazon launched Amazon EventBridge.

EventBridge is an asynchronous, fast, clean, and easy-to-use event bus that publishes events specific to each AWS customer. A SaaS application and code running on AWS are now independent of a shared communication protocol, runtime environment, or programming language. This allows Lambda functions to handle events from a SaaS application as well as route events to other AWS targets.

Similar to CloudWatch Events, EventBridge has an existing default event bus that accepts events from AWS services and calls to PutEvents. One distinction is that in EventBridge, each partner application that a user subscribes to also creates an event source, which can then be associated with an event bus in an AWS account. AWS users can select any of their event buses, create EventBridge rules, and select targets to invoke when an incoming event matches a rule.

Important terms for understanding Amazon EventBridge:

- Partner: An organization that has integrated their SaaS application with EventBridge.
- Customer: An organization that uses AWS, and that has subscribed to a partner's SaaS application.
- Partner name: A unique name that identifies an Amazon EventBridge partner.
- Partner event bus: An event bus that is used to deliver events from a partner to AWS.

How EventBridge works for partners and customers

A partner can allow their customers to enter an AWS account number and then select an AWS region. Next, CreatePartnerEventSource is called by the partner in the desired region, and the customer is informed of the event source name. After accepting the invitation to connect, the customer waits for the status of the event source to change to Active. Each time an event of interest to the customer occurs, the partner calls PutPartnerEvents and references the event source.

Image Source: Amazon

On the customer side, the customer accepts the invitation to connect by calling CreateEventBus to create an event bus associated with the event source. The customer can then add rules and targets to prepare Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events; customers can use DeactivateEventSource and ActivateEventSource to control the flow.

Amazon EventBridge launched with ten partner event sources, including Datadog, Zendesk, PagerDuty, Whispir, Segment, Symantec, and more. This is pretty big news for users who build serverless applications: with built-in partner integrations, these partners can directly trigger an event in EventBridge without the need for a webhook. Thus "AWS is the mediator rather than HTTP", says Paul Johnston, the ServerlessDays cofounder. He also adds, "The security implications of partner integrations are the first thing that springs to mind. The speed implications will almost certainly be improved as well, with those partners almost certainly using AWS events at the other end as well."

https://twitter.com/PaulDJohnston/status/1149629728065650693
https://twitter.com/PaulDJohnston/status/1149629729571397632

Users are excited about the kind of creative freedom Amazon EventBridge will bring to their products.

https://twitter.com/allPowerde/status/1149792437738622976
https://twitter.com/ShortJared/status/1149314506067255304
https://twitter.com/petrabarus/status/1149329981975040000
https://twitter.com/TobiM/status/1149911798256152576

Users with a SaaS application can integrate with EventBridge Partner Integration. Visit the Amazon blog to learn more about implementing EventBridge.

Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon Aurora makes PostgreSQL Serverless generally available
Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic
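The customer-side flow described above maps onto a handful of EventBridge API calls. A minimal sketch with boto3; the partner event source name, account id, and Lambda ARN are placeholders:

```python
import boto3

events = boto3.client("events")

# A partner event source shows up in your account once you accept the
# invitation; real names look like aws.partner/<partner-domain>/<id>/<name>.
source_name = "aws.partner/example.com/123/demo-source"

# For partner sources, the event bus name must match the event source name.
# Creating the bus associates and activates the source.
events.create_event_bus(Name=source_name, EventSourceName=source_name)

# Route matching events from this partner bus to a Lambda function.
events.put_rule(
    Name="all-partner-events",
    EventBusName=source_name,
    EventPattern='{"account": ["123456789012"]}',  # placeholder: match this account's events
)
events.put_targets(
    Rule="all-partner-events",
    EventBusName=source_name,
    Targets=[{
        "Id": "handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-partner-event",
    }],
)
```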

VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Melisha Dsouza
04 Mar 2019
3 min read
Last week, Paul Fazzone, GM Cloud Native Applications, announced the launch of VMware Essential PKS "as a modular approach to cloud-native operation". VMware Essential PKS includes upstream Kubernetes, reference architectures to inform design decisions, and expert support to guide users through upgrades and maintenance and to reactively troubleshoot when needed. Fazzone notes that more than 80% of containers run on virtual machines (VMs), with the percentage growing every year. This launch keeps up with VMware's main objective of establishing itself as the leading enabler of Kubernetes and cloud-native operation.

Features of Essential PKS

#1 Modular approach

Customers who have specific technological requirements for networking, monitoring, storage, etc. can build a more modular architecture on upstream Kubernetes. VMware Essential PKS will help these customers access upstream Kubernetes with proactive support, the only condition being that these organizations either have the in-house expertise to work with those components, the intention to grow that capability, or the willingness to use an expert team.

#2 Application portability

Customers will be able to use the latest version of upstream Kubernetes, ensuring that they are never locked into a vendor-specific distribution.

#3 Flexibility

The service allows customers to implement a multi-cloud strategy that lets them choose tools and clouds as they prefer, to build a flexible platform on upstream Kubernetes for their workloads.

#4 Open source community support

VMware contributes to multiple SIGs and open source projects that strengthen key technologies and fill gaps in the Kubernetes ecosystem.

#5 Cloud-native ecosystem support and guidance

Customers will be able to access 24x7, SLA-driven support for Kubernetes and key open source tooling. VMware experts will partner with customers to help with architecture design reviews and to evaluate networking, monitoring, backup, and other solutions for building a production-grade open source Kubernetes platform.

The Kubernetes community has received this news with enthusiasm.

https://twitter.com/cmcluck/status/1100506616124719104
https://twitter.com/edhoppitt/status/1100444712794615808

In November, VMware announced it was buying Heptio at VMworld. Heptio products work with upstream Kubernetes and help enterprises realize the impact of Kubernetes on their business. According to FierceTelecom, "PKS Essentials takes the Heptio approach of building a more modular, customized architecture for deploying software containers on upstream Kubernetes but with VMware support."

Rancher Labs announces 'K3s': A lightweight distribution of Kubernetes to manage clusters in edge computing environments
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
Tumblr open sources its Kubernetes tools for better workflow integration


Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles

Bhagyashree R
09 Oct 2018
3 min read
Yesterday, Google announced that it will not be competing for the Pentagon's cloud-computing contract, supposedly worth $10 billion. It opted out of bidding for the project, named Joint Enterprise Defense Infrastructure (JEDI), saying the project may conflict with its principles for the ethical use of AI.

The JEDI project involves moving massive amounts of Pentagon internal data to a commercially operated secure cloud system. The bidding for this contract began two months ago and closes this week (12th October). CNBC reported in July that Amazon is considered the number one choice for the contract because it already provides services for the cloud system used by U.S. intelligence agencies. Cloud providers such as IBM, Microsoft, and Oracle are also top contenders, as they have worked with government agencies for many decades, which could help their chances of winning the decade-long JEDI contract.

Why did Google drop out of the bidding?

One of Google's spokespersons told TechCrunch that the main reason for opting out is that the project doesn't align with their AI principles: "While we are working to support the US government with our cloud in many areas, we are not bidding on the JEDI contract because first, we couldn't be assured that it would align with our AI Principles and second, we determined that there were portions of the contract that were out of scope with our current government certifications."

He further added: "Had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload. At a time when new technology is constantly becoming available, customers should have the ability to take advantage of that innovation. We will continue to pursue strategic work to help state, local and federal customers modernize their infrastructure and meet their mission critical requirements."

This decision is also a result of thousands of Google employees protesting against the company's involvement in another US government project, named Project Maven. Earlier this year, some Google employees reportedly quit over the company's work on this project. Employees believed that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. An internal petition asking Google CEO Sundar Pichai to cancel Project Maven was signed by over 3,000 employees. After this protest, Google said it would not renew the contract or pursue similar military contracts, and went on to formulate its principles for the ethical use of AI.

You can read the full story on Bloomberg.

Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
Google slams Trump's accusations, asserts its search engine algorithms do not favor any political ideology