Tech News - Cloud & Networking

376 Articles

Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking

Sugandha Lahoti
19 Jun 2019
3 min read
Yesterday, Lyft released the initial open source preview of Envoy Mobile, an iOS and Android client network library that brings Lyft's Envoy Proxy to mobile platforms.

https://twitter.com/mattklein123/status/1140998175722835974

Envoy Proxy was initially built at Lyft to solve the networking and observability issues inherent in large polyglot server-side microservice architectures. It soon gained widespread public appreciation and is used by major public cloud providers, end-user companies, and infrastructure startups. Now, Envoy Proxy has been brought to the iOS and Android platforms, providing an API and abstraction for mobile application networking.

Envoy Mobile is currently at a very early stage of development. The initial release brings the following features:

- The ability to compile Envoy on both Android and iOS: Envoy Mobile uses intelligent protobuf code generation and an abstract transport to help both iOS and Android provide similar interfaces and ergonomics for consuming APIs.
- The ability to run Envoy on a thread within an application, utilizing it effectively as an in-process proxy server.
- Swift/Obj-C/Kotlin demo applications that use the exposed Swift/Obj-C/Kotlin "raw" APIs to interact with Envoy and make network calls.

Long-term goals

Envoy Mobile initially provides Swift APIs for iOS and Kotlin APIs for Android, but depending on community interest the team will consider adding support for additional languages in the future.

In the long term, they are also planning to fold the gRPC Server Reflection Protocol into a streaming reflection service API. This API will allow both Envoy and Envoy Mobile to fetch generic protobuf definitions from a central IDL service, which can then be used to implement annotation-driven networking via reflection. They also plan to bring xDS configuration to mobile clients, in the form of routing, authentication, failover, load balancing, and other policies driven by a global load balancing system.

Envoy Mobile can also add cross-platform functionality when using strongly typed IDL APIs. Some examples of annotations planned on the roadmap are caching, priority, streaming, marking an API as offline/deferred capable, and more.

Envoy Mobile is getting loads of appreciation from developers, with many happy that Lyft has open sourced its development. A comment on Hacker News reads, "I really like how they're releasing this as effectively the first working proof of concept and committing to developing the rest entirely in the open - it's a great opportunity to see how a project of this scale plays out in real-time on GitHub."

https://twitter.com/omerlh/status/1141225499139682305

https://twitter.com/dinodaizovi/status/1141157828247347200

Currently, the project is in a pre-release stage. Not all features are implemented, and it is not ready for production use. However, you can get started here. Also see the demo release and the roadmap, where the team plans to develop Envoy Mobile entirely in the open.

Related News

Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race

AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more

Savia Lobo
27 Nov 2018
3 min read
At AWS re:Invent 2018, Amazon announced new features for AWS IoT Greengrass. These latest features extend the capabilities of AWS IoT Greengrass and its core configuration options, which include:

- connectors to third-party applications and AWS services
- hardware root of trust private key storage
- isolation and permission settings

New features of AWS IoT Greengrass

AWS IoT Greengrass connectors

With the new AWS IoT Greengrass connectors, users can easily build complex workflows on AWS IoT Greengrass without having to understand device protocols, manage credentials, or interact with external APIs. These connectors allow users to connect to third-party applications, on-premises software, and AWS services without writing code.

Re-use common business logic

Users can now re-use common business logic from one AWS IoT Greengrass device to another through the ability to discover, import, configure, and deploy applications and services at the edge. They can even use AWS Secrets Manager at the edge to protect keys and credentials in the cloud and at the edge. Secrets can be attached and deployed from AWS Secrets Manager to groups via the AWS IoT Greengrass console (a brief code sketch of the device-side retrieval follows below).

Enhanced security

AWS IoT Greengrass now provides enhanced security with hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing private keys on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. Users can also use the hardware secure element to protect secrets deployed to the AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager.

Deploy AWS IoT Greengrass to another container environment

With the new configuration option, users can deploy AWS IoT Greengrass to another container environment and directly access device resources such as Bluetooth Low Energy (BLE) devices or low-power edge devices like sensors. They can even run AWS IoT Greengrass on devices without elevated privileges and without the AWS IoT Greengrass container, at a group or individual AWS Lambda level. Users can also change the identity associated with an individual AWS Lambda, providing more granular control over permissions.

To know more about the other updated features, head over to the AWS IoT Greengrass website.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
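As an illustration of that secrets workflow, here is a minimal sketch of a Greengrass-deployed Lambda function in Python, assuming the Greengrass Core SDK (greengrasssdk) is available on the core and that a hypothetical secret named greengrass-db-credentials has been attached to the group:

import greengrasssdk

# The Greengrass Core SDK serves this client locally on the device;
# the secret was deployed to the group from AWS Secrets Manager.
secrets = greengrasssdk.client('secretsmanager')

def handler(event, context):
    # 'greengrass-db-credentials' is a hypothetical secret name.
    response = secrets.get_secret_value(SecretId='greengrass-db-credentials')
    credentials = response['SecretString']
    # ...use the credentials to authenticate to a local device or service...
    return {'status': 'ok'}

Because the lookup is served by the Greengrass core rather than the cloud API, the function can read the secret even while the device is offline.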

GitHub open sources its GitHub Load Balancer (GLB) Director

Savia Lobo
10 Aug 2018
2 min read
GitHub open sourced the GitHub Load Balancer (GLB) Director on August 8, 2018. GLB Director is a Layer 4 load balancer that scales a single IP address across a large number of physical machines while minimizing connection disruption whenever the set of servers changes. Apart from open sourcing the GLB Director, GitHub has also shared details of the load balancer's design.

GitHub first announced GLB on September 22, 2016. GLB is GitHub's scalable load balancing solution for bare metal data centers. It powers a majority of GitHub's public web and Git traffic, as well as GitHub's critical internal systems such as its highly available MySQL clusters.

How GitHub Load Balancer Director works

GLB Director is designed for use in data center environments where multiple servers can announce the same IP address via BGP, and network routers shard traffic amongst those servers using ECMP routing. ECMP shards connections per-flow using consistent hashing, but the addition or removal of nodes still causes some disruption to traffic, since no state is stored for each flow. A split L4/L7 design is typically used to allow the L4 servers to redistribute these flows back to a consistent server in a flow-aware manner. GLB Director implements the L4 (director) tier of a split L4/L7 load balancer design.

The GLB design

The GLB Director does not replace services like haproxy and nginx; rather, it is a layer in front of these services (or any TCP service) that allows them to scale across multiple physical machines without requiring each machine to have unique IP addresses.

Source: GitHub

GLB Director only processes packets on ingress. It encapsulates them inside an extended Generic UDP Encapsulation packet, while egress packets from the proxy-layer servers are sent directly to clients using Direct Server Return.

Read more about the GLB Director in detail on the GitHub Engineering blog post.

Microsoft's GitHub acquisition is good for the open source community
Snapchat source code leaked and posted to GitHub
Why Golang is the fastest growing language on GitHub
GitHub has added security alerts for Python
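The per-flow consistent hashing mentioned above can be made concrete with a toy sketch. The following Python snippet uses rendezvous (highest-random-weight) hashing, an illustration of the general idea rather than GLB Director's actual forwarding algorithm, to show that removing one proxy only remaps the flows that were on it:

import hashlib

def rendezvous_pick(flow, servers):
    # Each flow independently scores every server and picks the highest score.
    def score(server):
        digest = hashlib.md5(f"{flow}:{server}".encode()).hexdigest()
        return int(digest, 16)
    return max(servers, key=score)

servers = ["proxy-a", "proxy-b", "proxy-c", "proxy-d"]
flows = [f"10.0.0.{i}:4432{i % 10}" for i in range(1000)]

before = {f: rendezvous_pick(f, servers) for f in flows}
after = {f: rendezvous_pick(f, servers[:-1]) for f in flows}  # drop proxy-d

moved = sum(1 for f in flows if before[f] != after[f])
print(f"{moved} of {len(flows)} flows remapped")  # roughly 1/4: only proxy-d's flows

Because each flow's choice depends only on per-server scores, dropping a server leaves every other flow where it was; this is the property that lets ECMP-style sharding survive node changes with minimal disruption.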

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps, optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to the existing SQL Server support, Azure DevOps Server now also supports Azure SQL. Customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or self-host Azure DevOps in the cloud and take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service. Alternatively, customers can use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place - it gives them better visibility of which bits are deployed to which environments, and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux, while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

One newly introduced feature is the 'my work flyout'. This feature was developed in response to feedback that customers who are in one part of the product and want information from another part don't want to lose the context of their current task. With this feature, customers can access the flyout from anywhere in the product for a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. So that teams can verify that policy overrides are being used in the right situations, a new notification filter has been added that allows users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines. It provides an in-progress test view, a full-page debugging experience, in-context test history, reporting on aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team notes that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface.

Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation. Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer 

Natasha Mathur
27 Nov 2018
3 min read
The AWS team introduced AWS DataSync, an online data transfer service for automating data movement, yesterday. AWS DataSync offers data transfer from on-premises storage to Amazon S3 or Amazon Elastic File System (Amazon EFS) and vice versa. Let's have a look at what's new in AWS DataSync.

Key functionalities

- Move data 10x faster: AWS DataSync uses a purpose-built data transfer protocol along with a parallel, multi-threaded architecture that can run 10 times as fast as open source data transfer tools. This speeds up migrations as well as recurring data processing workflows for analytics, machine learning, and data protection.
- Per-gigabyte fee: It is a managed service and you only pay a per-gigabyte fee; that is, you pay only for the amount of data that you transfer. Other than that, there are no upfront costs and no minimum fees.
- DataSync Agent: The AWS DataSync Agent is a crucial part of the service. It connects your existing storage to the in-cloud service to automate, scale, and validate transfers, which means you don't have to write scripts or modify your applications.
- Easy setup: It is very easy to set up and use (console and CLI access are available). All you need to do is deploy the DataSync agent on-premises, connect it to your file systems using the Network File System (NFS) protocol, select Amazon EFS or S3 as your AWS storage, and start moving data. A hedged API-level sketch of this flow follows below.
- Secure data transfer: AWS DataSync offers secure data transfer over the internet or AWS Direct Connect, with automatic encryption and data integrity validation. This minimizes the in-house development and management needed for fast and secure transfers.
- Simplify and automate data transfer: With AWS DataSync, you can perform one-time data migrations, transfer on-premises data for timely in-cloud analysis, and automate replication to AWS for data protection and recovery.

AWS DataSync is available now in the US East, US West, Europe, and Asia Pacific regions. For more information, check out the official AWS DataSync blog post.

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
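As a concrete illustration of that setup flow, here is a hedged sketch using boto3 (the Python AWS SDK). The agent ARN, NFS server, bucket, and IAM role below are hypothetical placeholders:

import boto3

# Sketch of the console/CLI flow described above: register an NFS source,
# an S3 destination, create a task, and kick off an execution.
datasync = boto3.client("datasync", region_name="us-east-1")

source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/exports/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"]},
)

destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-datasync-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},
)

task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nfs-to-s3-migration",
)

execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print(execution["TaskExecutionArn"])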

Core security features of Elastic Stack are now free!

Amrata Joshi
21 May 2019
3 min read
Today, the team at Elastic announced that the core security features of the Elastic Stack are now free. They also announced the release of Elastic Stack versions 6.8.0 and 7.1.0, as well as the alpha release of Elastic Cloud on Kubernetes.

With the free core security features, users can now define roles that protect index- and cluster-level access, encrypt network traffic, create and manage users, and fully secure Kibana with Spaces. The team opened the code for these features last year and has now made them free, which means users can run a fully secured cluster at no cost.

https://twitter.com/heipei/status/1130573619896225792

Release of Elastic Stack versions 6.8.0 and 7.1.0

The team also announced the release of versions 6.8.0 and 7.1.0 of the Elastic Stack today. These versions do not contain new features, but they make the core security features free in the default distribution of the Elastic Stack. The core security features include TLS for encrypted communications, a file and native realm for creating and managing users, and role-based access control for controlling user access to cluster APIs and indexes. They also allow multi-tenancy for Kibana, with security for Kibana Spaces. Previously, these core security features required a paid Gold subscription; now they are free as part of the Basic tier.

Alpha release of Elastic Cloud on Kubernetes

The team also announced the alpha release of Elastic Cloud on Kubernetes (ECK), the official Kubernetes Operator for Elasticsearch and Kibana. It is a new product based on the Kubernetes Operator pattern that lets users provision, manage, and operate Elasticsearch clusters on Kubernetes. It is designed to automate and simplify how Elasticsearch is deployed and operated in Kubernetes, provides an official way of orchestrating Elasticsearch on Kubernetes, and delivers a SaaS-like experience for Elastic products and solutions on Kubernetes.

The team has moved the core security features into the default distribution of the Elastic Stack to ensure that all clusters launched and managed by ECK are secured by default at creation time. Clusters deployed via ECK include free features and tier capabilities such as Kibana Spaces, frozen indices for dense storage, Canvas, Elastic Maps, and more. Users can also monitor Kubernetes logs and infrastructure with the help of the Elastic Logs and Elastic Infrastructure apps.

Some users think that security shouldn't be an added feature but should be built in. A user commented on Hacker News, "Security shouldn't be treated as a bonus feature." Another user commented, "Security should almost always be a baseline requirement before something goes up for public sale."

Others are happy about this news. A user commented, "I know it's hard to make a buck with an open source business model but deciding to charge more for security-related features is always so frustrating to me. It leads to a culture of insecure deployments in environments when the business is trying to save money. Differentiate on storage or number of cores or something, anything but auth/security. I'm glad they've finally reversed this."

To know more about this news, check out the blog post by Elastic.

Elasticsearch 7.0 rc1 releases with new allocation and security features
Elastic Stack 6.7 releases with Elastic Maps, Elastic Update and much more!
AWS announces Open Distro for Elasticsearch licensed under Apache 2.0
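As a small illustration of the now-free role-based access control, here is a hedged sketch using the official elasticsearch Python client (7.x) against a hypothetical secured cluster. The host, credentials, role, and user are all made up for the example, and client signatures vary slightly across versions:

from elasticsearch import Elasticsearch

# Connect as a built-in superuser; host and credentials are hypothetical.
es = Elasticsearch(["https://localhost:9200"],
                   http_auth=("elastic", "changeme"),
                   verify_certs=False)

# Role-based access control: a read-only role scoped to one index pattern.
es.security.put_role("logs_reader", body={
    "indices": [{"names": ["logs-*"],
                 "privileges": ["read", "view_index_metadata"]}]
})

# A native-realm user holding that role.
es.security.put_user("alice", body={
    "password": "a-str0ng-passw0rd",
    "roles": ["logs_reader"],
})

Per the announcement above, these security APIs now work on the free Basic tier rather than requiring a Gold subscription.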

Microsoft Ignite 2018: New Azure announcements you need to know

Melisha Dsouza
25 Sep 2018
4 min read
If you missed the Azure announcements made at Microsoft Ignite 2018, don't worry, we've got you covered. Here are some of the biggest changes and improvements the Microsoft Azure team has made to its cloud offering.

Infrastructure improvements

Azure's new capabilities to deliver the best infrastructure for every workload include:

1. GPU-enabled and high-performance VMs

To deliver the best infrastructure for every workload, Azure has announced the preview of GPU-enabled and high-performance computing virtual machines. The two new N-series virtual machines with NVIDIA GPU capabilities are the NVv2 VMs and the NDv2 VMs. The two new H-series VMs, the HB VMs and the HC VMs, are optimized for performance and cost and are aimed at HPC workloads like fluid dynamics, structural mechanics, energy exploration, weather forecasting, risk analysis, and more.

2. Networking

Azure has announced the general availability of Azure Firewall and Virtual WAN, along with previews of Azure Front Door Service, ExpressRoute Global Reach, and ExpressRoute Direct. Azure Firewall has built-in high availability and cloud scalability. Virtual WAN provides a simple, unified, global connectivity and security platform for deploying large-scale branch connectivity.

3. Improved disk storage

Microsoft has expanded the portfolio of Azure Disk offerings to deploy any app in Azure, including those that are the most IO intensive. The new previews include Ultra SSDs, Standard SSDs, and larger managed disk sizes to help deal with data-intensive workloads, with better availability, reliability, and latency compared to standard SSDs.

4. Hybrid

Microsoft has announced new hybrid capabilities to manage data, create even more consistency, and secure hybrid environments. It has introduced Azure Data Box Edge, Windows Server 2019, and Azure Stack. With AI-enabled edge computing capabilities and an OS that supports hybrid management and flexible application deployment, Azure is causing waves in the developer community.

Built-in security and management

For improved security, Azure has announced new services in preview, such as the Confidential Computing DC VM series, Secure Score, improved threat protection, and network map (preview). These will expand Azure security controls and services to protect networks, applications, data, and identities. These services are enhanced by the unique intelligence that comes from the trillions of signals Microsoft collects in running first-party services like Office 365 and Xbox.

For better management, Azure has announced the preview of Azure Blueprints. These blueprints make it easy to deploy and update Azure environments in a repeatable manner using composable artifacts such as policies, role-based access controls, and resource templates. Azure Cost Management in the Azure portal (preview) will make cost management accessible from Power BI or directly from your own custom applications.

Migration

To make migration to the cloud less challenging, Azure has announced support for Hyper-V assessments in Azure Migrate, and Azure SQL Database Managed Instance, which enables users to migrate SQL Servers to a fully managed Azure service. To further improve the migration experience, Microsoft announced that customers who migrate Windows Server or SQL Server 2008/R2 to Azure will get three years of free extended security updates on those systems, which could save some money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

Automated ML capability in Azure Machine Learning

The problem of finding the best machine learning pipeline for a given dataset scales faster than the time available for data science projects. Azure's automated machine learning enables developers to access an automated service that identifies the best machine learning pipelines for their labelled data. Data scientists are empowered with a productivity tool that also takes uncertainty into account, incorporating a probabilistic model to determine the best pipeline to try next.

To follow more of the Azure buzz, head to Microsoft's official blog.

Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Azure Functions 2.0 launches with better workload support for serverless
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

Hello 'gg', a new OS framework to execute super-fast apps on "1000s of transient functional containers"

Bhagyashree R
15 Jul 2019
4 min read
Last week at the USENIX Annual Technical Conference (ATC) 2019, a team of researchers introduced gg, an open-source framework that helps developers execute applications using thousands of parallel threads on a cloud-function service to achieve near-interactive completion times. "In the future, instead of running these tasks on a laptop, or keeping a warm cluster running in the cloud, users might push a button that spawns 10,000 parallel cloud functions to execute a large job in a few seconds from start. gg is designed to make this practical and easy," the paper reads.

At USENIX ATC, leading systems researchers present their cutting-edge systems research. The conference also offers insight into topics like virtualization, network management and troubleshooting, cloud and edge computing, security, privacy, and more.

Why the gg framework was introduced

Cloud functions, better known as serverless computing, provide developers with finer granularity and lower latency. Though they were introduced for event handling and invoking web microservices, their granularity and scalability make them a good candidate for creating a "burstable supercomputer-on-demand": a system that launches a burst-parallel swarm of thousands of cloud functions, all working on the same job. The goal is to provide results to an interactive user much faster than their own computer or a cold-booted cluster could, at a lower cost than maintaining a warm cluster for occasional tasks.

However, building applications on swarms of cloud functions poses various challenges. The paper lists some of them:

- Workers are stateless and may need to download large amounts of code and data on startup.
- Workers have limited runtime before they are killed.
- On-worker storage is limited, but much faster than off-worker storage.
- The number of available cloud workers depends on the provider's overall load and can't be known precisely upfront.
- Worker failures occur when running at large scale.
- Libraries and dependencies differ in a cloud function compared with a local machine.
- Latency to the cloud makes round trips costly.

How gg works

The gg framework aims to address these principal challenges faced by burst-parallel cloud-function applications. With gg, developers and users can build applications that burst from zero to thousands of parallel threads to achieve low latency for everyday tasks. The following shows its composition:

[Diagram] Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers

The gg framework lets you build applications on an abstraction of transient, functional containers, also known as thunks. Applications express their jobs in terms of interrelated thunks, or Linux containers, and then schedule, instantiate, and execute those thunks on a cloud-functions service. The framework is capable of containerizing and executing existing programs such as software compilation, unit tests, and video encoding with the help of short-lived cloud functions. In some cases this gives substantial performance gains, and depending on the frequency of the task it can be less expensive than keeping a comparable cluster running continuously.

The functional approach and fine-grained dependency management of gg give significant performance benefits when compiling large programs from a cold start. Summarizing the results for compiling Inkscape, an open-source vector graphics editor (table omitted; source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers): when running "cold" on AWS Lambda, gg was nearly 5x faster than an existing icecc system running on a 48-core or 384-core cluster of running VMs.

To know more in detail, read the paper: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers. You can also check out gg's code on GitHub. Also, watch the talk in which Keith Winstein, an assistant professor of computer science at Stanford University, explains the purpose of gg and demonstrates how it works:

https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s

Cloud computing trends in 2019
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Serverless Computing 101
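To make the thunk abstraction more concrete, here is a toy Python sketch of the core idea: a thunk names a deterministic computation by the hash of its code and inputs, so results can be memoized in content-addressed storage and executed by any worker. This is an illustration only, not gg's actual API:

import hashlib
import json

CACHE = {}  # stand-in for content-addressed storage (e.g. S3)

def thunk_hash(func_name, inputs):
    # A thunk's identity is the hash of its function name plus its inputs.
    payload = json.dumps({"func": func_name, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def force(func, inputs):
    # Execute the thunk only if no worker has produced its result yet.
    key = thunk_hash(func.__name__, inputs)
    if key not in CACHE:
        CACHE[key] = func(*inputs)
    return CACHE[key]

def compile_unit(source):          # hypothetical leaf task
    return f"obj({source})"

def link(*objects):                # hypothetical task depending on leaf outputs
    return f"bin[{'+'.join(objects)}]"

objs = [force(compile_unit, [s]) for s in ["a.c", "b.c"]]
print(force(link, objs))           # bin[obj(a.c)+obj(b.c)]

In gg itself, each thunk runs in an isolated container on a service such as AWS Lambda, and the memoization cache lives in shared storage rather than in a local dictionary.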

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP

Melisha Dsouza
13 Dec 2018
3 min read
Today, Google Cloud announced the alpha availability of Cloud TPU Pods: tightly coupled supercomputers built with hundreds of Google's custom Tensor Processing Unit (TPU) chips and dozens of host machines, linked via an ultrafast custom interconnect. Google states that these pods make it easier, faster, and more cost-effective to develop and deploy cutting-edge machine learning workloads on Google Cloud. Developers can iterate over training data in minutes and train huge production models in hours or days instead of weeks.

The Tensor Processing Unit (TPU) is an ASIC that powers several of Google's major products, including Translate, Photos, Search, Assistant, and Gmail, and it provides up to 11.5 petaflops of performance in a single pod.

Features of Cloud TPU Pods

#1 Proven reference models
Customers can take advantage of Google-qualified reference models that are optimized for performance, accuracy, and quality for many real-world use cases. These include object detection, language modeling, sentiment analysis, translation, image classification, and more.

#2 Connect Cloud TPUs to custom machine types
Users can connect to Cloud TPUs from custom VM types. This lets them optimally balance processor speeds, memory, and high-performance storage resources for their individual workloads.

#3 Preemptible Cloud TPUs
Preemptible Cloud TPUs are 70% cheaper than on-demand instances. Long training runs with checkpointing, or batch prediction on large datasets, can now be done at an optimal rate using Cloud TPUs.

#4 Integrated with GCP
Cloud TPUs and Google Cloud's data and analytics services are fully integrated with other GCP offerings, providing developers unified access across the entire service line. Developers can run machine learning workloads on Cloud TPUs and benefit from Google Cloud Platform's storage, networking, and data analytics technologies.

#5 Additional features
Cloud TPUs perform really well at synchronous training. The Cloud TPU software stack transparently distributes ML models across multiple TPU devices in a Cloud TPU Pod to help customers achieve scalability (a brief code sketch follows below). All Cloud TPUs are integrated with Google Cloud's high-speed storage systems, ensuring that data input pipelines can keep up with the TPUs. Users do not have to manage parameter servers, deal with complicated custom networking configurations, or set up exotic storage systems to achieve this training performance in the cloud.

Performance and cost benchmarking of Cloud TPU Pods

Google compared Cloud TPU Pods against Google Cloud VMs with NVIDIA Tesla V100 GPUs attached, using one of the MLPerf models: TensorFlow 1.12 implementations of ResNet-50 v1.5 (GPU version, TPU version), trained on the ImageNet image classification dataset. The results show that Cloud TPU Pods deliver near-linear speedups for this large-scale training task; the largest Cloud TPU Pod configuration tested (256 chips) delivers a 200x speedup over an individual V100 GPU. Check out Google's methodology page for further details on this test.

Training ResNet-50 on a full Cloud TPU v2 Pod costs almost 40% less than training the same model to the same accuracy on an n1-standard-64 Google Cloud VM with eight V100 GPUs attached, and the full Cloud TPU Pod completes the training task 27 times faster.

Head over to Google Cloud's official page to know more about Cloud TPU Pods. Alternatively, check out the Cloud TPU documentation for more insights.

Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Oracle's Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google's big enterprise Cloud market move?
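For a sense of what targeting a TPU looks like in code, here is a minimal sketch using the TensorFlow 2.x distribution APIs (the benchmarks above used TensorFlow 1.12's tooling, and the TPU name here is a hypothetical placeholder):

import tensorflow as tf

# Resolve and initialize a Cloud TPU; "my-tpu-pod" is a hypothetical TPU name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-pod")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created here are replicated across all TPU cores in the pod,
    # and training steps run synchronously across replicas.
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

# model.fit(train_dataset, epochs=...)  # input pipeline elided

The strategy object is what hides the cross-device distribution the article describes: the user writes ordinary Keras code, and the TPU software stack handles replication and the interconnect.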

Red Hat open sources Project Quay container registry

Savia Lobo
13 Nov 2019
2 min read
Yesterday, Red Hat introduced the open source Project Quay container registry, the upstream project representing the code that powers Red Hat Quay and Quay.io. Open sourced as a Red Hat commitment, Project Quay "represents the culmination of years of work around the Quay container registry since 2013 by CoreOS, and now Red Hat," the official post reads.

The Red Hat Quay container image registry provides storage and enables users to build, distribute, and deploy containers. It also helps users gain more security over their image repositories with automation, authentication, and authorization systems. It is compatible with most container environments and orchestration platforms and is available as a hosted service or on-premises.

Launched in 2013, Quay grew in popularity due to its focus on developer experience and highly responsive support, and it added capabilities such as image rollback and zero-downtime garbage collection. Quay was acquired by CoreOS in 2014 with a mission to secure the internet through automated operations. Shortly after the acquisition, the company released the on-premises offering of Quay, which is now known as Red Hat Quay.

The Quay team also created the Clair open source container security scanning project, which has been integrated with Quay since 2015 and is built directly into Project Quay. Clair enables the container security scanning feature in Red Hat Quay, which helps users identify known vulnerabilities in their container registries.

Open sourced as part of Project Quay, both the Quay and Clair code bases will help cloud-native communities lower the barrier to innovation around containers, making containers more secure and accessible. Project Quay contains a collection of open-source software licensed under Apache 2.0 and other open-source licenses, and it follows an open-source governance model with a maintainer committee. With an open community, Red Hat Quay and Quay.io users can benefit from being able to work together on the upstream code.

Project Quay will be officially launched at the OpenShift Commons Gathering on November 18 in San Diego at KubeCon 2019. To know more about this announcement, you can read Red Hat's official blog post.

Red Hat announces CentOS Stream, a "developer-forward distribution" jointly with the CentOS Project
Expanding Web Assembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club

Bhagyashree R
31 Jan 2019
2 min read
Stripe, the payments infrastructure company, has received a whopping $100 million in funding from Tiger Global Management, putting its valuation at $22.5 billion, as reported by The Information on Tuesday. Last September it secured another $245 million through a funding round also led by Tiger Global Management.

Founded in 2010 by the Irish brothers Patrick and John Collison, Stripe has become one of the most valuable "unicorns", a term used for firms worth more than $1 billion, in the U.S. The company also boasts an impressive list of clients, recently adding Google and Uber to its stable of users.

The company is now planning to expand its platform by launching a point-of-sale payments terminal package targeted at online retailers making the jump to offline. A Stripe spokesperson told CNBC, "Stripe is rapidly scaling internationally, as well as extending our platform into issuing, global fraud prevention, and physical stores with Stripe Terminal. The follow-on funding gives us more leverage in these strategic areas."

The company is also expanding its team. On Tuesday, Patrick Collison announced that Diane Greene, a member of Alphabet's board of directors, will be joining Stripe's board of directors. Joining along with Greene are Michael Moritz, a partner at Sequoia Capital; Michelle Wilson, former general counsel at Amazon; and Jonathan Chadwick, former CFO of VMware, McAfee, and Skype.

https://twitter.com/patrickc/status/1090386301642141696

In addition to Tiger Global Management, the startup has also been supported by various other investors, including Sequoia Capital, Khosla Ventures, Andreessen Horowitz, and PayPal co-founders Peter Thiel, Max Levchin, and Elon Musk.

For more details, read the full story on The Information website.

PayPal replaces Flow with TypeScript as their type checker for every new web app
After BitPay, Coinbase bans Gab accounts and its founder, Andrew Torba
Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.

Sailfish OS 3.0.2, named Oulanka, now comes with improved power management and more features

Bhagyashree R
28 Mar 2019
2 min read
Last week, Jolla announced the release of Sailfish OS 3.0.2. This release goes by the name Oulanka, after a national park in the Lapland and Northern Ostrobothnia regions of Finland. Along with 44 fixed issues, this release brings a battery saving mode, better connectivity, new device management APIs, and more.

Improved power management

Sailfish OS Oulanka comes with a battery saving mode, which is enabled by default when the battery drops below 20%. Users can also specify the battery saving threshold themselves under the "Battery" section of the settings menu.

Better connectivity

This release improves how Sailfish OS handles scenarios in which a large number of Bluetooth and WLAN devices are connected to the network: Bluetooth and WLAN network scans will no longer slow down devices. Many updates have also been made to the firewall introduced in the previous release, Sipoonkorpi, for better robustness.

Updates in the Corporate API

This release comes with several improvements to the Corporate API. New device management APIs have been added, including data counters, call statistics, location data sources, proxy settings, app auto start, roaming status, and cellular settings.

Sailfish X Beta for Xperia XA2

Sailfish X, the downloadable version of Sailfish OS for select devices, continues in beta for the XA2 with the Oulanka update. With this release, the team has improved several aspects of the Android 8.1 support beta for XA2 devices; Android apps can now connect to the internet more reliably via mobile data.

To know more about Sailfish OS Oulanka, check out the official announcement.

An early access to Sailfish 3 is here!
Linux 5.1 will come with Intel graphics, virtual memory support, and more
The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking

Bhagyashree R
18 Mar 2019
3 min read
For the past few months, Facebook and Microsoft have been working together on a new architecture based on the Open Rack standards. Last week, Facebook announced a new initiative that aims to build uniformity around the Rack & Power design. The Rack & Power Project Group is responsible for setting the rack standards designed for data centers and integrating the rack into the data center infrastructure. This project comes under a larger initiative started by Facebook called the Open Compute Project (OCP).

Why is a new version of Open Rack needed?

Today, the industry is turning to AI and ML systems to solve several difficult problems. Helpful as these systems are, they require increased power density at both the component level and the system level. Ever-increasing bandwidth demands on networking systems have led to similar problems. So, in order to improve overall system performance, it is important to get memory, processors, and system fabrics as close together as possible. The new version of Open Rack will bring greater benefits compared with the current version, Open Rack V2. "For this next version, we are collaborating to create flexible, interoperable, and scalable solutions for the community through a common OCP architecture. Accomplishing this goal will enable wider adoption of OCP technologies across multiple industries, which will benefit operators, solution providers, original design manufacturers, and configuration managers," shared Facebook in the blog post.

What are the goals of this initiative?

The new initiative aims to achieve the following goals:

- A common OCP rack architecture that enables greater sharing between Microsoft and Facebook.
- A flexible frame and power infrastructure that will support a wide range of solutions across the OCP community.
- Beyond the features needed by Facebook, additional features for the larger community, including physical security for solutions deployed in co-location facilities.
- New thermal solutions, such as liquid cooling manifolds, door-based heat exchangers, and defined physical and thermal interfaces. These solutions are currently under development by the Advanced Cooling Solutions sub-project.
- New power and battery backup solutions that scale across different rack power levels and accommodate different power input types.

To know more in detail, check out the official announcement on Facebook.

Two top executives leave Facebook soon after the pivot to privacy announcement
Facebook tweet explains 'server config change' for 14-hour outage on all its platforms
Facebook under criminal investigations for data sharing deals: NYT report

Google is looking to acquire Looker, a data analytics startup for $2.6 billion even as antitrust concerns arise in Washington

Sugandha Lahoti
07 Jun 2019
5 min read
Google has entered into an agreement to acquire data analytics startup Looker and plans to add it to its Google Cloud division. The acquisition will cost Google $2.6 billion in an all-cash transaction. After the acquisition, the Looker organization will report to Frank Bien, who will in turn report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform provides business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company had raised more than $280 million in funding, according to Crunchbase.

Looker spans the gap between two areas: data warehousing and business intelligence. Its platform includes a modeling layer where the user codifies the view of the data using a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool that provides the self-service analytics portion.

Primarily, Looker will help Google Cloud become a complete analytics solution, helping customers ingest data, visualize results, and integrate data and insights into their daily workflows. Looker plus Google Cloud will be used for:

- Connecting, analyzing, and visualizing data across Google Cloud, Azure, AWS, on-premises databases, or ISV SaaS applications
- Operationalizing BI for everyone with powerful data modeling
- Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
- Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker

Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we've worked with. One of the great things about this acquisition is that the two companies have known each other for a long time, we share very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and Looker's acquisition will hopefully make its service more attractive to corporations. Looker's CEO Frank Bien commented on the partnership as a chance to gain the scale of the Google Cloud platform. "What we're really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is Google's timing and the all-cash payment for this buyout. The FTC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an investigation into Google that would examine whether the tech giant broke antitrust law in the operation of its online and advertisement businesses. According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We're in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the acquisition has been mixed. While some are happy:

https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241

Others remain dubious. "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?'," said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or at least give it the newest features first:

https://twitter.com/DanVesset/status/1136672725060243457

Then there is the issue of whether Google will limit which clouds Looker can run on, although the company said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to give customers choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said it would be integrated with Google Assistant, and the decision was reversed only after a massive public backlash. Looker could be one such acquisition, eventually merging with Google Analytics, Google's proprietary web analytics service.

The deal is expected to close later this year, subject to regulatory approval.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google and Binomial come together to open-source Basis Universal Texture Format
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language

New – Redis 6 Compatibility for Amazon ElastiCache from AWS News Blog

Matthew Emerick
07 Oct 2020
5 min read
Since Redis 5.0 compatibility arrived for Amazon ElastiCache, there have been lots of improvements to Amazon ElastiCache for Redis, including support for upstream versions such as 5.0.6. Earlier this year, we announced Global Datastore for Redis, which lets you replicate a cluster in one region to clusters in up to two other regions. Recently we improved your ability to monitor your Redis fleet by enabling 18 additional engine and node-level CloudWatch metrics. We also added support for resource-level permission policies, allowing you to assign AWS Identity and Access Management (IAM) principal permissions to specific ElastiCache resources.

Today, I am happy to announce Redis 6 compatibility for Amazon ElastiCache for Redis. This release brings several new and important features:

- Managed Role-Based Access Control – Amazon ElastiCache for Redis 6 now provides you with the ability to create and manage users and user groups that can be used to set up Role-Based Access Control (RBAC) for Redis commands. You can now simplify your architecture while maintaining security boundaries by having several applications use the same Redis cluster without being able to access each other's data. You can also take advantage of granular access control and authorization to create administration and read-only user groups. Amazon ElastiCache enhances the new Access Control Lists (ACL) introduced in open source Redis 6 to provide a managed RBAC experience, making it easy to set up access control across several Amazon ElastiCache for Redis clusters.
- Client-Side Caching – Amazon ElastiCache for Redis 6 comes with server-side enhancements to deliver efficient client-side caching and further improve your application performance. Redis clusters now support client-side caching by tracking client requests and sending invalidation messages for data stored on the client. In addition, you can take advantage of a broadcast mode that allows clients to subscribe to a set of notifications from Redis clusters.
- Significant Operational Improvements – This release also includes several enhancements that improve application availability and reliability. Specifically, Amazon ElastiCache has improved replication under low memory conditions, especially for workloads with medium/large sized keys, by reducing latency and the time it takes to perform snapshots. Open source Redis enhancements include improvements to the expiry algorithm for faster eviction of expired keys, and various bug fixes.

Note that open source Redis 6 also announced support for encryption in transit, a capability that has already been available in Amazon ElastiCache for Redis since version 4.0.10. This release of Amazon ElastiCache for Redis 6 does not impact Amazon ElastiCache for Redis' existing support for encryption in transit.

In order to apply RBAC to a new or existing Redis 6 cluster, you first need to ensure you have a user and user group created. We'll review the process to do this below.

Using Role-Based Access Control – How it works

As an alternative to authenticating users with the Redis AUTH command, Amazon ElastiCache for Redis 6 offers Role-Based Access Control (RBAC). With RBAC, you create users and assign them specific permissions via an access string. To create, modify, and delete users and user groups, use the User Management and User Group Management sections in the ElastiCache console.

ElastiCache will automatically configure a default user with user ID and user name "default", and you can then add it, or newly created users, to new groups in User Group Management. If you want to replace the default user with your own password and access settings, create a new user with the user name set to "default" and then swap it with the original default user. We recommend using your own strong password for a default user.

The following example shows how to swap the original default user with another default user that has a modified access string, via the AWS CLI:

$ aws elasticache create-user --user-id "new-default-user" --user-name "default" --engine "REDIS" --passwords "a-str0ng-pa))word" --access-string "off +get ~keys*"

Create a user group and add the user you created previously:

$ aws elasticache create-user-group --user-group-id "new-default-group" --engine "REDIS" --user-ids "default"

Swap the new default user with the original default user:

$ aws elasticache modify-user-group --user-group-id "new-default-group" --user-ids-to-add "new-default-user" --user-ids-to-remove "default"

You can also modify a user's password or change its access permissions using the modify-user command, or remove a specific user using the delete-user command; the deleted user will be removed from any user groups to which it belongs. Similarly, you can modify a user group by adding new users and/or removing current users using the modify-user-group command, or delete a user group using the delete-user-group command. Note that the user group itself, not the users belonging to the group, will be deleted.

Once you have created a user group and added users, you can assign the user group to a replication group, or migrate between Redis AUTH and RBAC. For more information, see the documentation in detail.

Redis 6 cluster for ElastiCache – Getting Started

As usual, you can use the ElastiCache console, CLI, APIs, or a CloudFormation template to create a new Redis 6 cluster. I'll use the console: choose Redis from the navigation pane and click Create with the following settings. Select the "Encryption in-transit" checkbox to ensure you can see the "Access Control" options. For Access Control, you can select either a user group access control list (the RBAC feature) or the Redis AUTH default user. If you select RBAC, you can choose one of the available user groups.

My cluster is up and running within minutes. You can also use the in-place upgrade feature on an existing cluster: select the cluster, click Actions and Modify, then change the engine version from the 5.0.6-compatible engine to 6.x.

Now Available

Amazon ElastiCache for Redis 6 is now available in all AWS regions. For a list of ElastiCache for Redis supported versions, refer to the documentation. Please send us feedback either in the AWS forum for Amazon ElastiCache or through AWS support, or your account team.

– Channy;
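Once RBAC is in place, applications authenticate with a user name as well as a password. Here is a minimal sketch using the redis-py client; the endpoint, user, and password are hypothetical (redis-py supports the username parameter from version 3.4.0 onward, and TLS is required because RBAC depends on in-transit encryption):

import redis

# Hypothetical cluster endpoint and RBAC user; ssl=True because the
# cluster was created with encryption in transit enabled.
r = redis.Redis(
    host="my-redis6-cluster.abc123.use1.cache.amazonaws.com",
    port=6379,
    username="app-user",  # an RBAC user with, say, an "on +get ~app:*" access string
    password="an0ther-str0ng-pa))word",
    ssl=True,
)

print(r.get("app:greeting"))  # succeeds only if the user's access string allows it

If the access string does not grant a command or key pattern, the server rejects the call with a permissions error, which is exactly the isolation between applications that the RBAC feature above is meant to provide.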