
Tech News - Cloud & Networking

376 Articles

Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads

Melisha Dsouza
10 Dec 2018
3 min read
Last week, Pivotal announced the ‘Pivotal Function Service’ (PFS) in alpha. Until now, Pivotal has focused on making open-source tools for enterprise developers but has lacked a serverless component in its suite of offerings. That changes with the launch of PFS. PFS is designed to work both on-premises and in the cloud in a cloud-native fashion, while being open source. It is a Kubernetes-based, multi-cloud function service offering customers a single platform for all their workloads on any cloud. Developers can deploy and operate databases, batch jobs, web APIs, legacy apps, event-driven functions, and many other workloads the same way everywhere, thanks to the Pivotal Cloud Foundry (PCF) platform, which now comprises Pivotal Application Service (PAS), Pivotal Container Service (PKS), and Pivotal Function Service (PFS). Providing the same developer and operator experience on every public or private cloud, PFS is event-oriented, with built-in components that make it easy to architect loosely coupled, streaming systems. Its buildpacks simplify packaging, and it is operator-friendly, providing a secure, low-touch experience running atop Kubernetes. Because PFS works on any cloud as an open product, it stands apart from the comparable services of cloud providers like Amazon, Google, and Microsoft, which run exclusively on their own clouds.

Features of PFS
- PFS is built on Knative, an open-source project led by Google that simplifies how developers deploy functions atop Kubernetes and Istio. PFS runs on Kubernetes and Istio and helps customers take advantage of both technologies while abstracting away their complexity.
- PFS allows customers to use familiar, container-based workflows for serverless scenarios.
- PFS Event Sources help customers create feeds from external event sources such as GitHub webhooks, blob stores, and database services.
- PFS connects easily with popular message brokers such as Kafka, Google Pub/Sub, and RabbitMQ, which provide reliable backing services for messaging channels.
- Pivotal has continued to develop the riff invoker model in PFS to help developers deliver both streaming and non-streaming function code using simple, language-idiomatic interfaces.

The new package includes several key components for developers, including a native eventing capability that provides a way to build rich event triggers to call whatever functionality a developer requires within a Kubernetes-based environment. This is particularly important for companies with hybrid deployments, which need to manage events across on-premises and cloud environments in a seamless way. A sketch of such a language-idiomatic function is given below.
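To make the riff-style, language-idiomatic interface concrete, here is a minimal sketch of the kind of plain function the platform invokes; the exact signature conventions vary by invoker version, so treat the names here as illustrative rather than as PFS's documented API.

```python
# A function as the riff Python invoker model frames it: ordinary code,
# with the event plumbing (channels, scaling, retries) handled by the platform.
def handler(payload):
    # Invoked once per event, e.g. from a Kafka-backed channel.
    return payload.upper()
```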
Head over to Pivotal’s official blog to know more about this announcement.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl


Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format

Savia Lobo
05 Dec 2018
3 min read
At DockerCon Europe 2018 in Barcelona, Microsoft, in collaboration with the Docker community, announced Cloud Native Application Bundle (CNAB), an open-source, cloud-agnostic specification for packaging and running distributed applications.

Cloud Native Application Bundle (CNAB)
Cloud Native Application Bundle (CNAB) is the combined effort of Microsoft and the Docker community to provide a single all-in-one packaging format that unifies the management of multi-service, distributed applications across different toolchains. Docker is the first to implement CNAB for containerized applications and plans to expand CNAB across the Docker platform to support new application development, deployment, and lifecycle management. CNAB allows users to define resources that can be deployed to any combination of runtime environments and tooling, including Docker Engine, Kubernetes, Helm, automation tools, and cloud services.

Patrick Chanezon, a member of the technical staff at Docker Inc., writes, “Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry.” Docker also plans to enable organizations to deploy and manage CNAB-based applications in Docker Enterprise soon.

Scott Johnston, Chief Product Officer at Docker, said, “this is not a Docker proprietary thing, this is not a Microsoft proprietary thing, it can take Compose files as inputs, it can take Helm charts as inputs, it can take Kubernetes YAML as inputs, it can take serverless artifacts as inputs.”

According to Microsoft, it partnered with Docker to solve problems that ISVs (Independent Software Vendors) and enterprises face, including:
- Being able to describe an application as a single artifact, even when it is composed of a variety of cloud technologies
- Provisioning applications without having to master dozens of tools
- Managing the lifecycle (particularly installation, upgrade, and deletion) of applications

Added features that CNAB brings include:
- Manage discrete resources as a single logical unit that comprises an app
- Use and define operational verbs for lifecycle management of an app
- Sign and digitally verify a bundle, even when the underlying technology doesn’t natively support it
- Attest and digitally verify that the bundle has achieved a given state, to control how the bundle can be used
- Enable the export of the bundle and all its dependencies to reliably reproduce it in another environment, including offline environments (IoT edge, air-gapped environments)
- Store bundles in repositories for remote installation

According to a user on a Hacker News thread, “The goal with CNAB is to be able to version your application with all of its components and then ship that as one logical unit making it reproducible. The package format is flexible enough to let you use the tooling that you're already using”. Another user said, “CNAB makes reproducibility possible by providing unified lifecycle management, packaging, and distribution. Of course, if bundle authors don't take care to work around problems with imperative logic, that's a risk.” To know more about Cloud Native Application Bundle (CNAB) in detail, visit the Microsoft blog. A sketch of the kind of metadata a bundle carries follows.
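To give a feel for the format, here is a sketch of the kind of metadata a CNAB bundle descriptor (bundle.json) carries, written out from Python for illustration; the field names follow the draft specification as announced and may have evolved since, and all values are placeholders.

```python
import json

# An illustrative bundle: one invocation image holds the installer logic
# (Compose files, Helm charts, Terraform configs, ...), while parameters let
# the same bundle drive install/upgrade/delete in different environments.
bundle = {
    "name": "helloworld",
    "version": "0.1.0",
    "description": "An example multi-service distributed application",
    "invocationImages": [
        {"imageType": "docker", "image": "example/helloworld-cnab:0.1.0"}
    ],
    "parameters": {
        "backend_port": {"type": "int", "defaultValue": 80}
    },
}

with open("bundle.json", "w") as f:
    json.dump(bundle, f, indent=2)
```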
Microsoft and Mastercard partner to build a universally-recognized digital identity
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Docker announces Docker Desktop Enterprise

Savia Lobo
05 Dec 2018
3 min read
Yesterday, at DockerCon Europe 2018, the Docker community announced Docker Desktop Enterprise, an easy, fast, and secure way to build production-ready containerized applications.

Docker Desktop Enterprise
Docker Desktop Enterprise is a new addition to Docker’s desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. The Enterprise version enables developers to work with the frameworks and languages they are comfortable with, and assists IT teams in safely configuring, deploying, and managing development environments while adhering to corporate standards and practices. The Enterprise version thus enables organizations to quickly move containerized applications from development to production and reduce their time to market.

Features of Docker Desktop Enterprise
Enterprise manageability: With Docker Desktop Enterprise, IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production. For the IT team, Docker Desktop Enterprise is packaged as standard MSI (Windows) and PKG (Mac) distribution files. These files work with existing endpoint management tools, with lockable settings via policy files. This edition also provides developers with ready-to-code, customized, and approved application templates.

Enterprise deployment and configuration packaging: IT desktop admins can deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools using standard MSI and PKG files. Desktop administrators can also enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience. Application architects provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs.

Increased developer productivity and production-ready containerized applications: Developers can quickly use company-provided application templates that instantly replicate production-approved application configurations on the local desktop, via configurable version packs. With these version packs, developers can synchronize their desktop development environment with the same Docker API and Kubernetes versions used in production with Docker Enterprise. No Docker CLI commands are required to get started with configurable version packs. Developers can also use the Application Designer interface's template-based workflows for creating containerized applications. For those who have never launched a container before, the Application Designer interface provides the foundational container artifacts and the organization's skeleton code to get started with containers in minutes.

Read more about Docker Desktop Enterprise here.

Gremlin makes chaos engineering with Docker easier with new container discovery feature
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login
Zeit releases Serverless Docker in beta

Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!

Melisha Dsouza
05 Dec 2018
4 min read
“I’m excited to share some of the latest things we’re working on at Microsoft to help developers achieve more when building the applications of tomorrow, today.”
- Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft

On the 4th of December, at the Microsoft Connect(); 2018 conference, the tech giant announced a series of updates in its Azure domain. With an aim to make it easy for operators and developers to adopt and use Kubernetes, Microsoft announced the public preview of Azure Kubernetes Service virtual nodes and Azure Container Instances GPU support. It also announced the Azure Pipelines extension for Visual Studio Code, GitHub Releases support, and much more.

#1 Azure Kubernetes Service virtual nodes and Azure Container Instances GPU support enter public preview
Azure Kubernetes Service (AKS) virtual nodes are powered by the open-source Virtual Kubelet technology. This release will enable customers to fully experience serverless Kubernetes: they can extend the consistent, powerful Kubernetes API (provided by AKS) with the scalable, container-based compute capacity of ACI. With AKS virtual nodes, customers can precisely allocate the number of additional containers needed, rather than waiting for additional VM-based nodes to spin up. ACI is billed by the second, based on the resources a customer specifies, enabling them to match their costs to their workloads. This, in turn, lets applications built on the API provided by Kubernetes reap the benefits of serverless platforms without having to manage any additional compute resources.

Adding GPU support to ACI enables a new class of compute-intensive applications through AKS virtual nodes. The blog says that initially, ACI will support the K80, P100, and V100 GPUs from Nvidia, and users can specify the type and number of GPUs they would like for their containers.

#2 Azure Pipelines extension for Visual Studio Code
The Azure Pipelines extension for Visual Studio Code gives developers syntax highlighting and IntelliSense that is aware of the Azure Pipelines YAML format. Previously, developers had to remember exactly which keys are legal, flipping back and forth to the documentation while keeping track of their place in the file. With the extension, they are alerted in red “ink” if they write “tasks:” instead of “task:”, and they can press Ctrl+Space (or Cmd+Space on macOS) to see what’s accepted at that point in the file.

#3 GitHub Releases
Developers can now seamlessly manage GitHub Releases using Azure Pipelines. This allows them to create new releases, modify drafts, or discard older drafts. The new GitHub Release task supports actions such as attaching binary files, publishing draft releases, marking a release as a pre-release, and much more.

#4 Azure IoT Edge support in the Azure DevOps project
Azure DevOps Projects enables developers to set up a fully functional DevOps pipeline straight from the Azure portal, customized to the programming language and application platform they want to use, along with the Azure functionality they want to leverage and deploy to. The community has shown a growing interest in using Azure DevOps to build and deploy IoT-based solutions, and Azure IoT Edge support in the Azure DevOps project workflow will make it easy for customers to achieve this goal.
Customers can easily deploy IoT Edge modules written in Node.js, Python, Java, .NET Core, or C, helping them develop, build, and deploy their IoT Edge applications. This support provides:
- A Git code repository with a sample IoT Edge application written in Node.js, Python, Java, .NET Core, or C
- A build and a release pipeline set up for deployment to Azure
- Easy provisioning of all Azure resources required for Azure IoT Edge

#5 ServiceNow integration with Azure Pipelines
Azure has joined forces with ServiceNow, an organization focused on automating routine activities, tasks, and processes at work to help enterprises gain efficiencies and increase the productivity of their workforce. Developers can now automate the deployment process using Azure Pipelines and use ServiceNow Change Management for risk assessment, scheduling, approvals, and oversight while updating production.

You can head over to Microsoft’s official blog to know more about these announcements.

Microsoft and Mastercard partner to build a universally-recognized digital identity
Microsoft open sources Simple Encrypted Arithmetic Library (SEAL) 3.1.0, with aims to standardize homomorphic encryption
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser

Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes

Melisha Dsouza
05 Dec 2018
2 min read
On 3rd December, Stripe announced the open-sourcing of Skycfg, a configuration builder for Kubernetes. Stripe developed Skycfg as an extension library for the Starlark language that adds support for constructing Protocol Buffer messages. The team states that as the implementation of Skycfg stabilizes, the public API surface will be expanded so that Skycfg can be combined with other Starlark extensions.

Benefits of Skycfg
- Type safety: Skycfg uses Protobuf, which has a statically-typed data model, so the type of every field is known to Skycfg while it builds a configuration. Users are freed from the risk of accidentally assigning a string to a number, assigning a struct to a different struct, or forgetting to quote a YAML value.
- Users can reduce duplicated typing and share logic by defining helper functions. Starlark supports importing modules from other files, which can be used to share common code between configurations; these modules can shield service owners from complex Kubernetes logic.
- Skycfg supports limited dynamic behavior through the use of context variables, which let the Go caller pass arbitrary key:value pairs in the ctx parameter.
- Skycfg simplifies the configuration of Kubernetes services, Envoy routes, Terraform resources, and other complex configuration data.

A minimal configuration in this style is sketched below.
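To give a flavor of what such a configuration looks like, here is a minimal sketch in Starlark (a Python dialect); the proto package paths, message fields, and the "replicas" context variable are illustrative, loosely following the style of the examples in Skycfg's README rather than reproducing its exact API.

```python
# Starlark, evaluated by Skycfg from a Go host; proto.package is supplied
# by the host program.
appsv1 = proto.package("k8s.io.api.apps.v1")
metav1 = proto.package("k8s.io.apimachinery.pkg.apis.meta.v1")

def main(ctx):
    # ctx.vars carries the key:value pairs passed by the Go caller, giving
    # the limited dynamic behavior described above.
    replicas = int(ctx.vars.get("replicas", "2"))

    # Fields are type-checked as messages are built: assigning a string to
    # an int field fails at config-build time instead of at deploy time.
    return [appsv1.Deployment(
        metadata = metav1.ObjectMeta(name = "hello"),
        spec = appsv1.DeploymentSpec(replicas = replicas),
    )]
```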
Head over to GitHub for all the code and supporting files.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl


Kubernetes 1.13 released with new features and fixes to a major security flaw

Prasad Ramesh
04 Dec 2018
3 min read
A privilege escalation flaw in Kubernetes was discussed on GitHub last week, and Red Hat has since released patches for it. Yesterday, Kubernetes 1.13 was also released.

The security flaw
A recent GitHub issue outlines the flaw. Tracked as CVE-2018-1002105, it allowed unauthorized users to craft special requests and establish a connection to a backend server through the Kubernetes API server, then send arbitrary requests over that same connection directly to the backend. Red Hat released patches for this vulnerability yesterday. All Kubernetes-based products are affected. The flaw has now been patched, and as Red Hat classifies the impact as critical, a version upgrade is strongly recommended if you’re running an affected product. You can find more details on the Red Hat website. Let’s now look at the new features in Kubernetes 1.13 beyond the security patch.

kubeadm is GA in Kubernetes 1.13
kubeadm is an essential tool for managing the lifecycle of a cluster, from creation through configuration to upgrade, and it is now officially GA. The tool handles bootstrapping of production clusters on current hardware and configuration of core Kubernetes components. With the GA release, advanced features are available around pluggability and configurability. kubeadm aims to be a toolbox for both admins and automated, higher-level systems.

Container Storage Interface (CSI) is also GA
The Container Storage Interface (CSI) is generally available in Kubernetes 1.13. It was introduced as alpha in Kubernetes 1.9 and beta in Kubernetes 1.10. CSI makes the Kubernetes volume layer truly extensible, allowing third-party storage providers to write plugins that interoperate with Kubernetes without having to modify the core code.

CoreDNS replaces kube-dns as the default DNS server
CoreDNS is replacing kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides an extensible, backwards-compatible integration with Kubernetes. CoreDNS is a single executable and a single process. It supports flexible use cases by allowing custom DNS entries, and it is written in Go, making it memory-safe. kube-dns will be supported for at least one more release.

Beyond these, there are other feature updates, such as support for third-party monitoring and more features graduating to stable and beta. For more details on the release, visit the Kubernetes website.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

An update on Bcachefs, the “next generation Linux filesystem”

Melisha Dsouza
03 Dec 2018
3 min read
Kent Overstreet announced Bcachefs as “the COW filesystem for Linux that won't eat your data” in 2015. Since then, the filesystem has undergone numerous updates and patches to get where it is today. On the 1st of December, Overstreet published an update on the problems and improvements currently being worked on in Bcachefs.

Status update on Bcachefs
Since the last update, Overstreet has focused on two major areas of improvement: atomicity of filesystem operations and persistence of allocation information (per-bucket sector counts). Filesystem operations that had anything to do with i_nlink were not atomic, so on startup the system had to scan, recalculate i_nlink, and delete no-longer-referenced inodes. Also, because allocation information was not persisted, the system had to recalculate all the disk space accounting on startup. The team has now made everything fully atomic except for fallocate/fcollapse/etc. After an unclean shutdown, the only thing left to do is scan the inodes btree for inodes that have been deleted.

Erasure coding is about 80% done in Bcachefs. Overstreet is now focused on persistent allocation information, which will then allow him to work on ‘reflink’, a feature useful to the company funding Bcachefs development. Reflinked extent refcounts will be much too big to keep in memory, so they will have to be kept in a btree and updated whenever extents are updated; the infrastructure needed to make that happen also depends on making disk space accounting persistent. After all of these updates, he claims, Bcachefs will have fast mounts (including after unclean shutdowns). He is also working on improvements to disk space accounting for multi-device filesystems, which will lead up to fast mounts after clean shutdowns. To know whether they can safely mount in degraded mode, users will have to store a list of all the combinations of disks that have data replicated across them (or that are in an erasure-coded stripe), without any kind of fixed layout like regular RAID has.

Why should you choose Bcachefs?
Overstreet states that Bcachefs is stable and fast and has a small, clean code base, along with the features necessary to be a modern Linux filesystem. It has a long list of features, completed or in progress:
- Copy on write (COW), like zfs or btrfs
- Full data and metadata checksumming
- Caching
- Compression
- Encryption
- Snapshots
- Scalability

Bcachefs prioritizes robustness and reliability
According to Kent, Bcachefs ensures that users won't lose their data. Bcachefs is an extension of bcache, which was designed as a caching layer to improve block I/O performance: it uses a solid-state drive as a cache for a (slower, larger) underlying storage device. Mainline bcache is not a typical filesystem but looks like a special kind of block device. It handles the movement of blocks of data between fast and slow storage, ensuring that the most frequently used data is kept on the faster device, and it manages data in a way that yields high performance while ensuring that no data is ever lost, even when an unclean shutdown takes place.

You can head over to LKML.org for more information on this announcement.
Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Aviatrix introduces Aviatrix Orchestrator to provide powerful orchestration for AWS Transit Network Gateway at re:Invent 2018

Bhagyashree R
30 Nov 2018
2 min read
Yesterday, at Amazon re:Invent, Aviatrix, a company whose tools help users manage cloud deployments, announced and demonstrated Aviatrix Orchestrator. The new feature makes connecting multiple networks much easier: essentially, it unifies the management of both AWS native networking services and Aviatrix services via a single management console.

How does Aviatrix Orchestrator support AWS Transit Gateway?
AWS Transit Gateway helps customers interconnect their virtual private clouds (VPCs) and their on-premises networks through a single gateway. Users only need to create and manage a single connection from the central gateway to each Amazon VPC, on-premises data center, or remote office across their network. The gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes.

Aviatrix Orchestrator adds an automation layer to AWS Transit Gateway that allows users to provision and implement route domains securely and accurately. Users can automatically configure and propagate segmentation policies and leverage built-in troubleshooting and visualization tools for monitoring the entire environment. Some of the advantages of combining Aviatrix Orchestrator and AWS Transit Gateway include:
- Ensuring your AWS network follows virtual private cloud segmentation best practices
- Limiting lateral movement in the event of a security breach
- Reducing the impact of human error by removing the need for potentially tedious manual configuration
- Minimizing the blast radius that can result from misconfigurations
- Replacing a flat architecture with a transit architecture

Aviatrix Orchestrator is now available as an optional feature of the Aviatrix AVX Controller. New customers can launch the Aviatrix Secure Networking Platform AMI from the AWS Marketplace to get access to this functionality; existing customers can upgrade to the latest version of the AVX software to use this feature. The hub-and-spoke model is sketched below.
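For reference, the underlying hub-and-spoke setup that Aviatrix Orchestrator automates can be sketched with boto3 (the raw AWS API); the VPC and subnet IDs below are placeholders, and Orchestrator's added value, route-domain segmentation and policy propagation, happens on top of calls like these.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the central hub: one Transit Gateway for the region.
tgw = ec2.create_transit_gateway(Description="demo transit hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each spoke VPC to the hub with a single connection.
for vpc_id, subnet_ids in [
    ("vpc-11111111", ["subnet-aaaaaaaa"]),  # hypothetical spoke 1
    ("vpc-22222222", ["subnet-bbbbbbbb"]),  # hypothetical spoke 2
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```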
For more detail, visit the Aviatrix website.

cstar: Spotify’s Cassandra orchestration tool is now open source!
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases


Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Melisha Dsouza
30 Nov 2018
4 min read
The second-to-last day of Amazon re:Invent 2018 ended on a high note. AWS announced two new features, Lambda Layers and the Lambda Runtime API, that claim to “make serverless development even easier”. In addition, AWS announced that Application Load Balancers will now invoke Lambda functions to serve HTTP(S) requests, and that Lambda now supports the Ruby language.

#1 Lambda Layers
Lambda Layers allow developers to centrally manage code and data shared across multiple functions. Instead of packaging and deploying this shared code together with every function that uses it, developers can put common components in a ZIP file and upload it as a Lambda Layer. Layers can be used within an AWS account, shared between accounts, or shared publicly with the developer community. AWS is also publishing a public layer that includes NumPy and SciPy; it is prebuilt and optimized to help users build data processing and machine learning applications quickly. Developers can include additional files or data for their functions, including binaries such as FFmpeg or ImageMagick, or dependencies such as NumPy for Python. These layers are added to the function's package when it is published. Layers can also be versioned to manage updates, and each version is immutable. When a version is deleted or its permissions are revoked, developers can no longer create new functions with it; functions that already use it continue to work. Lambda Layers help keep function code smaller and more focused on what the application is built to do; in addition to faster deployments, because less code must be packaged and uploaded, code dependencies can be reused. A sketch of publishing and using a layer follows.
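Here is a minimal boto3 sketch of the Layers workflow described above; the layer name, zip files, role ARN, and handler are hypothetical placeholders.

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Publish shared code (e.g. common helpers) once, as a layer.
with open("shared-libs.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="shared-libs",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Functions then reference the layer instead of bundling the shared code;
# pinning the version ARN keeps each deployment immutable.
with open("function-only.zip", "rb") as f:
    lam.create_function(
        FunctionName="my-handler",
        Runtime="python3.7",
        Role="arn:aws:iam::123456789012:role/lambda-exec",  # placeholder role
        Handler="app.handler",
        Code={"ZipFile": f.read()},
        Layers=[layer["LayerVersionArn"]],
    )
```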
#2 Lambda Runtime API
This is a simple interface for using any programming language, or a specific language version, to develop functions. Runtimes can be shared as layers, allowing developers to author Lambda functions in a programming language of their choice. Developers using the Runtime API have to bundle the runtime with their application artifact or as a Lambda layer that the application uses. When creating or updating a function, users can select a custom runtime. The function must include (in its code or in a layer) an executable file called bootstrap, which is responsible for the communication between the code and the Lambda environment. As of now, AWS has made open-source C++ and Rust runtimes available. Other open-source runtimes that may become available soon include:
- Erlang (Alert Logic)
- Elixir (Alert Logic)
- Cobol (Blu Age)
- Node.js (NodeSource N|Solid)
- PHP (Stackery)

The Runtime API signals how AWS will support new languages in Lambda. The C++ runtime pairs simplicity and expressiveness with good performance and a low memory footprint, while the Rust runtime makes it easy to write highly performant Lambda functions in Rust.

#3 Application Load Balancers can invoke Lambda functions to serve HTTP(S) requests
This new functionality enables users to access serverless applications from any HTTP client, including web browsers. Users can also route requests to different Lambda functions based on the requested content. An Application Load Balancer can serve as a common HTTP endpoint to simplify operations and monitoring for applications that use both servers and serverless computing.

#4 Ruby is now a supported language for AWS Lambda
Developers can write Lambda functions as idiomatic Ruby code and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default, making it easy and quick for functions to interact directly with AWS resources. Ruby on Lambda can be used either through the AWS Management Console or the AWS SAM CLI, so developers benefit from the reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

Head over to What’s new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re: Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer

Red Hat acquires Israeli multi-cloud storage software company, NooBaa

Savia Lobo
29 Nov 2018
3 min read
On Tuesday, Red Hat announced that it has acquired NooBaa, an Israel-based multi-cloud storage software company. This is Red Hat’s first acquisition since IBM announced its own acquisition of Red Hat in October; the NooBaa deal is not subject to IBM’s approval, as IBM’s acquisition of Red Hat has not yet closed. Earlier this month, Red Hat CEO Jim Whitehurst said, “Until the transaction closes, it is business as usual. For example, equity practices will continue until the close of the transaction, Red Hat M&A will continue as normal, and our product roadmap remains the same.”

NooBaa, founded in 2013, addresses the need for greater visibility and control over unstructured data spread across distributed environments. The company developed a data platform designed to serve as an abstraction layer over existing storage infrastructure. This abstraction not only enables data portability from one cloud to another but also allows users to manage data stored in multiple locations as a single, coherent data set that an application can interact with. NooBaa's technologies complement and enhance Red Hat's portfolio of hybrid cloud technologies, including Red Hat OpenShift Container Platform, Red Hat OpenShift Container Storage, and Red Hat Ceph Storage. Together, these technologies are designed to provide users with a set of powerful, consistent, and cohesive capabilities for managing application, compute, storage, and data resources across public and private infrastructures.

Ranga Rangachari, VP and GM of Red Hat's storage and hyper-converged infrastructure, said, “Data portability is a key imperative for organizations building and deploying cloud-native applications across private and multiple clouds. NooBaa’s technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today’s hybrid and multi-cloud world. We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies.” He further added, “By abstracting the underlying cloud storage infrastructure for developers, NooBaa provides a common set of interfaces and advanced data services for cloud-native applications. Developers can also read and write to a single consistent endpoint without worrying about the underlying storage infrastructure.”

To know more about this news in detail, head over to Red Hat’s official announcement.

Red Hat announces full support for Clang/LLVM, Go, and Rust
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
4 reasons IBM bought Red Hat for $34 billion

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)

Bhagyashree R
29 Nov 2018
3 min read
Yesterday, the Linux Foundation announced that it is joining hands with the RISC-V Foundation to drive open source development and adoption of the RISC-V instruction set architecture (ISA).
https://twitter.com/risc_v/status/1067553703685750785

The RISC-V Foundation is a non-profit corporation responsible for directing the future development of the RISC-V ISA. Since its formation, the RISC-V Foundation has grown quickly and now includes more than 100 member organizations. With this collaboration, the foundations aim to further grow the RISC-V ecosystem and provide improved support for the development of new applications and architectures across all computing platforms.

Rick O’Connor, the executive director of the RISC-V Foundation, said, “With the rapid international adoption of the RISC-V ISA, we need increased scale and resources to support the explosive growth of the RISC-V ecosystem. The Linux Foundation is an ideal partner given the open source nature of both organizations. This joint collaboration with the Linux Foundation will enable the RISC-V Foundation to offer more robust support and educational tools for the active RISC-V community, and enable operating systems, hardware implementations and development tools to scale faster.”

The Linux Foundation will provide governance, best practices for open source development, and resources such as training programs and infrastructure tools, and it will also help RISC-V with community outreach, marketing, and legal expertise. Jim Zemlin, the executive director of the Linux Foundation, believes that RISC-V has great potential, given its popularity in areas like AI, machine learning, and IoT. He said, “RISC-V has great traction in a number of markets with applications for AI, machine learning, IoT, augmented reality, cloud, data centers, semiconductors, networking and more. RISC-V is a technology that has the potential to greatly advance open hardware architecture. We look forward to collaborating with the RISC-V Foundation to advance RISC-V ISA adoption and build a strong ecosystem globally.”

The two foundations have already started working on a pair of getting-started guides for running Zephyr, a small, scalable open source real-time operating system (RTOS) optimized for resource-constrained devices. They are also holding the RISC-V Summit, a four-day event running December 3-6 in Santa Clara. The summit will include sessions on the RISC-V ISA architecture, commercial and open-source implementations, software and silicon, vectors and security, applications and accelerators, and much more.

Read the complete announcement on the Linux Foundation’s official website.

Uber becomes a Gold member of the Linux Foundation
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
Google becomes new platinum member of the Linux foundation

IPv6 support to be automatically rolled out for most Netlify Application Delivery Network users

Melisha Dsouza
29 Nov 2018
3 min read
Earlier this week, Netlify announced in a blog post that it has begun the rollout of IPv6 support on the Netlify Application Delivery Network. Netlify has adopted IPv6 support as a solution to the IPv4 address capacity problem. This news comes right after the announcement that Netlify raised $30 million for a new ‘Application Delivery Network’, aiming to replace servers and infrastructure management.

Netlify provides developers with an all-in-one workflow to build, deploy, and manage modern web projects. Its ‘Application Delivery Network’ is a new platform for the web that will assist web developers in building newer web-based applications. There is no need for developers to set up or manage servers, as all content and applications are created directly on a global network. It removes the dependency on origin infrastructure, allowing companies to host entire applications globally using APIs and microservices.

An IP address is assigned to every server connected to the internet, and Netlify explains that the traditionally used IPv4 address pool is getting smaller as the internet keeps expanding. This is where IPv6 steps in. IPv6 defines an IP address as a 128-bit entity instead of the 32-bit integers of IPv4. For example, IPv4 defines an address such as 167.99.129.42, while an IPv6 address looks like 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Even though the IPv6 format is harder to remember, it creates vastly more possible addresses to support the rapid growth of the internet. In addition to more efficient routing and packet processing, IPv6 also provides better security than IPv4, because IPSec, which provides confidentiality, authentication, and data integrity, is baked into IPv6. The difference in address size is easy to see with the short Python check sketched below.
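This is just a local illustration of the two address formats using Python's standard ipaddress module, not anything Netlify-specific.

```python
import ipaddress

v4 = ipaddress.ip_address("167.99.129.42")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version, v4.max_prefixlen)  # 4 32  -> a 32-bit address space
print(v6.version, v6.max_prefixlen)  # 6 128 -> a 128-bit address space
print(v6.compressed)                 # 2001:db8:85a3::8a2e:370:7334
```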
According to the blog post, users serving their sites on a subdomain of netlify.com or using custom domains registered with an external domain registrar will automatically begin using IPv6 on their ADN. Customers using Netlify for DNS management can go to the Domains section of the dashboard and enable IPv6 for each of their domains. Customers with a complex or bespoke DNS configuration, and enterprise customers using Netlify’s Enterprise ADN infrastructure, are advised to contact Netlify’s support team or their account manager to ensure that their specific configuration is migrated to IPv6 appropriately.

Netlify’s users have received this news well:
https://twitter.com/sethvargo/status/1067152518638116864

Hacker News is also flooded with positive comments for Netlify. The company has started off on the right foot; it will be interesting to see what customers think after enabling IPv6 for their Netlify ADN. Head over to Netlify’s blog for more insights on this news.

Cloudflare’s 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
libp2p: the modular P2P network stack by IPFS for better decentralized computing


Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019

Melisha Dsouza
28 Nov 2018
4 min read
Day 2 of the Amazon AWS re:Invent 2018 conference kicked off with just as much enthusiasm as day 1. With more announcements and releases scheduled for the day, the conference is proving to be a real treat for AWS developers. Amongst announcements like Amazon Comprehend Medical and new container products in the AWS Marketplace, Amazon also announced Amazon DynamoDB Transactions and Amazon CloudWatch Logs Insights. We will also take a look at Amazon re:Inforce 2019, a new conference dedicated to cloud security.

Amazon DynamoDB Transactions
Customers have used Amazon DynamoDB for many use cases, from building microservices and mobile backends to implementing gaming and Internet of Things (IoT) solutions. Amazon DynamoDB is a non-relational database delivering reliable performance at any scale. It offers built-in security, backup and restore, and in-memory caching, and it is a fully managed, multi-region, multi-master database providing consistent single-digit-millisecond latency. DynamoDB's new native support for transactions helps developers easily implement business logic that requires multiple, all-or-nothing operations across one or more tables. With DynamoDB transactions, users get atomicity, consistency, isolation, and durability (ACID) across one or more tables within a single AWS account and region. It is the only non-relational database that supports transactions across multiple partitions and tables. Two new DynamoDB operations have been introduced for handling transactions:
- TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. It can optionally check for prerequisite conditions that need to be satisfied before making updates.
- TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations. If this request is issued on an item that is part of an active write transaction, the read transaction is canceled.

A sketch of a transactional write is given after the CloudWatch section below.

Amazon CloudWatch Logs Insights
Many AWS services create logs. The data points, patterns, trends, and insights embedded within these logs can be used to understand how applications and AWS resources are behaving, identify room for improvement, and address operational issues. However, raw logs are enormous, which makes analysis difficult: individual AWS customers routinely generate 100 terabytes or more of log files each day, so these operations become complex and time-consuming. Enter CloudWatch Logs Insights, designed to work at cloud scale with no setup or maintenance required. It churns through massive logs in seconds and provides fast, interactive queries and visualizations. CloudWatch Logs Insights includes a sophisticated ad-hoc query language with commands to perform complicated operations efficiently. It is a fully managed service that can handle any log format and auto-discovers fields from JSON logs. What's more, users can visualize query results using line and stacked-area charts, and add queries to a CloudWatch dashboard.
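Here is a minimal boto3 sketch of the new TransactWriteItems operation; the table, keys, and amounts are hypothetical, and the two updates either both commit or both roll back.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Debit one account and credit another atomically. The condition on the
# first item aborts the whole transaction if funds are insufficient.
ddb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "alice"}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                "ConditionExpression": "Balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "bob"}},
                "UpdateExpression": "SET Balance = Balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ]
)
```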
AWS re:Inforce 2019
In addition to these releases, Amazon announced that AWS is launching a conference dedicated to cloud security, called ‘AWS re:Inforce’, for the very first time. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Exhibit and Conference Center. Here is what the AWS re:Inforce 2019 conference is expected to cover:
- A deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools
- Direct access for customers to the latest security research and trends from subject-matter experts, along with the opportunity to participate in hands-on exercises with AWS services

There are multiple learning tracks across this two-day conference, including a technical track and a business enablement track, designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, and risk and compliance officers. The conference will also feature sessions on Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, and much more.

Head over to What’s new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re: Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer

Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads

Melisha Dsouza
27 Nov 2018
3 min read
The Amazon re:Invent 2018 conference has seen a surge of new announcements and releases. The five-day event, which commenced in Las Vegas yesterday, has already brought some exciting developments in the AWS world, like AWS RoboMaker, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, an improved AWS Snowball Edge, and much more. In this article, we will look at the latest release: ‘Firecracker’, a new virtualization technology and open-source project for running multi-tenant container workloads.

Firecracker is open sourced under Apache 2.0 and enables service owners to operate secure, multi-tenant, container-based services. It combines the speed, resource efficiency, and performance enabled by containers with the security and isolation offered by traditional VMs. Firecracker implements a virtual machine monitor (VMM) based on Linux's Kernel-based Virtual Machine (KVM). Users can create and manage microVMs with any combination of vCPU and memory via a RESTful API. Firecracker offers a fast startup time, a reduced memory footprint for each microVM, and a trusted sandboxed environment for each container.

Features of Firecracker
- Firecracker uses multiple levels of isolation and protection, making it secure by design. The security model includes a very simple virtualized device model to minimize the attack surface, a process jail, and static linking.
- It delivers high performance, allowing users to launch a microVM in as little as 125 ms.
- It has low overhead and consumes about 5 MiB of memory per microVM, so a user can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.
- Firecracker is written in Rust, which guarantees thread safety and prevents many types of buffer-overrun errors that can lead to security vulnerabilities.

The AWS community has shown a positive response to this release:
https://twitter.com/abbyfuller/status/1067285030035046400

AWS Lambda uses Firecracker for provisioning and running the secure sandboxes that execute customer functions. These sandboxes can be quickly provisioned with a minimal footprint, enabling performance along with security. AWS Fargate tasks also execute on Firecracker microVMs, which allows the Fargate runtime layer to run faster and more efficiently on EC2 bare-metal instances. A sketch of the microVM lifecycle through the REST API follows.
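Here is a minimal sketch of that lifecycle using the third-party requests-unixsocket package; the endpoint shapes follow Firecracker's published API docs at launch, so check them against the version you run, and the kernel and rootfs paths are placeholders.

```python
import requests_unixsocket

session = requests_unixsocket.Session()
# /tmp/firecracker.socket, URL-encoded for the http+unix scheme.
base = "http+unix://%2Ftmp%2Ffirecracker.socket"

# Size the microVM: vCPUs and memory are set per VM.
session.put(base + "/machine-config",
            json={"vcpu_count": 1, "mem_size_mib": 128})

# Point it at an uncompressed kernel and a rootfs block device.
session.put(base + "/boot-source", json={
    "kernel_image_path": "./vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1",
})
session.put(base + "/drives/rootfs", json={
    "drive_id": "rootfs",
    "path_on_host": "./rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
})

# Boot it; startup takes on the order of 125 ms.
session.put(base + "/actions", json={"action_type": "InstanceStart"})
```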
To learn more, head over to the Firecracker page. You can also read more on Jeff Barr's blog and the AWS Open Source blog.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power


AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more

Savia Lobo
27 Nov 2018
3 min read
At AWS re:Invent 2018, Amazon announced new features for AWS IoT Greengrass. These features extend the capabilities of AWS IoT Greengrass and its core configuration options, and include:
- connectors to third-party applications and AWS services
- hardware root of trust private key storage
- isolation and permission settings

New features of AWS IoT Greengrass

AWS IoT Greengrass connectors
With the new AWS IoT Greengrass connectors, users can easily build complex workflows on AWS IoT Greengrass without having to understand device protocols, manage credentials, or interact with external APIs. These connectors allow users to connect to third-party applications, on-premises software, and AWS services without writing code.

Re-use common business logic
Users can now re-use common business logic from one AWS IoT Greengrass device to another through the ability to discover, import, configure, and deploy applications and services at the edge. They can also use AWS Secrets Manager at the edge to protect keys and credentials both in the cloud and at the edge. Secrets can be attached and deployed from AWS Secrets Manager to groups via the AWS IoT Greengrass console.

Enhanced security
AWS IoT Greengrass now provides enhanced security with hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing the private key on a hardware secure element adds hardware root-of-trust-level security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. Users can also use the hardware secure element to protect secrets deployed to the AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager.

Deploy AWS IoT Greengrass to another container environment
With the new configuration option, users can deploy AWS IoT Greengrass to another container environment and directly access device resources such as Bluetooth Low Energy (BLE) devices or low-power edge devices like sensors. They can even run AWS IoT Greengrass on devices without elevated privileges and without the AWS IoT Greengrass container, at a group or individual AWS Lambda level. Users can also change the identity associated with an individual AWS Lambda, providing more granular control over permissions. A sketch of a Greengrass-hosted function publishing a local sensor reading follows.
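As an illustration, here is a minimal sketch of a Lambda function running on a Greengrass core, using the AWS IoT Greengrass Core SDK for Python; the topic, payload, and sensor read are hypothetical.

```python
import json
import time

import greengrasssdk

# The local Greengrass client; publishes stay on the core or are routed
# to the cloud according to the group's subscription configuration.
client = greengrasssdk.client("iot-data")

def read_sensor():
    # Placeholder for a real BLE or GPIO read on the edge device.
    return {"temperature_c": 21.5, "ts": int(time.time())}

def function_handler(event, context):
    client.publish(topic="sensors/room1",
                   payload=json.dumps(read_sensor()))
```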
To know more about the other updated features, head over to the AWS IoT Greengrass website.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power