
Tech News - Cloud & Networking

376 Articles
Oath’s distributed network telemetry collector ‘Panoptes’ is now open source!

Melisha Dsouza
04 Oct 2018
3 min read
Yesterday, the Oath network automation team open sourced Panoptes, a distributed system for collecting, enriching, and distributing network telemetry. This pluggable, distributed, high-performance data collection system supports multiple polling formats, including SNMP and vendor-specific APIs, as well as emerging streaming telemetry standards such as gNMI. Panoptes is written primarily in Python and leverages multiple open-source technologies to provide the most value for the least development effort.

Panoptes architecture (Source: Yahoo Developers)

The architecture is designed to enable easy data distribution and integration with other systems. A plugin that pushes metrics into InfluxDB allows Panoptes to evolve with industry standards, and the combination of Grafana and the InfluxData ecosystem lets teams quickly set up a fully featured monitoring environment.

Legacy polling systems suffered from multiple inherent issues: overpolling caused by multiple point solutions for metrics, a lack of data normalization, and no consistent data enrichment or integration with infrastructure discovery systems. Panoptes aims to overcome all of these.

Check scheduling is handled by Celery, a horizontally scalable, open-source task scheduler that uses a Redis data store. Panoptes ships with a simple, CSV-based discovery system that can be integrated with a CMDB; from there, Panoptes manages the scheduling of polling for the desired devices. Users can also develop custom discovery plugins to integrate with their CMDB and other device inventory data sources.

Vendors are moving towards a more streamlined model of telemetry, and Panoptes’ flexible architecture minimizes the effort required to adopt these new protocols. The metric bus at the center of the model is implemented on Kafka, and all data-plane transactions flow across this bus: discovery plugins publish devices to the bus, and polling plugins publish metrics to it.
Similarly, numerous clients read the data off the bus for additional processing and forwarding. The team at Oath has deployed Panoptes in a tiered, federated model and has built numerous custom applications on the platform, including a load balancer monitor, a BGP session monitor, and a topology discovery application, all at reduced cost thanks to Panoptes.

This open-source release is packaged for easy deployment into any Linux-based environment and is available on GitHub. You can head over to the Yahoo Developer Network for deeper insights into this news.

Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools
Anaconda 5.3.0 released, takes advantage of Python’s speed and feature improvements
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
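To make the CSV-based discovery mentioned above concrete, here is a minimal sketch of parsing a device inventory file into records a polling scheduler could consume. The column names and file layout are hypothetical; the real Panoptes discovery format may differ.

```python
import csv
import io

# Hypothetical discovery CSV -- column names are illustrative, not Panoptes' actual schema.
DISCOVERY_CSV = """device,ip,snmp_community,site
core-sw-01,192.0.2.10,public,dc1
edge-rtr-01,192.0.2.20,public,dc2
"""

def load_devices(csv_text):
    """Parse a discovery CSV into device records for a polling scheduler."""
    return list(csv.DictReader(io.StringIO(csv_text)))

devices = load_devices(DISCOVERY_CSV)
print(len(devices))          # 2
print(devices[0]["device"])  # core-sw-01
```

A custom discovery plugin would do the same job against a CMDB API instead of a flat file, emitting the same kind of device records onto the metric bus.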

Cloudera and Hortonworks merge to advance hybrid cloud development, Edge and Artificial Intelligence

Sugandha Lahoti
04 Oct 2018
2 min read
Cloudera and Hortonworks have announced that they will merge to jointly become a data platform provider spanning multi-cloud, on-premises, and the Edge, and to accelerate innovation in IoT, streaming, data warehousing, and Artificial Intelligence. The merger will also expand market opportunities for Hortonworks DataFlow and Cloudera Data Science Workbench, along with partnerships with public cloud vendors and systems integrators.

Tom Reilly, chief executive officer at Cloudera, called the merger highly complementary and strategic. He said, “By bringing together Hortonworks’ investments in end-to-end data management with Cloudera’s investments in data warehousing and machine learning, we will deliver the industry’s first enterprise data cloud from the Edge to AI.” Rob Bearden, chief executive officer of Hortonworks, agreed: “Together, we are well positioned to continue growing and competing in the streaming and IoT, data management, data warehousing, machine learning/AI and hybrid cloud markets.”

The terms of the transaction agreement are:
Cloudera stockholders will own approximately 60% of the equity of the combined company.
Hortonworks stockholders will own approximately 40% of the equity of the combined company.
Hortonworks stockholders will receive 1.305 common shares of Cloudera for each share of Hortonworks stock owned, based on the 10-day average exchange ratio of the two companies’ prices through October 1, 2018.
The companies have a combined fully diluted equity value of $5.2 billion based on closing prices on October 2, 2018.
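The exchange ratio quoted in the terms implies simple arithmetic for Hortonworks holders; a quick sketch using the press-release figure:

```python
# Announced exchange terms: 1.305 Cloudera shares per Hortonworks share.
EXCHANGE_RATIO = 1.305

def cloudera_shares_received(hortonworks_shares):
    """Cloudera shares a Hortonworks stockholder receives at the announced ratio."""
    return hortonworks_shares * EXCHANGE_RATIO

# A holder of 1,000 Hortonworks shares would receive about 1,305 Cloudera shares.
print(round(cloudera_shares_received(1000), 2))  # 1305.0
```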
The merger is expected to generate significant financial benefits and an improved margin profile for both companies, including:
Approximately $720 million in revenue
More than 2,500 customers
More than 800 customers with over $100,000 ARR
More than 120 customers with over $1 million ARR
More than $125 million in annual cost synergies
More than $150 million in cash flow in CY20
Over $500 million in cash, with no debt

Read more about the announcement on the Hortonworks blog.

Hortonworks Data Platform 3.0 is now generally available
Hortonworks partner with Google Cloud to enhance their Big Data strategy
Cloudera Altus Analytic DB: Modernizing the cloud-based data warehouses

VIAVI releases Observer 17.5, a network performance management and diagnostics tool

Natasha Mathur
04 Oct 2018
2 min read
Viavi Solutions, a San Jose-based network test, measurement, and assurance technology company, released version 17.5 of Observer, a popular Network Performance Management and Diagnostics (NPMD) tool, earlier this week. Observer 17.5 adds features such as end-user experience scores, full 100 Gb support, an improved user experience, and enhanced analytic processing.

Observer is recognized as a Leader in Gartner's NPMD Magic Quadrant. As the network administrator's toolbox, it enables you to discover your network, capture and decode network traffic, and use real-time statistics to solve network problems. Observer 17.5 aims to replace the detailed KPIs presented to network engineers with a single, result-oriented End-User Experience Score, reducing the guesswork and dead ends common in network teams' troubleshooting processes. Let’s discuss Observer 17.5's key features.

End-User Experience Scoring and Workflows
Observer 17.5 integrates End-User Experience Scores with out-of-the-box workflows, empowering any engineer to navigate a guided path to resolution. The scores are backed by complete wire data, so filtered, relevant insight can be handed to the appropriate IT parties for corrective action.

Full 100 Gb interface support
Observer provides full-fidelity forensics for investigations with 10 and 40 Gb interfaces; version 17.5 adds support for 100 Gb. This ensures the accuracy and completeness of Observer's performance analytics in high-speed network environments: as network traffic volumes grow, every metric the IT team reports remains backed by wire data for root-cause analysis and granular reconstruction.

Enhanced User Experience Understanding
Observer Apex implements adaptive machine learning that delivers intelligent user insight.
This helps reduce false positives through a better understanding of normal environment behavior and user experience.

Improved Interfaces and Analytic Processing
Observer 17.5's redesigned user interfaces ease navigation and interaction across the key elements of the Observer platform, and real-time analytical performance has improved in this version.

For more information, check out the official blog post.

Top 10 IT certifications for cloud and networking professionals in 2018
Top 5 cybersecurity assessment tools for networking professionals
Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
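VIAVI does not publish how the End-User Experience Score is computed; purely to illustrate the idea of collapsing several KPIs into one result-oriented number, here is a hypothetical scoring function (the KPIs, weights, and caps are all invented):

```python
# Hypothetical end-user experience score: fold latency, errors, and
# retransmissions into a single 0-100 result. Not VIAVI's actual formula.
def experience_score(latency_ms, error_rate, retransmit_rate):
    latency_penalty = min(latency_ms / 10.0, 50.0)        # up to 50 points
    error_penalty = min(error_rate * 500.0, 30.0)         # up to 30 points
    retrans_penalty = min(retransmit_rate * 200.0, 20.0)  # up to 20 points
    return max(0.0, 100.0 - latency_penalty - error_penalty - retrans_penalty)

print(experience_score(latency_ms=20, error_rate=0.01, retransmit_rate=0.02))  # 89.0
```

The point of such a score is exactly what the article describes: an engineer sees one number first, then drills into the underlying wire data only when it is low.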

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox

Sugandha Lahoti
04 Oct 2018
2 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cloud Native Buildpacks (CNB) into the CNCF Sandbox. Through this collaboration, Buildpacks will be able to leverage CNCF's vendor neutrality and cloud native expertise.

The Cloud Native Buildpacks project was initiated by Pivotal and Heroku in January 2018. It aims to unify the buildpack ecosystems with a single platform-to-buildpack contract, incorporating lessons from maintaining production-grade buildpacks at both Pivotal and Heroku.

What are Cloud Native Buildpacks?
At a high level, Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images. This gives users more options to customize the runtime while keeping their apps portable. Buildpacks minimize initial time to production, reducing the operational burden on developers, and support enterprise operators who manage apps at scale.

Buildpacks were first created by Heroku in 2011 and have since been adopted by Cloud Foundry as well as Gitlab, Knative, Microsoft, Dokku, and Drie. The Buildpack API was open sourced in 2012 with Heroku-specific elements removed, but each vendor that adopted buildpacks evolved the API independently, which led to isolated ecosystems. As part of the Cloud Native Sandbox project, the Buildpack API is being standardized for all platforms. The maintainers are also opening up their tooling and will run buildpacks under the Buildpacks GitHub organization.

“Anyone can create a buildpack for any Linux-based technology and share it with the world. Buildpacks’ ease of use and flexibility are why millions of developers rely on them for their mission critical apps,” said Joe Kutner, architect at Heroku.
“Cloud Native Buildpacks will bring these attributes inline with modern container standards, allowing developers to focus on their apps instead of their infrastructure.”

Developers can start using Cloud Native Buildpacks by forking one of the Buildpack samples. You can also read up on the implementation specifics laid out in the Buildpack API documentation.

CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
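The platform-to-buildpack contract mentioned above boils down to two phases: each buildpack first "detects" whether it applies to the app's source, then "builds" a layer of the final image. The class and method names below are a toy model of that idea, not the actual Cloud Native Buildpacks API:

```python
# Toy model of the detect/build contract: the platform runs detect over a
# group of buildpacks, then builds with every buildpack that matched.
class RubyBuildpack:
    def detect(self, files):
        return "Gemfile" in files
    def build(self):
        return "ruby-layer"

class NodeBuildpack:
    def detect(self, files):
        return "package.json" in files
    def build(self):
        return "node-layer"

def run_lifecycle(buildpacks, app_files):
    """Detect phase first, then build a layer per matching buildpack."""
    matched = [bp for bp in buildpacks if bp.detect(app_files)]
    return [bp.build() for bp in matched]

layers = run_lifecycle([RubyBuildpack(), NodeBuildpack()], {"package.json", "index.js"})
print(layers)  # ['node-layer']
```

Standardizing this contract across platforms is what lets a buildpack written for one vendor run unmodified on another.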

The kernel community attempting to make Linux more secure

Prasad Ramesh
03 Oct 2018
3 min read
Last week, Google Project Zero criticized Ubuntu and Debian developers for not merging kernel security fixes fast enough, leaving users exposed in the meantime. Yesterday, the kernel community clarified how it is attempting to reduce and control bugs in the Linux ecosystem through testing and kernel hardening.

The community acknowledges that it cannot eliminate bugs entirely, as bugs are part and parcel of software development, but it is focusing on testing to find them. There is now a security team in the kernel community, made up of kernel developers who are well versed in kernel core concepts. Linux kernel developer Greg Kroah-Hartman said: “A bug is a bug. We don’t know if a bug is a security bug or not. There is a famous bug that I fixed and then three years later Red Hat realized it was a security hole.”

In addition to fixing bugs, the kernel community is contributing hardening features to the kernel. Kernel hardening adds kernel-level security mechanisms that improve the security of the system. Linux kernel developer Kees Cook and others have made huge efforts to take hardening features that have traditionally lived outside the kernel and merge them into it, and Cook provides a summary of the new hardening features added with every kernel release. However, hardening the kernel is not enough: the new features have to be enabled to provide any protection, and that is often not happening.

A stable kernel is released every week on the official kernel website. Companies then pick one to support for a longer period so that device manufacturers can take advantage of it. However, Kroah-Hartman observed that, barring the Google Pixel, most Android phones don’t enable the additional hardening features, leaving all those phones vulnerable. He added that companies should enable these features. Kroah-Hartman stated: “I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated.
I found only one company that updated their kernel,” he said. “I'm working through the whole supply chain trying to solve that problem because it's a tough problem. There are many different groups involved -- the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.” The big vendors like Red Hat and SUSE, by contrast, keep their kernels updated with these features.

The kernel community is also working with Intel to mitigate the Meltdown and Spectre attacks; Intel changed how it works with the kernel community after these vulnerabilities were discovered. The bright side is that the Intel vulnerabilities showed that things are getting better for the kernel community: more testing is being done, patches are being made, and effort is being put into making the kernel as bug-free as possible.

To know more, visit the Linux Blog.

Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
Linux programmers opposed to new Code of Conduct threaten to pull code from project
Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’
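The "hardening features have to be enabled" point can be checked mechanically against a kernel build configuration. The sketch below scans a `.config`-style text for a few real Kconfig hardening symbols; which options matter varies by kernel version, and this short list is only illustrative:

```python
# Sketch: report which of a few hardening options a kernel config leaves
# disabled. The option names are real Kconfig symbols; the selection is
# illustrative, not a complete hardening checklist.
HARDENING_OPTIONS = [
    "CONFIG_STACKPROTECTOR_STRONG",
    "CONFIG_HARDENED_USERCOPY",
    "CONFIG_STRICT_KERNEL_RWX",
]

def missing_hardening(config_text):
    """Return the hardening options not set to 'y' in the given config text."""
    enabled = {
        line.split("=")[0]
        for line in config_text.splitlines()
        if line.endswith("=y")
    }
    return [opt for opt in HARDENING_OPTIONS if opt not in enabled]

sample_config = "CONFIG_HARDENED_USERCOPY=y\nCONFIG_STRICT_KERNEL_RWX=y\n"
print(missing_hardening(sample_config))  # ['CONFIG_STACKPROTECTOR_STRONG']
```

In practice one would feed this the device's actual config (e.g. from /proc/config.gz where available) rather than an inline sample.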

Limited Availability of DigitalOcean Kubernetes announced!

Melisha Dsouza
03 Oct 2018
3 min read
On Monday, DigitalOcean announced that DigitalOcean Kubernetes, previously in Early Access, is now accessible as Limited Availability. DigitalOcean Kubernetes simplifies the container deployment process that accompanies plain Kubernetes and offers Kubernetes container hosting services. Incorporating DigitalOcean's trademark simplicity and ease of use, it aims to reduce the headache involved in setting up, managing, and securing Kubernetes clusters. DigitalOcean, incidentally, is also the company behind Hacktoberfest, which runs all of October in partnership with GitHub to promote open source contribution.

The Early Access release was well received by users, who commented on the simplicity of configuring and provisioning a cluster and appreciated how little time deploying and running containerized services took. Users also raised issues and feedback that were used to increase reliability and resolve a number of bugs, improving the user experience for the Limited Availability release of DigitalOcean Kubernetes.

The team also notes that during Early Access it had a limited pool of free hardware resources for users to deploy to, which restricted the total number of users it could admit. In the Limited Availability phase, the team hopes to open up access to anyone who requests it. That said, Limited Availability will be a paid product.

Why should users consider DigitalOcean Kubernetes?
Each customer gets their own dedicated managed cluster, providing security and isolation for their containerized applications with access to the full Kubernetes API.
DigitalOcean products provide storage for any amount of data.
Cloud Firewalls make it easy to manage network traffic in and out of the Kubernetes cluster.
DigitalOcean provides cluster security scanning capabilities to alert users of flaws and vulnerabilities.
In typical Kubernetes environments, metrics, logs, and events can be lost if nodes are spun down.
To help developers learn from the performance of past environments, DigitalOcean stores this information separately from the node, indefinitely. To know more about these features, head over to the official blog page.

Some benefits for users of Limited Availability: users will be able to provision Droplet workers in many more regions with full support. To test out their containers in an orchestrated environment, they can start with a single-node cluster using a $5/mo Droplet. As they scale their applications, users can add worker pools of various Droplet sizes, attach persistent storage using DigitalOcean Block Storage for $0.10/GB per month, and expose Kubernetes services with a public IP using $10/mo Load Balancers. This is a highly available service designed to protect against application or hardware failures while spreading traffic across available resources.

Users seem really excited about this upgrade (Source: DigitalOcean Blog).

Users who have already signed up for Early Access will receive an email shortly with details about how to get started. To know more about this news, head over to DigitalOcean's blog post.

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
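Using the prices quoted above ($5/mo worker Droplet, $0.10/GB block storage, $10/mo Load Balancer), a back-of-envelope monthly cost works out like this; the figures are taken from the article and assumed to be USD list prices:

```python
# Back-of-envelope monthly cost from the prices quoted in the announcement.
DROPLET_NODE = 5.00          # $5/mo smallest worker Droplet
BLOCK_STORAGE_PER_GB = 0.10  # $0.10/GB per month
LOAD_BALANCER = 10.00        # $10/mo per Load Balancer

def monthly_cost(nodes, storage_gb, load_balancers=1):
    return (nodes * DROPLET_NODE
            + storage_gb * BLOCK_STORAGE_PER_GB
            + load_balancers * LOAD_BALANCER)

# Three $5 workers, 100 GB of block storage, one load balancer:
print(round(monthly_cost(nodes=3, storage_gb=100), 2))  # 35.0
```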
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers

Prasad Ramesh
01 Oct 2018
3 min read
Cloudflare announced a fast distributed native key-value store for Cloudflare Workers on Friday, which it is calling “Cloudflare Workers KV”. Cloudflare Workers is a new kind of computing platform built on top of Cloudflare's global network of over 150 data centers. It allows writing serverless code that runs in the fabric of the internet itself, engaging with users faster than other platforms can. Cloudflare Workers KV is built on a new architecture that eliminates cold starts and dramatically reduces the memory overhead of keeping code running. Values can also be written from within a Cloudflare Worker, and Cloudflare handles synchronizing keys and values across the network.

Cloudflare Workers KV features
Developers can augment existing applications or build new ones on Cloudflare's network using Cloudflare Workers and Cloudflare Workers KV, which can scale to support applications serving dozens or even millions of users. Some of its features are as follows.

Serverless storage
Cloudflare created a serverless execution environment at each of its 153 data centers with Cloudflare Workers, but customers still had to manage their own storage. With Cloudflare Workers KV, global application access to a key-value store is just an API call away.

Responsive applications anywhere
Serverless applications that run on Cloudflare Workers get low-latency access to a globally distributed key-value store. Cloudflare Workers KV achieves low latency by caching replicas of the keys and values stored in Cloudflare's cloud network.

Build without scaling concerns
Cloudflare Workers KV lets developers focus their time on adding new capabilities to their serverless applications rather than on scaling their key-value stores.
Key features of Cloudflare Workers KV
The key features of Cloudflare Workers KV, as listed on the website, are:
Accessible from all 153 Cloudflare locations
Supports values up to 64 KB
Supports keys up to 2 KB
Read and write from Cloudflare Workers
An API to write to Workers KV from 3rd party applications
Uses Cloudflare's robust caching infrastructure
Set arbitrary TTLs for values
Integrates with Workers Preview

Cloudflare Workers KV is currently in beta. To know more, visit the Cloudflare Blog and the Cloudflare website.

Bandwidth Alliance: Cloudflare collaborates with Microsoft, IBM and others for saving bandwidth
Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Google introduces Cloud HSM beta hardware security module for crypto key security
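The size limits and TTLs listed above (2 KB keys, 64 KB values, arbitrary TTLs) can be modeled with a toy in-memory store to make the semantics concrete. This is purely an illustration of the listed limits; the real Workers KV is a distributed service with its own API:

```python
import time

# Toy in-memory model of the quoted Workers KV limits. Not Cloudflare's API.
MAX_KEY_BYTES = 2 * 1024      # keys up to 2 KB
MAX_VALUE_BYTES = 64 * 1024   # values up to 64 KB

class ToyKV:
    def __init__(self):
        self._data = {}

    def put(self, key, value, ttl_seconds=None):
        if len(key.encode()) > MAX_KEY_BYTES:
            raise ValueError("key exceeds 2 KB")
        if len(value.encode()) > MAX_VALUE_BYTES:
            raise ValueError("value exceeds 64 KB")
        expires = time.time() + ttl_seconds if ttl_seconds else None
        self._data[key] = (value, expires)

    def get(self, key):
        value, expires = self._data.get(key, (None, None))
        if expires is not None and time.time() > expires:
            del self._data[key]
            return None
        return value

kv = ToyKV()
kv.put("greeting", "hello")
print(kv.get("greeting"))  # hello
```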

Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference, Chaos Conf. Not only has the company raised $18 million in its series B funding round, it has also launched a brand new feature. Application Level Fault Injection (ALFI) brings a whole new dimension to the Gremlin platform, allowing engineering teams to run resiliency tests, or 'chaos experiments', at the application level. Until now, tests could only be run at the infrastructure level, targeting a specific host or container (and containers are only a recent addition).

Bringing chaos engineering to serverless applications
One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means Gremlin can now expand its use cases and continue its broader mission: helping engineering teams improve the resiliency of their software in a manageable and accessible way.

Matt Fornaciari, Gremlin CTO and co-founder, said: “With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It’s a tough problem to solve because the host is abstracted and it’s a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn’t possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services.”

One of the great benefits of ALFI is that it should help engineers tackle types of threats that might be missed by focusing on infrastructure alone.
Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering
Gremlin appears to be embarking on a new chapter, but what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend still in an emergent phase. If Gremlin can develop a product that makes chaos engineering not only relatively accessible but also palatable to those making technical decisions, we might start to see things changing.

It's clear that Redpoint Ventures, the VC firm leading Gremlin's Series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tunguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We’re thrilled to join them on this journey."
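To make the idea of application-level fault injection concrete, here is a minimal sketch of the technique in its simplest form: wrapping a function so that a chosen fraction of calls fails, which lets a team verify its fallback paths. This is an illustration of the general technique, not Gremlin's ALFI API:

```python
import random

# Illustrative application-level fault injection (not Gremlin's actual API):
# wrap a function so a chosen fraction of calls raises, to exercise fallbacks.
def inject_faults(failure_rate, exception=RuntimeError, rng=random.random):
    def decorator(func):
        def wrapper(*args, **kwargs):
            if rng() < failure_rate:
                raise exception("injected fault")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_faults(failure_rate=1.0)  # always fail, for demonstration
def fetch_recommendations(user_id):
    return ["item-1", "item-2"]

try:
    fetch_recommendations("u42")
except RuntimeError as e:
    print(e)  # injected fault
```

The precision Fornaciari describes comes from scoping such injection to one call site or one downstream dependency, rather than taking down a whole host.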

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS

Melisha Dsouza
28 Sep 2018
3 min read
As promised by the Kubernetes team earlier this month, Kubernetes 1.12 is now released! With a focus on internal improvements, the release includes two highly anticipated features: general availability of Kubelet TLS Bootstrap and support for Azure Virtual Machine Scale Sets (VMSS). These promise better security, availability, resiliency, and ease of use for faster delivery of production applications. Let's dive into the features of Kubernetes 1.12.

#1 General availability of Kubelet TLS Bootstrap
Kubelet TLS Bootstrap is now generally available. This feature significantly streamlines Kubernetes' ability to add and remove nodes from the cluster. Cluster operators are responsible for ensuring the TLS assets they manage remain up to date and can be rotated in the face of security events. Kubelet server certificate bootstrap and rotation (beta) introduces a process for generating a key locally and issuing a Certificate Signing Request to the cluster API server, to get an associated certificate signed by the cluster's root certificate authority. As certificates approach expiration, the same mechanism is used to request an updated certificate.

#2 Stable support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler
Azure Virtual Machine Scale Sets (VMSS) let users create and manage a homogeneous VM pool that can automatically grow or shrink based on demand or a set schedule. Users can easily manage, scale, and load balance multiple VMs to provide high availability and application resiliency, which is ideal for large-scale applications that run as Kubernetes workloads. The stable support allows Kubernetes to manage the scaling of containerized applications with Azure VMSS, and users can integrate their applications with cluster-autoscaler to automatically adjust the size of their Kubernetes clusters.
#3 Other feature updates
Encryption at rest via KMS is now in beta. It adds multiple encryption providers, including Google Cloud KMS, Azure Key Vault, AWS KMS, and HashiCorp Vault, which encrypt data as it is stored to etcd.
RuntimeClass is a new cluster-scoped resource that surfaces container runtime properties to the control plane.
Topology-aware dynamic provisioning is now in beta: storage resources can now understand where they live.
Configurable pod process namespace sharing enables users to configure containers within a pod to share a common PID namespace by setting an option in the PodSpec.
Vertical scaling of pods will help vary the resource limits on a pod over its lifetime.
Snapshot/restore functionality for Kubernetes and CSI provides a standardized API design and adds PV snapshot/restore support for CSI volume drivers.

To explore these features in depth, the team will be hosting a 5 Days of Kubernetes series next week, walking through the following:
Day 1 - Kubelet TLS Bootstrap
Day 2 - Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler
Day 3 - Snapshots Functionality
Day 4 - RuntimeClass
Day 5 - Topology Resources

Additionally, members of the release team will host a webinar covering the release's major features on November 6th at 10 am PDT. You can check out the release on GitHub, and if you would like to know more, head over to the official Kubernetes blog.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
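Of the updates above, pod process namespace sharing is the easiest to try: it is a single boolean in the PodSpec. A minimal manifest sketch (the pod name and images are placeholders; the `shareProcessNamespace` field is the actual PodSpec option):

```yaml
# PodSpec fragment: both containers in this pod share one PID namespace,
# so the debugger container can see (and signal) the app's processes.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true
  containers:
  - name: app
    image: nginx
  - name: debugger
    image: busybox
    command: ["sleep", "3600"]
```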

Bandwidth Alliance: Cloudflare collaborates with Microsoft, IBM and others for saving bandwidth

Prasad Ramesh
27 Sep 2018
2 min read
Cloudflare, a content delivery network service provider, yesterday formed a new group called the Bandwidth Alliance to reduce the bandwidth costs of cloud users. Cloudflare will provide heavy discounts on bandwidth charges, or waive them entirely, for organizations that are both Cloudflare customers and customers of the cloud providers in the alliance.

Current bandwidth charges
Hosting on most cloud providers includes data transfer charges, known as bandwidth or egress charges, which cover the cost of delivering traffic from the cloud to the consumer. When using a CDN like Cloudflare, the cost of data transfer is charged on top of the content delivery cost. This extra charge makes sense when data has to cross thousands of miles over infrastructure that must be maintained across that distance; that cost ultimately gets added to the customer's final bill. The Bandwidth Alliance aims to eliminate these additional charges and provide more affordable cloud services.

What is the Bandwidth Alliance?
Traffic delivered to users through Cloudflare passes across a Private Network Interconnect (PNI), usually a fiber-optic cable between the two networks' routers within the same facility. With no transit provider and no middleman maintaining infrastructure in between, there is no additional cost for Cloudflare or the cloud provider. Cloud service providers use PNIs to deeply interconnect with third-party networks and Cloudflare. Cloudflare carries traffic automatically from the user's location to the Cloudflare data center nearest to the cloud provider, then over the PNI. Cloudflare's heavily peered network allows traffic to be carried over these free interconnected links, so Cloudflare created the Bandwidth Alliance to pass the lower costs on to mutual customers, teaming up with cloud providers to see whether their huge interconnects could benefit end customers.
Some of the current members include Automattic, Backblaze, DigitalOcean, DreamHost, IBM Cloud, Linode, Microsoft Azure, Packet, Scaleway, and Vapor, and the alliance is open to more cloud providers. You can read more on the official Cloudflare blog.

Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Microsoft Ignite 2018: New Azure announcements you need to know
Google introduces Cloud HSM beta hardware security module for crypto key security
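A quick sense of the stakes: egress charges scale linearly with traffic, so waiving them saves in proportion to volume. The per-GB figure below is a hypothetical list price for illustration, not any specific provider's rate:

```python
# Illustrative egress-savings arithmetic; $0.09/GB is an assumed list price.
EGRESS_PER_GB = 0.09

def monthly_egress_cost(gb_transferred, discount=0.0):
    """Monthly egress bill, with an optional alliance discount (1.0 = waived)."""
    return gb_transferred * EGRESS_PER_GB * (1.0 - discount)

# 10 TB/month of CDN-delivered traffic, with the charge waived entirely:
before = monthly_egress_cost(10_000)
after = monthly_egress_cost(10_000, discount=1.0)
print(round(before - after, 2))  # 900.0
```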
Savia Lobo
27 Sep 2018
1 min read

GNU Shepherd 0.5.0 releases

Yesterday, the GNU Daemon Shepherd community announced the release of GNU Shepherd 0.5.0. GNU Shepherd, formerly known as GNU dmd, is a service manager written in Guile that looks after the herd of system services. It provides a replacement for the service-managing capabilities of SysV-init (or any other init) with a powerful and beautiful dependency-based system and a convenient interface.

GNU Shepherd 0.5.0 contains new features and bug fixes and was bootstrapped with the following tools:

Autoconf 2.69
Automake 1.16.1
Makeinfo 6.5
Help2man 1.47.6

Changes in GNU Shepherd 0.5.0

Services now have a 'replacement' slot.
Restarting a service now also restarts its dependent services.
When running as PID 1 on GNU/Linux, Shepherd halts upon ctrl-alt-del.
Actions can now be invoked on services that are not in the running state.
This version supports Guile 3.0; users need Guile version >= 2.0.13.
Unused runlevel code has been removed.
Updated translations in this version include es, fr, pt_BR, and sv.

To know more about this release in detail, visit the GNU official website.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
Network programming 101 with GAWK (GNU AWK)
GNU Octave: data analysis examples
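The dependent-service restart behavior noted above can be pictured with a small sketch. This is plain Python for illustration, not Shepherd's actual Guile implementation: restarting a service walks the reverse dependency graph and restarts everything that depends on it, transitively.

```python
# Toy illustration of dependency-based restarts (Shepherd itself is written in Guile).
# `deps` maps each service to the services it requires.
deps = {
    "sshd": ["networking"],
    "nginx": ["networking"],
    "networking": [],
}

def dependents_of(service, deps):
    """Services that transitively depend on `service` and so must restart with it."""
    out = set()
    stack = [service]
    while stack:
        current = stack.pop()
        for svc, requires in deps.items():
            if current in requires and svc not in out:
                out.add(svc)
                stack.append(svc)
    return out

def restart(service, deps):
    """Restart a service plus everything that depends on it."""
    return [service] + sorted(dependents_of(service, deps))

# Restarting networking also restarts nginx and sshd;
# restarting nginx touches nothing else.
```

Under this model, `restart("networking", deps)` yields `["networking", "nginx", "sshd"]`, while `restart("nginx", deps)` restarts only nginx.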

Melisha Dsouza
27 Sep 2018
2 min read

Announcing Hyperswarm Preview

Connecting two computers over the Internet is difficult. Software needs to negotiate NATs, firewalls, and limited IPv4 addresses. To overcome this, the Beaker browser team is releasing a new Kademlia DHT-based toolset for connecting peers called 'Hyperswarm'. Currently, the team uses a tracker to get users connected; to move towards a more decentralized model, it has been working on Hyperswarm to improve the reliability of Dat project connections.

What is Hyperswarm?

Hyperswarm is a stack of networking modules that finds peers and creates reliable connections. Users join the swarm for a "topic" and periodically query other peers who are part of that topic. To establish a connection between peers, Hyperswarm creates a socket between them using either uTP or TCP. It uses a Kademlia DHT to track peers and arrange connections. The DHT itself includes mechanisms to establish a direct connection between two peers where one or both are behind firewalls or behind routers that use network address translation (NAT).

A few things about Hyperswarm that you should know

Iterating on security: DHTs have a number of denial-of-service vectors. There are known mitigations, but they come with tradeoffs. The team is thinking through these tradeoffs and will iterate on this over time.

Hyperswarm is not anonymous: Hyperswarm does not hide users' IPs. Devices join topics by listing their IP so that other devices can establish connections. The Dat protocol, however, takes steps to hide the topics' contents. When downloading a dat, the protocol hashes the dat's key to create the swarm topic. Only those who know the dat's key can access the dat's data or create new connections to people in the topic. The membership of a topic is public.

The deployment strategy: The team will update the tracker server to make the deployment backward compatible. This makes it possible for old Dat clients to connect using the tracker, while new clients connect using the DHT.

Hyperswarm is MIT-licensed open source and can be found in the following repositories: Network, discovery, dht. To know more about this preview release, head over to pfrazee.hasbase.io.

Linkerd 2.0 is now generally available with a new service sidecar design
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!
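The key-to-topic derivation described above can be sketched in a few lines. This uses SHA-256 purely as an illustration (Dat's real construction derives its discovery key with a keyed BLAKE2b hash, not plain SHA-256); the point is that peers who only see the topic announced on the DHT cannot recover the key, so only key holders can read the data.

```python
import hashlib

def topic_for_key(key: bytes) -> bytes:
    """Derive a swarm topic from a dat key by hashing it.

    Illustrative only: Dat itself uses a keyed BLAKE2b construction.
    The hash is one-way, so the topic does not reveal the key.
    """
    return hashlib.sha256(key).digest()

key = bytes.fromhex("ab" * 32)      # a dat's 32-byte public key (made-up value)
topic = topic_for_key(key)          # what gets announced on the DHT
assert topic != key and len(topic) == 32
```

A peer watching the DHT sees `topic` and the IPs swarming on it, but without `key` it can neither decrypt the dat's contents nor derive the key from the topic.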

Prasad Ramesh
26 Sep 2018
2 min read

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes

Fedora 29 Beta was made available yesterday. It is the next big step towards a stable release of the Linux distribution, which will be available in late October. This beta brings features like modularity for all editions, support for GNOME 3.30, and some other changes.

Modularity

Modular repositories were introduced in Fedora 28 for the Fedora Server Edition. In Fedora 29 Beta, modularity is available in all the editions, spins, and labs. Modularity makes multiple versions of important packages available in parallel, and it works with the familiar Dandified YUM (DNF) package manager. With modularity, users can update their OS to the latest version while keeping the version of an application their workflow requires.

GNOME 3.30

Fedora 29 Workstation Beta comes with the latest version of GNOME. GNOME 3.30 streamlines performance, adds a new application for Podcasts, and automatically updates Flatpaks in Software Center.

Other changes

There are many other updates in Fedora 29. Fedora Atomic Workstation has been rebranded as Fedora Silverblue. The GRUB menu is now hidden when only a single OS is installed, as it provides no useful functionality in those cases. The latest version of Fedora also updates many popular packages, including MySQL, the GNU C Library, Python, and Perl. Architecture changes include dropping ppc64 as an alternative architecture, initial support for field-programmable gate arrays (FPGAs), and packages now being built with SSE2 support. Many projects, including Eclipse, have dropped support for the big-endian ppc64 architecture, so Fedora will discontinue producing any ppc64 content. Fedora Scientific is now shipped as Vagrant boxes, which were previously delivered as ISO files; Vagrant boxes give potential users a friendlier way to try Fedora Scientific while keeping their current operating system.

For a full list of changes, visit the Fedora website.
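The parallel-versions idea behind modularity can be sketched with a toy model. This is plain Python for illustration, not dnf's real API, and the package versions are made up: each package exposes several "streams", a system pins one stream, and updates stay within it.

```python
# Toy illustration of module streams (the concept behind Fedora Modularity).
# Not dnf's real API; version numbers are invented for the example.
streams = {
    "nodejs": {"8": "8.12.0", "10": "10.11.0"},
    "postgresql": {"9.6": "9.6.10", "10": "10.5"},
}

def resolve(package, stream, streams):
    """Return the newest build available on the chosen stream."""
    try:
        return streams[package][stream]
    except KeyError:
        raise ValueError(f"no stream {stream!r} for {package!r}")

# An app pinned to the Node.js 8 stream keeps receiving 8.x builds
# even after the OS itself is upgraded.
assert resolve("nodejs", "8", streams) == "8.12.0"
```

Two systems on the same Fedora release can thus track different streams of the same package, which is exactly the "multiple versions in parallel" behavior the release notes describe.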
GIMP gets $100K of the $400K donation made to GNOME
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more
Linus Torvalds is sorry for his 'hurtful behavior', is taking 'a break (from the Linux community) to get help'
Melisha Dsouza
25 Sep 2018
4 min read

Microsoft Ignite 2018: New Azure announcements you need to know

If you missed the Azure announcements made at Microsoft Ignite 2018, don't worry, we've got you covered. Here are some of the biggest changes and improvements the Microsoft Azure team has made to its cloud offering.

Infrastructure improvements

Azure's new capabilities to deliver the best infrastructure for every workload include:

1. GPU-enabled and High-Performance VMs
Azure has announced the preview of GPU-enabled and High-Performance Computing virtual machines. The two new N-series VMs, the NVv2 and the NDv2, have NVIDIA GPU capabilities. The two new H-series VMs, the HB and the HC, are optimized for performance and cost and are aimed at HPC workloads like fluid dynamics, structural mechanics, energy exploration, weather forecasting, risk analysis, and more.

2. Networking
Azure has announced the general availability of Azure Firewall and Virtual WAN, along with previews of Azure Front Door Service, ExpressRoute Global Reach, and ExpressRoute Direct. Azure Firewall has built-in high availability and cloud scalability. Virtual WAN provides a simple, unified, global connectivity and security platform for deploying large-scale branch connectivity.

3. Improved disk storage
Microsoft has expanded the portfolio of Azure Disk offerings so that any app can be deployed in Azure, including the most IO-intensive ones. New previews include Ultra SSDs, Standard SSDs, and larger managed disk sizes to handle data-intensive workloads, promising better availability, reliability, and latency compared to standard SSDs.

4. Hybrid
Microsoft has announced new hybrid capabilities to manage data, create even more consistency, and secure hybrid environments, introducing Azure Data Box Edge, Windows Server 2019, and Azure Stack. With AI-enabled edge computing capabilities and an OS that supports hybrid management and flexible application deployment, Azure is causing waves in the developer community.

Built-in security and management

For improved security, Azure has announced new preview services such as the Confidential Computing DC VM series, Secure Score, improved threat protection, and network map. These expand Azure's security controls and services to protect networks, applications, data, and identities, enhanced by the intelligence that comes from the trillions of signals Microsoft collects in running first-party services like Office 365 and Xbox.

For better management, Azure has announced the preview of Azure Blueprints, which make it easy to deploy and update Azure environments in a repeatable manner using composable artifacts such as policies, role-based access controls, and resource templates. Azure cost management in the Azure portal (preview) lets you access cost management from Power BI or directly from your own custom applications.

Migration

To make migration to the cloud less challenging, Azure has announced support for Hyper-V assessments in Azure Migrate and for Azure SQL Database Managed Instance, which enables users to migrate SQL Servers to a fully managed Azure service. Microsoft also announced that customers who migrate Windows Server or SQL Server 2008/R2 to Azure will get three years of free extended security updates on those systems, which could save money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

Automated ML capability in Azure Machine Learning

The problem of finding the best machine learning pipeline for a given dataset scales faster than the time available for data science projects. Azure's automated machine learning gives developers an automated service that identifies the best machine learning pipelines for their labelled data. Data scientists get a powerful productivity tool that also takes uncertainty into account, incorporating a probabilistic model to determine the best pipeline to try next.

To follow more of the Azure buzz, head to Microsoft's official blog.

Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Azure Functions 2.0 launches with better workload support for serverless
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
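The combinatorial blow-up behind automated ML is easy to see: a pipeline is a choice of preprocessor, model, and hyperparameters, so the search space grows multiplicatively with each option added. A toy sketch in plain Python (not Azure's actual service, which uses a probabilistic model rather than plain random search; all component names are invented):

```python
import itertools
import random

# Hypothetical pipeline components, just to show how fast the space grows.
preprocessors = ["none", "scale", "pca"]
models = ["logistic", "tree", "svm"]
learning_rates = [0.01, 0.1, 1.0]

# Every combination is a candidate pipeline: 3 * 3 * 3 = 27 already.
pipelines = list(itertools.product(preprocessors, models, learning_rates))

def score(pipeline):
    """Stand-in for an expensive train-and-validate run."""
    rng = random.Random(str(pipeline))
    return rng.random()

# With a limited budget, sample candidates instead of trying all of them.
budget = 5
candidates = random.Random(0).sample(pipelines, budget)
best = max(candidates, key=score)
```

Each added dimension (feature selection, regularization, ensembling, ...) multiplies the count again, which is why exhaustive search quickly outgrows project timelines and a guided, budgeted search becomes necessary.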

Melisha Dsouza
25 Sep 2018
2 min read

Azure Functions 2.0 launches with better workload support for serverless

Microsoft has announced the general availability of Azure Functions 2.0. The new release aims to handle demanding workloads, which should make managing the scale of serverless applications easier than ever before. With an improved user experience and new developer capabilities, the release shows Microsoft looking to take full advantage of the interest in serverless computing.

New features in Azure Functions 2.0

Azure Functions can now run on more platforms
Azure Functions is now supported in more environments, including local Mac or Linux machines. Integration with VS Code helps developers get a best-in-class serverless development experience on any platform.

Code optimizations
Functions 2.0 adds general host improvements, support for more modern language runtimes, and the ability to run code from a package file. .NET developers can now author functions using .NET Core 2.1, which provides a significant performance gain and helps develop and run .NET functions in more places. Assembly resolution has been improved to reduce the number of conflicts. Functions 2.0 supports both Node 8 and Node 10, with improved performance in general.

A powerful new programming model
The bindings and integrations of Functions 1.0 have been improved in Functions 2.0. All bindings are now brought in as extensions; this change to decoupled extension packages allows bindings (and their dependencies) to be versioned without depending on the core runtime. The recent launch of Azure SignalR Service, a fully managed service, lets developers focus on building real-time web experiences without worrying about setting up, hosting, scaling, or load balancing the SignalR server. An extension for this service can be found in this GitHub repo; check out the SignalR Service binding reference to start building real-time serverless applications.

Easier development
To improve productivity, Microsoft has introduced powerful native tooling inside Visual Studio, VS Code, and VS for Mac, plus a CLI that can run alongside any code editing experience. Functions 2.0 also gives more visibility into distributed tracing: dependencies are automatically tracked, and cross-resource connections are automatically correlated across a variety of services.

To know more about the updates in Azure Functions 2.0, head to Microsoft's official blog.

Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary.