
Tech News - Cloud & Networking

376 Articles

Ubuntu releases Mir 1.0.0

Savia Lobo
24 Sep 2018
2 min read
Last week, the Ubuntu community announced the release of Mir 1.0.0, a fast, open, and secure display server. The important highlights of this release are support for the stable Wayland xdg-shell extension and improved facilities for customizing display layouts. Mir is a system-level component that can be used to unlock next-generation user experiences. It runs on a range of Linux-powered devices, including traditional desktops, IoT, and embedded products.

Highlights in Mir 1.0.0

Wayland extension protocols
At present, there are many Wayland "extension protocols" that provide specialized support for specific needs. Mir will continue to implement those protocols that are important for the projects it supports. With the Mir 1.0.0 release, the list of supported extension protocols is:

• wayland
• xdg_shell_unstable_v6
• xdg_shell

These are sufficient for the vast majority of desktop and IoT applications.

Display layout
Mir has a new .display configuration file that tells it how to organize multiple outputs. This is described in the posts "Display Configuration for mir-kiosk" and "Egmde snap: update 0.2".

Because Mir is designed to handle a wide range of platforms, it can be used to create a Wayland-based "Desktop Environment" or "Shell". A couple of examples that use Mir are:

• Unity8
• Egmde

Developers using Mir will find it packaged and available on Ubuntu, Fedora, and Arch, and soon on Debian. The latest Mir release is available for all supported Ubuntu series from the Mir team's Release PPA. To know more about Mir 1.0.0 in detail, visit the Ubuntu community blog.

Read next:
• Linux Mint Project, LMDE 3 'Cindy' Cinnamon, released
• Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look
• What to expect from upcoming Ubuntu 18.04 release


Google announces the Beta version of Cloud Source Repositories

Melisha Dsouza
21 Sep 2018
3 min read
Yesterday, Google launched the beta version of its Cloud Source Repositories. Claiming to provide its users with a better search experience, Google Cloud Source Repositories is a Git-based source code repository built on Google Cloud. Cloud Source Repositories introduces a powerful code search feature, which uses document indexing and retrieval methods similar to Google Search. Cloud Source Repositories could mark a major comeback for Google after Google Code began shutting down in 2015. This could also be a strategic move for Google, as many coders have been looking for an alternative to GitHub after its acquisition by Microsoft.

How does Google code search work?

Code search in Cloud Source Repositories optimizes the indexing, algorithms, and result types for searching code. When a query is submitted, it is sent to a root machine and sharded to hundreds of secondary machines. The machines look for matches by file names, classes, functions, and other symbols, and match the context and namespace of the symbols. A single query can search across thousands of different repositories.

Cloud Source Repositories also has a semantic understanding of the code. For Java, JavaScript, Go, C++, Python, TypeScript, and Proto files, the tool will also return information on whether the match is a class, method, enum, or field.

Solutions to common code search challenges

#1 Executing searches across all the code at one's company
If a company has repositories storing different versions of the code, executing searches across all of it is exhausting and time-consuming. With Cloud Source Repositories, the default branches of all repositories are always indexed and up to date, so searching across all the code is faster.

#2 Searching for code that performs a common operation
Cloud Source Repositories enables users to perform quick searches. Users can also save time by discovering and reusing an existing solution while avoiding bugs in their own code.

#3 Remembering the right way to use a common code component
Developers can enter a query and search across all of their company's code for examples of how a common piece of code has been used successfully by other developers.

#4 Debugging issues in a production application
If a developer encounters a specific error message in the server logs that reads 'User ID 123 not found in PaymentDatabase', they can perform a regular expression search for 'User ID .* not found in PaymentDatabase' and instantly find the location in the code where this error was triggered.

All repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query. Cloud Source Repositories has a limited free tier that supports projects up to 50GB with a maximum of 5 users. You can read more about Cloud Source Repositories in the official documentation.

Read next:
• Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
• Google to allegedly launch a new Smart home device
• Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers
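The error-message lookup described under #4 is, at its core, an ordinary regular expression match over indexed source lines. As a rough illustration only (the file paths and log lines below are invented, not from Google's service), the idea can be sketched in Python:

```python
import re

# Hypothetical indexed source lines, standing in for a codebase.
codebase = {
    "billing/payments.py": 'raise KeyError(f"User ID {uid} not found in PaymentDatabase")',
    "billing/refunds.py": "def refund(uid): ...",
}

# The same pattern a developer would type into code search.
pattern = re.compile(r"User ID .* not found in PaymentDatabase")

matches = [path for path, line in codebase.items() if pattern.search(line)]
print(matches)  # ['billing/payments.py']
```

The real service additionally shards the query across many machines and ranks by symbol kind, but the matching step itself is a regular-expression search like the one above.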


Adobe set to acquire Marketo putting Adobe Experience Cloud at the heart of all marketing

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, Adobe Systems confirmed its plans to acquire Marketo Inc. for $4.75 billion from Vista Equity Partners Management. The deal is expected to close in the fourth quarter of Adobe's fiscal year 2018, in December. With this acquisition, Adobe aims to combine Adobe Experience Cloud and Marketo Commerce Cloud to provide a unified platform for all marketers.

Marketo is a US-based software company that develops marketing software providing inbound marketing, social marketing, CRM, and other related services. The industries it serves include healthcare, technology, financial services, manufacturing, and media, among others.

What does acquiring Marketo mean for Adobe?

A single platform to serve both B2B and B2C customers
The integration of Marketo Commerce Cloud into the Adobe Experience Cloud will help Adobe deliver a single platform that serves both B2B and B2C customers globally. This acquisition will bring together Marketo's lead account-based marketing technology and Adobe's Experience Cloud analytics, advertising, and commerce capabilities, enabling B2B companies to create, manage, and execute marketing engagements at scale.

Access to Marketo's huge customer base
Enterprises from various industries use Marketo's marketing applications to drive engagement and customer loyalty. Marketo will bring its huge ecosystem, consisting of nearly 5,000 customers and over 500 partners, to Adobe.

Brad Rencher, Executive Vice President and General Manager, Digital Experience at Adobe, said: "The acquisition of Marketo widens Adobe's lead in customer experience across B2C and B2B and puts Adobe Experience Cloud at the heart of all marketing."

What's in it for Marketo?

Signaling the next phase of Marketo's growth, its acquisition by Adobe will further accelerate its product roadmap and go-to-market execution. With Adobe, Marketo's products will get a new level of global operational scale and the ability to penetrate new verticals and geographies.

The CEO of Marketo, Steve Lucas, believes that with Adobe they will be able to rapidly innovate and provide their customers a definitive system of engagement: "Adobe and Marketo both share an unwavering belief in the power of content and data to drive business results. Marketo delivers the leading B2B marketing engagement platform for the modern marketer, and there is no better home for Marketo to continue to rapidly innovate than Adobe."

To know more about Adobe acquiring Marketo, read the official announcement on Adobe's website.

Read next:
• Adobe to spot fake images using Artificial Intelligence
• Adobe is going to acquire Magento for $1.68 Billion
• Adobe glides into Augmented Reality with Adobe Aero


Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cortex as a CNCF Sandbox project. Cortex is an open source, horizontally scalable, multi-tenant Prometheus-as-a-service. It provides long-term storage for Prometheus metrics when used as a remote write destination, and it comes with a horizontally scalable, Prometheus-compatible query API. Its use cases include:

• Service providers, who can manage a large number of Prometheus instances and provide long-term storage.
• Enterprises, which can centralize management of large-scale Prometheus deployments and ensure long-term durability of Prometheus data.

Originally developed by Weaveworks, Cortex is now used in production by organizations like Grafana Labs, FreshTracks, and EA.

How does it work?

The following steps describe its architecture (architecture diagram source: CNCF):

1. Scraping samples: First, a Prometheus instance scrapes all of the user's services and then forwards the samples to a Cortex deployment. It does this using the remote_write API, which was added to Prometheus to support Cortex and other integrations.

2. The distributor distributes the samples: The instance sends these samples to the distributor, a stateless service that consults the ring to figure out which ingesters should ingest each sample. The ingesters are arranged in a consistent hash ring, keyed on the fingerprint of the time series and stored in a consistent data store such as Consul. The distributor finds the owning ingester and forwards the sample to it, and also to the two ingesters after it in the ring. This means that if an ingester goes down, two others still have its data.

3. Ingesters batch samples into chunks: Ingesters continuously receive a stream of samples and group them together in chunks. These chunks are then stored in a backend database, such as DynamoDB, BigTable, or Cassandra. Ingesters perform this chunking so that Cortex isn't constantly writing to its backend database.
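The distributor's replication scheme (find the owning ingester on the consistent hash ring, then also copy the sample to the next two ingesters clockwise) can be sketched with a toy ring. This is an illustration only, not Cortex's actual code; the hashing scheme and ingester names are invented:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit position on the ring.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class Ring:
    """Toy consistent hash ring with replication, in the style Cortex describes."""

    def __init__(self, ingesters, replication_factor=3):
        # Each ingester gets a token; the ring is the sorted token list.
        self._ring = sorted((_hash(name), name) for name in ingesters)
        self._rf = replication_factor

    def owners(self, series_fingerprint: str):
        # Owner = first ingester clockwise from the series' hash;
        # replicas = the next (rf - 1) ingesters after it on the ring.
        tokens = [t for t, _ in self._ring]
        start = bisect.bisect(tokens, _hash(series_fingerprint)) % len(self._ring)
        return [self._ring[(start + i) % len(self._ring)][1] for i in range(self._rf)]

ring = Ring(["ingester-1", "ingester-2", "ingester-3", "ingester-4"])
print(ring.owners('http_requests_total{job="api"}'))  # three distinct ingesters
```

Because the ring position depends only on the series fingerprint, every distributor replica routes the same series to the same three ingesters, which is what makes the distributor stateless.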
Alexis Richardson, CEO of Weaveworks, believes that becoming a CNCF Sandbox project will help grow the Prometheus ecosystem: "By joining CNCF, Cortex will have a neutral home for collaboration between contributor companies, while allowing the Prometheus ecosystem to grow a more robust set of integrations and solutions. Cortex already has a strong affinity with several CNCF technologies, including Kubernetes, gRPC, OpenTracing and Jaeger, so it's a natural fit for us to continue building on these interoperabilities as part of CNCF."

To know more in detail, check out the official announcement by CNCF, and also read "What is Cortex?", a post published on the Weaveworks blog.

Read next:
• Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
• CNCF Sandbox, the home for evolving cloud native projects, accepts Google's OpenMetrics Project
• Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1


Kubernetes 1.12 is releasing next week with updates to its storage, security and much more!

Melisha Dsouza
21 Sep 2018
4 min read
Kubernetes 1.12 will be released on Tuesday, the 25th of September 2018. The updated release comes with improvements to security and storage, cloud provider support, and other internal changes. Let's take a look at the four domains that will be most affected by this update.

#1 Security

Stability for kubelet TLS bootstrap
The kubelet TLS bootstrap is now stable. (This was also covered in the blog post "Kubernetes Security: RBAC and TLS".) The kubelet can generate a private key and a certificate signing request (CSR) to obtain the corresponding certificate.

Kubelet server TLS certificate automatic rotation (Beta)
Kubelets can rotate both client and server certificates. Rotation is controlled through the RotateKubeletClientCertificate and RotateKubeletServerCertificate feature flags in the kubelet, which are now enabled by default.

Egress and IPBlock support for NetworkPolicy
NetworkPolicy objects support an egress ("to") section to allow or deny traffic based on IP ranges or Kubernetes metadata, and CIDR IP blocks can be configured in rule definitions. Users can combine Kubernetes-specific selectors with IP-based ones for both ingress and egress policies.

Encryption at rest
Data can be encrypted at rest using Google Key Management Service as an encryption provider. Read more about this in "KMS providers for data encryption".

#2 Storage

Snapshot / restore volume support
The VolumeSnapshotContent and VolumeSnapshot API resources let users and administrators create volume snapshots.

Topology-aware dynamic provisioning, Kubernetes CSI topology support (Beta)
Topology-aware dynamic provisioning allows a Pod to request one or more Persistent Volumes (PVs) whose topology is compatible with the Pod's other scheduling constraints, such as resource requirements and affinity/anti-affinity policies. In multi-zone clusters, pods can be spread across zones in a specific region; the volume binding mode determines the moment at which volume binding and dynamic provisioning happen.

Automatic detection of node type
When the dynamic volume limits feature is enabled, Kubernetes automatically determines the node type and supports the appropriate number of attachable volumes for the node and vendor.

#3 Support for cloud providers

Support for Azure Availability Zones
Kubernetes 1.12 brings support for Azure availability zones. Nodes in each availability zone will be labeled failure-domain.beta.kubernetes.io/zone=<region>-<AZ>, and Azure managed-disk storage classes will be provisioned taking this into account.

Stable support for Azure Virtual Machine Scale Sets
This feature adds support for Azure Virtual Machine Scale Sets, a technology that lets users create and manage a group of identical, load-balanced virtual machines.

Azure support in the cluster autoscaler (Stable)
This feature adds support for the Azure Cluster Autoscaler. The cluster autoscaler allows clusters to grow as resource demands increase, scaling based on pending pods.

#4 Better support for Kubernetes internals

Easier installation and upgrades through ComponentConfig
In earlier Kubernetes versions, modifying the base configuration of the core cluster components was not easily automatable. ComponentConfig is an ongoing effort to make component configuration more dynamic and directly reachable through the Kubernetes API.

Improved multi-platform compatibility
Kubernetes aims to support multiple architectures, including arm, arm64, ppc64le, s390x, and Windows platforms. Automated CI e2e conformance tests have been deployed to ensure compatibility moving forward.

Quota by priority
scopeSelector can be used to create Pods at a specific priority, and to control a pod's consumption of system resources based on its priority.
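The CIDR-based IPBlock matching described under the security changes boils down to one question per peer: is this address inside the allowed range, and outside every excepted range? Real policies are written as YAML NetworkPolicy objects, not Python, and the CIDRs below are made up for illustration, but the check itself can be sketched with the standard ipaddress module:

```python
import ipaddress

def ipblock_allows(peer_ip: str, cidr: str, except_cidrs=()) -> bool:
    """Mimic a NetworkPolicy ipBlock: allow `cidr`, minus any `except_cidrs`."""
    ip = ipaddress.ip_address(peer_ip)
    if ip not in ipaddress.ip_network(cidr):
        return False
    # Addresses in an excepted sub-range are denied even though they
    # fall inside the broader allowed CIDR.
    return not any(ip in ipaddress.ip_network(c) for c in except_cidrs)

# Egress rule: allow 10.0.0.0/16 except the 10.0.5.0/24 subnet.
print(ipblock_allows("10.0.1.7", "10.0.0.0/16", ["10.0.5.0/24"]))  # True
print(ipblock_allows("10.0.5.9", "10.0.0.0/16", ["10.0.5.0/24"]))  # False
```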
Apart from these four major areas, additional features to look out for include arbitrary/custom metrics in the Horizontal Pod Autoscaler, vertical pod scaling, mount namespace propagation, and much more. To learn about all the upgrades in Kubernetes 1.12, head over to Sysdig's blog.

Read next:
• Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
• Kubernetes 1.11 is here!
• VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service


Microsoft’s Immutable storage for Azure Storage Blobs, now generally available

Melisha Dsouza
21 Sep 2018
3 min read
Microsoft's new "immutable storage" feature for Azure Blobs is now generally available. Financial services organizations regulated by the Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), Financial Industry Regulatory Authority (FINRA), and others are required to retain business-related communications in a Write-Once-Read-Many (WORM), or immutable, state. This ensures that the data is non-erasable and non-modifiable for a specific retention interval. The healthcare, insurance, media, public safety, and legal services industries will also benefit a great deal from this feature.

Through configurable policies, users can only create and read blobs, not modify or delete them. There is no additional charge for using this feature: immutable data is priced in the same way as mutable data.

Read also: Microsoft introduces 'Immutable Blob Storage', a highly protected object storage for Azure

The upgrades that accompany this feature are:

#1 Regulatory compliance
Immutable storage for Azure Blobs will help financial institutions and related industries store data immutably. Microsoft will soon release a technical white paper with details on how the feature addresses regulatory requirements. Head over to the Azure Trust Center for detailed information about compliance certifications.

#2 Secure document retention
The immutable storage feature for the Azure Blobs service ensures that data cannot be modified or deleted by any user, even one with administrative privileges.

#3 Better legal hold
Users can now store sensitive information related to litigation, criminal investigations, and more in a tamper-proof state for the desired duration.

#4 Time-based retention policy support
Users can set policies to store data immutably for a specified interval of time.

#5 Legal hold policy support
When users do not know the data retention time, they can set legal holds to store data immutably until the legal hold is cleared.
#6 Support for all blob tiers
WORM policies are independent of the Azure Blob Storage tier and apply to all tiers, so customers can store their data immutably in the most cost-optimized tier for their workloads.

#7 Blob container-level configuration
Users can configure time-based retention policies and legal hold tags at the container level. Simple container-level settings can create time-based retention policies, lock policies, extend retention intervals, set legal holds, clear legal holds, and so on.

17a-4 LLC, Commvault, HubStor, and Archive2Azure are among the Microsoft partners that support Azure Blob immutable storage. To learn how to upgrade to this feature, head over to the Microsoft blog.

Read next:
• Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
• Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
• Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
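The WORM semantics described above (blobs can be created and read, but never overwritten or deleted until the retention interval passes) can be illustrated with a toy store. This is a conceptual sketch only, not the Azure SDK; the class and method names are invented:

```python
import time

class WormStore:
    """Toy write-once-read-many store with per-blob time-based retention."""

    def __init__(self):
        self._blobs = {}    # name -> bytes
        self._expiry = {}   # name -> retention deadline (epoch seconds)

    def put(self, name: str, data: bytes, retention_seconds: float):
        # Write-once: an existing blob can never be overwritten.
        if name in self._blobs:
            raise PermissionError(f"{name!r} is immutable; cannot overwrite")
        self._blobs[name] = data
        self._expiry[name] = time.time() + retention_seconds

    def get(self, name: str) -> bytes:
        return self._blobs[name]  # reads are always allowed

    def delete(self, name: str):
        # Deletes are refused until the retention interval has elapsed.
        if time.time() < self._expiry[name]:
            raise PermissionError(f"{name!r} is under retention; cannot delete")
        del self._blobs[name]
        del self._expiry[name]

store = WormStore()
store.put("trade-log-2018-09-21", b"...", retention_seconds=3600)
store.get("trade-log-2018-09-21")  # reading is fine
# A second put() or an early delete() would raise PermissionError.
```

In the real service this enforcement happens server-side via container-level policies, so even an account administrator cannot bypass it.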

Linkerd 2.0 is now generally available with a new service sidecar design

Sugandha Lahoti
20 Sep 2018
2 min read
Linkerd 2.0 is now generally available. Linkerd is a transparent proxy that adds service discovery, routing, failure handling, and visibility to modern software applications. Linkerd 2.0 brings two significant changes. First, it has been completely rewritten to be faster and smaller than Linkerd 1.x. Second, Linkerd moves beyond the service mesh model to running on a single service. It also comes with a focus on minimal configuration, a modular control plane design, and UNIX-style CLI tools. Let's look at what each of these changes means.

Smaller and faster

Linkerd has been completely reworked to become faster and smaller than its predecessor. Linkerd 2.0's data plane is comprised of ultralight Rust proxies, which consume around 10MB of RSS and have a p99 latency of under 1ms. Linkerd's minimalist control plane (written in Go) is similarly designed for speed and a low resource footprint.

Service sidecar design

Linkerd 2.0 also adopts a modern service sidecar design in place of the traditional service mesh model. The traditional service mesh model has two major problems: it adds a significant layer of complexity to the tech stack, and it is designed to meet the needs of platform owners rather than service owners. Linkerd 2.0's service sidecar design offers a solution to both. It allows platform owners to build out a service mesh incrementally, one service at a time, while still providing the security and reliability a full service mesh offers. More importantly, Linkerd 2.0 addresses the needs of service owners directly through its focus on diagnostics and debugging. At its core, Linkerd 2.0 is a service sidecar, running on a single service without requiring cluster-wide installation.

Even without a whole Kubernetes cluster, developers can run Linkerd and get:

• Instant Grafana dashboards of a service's success rates, latencies, and throughput
• A topology graph of incoming and outgoing dependencies
• A live view of requests being made to the service
• Improved, latency-aware load balancing

Installation

Installing Linkerd 2.0 on a service requires no configuration or code changes. You can try Linkerd 2.0 on a Kubernetes 1.9+ cluster in 60 seconds by running:

curl https://run.linkerd.io/install | sh

Also check out the full Getting Started guide. Linkerd 2.0 is also hosted on GitHub.

Read next:
• Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
• Kubernetes 1.11 is here!
• VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service


Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures like cloud-native, microservices, and serverless. The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform.' This is essentially a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation.

CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?

According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stresses that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov.

Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "As Yahoo! Japan shifts to microservices, we needed more than just an API gateway -- we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours. Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0

Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs: everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of SREs (site reliability engineers): it's designed to manage the relationships between services, and to ensure they not only interact with each other the way they should, but do so with minimum downtime.

The Kong team appears to have a huge amount of confidence in the launch of the platform. The extent to which they can grow their customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.

Read next:
• How Gremlin is making chaos engineering accessible [Interview]
• Is the 'commons clause' a threat to open source?


Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’

Natasha Mathur
17 Sep 2018
4 min read
Linux is one of the most popular operating system families, built around the Linux kernel created by Linus Torvalds. Because it is free and open source, it quickly gained a huge audience among developers. Torvalds welcomed other developers' contributions to the kernel, provided they kept their contributions free. As a result, thousands of developers have been working to improve Linux over the years, leading to its huge popularity today.

Yesterday, Linus, who has been working on the kernel for almost 30 years, caught the Linux community by surprise as he apologized and announced he was taking a break over his 'hurtful' behavior that 'contributed to an unprofessional environment'. In a long email to the Linux kernel mailing list, Torvalds announced the Linux 4.19 release candidate and then talked about his 'look yourself in the mirror' moment.

"This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry," admitted Torvalds.

The confession came about after Torvalds admitted to messing up the schedule of the Maintainers' Summit, a meeting of Linux's top 40 or so developers, by planning a family vacation. "Yes, I was somewhat embarrassed about having screwed up my calendar, but honestly, I was mostly hopeful that I wouldn't have to go to the kernel summit that I have gone to every year for just about the last two decades. That whole situation then started a whole different kind of discussion -- I realized that I had completely mis-read some of the people involved," confessed Torvalds.

Torvalds has been notorious for his outspoken nature and outbursts towards others, especially developers in the Linux community. Sarah Sharp, a Linux maintainer, quit the Linux community in 2015 over Torvalds' offensive behavior and called it 'toxic'. Torvalds exploded at Intel earlier this year for spinning the Spectre fix as a security feature. And last year, Torvalds responded with profanity about different approaches to security during a discussion about whitelisting proposed features for Linux version 4.15.

"Maybe I can get an email filter in place so that when I send email with curse-words, they just won't go out. I really had been ignoring some fairly deep-seated feelings in the community... I am not an emotionally empathetic kind of person... I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely," writes Torvalds.

Torvalds then went on to talk about taking a break from the Linux community: "This is not some kind of 'I'm burnt out, I need to just go away' break. I'm not feeling like I don't want to continue maintaining Linux. I very much want to continue to do this project that I've been working on for almost three decades. I need to take a break to get help on how to behave differently and fix some issues in my tooling and workflow."

A discussion with over 500 comments has already started on Reddit regarding Torvalds' decision. While some people support Torvalds and accept his apology, others feel the apology was long overdue and will believe him only once he puts his words into action.

https://twitter.com/TejasKumar_/status/1041527028271312897
https://twitter.com/coreytabaka/status/1041468174397399041

Read next:
• Python founder resigns -- Guido van Rossum goes 'on a permanent vacation from being BDFL'
• Facebook and Arm join Yocto Project as platinum members for embedded Linux development
• NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018


Oracle releases open source and commercial licenses for Java 11 and later

Savia Lobo
13 Sep 2018
3 min read
Oracle announced that it will provide JDK releases in two combinations ( an open source license and a commercial license): Under the open source GNU General Public License v2, with the Classpath Exception (GPLv2+CPE) Under a commercial license for those using the Oracle JDK as part of an Oracle product or service, or who do not wish to use open source software. These combinations will replace the historical BCL(Binary Code License for Oracle Java SE technologies), which had a combination of free and paid commercial terms. The BCL has been the primary license for Oracle Java SE technologies for well over a decade. It historically contained ‘commercial features’ that were not available in OpenJDK builds. However, over the past year, Oracle has contributed features to the OpenJDK Community, which include Java Flight Recorder, Java Mission Control, Application Class-Data Sharing, and ZGC. From Java 11 onwards, therefore, Oracle JDK builds and OpenJDK builds will be essentially identical. Minute differences between Oracle JDK 11 and OpenJDK Oracle JDK 11 emits a warning when using the -XX:+UnlockCommercialFeatures option. On the other hand, in OpenJDK builds this option results in an error. This difference remains in order to make it easier for users of Oracle JDK 10 and earlier releases to migrate to Oracle JDK 11 and later. The javac --release command behaves differently for the Java 9 and Java 10 targets. This is because, in those releases the Oracle JDK contained some additional modules that were not part of corresponding OpenJDK releases. Some of them are: javafx.base javafx.controls javafx.fxml javafx.graphics javafx.media javafx.web This difference remains in order to provide a consistent experience for specific kinds of legacy use. 
These modules are either now available separately as part of OpenJFX, are now in both OpenJDK and the Oracle JDK because they were commercial features that Oracle contributed to OpenJDK (e.g., Flight Recorder), or were removed from Oracle JDK 11 (e.g., JNLP).

The Oracle JDK has always required third-party cryptographic providers to be signed by a known certificate, whereas the cryptography framework in OpenJDK has an open cryptographic interface and does not restrict which providers can be used. Oracle JDK 11 will continue to require a valid signature, while Oracle's OpenJDK builds will continue to allow the use of either a validly signed or unsigned third-party crypto provider.

Read more about this news in detail on the Oracle blog.

State of OpenJDK: Past, Present and Future with Oracle
Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java
Prasad Ramesh
12 Sep 2018
3 min read

Why did last week’s Azure cloud outage happen? Here’s Microsoft’s Root Cause Analysis Summary.

Earlier this month, Microsoft Azure experienced problems that left users unable to access its cloud services. The outage in the South Central US region took several Azure services offline for U.S. users. The reason for the outage was initially stated as "severe weather", and Microsoft conducted a root cause analysis (RCA) to find the exact cause: many services went offline when a cooling system failure caused servers to overheat and shut themselves down.

What did the RCA reveal about the Azure outage?

High-energy storms associated with Hurricane Gordon hit southern Texas near Microsoft Azure's data centers for South Central US. Several data centers experienced voltage fluctuations: lightning-induced electrical activity caused significant voltage swells, which in turn caused a portion of one data center to switch to generator power. The power swells also shut down the mechanical cooling systems, despite surge suppressors being in place.

With the cooling systems offline, temperatures exceeded the thermal buffer within the cooling system. Once the safe operational temperature threshold was exceeded, an automated shutdown of devices was initiated. This shutdown mechanism exists to preserve infrastructure and data integrity, but in this incident temperatures rose so quickly in some areas of the datacenter that hardware was damaged before a shutdown could be initiated. Many storage servers, along with some network devices and power units, were damaged.

Microsoft is taking steps to prevent further damage while the storms remain active in the area, switching the remaining data centers to generator power to stabilize the power supply. For recovery of the damaged units, the first step was to recover the Azure Software Load Balancers (SLBs) for the storage scale units.
The next step was to recover the storage servers and the data on them by replacing failed components and migrating data to healthy storage units, while validating that no data was corrupted. The Azure website also states that "Impacted customers will receive a credit pursuant to the Microsoft Azure Service Level Agreement, in their October billing statement." A detailed analysis will be available on the Azure website in the coming weeks.

For more details on the RCA and customer impact, visit the Azure website.

Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Microsoft Azure's new governance DApp: An enterprise blockchain without mining
Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Melisha Dsouza
12 Sep 2018
2 min read

Amazon announces AWS Lambda Support for PowerShell Core 6.0

In a post yesterday, the AWS Developer team announced AWS Lambda support for PowerShell Core 6.0. Users can now execute PowerShell scripts and functions in response to Lambda events.

Why should developers look forward to this upgrade?

The AWS Tools for PowerShell allow developers and administrators to manage their AWS services and resources in the PowerShell scripting environment. Users can manage their AWS resources with the same PowerShell tools they use to manage Windows, Linux, and macOS environments. These tools let them perform many of the same actions available in the AWS SDK for .NET, and they can be accessed from the command line for quick tasks, such as controlling Amazon EC2 instances.

The PowerShell scripting language can compose scripts that automate AWS service management, and with direct access to AWS services from PowerShell, management scripts can take advantage of everything the AWS cloud has to offer. The AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are flexible in handling credentials, including support for the AWS Identity and Access Management (IAM) infrastructure.

To use the new support, it is necessary to set up an appropriate development environment.

Set up the development environment

This can be done in a few simple steps:
1. Set up the correct version of PowerShell.
2. Ensure Visual Studio Code is configured for PowerShell Core 6.0.
3. Install the .NET Core 2.1 SDK, since PowerShell Core is built on top of .NET Core.
4. Install the AWSLambdaPSCore module from the PowerShell Gallery.

The module provides users with cmdlets to author and publish PowerShell-based Lambda functions (see the cmdlet table in the AWS blog post). You can head over to the AWS blog for detailed steps on how to use the Lambda support for PowerShell.
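As a rough illustration, the setup and authoring steps above can be sketched as a short PowerShell session. This is a sketch, not the blog's exact walkthrough; it assumes PowerShell Core 6.0, the .NET Core 2.1 SDK, and configured AWS credentials are already in place, and the script name and region are placeholders:

```powershell
# Install the AWSLambdaPSCore module from the PowerShell Gallery
Install-Module AWSLambdaPSCore -Scope CurrentUser

# List the available Lambda script templates shipped with the module
Get-AWSPowerShellLambdaTemplate

# Create a new PowerShell-based Lambda script from the 'Basic' template
New-AWSPowerShellLambda -ScriptName MyFirstFunction -Template Basic

# Publish the script as a Lambda function (requires AWS credentials and permissions)
Publish-AWSPowerShellLambda -ScriptPath ./MyFirstFunction/MyFirstFunction.ps1 `
    -Name MyFirstFunction -Region us-east-1
```

Once published, the function can be wired to Lambda event sources (for example, a CloudWatch scheduled event) like any other Lambda function.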
The blog gives readers a simple example of how to execute a PowerShell script that ensures the Remote Desktop (RDP) port is not left open on any of the EC2 security groups.

How to Run Code in the Cloud with AWS Lambda
Amazon hits $1 trillion market value milestone yesterday, joining Apple Inc
Getting started with Amazon Machine Learning workflow [Tutorial]

Melisha Dsouza
11 Sep 2018
4 min read

Dr. Fei Fei Li, Google's AI Cloud head steps down amidst speculations; Dr. Andrew Moore to take her place

Yesterday, Diane Greene, the CEO of Google Cloud, announced in a blog post that Chief Artificial Intelligence Scientist Dr. Fei-Fei Li will be replaced by Dr. Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, at the end of this year. The blog further mentions that, as originally planned, Dr. Fei-Fei Li will return to her professorship at Stanford and will, in the meanwhile, transition to being an AI/ML advisor for Google Cloud. The timing of the transition, following the controversies surrounding Google and the Pentagon's Project Maven, is not lost on many.

Flashback on the 'Project Maven' protest and its outcry

In March 2017, it was revealed that Google Cloud, headed by Greene, had signed a secret $9m contract with the United States Department of Defense, called 'Project Maven'. The project aimed to develop an AI system that could help recognize people and objects captured in military drone footage. The contract was crucial to the Google Cloud Platform gaining a key US government FedRAMP authorization, and was expected to help Google find future government work worth potentially billions of dollars. Planned for non-offensive purposes only, Project Maven also had the potential to expand into a $250m deal. Google provided the Department of Defense with its TensorFlow APIs to assist in object recognition, which the Pentagon believed would eventually turn its stores of video into "actionable intelligence".

In September 2017, in a leaked email reviewed by The New York Times, Scott Frohman, Google's head of defense and intelligence sales, asked Dr. Li, Google Cloud AI's leader and Chief Scientist, for direction on the "burning question" of how to publicize this news. She replied:

“Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. 
This is red meat to the media to find all ways to damage Google.”

As Dr. Li predicted, the project was met with outrage by more than 3,000 Google employees, who believed that Google shouldn't be involved in any military work and that algorithms have no place in identifying potential targets. The project caused a rift in Google's workforce, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. Many employees were "deeply concerned" that data collected by Google could be integrated with military surveillance data for targeted killing.

Fast forward to June 2018, when Google stated that it would not renew the contract (set to expire in 2019) with the Pentagon.

Dr. Li's timeline at Google

During her two-year tenure, Dr. Li oversaw some remarkable work in accelerating the adoption of AI and ML by developers and Google Cloud customers. Considered one of the most talented machine learning researchers in the world, Dr. Li has published more than 150 scientific articles in top-tier journals and conferences, including Nature, the Journal of Neuroscience, and the New England Journal of Medicine. She is the inventor of ImageNet and the ImageNet Challenge, a large-scale effort contributing to the latest developments in computer vision and deep learning in AI. She has been a keynote or invited speaker at many conferences, has received prestigious awards for innovation and technology, and has been featured in many magazines. In addition to her contributions to the world of tech, Dr. Li is also a co-founder of Stanford's renowned SAILORS outreach program for high school girls and of the national non-profit AI4ALL.

The controversial email from Dr. Li may lead one to wonder whether the transition was a result of the events of 2017. However, no official statement has been released by Google or Dr. Li on why she is moving on. Head over to Google's blog for the official announcement of this news.
Google CEO Sundar Pichai won't be testifying to Senate on election interference
Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal
Epic games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android Installer before patch was ready
Melisha Dsouza
11 Sep 2018
4 min read

Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

Microsoft is rebranding Visual Studio Team Services (VSTS) to Azure DevOps, along with Azure DevOps Server, the successor of Team Foundation Server (TFS). Microsoft understands that DevOps has become increasingly critical to a team's success; the re-branding aims to help teams ship higher-quality software in less time.

Azure DevOps supports both public and private cloud configurations. The services are open and extensible and designed to work with any type of application, framework, platform, or cloud. Since the Azure DevOps services work well together, users gain more control over their projects. Azure DevOps is free for open source projects and for small projects with up to five users. For larger teams, the cost ranges from $30 per month to $6,150 per month, depending on the number of users.

VSTS users will be upgraded to Azure DevOps projects automatically, without any loss of functionality. URLs will change from abc.visualstudio.com to dev.azure.com/abc, and redirects from visualstudio.com URLs will be supported to avoid broken links. New users get the update starting September 10, 2018, and existing users can expect it in the coming months.

Key features in Azure DevOps:

#1 Azure Boards
Users can keep track of their work at every development stage with Kanban boards, backlogs, team dashboards, and custom reporting. Built-in scrum boards and planning tools help in planning meetings, while powerful analytics tools give new insights into the health and status of projects.

#2 Azure Artifacts
Users can easily manage Maven, npm, and NuGet package feeds from public and private sources. Storing and sharing code across small teams and large enterprises is now efficient thanks to Azure Artifacts. Users can share packages, use built-in CI/CD, versioning, and testing, and easily access all their artifacts in builds and releases.

#3 Azure Repos
Users get unlimited cloud-hosted private Git repos for their projects.
They can securely connect with and push code into their Git repos from any IDE, editor, or Git client, and code-aware search helps them find what they are looking for. They can perform effective Git code reviews and use forks to promote collaboration with inner source workflows. Azure Repos helps users maintain high code quality by requiring code reviewer sign-off, successful builds, and passing tests before pull requests can be merged.

#4 Azure Test Plans
Users can improve their code quality using planned and exploratory testing services for their apps. Test plans help users capture rich scenario data, test their applications, and take advantage of end-to-end traceability.

#5 Azure Pipelines
There's more in store for VSTS users: for a seamless developer experience, Azure Pipelines is now also available in the GitHub Marketplace. Users can easily configure a CI/CD pipeline for any Azure application using their preferred language and framework. Pipelines can be built and deployed with ease, and they provide status reports, annotated code, and detailed information on changes to the repo within the GitHub interface. Pipelines work with any cloud platform, including Azure, Amazon Web Services, and Google Cloud Platform, and can build and run apps for any operating system, including Android, iOS, Linux, macOS, and Windows. Pipelines are free for open source projects.

Microsoft has tried to improve the user experience with these upgrades. Are you excited yet? You can learn more at Microsoft's live Azure DevOps keynote today at 8:00 a.m. Pacific and at a workshop with Q&A on September 17 at 8:30 a.m. Pacific on Microsoft's events page. You can read all the details of the announcement on Microsoft's official blog.
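To make the Azure Pipelines feature concrete, a minimal azure-pipelines.yml for a Node.js project hosted on GitHub might look like the following. This is a hedged sketch, not an official template: the agent image name and task versions are assumptions that may differ from the current Azure Pipelines defaults.

```yaml
# Minimal CI pipeline: run on every push to master,
# build and test on a Microsoft-hosted Ubuntu agent.
trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm test
  displayName: 'Install dependencies and run tests'
```

Checking a file like this into the root of a GitHub repository is what lets the Azure Pipelines GitHub app report build status back on each commit and pull request.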
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
8 ways Artificial Intelligence can improve DevOps

Bhagyashree R
06 Sep 2018
2 min read

OpenSky is now a part of the Alibaba family

Yesterday, Chris Keane, the General Manager of OpenSky, announced that OpenSky has been acquired by the Alibaba Group. OpenSky is a network of businesses that empower modern global trade for SMBs and help people discover, buy, and share unique goods that match their individual taste.

OpenSky will join Alibaba Group in two capacities: one OpenSky team will become part of Alibaba.com's North America B2B business, serving US-based buyers and suppliers, while the other will become a wholly-owned subsidiary of Alibaba Group consisting of OpenSky's marketplace and SaaS businesses.

In 2015, Alibaba Group acquired a minority stake in OpenSky. In 2017, OpenSky collaborated with Alibaba's B2B leadership team to solve the challenges faced by small businesses. According to Chris, both companies share a common interest in helping small businesses:

“It was thrilling to discover that our counterparts at Alibaba share our obsession with helping SMBs. We've quickly aligned on a global vision to provide access to markets and resources for businesses and entrepreneurs, opening new doors and knocking down obstacles.”

In the announcement, Chris also mentioned that they will be coming up with powerful concepts to serve small businesses everywhere in the near future. To know more, read the official announcement on LinkedIn.

Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Digitizing the offline: How Alibaba's FashionAI can revive the waning retail industry
Why Alibaba cloud could be the dark horse in the public cloud race