
Tech News

Kotlin 1.3.50 released with ‘duration and time Measurement’ API preview, Dukat for npm dependencies, and much more!

Savia Lobo
27 Aug 2019
6 min read
On August 22, the JetBrains team announced the release of Kotlin 1.3.50. Some of the major improvements in this version include a preview of the new duration and time measurement API in the standard library, the use of Dukat for the experimental generation of external declarations for npm dependencies in Gradle Kotlin/JS projects, a separate plugin for debugging Kotlin/Native code in IntelliJ IDEA Ultimate, and much more. The team has also worked on improving the Java-to-Kotlin converter and on Java compilation support in multiplatform projects. Let us have a look at these improvements in brief.

Major improvements in Kotlin 1.3.50

Changes in the standard library

Experimental preview of the duration and time measurement API

A new duration and time measurement API is available for preview. The team explains that if an API expects a duration stored as a primitive value such as Long, one can erroneously pass a value in the wrong unit, and unfortunately the type system doesn't help prevent that. Storing the duration in a regular class solves this problem, but it brings another one: additional allocations. Now APIs can use the Duration type, and all clients need to specify the time in the desired units explicitly.

This release brings support for MonoClock, which represents a monotonic clock that doesn't depend on the system time. A monotonic clock can only measure the time difference between given time points; it doesn't know the "current time." The Clock interface provides a general API for measuring time intervals, and MonoClock is an object implementing Clock that provides the default source of monotonic time on different platforms. When using the Clock interface, the user explicitly marks the start of an action and later measures the time elapsed from that start point. This is especially convenient when one wants to start and finish measuring time from different functions. To know more about this feature in detail, read the Kotlin/KEEP on GitHub.

Experimental API for bit manipulation

The standard library now contains an experimental API for bit manipulation. Similar extension functions have been added for Int, Long, Short, Byte, and their unsigned counterparts.
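To make this concrete, here is a minimal sketch of what the preview API could look like in use, based on the API shape described in the announcement; exact names may shift while the API is experimental, and opting in to experimental APIs may additionally require a compiler flag:

```kotlin
import kotlin.time.ExperimentalTime
import kotlin.time.MonoClock

@UseExperimental(ExperimentalTime::class, ExperimentalStdlibApi::class)
fun main() {
    // Mark the start of an action on the monotonic clock...
    val mark = MonoClock.markNow()
    val sum = (1..10_000).sum()
    // ...and later read the elapsed time as a strongly typed Duration
    val elapsed = mark.elapsedNow()
    println("Computed $sum in $elapsed")

    // Part of the experimental bit-manipulation API on Int
    println(0b1011.countOneBits()) // 3
    println(0b1011.rotateLeft(1))  // 22 (0b10110)
}
```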
IntelliJ IDEA support in Kotlin 1.3.50

Improvements in the Java-to-Kotlin converter

This release includes a preview of a new Java-to-Kotlin converter that minimizes the amount of "red code" one has to fix manually after the conversion. This improved version of the converter tries to infer nullability more correctly based on how the Java types are used in the code. The goal is to decrease the number of compilation errors and to make the produced Kotlin code more convenient to use. The new converter fixes many other known bugs, too; for instance, it now correctly handles implicit Java type casts. It may become the default converter in the future; to turn it on, specify the Use New J2K (experimental) flag in settings.

Debugging improvements

In Kotlin 1.3.50, the team has improved how the Kotlin "Variables" view chooses variables to display. As there's a lot of additional technical information in the bytecode, the Kotlin "Variables" view highlights only the relevant variables. Local variables inside a lambda, as well as captured variables from the outer context and parameters of the outer function, are now correctly displayed (source: jetbrains.com). Kotlin 1.3.50 also adds improved support for the "Evaluate expression" functionality in the debugger for many non-trivial language features, such as local extension functions or accessors of member extension properties. Users can now modify variables via "Evaluate expression" (source: jetbrains.com).

New intentions and inspections

This release adds new intentions and inspections. One of the goals of intentions is to help users learn how to write idiomatic Kotlin code. One such intention, for instance, suggests using the indices property rather than building a range of indices manually (source: jetbrains.com).

Updates to Kotlin/JS

Kotlin 1.3.50 adds support for building and running Kotlin/JS Gradle projects using the org.jetbrains.kotlin.js plugin on Windows. Users can now build and run projects using Gradle tasks; npm dependencies declared in the Gradle configuration are resolved and included. Users can also try out their applications using webpack-dev-server, and much more. The team has also improved the incremental compilation time for Kotlin/JS projects; users can expect speedups of up to 30% compared to 1.3.41. This version also brings improved integration with npm: projects are now resolved lazily and in parallel, and projects with transitive dependencies between compilations in the same project are now supported. Kotlin 1.3.50 also changes the structure and naming of generated artifacts. They are now bundled in the distributions folder, and their names include the version number of the project and the archiveBaseName (which defaults to the project name), e.g. projectName-1.0-SNAPSHOT.js.

Using Dukat for automatic conversion of TypeScript declaration files

Dukat allows the automatic conversion of TypeScript declaration files (.d.ts) into Kotlin external declarations. This makes it more comfortable to use libraries from the JavaScript ecosystem in a type-safe manner in Kotlin, thus reducing the need to manually write wrappers for JS libraries. Kotlin/JS now ships with experimental support for Dukat integration in Gradle projects. With this integration, running the Gradle build task automatically generates type-safe wrappers for npm dependencies, ready to be used from Kotlin. As Dukat is still at a very early stage, its integration is disabled by default. The team has prepared an example project that demonstrates the use of Dukat in Kotlin/JS projects.

Updates to Kotlin/Native

Previously, Kotlin/Native was versioned separately from Kotlin. In this release, the versioning schemes for Kotlin and Kotlin/Native are aligned: version 1.3.50 covers both the Kotlin and Kotlin/Native binaries, reducing complexity. This release brings more pre-imported Apple frameworks for all platforms, including macOS and iOS, and the Kotlin/Native compiler now includes actual bitcode in the frameworks it produces. Several performance improvements have also been made in the interop tool.

The team has also announced that null-check optimizations are planned for Kotlin 1.4: "all runtime null checks will throw a java.lang.NullPointerException instead of a KotlinNullPointerException, IllegalStateException, IllegalArgumentException, and TypeCastException. This applies to: the !! operator, parameter null checks in the method preamble, platform-typed expression null checks, and the as operator with a non-null type. This doesn't apply to lateinit null checks and explicit library function calls like checkNotNull or requireNotNull."

Apart from the changes mentioned, Java compilation can now be included in Kotlin/JVM targets of a multiplatform project by calling the newly added withJava() function of the DSL.
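For reference, a minimal sketch of what this could look like in a multiplatform project's build.gradle.kts; the target name and the rest of the build configuration are assumed:

```kotlin
// build.gradle.kts, assuming the kotlin-multiplatform plugin is applied
kotlin {
    jvm {
        // Newly added in 1.3.50: compile Java sources as part of
        // this JVM target of the multiplatform project
        withJava()
    }
}
```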
This release also adds multiple features and improvements in scripting and REPL support. To know more about these and other changes in detail, read the Kotlin 1.3.50 official blog post.

Introducing Coil, an open-source Android image loading library backed by Kotlin Coroutines
Introducing Kweb: A Kotlin library for building rich web applications
How to avoid NullPointerExceptions in Kotlin [Video]

Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Amrata Joshi
27 Aug 2019
3 min read
Last week, the team at Red Hat, a provider of enterprise open source solutions, announced the general availability of Red Hat OpenShift Service Mesh for connecting, managing, observing, and simplifying service-to-service communication of Kubernetes applications on Red Hat OpenShift 4. OpenShift Service Mesh is based on the Istio, Kiali, and Jaeger projects and is designed to deliver an end-to-end developer experience around microservices-based application architectures. It manages the network connections between containerized and decentralized applications, and it eases the complex task of implementing bespoke networking services for applications and business logic.

Larry Carvalho, research director at IDC, said in a statement to Business Wire, "Service mesh is the next big area of disruption for containers in the enterprise because of the complexity and scale of managing interactions with interconnected microservices. Developers seeking to leverage Service Mesh to accelerate refactoring applications using microservices will find Red Hat's experience in hybrid cloud and Kubernetes a reliable partner with the Service Mesh solution."

Developers can now improve the implementation of microservice architectures by natively integrating a service mesh into the OpenShift Kubernetes platform. OpenShift Service Mesh improves traffic management and includes service observability and visualization of the mesh topology.

Ashesh Badani, Red Hat's senior VP of Cloud Platforms, said in a statement, "The addition of Red Hat OpenShift Service Mesh allows us to further enable developers to be more productive on the industry's most comprehensive enterprise Kubernetes platform by helping to remove the burdens of network connectivity and management from their jobs and allowing them to focus on building the next-generation of business applications."

Features of Red Hat OpenShift Service Mesh

Tracing

OpenShift Service Mesh features tracing based on Jaeger, an open, distributed tracing system. Tracing helps developers track a request between services and provides insight into the request process from start to end.

Visualization and observability

The Service Mesh also provides an easier way to view its topology and observe how the services interact. Visualization helps in understanding how the services are managed and how traffic is flowing in near-real time, which makes management and troubleshooting easier.

Service Mesh installation and configuration

OpenShift Service Mesh features "one-click" installation and configuration with the help of a Service Mesh Operator and the Operator Lifecycle Management framework, so developers can deploy applications into a service mesh more easily. The Service Mesh Operator deploys Istio, Jaeger, and Kiali together, minimizing management burdens and automating tasks such as installation, service maintenance, and lifecycle management.

Developed with open projects

OpenShift Service Mesh is developed with open projects and is built in collaboration with leading members of the Kubernetes community.

Increased developer productivity

The Service Mesh integrates communication policies without requiring changes to the application code or the integration of language-specific libraries.

To know more about Red Hat OpenShift Service Mesh, check out the official website.

Red Hat joins the RISC-V foundation as a Silver level member
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman

Introducing Nushell: A Rust-based shell

Savia Lobo
26 Aug 2019
3 min read
On August 23, Jonathan Turner, an Azure SDK developer, introduced a new shell written in Rust, called Nushell or 'Nu'. This Rust-based shell is inspired by the "classic Unix philosophy of pipelines, the structured data approach of PowerShell, functional programming, systems programming, and more," Turner writes in his official blog.

The idea for Nushell struck when Turner's friend Yehuda Katz demonstrated the workings of PowerShell. Katz asked Turner if he could join his project: "What if we could take the ideas of a structured shell and make it more functional (as opposed to object-oriented)? What if, like PowerShell, it worked on Windows, Linux, and macOS? What if it had great error messages?"

Turner highlights the fact that "everything in Nu is data." This means that as users try out commands, they realize they are using the same verbs to filter, to sort, and so on. Rather than having to remember all the parameters of all the commands, they can use the same verbs to act over their data, regardless of where the data came from. Nu also understands structured text files like JSON, TOML, and YAML, and allows users to manipulate their data, and much more. "You get used to using the verbs, and then you can use them on anything. When you're ready, you can write it back to disk," Turner writes.

Nu also supports opening and looking at text and binary data. On opening a source file, users can scroll around in a syntax-highlighted file. On opening an XML file, they can look at its data. They can even open a binary file and look at what's inside.

Turner mentions that there is a lot one might want to explore with Nushell. Hence, the team has released Nu with the ability to extend it with plugins: Nu looks for these plugins in your path and loads them on startup.

Rust is the major backbone of this project, and Nushell would not have been possible without it, Turner exclaims. Nu internally uses async/await and async streams, and it employs liberal use of "serde" to manage serializing and deserializing into the common data format and to communicate with plugins.

The Nushell GitHub page reads, "This project has reached a minimum-viable product level of quality. While contributors dogfood it as their daily driver, it may be unstable for some commands. Future releases will work to fill out missing features and improve stability. Its design is also subject to change as it matures." The team will further work towards stability, the ability to use Nu as the main shell, the ability to write functions and scripts in Nu, and much more.

Users can also read the book on Nu, available in both English and Spanish. To know more about this news in detail, head over to Jonathan Turner's official blog post or visit Nushell's GitHub page.

Announcing 'async-std' beta release, an async port of Rust's standard library
Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more
Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]

Google confirms and fixes 193 security vulnerabilities in Android Q

Sugandha Lahoti
26 Aug 2019
3 min read
Last week, Google released the latest Android Q security release notes, published to the Android Open Source Project (AOSP) security bulletin update. Per this update, there are 193 Android security vulnerabilities in the latest version of Android, spanning the elevation-of-privilege, remote-code-execution, information-disclosure, and denial-of-service categories. Two are in the Android runtime, two in the library, and 24 in the framework; the Android media framework has 68 vulnerabilities and the Android system has 97. All have been classified with "moderate" severity.

These issues, Google says, are fixed in the default Android 10 patch level of 2019-09-01 on the release of the new OS. "Android Q, as released on AOSP, has a default security patch level of 2019-09-01. Android devices running Android Q and with a security patch level of 2019-09-01 or later address all issues contained in these security release notes," reads the update. The update also specifies that "Google has had no reports of active customer exploitation or abuse of these newly reported issues."

At Google I/O in May, Google had released Android Q Beta 3. With this new release, Google announced that Android Q would double down on security and privacy features, such as a Maps incognito mode, reminders for location usage and sharing (such as only when a delivery app is in use), and TLS 1.3 encryption for low-end devices. Security updates will also roll out faster, updating over the air without a reboot needed for the device. The last beta update for Android Q was rolled out in August as Beta 6.

Other privacy changes announced for Android Q so far by Google include:

- Scoped storage: There are new limits on access to files in shared external storage.
- Device location: Android Q has a new user option to allow access to device location only while your app is in use in the foreground.
- Background app starts: There are new restrictions on launching activities from the background without user interaction.
- Hardware identifiers: There are restrictions on access to device hardware identifiers such as IMEI, serial number, MAC address, and similar data.
- Camera and connectivity: Android 10 restricts access to full camera metadata, and the FINE location permission is now required for many connectivity workflows.

Android has been a target of hackers for a long time. Recently, in July, Check Point researchers reported a new mobile malware attack called 'Agent Smith' which infected around 25 million Android devices. This malware is being used for financial gain through malicious advertisements. The malware, concealed under the identity of a Google-related app, exploited known Android vulnerabilities and automatically replaced installed apps with malicious versions, without any consent of the user.

Android Studio 3.4 releases with Android Q Beta emulator, a new resource manager and more
Android Q Beta is now available for developers on all Google Pixel devices
Android Q will reportedly give network carriers more control over network devices

Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Fatema Patrawala
26 Aug 2019
5 min read
Last week the Apache Flink community announced the release of Apache Flink 1.9.0. The Flink community defines the project goal as "to develop a stream processing system to unify and power many forms of real-time and offline data processing applications as well as event-driven applications." With this release, they have made a huge step forward in that effort by integrating Flink's stream and batch processing capabilities under a single, unified runtime.

There are significant features in this release, namely fine-grained recovery for batch jobs and a preview of the new Blink-based query engine for Table API and SQL queries. The team also announced the availability of the State Processor API, one of the most frequently requested features, which enables users to read and write savepoints with Flink DataSet jobs. Additionally, Flink 1.9 includes a reworked WebUI, a preview of Flink's new Python Table API, and integration with the Apache Hive ecosystem. Let us take a look at the major new features and improvements.

New features and improvements in Apache Flink 1.9.0

Fine-grained batch recovery

The time to recover a batch (DataSet, Table API and SQL) job from a task failure is significantly reduced. Until Flink 1.9, task failures in batch jobs were recovered by canceling all tasks and restarting the whole job, i.e., the job was started from scratch and all progress was voided. With this release, Flink can be configured to limit recovery to only those tasks that are in the same failover region. A failover region is the set of tasks that are connected via pipelined data exchanges; hence, the batch-shuffle connections of a job define the boundaries of its failover regions.

State Processor API

Up to Flink 1.9, accessing the state of a job from the outside was limited to the experimental Queryable State. This release introduces a new, powerful library to read, write and modify state snapshots using the batch DataSet API. In practice, this means:

- Flink job state can be bootstrapped by reading data from external systems, such as external databases, and converting it into a savepoint.
- State in savepoints can be queried using any of Flink's batch APIs (DataSet, Table, SQL), for example to analyze relevant state patterns or check for discrepancies in state that can support application auditing or troubleshooting.
- The schema of state in savepoints can be migrated offline, compared to the previous approach requiring online migration on schema access.
- Invalid data in savepoints can be identified and corrected.

The new State Processor API covers all variations of snapshots: savepoints, full checkpoints and incremental checkpoints.
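As an illustration, here is a minimal sketch of reading operator state from a savepoint with the new API, written in Kotlin against Flink's Java classes; the savepoint path, operator uid, and state name are all hypothetical, and the flink-state-processor-api module is assumed to be on the classpath:

```kotlin
import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.runtime.state.memory.MemoryStateBackend
import org.apache.flink.state.api.Savepoint

fun main() {
    // The State Processor API is built on the batch DataSet API
    val env = ExecutionEnvironment.getExecutionEnvironment()

    // Load an existing savepoint (path and state backend are assumptions)
    val savepoint = Savepoint.load(env, "hdfs:///flink/savepoints/savepoint-1", MemoryStateBackend())

    // Read the list state registered under operator uid "my-uid"
    // with state name "counts" (both hypothetical)
    val counts = savepoint.readListState("my-uid", "counts", Types.INT)
    counts.print()
}
```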
Stop-with-Savepoint

Cancelling with a savepoint is a common operation for stopping, restarting, forking or updating Flink jobs. However, the existing implementation did not guarantee output persistence to external storage systems for exactly-once sinks. To improve the end-to-end semantics when stopping a job, Flink 1.9 introduces a new SUSPEND mode to stop a job with a savepoint that is consistent with the emitted data. You can suspend a job with Flink's CLI client as follows:

bin/flink stop -p [:targetDirectory] :jobId

The final job state is set to FINISHED on success, allowing users to detect failures of the requested operation.

Flink WebUI rework

After a discussion about modernizing the internals of Flink's WebUI, this component was reconstructed using the latest stable version of Angular — basically, a bump from Angular 1.x to 7.x. The redesigned version is the default in Apache Flink 1.9.0; however, there is a link to switch to the old WebUI.

Preview of the new Blink SQL query processor

After the donation of Blink to Apache Flink, the community worked on integrating Blink's query optimizer and runtime for the Table API and SQL. The team refactored the monolithic flink-table module into smaller modules, which resulted in a clear separation of well-defined interfaces between the Java and Scala API modules and the optimizer and runtime modules.

Other important changes in this release:

- The Table API and SQL are now part of the default configuration of the Flink distribution. Previously, they had to be enabled by moving the corresponding JAR file from ./opt to ./lib.
- The machine learning library (flink-ml) has been removed in preparation for FLIP-39.
- The old DataSet and DataStream Python APIs have been removed in favor of FLIP-38.
- Flink can be compiled and run on Java 9. Note that certain components interacting with external systems (connectors, filesystems, reporters) may not work, since the respective projects may have skipped Java 9 support.

The binary distribution and source artifacts for this release are available via the Downloads page of the Flink project, along with the updated documentation. Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation. You can review the release notes for a detailed list of changes and new features before upgrading your Flink setup to 1.9.0.

Apache Flink 1.8.0 releases with finalized state schema evolution support
Apache Flink founders data Artisans could transform stream processing with patent-pending tool
Apache Flink version 1.6.0 released!

Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Bhagyashree R
26 Aug 2019
3 min read
Last week, the Angular team announced the release of Angular CLI 8.3.0. Along with a redesigned website, this release comes with a new deploy command and improvements to the previously introduced differential loading.

https://twitter.com/angular/status/1164653064898277378

Key updates in Angular CLI 8.3.0

Deploy directly from the CLI to a cloud platform with the new deploy command

Starting from Angular CLI 8.3.0, there is a new deploy command that executes the deploy CLI builder associated with your project. It is essentially a simple alias to ng run MY_PROJECT:deploy. Many third-party builders implement deployment capabilities for different platforms, and you can add them to your project with ng add [package name]. After a package with deployment capability is added, your project's angular.json file is automatically updated with a deploy section. You can then simply deploy your project by executing the ng deploy command. Currently, the deploy command supports deployment to Firebase, Azure, Zeit, Netlify, and GitHub. You can also create a builder yourself to use the ng deploy command in case you are deploying to a self-managed server or there's no builder for the cloud platform you are using.

Improved differential loading

Angular CLI 8.0 introduced the concept of differential loading to maximize the browser compatibility of your web application. Most modern browsers today support ES2015, but there might be cases when your app's users have a browser that doesn't. To target a wide range of browsers, you can use polyfill scripts and ship a single bundle containing all your compiled code plus any polyfills that may be needed; however, users who have modern browsers shouldn't pay the cost of this increased bundle size. This is where differential loading comes in: the CLI builds two separate bundles as part of your deployed application. The first bundle targets modern browsers, while the second targets legacy browsers and includes all the necessary polyfills. Though this increases your application's browser compatibility, the production build used to take twice the time. Angular CLI 8.3.0 fixes this by changing how the command runs: the build targeting ES2015 is built first and then directly down-leveled to ES5, instead of rebuilding the app from scratch. In case you encounter any issue, you can fall back to the previous behavior with NG_BUILD_DIFFERENTIAL_FULL=true ng build --prod.

Many Angular developers are excited about the new updates in Angular CLI 8.3.0.

https://twitter.com/vikerman/status/1164655906262409216
https://twitter.com/Santosh19742211/status/1164791877356277761

Some did question the usefulness of the deploy command, though. A developer on Reddit shared their perspective: "Honestly, I think Angular and the CLI are already big and complex enough. Every feature possibly creates bugs and needs to be maintained. While the CLI is incredibly useful and powerful there have been also many issues in the past. On the other hand, I must admit that I can't judge the usefulness of this feature: I've never used Firebase. Is it really so hard to deploy on it? Can't this be done with a couple of lines of a shell script? As already said: One should use CI/CD anyway."

To know more about the new features in Angular CLI 8.3.0 in detail, check out the official docs. Also, check out the @angular-schule/ngx-deploy-starter repository to create a new builder for utilizing the deploy command.
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

Cisco Talos researchers disclose eight vulnerabilities in Google’s Nest Cam IQ indoor camera

Savia Lobo
23 Aug 2019
4 min read
On Monday, August 19, the Cisco Talos research team disclosed eight security vulnerabilities in Google's Nest Cam IQ, a high-end indoor security camera (IoT device). These vulnerabilities allow hackers to take over the camera, prevent its use, or execute code on it. The two researchers, Lilith Wyatt and Claudio Bozzato, said that the eight vulnerabilities apply to version 4620002 of the Nest Cam IQ indoor device and are located in the Nest implementation of the Weave protocol, which is designed specifically for communications among Internet of Things (IoT) devices.

Per Cisco Talos, Nest Labs' Cam IQ Indoor integrates security-enhanced Linux in Android, Google Assistant, and facial recognition into a compact security camera. Nest, for its part, has provided a firmware update that the company says will fix the vulnerabilities; these updates will happen automatically if the user's camera is connected to the internet.

The researchers said in their official statement, "Nest Cam IQ Indoor primarily uses the Weave protocol for setup and initial communications with other Nest devices over TCP, UDP, Bluetooth, and 6lowpan." They further added, "It is important to note that while the weave-tool binary also lives on the camera and is vulnerable, it is not normally exploitable as it requires a local attack vector (i.e. an attacker-controlled file) and the vulnerable commands are never directly run by the camera."

The eight vulnerabilities in Google Nest Cam IQ

TCP connection denial-of-service vulnerability

This exploitable denial-of-service vulnerability (CVE-2019-5043) exists in the Weave daemon of the Nest Cam IQ Indoor, version 4620002. A set of TCP connections can cause unrestricted resource allocation, resulting in a denial of service. An attacker can connect multiple times to trigger this vulnerability.

Legacy pairing information disclosure vulnerability

This exploitable information disclosure vulnerability (CVE-2019-5034) exists in the Weave legacy pairing functionality of the Nest Cam IQ Indoor, version 4620002. A set of specially crafted Weave packets can cause an out-of-bounds read, resulting in information disclosure.

PASE pairing brute force vulnerability

This vulnerability (CVE-2019-5035) exists in the Weave PASE pairing functionality of the Nest Cam IQ Indoor, version 4620002. Here, a set of specially crafted Weave packets can brute-force a pairing code, resulting in greater Weave access and potentially full device control.

KeyError denial-of-service vulnerability

This vulnerability (CVE-2019-5036) exists in the Weave error reporting functionality of the Nest Cam IQ Indoor, version 4620002. Here, a specially crafted Weave packet can cause an arbitrary Weave Exchange Session to close, resulting in a denial of service.

WeaveCASEEngine::DecodeCertificateInfo vulnerability

This vulnerability (CVE-2019-5037) exists in the Weave certificate loading functionality of the Nest Cam IQ Indoor camera, version 4620002, where a specially crafted Weave packet can cause an integer overflow and an out-of-bounds read on unmapped memory, resulting in a denial of service.

Tool Print-TLV code execution vulnerability

This exploitable command execution vulnerability (CVE-2019-5038) exists in the print-tlv command of Weave tools. Here, a specially crafted Weave TLV can trigger a stack-based buffer overflow, resulting in code execution. An attacker can trigger this vulnerability by convincing the user to open a specially crafted Weave command.

ASN1Writer PutValue code execution vulnerability

This exploitable command execution vulnerability (CVE-2019-5039) exists in the ASN1 certificate writing functionality of Openweave-core, version 4.0.2. Here, a specially crafted Weave certificate can trigger a heap-based buffer overflow, resulting in code execution. An attacker can exploit this vulnerability by tricking the user into opening a specially crafted Weave certificate.

DecodeMessageWithLength information disclosure vulnerability

This vulnerability (CVE-2019-5040) exists in the Weave MessageLayer parsing of Openweave-core, version 4.0.2, and the Nest Cam IQ Indoor, version 4620002. A specially crafted Weave packet can cause an integer overflow to occur, resulting in PacketBuffer data reuse.

In a statement to ZDNet, Google said, "We've fixed the disclosed bugs and started rolling them out to all Nest Camera IQs. The devices will update automatically so there's no action required from users." To know more about this news in detail, read Cisco Talos' official blog post.

Vulnerabilities in the Picture Transfer Protocol (PTP) allows researchers to inject ransomware in Canon's DSLR camera
Google's Project Zero reveals several serious zero-day vulnerabilities in a fully remote attack surface of the iPhone
Docker 19.03 introduces an experimental rootless Docker mode that helps mitigate vulnerabilities by hardening the Docker daemon

Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Fatema Patrawala
23 Aug 2019
5 min read
On Tuesday, Reuters reported that Oracle directors gave the go-ahead for a billion-dollar lawsuit filed against Larry Ellison and Safra Catz over the 2016 NetSuite deal. This was made possible by several board members who wrote an extraordinary letter to the Delaware court.

According to Reuters, in 2017, shareholders led by the Firemen's Retirement System of St. Louis alleged that Oracle directors breached their duties when they approved a $9.3 billion acquisition of NetSuite – a company controlled by Oracle chair Larry Ellison – at a huge premium above NetSuite's trading price. Shareholders alleged that Oracle directors sanctioned Ellison's self-dealing, and also claimed that Oracle's board members were too entwined with Ellison to be entrusted with the decision of whether the company should sue him and other directors over the NetSuite deal. In an opinion published in Reuters in May 2018, Vice Chancellor Sam Glasscock of the Delaware Chancery Court agreed that shareholders had shown it would have been futile for them to demand action from the board itself.

Three years after closing the $9.3 billion deal to acquire NetSuite, three board members, including former U.S. Defense Secretary Leon Panetta, sent a letter on August 15 to Sam Glasscock III, Vice Chancellor of the Court of Chancery in Georgetown, Delaware, approving the lawsuit as members of a special board of directors entity known as the Special Litigation Committee. In legal parlance, a lawsuit of this kind is known as a derivative suit. According to Justia, this type of suit is filed in cases like this: "Since shareholders are generally allowed to file a lawsuit in the event that a corporation has refused to file one on its own behalf, many derivative suits are brought against a particular officer or director of the corporation for breach of contract or breach of fiduciary duty," the Justia site explains.

The letter went on to say there was an attempt to settle this suit, which was originally launched in 2017, through negotiation outside of court, but when that attempt failed, the directors wrote this letter to the court stating that the suit should be allowed to proceed. Per the letter, the lawsuit, originally filed by the Firemen's Retirement System of St. Louis, could be worth billions: "One of the lead lawyers for the Firemen's fund, Joel Friedlander of Friedlander & Gorris, said at a hearing in June that shareholders believe the breach-of-duty claims against Oracle and NetSuite executives are worth billions of dollars. So in last week's letter, Oracle's board effectively unleashed plaintiffs' lawyers to seek ten-figure damages against its own members."

Oracle struggled with its cloud footing and ended up buying NetSuite

TechCrunch noted that Larry Ellison was involved in setting up NetSuite in the late 1990s and was a major shareholder of NetSuite at the time of the acquisition. Oracle was struggling to find its cloud footing in 2016, and it was believed that by buying an established SaaS player like NetSuite, it could build out its cloud business much faster than by trying to develop something similar internally.

On Hacker News, a few users commented that Oracle overpaid for NetSuite and enriched Larry Ellison. One comment reads, "As you know people, as you learn about things, you realize that these generalizations we have are, virtually to a generalization, false. Well, except for this one, as it turns out. What you think of Oracle, is even truer than you think it is. There has been no entity in human history with less complexity or nuance to it than Oracle. And I gotta say, as someone who has seen that complexity for my entire life, it's very hard to get used to that idea. It's like, 'surely this is more complicated!' but it's like: Wow, this is really simple! This company is very straightforward, in its defense. This company is about one man, his alter-ego, and what he wants to inflict upon humanity -- that's it! ...Ship mediocrity, inflict misery, lie our asses off, screw our customers, and make a whole shitload of money. Yeah... you talk to Oracle, it's like, 'no, we don't fucking make dreams happen -- we make money!' ...You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle."

Oracle does "organizational restructuring" by laying off 100s of employees
IBM, Oracle under the scanner again for questionable hiring and firing policies
The tug of war between Google and Oracle over API copyright issue has the future of software development in the crossfires

VMware signs definitive agreement to acquire Pivotal Software and Carbon Black

Vincy Davis
23 Aug 2019
3 min read
Yesterday, VMware announced in a press release that it has entered a definitive agreement to acquire Carbon Black, a cloud-native endpoint security software developer. According to the agreement, "VMware will acquire Carbon Black in an all cash transaction for $26 per share, representing an enterprise value of $2.1 billion." VMware intends to use Carbon Black's big data and behavioral analytics to offer customers advanced threat detection and behavioral insight to defend against sophisticated attacks; in this way, it aspires to protect clients through big data, behavioral analytics, and AI.

Pat Gelsinger, the CEO of VMware, says, "By bringing Carbon Black into the VMware family, we are now taking a huge step forward in security and delivering an enterprise-grade platform to administer and protect workloads, applications, and networks." He adds, "With this acquisition, we will also take a significant leadership position in security for the new age of modern applications delivered from any cloud to any device."

Yesterday, after much speculation, VMware also announced that it has acquired Pivotal Software, a cloud-native platform provider, for an enterprise value of $2.7 billion. Dell Technologies is a major stakeholder in both companies.

Lately, VMware has been investing heavily in Kubernetes. Last year, it launched VMware Kubernetes Engine (VKE) to offer Kubernetes-as-a-Service. This year, Pivotal teamed up with the Heroku team to create Cloud Native Buildpacks for Kubernetes, and recently it launched Pivotal Spring Runtime for Kubernetes. With Pivotal, VMware plans to "deliver a comprehensive portfolio of products, tools and services necessary to build, run and manage modern applications on Kubernetes infrastructure with velocity and efficiency."

Read More: VMware's plan to acquire Pivotal Software reflects a rise in Pivotal's shares

Gelsinger told ZDNet that both these "acquisitions address two critical technology priorities of all businesses today — building modern, enterprise-grade applications and protecting enterprise workloads and clients." Gelsinger also pointed out that multi-cloud, digital transformation, and the increasing trend of moving "applications to the cloud and access it over distributed networks and from a diversity of endpoints" are significant reasons for placing high stakes on security. It is clear that by acquiring Carbon Black and Pivotal Software, the cloud computing and virtualization software company is seeking to expand its range of products and services, with an ultimate focus on security in Kubernetes.

A user on Hacker News comments, "I'm not surprised at the Pivotal acquisition. VMware is determined to succeed at Kubernetes. There is already a lot of integration with Pivotal's Kubernetes distribution both at a technical as well as a business level." Developers around the world are also excited to see what the future holds for VMware, Carbon Black, and Pivotal Software.

https://twitter.com/rkagal1/status/1164852719594680321
https://twitter.com/CyberFavourite/status/1164656928913596417
https://twitter.com/arashg_/status/1164785525120618498
https://twitter.com/jambay/status/1164683358128857088
https://twitter.com/AnnoyedMerican/status/1164646153389875200

Per the press release, both transactions are expected to close in the second half of VMware's fiscal year, which ends January 31, 2020. Interested users can read the VMware press releases on acquiring Carbon Black and Pivotal Software for more information.
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Pivotal open sources kpack, a Kubernetes-native image build service

Sugandha Lahoti
23 Aug 2019
2 min read
In April, Pivotal and Heroku teamed up to create Cloud Native Buildpacks for Kubernetes. Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images and are based on the popular buildpack model. Yesterday, Pivotal open-sourced kpack, a set of experimental build-service Kubernetes resource controllers.

Basically, kpack is a Kubernetes-native way to build and update containers: it automates the creation and update of container images that can be run anywhere. Pivotal's commercial implementation of kpack comes via Pivotal Build Service, which users can run atop Kubernetes to boost developer productivity. Build Service integrates kpack with buildpacks and the Kubernetes permissions model. kpack presents a CRD as its interface, so users can interact with it using all Kubernetes API tooling, including kubectl.

Pivotal has open-sourced kpack for two reasons, as mentioned in their blog post: "First, to provide Build Service's container building functionality and declarative logic as a consumable component that can be used by the community in other great products. Second, to provide a first-class interface, to create and modify image resources for those who desire more granular control."

Several companies and communities have announced that they will be using kpack in their projects. Project riff will use kpack to build functions to handle events, and the Cloud Foundry community plans to feature kpack as the new app staging mechanism in the Cloud Foundry Application Runtime. Check out the kpack repo for more details. You can also request alpha access to Build Service.

In other news, VMware is negotiating a deal to acquire Pivotal, as per a recent regulatory filing from Dell. VMware, Pivotal, and Dell have jointly filed the document informing government regulators about the potential transaction.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
VMware's plan to acquire Pivotal Software reflects a rise in Pivotal's shares
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads.

A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes

Savia Lobo
23 Aug 2019
3 min read
On August 19, the Kubernetes Community disclosed that a security issue had been found in the net/http library of the Go language, affecting all versions and all components of Kubernetes. It can result in a DoS attack against any process with an HTTP or HTTPS listener.

The two high-severity vulnerabilities, CVE-2019-9512 and CVE-2019-9514, have been assigned CVSS v3.0 base scores of 7.5 by the Kubernetes Product Security Committee. These vulnerabilities allow untrusted clients to allocate an unlimited amount of memory until the server crashes. The Kubernetes development team has released patched versions that address these security flaws and block potential attackers from exploiting them.

CVE-2019-9512 Ping Flood

In CVE-2019-9512, the attacker sends continual pings to an HTTP/2 peer, causing the peer to build an internal queue of responses. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both, potentially leading to a denial of service.

CVE-2019-9514 Reset Flood

In CVE-2019-9514, the attacker opens a number of streams and sends an invalid request over each stream that should solicit a stream of RST_STREAM frames from the peer. Depending on how the peer queues the RST_STREAM frames, this can consume excess memory, CPU, or both, potentially leading to a denial of service.

The Go team announced versions go1.12.8 and go1.11.13, following which the Kubernetes developer team released patch versions of Kubernetes built using the new versions of Go:

- Kubernetes v1.15.3 - go1.12.9
- Kubernetes v1.14.6 - go1.12.9
- Kubernetes v1.13.10 - go1.11.13

On August 13, Netflix announced the discovery of multiple vulnerabilities that can affect server implementations of the HTTP/2 protocol. The company issued eight CVEs in its security advisory, and two of these also impact Go and all Kubernetes components designed to serve HTTP/2 traffic (including /healthz).

The Azure Kubernetes Service community has recommended that customers upgrade to a patched release soon. "Customers running minor versions lower than the above (1.10, 1.11, 1.12) are also impacted and should also upgrade to one of the releases above to mitigate these CVEs," the team suggests. To know more about this news in detail, read the AKS guidance and updates on GitHub.

Security flaws in Boeing 787 CIS/MS code can be misused by hackers, security researcher says at Black Hat 2019
CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Cybersecurity researcher "Elliot Alderson" talks Trump and Facebook, Google and Huawei, and teaching kids online privacy [Podcast]

Turbo: Google’s new color palette for data visualization addresses shortcomings of the common rainbow palette, 'Jet'

Sugandha Lahoti
23 Aug 2019
4 min read
Google has released a new color palette, named Turbo, to address some of the shortcomings of the currently popular rainbow palette, Jet. These shortcomings include false detail, banding, and color-blindness ambiguity. According to the blog post, Turbo provides better depth perception for data visualization. Google's aim with Turbo is to provide a color map that is uniform and color-blind-accessible, but also optimal for day-to-day tasks where the requirements are not as stringent. The blog post specifies that Turbo is meant to be used in cases where perceptual uniformity is not critical, but one still wants a high-contrast, smooth visualization of the underlying data.

Google researchers created a simple interface to interactively adjust the sRGB curves using a 7-knot cubic spline, while comparing the result on a selection of sample images as well as other well-known color maps. "This approach," the blog post reads, "provides control while keeping the curve C2 continuous. The resulting color map is not 'perceptually linear' in the quantitative sense, but it is more smooth than Jet, without introducing false detail."

Comparison of Turbo with other color maps

Viridis and Inferno are two linear color maps that fix most issues of Jet and are generally recommended when false color is needed. However, some feel that they can be harsh on the eyes, which hampers visibility when used for extended periods. Turbo, on the other hand, mimics the lightness profile of Jet, going from low to high back down to low, without banding. Turbo's lightness slope is generally double that of Viridis, allowing subtle changes to be seen more easily. "This is a valuable feature," the researchers note, "since it greatly enhances detail when color can be used to disambiguate the low and high ends."

[Image: lightness plots generated by converting the sRGB values to CIECAM02-UCS and displaying the lightness value (J) in greyscale; the black line traces the lightness value from the low end of the color map (left) to the high end (right). Source: Google blog]

The lightness plots show the Viridis and Inferno plots to be linear and Jet's plot to be erratic and peaky. Turbo's plot has an asymmetric profile similar to Jet's, with the lows darker than the highs. Although the low-high-low curve increases detail, it comes at the cost of lightness ambiguity, which makes Turbo inappropriate for grayscale printing and for people with achromatopsia (total color blindness), a rare condition. In the case of semantic layers, compared to Jet, Turbo is much smoother and has no "false layers" due to banding. Google also argues that because the attention system prioritizes hue, differences are easier to judge in color than in lightness. Turbo's color map can be used as a diverging colormap as well. The researchers tested Turbo using a color blindness simulator and found that for all conditions except achromatopsia, the map remains distinguishable and smooth.
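Google publishes Turbo as a 256-entry RGB lookup table (with usage instructions for Python and C/C++, mentioned below), so applying it to normalized data amounts to a simple table lookup. Here is a minimal sketch in Kotlin; turboLut stands in for the published table, of which only the first three entries are reproduced here:

```kotlin
// turboLut stands in for Google's published 256-entry Turbo table;
// the real table has 256 RGB triplets with components in [0, 1].
val turboLut = arrayOf(
    floatArrayOf(0.18995f, 0.07176f, 0.23217f),
    floatArrayOf(0.19483f, 0.08339f, 0.26149f),
    floatArrayOf(0.19956f, 0.09498f, 0.29024f) // ...plus 253 more entries
)

// Map a normalized scalar in [0, 1] to an RGB triple by linear
// interpolation between the two nearest table entries.
fun turbo(x: Float): FloatArray {
    val t = x.coerceIn(0f, 1f) * (turboLut.size - 1)
    val i = t.toInt().coerceAtMost(turboLut.size - 2)
    val f = t - i
    return FloatArray(3) { c -> turboLut[i][c] + (turboLut[i + 1][c] - turboLut[i][c]) * f }
}
```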
NASA data viz lead argues Turbo comes with flaws

Joshua Stevens, data visualization and cartography lead at NASA, has posted a detailed Twitter thread pointing out certain flaws in Google's Turbo color map. He points out that "Color palettes should change linearly in lightness. However, Turbo admittedly does not do this. While it avoids the 'peaks' and banding of Jet, Turbo's luminance curve is still humped. Moreover, the slopes on either side are not equal, the curve is still irregular, and it starts out darker than it finishes."

He also contradicts Google's statement that "our attention system prioritizes hue": the paper that Google links to specifies that experimental results showed brightness and saturation levels to be more important than the hue component in attracting attention. He clarifies further, "This is not to say that Turbo is not an improvement over Jet. It is! But there is too much known about visual perception to reimagine another rainbow. The effort is stellar, but IMO Turbo is a crutch that further slows adoption of more sensible palettes."

Google has made available the color map data and usage instructions for Python and C/C++. There is also a polynomial approximation, for cases where a look-up table may not be desirable.

DeOldify: Colorising and restoring B&W images and videos using a NoGAN approach
Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
Matplotlib 3.0 is here with new cyclic colormaps, and convenience methods

Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

Vincy Davis
22 Aug 2019
2 min read
Yesterday, the Qt team introduced a new graphics toolkit called Qt for MCUs for creating fluid user interfaces (UIs) on cost-effective microcontrollers (MCUs). The toolkit enables new and existing users to take advantage of the Qt tools and libraries already used for device creation, allowing companies to provide a better user experience.

Petteri Holländer, the Senior Vice President of Product Management at Qt, said, "With the introduction of Qt for MCUs, customers can now use Qt for almost any software project they're working on, regardless of target – with the added convenience of using just one technology framework and toolset." He further adds, "This means that both existing and new Qt customers can pursue the many business growth opportunities offered by connected devices – across a wide and diverse range of industries."

Qt for MCUs uses the Qt Modeling Language (QML) and Qt's developer and designer tools for constructing fast and customized Qt applications. "With the frontend defined in declarative QML and the business logic implemented in C/C++, the end result is a fluid graphical UI application running on microcontrollers," says the Qt team.

Key benefits offered by Qt for MCUs:

- Existing skill sets can be reused for Qt on microcontrollers
- The same technology can be used in high-end and mass-market devices, yielding lower maintenance costs
- No compromise on graphics performance, hence reduced hardware costs
- Users can upgrade from a legacy solution to the cross-platform graphical toolkit

Check out the Qt for MCUs website for more information.

Qt and LG Electronics partner to make webOS the platform of choice for embedded smart devices
Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more

FaunaDB now offers a “Managed Serverless” service combining Fauna’s serverless database with a managed solution

Vincy Davis
22 Aug 2019
2 min read
Today, Fauna announced the general availability of its FaunaDB managed serverless database service. The new service provides Fauna's small and medium-sized enterprise (SME) customers and partners with flexibility and a customer-dedicated deployment of FaunaDB.

In a statement, Evan Weaver, CEO of Fauna, said, "We are breaking new ground in the industry by offering the first fully managed serverless service, and we now deliver the best of both worlds." He further adds, "Developers wanting a powerful data management component for cutting-edge app development can use FaunaDB, while companies wanting to avoid hands-on cloud configuration and maintenance can choose our managed serverless offering."

FaunaDB managed serverless is a mature data management solution that includes all the features of FaunaDB. It currently supports Amazon Web Services (AWS) and Google Cloud Platform (GCP), with support for Azure coming soon. Its capacity is termed and priced on a monthly or annual basis. The serverless database is backed by Fauna's customer success enterprise support, which gives users access to technical support and customer service.

Operational controls delivered by FaunaDB managed serverless:

- Enterprise-grade support and SLAs
- Change data feed or stream
- Query log auditing
- Operational monitoring integration
- Customer-defined local endpoints
- Customer-defined data locality
- Backup and restore tailored to meet compliance needs
- Isolated environments as needed for development, testing and staging

Nextdoor, a private social network, is already using the FaunaDB managed serverless database service. Nextdoor's co-founder and chief architect, Prakash Janakiraman, says, "We selected FaunaDB for its API flexibility and scalability, security and availability to support global use of our mobile app. We are now using the managed service for its flexible configuration options and capabilities such as multiple development environments, change data feed and query log auditing."

Fauna announces Jepsen results for FaunaDB 2.5.4 and 2.6.0
GraphQL API is now generally available
After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases' offering

Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops

Vincy Davis
22 Aug 2019
3 min read
Yesterday, Puppet announced a vulnerability remediation solution called Puppet Remediate, which aims to reduce the time IT teams take to identify, prioritize and rectify mission-critical vulnerabilities.

Matt Waxman, head of product at Puppet, said, "There is a major gap between sophisticated scanning tools that identify vulnerabilities and the fragmented and manual, error-prone approach of fixing these vulnerabilities." He adds, "Puppet Remediate closes this gap giving IT the insight they need to end the current soul-crushing work associated with vulnerability remediation to ensure they are keeping their organization safe."

Puppet Remediate speeds up remediation with the help of security partners who have access to potentially sensitive vulnerability data. It discovers vulnerabilities based on the type of infrastructure resources affected by them, and it can then take instant action "to remediate vulnerable packages without requiring any agent technology on the vulnerable systems on both Linux and Windows through SSH and WinRM," says Puppet.

Key features in Puppet Remediate

Shared vulnerability data between security and IT Ops

Puppet Remediate unifies infrastructure data and vulnerability data, helping IT Ops get access to vulnerability data in real time, thus reducing delays and eliminating the risks associated with manual handover of data.

Risk-based prioritization

It assists IT teams in prioritizing critical systems and identifying vulnerabilities within the organization's systems based on infrastructure context, giving IT teams more clarity on what to fix first.

Agentless remediation

IT teams can take immediate action to rectify a vulnerability without leaving the application and without needing any agent technology on the vulnerable systems.

Channel partners with established infrastructure and InfoSec practices

Puppet has selected its initial channel partners based on their established infrastructure and InfoSec practices. The channel partners will help Puppet Remediate bridge the gap between security and IT practices in enterprises. Fishtech, a cybersecurity solutions provider, and Bitbone, a Germany-based software company, are the initial channel partners for Puppet Remediate.

Sebastian Scheuring, CEO of Bitbone AG, says, "Puppet Remediate offers real added value with its new functions to our customers. It drastically automates the workflow of vulnerability remediation through taking out the manual, mundane and error-prone steps that are required to remediate vulnerabilities. Continuous scans, remediation tasks and short cycles of update processes significantly increase the security level of IT environments."

Check out the website to know more about Puppet Remediate.

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]
Puppet announces updates in a bid to help organizations manage their "automation footprint"
"This is John. He literally wrote the book on Puppet" – An Interview with John Arundel