Tech News

3709 Articles

Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support is now out

Bhagyashree R
18 Sep 2019
4 min read
Yesterday, the Keras team announced the release of Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support. This is also the last major release of multi-backend Keras. It is backward-compatible with TensorFlow 1.14 and 1.13, as well as Theano and CNTK.

Keras to focus mainly on tf.keras while continuing support for Theano/CNTK

This release comes with a lot of API changes to bring the multi-backend Keras API “in sync” with tf.keras, TensorFlow’s high-level API. However, some TensorFlow 2.0 features are not supported. This is why the team recommends that developers switch their Keras code to tf.keras in TensorFlow 2.0.

Read also: TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Moving to tf.keras gives developers access to features like eager execution, TPU training, and much better integration between low-level TensorFlow and high-level concepts like Layer and Model. Following this release, the team plans to focus mainly on the further development of tf.keras. “Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported,” the team writes. To make it easier for the community to contribute to the development of Keras, the team will be developing tf.keras in its own standalone GitHub repository at keras-team/keras.

François Chollet, the creator of Keras, further explained on Twitter why they are moving away from multi-backend Keras: https://twitter.com/fchollet/status/1174019142774452224

API updates in Keras 2.3.0

Here are some of the API updates in Keras 2.3.0:

- The add_metric method is added to Layer/Model. It is similar to the add_loss method, but for metrics.
- Several class-based losses are introduced, including MeanSquaredError, MeanAbsoluteError, BinaryCrossentropy, Hinge, and more. With this update, losses can be parameterized via constructor arguments.
- Many class-based metrics are added, including Accuracy, MeanSquaredError, Hinge, FalsePositives, BinaryAccuracy, and more. This update enables metrics to be stateful and parameterized via constructor arguments.
- The train_on_batch and test_on_batch methods now have a new argument called reset_metrics. You can set this argument to False to maintain metric state across different batches when writing lower-level training or evaluation loops.
- The model.reset_metrics() method is added to Model to clear metric state at the start of an epoch when writing lower-level training or evaluation loops.
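As an illustration of these updates, here is a minimal sketch of a lower-level training loop using the new class-based losses/metrics and the reset_metrics machinery. It assumes the standalone keras package at version 2.3.0; the model, data, and hyperparameters are arbitrary placeholders.

```python
import numpy as np
import keras
from keras import layers

model = keras.Sequential([
    layers.Dense(8, activation="relu", input_shape=(4,)),
    layers.Dense(1),
])

# Class-based losses and metrics, parameterizable via constructor arguments.
model.compile(
    optimizer="sgd",
    loss=keras.losses.MeanSquaredError(),
    metrics=[keras.metrics.MeanAbsoluteError()],
)

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# reset_metrics=False keeps metric state accumulating across batches...
for start in range(0, 64, 32):
    loss, mae = model.train_on_batch(
        x[start:start + 32], y[start:start + 32], reset_metrics=False
    )

# ...and reset_metrics() clears that state, e.g. at the start of a new epoch.
model.reset_metrics()
print(loss, mae)
```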
Breaking changes in Keras 2.3.0

Along with the API changes, Keras 2.3.0 includes a few breaking changes. In this release, batch_size, write_grads, embeddings_freq, and embeddings_layer_names are deprecated and hence are ignored when used with TensorFlow 2.0. Metrics and losses are now reported under the exact name specified by the user. Also, the default recurrent activation is changed from hard_sigmoid to sigmoid in all RNN layers.

Read also: Build your first Reinforcement learning agent in Keras [Tutorial]

The release started a discussion on Hacker News, where developers appreciated that Keras will mainly focus on the development of tf.keras. A user commented, “Good move. I'd much rather it worked well for one backend then sucked mightily on all of them. Eager mode means that for the first time ever you can _easily_ debug programs using the TensorFlow backend. That will be music to the ears of anyone who's ever tried to debug a complex TF-backed model.” Some also raised the possibility that Google might acquire Keras in the future, considering that TensorFlow has already included Keras in its codebase and its creator, François Chollet, works as an AI researcher at Google.

Check out the official announcement to know what more has landed in Keras 2.3.0.

Other news in Data

- The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases
- InfluxData launches new serverless time series cloud database platform, InfluxDB Cloud 2.0
- Different types of NoSQL databases and when to use them


Is Dark an AWS Lambda challenger?

Fatema Patrawala
01 Aug 2019
4 min read
On Monday, Ellen Chisa, CEO and co-founder of Dark, announced in a Medium post that the project had raised $3.5 million in funding. Dark is a holistic project that includes a programming language (Darklang), an editor, and an infrastructure. The value of this, according to Chisa, is simple: "developers can code without thinking about infrastructure, and have near-instant deployment, which we’re calling deployless."

Along with Chisa, Dark is led by CTO Paul Biggar, who is also the founder of CircleCI, the CI/CD pioneering company. The seed funding is led by Cervin Ventures, in participation with Boldstart, Data Collective, Harrison Metal, Xfactor, Backstage, Nextview, Promus, Correlation, 122 West and Yubari.

What are the key features of the Dark programming language?

One of the most interesting features of Dark is that deployments take a mere 50 milliseconds. Fast. Chisa says that currently the best teams manage deployments in around 5–10 minutes, but many take considerably longer, sometimes hours. Dark was designed to change this. It is purpose-built, Chisa seems to suggest, for continuous delivery. “In Dark, you’re getting the benefit of your editor knowing how the language works. So you get really great autocomplete, and your infrastructure is set up for you as soon as you’ve written any code because we know exactly what is required.”

She says there are three main benefits to Dark’s approach:

- An automated infrastructure
- No need to worry about a deployment pipeline ("As soon as you write any piece of backend code in Dark, it is already hosted for you,” she explains.)
- Tracing capabilities built into your code ("Because you’re using our infrastructure, you have traces available in your editor as soon as you’ve written any code.")

There is undoubtedly a clear sense - whatever users think of the end result - that everything has been engineered with an incredibly clear vision.

Dark has been deployed on SaaS platforms and project tracking tools

Chisa highlights how some customers have already shipped entire products on Dark. She cites Chase Olivieri, who built Altitude, a subscription SaaS providing personalized flight deals, using Dark: "as a bootstrapper, Dark has allowed me to move fast and build Altitude without having to worry about infrastructure, scaling, or server management."

The downside of Dark: programmers have to learn a new language

Speaking to TechCrunch, Chisa admitted there was a downside to Dark - you have to learn a new language. "I think the biggest downside of Dark is definitely that you’re learning a new language, and using a different editor when you might be used to something else, but we think you get a lot more benefit out of having the three parts working together."

Chisa acknowledged that it will require evangelizing the methodology to programmers, who may be used to employing a particular set of tools to write their programs. But according to her, the biggest selling point is that it will remove the complexity around deployment by bringing an integrated level of automation to the process.

Is Darklang basically like AWS Lambda?

The community on Hacker News compares Dark with AWS Lambda, with many pessimistic about its prospects. In particular, they are skeptical about the efficiency gains Chisa describes. "It only sounds maybe 1 step removed from where aws [sic] lambda’s are now," said one user. "You fiddle with the code in the lambda IDE, and submit for deployment.
Is this really that much different?”

Dark’s co-founder, Paul Biggar, responded to this in the thread: “Dark founder here. Yes, completely agree with this. To a certain extent, Dark is aimed at being what lambda/serverless should have been." He continues: "The thing that frustrates me about Lambda (and really all of AWS) is that we're just dealing with a bit of code and bit of data. Even in 1999 when I had just started coding I could write something that runs every 10 minutes. But now it's super challenging. Why is it so hard to take a request, munge it, send it somewhere, and then respond to it. That should be trivial! (and in Dark, it is)"

The team plans to roll out the product publicly in September. To find out more about Dark, read the team's blog posts, including What is Dark, How Dark is a functional language, and How Dark allows deploys in 50ms.

- The V programming language is now open source – is it too good to be true?
- “Why was Rust chosen for Libra?”, US Congressman questions Facebook on Libra security design choices
- Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records

Bhagyashree R
04 Feb 2019
2 min read
On January 29, Microsoft Cloud services including Microsoft Azure, Office 365, and Dynamics 365 suffered a major outage. This resulted in customers experiencing intermittent access to Office 365, and also in the deletion of several database records. This comes just after a major outage that prevented Microsoft 365 users from accessing their emails for an entire day in Europe.

https://twitter.com/AzureSupport/status/1090359445241061376

Users who were already logged into Microsoft services weren’t affected; however, those trying to log into new sessions were not able to do so.

How did this Microsoft Azure outage happen?

According to Microsoft, the preliminary reason behind this outage was a DNS issue with CenturyLink, an external DNS provider. Microsoft Azure’s status page read, “Engineers identified a DNS issue with an external DNS provider”. CenturyLink, in a statement, mentioned that its DNS services experienced disruption due to a software defect, which affected connectivity to a customer’s cloud resources.

Along with authentication issues, this outage also caused the deletion of users’ live data stored in Transparent Data Encryption (TDE) databases in Microsoft Azure. TDE databases encrypt information dynamically and decrypt it when customers access it. As the data is stored in encrypted form, this prevents intruders from accessing the database. For encryption, many Azure users store their own encryption keys in Microsoft’s Key Vault encryption key management system. The deletion was triggered by a script that automatically drops TDE database tables when their corresponding keys can no longer be accessed in the Key Vault.

Microsoft was able to restore the tables from a five-minute snapshot backup, but customers whose transactions were processed within five minutes of the table drop were expected to raise a support ticket asking for a copy of the database.

Read more about Microsoft’s Azure outage in detail on ZDNet.

- Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020
- Outage in the Microsoft 365 and Gmail made users unable to log into their accounts
- Microsoft Office 365 now available on the Mac App Store

Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR)

Melisha Dsouza
15 Feb 2019
3 min read
On 12th February, Epic Games released a preview build of Unreal Engine 4.22, and a major upgrade among numerous other features and fixes is the support for real-time ray tracing and path tracing. The new build extends its preliminary support for Microsoft's DirectX Raytracing (DXR) extensions to the DirectX 12 API. Developers can now try their hands at ray-traced games developed through Unreal Engine 4.

Very few games currently support ray tracing: only Battlefield V (Ray Traced Reflections) and Metro Exodus (Ray Traced Global Illumination) feature ray tracing effects, and both are developed in proprietary engines (Frostbite 3 and the 4A Engine).

Fun Fact: Ray tracing is a much more advanced and lifelike way of rendering light and shadows in a scene. Movies and TV shows use this to create and blend in amazing CG work with real-life scenes, leading to more life-like, interactive and immersive game worlds with more realistic lighting, shadows, and materials.

The patch notes released by the team state that they have added low-level support for ray tracing:

- Added ray tracing low-level support. Implemented a low-level layer on top of UE DirectX 12 that provides support for DXR and allows creating and using ray tracing shaders (ray generation shaders, hit shaders, etc.) to add ray tracing effects.
- Added high-level ray tracing features: rect area lights, soft shadows, reflections, reflected shadows, ambient occlusion, RTGI (ray traced global illumination), translucency, clearcoat, IBL, sky
- Geometry types: triangle meshes, static and skeletal (morph targets & skin cache), Niagara particles support
- Texture LOD
- Denoiser for shadows, reflections, and AO
- Path Tracer: an unbiased, full GI path tracer for making ground truth reference renders inside UE4

According to HardOCP, the feature isn't technically tied to Nvidia RTX, but since Turing cards are the only ones with driver support for DirectX Raytracing at the moment, developers need an RTX 2000 series GPU to test out Unreal's ray tracing. There has been much debate about the RTX offered by NVIDIA in the past. While the concept did sound interesting at the beginning, very few engines adopted the idea - simply because previous generation hardware cannot support all the features of NVIDIA’s RTX. Now, with DXR in the picture, it will be interesting to see the outcome of games developed using ray tracing.

Head over to Unreal Engine’s official post to know more about this news.

- Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]
- Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
- Game Engine Wars: Unity vs Unreal Engine

Introducing Nushell: A Rust-based shell

Savia Lobo
26 Aug 2019
3 min read
On August 23, Jonathan Turner, an Azure SDK developer, introduced a new shell written in Rust, called Nushell or ‘Nu’. This Rust-based shell is inspired by the “classic Unix philosophy of pipelines, the structured data approach of PowerShell, functional programming, systems programming, and more,” Turner writes in his official blog.

The idea of Nushell struck when Turner’s friend Yehuda Katz demonstrated the working of PowerShell. Katz asked Turner if he could join his project: “[What if] we could take the ideas of a structured shell and make it more functional (as opposed to object-oriented)? What if, like PowerShell, it worked on Windows, Linux, and macOS? What if it had great error messages?”

Turner highlights the fact that “everything in Nu is data”; this means that as users try other commands, they realize they are using the same commands to filter, to sort, etc. Rather than having to remember all the parameters to all the commands, they can just use the same verbs to act over data, regardless of where the data came from. Nu also understands structured text files like JSON, TOML, and YAML, and allows users to manipulate their data, and much more. “You get used to using the verbs, and then you can use them on anything. When you’re ready, you can write it back to disk,” Turner writes.

Nu also supports opening and looking at text and binary data. On opening a source file, users can scroll around in a syntax-highlighted file. On opening an XML file, they can look at its data. They can even open a binary file and look at what’s inside.

Turner mentions that there is a lot one might want to explore with Nushell. Hence, the team has released Nu with the ability to extend it with plugins. Nu will look for these plugins in your path and load them on startup.

The Rust language is the major backbone of this project, and Nushell would not have been possible without Rust, Turner exclaims. Nu internally uses async/await and async streams, and employs liberal use of “serde” to manage serializing and deserializing into the common data format and to communicate with plugins.

The Nushell GitHub page reads, “This project has reached a minimum-viable product level of quality. While contributors dogfood it as their daily driver, it may be unstable for some commands. Future releases will work to fill out missing features and improve stability. Its design is also subject to change as it matures.”

The team will further work towards stability, the ability to use Nu as the main shell, the ability to write functions and scripts in Nu, and much more. Users can also read the book on Nu, available in both English and Spanish.

To know more about this news in detail, head over to Jonathan Turner’s official blog post or visit Nushell’s GitHub page.

- Announcing ‘async-std’ beta release, an async port of Rust’s standard library
- Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more
- Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]

Apple releases iOS 12.2 beta 1 for developers with custom screen time scheduling, PWA improvements among other features

Sugandha Lahoti
28 Jan 2019
3 min read
Apple released the next major iOS update, iOS 12.2 beta 1, to developers on January 24, 2019. This update boasts features like custom downtime scheduling, as well as major updates to PWAs.

Custom screen time scheduling

According to a report by 9to5Mac, iOS users will be offered a custom downtime scheduler in the latest iOS update. Users will now be able to adjust the Screen Time feature per day of the week. Although previous iOS versions had a similar downtime scheduler, it was limited to applying the same schedule every day. With iOS 12.2 beta 1, users can either choose to use the same schedule every day, or customize it depending on which day of the week it is. You can use it by navigating to Settings > Screen Time > Downtime.

https://twitter.com/Mr_SamSpencer/status/1089161676983844865

PWA improvements

Apple has made major improvements to Progressive Web Apps by introducing new features for developers. Mike Hartington, developer advocate for the Ionic framework, gives us a glimpse of the new improvements in a tweet:

- New experimental features include Web Auth, Web Animations, WebMeta, pointer events, intersection observer, etc.
- Service workers are removed from the experiments list and are enabled by default.
- External sites are loaded via SFViewController. This means authentication flows can still work without leaving the PWA.
- The current state of any app is maintained, even if the app goes into the background.
- You can view the native app as well as the PWA of the same app in the search.

Users are generally excited about Apple making improvements to PWAs. A comment on Hacker News reads, “This is great for user rights and moves the needle more towards a decentralized and open ecosystem, while maintaining strong security guarantees to the end-user.” However, users also want Apple to consider supporting push notifications for PWAs.

Other UI features

9to5Mac notes the following new UI updates made to Apple iOS 12.2 beta 1:

- New Screen Mirroring icon in Control Center
- New full screen Apple TV Remote Control Center interface
- New “Speakers & TVs” section in Home app settings
- More detailed Apple Wallet UI for Recent Transactions
- Updated details button in Wallet card UI; tap a transaction for more detail
- Card details feature bubbly inset rectangle rows
- Motion & Orientation Data is a new Safari toggle in iOS Settings
- Air Quality Index reading in Maps
- Safari warns about websites not supporting HTTPS
- Fill in a search suggestion without submitting the search
- Keyboard color picker
- Inline Safari music playback
- Album name in full song search results in Music app
- iOS 12.2 will bring Apple News to Canada

Developers can head to Settings > General > Software Updates to start downloading iOS 12.2 beta 1 if they have a previous iOS 12 beta installed. Non-developers can enter the public beta program by visiting beta.apple.com on the device they wish to enroll; currently, there is no public beta release.

- Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks
- Microsoft Office 365 now available on the Mac App Store
- Tim Cook cites supply constraints and economic deceleration as the major reasons for Apple missing its earnings target
TensorFlow 1.11.0 releases

Pravin Dhandre
28 Sep 2018
2 min read
It’s been just a month since the release of TensorFlow 1.10, and the TensorFlow community introduces the newer version 1.11 with a few major additions, lots of bug fixes, and numerous performance improvements.

Major features of TensorFlow 1.11.0:

- Prebuilt binaries built for Nvidia GPU
- Experimental tf.data integration for Keras
- Preview support for eager execution on Google Cloud TPUs
- Added multi-GPU DistributionStrategy support in tf.keras for model distribution
- Added multi-worker DistributionStrategy support in Estimator
- C, C++, and Python functions added for querying kernels
- Added simple Tensor and DataType classes to TensorFlow Lite Java

Bug fixes and other changes:

- Default values for tf.keras RandomUniform, RandomNormal, and TruncatedNormal initializers changed
- Added pruning mode for boosted trees
- Old checkpoints do not get deleted by default
- Total disk space for dumped tensor data limited to 100 GB
- Added experimental IndexedDatasets

Performance improvements:

- Enhanced performance for StringSplitOp & StringSplitV2Op
- Improved performance of regex replace operations
- Toco compilation/execution fixed for Windows
- Added GoogleZoneProvider class for detecting Google Cloud Engine zone
- tensorflow import enabled for tensor.proto.h
- Added documentation clarifying the differences between tf.fill and tf.constant (see the sketch after this list)
- Added selective registration target using the lite proto runtime
- Support for bitcasting to and from uint32 and uint64
- Estimator subclass added and can be created from a SavedModelEstimator
- Added argument leaf index modes
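As a brief aside on the tf.fill/tf.constant documentation item above, here is a minimal sketch of the difference under TensorFlow 1.11's graph-mode semantics (the shapes and values are arbitrary): tf.constant embeds the values in the graph definition at construction time, while tf.fill creates an op that expands a scalar at run time and can therefore take a dynamically computed shape.

```python
import tensorflow as tf

const = tf.constant(7.0, shape=[2, 3])  # values are baked into the graph def
filled = tf.fill([2, 3], 7.0)           # a scalar is expanded by an op at run time

with tf.Session() as sess:
    print(sess.run(const))
    print(sess.run(filled))  # same result, different graph representation
```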
Please see the full release notes for complete details on added features and changes. You can also check the GitHub repository to find various interesting use cases of TensorFlow.

- Top 5 Deep Learning Architectures
- A new Model optimization Toolkit for TensorFlow can make models 3x faster
- Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi

Zabbix 4.2 release packed with modern monitoring system for data collection, processing and visualization

Fatema Patrawala
03 Apr 2019
7 min read
Zabbix Team announced the release of Zabbix 4.2. The latest release is packed with a modern monitoring system for data collection and processing, distributed monitoring, real-time problem and anomaly detection, alerting and escalations, visualization, and more. Let us check out what Zabbix 4.2 has actually brought to the table. Here is a list of the most important functionality included in the new release.

Official support of new platforms

In addition to existing official packages and appliances, Zabbix 4.2 now caters to the following platforms:

- Zabbix package for Raspberry Pi
- Zabbix package for SUSE Enterprise Linux Server
- Zabbix agent for Mac OS X
- Zabbix agent MSI for Windows
- Zabbix Docker images

Built-in support of Prometheus data collection

Zabbix is able to collect data in many different ways (push/pull) from various data sources including JMX, SNMP, WMI, HTTP/HTTPS, REST API, XML SOAP, SSH, Telnet, agents, and scripts, with Prometheus being the latest addition to the bunch. The 4.2 release offers an integration with Prometheus exporters using native support of the PromQL language. Moreover, the use of dependent metrics gives Zabbix the ability to collect massive amounts of Prometheus metrics in a highly efficient way: all the data is fetched using a single HTTP call and then reused for corresponding dependent metrics. Zabbix can also transform Prometheus data into JSON format, which can be used directly for low-level discovery.
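To illustrate the data reshaping described above, here is a toy Python sketch (not Zabbix code; Zabbix performs this natively) that scrapes a Prometheus exporter's text output and converts it into JSON of the kind a low-level discovery rule could consume. The exporter URL is hypothetical, and the parsing ignores timestamps and other exposition-format details for brevity.

```python
import json
import urllib.request

# Hypothetical node_exporter endpoint; replace with a real exporter URL.
raw = urllib.request.urlopen("http://localhost:9100/metrics").read().decode()

metrics = []
for line in raw.splitlines():
    if line.startswith("#") or not line.strip():
        continue  # skip HELP/TYPE comments and blank lines
    name_labels, _, value = line.rpartition(" ")
    metrics.append({"metric": name_labels, "value": value})

# JSON shaped for consumption by a discovery rule
print(json.dumps(metrics, indent=2))
```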
Efficient high-frequency monitoring

We all want to discover problems as fast as possible. With 4.2, data can be collected at high frequency and problems discovered instantly, without keeping an excessive amount of history data in the Zabbix database.

Validation of collected data and error handling

No one wants to collect incorrect data. Zabbix 4.2 addresses that via built-in preprocessing rules that validate data by matching or not matching a regular expression, or by using JSONPath or XMLPath. It is now also possible to extract error messages from collected data, which can be especially handy when errors come from external APIs.

Preprocessing data with JavaScript

In Zabbix 4.2 you can fully harness the power of user-defined scripts written in JavaScript. Support of JavaScript gives absolute freedom of data preprocessing; in fact, you can now replace all external scripts with JavaScript. This enables all sorts of data transformation, aggregation, filtering, arithmetical and logical operations, and much more.

Test preprocessing rules from the UI

As preprocessing becomes much more powerful, it is important to have a tool to verify complex scenarios. Zabbix 4.2 allows testing preprocessing rules straight from the web UI.

Processing millions of metrics per second

Prior to 4.2, all preprocessing was handled solely by the Zabbix server. A combination of proxy-based preprocessing with throttling gives the ability to perform high-frequency monitoring, collecting millions of values per second without overloading the Zabbix server. Proxies perform massive preprocessing of collected data while the server only receives a small fraction of it.

Easy low-level discovery

Low-level discovery (LLD) is a very effective tool for automatic discovery of all sorts of resources (filesystems, processes, applications, services, etc.) and automatic creation of metrics, triggers and graphs related to them. It tremendously helps to save time and effort, allowing the use of just a single template for monitoring devices with different resources. Zabbix 4.2 supports processing based on arbitrary JSON input, which in turn allows direct communication with external APIs and the use of received data for automatic creation of hosts, metrics and triggers. Combined with JavaScript preprocessing, this opens up fantastic opportunities for templates that work with various external data sources such as cloud APIs, application APIs, and data in XML, JSON or any other format.

Support of TimescaleDB

TimescaleDB promises better performance due to more efficient algorithms and performance-oriented data structures. Another significant advantage of TimescaleDB is automatic table partitioning, which improves performance and (combined with Zabbix) delivers fully automatic management of historical data. However, the Zabbix team hasn't performed any serious benchmarking yet, so it is hard to comment on real-life experience of running TimescaleDB in production. At this moment, TimescaleDB is an actively developed and rather young project.

Simplified tag management

Prior to Zabbix 4.2, tags could only be set for individual triggers. Now tag management is much more efficient thanks to template and host tag support. All detected problems get tag information not only from the trigger, but also from the host and corresponding templates.

More flexible auto-registration

Zabbix 4.2 auto-registration options give the ability to filter host names based on a regular expression. This is really useful for creating different auto-registration scenarios for various sets of hosts, and matching by regular expression is especially beneficial with complex naming conventions for devices.

Control host names for auto-discovery

Another improvement is related to naming hosts during auto-discovery. Zabbix 4.2 allows assigning received metric data to a host name and visible name. This is an extremely useful feature that enables a great level of automation for network discovery, especially when using Zabbix or SNMP agents.

Test media type from the Web UI

Zabbix 4.2 allows sending a test message, or checking that a chosen alerting method works as expected, straight from the Zabbix frontend. This is quite useful for checking scripts used for integration with external alerting and helpdesk systems.

Remote monitoring of Zabbix components

Zabbix 4.2 introduces remote monitoring of internal performance and availability metrics of the Zabbix server and proxy. Not only that, it also allows discovering Zabbix-related issues and alerting, even if the components are overloaded or, for example, have a large amount of data stored in the local buffer (in the case of proxies).

Nicely formatted email messages

Zabbix 4.2 comes with support for HTML format in email messages. This means messages are no longer limited to plain text; they can use the full power of HTML and CSS for much nicer and easier-to-read alerts.

Accessing remote services from network maps

A new set of macros is now supported in network maps for the creation of user-defined URLs pointing to external systems. This allows opening external tickets in helpdesk or configuration management systems, or performing any other actions, with just one or two mouse clicks.

LLD rule as a dependent metric

This functionality allows using received values of a master metric for data collection and LLD rules simultaneously.
In the case of data collection from Prometheus exporters, Zabbix will execute the HTTP query only once, and the result of the query will be used immediately for all dependent metrics (LLD rules and metric values).

Animations for maps

Zabbix 4.2 comes with support for animated GIFs, making problems on maps more noticeable.

Extracting data from HTTP headers

Web monitoring brings the ability to extract data from HTTP headers. With this, it is now possible to create multi-step scenarios for web monitoring and for external APIs, using an authentication token received in one of the steps.

Zabbix Sender pushes data to all IP addresses

Zabbix Sender will now send metric data to all IP addresses defined in the “ServerActive” parameter of the Zabbix agent configuration file.

Filter for configuration of triggers

The trigger configuration page got a nicely extended filter for quick and easy selection of triggers by specified criteria.

Showing exact time in graph tooltip

A minor yet very useful improvement: Zabbix will now show the timestamp in the graph tooltip.

Other improvements:

- Non-destructive resizing and reordering of dashboard widgets
- Mass-update for item prototypes
- Support of IPv6 for DNS-related checks (“net.dns” and “net.dns.record”)
- “skip” parameter for the VMware event log check “vmware.eventlog”
- Extended preprocessing error messages to include intermediate step results

Expanded information and the complete list of Zabbix 4.2 developments, improvements and new functionality is available in the Zabbix Manual.

- Encrypting Zabbix Traffic
- Deploying a Zabbix proxy
- Zabbix and I – Almost Heroes

Say hello to FASTER: a new key-value store for large state management by Microsoft

Natasha Mathur
20 Aug 2018
3 min read
The Microsoft research team announced a new key-value store named FASTER at SIGMOD 2018 in June. FASTER offers support for fast and frequent lookups of data. It also helps with updating large volumes of state information, which poses a problem for cloud applications today.

Consider IoT as a scenario: billions of devices report and update state, such as per-device performance counters. This leads to applications underutilizing resources such as storage and networking on the machine. FASTER helps solve this problem, as it makes use of the temporal locality in these applications to control the in-memory footprint of the system.

According to Microsoft, “FASTER is a single-node shared memory key-value store library”. A key-value store is a NoSQL database which uses a simple key/value method for data storage. FASTER consists of two important innovations:

- A cache-friendly, concurrent and latch-free hash index that maintains logical pointers to records in a log. The FASTER hash index is an array of cache-line-sized hash buckets, each with 8-byte entries that hold hash tags and logical pointers to records stored separately.
- A new concurrent hybrid log record allocator. This backs the index and spans fast storage (such as cloud storage and SSD) and main memory.

What makes FASTER different?

Traditional key-value stores make use of log-structured record organizations. FASTER is different: it has a hybrid log that combines log-structuring with read-copy-updates (good for external storage) and in-place updates (good for in-memory performance). The hybrid log head, which lies in storage, uses read-copy-update, whereas the hybrid log tail in main memory uses in-place updates. A read-only region in memory lies between these two regions and provides records another chance to be copied back to the tail. This captures the temporal locality of updates and allows a natural clustering of hot records in memory.
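The hybrid-log idea can be illustrated with a toy sketch. This is a simplification for intuition only, not Microsoft's implementation: records inside an in-memory "mutable window" at the tail are updated in place, while older records are updated by appending a new version at the tail (read-copy-update).

```python
class HybridLog:
    """Toy illustration of FASTER's hybrid-log update policy."""

    def __init__(self, mutable_window=4):
        self.log = []             # append-only list of (key, value) records
        self.index = {}           # key -> logical address (offset into the log)
        self.mutable_window = mutable_window

    def _mutable_from(self):
        # Everything at or past this offset may be updated in place.
        return max(0, len(self.log) - self.mutable_window)

    def upsert(self, key, value):
        addr = self.index.get(key)
        if addr is not None and addr >= self._mutable_from():
            self.log[addr] = (key, value)      # in-place update (hot record)
        else:
            self.index[key] = len(self.log)    # read-copy-update: new version at tail
            self.log.append((key, value))

    def read(self, key):
        return self.log[self.index[key]][1]


log = HybridLog()
log.upsert("cpu", 0.42)   # new record appended at the tail
log.upsert("cpu", 0.55)   # still in the mutable window: updated in place
print(log.read("cpu"))    # 0.55
```

Repeatedly updated keys thus cluster in the in-memory tail, which is the temporal-locality effect the paper describes.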
As a result, FASTER is capable of outperforming even pure in-memory data structures like the Intel TBB hash map. It also performs far better than today’s popular key-value stores and caching systems like RocksDB and Redis, says Microsoft.

Other than that, FASTER also provides support for failure recovery, with a recovery strategy in place that brings the system back to a recent consistent state at low cost. This is different from the recovery mechanism in traditional database systems, as it does not involve blocking or creating a separate “write-ahead log”.

For more information, check out the official research paper.

- Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
- Microsoft Azure’s new governance DApp: An enterprise blockchain without mining
- Microsoft announces the general availability of Azure SQL Data Sync

DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games

Savia Lobo
28 Aug 2019
3 min read
A few days ago, researchers at DeepMind introduced OpenSpiel, a framework for writing games and algorithms for research in general reinforcement learning and search/planning in games. The core API and games are implemented in C++ and exposed to Python. Algorithms and tools are written in both C++ and Python. There is also a port in pure Swift in the swift subdirectory. In their paper, the researchers write, “We hope that OpenSpiel could have a similar effect on general RL in games as the Atari Learning Environment has had on single-agent RL.”

Read also: Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

OpenSpiel allows evaluating games and algorithms on a variety of benchmarks, as it includes implementations of over 20 different game types, including simultaneous-move games, perfect and imperfect information games, gridworld games, an auction game, and several normal-form/matrix games. It includes tools to analyze learning dynamics and other common evaluation metrics. It also supports n-player (single- and multi-agent) zero-sum, cooperative and general-sum games, one-shot and sequential games, and more.

OpenSpiel has been tested on Linux (Debian 10 and Ubuntu 19.04); the researchers have not tested the framework on macOS or Windows. “Since the code uses freely available tools, we do not anticipate any (major) problems compiling and running under other major platforms,” the researchers added.

The purpose of OpenSpiel is to promote “general multiagent reinforcement learning across many different game types, in a similar way as general game-playing but with a heavy emphasis on learning and not in competition form,” the research paper mentions. The framework is “designed to be easy to install and use, easy to understand, easy to extend (‘hackable’), and general/broad.”

Read also: DeepMind’s AI uses reinforcement learning to defeat humans in multiplayer games

Design constraints for OpenSpiel

The two main design criteria that OpenSpiel is based on are:

- Simplicity: OpenSpiel provides easy-to-read, easy-to-use code that can be used to learn from and to build prototypes, rather than fully-optimized code that would require additional assumptions.
- Dependency-free: The researchers say, “dependencies can be problematic for long-term compatibility, maintenance, and ease-of-use.” Hence, the OpenSpiel framework does not introduce dependencies, keeping it portable and easy to install.

Swift OpenSpiel: A port to use Swift for TensorFlow

The swift/ folder contains a port of OpenSpiel to use Swift for TensorFlow. This Swift port explores using a single programming language for the entire OpenSpiel environment, from game implementations to the algorithms and deep learning models. The port is intended for serious research use. As the Swift for TensorFlow platform matures and gains additional capabilities (e.g. distributed training), the kinds of algorithms that are expressible and tractable to train should grow significantly.

While OpenSpiel has some tools for visualization and evaluation, the α-Rank algorithm is also included as a tool. The α-Rank algorithm leverages evolutionary game theory to rank AI agents interacting in multiplayer games. OpenSpiel currently supports using α-Rank for both single-population (symmetric) and multi-population games.

Developers are excited about this release and want to try out this framework.
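As a first taste, a minimal sketch of driving a game to completion through OpenSpiel's Python bindings might look like the following. It is based on the project's documented pyspiel API; the random-rollout policy is just for illustration.

```python
import random
import pyspiel

game = pyspiel.load_game("tic_tac_toe")   # one of the 20+ bundled games
state = game.new_initial_state()

# Play a full game by choosing uniformly random legal actions.
while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)

print(state.returns())  # one return per player in this zero-sum game
```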
https://twitter.com/SMBrocklehurst/status/1166435811581202443
https://twitter.com/sharky6000/status/1166349178412261376

To know more about this news in detail, head over to the research paper. You can also check out the GitHub page.

- Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube
- DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers
- Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
Researchers reveal Light Commands: laser-based audio injection attacks on voice-control devices like Alexa, Siri and Google Assistant

Fatema Patrawala
06 Nov 2019
5 min read
Researchers from the University of Electro-Communications in Tokyo and the University of Michigan released a paper on Monday that raises alarming questions about the security of voice-control devices. In the paper, the researchers present ways in which they were able to manipulate Siri, Alexa, and other devices using “Light Commands”, a vulnerability in MEMS (micro-electro-mechanical systems) microphones.

Light Commands was discovered this year in May. It allows attackers to remotely inject inaudible and invisible commands into voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri using light. This vulnerability can become more dangerous as voice-control devices gain more popularity.

How Light Commands work

Consumers use voice-control devices for many applications, for example to unlock doors, make online purchases, and more with simple voice commands. The research team tested a handful of such devices and found that Light Commands can work on any smart speaker or phone that uses MEMS microphones. These systems contain tiny components that convert audio signals into electrical signals. By shining a laser through a window at microphones inside smart speakers, tablets, or phones, a faraway attacker can remotely send inaudible and potentially invisible commands which are then acted upon by Alexa, Portal, Google Assistant, or Siri.

Many users do not enable voice authentication or passwords to protect devices from unauthorized use. Hence, an attacker can use light-injected voice commands to unlock a victim's smart-lock protected home doors, or even locate, unlock and start various vehicles. The researchers also mention that Light Commands can be executed at long distances. To prove this, they demonstrated the attack in a 110 meter hallway, the longest hallway available during the research, and captured a few videos of the demonstration.

[Image: Experimental setup for exploring attack range at the 110 m long corridor. Source: Light Commands research paper.]

The Light Commands attack can be executed using a simple laser pointer, a laser driver, and a sound amplifier. A telephoto lens can be used to focus the laser for long-range attacks.

Detecting Light Commands attacks

The researchers also describe how one can detect whether a device is under a Light Commands attack. Although command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can attempt to monitor the device's verbal response and light pattern changes, both of which serve as command confirmation. The researchers additionally mention that, so far, they have not seen any cases where the Light Commands attack has been maliciously exploited.

Limitations in executing the attack

Light Commands do have some limitations in execution:

- Lasers must point directly at a specific component within the microphone to transmit audio information.
- Attackers need a direct line of sight and a clear pathway for lasers to travel.
- Most light signals are visible to the naked eye and would expose attackers.
- Voice-control devices respond out loud when activated, which could alert nearby people of foul play.
- Controlling advanced lasers with precision requires a certain degree of experience and equipment, so there is a high barrier to entry when it comes to long-range attacks.
How to mitigate such attacks

The researchers suggest adding an additional layer of authentication in voice assistants to mitigate the attack. They also suggest that manufacturers can attempt to use sensor fusion techniques, such as acquiring audio from multiple microphones. When the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing. Thus, manufacturers can attempt to detect such anomalies and ignore the injected commands.

Another proposed approach is reducing the amount of light reaching the microphone's diaphragm. This is possible by using a barrier that physically blocks straight light beams to eliminate the line of sight to the diaphragm, or by implementing a non-transparent cover on top of the microphone hole to reduce the amount of light hitting the microphone. However, the researchers also acknowledge that such physical barriers are only effective to a certain point, as an attacker can always increase the laser power in an attempt to pass through the barriers and create a new light path.

Users discuss the photoacoustic effect at play

On Hacker News, this research has gained much attention, as users find it interesting and applaud the researchers for the demonstration. Some discuss the price and features of the laser pointers and laser drivers available to hack the voice assistants. Others discuss how such techniques come into play; one of them says,

“I think the photoacoustic effect is at play here. Discovered by Alexander Graham Bell, it has a variety of applications. It can be used to detect trace gases in gas mixtures at the parts-per-trillion level among other things. An optical beam chopped at an audio frequency goes through a gas cell. If it is absorbed, there's a pressure wave at the chopping frequency proportional to the absorption. If not, there isn't. Synchronous detection (e.g. lock in amplifiers) knock out any signal not at the chopping frequency. You can see even tiny signals when there is no background. Hearing aid microphones make excellent and inexpensive detectors so I think that the mics in modern phones would be comparable. Contrast this with standard methods where one passes a light beam through a cell into a detector, looking for a small change in a large signal. https://chem.libretexts.org/Bookshelves/Physical_and_Theoret... Hats off to the Michigan team for this very clever (and unnerving) demonstration.”

- Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
- How Chaos Engineering can help predict and prevent cyber-attacks preemptively
- An unpatched security issue in the Kubernetes API is vulnerable to a “billion laughs” attack
- Intel’s DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
- Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries

Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes

Melisha Dsouza
13 Dec 2018
2 min read
On 11th December, at the KubeCon+CloudNativeCon conference held in Seattle, Grafana Labs announced the release of ‘Loki’, a horizontally-scalable, highly-available, multi-tenant log aggregation system for cloud natives, inspired by Prometheus. Unlike other log aggregation systems, Loki does not index the contents of the logs, but rather a set of labels for each log stream. Storing compressed, unstructured logs and only indexing metadata makes it cost-effective as well as easy to operate. Users can seamlessly switch between metrics and logs using the same labels that they are already using with Prometheus. Loki can store Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.

Features of Loki

- Loki is optimized to search, visualize and explore a user's logs natively in Grafana. It is optimized for Grafana, Prometheus and Kubernetes.
- Grafana 6.0 provides a native Loki data source and a new Explore feature that makes logging a first-class citizen in Grafana.
- Users can streamline incident response and switch between metrics and logs using the same Kubernetes labels that they are already using with Prometheus.
- Loki is open source alpha software with a static binary and no dependencies.
- Loki can be used outside of Kubernetes, but the team says their initial use case is “very much optimized for Kubernetes”.
- With promtail, all Kubernetes labels for a user's logs are automatically set up the same way as in Prometheus. It is possible to manually label log streams, and the team will be exploring integrations to make Loki “play well with the wider ecosystem”.

Twitter is buzzing with positive comments for Grafana. Users are pretty excited about this release, complimenting Loki's cost-effectiveness and ease of use.

https://twitter.com/pracucci/status/1072750265982509057
https://twitter.com/AnkitTimbadia/status/1072701472737902592

Head over to Grafana Labs' official blog to know more about this release. Alternatively, you can check out GitHub for a demo of three ways to try out Loki: using the Grafana free hosted demo, running it locally with Docker, or building from source.

- Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
- Uber open sources its large scale metrics platform, M3 for Prometheus
- DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps

Say hello to Sequoia: a new Rust based OpenPGP library to secure your apps

Natasha Mathur
02 Aug 2018
3 min read
Former GnuPG developers have recently begun working on Sequoia, a new OpenPGP implementation in Rust. OpenPGP is an open, free version of the Pretty Good Privacy (PGP) standard. It defines standard formats for email and other message encryption and is based on the original PGP software.

Sequoia is an OpenPGP library that provides easy-to-use cryptography for applications. It helps you protect the privacy of your users and is easy to incorporate into your application, no matter what language you use. It also helps you manage your keys better, as its keystore stores keys and updates them so that new keys or revocations are discovered in a timely manner. It is currently in development, led by three former GnuPG developers: Neal H. Walfield, Justus Winter, and Kai. The project is funded by the p≡p foundation, where each of the aforementioned developers has been working since fall 2017.

What motivated the developers towards this new implementation was their experience with GnuPG, a free software replacement for Symantec's PGP cryptographic software. PGP, or Pretty Good Privacy, is a program used to encrypt and decrypt texts, emails, files, directories, etc. to increase the security of data communications. According to Neal H. Walfield, GnuPG posed several problems, as “it is hard to modify due to lack of unit tests and tight component coupling”. He also mentioned other reasons, such as many developers being unsatisfied with GnuPG’s API, and GnuPG not being usable on iOS due to the GPL.

The developers also have major social and technical goals in mind for Sequoia. “The social goals are to create an inclusive environment in our project; it should be free software and community-centered,” says Neal. A video of Neal introducing the new OpenPGP library, Sequoia, accompanies the announcement.

On the technical side, the team is taking a different approach: they are putting the library API first, and a command-line interface tool second. Neal says that the team “encourages” users to use the library. They also aim to create an API which is friendly, easy to use, and supports all modern platforms such as Android, iOS, and Mac.

Let's have a look at how Sequoia is built. Starting at the bottom level, there is the OpenPGP library, which provides the low-level interface. Two services are built on top of this library: the Sequoia network service, which helps with accessing keyservers, and the Sequoia store, which is used for storing and accessing public keys along with private keys.

[Figure: Architecture of Sequoia]

On top of these three sits the Sequoia library, a high-level API. A Rust application can use this library directly; other applications can access it via an FFI (foreign function interface). Apart from this, the vision for Sequoia is “a nice OpenPGP implementation -- with focus on user development, and its community,” says Neal.

For more information on Sequoia, check out the official Sequoia documentation.

- Will Rust Replace C++?
- Mozilla is building a bridge between Rust and JavaScript
- Perform Advanced Programming with Rust
Python 3.9 alpha 1 is now ready for testing

Vincy Davis
22 Nov 2019
3 min read
Three days ago, the team behind Python announced the release of Python 3.9.0a1, the first of six planned alpha releases of Python 3.9. The final stable version of Python 3.9 is slated for release in May 2020. An alpha release means that developers can start testing the new features and check for bug fixes, but are not recommended to use it in production. Last month, the previous stable version, Python 3.8, was released with features like the walrus operator, positional-only parameters, and support for Vectorcall.

Read More: Core Python team confirms sunsetting Python 2 on January 1, 2020

Let’s look at some of the raw features that can be expected in the upcoming Python 3.9 version.

Some improvements introduced in Python 3.9.0a1

Language changes

- The __import__() function, which is invoked by the import statement, will now raise ImportError instead of ValueError. In previous versions, the latter occurred when a relative import went past its top-level package.
- Starting from Python 3.9.0a1, the absolute path of the script filename will be specified on the command line: the __file__ attribute of the __main__ module, sys.argv[0], and sys.path[0] will become absolute paths rather than relative paths. The traceback will also display the absolute path for __main__ module frames in this case.
- The encoding and errors arguments in the debug build and development mode will now be checked in string encoding and decoding operations.

Improved modules

- ast: An indent option is added to dump() to produce multi-line indented output (see the sketch after this list).
- asyncio: It can now use coroutines, which are a generalized form of subroutines. Subroutines are entered and exited at only two different points, while coroutines can be entered, exited, and resumed at many points. Moreover, asyncio.run() is updated to use the new coroutine.
- curses: New functions like curses.get_escdelay(), curses.set_escdelay(), curses.get_tabsize(), and curses.set_tabsize(), and the constants F_OFD_GETLK, F_OFD_SETLK and F_OFD_SETLKW, are included in Python 3.9.0a1.
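For instance, the new ast indent option can be tried as below (a quick sketch, assuming a Python 3.9.0a1 interpreter; the parsed source is arbitrary):

```python
import ast

tree = ast.parse("answer = 21 * 2")
# indent=4 pretty-prints the tree over multiple lines
# instead of one long single-line string.
print(ast.dump(tree, indent=4))
```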
A few Python users have already started testing the Python 3.9.0a1 release.

https://twitter.com/codewithanthony/status/1197559895744110592

The next alpha release of Python 3.9 is scheduled for 16th December 2019. To know more about Python 3.9.0a1, check out the official documentation.

- Introducing Spleeter, a Tensorflow based python library that extracts voice and sound from any music track
- Severity issues raised for Python 2 Debian packages for not supporting Python 3
- Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python
- Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency
- PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Node.js v10.12.0 (Current) released

Sugandha Lahoti
11 Oct 2018
4 min read
Node.js v10.12.0 was released yesterday, with notable changes to assert, cli, crypto, fs, and more. However, the Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Hence, throughout the v10.12.0 documentation there are indications of each section's stability. Let’s look at the notable changes that are stable.

Assert module

The assert module provides a simple set of assertion tests that can be used to test invariants. It comprises a strict mode and a legacy mode, although it is recommended to use only strict mode. In Node.js v10.12.0, the diff output is improved by sorting object properties when inspecting the values that are compared with each other.

Changes to cli

The command-line interface in Node.js v10.12.0 has two improvements:

- The options parser now normalizes _ to - in all multi-word command-line flags, e.g. --no_warnings has the same effect as --no-warnings.
- Bash completion is included for the node binary. Users can generate a bash completion script by running node --completion-bash. The output can be saved to a file, which can be sourced to enable completion.

Crypto module

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. In Node.js v10.12.0, crypto adds support for PEM-level encryption. It also adds an API for asymmetric key pair generation: the new methods crypto.generateKeyPair and crypto.generateKeyPairSync can be used to generate public and private key pairs. The API supports RSA, DSA and EC and a variety of key encodings (both PEM and DER).

Improvements to the file system

The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions. Node.js v10.12.0 adds a recursive option to fs.mkdir and fs.mkdirSync. When this option is set to true, non-existing parent folders are automatically created.

Updates to HTTP/2

The http2 module provides an implementation of the HTTP/2 protocol. The new Node.js version adds support for a 'ping' event on Http2Session that is emitted whenever a non-ack PING is received. Support is also added for the ORIGIN frame. In addition, nghttp2 is updated to v1.34.0, which adds RFC 8441 extended CONNECT protocol support to allow the use of WebSockets over HTTP/2.

Changes in module

In the Node.js module system, each file is treated as a separate module. The module API has also been updated in v10.12.0: it adds module.createRequireFromPath(filename). This new method can be used to create a custom require function that resolves modules relative to the filename path.

Improvements to process

The process object is a global that provides information about, and control over, the current Node.js process. It adds a 'multipleResolves' process event that is emitted whenever a Promise is attempted to be resolved multiple times.

Updates to url

Node.js v10.12.0 adds url.fileURLToPath(url) and url.pathToFileURL(path). These methods can be used to correctly convert between file: URLs and absolute paths.

Changes in utilities

The util module is primarily designed to support the needs of Node.js' own internal APIs. The changes in Node.js v10.12.0 include:

- A new sorted option is added to util.inspect(). If set to true, all properties of an object and all Set and Map entries are sorted in the returned string. If set to a function, it is used as a compare function.
- The util.inspect.custom symbol is now defined in the global symbol registry as Symbol.for('nodejs.util.inspect.custom').
- Support for BigInt numbers in util.format() is also added.

Improvements in the V8 API

The V8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. A number of V8 C++ APIs in v10.12.0 have been marked as deprecated, since they have been removed in the upstream repository. Replacement APIs are added where necessary.

Changes in Windows

The Windows MSI installer now provides an option to automatically install the tools required to build native modules.

You can find the full list of changes on the Node.js blog.

- Node.js and JS Foundation announce intent to merge; developers have mixed feelings
- Node.js announces security updates for all their active release lines for August 2018
- Deploying Node.js apps on Google App Engine is now easy