
Tech News

3709 Articles

NordVPN reveals it was affected by a data breach in 2018

Savia Lobo
22 Oct 2019
3 min read
NordVPN, a popular virtual private network provider, has revealed that it suffered a data breach in 2018. The breach came to light a few months ago when an expired internal security key was exposed, allowing unauthorized access from outside the company. NordVPN did not inform users at the time because it wanted to be "100 percent sure that each component within our infrastructure is secure."

The breach was traced back to March 2018, when one of the data centers in Finland from which NordVPN rents servers showed signs of unauthorized access. The attacker gained access to the server by exploiting an insecure remote management system left in place by the provider. In a press release, NordVPN explained that "only 1 of more than 3000 servers we had at the time was affected," and said it immediately terminated its contract with the data center provider after learning of the hack. Although the company had intrusion detection systems installed to catch data breaches, it could not anticipate a remote management system left behind by the data center provider; NordVPN says it was unaware that such a system existed.

The company also said, "We are taking all the necessary means to enhance our security. We have undergone an application security audit, are working on a second no-logs audit right now, and are preparing a bug bounty program." It added, "We will give our all to maximize the security of every aspect of our service, and next year we will launch an independent external audit ... of our infrastructure to make sure we did not miss anything else."

NordVPN said that the attacker did not gain access to activity logs, user credentials, or any other sensitive information. The company maintains what it says is a strict "zero logs" policy. "We don’t track, collect, or share your private data," it says on its website.

In a statement to TechCrunch, NordVPN spokesperson Laura Tyrell said, “The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn’t have been intercepted either.” She added, “On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated man-in-the-middle attack to intercept a single connection that tried to access NordVPN.”

Based on records posted online, other VPN providers such as TorGuard and VikingVPN may also have been compromised. A spokesperson for TorGuard told TechCrunch that a “single server” was compromised in 2017 but denied that any VPN traffic was accessed.

Users are furious that NordVPN did not inform them in time:
https://twitter.com/figalmighty/status/1186566775330066432
https://twitter.com/bleepsec/status/1186557192549404672

To know more about this news in detail, you can read NordVPN’s complete press release.

DoorDash data breach leaks personal details of 4.9 million customers, workers, and merchants
StockX confirms a data breach impacting 6.8 million customers
Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator


Introducing Firefox Sync centered around user privacy

Melisha Dsouza
14 Nov 2018
4 min read
“Ensure the Internet is a global public resource… where individuals can shape their own experience and are empowered, safe and independent.” -Team Mozilla

Yesterday, Mozilla explained the idea behind Firefox Sync and how the tool was built with user privacy in mind. Because sharing data with a provider is the norm, the team felt it was important to highlight the privacy aspects of Firefox Sync.

What is Firefox Sync?

Firefox Sync lets users share their bookmarks, browsing history, passwords, and other browser data between different devices, and send tabs from one device to another. Users can sign in to Firefox with the same account across multiple devices and access the same sessions when swapping devices. With one easy sign-in, Sync gives users access to their bookmarks, tabs, and passwords everywhere: a task started on a laptop in the morning can be picked up on a phone later in the day.

Why is Firefox Sync secure?

By default, Firefox Sync protects all synced data so that Mozilla cannot read it. When a user signs up for Sync with a strong passphrase, their data is protected both from attackers and from Mozilla. Mozilla encrypts all of a user's synced data so that it is entirely unreadable without the key used to encrypt it. Ideally, a service provider should never receive a user's key.

Traditionally, a username and passphrase are sent to the server, where they are hashed and compared with a stored hash; on a match, the server sends the user their data. Firefox Sync works differently: a user never sends over their passphrase. Instead, Mozilla transforms the passphrase on the user's computer into two different, unrelated values, such that the two are independent of each other. An authentication token derived from the passphrase is sent to the server and serves as the password equivalent, while the encryption key derived from the passphrase never leaves the user's computer.

In more technical terms, 1000 rounds of PBKDF2 are used to derive the authentication token from the passphrase. On the server side, this token is hashed with scrypt so that the database of authentication tokens is even more difficult to crack. The passphrase is also derived into an encryption key using the same 1000 rounds of PBKDF2, domain-separated from the authentication token by using HKDF with separate info values. This key is used to unwrap an encryption key (obtained during setup, and which Mozilla never sees unwrapped), and that unwrapped key protects the user's data: it encrypts the data using AES-256 in CBC mode, protected with an HMAC.

Source: Mozilla Hacks

How are people reacting to this feature?

Sync has been well received. A user on Hacker News commented that this feature makes “Firefox important”. Sync has also been compared with Google Chrome's sync feature, which collects users' complete browsing histories; one user commented that Mozilla's privacy tools will make them "chose over chrome" [sic]. And since this approach is relatively simple to implement, users are also exploring the possibility of "implement[ing] a similar encryption system as a proof of concept". In a time when respecting user privacy is so unusual, Mozilla has caught our attention with its "user privacy-centric" approach.

You can head over to Mozilla's blog to learn about other approaches to building a browser sync feature and how Sync protects user data.
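As an illustration of the derivation scheme described above, here is a minimal sketch in pure Python: one PBKDF2 stretch of the passphrase, then two domain-separated outputs via an HKDF-style expand. The salt, info strings, and output lengths are illustrative assumptions, not Mozilla's actual parameters.

```python
import hashlib
import hmac

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-Expand step (RFC 5869 style) using HMAC-SHA256."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_keys(passphrase: str, salt: bytes):
    """Stretch the passphrase with 1000 rounds of PBKDF2, then derive two
    independent values: an auth token (sent to the server) and an
    encryption key (which never leaves the client)."""
    stretched = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 1000)
    auth_token = hkdf_expand(stretched, b"example.org/auth")        # hypothetical info value
    encryption_key = hkdf_expand(stretched, b"example.org/encrypt")  # hypothetical info value
    return auth_token, encryption_key

auth_token, enc_key = derive_keys("correct horse battery staple", b"client-salt")
# The two outputs are unrelated: the server learns the auth token but
# can recover nothing about the encryption key from it.
```

The separate info values are what the article calls domain separation: the same stretched secret yields two values that cannot be derived from one another.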
Mozilla pledges to match donations to Tor crowdfunding campaign up to $500,000
Mozilla shares how AV1, the new open source royalty-free video codec, works
Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs


Delphi Community Edition announced!

Pavan Ramchandani
27 Jul 2018
2 min read
Embarcadero has announced Community Editions of two of its premium products: Delphi, a cross-platform IDE, and C++Builder, a powerful C++ IDE. With the Community Edition, developers can use both products free of charge and access most of the features of the Professional Edition. Beyond individual developers, free access also extends to organizations with less than $5,000 in annual revenue.

The announcement has been welcomed in the community, given what it offers developers, startups, and freelancers who have struggled to enter the Delphi ecosystem for years. Delphi has been unpopular among native application developers, perhaps because of its entry-point pricing, and this move eases that barrier, at least for developers currently using other IDEs.

Delphi's Community Edition is said to provide access to all the features and components of the Professional Edition, permitting developers to build open source projects at no cost. Apart from the Community Edition, free trial versions of the Pro, Enterprise, and Architect editions of Delphi and C++Builder are available. Embarcadero did not mention RAD Studio, the third premium tool in its lineup alongside Delphi and C++Builder; RAD Studio is a platform to write, compile, and deploy cross-platform applications.

You can download the Community Edition of Delphi and C++Builder from Embarcadero's community website. If you want to try the other offerings, you can opt for a 30-day trial.

Delphi: memory management techniques for parallel programming
Implementing C++ libraries in Delphi for HPC [Tutorial]
Delphi Cookbook


Corona Labs open sources Corona, its free and cross-platform 2D game engine

Natasha Mathur
03 Jan 2019
3 min read
Corona Labs announced yesterday that it is making Corona, its free and cross-platform 2D game engine, available as open source under the GPLv3 license and commercial licenses. The license for builds and releases remains unchanged; the change applies only to the source code of the engine.

Corona is a popular engine for creating 2D games and apps for mobile, desktop, TV platforms, and the web. It is based on the Lua language and makes use of over 1,000 built-in APIs and plugins, as well as Corona Native extensions (C/C++/Obj-C/Java).

According to Vlad Sherban, product manager for Corona Labs, the Corona team had been discussing making Corona open source ever since it was acquired by Appodeal back in 2017. “We believe that this move will bring transparency to the development process, and will allow users to contribute features or bug fixes to make the project better for everyone,” said Sherban.

The team also says that transitioning to open source will help it respond quickly to market shifts and changes, and ensure that Corona stays relevant for all mobile app developers. Moreover, open sourcing brings more visibility to the development process, letting users see what the engine team is working on and where the project is going. It also offers extra benefits for businesses, which can acquire a commercial license for the source code and customize the engine for certain commercial projects.

Additionally, Corona Labs won't be collecting any statistics from apps built with daily build 2018.3454 or later. As a closed source product, Corona used to collect basic app usage stats such as the number of sessions and daily average users; with Corona now open source, there is no need to collect this data.

“Powered by the new open source model and supported by the development of new features and bug fixes will make Corona more community driven — but not without our help and guidance — going open source will provide confidence in the future of the engine and an opportunity to grow community involvement in engine development,” said Sherban.

NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”
Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices


Netflix open sources Polynote, an IDE-like polyglot notebook with Scala support, Apache Spark integration, multi-language interoperability, and more

Vincy Davis
31 Oct 2019
4 min read
Last week, Netflix announced the open source launch of Polynote, a polyglot notebook. It comes with full-scale Scala support, Apache Spark integration, and multi-language interoperability across Scala, Python, and SQL, and it provides IDE-like features such as interactive autocomplete and a rich text editor with LaTeX support. Polynote offers seamless integration between Netflix's Scala-based, JVM-backed ML platform and Python's machine learning and visualization libraries. It is currently used by Netflix's personalization and recommendation teams and is being integrated with the rest of the Netflix research platform.

The Netflix team says, “Polynote originated from a frustration with the shortcomings of existing notebook tools, especially with respect to their support of Scala.” Also, “we found that our users were also frustrated with the code editing experience within notebooks, especially those accustomed to using IntelliJ IDEA or Eclipse.”

Key features supported by Polynote

Reproducibility
A traditional notebook generally relies on a read–eval–print loop (REPL) to provide its interactive environment. According to Netflix, the expression-and-result model of a REPL is quite rigid, so Netflix built Polynote's code interpretation from scratch instead of relying on a REPL. Polynote keeps track of the variables defined in each cell and constructs the input state for a given cell from the cells that have run above it. By making a cell's position important to its execution semantics, Polynote encourages users to read a notebook from top to bottom; this improves reproducibility by increasing the chances that the notebook can be re-run sequentially.

Editing improvements
Polynote provides editing enhancements such as:
- Code editing integrated with the Monaco editor for interactive auto-complete.
- Inline error highlighting, helping users fix problems quickly.
- A rich text editor for text cells that lets users easily insert LaTeX equations.

Visibility
One of Polynote's major guiding principles is visibility: a live view of what the kernel is doing at any given time, without requiring logs. A single glance at the user interface conveys a lot of information:
- The notebook view and task list display the currently running cell and the queue of cells still to run.
- The exact statement currently running is highlighted in colour.
- Job- and stage-level Spark progress information is shown in the task list.
- The kernel status area reports the execution status of the kernel.

Polyglot
Polynote currently supports Scala, Python, and SQL cell types and lets users seamlessly move from one language to another within the same notebook. When a cell runs, the kernel hands the typed input values to the cell's language interpreter, and the interpreter returns the resulting typed output values to the kernel. This lets every cell in a Polynote notebook run with the same context and the same shared state, regardless of language.

Dependency and configuration management
To ease reproducibility, Polynote keeps configuration and dependency setup within the notebook itself and provides a user-friendly Configuration section where users can set dependencies for each notebook. Polynote fetches the dependencies locally and loads Scala dependencies into an isolated ClassLoader, reducing the chance of Polynote's own classes conflicting with the Spark libraries. When used in Spark mode, Polynote creates a Spark session for the notebook, with the Python and Scala dependencies automatically added to it.

Data visualization
One of the most important use cases of a notebook is exploring and visualizing data. Polynote integrates with two open source visualization libraries, Vega and Matplotlib, and has native support for data exploration, including a data schema view, a table inspector, and a plot constructor. This helps users learn about their data without cluttering their notebooks.

Users have appreciated Netflix's decision to open source Polynote and have praised its features:
https://twitter.com/SpirosMargaris/status/1187164558382845952
https://twitter.com/suzatweet/status/1187531789763399682
https://twitter.com/julianharris/status/1188013908587626497

Visit the Netflix Techblog for more information on Polynote. You can also check out the Polynote website for more details.

Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
Netflix adopts Spring Boot as its core Java framework
Netflix’s culture is too transparent to be functional, reports the WSJ
Linux foundation introduces strict telemetry data collection and usage policy for all its projects
Fedora 31 releases with performance improvements, dropping support for 32 bit and Docker package
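The reproducibility model described above, in which each cell's input state is constructed from the cells that ran before it, can be sketched in a few lines of Python. This is a simplified illustration, not Polynote's implementation; the cell contents and the state-threading scheme are made up for the example.

```python
# Each "cell" sees only the state produced by the cells above it,
# so running the notebook top to bottom is always reproducible.

def run_notebook(cells):
    """Execute cells in order; cell i sees only names defined by cells 0..i-1."""
    states = []            # states[i] = variables visible after cell i
    current = {}
    for source in cells:
        scope = dict(current)       # input state built from the cells above
        exec(source, {}, scope)     # run the cell against that state
        current = scope
        states.append(dict(scope))
    return states

notebook = [
    "x = 2",
    "y = x * 3",
    "z = x + y",
]
states = run_notebook(notebook)
print(states[-1])   # {'x': 2, 'y': 6, 'z': 8}
```

Because a cell's input is copied from the accumulated state rather than shared mutably, re-running the list from the top always reproduces the same per-cell states.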


Baidu announces ClariNet, a neural network for text-to-speech synthesis

Sugandha Lahoti
23 Jul 2018
2 min read
Text-to-speech synthesis has been a booming research area, with Google, Facebook, DeepMind, and other tech giants showcasing their research and trying to build better TTS models. Now Baidu has stolen the show with ClariNet, the first fully end-to-end TTS model, which converts text directly to a speech waveform in a single neural network. Classical TTS pipelines, such as DeepMind's WaveNet, usually have separate text-to-spectrogram and waveform-synthesis models, which may result in suboptimal performance. ClariNet combines the two into one fully convolutional neural network, and Baidu claims its text-to-wave model significantly outperforms the previous separate TTS models.

Baidu's ClariNet consists of four components:
- Encoder, which encodes textual features into an internal hidden representation.
- Decoder, which decodes the encoder representation into a log-mel spectrogram in an autoregressive manner.
- Bridge-net, an intermediate processing block that processes the hidden representation from the decoder and predicts a log-linear spectrogram. It also upsamples the hidden representation from frame level to sample level.
- Vocoder, a Gaussian autoregressive WaveNet that synthesizes the waveform, conditioned on the upsampled hidden representation from the bridge-net.

Baidu has also proposed a new parallel wave generation method based on the Gaussian inverse autoregressive flow (IAF). This mechanism generates all samples of an audio waveform in parallel, speeding up waveform synthesis dramatically compared with traditional autoregressive methods. To train the parallel waveform synthesizer, Baidu uses a Gaussian autoregressive WaveNet as the teacher net and the Gaussian IAF as the student net. The teacher WaveNet is trained with maximum likelihood estimation (MLE), and the Gaussian IAF is then distilled from it by minimizing the KL divergence between their peaked output distributions, which stabilizes the training process.

For more details on ClariNet, you can check out Baidu's paper and audio samples.

How Deep Neural Networks can improve Speech Recognition and generation
AI learns to talk naturally with Google’s Tacotron 2
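Distilling one Gaussian output distribution into another is attractive partly because the KL divergence between univariate Gaussians has a closed form, so the loss can be computed exactly rather than estimated by sampling. The sketch below computes KL(p‖q); the particular means and standard deviations are illustrative values, not numbers from the paper.

```python
import math

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(p || q) for univariate Gaussians:
    log(sigma_q/sigma_p) + (sigma_p^2 + (mu_p - mu_q)^2) / (2*sigma_q^2) - 1/2
    """
    return (math.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
            - 0.5)

# Identical teacher and student distributions: divergence is zero.
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))   # 0.0

# A student slightly off the teacher's mean incurs a small positive penalty.
print(kl_gaussian(0.0, 1.0, 0.1, 1.0))   # ~0.005
```

In a distillation setup, a term like this (summed per sample, between the teacher's and student's predicted output Gaussians) would be minimized with respect to the student's parameters.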

React Native 0.61 introduces Fast Refresh for reliable hot reloading

Bhagyashree R
25 Sep 2019
2 min read
Last week, the React Native team announced the release of React Native 0.61. This release comes with an overhauled reloading feature called Fast Refresh, a new hook named useWindowDimensions, and more.
https://twitter.com/dan_abramov/status/1176597851822010375

Key updates in React Native 0.61

Fast Refresh for reliable hot reloading
In December last year, the React Native team asked developers what they dislike about React Native. Developers listed the problems they face when creating a React Native application, among them clunky debugging. Hot reloading refreshes updated files without losing app state, but previously it did not work reliably with function components, often failed to update the screen, and wasn't resilient to typos and mistakes, which was one of the major pain points. To address this, React Native 0.61 introduces Fast Refresh, a combination of live reloading and hot reloading. Dan Abramov, a core React developer, wrote in the announcement, “In React Native 0.61, we’re unifying the existing “live reloading” (reload on save) and “hot reloading” features into a single new feature called “Fast Refresh”.” Fast Refresh fully supports function components and hooks, recovers gracefully after typos and mistakes, and does not perform invasive code transformations. It is enabled by default, but you can turn it off in the Dev Menu.

The useWindowDimensions hook
React Native 0.61 comes with a new hook called useWindowDimensions, which can be used as an alternative to the Dimensions API in most cases. It automatically provides, and subscribes to, window dimension updates.
Read also: React Conf 2018 highlights: Hooks, Concurrent React, and more

Improved CocoaPods compatibility
In React Native 0.60, CocoaPods was integrated by default, which ended up breaking builds that used the use_frameworks! attribute. In React Native 0.61, this issue is fixed by updates to the podspec, which describes a version of a Pod library.
Read also: React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]

Check out the official announcement to know more about React Native 0.61.

5 pitfalls of React Hooks you should avoid – Kent C. Dodds
#Reactgate forces React leaders to confront community’s toxic culture head on
Ionic React RC is now out!
React Native VS Xamarin: Which is the better cross-platform mobile development framework?
React Native community announce March updates, post sharing the roadmap for Q4


NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption

Vincy Davis
31 Jul 2019
5 min read
Last week, the NumPy team released NumPy 1.17.0. This version brings many new features, improvements, and changes that increase NumPy's performance. The highlights of this release include a new extensible numpy.random module, new radix sort and timsort sorting methods, and a pocketfft-based FFT implementation for more accurate transforms and better handling of datasets of prime length. Overriding of NumPy functions is also now possible by default.

NumPy 1.17.0 supports Python versions 3.5 through 3.7. Python 3.8b2 will work with the release source packages but may not find support in future releases. The Python team had previously announced that Python 2.7 maintenance will stop on January 1, 2020; NumPy 1.17.0 officially dropping Python 2.7 is a step toward Python 3 adoption. Developers who want to port their Python 2 code to Python 3 can check out the official porting guide released by the Python team.
Read More: NumPy drops Python 2 support. Now you need Python 3.5 or later.

What's new in NumPy 1.17.0?

New extensible numpy.random module with selectable random number generators
NumPy 1.17.0 has a new extensible numpy.random module. It includes four selectable random number generators and improved seeding designed for use in parallel processes. PCG64 is the new default generator, while MT19937 is retained for backwards compatibility.

Timsort and radix sort have replaced mergesort for stable sorting
Both radix sort and timsort have been implemented and are now used in place of mergesort. To maintain backward compatibility, the sorting kind options 'stable' and 'mergesort' have been made aliases of each other, with the actual sort implementation chosen by array type. Radix sort is used for small integer types of 16 bits or less, and timsort for all remaining types.
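As a rough illustration of why counting-based sorts win on small integer types, here is a minimal least-significant-digit (LSD) radix sort over 16-bit unsigned values in pure Python. This is a sketch of the general technique, not NumPy's actual implementation.

```python
def radix_sort_u16(values):
    """LSD radix sort for unsigned 16-bit integers: two stable passes
    over 8-bit digits, O(n) work per pass regardless of input order."""
    for shift in (0, 8):                    # low byte first, then high byte
        buckets = [[] for _ in range(256)]
        for v in values:
            buckets[(v >> shift) & 0xFF].append(v)   # stable: preserves order
        values = [v for bucket in buckets for v in bucket]
    return values

data = [513, 2, 65535, 256, 2, 0]
print(radix_sort_u16(data))   # [0, 2, 2, 256, 513, 65535]
```

Because the key space is tiny (two 8-bit digit passes cover the whole 16-bit range), the sort avoids comparisons entirely, which is why the small-integer case gets its own algorithm.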
empty_like and related functions now accept a shape argument
Functions like empty_like, full_like, ones_like, and zeros_like now accept a shape keyword argument, which can be used to create a new array from a prototype while overriding its shape. These functions become extremely useful when combined with the __array_function__ protocol, as they allow the creation of new arbitrary-shape arrays from NumPy-like libraries.

User-defined LAPACK detection order
numpy.distutils now uses an environment variable (a comma-separated, case-insensitive list) to determine the detection order for LAPACK libraries. This aims to help users with an MKL installation try different implementations.

.npy files support unicode field names
A new format version of .npy files has been introduced, enabling structured types with non-latin1 field names. It is used automatically when needed.

New mode "empty" for pad
The new "empty" mode pads an array to a desired shape without initializing the new entries.

New deprecations in NumPy 1.17.0

numpy.polynomial functions warn when passed float in place of int
Previously, functions in the numpy.polynomial module accepted float values where integers were expected. With NumPy 1.17.0, using float values is deprecated for consistency with the rest of NumPy; in future releases it will cause a TypeError.

Deprecate numpy.distutils.exec_command and temp_file_name
The internal use of these functions has been refactored in favor of better alternatives: exec_command is replaced with subprocess.Popen, and temp_file_name <numpy.distutils.exec_command> with tempfile.mkstemp.

Writeable flag of C-API wrapped arrays
When an array is created from the C-API to wrap a pointer to data, the writeable flag set during creation indicates whether the data is read-write. In future releases it will not be possible to switch the writeable flag to True from Python, as this is considered dangerous.
Other improvements and changes

Replacement of the fftpack-based fft module by the pocketfft library
The pocketfft library contains additional modifications compared with fftpack that improve accuracy and performance. If FFT lengths have large prime factors, pocketfft uses Bluestein's algorithm, which maintains O(N log N) run time complexity instead of deteriorating toward O(N*N) for prime lengths.

Array comparison assertions include maximum differences
Error messages from array comparison tests such as testing.assert_allclose now include "max absolute difference" and "max relative difference" along with the previous "mismatch" percentage. This makes it easier to update absolute and relative error tolerances.

median and percentile family of functions no longer warn about nan
Functions like numpy.median, numpy.percentile, and numpy.quantile used to emit a RuntimeWarning when encountering a nan. Since these functions return the nan value anyway, the warning was redundant and has been removed.

timedelta64 % 0 behavior adjusted to return NaT
The modulus operation with two np.timedelta64 operands now returns NaT in case of division by zero, rather than returning zero.

Though users are happy with NumPy 1.17.0's features, some are upset that Python 2.7 support has been officially dropped.
https://twitter.com/antocuni/status/1156236201625624576

For the complete list of updates, head over to the NumPy 1.17.0 release notes.

Plotly 4.0, popular python data visualization framework, releases with Offline Only, Express first, Displayable anywhere features
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images


DARPA’s $2 Billion ‘AI Next’ campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program

Savia Lobo
11 Sep 2018
3 min read
Last Friday (7th September, 2018), DARPA announced a multi-year investment of more than $2 billion in a new program called the 'AI Next' campaign. DARPA director Dr. Steven Walker officially unveiled the large-scale effort during D60, DARPA's 60th Anniversary Symposium held in Maryland. The campaign seeks contextual reasoning in AI systems in order to create deeper trust and collaborative partnerships between humans and machines.

The key areas the AI Next campaign may include are:
- Automating critical DoD (Department of Defense) business processes, such as security clearance vetting in a week or accrediting software systems in one day for operational deployment.
- Improving the robustness and reliability of AI systems, and enhancing the security and resiliency of machine learning and AI technologies.
- Reducing power, data, and performance inefficiencies.
- Pioneering the next generation of AI algorithms and applications, such as 'explainability' and commonsense reasoning.

The Next-Generation Nonsurgical Neurotechnology (N3) program

At the conference, DARPA officials also described the next frontier of neuroscience research: technologies that give able-bodied soldiers super abilities. They introduced the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March, which aims to fund research on technology that can transmit high-fidelity signals between the brain and an external machine without requiring that the user be cut open for rewiring or implantation. Al Emondi, manager of N3, told IEEE Spectrum that he is currently picking the researchers who will be funded under the program, with an announcement expected in early 2019.

The program has two tracks:
- Completely non-invasive: the N3 program aims for new non-invasive tech that can match the high performance currently achieved only with implanted electrodes, which nestle in brain tissue and interface directly with neurons, either recording the electrical signals when the neurons "fire" into action or stimulating them to cause that firing.
- Minutely invasive: DARPA says it doesn't want its new brain tech to require even a tiny incision. Instead, minutely invasive tech might enter the body as an injection, a pill, or even a nasal spray. Emondi imagines "nanotransducers" that can sit inside neurons, converting the electrical signal when a neuron fires into some other type of signal that can be picked up through the skull.

Justin Sanchez, director of DARPA's Biological Technologies Office, said that making brain tech easy to use will open the floodgates. He added, "We can imagine a future of how this tech will be used. But this will let millions of people imagine their own futures."

To know more about the AI Next campaign and the N3 program in detail, visit the DARPA blog.

Skepticism welcomes Germany’s DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
DARPA on the hunt to catch deepfakes with its AI forensic tools underway
Kotlin 1.3.60 released with Kotlin Worksheets, support for the new Kotlin/Native targets and other updates

Sugandha Lahoti
19 Nov 2019
2 min read
Kotlin 1.3.60 was released yesterday with new features, as well as quality and tooling improvements. The release adds support for more Kotlin/Native platforms and targets, and it improves the Kotlin/MPP IDE experience. For Kotlin/JS, Kotlin 1.3.60 adds support for source maps and improves the platform test runner integration. The team has also significantly enhanced some “create expect” quick-fixes on the multiplatform side of Kotlin.

IntelliJ IDEA and Kotlin Eclipse IDE plugin updates

- Scratch files have been redesigned and improved; results are now shown in a separate window.
- The Kotlin team is working on enhancing the user experience with Kotlin Gradle build scripts.
- Developers can set function breakpoints in Kotlin code. The debugger will then stop execution on entering or exiting the corresponding function.
- Multiple improvements to the Java-to-Kotlin converter.
- The kotlin-eclipse plugin now experimentally supports incremental compilation for single modules.

Improvements to the Kotlin/Native compiler in Kotlin 1.3.60

- Compatibility with the latest tooling: Xcode 11 and LLVM 8.0.
- New platforms/targets such as watchOS, tvOS, and Android (native).
- Experimental symbolication of iOS crash reports for release binaries (including LLVM-inlined code, which is one step further than what Xcode is able to decode).
- Thread-safe tracking of Objective-C weak/shared references to Kotlin objects.
- Support for suspend callable references.
- The ability to associate a work queue with any context/thread, not just the ones created ad-hoc through Worker.start.
- The kotlinx.cli project has been (mostly) rewritten and is included in this release of the Kotlin/Native compiler.
- Improved runtime performance: interface calls are now up to 5x faster, and type checks up to 50x faster, in Kotlin 1.3.60.
The team has also shared upcoming changes planned for Kotlin 1.4, which is to be released in 2020. Currently, Kotlin 1.4 is available in an experimental state. You can find the complete list of Kotlin 1.3.60 changes in the changelog.

Kotlin 1.3.50 released with ‘duration and time Measurement’ API preview, Dukat for npm dependencies, and more

Introducing Coil, an open-source Android image loading library backed by Kotlin Coroutines

Microsoft announces .NET Jupyter Notebooks
Spotify releases Chartify, a new data visualization library in python for easier chart creation

Natasha Mathur
19 Nov 2018
2 min read
Spotify announced last week that it has released Chartify, a new open-source Python data visualization library that makes it easy for data scientists to create charts. It comes with features such as a concise, user-friendly syntax and consistent data formatting, among others. Let’s have a look at these features in this new library.

Concise and user-friendly syntax

Despite the abundance of tools such as Seaborn, Matplotlib, Plotly, and Bokeh used by data scientists at Spotify, chart creation has always been a major pain point in the data science workflow. Chartify addresses that problem: its syntax is considerably more concise and user-friendly than that of the other tools. Suggestions in the docstrings help users recall the most common formatting options. This saves time, allowing data scientists to spend less time configuring chart aesthetics and more time actually creating charts.

Consistent data formatting

Another common problem data scientists face is that different plotting methods need different input data formats, requiring users to completely reformat their input data. This leads to data scientists spending a lot of time manipulating data frames into the right state for their charts. Chartify’s consistent input data format lets you quickly create and iterate on charts, since less time is spent on data munging.

Other features

Since a majority of charting problems can be solved with just a few chart types, Chartify focuses mainly on these use cases and comes with a complete example notebook presenting the full list of chart types Chartify can generate. Moreover, adding color to charts greatly simplifies the charting process, which is why Chartify ships different palette types aligned to the different use cases for color.
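The “consistent data formatting” point is easiest to see with a concrete reshape. Chartify itself consumes pandas DataFrames; the sketch below uses only the standard library and made-up column names to illustrate the underlying idea of converting “wide” data (one column per series) into the single long/tidy layout that one consistent plotting API can consume.

```python
# Reshape "wide" rows (one column per series) into long/tidy records,
# the kind of uniform layout a consistent charting API expects.
# Data and column names here are hypothetical.

def wide_to_long(rows, id_column, value_columns):
    """Turn each wide row into several (id, variable, value) records."""
    long_records = []
    for row in rows:
        for col in value_columns:
            long_records.append({
                id_column: row[id_column],
                "variable": col,
                "value": row[col],
            })
    return long_records

wide = [
    {"country": "US", "streams": 40, "skips": 5},
    {"country": "SE", "streams": 25, "skips": 2},
]

long_rows = wide_to_long(wide, "country", ["streams", "skips"])
# Every chart type can now be fed the same (id, variable, value) layout,
# instead of reformatting the input for each plotting method.
```

With one input shape, switching between chart types becomes a one-line change rather than a data-munging exercise, which is the workflow gain the Chartify team describes.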
Additionally, Chartify offers support for Bokeh, an interactive Python library for data visualization, giving users the option to fall back on manipulating Chartify charts with Bokeh if they need more control. For more information, check out the official Chartify blog post.

cstar: Spotify’s Cassandra orchestration tool is now open source!

Spotify has “one of the most intricate uses of JavaScript in the world,” says former engineer

8 ways to improve your data visualizations
Google partners with Wordpress and invests $1.2 million on “an opinionated CMS” called Newspack

Bhagyashree R
18 Jan 2019
2 min read
On Monday, Google announced that it has partnered with Automattic Inc., the parent company of WordPress.com, to develop Newspack, an advanced open-source publishing and revenue-generating platform for news organizations. Under the Google News Initiative, Google has invested $1.2 million towards building this platform.

The purpose of the platform is to help journalists put their full energy into covering stories instead of worrying about designing websites, configuring CMSes, or building commerce systems. Google mentioned in the post, “It is trying to help small publishers succeed by building best practices into the product while removing distractions that may divert scarce resources. We like to call it ‘an opinionated CMS’: it knows the right thing to do, even when you don’t.” Newspack will also provide publishers full access to all the plugins created by the WordPress developer community.

Automattic, in an announcement, called for small and medium-sized digital news organizations to become charter participants in the development of Newspack. If you want to become a partner, you can fill in the form issued by Automattic by 11:59 p.m. Eastern Time (UTC-5:00) on February 1. The platform’s beta version is estimated to be released near the end of July, and it will be made available to publishers globally later this year.

To get a better idea of the features and capabilities publishers need, and their business impact, Automattic will be working with Spirited Media and News Revenue Hub. Spirited Media operates local digital news sites in Denver, Philadelphia, and Pittsburgh, while News Revenue Hub provides revenue solutions for digital publishers.

In addition to Google, other funding organizations for the platform include The Lenfest Institute for Journalism, ConsenSys (the organization backing Civil Media), and The John S. and James L. Knight Foundation.
WordPress 5.0 (Bebo) released with improvements in design, theme and more

Introduction to WordPress Plugin

Google and Waze share their best practices for canary deployment using Spinnaker
Introducing Intel's OpenVINO computer vision toolkit for edge computing

Pravin Dhandre
17 May 2018
2 min read
About a week after Microsoft announced its plan to develop a computer vision development kit for edge computing, Intel introduced its latest offering in the Internet of Things (IoT) and Artificial Intelligence (AI) domain: OpenVINO. The toolkit is a comprehensive computer vision solution that smoothly brings computer vision and deep learning capabilities to edge devices.

The OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit supports popular open-source frameworks such as OpenCV, Caffe, and TensorFlow. It works with Intel’s traditional CPUs, AI chips, field-programmable gate array (FPGA) chips, and the Movidius vision processing unit (VPU).

The toolkit has the potential to address a wide number of challenges developers face in delivering distributed, end-to-end intelligence. With OpenVINO, developers can streamline their deep learning inference and deploy high-performance computer vision solutions across a wide range of use cases. Computer vision limitations related to bandwidth, latency, and storage are expected to be resolved to an extent. The toolkit should also help developers optimize AI-integrated computer vision applications and scale distributed vision applications, which generally require a complete redesign of the solution.

Until now, edge computing has been more of a prospect for the IoT market. With OpenVINO, Intel positions itself as an industry leader in delivering IoT solutions from the edge, offering a strong option for meeting the AI needs of businesses. OpenVINO is already being used by companies like GE Healthcare, Dahua, Amazon Web Services, and Honeywell across their digital imaging and IoT solutions.

To explore more information on its capabilities and performance, visit Intel’s official OpenVINO product documentation.

A gentle note to readers: OpenVINO is not to be confused with Openvino, an open-source winery and wine-backed cryptoasset.
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

AWS Greengrass brings machine learning to the edge

Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!

Savia Lobo
07 Aug 2018
3 min read
Jens “atom” Steube, the developer of the popular Hashcat password-cracking tool, recently developed a new technique to obtain user credentials over WPA/WPA2 security. With it, attackers can easily retrieve the Pairwise Master Key Identifier (PMKID) from a router.

WPA/WPA2, the Wi-Fi security protocols, enable a secure wireless connection between devices using encryption via a PSK (pre-shared key). The WPA2 protocol was considered highly secure against attacks. However, a method known as the KRACK attack, discovered in October 2017, was theoretically able to decrypt the data exchanged between devices. Steube discovered the new method while looking for new ways to crack the WPA3 wireless security protocol. According to Steube, this method works against almost all routers utilizing 802.11i/p/q/r networks with roaming enabled.

https://twitter.com/hashcat/status/1025786562666213377

How does this new WPA/WPA2 attack work?

The new attack method works by extracting the RSN IE (Robust Security Network Information Element) from a single EAPOL frame. The RSN IE is an optional field containing the PMKID generated by a router when a user tries to authenticate. Previously, to crack user credentials, an attacker had to wait for a user to log in to a wireless network and then capture the four-way handshake in order to crack the key. With the new method, an attacker simply has to attempt to authenticate to the wireless network in order to retrieve a single frame and obtain the PMKID, which can then be used to recover the pre-shared key (PSK) of the wireless network.

A boon for attackers?

The new method makes it easier to obtain the hash containing the pre-shared key, but that hash still needs to be cracked, a process that can take a long time depending on the complexity of the password. Most users don’t change their wireless password and simply use the PSK generated by their router.
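To make the offline-cracking step concrete, the sketch below derives a PMKID the way the 802.11i key hierarchy defines it (PMK via PBKDF2-HMAC-SHA1 over the passphrase and SSID; PMKID as HMAC-SHA1-128 over the string "PMK Name" plus the access point and station MAC addresses). The SSID, passphrase, and MAC addresses are made up for illustration.

```python
# Sketch of PMKID derivation (IEEE 802.11i), to show why a single captured
# PMKID is enough for offline passphrase guessing. All inputs are hypothetical.
import hashlib
import hmac

def pmk_from_psk(passphrase: str, ssid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def pmkid(pmk: bytes, ap_mac: bytes, sta_mac: bytes) -> bytes:
    # PMKID = HMAC-SHA1-128(PMK, "PMK Name" || AP MAC || station MAC)
    return hmac.new(pmk, b"PMK Name" + ap_mac + sta_mac, hashlib.sha1).digest()[:16]

# An attacker who has captured a PMKID brute-forces candidate passphrases:
ap, sta = bytes.fromhex("aabbccddeeff"), bytes.fromhex("112233445566")
captured = pmkid(pmk_from_psk("correct horse", "HomeNet"), ap, sta)

for guess in ["password123", "letmein", "correct horse"]:
    if pmkid(pmk_from_psk(guess, "HomeNet"), ap, sta) == captured:
        print("cracked:", guess)  # prints "cracked: correct horse"
        break
```

The 4096-iteration PBKDF2 step is what makes each guess expensive; the attack removes the need to wait for a handshake, not the need to brute-force.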
Steube, in his post on the Hashcat forum, said, “Cracking PSKs is made easier by some manufacturers creating PSKs that follow an obvious pattern that can be mapped directly to the make of the routers. In addition, the AP MAC address and the pattern of the ESSID allow an attacker to know the AP manufacturer without having physical access to it.” He also stated that attackers pre-collect the patterns used by manufacturers and create generators for each of them, which can then be fed into Hashcat. Some manufacturers use patterns that are too large to search, but others do not. The faster one’s hardware is, the faster one can search through such a keyspace: a typical manufacturer’s PSK of length 10 takes 8 days to crack on a 4-GPU box.

How can users safeguard their routers’ passwords?

- Create your own key rather than using the one generated by the router.
- The key should be long and complex, consisting of numbers, lower-case letters, upper-case letters, and symbols (&%$!).
- Steube personally uses a password manager and lets it generate truly random passwords of length 20–30.

One can follow the researcher’s footsteps in safeguarding their routers using the tips mentioned above. Read more about this new Wi-Fi security attack on the Hashcat forum.

NetSpectre attack exploits data from CPU memory

Cisco and Huawei Routers hacked via backdoor attacks and botnets

Finishing the Attack: Report and Withdraw
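The crack-time claims above reduce to keyspace arithmetic: candidates = charset_size ** length, time = candidates / guess rate. A short sketch, with an assumed guess rate (real WPA cracking rates depend heavily on hardware) and a password generator of the kind Steube recommends; all figures are hypothetical.

```python
# Keyspace arithmetic behind password-crack times, plus a secure generator.
# RATE is an assumed guesses-per-second figure, not a benchmark.
import secrets
import string

def seconds_to_search(charset_size: int, length: int, rate: float) -> float:
    # Exhausting the keyspace means trying charset_size ** length candidates.
    return charset_size ** length / rate

RATE = 1e6  # assumed guesses/second for illustration

# Lower-case-only, length 10: a keyspace within practical reach.
days_weak = seconds_to_search(26, 10, RATE) / 86400

# Mixed-case letters, digits, and symbols, length 20: out of reach.
alphabet = string.ascii_letters + string.digits + "&%$!"
days_strong = seconds_to_search(len(alphabet), 20, RATE) / 86400

def random_password(length: int = 20) -> str:
    # Cryptographically secure choice, as a password manager would do.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(f"weak keyspace: ~{days_weak:,.0f} days; strong: ~{days_strong:.1e} days")
print(random_password())
```

The point of the comparison is the exponent: each extra character multiplies the search space by the charset size, which is why a random 20–30 character password is effectively uncrackable regardless of hardware.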
Github now allows repository owners to delete an issue: curse or a boon?

Amrata Joshi
09 Nov 2018
3 min read
On Saturday, GitHub released a public beta of a new feature that lets repository admins permanently delete an issue from any repository, giving more power to repository owners. Since GitHub tweeted the news, controversy around the feature has been on fire.

According to many, the new feature might lead to the removal of issues that disclose severe security problems. Also, many users rely on closed issues to resolve their own problems, since a repository’s conversation history often holds a lot of information.

https://twitter.com/thegreenhouseio/status/1060257920158498817

https://twitter.com/aureliari/status/1060279790706589710

If someone posts a security vulnerability publicly as an issue, it might turn out to be a big problem for the project owner, as there is a high possibility of people avoiding future updates to the same project. The feature could therefore be helpful to many organizations as a form of damage control. A few of the issues posted on GitHub aren’t really issues, so the feature might help there as well. There are also a lot of duplicate issues, posted on purpose or by mistake, so the feature could work as a rescue tool.

In contrast, a lot of users oppose the feature. It might not be so helpful because no matter how fast one erases a vulnerability report, the information has already leaked via email inboxes. A poll posted by one user on Twitter, which had 71 votes at the time of writing, shows that 69% of the participants disliked the feature, while only 14% gave it a thumbs up; the remaining 17% had no opinion. The poll is still open, and it will be interesting to see its final result.
https://twitter.com/d4nyll/status/1060422721589325824

Users are requesting a better option that would highlight a way to report security issues non-publicly, while a few others prefer an archive option instead of permanent deletion, and some simply favor removing the feature altogether.

https://twitter.com/kirilldanshin/status/1060265945598492677

With many users now blaming Microsoft for this feature on GitHub, it will be interesting to see the next update on it; could it simply be an UNDO option? Read more about this news on GitHub’s official Twitter page.

GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation

GitHub now allows issue transfer between repositories; a public beta version

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage